
Power BI guidance documentation


Power BI guidance documentation provides best practice information from the team that builds Power BI and from the people who work with our enterprise customers. Here you'll find guidance to improve performance and success with Power BI. We'll update and add to it as new information becomes available.

Power BI guidance

OVERVIEW

Guidance for Power BI

Optimization guide for Power BI

Whitepapers overview

Transform and shape data

CONCEPT

The importance of query folding

Disable Power Query background refresh

Referencing Power Query queries

Dataflows best practices

Data modeling

CONCEPT

What is a star schema?

Data reduction techniques

One-to-one relationships

Many-to-many relationships

Active vs inactive relationships

Bi-directional relationships
Relationship troubleshooting

DirectQuery model guidance

Composite model guidance

Row-level security (RLS) guidance

DAX

CONCEPT

Appropriate use of error functions

Avoid converting BLANKs to values

Avoid using FILTER as a filter argument

Column and measure references

DIVIDE function vs divide operator

Use COUNTROWS instead of COUNT

Use SELECTEDVALUE instead of VALUES

Use variables to improve formulas

Power BI reports

CONCEPT

Separate reports from models

Use report page tooltips

Use report page drillthrough

Power BI paginated reports

CONCEPT

When to use paginated reports

Data retrieval guidance


Image use guidance

Use cascading parameters

Avoid blank pages when printing

Migrate SSRS reports to Power BI

Admin and deployment

CONCEPT

On-premises data gateway sizing

Monitor report performance

Troubleshoot report performance

Deployment pipelines best practices

Access the Power BI activity log

Migration to Power BI

CONCEPT

Migration overview

Prepare to migrate

Gather requirements

Plan deployment

Conduct proof of concept

Create content

Deploy to Power BI

Learn from customer Power BI migrations

Migrate AAS models to Power BI Premium

Adoption roadmap
CONCEPT

Overview

Data culture

Executive sponsorship

Content ownership

Content delivery scope

Center of Excellence

Governance

Mentoring and user enablement

Community of practice

User support

Implementation planning

CONCEPT

Introduction

Usage scenarios

Tenant setup

Workspaces

Security

Info protection and DLP

Auditing and monitoring

Center of Excellence (COE)

CONCEPT

Microsoft's BI transformation

Establish a COE
BI solution architecture in the COE
Guidance for Power BI
Article • 09/22/2021

Here you'll find guidance and recommended practices for Power BI. The guidance will continue to be updated and extended over time.

Data modeling
Guidance: Understand star schema and the importance for Power BI
Description: Describes star schema design and its relevance to developing Power BI data models optimized for performance and usability.

Guidance: Data reduction techniques for Import modeling
Description: Describes different techniques to help reduce the data loaded into Import models.

DAX
Guidance: DAX: DIVIDE function vs divide operator (/)
Description: Describes proper use of the DIVIDE function within DAX.

Dataflows
Guidance: Dataflows best practices
Description: Describes best practices for designing dataflows in Power BI.

More questions? Try asking the Power BI Community


Optimization guide for Power BI
Article • 02/27/2023

This article provides guidance that enables developers and administrators to produce
and maintain optimized Power BI solutions. You can optimize your solution at different
architectural layers. Layers include:

The data source(s)


The data model
Visualizations, including dashboards, Power BI reports, and Power BI paginated
reports
The environment, including capacities, data gateways, and the network

Optimizing the data model


The data model supports the entire visualization experience. Data models are either
hosted in the Power BI ecosystem or externally (using DirectQuery or Live Connection),
and in Power BI they are referred to as datasets. It's important to understand your
options, and to choose the appropriate dataset type for your solution. There are three
dataset modes: Import, DirectQuery, and Composite. For more information, see Datasets
in the Power BI service, and Dataset modes in the Power BI service.

For specific dataset mode guidance, see:

Data reduction techniques for Import modeling


DirectQuery model guidance in Power BI Desktop
Composite model guidance in Power BI Desktop

Optimizing visualizations
Power BI visualizations can be dashboards, Power BI reports, or Power BI paginated
reports. Each has different architectures, and so each has their own guidance.

Dashboards
It's important to understand that Power BI maintains a cache for your dashboard tiles—
except live report tiles, and streaming tiles. If your dataset enforces dynamic row-level
security (RLS), be sure to understand performance implications as tiles will cache on a
per-user basis.
When you pin live report tiles to a dashboard, they're not served from the query cache.
Instead, they behave like reports, and make queries to v-cores on the fly.

As the name suggests, retrieving the data from the cache provides better and more
consistent performance than relying on the data source. One way to take advantage of
this functionality is to have dashboards be the first landing page for your users. Pin
often-used and highly requested visuals to the dashboards. In this way, dashboards
become a valuable "first line of defense", which delivers consistent performance with
less load on the capacity. Users can still click through to a report to analyze details.

For DirectQuery and live connection datasets, the cache is updated on a periodic basis
by querying the data source. By default, it happens every hour, though you can
configure a different frequency in the dataset settings. Each cache update sends queries to the
underlying data source. The number of queries generated depends on the number of visuals
pinned to dashboards that rely on that data source. Notice that if row-level security is
enabled, queries are generated for each
different security context. For example, consider there are two different roles that
categorize your users, and they have two different views of the data. During query cache
refresh, Power BI generates two sets of queries.

Power BI reports
There are several recommendations for optimizing Power BI report designs.

Note

When reports are based on a DirectQuery dataset, for additional report design
optimizations, see DirectQuery model guidance in Power BI Desktop (Optimize
report designs).

Apply the most restrictive filters


The more data that a visual needs to display, the slower that visual is to load. While this
principle seems obvious, it's easy to forget. For example: suppose you have a large
dataset. On top of that dataset, you build a report with a table. End users use slicers on the
page to get to the rows they want—typically, they're only interested in a few dozen
rows.

A common mistake is to leave the default view of the table unfiltered—that is, all
100M+ rows. The data for these rows loads into memory and is uncompressed at every
refresh. This processing creates huge demands for memory. The solution: use the "Top
N" filter to reduce the max number of items that the table displays. You can set the max
item to larger than what users would need, for example, 10,000. The result is the end-
user experience doesn't change, but memory use drops greatly. And most importantly,
performance improves.

A similar design approach to the above is suggested for every visual in your report. Ask
yourself, is all the data in this visual needed? Are there ways to filter the amount of data
shown in the visual with minimal impact to the end-user experience? Remember, tables
in particular can be expensive.

Limit visuals on report pages


The above principle applies equally to the number of visuals added to a report page. It's
highly recommended you limit the number of visuals on a particular report page to only
what is necessary. Drillthrough pages and report page tooltips are great ways to provide
additional details without jamming more visuals onto the page.

Evaluate custom visual performance


Be sure to put each custom visual through its paces to ensure high performance. Poorly
optimized custom visuals can negatively affect the performance of the entire report.

Power BI paginated reports


Power BI paginated report designs can be optimized by applying best practice design to
the report's data retrieval. For more information, see Data retrieval guidance for
paginated reports.

Also, ensure your capacity has sufficient memory allocated to the paginated reports
workload.

Optimizing the environment


You can optimize the Power BI environment by configuring capacity settings, sizing data
gateways, and reducing network latency.

Capacity settings
When using capacities—available with Power BI Premium (P SKUs), Premium Per User
(PPU) licenses, or Power BI Embedded (A SKUs, A4-A6)—you can manage capacity
settings. For more information, see Managing Premium capacities.
Gateway sizing
A gateway is required whenever Power BI must access data that isn't accessible directly
over the Internet. You can install the on-premises data gateway on a server on-premises,
or VM-hosted Infrastructure-as-a-Service (IaaS).

To understand gateway workloads and sizing recommendations, see On-premises data gateway sizing.

Network latency
Network latency can impact report performance by increasing the time required for
requests to reach the Power BI service, and for responses to be delivered. Tenants in
Power BI are assigned to a specific region.

Tip

To determine where your tenant is located, see Where is my Power BI tenant located?

When users from a tenant access the Power BI service, their requests always route to this
region. As requests reach the Power BI service, the service may then send additional
requests—for example, to the underlying data source, or a data gateway—which are
also subject to network latency.

Tools such as Azure Speed Test provide an indication of network latency between the
client and the Azure region. In general, to minimize the impact of network latency, strive
to keep data sources, gateways, and your Power BI capacity as close as possible.
Preferably, they reside within the same region. If network latency is an issue, try locating
gateways and data sources closer to your Power BI capacity by placing them inside
cloud-hosted virtual machines.

Monitoring performance
You can monitor performance to identify bottlenecks. Slow queries—or report visuals—
should be a focal point of continued optimization. Monitoring can be done at design
time in Power BI Desktop, or on production workloads in Power BI Premium capacities.
For more information, see Monitoring report performance in Power BI.

Next steps
For more information about this article, check out the following resources:

Power BI guidance
Monitoring report performance
Power BI adoption roadmap
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Query folding guidance in Power BI
Desktop
Article • 06/06/2023

This article targets data modelers developing models in Power BI Desktop. It provides
best practice guidance on when—and how—you can achieve Power Query query
folding.

Query folding is the ability for a Power Query query to generate a single query
statement that retrieves and transforms source data. For more information, see Power
Query query folding.

Guidance
Query folding guidance differs based on the model mode.

For a DirectQuery or Dual storage mode table, the Power Query query must achieve
query folding.

For an Import table, it may be possible to achieve query folding. When the query is
based on a relational source—and if a single SELECT statement can be constructed—you
achieve best data refresh performance by ensuring that query folding occurs. If the
Power Query mashup engine is still required to process transformations, you should
strive to minimize the work it needs to do, especially for large datasets.

The following bulleted list provides specific guidance.

Delegate as much processing to the data source as possible: When all steps of a
Power Query query can't be folded, discover the step that prevents query folding.
When possible, move later steps earlier in sequence so they may be factored into
the query folding. Note the Power Query mashup engine may be smart enough to
reorder your query steps when it generates the source query.

For a relational data source, if the step that prevents query folding could be
achieved in a single SELECT statement—or within the procedural logic of a stored
procedure—consider using a native SQL query, as described next.

Use a native SQL query: When a Power Query query retrieves data from a
relational source, it's possible for some sources to use a native SQL query. The
query can in fact be any valid statement, including a stored procedure execution. If
the statement produces multiple result sets, only the first will be returned.
Parameters can be declared in the statement, and we recommend that you use the
Value.NativeQuery M function, which was designed to safely and conveniently pass
parameter values (see the sketch after this list). It's important to understand that the
Power Query mashup engine can't fold later query steps, so you should include all—or as
much as possible—of the transformation logic in the native query statement.

There are two important considerations you need to bear in mind when using
native SQL queries:
For a DirectQuery model table, the query must be a SELECT statement, and it
can't use Common Table Expressions (CTEs) or a stored procedure.
Incremental refresh can't use a native SQL query. So, it would force the Power
Query mashup engine to retrieve all source rows, and then apply filters to
determine incremental changes.

Important

A native SQL query can potentially do more than retrieve data. Any valid
statement can be executed (and possibly multiple times), including one that
modifies or deletes data. It's important that you apply the principle of least
privilege to ensure that the account used to access the database has only read
permission on required data.

Prepare and transform data in the source: When you identify that certain Power
Query query steps can't be folded, it may be possible to apply the transformations
in the data source. The transformations could be achieved by writing a database
view that logically transforms source data. Or, by physically preparing and
materializing data, in advance of Power BI querying it. A relational data warehouse
is an excellent example of prepared data, usually consisting of pre-integrated
sources of organizational data.
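
To make the native SQL query recommendation more concrete, the following Power Query M sketch uses Value.NativeQuery to pass a parameter value. The server, database, table, and column names are assumptions for illustration only; substitute your own source.

Power Query M
// Hedged sketch: retrieve sales rows with a parameterized native query.
// The mashup engine can't fold steps added after this one, so do as much
// transformation as possible inside the SQL statement itself.
let
    Source = Sql.Database("ServerName", "AdventureWorksDW"),
    Sales = Value.NativeQuery(
        Source,
        "SELECT OrderDateKey, ProductKey, SalesAmount FROM dbo.FactResellerSales WHERE OrderDateKey >= @StartDateKey",
        [StartDateKey = 20200101]
    )
in
    Sales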

Next steps
For more information about this article, check out the following resources:

Power Query Query folding concept article


Incremental refresh for datasets
Questions? Try asking the Power BI Community
Referencing Power Query queries
Article • 02/27/2023

This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance when defining Power Query queries that reference other queries.

Let's be clear about what this means: When a query references a second query, it's as
though the steps in the second query are combined with, and run before, the steps in the
first query.

Consider several queries: Query1 sources data from a web service, and its load is
disabled. Query2, Query3, and Query4 all reference Query1, and their outputs are
loaded to the data model.
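
As a minimal sketch of what such referencing looks like in Power Query M, Query2 might simply start from Query1 and then apply its own steps (the Region column used here is a hypothetical example):

Power Query M
// Query2 — references Query1, then applies its own transformation steps
let
    Source = Query1,
    #"Filtered Rows" = Table.SelectRows(Source, each [Region] = "Europe")
in
    #"Filtered Rows"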

When the data model is refreshed, it's often assumed that Power Query retrieves the
Query1 result, and that it's reused by the referencing queries. This thinking is incorrect. In
fact, Power Query executes Query2, Query3, and Query4 separately.

You can think of Query2 as having the Query1 steps embedded into it. It's the case for
Query3 and Query4, too. The following diagram presents a clearer picture of how the
queries are executed.
Query1 is executed three times. The multiple executions can result in slow data refresh,
and negatively impact the data source.

The use of the Table.Buffer function in Query1 won't eliminate the additional data
retrieval. This function buffers a table to memory. And, the buffered table can only be
used within the same query execution. So, in the example, if Query1 is buffered when
Query2 is executed, the buffered data can't be used when Query3 and Query4 are
executed. They'll each buffer the data again themselves. (This result could in fact
compound the negative performance, because the table will be buffered by each
referencing query.)

Note

Power Query caching architecture is complex, and it's not the focus of this article.
Power Query can cache data retrieved from a data source. However, when it
executes a query, it may retrieve the data from the data source more than once.

Recommendations
Generally, we recommend you reference queries to avoid the duplication of logic across
your queries. However, as described in this article, this design approach can contribute
to slow data refreshes, and overburden data sources.

We recommend you create a dataflow instead. Using a dataflow can improve data
refresh time, and reduce impact on your data sources.

You can design the dataflow to encapsulate the source data and transformations. As the
dataflow is a persisted store of data in the Power BI service, its data retrieval is fast. So,
even when referencing queries result in multiple requests for the dataflow, data refresh
times can be improved.
In the example, if Query1 is redesigned as a dataflow entity, Query2, Query3, and
Query4 can use it as a data source. With this design, the entity sourced by Query1 will
be evaluated only once.

Next steps
For more information related to this article, check out the following resources:

Self-service data prep in Power BI


Creating and using dataflows in Power BI
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Disable Power Query background
refresh
Article • 02/27/2023

This article targets Import data modelers working with Power BI Desktop.

By default, when Power Query imports data, it also caches up to 1000 rows of preview
data for each query. Preview data helps to present you with a quick preview of source
data, and of transformation results for each step of your queries. It's stored separately
on-disk and not inside the Power BI Desktop file.

However, when your Power BI Desktop file contains many queries, retrieving and storing
preview data can extend the time it takes to complete a refresh.

Recommendation
You'll achieve a faster refresh by setting the Power BI Desktop file to update the preview
cache in the background. In Power BI Desktop, you enable it by selecting File > Options
and settings > Options, and then selecting the Data Load page. You can then turn on the
Allow data preview to download in the background option. Note this option can only
be set for the current file.
Enabling background refresh can result in preview data becoming out of date. If it
occurs, the Power Query Editor notifies you with a warning that the preview data may be out of date.

It's always possible to update the preview cache. You can update it for a single query, or
for all queries by using the Refresh Preview command. You'll find it on the Home ribbon
of the Power Query Editor window.
Next steps
For more information related to this article, check out the following resources:

Power Query Documentation


Questions? Try asking the Power BI Community
Introduction to dataflows and self-
service data prep
Article • 08/11/2023

Tip

You can also try Dataflow Gen2 in Data Factory in Microsoft Fabric, an all-in-one
analytics solution for enterprises. Microsoft Fabric covers everything from data
movement to data science, real-time analytics, business intelligence, and reporting.
Learn how to start a new trial for free.

As data volume continues to grow, so does the challenge of wrangling that data into
well-formed, actionable information. We want data that’s ready for analytics, to populate
visuals, reports, and dashboards, so we can quickly turn our volumes of data into
actionable insights. With self-service data prep for big data in Power BI, you can go from
data to Power BI insights with just a few actions.

When to use dataflows


Dataflows are designed to support the following scenarios:
Create reusable transformation logic that can be shared by many datasets and
reports inside Power BI. Dataflows promote reusability of underlying data
elements, preventing the need to create separate connections with your cloud or
on-premises data sources.

Persist data in your own Azure Data Lake Gen 2 storage, enabling you to expose it
to other Azure services outside Power BI.

Create a single source of truth, curated from raw data using industry standard
definitions, which can work with other services and products in the Power Platform.
Encourage uptake by removing analysts' access to underlying data sources.

Strengthen security around underlying data sources by exposing data to report


creators in dataflows. This approach allows you to limit access to underlying data
sources, reducing the load on source systems, and gives administrators finer
control over data refresh operations.

If you want to work with large data volumes and perform ETL at scale, dataflows
with Power BI Premium scale more efficiently and give you more flexibility.
Dataflows support a wide range of cloud and on-premises sources.

You can use Power BI Desktop and the Power BI service with dataflows to create
datasets, reports, dashboards, and apps that use the Common Data Model. From these
resources, you can gain deep insights into your business activities. Dataflow refresh
scheduling is managed directly from the workspace in which your dataflow was created,
just like your datasets.

Note

Dataflows may not be available in the Power BI service for all U.S. Government DoD
customers. For more information about which features are available, and which are
not, see Power BI feature availability for U.S. Government customers.

Next steps
This article provided an overview of self-service data prep for big data in Power BI, and
the many ways you can use it.

The following articles provide more information about dataflows and Power BI:

Creating a dataflow
Configure and consume a dataflow
Configuring Dataflow storage to use Azure Data Lake Gen 2
Premium features of dataflows
AI with dataflows
Dataflows considerations and limitations
Dataflows best practices
Power BI usage scenarios: Self-service data preparation

For more information about the Common Data Model, you can read its overview article:

Common Data Model - overview


Understand star schema and the
importance for Power BI
Article • 02/27/2023

This article targets Power BI Desktop data modelers. It describes star schema design and
its relevance to developing Power BI data models optimized for performance and
usability.

This article isn't intended to provide a complete discussion on star schema design. For
more details, refer directly to published content, like The Data Warehouse Toolkit: The
Definitive Guide to Dimensional Modeling (3rd edition, 2013) by Ralph Kimball et al.

Star schema overview


Star schema is a mature modeling approach widely adopted by relational data
warehouses. It requires modelers to classify their model tables as either dimension or
fact.

Dimension tables describe business entities—the things you model. Entities can include
products, people, places, and concepts including time itself. The most consistent table
you'll find in a star schema is a date dimension table. A dimension table contains a key
column (or columns) that acts as a unique identifier, and descriptive columns.

Fact tables store observations or events, and can be sales orders, stock balances,
exchange rates, temperatures, etc. A fact table contains dimension key columns that
relate to dimension tables, and numeric measure columns. The dimension key columns
determine the dimensionality of a fact table, while the dimension key values determine
the granularity of a fact table. For example, consider a fact table designed to store sale
targets that has two dimension key columns Date and ProductKey. It's easy to
understand that the table has two dimensions. The granularity, however, can't be
determined without considering the dimension key values. In this example, consider that
the values stored in the Date column are the first day of each month. In this case, the
granularity is at month-product level.

Generally, dimension tables contain a relatively small number of rows. Fact tables, on the
other hand, can contain a very large number of rows and continue to grow over time.
Normalization vs. denormalization
To understand some star schema concepts described in this article, it's important to
know two terms: normalization and denormalization.

Normalization is the term used to describe data that's stored in a way that reduces
repetitious data. Consider a table of products that has a unique key value column, like
the product key, and additional columns describing product characteristics, including
product name, category, color, and size. A sales table is considered normalized when it
stores only keys, like the product key. In the following image, notice that only the
ProductKey column records the product.
If, however, the sales table stores product details beyond the key, it's considered
denormalized. In the following image, notice that the ProductKey and other product-
related columns record the product.

When you source data from an export file or data extract, it's likely that it represents a
denormalized set of data. In this case, use Power Query to transform and shape the
source data into multiple normalized tables.

As described in this article, you should strive to develop optimized Power BI data
models with tables that represent normalized fact and dimension data. However, there's
one exception where a snowflake dimension should be denormalized to produce a
single model table.

Star schema relevance to Power BI models


Star schema design and many related concepts introduced in this article are highly
relevant to developing Power BI models that are optimized for performance and
usability.

Consider that each Power BI report visual generates a query that is sent to the Power BI
model (which the Power BI service calls a dataset). These queries are used to filter,
group, and summarize model data. A well-designed model, then, is one that provides
tables for filtering and grouping, and tables for summarizing. This design fits well with
star schema principles:

Dimension tables support filtering and grouping


Fact tables support summarization

There's no table property that modelers set to configure the table type as dimension or
fact. It's in fact determined by the model relationships. A model relationship establishes
a filter propagation path between two tables, and it's the Cardinality property of the
relationship that determines the table type. A common relationship cardinality is one-to-
many or its inverse many-to-one. The "one" side is always a dimension-type table while
the "many" side is always a fact-type table. For more information about relationships,
see Model relationships in Power BI Desktop.

A well-structured model design should include tables that are either dimension-type
tables or fact-type tables. Avoid mixing the two types together for a single table. We
also recommend that you strive to deliver the right number of tables with the
right relationships in place. It's also important that fact-type tables always load data at a
consistent grain.

Lastly, it's important to understand that optimal model design is part science and part
art. Sometimes you can break with good guidance when it makes sense to do so.

There are many additional concepts related to star schema design that can be applied to
a Power BI model. These concepts include:

Measures
Surrogate keys
Snowflake dimensions
Role-playing dimensions
Slowly changing dimensions
Junk dimensions
Degenerate dimensions
Factless fact tables
Measures
In star schema design, a measure is a fact table column that stores values to be
summarized.

In a Power BI model, a measure has a different—but similar—definition. It's a formula


written in Data Analysis Expressions (DAX) that achieves summarization. Measure
expressions often leverage DAX aggregation functions like SUM, MIN, MAX, AVERAGE,
etc. to produce a scalar value result at query time (values are never stored in the model).
Measure expressions can range from simple column aggregations to more sophisticated
formulas that override filter context and/or relationship propagation. For more
information, read the DAX Basics in Power BI Desktop article.
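
For example, explicit measures can be as simple as the following DAX sketches. The Sales and Region table and column names are assumptions in the style of the Adventure Works examples used in this article.

DAX
// A simple explicit measure that sums a fact table column
Sales Amount = SUM ( Sales[Sales Amount] )

// A measure that overrides filter context to produce a ratio over all regions
Sales % All Regions =
DIVIDE ( [Sales Amount], CALCULATE ( [Sales Amount], REMOVEFILTERS ( Region ) ) )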

It's important to understand that Power BI models support a second method for
achieving summarization. Any column—and typically numeric columns—can be
summarized by a report visual or Q&A. These columns are referred to as implicit
measures. They offer a convenience for you as a model developer, as in many instances
you do not need to create measures. For example, the Adventure Works reseller sales
Sales Amount column could be summarized in numerous ways (sum, count, average,
median, min, max, etc.), without the need to create a measure for each possible
aggregation type.

However, there are three compelling reasons for you to create measures, even for
simple column-level summarizations:

When you know your report authors will query the model by using
Multidimensional Expressions (MDX), the model must include explicit measures.
Explicit measures are defined by using DAX. This design approach is highly relevant
when a Power BI dataset is queried by using MDX, because MDX can't achieve
summarization of column values. Notably, MDX will be used when performing
Analyze in Excel, because PivotTables issue MDX queries.
When you know your report authors will create Power BI paginated reports using
the MDX query designer, the model must include explicit measures. Only the MDX
query designer supports server aggregates. So, if report authors need to have
measures evaluated by Power BI (instead of by the paginated report engine), they
must use the MDX query designer.
When you need to ensure that your report authors can only summarize columns in
specific ways. For example, the reseller sales Unit Price column (which represents a
per unit rate) can be summarized, but only by using specific aggregation functions.
It should never be summed, but it's appropriate to summarize by using other
aggregation functions like min, max, average, etc. In this instance, the modeler can
hide the Unit Price column, and create measures for all appropriate aggregation
functions.

This design approach works well for reports authored in the Power BI service and for
Q&A. However, Power BI Desktop live connections allow report authors to show hidden
fields in the Fields pane, which can result in circumventing this design approach.

Surrogate keys
A surrogate key is a unique identifier that you add to a table to support star schema
modeling. By definition, it's not defined or stored in the source data. Commonly,
surrogate keys are added to relational data warehouse dimension tables to provide a
unique identifier for each dimension table row.

Power BI model relationships are based on a single unique column in one table, which
propagates filters to a single column in a different table. When a dimension-type table
in your model doesn't include a single unique column, you must add a unique identifier
to become the "one" side of a relationship. In Power BI Desktop, you can easily achieve
this requirement by creating a Power Query index column.

You must merge this query with the "many"-side query so that you can add the index
column to it also. When you load these queries to the model, you can then create a one-
to-many relationship between the model tables.
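
In Power Query M, adding such a surrogate key is a single step. In this hedged sketch, the source query and key column names are illustrative assumptions:

Power Query M
// Add a surrogate key to a dimension query that has no unique key column
let
    Source = DimGeography,   // hypothetical dimension query
    #"Added Index" = Table.AddIndexColumn(Source, "GeographyKey", 1, 1, Int64.Type)
in
    #"Added Index"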

Snowflake dimensions
A snowflake dimension is a set of normalized tables for a single business entity. For
example, Adventure Works classifies products by category and subcategory. Products
are assigned to subcategories, and subcategories are in turn assigned to categories. In
the Adventure Works relational data warehouse, the product dimension is normalized
and stored in three related tables: DimProductCategory, DimProductSubcategory, and
DimProduct.

If you use your imagination, you can picture the normalized tables positioned outwards
from the fact table, forming a snowflake design.

In Power BI Desktop, you can choose to mimic a snowflake dimension design (perhaps
because your source data does) or integrate (denormalize) the source tables into a
single model table. Generally, the benefits of a single model table outweigh the benefits
of multiple model tables. The most optimal decision can depend on the volumes of data
and the usability requirements for the model.

When you choose to mimic a snowflake dimension design:

Power BI loads more tables, which is less efficient from storage and performance
perspectives. These tables must include columns to support model relationships,
and it can result in a larger model size.
Longer relationship filter propagation chains will need to be traversed, which will
likely be less efficient than filters applied to a single table.
The Fields pane presents more model tables to report authors, which can result in
a less intuitive experience, especially when snowflake dimension tables contain just
one or two columns.
It's not possible to create a hierarchy that spans the tables.
When you choose to integrate into a single model table, you can also define a hierarchy
that encompasses the highest and lowest grain of the dimension. Possibly, the storage
of redundant denormalized data can result in increased model storage size, particularly
for very large dimension tables.

Slowly changing dimensions


A slowly changing dimension (SCD) is one that appropriately manages change of
dimension members over time. It applies when business entity values change over time,
and in an ad hoc manner. A good example of a slowly changing dimension is a customer
dimension, specifically its contact detail columns like email address and phone number.
In contrast, some dimensions are considered to be rapidly changing when a dimension
attribute changes often, like a stock's market price. The common design approach in
these instances is to store rapidly changing attribute values in a fact table measure.

Star schema design theory refers to two common SCD types: Type 1 and Type 2. A
dimension-type table could be Type 1 or Type 2, or support both types simultaneously
for different columns.

Type 1 SCD
A Type 1 SCD always reflects the latest values, and when changes in source data are
detected, the dimension table data is overwritten. This design approach is common for
columns that store supplementary values, like the email address or phone number of a
customer. When a customer email address or phone number changes, the dimension
table updates the customer row with the new values. It's as if the customer always had
this contact information.

A non-incremental refresh of a Power BI model dimension-type table achieves the result


of a Type 1 SCD. It refreshes the table data to ensure the latest values are loaded.

Type 2 SCD
A Type 2 SCD supports versioning of dimension members. If the source system doesn't
store versions, then it's usually the data warehouse load process that detects changes,
and appropriately manages the change in a dimension table. In this case, the dimension
table must use a surrogate key to provide a unique reference to a version of the
dimension member. It also includes columns that define the date range validity of the
version (for example, StartDate and EndDate) and possibly a flag column (for example,
IsCurrent) to easily filter by current dimension members.

For example, Adventure Works assigns salespeople to a sales region. When a


salesperson relocates region, a new version of the salesperson must be created to
ensure that historical facts remain associated with the former region. To support
accurate historic analysis of sales by salesperson, the dimension table must store
versions of salespeople and their associated region(s). The table should also include
start and end date values to define the time validity. Current versions may define an
empty end date (or 12/31/9999), which indicates that the row is the current version. The
table must also define a surrogate key because the business key (in this instance,
employee ID) won't be unique.

It's important to understand that when the source data doesn't store versions, you must
use an intermediate system (like a data warehouse) to detect and store changes. The
table load process must preserve existing data and detect changes. When a change is
detected, the table load process must expire the current version. It records these
changes by updating the EndDate value and inserting a new version with the StartDate
value commencing from the previous EndDate value. Also, related facts must use a
time-based lookup to retrieve the dimension key value relevant to the fact date. A
Power BI model using Power Query can't produce this result. It can, however, load data
from a pre-loaded SCD Type 2 dimension table.

The Power BI model should support querying historical data for a member, regardless of
change, and for a version of the member, which represents a particular state of the
member in time. In the context of Adventure Works, this design enables you to query
the salesperson regardless of assigned sales region, or for a particular version of the
salesperson.

To achieve this requirement, the Power BI model dimension-type table must include a
column for filtering the salesperson, and a different column for filtering a specific
version of the salesperson. It's important that the version column provides a non-
ambiguous description, like "Michael Blythe (12/15/2008-06/26/2019)" or "Michael
Blythe (current)". It's also important to educate report authors and consumers about the
basics of SCD Type 2, and how to achieve appropriate report designs by applying
correct filters.

It's also a good design practice to include a hierarchy that allows visuals to drill down to
the version level.

Role-playing dimensions
A role-playing dimension is a dimension that can filter related facts differently. For
example, at Adventure Works, the date dimension table has three relationships to the
reseller sales facts. The same dimension table can be used to filter the facts by order
date, ship date, or delivery date.

In a data warehouse, the accepted design approach is to define a single date dimension
table. At query time, the "role" of the date dimension is established by which fact
column you use to join the tables. For example, when you analyze sales by order date,
the table join relates to the reseller sales order date column.

In a Power BI model, this design can be imitated by creating multiple relationships


between two tables. In the Adventure Works example, the date and reseller sales tables
would have three relationships. While this design is possible, it's important to
understand that there can only be one active relationship between two Power BI model
tables. All remaining relationships must be set to inactive. Having a single active
relationship means there is a default filter propagation from date to reseller sales. In this
instance, the active relationship is set to the most common filter that is used by reports,
which at Adventure Works is the order date relationship.

The only way to use an inactive relationship is to define a DAX expression that uses the
USERELATIONSHIP function. In our example, the model developer must create measures
to enable analysis of reseller sales by ship date and delivery date. This work can be
tedious, especially when the reseller table defines many measures. It also creates Fields
pane clutter, with an overabundance of measures. There are other limitations, too:
When report authors rely on summarizing columns, rather than defining measures,
they can't achieve summarization for the inactive relationships without writing a
report-level measure. Report-level measures can only be defined when authoring
reports in Power BI Desktop.
With only one active relationship path between date and reseller sales, it's not
possible to simultaneously filter reseller sales by different types of dates. For
example, you can't produce a visual that plots order date sales by ship date sales.
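
To illustrate the measure-based workaround described above, a ship date measure that activates the inactive relationship might look like the following DAX sketch (the key column names are assumptions):

DAX
// Activate the inactive ship date relationship for this calculation only
Sales Amount by Ship Date =
CALCULATE (
    [Sales Amount],
    USERELATIONSHIP ( Sales[ShipDateKey], 'Date'[DateKey] )
)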

To overcome these limitations, a common Power BI modeling technique is to create a


dimension-type table for each role-playing instance. You typically create the additional
dimension tables as calculated tables, using DAX. Using calculated tables, the model can
contain a Date table, a Ship Date table and a Delivery Date table, each with a single and
active relationship to their respective reseller sales table columns.
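
As a hedged sketch of this calculated table approach, assuming a Date table already exists in the model, a role-playing clone can be defined in DAX and its columns then renamed in the model:

DAX
// Clone the existing Date table as a role-playing Ship Date table;
// rename its columns afterwards (for example, Year becomes Ship Year)
Ship Date = 'Date'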

This design approach doesn't require you to define multiple measures for different date
roles, and it allows simultaneous filtering by different date roles. A minor price to pay,
however, with this design approach is that there will be duplication of the date
dimension table resulting in an increased model storage size. As dimension-type tables
typically store fewer rows relative to fact-type tables, it is rarely a concern.

Observe the following good design practices when you create model dimension-type
tables for each role:

Ensure that the column names are self-describing. While it's possible to have a
Year column in all date tables (column names are unique within their table), it won't be
self-describing in default visual titles. Consider renaming columns in each
dimension role table, so that the Ship Date table has a year column named Ship
Year, etc.
When relevant, ensure that table descriptions provide feedback to report authors
(through Fields pane tooltips) about how filter propagation is configured. This
clarity is important when the model contains a generically named table, like Date,
which is used to filter many fact-type tables. In the case that this table has, for
example, an active relationship to the reseller sales order date column, consider
providing a table description like "Filters reseller sales by order date".

For more information, see Active vs inactive relationship guidance.

Junk dimensions
A junk dimension is useful when there are many dimensions, especially those consisting of
few attributes (perhaps one), and when these attributes have few values. Good
candidates include order status columns, or customer demographic columns (gender,
age group, etc.).

The design objective of a junk dimension is to consolidate many "small" dimensions into
a single dimension to both reduce the model storage size and also reduce Fields pane
clutter by surfacing fewer model tables.

A junk dimension table is typically the Cartesian product of all dimension attribute
members, with a surrogate key column. The surrogate key provides a unique reference
to each row in the table. You can build the dimension in a data warehouse, or by using
Power Query to create a query that performs full outer query joins, then adds a
surrogate key (index column).

You load this query to the model as a dimension-type table. You also need to merge this
query with the fact query, so the index column is loaded to the model to support the
creation of a "one-to-many" model relationship.

Degenerate dimensions
A degenerate dimension refers to an attribute of the fact table that is required for
filtering. At Adventure Works, the reseller sales order number is a good example. In this
case, it doesn't make good model design sense to create an independent table
consisting of just this one column, because it would increase the model storage size and
result in Fields pane clutter.

In the Power BI model, it can be appropriate to add the sales order number column to
the fact-type table to allow filtering or grouping by sales order number. It is an
exception to the formerly introduced rule that you should not mix table types (generally,
model tables should be either dimension-type or fact-type).

However, if the Adventure Works reseller sales table has order number and order line
number columns, and they're required for filtering, a degenerate dimension table would
be a good design. For more information, see One-to-one relationship guidance
(Degenerate dimensions).

Factless fact tables


A factless fact table doesn't include any measure columns. It contains only dimension
keys.

A factless fact table could store observations defined by dimension keys. For example, at
a particular date and time, a particular customer logged into your web site. You could
define a measure to count the rows of the factless fact table to perform analysis of when
and how many customers have logged in.
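
Such a measure could be as simple as the following DAX sketch (the table name is a hypothetical example):

DAX
// Count the rows of the factless fact table to analyze login activity
Customer Logins = COUNTROWS ( 'Customer Login' )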

A more compelling use of a factless fact table is to store relationships between


dimensions, and it's the Power BI model design approach we recommend for defining
many-to-many dimension relationships. In a many-to-many dimension relationship
design, the factless fact table is referred to as a bridging table.
For example, consider that salespeople can be assigned to one or more sales regions.
The bridging table would be designed as a factless fact table consisting of two columns:
salesperson key and region key. Duplicate values can be stored in both columns.

This many-to-many design approach is well documented, and it can be achieved


without a bridging table. However, the bridging table approach is considered the best
practice when relating two dimensions. For more information, see Many-to-many
relationship guidance (Relate two dimension-type tables).

Next steps
For more information about star schema design or Power BI model design, see the
following articles:

Dimensional modeling Wikipedia article


Create and manage relationships in Power BI Desktop
One-to-one relationship guidance
Many-to-many relationship guidance
Bi-directional relationship guidance
Active vs inactive relationship guidance
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Data reduction techniques for Import
modeling
Article • 02/27/2023

This article targets Power BI Desktop data modelers developing Import models. It
describes different techniques to help reduce the data loaded into Import models.

Import models are loaded with data that is compressed and optimized and then stored
to disk by the VertiPaq storage engine. When source data is loaded into memory, it is
possible to see 10x compression, and so it is reasonable to expect that 10 GB of source
data can compress to about 1 GB in size. Further, when persisted to disk an additional
20% reduction can be achieved.

Despite the efficiencies achieved by the VertiPaq storage engine, it is important that you
strive to minimize the data that is to be loaded into your models. It is especially true for
large models, or models that you anticipate will grow to become large over time. Four
compelling reasons include:

Larger model sizes may not be supported by your capacity. Shared capacity can
host models up to 1 GB in size, while Premium capacities can host larger models
depending on the SKU. For further information, read the Power BI Premium
support for large datasets article.
Smaller model sizes reduce contention for capacity resources, in particular
memory. It allows more models to be concurrently loaded for longer periods of
time, resulting in lower eviction rates.
Smaller models achieve faster data refresh, resulting in lower latency reporting,
higher dataset refresh throughput, and less pressure on source system and
capacity resources.
Smaller table row counts can result in faster calculation evaluations, which can
deliver better overall query performance.

There are eight different data reduction techniques covered in this article. These
techniques include:

Remove unnecessary columns


Remove unnecessary rows
Group by and summarize
Optimize column data types
Preference for custom columns
Disable Power Query query load
Disable auto date/time
Switch to Mixed mode

Remove unnecessary columns


Model table columns serve two main purposes:

Reporting, to achieve report designs that appropriately filter, group, and summarize
model data
Model structure, by supporting model relationships, model calculations, security
roles, and even data color formatting

Columns that don't serve these purposes can probably be removed. Removing columns
is referred to as vertical filtering.

We recommend that you design models with exactly the right number of columns based
on the known reporting requirements. Your requirements may change over time, but
bear in mind that it's easier to add columns later than it is to remove them later.
Removing columns can break reports or the model structure.

Remove unnecessary rows


Model tables should be loaded with as few rows as possible. It can be achieved by
loading filtered rowsets into model tables for two different reasons: to filter by entity or
by time. Removing rows is referred to as horizontal filtering.

Filtering by entity involves loading a subset of source data into the model. For example,
instead of loading sales facts for all sales regions, only load facts for a single region. This
design approach will result in many smaller models, and it can also eliminate the need
to define row-level security (but will require granting specific dataset permissions in the
Power BI service, and creating "duplicate" reports that connect to each dataset). You can
leverage the use of Power Query parameters and Power BI Template files to simplify
management and publication. For further information, read the Deep Dive into Query
Parameters and Power BI Templates blog entry.

Filtering by time involves limiting the amount of data history loaded into fact-type
tables (and limiting the date rows loaded into the model date tables). We suggest you
don't automatically load all available history, unless it is a known reporting requirement.
It is helpful to understand that time-based Power Query filters can be parameterized,
and even set to use relative time periods (relative to the refresh date, for example, the
past five years). Also, bear in mind that retrospective changes to time filters will not
break reports; it will just result in less (or more) data history available in reports.
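
A hedged Power Query M sketch of a relative time filter, assuming a Sales query with an OrderDate column, might keep only the past five years of history:

Power Query M
// Keep only rows from the last five years, relative to the refresh date
let
    Source = Sales,
    #"Last 5 Years" = Table.SelectRows(
        Source,
        each [OrderDate] >= Date.AddYears(Date.From(DateTime.LocalNow()), -5)
    )
in
    #"Last 5 Years"
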
Group by and summarize
Perhaps the most effective technique to reduce a model size is to load pre-summarized
data. This technique can be used to raise the grain of fact-type tables. There is a distinct
trade-off, however, resulting in loss of detail.

For example, a source sales fact table stores one row per order line. Significant data
reduction could be achieved by summarizing all sales metrics, grouping by date,
customer, and product. Consider, then, that an even more significant data reduction
could be achieved by grouping by date at month level. It could achieve a possible 99%
reduction in model size, but reporting at day level—or individual order level—is no
longer possible. Deciding to summarize fact-type data always involves tradeoffs.
This tradeoff can be mitigated by a Mixed model design, and this option is described in
the Switch to Mixed mode technique.
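
A hedged Power Query M sketch of this technique, assuming an order line-level Sales query with the column names shown:

Power Query M
// Raise the grain: group order lines by date, customer, and product,
// keeping only the summarized sales amount
let
    Source = Sales,
    #"Grouped Rows" = Table.Group(
        Source,
        {"OrderDate", "CustomerKey", "ProductKey"},
        {{"Sales Amount", each List.Sum([SalesAmount]), type number}}
    )
in
    #"Grouped Rows"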

Optimize column data types


The VertiPaq storage engine uses separate data structures for each column. By design,
these data structures achieve the highest optimizations for numeric column data, which
use value encoding. Text and other non-numeric data, however, uses hash encoding. It
requires the storage engine to assign a numeric identifier to each unique text value
contained in the column. It is the numeric identifier, then, that is stored in the data
structure, requiring a hash lookup during storage and querying.

In some specific instances, you can convert source text data into numeric values. For
example, a sales order number may be consistently prefixed by a text value (e.g.
"SO123456"). The prefix could be removed, and the order number value converted to
whole number. For large tables, it can result in significant data reduction, especially
when the column contains unique or high cardinality values.
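
A hedged Power Query M sketch of this conversion, assuming a SalesOrderNumber column that is consistently prefixed with "SO":

Power Query M
// Strip the "SO" prefix and store the order number as a whole number
let
    Source = Sales,
    #"Converted Order Number" = Table.TransformColumns(
        Source,
        {{"SalesOrderNumber", each Number.FromText(Text.Replace(_, "SO", "")), Int64.Type}}
    )
in
    #"Converted Order Number"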

In this example, we recommend that you set the column Default Summarization
property to "Do Not Summarize". It will help to minimize the inappropriate
summarization of the order number values.

Preference for custom columns


The VertiPaq storage engine stores model calculated columns (defined in DAX) just like
regular Power Query-sourced columns. However, the data structures are stored slightly
differently, and typically achieve less efficient compression. Also, they are built once all
Power Query tables are loaded, which can result in extended data refresh times. It is
therefore less efficient to add table columns as calculated columns than Power Query
computed columns (defined in M).

The preference should be to create custom columns in Power Query. When the source is a
database, you can achieve greater load efficiency in two ways. The calculation can be
defined in the SQL statement (using the native query language of the provider), or it can
be materialized as a column in the data source.
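
A minimal Power Query M sketch of such a custom column, assuming SalesAmount and TotalProductCost columns exist in the source query:

Power Query M
// Compute the column during refresh in Power Query,
// rather than as a DAX calculated column
let
    Source = Sales,
    #"Added Gross Margin" = Table.AddColumn(
        Source,
        "Gross Margin",
        each [SalesAmount] - [TotalProductCost],
        type number
    )
in
    #"Added Gross Margin"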

However, in some instances, model calculated columns may be the better choice. It can
be the case when the formula involves evaluating measures, or it requires specific
modeling functionality only supported in DAX functions. For information on one such
example, refer to the Understanding functions for parent-child hierarchies in DAX article.

Disable Power Query query load


Power Query queries that are intended to support data integration with other queries
should not be loaded to the model. To avoid loading the query to the model, take care
to ensure that you disable query load in these instances.

Disable auto date/time


Power BI Desktop includes an option called Auto date/time. When enabled, it creates a
hidden auto date/time table for date columns to support report authors when
configuring filters, grouping, and drill-down actions for calendar time periods. The
hidden tables are in fact calculated tables that will increase the size of the model. For
guidance about using this option, refer to the Auto date/time guidance in Power BI
Desktop article.

Switch to Mixed mode


In Power BI Desktop, a Mixed mode design produces a Composite model. Essentially, it
allows you to determine storage mode for each table. Therefore, each table can have its
Storage Mode property set as Import or DirectQuery (Dual is another option).

An effective technique to reduce the model size is to set the Storage Mode property for
larger fact-type tables to DirectQuery. Consider that this design approach could work
well in conjunction with the Group by and summarize technique introduced earlier. For
example, summarized sales data could be used to achieve high performance "summary"
reporting. A drillthrough page could display granular sales for specific (and narrow)
filter context, displaying all in-context sales orders. In this example, the drillthrough
page would include visuals based on a DirectQuery table to retrieve the sales order data.

There are, however, many security and performance implications related to Composite
models. For further information, read the Use composite models in Power BI Desktop
article.

Next steps
For more information about Power BI Import model design, see the following articles:

Use composite models in Power BI Desktop


Storage mode in Power BI Desktop
Questions? Try asking the Power BI Community
Create date tables in Power BI Desktop
Article • 02/27/2023

This article targets you as a data modeler working with Power BI Desktop. It describes
good design practices for creating date tables in your data models.

To work with Data Analysis Expressions (DAX) time intelligence functions, there's a
prerequisite model requirement: You must have at least one date table in your model. A
date table is a table that meets the following requirements:

" It must have a column of data type date (or date/time)—known as the date column.
" The date column must contain unique values.
" The date column must not contain BLANKs.
" The date column must not have any missing dates.
" The date column must span full years. A year isn't necessarily a calendar year
(January-December).
" The date table must be marked as a date table.

You can use any of several techniques to add a date table to your model:

The Auto date/time option


Power Query to connect to a date dimension table
Power Query to generate a date table
DAX to generate a date table
DAX to clone an existing date table

 Tip

A date table is perhaps the most consistent feature you'll add to any of your
models. What's more, within an organization a date table should be consistently
defined. So, whatever technique you decide to use, we recommend you create a
Power BI Desktop template that includes a fully configured date table. Share the
template with all modelers in your organization. So, whenever someone develops a
new model, they can begin with a consistently defined date table.

Use Auto date/time


The Auto date/time option delivers convenient, fast, and easy-to-use time intelligence.
Report authors can work with time intelligence when filtering, grouping, and drilling
down through calendar time periods.

We recommend that you keep the Auto date/time option enabled only when you
work with calendar time periods, and when you have simplistic model requirements in
relation to time. Using this option can also be convenient when creating ad hoc models
or performing data exploration or profiling. This approach, however, doesn't support a
single date table design that can propagate filters to multiple tables. For more
information, see Auto date/time guidance in Power BI Desktop.

Connect with Power Query


When your data source already has a date table, we recommend you use it as the source
of your model date table. It's typically the case when you're connecting to a data
warehouse, as it will have a date dimension table. This way, your model leverages a
single source of truth for time in your organization.

If you're developing a DirectQuery model and your data source doesn't include a date
table, we strongly recommend you add a date table to the data source. It should meet
all the modeling requirements of a date table. You can then use Power Query to connect
to the date table. This way, your model calculations can leverage the DAX time
intelligence capabilities.

Generate with Power Query


You can generate a date table using Power Query. For more information, see Chris
Webb's blog entry Generating A Date Dimension Table In Power Query .

 Tip

If you don't have a data warehouse or other consistent definition for time in your
organization, consider using Power Query to publish a dataflow. Then, have all data
modelers connect to the dataflow to add date tables to their models. The dataflow
becomes the single source of truth for time in your organization.

If you need to generate a date table, consider doing it with DAX. You might find it's
easier. What's more, it's likely to be more convenient, because DAX includes some built-
in intelligence to simplify creating and managing date tables.

Generate with DAX


You can generate a date table in your model by creating a calculated table using either
the CALENDAR or CALENDARAUTO DAX functions. Each function returns a single-
column table of dates. You can then extend the calculated table with calculated columns
to support your date interval filtering and grouping requirements.

Use the CALENDAR function when you want to define a date range. You pass in
two values: the start date and end date. These values can be defined by other DAX
functions, like MIN(Sales[OrderDate]) or MAX(Sales[OrderDate]) .
Use the CALENDARAUTO function when you want the date range to automatically
encompass all dates stored in the model. You can pass in a single optional
parameter that's the end month of the year (if your year is a calendar year, which
ends in December, you don't need to pass in a value). It's a helpful function,
because it ensures that full years of dates are returned—it's a requirement for a
marked date table. What's more, you don't need to manage extending the table to
future years: When a data refresh completes, it triggers the recalculation of the
table. A recalculation will automatically extend the table's date range when dates
for a new year are loaded into the model.
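
For example, the following calculated table definition is one possible sketch that combines
these ideas. It uses CALENDARAUTO and adds a few calculated columns; the table name, the
column names, and the chosen columns are illustrative only, so extend them to suit your own
filtering and grouping requirements.

DAX

Date =
ADDCOLUMNS(
    CALENDARAUTO(),
    "Year", YEAR([Date]),
    "Month Number", MONTH([Date]),
    "Month", FORMAT([Date], "MMMM"),
    "Quarter", "Q" & QUARTER([Date])
)

If you need the date range to follow your fact data instead, you could swap CALENDARAUTO()
for CALENDAR(MIN(Sales[OrderDate]), MAX(Sales[OrderDate]))—bearing in mind that CALENDAR
won't extend the range to full years for you.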

 Tip

For more information about creating calculated tables, including an example of
how to create a date table, work through the Add calculated tables and columns
to Power BI Desktop models learning module.

Clone with DAX


When your model already has a date table and you need an additional date table, you
can easily clone the existing date table. It's the case when date is a role-playing
dimension. You can clone a table by creating a calculated table. The calculated table
expression is simply the name of the existing date table.
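
For example, if the model already contains a table named Date, a second role-playing copy
for shipping analysis could be defined as follows. The table names are illustrative.

DAX

Ship Date = 'Date'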

Next steps
For more information related to this article, check out the following resources:

Auto date/time in Power BI Desktop


Auto date/time guidance in Power BI Desktop
Set and use date tables in Power BI Desktop
Self-service data prep in Power BI
CALENDAR function (DAX)
CALENDARAUTO function (DAX)
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Auto date/time guidance in Power BI
Desktop
Article • 02/27/2023

This article targets data modelers developing Import or Composite models in Power BI
Desktop. It provides guidance, recommendations, and considerations when using Power
BI Desktop Auto date/time in specific situations. For an overview and general
introduction to Auto date/time, see Auto date/time in Power BI Desktop.

The Auto date/time option delivers convenient, fast, and easy-to-use time intelligence.
Report authors can work with time intelligence when filtering, grouping, and drilling
down through calendar time periods.

Considerations
The following bulleted list describes considerations—and possible limitations—related
to the Auto date/time option.

Applies to all or none: When the Auto date/time option is enabled, it applies to all
date columns in Import tables that aren't the "many" side of a relationship. It can't
be selectively enabled or disabled on a column-by-column basis.

Calendar periods only: The year and quarter columns relate to calendar periods. It
means that the year begins on January 1 and finishes on December 31. There's no
ability to customize the year commencement (or completion) date.

Customization: It's not possible to customize the values used to describe time
periods. Further, it's not possible to add additional columns to describe other time
periods, for example, weeks.

Year filtering: The Quarter, Month, and Day column values don't include the year
value. For example, the Month column contains the month names only (that is,
January, February, etc.). The values are not fully self-describing, and in some report
designs may not communicate the year filter context.

That's why it's important that filters or grouping take place on the Year
column. When drilling down by using the hierarchy, the year will be filtered, unless the
Year level is intentionally removed. If there's no filter or group by year, a grouping
by month, for example, would summarize values across all years for that month.

Single table date filtering: Because each date column produces its own (hidden)
auto date/time table, it's not possible to apply a time filter to one table and have it
propagate to multiple model tables. Filtering in this way is a common modeling
requirement when reporting on multiple subjects (fact-type tables) like sales and
sales budget. When using auto date/time, the report author will need to apply
filters to each different date column.

Model size: Each date column that generates a hidden auto date/time table
results in an increased model size and also extends the data refresh time.

Other reporting tools: It isn't possible to work with auto date/time tables when:
Using Analyze in Excel.
Using Power BI paginated report Analysis Services query designers.
Connecting to the model using non-Power BI report designers.

Recommendations
We recommend that you keep the Auto date/time option enabled only when you
work with calendar time periods, and when you have simplistic model requirements in
relation to time. Using this option can also be convenient when creating ad hoc models
or performing data exploration or profiling.

When your data source already defines a date dimension table, this table should be
used to consistently define time within your organization. It will certainly be the case if
your data source is a data warehouse. Otherwise, you can generate date tables in your
model by using the DAX CALENDAR or CALENDARAUTO functions. You can then add
calculated columns to support the known time filtering and grouping requirements. This
design approach may allow you to create a single date table that propagates to all fact-
type tables, possibly resulting a single table to apply time filters. For further information
on creating date tables, read the Set and use date tables in Power BI Desktop article.

 Tip

For more information about creating calculated tables, including an example of
how to create a date table, work through the Add calculated tables and columns
to Power BI Desktop models learning module.

If the Auto date/time option isn't relevant to your projects, we recommend that you
disable the global Auto date/time option. It will ensure that all new Power BI Desktop
files you create won't enable the Auto date/time option.
Next steps
For more information related to this article, check out the following resources:

Create date tables in Power BI Desktop


Auto date/time in Power BI Desktop
Set and use date tables in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
One-to-one relationship guidance
Article • 02/27/2023

This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on working with one-to-one model relationships. A one-to-one
relationship can be created when both tables each contain a column of common and
unique values.

Note

An introduction to model relationships is not covered in this article. If you're not
completely familiar with relationships, their properties or how to configure them,
we recommend that you first read the Model relationships in Power BI Desktop
article.

It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.

There are two scenarios that involve one-to-one relationships:

Degenerate dimensions: You can derive a degenerate dimension from a fact-type
table.

Row data spans across tables: A single business entity or subject is loaded as two
(or more) model tables, possibly because their data is sourced from different data
stores. This scenario can be common for dimension-type tables. For example,
master product details are stored in an operational sales system, and
supplementary product details are stored in a different source.

It's unusual, however, that you'd relate two fact-type tables with a one-to-one
relationship. It's because both fact-type tables would need to have the same
dimensionality and granularity. Also, each fact-type table would need unique
columns to allow the model relationship to be created.

Degenerate dimensions
When columns from a fact-type table are used for filtering or grouping, you can
consider making them available in a separate table. This way, you separate columns
used for filtering or grouping from those columns used to summarize fact rows. This
separation can:
Reduce storage space
Simplify model calculations
Contribute to improved query performance
Deliver a more intuitive Fields pane experience to your report authors

Consider a source sales table that stores sales order details in two columns.

The OrderNumber column stores the order number, and the OrderLineNumber column
stores a sequence of lines within the order.

In the following model diagram, notice that the order number and order line number
columns haven't been loaded to the Sales table. Instead, their values were used to
create a surrogate key column named SalesOrderLineID. (The key value is calculated by
multiplying the order number by 1000, and then adding the order line number.)
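
To illustrate the arithmetic only, the key could be expressed like this if the two source
columns were available in the model. In the example, however, the column is prepared during
data load (for instance, as a Power Query computed column), so treat this DAX calculated
column purely as a sketch.

DAX

SalesOrderLineID =
(Sales[OrderNumber] * 1000) + Sales[OrderLineNumber]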

The Sales Order table provides a rich experience for report authors with three columns:
Sales Order, Sales Order Line, and Line Number. It also includes a hierarchy. These table
resources support report designs that need to filter, group by, or drill down through
orders and order lines.

As the Sales Order table is derived from the sales data, there should be exactly the same
number of rows in each table. Further, there should be matching values between each
SalesOrderLineID column.

Row data spans across tables


Consider an example involving two one-to-one related dimension-type tables: Product,
and Product Category. Each table represents imported data and has a SKU (Stock-
Keeping Unit) column containing unique values.

Here's a partial model diagram of the two tables.

The first table is named Product, and it contains three columns: Color, Product, and
SKU. The second table is named Product Category, and it contains two columns:
Category, and SKU. A one-to-one relationship relates the two SKU columns. The
relationship filters in both directions, which is always the case for one-to-one
relationships.

To help describe how the relationship filter propagation works, the model diagram has
been modified to reveal the table rows. All examples in this article are based on this
data.

Note

It's not possible to display table rows in the Power BI Desktop model diagram. It's
done in this article to support the discussion with clear examples.
The row details for the two tables are described in the following bulleted list:

The Product table has three rows:


SKU CL-01, Product T-shirt, Color Green
SKU CL-02, Product Jeans, Color Blue
SKU AC-01, Product Hat, Color Blue
The Product Category table has two rows:
SKU CL-01, Category Clothing
SKU AC-01, Category Accessories

Notice that the Product Category table doesn't include a row for the product SKU CL-
02. We'll discuss the consequences of this missing row later in this article.

In the Fields pane, report authors will find product-related fields in two tables: Product
and Product Category.

Let's see what happens when fields from both tables are added to a table visual. In this
example, the SKU column is sourced from the Product table.
Notice that the Category value for product SKU CL-02 is BLANK. It's because there's no
row in the Product Category table for this product.

Recommendations
When possible, we recommend you avoid creating one-to-one model relationships
when row data spans across model tables. It's because this design can:

Contribute to Fields pane clutter, listing more tables than necessary


Make it difficult for report authors to find related fields, because they're distributed
across multiple tables
Limit the ability to create hierarchies, as their levels must be based on columns
from the same table
Produce unexpected results when there isn't a complete match of rows between
the tables

Specific recommendations differ depending on whether the one-to-one relationship is
intra source group or cross source group. For more information about relationship
evaluation, see Model relationships in Power BI Desktop (Relationship evaluation).

Intra source group one-to-one relationship


When a one-to-one intra source group relationship exists between tables, we
recommend consolidating the data into a single model table. It's done by merging the
Power Query queries.

The following steps present a methodology to consolidate and model the one-to-one
related data:

1. Merge queries: When combining the two queries, give consideration to the
completeness of data in each query. If one query contains a complete set of rows
(like a master list), merge the other query with it. Configure the merge
transformation to use a left outer join, which is the default join type. This join type
ensures you'll keep all rows of the first query, and supplement them with any
matching rows of the second query. Expand all required columns of the second
query into the first query.

2. Disable query load: Be sure to disable the load of the second query. This way, it
won't load its result as a model table. This configuration reduces the data model
storage size, and helps to unclutter the Fields pane.

In our example, report authors now find a single table named Product in the Fields
pane. It contains all product-related fields.

3. Replace missing values: If the second query has unmatched rows, NULLs will
appear in the columns introduced from it. When appropriate, consider replacing
NULLs with a token value. Replacing missing values is especially important when
report authors filter or group by the column values, as BLANKs could appear in
report visuals.

In the following table visual, notice that the category for product SKU CL-02 now
reads [Undefined]. In the query, null categories were replaced with this token text
value.

4. Create hierarchies: If relationships exist between the columns of the now-
consolidated table, consider creating hierarchies. This way, report authors will
quickly identify opportunities for report visual drilling.

In our example, report authors now can use a hierarchy that has two levels:
Category and Product.

If you like how separate tables help organize your fields, we still recommend
consolidating into a single table. You can still organize your fields, but by using display
folders instead.

In our example, report authors can find the Category field within the Marketing display
folder.
Should you still decide to define one-to-one intra source group relationships in your
model, when possible, ensure there are matching rows in the related tables. As a one-
to-one intra source group relationship is evaluated as a regular relationship, data
integrity issues could surface in your report visuals as BLANKs. (You can see an example
of a BLANK grouping in the first table visual presented in this article.)

Cross source group one-to-one relationship


When a one-to-one cross source group relationship exists between tables, there's no
alternative model design—unless you pre-consolidate the data in your data sources.
Power BI will evaluate the one-to-one model relationship as a limited relationship.
Therefore, take care to ensure there are matching rows in the related tables, as
unmatched rows will be eliminated from query results.

Let's see what happens when fields from both tables are added to a table visual, and a
limited relationship exists between the tables.
The table displays two rows only. Product SKU CL-02 is missing because there's no
matching row in the Product Category table.

Next steps
For more information related to this article, check out the following resources:

Model relationships in Power BI Desktop


Understand star schema and the importance for Power BI
Relationship troubleshooting guidance
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Many-to-many relationship guidance
Article • 02/27/2023

This article targets you as a data modeler working with Power BI Desktop. It describes
three different many-to-many modeling scenarios. It also provides you with guidance on
how to successfully design for them in your models.

Note

An introduction to model relationships is not covered in this article. If you're not
completely familiar with relationships, their properties or how to configure them,
we recommend that you first read the Model relationships in Power BI Desktop
article.

It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.

There are, in fact, three many-to-many scenarios. They can occur when you're required
to:

Relate two dimension-type tables


Relate two fact-type tables
Relate higher grain fact-type tables, when the fact-type table stores rows at a
higher grain than the dimension-type table rows

Note

Power BI now natively supports many-to-many relationships. For more information,
see Apply many-many relationships in Power BI Desktop.

Relate many-to-many dimensions


Let's consider the first many-to-many scenario type with an example. The classic
scenario relates two entities: bank customers and bank accounts. Consider that
customers can have multiple accounts, and accounts can have multiple customers.
When an account has multiple customers, they're commonly called joint account holders.

Modeling these entities is straightforward. One dimension-type table stores accounts,
and another dimension-type table stores customers. As is characteristic of dimension-
type tables, there's an ID column in each table. To model the relationship between the
two tables, a third table is required. This table is commonly referred to as a bridging
table. In this example, it's purpose is to store one row for each customer-account
association. Interestingly, when this table only contains ID columns, it's called a factless
fact table.

Here's a simplistic model diagram of the three tables.

The first table is named Account, and it contains two columns: AccountID and Account.
The second table is named AccountCustomer, and it contains two columns: AccountID
and CustomerID. The third table is named Customer, and it contains two columns:
CustomerID and Customer. Relationships don't exist between any of the tables.

Two one-to-many relationships are added to relate the tables. Here's an updated model
diagram of the related tables. A fact-type table named Transaction has been added. It
records account transactions. The bridging table and all ID columns have been hidden.

To help describe how the relationship filter propagation works, the model diagram has
been modified to reveal the table rows.

Note

It's not possible to display table rows in the Power BI Desktop model diagram. It's
done in this article to support the discussion with clear examples.
The row details for the four tables are described in the following bulleted list:

The Account table has two rows:


AccountID 1 is for Account-01
AccountID 2 is for Account-02
The Customer table has two rows:
CustomerID 91 is for Customer-91
CustomerID 92 is for Customer-92
The AccountCustomer table has three rows:
AccountID 1 is associated with CustomerID 91
AccountID 1 is associated with CustomerID 92
AccountID 2 is associated with CustomerID 92
The Transaction table has three rows:
Date January 1 2019, AccountID 1, Amount 100
Date February 2 2019, AccountID 2, Amount 200
Date March 3 2019, AccountID 1, Amount -25

Let's see what happens when the model is queried.

Below are two visuals that summarize the Amount column from the Transaction table.
The first visual groups by account, and so the sum of the Amount columns represents
the account balance. The second visual groups by customer, and so the sum of the
Amount columns represents the customer balance.
The first visual is titled Account Balance, and it has two columns: Account and Amount.
It displays the following result:

Account-01 balance amount is 75


Account-02 balance amount is 200
The total is 275

The second visual is titled Customer Balance, and it has two columns: Customer and
Amount. It displays the following result:

Customer-91 balance amount is 275


Customer-92 balance amount is 275
The total is 275

A quick glance at the table rows and the Account Balance visual reveals that the result is
correct, for each account and the total amount. It's because each account grouping
results in a filter propagation to the Transaction table for that account.

However, something doesn't appear correct with the Customer Balance visual. Each
customer in the Customer Balance visual has the same balance as the total balance. This
result could only be correct if every customer was a joint account holder of every
account. That's not the case in this example. The issue is related to filter propagation. It's
not flowing all the way to the Transaction table.

Follow the relationship filter directions from the Customer table to the Transaction
table. It should be apparent that the relationship between the Account and
AccountCustomer table is propagating in the wrong direction. The filter direction for
this relationship must be set to Both.
As expected, there has been no change to the Account Balance visual.

The Customer Balance visuals, however, now displays the following result:

Customer-91 balance amount is 75


Customer-92 balance amount is 275
The total is 275

The Customer Balance visual now displays a correct result. Follow the filter directions for
yourself, and see how the customer balances were calculated. Also, understand that the
visual total means all customers.

Someone unfamiliar with the model relationships could conclude that the result is
incorrect. They might ask: Why isn't the total balance for Customer-91 and Customer-92
equal to 350 (75 + 275)?

The answer to their question lies in understanding the many-to-many relationship. Each
customer balance can represent the addition of multiple account balances, and so the
customer balances are non-additive.
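
If you'd rather keep the relationship between the Account and AccountCustomer tables
filtering in a single direction, an alternative is to activate bi-directional filtering only
for specific calculations by using the CROSSFILTER DAX function. The following measure is a
sketch based on the example tables above; it would replace the implicit summarization of the
Amount column in the Customer Balance visual.

DAX

Customer Balance =
CALCULATE(
    SUM('Transaction'[Amount]),
    -- Temporarily filter from AccountCustomer back to Account during evaluation
    CROSSFILTER(Account[AccountID], AccountCustomer[AccountID], BOTH)
)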

Relate many-to-many dimensions guidance


When you have a many-to-many relationship between dimension-type tables, we
provide the following guidance:

Add each many-to-many related entity as a model table, ensuring it has a unique
identifier (ID) column
Add a bridging table to store associated entities
Create one-to-many relationships between the three tables
Configure one bi-directional relationship to allow filter propagation to continue to
the fact-type tables
When it isn't appropriate to have missing ID values, set the Is Nullable property of
ID columns to FALSE—data refresh will then fail if missing values are sourced
Hide the bridging table (unless it contains additional columns or measures
required for reporting)
Hide any ID columns that aren't suitable for reporting (for example, when IDs are
surrogate keys)
If it makes sense to leave an ID column visible, ensure that it's on the "one" side of
the relationship—always hide the "many" side column. It results in the best filter
performance.
To avoid confusion or misinterpretation, communicate explanations to your report
users—you can add descriptions with text boxes or visual header tooltips

We don't recommend you relate many-to-many dimension-type tables directly. This
design approach requires configuring a relationship with a many-to-many cardinality.
Conceptually it can be achieved, yet it implies that the related columns will contain
duplicate values. It's a well-accepted design practice, however, that dimension-type
tables have an ID column. Dimension-type tables should always use the ID column as
the "one" side of a relationship.

Relate many-to-many facts


The second many-to-many scenario type involves relating two fact-type tables. Two
fact-type tables can be related directly. This design technique can be useful for quick
and simple data exploration. However, and to be clear, we generally don't recommend
this design approach. We'll explain why later in this section.

Let's consider an example that involves two fact-type tables: Order and Fulfillment. The
Order table contains one row per order line, and the Fulfillment table can contain zero
or more rows per order line. Rows in the Order table represent sales orders. Rows in the
Fulfillment table represent order items that have been shipped. A many-to-many
relationship relates the two OrderID columns, with filter propagation only from the
Order table (Order filters Fulfillment).

The relationship cardinality is set to many-to-many to support storing duplicate OrderID
values in both tables. In the Order table, duplicate OrderID values can exist because an
order can have multiple lines. In the Fulfillment table, duplicate OrderID values can exist
because orders may have multiple lines, and order lines can be fulfilled by many
shipments.

Let's now take a look at the table rows. In the Fulfillment table, notice that order lines
can be fulfilled by multiple shipments. (The absence of an order line means the order is
yet to be fulfilled.)

The row details for the two tables are described in the following bulleted list:

The Order table has five rows:


OrderDate January 1 2019, OrderID 1, OrderLine 1, ProductID Prod-A,
OrderQuantity 5, Sales 50
OrderDate January 1 2019, OrderID 1, OrderLine 2, ProductID Prod-B,
OrderQuantity 10, Sales 80
OrderDate February 2 2019, OrderID 2, OrderLine 1, ProductID Prod-B,
OrderQuantity 5, Sales 40
OrderDate February 2 2019, OrderID 2, OrderLine 2, ProductID Prod-C,
OrderQuantity 1, Sales 20
OrderDate March 3 2019, OrderID 3, OrderLine 1, ProductID Prod-C,
OrderQuantity 5, Sales 100
The Fulfillment table has four rows:
FulfillmentDate January 1 2019, FulfillmentID 50, OrderID 1, OrderLine 1,
FulfillmentQuantity 2
FulfillmentDate February 2 2019, FulfillmentID 51, OrderID 2, OrderLine 1,
FulfillmentQuantity 5
FulfillmentDate February 2 2019, FulfillmentID 52, OrderID 1, OrderLine 1,
FulfillmentQuantity 3
FulfillmentDate January 1 2019, FulfillmentID 53, OrderID 1, OrderLine 2,
FulfillmentQuantity 10

Let's see what happens when the model is queried. Here's a table visual comparing
order and fulfillment quantities by the Order table OrderID column.

The visual presents an accurate result. However, the usefulness of the model is limited—
you can only filter or group by the Order table OrderID column.

Relate many-to-many facts guidance


Generally, we don't recommend relating two fact-type tables directly using many-to-
many cardinality. The main reason is because the model won't provide flexibility in the
ways you report visuals filter or group. In the example, it's only possible for visuals to
filter or group by the Order table OrderID column. An additional reason relates to the
quality of your data. If your data has integrity issues, it's possible some rows may be
omitted during querying due to the nature of the limited relationship. For more
information, see Model relationships in Power BI Desktop (Relationship evaluation).

Instead of relating fact-type tables directly, we recommend you adopt star schema
design principles. You do it by adding dimension-type tables. The dimension-type tables
then relate to the fact-type tables by using one-to-many relationships. This design
approach is robust as it delivers flexible reporting options. It lets you filter or group
using any of the dimension-type columns, and summarize any related fact-type table.

Let's consider a better solution.


Notice the following design changes:

The model now has four additional tables: OrderLine, OrderDate, Product, and
FulfillmentDate
The four additional tables are all dimension-type tables, and one-to-many
relationships relate these tables to the fact-type tables
The OrderLine table contains an OrderLineID column, which represents the
OrderID value multiplied by 100, plus the OrderLine value—a unique identifier for
each order line
The Order and Fulfillment tables now contain an OrderLineID column, and they no
longer contain the OrderID and OrderLine columns
The Fulfillment table now contains OrderDate and ProductID columns
The FulfillmentDate table relates only to the Fulfillment table
All unique identifier columns are hidden

Taking the time to apply star schema design principles delivers the following benefits:

Your report visuals can filter or group by any visible column from the dimension-
type tables
Your report visuals can summarize any visible column from the fact-type tables
Filters applied to the OrderLine, OrderDate, or Product tables will propagate to
both fact-type tables
All relationships are one-to-many, and each relationship is a regular relationship.
Data integrity issues won't be masked. For more information, see Model
relationships in Power BI Desktop (Relationship evaluation).

Relate higher grain facts


This many-to-many scenario is very different from the other two already described in
this article.

Let's consider an example involving four tables: Date, Sales, Product, and Target. The
Date and Product tables are dimension-type tables, and one-to-many relationships relate each
to the Sales fact-type table. So far, it represents a good star schema design. The Target
table, however, is yet to be related to the other tables.

The Target table contains three columns: Category, TargetQuantity, and TargetYear. The
table rows reveal a granularity of year and product category. In other words, targets—
used to measure sales performance—are set each year for each product category.

Because the Target table stores data at a higher level than the dimension-type tables, a
one-to-many relationship cannot be created. Well, it's true for just one of the
relationships. Let's explore how the Target table can be related to the dimension-type
tables.

Relate higher grain time periods


A relationship between the Date and Target tables should be a one-to-many
relationship. It's because the TargetYear column values are dates. In this example, each
TargetYear column value is the first date of the target year.

 Tip

When storing facts at a higher time granularity than day, set the column data type
to Date (or Whole number if you're using date keys). In the column, store a value
representing the first day of the time period. For example, a year period is recorded
as January 1 of the year, and a month period is recorded as the first day of that
month.
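
For instance, if a source target table stored only the year as a whole number, a calculated
column could derive the required first-of-year date. This is a sketch only—the
TargetYearNumber column is hypothetical and isn't part of the example model in this article.

DAX

Target Year Start =
DATE(Target[TargetYearNumber], 1, 1)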

Care must be taken, however, to ensure that month or date level filters produce a
meaningful result. Without any special calculation logic, report visuals may report that
target dates are literally the first day of each year. All other days—and all months except
January—will summarize the target quantity as BLANK.

The following matrix visual shows what happens when the report user drills from a year
into its months. The visual is summarizing the TargetQuantity column. (The Show items
with no data option has been enabled for the matrix rows.)

To avoid this behavior, we recommend you control the summarization of your fact data
by using measures. One way to control the summarization is to return BLANK when
lower-level time periods are queried. Another way—defined with some sophisticated
DAX—is to apportion values across lower-level time periods.
Consider the following measure definition that uses the ISFILTERED DAX function. It only
returns a value when the Date or Month columns aren't filtered.

DAX

Target Quantity =
IF(
    NOT ISFILTERED('Date'[Date])
        && NOT ISFILTERED('Date'[Month]),
    SUM(Target[TargetQuantity])
)

The following matrix visual now uses the Target Quantity measure. It shows that all
monthly target quantities are BLANK.
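
The other approach mentioned earlier—apportioning values across lower-level time periods—
could be sketched as follows. This example assumes the Date table includes a Year column and
spans full years, and it simply spreads each yearly target evenly across the days of that
year. Treat it as a starting point rather than a definitive implementation.

DAX

Target Quantity (Apportioned) =
-- Days visible in the current filter context (for example, days in the selected month)
VAR DaysInPeriod = COUNTROWS('Date')
-- Yearly target for the year(s) in context, ignoring any finer-grained date filters
VAR YearlyTarget =
    CALCULATE(
        SUM(Target[TargetQuantity]),
        REMOVEFILTERS('Date'),
        VALUES('Date'[Year])
    )
-- Total number of days in those year(s)
VAR DaysInYear =
    CALCULATE(
        COUNTROWS('Date'),
        REMOVEFILTERS('Date'),
        VALUES('Date'[Year])
    )
RETURN
    DIVIDE(YearlyTarget, DaysInYear) * DaysInPeriod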

Relate higher grain (non-date)


A different design approach is required when relating a non-date column from a
dimension-type table to a fact-type table (and it's at a higher grain than the dimension-
type table).

The Category columns (from both the Product and Target tables) contain duplicate
values. So, there's no "one" side for a one-to-many relationship. In this case, you'll need to
create a many-to-many relationship. The relationship should propagate filters in a single
direction, from the dimension-type table to the fact-type table.
Let's now take a look at the table rows.

In the Target table, there are four rows: two rows for each target year (2019 and 2020),
and two categories (Clothing and Accessories). In the Product table, there are three
products. Two belong to the clothing category, and one belongs to the accessories
category. One of the clothing colors is green, and the remaining two are blue.

A table visual grouping by the Category column from the Product table produces the
following result.
This visual produces the correct result. Let's now consider what happens when the Color
column from the Product table is used to group target quantity.

The visual produces a misrepresentation of the data. What is happening here?

A filter on the Color column from the Product table results in two rows. One of the rows
is for the Clothing category, and the other is for the Accessories category. These two
category values are propagated as filters to the Target table. In other words, because
the color blue is used by products from two categories, those categories are used to
filter the targets.

To avoid this behavior, as described earlier, we recommend you control the
summarization of your fact data by using measures.

Consider the following measure definition. Notice that all Product table columns that
are beneath the category level are tested for filters.

DAX

Target Quantity =
IF(
    NOT ISFILTERED('Product'[ProductID])
        && NOT ISFILTERED('Product'[Product])
        && NOT ISFILTERED('Product'[Color]),
    SUM(Target[TargetQuantity])
)

The following table visual now uses the Target Quantity measure. It shows that all color
target quantities are BLANK.
The final model design looks like the following.

Relate higher grain facts guidance


When you need to relate a dimension-type table to a fact-type table, and the fact-type
table stores rows at a higher grain than the dimension-type table rows, we provide the
following guidance:

For higher grain fact dates:


In the fact-type table, store the first date of the time period
Create a one-to-many relationship between the date table and the fact-type
table
For other higher grain facts:
Create a many-to-many relationship between the dimension-type table and the
fact-type table
For both types:
Control summarization with measure logic—return BLANK when lower-level
dimension-type columns are used to filter or group
Hide summarizable fact-type table columns—this way, only measures can be
used to summarize the fact-type table
Next steps
For more information related to this article, check out the following resources:

Model relationships in Power BI Desktop


Understand star schema and the importance for Power BI
Relationship troubleshooting guidance
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Active vs inactive relationship guidance
Article • 05/04/2023

This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on when to create active or inactive model relationships. By default,
active relationships propagate filters to other tables. Inactive relationships, however, only
propagate filters when a DAX expression activates (uses) the relationship.

Note

An introduction to model relationships is not covered in this article. If you're not
completely familiar with relationships, their properties or how to configure them,
we recommend that you first read the Model relationships in Power BI Desktop
article.

It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.

Active relationships
Generally, we recommend defining active relationships whenever possible. They widen
the scope and potential of how your model can be used by report authors, and users
working with Q&A.

Consider an example of an Import model designed to analyze airline flight on-time
performance (OTP). The model has a Flight table, which is a fact-type table storing one
row per flight. Each row records the flight date, flight number, departure and arrival
airports, and any delay time (in minutes). There's also an Airport table, which is a
dimension-type table storing one row per airport. Each row describes the airport code,
airport name, and the country or region.

Here's a partial model diagram of the two tables.


There are two model relationships between the Flight and Airport tables. In the Flight
table, the DepartureAirport and ArrivalAirport columns relate to the Airport column of
the Airport table. In star schema design, the Airport table is described as a role-playing
dimension. In this model, the two roles are departure airport and arrival airport.

While this design works well for relational star schema designs, it doesn't for Power BI
models. It's because model relationships are paths for filter propagation, and these
paths must be deterministic. For this reason, a model cannot have multiple active
relationships between two tables. Therefore—as described in this example—one
relationship is active while the other is inactive (represented by the dashed line).
Specifically, it's the relationship to the ArrivalAirport column that's active. This means
filters applied to the Airport table automatically propagate to the ArrivalAirport column
of the Flight table.

This model design imposes severe limitations on how the data can be reported.
Specifically, it's not possible to filter the Airport table to automatically isolate flight
details for a departure airport. As reporting requirements involve filtering (or grouping)
by departure and arrival airports at the same time, two active relationships are needed.
Translating this requirement into a Power BI model design means the model must have
two airport tables.

Here's the improved model design.


The model now has two airport tables: Departure Airport and Arrival Airport. The
model relationships between these tables and the Flight table are active. Notice also
that the column names in the Departure Airport and Arrival Airport tables are prefixed
with the word Departure or Arrival.

The improved model design supports producing the following report design.

The report page filters by Melbourne as the departure airport, and the table visual
groups by arrival airports.

Note

For Import models, the additional table has resulted in an increased model size,
and longer refresh times. As such, it contradicts the recommendations described in
the Data reduction techniques for Import modeling article. However, in the
example, the requirement to have only active relationships overrides these
recommendations.
Further, it's common that dimension-type tables contain low row counts relative to
fact-type table row counts. So, the increased model size and refresh times aren't
likely to be excessively large.

Refactoring methodology
Here's a methodology to refactor a model from a single role-playing dimension-type
table, to a design with one table per role.

1. Remove any inactive relationships.

2. Consider renaming the role-playing dimension-type table to better describe its
role. In the example, the Airport table is related to the ArrivalAirport column of
the Flight table, so it's renamed as Arrival Airport.

3. Create a copy of the role-playing table, providing it with a name that reflects its
role. If it's an Import table, we recommend defining a calculated table. If it's a
DirectQuery table, you can duplicate the Power Query query.

In the example, the Departure Airport table was created by using the following
calculated table definition.

DAX

Departure Airport = 'Arrival Airport'

4. Create an active relationship to relate the new table.

5. Consider renaming the columns in the tables so they accurately reflect their role. In
the example, all columns are prefixed with the word Departure or Arrival. These
names ensure report visuals, by default, will have self-describing and non-
ambiguous labels. It also improves the Q&A experience, allowing users to easily
write their questions.

6. Consider adding descriptions to role-playing tables. (In the Fields pane, a
description appears in a tooltip when a report author hovers their cursor over the
table.) This way, you can communicate any additional filter propagation details to
your report authors.

Inactive relationships
In specific circumstances, inactive relationships can address special reporting needs.
Let's now consider different model and reporting requirements:

A sales model contains a Sales table that has two date columns: OrderDate and
ShipDate
Each row in the Sales table records a single order
Date filters are almost always applied to the OrderDate column, which always
stores a valid date
Only one measure requires date filter propagation to the ShipDate column, which
can contain BLANKs (until the order is shipped)
There's no requirement to simultaneously filter (or group by) order and ship date
periods

Here's a partial model diagram of the two tables.

There are two model relationships between the Sales and Date tables. In the Sales table,
the OrderDate and ShipDate columns relate to the Date column of the Date table. In
this model, the two roles for the Date table are order date and ship date. It's the
relationship to the OrderDate column that's active.

All of the six measures—except one—must filter by the OrderDate column. The Orders
Shipped measure, however, must filter by the ShipDate column.

Here's the Orders measure definition. It simply counts the rows of the Sales table within
the filter context. Any filters applied to the Date table will propagate to the OrderDate
column.

DAX

Orders = COUNTROWS(Sales)
Here's the Orders Shipped measure definition. It uses the USERELATIONSHIP DAX
function, which activates filter propagation for a specific relationship only during the
evaluation of the expression. In this example, the relationship to the ShipDate column is
used.

DAX

Orders Shipped =
CALCULATE(
    COUNTROWS(Sales),
    USERELATIONSHIP('Date'[Date], Sales[ShipDate])
)

This model design supports producing the following report design.

The report page filters by quarter 2019 Q4. The table visual groups by month and
displays various sales statistics. The Orders and Orders Shipped measures produce
different results. They each use the same summarization logic (count rows of the Sales
table), but different Date table filter propagation.

Notice that the quarter slicer includes a BLANK item. This slicer item appears as a result
of table expansion. While each Sales table row has an order date, some rows have a
BLANK ship date—these orders are yet to be shipped. Table expansion considers
inactive relationships too, and so BLANKs can appear due to BLANKs on the many-side
of the relationship, or due to data integrity issues.

Note

Row-level security filters only propagate through active relationships. Row-level
security filters will not propagate for inactive relationships even if USERELATIONSHIP
is added explicitly to a measure definition.
Recommendations
In summary, we recommend defining active relationships whenever possible, especially
when row-level security roles are defined for your data model. They widen the scope
and potential of how your model can be used by report authors, and users working with
Q&A. It means that role-playing dimension-type tables should be duplicated in your
model.

In specific circumstances, however, you can define one or more inactive relationships for
a role-playing dimension-type table. You can consider this design when:

There's no requirement for report visuals to simultaneously filter by different roles


You use the USERELATIONSHIP DAX function to activate a specific relationship for
relevant model calculations

Next steps
For more information related to this article, check out the following resources:

Model relationships in Power BI Desktop


Understand star schema and the importance for Power BI
Relationship troubleshooting guidance
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Bi-directional relationship guidance
Article • 02/27/2023

This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on when to create bi-directional model relationships. A bi-directional
relationship is one that filters in both directions.

Note

An introduction to model relationships is not covered in this article. If you're not
completely familiar with relationships, their properties or how to configure them,
we recommend that you first read the Model relationships in Power BI Desktop
article.

It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.

Generally, we recommend minimizing the use of bi-directional relationships. They can
negatively impact on model query performance, and possibly deliver confusing
experiences for your report users.

There are three scenarios when bi-directional filtering can solve specific requirements:

Special model relationships


Slicer items "with data"
Dimension-to-dimension analysis

Special model relationships


Bi-directional relationships play an important role when creating the following two
special model relationship types:

One-to-one: All one-to-one relationships must be bi-directional—it isn't possible
to configure otherwise. Generally, we don't recommend creating these types of
relationships. For a complete discussion and alternative designs, see One-to-one
relationship guidance.
Many-to-many: When relating two dimension-type tables, a bridging table is
required. A bi-directional filter is required to ensure filters propagate across the
bridging table. For more information, see Many-to-many relationship guidance
(Relate many-to-many dimensions).
Slicer items "with data"
Bi-directional relationships can deliver slicers that limit items to where data exists. (If
you're familiar with Excel PivotTables and slicers, it's the default behavior when sourcing
data from a Power BI dataset, or an Analysis Services model.) To help explain what it
means, first consider the following model diagram.

The first table is named Customer, and it contains three columns: Country-Region,
Customer, and CustomerCode. The second table is named Product, and it contains
three columns: Color, Product, and SKU. The third table is named Sales, and it contains
four columns: CustomerCode, OrderDate, Quantity, and SKU. The Customer and
Product tables are dimension-type tables, and each has a one-to-many relationship to
the Sales table. Each relationship filters in a single direction.

To help describe how bi-directional filtering works, the model diagram has been
modified to reveal the table rows. All examples in this article are based on this data.

Note

It's not possible to display table rows in the Power BI Desktop model diagram. It's
done in this article to support the discussion with clear examples.
The row details for the three tables are described in the following bulleted list:

The Customer table has two rows:


CustomerCode CUST-01, Customer Customer-1, Country-Region United States
CustomerCode CUST-02, Customer Customer-2, Country-Region Australia
The Product table has three rows:
SKU CL-01, Product T-shirt, Color Green
SKU CL-02, Product Jeans, Color Blue
SKU AC-01, Product Hat, Color Blue
The Sales table has three rows:
OrderDate January 1 2019, CustomerCode CUST-01, SKU CL-01, Quantity 10
OrderDate February 2 2019, CustomerCode CUST-01, SKU CL-02, Quantity 20
OrderDate March 3 2019, CustomerCode CUST-02, SKU CL-01, Quantity 30

Now consider the following report page.

The page consists of two slicers and a card visual. The first slicer is for Country-Region
and it has two items: Australia and United States. It currently slices by Australia. The
second slicer is for Product, and it has three items: Hat, Jeans, and T-shirt. No items are
selected (meaning no products are filtered). The card visual displays a quantity of 30.
When report users slice by Australia, you might want to limit the Product slicer to
display items where data relates to Australian sales. It's what's meant by showing slicer
items "with data". You can achieve this behavior by configuring the relationship between
the Product and Sales table to filter in both directions.

The Product slicer now lists a single item: T-shirt. This item represents the only product
sold to Australian customers.

We first suggest you consider carefully whether this design works for your report users.
Some report users find the experience confusing. They don't understand why slicer
items dynamically appear or disappear when they interact with other slicers.

If you do decide to show slicer items "with data", we don't recommend you configure
bi-directional relationships. Bi-directional relationships require more processing and so
they can negatively impact on query performance—especially as the number of bi-
directional relationships in your model increases.

There's a better way to achieve the same result: Instead of using bi-directional filters,
you can apply a visual-level filter to the Product slicer itself.
Let's now consider that the relationship between the Product and Sales table no longer
filters in both directions. And, the following measure definition has been added to the
Sales table.

DAX

Total Quantity = SUM(Sales[Quantity])

To show the Product slicer items "with data", it simply needs to be filtered by the Total
Quantity measure using the "is not blank" condition.

Dimension-to-dimension analysis
A different scenario involving bi-directional relationships treats a fact-type table like a
bridging table. This way, it supports analyzing dimension-type table data within the filter
context of a different dimension-type table.

Using the example model in this article, consider how the following questions can be
answered:

How many colors were sold to Australian customers?


How many countries/regions purchased jeans?

Both questions can be answered without summarizing data in the bridging fact-type
table. They do, however, require that filters propagate from one dimension-type table to
the other. Once filters propagate via the fact-type table, summarization of dimension-
type table columns can be achieved using the DISTINCTCOUNT DAX function—and
possibly the MIN and MAX DAX functions.
As the fact-type table behaves like a bridging table, you can follow the many-to-many
relationship guidance to relate two dimension-type tables. It will require configuring at
least one relationship to filter in both directions. For more information, see Many-to-
many relationship guidance (Relate many-to-many dimensions).

However, as already described in this article, this design will likely result in a negative
impact on performance, and the user experience consequences related to slicer items
"with data". So, we recommend that you activate bi-directional filtering in a measure
definition by using the CROSSFILTER DAX function instead. The CROSSFILTER function
can be used to modify filter directions—or even disable the relationship—during the
evaluation of an expression.

Consider the following measure definition added to the Sales table. In this example, the
model relationship between the Customer and Sales tables has been configured to filter
in a single direction.

DAX

Different Countries Sold =
CALCULATE(
    DISTINCTCOUNT(Customer[Country-Region]),
    CROSSFILTER(
        Customer[CustomerCode],
        Sales[CustomerCode],
        BOTH
    )
)

During the evaluation of the Different Countries Sold measure expression, the
relationship between the Customer and Sales tables filters in both directions.

The following table visual present statistics for each product sold. The Quantity column
is simply the sum of quantity values. The Different Countries Sold column represents
the distinct count of country-region values of all customers who have purchased the
product.
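
A similar measure could answer the first question—how many colors were sold—by activating
bi-directional filtering on the relationship between the Product and Sales tables during
evaluation. This sketch assumes that relationship is configured to filter in a single
direction, as in the example model.

DAX

Different Colors Sold =
CALCULATE(
    DISTINCTCOUNT('Product'[Color]),
    -- Let filters on Sales (for example, from Customer) reach the Product table
    CROSSFILTER('Product'[SKU], Sales[SKU], BOTH)
)
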
Next steps
For more information related to this article, check out the following resources:

Model relationships in Power BI Desktop


Understand star schema and the importance for Power BI
One-to-one relationship guidance
Many-to-many relationship guidance
Relationship troubleshooting guidance
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Relationship troubleshooting guidance
Article • 02/27/2023

This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on how to troubleshoot specific issues you may encounter when
developing models and reports.

Note

An introduction to model relationships is not covered in this article. If you're not
completely familiar with relationships, their properties or how to configure them,
we recommend that you first read the Model relationships in Power BI Desktop
article.

It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.

Troubleshooting
When a report visual is configured to use fields from two (or more) tables, and it doesn't
present the correct result (or any result), it's possible that the issue is related to model
relationships.

In this case, here's a general troubleshooting checklist to follow. You can progressively
work through the checklist until you identify the issue(s).

1. Switch the visual to a table or matrix, or open the "See Data" pane—it's easier to
troubleshoot issues when you can see the query result
2. If there's an empty query result, switch to Data view—verify that tables have been
loaded with rows of data
3. Switch to Model view—it's easy to see the relationships and quickly determine
their properties
4. Verify that relationships exist between the tables
5. Verify that cardinality properties are correctly configured—they could be incorrect
if a "many"-side column presently contains unique values, and has been incorrectly
configured as a "one"-side
6. Verify that the relationships are active (solid line)
7. Verify that the filter directions support propagation (interpret arrow heads)
8. Verify that the correct columns are related—either select the relationship, or hover
the cursor over it, to reveal the related columns
9. Verify that the related column data types are the same, or at least compatible—it's
possible to relate a text column to a whole number column, but filters won't find
any matches to propagate
10. Switch to Data view, and verify that matching values can be found in related
columns

Troubleshooting guide
Here's a list of issues together with possible solutions.

Issue: The visual displays no result
Possible reasons:
- The model is yet to be loaded with data
- No data exists within the filter context
- Row-level security is enforced
- Relationships aren't propagating between tables—follow the checklist above
- Row-level security is enforced, but a bi-directional relationship isn't enabled to propagate—see Row-level security (RLS) with Power BI Desktop

Issue: The visual displays the same value for each grouping
Possible reasons:
- Relationships don't exist
- Relationships aren't propagating between tables—follow the checklist above

Issue: The visual displays results, but they aren't correct
Possible reasons:
- The visual is incorrectly configured
- The measure logic is incorrect
- Model data needs to be refreshed
- Source data is incorrect
- Relationship columns are incorrectly related (for example, the ProductID column maps to CustomerID)
- It's a relationship between two DirectQuery tables, and the "one"-side column of a relationship contains duplicate values

Issue: BLANK groupings or slicer/filter items appear, and the source columns don't contain BLANKs
Possible reasons:
- It's a regular relationship, and the "many"-side column contains values not stored in the "one"-side column—see Model relationships in Power BI Desktop (Regular relationships)
- It's a regular one-to-one relationship, and related columns contain BLANKs—see Model relationships in Power BI Desktop (Regular relationships)
- An inactive relationship "many"-side column stores BLANKs, or has values not stored on the "one"-side

Issue: The visual is missing data
Possible reasons:
- Incorrect or unexpected filters are applied
- Row-level security is enforced
- It's a limited relationship, and there are BLANKs in related columns, or data integrity issues—see Model relationships in Power BI Desktop (Limited relationships)
- It's a relationship between two DirectQuery tables, the relationship is configured to assume referential integrity, but there are data integrity issues (mismatched values in related columns)

Issue: Row-level security isn't correctly enforced
Possible reasons:
- Relationships aren't propagating between tables—follow the checklist above
- Row-level security is enforced, but a bi-directional relationship isn't enabled to propagate—see Row-level security (RLS) with Power BI Desktop

Next steps
For more information related to this article, check out the following resources:

Model relationships in Power BI Desktop


Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
DirectQuery model guidance in Power
BI Desktop
Article • 06/19/2023

This article targets data modelers developing Power BI DirectQuery models, developed
by using either Power BI Desktop or the Power BI service. It describes DirectQuery use
cases, limitations, and guidance. Specifically, the guidance is designed to help you
determine whether DirectQuery is the appropriate mode for your model, and to improve
the performance of your reports based on DirectQuery models. This article applies to
DirectQuery models hosted in the Power BI service or Power BI Report Server.

This article isn't intended to provide a complete discussion on DirectQuery model
design. For an introduction, refer to the DirectQuery models in Power BI Desktop article.
For a deeper discussion, refer directly to the DirectQuery in SQL Server 2016 Analysis
Services whitepaper. Bear in mind that the whitepaper describes using DirectQuery in
SQL Server Analysis Services. Much of the content, however, is still applicable to Power
BI DirectQuery models.

Note

For considerations when using DirectQuery storage mode for Dataverse, see Power
BI modeling guidance for Power Platform.

This article doesn't directly cover composite models. A Composite model consists of at
least one DirectQuery source, and possibly more. The guidance described in this article
is still relevant—at least in part—to Composite model design. However, the implications
of combining Import tables with DirectQuery tables aren't in scope for this article. For
more information, see Use composite models in Power BI Desktop.

It's important to understand that DirectQuery models impose a different workload on
the Power BI environment (Power BI service or Power BI Report Server) and also on the
underlying data sources. If you determine that DirectQuery is the appropriate design
approach, we recommend that you engage the right people on the project. We often
see that a successful DirectQuery model deployment is the result of a team of IT
professionals working closely together. The team usually consists of model developers
and the source database administrators. It can also involve data architects, and data
warehouse and ETL developers. Often, optimizations need to be applied directly to the
data source to achieve good performance results.
Optimize data source performance
The relational database source can be optimized in several ways, as described in the
following bulleted list.

Note

We understand that not all modelers have the permissions or skills to optimize a
relational database. While it is the preferred layer to prepare the data for a
DirectQuery model, some optimizations can also be achieved in the model design,
without modifying the source database. However, best optimization results are
often achieved by applying optimizations to the source database.

Ensure data integrity is complete: It's especially important that dimension-type
tables contain a column of unique values (dimension key) that maps to the fact-
type table(s). It's also important that fact-type dimension columns contain valid
dimension key values. They'll allow configuring more efficient model relationships
that expect matched values on both sides of relationships. When the source data
lacks integrity, it's recommended that an "unknown" dimension record is added to
effectively repair the data. For example, you can add a row to the Product table to
represent an unknown product, and then assign it an out-of-range key, like -1. If
rows in the Sales table contain a missing product key value, substitute them with
-1. It ensures every Sales product key value has a corresponding row in the
Product table.

Add indexes: Define appropriate indexes—on tables or views—to support the
efficient retrieval of data for the expected report visual filtering and grouping. For
SQL Server, Azure SQL Database or Azure Synapse Analytics (formerly SQL Data
Warehouse) sources, see SQL Server Index Architecture and Design Guide for
helpful information on index design guidance. For SQL Server or Azure SQL
Database volatile sources, see Get started with Columnstore for real-time
operational analytics.

Design distributed tables: For Azure Synapse Analytics (formerly SQL Data
Warehouse) sources, which use Massively Parallel Processing (MPP) architecture,
consider configuring large fact-type tables as hash distributed, and dimension-
type tables to replicate across all the compute nodes. For more information, see
Guidance for designing distributed tables in Azure Synapse Analytics (formerly SQL
Data Warehouse).
Ensure required data transformations are materialized: For SQL Server relational
database sources (and other relational database sources), computed columns can
be added to tables. These columns are based on an expression, like Quantity
multiplied by UnitPrice. Computed columns can be persisted (materialized) and,
like regular columns, sometimes they can be indexed. For more information, see
Indexes on Computed Columns.

Consider also indexed views that can pre-aggregate fact table data at a higher
grain. For example, if the Sales table stores data at order line level, you could
create a view to summarize this data. The view could be based on a SELECT
statement that groups the Sales table data by date (at month level), customer,
product, and summarizes measure values like sales, quantity, etc. The view can
then be indexed. For SQL Server or Azure SQL Database sources, see Create
Indexed Views.

Materialize a date table: A common modeling requirement involves adding a date
table to support time-based filtering. To support the known time-based filters in
your organization, create a table in the source database, and ensure it's loaded
with a range of dates encompassing the fact table dates. Also ensure that it
includes columns for useful time periods, like year, quarter, month, week, etc.

Optimize model design


A DirectQuery model can be optimized in many ways, as described in the following
bulleted list.

Avoid complex Power Query queries: An efficient model design can be achieved
by removing the need for the Power Query queries to apply any transformations. It
means that each query maps to a single relational database source table or view.
You can preview a representation of the actual SQL query statement for a Power
Query applied step, by selecting the View Native Query option.
Examine the use of calculated columns and data type changes: DirectQuery
models support adding calculations and Power Query steps to convert data types.
However, better performance is often achieved by materializing transformation
results in the relational database source, when possible.
Do not use Power Query relative date filtering: It's possible to define relative date
filtering in a Power Query query. For example, to retrieve the sales orders that
were created in the last year (relative to today's date). This type of filter translates
to an inefficient native query, as follows:

SQL


from [dbo].[Sales] as [_]
where [_].[OrderDate] >= convert(datetime2, '2018-01-01 00:00:00') and
[_].[OrderDate] < convert(datetime2, '2019-01-01 00:00:00'))

A better design approach is to include relative time columns in the date table.
These columns store offset values relative to the current date. For example, in a
RelativeYear column, the value zero represents current year, -1 represents previous
year, etc. Preferably, the RelativeYear column is materialized in the date table.
While less efficient, it could also be added as a model calculated column, based on
the expression using the TODAY and DATE DAX functions.
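
For example, a RelativeYear calculated column could be defined as shown in the following
sketch. It's one possible formulation (it uses the YEAR and TODAY DAX functions) and it
assumes the date table is named Date and includes a numeric Year column. Materializing the
column in the source date table remains the preferred approach.

DAX

// Returns 0 for the current year, -1 for the previous year, and so on.
RelativeYear = 'Date'[Year] - YEAR ( TODAY () )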

Keep measures simple: At least initially, it's recommended to limit measures to
simple aggregates. The aggregate functions include SUM, COUNT, MIN, MAX, and
AVERAGE. Then, if the measures are sufficiently responsive, you can experiment
with more complex measures, but paying attention to the performance for each.
While the CALCULATE DAX function can be used to produce sophisticated measure
expressions that manipulate filter context, they can generate expensive native
queries that don't perform well.
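
For example, a simple aggregate measure might look like the following sketch, which assumes
the Sales table has a SalesAmount column (a hypothetical column name used for illustration).

DAX

Total Sales = SUM ( Sales[SalesAmount] )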

Avoid relationships on calculated columns: Model relationships can only relate a
single column in one table to a single column in a different table. Sometimes,
however, it's necessary to relate tables by using multiple columns. For example, the
Sales and Geography tables are related by two columns: CountryRegion and City.
To create a relationship between the tables, a single column is required, and in the
Geography table, the column must contain unique values. Concatenating the
country/region and city with a hyphen separator could achieve this result.

The combined column can be created with either a Power Query custom column,
or in the model as a calculated column. However, it should be avoided as the
calculation expression will be embedded into the source queries. Not only is it
inefficient, it commonly prevents the use of indexes. Instead, add materialized
columns in the relational database source, and consider indexing them. You can
also consider adding surrogate key columns to dimension-type tables, which is a
common practice in relational data warehouse designs.
There's one exception to this guidance, and it concerns the use of the
COMBINEVALUES DAX function. The purpose of this function is to support multi-
column model relationships. Rather than generate an expression that the
relationship uses, it generates a multi-column SQL join predicate.
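
For example, for the Sales and Geography tables described above, a matching column could be
added to both tables with an expression like the following sketch (the GeographyKey column
name is a hypothetical choice used for illustration).

DAX

// Add an equivalent column to the Geography table, then relate the two columns.
GeographyKey = COMBINEVALUES ( "|", Sales[CountryRegion], Sales[City] )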

Avoid relationships on "Unique Identifier" columns: Power BI doesn't natively
support the unique identifier (GUID) data type. When defining a relationship
between columns of this type, Power BI generates a source query with a join
involving a cast. This query-time data conversion commonly results in poor
performance. Until this case is optimized, the only workaround is to materialize
columns of an alternative data type in the underlying database.

Hide the one-side column of relationships: The one-side column of a relationship
should be hidden. (It's usually the primary key column of dimension-type tables.)
When hidden, it isn't available in the Fields pane and so can't be used to configure
a visual. The many-side column can remain visible if it is useful to group or filter
reports by the column values. For example, consider a model where a relationship
exists between Sales and Product tables. The relationship columns contain product
SKU (Stock-Keeping Unit) values. If product SKU must be added to visuals, it
should be visible only in the Sales table. When this column is used to filter or
group in a visual, Power BI generates a query that doesn't need to join the Sales
and Product tables.

Set relationships to enforce integrity: The Assume Referential Integrity property
of DirectQuery relationships determines whether Power BI generates source
queries using an inner join rather than an outer join. It generally improves query
performance, though it does depend on the specifics of the relational database
source. For more information, see Assume referential integrity settings in Power BI
Desktop.

Avoid use of bi-directional relationship filtering: Use of bi-directional relationship
filtering can lead to query statements that don't perform well. Only use this
relationship feature when necessary, and it's usually the case when implementing a
many-to-many relationship across a bridging table. For more information, see
Relationships with a many-many cardinality in Power BI Desktop.

Limit parallel queries: You can set the maximum number of connections
DirectQuery opens for each underlying data source. It controls the number of
queries concurrently sent to the data source.
The setting is only enabled when there's at least one DirectQuery source in the
model. The value applies to all DirectQuery sources, and to any new DirectQuery
sources added to the model.

Increasing the Maximum Connections per Data Source value ensures more
queries (up to the maximum number specified) can be sent to the underlying data
source, which is useful when numerous visuals are on a single page, or many users
access a report at the same time. Once the maximum number of connections is
reached, further queries are queued until a connection becomes available.
Increasing this limit does result in more load on the underlying data source, so the
setting isn't guaranteed to improve overall performance.

When the model is published to Power BI, the maximum number of concurrent
queries sent to the underlying data source also depends on the environment.
Different environments (such as Power BI, Power BI Premium, or Power BI Report
Server) each can impose different throughput constraints. For more information
about Power BI Premium capacity resource limitations, see Deploying and
Managing Power BI Premium Capacities.
Optimize report designs
Reports based on a DirectQuery dataset can be optimized in many ways, as described in
the following bulleted list.

Enable query reduction techniques: Power BI Desktop Options and Settings
includes a Query Reduction page. This page has three helpful options. It's possible
to disable cross-highlighting and cross-filtering by default, though it can be
overridden by editing interactions. It's also possible to show an Apply button on
slicers and filters. The slicer or filter options won't be applied until the report user
clicks the button. If you enable these options, we recommend that you do so when
first creating the report.

Apply filters first: When first designing reports, we recommend that you apply any
applicable filters—at report, page, or visual level—before mapping fields to the
visual fields. For example, rather than dragging in the CountryRegion and Sales
measures, and then filtering by a particular year, apply the filter on the Year field
first. It's because each step of building a visual will send a query, and while it's
possible to then make another change before the first query has completed, it still
places unnecessary load on the underlying data source. By applying filters early, it
generally makes those intermediate queries less costly and faster. Also, failing to
apply filters early can result in exceeding the 1 million-row limit, as described in
about DirectQuery.

Limit the number of visuals on a page: When a report page is opened (and when
page filters are applied) all of the visuals on a page are refreshed. However, there's
a limit on the number of queries that can be sent in parallel, imposed by the Power
BI environment and the Maximum Connections per Data Source model setting, as
described above. So, as the number of page visuals increases, there's higher
chance that they'll be refreshed in a serial manner. It increases the time taken to
refresh the entire page, and it also increases the chance that visuals may display
inconsistent results (for volatile data sources). For these reasons, it's recommended
to limit the number of visuals on any page, and instead have more simpler pages.
Replacing multiple card visuals with a single multi-row card visual can achieve a
similar page layout.

Switch off interaction between visuals: Cross-highlighting and cross-filtering
interactions require queries be submitted to the underlying source. Unless these
interactions are necessary, it's recommended they be switched off if the time taken
to respond to users' selections would be unreasonably long. These interactions can
be switched off, either for the entire report (as described above for Query
Reduction options), or on a case-by-case basis. For more information, see How
visuals cross-filter each other in a Power BI report.

In addition to the above list of optimization techniques, each of the following reporting
capabilities can contribute to performance issues:

Measure filters: Visuals containing measures (or aggregates of columns) can have
filters applied to those measures. For example, the visual below shows Sales by
Category, but only for categories with more than $15 million of sales.
It may result in two queries being sent to the underlying source:
The first query will retrieve the categories meeting the condition (Sales > $15
million)
The second query will then retrieve the necessary data for the visual, adding the
categories that met the condition to the WHERE clause

It generally performs fine if there are hundreds or thousands of categories, as in
this example. Performance can degrade, however, if the number of categories is
much larger (and indeed, the query will fail if there are more than 1 million
categories meeting the condition, due to the 1 million-row limit discussed above).

TopN filters: Advanced filters can be defined to filter on only the top (or bottom) N
values ranked by a measure. For example, to display only the top five categories in
the above visual. Like the measure filters, it will also result in two queries being
sent to the underlying data source. However, the first query will return all
categories from the underlying source, and then the top N are determined based
on the returned results. Depending on the cardinality of the column involved, it
can lead to performance issues (or query failures due to the 1 million-row limit).

Median: Generally, any aggregation (Sum, Count Distinct, etc.) is pushed to the
underlying source. However, it's not true for Median, as this aggregate isn't
supported by the underlying source. In such cases, detail data is retrieved from the
underlying source, and Power BI evaluates the median from the returned results.
It's fine when the median is to be calculated over a relatively small number of
results, but performance issues (or query failures due to the 1 million-row limit) will
occur if the cardinality is large. For example, median country/region population
might be reasonable, but median sales price might not be.

Multi-select slicers: Allowing multi-selection in slicers and filters can cause
performance issues. It's because as the user selects additional slicer items (for
example, building up to the 10 products they're interested in), each new selection
results in a new query being sent to the underlying source. While the user can
select the next item prior to the query completing, it results in extra load on the
underlying source. This situation can be avoided by showing the Apply button, as
described above in the query reduction techniques.

Visual totals: By default, tables and matrices display totals and subtotals. In many
cases, additional queries must be sent to the underlying source to obtain the
values for the totals. It applies whenever using Count Distinct or Median
aggregates, and in all cases when using DirectQuery over SAP HANA or SAP
Business Warehouse. Such totals should be switched off (by using the Format
pane) if not necessary.

Convert to a Composite Model


The benefits of Import and DirectQuery models can be combined into a single model by
configuring the storage mode of the model tables. The table storage mode can be
Import or DirectQuery, or both, known as Dual. When a model contains tables with
different storage modes, it's known as a Composite model. For more information, see
Use composite models in Power BI Desktop.

There are many functional and performance enhancements that can be achieved by
converting a DirectQuery model to a Composite model. A Composite model can
integrate more than one DirectQuery source, and it can also include aggregations.
Aggregation tables can be added to DirectQuery tables to import a summarized
representation of the table. They can achieve dramatic performance enhancements
when visuals query higher-level aggregates. For more information, see Aggregations in
Power BI Desktop.

Educate users
It's important to educate your users on how to efficiently work with reports based on
DirectQuery datasets. Your report authors should be educated on the content described
in the Optimize report designs section.
We recommend that you educate your report consumers about your reports that are
based on DirectQuery datasets. It can be helpful for them to understand the general
data architecture, including any relevant limitations described in this article. Let them
know to expect that refresh responses and interactive filtering may at times be slow.
When report users understand why performance degradation happens, they're less likely
to lose trust in the reports and data.

When delivering reports on volatile data sources, be sure to educate report users on the
use of the Refresh button. Let them know also that it may be possible to see
inconsistent results, and that a refresh of the report can resolve any inconsistencies on
the report page.

Next steps
For more information about DirectQuery, check out the following resources:

DirectQuery models in Power BI Desktop


Use DirectQuery in Power BI Desktop
DirectQuery model troubleshooting in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Composite model guidance in Power BI
Desktop
Article • 03/24/2023

This article targets data modelers developing Power BI composite models. It describes
composite model use cases, and provides you with design guidance. Specifically, the
guidance can help you determine whether a composite model is appropriate for your
solution. If it is, then this article will also help you design optimal composite models and
reports.

Note

An introduction to composite models isn't covered in this article. If you're not
completely familiar with composite models, we recommend that you first read the
Use composite models in Power BI Desktop article.

Because composite models consist of at least one DirectQuery source, it's also
important that you have a thorough understanding of model relationships,
DirectQuery models, and DirectQuery model design guidance.

Composite model use cases


By definition, a composite model combines multiple source groups. A source group can
represent imported data or a connection to a DirectQuery source. A DirectQuery source
can be either a relational database or another tabular model, which can be a Power BI
dataset or an Analysis Services tabular model. When a tabular model connects to
another tabular model, it's known as chaining. For more information, see Using
DirectQuery for Power BI datasets and Analysis Services.

Note

When a model connects to a tabular model but doesn't extend it with additional
data, it's not a composite model. In this case, it's a DirectQuery model that
connects to a remote model—so it comprises just the one source group. You might
create this type of model to modify source model object properties, like a table
name, column sort order, or format string.
Connecting to tabular models is especially relevant when extending an enterprise
semantic model (when it's a Power BI dataset or Analysis Services model). An enterprise
semantic model is fundamental to the development and operation of a data warehouse.
It provides an abstraction layer over the data in the data warehouse to present business
definitions and terminology. It's commonly used as a link between physical data models
and reporting tools, like Power BI. In most organizations, it's managed by a central team,
and that's why it's described as enterprise. For more information, see the enterprise BI
usage scenario.

You can consider developing a composite model in the following situations.

Your model could be a DirectQuery model, and you want to boost performance. In
a composite model, you can improve performance by setting up appropriate
storage for each table. You can also add user-defined aggregations. Both of these
optimizations are described later in this article.
You want to combine a DirectQuery model with more data, which must be
imported into the model. You can load imported data from a different data source,
or from calculated tables.
You want to combine two or more DirectQuery data sources into a single model.
These sources could be relational databases or other tabular models.

Note

Composite models can't include connections to certain external analytic databases.
These databases include SAP Business Warehouse, and SAP HANA when treating
SAP HANA as a multidimensional source.

Evaluate other model design options


While Power BI composite models can solve particular design challenges, they can
contribute to slow performance. Also, in some situations, unexpected calculation results
can occur (described later in this article). For these reasons, evaluate other model design
options when they exist.

Whenever possible, it's best to develop a model in import mode. This mode provides
the greatest design flexibility, and best performance.

However, challenges related to large data volumes, or reporting on near real-time data,
can't always be solved by import models. In either of these cases, you can consider a
DirectQuery model, providing your data is stored in a single data source that's
supported by DirectQuery mode. For more information, see DirectQuery models in
Power BI Desktop.

Tip

If your objective is only to extend an existing tabular model with more data,
whenever possible, add that data to the existing data source.

Table storage mode


In a composite model, you can set the storage mode for each table (except calculated
tables).

DirectQuery: We recommend that you set this mode for tables that represent large
data volumes, or which need to deliver near real-time results. Data will never be
imported into these tables. Usually, these tables will be fact-type tables, which are
tables that are summarized.
Import: We recommend that you set this mode for tables that aren't used for
filtering and grouping of fact tables in DirectQuery or Hybrid mode. It's also the
only option for tables based on sources not supported by DirectQuery mode.
Calculated tables are always import tables.
Dual: We recommend that you set this mode for dimension-type tables, when
there's a possibility they'll be queried together with DirectQuery fact-type tables
from the same source.
Hybrid: We recommend that you set this mode by adding import partitions, and
one DirectQuery partition to a fact table when you want to include the latest data
changes in real time, or when you want to provide fast access to the most
frequently used data through import partitions while leaving the bulk of more
infrequently used data in the data warehouse.

There are several possible scenarios when Power BI queries a composite model.

Queries only import or dual table(s): Power BI retrieves all data from the model
cache. It will deliver the fastest possible performance. This scenario is common for
dimension-type tables queried by filters or slicer visuals.
Queries dual table(s) or DirectQuery table(s) from the same source: Power BI
retrieves all data by sending one or more native queries to the DirectQuery source.
It will deliver good performance, especially when appropriate indexes exist on the
source tables. This scenario is common for queries that relate dual dimension-type
tables and DirectQuery fact-type tables. These queries are intra source group, and
so all one-to-one or one-to-many relationships are evaluated as regular
relationships.
Queries dual table(s) or hybrid table(s) from the same source: This scenario is a
combination of the previous two scenarios. Power BI retrieves data from the model
cache when it's available in import partitions, otherwise it sends one or more
native queries to the DirectQuery source. It will deliver the fastest possible
performance because only a slice of the data is queried in the data warehouse,
especially when appropriate indexes exist on the source tables. As for the dual
dimension-type tables and DirectQuery fact-type tables, these queries are intra
source group, and so all one-to-one or one-to-many relationships are evaluated as
regular relationships.
All other queries: These queries involve cross source group relationships. It's either
because an import table relates to a DirectQuery table, or a dual table relates to a
DirectQuery table from a different source—in which case it behaves as an import
table. All relationships are evaluated as limited relationships. It also means that
groupings applied to non-DirectQuery tables must be sent to the DirectQuery
source as materialized subqueries (virtual tables). In this case, the native query can
be inefficient, especially for large grouping sets.

In summary, we recommend that you:

Consider carefully whether a composite model is the right solution—while it allows
model-level integration of different data sources, it also introduces design
complexities with possible consequences (described later in this article).
Set the storage mode to DirectQuery when a table is a fact-type table storing
large data volumes, or when it needs to deliver near real-time results.
Consider using hybrid mode by defining an incremental refresh policy and real-
time data, or by partitioning the fact table by using TOM, TMSL, or a third-party
tool. For more information, see Incremental refresh and real-time data for datasets
and the Advanced data model management usage scenario.
Set the storage mode to Dual when a table is a dimension-type table, and it will be
queried together with DirectQuery or hybrid fact-type tables that are in the same
source group.
Set appropriate refresh frequencies to keep the model cache for dual and hybrid
tables (and any dependent calculated tables) in sync with the source database(s).
Strive to ensure data integrity across source groups (including the model cache)
because limited relationships will eliminate rows in query results when related
column values don't match.
Whenever possible, optimize DirectQuery data sources with appropriate indexes
for efficient joins, filtering, and grouping.
User-defined aggregations
You can add user-defined aggregations to DirectQuery tables. Their purpose is to
improve performance for higher grain queries.

When aggregations are cached in the model, they behave as import tables (although
they can't be used like a model table). Adding import aggregations to a DirectQuery
model will result in a composite model.

Note

Hybrid tables don't support aggregations because some of the partitions operate
in import mode. It's not possible to add aggregations at the level of an individual
DirectQuery partition.

We recommend that an aggregation follows a basic rule: Its row count should be at least
a factor of 10 smaller than the underlying table. For example, if the underlying table
stores 1 billion rows, then the aggregation table shouldn't exceed 100 million rows. This
rule ensures that there's an adequate performance gain relative to the cost of creating
and maintaining the aggregation.

Cross source group relationships


When a model relationship spans source groups, it's known as a cross source group
relationship. Cross source group relationships are also limited relationships because
there's no guaranteed "one" side. For more information, see Relationship evaluation.

Note

In some situations, you can avoid creating a cross source group relationship. See
the Use Sync slicers topic later in this article.

When defining cross source group relationships, consider the following
recommendations.

Use low-cardinality relationship columns: For best performance, we recommend
that the relationship columns be low cardinality, meaning they should store less
than 50,000 unique values. This recommendation is especially true when
combining tabular models, and for non-text columns.
Avoid using large text relationship columns: If you must use text columns in a
relationship, calculate the expected text length for the filter by multiplying the
cardinality by the average length of the text column. The possible text length
shouldn't exceed 1,000,000 characters.
Raise the relationship granularity: If possible, create relationships at a higher level
of granularity. For example, instead of relating a date table on its date key, use its
month key instead. This design approach requires that the related table includes a
month key column, and reports won't be able to show daily facts.
Strive to achieve a simple relationship design: Only create a cross source group
relationship when it's needed, and try to limit the number of tables in the
relationship path. This design approach will help to improve performance and
avoid ambiguous relationship paths.

Warning

Because Power BI Desktop doesn't thoroughly validate cross source group
relationships, it's possible to create ambiguous relationships.

Cross source group relationship scenario 1


Consider a scenario of a complex relationship design and how it could produce different
—yet valid—results.

In this scenario, the Region table in source group A has a relationship to the Date table
and Sales table in source group B. The relationship between the Region table and the
Date table is active, while the relationship between the Region table and the Sales table
is inactive. Also, there's an active relationship between the Region table and the Sales
table, both of which are in source group B. The Sales table includes a measure named
TotalSales, and the Region table includes two measures named RegionalSales and
RegionalSalesDirect.
Here are the measure definitions.

DAX

TotalSales = SUM(Sales[Sales])

RegionalSales = CALCULATE([TotalSales], USERELATIONSHIP(Region[RegionID], Sales[RegionID]))

RegionalSalesDirect = CALCULATE(SUM(Sales[Sales]), USERELATIONSHIP(Region[RegionID], Sales[RegionID]))

Notice how the RegionalSales measure refers to the TotalSales measure, while the
RegionalSalesDirect measure doesn't. Instead, the RegionalSalesDirect measure uses
the expression SUM(Sales[Sales]) , which is the expression of the TotalSales measure.

The difference in the result is subtle. When Power BI evaluates the RegionalSales
measure, it applies the filter from the Region table to both the Sales table and the Date
table. Therefore, the filter also propagates from the Date table to the Sales table. In
contrast, when Power BI evaluates the RegionalSalesDirect measure, it only propagates
the filter from the Region table to the Sales table. The results returned by RegionalSales
measure and the RegionalSalesDirect measure could differ, even though the
expressions are semantically equivalent.

Important

Whenever you use the CALCULATE function with an expression that's a measure in a
remote source group, test the calculation results thoroughly.

Cross source group relationship scenario 2


Consider a scenario when a cross source group relationship has high-cardinality
relationship columns.

In this scenario, the Date table is related to the Sales table on the DateKey columns. The
data type of the DateKey columns is integer, storing whole numbers that use the
yyyymmdd format. The tables belong to different source groups. Further, it's a high-
cardinality relationship because the earliest date in the Date table is January 1, 1900 and
the latest date is December 31, 2100—so there's a total of 73,414 rows in the table (one
row for each date in the 1900-2100 time span).
There are two cases for concern.

First, when you use the Date table columns as filters, filter propagation will filter the
DateKey column of the Sales table to evaluate measures. When filtering by a single year,
like 2022, the DAX query will include a filter expression like Sales[DateKey] IN {
20220101, 20220102, …20221231 }. The text size of the query can grow to become
extremely large when the number of values in the filter expression is large, or when the
filter values are long strings. It's expensive for Power BI to generate the long query and
for the data source to run the query.

Second, when you use Date table columns—like Year, Quarter, or Month—as grouping
columns, it results in filters that include all unique combinations of year, quarter, or
month, and the DateKey column values. The string size of the query, which contains
filters on the grouping columns and the relationship column, can become extremely
large. That's especially true when the number of grouping columns and/or the
cardinality of the join column (the DateKey column) is large.

To address any performance issues, you can:

Add the Date table to the data source, resulting in a single source group model
(meaning, it's no longer a composite model).
Raise the granularity of the relationship. For instance, you could add a MonthKey
column to both tables and create the relationship on those columns, as shown in the
sketch after this list. However, by raising the granularity of the relationship, you lose
the ability to report on daily sales activity (unless you use the DateKey column from
the Sales table).
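
Because the DateKey columns use the yyyymmdd format, a MonthKey column could be derived as
shown in the following sketch. The column name and expression are illustrative assumptions,
and materializing the column in the data source is preferable when possible.

DAX

// Derives a yyyymm month key from a yyyymmdd integer date key.
// Add an equivalent column to both the Date and Sales tables, then relate those columns.
MonthKey = INT ( 'Date'[DateKey] / 100 )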

Cross source group relationship scenario 3


Consider a scenario when there aren't matching values between tables in a cross source
group relationship.

In this scenario, the Date table in source group B has a relationship to the Sales table in
that source group, and also to the Target table in source group A. All relationships are
one-to-many from the Date table relating the Year columns. The Sales table includes a
SalesAmount column that stores sales amounts, while the Target table includes a
TargetAmount column that stores target amounts.

The Date table stores the years 2021 and 2022. The Sales table stores sales amounts for
years 2021 (100) and 2022 (200), while the Target table stores target amounts for 2021
(100), 2022 (200), and 2023 (300)—a future year.

When a Power BI table visual queries the composite model by grouping on the Year
column from the Date table and summing the SalesAmount and TargetAmount
columns, it won't show a target amount for 2023. That's because the cross source group
relationship is a limited relationship, and so it uses INNER JOIN semantics, which
eliminate rows where there's no matching value on both sides. It will, however, produce
a correct target amount total (600), because a Date table filter doesn't apply to its
evaluation.
If the relationship between the Date table and the Target table is an intra source group
relationship (assuming the Target table belonged to source group B), the visual will
include a (Blank) year to show the 2023 (and any other unmatched years) target amount.

Important

To avoid misreporting, ensure that there are matching values in the relationship
columns when dimension and fact tables reside in different source groups.

For more information about limited relationships, see Relationship evaluation.

Calculations
You should consider specific limitations when adding calculated columns and calculation
groups to a composite model.

Calculated columns
Calculated columns added to a DirectQuery table that source their data from a relational
database, like Microsoft SQL Server, are limited to expressions that operate on a single
row at a time. These expressions can't use DAX iterator functions, like SUMX , or filter
context modification functions, like CALCULATE .

Note

It's not possible to add calculated columns or calculated tables that depend on
chained tabular models.

A calculated column expression on a remote DirectQuery table is limited to intra-row
evaluation only. However, you can author such an expression, but it will result in an error
when it's used in a visual. For example, if you add a calculated column to a remote
DirectQuery table named DimProduct by using the expression [Product Sales] / SUM
(DimProduct[ProductSales]) , you'll be able to successfully save the expression in the
model. However, it will result in an error when it's used in a visual because it violates the
intra-row evaluation restriction.
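
By comparison, an expression that operates only on values from the current row is allowed.
The following sketch assumes the Sales table has Quantity and UnitPrice columns (hypothetical
names used for illustration only).

DAX

// Intra-row calculation: it references only values from the same row.
Line Amount = Sales[Quantity] * Sales[UnitPrice]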

In contrast, calculated columns added to a remote DirectQuery table that's a tabular
model, which is either a Power BI dataset or Analysis Services model, are more flexible.
In this case, all DAX functions are allowed because the expression will be evaluated
within the source tabular model.
Many expressions require Power BI to materialize the calculated column before using it
as a group or filter, or aggregating it. When a calculated column is materialized over a
large table, it can be costly in terms of CPU and memory, depending on the cardinality
of the columns that the calculated column depends on. In this case, we recommend that
you add those calculated columns to the source model.

Note

When you add calculated columns to a composite model, be sure to test all model
calculations. Upstream calculations may not work correctly because they didn't
consider their influence on the filter context.

Calculation groups
If calculation groups exist in a source group that connects to a Power BI dataset or an
Analysis Services model, Power BI could return unexpected results. For more
information, see Calculation groups, query and measure evaluation.

Model design
You should always optimize a Power BI model by adopting a star schema design.

Tip

For more information, see Understand star schema and the importance for Power
BI.

Be sure to create dimension tables that are separate from fact tables so that Power BI
can interpret joins correctly and produce efficient query plans. While this guidance is
true for any Power BI model, it's especially true for models that you recognize will
become a source group of a composite model. It will allow for simpler and more
efficient integration of other tables in downstream models.

Whenever possible, avoid having dimension tables in one source group that relate to a
fact table in a different source group. That's because it's better to have intra source
group relationships than cross source group relationships, especially for high-cardinality
relationship columns. As described earlier, cross source group relationships rely on
having matching values in the relationship columns, otherwise unexpected results may
be shown in report visuals.
Row-level security
If your model includes user-defined aggregations, calculated columns on import tables,
or calculated tables, ensure that any row-level security (RLS) is set up correctly and
tested.

If the composite model connects to other tabular models, RLS rules are only applied on
the source group (local model) where they're defined. They won't be applied to other
source groups (remote models). Also, you can't define RLS rules on a table from another
source group nor can you define RLS rules on a local table that has a relationship to
another source group.

Report design
In some situations, you can improve the performance of a composite model by
designing an optimized report layout.

Single source group visuals


Whenever possible, create visuals that use fields from a single source group. That's
because queries generated by visuals will perform better when the result is retrieved
from a single source group. Consider creating two visuals positioned side by side that
retrieve data from two different source groups.

Use sync slicers


In some situations, you can set up sync slicers to avoid creating a cross source group
relationship in your model. It can allow you to combine source groups visually that can
perform better.

Consider a scenario when your model has two source groups. Each source group has a
product dimension table used to filter reseller and internet sales.

In this scenario, source group A contains the Product table that's related to the
ResellerSales table. Source group B contains the Product2 table that's related to the
InternetSales table. There aren't any cross source group relationships.
In the report, you add a slicer that filters the page by using the Color column of the
Product table. By default, the slicer filters the ResellerSales table, but not the
InternetSales table. You then add a hidden slicer by using the Color column of the
Product2 table. By setting an identical group name (found in the sync slicers Advanced
options), filters applied to the visible slicer automatically propagate to the hidden slicer.

Note

While using sync slicers can avoid the need to create a cross source group
relationship, it increases the complexity of the model design. Be sure to educate
other users on why you designed the model with duplicate dimension tables. Avoid
confusion by hiding dimension tables that you don't want other users to use. You
can also add description text to the hidden tables to document their purpose.

For more information, see Sync separate slicers.

Other guidance
Here's some other guidance to help you design and maintain composite models.

Performance and scale: If your reports were previously live connected to a Power
BI dataset or Analysis Services model, the Power BI service could reuse visual
caches across reports. After you convert the live connection to create a local
DirectQuery model, reports will no longer benefit from those caches. As a result,
you might experience slower performance or even refresh failures. Also, the
workload for the Power BI service will increase, which might require you to scale up
your capacity or distribute the workload across other capacities. For more
information about data refresh and caching, see Data refresh in Power BI.
Renaming: We don't recommend that you rename datasets used by composite
models, or rename their workspaces. That's because composite models connect to
Power BI datasets by using the workspace and dataset names (and not their
internal unique identifiers). Renaming a dataset or workspace could break the
connections used by your composite model.
Governance: We don't recommend that your single version of the truth model is a
composite model. That's because it would be dependent on other data sources or
models, which if updated, could result in breaking the composite model. Instead,
we recommended that you publish an enterprise semantic model as the single
version of truth. Consider this model to be a reliable foundation. Other data
modelers can then create composite models that extend the foundation model to
create specialized models.
Data lineage: Use the data lineage and dataset impact analysis features before
publishing composite model changes. These features are available in the Power BI
service, and they can help you to understand how datasets are related and used.
It's important to understand that you can't perform impact analysis on external
datasets that are displayed in lineage view but are in fact located in another
workspace. To perform impact analysis on an external dataset, you need to
navigate to the source workspace.
Schema updates: You should refresh your composite model in Power BI Desktop
when schema changes are made to upstream data sources. You'll then need to
republish the model to the Power BI service. Be sure to thoroughly test calculations
and dependent reports.

Next steps
For more information related to this article, check out the following resources.

Use composite models in Power BI Desktop


Model relationships in Power BI Desktop
DirectQuery models in Power BI Desktop
Use DirectQuery in Power BI Desktop
Using DirectQuery for Power BI datasets and Analysis Services
Storage mode in Power BI Desktop
User-defined aggregations
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Row-level security (RLS) guidance in
Power BI Desktop
Article • 05/23/2023

This article targets you as a data modeler working with Power BI Desktop. It describes
good design practices for enforcing row-level security (RLS) in your data models.

It's important to understand that RLS filters table rows. RLS filters can't be configured
to restrict access to model objects, including tables, columns, or measures.

Note

This article doesn't describe RLS or how to set it up. For more information, see
Restrict data access with row-level security (RLS) for Power BI Desktop.

Also, it doesn't cover enforcing RLS in live connections to external-hosted models
with Azure Analysis Services or SQL Server Analysis Services. In these cases, RLS is
enforced by Analysis Services. When Power BI connects using single-sign on (SSO),
Analysis Services will enforce RLS (unless the account has admin privileges).

Create roles
It's possible to create multiple roles. When you're considering the permission needs for
a single report user, strive to create a single role that grants all those permissions,
instead of a design where a report user will be a member of multiple roles. It's because a
report user could map to multiple roles, either directly by using their user account or
indirectly by security group membership. Multiple role mappings can result in
unexpected outcomes.

When a report user is assigned to multiple roles, RLS filters become additive. It means
report users can see table rows that represent the union of those filters. What's more, in
some scenarios it's not possible to guarantee that a report user doesn't see rows in a
table. So, unlike permissions applied to SQL Server database objects (and other
permission models), the "once denied always denied" principle doesn't apply.

Consider a model with two roles: The first role, named Workers, restricts access to all
Payroll table rows by using the following rule expression:

DAX
FALSE()

Note

A rule will return no table rows when its expression evaluates to FALSE .

Yet, a second role, named Managers, allows access to all Payroll table rows by using the
following rule expression:

DAX

TRUE()

Take care: Should a report user map to both roles, they'll see all Payroll table rows.

Optimize RLS
RLS works by automatically applying filters to every DAX query, and these filters may
have a negative impact on query performance. So, efficient RLS comes down to good
model design. It's important to follow model design guidance, as discussed in the
following articles:

Understand star schema and the importance for Power BI


All relationship guidance articles found in the Power BI guidance documentation

In general, it's often more efficient to enforce RLS filters on dimension-type tables, and
not fact-type tables. And, rely on well-designed relationships to ensure RLS filters
propagate to other model tables. RLS filters only propagate through active relationships.
So, avoid using the LOOKUPVALUE DAX function when model relationships could
achieve the same result.
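
For example, a role that filters a dimension-type table might use a rule like the following
sketch. It borrows the Salesperson table and EmailAddress column from the example later in
this article; the filter then propagates to related fact-type tables through active
relationships.

DAX

[EmailAddress] = USERNAME()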

Whenever RLS filters are enforced on DirectQuery tables and there are relationships to
other DirectQuery tables, be sure to optimize the source database. It can involve
designing appropriate indexes or using persisted computed columns. For more
information, see DirectQuery model guidance in Power BI Desktop.

Measure RLS impact


It's possible to measure the performance impact of RLS filters in Power BI Desktop by
using Performance Analyzer. First, determine report visual query durations when RLS
isn't enforced. Then, use the View As command on the Modeling ribbon tab to enforce
RLS and determine and compare query durations.

Configure role mappings


Once published to Power BI, you must map members to dataset roles. Only dataset
owners or workspace admins can add members to roles. For more information, see Row-
level security (RLS) with Power BI (Manage security on your model).

Members can be user accounts, security groups, distribution groups or mail enabled
groups. Whenever possible, we recommend you map security groups to dataset roles. It
involves managing security group memberships in Azure Active Directory. Possibly, it
delegates the task to your network administrators.

Validate roles
Test each role to ensure it filters the model correctly. It's easily done by using the View
As command on the Modeling ribbon tab.

When the model has dynamic rules using the USERNAME DAX function, be sure to test
for expected and unexpected values. When embedding Power BI content—specifically
using the embed for your customers scenario—app logic can pass any value as an
effective identity user name. Whenever possible, ensure accidental or malicious values
result in filters that return no rows.

Consider an example using Power BI embedded, where the app passes the user's job
role as the effective user name: It's either "Manager" or "Worker". Managers can see all
rows, but workers can only see rows where the Type column value is "Internal".

The following rule expression is defined:

DAX

IF(
    USERNAME() = "Worker",
    [Type] = "Internal",
    TRUE()
)

The problem with this rule expression is that all values, except "Worker", return all table
rows. So, an accidental value, like "Wrker", unintentionally returns all table rows.
Therefore, it's safer to write an expression that tests for each expected value. In the
following improved rule expression, an unexpected value results in the table returning
no rows.

DAX

IF(
    USERNAME() = "Worker",
    [Type] = "Internal",
    IF(
        USERNAME() = "Manager",
        TRUE(),
        FALSE()
    )
)

Design partial RLS


Sometimes, calculations need values that aren't constrained by RLS filters. For example,
a report may need to display a ratio of revenue earned for the report user's sales region
over all revenue earned.

While it's not possible for a DAX expression to override RLS—in fact, it can't even
determine that RLS is enforced—you can use a summary model table. The summary
model table is queried to retrieve revenue for "all regions" and it's not constrained by
any RLS filters.

Let's see how you could implement this design requirement. First, consider a model
design that comprises four tables:

The Salesperson table stores one row per salesperson. It includes the
EmailAddress column, which stores the email address for each salesperson. This
table is hidden.
The Sales table stores one row per order. It includes the Revenue % All Region
measure, which is designed to return a ratio of revenue earned by the report user's
region over revenue earned by all regions.
The Date table stores one row per date and allows filtering and grouping year and
month.
The SalesRevenueSummary is a calculated table. It stores total revenue for each
order date. This table is hidden.

The following expression defines the SalesRevenueSummary calculated table:

DAX

SalesRevenueSummary =
SUMMARIZECOLUMNS(
    Sales[OrderDate],
    "RevenueAllRegion", SUM(Sales[Revenue])
)

Note

An aggregation table could achieve the same design requirement.

The following RLS rule is applied to the Salesperson table:

DAX

[EmailAddress] = USERNAME()

Each of the three model relationships is described as follows:

There's a many-to-many relationship between the Salesperson and Sales tables. The
RLS rule filters the EmailAddress column of the hidden Salesperson table by using the
USERNAME DAX function. The Region column value (for the report user) propagates
to the Sales table.
There's a one-to-many relationship between the Date and Sales tables.
There's a one-to-many relationship between the Date and SalesRevenueSummary
tables.

The following expression defines the Revenue % All Region measure:

DAX

Revenue % All Region =
DIVIDE(
    SUM(Sales[Revenue]),
    SUM(SalesRevenueSummary[RevenueAllRegion])
)

Note

Take care to avoid disclosing sensitive facts. If there are only two regions in this
example, then it would be possible for a report user to calculate revenue for the
other region.

When to avoid using RLS


Sometimes it makes sense to avoid using RLS. If you have only a few simplistic RLS rules
that apply static filters, consider publishing multiple datasets instead. None of the
datasets define roles because each dataset contains data for a specific report user
audience, which has the same data permissions. Then, create one workspace per
audience and assign access permissions to the workspace or app.

For example, a company that has just two sales regions decides to publish a dataset for
each sales region to different workspaces. The datasets don't enforce RLS. They do,
however, use query parameters to filter source data. This way, the same model is
published to each workspace—they just have different dataset parameter values.
Salespeople are assigned access to just one of the workspaces (or published apps).
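
For example, each regional model could filter its source data by a query parameter. The
following Power Query M snippet is a sketch only; the SalesRegionParameter name and the
Region column are hypothetical, and the filter should fold to the source where possible.

Power Query M

let
    Source = Sql.Database("sales-server.database.windows.net", "SalesDB"),
    Sales = Source{[Schema="dbo", Item="Sales"]}[Data],
    // Filter the source rows by the parameter value set for this workspace's dataset
    #"Filtered Rows" = Table.SelectRows(Sales, each [Region] = SalesRegionParameter)
in
    #"Filtered Rows"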

There are several advantages associated with avoiding RLS:

Improved query performance: It can result in improved performance due to fewer
filters.
Smaller models: While it results in more models, they're smaller in size. Smaller
models can improve query and data refresh responsiveness, especially if the
hosting capacity experiences pressure on resources. Also, it's easier to keep model
sizes below size limits imposed by your capacity. Lastly, it's easier to balance
workloads across different capacities, because you can create workspaces on—or
move workspaces to—different capacities.
Additional features: Power BI features that don't work with RLS, like Publish to
web, can be used.

However, there are disadvantages associated with avoiding RLS:

Multiple workspaces: One workspace is required for each report user audience. If
apps are published, it also means there's one app per report user audience.
Duplication of content: Reports and dashboards must be created in each
workspace. It requires more effort and time to set up and maintain.
High privilege users: High privilege users, who belong to multiple report user
audiences, can't see a consolidated view of the data. They'll need to open multiple
reports (from different workspaces or apps).

Troubleshoot RLS
If RLS produces unexpected results, check for the following issues:

Incorrect relationships exist between model tables, in terms of column mappings
and filter directions. Keep in mind that RLS filters only propagate through active
relationships.
The Apply security filter in both directions relationship property isn't correctly set.
For more information, see Bi-directional relationship guidance.
Tables contain no data.
Incorrect values are loaded into tables.
The user is mapped to multiple roles.
The model includes aggregation tables, and RLS rules don't consistently filter
aggregations and details. For more information, see Use aggregations in Power BI
Desktop (RLS for aggregations).

When a specific user can't see any data, it could be because their UPN isn't stored or it's
entered incorrectly. It can happen when their user account has changed as the result of a
name change.

Tip

For testing purposes, add a measure that returns the USERNAME DAX function.
You might name it something like "Who Am I". Then, add the measure to a card
visual in a report and publish it to Power BI.
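
For example, the measure could be defined as follows (the name is only a suggestion):

DAX

Who Am I = USERNAME()
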
Creators and consumers with only Read permission on the dataset will only be able to
view the data they're allowed to see (based on their RLS role mapping).

When a user views a report in either a workspace or an app, RLS may or may not be
enforced depending on their dataset permissions. For this reason, it's critical that
content consumers and creators only possess Read permission on the underlying
dataset when RLS must be enforced. For details about the permissions rules that
determine whether RLS is enforced, see the Report consumer security planning article.

Next steps
For more information related to this article, check out the following resources:

Row-level security (RLS) with Power BI


Restrict data access with row-level security (RLS) for Power BI Desktop
Model relationships in Power BI Desktop
Power BI implementation planning: Report consumer security planning
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Power BI modeling guidance for Power
Platform
Article • 04/20/2023

Microsoft Dataverse is the standard data platform for many Microsoft business
application products, including Dynamics 365 Customer Engagement and Power Apps
canvas apps, and also Dynamics 365 Customer Voice (formerly Microsoft Forms Pro),
Power Automate approvals, Power Apps portals, and others.

This article provides guidance on how to create a Power BI data model that connects to
Dataverse. It describes differences between a Dataverse schema and an optimized Power
BI schema, and it provides guidance for expanding the visibility of your business
application data in Power BI.

Because of its ease of setup, rapid deployment, and widespread adoption, Dataverse
stores and manages an increasing volume of data in environments across organizations.
That means there's an even greater need—and opportunity—to integrate analytics with
those processes. Opportunities include:

Report on all Dataverse data moving beyond the constraints of the built-in charts.
Provide easy access to relevant, contextually filtered reports within a specific
record.
Enhance the value of Dataverse data by integrating it with external data.
Take advantage of Power BI's built-in artificial intelligence (AI) without the need to
write complex code.
Increase adoption of Power Platform solutions by increasing their usefulness and
value.
Deliver the value of the data in your app to business decision makers.

Connect Power BI to Dataverse


Connecting Power BI to Dataverse involves creating a Power BI data model. You can
choose from three methods to create a Power BI model.

Import Dataverse data by using the Dataverse connector: This method caches
(stores) Dataverse data in a Power BI model. It delivers fast performance thanks to
in-memory querying. It also offers design flexibility to modelers, allowing them to
integrate data from other sources. Because of these strengths, importing data is
the default mode when creating a model in Power BI Desktop.
Import Dataverse data by using Azure Synapse Link: This method is a variation on
the import method, because it also caches data in the Power BI model, but does so
by connecting to Azure Synapse Analytics. By using Azure Synapse Link for
Dataverse, Dataverse tables are continuously replicated to Azure Synapse or Azure
Data Lake Storage (ADLS) Gen2. This approach is used to report on hundreds of
thousands or even millions of records in Dataverse environments.
Create a DirectQuery connection by using the Dataverse connector: This method
is an alternative to importing data. A DirectQuery model consists only of metadata
defining the model structure. When a user opens a report, Power BI sends native
queries to Dataverse to retrieve data. Consider creating a DirectQuery model when
reports must show near real-time Dataverse data, or when Dataverse must enforce
role-based security so that users can only see the data they have privileges to
access.

Important

While a DirectQuery model can be a good alternative when you need near real-
time reporting or enforcement of Dataverse security in a report, it can result in slow
performance for that report.

You can learn about considerations for DirectQuery later in this article.

To determine the right method for your Power BI model, you should consider:

Query performance
Data volume
Data latency
Role-based security
Setup complexity

Tip

For a detailed discussion on model frameworks (import, DirectQuery, or
composite), their benefits and limitations, and features to help optimize Power BI
data models, see Choose a Power BI model framework.

Query performance
Queries sent to import models are faster than native queries sent to DirectQuery data
sources. That's because imported data is cached in memory and it's optimized for
analytic queries (filter, group, and summarize operations).

Conversely, DirectQuery models only retrieve data from the source after the user opens
a report, resulting in seconds of delay as the report renders. Additionally, user
interactions on the report require Power BI to requery the source, further reducing
responsiveness.

Data volume
When developing an import model, you should strive to minimize the data that's loaded
into the model. It's especially true for large models, or models that you anticipate will
grow to become large over time. For more information, see Data reduction techniques
for import modeling.

A DirectQuery connection to Dataverse is a good choice when the report's query result
isn't large. A large query result has more than 20,000 rows in the report's source tables,
or the result returned to the report after filters are applied is more than 20,000 rows. In
this case, you can create a Power BI report by using the Dataverse connector.

Note

The 20,000 row size isn't a hard limit. However, each data source query must return
a result within 10 minutes. Later in this article you will learn how to work within
those limitations and about other Dataverse DirectQuery design considerations.

You can improve the performance of larger datasets by using the Dataverse connector
to import the data into the data model.

Even larger datasets—with several hundred thousand or even millions of rows—can
benefit from using Azure Synapse Link for Dataverse. This approach sets up an ongoing
managed pipeline that copies Dataverse data into ADLS Gen2 as CSV or Parquet files.
Power BI can then query an Azure Synapse serverless SQL pool to load an import model.

Data latency
When the Dataverse data changes rapidly and report users need to see up-to-date data,
a DirectQuery model can deliver near real-time query results.

Tip
You can create a Power BI report that uses automatic page refresh to show real-
time updates, but only when the report connects to a DirectQuery model.

Import data models must complete a data refresh to allow reporting on recent data
changes. Keep in mind that there are limitations on the number of daily scheduled data
refresh operations. You can schedule up to eight refreshes per day on a shared capacity.
On a Premium capacity, you can schedule up to 48 refreshes per day, which can achieve
a 30-minute refresh frequency.

You can also consider using incremental refresh to achieve faster refreshes and near
real-time performance (only available with Premium).
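
Incremental refresh relies on filtering the source query by two datetime parameters that
must be named RangeStart and RangeEnd. The following Power Query M snippet is a
sketch only; it assumes you filter the account table by its modifiedon column, and the
filter needs to fold to Dataverse.

Power Query M

let
    Source = CommonDataService.Database("demo.crm.dynamics.com"),
    dbo_account = Source{[Schema="dbo", Item="account"]}[Data],
    // RangeStart and RangeEnd must be defined as datetime parameters in the model;
    // Power BI uses them to partition the table and refresh only recent periods
    #"Filtered Rows" = Table.SelectRows(
        dbo_account,
        each [modifiedon] >= RangeStart and [modifiedon] < RangeEnd
    )
in
    #"Filtered Rows"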

Role-based security
When there's a need to enforce role-based security, it can directly influence the choice
of Power BI model framework.

Dataverse can enforce complex role-based security to control access of specific records
to specific users. For example, a salesperson may be permitted to see only their sales
opportunities, while the sales manager can see all sales opportunities for all salespeople.
You can tailor the level of complexity based on the needs of your organization.

A DirectQuery model based on Dataverse can connect by using the security context of
the report user. That way, the report user will only see the data they're permitted to
access. This approach can simplify the report design, providing performance is
acceptable.

For improved performance, you can create an import model that connects to Dataverse
instead. In this case, you can add row-level security (RLS) to the model, if necessary.
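
For example, a simple dynamic RLS rule defined on a (hidden) user table could filter rows
by the report user's identity. The table and column names in this sketch are illustrative;
the column must store values that match the User Principal Name (UPN) that users sign
in to Power BI with.

DAX

[UserPrincipalName] = USERPRINCIPALNAME()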

Note

It might be challenging to replicate some Dataverse role-based security as Power BI
RLS, especially when Dataverse enforces complex permissions. Further, it might
require ongoing management to keep Power BI permissions in sync with Dataverse
permissions.

For more information about Power BI RLS, see Row-level security (RLS) guidance in
Power BI Desktop.

Setup complexity
Using the Dataverse connector in Power BI—whether for import or DirectQuery models
—is straightforward and doesn't require any special software or elevated Dataverse
permissions. That's an advantage for organizations or departments that are getting
started.

The Azure Synapse Link option requires system administrator access to Dataverse and
certain Azure permissions. These Azure permissions are required to set up the storage
account and a Synapse workspace.

Recommended practices
This section describes design patterns (and anti-patterns) you should consider when
creating a Power BI model that connects to Dataverse. Only a few of these patterns are
unique to Dataverse, but they tend to be common challenges for Dataverse makers
when they go about building Power BI reports.

Focus on a specific use case


Rather than trying to solve everything, focus on the specific use case.

This recommendation is probably the most common and easily the most challenging
anti-pattern to avoid. Attempting to build a single model that achieves all self-service
reporting needs is challenging. The reality is that successful models are built to answer
questions around a central set of facts over a single core topic. While that might initially
seem to limit the model, it's actually empowering because you can tune and optimize
the model for answering questions within that topic.

To help ensure that you have a clear understanding of the model's purpose, ask yourself
the following questions.

What topic area will this model support?
Who is the audience of the reports?
What questions are the reports trying to answer?
What is the minimum viable dataset?

Resist combining multiple topic areas into a single model just because the report user
has questions across multiple topic areas that they want addressed by a single report. By
breaking that report out into multiple reports, each with a focus on a different topic (or
fact table), you can produce much more efficient, scalable, and manageable models.

Design a star schema


Dataverse developers and administrators who are comfortable with the Dataverse
schema may be tempted to reproduce the same schema in Power BI. This approach is an
anti-pattern, and it's probably the toughest to overcome because it just feels right to
maintain consistency.

Dataverse, as a relational model, is well suited for its purpose. However, it's not
designed as an analytic model that's optimized for analytical reports. The most prevalent
pattern for modeling analytics data is a star schema design. Star schema is a mature
modeling approach widely adopted by relational data warehouses. It requires modelers
to classify their model tables as either dimension or fact. Reports can filter or group by
using dimension table columns and summarize fact table columns.

For more information, see Understand star schema and the importance for Power BI.

Optimize Power Query queries


The Power Query mashup engine strives to achieve query folding whenever possible for
reasons of efficiency. A query that achieves folding delegates query processing to the
source system.

The source system, in this case Dataverse, then only needs to deliver filtered or
summarized results to Power BI. A folded query is often significantly faster and more
efficient than a query that doesn't fold.

For more information on how you can achieve query folding, see Power Query query
folding.
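
For example, a simple row filter applied in Power Query will typically fold to Dataverse so
that only matching rows are retrieved. The following sketch assumes the account table
and filters on its statecode column, where 0 represents active rows:

Power Query M

let
    Source = CommonDataService.Database("demo.crm.dynamics.com"),
    dbo_account = Source{[Schema="dbo", Item="account"]}[Data],
    // This step folds to Dataverse, so only active accounts are retrieved
    #"Filtered Rows" = Table.SelectRows(dbo_account, each [statecode] = 0)
in
    #"Filtered Rows"
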
Note

Optimizing Power Query is a broad topic. To achieve a better understanding of
what Power Query is doing at authoring and at model refresh time in Power BI
Desktop, see Query diagnostics.

Minimize the number of query columns


By default, when you use Power Query to load a Dataverse table, it retrieves all rows and
all columns. When you query a system user table, for example, it could contain more
than 1,000 columns. The columns in the metadata include relationships to other entities
and lookups to option labels, so the total number of columns grows with the complexity
of the Dataverse table.

Attempting to retrieve data from all columns is an anti-pattern. It often results in
extended data refresh operations, and it will cause the query to fail when the time
needed to return the data exceeds 10 minutes.

We recommend that you only retrieve columns that are required by reports. It's often a
good idea to reevaluate and refactor queries when report development is complete,
allowing you to identify and remove unused columns. For more information, see Data
reduction techniques for import modeling (Remove unnecessary columns).

Additionally, ensure that you introduce the Power Query Remove columns step early so
that it folds back to the source. That way, Power Query can avoid the unnecessary work
of extracting source data only to discard it later (in an unfolded step).

When you have a table that contains many columns, it might be impractical to use the
Power Query interactive query builder. In this case, you can start by creating a blank
query. You can then use the Advanced Editor to paste in a minimal query that creates a
starting point.

Consider the following query that retrieves data from just two columns of the account
table.

Power Query M

let
    Source = CommonDataService.Database("demo.crm.dynamics.com",
        [CreateNavigationProperties=false]),
    dbo_account = Source{[Schema="dbo", Item="account"]}[Data],
    #"Removed Other Columns" = Table.SelectColumns(dbo_account,
        {"accountid", "name"})
in
    #"Removed Other Columns"

Write native queries


When you have specific transformation requirements, you might achieve better
performance by using a native query written in Dataverse SQL, which is a subset of
Transact-SQL. You can write a native query to:

Reduce the number of rows (by using a WHERE clause).
Aggregate data (by using the GROUP BY and HAVING clauses).
Join tables in a specific way (by using the JOIN or APPLY syntax).
Use supported SQL functions.

For more information, see:

Use SQL to query data
How Dataverse SQL differs from Transact-SQL

Execute native queries with the EnableFolding option


Power Query executes a native query by using the Value.NativeQuery function.

When using this function, it's important to add the EnableFolding=true option to ensure
queries are folded back to the Dataverse service. A native query won't fold unless this
option is added. Enabling this option can result in significant performance
improvements—up to 97 percent faster in some cases.

Consider the following query that uses a native query to source selected columns from
the account table. The native query will fold because the EnableFolding=true option is
set.

Power Query M

let
    Source = CommonDataService.Database("demo.crm.dynamics.com"),
    dbo_account = Value.NativeQuery(
        Source,
        "SELECT A.accountid, A.name FROM account A",
        null,
        [EnableFolding=true]
    )
in
    dbo_account
You can expect to achieve the greatest performance improvements when retrieving a
subset of data from a large data volume.
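
For example, the following sketch (with illustrative column choices) pushes both a row
filter and an aggregation to Dataverse, so only summarized rows are returned to
Power BI:

Power Query M

let
    Source = CommonDataService.Database("demo.crm.dynamics.com"),
    // Aggregate active accounts by state or province at the source
    AccountsByState = Value.NativeQuery(
        Source,
        "SELECT A.address1_stateorprovince, COUNT(*) AS AccountCount
         FROM account A
         WHERE A.statecode = 0
         GROUP BY A.address1_stateorprovince",
        null,
        [EnableFolding=true]
    )
in
    AccountsByState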

Tip

Performance improvement can also depend on how Power BI queries the source
database. For example, a measure that uses the COUNTDISTINCT DAX function
showed almost no improvement with or without the folding hint. When the
measure formula was rewritten to use the SUMX DAX function, the query folded
resulting in a 97 percent improvement over the same query without the hint.

For more information, see Value.NativeQuery. (The EnableFolding option isn't
documented because it's specific to only certain data sources.)

Speed up the evaluation stage


If you're using the Dataverse connector (formerly known as the Common Data Service),
you can add the CreateNavigationProperties=false option to speed up the evaluation
stage of a data import.

The evaluation stage of a data import iterates through the metadata of its source to
determine all possible table relationships. That metadata can be extensive, especially for
Dataverse. By adding this option to the query, you're letting Power Query know that you
don't intend to use those relationships. The option allows Power BI Desktop to skip that
stage of the refresh and move on to retrieving the data.

Note

Don't use this option when the query depends on any expanded relationship
columns.

Consider an example that retrieves data from the account table. It contains three
columns related to territory: territory, territoryid, and territoryidname.
When you set the CreateNavigationProperties=false option, the territoryid and
territoryidname columns will remain, but the territory column, which is a relationship
column (it shows Value links), will be excluded. It's important to understand that Power
Query relationship columns are a different concept to model relationships, which
propagate filters between model tables.

Consider the following query that uses the CreateNavigationProperties=false option (in
the Source step) to speed up the evaluation stage of a data import.

Power Query M

let
    Source = CommonDataService.Database("demo.crm.dynamics.com",
        [CreateNavigationProperties=false]),
    dbo_account = Source{[Schema="dbo", Item="account"]}[Data],
    #"Removed Other Columns" = Table.SelectColumns(dbo_account,
        {"accountid", "name", "address1_stateorprovince", "address1_country",
        "industrycodename", "territoryidname"}),
    #"Renamed Columns" = Table.RenameColumns(#"Removed Other Columns",
        {{"name", "Account Name"}, {"address1_country", "Country"},
        {"address1_stateorprovince", "State or Province"}, {"territoryidname",
        "Territory"}, {"industrycodename", "Industry"}})
in
    #"Renamed Columns"

When using this option, you're likely to experience significant performance
improvement when a Dataverse table has many relationships to other tables. For
example, because the SystemUser table is related to every other table in the database,
refresh performance of this table would benefit by setting the
CreateNavigationProperties=false option.

Note

This option can improve the performance of data refresh of import tables or dual
storage mode tables, including the process of applying Power Query Editor
window changes. It doesn't improve the performance of interactive cross-filtering
of DirectQuery storage mode tables.

Resolve blank choice labels


If you discover that Dataverse choice labels are blank in Power BI, it could be because
the labels haven't been published to the Tabular Data Stream (TDS) endpoint.

In this case, open the Dataverse Maker Portal, navigate to the Solutions area, and then
select Publish all customizations. The publication process will update the TDS endpoint
with the latest metadata, making the option labels available to Power BI.

Larger datasets with Azure Synapse Link


Dataverse includes the ability to synchronize tables to Azure Data Lake Storage (ADLS)
and then connect to that data through an Azure Synapse workspace. With minimal
effort, you can set up Azure Synapse Link to populate Dataverse data into Azure
Synapse and enable data teams to discover deeper insights.

Azure Synapse Link enables a continuous replication of the data and metadata from
Dataverse into the data lake. It also provides a built-in serverless SQL pool as a
convenient data source for Power BI queries.

The strengths of this approach are significant. Customers gain the ability to run
analytics, business intelligence, and machine learning workloads across Dataverse data
by using various advanced services. Advanced services include Apache Spark, Power BI,
Azure Data Factory, Azure Databricks, and Azure Machine Learning.

Create an Azure Synapse Link for Dataverse


To create an Azure Synapse Link for Dataverse, you'll need the following prerequisites in
place.

System administrator access to the Dataverse environment.
For the Azure Data Lake Storage:
You must have a storage account to use with ADLS Gen2.
You must be assigned Storage Blob Data Owner and Storage Blob Data
Contributor access to the storage account. For more information, see Role-
based access control (Azure RBAC).
The storage account must enable hierarchical namespace.
It's recommended that the storage account use read-access geo-redundant
storage (RA-GRS).
For the Synapse workspace:
You must have access to a Synapse workspace and be assigned Synapse
Administrator access. For more information, see Built-in Synapse RBAC roles
and scopes.
The workspace must be in the same region as the ADLS Gen2 storage account.

The setup involves signing in to Power Apps and connecting Dataverse to the Azure
Synapse workspace. A wizard-like experience allows you to create a new link by
selecting the storage account and the tables to export. Azure Synapse Link then copies
data to the ADLS Gen2 storage and automatically creates views in the built-in Azure
Synapse serverless SQL pool. You can then connect to those views to create a Power BI
model.

Tip

For complete documentation on creating, managing, and monitoring Azure
Synapse Link, see Create an Azure Synapse Link for Dataverse with your Azure
Synapse Workspace.

Create a second serverless SQL database


You can create a second serverless SQL database and use it to add custom report views.
That way, you can present a simplified set of data to the Power BI creator that allows
them to create a model based on useful and relevant data. The new serverless SQL
database becomes the creator's primary source connection and a friendly representation
of the data sourced from the data lake.
This approach delivers data to Power BI that's focused, enriched, and filtered.

You can create a serverless SQL database in the Azure Synapse workspace by using
Azure Synapse Studio. Select Serverless as the SQL database type and enter a database
name. Power Query can connect to this database by connecting to the workspace SQL
endpoint.

Create custom views


You can create custom views that wrap serverless SQL pool queries. These views will
serve as straightforward, clean sources of data that Power BI connects to. The views
should:

Include the labels associated with choice fields.


Reduce complexity by including only the columns required for data modeling.
Filter out unnecessary rows, such as inactive records.

Consider the following view that retrieves campaign data.

SQL

CREATE VIEW [VW_Campaign]
AS
SELECT
    [base].[campaignid] AS [CampaignID],
    [base].[name] AS [Campaign],
    [campaign_status].[LocalizedLabel] AS [Status],
    [campaign_typecode].[LocalizedLabel] AS [Type Code]
FROM
    [<MySynapseLinkDB>].[dbo].[campaign] AS [base]
    LEFT OUTER JOIN [<MySynapseLinkDB>].[dbo].[OptionsetMetadata] AS [campaign_typecode]
        ON [base].[typecode] = [campaign_typecode].[option]
        AND [campaign_typecode].[LocalizedLabelLanguageCode] = 1033
        AND [campaign_typecode].[EntityName] = 'campaign'
        AND [campaign_typecode].[OptionSetName] = 'typecode'
    LEFT OUTER JOIN [<MySynapseLinkDB>].[dbo].[StatusMetadata] AS [campaign_status]
        ON [base].[statuscode] = [campaign_status].[status]
        AND [campaign_status].[LocalizedLabelLanguageCode] = 1033
        AND [campaign_status].[EntityName] = 'campaign'
WHERE
    [base].[statecode] = 0;

Notice that the view includes only four columns, each aliased with a friendly name.
There's also a WHERE clause to return only necessary rows, in this case active campaigns.
Also, the view queries the campaign table that's joined to the OptionsetMetadata and
StatusMetadata tables, which retrieve choice labels.

Tip

For more information on how to retrieve metadata, see Access choice labels
directly from Azure Synapse Link for Dataverse.

Query appropriate tables


Azure Synapse Link for Dataverse ensures that data is continually synchronized with the
data in the data lake. For high-usage activity, simultaneous writes and reads can create
locks that cause queries to fail. To ensure reliability when retrieving data, two versions of
the table data are synchronized in Azure Synapse.

Near real-time data: Provides a copy of data synchronized from Dataverse via
Azure Synapse Link in an efficient manner by detecting what data has changed
since it was initially extracted or last synchronized.
Snapshot data: Provides a read-only copy of near real-time data that's updated at
regular intervals (in this case every hour). Snapshot data table names have
_partitioned appended to their name.

If you anticipate that a high volume of read and write operations will be executed
simultaneously, retrieve data from the snapshot tables to avoid query failures.

For more information, see Access near real-time data and read-only snapshot data.
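
For example, a serverless SQL pool query, or a custom view like the one shown earlier,
can reference the snapshot copy of a table by its _partitioned name. The database and
column names in this sketch are illustrative:

SQL

-- Query the hourly snapshot copy of the account table rather than the
-- near real-time copy, to avoid conflicts during heavy write activity
SELECT
    [accountid],
    [name]
FROM [<MySynapseLinkDB>].[dbo].[account_partitioned];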

Connect to Synapse Analytics


To query an Azure Synapse serverless SQL pool, you'll need its workspace SQL endpoint.
You can retrieve the endpoint from Synapse Studio by opening the serverless SQL pool
properties.
In Power BI Desktop, you can connect to Azure Synapse by using the Azure Synapse
Analytics SQL connector. When prompted for the server, enter the workspace SQL
endpoint.

Considerations for DirectQuery


There are many use cases when using DirectQuery storage mode can solve your
requirements. However, using DirectQuery can negatively affect Power BI report
performance. A report that uses a DirectQuery connection to Dataverse won't be as fast
as a report that uses an import model. Generally, you should import data to Power BI
whenever possible.

We recommend that you consider the topics in this section when working with
DirectQuery.

For more information about determining when to work with DirectQuery storage mode,
see Choose a Power BI model framework.

Use dual storage mode dimension tables


A dual storage mode table is set to use both import and DirectQuery storage modes. At
query time, Power BI determines the most efficient mode to use. Whenever possible,
Power BI attempts to satisfy queries by using imported data because it's faster.

You should consider setting dimension tables to dual storage mode, when appropriate.
That way, slicer visuals and filter card lists—which are often based on dimension table
columns—will render more quickly because they'll be queried from imported data.
Important

When a dimension table needs to inherit the Dataverse security model, it isn't
appropriate to use dual storage mode.

Fact tables, which typically store large volumes of data, should remain as DirectQuery
storage mode tables. They'll be filtered by the related dual storage mode dimension
tables, which can be joined to the fact table to achieve efficient filtering and grouping.

Consider the following data model design. Three dimension tables, Owner, Account,
and Campaign have a striped upper border, which means they're set to dual storage
mode.

For more information on table storage modes including dual storage, see Manage
storage mode in Power BI Desktop.

Enable single sign-on
When you publish a DirectQuery model to the Power BI service, you can use the dataset
settings to enable single sign-on (SSO) by using Azure Active Directory (Azure AD)
OAuth2 for your report users. You should enable this option when Dataverse queries
must execute in the security context of the report user.
When the SSO option is enabled, Power BI sends the report user's authenticated Azure
AD credentials in the queries to Dataverse. This option enables Power BI to honor the
security settings that are set up in the data source.

For more information, see Single sign-on (SSO) for DirectQuery sources.

Replicate "My" filters in Power Query


When using Microsoft Dynamics 365 Customer Engagement (CE) and model-driven
Power Apps built on Dataverse, you can create views that show only records where a
username field, like Owner, equals the current user. For example, you might create views
named "My open opportunities", "My active cases", and others.

Consider an example of how the Dynamics 365 My Active Accounts view includes a filter
where Owner equals current user.
You can reproduce this result in Power Query by using a native query that embeds the
CURRENT_USER token.

Consider the following example that shows a native query that returns the accounts for
the current user. In the WHERE clause, notice that the ownerid column is filtered by the
CURRENT_USER token.

Power Query M

let
    Source = CommonDataService.Database("demo.crm.dynamics.com",
        [CreateNavigationProperties=false]),
    dbo_account = Value.NativeQuery(Source, "
        SELECT
            accountid, accountnumber, ownerid, address1_city,
            address1_stateorprovince, address1_country
        FROM account
        WHERE statecode = 0
            AND ownerid = CURRENT_USER
        ", null, [EnableFolding=true])
in
    dbo_account

When you publish the model to the Power BI service, you must enable single sign-on
(SSO) so that Power BI will send the report user's authenticated Azure AD credentials to
Dataverse.

Create supplementary import models


You can create a DirectQuery model that enforces Dataverse permissions knowing that
performance will be slow. You can then supplement this model with import models that
target specific subjects or audiences that could enforce RLS permissions.
For example, an import model could provide access to all Dataverse data but not
enforce any permissions. This model would be suited to executives who already have
access to all Dataverse data.

As another example, when Dataverse enforces role-based permissions by sales region,
you could create one import model and replicate those permissions using RLS.
Alternatively, you could create a model for each sales region. You could then grant read
permission to those models (datasets) to the salespeople of each region. To facilitate the
creation of these regional models, you can use parameters and report templates. For
more information, see Create and use report templates in Power BI Desktop.

Next steps
For more information related to this article, check out the following resources.

Azure Synapse Link for Dataverse
Understand star schema and the importance for Power BI
Data reduction techniques for Import modeling
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Migrate from Azure Analysis Services to
Power BI Premium
Article • 05/22/2023

This article targets Azure Analysis Services (AAS) data modelers and administrators. It
provides them with guidance and rationale to help migrate their AAS databases to
Power BI Premium or Power BI Embedded.

Background
Power BI has evolved into the leading platform for both self-service and IT-managed
enterprise business intelligence (BI). With exponential growth in data volumes and
complexity, Power BI customers demand enterprise BI solutions that scale to petabytes,
are secure, easy to manage, and accessible to all users across the largest of
organizations.

For over two decades, Microsoft has continued to make deep investments in enterprise
BI. AAS and SQL Server Analysis Services (SSAS) are based on mature BI data modeling
technology used by countless enterprises. Today, that same technology is also at the
heart of Power BI datasets.

Note

In this article, the terms data model, semantic model, BI model, tabular model,
database, and Power BI dataset have the same meaning. This article commonly uses
the terms data model for AAS model and dataset for Power BI model. This article
describes the process of migrating to Power BI Premium but this also applies to
Power BI Embedded.

In recent years, Microsoft has taken great strides to deliver AAS capabilities to Power BI
Premium. As a result, Power BI instantly inherited a large ecosystem of developers,
partners, BI tools, and solutions that were built up over decades. Today, the full set of
Power BI Premium workloads, features, and capabilities now results in a modern, cloud
BI platform that goes far beyond comparable functionality available in AAS or SSAS.

Today, many customers have Power BI reports that live connect to AAS. Naturally, these
customers are asking whether there's an opportunity to consolidate by hosting their
data models alongside their reports in Power BI. They often ask questions like:
Does all the AAS functionality we depend on work in Power BI?
Is Power BI backwards compatible with AAS tools and processes?
What capabilities are available only in Power BI?
How do we compare costs between AAS and Power BI?
Why is Microsoft converging enterprise and self-service BI?
How do we migrate from AAS to Power BI Premium?
Is AAS marked for deprecation?
What's Microsoft's roadmap for enterprise data models?

Answers to many of these questions are described in this article.

Note

The decision to migrate to Power BI Premium depends on the requirements of each
customer. Customers should carefully evaluate additional benefits in order to make
an informed decision. We expect to see organic migration to Power BI Premium
over time, and our intention is that it happens on terms that the customer is
comfortable with.

To be clear, currently there aren't any plans to deprecate AAS. There is a priority to
focus investment on Power BI Premium for enterprise data modeling, and so the
additional value provided by Power BI Premium will increase over time. Customers
who choose Power BI Premium can expect to benefit from alignment with the
Microsoft BI product roadmap.

Convergence of self-service and enterprise BI


Consolidation of items (like reports and dashboards) in Power BI results in simplified
discovery and management due to co-location. Once consolidated, there's no need to
bridge the gap between AAS and Power BI. Central IT teams can then more easily adopt
self-service items that have become popular yet are resulting in a management burden
for the business. IT can take over such items. They can operationalize them for mission-
critical decision making based on governed data that's aligned with corporate standards
and with lineage transparency. Simplifying this workflow by sharing a common platform
promotes better collaboration between the business and IT.

Power BI Premium
Thanks to its distributed architecture, Power BI Premium is less sensitive to overall load,
temporal spikes, and high concurrency. By consolidating capacities to larger Power BI
Premium SKUs, customers can achieve increased performance and throughput.

Scalability benefits associated with Power BI Premium are described later in this article.

Feature comparison
AAS provides the Analysis Services database engine for hosting data models, which is a
core component of a Microsoft enterprise BI architecture. In fact, Power BI Premium is a
superset of AAS because it provides much more functionality. The following table lists
features supported in AAS and Power BI Premium. The table focuses on - but isn't
limited to - Power BI dataset-related capabilities.

Feature AAS Power BI Premium

Premium workloads

Paginated reports, which are ideal for reports that are designed to be printed,
especially when table data overflows to multiple pages No Yes

Dataflows, which store fragments of data intended for use in a Power BI
dataset No Yes

AI with dataflows, which use artificial intelligence (AI) with Cognitive Services,
Automated Machine Learning, and Azure Machine Learning (AML) integration No Yes

Metrics, which curate key business measures and allow tracking them against
objectives No Yes

Business enablement

Unlimited report distribution to anyone (even outside the organization) No Yes

Business-driven interactive reports, workspaces, and apps No Yes

Platform scalability and resiliency

Power BI Premium architecture, which supports increased scale and
performance No Yes

Optimized dataset memory management No Yes

Scale limits per data model instead of per server No Yes

CPU smoothing for refresh resiliency No Yes

Autoscale, which automatically adds compute capacity to avoid slowdowns
under heavy use No Yes

Business continuity and disaster recovery (BCDR) with Azure regions and
availability zones No Yes

Interactive analysis over big data

Large model sizes (up to 400 GB with compression) Yes Yes

Hybrid tables, which comprise in-memory and DirectQuery partitions that can
help deliver near real-time results over large tables No Yes

Automatic aggregations, which use state-of-the-art machine learning (ML) to
continuously optimize DirectQuery performance No Yes

User-defined aggregations, which can improve query performance over very
large DirectQuery tables No Yes

Query scale-out, which distributes client queries among replicated servers Yes Yes

Security

Bring Your Own Key (BYOK), which allows customers to use their own
encryption key to encrypt data stored in the Microsoft cloud No Yes

Virtual network connectivity, which allows Power BI to work seamlessly in an
organization's virtual network (VNet) No Yes

Azure Private Link, which provides secure access for data traffic in Power BI No Yes

Single sign-on (SSO) for DirectQuery sources, which allows connecting to data
sources by using the report user's identity No Yes

Row-level security (RLS), which restricts access to specific rows of data for
specific users Yes Yes

Object-level security (OLS), which restricts access to specific tables or columns
for specific users Yes Yes

Firewall, which when enabled, allows setting allowed IP address ranges Yes No 1

Governance

Microsoft Purview integration, which helps customers manage and govern
Power BI items No Yes

Microsoft Information Protection (MIP) sensitivity labels and integration with
Microsoft Defender for Cloud Apps for data loss prevention No Yes

Content endorsement, to promote or certify valuable, high-quality Power BI
items No Yes

Semantic modeling

Compatibility with Power BI Desktop No Yes

Composite models including using DirectQuery for Power BI datasets and AAS No Yes

Translations for multi-language model versions observed by the Power BI
service No Yes

Analysis Services engine semantic modeling Yes Yes

Model management

Incremental refresh, which uses policies to automate partition management
and can help deliver near real-time reporting (see hybrid tables) No Yes

Deployment pipelines, which manage the lifecycle of Power BI content No Yes

Scheduled refresh, which keeps cached dataset data current No Yes

Enhanced refresh, which allows any programming language to perform
asynchronous dataset refreshes by using a REST API call Yes Yes

Backup and restore Yes Yes

Dataset workload settings, which control Premium capacity workloads No Yes

Server properties, which control Analysis Services server instance properties Yes No

Alias server names, which allow connecting to an Analysis Services server
instance by using a shorter alias Yes No

XMLA endpoint enabled APIs for scripting and compatibility with services for
automation and ALM including Azure Functions, Azure Automation and Azure
DevOps Yes Yes

Connectivity

Support for all Power BI data sources No Yes

XMLA endpoint, which allows open-platform connectivity for data model
consumption and visualization tools, including third-party tools Yes Yes

Multi-Geo feature, which helps multinational customers address regional,
industry-specific, or organizational data residency requirements Yes Yes

Discoverability

Data hub integration, which helps users discover, explore, and use Power BI
datasets No Yes

Data lineage view and dataset impact analysis, which help users understand
and assess Power BI item dependencies No Yes

Monitoring and diagnostic logging

Premium capacity metrics app, which provides monitoring capabilities for
Power BI capacities No Yes

Power BI audit log, which tracks user activities across Power BI and Microsoft
365 No Yes

Azure Log Analytics (LA) integration, which allows administrators to configure a
Log Analytics connection for a Power BI workspace Yes Yes

Metric alerts in Azure Monitor, which provide a way to get notified when one
of your multi-dimensional metrics crosses a threshold Yes No

XMLA endpoint, which allows diagnostic logging tool connections, including
SQL Server Profiler Yes Yes

SQL Server Extended Events (xEvents), which is a light-weight tracing and
performance monitoring system useful for diagnosing issues Yes No

1 Use VNet connectivity and Azure Private Link instead.

Cost comparison
When comparing Power BI Premium to AAS costs, be sure to consider factors beyond
price per core. Power BI provides reduced cost of ownership and business value, with
many features that are only available to Power BI data models.

Also, assuming you already use Power BI in your organization, calculate costs based on
the existing profile that combines AAS and Power BI. Compare the existing profile with
the target profile on Power BI Premium. To determine the target profile, be sure to
consider the following points:

Region requirements.
The largest AAS data model size in each region.
The number of users in each region.
The number of users required to develop and manage content.
CPU consumption across AAS and Power BI Premium.

Important

CPU consumption across AAS and Power BI Premium may vary significantly due to
numerous factors. Factors can include the use of other workloads on the same
capacities, refresh patterns, and query patterns. We recommended that you
perform in-depth analysis to quantify comparative CPU consumption across AAS
and Power BI Premium for migrated models.

Tip

To help determine the right type and number of licenses for your business
requirements and circumstances, see this related article.

Consolidation opportunity
Many AAS customers already have Power BI reports that connect to AAS. So, migration
to Power BI can represent an opportunity to consolidate BI items in Power BI Premium.
Consolidation makes the larger sized Premium SKUs more economically viable and can
help to provide higher levels of throughput and scalability.

PPU licenses
The Premium Per User (PPU) license is a per-user license that provides a lower-cost price
point for Premium. PPU licenses are typically purchased by small and medium-sized
companies. They support all the Premium capabilities for data modeling listed earlier.

Tip

It's possible to incrementally upgrade Power BI Pro licenses to PPU licenses.

Pro licenses
A Pro (or PPU) license is required to publish and manage Power BI content. Pro licenses
are typically assigned to developers and administrators, not end users.

Development and test environments


AAS offers the D and B SKUs at lower cost with reduced service-level agreements and/or
fewer features than the S SKUs. Some AAS customers use these SKUs for development
and test environments. While there's no direct equivalent in Power BI, it might make
sense to use PPU licenses for development and test environments. Such environments
typically don't have large numbers of users because they're limited to developers and
testers. Alternatively, consider using an A SKU in Azure for testing Premium capacity
functionality.

For more information, see:

Power BI pricing
Azure Analysis Services pricing
Purchase A SKUs for testing and other scenarios

Scalability benefits
Power BI Premium delivers scalability, performance, and cost-of-ownership benefits not
available in AAS.

Power BI Premium provides features that enable fast interactive analysis over big data.
Such features include aggregations, composite models, and hybrid tables. Each feature
offers a different way to optimally combine import and DirectQuery storage modes,
effectively reducing memory use. AAS, on the other hand, doesn't support these
capabilities; the entire data model uses either import or DirectQuery storage mode.

Power BI Premium limits memory per dataset, and not per capacity or server. Conversely,
AAS requires all data models fit in memory on a single server. That requirement can
compel customers with large data models to purchase larger SKU sizes.

Thanks to the distributed nature of the Premium architecture, more datasets can be
refreshed in parallel. Performing concurrent refreshes on the same AAS server can lead
to refresh errors due to exceeding server memory limits.

In Power BI Premium, CPU consumption during refresh is spread across 24-hour periods.
Power BI Premium evaluates capacity throughput to provide resilience to temporal
spikes in demand for compute resources. When necessary, it can delay refreshes until
sufficient resources become available. This automatic behavior reduces the need for
customers to perform detailed analysis and manage automation scripts to scale servers
up or down. Premium customers should decide on the optimal SKU size for their overall
CPU consumption requirements.

Another advantage of Power BI Premium is that it's able to dynamically balance the
datasets depending on the load of the system. This automatic behavior ensures
busy/active datasets get the necessary memory and CPU resources, while more idle
datasets can be evicted or migrated to other nodes. Datasets are candidates for eviction
when they're not used. They'll be loaded on-demand so that only the required data is
loaded into memory without having to load the whole dataset. On the other hand, AAS
requires all data models be fully loaded in memory always. This requirement means
queries to AAS can rely on the data model being available, but – especially for Power BI
capacities with a high number of data models when some of them are used infrequently
– dynamic memory management can make more efficient use of memory.

Lastly, Power BI Premium is able to better utilize next-generation hardware rollouts to
benefit from scalability and performance enhancements.

Considerations and limitations


There are considerations and limitations to factor into your planning before migrating to
Power BI Premium.

Permissions
AAS and SSAS use roles to manage data model access. There are two types of roles: the
server role and database roles. The server role is a fixed role that grants administrator
access to the Analysis Services server instance. Database roles, which are set by data
modelers and administrators, control access to the database and data for non-
administrator users.

Unlike AAS, in Power BI, you only use roles to enforce RLS or OLS. To grant permissions
beyond RLS and OLS, use the Power BI security model (workspace roles and dataset
permissions). For more information, see Dataset permissions.

For more information about Power BI model roles, see Dataset connectivity with the
XMLA endpoint (Model roles).

When you migrate a data model from AAS to Power BI Premium, you must take the
following points into consideration:

Users who were granted Read permission on a model in AAS must be granted
Build permission on the migrated Power BI dataset.
Users who were granted the Administrator permission on a model in AAS must be
granted Write permission on the migrated Power BI dataset.

Refresh automation
Power BI Premium supports XMLA endpoint-enabled APIs for scripting, such as Tabular
Model Scripting Language (TMSL), Tabular Object Model (TOM), and the PowerShell
SqlServer module. These APIs have almost identical interfaces to AAS. For more
information, see Dataset connectivity with the XMLA endpoint (Client applications and
tools).

Compatibility with services for automation, including Azure Functions, Azure
Automation, and Azure Logic Apps, is enabled in the same way.

Generally, scripts and processes that automate partition management and processing
in AAS will work in Power BI Premium. Bear in mind that Power BI Premium datasets
support the incremental refresh feature, which provides automated partition
management for tables that frequently load new and updated data.

Like for AAS, you can use a service principal as an automation account for Power BI
dataset management operations, such as refreshes. For more information, see Dataset
connectivity with the XMLA endpoint (Service principals).

Custom security
Like for AAS, applications can use a service principal to query a Power BI Premium per
capacity or Power BI Embedded dataset by using the CustomData feature.

However, you can't assign a service principal to a model role in Power BI Premium.
Instead, a service principal gains access by assignment to the workspace admin or
member role.

Note

You can't use the CustomData feature when querying Premium Per User (PPU)
datasets because it would be in violation of the license terms and conditions.

Impersonation for testing


Impersonation techniques, including the EffectiveUserName and the Roles connection
string properties, are supported by AAS and Power BI Premium. You typically use them
when testing security roles.

Network security
Setting up network security in AAS requires enabling the firewall and configuring IP
address ranges for only those computers accessing the server.

Power BI doesn't have a firewall feature. Instead, Power BI offers a superior network
security model by using VNets and Private Links. For more information, see What is a
virtual network (VNet)?.

Data sources and credentials


AAS defines credentials for each data source declared in the TOM tabular metadata.
However, Power BI doesn't work that way. Because Power BI can share data source
credentials across multiple datasets, credentials are set in the Power BI service.

Any XMLA-based process that sets data source credentials must be replaced. For more
information, see Dataset connectivity with the XMLA endpoint (Deploy model projects
from Visual Studio).

Backup and restore


Backup and restore in AAS requires Azure Blob storage, while in Power BI Premium it
requires an Azure Data Lake Storage Gen2 (ADLS Gen2) account. Apart from the storage
account difference, backup and restore work the same way in both products.

For more information, see Backup and restore datasets with Power BI Premium.

On-premises data gateway


Both AAS and Power BI Premium use the same on-premises data gateway to connect to
data sources. However, the setup steps are different.

For information on how to set up gateway data sources for Power BI Premium, see Add
or remove a gateway data source.

Server properties
Unlike AAS, Power BI Premium doesn't support server properties. Instead, you manage
Premium capacity settings.

Link files
Unlike AAS, Power BI Premium doesn't support alias server names.

Dynamic management views (DMVs)


Some DMVs that work in AAS aren't accessible in Power BI Premium because they
require Analysis Services server-admin permissions. Power BI has workspace roles, but
there isn't a workspace role that grants the equivalent of Analysis Services server-admin
permissions.

PowerShell
You can use the SqlServer PowerShell module AAS cmdlets to automate dataset
management tasks, including refresh operations. For more information, see Analysis
Services PowerShell Reference.

However, the Az.AnalysisServices module AAS cmdlets aren't supported for Power BI
datasets. Instead, use the Microsoft Power BI Cmdlets for Windows PowerShell and
PowerShell Core.

Diagnostic logging
AAS integrates with Azure Monitor for diagnostic logging. The most common target for
AAS logs is a Log Analytics workspace.

Power BI Premium also supports logging to Log Analytics workspaces. Currently, the
events sent to Log Analytics are mainly Analysis Services engine events. However, not all
events supported for AAS are supported for Power BI. Also, the Log Analytics schema for
Power BI differs from the AAS schema, which means existing AAS queries may not
work in Power BI.

Power BI offers another diagnostic logging capability that isn't offered in AAS. For more
information, see Use the Premium metrics app.

SQL Server Extended Events (xEvents) are supported in AAS but not in Power BI
Premium. For more information, see Monitor Analysis Services with SQL Server Extended
Events.

Business-to-business (B2B)
Both AAS and Power BI support Azure AD B2B collaboration, which enables and governs
sharing with external users. Notably, the User Principal Name (UPN) format required by
AAS is different from the format that Power BI requires.

To identify the user, Power BI uses a unique name claim in Azure AD, while AAS uses an
email claim. While there may be many instances where these two identifiers align, the
unique name format is more stringent. If you use dynamic row-level security (RLS) in
Power BI, ensure that the values in the user identity table match the account used to
sign in to Power BI.
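For example, a dynamic RLS role typically filters a user identity table by using the
USERPRINCIPALNAME DAX function. Here's a minimal sketch of such a row filter
expression—the table and column names are hypothetical, for illustration only.

DAX

-- Hypothetical role filter expression on a 'User' identity table.
-- The UPN column values must match the unique name (UPN) of the
-- account used to sign in to Power BI.
'User'[UPN] = USERPRINCIPALNAME()
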
Scale-out
Azure Analysis Services scale-out is supported by Power BI Premium. For more
information, see Power BI Dataset Scale Out.

Migration feature
The Microsoft Azure Analysis Services to Microsoft Power BI Premium migration feature
in Power BI migrates an AAS database to a dataset in a Power BI Premium, Power BI
Premium Per User, or Power BI Embedded workspace. For more information, see Migrate
Azure Analysis Services to Power BI.

Next steps
For more information about this article, check out the following resources:

Migrate from Azure Analysis Services to Power BI Premium: Migration scenarios


Migrate Azure Analysis Services to Power BI
Questions? Try asking the Power BI community
Suggestions? Contribute ideas to improve Power BI

Power BI partners are available to help your organization succeed with the migration
process. To engage a Power BI partner, visit the Power BI partner portal .
Migrate from Azure Analysis Services to
Power BI Premium: Migration scenarios
Article • 02/27/2023

This article compares six hypothetical scenarios when migrating from Azure Analysis
Services (AAS) to Power BI Premium. These scenarios can help you to determine the
right type and number of licenses for your business requirements and circumstances.

7 Note

An attempt has been made to ensure these scenarios are representative of real
customer migrations; however, individual customer scenarios will of course differ.
Also, this article doesn't include pricing details. You can find current pricing here:

Power BI pricing
Azure Analysis Services pricing

When comparing Power BI Premium to AAS costs, be sure to consider factors beyond
price per core. Power BI provides a reduced cost of ownership and business value, with
many features that are only available to Power BI data models.

Also, assuming you already use Power BI in your organization, calculate costs based on
the existing profile that combines AAS and Power BI. Compare the existing profile with
the target profile on Power BI Premium. To determine the target profile, be sure to
consider the following points:

Region requirements.
The largest AAS data model size in each region.
The number of users in each region.
The number of users required to develop and manage content.
CPU consumption across AAS and Power BI Premium.

) Important

CPU consumption across AAS and Power BI Premium may vary significantly due to
numerous factors. Factors can include the use of other workloads on the same
capacities, refresh patterns, and query patterns. We recommend that you
perform an in-depth analysis to quantify comparative CPU consumption across AAS
and Power BI Premium for migrated models.
Migration scenario 1
In the first migration scenario, the customer uses Power BI Premium for the frontend
and AAS for the backend. There are 20 developers who are each responsible for the
development and test environments, and for deployment to production.

Here are their current AAS licenses:

Environment Largest model AAS SKU

Production 60 GB S4

Production 30 GB S2

Production 15 GB S1

Test 5 GB B1

Development 1 GB D1

Here are their current Power BI licenses:

Environment Power BI license Users

Production Premium P2 5,000

Test/development Premium P1 20

Production/test/development Pro 20

Once migrated to Power BI Premium:

The three existing production AAS models can be consolidated to run in a Premium P3
capacity.
The 20 developers will need Premium Per User (PPU) licenses to access test models
above 1 GB in size.

Here are the proposed Power BI licenses:

Environment Power BI license Users Largest model

Production Premium P3 5,000 60 GB

Production/test/development PPU 20 5 GB

Migration scenario 2
In this migration scenario, the customer uses Power BI Premium for the frontend and
AAS for the backend. Production environments are running in different regions. There
are 20 developers who are each responsible for the development and test environments,
and for deployment to production.

Here are their current AAS licenses:

Region Environment Largest model AAS SKU

West Europe Production 60 GB S4

Brazil South Production 30 GB S2

West US Production 15 GB S1

West US Test 5 GB B1

West US Development 1 GB D1

Here are their current Power BI licenses:

Region Environment Power BI license Users

West Europe Production Premium P1 2,000

Brazil South Production Premium P1 2,000

West US Production Premium P1 2,000

West US Test/development Premium P1 20

West US Production/test/development Pro 20

Once migrated to Power BI Premium:

The customer needs a Premium capacity in each of the three regions (because the
three existing production AAS models run in different regions). Each capacity size
is based on the largest model.
The 20 developers will need PPU licenses to access test models above 1 GB in size.

Here are the proposed Power BI licenses:

Region Environment Power BI license Users Largest model

West Europe Production Premium P3 2,000 60 GB

Brazil South Production Premium P2 2,000 30 GB

West US Production Premium P1 2,000 15 GB

West US Production/test/development PPU 20 5 GB

Migration scenario 3
In this migration scenario, the customer has Power BI Pro licenses for all users available
with their Office 365 E5 subscription, and they use AAS for the backend. There are 15
developers who are each responsible for the development and test environments, and
for deployment to production.

Here are their current AAS licenses:

Environment Largest model AAS SKU

Production 35 GB S2

Production 30 GB S2

Test 5 GB B1

Development 1 GB D1

Here are their current Power BI licenses:

Environment Power BI license Users

Production Pro (as part of E5) 4,000

Production/test/development Pro (as part of E5) 15

Once migrated to Power BI Premium:

The two existing production AAS models can be consolidated to run in a Premium
P2 capacity.
The 15 developers will need PPU licenses to access test models above 1 GB in size.
(An add-on is available to step up from Pro to PPU.)

Here are the proposed Power BI licenses:

Environment Power BI license Users Largest model

Production Premium P2 4,000 35 GB

Production/test/development PPU 15 5 GB
Migration scenario 4
In this migration scenario, the customer has Power BI Pro licenses for all users, and they
use AAS for the backend. There are five developers who are each responsible for the
development and test environments, and for deployment to production.

Here are their current AAS licenses:

Environment Largest model AAS SKU

Production 35 GB S2

Production 10 GB S1

Test 5 GB B1

Development 1 GB D1

Here are their current Power BI licenses:

Environment Power BI license Users

Production Pro 350

Production/test/development Pro 5

Once migrated to Power BI Premium:

The two existing production AAS models can run in PPU workspaces.
All end users and developers will need PPU licenses.

Here are the proposed Power BI licenses:

Environment Power BI license Users Largest model

Production PPU 350 35 GB

Production/test/development PPU 5 5 GB

Migration scenario 5
In this migration scenario, the customer uses Power BI Premium for the frontend and
AAS for the backend. There are 25 developers who are each responsible for the
development and test environments, and for deployment to production.

Here are their current AAS licenses:


Environment Largest model AAS SKU

Production 220 GB S9

Production 150 GB S8

Production 60 GB S4

Test 5 GB B1

Development 1 GB D1

Here are their current Power BI licenses:

Environment Power BI license Users

Production Premium P3 7,500

Production Premium P2 4,500

Test/development Premium P1 25

Production/test/development Pro 25

Once migrated to Power BI Premium:

The three existing production AAS models can be consolidated to run in a Premium P5
capacity.
The 25 developers will need PPU licenses to access test models above 1 GB in size.

Here are the proposed Power BI licenses:

Environment Power BI license Users Largest model

Production Premium P5 12,000 220 GB

Production/test/development PPU 25 5 GB

Migration scenario 6
In this migration scenario, an ISV company has 400 customers. Each customer has its
own SQL Server Analysis Services (SSAS) multidimensional model (also known as a cube).
The analysis below compares Azure Analysis Services with the Power BI Embedded
alternative.

The 400 tenants are mainly accessed by 50 analysts from the ISV company as well
as two users (on average) from each customer.
The total size of the models is about 100 GB.

Here are their estimated AAS licenses:

Environment Largest model AAS SKU

Production 8 GB S4

Test 8 GB B1

Development 1 GB D1

Here are their current Power BI licenses:

Users Power BI license Users

Customers Pro 800

Analysts Pro 50

Developers Pro 20

Once migrated to Power BI Premium:

The P1/A4 SKU was chosen to allow for future model size growth (the EM3/A3 SKU can
also work).
The 50 analysts will need PPU licenses to access test models above 1 GB in size.
The total size of the 400 models isn't relevant for pricing; only the largest model
size is important.

Here are their proposed Power BI licenses:

Environment Power BI license Users Largest model

Production Premium P1 / Power BI Embedded A4 Not applicable 25 GB

Test/development Premium EM3 / Power BI Embedded A3 Not applicable 10 GB

Developers Pro 20 Not applicable

Production/test/development PPU 50 Not applicable

Premium migration benefits
Customers can realize many benefits when they migrate from AAS to Power BI Premium.

Customers can consolidate to a single platform, which removes the cost duplication of
paying for both AAS and Power BI Premium.
By using Premium for their entire BI stack, customers can unlock increased
performance and features. They only need Pro licenses for developers and admins,
but not for end users.
Customers can use Power BI Premium scalability to reduce their capacity
requirements, since memory is limited per dataset rather than counted against the
total for the server, as it is in AAS. For more information, see Memory allocation.
For development and test environments, customers can take advantage of PPU
licensing instead of having Premium capacities. PPU licenses provide users access
to Premium features like the XMLA endpoint, deployment pipelines, and Premium
dataflow features. Furthermore, they can work with models that are above 1 GB in size.

Next steps
For more information about this article, check out the following resources:

Migrate from Azure Analysis Services to Power BI Premium


What is Power BI Premium?
Power BI pricing
Azure Analysis Services pricing
Questions? Try asking the Power BI community
Suggestions? Contribute ideas to improve Power BI

Power BI partners are available to help your organization succeed with the migration
process. To engage a Power BI partner, visit the Power BI partner portal .
Appropriate use of error functions
Article • 09/20/2022

As a data modeler, when you write a DAX expression that might raise an evaluation-time
error, you can consider using two helpful DAX functions.

The ISERROR function, which takes a single expression and returns TRUE if that
expression results in an error.
The IFERROR function, which takes two expressions. Should the first expression
result in an error, the value for the second expression is returned. It is in fact a more
optimized implementation of nesting the ISERROR function inside an IF function.

However, while these functions can be helpful and can contribute to writing easy-to-
understand expressions, they can also significantly degrade the performance of
calculations. That's because these functions increase the number of storage engine
scans required.

Most evaluation-time errors are due to unexpected BLANKs or zero values, or invalid
data type conversion.

Recommendations
It's better to avoid using the ISERROR and IFERROR functions. Instead, apply defensive
strategies when developing the model and writing expressions. Strategies can include:

Ensuring quality data is loaded into the model: Use Power Query transformations
to remove or substitute invalid or missing values, and to set correct data types. A
Power Query transformation can also be used to filter rows when errors, like invalid
data conversion, occur.

Data quality can also be controlled by setting the model column Is Nullable
property to Off, which will fail the data refresh should BLANKs be encountered. If
this failure occurs, the data loaded by the last successful refresh will remain in the
tables.

Using the IF function: The IF function logical test expression can determine
whether an error result would occur (see the sketch after this list). Note that, like the
ISERROR and IFERROR functions, this function can result in additional storage engine
scans, but it will likely perform better than them as no error needs to be raised.

Using error-tolerant functions: Some DAX functions will test and compensate for
error conditions. These functions allow you to enter an alternate result that would
be returned instead. The DIVIDE function is one such example. For additional
guidance about this function, read the DAX: DIVIDE function vs divide operator (/)
article.
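Here's a minimal sketch of the IF function strategy, using the Profit and Sales measures
from the example that follows. The condition that would cause the error is tested
directly, so no error ever needs to be raised.

DAX

-- Test the condition that would cause the error, rather than testing
-- for the error itself. When Sales is zero (or BLANK), the logical test
-- is FALSE and the IF function returns BLANK.
Profit Margin =
IF([Sales] <> 0, [Profit] / [Sales])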

Example
The following measure expression tests whether an error would be raised. It returns
BLANK when an error would occur, because the IF function returns BLANK when you
don't provide it with a value-if-false expression.

DAX

Profit Margin =
IF(
    NOT ISERROR([Profit] / [Sales]),
    [Profit] / [Sales]
)

This next version of the measure expression has been improved by using the IFERROR
function in place of the IF and ISERROR functions.

DAX

Profit Margin
= IFERROR([Profit] / [Sales], BLANK())

However, this final version of the measure expression achieves the same outcome, yet
more efficiently and elegantly.

DAX

Profit Margin
= DIVIDE([Profit], [Sales])

See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Avoid converting BLANKs to values
Article • 09/20/2022

As a data modeler, when writing measure expressions you might come across cases
where a meaningful value can't be returned. In these instances, you may be tempted to
return a value—like zero—instead. It's suggested you carefully determine whether this
design is efficient and practical.

Consider the following measure definition that explicitly converts BLANK results to zero.

DAX

Sales (No Blank) =
IF(
ISBLANK([Sales]),
0,
[Sales]
)

Consider another measure definition that also converts BLANK results to zero.

DAX

Profit Margin =
DIVIDE([Profit], [Sales], 0)

The DIVIDE function divides the Profit measure by the Sales measure. Should the result
be zero or BLANK, the third argument—the alternate result (which is optional)—is
returned. In this example, because zero is passed as the alternate result, the measure is
guaranteed to always return a value.

These measure designs are inefficient and lead to poor report designs.

When they're added to a report visual, Power BI attempts to retrieve all groupings within
the filter context. The evaluation and retrieval of large query results often leads to slow
report rendering. Each example measure effectively turns a sparse calculation into a
dense one, forcing Power BI to use more memory than necessary.

Also, too many groupings often overwhelm your report users.

Let's see what happens when the Profit Margin measure is added to a table visual,
grouping by customer.
The table visual displays an overwhelming number of rows. (There are in fact 18,484
customers in the model, and so the table attempts to display all of them.) Notice that
the customers in view haven't achieved any sales. Yet, because the Profit Margin
measure always returns a value, they are displayed.

7 Note

When there are too many data points to display in a visual, Power BI may use data
reduction strategies to remove or summarize large query results. For more
information, see Data point limits and strategies by visual type.

Let's see what happens when the Profit Margin measure definition is improved. It now
returns a value only when the Sales measure isn't BLANK (or zero).

DAX

Profit Margin =
DIVIDE([Profit], [Sales])

The table visual now displays only customers who have made sales within the current
filter context. The improved measure results in a more efficient and practical experience
for your report users.
 Tip

When necessary, you can configure a visual to display all groupings (that return
values or BLANK) within the filter context by enabling the Show Items With No
Data option.

Recommendation
It's recommended that your measures return BLANK when a meaningful value cannot be
returned.

This design approach is efficient, allowing Power BI to render reports faster. Also,
returning BLANK is better because report visuals—by default—eliminate groupings
when summarizations are BLANK.

See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Avoid using FILTER as a filter argument
Article • 09/20/2022

As a data modeler, it's common you'll write DAX expressions that need to be evaluated
in a modified filter context. For example, you can write a measure definition to calculate
sales for "high margin products". We'll describe this calculation later in this article.

7 Note

This article is especially relevant for model calculations that apply filters to Import
tables.

The CALCULATE and CALCULATETABLE DAX functions are important and useful
functions. They let you write calculations that remove or add filters, or modify
relationship paths. It's done by passing in filter arguments, which are either Boolean
expressions, table expressions, or special filter functions. We'll only discuss Boolean and
table expressions in this article.

Consider the following measure definition, which calculates red product sales by using a
table expression. It will replace any filters that might be applied to the Product table.

DAX

Red Sales =
CALCULATE(
[Sales],
FILTER('Product', 'Product'[Color] = "Red")
)

The CALCULATE function accepts a table expression returned by the FILTER DAX
function, which evaluates its filter expression for each row of the Product table. It
achieves the correct result—the sales result for red products. However, it could be
achieved much more efficiently by using a Boolean expression.

Here's an improved measure definition, which uses a Boolean expression instead of the
table expression. The KEEPFILTERS DAX function ensures any existing filters applied to
the Color column are preserved, and not overwritten.

DAX

Red Sales =
CALCULATE(
[Sales],
KEEPFILTERS('Product'[Color] = "Red")
)

It's recommended you pass filter arguments as Boolean expressions, whenever possible.
It's because Import model tables are in-memory column stores. They are explicitly
optimized to efficiently filter columns in this way.

There are, however, restrictions that apply to Boolean expressions when they're used as
filter arguments. They:

Cannot reference columns from multiple tables


Cannot reference a measure
Cannot use nested CALCULATE functions
Cannot use functions that scan or return a table

It means that you'll need to use table expressions for more complex filter requirements.

Consider now a different measure definition. The requirement is to calculate sales, but
only for months that have achieved a profit.

DAX

Sales for Profitable Months =
CALCULATE(
    [Sales],
    FILTER(
        VALUES('Date'[Month]),
        [Profit] > 0
    )
)

In this example, the FILTER function must be used. It's because it requires evaluating the
Profit measure to eliminate those months that didn't achieve a profit. It's not possible to
use a measure in a Boolean expression when it's used as a filter argument.

Recommendations
For best performance, it's recommended you use Boolean expressions as filter
arguments, whenever possible.

Therefore, the FILTER function should only be used when necessary. You can use it to
perform complex column comparisons. These column comparisons can involve:

Measures
Other columns
Using the OR DAX function, or the OR logical operator (||)

See also
Filter functions (DAX)
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Column and measure references
Article • 09/20/2022

As a data modeler, your DAX expressions will refer to model columns and measures.
Columns and measures are always associated with model tables, but these associations
are different, so we have different recommendations on how you'll reference them in
your expressions.

Columns
A column is a table-level object, and column names must be unique within a table. So
it's possible that the same column name is used multiple times in your model—
providing they belong to different tables. There's one more rule: a column name cannot
have the same name as a measure name or hierarchy name that exists in the same table.

In general, DAX will not force using a fully qualified reference to a column. A fully
qualified reference means that the table name precedes the column name.

Here's an example of a calculated column definition using only column name references.
The Sales and Cost columns both belong to a table named Orders.

DAX

Profit = [Sales] - [Cost]

The same definition can be rewritten with fully qualified column references.

DAX

Profit = Orders[Sales] - Orders[Cost]

Sometimes, however, you'll be required to use fully qualified column references when
Power BI detects ambiguity. When entering a formula, a red squiggly and error message
will alert you. Also, some DAX functions like the LOOKUPVALUE DAX function, require
the use of fully qualified columns.

It's recommended you always fully qualify your column references. The reasons are
provided in the Recommendations section.

Measures
A measure is a model-level object. For this reason, measure names must be unique
within the model. However, in the Fields pane, report authors will see each measure
associated with a single model table. This association is set for cosmetic reasons, and
you can configure it by setting the Home Table property for the measure. For more
information, see Measures in Power BI Desktop (Organizing your measures).

It's possible to use a fully qualified measure in your expressions. DAX intellisense will
even offer the suggestion. However, it isn't necessary, and it's not a recommended
practice. If you change the home table for a measure, any expression that uses a fully
qualified measure reference to it will break. You'll then need to edit each broken formula
to remove (or update) the measure reference.

It's recommended you never qualify your measure references. The reasons are provided
in the Recommendations section.

Recommendations
Our recommendations are simple and easy to remember:

Always use fully qualified column references


Never use fully qualified measure references

Here's why:

Formula entry: Expressions will be accepted, as there won't be any ambiguous
references to resolve. Also, you'll meet the requirement for those DAX functions
that require fully qualified column references.
Robustness: Expressions will continue to work, even when you change a measure
home table property.
Readability: Expressions will be quick and easy to understand—you'll quickly
determine that it's a column or measure, based on whether it's fully qualified or
not.
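To illustrate both recommendations, here's a sketch of two hypothetical measures based
on the Orders table used earlier in this article. The column references are fully
qualified, while the measure reference isn't.

DAX

-- Column references are fully qualified with the table name.
Total Sales = SUM(Orders[Sales])

-- The measure reference isn't qualified with a home table name.
Average Sale Amount = DIVIDE([Total Sales], COUNTROWS(Orders))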

See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

DIVIDE function vs. divide operator (/)
Article • 09/20/2022

As a data modeler, when you write a DAX expression to divide a numerator by a
denominator, you can choose to use the DIVIDE function or the divide operator (/ -
forward slash).

When using the DIVIDE function, you must pass in numerator and denominator
expressions. Optionally, you can pass in a value that represents an alternate result.

DAX

DIVIDE(<numerator>, <denominator> [,<alternateresult>])

The DIVIDE function was designed to automatically handle division by zero cases. If an
alternate result is not passed in, and the denominator is zero or BLANK, the function
returns BLANK. When an alternate result is passed in, it's returned instead of BLANK.

The DIVIDE function is convenient because it saves your expression from having to first
test the denominator value. The function is also better optimized for testing the
denominator value than the IF function. The performance gain is significant since
checking for division by zero is expensive. Further, using DIVIDE results in a more
concise and elegant expression.

Example
The following measure expression produces a safe division, but it involves using four
DAX functions.

DAX

Profit Margin =
IF(
OR(
ISBLANK([Sales]),
[Sales] == 0
),
BLANK(),
[Profit] / [Sales]
)

This measure expression achieves the same outcome, yet more efficiently and elegantly.
DAX

Profit Margin =
DIVIDE([Profit], [Sales])

Recommendations
It's recommended that you use the DIVIDE function whenever the denominator is an
expression that could return zero or BLANK.

In the case that the denominator is a constant value, we recommend that you use the
divide operator. In this case, the division is guaranteed to succeed, and your expression
will perform better because it will avoid unnecessary testing.
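For example, here's a sketch of a hypothetical measure that divides by a constant. The
division can't fail, so the divide operator is the appropriate choice.

DAX

-- The denominator is a constant, so no testing is required.
Average Monthly Sales = [Sales] / 12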

Carefully consider whether the DIVIDE function should return an alternate value. For
measures, it's usually a better design that they return BLANK. Returning BLANK is better
because report visuals—by default—eliminate groupings when summarizations are
BLANK. It allows the visual to focus attention on groups where data exists. When
necessary, in Power BI, you can configure the visual to display all groups (that return
values or BLANK) within the filter context by enabling the Show items with no data
option.

See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Use COUNTROWS instead of COUNT
Article • 09/20/2022

As a data modeler, sometimes you might need to write a DAX expression that counts
table rows. The table could be a model table or an expression that returns a table.

Your requirement can be achieved in two ways. You can use the COUNT function to
count column values, or you can use the COUNTROWS function to count table rows.
Both functions will achieve the same result, providing that the counted column contains
no BLANKs.

The following measure definition presents an example. It calculates the number of
OrderDate column values.

DAX

Sales Orders =
COUNT(Sales[OrderDate])

Providing that the granularity of the Sales table is one row per sales order, and the
OrderDate column does not contain BLANKs, then the measure will return a correct
result.

However, the following measure definition is a better solution.

DAX

Sales Orders =
COUNTROWS(Sales)

There are three reasons why the second measure definition is better:

It's more efficient, and so it will perform better.
It doesn't consider BLANKs contained in any column of the table.
The intention of the formula is clearer, to the point of being self-describing.

Recommendation
When it's your intention to count table rows, it's recommended you always use the
COUNTROWS function.
See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Use SELECTEDVALUE instead of VALUES
Article • 09/20/2022

As a data modeler, sometimes you might need to write a DAX expression that tests
whether a column is filtered by a specific value.

In earlier versions of DAX, this requirement was safely achieved by using a pattern
involving three DAX functions; IF, HASONEVALUE and VALUES. The following measure
definition presents an example. It calculates the sales tax amount, but only for sales
made to Australian customers.

DAX

Australian Sales Tax =


IF(
HASONEVALUE(Customer[Country-Region]),
IF(
VALUES(Customer[Country-Region]) = "Australia",
[Sales] * 0.10
)
)

In the example, the HASONEVALUE function returns TRUE only when a single value of
the Country-Region column is visible in the current filter context. When it's TRUE, the
VALUES function is compared to the literal text "Australia". When the VALUES function
returns TRUE, the Sales measure is multiplied by 0.10 (representing 10%). If the
HASONEVALUE function returns FALSE—because more than one value filters the column
—the first IF function returns BLANK.

The use of the HASONEVALUE function is a defensive technique. It's required because it's
possible that multiple values filter the Country-Region column. In this case, the VALUES
function returns a table of multiple rows. Comparing a table of multiple rows to a scalar
value results in an error.

Recommendation
It's recommended that you use the SELECTEDVALUE function. It achieves the same
outcome as the pattern described in this article, yet more efficiently and elegantly.

Using the SELECTEDVALUE function, the example measure definition is now rewritten.

DAX
Australian Sales Tax =
IF(
SELECTEDVALUE(Customer[Country-Region]) = "Australia",
[Sales] * 0.10
)

 Tip

It's possible to pass an alternate result value into the SELECTEDVALUE function. The
alternate result value is returned when either no filters—or multiple filters—are
applied to the column.
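For example, here's a sketch of a hypothetical measure that passes an alternate result. It
returns the single filtered country/region, or the text "Multiple countries" when no
single value filters the column.

DAX

-- Hypothetical measure: the second argument is the alternate result,
-- returned when either no filters—or multiple filters—apply to the column.
Selected Country-Region =
SELECTEDVALUE(Customer[Country-Region], "Multiple countries")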

See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Use variables to improve your DAX
formulas
Article • 10/31/2022

As a data modeler, writing and debugging some DAX calculations can be challenging.
It's common for complex calculation requirements to involve writing compound or
complex expressions. Compound expressions can involve the use of many nested
functions, and possibly the reuse of expression logic.

Using variables in your DAX formulas can help you write more complex and efficient
calculations. Variables can improve performance, reliability, and readability, and they can
reduce complexity.

In this article, we'll demonstrate the first three benefits by using an example measure for
year-over-year (YoY) sales growth. (The formula for YoY sales growth is period sales,
minus sales for the same period last year, divided by sales for the same period last year.)

Let's start with the following measure definition.

DAX

Sales YoY Growth % =
DIVIDE(
    ([Sales] - CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))),
    CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))
)

The measure produces the correct result, yet let's now see how it can be improved.

Improve performance
Notice that the formula repeats the expression that calculates "same period last year".
This formula is inefficient, as it requires Power BI to evaluate the same expression twice.
The measure definition can be made more efficient by using a variable, VAR.

The following measure definition represents an improvement. It uses an expression to
assign the "same period last year" result to a variable named SalesPriorYear. The
variable is then used twice in the RETURN expression.

DAX
Sales YoY Growth % =
VAR SalesPriorYear =
CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))
RETURN
DIVIDE(([Sales] - SalesPriorYear), SalesPriorYear)

The measure continues to produce the correct result, and does so in about half the
query time.

Improve readability
In the previous measure definition, notice how the choice of variable name makes the
RETURN expression simpler to understand. The expression is short and self-describing.

Simplify debugging
Variables can also help you debug a formula. To test an expression assigned to a
variable, you temporarily rewrite the RETURN expression to output the variable.

The following measure definition returns only the SalesPriorYear variable. Notice how it
comments-out the intended RETURN expression. This technique allows you to easily
revert it back once your debugging is complete.

DAX

Sales YoY Growth % =
VAR SalesPriorYear =
CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))
RETURN
--DIVIDE(([Sales] - SalesPriorYear), SalesPriorYear)
SalesPriorYear

Reduce complexity
In earlier versions of DAX, variables were not yet supported. Complex expressions that
introduced new filter contexts were required to use the EARLIER or EARLIEST DAX
functions to reference outer filter contexts. Unfortunately, data modelers found these
functions difficult to understand and use.

Variables are always evaluated outside the filters your RETURN expression applies. For
this reason, when you use a variable within a modified filter context, it achieves the
same result as the EARLIEST function. The use of the EARLIER or EARLIEST functions can
therefore be avoided. It means you can now write formulas that are less complex, and
that are easier to understand.

Consider the following calculated column definition added to the Subcategory table. It
evaluates a rank for each product subcategory based on the Subcategory Sales column
values.

DAX

Subcategory Sales Rank =
COUNTROWS(
    FILTER(
        Subcategory,
        EARLIER(Subcategory[Subcategory Sales]) < Subcategory[Subcategory Sales]
    )
) + 1

The EARLIER function is used to refer to the Subcategory Sales column value in the
current row context.

The calculated column definition can be improved by using a variable instead of the
EARLIER function. The CurrentSubcategorySales variable stores the Subcategory Sales
column value in the current row context, and the RETURN expression uses it within a
modified filter context.

DAX

Subcategory Sales Rank =
VAR CurrentSubcategorySales = Subcategory[Subcategory Sales]
RETURN
COUNTROWS(
FILTER(
Subcategory,
CurrentSubcategorySales < Subcategory[Subcategory Sales]
)
) + 1

See also
VAR DAX article
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
DAX sample model
Article • 09/20/2022

The Adventure Works DW 2020 Power BI Desktop sample model is designed to support
your DAX learning. The model is based on the Adventure Works data warehouse sample
for AdventureWorksDW2017—however, the data has been modified to suit the
objectives of the sample model.

The sample model does not contain any DAX formulas. It does, however, support
hundreds or even thousands of potential calculation formulas and queries. Some
function examples, like those in CALCULATE, DATESBETWEEN, DATESINPERIOD, IF, and
LOOKUPVALUE can be added to the sample model without modification. We're working
on including more examples in other function reference articles that work with the
sample model.

Scenario

The Adventure Works company represents a bicycle manufacturer that sells bicycles and
accessories to global markets. The company has their data warehouse data stored in an
Azure SQL Database.

Model structure
The model has seven tables:

Table Description

Customer Describes customers and their geographic location. Customers purchase products
online (Internet sales).

Date There are three relationships between the Date and Sales tables, for order date, ship
date, and due date. The order date relationship is active. The company reports sales
using a fiscal year that commences on July 1 of each year. The table is marked as a
date table using the Date column.

Product Stores finished products only.

Reseller Describes resellers and their geographic location. Resellers on-sell products to their
customers.

Sales Stores rows at sales order line grain. All financial values are in US dollars (USD). The
earliest order date is July 1, 2017, and the latest order date is June 15, 2020.

Sales Order Describes sales order and order line numbers, and also the sales channel, which is
either Reseller or Internet. This table has a one-to-one relationship with the Sales table.

Sales Territory Sales territories are organized into groups (North America, Europe, and Pacific),
countries, and regions. Only the United States sells products at the region level.

Download sample
Download the Power BI Desktop sample model file here .

See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Separate reports from models in Power
BI Desktop
Article • 02/27/2023

When creating a new Power BI Desktop solution, one of the first tasks you need to do is
"get data". Getting data can result in two distinctly different outcomes. It could:

Create a Live Connection to an already-published model, which could be a Power BI
dataset or a remote-hosted Analysis Services model.
Commence the development of a new model, which could be either an Import,
DirectQuery, or Composite model.

This article is concerned with the second scenario. It provides guidance on whether a
report and model should be combined into a single Power BI Desktop file.

Single file solution


A single file solution works well when there's only ever a single report based on the
model. In this case, it's likely that both the model and report are the efforts of the same
person. We define it as a Personal BI solution, though the report could be shared with
others. Such solutions can represent role-scoped reports or one-time assessments of a
business challenge—often described as ad hoc reports.

Separate report files


It makes sense to separate model and report development into separate Power BI
Desktop files when:
Data modelers and report authors are different people.
It's understood that a model will be the source for multiple reports, now or in the
future.

Data modelers can still use the Power BI Desktop report authoring experience to test
and validate their model designs. However, just after publishing their file to the Power BI
service they should remove the report from the workspace. And, they must remember to
remove the report each time they republish and overwrite the dataset.

Preserve the model interface


Sometimes, model changes are inevitable. Data modelers must take care, then, not to
break the model interface. If they do, it's possible that related report visuals or
dashboard tiles will break. Broken visuals appear as errors, and they can result in
frustration for report authors and consumers. And worse—they can reduce trust in the
data.

So, manage model changes carefully. If possible, avoid the following changes:

Renaming tables, columns, hierarchies, hierarchy levels, or measures.


Modifying column data types.
Modifying measure expressions so they return a different data type.
Moving measures to a different home table. It's because moving a measure could
break report-scoped measures that fully qualify measures with their home table
name. We don't recommend you write DAX expressions using fully qualified
measure names. For more information, see DAX: Column and measure references.
Adding new tables, columns, hierarchies, hierarchy levels, or measures is safe, with one
exception: It's possible that a new measure name could collide with a report-scoped
measure name. To avoid collision, we recommend report authors adopt a naming
convention when defining measures in their reports. They can prefix report-scoped
measure names with an underscore or some other character(s).

If you must make breaking changes to your models, we recommend you either:

View related content for the dataset in the Power BI service.


Explore Data lineage view in the Power BI service.

Both options allow you to quickly identify any related reports and dashboards. Data
lineage view is probably the better choice because it's easy to see the contact person for
each related item. In fact, it's a hyperlink that opens an email message addressed to the
contact.

We recommend you contact the owner of each related item to let them know of any
planned breaking changes. This way, they can be prepared and ready to fix and
republish their reports, helping to minimize downtime and frustration.

Next steps
For more information related to this article, check out the following resources:

Connect to datasets in the Power BI service from Power BI Desktop


View related content in the Power BI service
Data lineage
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Extend visuals with report page tooltips
Article • 03/14/2023

This article targets you as a report author designing Power BI reports. It provides
suggestions and recommendations when creating report page tooltips.

Suggestions
Report page tooltips can enhance the experience for your report users. Page tooltips
allow your report users to quickly and efficiently gain deeper insights from a visual. They
can be associated with different report objects:

Visuals: On a visual-by-visual basis, you can configure which visuals will reveal your
page tooltip. Per visual, it's possible to have the visual reveal no tooltip, default to
the visual tooltips (configured in the visual fields pane), or use a specific page
tooltip.
Visual headers: You can configure specific visuals to display a page tooltip. Your
report users can reveal the page tooltip when they hover their cursor over the
visual header icon—be sure to educate your users about this icon.

7 Note

A report visual can only reveal a page tooltip when tooltip page filters are
compatible with the visual's design. For example, a visual that groups by product is
compatible with a tooltip page that filters by product.

Page tooltips don't support interactivity. If you want your report users to interact,
create a drillthrough page instead.

Here are some suggested design scenarios:

Different perspective
Add detail
Add help

Different perspective
A page tooltip can visualize the same data as the source visual. It's done by using the
same visual and pivoting groups, or by using different visual types. Page tooltips can
also apply different filters than those filters applied to the source visual.
The following example shows what happens when the report user hovers their cursor
over the EnabledUsers value. The filter context for the value is Yammer in November
2018.

A page tooltip is revealed. It presents a different data visual (line and clustered column
chart) and applies a contrasting time filter. Notice that the filter context for the data
point is November 2018. Yet the page tooltip displays trend over a full year of months.

Add detail
A page tooltip can display additional details and add context.

The following example shows what happens when the report user hovers their cursor
over the Average of Violation Points value, for zip code 98022.

A page tooltip is revealed. It presents specific attributes and statistics for zip code
98022.

Add help
Visual headers can be configured to reveal page tooltips. You can add
help documentation to a page tooltip by using richly formatted text boxes. It's also
possible to add images and shapes.

Interestingly, buttons, images, text boxes, and shapes can also reveal a visual header
page tooltip.

The following example shows what happens when the report user hovers their cursor
over the visual header icon.

A page tooltip is revealed. It presents rich formatted text in four text boxes, and a shape
(line). The page tooltip conveys help by describing each acronym displayed in the visual.

Recommendations
At report design time, we recommend the following practices:

Page size: Configure your page tooltip to be small. You can use the built-in Tooltip
option (320 pixels wide, 240 pixels high). Or, you can set custom dimensions. Take
care not to use a page size that's too large—it can obscure the visuals on the
source page.
Page view: In report designer, set the page view to Actual Size (page view defaults
to Fit to Page). This way, you can see the true size of the page tooltip as you
design it.
Style: Consider designing your page tooltip to use the same theme and style as the
report. This way, users feel like they are in the same report. Or, design a
complementary style for your tooltips, and be sure to apply this style to all page
tooltips.
Tooltip filters: Assign filters to the page tooltip so that you can preview a realistic
result as you design it. Be sure to remove these filters before you publish your
report.
Page visibility: Always hide tooltip pages—users shouldn't navigate directly to
them.

Next steps
For more information related to this article, check out the following resources:

Create tooltips based on report pages in Power BI Desktop


Customizing tooltips in Power BI Desktop
Use visual elements to enhance Power BI reports
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Use report page drillthrough
Article • 02/27/2023

This article targets you as a report author who designs Power BI reports. It provides
suggestions and recommendations when creating report page drillthrough.

It's recommended that you design your report to allow report users to achieve the
following flow:

1. View a report page.


2. Identify a visual element to analyze more deeply.
3. Right-click the visual element to drill through.
4. Perform complimentary analysis.
5. Return to the source report page.

Suggestions
We suggest that you consider two types of drillthrough scenarios:

Additional depth
Broader perspective

Additional depth
When your report page displays summarized results, a drillthrough page can lead report
users to transaction-level details. This design approach allows them to view supporting
transactions, and only when needed.

The following example shows what happens when a report user drills through from a
monthly sales summary. The drillthrough page contains a detailed list of orders for a
specific month.
Broader perspective
A drillthrough page can achieve the opposite of additional depth. This scenario is great
for drilling through to a holistic view.

The following example shows what happens when a report user drills through from a zip
code. The drillthrough page displays general information about that zip code.
Recommendations
At report design time, we recommend the following practices:

Style: Consider designing your drillthrough page to use the same theme and style
as the report. This way, users feel like they are in the same report.
Drillthrough filters: Set drillthrough filters so you can preview a realistic result as
you design the drillthrough page. Be sure to remove these filters before you
publish the report.
Additional capabilities: A drillthrough page is like any report page. You can even
enhance it with additional interactive capabilities, including slicers or filters.
Blanks: Avoid adding visuals that could display BLANK, or produce errors when
drillthrough filters are applied.
Page visibility: Consider hiding drillthrough pages. If you decide to keep a
drillthrough page visible, be sure to add a button that allows users to clear any
previously-set drillthrough filters. Assign a bookmark to the button. The bookmark
should be configured to remove all filters.
Back button: A back button is added automatically when you assign a drillthrough
filter. It's a good idea to keep it. This way, your report users can easily return to the
source page.
Discovery: Help promote awareness of a drillthrough page by setting visual header
icon text, or adding instructions to a text box. You can also design an overlay, as
described in this blog post .

 Tip

It's also possible to configure drillthrough to your Power BI paginated reports. You
can do this by adding links to Power BI reports. Links can define URL parameters.

Next steps
For more information related to this article, check out the following resources:

Use drillthrough in Power BI Desktop


Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
When to use paginated reports in Power
BI
Article • 02/27/2023

This article targets you as a report author who designs reports for Power BI. It provides
suggestions to help you choose when to develop Power BI paginated reports.

Power BI paginated reports are optimized for printing, or PDF generation. They also
provide you with the ability to produce highly formatted, pixel-perfect layouts. So,
paginated reports are ideal for operational reports, like sales invoices.

In contrast, Power BI reports are optimized for exploration and interactivity. Also, they
can present your data using a comprehensive range of ultra-modern visuals. Power BI
reports, therefore, are ideal for analytic reports, enabling your report users to explore
data, and to discover relationships and patterns.

We recommend you consider using a Power BI paginated report when:

You know the report must be printed, or output as a PDF document.


Data grid layouts could expand and overflow. Consider that a table, or matrix, in a
Power BI report can't dynamically resize to display all data—it will provide scroll
bars instead. But, if printed, it won't be possible to scroll to reveal any out-of-view
data.
Power BI paginated features and capabilities work in your favor. Many such report
scenarios are described later in this article.

Legacy reports
When you already have SQL Server Reporting Services (SSRS) Report Definition
Language (RDL) reports, you can choose to redevelop them as Power BI reports, or
migrate them as paginated reports to Power BI. For more information, see Migrate SQL
Server Reporting Services reports to Power BI.

Once published to a Power BI workspace, paginated reports are available side by side
with Power BI reports. They can then be easily distributed using Power BI apps.

You might consider redeveloping SSRS reports, rather than migrating them. It's
especially true for those reports that are intended to deliver analytic experiences. In
these cases, Power BI reports will likely deliver better report user experiences.
Paginated report scenarios
There are many compelling scenarios when you might favor developing a Power BI
paginated report. Many are features or capabilities not supported by Power BI reports.

Print-ready: Paginated reports are optimized for printing, or PDF generation. When
necessary, data regions can expand and overflow to multiple pages in a controlled
way. Your report layouts can define margins, and page headers and footers.
Render formats: Power BI can render paginated reports in different formats.
Formats include Microsoft Excel, Microsoft Word, Microsoft PowerPoint, PDF, CSV,
XML, and MHTML. (The MHTML format is used by the Power BI service to render
reports.) Your report users can decide to export in the format that suits them.
Precision layout: You can design highly formatted, pixel-perfect layouts—to the
exact size and location configured in fractions of inches, or centimeters.
Dynamic layout: You can produce highly responsive layouts by setting many report
properties to use VB.NET expressions. Expressions have access to many core .NET
Framework libraries.
Render-specific layout: You can use expressions to modify the report layout based
on the rendering format applied. For example, you can design the report to disable
toggling visibility (to drill down and drill up) when it's rendered using a non-
interactive format, like PDF.
Native queries: You don't need to first develop a Power BI dataset. It's possible to
author native queries (or use stored procedures) for any supported data source.
Queries can include parameterization.
Graphic query designers: Power BI Report Builder includes graphic query
designers to help you write, and test, your dataset queries.
Static datasets: You can define a dataset, and enter data directly into your report
definition. This capability is especially useful to support a demo, or for delivering a
proof of concept (POC).
Data integration: You can combine data from different data sources, or with static
datasets. It's done by creating custom fields using VB.NET expressions.
Parameterization: You can design highly customized parameterization experiences,
including data-driven, and cascading parameters. It's also possible to define
parameter defaults. These experiences can be designed to help your report users
quickly set appropriate filters. Also, parameters don't need to filter report data;
they can be used to support "what-if" scenarios, or dynamic filtering or styling.
Image data: Your report can render images when they're stored in binary format in
a data source.
Custom code: You can develop code blocks of VB.NET functions in your report,
and use them in any report expression.
Subreports: You can embed other Power BI paginated reports (from the same
workspace) into your report.
Flexible data grids: You have fine-grained control of grid layouts by using the
tablix data region. It supports complex layouts, too, including nested and adjacent
groups. And, it can be configured to repeat headings when printed over multiple
pages. Also, it can embed a subreport or other visualizations, including data bars,
sparklines, and indicators.
Spatial data types: The map data region can visualize SQL Server spatial data
types. So, the GEOGRAPHY and GEOMETRY data types can be used to visualize
points, lines, or polygons. It's also possible to visualize polygons defined in ESRI
shape files.
Modern gauges: Radial and linear gauges can be used to display KPI values and
status. They can even be embedded into grid data regions, repeating within
groups.
HTML rendering: You can display richly formatted text when it's stored as HTML.
Mail merge: You can use text box placeholders to inject data values into text. This
way, you can produce a mail merge report.
Interactivity features: Interactive features include toggling visibility (to drill down
and drill up), links, interactive sorting, and tooltips. You can also add links that
drillthrough to Power BI reports, or other Power BI paginated reports. Links can
even jump to another location within the same report.
Subscriptions: Power BI can deliver paginated reports on a schedule as emails, with
report attachments in any supported format.
Per-user layouts: You can create responsive report layouts based on the
authenticated user who opens the report. You can design the report to filter data
differently, hide data regions or visualizations, apply different formats, or set user-
specific parameter defaults.

Next steps
For more information related to this article, check out the following resources:

What are paginated reports in Power BI?


Migrate SQL Server Reporting Services reports to Power BI
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Data retrieval guidance for paginated
reports
Article • 02/27/2023

This article targets you as a report author designing Power BI paginated reports. It
provides recommendations to help you design effective and efficient data retrieval.

Data source types


Paginated reports natively support both relational and analytic data sources. These
sources are further categorized as either cloud-based or on-premises. On-premises
data sources—whether hosted on-premises, or in a virtual machine—require a data
gateway so Power BI can connect. Cloud-based means that Power BI can connect
directly using an Internet connection.

If you can choose the data source type (possibly the case in a new project), we
recommend that you use cloud-based data sources. Paginated reports can connect with
lower network latency, especially when the data sources reside in the same region as
your Power BI tenant. Also, it's possible to connect to these sources by using Single
Sign-On (SSO). It means the report user's identity can flow to the data source, allowing
per-user row-level permissions to be enforced. For on-premises data sources, SSO is
currently supported only for SQL Server and Oracle (see Supported data sources for Power BI
paginated reports).

7 Note

When it's not possible to connect to your data source by using SSO,
you can still enforce row-level permissions. It's done by passing the UserID built-in
field to a dataset query parameter. The data source will need to store User Principal
Name (UPN) values in a way that it can correctly filter query results.

For example, consider that each salesperson is stored as a row in the Salesperson
table. The table has columns for the UPN and also the salesperson's sales region. At
query time, the table is filtered by the UPN of the report user, and it's also related
to sales facts using an inner join. This way, the query effectively filters sales fact
rows to those of the report user's sales region.
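
As a rough sketch, such a dataset query could look like the following. The Salesperson and Sales table names come from the example above, but the UPN, SalesRegion, SalesOrderNumber, and SalesAmount column names are assumptions for illustration only; the @UserID query parameter is mapped to the UserID built-in field.

SQL

SELECT
    [s].[SalesOrderNumber],
    [s].[SalesAmount]
FROM
    [Sales] AS [s]
    -- Assumed join column: both tables store the sales region
    INNER JOIN [Salesperson] AS [sp]
        ON [sp].[SalesRegion] = [s].[SalesRegion]
WHERE
    -- @UserID maps to the UserID built-in field (a UPN in the Power BI service)
    [sp].[UPN] = @UserID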

Relational data sources


Generally, relational data sources are well suited to operational style reports, like sales
invoices. They're also suited for reports that need to retrieve very large datasets (in
excess of 10,000 rows). Relational data sources can also define stored procedures, which
can be executed by report datasets. Stored procedures deliver several benefits:

Parameterization
Encapsulation of programming logic, allowing for more complex data preparation
(for example, temporary tables, cursors, or scalar user-defined functions)
Improved maintainability, allowing stored procedure logic to be easily updated. In
some cases, it can be done without the need to modify and republish paginated
reports (providing column names and data types remain unchanged).
Better performance, as their execution plans are cached for reuse
Reuse of stored procedures across multiple reports
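
For example, a report dataset could execute a parameterized stored procedure along the lines of the following sketch. The Sales table and its column names are assumptions for illustration; substitute the objects in your own database.

SQL

CREATE PROCEDURE [dbo].[GetSalesOrders]
    @OrderDateStart date,
    @OrderDateEnd   date
AS
BEGIN
    SET NOCOUNT ON;

    -- Return only the sales order rows for the requested date range
    SELECT
        [SalesOrderNumber],
        [OrderDate],
        [ResellerCode],
        [SalesAmount]
    FROM
        [Sales]
    WHERE
        [OrderDate] >= @OrderDateStart
        AND [OrderDate] < DATEADD(DAY, 1, @OrderDateEnd);
END

The report dataset then simply executes the stored procedure, with the report parameters mapped to the @OrderDateStart and @OrderDateEnd query parameters.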

In Power BI Report Builder, you can use the relational query designer to graphically
construct a query statement—but only for Microsoft data sources.

Analytic data sources


Analytic data sources are well suited to both operational and analytic reports, and can
deliver fast summarized query results even over very large data volumes. Model
measures and KPIs can encapsulate complex business rules to achieve summarization of
data. These data sources, however, are not suited to reports that need to retrieve very
large datasets (in excess of 10,000 rows).

In Power BI Report Builder, you have a choice of two query designers: The Analysis
Services DAX query designer, and the Analysis Services MDX query designer. These
designers can be used for Power BI dataset data sources, or any SQL Server Analysis
Services or Azure Analysis Services model—tabular or multidimensional.

We suggest you use the DAX query designer—providing it entirely meets your query
needs. If the model doesn't define the measures you need, you'll need to switch to
query mode. In this mode, you can customize the query statement by adding
expressions (to achieve summarization).

The MDX query designer requires your model to include measures. The designer has
two capabilities not supported by the DAX query designer. Specifically, it allows you to:

Define query-level calculated members (in MDX).


Configure data regions to request server aggregates in non-detail groups. If your
report needs to present summaries of semi- or non-additive measures (like time
intelligence calculations, or distinct counts), it will likely be more efficient to use
server aggregates than to retrieve low-level detail rows and have the report
compute summarizations.

Query result size


In general, it's best practice to retrieve only the data required by your report. So, don't
retrieve columns or rows that aren't required by the report.

To limit rows, you should always apply the most restrictive filters, and define aggregate
queries. Aggregate queries group and summarize source data to retrieve higher-grain
results. For example, consider that your report needs to present a summary of
salesperson sales. Instead of retrieving all sales order rows, create a dataset query that
groups by salesperson and summarizes sales for each group.
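
As a sketch, such an aggregate query might look like the following. The Sales table and its SalespersonCode and SalesAmount columns are hypothetical names used for illustration.

SQL

SELECT
    [SalespersonCode],
    SUM([SalesAmount]) AS [SalesAmount]
FROM
    [Sales]
GROUP BY
    [SalespersonCode]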

Expression-based fields
It's possible to extend a report dataset with fields based on expressions. For example, if
your dataset sources customer first name and last name, you might want a field that
concatenates the two fields to produce the customer full name. To achieve this
calculation, you have two options. You can:

Create a calculated field, which is a dataset field based on an expression.


Inject an expression directly into the dataset query (using the native language of
your data source), which results in a regular dataset field.

We recommend the latter option, whenever possible. There are two good reasons why
injecting expressions directly into your dataset query is better:

It's possible your data source is optimized to evaluate the expression more
efficiently than Power BI (it's especially the case for relational databases).
Report performance is improved because there's no need for Power BI to
materialize calculated fields prior to report rendering. Calculated fields can
noticeably extend report render time when datasets retrieve a large number of
rows.
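
For example, rather than creating a calculated field for the customer full name, you could concatenate the columns in the dataset query itself. This sketch assumes a hypothetical Customer table and column names.

SQL

SELECT
    [CustomerCode],
    -- The concatenation is evaluated by the data source, not by Power BI
    [FirstName] + N' ' + [LastName] AS [CustomerFullName]
FROM
    [Customer]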

Field names
When you create a dataset, its fields are automatically named after the query columns.
It's possible these names aren't friendly or intuitive. It's also possible that source query
column names contain characters prohibited in Report Definition Language (RDL) object
identifiers (like spaces and symbols). In this case, the prohibited characters are replaced
with an underscore character (_).

We recommend that you first verify that all field names are friendly, concise, yet still
meaningful. If not, we suggest you rename them before you commence the report
layout. That's because renaming a field doesn't ripple the change through to expressions
already used in your report layout. If you do decide to rename fields after you've commenced the
report layout, you'll need to find and update all broken expressions.

Filter vs parameter
It's likely that your paginated report designs will have report parameters. Report
parameters are commonly used to prompt your report user to filter the report. As a
paginated report author, you have two ways to achieve report filtering. You can map a
report parameter to:

A dataset filter, in which case the report parameter value(s) are used to filter the
data already retrieved by the dataset.
A dataset parameter, in which case the report parameter value(s) are injected into
the native query sent to the data source.

7 Note

All report datasets are cached on a per-session basis for up to 10 minutes beyond
their last use. A dataset can be re-used when submitting new parameter values
(filtering), rendering the report in a different format, or interacting with the report
design in some way, like toggling visibility, or sorting.

Consider, then, an example of a sales report that has a single report parameter to filter
the report by a single year. The dataset retrieves sales for all years. It does so because
the report parameter maps to the dataset filters. The report displays data for the
requested year, which is a subset of the dataset data. When the report user changes the
report parameter to a different year—and then views the report—Power BI doesn't need
to retrieve any source data. Instead, it applies a different filter to the already-cached
dataset. Once the dataset is cached, filtering can be very fast.

Now, consider a different report design. This time the report design maps the sales year
report parameter to a dataset parameter. This way, Power BI injects the year value into
the native query, and the dataset retrieves data only for that year. Each time the report
user changes the year report parameter value—and then views the report—the dataset
retrieves a new query result for just that year.
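
As a sketch, the dataset query for the parameterized design might look like the following, where the @SalesYear query parameter is mapped to the sales year report parameter. The Sales table and its column names are assumptions for illustration.

SQL

SELECT
    [SalesOrderNumber],
    [OrderDate],
    [SalesAmount]
FROM
    [Sales]
WHERE
    -- A sargable date range filter (DATEFROMPARTS requires SQL Server 2012 or later)
    [OrderDate] >= DATEFROMPARTS(@SalesYear, 1, 1)
    AND [OrderDate] < DATEFROMPARTS(@SalesYear + 1, 1, 1)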
Both design approaches can filter report data, and both designs can work well for your
report designs. An optimized design, however, will depend on the anticipated volumes
of data, data volatility, and the anticipated behaviors of your report users.

We recommend dataset filtering when you anticipate a different subset of the dataset
rows will be reused many times (thereby saving rendering time because new data
doesn't need to be retrieved). In this scenario, you recognize that the cost of retrieving a
larger dataset can be traded off against the number of times it will be reused. So, it's
helpful for queries that are time consuming to generate. But take care—caching large
datasets on a per-user basis may negatively impact on performance, and capacity
throughput.

We recommend dataset parameterization when you anticipate it's unlikely that a
different subset of dataset rows will be requested—or, when the number of dataset
rows to be filtered is likely to be very large (and inefficient to cache). This design
approach works well, too, when your data store is volatile. In this case, each report
parameter value change will result in the retrieval of up-to-date data.

Non-native data sources


If you need to develop paginated reports based on data sources that aren't natively
supported by paginated reports, you can first develop a Power BI Desktop data model.
This way, you can connect to over 100 Power BI data sources. Once published to the
Power BI service, you can then develop a paginated report that connects to the Power BI
dataset.

Data integration
If you need to combine data from multiple data sources, you have two options:

Combine report datasets: If the data sources are natively supported by paginated
reports, you can consider creating calculated fields that use the Lookup or
LookupSet Report Builder functions.
Develop a Power BI Desktop model: It's likely more efficient, however, that you
develop a data model in Power BI Desktop. You can use Power Query to combine
queries based on any supported data source. Once published to the Power BI
service, you can then develop a paginated report that connects to the Power BI
dataset.

Network latency
Network latency can impact report performance by increasing the time required for
requests to reach the Power BI service, and for responses to be delivered. Tenants in
Power BI are assigned to a specific region.

Tip

To determine where your tenant is located, see Where is my Power BI tenant located?

When users from a tenant access the Power BI service, their requests always route to this
region. As requests reach the Power BI service, the service may then send additional
requests—for example, to the underlying data source, or a data gateway—which are
also subject to network latency. In general, to minimize the impact of network latency,
strive to keep data sources, gateways, and your Power BI capacity as close as possible.
Preferably, they reside within the same region. If network latency is an issue, try locating
gateways and data sources closer to your Power BI capacity by placing them inside
cloud-hosted virtual machines.

SQL Server complex data types


Because SQL Server is an on-premises data source, Power BI must connect via a
gateway. The gateway, however, doesn't support retrieving data for complex data types.
Complex data types include built-in types like the GEOMETRY and GEOGRAPHY spatial
data types, and hierarchyid. They can also include user-defined types implemented
through a class of an assembly in the Microsoft.NET Framework common language
runtime (CLR).

The map visualization requires SQL Server spatial data to plot spatial data and analytics.
Therefore, it's not possible to work with the map visualization when on-premises SQL Server is
your data source. To be clear, it will work when your data source is Azure SQL Database,
because Power BI doesn't connect via a gateway.

Data-related images
Images can be used to add logos or pictures to your report layout. When images relate
to the rows retrieved by a report dataset, you have two options:

Retrieve the image data from your data source, when it's already stored in a table.
Use a dynamic expression to create the image URL path, when the images are stored
on a web server.

For more information and suggestions, see Image guidance for paginated reports.

Redundant data retrieval


It's possible your report retrieves redundant data. This can happen when you delete
dataset query fields, or the report has unused datasets. Avoid these situations, as they
result in an unnecessary burden on your data sources, the network, and Power BI
capacity resources.

Deleted query fields


On the Fields page of the Dataset Properties window, it's possible to delete dataset
query fields (query fields map to columns retrieved by the dataset query). However,
Report Builder doesn't remove corresponding columns from the dataset query.

If you need to delete query fields from your dataset, we recommend you remove the
corresponding columns from the dataset query. Report Builder will automatically
remove any redundant query fields. If you do happen to delete query fields, be sure to
also modify the dataset query statement to remove the columns.

Unused datasets
When a report is run, all datasets are evaluated—even if they're not bound to report
objects. For this reason, be sure to remove any test or development datasets before you
publish a report.

Next steps
For more information related to this article, check out the following resources:

Supported data sources for Power BI paginated reports


Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Image use guidance for paginated
reports
Article • 02/27/2023

This article targets you as a report author designing Power BI paginated reports. It
provides suggestions when working with images. Commonly, images in report layouts
can display a graphic like a company logo, or pictures.

Images can be stored in three different locations:

Within the report (embedded)


On a web server (external)
In a database, which can be retrieved by a dataset

They can then be used in a variety of scenarios in your report layouts:

Free-standing logo, or picture


Pictures associated with rows of data
Background for certain report items:
Report body
Textbox
Rectangle
Tablix data region (table, matrix, or list)

Suggestions
Consider the following suggestions to deliver professional report layouts, ease of
maintenance, and optimized report performance:

Use the smallest possible size: We recommend you prepare images that are small in
size, yet still look sharp and crisp. It's all about a balance between quality and size.
Consider using a graphics editor (like MS Paint) to reduce the image file size.

Avoid embedded images: First, embedded images can bloat the report file size,
which can contribute to slower report rendering. Second, embedded images can
quickly become a maintenance nightmare if you need to update many report
images (as might be the case should your company logo change).

Use web server storage: Storing images on a web server is a good option,
especially for the company logo, which may be sourced from the company
website. However, take care if your report users will access reports outside your
network. In this case, be sure that the images are available over the Internet and
do not require authentication or additional sign-in to access the image. Images
stored on a web server must not exceed 4 MB in size or they will not load in the
Power BI service.

When images relate to your data (like pictures of your salespeople), name image
files so a report expression can dynamically produce the image URL path. For
example, you could name the salespeople pictures using each salesperson's
employee number. Providing the report dataset retrieves the employee number,
you can write an expression to produce the full image URL path.

Use database storage: When a relational database stores image data, it makes
sense to source the image data directly from the database tables—especially when
the images are not too large.

Appropriate background images: If you decide to use background images, take


care not to distract the report user from your report data.

Also, be sure to use watermark styled images. Generally, watermark styled images
have a transparent background (or have the same background color used by the
report). They also use faint colors. Common examples of watermark styled images
include the company logo, or sensitivity labels like "Draft" or "Confidential".

Next steps
For more information related to this article, check out the following resources:

What are paginated reports in Power BI?


Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Use cascading parameters in paginated
reports
Article • 03/23/2023

This article targets you as a report author designing Power BI paginated reports. It
provides scenarios for designing cascading parameters. Cascading parameters are
report parameters with dependencies. When a report user selects a parameter value (or
values), it's used to set available values for another parameter.

7 Note

An introduction to cascading parameters, and how to configure them, isn't covered
in this article. If you're not completely familiar with cascading parameters, we
recommend you first read Add Cascading Parameters to a Report in Power BI
Report Builder.

Design scenarios
There are two design scenarios for using cascading parameters. They can be effectively
used to:

Filter large sets of items


Present relevant items

Example database
The examples presented in this article are based on an Azure SQL Database. The
database records sales operations, and contains various tables storing resellers,
products, and sales orders.

A table named Reseller stores one record for each reseller, and it contains many
thousands of records. The Reseller table has these columns:

ResellerCode (integer)
ResellerName
Country-Region
State-Province
City
PostalCode
There's a table named Sales, too. It stores sales order records, and has a foreign key
relationship to the Reseller table, on the ResellerCode column.

Example requirement
There's a requirement to develop a Reseller Profile report. The report must be designed
to display information for a single reseller. It's not appropriate to have the report user
enter a reseller code, as they rarely memorize them.

Filter large sets of items


Let's take a look at three examples to help you limit large sets of available items, like
resellers. They are:

Filter by related columns


Filter by a grouping column
Filter by search pattern

Filter by related columns


In this example, the report user interacts with five report parameters. They must select
country-region, state-province, city, and then postal code. A final parameter then lists
resellers that reside in that geographic location.

Here's how you can develop the cascading parameters:

1. Create the five report parameters, ordered in the correct sequence.

2. Create the CountryRegion dataset that retrieves distinct country-region values,
using the following query statement:

SQL
SELECT DISTINCT
[Country-Region]
FROM
[Reseller]
ORDER BY
[Country-Region]

3. Create the StateProvince dataset that retrieves distinct state-province values for
the selected country-region, using the following query statement:

SQL

SELECT DISTINCT
[State-Province]
FROM
[Reseller]
WHERE
[Country-Region] = @CountryRegion
ORDER BY
[State-Province]

4. Create the City dataset that retrieves distinct city values for the selected country-
region and state-province, using the following query statement:

SQL

SELECT DISTINCT
[City]
Tip
[Reseller]
WHERE
[Country-Region] = @CountryRegion
AND [State-Province] = @StateProvince
ORDER BY
[City]

5. Continue this pattern to create the PostalCode dataset.

6. Create the Reseller dataset to retrieve all resellers for the selected geographic
values, using the following query statement:

SQL

SELECT
[ResellerCode],
[ResellerName]
FROM
[Reseller]
WHERE
[Country-Region] = @CountryRegion
AND [State-Province] = @StateProvince
AND [City] = @City
AND [PostalCode] = @PostalCode
ORDER BY
[ResellerName]

7. For each dataset except the first, map the query parameters to the corresponding
report parameters.

7 Note

All query parameters (prefixed with the @ symbol) shown in these examples could
be embedded within SELECT statements, or passed to stored procedures.

Generally, stored procedures are a better design approach. It's because their query
plans are cached for quicker execution, and they allow you develop more
sophisticated logic, when needed. However, they aren't currently supported for
gateway relational data sources, which means SQL Server, Oracle, and Teradata.

Lastly, you should always ensure suitable indexes exist to support efficient data
retrieval. Otherwise, your report parameters could be slow to populate, and the
database could become overburdened. For more information about SQL Server
indexing, see SQL Server Index Architecture and Design Guide.
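
For instance, an index along the following lines could support the geographic filter queries shown above. The index name is illustrative only; tailor the key and included columns to your own workload.

SQL

-- Supports the cascading parameter queries that filter by geographic columns
CREATE NONCLUSTERED INDEX [IX_Reseller_Geography]
ON [Reseller] ([Country-Region], [State-Province], [City], [PostalCode])
INCLUDE ([ResellerCode], [ResellerName]);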

Filter by a grouping column


In this example, the report user interacts with a report parameter to select the first letter
of the reseller name. A second parameter then lists resellers whose names commence with
the selected letter.

Here's how you can develop the cascading parameters:


1. Create the ReportGroup and Reseller report parameters, ordered in the correct
sequence.

2. Create the ReportGroup dataset to retrieve the first letters used by all resellers,
using the following query statement:

SQL

SELECT DISTINCT
LEFT([ResellerName], 1) AS [ReportGroup]
FROM
[Reseller]
ORDER BY
[ReportGroup]

3. Create the Reseller dataset to retrieve all resellers that commence with the
selected letter, using the following query statement:

SQL

SELECT
[ResellerCode],
[ResellerName]
FROM
[Reseller]
WHERE
LEFT([ResellerName], 1) = @ReportGroup
ORDER BY
[ResellerName]

4. Map the query parameter of the Reseller dataset to the corresponding report
parameter.

It's more efficient to add the grouping column to the Reseller table. When persisted and
indexed, it delivers the best result. For more information, see Specify Computed
Columns in a Table.

SQL

ALTER TABLE [Reseller]
ADD [ReportGroup] AS LEFT([ResellerName], 1) PERSISTED

You can take this technique even further. Consider the following script that adds
a new grouping column to filter resellers by predefined bands of letters. It also creates
an index to efficiently retrieve the data required by the report parameters.
SQL

ALTER TABLE [Reseller]
ADD [ReportGroup2] AS CASE
WHEN [ResellerName] LIKE '[A-C]%' THEN 'A-C'
WHEN [ResellerName] LIKE '[D-H]%' THEN 'D-H'
WHEN [ResellerName] LIKE '[I-M]%' THEN 'I-M'
WHEN [ResellerName] LIKE '[N-S]%' THEN 'N-S'
WHEN [ResellerName] LIKE '[T-Z]%' THEN 'T-Z'
ELSE '[Other]'
END PERSISTED
GO

CREATE NONCLUSTERED INDEX [Reseller_ReportGroup2]
ON [Reseller] ([ReportGroup2]) INCLUDE ([ResellerCode], [ResellerName])
GO

Filter by search pattern


In this example, the report user interacts with a report parameter to enter a search
pattern. A second parameter then lists resellers when the name contains the pattern.

Here's how you can develop the cascading parameters:

1. Create the Search and Reseller report parameters, ordered in the correct sequence.

2. Create the Reseller dataset to retrieve all resellers that contain the search text,
using the following query statement:

SQL

SELECT
[ResellerCode],
[ResellerName]
FROM
[Reseller]
WHERE
[ResellerName] LIKE '%' + @Search + '%'
ORDER BY
[ResellerName]
3. Map the query parameter of the Reseller dataset to the corresponding report
parameter.

Tip

You can improve upon this design to provide more control for your report users. It
lets them define their own pattern matching value. For example, the search value
"red%" will filter to resellers with names that commence with the characters "red".

For more information, see LIKE (Transact-SQL).

Here's how you can let the report users define their own pattern.

SQL

WHERE
[ResellerName] LIKE @Search

Many non-database professionals, however, don't know about the percentage (%)
wildcard character. Instead, they're familiar with the asterisk (*) character. By modifying
the WHERE clause, you can let them use this character.

SQL

WHERE
[ResellerName] LIKE REPLACE(@Search, '*', '%')

Present relevant items


In this scenario, you can use fact data to limit available values. Report users will be
presented with items where activity has been recorded.

In this example, the report user interacts with three report parameters. The first two set a
date range of sales order dates. The third parameter then lists resellers where orders
have been created during that time period.
Here's how you can develop the cascading parameters:

1. Create the OrderDateStart, OrderDateEnd, and Reseller report parameters,
ordered in the correct sequence.

2. Create the Reseller dataset to retrieve all resellers that created orders in the date
period, using the following query statement:

SQL

SELECT DISTINCT
[r].[ResellerCode],
[r].[ResellerName]
FROM
[Reseller] AS [r]
INNER JOIN [Sales] AS [s]
ON [s].[ResellerCode] = [r].[ResellerCode]
WHERE
[s].[OrderDate] >= @OrderDateStart
AND [s].[OrderDate] < DATEADD(DAY, 1, @OrderDateEnd)
ORDER BY
[r].[ResellerName]

Recommendations
We recommend you design your reports with cascading parameters, whenever possible.
It's because they:

Provide intuitive and helpful experiences for your report users


Are efficient, because they retrieve smaller sets of available values

Be sure to optimize your data sources by:

Using stored procedures, whenever possible


Adding appropriate indexes for efficient data retrieval
Materializing column values—and even rows—to avoid expensive query-time
evaluations

Next steps
For more information related to this article, check out the following resources:

Report parameters in Power BI Report Builder


Add Cascading Parameters to a Report (Report Builder)
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Avoid blank pages when printing
paginated reports
Article • 02/27/2023

This article targets you as a report author designing Power BI paginated reports. It
provides recommendations to help you avoid blank pages when your report is exported
to a hard-page format—like PDF or Microsoft Word—or, is printed.

Page setup
Report page size properties determine the page orientation, dimensions, and margins.
Access these report properties by:

Using the report Property Page: Right-click the dark gray area outside the report
canvas, and then select Report Properties.
Using the Properties pane: Click the dark gray area outside the report canvas to
select the report object. Ensure the Properties pane is open.

The Page Setup page of the report Property Page provides a friendly interface to view
and update the page setup properties.
Ensure all page size properties are correctly configured:

Property Recommendation

Page units Select the relevant units—inches or centimeters.

Orientation Select the correct option—portrait or landscape.

Paper size Select a paper size, or assign custom width and height values.

Margins Set appropriate values for the left, right, top, and bottom margins.

Report body width


The page size properties determine the space available for report objects.
Report objects can be data regions, data visualizations, or other report items.

A common reason why blank pages are output is that the report body width exceeds the
available page space.

You can only view and set the report body width using the Properties pane. First, click
anywhere in an empty area of the report body.
Ensure the width value doesn't exceed available page width. Be guided by the following
formula:

Report body width <= Report page width - (Left margin + Right margin)

7 Note

It's not possible to reduce the report body width when there are report objects
already in the space you want to remove. You must first reposition or resize them
before reducing the width.

Also, the report body width can increase automatically when you add new objects,
or resize or reposition existing objects. The report designer always widens the body
to accommodate the position and size of its contained objects.

Report body height


Another reason why a blank page is output is that there's excess space in the report body
after the last object.

We recommend you always reduce the height of the body to remove any trailing space.
Page break options
Each data region and data visualization has page break options. You can access these
options in its property page, or in the Properties pane.

Ensure the Add a page break after property isn't unnecessarily enabled.

Consume Container Whitespace


If the blank page issue persists, you can also try disabling the report
ConsumeContainerWhitespace property. It can only be set in the Properties pane.
By default, it's enabled. It directs whether minimum whitespace in containers, such as
the report body or a rectangle, should be consumed. Only whitespace to the right of,
and below, the contents is affected.

Printer paper size


Lastly, if you're printing the report to paper, ensure the printer has the correct paper
loaded. The physical paper size should correspond to the report paper size.

Next steps
For more information related to this article, check out the following resources:

What are paginated reports in Power BI?


Pagination in Power BI paginated reports
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Plan to migrate .rdl reports to Power BI
Article • 06/01/2023

APPLIES TO: Power BI Report Builder Power BI Desktop


Power BI 2022 Report Server SQL Server 2022 Reporting Services

This article targets Power BI Report Server and SQL Server Reporting Services (SSRS)
report authors and Power BI administrators. It provides you with guidance to help you
migrate your Report Definition Language (.rdl) reports to Power BI.

Flow diagram: the path for migrating on-premises .rdl reports to paginated reports in the
Power BI service.

7 Note

In Power BI, .rdl reports are called paginated reports.

Guidance is divided into four stages. We recommend that you first read the entire article
prior to migrating your reports.

1. Before you start


2. Pre-migration stage
3. Migration stage
4. Post-migration stage

You can achieve migration without downtime to your report servers, or disruption to
your report users. It's important to understand that you don't need to remove any data
or reports, so you can keep your current environment in place until you're ready for it
to be retired.
Before you start
Before you start the migration, verify that your environment meets certain prerequisites.
We'll describe these prerequisites, and also introduce you to a helpful migration tool.

Preparing for migration


As you prepare to migrate your reports to Power BI, first verify that you have a Power BI
Pro or Premium Per User license to upload content to the target workspace.

Supported versions
You can migrate report server instances running on premises, or on virtual machines
hosted by cloud providers like Azure.

The following list describes the SQL Server Reporting Services versions supported for
migration to Power BI:

" SQL Server Reporting Services 2012


" SQL Server Reporting Services 2014
" SQL Server Reporting Services 2016
" SQL Server Reporting Services 2017
" SQL Server Reporting Services 2019
" SQL Server Reporting Services 2022

You can migrate .rdl files from Power BI Report Server as well.

Migration tool for Power BI Report Server and SQL Server Reporting Services 2017+
If you're using Power BI Report Server or SQL Server Reporting Services after SQL Server
2016, there's a built-in tool to publish its reports to Power BI. For more information, see
Publish .rdl files to Power BI.

Migration tool for previous versions of SQL Server


For earlier versions of SQL Server Reporting Services, we recommend that you use the
RDL Migration Tool to help prepare and migrate your reports. This tool was developed
by Microsoft to help customers migrate .rdl reports from their SSRS servers to Power BI.
It's available on GitHub, and it documents an end-to-end walkthrough of the migration
scenario.
The tool automates the following tasks:

Checks for unsupported data sources and unsupported report features.


Converts any shared resources to embedded resources:
Shared data sources become embedded data sources.
Shared datasets become embedded datasets.
Publishes reports that pass checks as paginated reports, to a specified Power BI
workspace.

It doesn't modify or remove your existing reports. On completion, the tool outputs a
summary of all actions completed, successful or unsuccessful.

Over time, Microsoft may improve the tool. The community is encouraged to contribute
and help enhance it, too.

Pre-migration stage
After verifying that your organization meets the prerequisites, you're ready to start the
Pre-migration stage. This stage has three phases:

1. Discover
2. Assess
3. Prepare

Discover
The goal of the Discover phase is to identify your existing report server instances. This
process involves scanning the network to identify all report server instances in your
organization.

You can use the Microsoft Assessment and Planning Toolkit . The "MAP Toolkit"
discovers and reports on your report server instances, versions, and installed features.
It's a powerful inventory, assessment, and reporting tool that can simplify your
migration planning process.

Organizations may have hundreds of SQL Server Reporting Services (SSRS) reports.
Some of those reports may become obsolete due to lack of use. The article Find and
retire unused reports can help you discover unused reports and create a cadence
for cleanup.

Assess
Having discovered your report server instances, the goal of the Assess phase is to
understand any .rdl reports—or server items—that can't be migrated.

Your .rdl reports can be migrated from your report servers to Power BI. Each migrated
.rdl report will become a Power BI paginated report.

The following report server item types, however, can't be migrated to Power BI:

Shared data sources and shared datasets: The RDL Migration Tool automatically
converts shared data sources and shared datasets into embedded data sources
and datasets, provided that they're using supported data sources.
Resources such as image files
Linked reports migrate, whether or not the parent report that links to them is selected
for migration. In the Power BI service, they're regular .rdl reports.
KPIs: Power BI Report Server, or Reporting Services 2016 or later—Enterprise
Edition only
Mobile reports: Power BI Report Server, or Reporting Services 2016 or later—
Enterprise Edition only
Report models: deprecated
Report parts: deprecated

If your .rdl reports rely on features not yet supported by Power BI paginated reports, you
can plan to redevelop them as Power BI reports, when it makes sense.

For more information about supported data sources for paginated reports in the Power
BI service, see Supported data sources for Power BI paginated reports.

Generally, Power BI paginated reports are optimized for printing, or PDF generation.
Power BI reports are optimized for exploration and interactivity. For more information,
see When to use paginated reports in Power BI.

Referencing custom code DLL files within a report isn't supported.

Differences in PDF output occur most often when a font that doesn't support non-Latin
characters is used in a report and then non-Latin characters are added to the report. You
should test the PDF rendering output on both the report server and the client
computers to verify that the report renders correctly.

Prepare
The goal of the Prepare phase is to get everything ready. It covers setting up the
Power BI environment, planning how you'll secure and publish your reports, and ideas
for redeveloping report server items that won't migrate.
1. Verify support for your report data sources, and set up a Power BI gateway to allow
connectivity with any on-premises data sources.
2. Become familiar with Power BI security, and plan how you'll reproduce your report
server folders and permissions with Power BI workspaces.
3. Become familiar with Power BI sharing, and plan how you'll distribute content by
publishing Power BI apps.
4. Consider using shared Power BI datasets in place of your report server shared data
sources.
5. Use Power BI Desktop to develop mobile-optimized reports, possibly using the
Power KPI custom visual in place of your report server mobile reports and KPIs.
6. Reevaluate the use of the UserID built-in field in your reports. If you rely on the
UserID to secure report data, then understand that for paginated reports (when
hosted in the Power BI service) it returns the User Principal Name (UPN). So,
instead of returning the NT account name, for example AW\adelev, the built-in
field returns something like [email protected]. You'll need to revise your
dataset definitions, and possibly the source data. Once revised and published, we
recommend you thoroughly test your reports to ensure data permissions work as
expected.
7. Reevaluate the use of the ExecutionTime built-in field in your reports. For
paginated reports (when hosted in the Power BI service), the built-in field returns
the date/time in Coordinated Universal Time (or UTC). It could impact on report
parameter default values, and report execution time labels (typically added to
report footers).
8. If your data source is SQL Server (on premises), verify that reports aren't using map
visualizations. The map visualization depends on SQL Server spatial data types, and
these aren't supported by the gateway. For more information, see Data retrieval
guidance for paginated reports (SQL Server complex data types).
9. For cascading parameters, be mindful that parameters are evaluated sequentially.
Try preaggregating report data first. For more information, see Use cascading
parameters in paginated reports.
10. Ensure your report authors have Power BI Report Builder installed, and that you
can easily distribute later releases throughout your organization.
11. Utilize capacity planning documentation for paginated reports.

Migration stage
After preparing your Power BI environment and reports, you're ready for the Migration
stage.

There are two migration options: manual and automated. Manual migration is suited to
a small number of reports, or reports requiring modification before migration.
Automated migration is suited to the migration of a large number of reports.

Manual migration
Anyone with permission to access the report server instance and the Power BI
workspace can manually migrate reports to Power BI. Here are the steps to follow:

1. Open the report server portal that contains the reports you want to migrate.
2. Download each report definition, saving the .rdl files locally.
3. Open the latest version of Power BI Report Builder, and connect to the Power BI
service using your Azure AD credentials.
4. Open each report in Power BI Report Builder, and then:
a. Verify all data sources and datasets are embedded in the report definition, and
that they're supported data sources.
b. Preview the report to ensure it renders correctly.
c. Select Publish, then select Power BI service.
d. Select the workspace where you want to save the report.
e. Verify that the report saves. If certain features in your report design aren't yet
supported, the save action will fail. You'll be notified of the reasons. You'll then
need to revise your report design, and try saving again.

Automated migration
There are three options for automated migration. You can use:

For Power BI Report Server and SQL Server 2022, see Publish .rdl files to Power BI.
For previous versions of Reporting Services, use the RDL Migration Tool in
GitHub.
The publicly available APIs for Power BI Report Server, Reporting Services, and
Power BI

You can also use the publicly available Power BI Report Server, Reporting Services, and
Power BI APIs to automate the migration of your content. While the RDL Migration Tool
already uses these APIs, you can develop a custom tool suited to your exact
requirements.

For more information about the APIs, see:

Power BI REST APIs


SQL Server Reporting Services REST APIs
Post-migration stage
After you've successfully completed the migration, you're ready for the Post-migration
stage. This stage involves working through a series of post-migration tasks to ensure
everything is functioning correctly and efficiently.

Setting query time-out for embedded datasets


You specify query time-out values during report authoring when you define an
embedded dataset. The time-out value is stored with the report, in the Timeout element
of the report definition.

Configure data sources


Once reports have been migrated to Power BI, you'll need to ensure their data sources
are correctly set up. It can involve assigning to gateway data sources, and securely
storing data source credentials. These actions aren't done by the RDL Migration Tool.

Review report performance


We highly recommended you complete the following actions to ensure the best
possible report user experience:

1. Test the reports in each browser supported by Power BI to confirm the report
renders correctly.
2. Run tests to compare report rending times on the report server and in the Power
BI service. Check that Power BI reports render in an acceptable time.
3. For long-rendering reports, consider having Power BI deliver them to your report
users as email subscriptions with report attachments.
4. For Power BI reports based on Power BI datasets, review model designs to ensure
they're fully optimized.

Reconcile issues
The Post-migration stage is crucial for reconciling any issues and addressing any
performance concerns. Adding the paginated reports workload to a capacity can
contribute to slow performance—for paginated reports and other content stored in the
capacity.

Next steps
For more information about this article, check out the following resources:

Publish .rdl files to Power BI from Power BI Report Server and SQL Server Reporting
Services
RDL Migration Tool for older versions of Reporting Services
Power BI Report Builder
Data retrieval guidance for paginated reports
When to use paginated reports in Power BI
Paginated reports in Power BI: FAQ
Online course: Paginated Reports in a Day
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Power BI partners are available to help your organization succeed with the migration
process. To engage a Power BI partner, visit the Power BI partner portal .
Publish .rdl files to Power BI from Power
BI Report Server and Reporting Services
Article • 04/12/2023

APPLIES TO: Power BI Report Builder Power BI Desktop


Power BI 2022 Report Server SQL Server 2022 Reporting Services

Do you have Report Definition Language (.rdl) paginated reports in Power BI Report
Server or SQL Server 2022 Reporting Services (SSRS) that you want to migrate to the
Power BI service? This article provides step-by-step instructions for migrating .rdl files
and Power BI reports (.pbix files) from Power BI Report Server and SQL Server 2022
Reporting Services to the Power BI service.

7 Note

If you're using a previous version of Reporting Services, continue to use the RDL
Migration Tool for now.

You can migrate reports without downtime to your report servers or disruption to your
report users. It's important to understand that you don't need to remove any data or
reports. You can keep your current environment in place until you're ready for it to be
retired.

Prerequisites

My Workspace
You can publish and share paginated reports to your My Workspace with a Power BI free
license.

Other workspaces
To publish to other workspaces, you need to meet these prerequisites:

You have a Power BI Pro or Premium Per User license.


You have write access to the workspace.

Read more about Power BI licenses.

Supported versions
You can migrate reports from SSRS instances running on-premises or on virtual
machines hosted by cloud providers like Azure.

This publishing tool is designed to help customers migrate SSRS paginated reports (.rdl
files) from their local servers to a Power BI workspace in their tenant. As part of the
migration process, the tool also:

Checks for unsupported data sources or report components when uploading to


Power BI.
Saves the converted files that pass these checks to a Power BI workspace that you
specify.
Provides a summary of the successful and unsuccessful assets migrated.

You can only migrate .rdl reports from your SSRS servers to Power BI. Each migrated .rdl
report becomes a Power BI paginated report.

You can publish individual .rdl reports or the contents of entire folders from the SSRS
web portal to the Power BI service. Read on to learn how to publish .rdl reports to Power
BI.

Step 1: Browse to reports


SQL Server Reporting Services

Select Publish to find the .rdl files you want to publish from SSRS to the Power BI
service.
Select Publish all reports to select all the .rdl files in the current folder and
start the migration.
Select Select reports to publish to open a list view of all .rdl files in the current
folder. Select the reports and folders you want to migrate.

You can also publish individual reports.

On the More info menu next to an .rdl report, select Publish.

Step 1b: Select reports


Items that you can migrate now:

The .rdl files


Linked reports (.rdl files)
Folders (all .rdl reports from the folder are migrated)

SQL Server Reporting Services

If you chose Select reports to publish, the next step is to Select reports to publish
to Power BI.
Step 2: Sign in/Sign up
SQL Server Reporting Services

After you've selected the reports you want to publish, it's time to Sign in to the
Power BI service.
Step 3: Choose a workspace
SQL Server Reporting Services

Now that you're signed in, select the dropdown arrow to find and Select a
workspace.

Step 4: View reports


In the Power BI service, navigate to the workspace where you saved the reports.
Select a report to view it in the Power BI service.

Site properties
If you'd like to disable the migration setting, you need to update your report server. For
more information on server properties, see the article Server Properties Advanced Page -
Power BI Report Server & Reporting Services:

EnablePowerBIReportMigrate
PowerBIMigrateCountLimit
PowerBIMigrateUrl

For sovereign clouds, you can update the Power BI endpoints by changing the site
settings in the web portal.

Limitations and considerations


You can migrate .rdl reports from your report servers to the Power BI service. Each
migrated .rdl report becomes a Power BI paginated report.

Converted report features


Shared data sources and datasets aren't yet supported in the Power BI service. When
you migrate .rdl reports, the RDL Migration Tool automatically converts shared
datasets and data sources to embedded datasets and data sources, provided they're
using supported datasets and data sources.

Unsupported item types


You can't migrate the following item types to the Power BI service:

Resources such as image files


KPIs
Mobile reports (discontinued)
Report models (discontinued)
Report parts (discontinued)

Unsupported report features


See What paginated report features in SSRS aren't yet supported in Power BI? in the
Paginated reports in Power BI FAQ for a complete list of unsupported report features.

Next steps
More questions? Try asking the Reporting Services forum
Find and retire unused .rdl reports
Article • 03/08/2023

APPLIES TO: Power BI Report Builder Power BI Desktop


Power BI 2022 Report Server SQL Server 2022 Reporting Services

Your company may deal with hundreds of paginated reports (.rdl files) in Power BI
Report Server and SQL Server Reporting Services (SSRS). Some of those reports may
become obsolete and need to be retired. As a report author or administrator, you don't
want to migrate unused reports to the Power BI service . As you plan for a migration to
the cloud, we suggest doing some housekeeping to get rid of unused .rdl reports. This
best practice supports retention governance and allows your organization to make use
of a retention schedule and data policy.

There are two processes for checking for unused reports. You can also extend the cleanup
to unused database objects, such as tables that might contain stale data.

Run an audit (optional)


First, we suggest that you create a server audit and database audit specification.
Auditing an instance of the SQL Server Database Engine or an individual database
involves tracking and logging events that occur on the Database Engine. SQL Server
audit lets you create server audits, which can contain server audit specifications for
server level events, and database audit specifications for database level events. Audited
events can be written to the event logs or to audit files.
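
The following T-SQL is a minimal sketch of that setup. The audit names, file path, database name, and audited actions are assumptions for illustration; adjust them, and the principals, to suit your environment.

SQL

USE [master];
GO

-- Create a server audit that writes events to files in an existing folder
CREATE SERVER AUDIT [ReportObjectUsageAudit]
TO FILE (FILEPATH = N'D:\Audits\');
GO

ALTER SERVER AUDIT [ReportObjectUsageAudit] WITH (STATE = ON);
GO

USE [SalesDW];
GO

-- Track SELECT and EXECUTE activity so report queries and stored procedures are logged
CREATE DATABASE AUDIT SPECIFICATION [ReportObjectUsageSpec]
FOR SERVER AUDIT [ReportObjectUsageAudit]
ADD (SELECT ON DATABASE::[SalesDW] BY [public]),
ADD (EXECUTE ON DATABASE::[SalesDW] BY [public])
WITH (STATE = ON);
GO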

Once you've filled your audit log with tables and stored procedures used for reports,
you can export those objects to an Excel file and share them with stakeholders. Let them
know you're preparing to deprecate unused objects.

7 Note

Some important reports may run only rarely, so be sure to ask for feedback on
database objects that are infrequently used. By deprecating an object, you can alter
the object name by placing a zdel in front of it, so the object drops to the bottom
of the Object Explorer. This way, if you decide later that you need the zdel object,
you can alter the name back to the original. Once you know you're ready to remove
them from your database, you can create a cadence to delete unused objects.
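
For example, the following T-SQL renames a hypothetical table so it sorts to the bottom of Object Explorer, and then shows how to restore the original name later if the object turns out to still be needed.

SQL

-- Deprecate: rename the object so it drops to the bottom of Object Explorer
EXEC sp_rename 'dbo.ResellerSalesSummary', 'zdel_ResellerSalesSummary';

-- Restore the original name if the object is needed after all
EXEC sp_rename 'dbo.zdel_ResellerSalesSummary', 'ResellerSalesSummary';
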
Create a Reports Usage metrics list
Second, you'll want to create an .rdl Reports Usage metrics list by querying the report
server database. Use the T-SQL below to derive the usage counts. If your report server is
configured to store one year of report execution history, you can use a specific date to
filter the usage metrics.

Transact-SQL

;WITH UnusedReportsCte AS
(
    SELECT
        [Cat].[Name],
        [Cat].[Path],
        COUNT([ExeLog].[TimeStart]) AS [Cnt]
    FROM
        (SELECT * FROM [Catalog] WHERE [Type] = 2 AND [Hidden] = 0) AS [Cat]
        LEFT JOIN [ExecutionLog] AS [ExeLog]
            ON [ExeLog].[ReportID] = [Cat].[ItemID]
            AND [ExeLog].[TimeStart] > '01/01/2021'
    GROUP BY
        [Cat].[Name],
        [Cat].[Path]
)
SELECT *
FROM UnusedReportsCte
ORDER BY [Cnt] ASC, [Path];

7 Note

Subreports and linked reports don't appear in the execution log if the parent report
is executed.

From here you can decide whether to delete the unused reports right away or replace
the report with a message. You can let your users know the report is no longer being
used, so they can contact an administrator for support. Then you can develop a cadence
to delete them over time.

See also
Publish .rdl files to Power BI from Reporting Services
Migrate SQL Server Reporting Services reports to Power BI
Develop scalable multitenancy
applications with Power BI embedding
Article • 03/20/2023

This article describes how to develop a multitenancy application that embeds Power BI
content while achieving the highest levels of scalability, performance, and security. By
designing and implementing an application with service principal profiles, you can create
and manage a multitenancy solution comprising tens of thousands of customer tenants
that can deliver reports to audiences of over 100,000 users.

The service principal profiles feature makes it easier for you to manage
organizational content in Power BI and use your capacities more efficiently. However,
using service principal profiles can add complexity to your application design. Therefore,
you should only use them when there's a need to achieve significant scale. We
recommend using service principal profiles when you have many workspaces and more
than 1,000 application users.

7 Note

The value of using service principal profiles increases as your need to scale
increases as well as your need to achieve the highest levels of security and tenant
isolation.

You can achieve Power BI embedding by using two different embedding scenarios:
Embed for your organization and Embed for your customer.

The Embed for your organization scenario applies when the application audience
comprises internal users. Internal users have organizational accounts and must
authenticate with Microsoft Azure Active Directory (Azure AD). In this scenario, Power BI
is software-as-a-service (SaaS). It's sometimes referred to as User owns data.

The Embed for your customer scenario applies when the application audience
comprises external users. The application is responsible for authenticating users. To
access Power BI content, the application relies on an embedding identity (Azure AD
service principal or master user account) to authenticate with Azure AD. In this scenario,
Power BI is platform-as-a-service (PaaS). It's sometimes referred to as App owns data.

7 Note
It's important to understand that the service principal profiles feature was designed
for use with the Embed for your customer scenario. That's because this scenario
offers ISVs and enterprise organizations the ability to embed with greater scale to a
large number of users and to a large number of customer tenants.

Multitenancy application development


If you're familiar with Azure AD, the word tenant might lead you think of an Azure AD
tenant. However, the concept of a tenant is different in the context of building a
multitenancy solution that embeds Power BI content. In this context, a customer tenant
is created on behalf of each customer for which the application embeds Power BI
content by using the Embed for your customer scenario. You typically provision each
customer tenant by creating a single Power BI workspace.

To create a scalable multitenancy solution, you must be able to automate the creation of
new customer tenants. Provisioning a new customer tenant typically involves writing
code that uses the Power BI REST API to create a new Power BI workspace, create
datasets by importing Power BI Desktop (.pbix) files, update data source parameters, set
data source credentials, and set up scheduled dataset refresh. The following diagram
shows how you can add Power BI items, such as reports and datasets, to workspaces to
set up customer tenants.

When you develop an application that uses the Embed for your customer scenario, it's
possible to make Power BI REST API calls by using an embedding identity that's either a
master user account or a service principal. We recommend using a service principal for
production applications. It provides the highest security and for this reason it's the
approach recommended by Azure AD. Also, it supports better automation and scale and
there's less management overhead. However, it requires Power BI admin rights to set up
and manage.

By using a service principal, you can avoid common problems associated with master
user accounts, such as authentication errors in environments where users are required to
sign in by using multifactor authentication (MFA). Using a service principal is also
consistent with the idea that the Embed for your customer scenario is based on
embedding Power BI content by using a PaaS mindset as opposed to a SaaS mindset.

1,000-workspace limitation
When you design a multitenancy environment that implements the Embed for your
customer scenario, be sure to consider that the embedding identity can't be granted
access to more than 1,000 workspaces. The Power BI service imposes this limitation to
ensure good performance when making REST API calls. The reason for this limitation is
related to how Power BI maintains security-related metadata for each identity.

Power BI uses metadata to track the workspaces and workspace items an identity can
access. In effect, Power BI must maintain a separate access control list (ACL) for each
identity in its authorization subsystem. When an identity makes a REST API call to access
a workspace, Power BI must do a security check against the identity's ACL to ensure it's
authorized. The time it takes to determine whether the target workspace is inside the
ACL increases exponentially as the number of workspaces increases.

7 Note

Power BI doesn't enforce the 1,000-workspace limitation through code. If you add an
embedding identity to more than 1,000 workspaces, REST API calls still execute
successfully. However, your application moves into an unsupported state, which can
have implications if you need to request help from Microsoft support.

Consider a scenario where two multi-tenant applications have each been set up to use a
single service principal. Now consider that the first application has created 990
workspaces while the second application has created 1,010 workspaces. From a support
perspective, the first application is within the supported boundaries while the second
application isn't.

Now compare these two applications purely from a performance perspective. There isn't
much difference, because the ACL metadata for both service principals has grown to the
point where it degrades performance to some degree.

Here's the key observation: The number of workspaces created by a service principal has
a direct impact on performance and scalability. A service principal that's a member of
100 workspaces will execute REST API calls faster than a service principal that's a
member of 1,000 workspaces. Likewise, a service principal that's a member of only 10
workspaces will execute REST API calls faster than a service principal that's a member of
100 workspaces.

) Important

From the perspective of performance and scalability, the optimal number of
workspaces for which a service principal is a member is exactly one.

Manage isolation for datasets and data source credentials


Another important aspect when designing a multitenancy application is to isolate
customer tenants. It's critical that users from one customer tenant don't see data that
belongs to another customer tenant. Therefore, you must understand how to manage
dataset ownership and data source credentials.

Dataset ownership
Each Power BI dataset has a single owner, which can be either a user account or a
service principal. Dataset ownership is required to set up scheduled refresh and set
dataset parameters.

 Tip

In the Power BI service, you can determine who the dataset owner is by opening
the dataset settings.

If necessary, you can transfer ownership of the dataset to another user account or
service principal. You can do that in the Power BI service, or by using the REST API
TakeOver operation. When you import a Power BI Desktop file to create a new dataset
by using a service principal, the service principal is automatically set as the dataset
owner.
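
As a brief illustration using the Power BI .NET SDK, the following hedged sketch shows the
calling identity taking over ownership of a dataset. It assumes an authenticated
PowerBIClient named pbiClient; the workspace and dataset IDs are placeholders.

C#

// Transfer dataset ownership to the calling identity (service principal or master user)
Guid workspaceId = new Guid("22222222-2222-2222-2222-222222222222");
string datasetId = "33333333-3333-3333-3333-333333333333";

// After this call, the calling identity owns the dataset and can set dataset
// parameters, data source credentials, and scheduled refresh
pbiClient.Datasets.TakeOverInGroup(workspaceId, datasetId);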
Data source credentials
To connect a dataset to its underlying data source, the dataset owner must set data
source credentials. Data source credentials are encrypted and cached by Power BI. From
that point, Power BI uses those credentials to authenticate with the underlying data
source when refreshing the data (for import storage tables) or executing passthrough
queries (for DirectQuery storage tables).

We recommend that you apply a common design pattern when provisioning a new
customer tenant. You can execute a series of REST API calls by using the identity of the
service principal:

1. Create a new workspace.


2. Associate the new workspace with a dedicated capacity.
3. Import a Power BI Desktop file to create a dataset.
4. Set the dataset source credentials for that dataset.

On completion of these REST API calls, the service principal will be an admin of the new
workspace and the owner of the dataset and data source credentials.
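
Here's a rough sketch of that provisioning pattern with the Power BI .NET SDK. It assumes
an authenticated PowerBIClient named pbiClient running as the service principal; the
workspace name and file path are illustrative, and exact method names can vary by SDK
version.

C#

// 1. Create a new workspace for the customer tenant
var workspace = pbiClient.Groups.CreateGroup(
    new GroupCreationRequest("Tenant-Contoso"), workspaceV2: true);

// 2. Associate the new workspace with a dedicated capacity by calling the
//    groups/{groupId}/AssignToCapacity REST operation with the target capacity ID

// 3. Import a Power BI Desktop file to create a dataset
using (var pbixStream = File.OpenRead(@"C:\Templates\SalesReporting.pbix"))
{
    var import = pbiClient.Imports.PostImportWithFileInGroup(
        workspace.Id, pbixStream, datasetDisplayName: "SalesReporting");

    // Imports complete asynchronously; poll Imports.GetImportInGroup until the
    // import succeeds, then read the new dataset ID from its Datasets list
}

// 4. Set data source credentials by enumerating Datasets.GetDatasourcesInGroup
//    and calling Gateways.UpdateDatasource with the appropriate CredentialDetails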

) Important

There's a common misconception that dataset data source credentials are
workspace-level scoped. That's not true. Data source credentials are scoped to the
service principal (or user account), and that scope extends to all Power BI
workspaces in the Azure AD tenant.

It's possible for a service principal to create data source credentials that are shared by
datasets in different workspaces across customer tenants, as shown in the following
diagram.
When data source credentials are shared by datasets that belong to different customer
tenants, the customer tenants aren't fully isolated.

Design strategies prior to service principal profiles


Understanding design strategies before the service principal profile feature became
available can help you to appreciate the need for the feature. Before that time,
developers built multitenancy applications by using one of the following three design
strategies:

Single service principal
Service principal pooling
One service principal per workspace

There are strengths and weakness associated with each of these design strategies.

The single service principal design strategy requires a one-time creation of an Azure AD
app registration. Therefore, it involves less administrative overhead than the other two
design strategies because there's no requirement to create more Azure AD app
registrations. This strategy is also the most straightforward to set up because it doesn't
require writing extra code that switches the calling context between service principals
when making REST API calls. However, a problem with this design strategy is that it
doesn't scale. It only supports a multitenancy environment that can grow up to 1,000
workspaces. Also, performance is sure to degrade as the service principal is granted
access to a larger number of workspaces. There's also a problem with customer tenant
isolation because the single service principal becomes the owner of every dataset and all
data source credentials across all customer tenants.

The service principal pooling design strategy is commonly used to avoid the 1,000-
workspace limitation. It allows the application to scale to any number of workspaces by
adding the correct number of service principals to the pool. For example, a pool of five
service principals makes it possible to scale up to 5,000 workspaces; a pool of 80 service
principals makes it possible to scale up to 80,000 workspaces, and so on. However, while
this strategy can scale to a large number of workspaces, it has several disadvantages.
First, it requires writing extra code and storing metadata to allow context switching
between service principals when making REST API calls. Second, it involves more
administrative effort because you must create Azure AD app registrations whenever you
need to increase the number of the service principals in the pool.

What's more, the service principal pooling strategy isn't optimized for performance
because it allows service principals to become members of hundreds of workspaces. It
also isn't ideal from the perspective of customer tenant isolation because the service
principals can become owners of datasets and data source credentials shared across
customer tenants.

The one service principal per workspace design strategy involves creating a service
principal for each customer tenant. From a theoretical perspective, this strategy offers
the best solution because it optimizes the performance of REST API calls while providing
true isolation for datasets and data source credentials at the workspace level. However,
what works best in theory doesn't always work best in practice, because the requirement
to create a service principal for each customer tenant is impractical for many
organizations. Some organizations have formal approval processes, or excessive
bureaucracy, around creating Azure AD app registrations. These constraints can make it
impossible to grant a custom application the authority it needs to create Azure AD app
registrations on demand and in the automated way that your solution requires.

In less common scenarios where a custom application has been granted proper
permissions, it can use the Microsoft Graph API to create Azure AD app registrations on
demand. However, the custom application is often complex to develop and deploy
because it must somehow track authentication credentials for each Azure AD app
registration. It must also gain access to those credentials whenever it needs to
authenticate and acquire access tokens for individual service principals.

Service principal profiles


The service principal profiles feature was designed to make it easier for you to manage
organizational content in Power BI and use your capacities more efficiently. Profiles help
you address three specific challenges with the lowest amount of developer effort and
overhead:

Scaling to a large number of workspaces.
Optimizing performance of REST API calls.
Isolating datasets and data source credentials at the customer tenant level.

When you design a multitenancy application by using service principal profiles, you can
benefit from the strengths of the three design strategies (described in the previous
section) while avoiding their associated weaknesses.

Service principal profiles are local accounts that are created within the context of Power
BI. A service principal can use the Profiles REST API operation to create new service
principal profiles. A service principal can create and manage its own set of service
principal profiles for a custom application, as shown in the following diagram.
There's always a parent-child relationship between a service principal and the service
principal profiles it creates. You can't create a service principal profile as a stand-alone
entity. Instead, you create a service principal profile by using a specific service principal,
and that service principal serves as the profile's parent. Furthermore, a service principal
profile is never visible to user accounts or other service principals. A service principal
profile can only be seen and used by the service principal that created it.

Service principal profiles aren't known to Azure AD


While the service principal itself and its underlying Azure AD app registration are known
to Azure AD, Azure AD doesn't know anything about service principal profiles. That's
because service principal profiles are created by Power BI and they exist only in the
Power BI service subsystem that controls Power BI security and authorization.

The fact that service principal profiles aren't known to Azure AD has both advantages
and disadvantages. The primary advantage is that an Embed for your customer scenario
application doesn't need any special Azure AD permissions to create service principal
profiles. It also means that the application can create and manage a set of local
identities that are separate from Azure AD.

However, there are also disadvantages. Because service principal profiles aren't known
to Azure AD, you can't add a service principal profile to an Azure AD group to implicitly
grant it access to a workspace. Also, external data sources, such as an Azure SQL
Database or Azure Synapse Analytics, can't recognize service principal profiles as the
identity when connecting to a database. So, the one service principal per workspace
design strategy (creating a service principal for each customer tenant) might be a better
choice when there's a requirement to connect to these data sources by using a separate
service principal with unique authentication credentials for each customer tenant.

Service principal profiles are first-class security principals


While service principal profiles aren't known to Azure AD, Power BI recognizes them as
first-class security principals. Just like a user account or a service principal, you can add
a service principal profile to a workspace role (as an Admin or a Member). You can also
make it a dataset owner and the owner of data source credentials. For these reasons,
creating a new service principal profile for each new customer tenant is a best practice.

 Tip

When you develop an Embed for your customer scenario application by using
service principal profiles, you only need to create a single Azure AD app
registration to provide your application with a single service principal. This
approach significantly lowers administrative overhead compared to other
multitenancy design strategies, where it's necessary to create additional Azure AD
app registrations on an ongoing basis after the application is deployed to
production.

Execute REST API calls as a service principal profile


Your application can execute REST API calls by using the identity of a service principal
profile. That means it can execute a sequence of REST API calls to provision and set up a
new customer tenant.

1. When a service principal profile creates a new workspace, Power BI automatically
adds that profile as a workspace admin.
2. When a service principal profile imports a Power BI Desktop file to create a dataset,
Power BI sets that profile as the dataset owner.
3. When a service principal profile sets data source credentials, Power BI sets that
profile as the owner of the data source credentials.

It's important to understand that a service principal has an identity in Power BI that's
separate and distinct from the identities of its profiles. That provides you with choice as
a developer. You can execute REST API calls by using the identity of a service principal
profile. Alternatively, you can execute REST API calls without a profile, which uses the
identity of the parent service principal.

We recommend that you execute REST API calls as the parent service principal when
you're creating, viewing, or deleting service principal profiles. You should use the service
principal profile to execute all other REST API calls. These other calls can create
workspaces, import Power BI Desktop files, update dataset parameters, and set data
source credentials. They can also retrieve workspace item metadata and generate
embed tokens.

Consider an example where you need to set up a customer tenant for a customer
named Contoso. The first step makes a REST API call to create a service principal profile
with its display name set to Contoso. This call is made by using the identity of the service
principal. All remaining set up steps use the service principal profile to complete the
following tasks:

1. Create a workspace.
2. Associate the workspace with a capacity.
3. Import a Power BI Desktop file.
4. Set dataset parameters.
5. Set data source credentials.
6. Set up scheduled data refresh.

It's important to understand that access to the workspace and its content must be done
by using the identity of the service principal profile that was used to create the customer
tenant. It's also important to understand that the parent service principal doesn't need
access to the workspace or its content.
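
As a hedged sketch, once the client is configured to call as the profile (the header
mechanics are shown later in this article), retrieving content and generating an embed
token looks like ordinary SDK calls. Here, workspaceId is assumed to point at the
customer tenant's workspace.

C#

// Retrieve the reports in the customer tenant's workspace as the profile
var reports = pbiClient.Reports.GetReportsInGroup(workspaceId);
var report = reports.Value[0];

// Generate an embed token for the report by using the profile's identity
var embedToken = pbiClient.Reports.GenerateTokenInGroup(
    workspaceId, report.Id, new GenerateTokenRequest(accessLevel: "View"));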

 Tip

Remember: When making REST API calls, use the service principal to create and
manage service principal profiles, and use the service principal profile to create, set
up, and access Power BI content.

Use the Profiles REST API operations


The Profiles REST API operation group comprises operations that create and manage
service principal profiles:

Create Profile
Delete Profile
Get Profile
Get Profiles
Update Profile

Create a service principal profile

Use the Create Profile REST API operation to create a service principal profile. You must
set the displayName property in the request body to provide a display name for the new
tenant. The value must be unique across all the profiles owned by the service principal.
The call will fail if another profile with that display name already exists for the service
principal.

A successful call returns the id property, which is a GUID that represents the profile.
When you develop applications that use service principal profiles, we recommend that
you store profile display names and their ID values in a custom database. That way, it's
straightforward for your application to retrieve the IDs.

If you're programming with the Power BI .NET SDK , you can use the
Profiles.CreateProfile method, which returns a ServicePrincipalProfile object
representing the new profile. It makes it straightforward to determine the id property
value.

Here's an example of creating a service principal profile and granting it workspace
access.

C#

// Create a service principal profile
string profileName = "Contoso";
var createRequest = new CreateOrUpdateProfileRequest(profileName);
var profile = pbiClient.Profiles.CreateProfile(createRequest);

// Retrieve the ID of the new profile
Guid profileId = profile.Id;

// Grant workspace access
var groupUser = new GroupUser {
    GroupUserAccessRight = "Admin",
    PrincipalType = "App",
    Identifier = ServicePrincipalId,
    Profile = new ServicePrincipalProfile {
        Id = profileId
    }
};

pbiClient.Groups.AddGroupUser(workspaceId, groupUser);

In the Power BI service, in the workspace Access pane, you can determine which
identities, including security principals, have access.

Delete a service principal profile


Use the Delete Profile REST API operation to delete a service principal profile. This
operation can only be called by the parent service principal.

If you're programming with the Power BI .NET SDK, you can use the
Profiles.DeleteProfile method.

Retrieve all service principal profiles


Use the Get Profiles REST API operation to retrieve a list of service principal profiles that
belong to the calling service principal. This operation returns a JSON payload that
contains the id and displayName properties of each service principal profile.

If you're programming with the Power BI .NET SDK, you can use the Profiles.GetProfiles
method.
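
As a brief, hedged example that combines the Get Profiles and Delete Profile operations
(assuming, as with other SDK list operations, that the result exposes a Value collection),
an application running as the parent service principal can enumerate its profiles and
remove one when a customer tenant is decommissioned:

C#

// List all profiles that belong to the calling service principal
var profiles = pbiClient.Profiles.GetProfiles();
foreach (var profile in profiles.Value)
{
    Console.WriteLine($"{profile.Id} - {profile.DisplayName}");
}

// Delete a profile that's no longer needed; the ID is an illustrative placeholder
Guid retiredProfileId = new Guid("44444444-4444-4444-4444-444444444444");
pbiClient.Profiles.DeleteProfile(retiredProfileId);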

Execute REST API calls by using a service principal profile


There are two requirements for making REST API calls by using a service principal
profile:

You must pass the access token for the parent service principal in the
Authorization header.
You must include a header named X-PowerBI-profile-id with the value of the
service principal profile's ID.
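
If you call the REST API directly rather than through the SDK, here's a minimal sketch of
meeting both requirements with HttpClient. The access token and profile ID are
placeholders you supply.

C#

using System.Net.Http;
using System.Net.Http.Headers;

var client = new HttpClient();

// Access token acquired from Azure AD for the parent service principal (placeholder)
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token-for-service-principal>");

// The ID of the service principal profile to execute the call as
client.DefaultRequestHeaders.Add("X-PowerBI-profile-id",
    "11111111-1111-1111-1111-111111111111");

// This GET returns only the workspaces that the profile can access
HttpResponseMessage response =
    await client.GetAsync("https://api.powerbi.com/v1.0/myorg/groups");
string json = await response.Content.ReadAsStringAsync();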

If you're using the Power BI .NET SDK, you can set the X-PowerBI-profile-id header
value explicitly by passing in the service principal profile's ID.

C#

// Create the Power BI client
var tokenCredentials = new TokenCredentials(GetAccessToken(), "Bearer");
var uriPowerBiServiceApiRoot = new Uri("https://api.powerbi.com/");
var pbiClient = new PowerBIClient(uriPowerBiServiceApiRoot, tokenCredentials);

// Add X-PowerBI-profile-id header for the service principal profile
string profileId = "11111111-1111-1111-1111-111111111111";
pbiClient.HttpClient.DefaultRequestHeaders.Add("X-PowerBI-profile-id", profileId);

// Retrieve workspaces by using the identity of the service principal profile
var workspaces = pbiClient.Groups.GetGroups();

As shown in the above example, once you add the X-PowerBI-profile-id header to the
PowerBIClient object, it's straightforward to invoke methods, such as Groups.GetGroups,
so they'll be executed by using the service principal profile.

There's a more convenient way to set the X-PowerBI-profile-id header for a
PowerBIClient object. You can initialize the object by passing in the profile's ID to the
constructor.

C#
// Create the Power BI client, passing the profile's ID to the constructor
Guid profileId = new Guid("11111111-1111-1111-1111-111111111111");

var tokenCredentials = new TokenCredentials(GetAccessToken(), "Bearer");
var uriPowerBiServiceApiRoot = new Uri("https://api.powerbi.com/");
var pbiClient = new PowerBIClient(uriPowerBiServiceApiRoot, tokenCredentials, profileId);

As you program a multitenancy application, it's likely that you'll need to switch between
executing calls as the parent service principal and executing calls as a service principal
profile. A useful approach to manage context switching is to declare a class-level
variable that stores the PowerBIClient object. You can then create a helper method that
sets the variable with the correct object.

C#

// Class-level variable that stores the PowerBIClient object
private PowerBIClient pbiClient;

// Helper method that sets the correct PowerBIClient object.
// GetPowerBIClient and GetPowerBIClientForProfile are helper methods defined
// elsewhere in the application that construct the client without and with the
// profile's X-PowerBI-profile-id context, respectively.
private void SetCallingContext(string profileId = "") {
    if (profileId.Equals("")) {
        pbiClient = GetPowerBIClient();
    }
    else {
        pbiClient = GetPowerBIClientForProfile(new Guid(profileId));
    }
}

When you need to create or manage a service principal profile, you can call the
SetCallingContext method without any parameters. This way, you can create and
manage profiles by using the identity of the service principal.

C#

// Always create and manage profiles as the service principal
SetCallingContext();

// Create a service principal profile
string profileName = "Contoso";
var createRequest = new CreateOrUpdateProfileRequest(profileName);
var profile = pbiClient.Profiles.CreateProfile(createRequest);

When you need to create and set up a workspace for a new customer tenant, you want
to execute that code as a service principal profile. Therefore, you should call the
SetCallingContext method by passing in the profile's ID. This way, you can create the
workspace by using the identity of the service principal profile.

C#

// Always create and set up workspaces as a service principal profile
string profileId = "11111111-1111-1111-1111-111111111111";
SetCallingContext(profileId);

// Create a workspace
GroupCreationRequest request = new GroupCreationRequest(workspaceName);
Group workspace = pbiClient.Groups.CreateGroup(request);

After you've used a specific service principal profile to create and configure a workspace,
you should continue to use that same profile to create and set up the workspace
content. There's no need to invoke the SetCallingContext method to complete the
setup.
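
Continuing the example, here's a hedged sketch of finishing the setup while the calling
context is still the profile. The dataset ID and the CustomerDatabase parameter name are
illustrative placeholders; the dataset ID would come from the import that created the
dataset.

C#

// Still calling as the service principal profile that created the workspace
string datasetId = "55555555-5555-5555-5555-555555555555";

// Update a dataset parameter so the dataset points at this customer's database
var parameterUpdates = new UpdateMashupParametersRequest(
    new List<UpdateMashupParameterDetails>
    {
        new UpdateMashupParameterDetails { Name = "CustomerDatabase", NewValue = "SalesDb-Contoso" }
    });
pbiClient.Datasets.UpdateParametersInGroup(workspace.Id, datasetId, parameterUpdates);

// Start an on-demand refresh so the dataset loads the customer's data
pbiClient.Datasets.RefreshDatasetInGroup(workspace.Id, datasetId);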

Developer sample
We encourage you to download a sample application named
AppOwnsDataMultiTenant .

This sample application was developed by using .NET 6 and ASP.NET, and it
demonstrates how to apply the guidance and recommendations described in this article.
You can review the code to learn how to develop a multitenancy application that
implements the Embed for your customer scenario by using service principal profiles.

Next steps
For more information about this article, check out the following resources:

Service principal profiles for multitenancy apps in Power BI Embedded


Migrate multi-customer applications to the service principal profiles model
Profiles Power BI REST API operation group
AppOwnsDataMultiTenant sample application
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Plan translation for multiple-language
reports in Power BI
Article • 07/21/2023

When it comes to localizing Power BI artifacts, such as datasets and reports, there are
three types of translations.

Metadata translation
Report label translation
Data translation

In this article, learn about these types.

Metadata translation
Metadata translation provides localized values for dataset object properties. The object
types that support metadata translation include tables, columns, measures, hierarchies,
and hierarchy levels. Metadata translation rarely provides a complete solution by itself.

The following screenshot shows how metadata translations provide German names for
the measures displayed in Card visuals.

Metadata translation is also used to display column names and measure names in tables
and matrices.

Metadata translations are the easiest to create, manage, and integrate into a Power BI
report. By applying the features of Translations Builder to generate machine translations,
you can add the metadata translations you need to build and test a Power BI report.
Adding metadata translations to your dataset is an essential first step. For more
information, see Create multiple-language reports with Translations Builder.

Power BI support for metadata translation


Metadata translation is the main localization feature in Power BI to build multiple-
language reports. In Power BI, metadata translation support is integrated at the dataset
level.

A metadata translation represents the property for a dataset object that's been
translated for a specific language. If your dataset contains a table with an English name
of Products, you can add translations for the Caption property of this table object to
provide alternative names. These names appear when the report is rendered in a
different language.

In addition to the Caption property, which tracks an object's display name, dataset
objects also support adding metadata translations for two other properties, which are
Description and DisplayFolder.

When you begin designing a dataset that uses metadata translation, you can assume
that you always need translations for the Caption property. If you require support for
metadata translation for report authors who create and edit reports in the Power BI
service, you also need to provide metadata translations for the Description and
DisplayFolder properties.

Power BI reports and datasets that support metadata translation can only run in
workspaces that are associated with a dedicated capacity created using Power BI
Premium or the Power BI Embedded Service. Multiple-language reports don't load
correctly when launched from a workspace in the shared capacity. If you're working in a
Power BI workspace that doesn't display a diamond that indicates a Premium
workspace, multiple-language reports might not work as expected.

Power BI support for metadata translations only applies to datasets. Power BI Desktop
and the Power BI service don't support storing or loading translations for text values
stored as part of the report layout.

If you add a textbox or button to a Power BI report and then add a hard-coded text
value for a string displayed to the user, that text value is stored in the report layout. It
can't be localized. Avoid using hard-coded text values. Page tab display names can't be
localized. You can design multiple-language reports so that page tabs are hidden and
never displayed to the user.

Report label translation


Report label translation provides localized values for text elements on a report that
aren't directly associated with a dataset object. Examples of report labels include the
report title, section headings, and button captions. Here are examples of report label
translations with the report title and the captions of navigation buttons.

Report label translations are harder to create and manage than metadata translations
because Power BI provides no built-in feature to track or integrate them. Translations
Builder solves this problem using a Localized Labels table, which is a hidden table in the
dataset of a report. Add measures that track the required translations for each report
label.

Data translation
Data translation provides translated values for text-based columns in the underlying
data itself. Suppose a Power BI report displays product names imported from the rows
of the Products table in an underlying database. Data translation is used to display
product names differently for users who speak different languages. For example, some
users see product names in English while other users see product names in other
languages.

Data translations also appear in the axes of cartesian visuals and in legends.

Data translation is harder to design and implement than the other two types of
translation. You must redesign the underlying data source with extra text columns for
secondary language translations. Once the underlying data source has been extended
with extra text columns, you can then use a powerful feature in Power BI Desktop called
Field Parameters. This feature uses filters to control loading the data translations for a
specific language.

A multiple-language report typically requires both metadata translations and report
label translations. Some multiple-language projects require data translations, but others
don't.

Next steps
Use locale values in multiple-language Power BI reports
Use locale values in multiple-language
Power BI reports
Article • 07/21/2023

Every report that loads in the Power BI service initializes with a user context that
identifies a language and a geographical region known as a locale. In most cases, a
locale identifies a country/region. The Power BI service tracks the combination of the
user's language and locale using a culture name.

A culture name is usually a lower-case language identifier and an upper-case locale
identifier separated by a hyphen. The culture name en-US identifies a user in the United
States who speaks English. A culture name of es-ES identifies a user in Spain who
speaks Spanish. A culture name of fr-FR identifies a user in France who speaks French.
A culture name of de-DE identifies a user in Germany who speaks German.

USERCULTURE Language Locale

en-US English United States

es-ES Spanish Spain

fr-FR French France

de-DE German Germany

7 Note

In some cases, a culture name also includes other information. For example, there
are two different culture names for the language Serbian in Serbia, which are sr-
Cyrl-RS and sr-Latn-RS . The part in the middle known as the script (Cyrl and Latn)

indicates whether to use the Cyrillic alphabet or the Latin alphabet. For more
information, see RFC 4646 .

For a list of culture name values, see ISO 639 Language codes and Online Browsing
Platform .

Organize project for metadata translation


At the start of a project that involves creating a new Power BI dataset with metadata
translation, list the culture names that you plan to support. Next, extend the dataset by
adding metadata translations for each culture name.

The following diagram shows a dataset that has a default language setting of en-US .
The dataset has been extended with metadata translations for three other culture
names: es-ES , fr-FR , and de-DE .

Every metadata translation is associated with a specific culture name. Culture names act
as lookup keys that are used to add and retrieve metadata translations within the
context of a Power BI dataset.

You don't need to supply metadata translations for dataset's default language. Power BI
can just use the dataset object names directly for that culture name. One way to think
about this is that the dataset object names act as a virtual set of metadata translations
for the default language.

It's possible to explicitly add metadata translation for the default language. Use this
approach sparingly. Power BI Desktop doesn't support loading metadata translations in
its report designer. Instead, Power BI Desktop only loads dataset object names. If you
explicitly add metadata translations for the default language, Power BI reports look
different in Power BI Desktop than they do in the Power BI service.

Load a report in Power BI


When a user navigates to a Power BI report with an HTTP GET request, the browser
transmits an HTTP header named Accept-Language with a value set to a valid culture
name. The following screenshot shows a GET request that transmits an Accept-Language
header value of en-US .

When the Power BI service loads a report, it reads the culture name passed in the
Accept-Language header and uses it to initialize the language and locale of the report

loading context. On their devices, users can control which culture name is passed in the
Accept-Language header value by configuring regional settings.

When you open a Power BI report in the Power BI service, you can override the Accept-
Language header value by adding the language parameter at the end of the report URL
and setting its value to a valid culture name. For example, you can test loading a report
for a user in Canada who speaks French by setting the language parameter value to fr-
CA .

7 Note

Adding the language parameter to report URLs provides a convenient way to test
metadata translations in the Power BI service. This technique doesn't require you to
reconfigure any settings on your local machine or in your browser.

Support multiple locales for a single language


You might need to support multiple locales for a single spoken language. Consider a
scenario with users who speak French but live in different countries, such as France,
Belgium, and Canada. You publish a dataset with a default language of en-US and
metadata translations for three more culture names including es-ES , fr-FR , and de-DE .
What happens when a French-speaking Canadian user opens a report with an Accept-
Language header value of fr-CA ? Does the Power BI service load translations for French
( fr-FR ) or does it fall back on the English dataset object names?

Measures currently act differently than tables and columns in Power BI. With measures,
the Power BI service attempts to find the closest match. For the culture name of fr-CA ,
the names of measures would load using the metadata translations for fr-FR .

With tables and columns, the Power BI service requires an exact match between the
culture name in the request and the supported metadata translations. If there isn't an
exact match, the Power BI service falls back to loading dataset object names. The names
of tables and columns in this scenario would load using English dataset object names.

7 Note

This use of the default language for the names of tables and columns is a known
issue for Power BI.

We recommend that you add metadata translation for any culture name you want to
support. In this example, add three sets of French translations to support the culture
names of fr-FR , fr-BE , and fr-CA . This approach handles the scenario where the French
translations for users in France are different from French translations for users in
Canada.


Implement translations using measures and
USERCULTURE
Another feature in Power BI that helps with building multiple-language reports is the
Data Analysis Expressions (DAX) USERCULTURE function. When called inside a measure,
the USERCULTURE function returns the culture name of the current report loading context.
This approach makes it possible to write DAX logic in measures that implement
translations dynamically.

You can implement translations dynamically by calling USERCULTURE in a measure, but
you can't achieve the same result with calculated tables or calculated columns. The DAX
expressions for calculated tables and calculated columns get evaluated at dataset load
time. If you call the USERCULTURE function in the DAX expression for a calculated table or
calculated column, it returns the culture name of the dataset's default language. Calling
USERCULTURE in a measure returns the culture name for the current user.

The example report displays the USERCULTURE return value in the upper right corner of
the report banner. You don't typically display a report element like this in a real
application.

This code is a simple example of writing a DAX expression for a measure that
implements dynamic translations. You can use a SWITCH statement that calls
USERCULTURE to form a basic pattern for implementing dynamic translations.

DAX

Product Sales Report Label = SWITCH( USERCULTURE(),
    "es-ES", "Informe De Ventas De Productos",
    "fr-FR", "Rapport Sur Les Ventes De Produits",
    "fr-BE", "Rapport Sur Les Ventes De Produits",
    "fr-CA", "Rapport Sur Les Ventes De Produits",
    "de-DE", "Produktverkaufsbericht",
    "Product Sales Report"
)

For more information, see Learn DAX basics in Power BI Desktop.

Format dates and numbers with current user locale
You can translate dynamically by writing a DAX expression in a measure with conditional
logic based on the user's culture name. In most cases, you aren't required to write
conditional DAX logic based on the user's locale because Power BI visuals automatically
handle locale-specific formatting behind the scenes.

In a simple scenario, you build a report for an audience of report consumers that live in
both New York ( en-US ) and in London ( en-GB ). All users speak English ( en ), but some
live in different regions ( US and GB ) where dates and numbers are formatted differently.
For example, a user from New York wants to see dates in a mm/dd/yyyy format while a
user from London wants to see dates in a dd/mm/yyyy format.

Everything works as expected if you configure columns and measures using
format strings that support regional formatting. If you're formatting a date, we
recommend that you use a format string such as Short Date or Long Date because they
support regional formatting.

Here are a few examples of how a date value formatted with Short Date appears when
loaded under different locales.

Locale Format

en-US 12/31/2022

en-GB 31/12/2022

pt-PT 31-12-2022

de-DE 31.12.2022

ja-JP 2022/12/31

Next steps
Create multiple-language reports with Translations Builder
Use best practices to localize Power BI
reports
Article • 07/31/2023

When it comes to localizing software, there are some universal principles to keep in
mind. The first is to plan for localization from the start of any project. It's harder to add
localization support to an existing dataset or report that was initially built without any
regard for internationalization or localization.

This fact is especially true with Power BI reports because there are so many popular
design techniques that don't support localization. Much of the work for adding
localization support to existing Power BI reports involves undoing things that don't
support localization. Only after that work can you move forward with design techniques
that do support localization.

Package a dataset and reports in project files


Before you proceed, you need to decide how to package your dataset definitions and
report layouts for distribution. There are two popular approaches used by content
creators who work with Power BI Desktop.

Single .pbix project file
Multiple project files with a shared dataset

For adding multiple-language support to a Power BI solution, choose either of these
approaches.

Single project file


You can package both a report layout and its underlying dataset definition together.
Deploy a reporting solution like this by publishing the project into a Power BI service
workspace. If you need to update either the report layout or the dataset definition,
upgrade by publishing an updated version of the .pbix project file.
Shared dataset
The single project file approach doesn't always provide the flexibility you need. Maybe
one team is responsible for creating and updating datasets while other teams are
responsible for building reports. It might make sense to share a dataset with reports in
separate .pbix project files.

To use the shared dataset approach, create one .pbix project file with a dataset and an
empty report, which remains unused. After this dataset has been deployed to the Power
BI service, report builders can connect to it using Power BI Desktop to create report-only
.pbix files.

This approach makes it possible for the teams building reports to build .pbix project files
with report layouts that can be deployed and updated independently of the underlying
dataset. For more information, see Connect to datasets.

Account for text size


Another important concept in localization is to plan for growth. A label that's 400 pixels
wide when displayed in English could require a greater width when translated into
another language. If you optimize the width of your labels for text in English, you might
find that translations in other languages introduce unexpected line breaks or get cut off.
These effects compromise the user experience.

Adding a healthy degree of padding to localized labels is the norm when developing
internationalized software. It's essential that you test your reports with each language
you plan to support. You need to be sure that your report layouts look the way you
expect with any language you choose to support.

Next steps
Create multiple-language reports with Translations Builder
Create multiple-language reports with
Translations Builder
Article • 07/31/2023

Content creators can use Translations Builder to add multiple-language support to .pbix
project files in Power BI Desktop. The following screenshot shows what Translations
Builder looks like when working with a simple .pbix project that supports a few
secondary languages.

Translations Builder is an external tool developed for Power BI Desktop using C#, .NET 6,
and Windows Forms. Translations Builder uses an API called Tabular Object Model (TOM)
to update datasets that are loaded into memory and run in a session of Power BI
Desktop.

Translations Builder does most of its work by adding and updating the metadata
translations associated with datasets objects including tables, columns, and measures.
There are also cases in which Translations Builder creates new tables in a dataset to
implement strategies to handle aspects of building multiple-language reports.

When you open a .pbix project in Power BI Desktop, the dataset defined inside the .pbix
file is loaded into memory in a local session of the Analysis Services engine. Translations
Builder uses TOM to establish a direct connection to the dataset of the current .pbix
project.

Open Translations Builder


If you don't already have Power BI Desktop installed, see Get Power BI Desktop.

On the same computer where you run Power BI Desktop, download and install
Translations Builder by using the Translations Builder Installation Guide .

After you install Translations Builder, you can open it directly from Power BI Desktop in
the External Tools ribbon. The Translations Builder project uses external tools
integration support. For more information, see External tools in Power BI Desktop.

When you launch an external tool like Translations Builder, Power BI Desktop passes
startup parameters to the application, including a connection string. Translations Builder
uses the connection string to establish a connection back to the dataset that's loaded in
Power BI Desktop.

This approach allows Translations Builder to display dataset information and to provide
commands to automate adding metadata translations. For more information, see
Translations Builder Developers Guide .
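
To give a rough idea of what that connection looks like, here's a hedged sketch using the
Tabular Object Model from the Microsoft.AnalysisServices NuGet packages. The server
and port are placeholders for the values that Power BI Desktop passes to external tools at
startup.

C#

using Microsoft.AnalysisServices.Tabular;

// Power BI Desktop passes the local Analysis Services instance (server:port)
// to external tools at startup; "localhost:12345" is a placeholder
var server = new Server();
server.Connect("localhost:12345");

// A .pbix session hosts a single database; get its model
Database database = server.Databases[0];
Model model = database.Model;

Console.WriteLine($"Connected to: {database.Name}");
Console.WriteLine($"Default culture: {model.Culture}");
Console.WriteLine($"Tables: {model.Tables.Count}, Cultures: {model.Cultures.Count}");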

Translations Builder allows a content creator to view, add, and update metadata
translations using a two-dimensional grid. This translations grid simplifies the user
experience because it abstracts away the low-level details of reading and writing
metadata translation associated with a dataset definition. You work with metadata
translations in the translation grid similar to working with data in an Excel spreadsheet.

Next steps
Add a language to a report in Translations Builder
Add a language to a report in
Translations Builder
Article • 07/31/2023

When you open a .pbix project in Translations Builder for the first time, the translation
grid displays a row for each unhidden table, measure, and column in the project's
underlying data model. The translation grid doesn't display rows for dataset objects in
the data model that are hidden from the report view. Hidden objects aren't displayed on
a report and don't require translations.

The following screenshot shows the starting point for a simple data model before it's
been modified to support secondary languages.

7 Note

If you haven't installed Translations Builder yet, see Create multiple-language
reports with Translations Builder.

If you examine the translation grid for this .pbix project, you can see that the first three
columns are read-only and are used to identify each metadata translation. Each
metadata translation has an Object Type, a Property, and a Name. Translations for the
Caption property are always used. You can add translations for the Description and
DisplayFolder properties if necessary.
The fourth column in the translation grid always displays the translations for the
dataset's default language and locale, which in this case is English [en-US].

7 Note

Translations Builder makes it possible to update the translations for the default
language. Use this technique sparingly. It can be confusing because translations for
the default language don't load in Power BI Desktop.

Add languages
Translations Builder provides an Add Language option to add secondary languages to
the project's data model.

Translations Builder doesn't add metadata translations for a specific language. Instead, it
adds metadata translations for a culture name that identifies both a language and a
locale. For more information, see Use locale values in multiple-language Power BI
reports.

Translations Builder abstracts away the differences between a language and a culture
name to simplify the user experience. Content creators can think in terms of languages
instead of culture names.
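
Under the hood, adding a language amounts to adding a Culture object to the model and
then attaching translations to it. Here's a hedged Tabular Object Model sketch, assuming
model is the Model connected from the current Power BI Desktop session and that the
dataset has a Products table with a Product column.

C#

// Add a culture for Spanish (Spain) if the model doesn't have one yet
if (!model.Cultures.Contains("es-ES"))
{
    model.Cultures.Add(new Culture { Name = "es-ES" });
}

// Register a Caption translation for a column in that culture
Culture spanish = model.Cultures["es-ES"];
Column productColumn = model.Tables["Products"].Columns["Product"];
spanish.ObjectTranslations.SetTranslation(
    productColumn, TranslatedProperty.Caption, "Producto");

// Push the in-memory changes back to the dataset loaded in Power BI Desktop
model.SaveChanges();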

To add one or more secondary languages, follow these steps.

1. Select Add Language to display the Add Language dialog box.

2. Select a language in the list or use Ctrl to select multiple languages.

3. Select Add Language.


The added language or languages now appear in the Secondary Languages list.

4. In Power BI Desktop, select Save.

) Important
Translations Builder can modify the dataset loaded in memory, but it can't
save the in-memory changes back to the underlying .pbix file. Always return
to Power BI Desktop and select the Save command after you add languages
or create or update translations.

Adding a new language adds a new column of editable cells to the translations grid.

If content creators speak all the languages involved, they can add and update
translations for secondary languages directly in the translation grid with an Excel-like
editing experience.

Test translations in the Power BI service


You can't verify your multiple-language work in Power BI Desktop. Instead, you must
test your work in the Power BI service in a workspace associated with a Premium
capacity. After you add translation support with Translations Builder, follow these steps:

1. In Power BI Desktop, save changes to the underlying .pbix file.

2. In the Home ribbon, select Publish.


3. In the Publish to Power BI dialog box, highlight a workspace and then choose
Select.

4. When the publishing finishes, select the link to open the project in the Power BI
service.

After the report loads with its default language, select the browser address bar and add
the following language parameter to the report URL.

HTTP

?language=es-ES

When you add the language parameter to the end of the report URL, assign a value that
is a valid culture name. After you add the language parameter and press Enter, you can
verify that the parameter has been accepted by the browser as it reloads the report.

If you forget to add the question mark (?) or if you don't format the language
parameter correctly, the browser rejects the parameter and removes it from the URL.
After you correctly load a report using a language parameter value of es-ES, you should
see the user experience for the entire Power BI service UI change from English to
Spanish.

The report also displays the Spanish translations for the names of columns and
measures.

Implement multiple-language workflow


After you test your work and verify that the translations are working properly, you can
store the .pbix file in a source control system such as GitHub or Azure Repos. This
approach can be part of an application lifecycle management (ALM) strategy where
support for secondary languages and translations evolves over time.

As you begin to work with secondary languages and translations to localize a .pbix
project, follow the same set of steps:

1. Make changes in Power BI Desktop.


2. Publish the .pbix project to the Power BI service.
3. Test your work with a browser in the Power BI service using the language
parameter.
4. Repeat these steps until you complete all the translations.

Embed Power BI reports using a specific language and locale
If you're developing with Power BI embedding, you can use the Power BI JavaScript API
to load reports with a specific language and locale. This task is accomplished by
extending the config object passed to powerbi.embed with a localeSettings object that
contains a language property as shown in the following code.

JavaScript

let config = {
type: "report",
id: reportId,
embedUrl: embedUrl,
accessToken: embedToken,
tokenType: models.TokenType.Embed,
localeSettings: { language: "de-DE" }
};

let report = powerbi.embed(reportContainer, config);

Next steps
Add a Localized Labels table to a Power BI report
Add a Localized Labels table to a Power
BI report
Article • 07/31/2023

Report label translations provide localized values for text elements on a report that
aren't directly associated with a dataset object. Examples of report labels are the text
values for report titles, section headings, and button captions. Power BI provides no
built-in features to track or integrate report labels. Translations Builder uses Localized
Labels tables to support this approach.

7 Note

If you haven't installed Translations Builder yet, see Create multiple-language
reports with Translations Builder.

Compare localized labels and hard-coded text


There are some design techniques for building datasets and reports with Power BI
Desktop that you should avoid when you build multiple-language reports. These
elements cause problems due to a lack of localization support:

Using text boxes or buttons with hard-coded text values.
Adding a hard-coded text value for the title of a visual.
Displaying page tabs to the user.

Any hard-coded text value that you add to the report layout can't be localized. Suppose
you add a column chart to your report. By default, a Cartesian visual such as a column
chart is assigned a dynamic value to its Title property. That value is based on the names
of the columns and measures that have been added into the data roles, such as Axis,
Legend, and Values.
The default Title property for a Cartesian visual is dynamically parsed together in a
fashion that supports localization. As long as you supply metadata translations for the
names of columns and measures in the underlying dataset definition, the Title property
of the visual uses the translations. So, if you translate Sales Revenue, Day, and Year, the
visual creates a localized title.

The following table shows how the default Title property of this visual is updated for
each of these languages.

Language Visual Title

English (en-US) Sales Revenue by Day and Year

Spanish (es-ES) Ingresos Por Ventas por Día y Año

French (fr-FR) Chiffre D'Affaires par Jour et Année

German (de-DE) Umsatz nach Tag und Jahr

You might not like the dynamically generated visual Title, but don't replace it with hard-
coded text. Any hard-coded text for the Title property can't be localized. Either leave the
visual Title property with its default value or use the Localized Labels table strategy to
create report labels that support localization.

Use the Localized Labels table strategy


The Power BI localization features are supported at the dataset level but not at the
report layout level. Using a Localized Labels table builds on the fact that Power BI
supports metadata translations for specific types of dataset objects, including measures.
When you add a report label by using Translations Builder, it automatically adds a new
measure to the Localized Labels table behind the scenes.

After a measure has been created for each report label, Power BI can store and manage
its translations in the same fashion that it does for metadata translations. In fact, the
Localized Labels table strategy uses metadata translations to implement report label
translations.

Translations Builder creates the Localized Labels table and adds a measure each time
you need a report label. The Localized Labels table is created as a hidden table. You can
do all the work to create and manage report labels inside the Translation Builder user
experience. There's no need to inspect or modify the Localized Labels table using the
Power BI Desktop Model or Data views.

Here's an example of the Localized Labels table from the example project. It provides
localized report labels for the report title, visual titles, and captions for navigation
buttons used throughout the report.

Create the Localized Labels table


You can create the Localized Labels table for a .pbix project:

1. From the Generate Translated Tables menu, select Create Localized Labels Table.
2. An informational dialog box asks if you want more information about the
Localized Labels table strategy. Select Yes to review documentation or No to
proceed.

After you create the Localized Labels table, there are three sample report labels as
shown in the following screenshot. In most cases, you want to delete these sample
report labels and replace them with the actual report labels required on the current
project.

There's no need to interact with the Localized Labels table in Power BI Desktop. You can
add and manage all the report labels you need in Translations Builder.

Populate the Localized Labels table


To create your first report label, follow these steps:

1. From the Generate Translated Tables menu, select Add Labels to the Localized
Labels Table. You can also run the command using the shortcut key Ctrl+A.
2. Add report labels one at a time by typing the text for the label. Then select Add
Label.

Alternatively, select Advanced Mode to add labels as a batch.


After you add the report labels to your .pbix project, they appear in the translation grid.
Now you can add and edit localized label translations just like any other type of
translation in the translation grid.

About the Localized Labels table


Translations Builder only populates the translation grid with dataset objects that aren't
hidden from Report View. The measures in the Localized Labels table are hidden from
Report View and they provide the one exception to the rule that excludes hidden
objects from being displayed in the translation grid.

In the Localized Labels table strategy, you can create, manage, and store report labels in
the same .pbix project file that holds the metadata translations for the names of tables,
columns, and measures. The Localized Labels table strategy can merge metadata
translations and report label translations together in a unified experience in the
translation grid. There's no need to distinguish between metadata translations and
report label translations when it comes to editing translations or when using
Translations Builder features to generate machine translations.

There are other popular localization techniques that track report label translations in a
separate CSV file. While these techniques work, they aren't as streamlined. Report label
translations must be created separately and managed differently from the metadata
translations in a .pbix project. This strategy allows for report label translations and
metadata translations to be stored together and managed in the same way.

Generate the translated Localized Labels table


The Localized Labels table contains a measure with translations for each report label in
a .pbix project. The measures inside the Localized Labels table are hidden and aren't
intended to be used directly by report authors. Instead, the strategy is based on running
code to generate a second table. The Translated Localized Labels table has measures
that are meant to be used directly on a report page.

To create a Translated Localized Labels table, follow these steps.


In Translations Builder, from the Generate Translated Tables menu, select Generate
Translated Localized Labels Table.

The first time you generate the Translated Localized Labels table, Translations Builder
creates the table and populates it with measures. After that, generating the table deletes
all the measures in the Translated Localized Labels table and recreates them. This action
synchronizes all the report label translations between the Localized Labels table and the
Translated Localized Labels table.

Unlike the Localized Labels table, the Translated Localized Labels table isn't hidden in
the Report View. The table provides measures that are intended to be used as report
labels in a report. Here's how the Translated Localized Labels table appears to a report
author in the Data pane when the report is in the Report View in Power BI Desktop.

Every measure in the Translated Localized Labels table has a name that ends with the
word Label. That's because two measures in the same dataset can't have the same name.
Measure names must be unique on a project-wide basis. It's not
possible to create measures in the Translated Localized Labels table that have the same
name as the measures in the Localized Labels table.

If you examine the machine-generated Data Analysis Expressions (DAX) expressions for
measures in the Translated Localized Labels table, they're based on the same DAX
pattern shown in Implement translations using measures and USERCULTURE. This
pattern uses the DAX function USERCULTURE together with the SWITCH function to return
the best translation for the current user. This DAX pattern defaults to the dataset's
default language if no match is found.
You must run Generate Translated Localized Labels Table anytime you make changes to
the Localized Labels table.

Don't edit the DAX expressions for measures in the Translated Localized Labels table.
Any edits you make are lost because all the measures in this table are deleted and
recreated each time you generate the table.

Surface localized labels on a report page


Report labels are implemented as dynamic measures in the Translated Localized Labels
table. That fact makes them easy to surface in a Power BI report. For example, you can
add a Card visual to a report and then configure its Data role in the Visualizations pane
with a measure from the Translated Localized Labels table.


The example multiple-language project uses a Rectangle shape to display the localized
report label for the report title. You select the Rectangle shape and then configure its
Text property value in the Shape > Text section of the Format pane.

The Text property of a shape can be configured with a hard-coded string. You must
avoid hard-coding text values into the report layout when you create multiple-language
reports. To use a localized measure, follow these steps.

1. In Power BI Desktop, select the shape (in this example, a Rectangle).

2. Under Format, select Shape > Text. In the Text pane, select the fx button.

Power BI Desktop displays a dialog box that allows you to configure the Text
property of the Rectangle shape.

3. In the Text - Style - Text dialog box, expand the Translated Localized Labels table
and select any measure.

4. Select OK.

You can use the same technique to localize a visual Title using a measure from the
Translated Localized Labels table.

Next steps
Generate machine translations using Azure Translator Service
Generate machine translations using
Azure Translator Service
Article • 07/31/2023

One of the biggest challenges in building multiple-language reports is managing the
language translation process. You must ensure that the quality of translations is high. Be
sure that the translated names of tables, columns, measures, and labels don't lose their
meaning when translated to another language. In most cases, acquiring quality
translations requires human translators to create or at least review translations as part of
the multiple-language report development process.

While human translators are typically an essential part of the end-to-end process, it can
take a long time to send out translation files to a translation team and then to wait for
them to come back. With all the recent industry advances in AI, you can also generate
machine translations using a Web API that can be called directly from an external tool
such as Translations Builder. If you generate machine translations, you have something
to work with while waiting for a translation team to return their high-quality human
translations.

While machine translations aren't always guaranteed to be high quality, they do provide
value in the multiple-language report development process.

They can act as translation placeholders so you can begin your testing by loading
reports using secondary languages to see if there are layout issues or unexpected
line breaks.
They can provide human translators with a better starting point because they just
need to review and correct translations instead of creating every translation from
scratch.
They can be used to quickly add support for languages where there are legal
compliance issues and organizations are facing fines or litigation for
noncompliance.

Generate machine translations


Translations Builder generates machine translations by using Azure AI Translator. This
service makes it possible to automatically enumerate through dataset objects and
translate dataset object names from the default language into each secondary language.

To test the support in Translations Builder for generating machine translations, you need
a key for an instance of the Azure Translator Service. For more information about
obtaining a key, see What is Azure AI Translator?

7 Note

If you haven't installed Translations Builder yet, see Create multiple-language
reports with Translations Builder.

Translations Builder provides a Configuration Options dialog box where you can
configure the key and location to access the Azure Translator Service.

After you configure an Azure Translator Service Key, Translations Builder displays other
command buttons. These buttons generate translations for a single language at a time
or for all languages at once. There are also commands to generate machine translations
only for the translations that are currently empty.

Next steps
Add support for multiple-language page navigation
Add support for multiple-language
page navigation
Article • 07/31/2023

You can't display page tabs to the user in a multiple-language report because page tabs
in a Power BI report don't support localization. For localization, you must provide some
other means for users to navigate from page to page.

You can use a design technique where you add a navigation menu that uses buttons.
When the user selects a button, the button applies a bookmark to navigate to another
page. This section describes the process of building a navigation menu that supports
localization using measures from the Localized Labels table.

This article uses the multiple-language demo project and Power BI Desktop. You don't
need a Power BI license to start developing in Power BI Desktop. If you don't already
have Power BI Desktop installed, see Get Power BI Desktop.

Hide tabs
When you hide all but one of the tabs in your report, none of the tabs appear in the
published report. The report opens to the page of the unhidden tab. Even that tab isn't
displayed.

Start by hiding all but one of the tabs.

1. Open the report in Power BI Desktop.

2. For each tab that you hide, right-click and select Hide Page from the context
menu.

Create bookmarks
Each button uses a bookmark to take the reader to a page. For more information on
bookmarks, see Create page and bookmark navigators.

1. From the View ribbon, select Bookmarks to display the Bookmarks pane.

2. In the Bookmarks pane, create a set of bookmarks. Each bookmark navigates to a
specific page.

a. Select a tab, starting with Sales Summary, which serves as the landing page.

b. In Bookmarks, select Add.

c. Right-click the new bookmark and select Rename. Enter a bookmark name, such
as GoToSalesSummary.

d. Right-click the bookmark name and disable Data and Display. Enable Current
Page behavior.

e. Repeat these steps for each of the hidden tabs. When you're done, the Bookmarks
pane contains a bookmark for each page.
Configure buttons
The multiple-language demo project contains buttons for navigation. To learn more
about adding buttons, see Create buttons in Power BI reports.

1. Select a button at the top of the report, starting with Sales Summary.

2. Under Format, select Button > Action. Set Action to On.

3. Under Action, set Type to Bookmark and Bookmark to the relevant bookmark,
starting with GoToSalesSummary.

4. In the same way, configure each button in the navigation menu to apply a
bookmark to navigate to a specific page.

5. For each button, select Button > Style > Text and then select the fx button.

6. In the Text - State dialog box, from the Translated Localized Labels table, select
the entry that corresponds to that button. For instance, Time Slices Label for Time
Slices.

7. Select OK to save your selection.

The report now has no visible tabs when you publish it to the Power BI service. The
report opens to the Sales Summary page. Readers can move from page to page by
using the buttons, which are localized by using the Translated Localized Labels table.

Next steps
Guidance for Power BI
Enable workflows for human translation
in Power BI reports
Article • 08/04/2023

When you create multiple-language reports for Power BI, you can work quickly and
efficiently by using Translations Builder and by generating machine translations.
However, machine-generated translations alone are inadequate for many production
needs. You need to integrate human translators into the workflow.

Translations Builder uses a translation sheet, which is a .csv file that you export to send
to a translator. The human acting as a translator updates the translation sheet and then
returns it to you. You then import the revised sheet to integrate the changes into the
current dataset.

Translations Builder generates a file for a selected language using a special naming
format, for example, PbixProjectName-Translations-Spanish.csv. The name includes the
dataset name and the language for translation. Translations Builder saves the generated
translation sheet to a folder known as the Outbox.

Human translators can make edits to a translation sheet using Microsoft Excel. When
you receive an updated translation sheet from a translator, copy it to the Inbox folder.
From there, you can import it to integrate those updated translations back into the
dataset for the current project.

Configure import and export folders


By default, the Outbox and Inbox folder paths in Translations Builder point to the
Documents folder of the current user. To configure the folders used as targets for export
and import operations, follow these steps.

1. From the Dataset Connection menu, select Configure Settings to display the
Configuration Options dialog box.

2. Select the set buttons to update the settings for Translations Outbox Folder Path
and Translations Inbox Folder Path.

3. After you have configured paths, select Save Changes.


After you configure the folder paths for Outbox and Inbox, you can begin to export and
import translation sheets by using the Export/Import Translations options.

Next steps
Export translation sheets in Translations Builder
Export translation sheets in Translations
Builder
Article • 08/04/2023

When you use Translations Builder with external translators, you need to export a
translation sheet that contains the default language and empty cells or machine
generated translations. Translators update the .csv file and return it to you.

You can export the following translations sheets:

A translation sheet for a single language


Translation sheets for all supported languages
A translation sheet that contains all supported languages

Export translation sheet for a single language


1. In Translations Builder, under Export/Import Translations, select a language such
as German [de-DE].

2. Select Export Translations Sheet to generate a translation sheet for that language.

You can select Open Export in Excel to view the exported file immediately.
The result of the export operation is a .csv file in the Outbox directory. If you selected
Open Export in Excel, you also see the result in Excel.

Export translation sheets for all languages


You can export translation sheets for all the languages supported by your project at
once. Under Export/Import Translations, select Export All Translation Sheets.

 Tip

Don't select Open Export in Excel. That option opens all of the files in Excel.

Translations Builder generates the complete set of translation sheets to be sent to
translators.

Export all translations


You can export a single translation sheet that contains all the secondary languages and
translations that have been added to the current project. Under Export/Import
Translations, select Export All Translations.

Translations Builder generates a .csv file for the full translation sheet named
PbixProjectName-Translations-Master.csv. When you open the translations sheet in Excel,
you can see all secondary language columns and all translations. You can think of this
translation sheet as a backup of all translations on a project-wide basis.

Next steps
Import translation sheets in Translations Builder
Import translation sheets in Translations
Builder
Article • 08/04/2023

When you use Translations Builder with external translators, you need to export a
translation sheet that contains the default language and empty cells or machine
generated translations. Translators update the .csv file and return it to you.

Suppose you generate a translation sheet to send to a translator. The translator opens
the .csv file in Excel to make updates.

The job of the translator is to review all translations in the relevant column and to make
updates where appropriate. From the perspective of the translator, the top row with
column headers and the first four columns should be treated as read-only values.

Import translation sheet


When you receive the translation sheet back from the translator with updates, follow
these steps.

1. If you opened the translation sheet in Excel, close it before proceeding.

2. In Translations Builder, under Export/Import Translations, select Import
Translations.
3. In the Open dialog box, select the translation sheet file and select Open.

The Spanish translation updates now appear in the translation grid.

Import master translation sheet


The usual workflow is to import updated translation sheets that only contain translations
for a single language. However, you can also import a master translation sheet that has
multiple columns for secondary languages. This approach provides a way to back up
and restore the work you have done with translations on a project-wide basis. You can
also use this approach to reuse translations across multiple projects.

Here's a simple example. After you generate the current master translation sheet for a
project, imagine you delete French as a language from the project by right-clicking the
French [fr-FR] column header and selecting Delete This Language From Data Model.

When you attempt to delete the column for a language, Translations Builder prompts
you to confirm the deletion.

When you select OK to continue, the column for French is removed from the
translations grid. Behind the scenes, Translations Builder also deletes all the French
translations from the project.

Suppose that you deleted this language by mistake. You can use a previously generated
master translation sheet to restore the deleted language. If you import the translation
sheet, the French [fr-FR] column reappears as the last column.

Next steps
Manage dataset translations at the enterprise level
Manage dataset translations at the
enterprise level
Article • 08/04/2023

You can use a master translation sheet as a project backup. Translations Builder adds a
secondary language along with its translations to a .pbix project if it's found in the
translation sheet but not in the target project. For more information, see Import
translation sheets in Translations Builder.

You can also create an enterprise-level master translation sheet to import when you
create new .pbix projects.

Import translations
Imagine you have two .pbix projects that have a similar data model in terms of the
tables, columns, and measures. In the first project, you have already added metadata
translations for all the unhidden dataset objects. In the second project, you haven't yet
started to add secondary languages or translations. You can export the master
translation sheet from the first project and import it into the second project.

The Import Translations command starts by determining whether there are any
secondary languages in the translation sheet that aren't in the target .pbix project. It
adds any secondary languages not already present in the target project. After that, the
command moves down the translation sheet row by row.

For each row, Translations Builder determines whether a dataset object in the .csv file
matches a dataset object of the same name in the .pbix project. When it finds a match,
the command copies all the translations for that dataset object into the .pbix project. If it
finds no match, the command ignores that row and continues to the next row.

The Import Translations command provides special treatment for report labels that
have been added to the Localized Labels table. If you import a translation sheet with
one or more localized report labels into a new .pbix project, the command creates the
Localized Labels table.

Because the Import Translations command creates the Localized Labels table and
copies report labels into a target .pbix project, it can be the foundation for maintaining
an enterprise-level master translation sheet. Use a set of localized report labels across
multiple .pbix projects. Each time you create a new .pbix project, you can import the
enterprise-level translation sheet to add the generalized set of localized report labels.

Next steps
Guidance for Power BI
Implement a data translation strategy
Article • 08/09/2023

All multiple-language reports require metadata translation and report label translation,
but not necessarily data translation. To determine whether your project requires data
translation, think through the use cases that you plan to support. Using data translation
involves planning and effort. You might decide not to support data translation unless it's
a hard requirement for your project.

Implementing data translation is different from implementing metadata translation or
report label translation. Power BI doesn't offer any localization features to assist you
with data translation. Instead, you must implement a data translation strategy. Such a
strategy involves extending the underlying data source with extra columns to track
translations for text in rows of data, such as the names of products and categories.

Determine whether your solution requires data translation
To determine whether you need to implement data translation, start by thinking about
how to deploy your reporting solution. Think about the use case for its intended
audience. That exercise leads to a key question: Do you have people who use different
languages looking at the same database instance?

Suppose you're developing a report template for a software as a service (SaaS)
application with a well-known database schema. Some customers maintain their
database instance in English while others maintain their database instances in other
languages, such as Spanish or German. There's no need to implement data translations
in this use case because users of any given database instance view its data in a single
language.

Each customer deployment uses a single language for its database and all its users. Both
metadata translations and report label translations must be implemented in this use
case. You deploy a single version of the .pbix file across all customer deployments.
However, there's no need to implement data translations when no database instance
ever needs to be viewed in multiple languages.

A different use case introduces the requirement of data translations. The example .pbix
project file uses a single database instance that contains sales performance data across
several European countries/regions. This solution must display its reports in different
languages with data from a single database instance.

If you have people that use different languages and locales to interact with the same
database instance, you still need to address other considerations.

Examine the text-based columns that are candidates for translation. Determine
how hard translating those text values is. Columns with short text values, like
product names and product categories, are good candidates for data translations.
Columns that hold longer text values, such as product descriptions, require more
effort to generate high quality translations.

Consider the number of distinct values that require translation. You can easily
translate product names in a database that holds 100 products. You can probably
translate product names when the number gets up to 1000. What happens if the
number of translated values reaches 10,000 or 100,000? If you can't rely on
machine-generated translations, your translation team might have trouble scaling
up to handle that volume of human translations.

Consider whether there's on-going maintenance. Every time someone adds a new
record to the underlying database, there's the potential to introduce new text
values that require translation. This consideration doesn't apply to metadata
translation or report label translation. In those situations, you create a finite
number of translations and then your work is done. Metadata translation and
report label translation don't require on-going maintenance as long as the
underlying dataset schema and the report layout remain the same.

There are several factors that go into deciding whether to use data translation. You must
decide whether it's worth the time and effort required to implement data translation
properly. You might decide that implementing metadata translations and report label
translations goes far enough. If your primary goal is to make your reporting solution
compliant with laws or regulations, you might also find that implementing data
translations isn't a requirement.

Next steps
Extend the data source schema to support data translations
Extend the data source schema to
support data translations
Article • 08/09/2023

There are multiple ways to implement data translations in Power BI. Some data
translation strategies are better than others. Whatever approach you choose, make sure
that it scales in terms of performance. You should also ensure your strategy scales in
terms of the overhead required to add support for new secondary languages as part of
the on-going maintenance.

The current series of articles describes a strategy for implementing data translations
made possible by the Power BI Desktop feature called field parameters.

Modify the data source


Start by modifying the underlying data source. For example, the Products table can be
extended with extra columns with translated product names to support data
translations. In this case, the Products table has been extended with separate columns
for product name translations in English, Spanish, French, and German.

The design approach shown here uses a three-part naming convention for table column
names used to hold data translations. A name consists of the following parts:

The entity name, for instance, Product


The word Translation
The language name, for instance, Spanish

For example, the column that contains product names translated into Spanish is
ProductTranslationSpanish. Using this three-part naming convention isn't required for
implementing data translation, but Translations Builder gives these columns special
treatment.
Understand field parameters
A field parameter is a table in which each row represents a field, and each of these
fields must be defined as either a column or a measure. In one sense, a field parameter
is just a predefined set of fields. Because the rows of a table can be filtered, the set
of fields in a field parameter supports filtering. You can think of a field parameter as a
filterable set of fields.

When you create a field parameter, you can populate the fields collection using either
measures or columns.

When you use field parameters to implement data translations, use columns instead of
measures. The primary role that field parameters play in implementing data translations
is providing a single, unified field for use in report authoring that can be dynamically
switched between source columns.

Next steps
Implement data translation using field parameters
Implement data translation using field
parameters
Article • 08/09/2023

This article shows you how to implement data translation by using a field parameter.
The process has the following steps:

Create a field parameter


Use a slicer and data table
Edit translated names
Add a language ID column

Create a field parameter


1. To create a field parameter in Power BI Desktop, in Modeling, select New
parameter > Fields.

2. In the Parameters dialog box, enter the name Translated Product Names.

3. Populate the fields collection of this field parameter with the columns from the
Products table that contain the translated product names.
4. Be sure that Add slicer to this page is enabled.

5. Select Create.

After you create a field parameter, it appears in the Fields list on the right as a new
table. Under Data, select Translated Product Names to see the Data Analysis
Expressions (DAX) code that defines the field parameter.

Use a slicer and data table


1. Under Data, expand the Translated Product Names node. Then select the
Translated Product Names item. A table appears in the canvas.

You can see the table type under Visualizations and Translated Product Names as
the Columns value. Position both the slicer and the data table anywhere on the
canvas.

2. Select one item in the slicer, such as ProductTranslationSpanish. The table now
shows a single corresponding column.

Edit translated names


The column values for product names have been translated into Spanish. The column
header still displays the column name from the underlying data source, which is
ProductTranslationSpanish. That's because the column header values are hard-coded
into the DAX expression when Power BI Desktop creates the field parameter.

If you examine the DAX expression, the hard-coded column names from the underlying
data source appear, such as ProductTranslationEnglish and ProductTranslationSpanish.

DAX

Translated Product Names = {
    ("ProductTranslationEnglish", NAMEOF('Products'[ProductTranslationEnglish]), 0),
    ("ProductTranslationSpanish", NAMEOF('Products'[ProductTranslationSpanish]), 1),
    ("ProductTranslationFrench", NAMEOF('Products'[ProductTranslationFrench]), 2),
    ("ProductTranslationGerman", NAMEOF('Products'[ProductTranslationGerman]), 3)
}

Update the DAX expression to replace the column names with localized translations for
the word Product as shown in the following code.

DAX

Translated Product Names = {
    ("Product", NAMEOF('Products'[ProductTranslationEnglish]), 0),
    ("Producto", NAMEOF('Products'[ProductTranslationSpanish]), 1),
    ("Produit", NAMEOF('Products'[ProductTranslationFrench]), 2),
    ("Produkt", NAMEOF('Products'[ProductTranslationGerman]), 3)
}

When you make this change, the column header is translated along with product names.

Edit column names in the Data view


Up to this point, you've looked at the field parameter in Report view. Now open the
Data view. There you can see two more fields in the field parameter that are hidden in
Report view.

The names of the columns in a field parameter are generated based on the name you
give to the top-level field parameter. You should rename the columns to simplify the
data model and to improve readability.

1. To rename a column label, double-click the field. Rename Translated Product
Names to Product.

2. Rename the two hidden fields with shorter names, such as Fields and Sort Order.
Add a language ID column
The field parameter is a table with three columns named Product, Fields, and Sort
Order. The next step is to add a fourth column with a language identifier to enable
filtering by language. You can add the column by modifying the DAX expression for the
field parameter.

1. Add a fourth string value to the row for each language with the lowercase
two-character language identifier.

DAX

Translated Product Names = {
    ("Product", NAMEOF('Products'[ProductTranslationEnglish]), 0, "en"),
    ("Producto", NAMEOF('Products'[ProductTranslationSpanish]), 1, "es"),
    ("Produit", NAMEOF('Products'[ProductTranslationFrench]), 2, "fr"),
    ("Produkt", NAMEOF('Products'[ProductTranslationGerman]), 3, "de")
}

After you update the DAX expression with a language identifier for each language,
a new column appears in the Data view of the Products table named Value4.

2. Double-click the name Value4 and rename it to LanguageId.


3. Select LanguageId to highlight it. From the control ribbon, select Sort by column
> Sort Order.

You don't need to configure the sort column for the two pre-existing fields. Power
BI Desktop configured them when you set up the field parameter.

4. Open the Model view and, next to LanguageId select More options (three dots).
Select Hide in report view.

Report authors never need to see this column because it's used to select a language by
filtering behind the scenes.

In this article, you created a field parameter named Translated Product Names and
extended it with a column named LanguageId. The LanguageId column is used to filter
which source column is used. That action determines which language is displayed to
report consumers.

Next steps
Add the languages table to filter field parameters
Add the languages table to filter field
parameters
Article • 08/09/2023

As a content creator working with Power BI Desktop, there are many different ways to
add a new table to a data model. In this article, you use Power Query to create a table
named Languages.

Add the table


1. In Power BI Desktop, from the Home ribbon, select Transform data > Transform
data to open the Power Query Editor.

2. Under Queries, right-click and select New Query > Blank Query from the context
menu.

3. Select the new query. Under Query Settings > Properties > Name, enter
Languages as the name of the query.

4. From the Home ribbon, select Advanced Editor.

5. Copy the following M code into the editor, then select Done.

Power Query M

let
LanguagesTable = #table(type table [
Language = text,
LanguageId = text,
DefaultCulture = text,
SortOrder = number
], {
{"English", "en", "en-US", 1 },
{"Spanish", "es", "es-ES", 2 },
{"French", "fr", "fr-FR", 3 },
{"German", "de", "de-DE", 4 }
}),
SortedRows = Table.Sort(LanguagesTable,{{"SortOrder",
Order.Ascending}}),
QueryOutput = Table.TransformColumnTypes(SortedRows,{{"SortOrder",
Int64.Type}})
in
QueryOutput

When this query runs, it generates the Languages table with a row for each of the
four supported languages.

6. In the Home ribbon, select Close & Apply.

Create a relationship
Next, create a relationship between the Languages table and the Translated Product
Names table created in Implement data translation using field parameters.

1. In Power BI Desktop, open the Model view.

2. Find the Languages table and the Translated Product Names table.

3. Drag the LanguageId column from one table to the LanguageId entry in the other
table.

After you establish the relationship between Languages and Translated Product Names,
it serves as the foundation for filtering the field parameter on a report-wide basis. For
example, you can open the Filter pane and add the Language column from the
Languages table to the Filters on all pages section. If you configure this filter with the
Require single selection option, you can switch between languages using the Filter
pane.

Next steps
Synchronize multiple field parameters
Synchronize multiple field parameters
Article • 08/09/2023

A field parameter can support translations for a column in a multiple-language report in
Power BI. Most reports contain more than one column that requires data
translations. You must ensure the mechanism you use to select a language can be
synchronized across multiple field parameters. To test this approach with the
project in this series of articles, create a second field parameter to translate product
category names from the Products table.

Create a field parameter


1. In Power BI Desktop, in the Modeling ribbon, select New parameter > Fields.

2. In the Parameters dialog box, enter the name Translated Category Names.

3. Populate the fields with the columns from the Products table for the desired
languages.

4. Select Create.

5. Open the Data view. Select the table to view the Data Analysis Expressions (DAX)
code. Update the code to match the following code.

DAX

Translated Category Names = {
    ("Category", NAMEOF('Products'[CategoryTranslationEnglish]), 0, "en"),
    ("Categoría", NAMEOF('Products'[CategoryTranslationSpanish]), 1, "es"),
    ("Catégorie", NAMEOF('Products'[CategoryTranslationFrench]), 2, "fr"),
    ("Kategorie", NAMEOF('Products'[CategoryTranslationGerman]), 3, "de")
}

After you make your changes, the Category value is localized and there's a new
column.
6. Double-click Value4 and change the name to LanguageId.

Update the model


After you create the new field parameter, you need to update the model to use it.

1. In Power BI Desktop, open the Model view.

2. Locate the Translated Category Names table and the Languages table.

3. Drag LanguageId from Translated Category Names to the Languages table to
create a one-to-one relationship.

The language filter now affects categories.


You have now learned how to synchronize the selection of language across multiple
field parameters. This example involves two field parameters. If your project involves a
greater number of columns requiring data translations such as 10, 20 or even 50, you
can repeat this approach and scale up as much as you need.

7 Note

You can test your implementation of data translations in Power BI Desktop by
changing the filter on the Languages table. However, the other two types of
translations don't work correctly in Power BI Desktop. You have to test metadata
and report label translations in the Power BI service.

Next steps
Implement data translations for a calendar table
Implement data translations for a
calendar table
Article • 08/09/2023

If you're implementing data translations, you can add translation support for text-based
columns in calendar tables. These tables include translations for the names of months or
the days of the week. This approach allows you to create visuals that mention days or
months.

Translated versions make the visual easy to read in your supported languages.

The strategy in this article for calendar table column translations uses Power Query and
the M query language. Power Query provides built-in functions, such as Date.MonthName ,
which accept a Date parameter and return a text-based calendar name. If your .pbix
project has en-US as its default language and locale, the following Power Query function
call evaluates to a text-based value of January.

Power Query M

Date.MonthName( #date(2023, 1, 1) )

The Date.MonthName function accepts a second, optional string parameter to pass a
specific language and locale.
Power Query M

Date.MonthName( #date(2023, 1, 1), "en-US")

If you want to translate the month name into French, you can pass a text value of fr-FR.

Power Query M

Date.MonthName( #date(2022, 12, 1), "fr-FR")

Generate calendar translation table


Look at the Languages table used in previous examples. It includes a DefaultCulture
column.

Power Query is built on a functional query language named M. With that language, you
can iterate through the rows of the Languages table to discover what languages and
what default cultures the project supports. You can write a query that uses the
Languages table as its source to generate a calendar translation table with the names of
months or weekdays.

Here's the M code that generates the Translated Month Names Table.

Power Query M
let
    Source = #table( type table [ MonthNumber = Int64.Type ], List.Split({1..12}, 1)),
    Translations = Table.AddColumn( Source, "Translations",
        each
            [ MonthDate = #date( 2022, [ MonthNumber ], 1 ),
              Translations = List.Transform(Languages[DefaultCulture],
                  each Date.MonthName( MonthDate, _ ) ),
              TranslationTable = Table.FromList( Translations, null ),
              TranslationsTranspose = Table.Transpose(TranslationTable),
              TranslationsColumns = Table.RenameColumns(
                  TranslationsTranspose,
                  List.Zip({ Table.ColumnNames( TranslationsTranspose ),
                             List.Transform(Languages[Language],
                                 each "MonthNameTranslations" & _ ) })
              )
            ]
    ),
    ExpandedTranslations = Table.ExpandRecordColumn(Translations, "Translations",
        { "TranslationsColumns" }, { "TranslationsColumns" }),
    ColumnsCollection = List.Transform(Languages[Language],
        each "MonthNameTranslations" & _ ),
    ExpandedTranslationsColumns = Table.ExpandTableColumn(ExpandedTranslations,
        "TranslationsColumns", ColumnsCollection, ColumnsCollection ),
    TypedColumnsCollection = List.Transform(ColumnsCollection, each {_, type text}),
    QueryOutput = Table.TransformColumnTypes(ExpandedTranslationsColumns,
        TypedColumnsCollection)
in
    QueryOutput

 Tip

You can simply copy and paste the M code from the
ProductSalesMultiLanguage.pbix sample whenever you need to add calendar
translation tables to your project.

If the Languages table contains four rows for English, Spanish, French, and German, the
Translated Month Names Table query generates a table with four translation columns,
one per language.

Likewise, the query named Translated Day Names Table generates a table with weekday
name translations.

The two queries named Translated Month Names Table and Translated Day Names
Table have been written to be generic. They don't contain any hard-coded column
names. These queries don't require any modifications in the future when you add or
remove languages from the project. All you need to do is update the rows in the
Languages query.

Configure sort values


When you run these two queries for the first time, they create two tables in the dataset
with the names Translated Month Names Table and Translated Day Names Table.
There's a translation column for each language. You need to configure the sort column
for each of the translation columns:

Configure the translation columns in Translated Month Names Table to use the
sort column MonthNumber
Configure the translations columns in Translated Day Names Table to use the sort
column DayNumber

Integrate translation tables


The next step is to integrate the two tables into the data model with a Calendar table.
The Calendar table is a calculated table defined with a Data Analysis Expressions (DAX)
expression that produces a Date column along with the MonthNumber and DayNumber
columns.
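
A minimal sketch of such a calculated table could look like the following. The exact
expression in the sample project might differ; the date range function and the numbering
logic are assumptions and should be adjusted to match your translation tables.

DAX

Calendar =
ADDCOLUMNS(
    CALENDARAUTO(),                   // generate one row per date
    "MonthNumber", MONTH([Date]),     // 1-12, used to relate to Translated Month Names Table
    "DayNumber", WEEKDAY([Date])      // 1-7, used to relate to Translated Day Names Table
)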


Create a relationship between the Calendar table and the fact tables, such as Sales,
using the Date column to create a one-to-many relationship. The relationships created
between the Calendar table and the two translations tables are based on the
MonthNumber column and the DayNumber column.

After you create the required relationships with the Calendar table, create a new field
parameter for each of the two calendar translations tables. Creating a field parameter
for a calendar translation table is just like creating the field parameters for product
names and category names shown earlier.
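
For example, a field parameter for the month name translations might follow the same
pattern. The localized captions and column names shown here are assumptions based on
the M query and the examples shown earlier; adjust them to match your project.

DAX

Translated Month Names = {
    ("Month", NAMEOF('Translated Month Names Table'[MonthNameTranslationsEnglish]), 0, "en"),
    ("Mes", NAMEOF('Translated Month Names Table'[MonthNameTranslationsSpanish]), 1, "es"),
    ("Mois", NAMEOF('Translated Month Names Table'[MonthNameTranslationsFrench]), 2, "fr"),
    ("Monat", NAMEOF('Translated Month Names Table'[MonthNameTranslationsGerman]), 3, "de")
}

As with the earlier field parameters, rename the hidden fourth column to LanguageId so
that it can be related to the Languages table.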

Add a relationship between these new field parameter tables and the Languages table
to ensure the language filtering strategy works as expected.
After you create the field parameters for Translated Month Names and Translated Day
Names, you can begin to surface them in a report using cartesian visuals, tables, and
matrices.

After you set up everything, you can test your work using a report-level filter on the
Languages table to switch between languages and to verify translations for names of
months and days of the week work as expected.

Next steps
Load multiple-language reports
Load multiple-language reports
Article • 08/09/2023

To load multiple-language reports in the user's language, you can use bookmarks or
embed reports.

Use bookmarks to select a language


If you plan to publish a Power BI report with data translations for access by users
through the Power BI service, you need to load the report with the correct language for
the current user. Create a set of bookmarks that apply filters to the Languages table.

1. Create a separate bookmark for each language that supports data translations.

2. Disable Display and Current Page and only enable Data behavior.
3. To apply the bookmark for a specific language, supply a second parameter in the
report URL.

HTTP

?language=es&bookmarkGuid=Bookmark856920573e02a8ab1c2a

This report URL parameter is named bookmarkGuid. The filtering on the Languages
table is applied before any data is displayed to the user.

Embed reports that implement data translations
Loading reports with Power BI embedding provides more flexibility than the report
loading process for users accessing the report through the Power BI service. You can
load a report using a specific language and locale by extending the config object
passed to powerbi.embed with a localeSettings object containing a language property.

JavaScript

let config = {
    type: "report",
    id: reportId,
    embedUrl: embedUrl,
    accessToken: embedToken,
    tokenType: models.TokenType.Embed,
    localeSettings: { language: "de-DE" }
};

// embed report using config object
let report = powerbi.embed(reportContainer, config);

When you embed a report with a config object like this, which sets the language
property of the localeSettings object, the metadata translations and report label
translations work as expected. However, there's one more step required to filter the
Languages table to select the appropriate language for the current user.

It's possible to apply a bookmark to an embedded report. However, you can instead apply a
filter directly on the Languages table as the report loads by using the Power BI JavaScript
API. There's no need to add bookmarks for filtering the Languages table if you only
intend to access a report through Power BI embedding.

To apply filtering during the loading process of an embedded report, register an event
handler for the loaded event. When you register an event handler for an embedded
report's loaded event, you can provide a JavaScript event handler that runs before the
rendering process begins. This approach makes the loaded event the ideal place to
register an event handler whose purpose is to apply the correct filtering on the
Languages table.

Here's an example of JavaScript code that registers an event handler for the loaded
event to apply a filter to the Languages table for Spanish.

JavaScript

let report = powerbi.embed(reportContainer, config);

report.on("loaded", async (event: any) => {

    // let's filter data translations for Spanish
    let languageToLoad = "es";

    // create filter object
    const filters = [{
        $schema: "http://powerbi.com/product/schema#basic",
        target: {
            table: "Languages",
            column: "LanguageId"
        },
        operator: "In",
        values: [ languageToLoad ], // <- Filter based on Spanish
        filterType: models.FilterType.Basic,
        requireSingleSelection: true
    }];

    // call updateFilters and pass filter object to set data translations to Spanish
    await report.updateFilters(models.FiltersOperations.Replace, filters);

});
 Tip

When you set filters with the Power BI JavaScript API, you should prefer the
updateFilters method over the setFilters method. The updateFilters method
allows you to remove existing filters while setFilters does not.

Next steps
Guidance for Power BI
On-premises data gateway sizing
Article • 02/27/2023

This article targets Power BI administrators who need to install and manage the
on-premises data gateway.

The gateway is required whenever Power BI must access data that isn't accessible
directly over the Internet. It can be installed on a server on-premises, or VM-hosted
Infrastructure-as-a-Service (IaaS).

Gateway workloads
The on-premises data gateway supports two workloads. It's important you first
understand these workloads before we discuss gateway sizing and recommendations.

Cached data workload


The Cached data workload retrieves and transforms source data for loading into Power
BI datasets. It does so in three steps:

1. Connection: The gateway connects to source data


2. Data retrieval and transformation: Data is retrieved, and when necessary,
transformed. Whenever possible, the Power Query mashup engine pushes
transformation steps to the data source—it's known as query folding. When it's not
possible, transformations must be done by the gateway. In this case, the gateway
will consume more CPU and memory resources.
3. Transfer: Data is transferred to the Power BI service—a reliable and fast Internet
connection is important, especially for large data volumes
Live Connection and DirectQuery workloads
The Live Connection and DirectQuery workload works mostly in pass-through mode. The
Power BI service sends queries, and the gateway responds with query results. Generally,
query results are small in size.

For more information about Live Connection, see Datasets in the Power BI service
(Externally-hosted models).
For more information about DirectQuery, see Dataset modes in the Power BI
service (DirectQuery mode).

This workload requires CPU resources for routing queries and query results. Usually
there's much less demand for CPU than is required by the Cache data workload—
especially when it's required to transform data for caching.

Reliable, fast, and consistent connectivity is important to ensure report users have
responsive experiences.
Sizing considerations
Determining the correct sizing for your gateway machine can depend on the following
variables:

For Cache data workloads:


The number of concurrent dataset refreshes
The types of data sources (relational database, analytic database, data feeds, or
files)
The volume of data to be retrieved from data sources
Any transformations required to be done by the Power Query mashup engine
The volume of data to be transferred to the Power BI service
For Live Connection and DirectQuery workloads:
The number of concurrent report users
The number of visuals on report pages (each visual sends at least one query)
The frequency of Power BI dashboard query cache updates
The number of real-time reports using the Automatic page refresh feature
Whether datasets enforce Row-level Security (RLS)

Generally, Live Connection and DirectQuery workloads require sufficient CPU, while
Cache data workloads require more CPU and memory. Both workloads depend on good
connectivity with the Power BI service, and the data sources.

7 Note

Power BI capacities impose limits on model refresh parallelism, and Live Connection
and DirectQuery throughput. There's no point sizing your gateways to deliver more
than what the Power BI service supports. Limits differ by Premium SKU (and
equivalently sized A SKU). For more information, see What is Power BI Premium?
(Capacity nodes).

Recommendations
Gateway sizing recommendations depend on many variables. In this section, we provide
you with general recommendations that you can take into consideration.

Initial sizing
It can be difficult to accurately estimate the right size. We recommend that you start
with a machine with at least 8 CPU cores, 8 GB of RAM, and multiple Gigabit network
adapters. You can then measure a typical gateway workload by logging CPU and
memory system counters. For more information, see Monitor and optimize on-premises
data gateway performance.

Connectivity
Plan for the best possible connectivity between the Power BI service and your gateway,
and your gateway and the data sources.

Strive for reliability, fast speeds, and low, consistent latencies


Eliminate—or reduce—machine hops between the gateway and your data sources
Remove any network throttling imposed by your firewall proxy layer. For more
information about Power BI endpoints, see Add Power BI URLs to your allow list.
Configure Azure ExpressRoute to establish private, managed connections to Power
BI
For data sources in Azure VMs, ensure the VMs are colocated with the Power BI
service
For Live Connection workloads to SQL Server Analysis Services (SSAS) involving
dynamic RLS, ensure good connectivity between the gateway machine and the on-
premises Active Directory
Clustering
For large-scale deployments, you can create a gateway with multiple cluster members.
Clusters avoid single points of failure, and can load balance traffic across gateways. You
can:

Install one or more gateways in a cluster


Isolate workloads to standalone gateways, or clusters of gateway servers

For more information, see Manage on-premises data gateway high-availability clusters
and load balancing.

Dataset design and settings


Dataset design, and their settings, can impact on gateway workloads. To reduce gateway
workload, you can consider the following actions.

For Import datasets:

Configure less frequent data refresh


Configure incremental refresh to minimize the amount of data to transfer
Whenever possible, ensure query folding takes place
Especially for large data volumes or a need for low-latency results, convert the
design to a DirectQuery or Composite model

For DirectQuery datasets:

Optimize data sources, model, and report designs—for more information, see
DirectQuery model guidance in Power BI Desktop
Create aggregations to cache higher-level results to reduce the number of
DirectQuery requests
Restrict Automatic page refresh intervals, in report designs and capacity settings
Especially when dynamic RLS is enforced, restrict dashboard cache update
frequency
Especially for smaller data volumes or for non-volatile data, convert the design to
an Import or Composite model

For Live Connection datasets:

Especially when dynamic RLS is enforced, restrict dashboard cache update
frequency

Next steps
For more information related to this article, check out the following resources:

Guidance for deploying a data gateway for Power BI


Configure proxy settings for the on-premises data gateway
Monitor and optimize on-premises data gateway performance
Troubleshoot gateways - Power BI
Troubleshoot the on-premises data gateway
The importance of query folding
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Monitor report performance in Power BI
Article • 04/14/2023

Monitor report performance in Power BI Desktop using the Performance Analyzer.


Monitoring will help you learn where the bottlenecks are, and how you can improve
report performance.

Monitoring performance is relevant in the following situations:

Your Import data model refresh is slow.


Your DirectQuery or Live Connection reports are slow.
Your model calculations are slow.

Slow queries or report visuals should be a focal point of continued optimization.

7 Note

The Performance Analyzer cannot be used to monitor Premium Per User (PPU)
activities or capacity.

Use Query Diagnostics


Use Query Diagnostics in Power BI Desktop to determine what Power Query is doing
when previewing or applying queries. Further, use the Diagnose Step function to record
detailed evaluation information for each query step. The results are made available in
Power Query, and you can apply transformations to better understand query execution.

Use Performance Analyzer


Use Performance Analyzer in Power BI Desktop to find out how each of your report
elements—such as visuals and DAX formulas—are doing. It's especially useful to
determine whether it's the query or visual rendering that's contributing to performance
issues.

Use SQL Server Profiler


You can also use SQL Server Profiler to identify queries that are slow.

7 Note

SQL Server Profiler is available as part of SQL Server Management Studio.

Use SQL Server Profiler when your data source is either:

SQL Server
SQL Server Analysis Services
Azure Analysis Services

U Caution

Power BI Desktop supports connecting to a diagnostics port. The diagnostic port
allows for other tools to make connections to perform traces for diagnostic
purposes. Making any changes to the Power BI Desktop data model is supported only
for specific operations. Other changes to the data model with operations that
aren't supported may lead to corruption and data loss.

To create a SQL Server Profiler trace, follow these instructions:

1. Open your Power BI Desktop report. To make it easy to locate the port in the next
step, close any other open reports.
2. To determine the port being used by Power BI Desktop, in PowerShell (with
administrator privileges), or at the Command Prompt, enter the following
command:

PowerShell

netstat -b -n

The output will be a list of applications and their open ports. Look for the port
used by msmdsrv.exe, and record it for later use. It's your instance of Power BI
Desktop.
3. To connect SQL Server Profiler to your Power BI Desktop report:
a. Open SQL Server Profiler.
b. In SQL Server Profiler, on the File menu, select New Trace.
c. For Server Type, select Analysis Services.
d. For Server Name, enter localhost:[port recorded earlier].
e. Click Run—now the SQL Server Profiler trace is live, and is actively profiling
Power BI Desktop queries.
4. As Power BI Desktop queries are executed, you'll see their respective durations and
CPU times. Depending on the data source type, you may see other events
indicating how the query was executed. Using this information, you can determine
which queries are the bottlenecks.

A benefit of using SQL Server Profiler is that it's possible to save a SQL Server (relational)
database trace. The trace can become an input to the Database Engine Tuning Advisor.
This way, you can receive recommendations on how to tune your data source.

Monitor Premium metrics


Monitor performance of content deployed into your organization's Power BI Premium
capacity with the help of the Premium metrics app.

Next steps
For more information about this article, check out the following resources:

Query Diagnostics
Performance Analyzer
Troubleshoot report performance in Power BI
Power BI Premium Metrics app
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Troubleshoot report performance in
Power BI
Article • 02/27/2023

This article provides guidance that enables developers and administrators to
troubleshoot slow report performance. It applies to Power BI reports, and also Power BI
paginated reports.

Slow reports can be identified by report users who experience reports that are slow to
load, or slow to update when interacting with slicers or other features. When reports are
hosted on a Premium capacity, slow reports can also be identified by monitoring the
Power BI Premium Metrics app. This app helps you to monitor the health and capacity of
your Power BI Premium subscription.

Follow flowchart steps


Use the following flowchart to help understand the cause of slow performance, and to
determine what action to take.
There are six flowchart terminators, each describing the action to take:

Terminator 1: Manage capacity; Scale capacity
Terminator 2: Investigate capacity activity during typical report usage
Terminator 3: Architecture change; Consider Azure Analysis Services; Check on-premises gateway
Terminator 4: Consider Azure Analysis Services; Consider Power BI Premium
Terminator 5: Use Power BI Desktop Performance Analyzer; Optimize report, model, or DAX
Terminator 6: Raise support ticket


Take action
The first consideration is to understand if the slow report is hosted on a Premium
capacity.

Premium capacity
When the report is hosted on a Premium capacity, use the Power BI Premium Metrics
app to determine if the report-hosting capacity frequently exceeds capacity resources.
When there's pressure on resources, it may be time to manage or scale the capacity
(flowchart terminator 1). When there are adequate resources, investigate capacity
activity during typical report usage (flowchart terminator 2).

Shared capacity
When the report is hosted on shared capacity, it's not possible to monitor capacity
health. You'll need to take a different investigative approach.

First, determine if slow performance occurs at specific times of the day or month. If it
does—and many users are opening the report at these times—consider two options:

Increase query throughput by migrating the dataset to Azure Analysis Services, or


a Premium capacity (flowchart terminator 4).
Use Power BI Desktop Performance Analyzer to find out how each of your report
elements—such as visuals and DAX formulas—are doing. It's especially useful to
determine whether it's the query or visual rendering that's contributing to
performance issues (flowchart terminator 5).

If you determine there's no time pattern, next consider if slow performance is isolated to
a specific geography or region. If it is, it's likely that the data source is remote and
there's slow network communication. In this case, consider:

Changing architecture by using Azure Analysis Services (flowchart terminator 3).


Optimizing on-premises data gateway performance (flowchart terminator 3).

Finally, if you determine there's no time pattern and slow performance occurs in all
regions, investigate whether slow performance occurs on specific devices, clients, or web
browsers. If it doesn't, use Power BI Desktop Performance Analyzer, as described earlier,
to optimize the report or model (flowchart terminator 5).

When you determine specific devices, clients, or web browsers contribute to slow
performance, we recommend creating a support ticket through the Power BI support
page (flowchart terminator 6).

Next steps
For more information about this article, check out the following resources:

Power BI guidance
Monitoring report performance
Performance Analyzer
Power BI adoption roadmap
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Lifecycle management best practices
Article • 08/10/2023

This article provides guidance for data & analytics creators who are managing their
content throughout its lifecycle in Microsoft Fabric. The article focuses on the use of Git
integration for source control and deployment pipelines as a release tool. For general
guidance on enterprise content publishing, see Enterprise content publishing.

) Important

Microsoft Fabric is in preview.

The article is divided into four sections:

Content preparation - Prepare your content for lifecycle management.

Development - Learn about the best ways of creating content in the deployment
pipelines development stage.

Test - Understand how to use a deployment pipelines test stage to test your
environment.

Production - Utilize a deployment pipelines production stage to make your content
available for consumption.

Content preparation
To best prepare your content for ongoing management throughout its lifecycle, review
the information in this section before you:

Release content to production.

Start using a deployment pipeline for a specific workspace.

Separate development between teams


Different teams in the org usually have different expertise, ownership, and methods of
work, even when working on the same project. It’s important to set boundaries while
giving each team their independence to work as they like. Consider having separate
workspaces for different teams. This enables each team to have different permissions,
work with different source control repos, and ship content to production in a different
cadence. Most items can connect and use data across workspaces, so it won’t block
collaboration on the same data and project.

Plan your permission model


Both Git integration and deployment pipelines require different permissions than just
the workspace permissions. Read about the permission requirements for Git integration
and deployment pipelines.

To implement a secure and easy workflow, plan who gets access to each part of the
environments being used, both the Git repository and the dev/test/prod stages in a
pipeline. Some of the considerations to take into account are:

Who should have access to the source code in the Git repository?

Which operations should users with pipeline access be able to perform in each
stage?

Who’s reviewing content in the test stage?

Should the test stage reviewers have access to the pipeline?

Who should oversee deployment to the production stage?

Which workspace are you assigning to a pipeline, or connecting to git?

Which branch are you connecting the workspace to? What’s the policy defined for
that branch?

Is the workspace shared by multiple team members? Should they make changes
directly in the workspace, or only through Pull requests?

Which stage are you assigning your workspace to?

Do you need to make changes to the permissions of the workspace you're assigning?

Connect different stages to different databases


A production database should always be stable and available. It's best not to overload it
with queries generated by BI creators for their development or test datasets. Build
separate databases for development and testing in order to protect production data
and not overload the development database with the entire volume of production data.
Use parameters for configurations that change between stages

Whenever possible, add parameters to any definition that might change between
dev/test/prod stages. Using parameters helps you change the definitions easily when
you move your changes to production. While there's still no unified way to manage
parameters in Fabric, we recommend using them on items that support any type of
parameterization. Parameters have different uses, such as defining connections to data
sources or to internal items in Fabric. They can also be used to make changes to
queries, filters, and the text displayed to users.

In deployment pipelines, you can configure parameter rules to set different values for
each deployment stage.

Development
This section provides guidance for working with deployment pipelines and Git
integration during the development stage.

Back up your work into a Git repository


With Git integration, any developer can back up their work by committing it into Git. To
do this properly in Fabric, here are some basic rules:

Make sure you have an isolated environment to work in, so others don’t override
your work before it gets committed. This means working in a Desktop tool (such as
VSCode , Power BI Desktop or others), or in a separate workspace that other
users can’t access.

Commit to a branch that you created and no other developer is using. If you’re
using a workspace as an authoring environment, read about working with
branches.

Commit together changes that must be deployed together. This advice applies for
a single item, or multiple items that are related to the same change. Committing all
related changes together can help you later when deploying to other stages,
creating pull requests, or reverting changes back.

Big commits might hit a max commit size limit. Be mindful of the number of items
you commit together, or the general size of an item. For example, reports can grow
large when adding large images. It’s bad practice to store large-size items in
source control systems, even if it works. Consider ways to reduce the size of your
items if they have lots of static resources, like images.

Rolling back changes


After backing up your work, there might be cases where you want to revert to a previous
version and restore it in the workspace. There are a few options for this:

Undo button: The Undo operation is an easy and fast way to revert the immediate
changes you made, as long as they are not committed yet. You can also undo each
item separately. Read more about the undo operation.

Reverting to older commits: There's no direct way to go back to a previous
commit in the UI. The best option is to promote an older commit to be the HEAD
using git revert or git reset. Doing this shows an update in the source control
pane, and you can update the workspace with that new commit (a sketch of both
approaches follows at the end of this section).

As data isn't stored in Git, consider that reverting a data item to an older version might
break the existing data; you could be required to drop the data, or the operation
might fail. Check this in advance before reverting changes.
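
As a rough sketch only, the following commands illustrate both approaches from a local
clone of the connected repo. The branch name (main) and the <commit-sha> placeholder are
assumptions; substitute the branch your workspace is connected to and the commit you want
to return to. After the push, the source control pane shows an incoming update that you
can apply to the workspace.

PowerShell

#Option A - create a new commit that undoes the most recent commit (history is preserved):
git revert HEAD
git push origin main

#Option B - reset the branch to a known-good commit (rewrites history; use with care on shared branches):
git reset --hard <commit-sha>
git push --force-with-lease origin main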

Working with a ‘private’ workspace


When you want to work in isolation, use a separate workspace as an isolated
environment. Read more about this in working with branches. For an optimal workflow
for you and the team, consider the following:

Setting up the workspace: Before you start, make sure you can create a new
workspace (if you don’t already have one), that you can assign it to a Fabric
capacity, and that you have access to data to work in your workspace.

Creating a new branch: Create a new branch from the main branch, so you’ll have
the most up-to-date version of your content. Also make sure you connect to the
correct folder in the branch, so you can pull the right content into the workspace.

Small, frequent changes: It's a Git best practice to make small incremental
changes that are easy to merge and less likely to get into conflicts. If that's not
possible, make sure to update your branch from main so you can resolve conflicts
on your own first (see the sketch after this list).

Configuration changes: If necessary, change the configurations in your workspace


to help you work more productively. Some changes can include connection
between items, or to different data sources or changes to parameters on a given
item. Just remember that anything you commit will be part of the commit and can
accidentally be merged into the main branch.
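
To illustrate the advice about updating your branch from main, here's a minimal sketch
that assumes a local clone with a feature branch named feature/my-change and a main
branch named main; adjust the names to match your repo and branching strategy.

PowerShell

#Bring the latest main branch commits into your feature branch so you can resolve conflicts early:
git checkout feature/my-change
git fetch origin
git merge origin/main   #or use 'git rebase origin/main' if your team prefers a linear history

#After resolving any conflicts locally, push the updated branch:
git push origin feature/my-change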

Use Client tools to edit your work


For items and tools that support it, it might be easier to work with client tools for
authoring, such as Power BI Desktop for datasets and reports, VSCode for
Notebooks etc. These tools can be your local development environment. After you
complete your work, push the changes into the remote repo, and sync the workspace to
upload the changes. Just make sure you are working with the supported structure of the
item you are authoring. If you’re not sure, first clone a repo with content already synced
to a workspace, then start authoring from there, where the structure is already in place.

Managing workspaces and branches


Since a workspace can only be connected to a single branch at a time, it's
recommended to treat this as a 1:1 mapping. However, to reduce the number of
workspaces this entails, consider these options:

If a developer set up a private workspace with all required configurations, they can
continue to use that workspace for any future branch they create. When a sprint is
over, your changes are merged and you are starting a fresh new task, just switch
the connection to a new branch on the same workspace. You can also do this if you
suddenly need to fix a bug in the middle of a sprint. Think of it as a working
directory on the web.

Developers using a client tool (such as VSCode, Power BI Desktop or others), don’t
necessarily need a workspace. They can create branches and commit changes to
that branch locally, push those to the remote repo and create a pull request to the
main branch, all without a workspace. A workspace is needed only as a testing
environment to check that everything works in a real-life scenario. It's up to you to
decide when that should happen.

Test
This section provides guidance for working with a deployment pipelines test stage.

Simulate your production environment


It’s important to see how your change will impact the production stage. A deployment
pipelines test stage allows you to simulate a real production environment for testing
purposes. Alternatively, you can simulate this by connecting Git to an additional
workspace.

Make sure that these three factors are addressed in your test environment:

Data volume

Usage volume

A similar capacity as in production

When testing, you can use the same capacity as the production stage. However, using
the same capacity can make production unstable during load testing. To avoid unstable
production, test using a different capacity similar in resources to the production
capacity. To avoid extra costs, use a capacity where you can pay only for the testing
time.

Use deployment rules with a real-life data source


If you're using the test stage to simulate real life data usage, it's recommended to
separate the development and test data sources. The development database should be
relatively small, and the test database should be as similar as possible to the production
database. Use data source rules to switch data sources in the test stage or parameterize
the connection if not working through deployment pipelines.
Check related items
Changes you make can also affect the dependent items. During testing, verify that your
changes don’t affect or break the performance of existing items, which can be
dependent on the updated ones.

You can easily find the related items by using impact analysis.

Updating data items


Data items are items that store data. The item’s definition in Git defines how the data is
stored. When updating an item in the workspace, we are importing its definition into the
workspace and applying it to the existing data. The operation of updating data items is
the same for Git and deployment pipelines.

As different items have different capabilities when it comes to retaining data when
changes to the definition are applied, be mindful when applying the changes. Some
practices that can help you apply the changes in the safest way:

Know in advance what the changes are and what their impact might be on the
existing data. Use commit messages to describe the changes made.

Upload the changes first to a dev or test environment, to see how that item
handles the change with test data.

If everything goes well, it's recommended to also check it on a staging
environment, with real-life data (or as close to it as possible), to minimize
unexpected behaviors in production.

Consider the best timing when updating the Prod environment to minimize the
damage that any errors might cause to your business users who consume the data.

After deployment, run post-deployment tests in Prod to verify that everything is
working as expected.

Some changes will always be considered breaking changes. Hopefully, the
preceding steps will help you catch them before production. Build a plan for how
to apply the changes in Prod and recover the data to get back to a normal state and
minimize downtime for business users.

Test your app


If you're distributing content to your customers through an app, review the app's new
version before it's in production. Since each deployment pipeline stage has its own
workspace, you can easily publish and update apps for development and test stages.
Publishing and updating apps allows you to test the app from an end user's point of
view.

) Important

The deployment process doesn't include updating the app content or settings. To
apply changes to content or settings, manually update the app in the required
pipeline stage.

Production
This section provides guidance to the deployment pipelines production stage.

Manage who can deploy to production


Because deploying to production should be handled carefully, it's good practice to let
only specific people manage this sensitive operation. However, you probably want all BI
creators for a specific workspace to have access to the pipeline. Use production
workspace permissions to manage access permissions. Other users can have a
production workspace viewer role to see content in the workspace but not make
changes from Git or deployment pipelines.

In addition, limit access to the repo or pipeline by only enabling permissions to users
that are part of the content creation process.

Set rules to ensure production stage availability


Deployment rules are a powerful way to ensure the data in production is always
connected and available to users. With deployment rules applied, deployments can run
while you have the assurance that customers can see the relevant information without
disturbance.

Make sure that you set production deployment rules for data sources and parameters
defined in the dataset.

Update the production app


Deployment in a pipeline updates the workspace content, but it can also update the
associated app through the deployment pipelines API. It's not possible to update the
app through the UI. You need to update the app manually. If you use an app for content
distribution, don’t forget to update the app after deploying to production so that end
users are immediately able to use the latest version.
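
As a hedged sketch only: one way to script this is to call the deployment pipelines REST
API through the Invoke-PowerBIRestMethod cmdlet from the Power BI Management module. The
endpoint name (pipelines/{pipelineId}/deployAll) and the body properties shown
(sourceStageOrder, updateAppSettings, note) are assumptions to verify against the
Pipelines - Deploy All API reference before use, and <pipeline-id> is a placeholder.

PowerShell

#Assumed endpoint and body shape - verify against the deployment pipelines API reference:
$PipelineId = '<pipeline-id>'

$Body = @{
    sourceStageOrder  = 1   #assumed: deploy from the test stage to the next stage (production)
    updateAppSettings = @{ updateAppInTargetWorkspace = $true }   #assumed option name
    note              = 'Scheduled production deployment'
} | ConvertTo-Json -Depth 5

Invoke-PowerBIRestMethod -Url "pipelines/$PipelineId/deployAll" -Method Post -Body $Body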

Deploying into production using Git branches


As the repo serves as the ‘single-source-of-truth’, some teams might want to deploy
updates into different stages directly from Git. This is possible with Git integration, with
a few considerations:

It’s recommended to use release branches. You will be required to continuously


change the connection of workspace to the new release branches before every
deployment.

If your build or release pipeline requires you to change the source code, or run
scripts in a build environment before deployment to the workspace, then
connecting the workspace to Git won't help you.

After deploying to each stage, make sure to change all the configuration specific
to that stage.

Quick fixes to content


Sometimes there are issues in production that require a quick fix. Deploying a fix
without testing it first is bad practice. Therefore, always implement the fix in the
development stage and push it to the rest of the deployment pipeline stages. Deploying
to the development stage allows you to check that the fix works before deploying it to
production. Deploying across the pipeline takes only a few minutes.

If you are using deployment from Git, we recommend following the practices described
in Adopt a Git branching strategy.

Next steps
End to end lifecycle management tutorial
Get started with Git integration
Get started with deployment pipelines


Access the Power BI activity log
Article • 04/27/2023

This article targets Power BI administrators who need to access and analyze data
sourced from the Power BI activity log. It focuses on the programmatic retrieval of
Power BI activities by using the Get-PowerBIActivityEvent cmdlet from the Power BI
Management module. Up to 30 days of history is available. This cmdlet uses the Get
Activity Events Power BI REST API operation, which is an admin API. PowerShell cmdlets
add a layer of abstraction on top of the underlying APIs. Therefore, the PowerShell
cmdlet simplifies access to the Power BI activity log.

There are other manual and programmatic ways to retrieve Power BI activities. For more
information, see Access user activity data.

Analyzing the Power BI activity log is crucial for governance, compliance, and to track
adoption efforts. For more information about the Power BI activity log, see Track user
activities in Power BI.

 Tip

We recommend that you fully review the Tenant-level auditing article. This article
covers planning, key decisions, prerequisites, and key solution development
activities to consider when building an end-to-end auditing solution.

Examples available
The goal of this article is to provide you with examples to help get you started. The
examples include scripts that retrieve data from the activity log by using the Power BI
Management PowerShell module.

2 Warning

The scripts aren't production-ready because they're intended only for educational
purposes. You can, however, adapt the scripts for production purposes by adding
logic for logging, error handling, alerting, and refactoring for code reuse and
modularization.

Because they're intended for learning, the examples are simplistic, yet they're real-world.
We recommend that you review all examples to understand how they apply slightly
different techniques. Once you identify the type of activity data that you need, you can
mix and match the techniques to produce a script that best suits your requirements.

This article includes the following examples. For each example, the type of activity data
it retrieves is shown in parentheses.

Authenticate with the Power BI service (N/A)

View all activities for a user for one day (All)

View an activity for N days (Share report - link or direct access)

View three activities for N days (Create app, update app, and install app)

View all activities for a workspace for one day (All)

Export all activities for the previous N days (All)

For simplicity, most of the examples output their result to the screen. For instance, in
Visual Studio Code, the data is output to the terminal panel , which holds a buffer set
of data in memory.

Most of the examples retrieve raw JSON data. Working with the raw JSON data has
many advantages.

All of the information that's available for each activity event is returned. That's
helpful for you to learn what data is available. Keep in mind that the contents of an
API response differs depending on the actual activity event. For example, the data
available for a CreateApp event is different to the ViewReport event.
Because data that's available in the activity log changes as Power BI evolves over
time, you can expect the API responses to change too. That way, new data that's
introduced won't be missed. Your process is also more resilient to change and less
likely to fail.
The details of an API response can differ for the Power BI commercial cloud and
the national/regional clouds.
If you have different team members (such as data engineers) who get involved
with this process, simplifying the initial process to extract the data makes it easier
for multiple teams to work together.

 Tip

We recommend that you keep your scripts that extract data as simple as possible.
Therefore, avoid parsing, filtering, or formatting the activity log data as it's
extracted. This approach uses an ELT methodology, which has separate steps to
Extract, Load, and Transform data. This article only focuses on the first step, which is
concerned with extracting the data.
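
For instance, once the raw data is extracted to files (as shown later in Example 6), the
transform step can run separately. Here's a minimal sketch, assuming a daily JSON export
already exists; the folder path and file name below are placeholders based on the naming
pattern used later in this article.

PowerShell

#Load a previously exported daily file, then query it separately from the extract step:
$RawEvents = Get-Content -Path 'C:\Power-BI-Raw-Data\Activity-Log\PBIActivityEvents-20230423-202304250900.json' -Raw |
    ConvertFrom-Json

#Example transform: count report views by user for that day:
$RawEvents |
    Where-Object { $PSItem.Activity -eq 'ViewReport' } |
    Group-Object -Property UserId |
    Select-Object Name, Count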

Requirements
To use the example scripts, you must meet the following requirements.

PowerShell client tool: Use your preferred tool for running PowerShell commands.
All examples were tested by using the PowerShell extension for Visual Studio
Code with PowerShell 7. For information about client tools and PowerShell
versions, see Tenant-level auditing.
Power BI Management module: Install all Power BI PowerShell modules. If you
previously installed them, we recommend that you update the modules to ensure
that you're using the latest published version (a sketch of the install and update
commands follows this list).
Power BI administrator role: The example scripts are designed to use an
interactive authentication flow. Therefore, the user running the PowerShell example
scripts must sign in to use the Power BI REST APIs. To retrieve activity log data, the
authenticating user must belong to the Power BI administrator role (because
retrieving activity events is done with an admin API). Service principal
authentication is out of scope for these learning examples.
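
As a minimal sketch for the module requirement above (assuming the PowerShell Gallery is
reachable from your machine), the rollup module can be installed or updated as follows.

PowerShell

#Install the Power BI Management rollup module for the current user:
Install-Module -Name MicrosoftPowerBIMgmt -Scope CurrentUser

#If it's already installed, update it to the latest published version:
Update-Module -Name MicrosoftPowerBIMgmt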

The remainder of this article includes sample scripts that show you different ways to
retrieve activity log data.

Example 1: Authenticate with the Power BI service
All Power BI REST API operations require you to sign in. Authentication (who is making
the request) and authorization (what the user has permission to do) are managed by the
Microsoft Identity Platform. The following example uses the Connect-
PowerBIServiceAccount cmdlet from the Power BI Management module. This cmdlet
supports a simple method to sign in.

Sample request 1
The first script redirects you to a browser to complete the sign in process. User accounts
that have multi-factor authentication (MFA) enabled are able to use this interactive
authentication flow to sign in.

PowerShell
Connect-PowerBIServiceAccount

) Important

Users without Power BI administrator privileges can't run any of the sample scripts
that follow in this article. Power BI administrators have permission to manage the
Power BI service and to retrieve tenant-wide metadata (such as activity log data).
Although using service principal authentication is out of scope for these examples,
we strongly recommend that you set up a service principal for production-ready,
unattended scripts that will run on a schedule.

Be sure to sign in before running any of the following scripts.

Example 2: View all activities for a user for one day
Sometimes you need to check all the activities that a specific user performed on a
specific day.

 Tip

When extracting data from the activity log by using the PowerShell cmdlet, each
request can extract data for one day (a maximum of 24 hours). Therefore, the goal
of this example is to start simply by checking one user for one day. There are other
examples later in this article that show you how to use a loop to export data for
multiple days.

Sample request 2
This script declares two PowerShell variables to make it easier to reuse the script:

$UserEmailAddr : The email address for the user you're interested in.
$ActivityDate : The date you're interested in. The format is YYYY-MM-DD (ISO

8601 format). You can't request a date earlier than 30 days before the current date.

PowerShell

#Input values before running the script:


$UserEmailAddr = '[email protected]'
$ActivityDate = '2023-03-15'
#----------------------------------------------------------------------
#View activity events:
Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate + 'T00:00:00.000') `
-EndDateTime ($ActivityDate + 'T23:59:59.999') `
-User $UserEmailAddr

7 Note

You might notice a backtick (`) character at the end of some of the lines in the
PowerShell scripts. In PowerShell, one way you can use the backtick character is as a
line continuation character. We've used it to improve the readability of the scripts in
this article.

 Tip

In the script, each of the PowerShell variables correlates to a required or optional
parameter value in the Get-PowerBIActivityEvent cmdlet. For example, the value
you assign to the $UserEmailAddr variable is passed to the -User parameter.
Declaring PowerShell variables in this way is a lightweight approach to avoid hard-
coding values that could change in your script. That's a good habit to adopt, and it
will be useful as your scripts become more complex. PowerShell parameters are
more robust than variables, but they're out of scope for this article.

Sample response 2
Here's a sample JSON response. It includes two activities that the user performed:

The first activity shows that a user viewed a report.


The second activity shows that an administrator exported data from the Power BI
activity log.

JSON

[
{
"Id": "10af656b-b5a2-444c-bf67-509699896daf",
"RecordType": 20,
"CreationTime": "2023-03-15T15:18:30Z",
"Operation": "ViewReport",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100FFF92C7717B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"Activity": "ViewReport",
"ItemName": "Gross Margin Analysis",
"WorkSpaceName": "Sales Analytics",
"DatasetName": "Sales Data",
"ReportName": "Gross Margin Analysis",
"WorkspaceId": "e380d1d0-1fa6-460b-9a90-1a5c6b02414c",
"ObjectId": "Gross Margin Analysis",
"DatasetId": "cfafbeb1-8037-4d0c-896e-a46fb27ff229",
"ReportId": "94e57e92-Cee2-486d-8cc8-218c97200579",
"ArtifactId": "94e57e92-Cee2-486d-8cc8-218c97200579",
"ArtifactName": "Gross Margin Analysis",
"IsSuccess": true,
"ReportType": "PowerBIReport",
"RequestId": "53451b83-932b-f0b0-5328-197133f46fa4",
"ActivityId": "beb41a5d-45d4-99ee-0e1c-b99c451e9953",
"DistributionMethod": "Workspace",
"ConsumptionMethod": "Power BI Web",
"SensitivityLabelId": "e3dd4e72-5a5d-4a95-b8b0-a0b52b827793",
"ArtifactKind": "Report"
},
{
"Id": "5c913f29-502b-4a1a-a089-232edaf176f7",
"RecordType": 20,
"CreationTime": "2023-03-15T17:22:00Z",
"Operation": "ExportActivityEvents",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 2,
"UserKey": "100FFF92C7717B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "MicrosoftPowerBIMgmt/1.2.1111.0",
"Activity": "ExportActivityEvents",
"IsSuccess": true,
"RequestId": "2af6a22d-6f24-4dc4-a26a-5c234ab3afad",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"ExportEventStartDateTimeParameter": "2023-03-17T00:00:00Z",
"ExportEventEndDateTimeParameter": "2023-03-17T23:59:59.999Z"
}
]

 Tip

Extracting the Power BI activity log data is also a logged operation, as shown in the
previous response. When you analyze user activities, you might want to omit
administrator activities—or analyze them separately.
Example 3: View an activity for N days
Sometimes you might want to investigate one specific type of activity for a series of
days. This example shows how to retrieve per-item report sharing activities. It uses a
loop to retrieve activities from the previous seven days.

Sample request 3
The script declares two variables:

$ActivityType : The operation name for the activity that you're investigating.
$NbrOfDaysToCheck : How many days you're interested in checking. It performs a

loop working backward from the current day. The maximum value allowed is 30
days (because the earliest date that you can retrieve is 30 days before the current
day).

PowerShell

#Input values before running the script:


$ActivityType = 'ShareReport'
$NbrOfDaysToCheck = 7
#-----------------------------------------------------------------------

#Use today to start counting back the number of days to check:


$DayUTC = (([datetime]::Today.ToUniversalTime()).Date)

#Iteratively loop through each of the last N days to view events:


For($LoopNbr=0; $LoopNbr -le $NbrOfDaysToCheck; $LoopNbr++)
{
$PeriodStart=$DayUTC.AddDays(-$LoopNbr)
$ActivityDate=$PeriodStart.ToString("yyyy-MM-dd")
Write-Verbose "Checking $ActivityDate" -Verbose

#Check activity events once per loop (once per day):


Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate + 'T00:00:00.000') `
-EndDateTime ($ActivityDate + 'T23:59:59.999') `
-ActivityType $ActivityType
}

 Tip

You can use this looping technique to check any of the operations recorded in the
activity log.
Sample response 3
Here's a sample JSON response. It includes two activities that the user performed:

The first activity shows that a sharing link for a user was created. Note that the
SharingAction value differs depending on whether the user created a link, edited a
link, or deleted a link. For brevity, only one type of sharing link activity is shown in
the response.
The second activity shows that direct access sharing for a group was created. Note
that the SharingInformation value differs depending on the action taken. For
brevity, only one type of direct access sharing activity is shown in the response.

JSON

[
{
"Id": "be7506e1-2bde-4a4a-a210-bc9b156142c0",
"RecordType": 20,
"CreationTime": "2023-03-15T19:52:42Z",
"Operation": "ShareReport",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "900GGG12D2242A",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/110.0",
"Activity": "ShareReport",
"ItemName": "Call Center Stats",
"WorkSpaceName": "Sales Analytics",
"SharingInformation": [
{
"RecipientEmail": "[email protected]",
"RecipientName": "Turner",
"ObjectId": "fc9bbc6c-e39b-44cb-9c8a-d37d5665ec57",
"ResharePermission": "ReadReshare",
"UserPrincipalName": "[email protected]"
}
],
"WorkspaceId": "e380d1d0-1fa6-460b-9a90-1a5c6b02414c",
"ObjectId": "Call Center Stats",
"Datasets": [
{
"DatasetId": "fgagrwa3-9044-3e1e-228f-k24bf72gg995",
"DatasetName": "Call Center Data"
}
],
"ArtifactId": "81g22w11-vyy3-281h-1mn3-822a99921541",
"ArtifactName": "Call Center Stats",
"IsSuccess": true,
"RequestId": "7d55cdd3-ca3d-a911-5e2e-465ac84f7aa7",
"ActivityId": "4b8b53f1-b1f1-4e08-acdf-65f7d3c1f240",
"SharingAction": "CreateShareLink",
"ShareLinkId": "J_5UZg-36m",
"ArtifactKind": "Report",
"SharingScope": "Specific People"
},
{
"Id": "b4d567ac-7ec7-40e4-a048-25c98d9bc304",
"RecordType": 20,
"CreationTime": "2023-03-15T11:57:26Z",
"Operation": "ShareReport",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "900GGG12D2242A",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "69.132.26.0",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "ShareReport",
"ItemName": "Gross Margin Analysis",
"WorkSpaceName": "Sales Analytics",
"SharingInformation": [
{
"RecipientName": "SalesAndMarketingGroup-NorthAmerica",
"ObjectId": "ba21f28b-6226-4296-d341-f059257a06a7",
"ResharePermission": "Read"
}
],
"CapacityId": "1DB44EEW-6505-4A45-B215-101HBDAE6A3F",
"CapacityName": "Shared On Premium - Reserved",
"WorkspaceId": "e380d1d0-1fa6-460b-9a90-1a5c6b02414c",
"ObjectId": "Gross Margin Analysis",
"Datasets": [
{
"DatasetId": "cfafbeb1-8037-4d0c-896e-a46fb27ff229",
"DatasetName": "Sales Data"
}
],
"ArtifactId": "94e57e92-Cee2-486d-8cc8-218c97200579",
"ArtifactName": "Gross Margin Analysis",
"IsSuccess": true,
"RequestId": "82219e60-6af0-0fa9-8599-c77ed44fff9c",
"ActivityId": "1d21535a-257e-47b2-b9b2-4f875b19855e",
"SensitivityLabelId": "16c065f5-ba91-425e-8693-261e40ccdbef",
"SharingAction": "Direct",
"ArtifactKind": "Report",
"SharingScope": "Specific People"
}
]

7 Note
This JSON response shows that the data structure is different based on the type of
event. Even the same type of event can have different characteristics that produce a
slightly different output. As recommended earlier in this article, you should get
accustomed to retrieving the raw data.

Example 4: View three activities for N days


Sometimes you might want to investigate several related activities. This example shows
how to retrieve three specific activities for the previous seven days. It focuses on
activities related to Power BI apps including creating an app, updating an app, and
installing an app.

Sample request 4
The script declares the following variables:

$NbrOfDaysToCheck : How many days you're interested in checking. It performs a
loop that works backward from the current day. The maximum value allowed is 30
days (because the earliest date that you can retrieve is 30 days before the current
day).
$Activity1 : The operation name for the first activity that you're investigating. In
this example, it's searching for Power BI app creation activities.
$Activity2 : The second operation name. In this example, it's searching for Power
BI app update activities.
$Activity3 : The third operation name. In this example, it's searching for Power BI
app installation activities.

You can only retrieve activity events for one activity at a time. So, the script searches for
each operation separately. It combines the search results into a variable named
$FullResults , which it then outputs to the screen.

U Caution

Running many loops many times greatly increases the likelihood of API throttling.
Throttling can happen when you exceed the number of requests you're allowed to
make in a given time period. The Get Activity Events operation is limited to 200
requests per hour. When you design your scripts, take care not to retrieve the
original data more times than you need. Generally, it's a better practice to extract
all of the raw data once per day and then query, transform, filter, or format that
data separately.
The script shows results for the current day.

7 Note

To retrieve results for the previous day only—avoiding partial day results—see the
Export all activities for previous N days example.

PowerShell

#Input values before running the script:


$NbrOfDaysToCheck = 7
$Activity1 = 'CreateApp'
$Activity2 = 'UpdateApp'
$Activity3 = 'InstallApp'
#-----------------------------------------------------------------------
#Initialize array which will contain the full resultset:
$FullResults = @()

#Use today to start counting back the number of days to check:


$DayUTC = (([datetime]::Today.ToUniversalTime()).Date)

#Iteratively loop through each day (<Initialize> ; <Condition> ; <Repeat>)


#Append each type of activity to an array:
For($LoopNbr=0; $LoopNbr -le $NbrOfDaysToCheck; $LoopNbr++)
{
$PeriodStart=$DayUTC.AddDays(-$LoopNbr)
$ActivityDate=$PeriodStart.ToString("yyyy-MM-dd")
Write-Verbose "Checking $ActivityDate" -Verbose

#Get activity 1 and append its results into the full resultset:
$Activity1Results = @()
$Activity1Results += Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate+'T00:00:00.000') `
-EndDateTime ($ActivityDate+'T23:59:59.999') `
-ActivityType $Activity1 | ConvertFrom-Json
If ($null -ne $Activity1Results) {$FullResults += $Activity1Results}

#Get activity 2 and append its results into the full resultset:
$Activity2Results = @()
$Activity2Results += Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate+'T00:00:00.000') `
-EndDateTime ($ActivityDate+'T23:59:59.999') `
-ActivityType $Activity2 |
ConvertFrom-Json
If ($null -ne $Activity2Results) {$FullResults += $Activity2Results}

#Get activity 3 and append its results into the full resultset:
$Activity3Results = @()
$Activity3Results += Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate+'T00:00:00.000') `
-EndDateTime ($ActivityDate+'T23:59:59.999') `
-ActivityType $Activity3 |
ConvertFrom-Json
If ($null -ne $Activity3Results) {$FullResults += $Activity3Results}

}
#Convert all of the results back to a well-formed JSON object:
$FullResults = $FullResults | ConvertTo-Json

#Display results on the screen:


$FullResults

Sample response 4
Here's a sample JSON response. It includes three activities that the user performed:

The first activity shows a Power BI app was created.


The second activity shows that a Power BI app was updated.
The third activity shows that a Power BI app was installed by a user.

2 Warning

The response only includes the user permissions that were modified. For example,
it's possible that three audiences could've been created in a CreateApp event. In the
UpdateApp event, if only one audience changed, then only one audience would
appear in the OrgAppPermission data. For that reason, relying on the UpdateApp
event for tracking all app permissions is incomplete because the activity log only
shows what's changed.

For a snapshot of all Power BI app permissions, use the Get App Users as Admin
API operation instead.

JSON

[
{
"Id": "65a26480-981a-4905-b3aa-cbb3df11c7c2",
"RecordType": 20,
"CreationTime": "2023-03-15T18:42:13Z",
"Operation": "CreateApp",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100FFF92C7717B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "CreateApp",
"ItemName": "Sales Reconciliations App",
"WorkSpaceName": "Sales Reconciliations",
"OrgAppPermission": {
"recipients": "Sales Reconciliations App(Entire Organization)",
"permissions": "Sales Reconciliations App(Read,CopyOnWrite)"
},
"WorkspaceId": "9325a31d-067e-4748-a592-626d832c8001",
"ObjectId": "Sales Reconciliations App",
"IsSuccess": true,
"RequestId": "ab97a4f1-9f5e-4a6f-5d50-92c837635814",
"ActivityId": "9bb54a9d-b688-4028-958e-4d7d21ca903a",
"AppId": "42d60f97-0f69-470c-815f-60198956a7e2"
},
{
"Id": "a1dc6d26-b006-4727-bac6-69c765b7978f",
"RecordType": 20,
"CreationTime": "2023-03-16T18:39:58Z",
"Operation": "UpdateApp",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100GGG12F9921B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "UpdateApp",
"ItemName": "Sales Analytics",
"WorkSpaceName": "Sales Analytics",
"OrgAppPermission": {
"recipients": "Sales Reps Audience(SalesAndMarketingGroup-
NorthAmerica,SalesAndMarketingGroup-Europe)",
"permissions": "Sales Reps Audience(Read,CopyOnWrite)"
},
"WorkspaceId": "c7bffcd8-8156-466a-a88f-0785de2c8b13",
"ObjectId": "Sales Analytics",
"IsSuccess": true,
"RequestId": "e886d122-2c09-4189-e12a-ef998268b864",
"ActivityId": "9bb54a9d-b688-4028-958e-4d7d21ca903a",
"AppId": "c03530c0-db34-4b66-97c7-34dd2bd590af"
},
{
"Id": "aa002302-313d-4786-900e-e68a6064df1a",
"RecordType": 20,
"CreationTime": "2023-03-17T18:35:22Z",
"Operation": "InstallApp",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100HHH12F4412A",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "InstallApp",
"ItemName": "Sales Reconciliations App",
"ObjectId": "Sales Reconciliations App",
"IsSuccess": true,
"RequestId": "7b3cc968-883f-7e13-081d-88b13f6cfbd8",
"ActivityId": "9bb54a9d-b688-4028-958e-4d7d21ca903a"
}
]

Example 5: View all activities for a workspace for one day
Sometimes you might want to investigate activities related to a specific workspace. This
example retrieves all activities for all users for one day. It then filters the results so that
you can focus on analyzing activities from one workspace.

Sample request 5
The script declares two variables:

$ActivityDate : The date you're interested in. The format is YYYY-MM-DD. You

can't request a date earlier than 30 days before the current date.
$WorkspaceName : The name of the workspace you're interested in.

The script stores the results in the $Events variable. It then converts the JSON data to
an object so the results can be parsed. It then filters the results to retrieve five specific
columns. The CreationTime data is renamed as ActivityDateTime. The results are filtered
by the workspace name, then output to the screen.

There isn't a parameter for the Get-PowerBIActivityEvent cmdlet that allows you to
specify a workspace when checking the activity log (earlier examples in this article used
PowerShell parameters to set a specific user, date, or activity name). In this example, the
script retrieves all of the data and then parses the JSON response to filter the results for
a specific workspace.

U Caution

If you're in a large organization that has hundreds or thousands of activities per
day, filtering the results after they've been retrieved can be very inefficient. Bear in
mind that the Get Activity Events operation is limited to 200 requests per hour.
To avoid API throttling (when you exceed the number of requests that you're
allowed to make in a given time period), don't retrieve the original data more than
you need to. You can continue to work with the filtered results without running the
script to retrieve the results again. For ongoing needs, it's a better practice to
extract all of the data once per day and then query it many times.

PowerShell

#Input values before running the script:


$ActivityDate = '2023-03-22'
$WorkspaceName = 'Sales Analytics'
#----------------------------------------------------------------------
#Run cmdlet to check activity events and store intermediate results:
$Events = Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate+'T00:00:00.000') `
-EndDateTime ($ActivityDate+'T23:59:59.999')

#Convert from JSON so we can parse the data:


$ConvertedResults = $Events | ConvertFrom-Json

#Obtain specific attributes and save to a PowerShell object
#(CreationTime is renamed as ActivityDateTime):
$FilteredResults = $ConvertedResults |
    Select-Object `
        @{Name="ActivityDateTime";Expression={$PSItem.CreationTime}}, `
        Activity, `
        UserId, `
        ArtifactName, `
        WorkspaceName |
    Where-Object {($PSItem.WorkspaceName -eq $WorkspaceName)} #filter the results

#View the filtered results:


$FilteredResults

#Optional - Save back to JSON format:


#$FilteredResults = $FilteredResults | ConvertTo-Json -Depth 10
#$FilteredResults

Sample response 5
Here are the filtered results, which include a small subset of properties. The format is
easier to read for occasional analysis. However, we recommend that you convert it back
to JSON format if you plan to store the results.
7 Note

After converting the JSON results to a PowerShell object, time values are converted
to local time. The original audit data is always recorded in Coordinated Universal
Time (UTC) time, so we recommend that you get accustomed to using only UTC
time.

Output

ActivityDateTime : 4/25/2023 3:18:30 PM


Activity : ViewReport
UserId : [email protected]
ArtifactName : Gross Margin Analysis
WorkSpaceName : Sales Analytics

ActivityDateTime : 4/25/2023 5:32:10 PM


Activity : ShareReport
UserId : [email protected]
ArtifactName : Call Center Stats
WorkSpaceName : Sales Analytics

ActivityDateTime : 4/25/2023 9:03:05 PM


Activity : ViewReport
UserId : [email protected]
ArtifactName : Call Center Stats
WorkSpaceName : Sales Analytics

 Tip

You can use this technique to filter results by any property in the results. For
example, you can use a specific event RequestId to analyze just one specific event.
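
For example, here's a quick sketch that reuses the $ConvertedResults object from the
previous script; the <request-id> value is a placeholder for the event you're
investigating.

PowerShell

#Filter the converted results down to a single event by its RequestId:
$ConvertedResults | Where-Object { $PSItem.RequestId -eq '<request-id>' }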

Example 6: Export all activities for previous N days
Sometimes you might want to export all activity data to a file so that you can work with
the data outside of PowerShell. This example retrieves all activities for all users for up to
30 days. It exports the data to one JSON file per day.

) Important
Activity log data is available for a maximum of 30 days. It's important that you
export and retain the data so you can do historical analysis. If you don't currently
export and store the activity log data, we strongly recommend that you prioritize
doing so.

Sample request 6
The script retrieves all activities for a series of days. It declares three variables:

$NbrDaysDaysToExtract : How many days you're interested in exporting. It performs
a loop, working backward from the previous day. The maximum value allowed is 30
days (because the earliest date that you can retrieve is 30 days before the current
day).
$ExportFileLocation : The folder path where you want to save the files. The folder

must exist before running the script. Don't include a backslash (\) character at the
end of the folder path (because it's automatically added at runtime). We
recommend that you use a separate folder to store raw data files.
$ExportFileName : The prefix for each file name. Because one file per day is saved,
the script adds a suffix to indicate the data contained in the file, and the date and
time the data was retrieved. For example, if you ran a script at 9am (UTC) on April
25, 2023 to extract activity data for April 23, 2023, the file name would be:
PBIActivityEvents-20230423-202304250900. Although the folder structure where
it's stored is helpful, each file name should be fully self-describing.

We recommend that you extract data that's at least one day before the current day. That
way, you avoid retrieving partial day events, and you can be confident that each export
file contains the full 24 hours of data.

The script gathers up to 30 days of data, through to the previous day. Timestamps for
audited events are always in UTC. We recommend that you build all of your auditing
processes based on UTC time rather than your local time.

The script produces one JSON file per day. The suffix of the file name includes the
timestamp (in UTC format) of the extracted data. If you extract the same day of data
more than once, the suffix in the file name helps you identify the newer file.

PowerShell

#Input values before running the script:


$NbrDaysDaysToExtract = 7
$ExportFileLocation = 'C:\Power-BI-Raw-Data\Activity-Log'
$ExportFileName = 'PBIActivityEvents'
#--------------------------------------------
#Start with yesterday for counting back, to ensure full day results are obtained:
[datetime]$DayUTC = (([datetime]::Today.ToUniversalTime()).Date).AddDays(-1)

#Suffix for file name so we know when it was written:


[string]$DateTimeFileWrittenUTCLabel =
([datetime]::Now.ToUniversalTime()).ToString("yyyyMMddHHmm")

#Loop through each of the days to be extracted (<Initialize> ; <Condition> ; <Repeat>):
For($LoopNbr=0 ; $LoopNbr -lt $NbrDaysDaysToExtract ; $LoopNbr++)
{
[datetime]$DateToExtractUTC=$DayUTC.AddDays(-$LoopNbr).ToString("yyyy-MM-dd")

[string]$DateToExtractLabel=$DateToExtractUTC.ToString("yyyy-MM-dd")

#Create full file name:


[string]$FullExportFileName = $ExportFileName `
+ '-' + ($DateToExtractLabel -replace '-', '') `
+ '-' + $DateTimeFileWrittenUTCLabel `
+ '.json'

#Obtain activity events and store intermediary results:


[psobject]$Events=Get-PowerBIActivityEvent `
-StartDateTime ($DateToExtractLabel+'T00:00:00.000') `
-EndDateTime ($DateToExtractLabel+'T23:59:59.999')

#Write one file per day:


$Events | Out-File "$ExportFileLocation\$FullExportFileName"

Write-Verbose "File written: $FullExportFileName" -Verbose


}
Write-Verbose "Extract of Power BI activity events is complete." -Verbose

There are several advantages to using the Get-PowerBIActivityEvent PowerShell cmdlet
rather than the Get Activity Events REST API operation.

The cmdlet allows you to request one day of activity each time you make a call by
using the cmdlet. Whereas when you communicate with the API directly, you can
only request one hour per API request.
The cmdlet handles continuation tokens for you. If you use the API directly, you
need to check the continuation token to determine whether there are any more
results to come. Some APIs need to use pagination and continuation tokens for
performance reasons when they return a large amount of data. They return the
first set of records, then with a continuation token you can make a subsequent API
call to retrieve the next set of records. You continue calling the API until a
continuation token isn't returned. Using the continuation token is a way to
consolidate multiple API requests into a logical set of results. For an example of
using a continuation token, see Activity Events REST API, or the sketch that follows
this list.
The cmdlet handles Azure Active Directory (Azure AD) access token expirations for
you. After you've authenticated, your access token expires after one hour (by
default). In this case, the cmdlet automatically requests a refresh token for you. If
you communicate with the API directly, you need to request a refresh token.

For more information, see Choose APIs or PowerShell cmdlets.
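
To make the continuation token handling concrete, here's a hedged sketch that calls the
Get Activity Events REST API directly through Invoke-PowerBIRestMethod. The response
property names used below (activityEventEntities, continuationUri, lastResultSet) and the
one-hour request window are assumptions to verify against the Get Activity Events API
reference; the cmdlet used in the earlier examples handles all of this for you.

PowerShell

#Request one hour of activity data from the admin REST API and follow continuation URIs:
$Url = "admin/activityevents?startDateTime='2023-03-15T09:00:00'&endDateTime='2023-03-15T09:59:59'"

$AllEvents = @()
Do {
    $Response = Invoke-PowerBIRestMethod -Url $Url -Method Get | ConvertFrom-Json
    $AllEvents += $Response.activityEventEntities
    $Url = $Response.continuationUri   #assumed: a full URL returned by the API, passed back on the next call
} While (-not $Response.lastResultSet)

Write-Verbose "Retrieved $($AllEvents.Count) activity events." -Verbose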

7 Note

A sample response is omitted because it's an output similar to the responses shown
in the previous examples.

Next steps
For more information related to this article, check out the following resources:

Track user activities in Power BI


Power BI implementation planning: Tenant-level auditing
Power BI adoption roadmap: Auditing and monitoring
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Power BI migration overview
Article • 02/27/2023

Customers are increasingly standardizing on Power BI to drive a data culture, which
involves enabling managed self-service business intelligence (SSBI), rationalizing the
delivery of enterprise BI, and addressing economic pressures. The purpose of this series
of Power BI migration articles is to provide you with guidance on how to plan and
conduct a migration from a third-party BI tool to Power BI.

The articles in the Power BI migration series include:

1. Power BI migration overview (this article)


2. Prepare to migrate to Power BI
3. Gather requirements to migrate to Power BI (Stage 1)
4. Plan deployment to migrate to Power BI (Stage 2)
5. Conduct proof of concept to migrate to Power BI (Stage 3)
6. Create content to migrate to Power BI (Stage 4)
7. Deploy to Power BI (Stage 5)
8. Learn from customer Power BI migrations

7 Note

We also recommend that you thoroughly read the Power BI adoption roadmap
and Power BI implementation planning articles.

There are two assumptions: Your organization has a legacy BI platform currently in place
and the decision has been made to formally migrate content and users to Power BI.
Migrating to the Power BI service is the primary focus of this series. Additional
considerations may apply for national/regional cloud customers beyond what is
discussed in this series of articles.

The following diagram shows four high-level phases for deploying Power BI in your
organization.
Phase Description

Set up and evaluate Power BI. The first phase involves establishing the initial Power BI
architecture. Preliminary deployment and governance planning are handled at this point,
as well as Power BI evaluations including return on investment and/or cost benefit
analysis.

Create new solutions quickly in Power BI. In the second phase, self-service BI authors
can begin using and evaluating Power BI for their needs, and value can be obtained from
Power BI quickly. Activities in Phase 2 place importance on agility and rapid business
value, which is critical to gaining acceptance for the selection of a new BI tool such as
Power BI. For this reason, the diagram depicts activities in Phase 2 happening side by
side with the migration activities in Phase 3.

Migrate BI assets from legacy platform to Power BI. The third phase addresses the
migration to Power BI. It's the focus of this series of Power BI migration articles. Five
specific migration stages are discussed in the next section.

Adopt, govern, and monitor Power BI. The final phase comprises ongoing activities such
as nurturing a data culture, communication, and training. These activities greatly impact
on an effective Power BI implementation. It's important to have governance and security
policies and processes that are appropriate for your organization, as well as auditing and
monitoring to allow you to scale, grow, and continually improve.

) Important

A formal migration to Power BI almost always occurs in parallel with the
development of a new Power BI solution. Power BI solution is a generic term that
encompasses the use of both data and reports. A single Power BI Desktop (.pbix) file
may contain a data model or report, or both. Separating the data model from
reports is encouraged for data reusability purposes, but isn't required.
Using Power BI to author new requirements, while you plan and conduct the formal
migration, will help gain buy-in. Simultaneous phases provide content authors with
practical, real-world experience with Power BI.

Five stages of a Power BI migration


Phase 3 of the diagram addresses migration to Power BI. During this phase, there are
five common stages.

The following stages shown in the previous diagram are:

Pre-migration steps
Stage 1: Gather requirements and prioritize
Stage 2: Plan for deployment
Stage 3: Conduct proof of concept
Stage 4: Create and validate content
Stage 5: Deploy, support, and monitor

Pre-migration steps
The pre-migration steps include actions you may consider prior to beginning a project
to migrate content from a legacy BI platform to Power BI. It typically includes the initial
tenant-level deployment planning. For more information about these activities, see
Prepare to migrate to Power BI.

Stage 1: Gather requirements and prioritize


The emphasis of Stage 1 is on gathering information and planning for the migration of a
single solution. This process should be iterative and scoped to a reasonable sized effort.
The output for Stage 1 includes a prioritized inventory of reports and data that are to be
migrated. Additional activities in Stages 2 and 3 are necessary to fully estimate the level
of effort. For more information about the activities in Stage 1, see Gather requirements
to migrate to Power BI.

Stage 2: Plan for deployment


The focus of Stage 2 is on how the requirements defined in Stage 1 may be fulfilled for
each specific solution. The output of Stage 2 includes as many specifics as possible to
guide the process, though it is an iterative, non-linear process. Creation of a proof of
concept (in Stage 3) may occur in parallel with this stage. Even while creating the
solution (in Stage 4), additional information may come to light that influences
deployment planning decisions. This type of deployment planning in Stage 2 focuses on
the solution level, while respecting the decisions already made at the organizational
level. For more information about the activities in Stage 2, see
Plan deployment to migrate to Power BI.

Stage 3: Conduct proof of concept


The emphasis of Stage 3 is to address unknowns and mitigate risks as early as possible.
A technical proof of concept (POC) is helpful for validating assumptions, and it can be
done iteratively alongside deployment planning (Stage 2). The output of this stage is a
Power BI solution that's narrow in scope. Note that we don't intend for the POC to be
disposable work. However, it will likely require additional work in Stage 4 to make it
production-ready. In this respect, in your organization, you may refer to this activity as
either a prototype, pilot, mockup, quickstart, or minimally viable product (MVP).
Conducting a POC isn't always necessary and it can be done informally. For more
information about the activities in Stage 3, see
Conduct proof of concept to migrate to Power BI.

Stage 4: Create and validate content


Stage 4 is when the actual work to convert the POC to a production-ready solution is
done. The output of this stage is a completed Power BI solution that's been validated in
a development environment. It should be ready for deployment in Stage 5. For more
information about the activities in Stage 4, see Create content to migrate to Power BI.

Stage 5: Deploy, support, and monitor


The primary focus of Stage 5 is to deploy the new Power BI solution to production. The
output of this stage is a production solution that's actively used by business users. When
using an agile methodology, it's acceptable to have some planned enhancements that
will be delivered in a future iteration. Depending on your comfort level with Power BI,
such as minimizing risk and user disruption, you may choose to do a staged
deployment. Or, you might initially deploy to a smaller group of pilot users. Support and
monitoring are also important at this stage, and on an ongoing basis. For more
information about the activities in Stage 5, see Migrate to Power BI.

 Tip

Most of the concepts discussed throughout this series of Power BI migration
articles also apply to a standard Power BI implementation project.

Consider migration reasons


Enabling a productive and healthy data culture is a principal goal of many organizations.
Power BI is an excellent tool to facilitate this objective. Three common reasons you may
consider migrating to Power BI can be distilled down to:

Enable managed self-service BI by introducing new capabilities that empower the
self-service BI user community. Power BI makes access to information and
decision-making more broadly available, while relying less on specialist skills that
can be difficult to find.
Rationalize the delivery of enterprise BI to meet requirements that aren't
addressed by existing BI tools, while decreasing complexity level, reducing cost of
ownership, and/or standardizing from multiple BI tools currently in use.
Address economic pressures for increased productivity with fewer resources, time,
and staffing.

Achieve Power BI migration success


Every migration is slightly different. It can depend on the organizational structure, data
strategies, data management maturity, and organizational objectives. However, there are
some practices we consistently see with our customers who achieve Power BI migration
success.

Executive sponsorship: Identify an executive sponsor early in the process. This
person should be someone who actively supports BI in the organization and is
personally invested in achieving a positive outcome for the migration. Ideally, the
executive sponsor has ultimate authority and accountability for outcomes related
to Power BI. For more information, see this article.
Training, support, and communication: Recognize that it's more than just a
technology initiative. Any BI or analytics project is also a people initiative, so
consider investing early in user training and support. Also, create a communication
plan that transparently explains to all stakeholders what is occurring, why, and sets
realistic expectations. Be sure to include a feedback loop in your communication
plan to capture input from stakeholders.
Quick wins: Initially, prioritize pressing, high-value items that deliver tangible
business value. Rather than strictly attempting to migrate reports precisely as they
appear in the legacy BI platform, focus on the business question the report is trying
to answer, including the action to be taken, when redesigning the report.
Modernization and improvements: Be willing to rethink how things have always
been done. A migration can provide an opportunity to deliver improvements. For
example, it could eliminate manual data preparation or relocate business rules that
were confined to a single report. Consider refactoring, modernizing, and
consolidating existing solutions when the effort can be justified. It can include
consolidating multiple reports into one, or eliminating legacy items that haven't
been used for some time.
Continual learning: Be prepared to use a phased approach while continually
learning and adapting. Work in short, iterative cycles to bring value quickly. Make a
frequent practice of completing small POCs to minimize risk of unknowns, validate
assumptions, and learn about new features. As Power BI is a cloud service that
updates monthly, it's important to keep abreast of developments and adjust
course when appropriate.
Resistance to change: Understand there may be varying levels of resistance to
change; some users will resist learning a new tool. Also, some professionals who
have dedicated significant time and effort to gain expertise with a different BI tool
may feel threatened by being displaced. Be prepared, because it can result in
internal political struggles, particularly in highly decentralized organizations.
Constraints: Be realistic with migration plans, including funding, time estimates, as
well as roles and responsibilities for everyone involved.

Acknowledgments
This series of articles was written by Melissa Coates, Data Platform MVP and owner of
Coates Data Strategies . Contributors and reviewers include Marc Reguera, Venkatesh
Titte, Patrick Baumgartner, Tamer Farag, Richard Tkachuk, Matthew Roche, Adam Saxton,
Chris Webb, Mark Vaillancourt, Daniel Rubiolo, David Iseminger, and Peter Myers.
Next steps
In the next article in this Power BI migration series, learn about the pre-migration steps
when migrating to Power BI.

Other helpful resources include:

Power BI adoption roadmap


Power BI implementation planning
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Migrate SSRS reports to Power BI
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Prepare to migrate to Power BI
Article • 02/27/2023

This article describes actions you can consider prior to migrating to Power BI.

7 Note

For a complete explanation of the above graphic, see Power BI migration overview.

The pre-migration steps emphasize up-front planning, which is important preparation
before moving through the five migration stages. Most of the pre-migration steps will
occur once, though for larger organizations some portions may be iterative for each
business unit or departmental area.

The output from the pre-migration steps includes an initial governance model, initial
high-level deployment planning, in addition to an inventory of the reports and data to
be migrated. Additional information from activities in Stages 1, 2, and 3 will be
necessary to fully estimate the level of effort for migrating individual solutions.

 Tip

Most of the topics discussed in this article also apply to a standard Power BI
implementation project.

Create cost/benefit analysis and evaluation


Several top considerations during the initial evaluation include obtaining:
Clarity on the business case and BI strategy to reach a specific desired future state.
Clarity on what success means, and how to measure progress and success for the
migration initiative.
Cost estimates and return on investment (ROI) calculation results.
Successful results for several productive Power BI initiatives that are smaller in
scope and complexity level.

Identify stakeholders and executive support


Several considerations for identifying stakeholders include:

Ensure executive sponsorship is in place.


Ensure alignment with stakeholders on the business case and BI strategy.
Include representatives from throughout the business units—even if their content
is slated for migration on a later timetable—to understand their motivations and
concerns.
Involve Power BI champions early.
Create, and follow, a communication plan with stakeholders.

 Tip

If you fear you're starting to overcommunicate, then it's probably just about right.

Generate initial governance model


Several key items to address early in a Power BI implementation include:

Specific goals for Power BI adoption and where Power BI fits into the overall BI
strategy for the organization.
How the Power BI administrator role will be handled, particularly in decentralized
organizations.
Policies related to achieving trusted data: use of authoritative data sources,
addressing data quality issues, and use of consistent terminology and common
definitions.
Security and data privacy strategy for data sources, data models, reports, and
content delivery to internal and external users.
How internal and external compliance, regulatory, and audit requirements will be
met.

) Important
The most effective governance model strives to balance user empowerment with
the necessary level of control. For more information, read about discipline at the
core and flexibility at the edge.

Conduct initial deployment planning


Initial deployment planning involves defining standards, policies, and preferences for
the organization's Power BI implementation.

Note that Stage 2 references solution-level deployment planning. The Stage 2 activities
should respect the organizational-level decisions whenever possible.

Some critical items to address early in a Power BI implementation include:

Power BI tenant setting decisions, which should be documented.


Workspace management decisions, which should be documented.
Considerations and preferences related to data and content distribution methods,
such as apps, workspaces, sharing, subscriptions, and embedding of content.
Preferences related to dataset modes, such as use of Import mode, DirectQuery
mode, or combining the two modes in a Composite model.
Securing data and access.
Working with shared datasets for reusability.
Applying data certification to promote the use of authoritative and trustworthy
data.
Use of different report types, including Power BI reports, Excel reports, or
paginated reports for different use cases or business units.
Change management approaches for managing centralized BI items and business-
managed BI items.
Training plans for consumers, data modelers, report authors, and administrators.
Support for content authors by using Power BI Desktop templates, custom
visuals , and documented report design standards.
Procedures and processes for managing user requirements, such as requesting
new licenses, adding new gateway data sources, gaining permission to gateway
data sources, requesting new workspaces, workspace permissions changes, and
other common requirements that may be encountered on a regular basis.

) Important

Deployment planning is an iterative process. Deployment decisions will be refined and
augmented many times as your organization's experience with Power BI grows, and as
Power BI evolves. The decisions made during this process will be used during the
solution-level deployment planning discussed in Stage 2 of the migration process.

Establish initial architecture


Your BI solution architecture will evolve and mature over time. Power BI setup tasks to
handle right away include:

Power BI tenant setup and integration with Azure Active Directory.


Define Power BI administrators.
Procure and assign initial user licenses.
Configure and review Power BI tenant settings.
Set up workspace roles and assign access to Azure Active Directory security groups
and users (see the sketch after this list).
Configure an initial data gateway cluster—with a plan to update regularly.
Procure initial Premium capacity license (if applicable).
Configure Premium capacity workloads—with a plan to manage on an ongoing
basis.
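
As a sketch of the workspace tasks above, the following PowerShell (using the
MicrosoftPowerBIMgmt module) creates a workspace and grants an Azure Active Directory
security group the Member role through the REST API. The workspace name and group
object ID are placeholders; your naming standards and role choices may differ.

```powershell
# Minimal sketch: create a workspace and grant an Azure AD security group the Member role.
# The workspace name and the group object ID below are placeholders.
Connect-PowerBIServiceAccount

$workspace = New-PowerBIWorkspace -Name 'Sales Analytics [Dev]'

# Add the security group by calling the REST API through Invoke-PowerBIRestMethod.
$body = @{
    identifier           = '00000000-0000-0000-0000-000000000000'   # Azure AD group object ID
    principalType        = 'Group'
    groupUserAccessRight = 'Member'   # Admin | Member | Contributor | Viewer
} | ConvertTo-Json

Invoke-PowerBIRestMethod -Url "groups/$($workspace.Id)/users" -Method Post -Body $body
```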

Define success criteria for migration


The first task is to understand what success looks like for migrating an individual
solution. Questions you might ask include:

What are the specific motivations and objectives for this migration? For more
information, see Power BI migration overview (Consider migration reasons). This
article describes the most common reasons for migrating to Power BI. Certainly,
your objectives should be specified at the organizational level. Beyond that,
migrating one legacy BI solution may benefit significantly from cost savings,
whereas migrating a different legacy BI solution may focus on gaining workflow
optimization benefits.
What's the expected cost/benefit or ROI for this migration? Having a clear
understanding of expectations related to cost, increased capabilities, decreased
complexity, or increased agility, is helpful in measuring success. It can provide
guiding principles to help with decision-making during the migration process.
What key performance indicators (KPIs) will be used to measure success? The
following list presents some example KPIs:
Number of reports rendered from legacy BI platform, decreasing month over
month.
Number of reports rendered from Power BI, increasing month over month.
Number of Power BI report consumers, increasing quarter over quarter.
Percentage of reports migrated to production by target date.
Cost reduction in licensing cost year over year.

 Tip

The Power BI activity log can be used as a source for measuring KPI progress.
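
For example, the following PowerShell sketch, which assumes the MicrosoftPowerBIMgmt
module and Power BI administrator rights, counts report view events for one month. That
number could feed the "reports rendered from Power BI, increasing month over month"
KPI; the month and aggregation shown are examples to adapt to your own KPIs.

```powershell
# Minimal sketch: count Power BI report views for one month to feed a KPI trend.
# Assumes the MicrosoftPowerBIMgmt module is installed and you have admin rights.
Connect-PowerBIServiceAccount

# The activity log is queried one UTC day at a time, so loop over the month.
$start = Get-Date -Year 2023 -Month 3 -Day 1
$viewCount = 0

for ($day = 0; $day -lt 31; $day++) {
    $date = $start.AddDays($day)
    if ($date.Month -ne $start.Month) { break }

    $events = Get-PowerBIActivityEvent `
        -StartDateTime ($date.ToString('yyyy-MM-dd') + 'T00:00:00') `
        -EndDateTime   ($date.ToString('yyyy-MM-dd') + 'T23:59:59') `
        -ActivityType ViewReport | ConvertFrom-Json

    $viewCount += ($events | Measure-Object).Count
}

"Report views in $($start.ToString('MMMM yyyy')): $viewCount"
```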

Prepare inventory of existing reports


Preparing an inventory of existing reports in the legacy BI platform is a critical step
towards understanding what already exists. The outcome of this step is an input to
assessing the migration effort level. Activities related to preparing an inventory may
include:

1. Inventory of reports: Compile a list of reports and dashboards that are migration
candidates.
2. Inventory of data sources: Compile a list of all data sources accessed by existing
reports. It should include both enterprise data sources as well as departmental and
personal data sources. This process may unearth data sources not previously
known to the IT department, often referred to as shadow IT.
3. Audit log: Obtain data from the legacy BI platform audit log to understand usage
patterns and assist with prioritization. Important information to obtain from the
audit log includes:

Average number of times each report was executed per week/month/quarter.


Average number of consumers per report per week/month/quarter.
The consumers for each report, particularly reports used by executives.
Most recent date each report was executed.

7 Note

In many cases, the content isn't migrated to Power BI exactly as is. The migration
represents an opportunity to redesign the data architecture and/or improve report
delivery. Compiling an inventory of reports is crucial to understanding what
currently exists so you can begin to assess what refactoring needs to occur. The
remaining articles in this series describe possible improvements in more detail.

Explore automation options


It isn't possible to completely automate a Power BI conversion process end-to-end.

Compiling the existing inventory of data and reports is a candidate for automation when
you have a tool that can do it for you. More generally, the extent to which automation
can be used for portions of the migration process depends heavily on the tools you have
available.

Next steps
In the next article in this Power BI migration series, learn about Stage 1, which is
concerned with gathering and prioritizing requirements when migrating to Power BI.

Other helpful resources include:

Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Gather requirements to migrate to
Power BI
Article • 02/27/2023

This article describes Stage 1, which is concerned with gathering and prioritizing
requirements when migrating to Power BI.

7 Note

For a complete explanation of the above graphic, see Power BI migration overview.

The emphasis of Stage 1 is on information gathering and planning for an individual
solution that will be migrated to Power BI.

The output from Stage 1 includes detailed requirements that have been prioritized.
However, additional activities in Stages 2 and 3 must be completed to fully estimate the
level of effort.

) Important

Stages 1-5 represent activities related to one specific solution. There are decisions
and activities at the organizational/tenant level which impact the process at the
solution level. Some of those higher-level planning activities are discussed in the
Power BI migration overview article. When appropriate, defer to the
organizational-level decisions for efficiency and consistency.
The Power BI adoption roadmap describes these types of strategic and tactical
considerations. It has an emphasis on organizational adoption.

 Tip

Most of the topics discussed in this article also apply to a standard Power BI
implementation project.

Compile requirements
The inventory of existing BI items, compiled in the pre-migration steps, becomes the
input for the requirements of the new solution to be created in Power BI. Collecting
requirements is about understanding the current state, as well as what items users
would like changed or refactored when reports are redesigned in Power BI. Detailed
requirements will be useful for solution deployment planning in Stage 2, during creation
of a proof of concept in Stage 3, and when creating the production-ready solution in
Stage 4.

Gather report requirements


Compile thorough, easy-to-reference, information about reports, such as:

Purpose, audience, and expected action: Identify the purpose and business
process applicable to each report, as well as the audience, analytical workflow, and
expected action to be taken by report consumers.
How consumers use the report: Consider sitting with report consumers of the
existing report to understand exactly what they do with it. You may learn that
certain elements of the report can be eliminated or improved in the new Power BI
version. This process involves additional time investment but it's valuable for
critical reports or reports that are used often.
Owner and subject matter expert: Identify the report owner and any subject
matter expert(s) associated with the report or data domain. They may become the
owners of the new Power BI report going forward. Include any specific change
management requirements (which typically differ between IT-managed and
business-managed solutions) as well as approvals and sign-offs, which will be
required when changes are made in the future. For more information, see this
article.
Content delivery method: Clarify report consumer expectations for content
delivery. It may be on-demand, interactive execution, embedded within a custom
application, or delivery on a schedule using an e-mail subscription. There may also
be requirements to trigger alert notifications.
Interactivity needs: Determine must-have and nice-to-have interactivity
requirements, such as filters, drill-down actions, or drillthrough actions.
Data sources: Ensure all data sources required by the report are discovered, and
data latency needs (data freshness) are understood. Identify historical data,
trending, and data snapshot requirements for each report so they can be aligned
with the data requirements. Data source documentation can also be useful later on
when performing data validation of a new report with its source data.
Security requirements: Clarify security requirements (such as allowed viewers,
allowed editors, and any row-level security needs), including any exceptions to
normal organizational security. Document any data sensitivity level, data privacy, or
regulatory/compliance needs.
Calculations, KPIs, and business rules: Identify and document all calculations, KPIs,
and business rules that are currently defined within the existing report so they can
be aligned with the data requirements.
Usability, layout, and cosmetic requirements: Identify specific usability, layout,
and cosmetic needs related to data visualizations, grouping and sorting
requirements, and conditional visibility. Include any specific considerations related
to mobile device delivery.
Printing and exporting needs: Determine whether there are any requirements
specific to printing, exporting, or pixel-perfect layout. These needs will influence
which type of report will be most suitable (such as a Power BI, Excel, or paginated
report). Be aware that report consumers tend to place a lot of importance on how
they've always done things, so don't be afraid to challenge their way of thinking.
Be sure to talk in terms of enhancements rather than change.
Risks or concerns: Determine whether there are other technical or functional
requirements for reports, as well as any risks or concerns regarding the information
being presented in them.
Open issues and backlog items: Identify any future maintenance, known issues, or
deferred requests to add to the backlog at this time.

 Tip

Consider ranking requirements by classifying them as must have or nice to have.


Frequently consumers ask for everything they may possibly need up-front because
they believe it may be their only chance to make requests. Also, when addressing
priorities in multiple iterations, make the backlog available to stakeholders. It helps
with communication, decision-making, and the tracking of pending commitments.
Gather data requirements
Compile detailed information pertaining to data, such as:

Existing queries: Identify whether there are existing report queries or stored
procedures that can be used by a DirectQuery model or a Composite model, or
can be converted to an Import model.
Types of data sources: Compile the types of data sources that are necessary,
including centralized data sources (such as an enterprise data warehouse) as well
as non-standard data sources (such as flat files or Excel files that augment
enterprise data sources for reporting purposes). Finding where data sources are
located, for purposes of data gateway connectivity, is important too.
Data structure and cleansing needs: Determine the data structure for each
requisite data source, and to what extent data cleansing activities are necessary.
Data integration: Assess how data integration will be handled when there are
multiple data sources, and how relationships can be defined between each model
table. Identify specific data elements needed to simplify the model and reduce its
size.
Acceptable data latency: Determine the data latency needs for each data source. It
will influence decisions about which data storage mode to use. Data refresh
frequency for Import model tables is important to know too.
Data volume and scalability: Evaluate data volume expectations, which will factor
into decisions about large model support and designing DirectQuery or Composite
models. Considerations related to historical data needs are essential to know too.
For larger datasets, determining incremental data refresh will also be necessary.
Measures, KPIs, and business rules: Assess needs for measures, KPIs, and business
rules. They will impact decisions regarding where to apply the logic: in the dataset
or the data integration process.
Master data and data catalog: Consider whether there are master data issues
requiring attention. Determine if integration with an enterprise data catalog is
appropriate for enhancing discoverability, accessing definitions, or producing
consistent terminology accepted by the organization.
Security and data privacy: Determine whether there are any specific security or
data privacy considerations for datasets, including row-level security requirements.
Open issues and backlog items: Add any known issues, known data quality
defects, future maintenance, or deferred requests to the backlog at this time.

) Important

Data reusability can be achieved with shared datasets, which can optionally be
certified to indicate trustworthiness and improve discoverability. Data preparation
reusability can be achieved with dataflows to reduce repetitive logic in multiple
datasets. Dataflows can also significantly reduce the load on source systems
because the data is retrieved less often—multiple datasets can then import data
from the dataflow.

Identify improvement opportunities


In most situations, some modifications and improvements occur. It's rare that a direct
one-to-one migration occurs without any refactoring or enhancement. Three types of
improvements you may consider include:

Consolidation of reports: Similar reports may be consolidated using techniques such as
filters, bookmarks, or personalization. Having fewer reports, which are each
more flexible, can significantly improve the experience for report consumers.
Consider optimizing datasets for Q&A (natural language querying) to deliver even
greater flexibility to report consumers, allowing them to create their own
visualizations.
Efficiency improvements: During requirements gathering, improvements can often
be identified. For instance, when analysts compile numbers manually or when a
workflow can be streamlined. Power Query can play a large role in replacing
manual activities that are currently performed. If business analysts find themselves
performing the same activities to cleanse and prepare data on a regular basis,
repeatable Power Query data preparation steps can yield significant time savings
and reduce errors.
Centralization of data model: An authoritative and certified dataset serves as the
backbone for managed self-service BI. In this case, the data is managed once, and
analysts have flexibility to use and augment that data to meet their reporting and
analysis needs.

7 Note

For more information about centralization of data models, read about discipline at
the core and flexibility at the edge.

Prioritize and assess complexity


At this point, the initial inventory is available and may include specific requirements.
When prioritizing the initial set of BI items ready for migration, reports and data should
be considered collectively as well as independently of each other.
Identify high priority reports, which may include reports that:

Bring significant value to the business.


Are executed frequently.
Are required by senior leadership or executives.
Involve a reasonable level of complexity (to improve chances of success during the
initial migration iterations).

Identify high priority data, which may include data that:

Contains critical data elements.


Is common organizational data that serves many use cases.
May be used to create a shared dataset for reuse by reports and many report
authors.
Involves a reasonable level of complexity (to improve chances of success during the
initial migration iterations).

Next steps
In the next article in this Power BI migration series, learn about Stage 2, which is
concerned with planning the migration for a single Power BI solution.

Other helpful resources include:

Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Plan deployment to migrate to Power BI
Article • 03/14/2023

This article describes Stage 2, which is concerned with planning the migration for a
single Power BI solution.

7 Note

For a complete explanation of the above graphic, see Power BI migration overview.

The focus of Stage 2 is on defining how the requirements that were defined in Stage 1
are used to migrate a solution to Power BI.

The output from Stage 2 includes as many specific decisions as possible to guide the
deployment process.

Decision-making of this nature is an iterative and non-linear process. Some planning will
have already occurred in the pre-migration steps. Learnings from a proof of concept
(described in Stage 3) may occur in parallel with deployment planning. Even while
creating the solution (described in Stage 4), additional information may arise that
influences deployment decisions.

) Important

Stages 1-5 represent activities related to one specific solution. There are decisions
and activities at the organizational/tenant level which impact the process at the
solution level. Some of those higher-level planning activities are discussed in the
Power BI migration overview article. When appropriate, defer to the
organizational-level decisions for efficiency and consistency.

 Tip

The topics discussed in this article also apply to a standard Power BI implementation
project.

Choose Power BI product


One of the first decisions is to choose the Power BI product. It's a decision between the
Power BI service or Power BI Report Server. Once content has been published, many
additional options become available, such as embedding, mobile delivery, and email
subscriptions.

For more information about architectural considerations, see Section 3 of the Planning a
Power BI enterprise deployment whitepaper .

U Caution

If you're tempted to rely on using Power BI Desktop files stored in a file system, be
aware that it's not an optimal approach. Using the Power BI service (or Power BI
Report Server) has significant advantages for security, content distribution, and
collaboration. The ability to audit and monitor activities is also enabled by the
Power BI service.

Decide on workspace management approach


Workspaces are a core concept of the Power BI service, which makes workspace
management an important aspect of planning. Questions to ask include:

Is a new workspace needed for this new solution?


Will separate workspaces be needed to accommodate development, test, and
production?
Will separate workspaces be used for data and reports, or will a single workspace
be sufficient? Separate workspaces have numerous advantages, especially for
securing datasets. When necessary, they can be managed separately from those
users who publish reports.
What are the security requirements for the workspace? It influences planning for
workspace roles. If an app will be used by content consumers, audience
permissions for the app are managed separately from the workspace. Distinct
permissions for app viewers allow additional flexibility in meeting security
requirements for read-only consumers of reports or dashboards.
Can existing groups be used for securing the new content? Both Azure Active
Directory and Microsoft 365 groups are supported. When aligned with existing
processes, using groups makes permissions management easier than assignments
to individual users.
Are there any security considerations related to external guest users? You may
need to work with your Azure Active Directory administrator and your Power BI
administrator to configure guest user access.

 Tip

Consider creating a workspace for a specific business activity or project. You may
be tempted to start off structuring workspaces based on your organizational
structure (such as a workspace per department), but this approach frequently ends
up being too broad.

Determine how content will be consumed


It's helpful to understand how consumers of a solution prefer to view reports and
dashboards. Questions to ask include:

Will a Power BI app (which comprises reports and dashboards from a single
workspace) be the best way to deliver content to consumers, or will direct access
to a workspace be sufficient for content viewers?
Will certain reports and dashboards be embedded elsewhere, such as Teams,
SharePoint Online, or a secure portal or website?
Will consumers access content using mobile devices? Requirements to deliver
reports to small form factor devices will influence some report design decisions.

Decide if other content may be created


There are several key decisions to be made related to allowing consumers to create new
content, such as:

Will consumers be allowed to create new reports from the published dataset? This
capability can be enabled by assigning dataset build permission to a user.
If consumers want to customize a report, can they save a copy of it and personalize
it to meet their needs?

U Caution

Although the Save a copy capability is a nice feature, it should be used with caution
when the report includes certain graphics or header/footer messages. Since logos,
icons, and textual messages often relate to branding requirements or regulatory
compliance, it's important to carefully control how they're delivered and
distributed. If Save a copy is used, but the original graphics or header/footer
messages remain unchanged by the new author, it can result in confusion about
who actually produced the report. It can also reduce the meaningfulness of the
branding.

Evaluate needs for Premium capacity


Additional capabilities are available when a workspace is stored on a Premium capacity.
Here are several reasons why workspaces on Premium capacity can be advantageous:

Content can be accessed by consumers who don't have a Power BI Pro or Premium
Per User (PPU) license.
Support for large datasets.
Support for more frequent data refreshes.
Support for using the full feature set of dataflows.
Enterprise features, including deployment pipelines and the XMLA endpoint.

Determine data acquisition method


The data required by a report may influence several decisions. Questions to ask include:

Can an existing Power BI shared dataset be used, or is the creation of a new Power
BI dataset appropriate for this solution?
Does an existing shared dataset need to be augmented with new data or measures
to meet additional needs?
Which data storage mode will be most appropriate? Options include Import,
DirectQuery, Composite, or Live Connection.
Should aggregations be used to enhance query performance?
Will creation of a dataflow be useful and can it serve as a source for numerous
datasets?
Will a new gateway data source need to be registered?
Decide where original content will be stored
In addition to planning the target deployment destination, it's also important to plan
where the original—or source—content will be stored, such as:

Specify an approved location for storing the original Power BI Desktop (.pbix) files.
Ideally, this location is available only to people who edit the content. It should
align with how security is set up in the Power BI service.
Use a location for original Power BI Desktop files that includes versioning history or
source control. Versioning permits the content author to revert to a previous file
version, if necessary. OneDrive for work or school or SharePoint work well for this
purpose.
Specify an approved location for storing non-centralized source data, such as flat
files or Excel files. It should be a path that any of the dataset authors can reach
without error and is backed up regularly.
Specify an approved location for content exported from the Power BI service. The
goal is to ensure that security defined in the Power BI service isn't inadvertently
circumvented.

) Important

Specifying a protected location for original Power BI Desktop files is particularly
important when they contain imported data.

Assess the level of effort


Once sufficient information is available from the requirements (which were described in
Stage 1) and the solution deployment planning process, it's now possible to assess the
level of effort. It's then possible to formulate a project plan with tasks, timeline, and
responsibility.

 Tip

Labor costs—salaries and wages—are usually among the highest expenses in most
organizations. Although it can be difficult to accurately estimate, productivity
enhancements have an excellent return on investment (ROI).

Next steps
In the next article in this Power BI migration series, learn about Stage 3, which is
concerned with conducting a proof of concept to mitigate risk and address unknowns as
early as possible when migrating to Power BI.

Other helpful resources include:

Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Conduct proof of concept to migrate to
Power BI
Article • 02/27/2023

This article describes Stage 3, which is concerned with conducting a proof of concept
(POC) to mitigate risk and address unknowns as early as possible when migrating to
Power BI.

7 Note

For a complete explanation of the above graphic, see Power BI migration overview.

The focus of Stage 3 is to address unknowns and mitigate risks as early as possible. A
technical POC is helpful for validating assumptions. It can be done iteratively alongside
solution deployment planning (described in Stage 2).

The output from this stage is a Power BI solution that's narrow in scope, addresses the
initial open questions, and is ready for additional work in Stage 4 to make it production-
ready.

) Important

We don't intend for the POC to be disposable work. Rather, we expect it to be an early
iteration of the production-ready solution. In your organization, you may refer to this
activity as a prototype, pilot, mockup, quick start, or minimally viable product (MVP).
Conducting a POC isn't always necessary and it could even happen informally.
 Tip

Most of the topics discussed in this article also apply to a standard Power BI
implementation project. As your organization becomes more experienced with
Power BI, the need to conduct POCs diminishes. However, due to the fast release
cadence with Power BI and the continual introduction of new features, you might
regularly conduct technical POCs for learning purposes.

Set POC goals and scope


When conducting a POC, focus on the following goals:

Verify your assumptions about how a feature works.


Educate yourself on differences in how Power BI works compared with the legacy
BI platform.
Validate initial understandings of certain requirements with subject matter experts.
Create a small dataset with real data to understand and detect any issues with the
data structure, relationships, data types, or data values.
Experiment with, and validate, DAX syntax expressions used by model calculations.
Test data source connectivity using a gateway (if it's to be a gateway source).
Test data refresh using a gateway (if it's to be a gateway source).
Verify security configurations, including row-level security when applicable.
Experiment with layout and cosmetic decisions.
Verify that all functionality in the Power BI service works as expected.

The POC scope is dependent on what the unknowns are, or which goals need to be
validated with colleagues. To reduce complexity, keep a POC as narrow as possible in
terms of scope.

Most often with a migration, requirements are well known because there's an existing
solution to start from. However, depending on the extent of improvements to be made
or existing Power BI skills, a POC still provides significant value. In addition, rapid
prototyping with consumer feedback may be appropriate to quickly clarify requirements
—especially if enhancements are made.

) Important

Even if a POC includes only a subset of data, or includes only limited visuals, it's
often important to take it from start to finish. That is, from development in Power BI
Desktop to deployment to a development workspace in the Power BI service. It's
the only way to fully accomplish the POC objectives. It's particularly true when the
Power BI service must deliver critical functionality that you haven't used before, like
a DirectQuery dataset using single sign-on. During the POC, focus your efforts on
aspects you're uncertain about or need to verify with others.

Handle differences in Power BI


Power BI can be used as a model-based tool or as a report-based tool. A model-based
solution involves developing a data model, whereas a report-based solution connects to
an already-deployed data model.

Due to its extreme flexibility, there are some aspects about Power BI that may be
fundamentally different from the legacy BI platform you're migrating from.

Consider redesigning the data architecture


If you're migrating from a legacy BI platform that has its own semantic layer, then the
creation of an Import dataset is likely to be a good option. Power BI functions best with
a star schema table design. Therefore, if the legacy semantic layer is not a star schema,
it's possible that some redesign may be required to fully benefit from Power BI. Putting
effort into defining a semantic layer adhering to star schema design principles (including
relationships, commonly used measures, and friendly organizational terminology) serves
as an excellent starting point for self-service report authors.

If you're migrating from a legacy BI platform where reports reference relational data
sources using SQL queries or stored procedures, and if you're planning to use Power BI
in DirectQuery mode, you may be able to achieve close to a one-to-one migration of
the data model.

U Caution

If you see the creation of lots of Power BI Desktop files comprising a single
imported table, it's usually an indicator that the design isn't optimal. Should you
notice this situation, investigate whether the use of shared datasets that are
created using a star schema design could achieve a better result.

Decide how to handle dashboard conversions


In the BI industry, a dashboard is a collection of visuals that displays key metrics on a
single page. However, in Power BI, a dashboard represents a specific visualization
feature that can only be created in the Power BI service. When migrating a dashboard
from a legacy BI platform, you have two choices:

1. The legacy dashboard can be recreated as a Power BI report. Most reports are
created with Power BI Desktop. Paginated reports and Excel reports are alternative
options, too.
2. The legacy dashboard can be recreated as a Power BI dashboard. Dashboards are a
visualization feature of the Power BI service. Dashboard visuals are often created
by pinning visuals from one or more reports, Q&A, or Quick Insights.

 Tip

Because dashboards are a Power BI content type, refrain from using the word
dashboard in the report or dashboard name.

Focus on the big picture when recreating visuals


Every BI tool has its strengths and focus areas. For this reason, the exact report visuals
you depended on in a legacy BI platform may not have a close equivalent in Power BI.

When recreating report visuals, focus more on the big picture business questions that
are being addressed by the report. It removes the pressure to replicate the design of
every visual in precisely the same way. While content consumers appreciate consistency
when using migrated reports, it's important not to get caught up in time-consuming
debates about small details.

Next steps
In the next article in this Power BI migration series, learn about stage 4, which is
concerned with creating and validating content when migrating to Power BI.

Other helpful resources include:

Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Create content to migrate to Power BI
Article • 03/14/2023

This article describes Stage 4, which is concerned with creating and validating content
when migrating to Power BI.

7 Note

For a complete explanation of the above graphic, see Power BI migration overview.

The focus of Stage 4 is performing the actual work to convert the proof of concept
(POC) to a production-ready solution.

The output from this stage is a Power BI solution that has been validated in a
development workspace and is ready for deployment to production.

 Tip

Most of the topics discussed in this article also apply to a standard Power BI
implementation project.

Create the production solution


At this juncture, the same person who performed the POC may carry on with producing
the production-ready Power BI solution. Or, someone different may be involved. If
timelines are not jeopardized, it's great to get people involved who will be responsible
for Power BI development in the future. This way, they can actively learn.
) Important

Reuse as much of the work from the POC as possible.

Develop new import dataset


You may choose to create a new Import dataset when an existing Power BI dataset
doesn't already exist to meet your needs, or if it can't be enhanced to meet your needs.

Ideally, from the very beginning, consider decoupling the development work for data
and reports. Decoupling data and reports will facilitate the separation of work, and
permissions, when different people are responsible for data modeling and reports. It
makes for a more scalable approach and encourages data reusability.

The essential activities related to development of an Import dataset include:

Acquire data from one or more data sources (which may be a Power BI dataflow).
Shape, combine, and prepare data.
Create the dataset model, including date tables.
Create and verify model relationships.
Define measures.
Set up row-level security, if necessary.
Configure synonyms and optimize Q&A.
Plan for scalability, performance, and concurrency, which may influence your
decisions about data storage modes, such as using a Composite model or
aggregations.

 Tip

If you have different development/test/production environments, consider parameterizing
data sources. It will make deployment, described in Stage 5, significantly easier.
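
As one possible sketch of this approach, the PowerShell below repoints a published
dataset at a test environment by updating its Power Query parameters through the
UpdateParameters REST endpoint. The workspace, dataset, and parameter names (ServerName
and DatabaseName) are hypothetical; the dataset must actually define those parameters,
and the new values take effect at the next refresh.

```powershell
# Minimal sketch: switch a published dataset to the test data source by updating its
# Power Query parameters. Workspace, dataset, and parameter names are placeholders.
Connect-PowerBIServiceAccount

$workspaceId = (Get-PowerBIWorkspace -Name 'Sales Analytics [Test]').Id
$datasetId   = (Get-PowerBIDataset -WorkspaceId $workspaceId |
    Where-Object { $_.Name -eq 'Sales' }).Id

$body = @{
    updateDetails = @(
        @{ name = 'ServerName';   newValue = 'sql-test.contoso.com' }
        @{ name = 'DatabaseName'; newValue = 'SalesDW_Test' }
    )
} | ConvertTo-Json -Depth 3

Invoke-PowerBIRestMethod `
    -Url "groups/$workspaceId/datasets/$datasetId/Default.UpdateParameters" `
    -Method Post -Body $body
```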

Develop new reports and dashboards


The essential activities related to development of a Power BI report or dashboard
include:

Decide on using a Live Connection to an existing data model, or creating a new data
model.
When creating a new data model, decide on the data storage mode for model
tables (Import, DirectQuery, or Composite).
Decide on the best data visualization tool to meet requirements: Power BI Desktop,
Paginated Report Builder, or Excel.
Decide on the best visuals to tell the story the report needs to tell, and to address
the questions the report needs to answer.
Ensure all visuals present clear, concise, and business-friendly terminology.
Address interactivity requirements.
When using Live Connection, add report-level measures.
Create a dashboard in the Power BI service, especially when consumers want an
easy way to monitor key metrics.

7 Note

Many of these decisions will have been made in earlier stages of planning or in the
technical POC.

Validate the solution


There are four main aspects to validation of a Power BI solution:

1. Data accuracy
2. Security
3. Functionality
4. Performance

Validate data accuracy


As a one-time effort during the migration, you'll need to ensure the data in the new
report matches what's displayed in the legacy report. Or—if there's a difference—be
able to explain why. It's more common than you might think to find an error in the
legacy solution that gets resolved in the new solution.

As part of ongoing data validation efforts, the new report will typically need to be cross-
checked with the original source system. Ideally, this validation occurs in a repeatable
way every time you publish a report change.
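
One way to make that cross-check repeatable is to script a query against the published
dataset and compare the result with an equivalent query against the source system. The
PowerShell sketch below uses the executeQueries REST endpoint (its use may need to be
allowed in your tenant settings); the workspace, dataset, table, and column names are
placeholders, and the DAX should be replaced with whatever totals you reconcile.

```powershell
# Minimal sketch: query a published dataset so a total can be reconciled with the
# source system. Workspace, dataset, table, and column names are placeholders.
Connect-PowerBIServiceAccount

$workspaceId = (Get-PowerBIWorkspace -Name 'Sales Analytics [Test]').Id
$datasetId   = (Get-PowerBIDataset -WorkspaceId $workspaceId |
    Where-Object { $_.Name -eq 'Sales' }).Id

$body = @{
    queries = @(
        @{ query = 'EVALUATE ROW("SalesAmount", SUM(Sales[SalesAmount]))' }
    )
} | ConvertTo-Json -Depth 3

$result = Invoke-PowerBIRestMethod `
    -Url "datasets/$datasetId/executeQueries" -Method Post -Body $body | ConvertFrom-Json

# Compare this value with the result of the equivalent query in the source system.
$result.results[0].tables[0].rows[0]
```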

Validate security
When validating security, there are two primary aspects to consider:
Data permissions
Access to datasets, reports, and dashboards

In an Import dataset, data permissions are applied by defining row-level security (RLS).
It's also possible that data permissions are enforced by the source system when using
DirectQuery storage mode (possibly with single sign-on).

The main ways to grant access to Power BI content are:

Workspace roles (for content editors and viewers).


App audience permissions applied to a packaged set of workspace content (for
viewers).
Sharing an individual report or dashboard (for viewers).

 Tip

We recommend training content authors on how to manage security effectively. It's also
important to have robust testing, auditing and monitoring in place.
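
As part of that testing, it can help to script a check of workspace role assignments so
they can be compared with the security plan. The following PowerShell sketch lists the
principals and roles for a workspace; the workspace name is a placeholder.

```powershell
# Minimal sketch: list who holds which role in a workspace so the assignments can be
# verified against the security plan. The workspace name is a placeholder.
Connect-PowerBIServiceAccount

$workspaceId = (Get-PowerBIWorkspace -Name 'Sales Analytics [Test]').Id

$users = (Invoke-PowerBIRestMethod -Url "groups/$workspaceId/users" -Method Get |
    ConvertFrom-Json).value

$users | Select-Object displayName, principalType, groupUserAccessRight | Format-Table
```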

Validate functionality
It's time to double-check dataset details like field names, formatting, sorting, and
default summarization behavior. Interactive report features, such as slicers, drill-down
actions, drillthrough actions, expressions, buttons, or bookmarks, should all be verified,
too.

During the development process, the Power BI solution should be published to a
development workspace in the Power BI service on a regular basis. Verify all functionality
works as expected in the service, such as the rendering of custom visuals. It's also a
good time to do further testing. Test scheduled refresh, Q&A, and how reports and
dashboards look on a mobile device.

Validate performance
Performance of the Power BI solution is important for consumer experience. Most
reports should present visuals in under 10 seconds. If you have reports that take longer
to load, pause and reconsider what may be contributing to delays. Report performance
should be assessed regularly in the Power BI service, in addition to Power BI Desktop.

Many performance issues arise from substandard DAX (Data Analysis eXpressions), poor
dataset design, or suboptimal report design (for instance, trying to render too many
visuals on a single page). Technical environment issues, such as the network, an
overloaded data gateway, or how a Premium capacity is configured can also contribute
to performance issues. For more information, see the Optimization guide for Power BI
and Troubleshoot report performance in Power BI.

Document the solution


There are two main types of documentation that are useful for a Power BI solution:

Dataset documentation
Report documentation

Documentation can be stored wherever it's most easily accessed by the target audience.
Common options include:

Within a SharePoint site: A SharePoint site may exist for your Center of Excellence
or an internal Power BI community site.
Within an app: URLs may be configured when publishing a Power BI app to direct
the consumer to more information.
Within individual Power BI Desktop files: Model elements, like tables and
columns, can define a description. These descriptions appear as tooltips in the
Fields pane when authoring reports.

 Tip

If you create a site to serve as a hub for Power BI-related documentation, consider
customizing the Get Help menu with its URL location.

Create dataset documentation


Dataset documentation is targeted at users who will be managing the dataset in the
future. It's useful to include:

Design decisions made and reasons why.


Who owns, maintains, and certifies datasets.
Data refresh requirements.
Custom business rules defined in datasets.
Specific dataset security or data privacy requirements.
Future maintenance needs.
Known open issues or deferred backlog items.
You may also elect to create a change log that summarizes the most important changes
that have happened to the dataset over time.

Create report documentation


Report documentation, which is typically structured as a walk-through targeted at report
consumers, can help consumers get more value from your reports and dashboards. A
short video tutorial often works well.

You may also choose to include additional report documentation on a hidden page of
your report. It could include design decisions and a change log.

Next steps
In the next article in this Power BI migration series, learn about stage 5, which is
concerned with deploying, supporting, and monitoring content when migrating to
Power BI.

Other helpful resources include:

Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Deploy to Power BI
Article • 02/27/2023

This article describes Stage 5, which is concerned with deploying, supporting, and
monitoring content when migrating to Power BI.

7 Note

For a complete explanation of the above graphic, see Power BI migration overview.

The primary focus of Stage 5 is to deploy the new Power BI solution to production.

The output from this stage is a production solution ready for use by the business. When
working with an agile method, it's acceptable to have some planned enhancements that
will be delivered in a future iteration. Support and monitoring are also important at this
stage, and on an ongoing basis.

 Tip

Except for running in parallel and decommissioning the legacy reports, which are
discussed below, the topics discussed in this article also apply to a standard Power
BI implementation project.

Deploy to test environment


For IT-managed solutions, or solutions that are critical to business productivity, there's
generally a test environment. A test environment sits between development and
production, and it's not necessary for all Power BI solutions. A test workspace can serve
as a stable location, separated from development, for user acceptance testing (UAT) to
occur before release to production.

If your content has been published to a workspace on Premium capacity, deployment
pipelines can simplify the deployment process to development, test, and production
workspaces. Alternatively, publishing may be done manually or with PowerShell
scripts .
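
For example, a script-based publish might look like the following PowerShell sketch,
which uploads a Power BI Desktop file to the test workspace with the New-PowerBIReport
cmdlet. The file path and workspace name are placeholders; on Premium capacity,
deployment pipelines remain the simpler option.

```powershell
# Minimal sketch: publish a .pbix file to the test workspace from a script.
# The file path and workspace name are placeholders.
Connect-PowerBIServiceAccount

$workspace = Get-PowerBIWorkspace -Name 'Sales Analytics [Test]'

New-PowerBIReport `
    -Path 'C:\BI\SalesAnalytics.pbix' `
    -Name 'Sales Analytics' `
    -WorkspaceId $workspace.Id `
    -ConflictAction CreateOrOverwrite   # replaces an existing report of the same name
```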

Deploy to test workspace


Key activities during a deployment to the test workspace typically include:

Connection strings and parameters: Adjust dataset connection strings if the data
source differs between development and test. Parameterization can be used to
effectively manage connection strings.
Workspace content: Publish datasets and reports to the test workspace, and create
dashboards.
App: Publish an app using the content from the test workspace, if it will form part
of the UAT process. Usually, app permissions are restricted to a small number of
people involved with UAT.
Data refresh: Schedule the dataset refresh for any Import datasets for the period
when UAT is actively occurring.
Security: Update or verify workspace roles. Access to the test workspace should be
limited to the small number of people involved with UAT.

7 Note

For more information about options for deployment to development, test, and
production, see Section 9 of the Planning a Power BI enterprise deployment
whitepaper .

Conduct user acceptance testing


Generally, UAT involves business users who are subject matter experts. Once verified,
they provide their approval that the new content is accurate, meets requirements, and
may be deployed for wider consumption by others.

The extent to which this UAT process is formal, including written sign-offs, will depend
on your change management practices.
Deploy to production environment
There are several considerations for deploying to the production environment.

Conduct a staged deployment


If you're trying to minimize risk and user disruption, or if there are other concerns, you
may opt to perform a staged deployment. The first deployment to production may
involve a smaller group of pilot users. With a pilot, feedback can be actively requested
from the pilot users.

Expand permissions in the production workspace, or the app, gradually until all target
users have permission to the new Power BI solution.

 Tip

Use the Power BI Activity Log to understand how consumers are adopting and
using the new Power BI solution.

Handle additional components


During the deployment process, you may need to work with your Power BI
administrators to address other requirements that are needed to support the entire
solution, such as:

Gateway maintenance: A new data source registration in the data gateway may be
required.
Gateway drivers and connectors: A new proprietary data source may require
installation of a new driver or custom connector on each server in the gateway
cluster.
Create a new Premium capacity: You may be able to use an existing Premium
capacity. Or, there may be situations when a new Premium capacity is warranted. It
could be the case when you purposely wish to separate a departmental workload.
Set up a Power BI dataflow: Data preparation activities can be set up once in a
Power BI dataflow using Power Query Online. It helps avoid replicating data
preparation work in many different Power BI Desktop files.
Register a new organizational visual: Organizational visual registration can be
done in the admin portal for custom visuals that didn't originate from AppSource.
Set featured content: A tenant setting exists that controls who may feature
content in the Power BI service home page.
Set up sensitivity labels: All sensitivity labels are integrated with Microsoft Purview
Information Protection.

Deploy to production workspace


Key activities during a deployment to the production workspace typically include:

Change management: If necessary, obtain approval to deploy, and communicate


deployment to the user population using your standard change management
practices. There may be an approved change management window during which
production deployments are allowed. Usually, it's applicable to IT-managed
content and much less frequently applied to self-service content.
Rollback plan: With a migration, the expectation is that it's the migration of a new
solution for the first time. If content does already exist, it's wise to have a plan to
revert to the previous version, should it become necessary. Having previous
versions of the Power BI Desktop files (using SharePoint or OneDrive versioning)
works well for this purpose.
Connection strings and parameters: Adjust dataset connection strings when the
data source differs between test and production. Parameterization can be used
effectively for this purpose.
Data refresh: Schedule the dataset refresh for any imported datasets (see the sketch
after this list).
Workspace content: Publish datasets and reports to the production workspace,
and create dashboards. Deployment pipelines can simplify the process to deploy
to development, test, and production workspaces if your content has been
published to workspaces on Premium capacity.
App: If apps are part of your content distribution strategy, publish an app using
the content from the production workspace.
Security: Update and verify workspace roles based on your content distribution
and collaboration strategy.
Dataset settings: Update and verify settings for each dataset, including:
Endorsement (such as certified or promoted)
Gateway connection or data source credentials
Scheduled refresh
Featured Q&A questions
Report and dashboard settings: Update and verify settings for each report and
dashboard. The most important settings include:
Description
Contact person or group
Sensitivity label
Featured content
Subscriptions: Set up report subscriptions, if necessary.
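
As a hedged illustration of the connection strings and parameters item above, the following
Python sketch calls the Power BI REST API (Datasets - Update Parameters In Group) to point a
deployed dataset at the production data source. The workspace ID, dataset ID, parameter names
(SqlServerName, DatabaseName), and the way the Azure AD access token is obtained are
assumptions for illustration only; they must match the parameters you've actually defined in
Power Query.

```python
import requests

# Assumptions for illustration: an Azure AD access token with rights to update
# datasets, plus the production workspace ID and dataset ID. The parameter
# names below are hypothetical and must match the parameters defined in the
# dataset's Power Query queries.
ACCESS_TOKEN = "<azure-ad-access-token>"
WORKSPACE_ID = "<production-workspace-id>"
DATASET_ID = "<dataset-id>"

# The Update Parameters In Group endpoint changes parameter values, such as a
# source server name, without republishing the Power BI Desktop file.
url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/Default.UpdateParameters"
)

payload = {
    "updateDetails": [
        {"name": "SqlServerName", "newValue": "prod-sql01.contoso.com"},
        {"name": "DatabaseName", "newValue": "SalesDW"},
    ]
}

response = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()

# For imported datasets, trigger or await a refresh afterward so the new
# parameter values take effect in the data.
print("Parameters updated for dataset", DATASET_ID)
```

If your workspaces are on Premium capacity and you use deployment pipelines, parameter rules
can achieve the same outcome without any code.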
) Important

At this point, you have reached a big milestone. Celebrate your accomplishment at
completing the migration.

Communicate with users


Announce the new solution to consumers. Let them know where they can find the
content, as well as associated documentation, FAQs, and tutorials. To introduce the new
content, consider hosting a lunch-and-learn type of session or preparing some on-demand
videos.

Be sure to include instructions on how to request help, as well as how to provide feedback.

Conduct a retrospective
Consider conducting a retrospective to examine what went well with the migration, and
what could be done better with the next migration.

Run in parallel
In many situations, the new solution will run in parallel to the legacy solution for a
predetermined time. Advantages of running in parallel include:

Risk reduction, particularly if the reports are considered mission-critical.


Allows time for users to become accustomed to the new Power BI solution.
Allows for the information presented in Power BI to be cross-referenced to the
legacy reports.

Decommission the legacy report


At some point, the reports migrated to Power BI should be disabled in the legacy BI
platform. Decommissioning legacy reports can occur when:

The predetermined time for running in parallel—which should have been
communicated to the user population—has expired. It's commonly 30-90 days.
All users of the legacy system have access to the new Power BI solution.
Significant activity is no longer occurring on the legacy report.
No issues have occurred with the new Power BI solution that could impact user
productivity.

Monitor the solution


Events from the Power BI activity log can be used to understand usage patterns of the
new solution (or the execution log for content deployed to Power BI Report Server).
Analyzing the activity log can help determine whether actual use differs from
expectations. It can also validate that the solution is adequately supported.

Here are some questions that can be addressed by reviewing the activity log (see the sketch after this list):

How frequently is the content being viewed?


Who is viewing the content?
Is the content typically viewed through an app or a workspace?
Are most users using a browser or mobile application?
Are subscriptions being used?
Are new reports being created that are based on this solution?
Is the content being updated frequently?
How is security defined?
Are problems occurring regularly, such as data refresh failures?
Are concerning activities happening (for instance, significant export activity or
numerous individual report shares) which could mean additional training might be
warranted?
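
To help answer questions like these, you can retrieve activity events programmatically. The
following Python sketch is a minimal example, not a definitive implementation: it pages
through one UTC day of events from the Power BI admin activity events REST API and tallies
report views per user. Token acquisition is omitted, and the event property names used in the
tally (Activity, UserId, ViewReport) are common field names that you should verify against the
output in your own tenant.

```python
import requests

# Assumptions for illustration: an Azure AD access token for an identity with
# Power BI administrator rights. The endpoint accepts at most one UTC day per
# request, and the date-time values must be wrapped in single quotes.
ACCESS_TOKEN = "<azure-ad-access-token>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2023-03-01T00:00:00'"
    "&endDateTime='2023-03-01T23:59:59'"
)

events = []
while url:
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    if payload.get("lastResultSet"):
        break
    # Follow the continuation URI to page through the rest of the day's events.
    url = payload.get("continuationUri")

# Example analysis: count report views per user to gauge adoption.
view_counts = {}
for event in events:
    if event.get("Activity") == "ViewReport":
        user = event.get("UserId", "unknown")
        view_counts[user] = view_counts.get(user, 0) + 1

for user, count in sorted(view_counts.items(), key=lambda item: item[1], reverse=True):
    print(user, count)
```

Storing the daily output in a warehouse table (or exporting it with the Power BI Management
PowerShell module) makes it easier to trend these metrics over time.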

) Important

Be sure to have someone regularly review the activity log. Merely capturing it and
storing the history does have value for auditing or compliance purposes. However,
the real value is when proactive action can be taken.

Support the solution


Although the migration is complete, the post-migration period is vital for addressing
issues and handling any performance concerns. Over time, the migrated solution will
likely undergo changes as business needs evolve.

Support tends to happen a little differently depending on how self-service BI is
managed across the organization. Power BI champions throughout the business units
often informally act as first-line support. Although it's an informal role, it's a vital one
that should be encouraged.

Having a formal support process, staffed by IT with support tickets, is also essential for
handling routine system-oriented requests and for escalation purposes.

7 Note

The different types of internal and external support are described in the Power BI
adoption roadmap.

You may also have a Center of Excellence (COE) that acts like internal consultants who
support, educate, and govern Power BI in the organization. A COE can be responsible for
curating helpful Power BI content in an internal portal.

Lastly, content consumers should know who to contact with questions about the content,
and they should have a mechanism for providing feedback on issues or improvements.

For more information about user support, with a focus on the resolution of issues, see
Power BI adoption roadmap: User support.

Next steps
In the final article in this series, learn from customers when migrating to Power BI.

Other helpful resources include:

Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Learn from customer Power BI
migrations
Article • 02/27/2023

This article, which concludes the series on migrating to Power BI, shares key lessons
learned by two customers who have successfully migrated to Power BI.

International consumer goods company


An international consumer goods company, which sells hundreds of products, made the
decision in 2017 to pursue a cloud-first strategy. One of the major factors for selecting
Power BI as its business intelligence (BI) platform is its deep integration with Azure and
Microsoft 365.

Conduct a phased migration


In 2017, the company began using Power BI. The initial organizational objective was to
introduce Power BI as an additional BI tool. The decision provided content authors,
consumers, and IT with the time to adapt to new ways of delivering BI. It also allowed
them to build expertise in Power BI.

During the second half of 2018, a formal announcement was made declaring that Power
BI was the approved BI tool for the organization. And, accordingly, all new BI
development work should take place in Power BI. The availability of Power BI Premium
was a key driver for making this decision. At this time, the organization discouraged the
use of the former BI platform, and planning for transition commenced.

Towards the end of 2019, work began to migrate existing content from the legacy BI
platform to Power BI. Some early adopters migrated their content rapidly. That helped
build even more momentum with Power BI around the organization. Content owners
and authors were then asked to begin preparations to fully migrate to Power BI by the
end of 2020. The organization does still face challenges related to skills, time, and
funding—though none of their challenges are related to the technology platform itself.

) Important

Power BI had already become successful and entrenched within the organization
before the business units were asked to undergo a formal migration effort away
from the former BI platform.
Prepare to handle varying responses
In this large decentralized organization, there were varying levels of receptiveness and
willingness to move to Power BI. Beyond concerns related to time and budget, there
were staff who had made significant investments in building their skills in the former BI
platform. So, the announcement about standardizing on Power BI wasn't news
welcomed by everyone. Since each business unit has its own budget, individual business
units could challenge decisions such as this one. As IT tool decisions were made
centrally, that resulted in some challenges for the executive sponsor and BI leaders to
handle.

) Important

Communication with leadership teams throughout the business units was critical to
ensure they all understood the high-level organizational benefits of standardizing
on Power BI. Effective communication became even more essential as the migration
progressed and the decommissioning date of the legacy BI platform approached.

Focus on the bigger picture


The company found that while some migrated reports could closely replicate the
original design, not every individual report could be faithfully replicated in Power BI.
That's to be expected, since all BI platforms are different. However, it did bring to light
that a different design mindset was required.

Guidance was provided to content authors: focus on creating fit-for-purpose reports in
Power BI, rather than attempt an exact replica of the legacy report. For this reason,
subject matter experts need to be actively available during the migration process for
consultation and validation. Efforts were taken to consider the report design purpose
and to improve it when appropriate.

) Important

Sometimes the better approach is to take on improvements during the migration.


At other times, the better choice is to deliver the exact same value as before—
without significant improvements—so as not to jeopardize the migration timeline.

Cautiously assess priorities


An analysis of the former BI platform was conducted to fully understand its usage. The
former BI platform had thousands of published reports, of which approximately half had
been accessed in the previous year. That number could be cut in half once again when
assessing which reports were deemed to deliver significant value to the organization.
Those reports were prioritized first for the migration.

) Important

It's very easy to overestimate how critical a report actually is. For reports that aren't
used frequently, evaluate whether they can be decommissioned entirely.
Sometimes, the cheapest and easiest thing to do is nothing.

Cautiously assess complexity


Of the first prioritized reports, time estimates were compiled based on estimated effort
levels: simple, medium, or complex. Although it sounds like a relatively straightforward
process, don't expect time estimates to be accurate on an individual report basis. You
may find an estimate can be wildly inaccurate. For example, the company had a report
that it deemed highly complex. It received a conversion estimate of 50 days by the
consultants. However, the redesigned report in Power BI was completed in about 50
hours.

) Important

Although time estimates are often necessary to obtain funding and personnel
assignments, they're probably most valuable in the aggregate.

Decide how change management is handled


With such a high volume of BI assets, change management for the business-owned
reports represented a challenge. IT-managed reports were handled according to
standard change management practices. However, due to the high volume, driving
change centrally for business-owned content wasn't possible.

) Important

Additional responsibility falls to the business units when it's impractical to manage
change from one central team.
Create an internal community
The company established a Center of Excellence (COE) to provide internal training
classes and resources. The COE also serves as an internal consultancy group that's ready
to assist content authors with technical issues, resolution of roadblocks, and best
practice guidance.

There's also an internal Power BI community, which has been a massive success,
counting in excess of 1,600 members. The community is managed in Yammer. Members
can ask internally relevant questions and receive answers adhering to best practices and
framed within organizational constraints. This type of user-to-user interaction alleviates
much of the support burden from the COE. However, the COE does monitor the
questions and answers, and it gets involved in conversations when appropriate.

An extension of the internal community is the newer Power BI expert network. It
includes a small number of pre-selected Power BI champions from within the
organization. They are highly skilled Power BI practitioners from the business units, who
are enthusiastic champions, and who actively want to solve challenges within the
business. Members of the Power BI expert network are expected to abide by best
practices and guidelines established by the COE, and help the broader internal Power BI
community understand and implement them. Although the Power BI expert network
collaborates with the COE, and can receive dedicated training, Power BI experts operate
independently from the COE. Each Power BI expert may define the parameters for how
they operate, bearing in mind they have other responsibilities and priorities in their
official role.

) Important

Have a very well defined scope for what the COE does, such as: adoption,
governance, guidance, best practices, training, support, and perhaps even hands-on
development. While a COE is incredibly valuable, measuring its return on
investment can be difficult.

Monitor migration progress and success


Key performance indicators (KPIs) are continually monitored during the migration to
Power BI. They help the company to understand trends for metrics such as number of
report visits, number of active reports, and distinct users per month. Increased usage of
Power BI is measured alongside decreased usage of the former BI platform, with the
goal of achieving an inverse relationship. Targets are updated each month to adapt to
changes. If usage isn't happening at the desired pace, bottlenecks are identified so
appropriate action can be taken.

) Important

Create a migration scorecard with actionable business intelligence to monitor the
success of the migration effort.

Large transportation and logistics company


A large North American transportation and logistics company is actively investing in the
modernization of its data infrastructure and analytical systems.

Allow a period of gradual growth


The company started using Power BI in 2018. By mid-2019, Power BI became the
preferred platform for all new BI use cases. Then, in 2020, the company focused on
phasing out their existing BI platform, in addition to a variety of custom developed
ASP.NET BI solutions.

) Important

Power BI had many active users across the organization before commencing the
phase out of their legacy BI platform and solutions.

Balance centralized and distributed groups


In the company, there are two types of BI teams: a central BI team and analytics groups
distributed throughout the organization. The central BI team has ownership
responsibility for Power BI as a platform, but it doesn't own any of the content. This way,
the central BI team is a technical enablement hub that supports the distributed analytics
groups.

Each of the analytics groups is dedicated to a specific business unit or a shared services
function. A small group may contain a single analyst, while a larger group can have 10-
15 analysts.

) Important
The distributed analytics groups comprise subject matter experts who are familiar
with the day-to-day business needs. This separation allows the central BI team to
focus primarily on technical enablement and support of the BI services and tools.

Focus on dataset reusability


Relying on custom ASP.NET BI solutions was a barrier to developing new BI solutions.
The required skill set meant that the number of self-service content authors was small.
Because Power BI is a much more approachable tool—specifically designed for self-
service BI—it spread quickly throughout the organization once it was released.

The empowerment of data analysts within the company resulted in immediate positive
outcomes. However, the initial focus with Power BI development was on visualization.
While it delivered valuable BI solutions, this focus produced a large number of Power
BI Desktop files, each with a one-to-one relationship between the report and its dataset.
The outcome was many datasets, and duplication of data and business logic. To reduce
duplication of data, logic, and effort, the company delivered training and provided
support to content authors.

) Important

Include information about the importance of data reusability in your internal
training efforts. Address important concepts as early as is practical.

Test data access multiple ways


The company's data warehouse platform is DB2. Based on the current data warehouse
design, the company found that DirectQuery models—instead of Import models—
worked best for their requirements.

) Important

Conduct a technical proof of concept to evaluate the model storage mode that
works best. Also, teach data modelers about model storage modes and how they
can choose an appropriate mode for their project.

Educate authors about Premium licensing


Since it was easier to get started with Power BI (compared with their legacy BI platform),
many of the early adopters were people who didn't have a license to the previous BI
tool. As expected, the number of content authors grew considerably. These content
authors understandably wanted to share their content with others, resulting in a
continual need for additional Power BI Pro licenses.

The company made a large investment in Premium workspaces, most notably to
distribute Power BI content to many users with Power BI free licenses. The support team
works with content authors to ensure they use Premium workspaces when appropriate.
It avoids unnecessarily allocating Power BI Pro licenses when a user only needs to
consume content.

) Important

Licensing questions often arise. Be prepared to educate and help content authors
to address licensing questions. Validate that user requests for Power BI Pro licenses
are justified.

Understand the data gateways


Early on, the company had many personal gateways. Using an on-premises data
gateway cluster shifts the management efforts to the central BI team, which allows the
content author community to focus on producing content. The central BI team worked
with the internal Power BI user community to reduce the number of personal gateways.

) Important

Have a plan for creating and managing on-premises data gateways. Decide who is
permitted to install and use a personal gateway and enforce it with gateway
policies.

Formalize your support plan


As the adoption of Power BI grew within the organization, the company found that a
multi-tier support approach worked well:

Layer 1: Intra-team: People learn from, and teach, each other on a day-to-day
basis.
Layer 2: Power BI community: People ask questions of the internal Teams
community to learn from each other and communicate important information.
Layer 3: Central BI team and COE: People submit email requests for assistance.
Office hour sessions are held twice per week to collectively discuss problems and
share ideas.

) Important

Although the first two layers are less formal, they're just as important as the third
layer of support. Experienced users tend to rely mostly on people they know,
whereas newer users (or those who are the single data analyst for a business unit or
shared service) tend to rely more on formal support.

Invest in training and governance


Over the past year, the company improved its internal training offerings and enhanced
its data governance program. The governance committee includes key members from
each of the distributed analytics groups, plus the COE.

There are now six Power BI courses in their internal catalog. The Dashboard in a
Day course remains a popular course for beginners. To help users deepen their skills,
they deliver a series of three Power BI courses and two DAX courses.

One of their most important data governance decisions related to management of
Premium capacities. The company opted to align their capacity with key analytics areas
in business units and shared services. Therefore, if inefficiencies exist, the impact is felt
only within that one area, and the decentralized capacity administrators are empowered
to manage the capacity as they see fit.

) Important

Pay attention to how Premium capacities are used, and how workspaces are
assigned to them.

Next steps
Other helpful resources include:

Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Dashboard in a Day
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Power BI adoption roadmap
Article • 02/27/2023

The goal of this series of articles is to provide a roadmap. The roadmap presents a series
of strategic and tactical considerations and action items that directly lead to successful
Power BI adoption, and help build a data culture in your organization.

Advancing adoption and cultivating a data culture is about more than implementing
technology features. Technology can assist an organization in making the greatest
impact, but a healthy data culture involves a lot of considerations across the spectrum of
people, processes, and technology.

7 Note

While reading this series of articles, it's recommended you also take into
consideration Power BI implementation planning guidance. After you're familiar
with the concepts in the Power BI adoption roadmap, consider reviewing the usage
scenarios. Understanding the diverse ways how Power BI is used can influence your
implementation strategies and decisions.

This series of articles correlates with the following Power BI adoption roadmap diagram:
The areas in the above diagram include:

Area Description

Data culture: Data culture refers to a set of behaviors and norms in the organization that
encourages a data-driven culture. Building a data culture is closely related to adopting
Power BI, and it's often a key aspect of an organization's digital transformation.

Executive sponsor: An executive sponsor is someone with credibility, influence, and
authority throughout the organization. They advocate for building a data culture and
adopting Power BI.

Content ownership and management: There are three primary strategies for how
business intelligence (BI) content is owned and managed: business-led self-service BI,
managed self-service BI, and enterprise BI. These strategies have a significant influence on
adoption, governance, and the Center of Excellence (COE) operating model.

Content delivery scope: There are four primary strategies for content delivery including
personal BI, team BI, departmental BI, and enterprise BI. These strategies have a significant
influence on adoption, governance, and the COE operating model.

Center of Excellence: A Power BI COE is an internal team of technical and business experts.
These experts actively assist others who are working with data within the organization. The
COE forms the nucleus of the broader community to advance adoption goals that are
aligned with the data culture vision.

Governance: Data governance is a set of policies and procedures that define the ways in
which an organization wants data to be used. When adopting Power BI, the goal of
governance is to empower the internal user community to the greatest extent possible,
while adhering to industry, governmental, and contractual requirements and regulations.

Mentoring and user enablement: A critical objective for adoption efforts is to enable
users to accomplish as much as they can within the guardrails established by governance
guidelines and policies. The act of mentoring users is one of the most important
responsibilities of the COE. It has a direct influence on adoption efforts.

Community of practice: A community of practice comprises a group of people with a
common interest, who interact with and help each other on a voluntary basis. An active
community is an indicator of a healthy data culture. It can significantly advance adoption
efforts.

User support: User support includes both informally organized, and formally organized,
methods of resolving issues and answering questions. Both formal and informal support
methods are critical for adoption.

System oversight: System oversight includes the day-to-day administration responsibilities
to support the internal processes, tools, and people.

The relationships in the diagram shown above can be summarized in the following
bullet list:

Your organizational data culture vision will strongly influence the strategies that
you follow for self-service and enterprise content ownership and management
and content delivery scope.
These strategies will, in turn, have a big impact on the operating model for your
Center of Excellence and governance decisions.
The established governance guidelines, policies, and processes affect the
implementation methods used for mentoring and enablement, the community of
practice, and user support.
Governance decisions will dictate the day-to-day system oversight (administration)
activities.
All data culture and adoption-related decisions and actions are accomplished more
easily with guidance and leadership from an executive sponsor.

Each individual article in this series discusses key topics associated with the items in the
diagram. Considerations and potential action items are provided. Each article concludes
with a set of maturity levels to help you assess your current state so you can decide
what action to take next.

Power BI adoption
Successful Power BI adoption involves making effective processes, support, tools, and
data available and integrated into regular ongoing patterns of usage for content
creators, consumers, and stakeholders in the organization.

) Important

This series of adoption articles is focused on organizational adoption. See the
Power BI adoption maturity levels article for an introduction to the three types of
adoption: organizational, user, and solution.

A common misconception is that adoption relates primarily to usage or the number of
users. There's no question that usage statistics are an important factor. However, usage
isn't the only factor. Adoption isn't just about using the technology regularly; it's about
using it effectively. Effectiveness is much harder to define and measure.

Whenever possible, adoption efforts should be aligned across analytics platforms, BI
services, and other Power Platform products. These products include Power Apps and
Power Automate.

7 Note

Individuals—and the organization itself—are continually learning, changing, and
improving. That means there's no formal end to adoption-related efforts.

The remaining articles in this Power BI adoption series discuss the following aspects of
adoption.

Adoption maturity levels


Data culture
Executive sponsorship
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
Conclusion and additional resources

) Important

You may be wondering how this Power BI adoption roadmap is different from the
Power BI adoption framework . The adoption framework was created primarily to
support Microsoft partners. It is a lightweight set of resources to help partners
deploy Power BI solutions for their customers.

This Power BI adoption series is more current. It is intended to guide any person or
organization that is using—or considering using—Power BI. If you're seeking to
improve your existing Power BI implementation, or planning a new Power BI
implementation, this adoption roadmap is a great place to start. You will find a lot
of valuable information in the Power BI adoption framework , so we encourage
you to review it.

Target audience
The intended audience of this series of articles is interested in one or more of the
following outcomes.

Improving their organization's ability to effectively use Power BI.


Increasing their organization's maturity level related to Power BI delivery.
Understanding and overcoming adoption-related challenges faced when scaling
Power BI.
Increasing their organization's return on investment (ROI) in data and analytics.

Primarily, this series of articles will be helpful to those who work in an organization with
one or more of the following characteristics.

Power BI is deployed with some successes.


Power BI has pockets of viral adoption, but isn't purposefully governed across the
entire organization.
Power BI is deployed with some meaningful scale, but there remains a need to
determine:
What is effective and what should be maintained.
What should be improved.
How future deployments could be more strategic.
An expanded implementation of Power BI is under consideration or is planned.

Secondarily, this series of articles will be helpful for:

Organizations that are in the early stages of a Power BI implementation.


Organizations that have had success with adoption and now want to evaluate their
current maturity level.

Assumptions and scope


The primary focus of this series of articles is on the Power BI technology platform, with
an emphasis on the Power BI service.

To fully benefit from the information provided in these articles, you should have at least
a basic understanding of fundamental Power BI concepts.

Next steps
In the next article in this series, learn about the Power BI adoption maturity levels. The
maturity levels are referenced throughout the entire series of articles. Also, see the
conclusion article for additional adoption-related resources.

Other helpful resources include:

Power BI implementation planning


Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Experienced Power BI partners are available to help your organization succeed with
adoption of Power BI. To engage a Power BI partner, visit the Power BI partner portal .

Acknowledgments
This series of articles was written by Melissa Coates, Data Platform MVP, and owner of
Coates Data Strategies , with significant contributions from Matthew Roche. Reviewers
include Cory Moore, James Ward, Timothy Bindas, Greg Moir, Chuy Varela, Daniel
Rubiolo, Sanjay Raut, and Peter Myers.
Power BI adoption roadmap maturity
levels
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

There are three inter-related perspectives to consider when adopting a technology like
Power BI.

The three types of adoption shown in the above diagram include:

Type Description

Organizational adoption: Organizational adoption refers to the effectiveness of Power BI
governance. It also refers to data management practices that support and enable business
intelligence efforts.

User adoption: User adoption is the extent to which consumers and creators continually
increase their knowledge. It's concerned with whether they're actively using Power BI, and
whether they're using it in the most effective way.

Solution adoption: Solution adoption refers to the impact and business value achieved for
individual requirements and Power BI solutions.

As the four arrows in the previous diagram indicate, the three types of adoption are all
strongly inter-related:

Solution adoption affects user adoption. A well-designed and well-managed
solution—which could be many things, such as a set of reports, an app, or a
dataset—impacts and guides users on how to use Power BI in an optimal way.
User adoption impacts organizational adoption. The patterns and practices used
by individual users influence organizational adoption decisions, policies, and
practices.
Organizational adoption influences user adoption. Effective organizational
practices—including mentoring, training, support, and community—encourage
users to do the right thing in their day-to-day workflow.
User adoption affects solution adoption. Stronger user adoption, because of the
effective use of Power BI by educated and informed users, contributes to stronger
and more successful individual solutions.

The remainder of this article introduces the three types of Power BI adoption in more
detail.

Organizational adoption maturity levels


Organizational adoption measures the state of Power BI governance and data
management practices. There are several organizational adoption goals:

Effectively support the community


Enable and empower users
Oversee information delivery via enterprise BI and self-service BI with continuous
improvement cycles

It's helpful to think about organizational adoption from the perspective of a maturity
model. For consistency with the Power CAT adoption maturity model and the maturity
model for Microsoft 365, this Power BI adoption roadmap aligns with the five levels from
the Capability Maturity Model , which were later enhanced by the Data Management
Maturity (DMM) model from ISACA (note the DMM was a paid resource that has been
retired).

Every organization has limited time, funding, and people. So, they must be
selective about where they prioritize their efforts. To get the most from your investment
in Power BI, seek to attain at least maturity level 300 or 400, as discussed below. It's
common that different business units in the organization evolve and mature at different
rates, so be cognizant of the organizational state as well as progress for key business
units.

7 Note

Organizational adoption maturity is a long journey. It takes time, effort, and
planning to progress to the higher levels.

Maturity level 100 – Initial


Level 100 is referred to as initial or performed. It's the starting point for data-related
investments that are new, undocumented, and without any process discipline.

Common characteristics of maturity level 100 include:

Pockets of success and experimentation with Power BI exist in one or more areas of
the organization.
Achieving quick wins has been a priority, and it has delivered some successes.
Organic growth has led to the lack of a coordinated strategy or governance
approach.
Practices are undocumented, with significant reliance on tribal knowledge.
There are few formal processes in place for effective data management.
Risk exists due to a lack of awareness of how data is used throughout the
organization.
The potential for a strategic investment with Power BI is acknowledged, but there's
no clear path forward for purposeful, organization-wide execution.

Maturity level 200 – Repeatable


Level 200 is referred to as repeatable or managed. At this point on the maturity curve,
data management is planned and executed. Data management is based on defined
processes, though these processes may not apply uniformly throughout the
organization.

Common characteristics of maturity level 200 include:

Certain Power BI content is now critical in importance and/or it's broadly used by
the organization.
There are attempts to document and define repeatable practices, however efforts
are siloed, reactive, and deliver varying levels of success.
There's an over-reliance on individuals having good judgment and adopting
healthy habits that they learned on their own.
Power BI adoption continues to grow organically and produces value. However, it
takes place in an uncontrolled way.
Resources for an internal community are established, such as a Teams channel or
Yammer group.
Initial planning for a consistent Power BI governance strategy is underway.
There's recognition that a Power BI Center of Excellence (COE) can deliver value.

Maturity level 300 – Defined


Level 300 is referred to as defined. At this point on the maturity curve, a set of
standardized data management processes are established and consistently applied
across organizational boundaries.

Common characteristics of maturity level 300 include:

Measurable success is achieved for the effective use of Power BI.


Progress is made on the standardization of repeatable practices, though less-than-
optimal aspects may still exist due to early uncontrolled growth.
The Power BI COE is established, and it has clear goals and scopes of
responsibilities.
The internal community gains traction with the participation of a growing number
of users.
Power BI champions emerge in the community.
Initial investments in training, documentation, and resources are made.
An initial governance model is in place.
Power BI has an active and engaged executive sponsor.
Roles and responsibilities for all Power BI stakeholders are well understood.

Maturity level 400 – Capable


Level 400 is known as capable or measured. At this point on the maturity curve, data is
well-managed across its entire lifecycle.

Common characteristics of maturity level 400 include:

Business intelligence efforts deliver significant value.


Power BI is commonly used for delivering critical content throughout the
organization.
There's an established and accepted governance model with cooperation from all
key business units.
Training, documentation, and resources are readily available for, and actively used
by, the Power BI community of users.
Standardized processes are in place for the oversight and monitoring of Power BI
usage and practices.
The Power BI COE includes representation from all key business units.
A Power BI champions network supports the internal community: champions
actively work with their colleagues and the COE.

Maturity level 500 – Efficient


Level 500 is known as efficient or optimizing because at this point on the maturity curve,
the emphasis is now on automation and continuous improvement.

Common characteristics of maturity level 500 include:

The value of Power BI solutions is prevalent, and Power BI is widely accepted
throughout the organization.
Power BI skillsets are highly valued in the organization, and they're recognized by
leadership.
The internal Power BI community is self-sustaining, with support from the COE. The
community isn't over-reliant on key individuals.
The COE reviews key performance indicators regularly to measure success of
implementation and adoption goals.
Continuous improvement is an ongoing priority.
Use of automation adds value, improves productivity, or reduces risk for error.

7 Note

The above characteristics are generalized. When considering maturity levels and
designing a plan, you'll want to consider each topic or goal independently. In
reality, it's probably not possible to reach maturity level 500 for every aspect
of Power BI adoption for the entire organization. So, assess maturity levels
independently per goal. That way, you can prioritize your efforts where they will
deliver the most value. The remainder of the articles in this Power BI adoption
series present maturity levels on a per-topic basis.

Individuals—and the organization itself—continually learn, change, and improve. That
means there's no formal end to adoption-related efforts. However, it's common
that effort is reduced as higher maturity levels are reached.
The remainder of this article introduces the second and third types of adoption: user
adoption and solution adoption.

7 Note

The remaining articles in this series focus primarily on organizational adoption.

User adoption stages


User adoption measures the extent to which content consumers and self-service content
creators are actively using Power BI effectively. Usage statistics alone don't indicate user
adoption. User adoption is also concerned with individual user behaviors and practices.
The aim is to ensure users engage with Power BI in the correct way and to its fullest
extent.

User adoption encompasses how consumers view content, as well as how self-service
creators generate content for others to consume.

User adoption occurs on an individual user basis, but it's measured and analyzed in the
aggregate. Individual users progress through the four stages of user adoption at their
own pace. An individual who adopts a new technology will take some time to achieve
proficiency. Some users will be eager; others will be reluctant to learn yet another tool,
regardless of the promised productivity improvements. Advancing through the user
adoption stages involves time and effort, and it involves behavioral changes to become
aligned with organizational adoption objectives. The extent to which the organization
supports users advancing through the user adoption stages has a direct correlation to
the organizational-level adoption maturity.

User adoption stage 1 – Awareness


Common characteristics of stage 1 user adoption include:

An individual has heard of, or been initially exposed to, Power BI in some way.
An individual may have access to Power BI but isn't yet actively using it.

User adoption stage 2 – Understanding


Common characteristics of stage 2 user adoption include:

An individual develops an understanding of the benefits of Power BI to deliver
analytical value and support decision-making.
An individual shows interest and starts to use Power BI.

User adoption stage 3 – Momentum


Common characteristics of stage 3 user adoption include:

An individual actively gains Power BI skills through formal training, self-directed
learning, or experimentation.
An individual gains basic competency with the aspects of Power BI relevant to their
role.

User adoption stage 4 – Proficiency


Common characteristics of stage 4 user adoption include:

An individual actively uses Power BI regularly.


An individual understands how to use Power BI in the way in which it was intended,
as relevant for their role.
An individual modifies their behavior and activities to align with organizational
governance processes.
An individual's willingness to support organizational processes and change efforts
is growing over time, and they become an advocate for Power BI in the
organization.
An individual makes the effort to continually improve their skills and stay current
with new product capabilities and features.

It's easy to underestimate the effort it takes to progress from stage 2 (understanding) to
stage 4 (proficiency). Typically, it takes the longest time to progress from stage 3
(momentum) to stage 4 (proficiency).

) Important

By the time a user reaches the momentum and proficiency stages, the organization
needs to be ready to support them in their efforts. You can consider some proactive
efforts to encourage users to progress through stages. For more information, see
the community of practice and the user support articles.

Solution adoption phases


Solution adoption is concerned with measuring the impact of individual Power BI
solutions. It's also concerned with the level of value solutions provide. The scope for
evaluating solution adoption is for one set of requirements, like a set of reports or a
single Power BI app.

As a solution progresses to phases 3 or 4, expectations to operationalize the solution
are higher.

 Tip

The importance of scope on expectations for governance is described in the
content delivery scope article. That concept is closely related to this topic, but this
article approaches it from a different angle. It considers when you already have a
solution that is operationalized and distributed to many users. That doesn't
immediately equate to phase 4 solution adoption, as the concept of solution
adoption focuses on how much value the content delivers.

Solution phase 1 – Exploration


Common characteristics of phase 1 solution adoption include:

Exploration and experimentation are the main approaches to testing out new
ideas. Exploration of new ideas can occur through informal self-service BI, or
through a formal proof of concept (POC), which is purposely narrow in scope. The
goal is to confirm requirements, validate assumptions, address unknowns, and
mitigate risks.
A small group of users test the proof of concept solution and provide useful
feedback.
All exploration—and initial feedback—could occur within Power BI Desktop or
Excel. Use of the Power BI service is limited.

Solution phase 2 – Functional


Common characteristics of phase 2 solution adoption include:

The solution is functional and meets the basic set of user requirements. There are
likely plans to iterate on improvements and enhancements.
The solution is deployed to the Power BI service.
All necessary supporting components are in place, such as gateways to support
scheduled refresh.
Users are aware of the solution and show interest in using it. Potentially, it may be
a limited preview release, and may not yet be ready to promote to a production
workspace.
Solution phase 3 – Valuable
Common characteristics of phase 3 solution adoption include:

Target users find the solution is valuable and experience tangible benefits.
The solution is promoted to a production workspace.
Validations and testing occur to ensure data quality, accurate presentation,
accessibility, and acceptable performance.
Content is endorsed, when appropriate.
Usage metrics for the solution are actively monitored.
User feedback loops are in place to facilitate suggestions and improvements that
can contribute to future releases.
Solution documentation is generated to support the needs of information
consumers (such as data sources used or how metrics are calculated), and help
future creators (such as documenting any future maintenance or planned
enhancements).
Ownership and subject matter experts for the content are clear.
Report branding and theming are in place, and they're in line with governance
guidelines.

Solution phase 4 – Essential


Common characteristics of phase 4 solution adoption include:

Target users actively and routinely use the solution, and it's considered essential
for decision-making purposes.
The solution resides in a production workspace well-separated from development
and test content. Change management and release management are carefully
controlled due to the impact of changes.
A subset of users regularly provides feedback to ensure the solution continues to
meet requirements.
Expectations for the success of the solution are clear and are measured.
Expectations for support of the solution are clear, especially if there are service
level agreements.
The solution aligns with organizational governance guidelines and practices.
Most content is certified since it's critical in nature.
Formal user acceptance testing for new changes may occur, particularly for IT-
managed content.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about the
organizational data culture and its impact on adoption efforts.
Power BI adoption roadmap: Data
culture
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

Building a data culture is closely related to adopting Power BI, and it's often a key aspect
of an organization's digital transformation. The term data culture can be defined in
different ways by different organizations. In this series of articles, data culture means a
set of behaviors and norms in an organization. It encourages a culture that regularly
employs informed data decision-making:

By more stakeholders throughout more areas of the organization.


Based on analytics, not opinion.
In an effective, efficient way that's based on best practices endorsed by the Center
of Excellence (COE).
Based on trusted data.
That reduces reliance on undocumented tribal knowledge.
That reduces reliance on hunches and gut decisions.

) Important

Think of data culture as what you do, not what you say. Your data culture is not a
set of rules (that's governance). So, data culture is a somewhat abstract concept. It's
the behaviors and norms that are allowed, rewarded, and encouraged—or those
that are disallowed and discouraged. Bear in mind that a healthy data culture
motivates employees at all levels of the organization to generate and distribute
actionable knowledge.

Within an organization, certain business units or teams are likely to have their own
behaviors and norms for getting things done. The specific ways to achieve data culture
objectives can vary across organizational boundaries. What's important is that they
should all align with the organizational data culture objectives. You can think of this
structure as aligned autonomy.

The following circular diagram conveys the interrelated aspects that influence your data
culture:

The diagram represents the somewhat ambiguous relationships among the following
items:

Data culture is the outer circle. All topics within it contribute to the state of the
data culture.
Organizational adoption (including the implementation aspects of mentoring and
user enablement, user support, community of practice, governance, and system
oversight) is the inner circle. All topics are major contributors to the data culture.
Executive support and the Center of Excellence are drivers for the success of
organizational adoption.
Data literacy, data democratization, and data discovery are data culture aspects
that are heavily influenced by organizational adoption.
Content ownership, content management, and content delivery scope are closely
related to data democratization.

The elements of the diagram are discussed throughout this series of articles.

Data culture vision


The concept of data culture can be difficult to define and measure. Even though it's
challenging to articulate data culture in a way that's meaningful, actionable, and
measurable, you need to have a well-understood definition of what a healthy data
culture means to your organization. This vision of a healthy data culture should:

Originate from the executive level.


Align with organizational objectives.
Directly influence your adoption strategy.
Serve as the high-level guiding principles for enacting governance policies and
guidelines.

Data culture outcomes aren't specifically mandated. Rather, the state of the data culture
is the result of following the governance rules as they're enforced (or the lack of
governance rules). Leaders at all levels need to actively demonstrate what's important
through their actions, including how they praise, recognize, and reward staff members
who take initiative.

 Tip

If you can take for granted that your efforts to develop a data solution (such as a
dataset or a report) will be valued and appreciated, that's an excellent indicator of a
healthy data culture. Sometimes, however, it depends on what your immediate
manager values most.

The initial motivation for establishing a data culture often comes from a specific
strategic business problem or initiative. It might be:

A reactive change, such as responding to new agile competition.


A proactive change, such as starting a new line of business or expanding into new
markets to seize a "green field" opportunity. Being data driven from the beginning
can be relatively easy when there are fewer constraints and complications,
compared with an established organization.
Driven by external changes, such as pressure to eliminate inefficiencies and
redundancies during an economic downturn.

In any of these situations, there's often a specific area where the data culture takes root.
The specific area could be a scope of effort that's smaller than the entire organization,
even if it's still significant. After necessary changes are made at this smaller scope, they
can be incrementally replicated and adapted for the rest of the organization.

Although technology can help advance the goals of a data culture, implementing
specific tools or features isn't the objective. This series of articles covers a lot of topics
that contribute to adoption of a healthy data culture. The remainder of this article
addresses three essential aspects of data culture: data discovery, data democratization,
and data literacy.

Data discovery
A successful data culture depends on users working with the right data in their day-to-
day activities. To achieve this goal, users need to find and access data sources, reports,
and other items.

Data discovery is the ability to effectively search for, and locate, relevant data sources
and reports across the organization. Primarily, data discovery is concerned with
improving awareness that data exists, particularly when data is siloed in departmental
systems. After a user is aware of the data's existence, that user can go through the
standard process to request access to the information. Today, technology helps a lot
with data discovery, advancing well past asking colleagues where to find datasets.

 Tip

It's important to have a clear and simple process so users can request access to
data. Knowing that a dataset exists—but being unable to access it within the
guidelines and processes that the domain owner has established—can be a source
of frustration for users. It can force them to use inefficient workarounds instead of
requesting access through the proper channels.

Data discovery contributes to adoption efforts and the implementation of governance
practices by:

Encouraging the use of trusted high-quality data sources.


Encouraging users to take advantage of investments in existing data resources.
Promoting the use and enrichment of existing Power BI items.
Helping people understand who owns and manages datasets.
Establishing connections between consumers, creators, and owners.

In Power BI, the data hub and the use of endorsements help promote data discovery of
shared datasets. They also encourage self-service creators to reuse and augment
datasets.

Further, data catalog solutions are extremely valuable for data discovery. They can
record metadata tags and descriptions to provide deeper context and meaning. For
example, Microsoft Purview can scan and catalog an entire Power BI tenant.

Data democratization
Data democratization refers to putting data into the hands of more users who are
responsible for solving business problems. It's about enabling them to make decisions
with the data.

7 Note

The concept of data democratization does not imply a lack of security or a lack of
justification based on job role. As part of a healthy data culture, data
democratization helps reduce shadow IT by providing datasets that:

Are secured, governed, and well managed.


Meet business needs in cost-effective and timely ways.

Your organization's position on data democratization will have a wide-reaching impact
on adoption and governance-related efforts. Here are some examples of Power BI
governance decisions that can affect data democratization:

Who is permitted to have Power BI Desktop installed?


Who is permitted to have Power BI Pro or Power BI Premium Per User (PPU)
licenses?
What is the desired level of self-service business intelligence (BI) user enablement?
How does this level vary based on business unit or job role?
What is the desired balance between enterprise BI and self-service BI?
Are any data sources strongly preferred? What is the allowed use of unsanctioned
data sources?
Who can manage content? Is this decision different for data versus reports? Is the
decision different for enterprise BI users versus decentralized users who own and
manage self-service BI content?
Who can consume content? Is this decision different for external partners,
customers, and suppliers?

2 Warning

If access to data or the ability to perform analytics is limited to a select number of
individuals in the organization, that's typically a warning sign because the ability to
work with data is a key characteristic of a data culture.

Data literacy
Data literacy refers to the ability to interpret, create, and communicate data accurately
and effectively.

Training efforts, as described in the mentoring and user enablement article, often focus
on how to use the technology itself. Technology skills are important to producing high-
quality solutions, but it's also important to consider how to purposely advance data
literacy throughout the organization. Put another way, successful adoption takes a lot
more than merely providing Power BI software and licenses to users.

How you go about improving data literacy in your organization depends on many
factors, such as current user skillsets, complexity of the data, and the types of analytics
that are required. You can focus on these activities related to data literacy:

Interpreting charts and graphs


Assessing the validity of data
Performing root cause analysis
Discerning correlation from causation
Understanding how context and outliers affect how results are presented
Using storytelling to help consumers quickly understand and act

 Tip

If you're struggling to get data culture or governance efforts approved, focusing on tangible benefits that you can achieve with data discovery ("find the data"), data democratization ("use the data"), or data literacy ("understand the data") can help. It can also be helpful to focus on specific problems that you can solve or mitigate through data culture advancements.

Getting the right stakeholders to agree on the problem is usually the first step.
Then, it's a matter of getting the stakeholders to agree on the strategic approach to
a solution, along with the solution details.

Considerations and key actions

Checklist - Here are some considerations and key actions that you can take to
strengthen your data culture.

" Align on data culture goals and strategy: Give serious consideration to the type of
data culture that you want to cultivate. Ideally, it's more from a position of user
empowerment than a position of command and control.
" Understand your current state: Talk to stakeholders in different business units to
understand which analytics practices are currently working well and which practices
aren't working well for data-driven decision-making. Conduct a series of workshops
to understand the current state and to formulate the desired future state.
" Speak with stakeholders: Talk to stakeholders in IT, BI, and the COE to understand
which governance constraints need consideration. These talks can present an
opportunity to educate teams on topics like security and infrastructure. You can also
use the opportunity to educate them on what Power BI actually is (and how it
includes powerful data preparation and modeling capabilities, in addition to being
a visualization tool).
" Verify executive sponsorship: Verify the level of executive sponsorship and support
that you have in place to advance data culture goals.
" Make purposeful decisions about your BI strategy: Decide what the ideal balance
of business-led self-service BI, managed self-service BI, and enterprise BI should be
for the key business units in the organization (covered in the content ownership
and management article). Also consider how the strategy relates to the extent of
published content for personal BI, team BI, departmental BI, and enterprise BI
(described in the content delivery scope article). Determine how these decisions
affect your action plan.
" Create an action plan: Begin creating an action plan for immediate, short-term, and
long-term action items. Identify business groups and problems that represent
"quick wins" and can make a visible difference.
" Create goals and metrics: Determine how you'll measure effectiveness for your
data culture initiatives. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate the results of your efforts.
Maturity levels

The following maturity levels will help you assess the current state of your data culture.

100: Initial
The enterprise BI team can't keep up with the needs of the business. A significant backlog of requests exists for the enterprise BI team.

Self-service BI initiatives are taking place—with some successes—in various areas of the organization. These activities are occurring in a somewhat chaotic manner, with few formal processes and no strategic plan.

There's a lack of oversight and visibility into self-service BI activities. The successes or failures of BI solutions aren't well understood.

200: Repeatable
Multiple teams have had measurable successes with self-service BI solutions. People in the organization are starting to pay attention.

Investments are being made to identify the ideal balance of enterprise BI and self-service BI.

300: Defined
Specific goals are established for advancing the data culture. These goals are implemented incrementally.

Learnings from what works in individual business units are shared.

Effective self-service BI practices are incrementally—and purposely—replicated throughout more areas of the organization.

400: Capable
The data culture goals to employ informed decision-making are aligned with organizational objectives. They're actively supported by the executive sponsor and the COE, and they have a direct impact on adoption strategies.

A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared goals.

Individuals who take initiative in building valuable BI solutions are recognized and rewarded.

500: Efficient
The business value of BI solutions is regularly evaluated and measured. KPIs or OKRs are used to track data culture goals and the results of BI efforts.

Feedback loops are in place, and they encourage ongoing data culture improvements.

Continual improvement of organizational adoption, user adoption, and solution adoption is a top priority.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about the
importance of an executive sponsor.
Power BI adoption roadmap: Executive
sponsorship
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

When planning to advance the data culture and the state of organizational adoption for
Power BI, it's crucial to have executive support. An executive sponsor is imperative
because adopting Power BI is far more than just a technology project.

Although some successes can be achieved by a few determined individual contributors, the organization is in a significantly better position when a senior leader is engaged, supportive, informed, and available to assist with the following activities.

Formulating a strategic vision and priorities for BI and analytics.


Leading by example by actively using Power BI in a way that's consistent with data
culture and adoption goals.
Allocating staffing and prioritizing resources.
Approving funding (for example, Power BI licenses).
Removing barriers to enable action.
Communicating announcements that are of critical importance, to help them gain
traction.
Decision-making, particularly for strategic-level governance decisions.
Dispute resolution (for escalated issues that can't be resolved by operational or
tactical personnel).
Supporting organizational change initiatives (for example, creating or expanding
the Center of Excellence).

) Important

The ideal executive sponsor has sufficient credibility, influence, and authority
throughout the organization.
Identifying an executive sponsor
There are multiple ways to identify an executive sponsor.

Top-down pattern
An executive sponsor may be selected by a more senior executive. For example, the
Chief Executive Officer (CEO) may hire a Chief Data Officer (CDO) or Chief Analytics
Officer (CAO) to explicitly advance the organization's data culture objectives or lead
digital transformation efforts. The CDO or CAO then becomes the ideal candidate to
serve as the executive sponsor for Power BI (or analytics in general).

Here's another example: The CEO may empower an existing executive, such as the Chief
Financial Officer (CFO), because they have a good track record leading data and
analytics in their organization. As the new executive sponsor, the CFO could then lead
efforts to replicate the finance team's success to other areas of the organization.

7 Note

Having a Power BI executive sponsor at the C-level is an excellent sign. It indicates that the organization recognizes the importance of data as a strategic asset and is advancing its data culture in a positive direction.

Bottom-up pattern
Alternatively, a candidate for the executive sponsor role could emerge due to the
success they've experienced with creating BI solutions. For example, a business unit
within the organization, such as Finance, has organically achieved great success with
their use of data and analytics. Essentially, they've successfully formed their own data
culture on a small scale. A junior-level leader who hasn't reached the executive level
(such as a director) may then grow into the executive sponsor role by sharing successes
with other business units across the organization.

The bottom-up approach is more likely to occur in smaller organizations. It may be because the return on investment and strategic imperative of a data culture (or digital transformation) isn't an urgent priority for C-level executives.

The success for a leader using the bottom-up pattern depends on being recognized by
senior leadership.
With a bottom-up approach, the sponsor may be able to make some progress, but they
won't have formal authority over other business units. Without clear authority, it's only a
matter of time until challenges occur that are beyond their level of authority. For this
reason, the top-down approach has a higher probability of success. However, initial
successes with a bottom-up approach can convince leadership to increase their level of
sponsorship, which may start a healthy competition across other business units in the
adoption of BI.

Considerations and key actions

Checklist - Here's a list of considerations and key actions you can take to establish or
strengthen executive support for Power BI.

" Identify an executive sponsor with broad authority: Find someone in a sufficient


position of influence and authority (across organizational boundaries) who
understands the value and impact of business intelligence. It is important that the
individual has a vested interest in the success of analytics in the organization.
" Involve your executive sponsor: Consistently involve your executive sponsor in all
strategic-level governance decisions involving data management, business
intelligence, and analytics. Also involve your sponsor in all governance data culture
initiatives to ensure alignment and consensus on goals and priorities.
" Establish responsibilities and expectation: Formalize the arrangement with
documented responsibilities for the executive sponsor role. Ensure that there's no
uncertainty about expectations and time commitments.
" Identify a backup for the sponsor: Consider naming a backup executive sponsor.
The backup can attend meetings in the sponsor's absence and make time-sensitive
decisions when necessary.
" Identify business advocates: Find influential advocates in each business unit.
Determine how their cooperation and involvement can help you to accomplish your
objectives. Consider involving advocates from various levels in the organization
chart.

Maturity levels
The following maturity levels will help you assess your current state of executive
support.

100: Initial
There may be awareness from at least one executive about the strategic importance of how Power BI can play a part in advancing the organization's data culture goals. However, neither a Power BI sponsor nor an executive-level decision-maker is identified.

200: Repeatable
Informal executive support exists for Power BI through informal channels and relationships.

300: Defined
An executive sponsor is identified. Expectations are clear for the role.

400: Capable
An executive sponsor is well established, with sufficient authority across organizational boundaries.

A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared data culture goals.

500: Efficient
The executive sponsor is highly engaged. They're a key driver for advancing the organization's data culture vision.

The executive sponsor is involved with ongoing organizational adoption improvements. KPIs (key performance indicators) or OKRs (objectives and key results) are used to track data culture goals and the results of BI efforts.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about business alignment and how it connects BI activities and solutions to organizational goals.
Power BI adoption roadmap: Business
alignment
Article • 09/11/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

Business intelligence (BI) activities and solutions have the best potential to deliver value
when they're well aligned to organizational business goals. In general, effective business
alignment helps to improve adoption. With effective business alignment, the data
culture and BI strategy enable business users to achieve their business objectives.

You can achieve effective business alignment with analytics activities and BI solutions by
having:

An understanding of the strategic importance of data and analytics in achieving measurable progress toward business goals.
A shared awareness of the business strategy and key business objectives among
content owners, creators, consumers, and administrators. A common
understanding should be integral to the data culture and decision-making across
the organization.
A clear and unified understanding of the business data needs, and how meeting
these needs helps content creators and content consumers achieve their
objectives.
A governance strategy that effectively balances user enablement with risk
mitigation.
An engaged executive sponsor who provides top-down guidance to regularly
promote, motivate, and support the BI strategy and Power BI activities and
solutions.
Productive and solution-oriented discussions between business teams and
technical teams that address business data needs and problems.
Effective and flexible requirements gathering processes to design and plan
solutions.
Structured and consistent processes to validate, deploy, and support solutions.
Structured and sustainable processes to regularly update existing solutions so that
they remain relevant and valuable, despite changes in technology or business
objectives.

Effective business alignment brings significant benefits to an organization, including:

Improved adoption, because content consumers are more likely to use solutions
that enable them to achieve their objectives.
Increased business return on investment (ROI) for analytics initiatives and
solutions, because these initiatives and solutions will be more likely to directly
advance progress toward business goals.
Less effort and fewer resources spent on change management and changing
business requirements, due to an improved understanding of business data needs.

Achieve business alignment


There are multiple ways to achieve business alignment of BI activities and initiatives.

Communication alignment
Effective and consistent communication is critical to aligning processes. Consider the
following actions and activities when you want to improve communication for successful
business alignment.

Create a communication plan for central teams and the user community to follow.
Plan regular alignment meetings between different teams and groups. For example, central teams can schedule regular planning and priority alignment sessions with business units. Another example is when central teams schedule regular meetings to mentor and enable self-service users.
Set up a centralized portal to consolidate communication and documentation for
user communities. For strategic solutions and initiatives, consider using a
communication hub.
Limit complex business and technical terminology in cross-functional
communications.
Strive for concise communication and documentation that's formatted and well
organized. That way, people can easily find the information that they need.
Consider maintaining a visible roadmap that shows the planned BI solutions and
activities relevant to the user community in the next quarter.
Be transparent when communicating policies, decisions, and changes.
Create a process for people to provide feedback, and review that feedback
regularly as part of regular planning activities.

) Important

To achieve effective business alignment, you should make it a priority to dismantle any communication barriers between business teams and technical teams.

Strategic alignment
Your business strategy should be well aligned with your BI strategy. To incrementally
achieve this alignment, we recommend that you commit to follow structured, iterative
planning processes.

Strategic planning: Define BI goals and priorities based on the business strategy
and current state of BI adoption and implementation. Typically, strategic planning
occurs every 12-18 months to iteratively define high-level desired outcomes. You
should synchronize strategic planning with key business planning processes.
Tactical planning: Define objectives, action plans, and a backlog of solutions that
help you to achieve your BI goals. Typically, tactical planning occurs quarterly to
iteratively re-evaluate and align the BI strategy and activities to the business
strategy. This alignment is informed by business feedback and changes to business
objectives or technology. You should synchronize tactical planning with key project
planning processes.
Solution planning: Design, develop, test, and deploy BI solutions that support
content creators and consumers in achieving their business objectives. Both
centralized content creators and self-service content creators conduct solution
planning to ensure that the solutions they create are well aligned with business
objectives. You should synchronize solution planning with key adoption and
governance planning processes.

) Important

Effective business alignment is a key prerequisite for a successful BI strategy.

Governance and compliance alignment


A key aspect of effective business alignment is balancing user enablement and risk mitigation. This balance is an important aspect of your governance strategy, together with other activities related to compliance, security, and privacy, which can include:
Transparently document and justify compliance criteria, key governance decisions,
and policies so that content creators and consumers know what's expected of
them.
Regularly audit and assess activities to identify risk areas or strong deviations from
the desired behaviors.
Provide mechanisms for content owners, content creators, and content consumers
to request clarification or provide feedback about existing policies.

U Caution

A governance strategy that's poorly aligned with business objectives can result in
more conflicts and compliance risk, because users might pursue workarounds to
complete their tasks.

Executive alignment
Executive leadership plays a key role in defining the business strategy and business
goals. To this end, executive engagement is an important part of achieving top-down
business alignment.

To achieve executive alignment, consider the following key considerations and activities.

Work with your executive sponsor to organize short, quarterly executive feedback
sessions about the use of BI in the organization. Use this feedback to identify
changes in business objectives, re-assess the BI strategy, and inform future actions
to improve business alignment.
Schedule regular alignment meetings with the executive sponsor to promptly
identify any potential changes in the business strategy or data needs.
Deliver monthly executive summaries that highlight relevant information,
including:
Key performance indicators (KPIs) that measure progress toward BI goals.
Power BI adoption and implementation milestones.
Technology changes that may impact organizational business goals.

) Important

Don't underestimate the importance of the role your executive sponsor has in
achieving and maintaining effective business alignment.
Maintain business alignment
Business alignment is a continual process. To maintain business alignment, consider the
following factors.

Assign a responsible team: A working team reviews feedback and organizes re-
alignment sessions. This team is responsible for the alignment of planning and
priorities between the business and BI strategy.
Create and support a feedback process: Your user community requires the means
to provide feedback. Examples of feedback can include requests to change existing
solutions, or to create new solutions and initiatives. This feedback is essential for
bottom-up business user alignment, and it drives iterative and continuous
improvement cycles.
Measure the success of business alignment: Consider using surveys, sentiment
analysis, and usage metrics to assess the success of business alignment. When
combined with other concise feedback mechanisms, this can provide valuable
input to help define future actions and activities to improve business alignment
and Power BI adoption.
Schedule regular re-alignment sessions: Ensure that BI strategic planning and
tactical planning occur alongside relevant business strategy planning (when
business leadership review business goals and objectives).

7 Note

Because business objectives continually evolve, you should understand that solutions and initiatives will change over time. Don't assume that requirements for BI projects are rigid and can't be altered. If you struggle with changing requirements, it may be an indication that your requirements-gathering process is ineffective or inflexible, or that your development workflows don't sufficiently incorporate regular feedback.

) Important

To effectively maintain business alignment, it's essential that user feedback be promptly and directly addressed. Regularly review and analyze feedback, and consider how you can integrate it into iterative strategic planning, tactical planning, and solution planning processes.

Questions to ask
Use questions like those found below to assess business alignment.

Can people articulate the goals of the organization and the business objectives of
their team?
To what extent do descriptions of organizational goals align across the
organization? How do they align between the business user community and
leadership community? How do they align between business teams and technical
teams?
Does executive leadership understand the strategic importance of data in
achieving business objectives? Does the user community understand the strategic
importance of data in helping them succeed in their jobs?
Are changes in the business strategy reflected promptly in changes to the BI
strategy?
Are changes in business user data needs addressed promptly in BI solutions?
To what extent do data policies support or conflict with existing business processes
and the way that users work?
Do solution requirements focus more on technical features than addressing
business questions? Is there a structured requirements gathering process? Do
content owners and creators interact effectively with stakeholders and content
consumers during requirements gathering?
How are decisions about data or BI investments made? Who makes these
decisions?
How well do people trust existing data and BI solutions? Is there a single version of
truth, or are there regular debates about who has the correct version?
How are BI initiatives and strategy communicated across the organization?

Maturity levels

A business alignment assessment evaluates integration between the business strategy and BI strategy. Specifically, this assessment focuses on whether or not BI initiatives and solutions support business users in achieving strategic business objectives.

The following maturity levels will help you assess your current state of business alignment.

100: Initial
• Business and BI strategies lack formal alignment, which leads to reactive implementation and misalignment between data teams and business users.
• Misalignment in priorities and planning hinders productive discussions and effectiveness.
• Executive leadership doesn't recognize data as a strategic asset.

200: Repeatable
• There are efforts to align BI initiatives with specific data needs, but without a consistent approach or understanding of their success.
• Alignment discussions focus on immediate or urgent needs and on features, solutions, tools, or data, rather than strategic alignment.
• People have a limited understanding of the strategic importance of data in achieving business objectives.

300: Defined
• BI initiatives are prioritized based on their alignment with strategic business objectives. However, alignment is siloed and typically focuses on local needs.
• Strategic initiatives and changes have a clear, structured involvement of both the business and BI strategic decision makers. Business teams and technical teams can have productive discussions to meet business and governance needs.

400: Capable
• There's a consistent, organization-wide view of how BI initiatives and solutions support business objectives.
• Regular and iterative strategic alignments occur between the business and technical teams. Changes to the business strategy result in clear actions that are reflected by changes to the BI strategy to better support business needs.
• Business and technical teams have healthy, productive relationships.

500: Efficient
• The BI strategy and the business strategy are fully integrated. Continuous improvement processes drive consistent alignment, and they are themselves data driven.
• Business and technical teams have healthy, productive relationships.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about content
ownership and management, and its effect on business-led self-service BI, managed
self-service BI, and enterprise BI.
Power BI adoption roadmap: Content
ownership and management
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

7 Note

The Power BI implementation planning usage scenarios explore many concepts


discussed in this article. The usage scenario articles include detailed diagrams that
you may find helpful to support your planning and decision making.

There are three primary strategies for how business intelligence (BI) content is owned
and managed: business-led self-service BI, managed self-service BI, and enterprise BI.
For the purposes of this series of articles, the term content refers to any type of data
item (like a report or dashboard). It's synonymous with solution.

The organization's data culture is the driver for why, how, and by whom each of these
three content ownership strategies is implemented.

The areas in the above diagram include:

Area Description

Business-led self-service BI: All content is owned and managed by the creators and
subject matter experts within a business unit. This ownership strategy is also known as a
decentralized or bottom-up BI strategy.

Managed self-service BI: The data is owned and managed by a centralized team, whereas
business users take responsibility for reports and dashboards. This ownership strategy is
also known as discipline at the core and flexibility at the edge.

Enterprise BI: All content is owned and managed by a centralized team such as IT,
enterprise BI, or the Center of Excellence (COE).

It's unlikely that an organization operates exclusively with one content ownership and
management strategy. Depending on your data culture, one strategy might be far more
dominant than the others. The choice of strategy could differ from solution to solution,
or from team to team. In fact, a single team can actively use multiple strategies if it's
both a consumer of enterprise BI content and a producer of its own self-service content.
The strategy to pursue depends on factors such as:

Requirements for a solution (such as a collection of reports).


User skills.
Ongoing commitment for training and skills growth.
Flexibility required.
Complexity level.
Priorities and leadership commitment level.

The organization's data culture—particularly its position on data democratization—has considerable influence on the extent to which each of the three content ownership strategies is used. While there are common patterns for success, there's no one-size-fits-all approach. Each organization's governance model and approach to content ownership and management should reflect the differences in data sources, applications, and business context.

How content is owned and managed has a significant effect on governance, the extent
of mentoring and user enablement, needs for user support, and the COE operating
model.

As discussed in the governance article, the level of governance and oversight depends
on:

Who owns and manages the content.


The scope of content delivery.
The data subject area and sensitivity level.
The importance of the data.
In general:

Business-led self-service BI content is subject to the least stringent governance and oversight controls. It often includes personal BI and team BI solutions.
Managed self-service BI content is subject to moderately stringent governance and oversight controls. It frequently includes team BI and departmental BI solutions.
Enterprise BI solutions are subject to more rigorous governance controls and oversight.

As stated in the adoption maturity levels article, organizational adoption measures the
state of data management processes and governance. The choices made for content
ownership and management significantly affect how organizational adoption is
achieved.

Ownership and stewardship


There are many roles related to data management. Roles can be defined in many ways and can be easily misunderstood. The following list presents possible ways you may conceptually define these roles:

Data steward: Responsible for defining and/or managing acceptable data quality levels as well as master data management (MDM).

Subject matter expert (SME): Responsible for defining what the data means, what it's used for, who may access it, and how the data is presented to others. Collaborates with domain owner as needed and supports colleagues in their use of data.

Technical owner: Responsible for creating, maintaining, publishing, and securing access to data and reporting items.

Domain owner: Higher-level decision-maker who collaborates with governance teams on data management policies, processes, and requirements. Decision-maker for defining appropriate and inappropriate uses of the data. Participates on the data governance board, as described in the governance article.

Assigning ownership for a data domain tends to be more straightforward when managing transactional source systems. In BI solutions, data is integrated from multiple domain areas, then transformed and enriched. For downstream analytical solutions, the topic of ownership becomes more complex.

7 Note
Be clear about who is responsible for managing data items. It's crucial to ensure a
good experience for content consumers. Specifically, clarity on ownership is helpful
for:

Who to contact with questions.


Feedback.
Enhancement requests.
Support requests.

In the Power BI service, content owners can set the contact list property for many
types of items. The contact list is also used in security workflows. For example,
when a user is sent a URL to open an app but they don't have permission, they will
be presented with an option to make a request for access.

Guidelines for being successful with ownership:

Define how ownership and stewardship terminology is used in your organization, including expectations for these roles.
Set contacts for each workspace and for individual items to communicate
ownership and/or support responsibilities.
Specify 2-4 workspace administrators and conduct an audit of workspace admins
regularly (perhaps twice a year). Workspace admins might be directly responsible
for managing workspace content, or it may be that those tasks are assigned to
colleagues who do the hands-on work. In all cases, workspace admins should be
able to easily contact owners of specific content.
Include consistent branding on reports to indicate who produced the content and
who to contact for help. A small image or text label located in the report footer is
valuable, especially when the report is exported from the Power BI service. A
standard template, as described in the mentoring and user enablement article, can
encourage and simplify the consistent use of branding.
Make use of best practices reviews with the COE, which are discussed in the COE
article.

The remainder of this article covers considerations related to the three content
ownership and management strategies.

Business-led self-service BI
With business-led self-service BI, all content is owned and managed by creators and
subject matter experts. Because responsibility is retained within a business unit, this
strategy is often described as the bottom-up, or decentralized, approach. Business-led
self-service BI is often a good strategy for personal BI and team BI solutions.

) Important

The concept of business-led self-service BI is not the same as shadow IT. In both
scenarios, BI content is created, owned, and managed by business users. However,
shadow IT implies that the business unit is circumventing IT and so the solution is
not sanctioned. With business-led self-service BI solutions, the business unit has full
authority to create and manage content. Resources and support from the COE are
available to self-service content creators. It's also expected that the business unit
complies with all established data governance guidelines and policies.

Business-led self-service BI is most suitable when:

Decentralized data management aligns with the organization's data culture, and
the organization is prepared to support these efforts.
Data exploration and freedom to innovate is a high priority.
The business unit wants to have the most involvement and retain the highest level
of control.
The business unit has skilled people capable of—and fully committed to—
supporting solutions through the entire lifecycle. It covers all types of Power BI
items, including the data (dataflows and datasets), the visuals (reports and
dashboards), and apps.
The flexibility to respond to changing business conditions and react quickly
outweighs the need for stricter governance and oversight.

Guidelines for being successful with business-led self-service BI:

Teach your creators to use the same techniques that IT would use, like shared
datasets and dataflows. Having fewer duplicated datasets reduces maintenance,
improves consistency, and reduces risk.
Focus on providing mentoring, training, resources, and documentation (described
in the mentoring and user enablement article). The importance of these efforts
can't be overstated. Be prepared for skill levels of self-service content creators to
vary significantly. It's also common for a solution to deliver excellent business value
yet be built in such a way that it won't scale or perform well over time (as historic
data volumes increase). Having the COE available to help when these situations
arise is very valuable.
Provide guidance on the best way to use endorsements. The promoted
endorsement is for content produced by self-service creators. Consider reserving
use of the certified endorsement for enterprise BI content and managed self-
service BI content (discussed next).
Analyze the activity log to discover situations where the COE could proactively
contact self-service owners to offer helpful information. It's especially useful when
a suboptimal usage pattern is detected. For example, log activity could reveal
overuse of individual item sharing when an app or workspace roles may be a
better choice. The data from the activity log allows the COE to offer support and
advice to the business units. In turn, this information can help increase the quality
of solutions, while allowing the business to retain full ownership and control of
their content.
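
To make this tangible, the following Python sketch shows one way the COE could summarize per-item sharing activity by calling the Get Activity Events admin REST API. It's a minimal sketch, not a complete solution: the access token, the chosen date, and the event property names used for filtering are illustrative assumptions that you should verify against your own tenant and authentication standards.

```python
# Minimal sketch (illustrative, not a complete solution): summarize per-item sharing
# from the Power BI activity log by calling the Get Activity Events admin REST API.
# Assumptions: ACCESS_TOKEN holds a valid Azure AD token with Power BI admin
# permissions, the 'requests' package is installed, and the event property names
# ('Activity', 'UserId') match your tenant's event schema.
import requests
from collections import Counter

ACCESS_TOKEN = "<admin-access-token>"  # hypothetical placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# The API returns events for a single UTC day and pages results through a
# continuation URI.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-15T00:00:00'&endDateTime='2024-01-15T23:59:59'"
)

share_events_per_user = Counter()
while url:
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    payload = response.json()
    for event in payload.get("activityEventEntities", []):
        # 'ShareReport' indicates per-item sharing of a report.
        if event.get("Activity") == "ShareReport":
            share_events_per_user[event.get("UserId", "unknown")] += 1
    url = payload.get("continuationUri")  # None when no more pages remain

# Users with many per-item shares might be better served by an app or workspace roles.
for user, count in share_events_per_user.most_common(10):
    print(f"{user}: {count} report shares")
```

Surfacing heavy per-item sharing in this way can prompt a supportive conversation with the content owner about whether a Power BI app or workspace roles would serve their audience better.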

Managed self-service BI
Managed self-service BI is a blended approach. The data is owned and managed by a
centralized team (such as IT, enterprise BI, or the COE), while responsibility for reports
and dashboards belongs to creators and subject matter experts within the business
units. Managed self-service BI is frequently a good strategy for team BI and
departmental BI solutions.

This approach is often called discipline at the core and flexibility at the edge. It's because
the data architecture is maintained by a single team with an appropriate level of
discipline and rigor. Business units have the flexibility to create reports and dashboards
based on centralized data. This approach allows report creators to be far more efficient
because they can remain focused on delivering value from their data analysis and
visuals.

Managed self-service BI is most suitable when:

Centralized data management aligns with the organization's data culture.


The organization has a team of BI experts who manage the data architecture.
There's value in the reuse of data by many self-service report creators across
organizational boundaries.
Self-service report creators need to produce content at a pace faster than the
centralized team can accommodate.
Different people are responsible for handling data preparation, data modeling, and
report creation.

Guidelines for being successful with managed self-service BI:

Teach users to separate model and report development. They can use live
connections to create reports based on existing datasets. When the dataset is
decoupled from the report, it promotes data reuse by many reports and many
authors. It also facilitates the separation of duties.
Use dataflows to centralize data preparation logic and to share commonly used
data tables—like date, customer, product, or sales—with many dataset creators.
Refine the dataflow as much as possible, using friendly column names and correct
data types to reduce the downstream effort required by dataset authors, who
consume the dataflow as a source. Dataflows are an effective way to reduce the
time involved with data preparation and improve data consistency across datasets.
The use of dataflows also reduces the number of data refreshes on source systems
and allows fewer users who require direct access to source systems.
When self-service creators need to augment an existing dataset with departmental
data, educate them to use DirectQuery connections to Power BI datasets and Azure
Analysis Services. This feature allows for an ideal balance of self-service
enablement while taking advantage of the investment in data assets that are
centrally managed.
Use the certified endorsement for datasets and dataflows to help content creators
identify trustworthy sources of data.
Include consistent branding on all reports to indicate who produced the content
and who to contact for help. Branding is particularly helpful to distinguish content
that is produced by self-service creators. A small image or text label in the report
footer is valuable when the report is exported from the Power BI service.
Consider implementing separate workspaces for storing data and reports. This
approach allows for better clarity on who is responsible for content. It also allows
for more restrictive workspace roles assignments. That way, report creators can
only publish content to their reporting workspace; and, read and build dataset
permissions allow creators to create new reports with row-level security (RLS) in
effect, when applicable.
Use the Power BI REST APIs to compile an inventory of Power BI items. Analyze the
ratio of datasets to reports to evaluate the extent of dataset reuse.
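
As an illustration of this kind of inventory analysis, the following Python sketch calls the admin REST APIs for datasets and reports and computes a simple reuse measure. It's a minimal sketch under stated assumptions: an access token with admin permissions is already available, result paging isn't needed for your tenant size, and error handling is omitted.

```python
# Minimal sketch (illustrative): gauge dataset reuse with the Power BI admin REST APIs.
# Assumptions: ACCESS_TOKEN holds a valid Azure AD token with Power BI admin
# permissions, the 'requests' package is installed, and the tenant is small enough
# that result paging isn't required.
import requests
from collections import Counter

ACCESS_TOKEN = "<admin-access-token>"  # hypothetical placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
BASE_URL = "https://api.powerbi.com/v1.0/myorg/admin"

datasets = requests.get(f"{BASE_URL}/datasets", headers=HEADERS).json()["value"]
reports = requests.get(f"{BASE_URL}/reports", headers=HEADERS).json()["value"]

# Each report object carries the ID of the dataset it's bound to.
reports_per_dataset = Counter(r["datasetId"] for r in reports if r.get("datasetId"))
single_report_datasets = sum(1 for n in reports_per_dataset.values() if n == 1)

print(f"Datasets: {len(datasets)}  Reports: {len(reports)}")
print(f"Reports per dataset (average): {len(reports) / max(len(datasets), 1):.2f}")
print(f"Datasets used by only one report: {single_report_datasets}")
```

A high proportion of datasets that support only one report suggests an opportunity to promote shared datasets and reduce duplication.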

Enterprise BI
Enterprise BI is a centralized approach in which all content is owned and managed by a
centralized team. This team is usually IT, enterprise BI, or the COE.

Enterprise BI is most suitable when:

Centralizing content management with a single team aligns with the organization's
data culture.
The organization has BI expertise to manage all the BI items end-to-end.
The content needs of consumers are well-defined, and there's little need to
customize or explore data beyond the reporting solution that's delivered.
Content ownership and direct access to data needs to be limited to a few people.
The data is highly sensitive or subject to regulatory requirements.

Guidelines for being successful with enterprise BI:

Implement a rigorous process for use of the certified endorsement for datasets,
reports, and apps. Not all enterprise BI content needs to be certified, but much of
it probably should be. Certified content should indicate that data quality has been
validated. Certified content should also follow change management rules, have
formal support, and be fully documented. Because certified content has passed
rigorous standards, the expectations for trustworthiness are higher.
Include consistent branding on enterprise BI reports to indicate who produced the
content, and who to contact for help. A small image or text label in the report
footer is valuable when the report is exported from the Power BI service.
If you use specific report branding to indicate enterprise BI content, be careful with
the save a copy functionality that would allow a user to download a copy of a
report and personalize it. Although this functionality is an excellent way to bridge
enterprise BI with managed self-service BI, it dilutes the value of the branding. A
more seamless solution is to provide a separate Power BI Desktop template file for
self-service authors. The template defines a starting point for report creation with a
live connection to an existing dataset, and it doesn't include branding. The
template file can be shared as a link within a Power BI app, or from the community
site.

Ownership transfers
Occasionally, the ownership of a particular solution may need to be transferred to
another team. An ownership transfer from a business unit to a centralized team can
happen when:

A business-led solution is used by a significant number of people, or it now supports critical business decisions. In these cases, the solution should be managed by a team with processes in place to implement higher levels of governance and support.
A business-led solution is a candidate to be used far more broadly throughout the
organization, so it needs to be managed by a team who can set security and
deploy content widely throughout the organization.
A business unit no longer has the expertise, budget, or time available to continue
managing the content.
The size or complexity of a solution has grown to a point where a different data
architecture or redesign is required.
A proof of concept is ready to be operationalized.

The COE should have well-documented procedures for identifying when a solution is a
candidate for ownership transfer. It's very helpful if help desk personnel know what to
look for as well. Having a customary pattern for self-service creators to build and grow a
solution, and hand it off in certain circumstances, is an indicator of a productive and
healthy data culture. A simple ownership transfer may be addressed during COE office
hours; a more complex transfer may warrant a small project managed by the COE.

7 Note

There's potential that the new owner will need to do some refactoring before
they're willing to take full ownership. Refactoring is most likely to occur with the
less visible aspects of data preparation, data modeling, and calculations. If there are
any manual steps or flat file sources, it's an ideal time to apply those
enhancements. The branding of reports and dashboards may also need to change,
for example, if there's a footer indicating report contact or a text label indicating
that the content is certified.

It's also possible for a centralized team to transfer ownership to a business unit. It could
happen when:

The team with domain knowledge is better equipped to own and manage the
content going forward.
The centralized team has created the solution for a business unit that doesn't have
the skills to create it from scratch, but it can maintain and extend the solution
going forward.

 Tip

Don't forget to recognize and reward the work of the original creator, particularly if
ownership transfers are a common occurrence.

Considerations and key actions


Checklist - Here's a list of considerations and key actions you can take to strengthen
your approach to content ownership and management.

" Gain a full understanding of what's currently happening: Ensure you deeply


understand how content ownership and management is happening throughout the
organization. Recognize that there likely won't be a one-size-fits-all approach to
apply uniformly across the entire organization. Review the Power BI implementation
planning usage scenarios to understand how Power BI can be used in diverse ways.
" Conduct discussions: Determine what is currently working well, what isn't working
well, and what the desired balance is between the three ownership strategies. If
necessary, schedule discussions with specific people on various teams. Develop a
plan for moving from the current state to the desired state.
" Perform an assessment: If your enterprise BI team currently has challenges related
to scheduling and priorities, do an assessment to determine if a managed self-
service BI strategy can be put in place to empower more content creators
throughout the organization. Managed self-service BI can be extremely effective on
a global scale.
" Clarify terminology: Clarify terms used in your organization for owner, data
steward, and subject matter expert.
" Assign clear roles and responsibilities: Make sure roles and responsibilities for
owners, stewards, and subject matter experts are documented and well understood
by everyone involved. Include backup personnel.
" Ensure community involvement: Ensure that all your content owners—from both
the business and IT—are part of your community of practice.
" Create user guidance for owners and contacts in Power BI: Determine how you
will use the contacts feature in Power BI. Communicate with content creators about
how it should be used, and why it's important.
" Create a process for handling ownership transfers: If ownership transfers occur
regularly, create a process for how it will work.
" Support your advanced content creators: Determine your strategy for using
external tools for advanced authoring capabilities and increased productivity.

Maturity levels

The following maturity levels will help you assess the current state of your content
ownership and management.
100: Initial
Self-service content creators own and manage content in an uncontrolled way, without a specific strategy.

A high ratio of datasets to reports exists. When many datasets support only one report, it indicates opportunities to improve data reusability, improve trustworthiness, reduce maintenance, and reduce the number of duplicate datasets.

Discrepancies between different reports are common, causing distrust of content produced by others.

200: Repeatable
A plan is in place for which content ownership and management strategy to use and in which circumstances.

Initial steps are taken to improve the consistency and trustworthiness levels for self-service BI efforts.

Guidance for the user community is available that includes expectations for self-service versus enterprise content.

Roles and responsibilities are clear and well understood by everyone involved.

300: Defined
Managed self-service BI is a priority and an area of investment to further advance the data culture. The priority is to allow report creators the flexibility they need while using well-managed, secure, and trustworthy data sources.

Report branding is consistently used to indicate who produced the content.

A mentoring program exists to educate self-service content creators on how to apply best practices and make good decisions.

400: Capable
Criteria are defined to align governance requirements for self-service versus enterprise content.

There's a plan in place for how to request and handle ownership transfers.

Managed self-service BI—and techniques for the reuse of data—are commonly used and well understood.

500: Efficient
Proactive steps to communicate with users occur when any concerning activities are detected in the activity log. Education and information are provided to make gradual improvements or reduce risk.

Third-party tools are used by highly proficient content creators to improve productivity and efficiency.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about the scope
of content delivery.
Power BI adoption roadmap: Content
delivery scope
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

The four delivery scopes described in this article include personal BI, team BI,
departmental BI, and enterprise BI. To be clear, focusing on the scope of a delivered BI
solution does refer to the number of people who may view the solution, though the
impact is much more than that. The scope strongly influences best practices for content
distribution, sharing, security, and information protection. The scope has a direct
correlation to the level of governance (such as requirements for change management,
support, or documentation), the extent of mentoring and user enablement, and needs
for user support. It also influences user licensing decisions.

The related content ownership and management article makes similar points. Whereas
the focus of that article was on the content creator, the focus of this article is on the
target content usage. Both inter-related aspects need to be considered to arrive at
governance decisions and the Center of Excellence (COE) operating model.

) Important

Not all data and solutions are equal. Be prepared to apply different levels of data
management and governance to different teams and various types of content.
Standardized rules are easier to maintain, however flexibility or customization is
often necessary to apply the appropriate level of oversight for particular
circumstances. Your executive sponsor can prove invaluable by reaching consensus
across stakeholder groups when difficult situations arise.

Scope of content delivery


The following diagram focuses on the number of target consumers who will consume
the content.
The four scopes of content delivery shown in the above diagram include:

Personal BI: Personal BI solutions are, as the name implies, intended for use by the
creator. Sharing content with others isn't an objective. Therefore, personal BI has
the fewest number of target consumers.
Team BI: Collaborates and shares content with a relatively small number of
colleagues who work closely together.
Departmental BI: Delivers content to a large number of consumers, who can
belong to a department or business unit.
Enterprise BI: Delivers content broadly across organizational boundaries to the
largest number of target consumers. Enterprise content is most often managed by
a centralized team and is subject to additional governance requirements.

Contrast the above four scopes of content delivery with the following diagram, which
has an inverse relationship with respect to the number of content creators.
The four scopes of content creators shown in the above diagram include:

Personal BI: Represents the largest number of creators because any user can work
with data using business-led self-service BI methods. Although managed self-
service BI methods can be used, it's less common with personal BI.
Team BI: Colleagues within a team collaborate and share with each other using
business-led self-service BI patterns. It has the next largest number of creators in
the organization. Managed self-service BI patterns may also begin to emerge as
skill levels advance.
Departmental BI: Involves a smaller population of creators. They're likely to be
considered power users who are using sophisticated tools to create sophisticated
solutions. Managed self-service BI practices are very common and highly
encouraged.
Enterprise BI: Involves the smallest number of content creators because it typically
includes only professional BI developers who work in the BI team, the COE, or in IT.

The content ownership and management article introduced the concepts of business-
led self-service BI, managed self-service BI, and enterprise BI. The most common
alignment between ownership and delivery scope is:
Business-led self-service BI ownership: Commonly deployed as personal and team
BI solutions.
Managed self-service BI ownership: Can be deployed as personal, team, or
departmental BI solutions.
Enterprise BI ownership: Deployed as enterprise BI-scoped solutions.

Some organizations also equate self-service content with community-based support. It's
the case when self-service content creators and owners are responsible for supporting
the content they publish. The user support article describes multiple informal and formal
levels for support.

7 Note

The term sharing can be interpreted two ways: It's often used in a general way
related to sharing content with colleagues, which could be implemented multiple
ways. It can also reference a specific feature in Power BI, which is a specific
implementation where a user or group is granted read-only access to a single item.
In this article, the term sharing is meant in a general way to describe sharing
content with colleagues. When the per-item sharing feature is intended, this article
will make a clear reference to that feature.

Personal BI
Personal BI is about enabling an individual to gain analytical value. It's also about
allowing them to more efficiently perform business tasks through the effective personal
use of data, information, and analytics. It could apply to any type of information worker
in the organization, not just data analysts and developers.

Sharing of content with others isn't the objective. Personal content can reside in Power
BI Desktop or in a personal workspace in the Power BI service. Usage of the personal
workspace is permitted with the free Power BI license.

Characteristics of personal BI:

The creator's primary intention is data exploration and analysis, rather than report
delivery.
The content is intended to be analyzed and consumed by one person: the creator.
The content may be an exploratory proof of concept that may, or may not, evolve
into a project.

Guidelines for being successful with personal BI:


Consider personal BI solutions to be like an analytical sandbox that has little formal
governance and oversight from the governance team or COE. However, it's still
appropriate to educate content creators that some general governance guidelines
may still apply to personal content. Valid questions to ask include: Can the creator
export the personal report and email it to others? Can the creator store a personal
report on a non-organizational laptop or device? What limitations or requirements
exist for content that contains sensitive data?
See the techniques described for business-led self-service BI, and managed self-
service BI in the content ownership and management article. They're highly
relevant techniques that help content creators create efficient and personal BI
solutions.
Analyze data from the activity log to discover situations where personal BI
solutions appear to have expanded beyond the original intended usage. It's usually
discovered by detecting a significant amount of content sharing from a personal
workspace.

 Tip

See the adoption maturity levels article for information about how users progress
through the stages of user adoption. See the system oversight article for
information about usage tracking via the activity log.

Team BI
Team BI is focused on a team of people who work closely together, and who are tasked
with solving closely related problems using the same data. Collaborating and sharing
content with each other in a workspace is usually the primary objective. Due to this work
style, team members will typically each have a Power BI Pro or Power BI Premium Per
User (PPU) license.

Content is often shared among the team more informally as compared to departmental or enterprise BI. For instance, the workspace is often sufficient for consuming content within a small team. It doesn't require the formality of publishing the workspace to distribute it as an app. There isn't a specific number of users at which team-based delivery becomes too informal; each team can find the number that works for them.

Characteristics of team BI:

Content is created, managed, and viewed among a group of colleagues who work
closely together.
Collaboration and co-management of content is the highest priority.
Formal delivery of reports may occur by report viewers (especially for managers of
the team), but it's usually a secondary priority.
Reports aren't always highly sophisticated or attractive; functionality and accessing
the information is what matters most.

Guidelines for being successful with team BI:

Ensure the Center of Excellence (COE) is prepared to support the efforts of self-
service creators publishing content for their team.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. It's tempting to start with one workspace per team, but that
may not be flexible enough to satisfy all needs.
See the techniques described for business-led self-service BI and managed self-
service BI in the content ownership and management article. They're highly
relevant techniques that help content creators create efficient and effective team BI
solutions.

Departmental BI
Content is delivered to members of a department or business unit. Content distribution
to a larger number of consumers is a priority for departmental BI.

Usually there's a much larger number of consumers who are content viewers (versus a
much smaller number of content creators). Therefore, a combination of Power BI Pro
licenses, Premium Per User licenses, and/or Premium capacity licenses may be used.

Characteristics of departmental BI delivery:

A few content creators typically publish content for colleagues to consume.


Formal delivery of reports and apps is a high priority to ensure consumers have the
best experience.
Additional effort is made to deliver more sophisticated and polished reports.
Following best practices for data preparation and higher quality data modeling is
also expected.
Needs for change management and application lifecycle management (ALM) begin
to emerge to ensure release stability and a consistent experience for consumers.

Guidelines for being successful with departmental BI delivery:

Ensure the COE is prepared to support the efforts of self-service creators. Creators
who publish content used throughout their department or business unit may
emerge as candidates to become champions, or they may become candidates to
join the COE as a satellite member.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. Several workspaces will likely be required to meet all the
needs of a large department or business unit.
Plan how Power BI apps will distribute content to the enterprise. An app can
provide a significantly better user experience for consuming content. In many
cases, content consumers can be granted permissions to view content via the app
only, reserving workspace permissions management for content creators and
reviewers only. The use of app audience groups allows you to mix and match
content and target audience in a flexible way.
Be clear about what data quality validations have occurred. As the importance and
criticality level grows, expectations for trustworthiness grow too.
Ensure that adequate training, mentoring, and documentation is available to
support content creators. Best practices for data preparation, data modeling, and
data presentation will result in better quality solutions.
Provide guidance on the best way to use the promoted endorsement, and when
the certified endorsement may be permitted for departmental BI solutions.
Ensure that the owner is identified for all departmental content. Clarity on
ownership is helpful, including who to contact with questions, feedback,
enhancement requests, or support requests. In the Power BI service, content
owners can set the contact list property for many types of items (like reports and
dashboards). The contact list is also used in security workflows. For example, when
a user is sent a URL to open an app but they don't have permission, they'll be
presented with an option to make a request for access.
Consider using deployment pipelines in conjunction with separate workspaces. Deployment pipelines can support development, test, and production environments, which provide more stability for consumers. A sketch of automating a stage deployment follows this list.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Include consistent branding on reports to align with departmental colors and
styling. It can also indicate who produced the content. For more information, see
the Content ownership and management article. A small image or text label in the
report footer is valuable when the report is exported from the Power BI service. A
standard Power BI Desktop template file can encourage and simplify the consistent
use of branding. For more information, see the Mentoring and user enablement
article.
See the techniques described for business-led self-service BI and managed self-
service BI in the content ownership and management article. They're highly
relevant techniques that help content creators create efficient and effective
departmental BI solutions.
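As a companion to the deployment pipelines guideline above, the following sketch shows how a stage deployment might be automated so that departmental releases are repeatable. It's a hypothetical Python example: the pipeline ID and access token are placeholders, and the request body options shown here are assumptions to verify against the current Power BI REST API reference for the Deploy All operation.

import requests

ACCESS_TOKEN = "<access token with rights to deploy the pipeline>"  # placeholder
PIPELINE_ID = "<deployment pipeline id>"                            # placeholder

def deploy_dev_to_test():
    """Request an asynchronous deployment of all content from the development stage."""
    url = f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll"
    body = {
        "sourceStageOrder": 0,  # assumption: 0 = development, 1 = test
        "note": "Scheduled deployment to the test stage",
        "options": {
            # Assumed options: allow new items to be created and existing ones to be overwritten.
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    }
    response = requests.post(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, json=body)
    response.raise_for_status()
    # The deployment runs asynchronously; the response points at an operation that can be polled.
    return response.headers.get("Location", "")

if __name__ == "__main__":
    print("Deployment operation:", deploy_dev_to_test())

Automating deployments in this way helps keep test and production stages consistent, while workspace permissions continue to control who can publish to each stage.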

Enterprise BI
Enterprise BI content is typically managed by a centralized team and is subject to
additional governance requirements. Content is delivered broadly across organizational
boundaries.

Enterprise BI usually has a significantly larger number of consumers versus content creators. Therefore, a combination of Power BI Pro licenses, Premium Per User licenses, and/or Premium capacity licenses may be used.

Characteristics of enterprise BI delivery:

A centralized team of BI experts manages the content end-to-end and publishes it for others to consume.
Formal delivery of reports and apps is a high priority to ensure consumers have the
best experience.
The content is highly sensitive, subject to regulatory requirements, or is considered
extremely critical.
Published enterprise-level datasets and dataflows may be used as a source for self-
service creators, thus creating a chain of dependencies to the source data.
Stability and a consistent experience for consumers are highly important.
Application lifecycle management, such as deployment pipelines and DevOps techniques, is commonly used. Change management processes to review and approve changes before they're deployed are also common for enterprise BI content, for example, by using a change review board or similar group.
Processes exist to gather requirements, prioritize efforts, and plan for new projects
or enhancements to existing content.
Integration with other enterprise-level data architecture and management services
may exist, possibly with other Azure services and Power Platform products.

Guidelines for being successful with enterprise BI delivery:

Governance and oversight techniques described in the governance article are relevant for managing an enterprise BI solution. Techniques primarily include change management and application lifecycle management.
Plan for how to effectively use Premium Per User or Premium capacity licensing per
workspace. Align your workspace management strategy, like how workspaces will
be organized and secured, to the planned licensing strategy.
Plan how Power BI apps will distribute enterprise BI content. An app can provide a
significantly better user experience for consuming content. Align the app
distribution strategy with your workspace management strategy.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Implement a rigorous process for use of the certified endorsement for enterprise
BI reports and apps. Datasets and dataflows can be certified, too, when there's the
expectation that self-service creators will build solutions based on them. Not all
enterprise BI content needs to be certified, but much of it probably will be.
Make it a common practice to announce when changes will occur. For more
information, see the community of practice article for a description of
communication types.
Include consistent branding on reports to align with departmental colors and
styling. It can also indicate who produced the content. For more information, see
the Content ownership and management article. A small image or text label in the
report footer is valuable when the report is exported from the Power BI service. A
standard Power BI Desktop template file can encourage and simplify the consistent
use of branding. For more information, see the Mentoring and user enablement
article.
Use the lineage view to understand dependencies, perform impact analysis, and
communicate to downstream content owners when changes will occur.
See the techniques described for enterprise BI in the content ownership and
management article. They're highly relevant techniques that help content creators
create efficient and effective enterprise BI solutions.
See the techniques described in the system oversight article for auditing,
governing, and the oversight of enterprise BI content.

Considerations and key actions

Checklist - Considerations and key actions you can take to strengthen your approach to
content delivery.

" Align goals for content delivery: Ensure that guidelines, documentation, and other
resources align with the strategic goals defined for Power BI adoption.
" Clarify the scopes for content delivery in your organization: Determine who each
scope applies to, and how each scope aligns with governance decisions. Ensure that
decisions and guidelines are consistent with how content ownership and
management is handled.
" Consider exceptions: Be prepared for how to handle situations when a smaller
team wants to publish content for an enterprise-wide audience.
Will it require the content be owned and managed by a centralized team? For
more information, see the Content ownership and management article, which
describes an inter-related concept with content delivery scope.
Will there be an approval process? Governance can become more complicated
when the content delivery scope is broader than the owner of the content. For
example, when an app that's owned by a divisional sales team is distributed to
the entire organization.
" Create helpful documentation: Ensure that you have sufficient training
documentation and support so that your content creators understand when it's
appropriate to use workspaces, apps, or per-item sharing (direct access or link) .
" Create a licensing strategy: Ensure that you have a specific strategy in place to
handle user licensing considerations for Power BI Pro, Premium Per User, and
Premium capacity. Create a process for how workspaces may be assigned each
license type, and the prerequisites required for the type of content that may be
assigned to Premium.

Maturity levels

The following maturity levels will help you assess the current state of your content
delivery.

100: Initial
Content is published for consumers by self-service creators in an uncontrolled way, without a specific strategy.

200: Repeatable
Pockets of good practices exist. However, good practices are overly dependent on the knowledge, skills, and habits of the content creator.

300: Defined
Clear guidelines are defined and communicated to describe what can and can't occur within each delivery scope. These guidelines are followed by some—but not all—groups across the organization.

400: Capable
Criteria are defined to align governance requirements for self-service versus enterprise content. Guidelines for content delivery scope are followed by most, or all, groups across the organization. Change management requirements are in place to approve critical changes for content that's distributed to a larger audience. Changes are announced and follow a communication plan. Content creators are aware of the downstream effects on their content. Consumers are aware of when reports and apps are changed.

500: Efficient
Proactive steps to communicate with users occur when any concerning activities are detected in the activity log. Education and information are provided to make gradual improvements or reduce risk. The business value that's achieved for deployed solutions is regularly evaluated.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about the Center
of Excellence (COE).
Power BI adoption roadmap: Center of
Excellence
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

A Power BI Center of Excellence (COE) is an internal team of technical and business experts. The team actively assists others within the organization who are working with data. The COE forms the nucleus of the broader community to advance adoption goals, which align with the data culture vision.

A COE might also be known as business intelligence (BI) competency center, capability
center, or a center of expertise. Some organizations use the term squad. Many
organizations perform the COE responsibilities within their BI team or analytics team.

7 Note

Having a COE team formally recognized in your organizational chart is recommended, but not required. What's most important is that the COE roles and
responsibilities are identified, prioritized, and assigned. It's common for a
centralized BI or analytics team to take on many of the COE responsibilities; some
responsibilities may also reside within IT. For simplicity, in this series of articles, COE
means a specific group of people, although you may implement it differently. It's
also very common to implement the COE with a scope broader than Power BI
alone: for instance, a Power Platform COE or an analytics COE.

Goals for a COE


Goals for a COE include:

Evangelizing a data-driven culture.


Promoting the adoption of Power BI.
Nurturing, mentoring, guiding, and educating internal users to increase their skills
and level of self-reliance.
Coordinating efforts and disseminating knowledge across organizational
boundaries.
Creating consistency and transparency for the user community, which reduces
friction and pain points related to finding relevant data and analytics content.
Maximizing the benefits of self-service BI, while reducing the risks.
Reducing technical debt by helping make good decisions that increase consistency
and result in fewer inefficiencies.

) Important

One of the most powerful aspects of a COE is the cross-departmental insight into how Power BI is used by the organization. This insight can reveal which practices work well and which don't, and it can facilitate a bottom-up approach to governance. A primary goal of the COE is to learn which practices work well, share that knowledge more broadly, and replicate best practices across the organization.

Scope of COE responsibilities


The scope of COE responsibilities can vary significantly between organizations. In a way,
a COE can be thought of as a consultancy service because its members routinely provide
expert advice to others. To varying degrees, most COEs handle hands-on work too.

Common COE responsibilities include:

Mentoring the internal Power BI community. For more information, see the
Community of practice article.
Producing, curating, and promoting training materials. For more information, see
the Mentoring and user enablement article.
Creating documentation and resources to encourage consistent use of standards
and best practices. For more information, see the Mentoring and user enablement
article.
Applying, communicating, and assisting with governance guidelines. For more
information, see the Governance article.
Handling and assisting with system oversight and administration. For more
information, see the System oversight article.
Responding to user support issues escalated from the help desk. For more
information, see the User support article.
Developing solutions and/or proofs of concept.
Establishing and maintaining the BI platform and data architecture.

Staffing a COE
People who are good candidates as COE members tend to be those who:

Understand the analytics vision for the organization.


Have a desire to continually improve analytics practices for the organization.
Have a deep interest in, and expertise with, Power BI.
Are interested in seeing Power BI used effectively and adopted successfully
throughout the organization.
Take the initiative to continually learn, adapt, and grow.
Readily share their knowledge with others.
Are interested in repeatable processes, standardization, and governance with a
focus on user enablement.
Are hyper-focused on collaboration with others.
Are comfortable working in an agile fashion.
Have an inherent interest in being involved and helping others.
Can effectively translate business needs into solutions.
Communicate well with both technical and business colleagues.

 Tip

If you have Power BI creators in your organization who constantly push the
boundaries of what can be done, they might be a great candidate to become a
recognized champion, or perhaps even a member of the COE.

When recruiting for the COE, it's important to have a mix of complementary analytical
skills, technical skills, and business skills.

Roles and responsibilities


Very generalized roles within a COE are listed below. It's common for multiple people to overlap roles, which is useful from a backup and cross-training perspective. It's also common for the same person to serve multiple roles. For instance, most COE members also serve as a coach or mentor.

COE leader: Manages the day-to-day operations of the COE. Interacts with the executive sponsor and other organizational teams, such as the data governance board, as necessary. For details of additional roles and responsibilities, see the Governance article.

Coach: Coaches and educates others on BI skills via office hours (community engagement), best practices reviews, or co-development projects. Oversees and participates in the discussion channel of the internal community. Interacts with, and supports, the champions network.

Trainer: Develops, curates, and delivers internal training materials, documentation, and resources.

Data analyst: Domain-specific subject matter expert. Acts as a liaison between the COE and the business unit. Content creator for the business unit. Assists with content certification. Works on co-development projects and proofs of concept.

Data modeler: Creates and manages shared datasets and dataflows to support self-service content creators.

Report creator: Creates and publishes reports and dashboards.

Data engineer: Plans Power BI deployment and architecture, including integration with Azure services and other data platforms. Publishes data assets which are utilized broadly across the organization.

User support: Assists with the resolution of data discrepancies and escalated help desk support issues.

As mentioned previously, the scope of responsibilities for a COE can vary significantly
between organizations. Therefore, the roles found for COE members can vary too.

Structuring a COE
The selected COE structure can vary among organizations. It's also possible for multiple structures to exist within a single large organization. That's particularly true when there are subsidiaries or when acquisitions have occurred.

7 Note

The following terms may differ from those defined for your organization, particularly the meaning of federated, which tends to have many different IT-related meanings.
Centralized COE
A centralized COE comprises a single shared services team.

Pros:

There's a single point of accountability for a single team that manages standards,
best practices, and delivery end-to-end.
The COE is one group from an organizational chart perspective.
It's easy to start with this approach and then evolve to the unified or federated
model over time.

Cons:

A centralized team might have an authoritarian tendency to favor one-size-fits-all decisions that don't always work well for all business units.
There can be a tendency to prefer IT skills over business skills.
Due to the centralized nature, it may be more difficult for the COE members to sufficiently understand the needs of all business units.

Unified COE
A unified COE is a single, centralized, shared services team that has been expanded to
include embedded team members. The embedded team members are dedicated to
supporting a specific functional area or business unit.

Pros:

There's a single point of accountability for a single team that includes cross-
functional involvement from the embedded COE team members. The embedded
COE team members are assigned to various areas of the business.
The COE is one group from an organizational chart perspective.
The COE understands the needs of business units more deeply due to dedicated
members with domain expertise.

Cons:

The embedded COE team members, who are dedicated to a specific business unit, have a different organizational chart responsibility than the people they serve directly within the business unit. This can lead to complications or differences in priorities, or it can necessitate the involvement of the executive sponsor. Preferably, the executive sponsor has a scope of authority that includes the COE and all involved business units to help resolve conflicts.
Federated COE
A federated COE comprises a shared services team plus satellite members from each
functional area or major business unit. A federated team works in coordination, even
though its members reside in different business units. Typically, satellite members are
primarily focused on development activities to support their business unit while the
shared services personnel support the entire community.

Pros:

There's cross-functional involvement from satellite COE members who represent their specific functional area and have domain expertise.
There's a balance of centralized and decentralized representation across the core
and satellite COE members.
When distributed data ownership situations exist—as could be the case when
business units take direct responsibility for data management activities—this
model is effective.

Cons:

Since core and satellite members span organizational boundaries, the federated
COE approach requires strong leadership, excellent communication, robust project
management, and ultra-clear expectations.
There's a higher risk of encountering competing priorities due to the federated
structure.
This approach typically involves part-time people and/or dotted line organizational
chart accountability that can introduce competing time pressures.

 Tip

Some organizations have success by using a rotational program. It involves federated members joining the COE for a period of time, such as six months. This type of program allows federated members to learn best practices and understand more deeply how and why things are done. Although the federated member is still focused on their specific business unit, they gain a deeper understanding of the organization's challenges. This deeper understanding leads to a more productive partnership over time.

Decentralized COE
Decentralized COEs are independently managed by business units.
Pros:

A specialized data culture exists that's focused on the business unit, making it
easier to learn quickly and adapt.
Policies and practices are tailored to each business unit.
Agility, flexibility, and priorities are focused on the individual business unit.

Cons:

There's a risk that decentralized COEs operate in isolation. As a result, they might
not share best practices and lessons learned outside of their business unit.
Collaboration with a centralized team may be informal and/or inconsistent.
Inconsistent policies are created and applied across business units.
It's difficult to scale a decentralized model.
There's potential rework to bring one or more decentralized COEs in alignment
with organizational-wide policies.
Larger business units with significant funding may have more resources available
to them, which may not serve cost optimization goals from an organizational-wide
perspective.

) Important

A highly centralized COE tends to be more authoritarian, while highly decentralized COEs tend to be more siloed. Each organization will need to weigh the pros and cons that apply to them to determine the best choice. For most organizations, the most effective approach tends to be the unified or federated structure, which bridges organizational boundaries.

Funding the COE


The COE may obtain its operating budget in multiple ways:

Cost center.
Profit center with project budget(s).
A combination of cost center and profit center.

When the COE operates as a cost center, it absorbs the operating costs. Generally, it
involves an approved annual budget. Sometimes this is called a push engagement
model.

When the COE operates as a profit center (for at least part of its budget), it could accept
projects throughout the year based on funding from other business units. Sometimes
this is called a pull engagement model.

Funding is important because it impacts the way the COE communicates and engages with the internal community. As the COE experiences more successes, it may receive more requests from business units for help. That's especially the case as awareness grows throughout the organization.

 Tip

The choice of funding model can determine how the COE actively grows its
influence and ability to help. The funding model can also have a big impact on
where authority resides and how decision-making works. Further, it impacts the
types of services a COE can offer, such as co-development projects and/or best
practices reviews. For more information, see the Mentoring and user enablement
article.

Some organizations cover the COE operating costs with chargebacks to business units based on their usage of Power BI. For a Power BI shared capacity, this could be based on the number of active users. For Premium capacity, chargebacks could be allocated based on which business units are using the capacity. Ideally, chargebacks are directly correlated to the business value gained.
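The arithmetic behind a usage-based chargeback can be simple. The following sketch allocates a monthly capacity cost across business units in proportion to their active users; the business unit names, user counts, and cost figure are hypothetical.

def allocate_chargebacks(monthly_cost, active_users):
    """Return each business unit's share of the cost, proportional to its active users."""
    total_users = sum(active_users.values())
    if total_users == 0:
        return {unit: 0.0 for unit in active_users}
    return {
        unit: round(monthly_cost * count / total_users, 2)
        for unit, count in active_users.items()
    }

if __name__ == "__main__":
    usage = {"Sales": 420, "Finance": 180, "Operations": 95}  # hypothetical active-user counts
    for unit, amount in allocate_chargebacks(5000.00, usage).items():
        print(f"{unit}: {amount:.2f}")

In practice, the active-user counts would come from the activity log or usage metrics rather than hard-coded values, and the allocation basis should match whatever measure the business units agree best reflects value.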

Considerations and key actions

Checklist - Considerations and key actions you can take to establish or improve your
Power BI COE.

" Define the scope of responsibilities for the COE: Ensure that you're clear on what
activities the COE can support. Once the scope of responsibilities is known, identify
the skills and competencies required to fulfill those responsibilities.
" Identify gaps in the ability to execute: Analyze whether the COE has the required
systems and infrastructure in place to meet its goals and scope of responsibilities.
" Determine the best COE structure: Identify which COE structure is most
appropriate (centralized, unified, federated, or decentralized). Verify that staffing,
roles and responsibilities, and appropriate organizational chart relationships (HR
reporting) are in place.
" Plan for future growth: If you're starting out with a centralized or decentralized
COE, consider how you will scale the COE over time by using the unified or
federated approach. Plan for any actions that you can take now that'll facilitate
future growth.
" Identify customers: Identify the internal customers, and any external customers, to
be served by the COE. Decide how the COE will generally engage with those
customers, whether it's a push model, pull model, or both models.
" Verify the funding model for the COE: Decide whether the COE is purely a cost
center with an operating budget, whether it will operate partially as a profit center,
and/or whether chargebacks to other business units will be required.
" Create a communication plan: Create you communications strategy to educate the
Power BI community about the services the COE offers, and how to engage with the
COE.
" Create goals and metrics: Determine how you'll measure effectiveness for the COE.
Create KPIs (key performance indicators) or OKRs (objectives and key results) to
validate that the COE consistently provides value to the user community.
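To make the last checklist item concrete, KPI tracking can start very simply. The sketch below compares a few hypothetical COE metrics against targets; the metric names, values, and targets are illustrative only.

def kpi_status(metrics, targets):
    """Return actual, target, and attainment (actual divided by target) for each KPI."""
    status = {}
    for name, target in targets.items():
        actual = metrics.get(name, 0)
        attainment = actual / target if target else 0.0
        status[name] = {"actual": actual, "target": target, "attainment": round(attainment, 2)}
    return status

if __name__ == "__main__":
    # Hypothetical quarterly figures a COE might track.
    metrics = {"office_hours_sessions": 22, "certified_datasets": 9, "support_tickets_resolved": 131}
    targets = {"office_hours_sessions": 24, "certified_datasets": 10, "support_tickets_resolved": 120}
    for name, result in kpi_status(metrics, targets).items():
        print(f"{name}: {result['actual']}/{result['target']} ({result['attainment']:.0%})")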

Maturity levels

The following maturity levels will help you assess the current state of your COE.

100: Initial
One or more COEs exist, or the activities are performed within the BI team or IT. There's no clarity on the specific goals or expectations for responsibilities. Requests for assistance from the COE are handled in an unplanned manner.

200: Repeatable
The COE is in place with a specific charter to mentor, guide, and educate self-service users. The COE seeks to maximize the benefits of self-service BI while reducing the risks. The goals, scope of responsibilities, staffing, structure, and funding model are established for the COE.

300: Defined
The COE operates with active involvement from all business units in a unified or federated mode.

400: Capable
The goals of the COE align with organizational goals, and they are reassessed regularly. The COE is well-known throughout the organization, and consistently proves its value to the internal user community.

500: Efficient
Regular reviews of KPIs or OKRs evaluate COE effectiveness in a measurable way. Agility and implementing continual improvements from lessons learned (including scaling out methods that work) are top priorities for the COE.

Next steps
In the next article in the Power BI adoption roadmap series, learn about implementing
governance guidelines, policies, and processes.

Also, consider reading about Microsoft's journey and experience with driving a data
culture. This article describes the importance of discipline at the core and flexibility at the
edge. It also shares Microsoft's views and experiences about the importance of
establishing a COE.
Power BI adoption roadmap:
Governance
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

Data governance is a broad and complex topic. This article introduces key concepts and
considerations. It identifies important actions to take when adopting Power BI, but it's
not a comprehensive reference for data governance.

As defined by the Data Governance Institute , data governance is "a system of decision
rights and accountabilities for information-related processes, executed according to
agreed-upon models which describe who can take what actions, with what information,
and when, under what circumstances, using what methods."

The term data governance is a misnomer. The primary focus for governance isn't on the data itself. The focus is on governing what users do with the data. Put another way: the true focus is on governing users' behavior to ensure organizational data is well-managed.

When focused on self-service business intelligence, the primary goals of governance are
to achieve the proper balance of:

User empowerment: Empower the internal user community to be productive and efficient, within requisite guardrails.
Regulatory compliance: Comply with the organization's industry, governmental,
and contractual regulations.
Internal requirements: Adhere to the organization's internal requirements.

The optimal balance between control and empowerment will differ between
organizations. It's also likely to differ among different business units within an
organization. With a platform like Power BI, you'll be most successful when you put as
much emphasis on user empowerment as on clarifying its practical usage within
established guardrails.
 Tip

Think of governance as a set of established guidelines and formalized policies. All governance guidelines and policies should align with your organizational data culture and adoption objectives. Governance is enacted on a day-to-day basis by your system oversight (administration) activities.

Governance strategy
When considering data governance in any organization, the best place to start is by
defining a governance strategy. By focusing first on the strategic goals for data
governance, all detailed decisions when implementing governance policies and
processes can be informed by the strategy. In turn, the governance strategy will be
defined by the organization's data culture.

Governance decisions are implemented with documented guidance, policies, and processes. Objectives for governance of a self-service BI platform, such as Power BI, include:

Empowering users throughout the organization to use data and make decisions,
within the defined boundaries.
Improving the user experience by providing clear and transparent guidance (with
minimal friction) on what actions are permitted, why, and how.
Ensuring that the data usage is appropriate for the needs of the business.
Ensuring that content ownership and stewardship responsibilities are clear. For
more information, see the Content ownership and management article.
Enhancing the consistency and standardization of working with data across
organizational boundaries.
Reducing risk of data leakage and misuse of data. For more information, see the information protection and data loss prevention series of articles.
Meeting regulatory, industry, and internal requirements for the proper use of data.

 Tip

A well-executed data governance strategy makes it easier for more users to work
with data. When governance is approached from the perspective of user
empowerment, users are more likely to follow the documented processes.
Accordingly, the users become a trusted partner too.
Governance success factors
Governance isn't well-received when it's enacted with top-down mandates that are
focused more on control than empowerment. Governing Power BI is most successful
when:

The most lightweight governance model that accomplishes required objectives is used.
Governance is approached on an iterative basis and doesn't significantly impede
productivity.
A bottom-up approach to formulating governance guidelines is used whenever
practical. The Center of Excellence (COE) and/or the data governance team
observes successful behaviors that are occurring within a business unit. The COE
then takes action to scale out to other areas of the organization.
Governance decisions are co-defined with input from different business units
before they're enacted. Although there are times when a specific directive is
necessary (particularly in heavily regulated industries), mandates should be the
exception rather than the rule.
Governance needs are balanced with flexibility and the ability to be productive.
Governance requirements can be satisfied as part of users' regular workflow,
making it easier for users to do the right thing in the right way with little friction.
The answer to new requests for data isn't "no" by default, but rather "yes and" with
clear, simple, transparent rules for what governance requirements are for data
access, usage, and sharing.
Users that need access to data have incentive to do so through normal channels,
complying with governance requirements, rather than circumventing them.
Governance decisions, policies, and requirements for users to follow are in
alignment with organizational data culture goals as well as other existing data
governance initiatives.
Decisions that affect what users can—and can't—do aren't made solely by a
system administrator.

Introducing governance to your organization


There are three primary timing methods that organizations use when introducing Power BI governance:

Method 1: Roll out Power BI first, then introduce governance. Power BI is made widely available to users in the organization as a new self-service BI tool. Then, at some time in the future, a governance effort begins. This method prioritizes agility.

Method 2: Full governance planning first, then roll out Power BI. Extensive governance planning occurs prior to permitting users to begin using Power BI. This method prioritizes control and stability.

Method 3: Iterative governance planning with rollouts of Power BI in stages. Just enough governance planning occurs initially. Then Power BI is iteratively rolled out in stages to individual teams while iterative governance enhancements occur. This method equally prioritizes agility and governance.

Choose method 1 when Power BI is already used for self-service scenarios, and you're
ready to start working in a more efficient manner.

Choose method 2 when your organization already has a well-established approach to


governance that can be readily expanded to include Power BI.

Choose method 3 when you want to have a balance of control and agility. This balanced approach is the best choice for most organizations and most scenarios.

Method 1: Roll out Power BI first


Method 1 prioritizes agility and speed. It allows users to quickly get started creating
solutions. This method occurs when Power BI has been made widely available to users in
the organization as a new self-service BI tool. Quick wins and some successes are
achieved. At some point in the future, a governance effort begins, usually to bring order
to an unacceptable level of chaos since the self-service user population didn't receive
sufficient guidance.

Pros:

Fastest to get started


Highly capable users can get things done quickly
Quick wins are achieved

Cons:

Higher effort to establish governance once Power BI is used prevalently


throughout the organization
Resistance from self-service users who are asked to change what they've been
doing
Self-service users need to figure out things on their own and use their best
judgment

See other possible cons in the Governance challenges section below.

Method 2: In-depth governance planning first


Method 2 prioritizes control and stability. It lies at the opposite end of the spectrum
from method 1. Method 2 involves doing extensive governance planning before rolling
out Power BI. This situation is most likely to occur when the implementation of Power BI
is led by IT. It's also likely to occur when the organization operates in a highly regulated
industry, or when an existing data governance board imposes significant prerequisites
and up-front requirements.

Pros:

More fully prepared to meet regulatory requirements


More fully prepared to support the user community

Cons:

Favors enterprise BI more than self-service BI


Slower to allow the user population to begin to get value and improve decision-
making
Encourages poor habits and workarounds when there's a significant delay in
allowing the use of data for decision-making

Method 3: Iterative governance with rollouts


Method 3 seeks a balance between agility and governance. It's an ideal scenario that
does just enough governance planning upfront. Frequent and continual governance
improvements iteratively occur over time alongside Power BI development projects that
deliver value.

Pros:

Puts equal priority on governance and user productivity


Emphasizes a learning as you go mentality
Encourages iterative releases to groups of users in stages

Cons:

Requires a high level of communication to be successful with agile governance practices
Requires additional discipline to keep documentation and training current
Introducing new governance guidelines and policies too often causes a certain
level of user disruption

For more information about up-front planning, see the Preparing to migrate to Power BI
article.

Governance challenges
If your organization has implemented Power BI without a governance approach or
strategic direction (as described above by method 1), there could be numerous
challenges requiring attention. Depending on the approach you've taken and your
current state, some of the following challenges may be applicable to your organization.

Strategy challenges
Lack of a cohesive data governance strategy that aligns with the business strategy
Lack of executive support for governing data as a strategic asset
Insufficient adoption planning for advancing adoption and the maturity level of BI
and analytics

People challenges
Lack of aligned priorities between centralized teams and business units
Lack of identified champions with sufficient expertise and enthusiasm throughout
the business units to advance organizational adoption objectives
Lack of awareness of self-service best practices
Resistance to following newly introduced governance guidelines and policies
Duplicate effort spent across business units
Lack of clear accountability, roles, and responsibilities

Process challenges
Lack of clearly defined processes resulting in chaos and inconsistencies
Lack of standardization or repeatability
Insufficient ability to communicate and share lessons learned
Lack of documentation and over-reliance on tribal knowledge
Inability to comply with security and privacy requirements

Data quality and data management challenges


Sprawl of data and reports
Inaccurate, incomplete, or outdated data
Lack of trust in the data, especially for self-service content
Inconsistent reports produced without data validation
Valuable data not used or difficult to access
Fragmented, siloed, and duplicated datasets
Lack of data catalog, inventory, glossary, or lineage
Unclear data ownership and stewardship

Skills and data literacy challenges


Varying levels of ability to interpret, create, and communicate with data effectively
Varying levels of technical skillsets and skill gaps
Lack of ability to confidently manage data diversity and volume
Underestimating the level of complexity for BI solution development and
management throughout its entire lifecycle
Short tenure with continual staff transfers and turnover
Coping with the speed of change for cloud services

 Tip

Identifying your current challenges—as well as your strengths—is essential to do proper governance planning. There's no single straightforward solution to the challenges listed above. Each organization needs to find the right balance and approach that solves the challenges that are most important to them. The challenges presented above will help you identify how they may affect your organization, so you can start thinking about what the right solution is for your circumstances.

Governance planning
Some organizations have implemented Power BI without a governance approach or
clear strategic direction (as described above by method 1). In this case, the effort to
begin governance planning can be daunting.

If a formal governance body doesn't currently exist in your organization, then the focus
of your governance planning and implementation efforts will be broader. If, however,
there's an existing data governance board in the organization, then your focus is
primarily to integrate with existing practices and customize them to accommodate the
objectives for self-service BI and enterprise BI.

) Important

Governance is a big undertaking, and it's never completely done. Relentlessly prioritizing and iterating on improvements will make the scope more manageable. If you track your progress and accomplishments each week and each month, you'll be amazed at the impact over time. The maturity levels at the end of each article in this series can help you to assess where you are currently.

Some potential governance planning activities and outputs that you may find valuable
are described next.

Strategy
Key activities:

Assess current state of data culture, adoption, and BI practices


Conduct a series of information gathering sessions to define the desired future
state, strategic vision, priorities, and objectives for data culture, adoption, and BI
practices. Be sure to include adoption goals for Power BI as suggested in the Power
BI adoption framework series . They're a useful approach if you don't already
have a structured method for information gathering.
Validate the focus and scope of the governance program
Identify existing bottom-up initiatives in progress
Identify immediate pain points, issues, and risks
Educate senior leadership about governance, and ensure executive support is
sufficient to sustain and grow the program
Clarify where Power BI fits in to the overall data and analytics strategy for the
organization
Assess internal factors such as organizational readiness, maturity levels, and key
challenges
Assess external factors such as risk, exposure, regulatory, and legal requirements—
including regional differences

Key output:

Business case with cost/benefit analysis


Approved governance objectives, focus, and priorities that are in alignment with
high-level business objectives
Plan for short-term goals and priorities. These are quick wins
Plan for long-term and deferred goals and priorities
Success criteria and measurable key performance indicators (KPIs)
Known risks documented with a mitigation plan
Plan for meeting industry, governmental, contractual, and regulatory requirements
that impact BI and analytics in the organization
Funding plan

People
Key activities:

Establish a governance board and identify key stakeholders


Determine focus, scope, and a set of responsibilities for the governance board
Establish a COE
Determine focus, scope, and a set of responsibilities for COE
Define roles and responsibilities
Confirm who has decision-making, approval, and veto authority

Key output:

Charter for the governance board


Charter for the COE
Staffing plan
Roles and responsibilities
Accountability and decision-making matrix
Communication plan
Issue management plan
Policies and processes
Key activities:

Analyze immediate pain points, issues, risks, and areas to improve the user
experience
Prioritize data policies to be addressed by order of importance
Identify existing processes in place that work well and can be formalized
Determine how new data policies will be socialized
Decide to what extent data policies may differ or be customized for different
groups

Key output:

Process for how data policies and documentation will be defined, approved,
communicated, and maintained
Plan for requesting valid exceptions and departures from documented policies

Project management
The implementation of the governance program should be planned and managed as a
series of projects.

Key activities:

Establish a timeline with priorities and milestones


Identify related initiatives and dependencies
Identify and coordinate with existing bottom-up initiatives
Create an iterative project plan that's aligned with high-level prioritization
Obtain budget approval and funding
Establish a tangible way to track progress

Key output:

Project plan with iterations, dependencies, and sequencing


Cadence for retrospectives with a focus on continual improvements

) Important

The scope of activities listed above that will be useful to take on varies considerably between organizations. If your organization doesn't have existing processes and workflows for creating these types of outputs, refer to the guidance found in the Roadmap conclusion article for some helpful resources.
Governance policies

Decision criteria
All governance decisions should be in alignment with the established goals for
organizational adoption. Once the strategy is clear, more tactical governance decisions
will need to be made which affect the day-to-day activities of the self-service user
community. These types of tactical decisions correlate directly to the data policies that
get created.

How we go about making governance decisions depends on:

Who owns and manages the BI content? The Content ownership and
management article introduced three types of strategies: business-led self-service
BI, managed self-service BI, and enterprise BI. Who owns and manages the content
has a significant impact on governance requirements.
What is the scope for delivery of the BI content? The Content delivery scope
article introduced four scopes for delivery of content: personal BI, team BI,
departmental BI, and enterprise BI. The scope of delivery has a considerable impact
on governance requirements.
What is the data subject area? The data itself, including its sensitivity level, is an
important factor. Some data domains inherently require tighter controls. For
instance, personally identifiable information (PII), or data subject to regulations,
should be subject to stricter governance requirements than less sensitive data.
Is the data, and/or the BI solution, considered critical? If you can't make an
informed decision easily without this data, you're dealing with critical data
elements. Certain reports and apps may be deemed critical because they meet a
set of predefined criteria. For instance, the content is delivered to executives.
Predefined criteria for what's considered critical helps everyone have clear
expectations. Critical data is usually subject to stricter governance requirements.

 Tip

Different combinations of the above four criteria will result in different governance
requirements for Power BI content.

Key Power BI governance decisions


As you explore your goals and objectives and pursue more tactical data governance
decisions as described above, it will be important to determine what the highest
priorities are. Deciding where to focus your efforts can be challenging.

The following list includes items that you may choose to prioritize when introducing
governance for Power BI.

Recommendations and requirements for content ownership and management


Recommendations and requirements for content delivery scope
Recommendations and requirements for content distribution and sharing with
colleagues, as well as for external users, such as customers, partners, or vendors
How users are permitted to work with regulated data and highly sensitive data
Allowed use of unverified data sources that are unknown to IT
When manually maintained data sources, such as Excel or flat files, are permitted
How to manage workspaces effectively
Who is allowed to be a Power BI administrator
Security, privacy, and data protection requirements, and allowed actions for
content assigned to each sensitivity label
Allowed or encouraged use of personal gateways
Allowed or encouraged use of self-service purchasing of user licenses
Requirements for who may certify datasets, as well as requirements that must be
met
Application lifecycle management for managing content through its entire
lifecycle, including development, test, and production stages
Additional requirements applicable to critical content, such as data quality
verifications and documentation
Requirements to use standardized master data and common data definitions to
improve consistency across datasets and reports
Recommendations and requirements for use of external tools by advanced content
creators

If you don't make governance decisions and communicate them well, users will use their
own judgment for how things should work—and that often results in inconsistent
approaches to common tasks.

Although not every governance decision needs to be made upfront, it's important that
you identify the areas of greatest risk in your organization. Then, incrementally
implement governance policies and processes that will deliver the most impact.

Data policies
A data policy is a document that defines what users can and can't do. You may call it
something different, but the goal remains the same: when decisions—such as those
discussed in the previous section—are made, they're documented for use and reference
by the community of users.

A data policy should be as short as possible. That way, it's easy for people to understand
what is being asked of them.

A data policy should include:

Policy name, purpose, description, and details


Specific responsibilities
Scope of the policy (organization-wide versus departmental-specific)
Audience for the policy
Policy owner, approver, and contact
How to request an exception
How the policy will be audited and enforced
Regulatory or legal requirements met by the policy
Reference to terminology definitions
Reference to any related guidelines or policies
Effective date, last revision date, and change log

7 Note

Locate, or link to, data policies from your centralized portal.

Here are three common data policy examples you may choose to prioritize:

Data ownership policy: Specifies when an owner is required for a dataset, and what the data owner's responsibilities include, such as: supporting colleagues who view the content, maintaining appropriate confidentiality and security, and ensuring compliance.

Data certification (endorsement) policy: Specifies the process that is followed to certify a dataset. Requirements may include activities such as: data accuracy validation, data source and lineage review, technical review of the data model, security review, and documentation review.

Data classification and protection policy: Specifies activities that are allowed and not allowed per classification (sensitivity level). It should specify activities such as: allowed sharing with external users (with or without NDA), encryption requirements, and ability to download the dataset. Sometimes, it's also called a data handling policy or a data usage policy. For more information, see the Information protection for Power BI article.

U Caution
Having a lot of documentation can create a false sense that everything is under control, which can lead to complacency. The level of engagement that the COE has with the user community is one way to improve the chances that governance guidelines and policies are consistently followed. Auditing and monitoring activities are also important.

Scope of policies
Governance decisions will rarely be one-size-fits-all across the entire organization. When
practical, it's wise to start with standardized policies, and then implement exceptions as
needed. Having a clearly defined strategy for how policies will be handled for
centralized and decentralized teams will make it much easier to determine how to
handle exceptions.

Pros of organization-wide policies:

Much easier to manage and maintain


Greater consistency
Encompasses more use cases
Fewer policies overall

Cons of organization-wide policies:

Inflexible
Less autonomy and empowerment

Pros of departmental-scope policies:

Expectations are clearer when tailored to a specific group


Customizable and flexible

Cons of departmental-scope policies:

More work to manage


More policies that are siloed
Potential for conflicting information
Difficult to scale more broadly throughout the organization

 Tip

Finding the right balance of standardization and customization for supporting self-
service BI across the organization can be challenging. However, by starting with
organizational policies and mindfully watching for exceptions, you can make
meaningful progress quickly.

Staffing and accountability


The organizational structure for data governance varies substantially between
organizations. In larger organizations there may be a data governance office with
dedicated staff. Some organizations have a data governance board, council, or steering
committee with assigned members coming from different business units. Depending on
the extent of the data governance body within the organization, there may be an
executive team separate from a functional team of people.

) Important

Regardless of how the governance body is structured, it's important that there's a
person or group with sufficient influence over data governance decisions. This
person should have authority to enforce those decisions across organizational
boundaries.

Checks and balances


Governance accountability is about checks and balances.
Starting at the bottom, the levels include:

Operational - Business units: Level 1 is the foundation of a well-governed system, which includes users within the business units performing their work. Self-service BI creators have a lot of responsibilities related to authoring, publishing, sharing, security, and data quality. Self-service BI consumers also have responsibilities for the proper use of data.

Tactical - Supporting teams: Level 2 includes several groups that support the efforts of the users in the business units. Supporting teams include the COE, enterprise BI, and the data governance office, as well as other ancillary teams. Ancillary teams can include IT, security, HR, and legal. A change control board is included here as well.

Tactical - Audit and compliance: Level 3 includes internal audit, risk management, and compliance teams. These teams provide guidance to levels 1 and 2. They also provide enforcement when necessary.

Strategic - Executive sponsor and steering committee: The top level includes executive-level oversight of strategy and priorities. This level handles any escalated issues that couldn't be solved at lower levels. Therefore, it's important to have people with sufficient authority to be able to make decisions when necessary.

) Important

Everyone has a responsibility to adhere to policies for ensuring that organizational data is secure, protected, and well-managed as an organizational asset. Sometimes this is expressed as "everyone is a data steward." To make this a reality, start with the users in the business units (level 1 described above) as the foundation.

Roles and responsibilities


Once you have a sense for your governance strategy, roles and responsibilities should
be defined to establish clear expectations.

Governance team structure, roles (including terminology), and responsibilities vary widely among organizations. Very generalized roles are described below. In some cases, the same person may serve multiple roles. For instance, the Chief Data Officer (CDO) may also be the executive sponsor.

Chief Data Officer or Chief Analytics Officer: Defines the strategy for use of data as an enterprise asset. Oversees enterprise-wide governance guidelines and policies.

Data governance board: Steering committee with members from each business unit who, as domain owners, are empowered to make enterprise governance decisions. They make decisions on behalf of the business unit and in the best interest of the organization. Provides approvals, decisions, priorities, and direction to the enterprise data governance team and working committees.

Data governance team: Creates governance policies, standards, and processes. Provides enterprise-wide oversight and optimization of data integrity, trustworthiness, privacy, and usability. Collaborates with the COE to provide governance education, support, and mentoring to data owners and content creators.

Data governance working committees: Temporary or permanent teams that focus on individual governance topics, such as security or data quality.

Change management board: Coordinates the requirements, processes, approvals, and scheduling for release management processes with the objective of reducing risk and minimizing the impact of changes to critical applications.

Project management office: Manages individual governance projects and the ongoing data governance program.

Power BI executive sponsor: Promotes adoption and the successful use of Power BI. Actively ensures that Power BI decisions are consistently aligned with business objectives, guiding principles, and policies across organizational boundaries. For more information, see the Executive sponsorship article.

Center of Excellence: Mentors the community of creators and consumers to promote the effective use of Power BI for decision-making. Provides cross-departmental coordination of Power BI activities to improve practices, increase consistency, and reduce inefficiencies. For more information, see the Center of Excellence article.

Power BI champions: A subset of content creators found within the business units who help advance the adoption of Power BI. They contribute to data culture growth by advocating the use of best practices and actively assisting colleagues. For more information, see the Community of practice article.

Power BI administrators: Day-to-day system oversight responsibilities to support the internal processes, tools, and people. Handles monitoring, auditing, and management. For more information, see the System oversight article.

Information technology: Provides occasional assistance to Power BI administrators for services related to Power BI, such as Azure Active Directory, Microsoft 365, Teams, SharePoint, or OneDrive.

Risk management: Reviews and assesses data sharing and security risks. Defines ethical data policies and standards. Communicates regulatory and legal requirements.

Internal audit: Auditing of compliance with regulatory and internal requirements.

Data steward: Collaborates with the governance committee and/or COE to ensure that organizational data has acceptable data quality levels.

All BI creators and consumers: Adhere to policies for ensuring that data is secure, protected, and well-managed as an organizational asset.

 Tip

Name a backup for each person in key roles, for example, members of the data
governance board. In their absence, the backup person can attend meetings and
make time-sensitive decisions when necessary.

Considerations and key actions

Checklist - Considerations and key actions you can take to establish or strengthen your
governance initiatives.

" Align goals and guiding principles: Confirm that the high-level goals and guiding
principles of the data culture goals are clearly documented and communicated.
Ensure that alignment exists for any new governance guidelines or policies.
" Understand what's currently happening: Ensure that you have a deep
understanding of how Power BI is currently used for self-service BI and enterprise
BI. Document opportunities for improvement. Also, document strengths and good
practices that would be helpful to scale out more broadly.
" Prioritize new governance guidelines and policies: For prioritizing which new
guidelines or policies to create, select an important pain point, high priority need,
or known risk for a data domain. It should have significant benefit and can be
achieved with a feasible level of effort. When you implement your first governance
guidelines, choose something users are likely to support because the change is low
impact, or because they are sufficiently motivated to make a change.
" Create a schedule to review policies: Determine the cadence for how often data
policies are reevaluated. Reassess and adjust when needs change.
" Decide how to handle exceptions: Determine how conflicts, issues, and requests
for exceptions to documented policies will be handled.
" Understand existing data assets: Confirm that you understand what critical data
assets exist. Create an inventory of ownership and lineage, if necessary. Keep in
mind that you can't govern what you don't know about.
" Verify executive sponsorship: Confirm that you have support and sufficient
attention from your executive sponsor, as well as from business unit leaders.
" Prepare an action plan: Include the following key items:
Initial priorities: Select one data domain or business unit at a time.
Timeline: Work in iterations long enough to accomplish meaningful progress, yet
short enough to periodically adjust.
Quick wins: Focus on tangible, tactical, and incremental progress.
Success metrics: Create measurable metrics to evaluate progress.

Maturity levels

The following maturity levels will help you assess the current state of your governance
initiatives.

100: Initial - Due to a lack of governance planning, the good data management and informal governance practices that are occurring are overly reliant on the judgment and experience level of individuals. There's a significant reliance on undocumented tribal knowledge.

200: Repeatable - Some areas of the organization have made a purposeful effort to standardize, improve, and document their data management and governance practices. An initial governance approach exists. Incremental progress is being made.

300: Defined - A complete governance strategy with focus, objectives, and priorities is enacted and broadly communicated. Specific governance guidelines and policies are implemented for the top few priorities (pain points or opportunities). They're actively and consistently followed by users. Roles and responsibilities are clearly defined and documented.

400: Capable - All Power BI governance priorities align with organizational goals and business objectives. Goals are reassessed regularly. Processes exist to customize policies for decentralized business units, or to handle valid exceptions to standard governance policies. It's clear where Power BI fits into the overall BI strategy for the organization. Power BI activity log and API data is actively analyzed to monitor and audit Power BI activities. Proactive action is taken based on the data.

500: Efficient - Regular reviews of KPIs or OKRs evaluate measurable governance goals. Iterative, continual progress is a priority. Agility and implementing continual improvements from lessons learned (including scaling out methods that work) are top priorities for the COE. Power BI activity log and API data is actively used to inform and improve adoption and governance efforts.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about mentoring
and user enablement.
Power BI adoption roadmap: Mentoring
and user enablement
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

A critical objective for adoption efforts is to enable users to accomplish as much as they
can within the requisite guardrails established by governance guidelines and policies.
For this reason, the act of mentoring users is one of the most important responsibilities
of the Center of Excellence (COE), and it has a direct influence on how user adoption
occurs. For more information about user adoption, see the Power BI adoption maturity
levels article.

Skills mentoring
Mentoring and helping users in the Power BI community become more effective can
take on various forms, such as:

Office hours
Co-development projects
Best practices reviews
Extended support

Office hours
Office hours are a form of ongoing community engagement managed by the COE. As
the name implies, office hours are times of regularly scheduled availability where
members of the community can engage with experts from the COE to receive assistance
with minimal process overhead. Since office hours are group-based, Power BI
champions and other members of the community can also pitch in to help solve an
issue if a topic is in their area of expertise.

Office hours are a very popular and productive activity in many organizations. Some
organizations call them drop-in hours or even a fun name such as Power Hour. The
primary goal is usually to get questions answered, solve problems, and remove blockers.
Office hours can also be used as a platform for the user community to share ideas,
suggestions, and even complaints.

The COE publishes the times for regular office hours when one or more COE members
are available. Ideally, office hours are held on a regular and frequent basis. For instance,
it could be every Tuesday and Thursday. Consider offering different time slots or
rotating times if you have a global workforce.

 Tip

One option is to set specific office hours each week. However, people may or may
not show up, so that can end up being inefficient. Alternatively, consider leveraging
Microsoft Bookings to schedule office hours. It shows the blocks of time when
each COE expert is available, with Outlook integration ensuring availability is up to
date.

Office hours are an excellent user enablement approach because:

Content creators and the COE actively collaborate to answer questions and solve
problems together.
Real work is accomplished while learning and problem solving.
Others may observe, learn, and participate.
Individual groups can head to a breakout room to solve a specific problem.

Office hours benefit the COE as well because:

They're a great way for the COE to identify champions or people with specific skills
that the COE didn't previously know about.
The COE can learn what people throughout the organization are struggling with. It
helps inform whether additional resources, documentation, or training might be
required.

 Tip

It's common for some tough issues to come up during office hours that can't be solved quickly, such as getting a complex DAX calculation to work. Set clear expectations for what's in scope for office hours, and whether there's any commitment for follow-up.

Co-development projects
One way the COE can provide mentoring services is during a co-development project. A
co-development project is a form of assistance offered by the COE where a user or
business unit takes advantage of the technical expertise of the COE to solve business
problems with data. Co-development involves stakeholders from the business unit and
the COE working in partnership to build a high-quality self-service BI solution that the
business stakeholders couldn't deliver independently.

The goal of co-development is to help the business unit develop expertise over time
while also delivering value. For example, the sales team has a pressing need to develop
a new set of commission reports, but the sales team doesn't yet have the knowledge to
complete it on their own.

A co-development project forms a partnership between the business unit and the COE.
In this arrangement, the business unit is fully invested, deeply involved, and assumes
ownership for the project.

Time involvement from the COE reduces over time until the business unit gains expertise
and becomes self-reliant.

The active involvement shown in the above diagram changes over time, as follows:

Business unit: 50% initially, up to 75%, finally at 98%-100%.


COE: 50% initially, down to 25%, finally at 0%-2%.

Ideally, the period for the gradual reduction in involvement is identified up-front in the
project. This way, both the business unit and the COE can sufficiently plan the timeline
and staffing.

Co-development projects can deliver significant short- and long-term benefits. In the
short term, the involvement from the COE can often result in a better-designed and
better-performing solution that follows best practices and aligns with organizational
standards. In the long term, co-development helps increase the knowledge and
capabilities of the business stakeholder, making them more self-sufficient, and more
confident to deliver quality self-service BI solutions in the future.

) Important

Essentially, a co-development project helps less experienced users learn the right
way to do things. It reduces risk that refactoring might be needed later, and it
increases the ability for a solution to scale and grow over time.

Best practices reviews


The COE may also offer best practices reviews. A best practices review can be extremely
helpful for content creators who would like to validate their work. They might also be
known as advisory services, internal consulting time, or technical reviews.

During a review, an expert from the COE evaluates self-service Power BI content
developed by a member of the community and identifies areas of risk or opportunities
for improvement. The following bullet list presents some examples of when a best
practices review could be beneficial:

The sales team has an app that they intend to distribute to thousands of users
throughout the organization. Since the app represents high priority content
distributed to a large audience, they'd like to have it certified. The standard
process to certify content includes a best practices review.
The finance team would like to assign a workspace to Premium capacity. A review
of the workspace content is required to ensure sound development practices were
followed. This type of review is common when the capacity is shared among
multiple business units. (A review may not be required when the capacity is
assigned to only one business unit.)
The operations team is creating a new solution they expect to be widely used. They
would like to request a best practices review before it goes into user acceptance
testing (UAT), or before a request is submitted to the change management board.

A best practices review is most often focused on the dataset design, though the review
can encompass all types of Power BI items (dataflows, datasets, reports, or apps).

Before content is deployed to the Power BI service, a best practices review may verify
that:

Data sources used are appropriate and query folding is invoked whenever possible.
Connectivity mode and storage mode choices (for example, import, live connection, DirectQuery, or composite model frameworks) are appropriate.
Locations for data sources, like flat files, and for original Power BI Desktop files are suitable (preferably a backed-up location with versioning and appropriate security, such as Teams files or a SharePoint shared library).
Data preparation steps are clean, orderly, and efficient.
Datasets are well-designed, clean, and understandable (a star schema design is
highly recommended).
Relationships are configured correctly.
DAX calculations use efficient coding practices, such as variables and the DIVIDE function, particularly if the data model is large (see the sketch after this list).
The dataset size is within a reasonable limit and data reduction techniques are
applied.
Row-level security (RLS) appropriately enforces data permissions.
Data is accurate and has been validated against the authoritative source(s).
Approved common definitions and terminology are used.
Good data visualization practices are followed, including designing for
accessibility.
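As an illustration of the kind of pattern a reviewer might look for, the following measure uses variables and the DIVIDE function to avoid repeating expressions and to handle division by zero safely. It's a minimal sketch only; the Sales table and its SalesAmount and TotalCost columns are hypothetical names rather than part of any specific organizational model.

    Profit Margin % =
    VAR TotalSales = SUM ( Sales[SalesAmount] )    // evaluate each aggregation once
    VAR TotalCost = SUM ( Sales[TotalCost] )
    RETURN
        DIVIDE ( TotalSales - TotalCost, TotalSales )    // returns BLANK instead of an error when TotalSales is 0

Because the intermediate results are stored in variables, each aggregation is evaluated only once, which tends to matter most in large data models.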

Once the content has been deployed to the Power BI service, the best practices review
isn't necessarily complete yet. Completing the remainder of the review may also include
items such as:

The target workspace is suitable for the content.


Workspace security roles are appropriate for the content.
Other permissions (app audience permissions, Build permission, use of the
individual item sharing feature) are correctly and appropriately configured.
Contacts are identified, and correctly correlate to the owners of the content.
Sensitivity labels are correctly assigned.
Power BI item endorsement (certified or promoted) is appropriate.
Data refresh is configured correctly, failure notifications include the proper users, and the appropriate data gateway in standard mode is used (if applicable).
All best practice rules are followed and, preferably, are automated with the community tool Best Practices Analyzer for maximum efficiency and productivity.

Extended support
From time to time, the COE may get involved with complex issues escalated from the
help desk. For more information, see the User support article.
7 Note

Offering mentoring services might be a culture shift for your organization. Your
reaction might be that users don't usually ask for help with a tool like Excel, so why
would they with Power BI? The answer lies in the fact that Power BI is an
extraordinarily powerful tool, providing data preparation and data modeling
capabilities in addition to data visualization. The complexity of the tool inherently
means that there's a significant learning curve to develop mastery. Having the
ability to aid and enable users can significantly improve their skills and increase the
quality of their solutions—it reduces risks too.

Centralized portal
A single centralized portal, or hub, is where the user community can find:

Access to the community Q&A forum.


Announcements of interest to the community, such as new features and release
plan updates.
Schedules and registration links for office hours, lunch and learns, training
sessions, and user group meetings.
Announcements of key changes to datasets and change log (if appropriate).
How to request help or support.
Training materials.
Documentation, onboarding materials, and frequently asked questions (FAQ).
Governance guidance and approaches recommended by the COE.
Templates.
Recordings of knowledge sharing sessions.
Entry points for accessing managed processes, such as license acquisition, access
requests, and gateway configuration.

 Tip

In general, only 10%-20% of your community will go out of their way to actively
seek out training and educational information. These types of people might
naturally evolve to become your Power BI champions. Everyone else is usually just
trying to get the job done as quickly as possible because their time, focus, and
energy are needed elsewhere. Therefore, it's important to make information easy
for your community users to find.
The goal is to consistently direct users in the community to the centralized portal to find
information. The corresponding obligation for the COE is to ensure that the information
users need is available in the centralized portal. Keeping the portal updated requires
discipline when everyone is busy.

In larger organizations, it may be difficult to implement one single centralized portal.


When it's not practical to consolidate into a single portal, a centralized hub can serve as
an aggregator, which contains links to the other locations.

) Important

Although saving time finding information is important, the goal of a centralized


portal is more than that. It's about making information readily available to help
your user community do the right thing. They should be able to find information
during their normal course of work, with as little friction as possible. Until it's easier
to complete a task within the guardrails established by the COE and data
governance team, some users will continue to complete their tasks by
circumventing policies that are put in place. The recommended path must become
the path of least resistance. Having a centralized portal can help achieve this goal.

It takes time for community users to think of the centralized portal as their natural first
stop for finding information. It takes consistent redirection to the portal to change
habits. Sending someone a link to an original document location in the portal builds
better habits than, for instance, including the answer in an email response. It's the same
challenge described in the User support article.

Training
A key factor for successfully enabling users in a Power BI community is training. It's
important that the right training resources are readily available and easily discoverable.
While some users are so enthusiastic about Power BI that they'll find information and figure things out on their own, that's not true for most of the user community.

Making sure your community users have access to the training resources they need to
be successful doesn't mean that you need to develop your own training content.
Developing training content is often counterproductive due to the rapidly evolving
nature of the product. Fortunately, an abundance of training resources is available in the
worldwide community. A curated set of links goes a long way to help users organize and
focus their training efforts, especially for tool training, which focuses on the technology.
All external links should be validated by the COE for accuracy and credibility. It's a key
opportunity for the COE to add value because COE stakeholders are in an ideal position
to understand the learning needs of the community, and to identify and locate trusted
sources of quality learning materials.

You'll find the greatest return on investment by creating custom training materials for organization-specific processes, while relying on content produced by others for everything else.
topics like how to find documentation, getting help, and interacting with the
community.

 Tip

One of the goals of training is to help people learn new skills while helping them
avoid bad habits. It can be a balancing act. For instance, you don't want to
overwhelm people by adding in a lot of complexity and friction to a beginner-level
class for report creators. However, it's a great investment to make newer content
creators aware of things that could otherwise take them a while to figure out. An
ideal example is teaching the ability to use a live connection to report from an existing dataset. By teaching this concept at the earliest logical time, you can save a less experienced creator from thinking they always need one dataset for every report (and encourage the good habit of reusing existing datasets across reports).

Some larger organizations experience continual employee transfers and turnover. Such
frequent change results in an increased need for a repeatable set of training resources.

Training resources and approaches


There are many training approaches because people learn in different ways. If you can
monitor and measure usage of your training materials, you'll learn over time what works
best. Some training might be delivered more formally, such as classroom training with
hands-on labs. Other types of training are less formal, such as:

Lunch and learn presentations


Short how-to videos targeted to a specific goal
Curated set of online resources
Internal user group presentations
One-hour, one-week, or one-month challenges
Hackathon-style events

The advantages of encouraging knowledge sharing among colleagues are described in


the Community of practice article.
 Tip

Whenever practical, learning should be correlated with building something


meaningful and realistic. However, simple demo data does have value during a
training course. It allows a learner to focus on how to use the technology rather
than the data itself. After completion of introductory session(s), consider offering a
bring your own data type of session. These types of sessions encourage the learner
to apply their new technical skills to an actual business problem. Try to include
multiple facilitators from the COE during this type of follow-up session so questions
can be answered quickly.

The types of users you may target for training include:

Content consumers
Report creators
Data creators (datasets and dataflows)
Content owners, subject matter experts, and workspace administrators
Satellite COE members and the champions network
Power BI administrators

) Important

Each type of user represents a different audience that has different training needs.
The COE will need to identify how best to meet the needs of each audience. For
instance, one audience might find a standard introductory Power BI Desktop class
overwhelming, whereas another will want more challenging information with depth
and detail. If you have a diverse population of Power BI content creators, consider
creating personas and tailoring the experience to an extent that's practical.

The completion of training can be a leading indicator for success with user adoption.
Some organizations grant badges, like blue belt or black belt, as people progress
through the training programs.

Give some consideration to how you want to handle users at various stages of user
adoption. Training to onboard new users (sometimes referred to as training day zero)
and for less experienced users is very different from training for more experienced users.

How the COE invests its time in creating and curating training materials will change over
time as adoption and maturity grows. You may also find over time that some community
champions want to run their own tailored set of training classes within their functional
business unit.
Sources for trusted Power BI training content
A curated set of online resources is valuable to help community members focus and
direct their efforts on what's important. Some publicly available training resources you
might find helpful include:

Microsoft Learn training for Power BI


Power BI courses and "in a day" training materials
LinkedIn Learning
Virtual workshops and training

Consider using Microsoft Viva Learning , which is integrated into Microsoft Teams. It
includes content from sources such as Microsoft Learn and LinkedIn Learning . Custom
content produced by your organization can be included as well.

In addition to Microsoft content and custom content produced by your organization,


you may choose to provide your user community with a curated set of recommended
links to trusted online sources. There's a wide array of videos, blogs, and articles
produced by the worldwide community. The community comprises Power BI experts, Microsoft Most Valuable Professionals (MVPs), and enthusiasts. Providing a curated learning path that contains specific, reputable, current, and high-quality resources will provide the most value to your user community.

If you do make the investment to create custom in-house training, consider creating
short, targeted content that focuses on solving one specific problem. It makes the
training easier to find and consume. It's also easier to maintain and update over time.

 Tip

The Help and Support menu in the Power BI service is customizable. Once your
centralized location for training documentation is operational, update the tenant
setting in the Admin portal with the link. The link can then be accessed from the menu
when users select the Get Help option. Also, be sure to teach users about the Help
ribbon tab in Power BI Desktop. It includes links to guided learning, training videos,
documentation, and more.

Documentation
Concise, well-written documentation can be a significant help for users trying to get
things done. Your needs for documentation, and how it's delivered, will depend on how
Power BI is managed in your organization. For more information, see the Content
ownership and management article.

Certain aspects of Power BI tend to be managed by a centralized team, such as the COE.
The following types of documentation are helpful in these situations:

How to request a Power BI license (and whether there are requirements for
manager approval)
How to request a new Premium capacity
How to request a new workspace
How to request a workspace be added to Premium capacity
How to request access to a gateway data source
How to request software installation

 Tip

For certain activities that are repeated over and over, consider automating them
using Power Apps and Power Automate. In this case, your documentation will also
include how to access and use the Power Platform functionality.

Other aspects of Power BI can be managed by self-service users, decentralized teams, or


by a centralized team. The following types of documentation might differ based on who
owns and manages the content:

How to request a new report


How to request a report enhancement
How to request access to a dataset
How to request a dataset enhancement

 Tip

When planning for a centralized portal, as described earlier in this article, plan how
to handle situations when guidance or governance policies need to be customized
for one or more business units.

There are also going to be some governance decisions that have been made and should
be documented, such as:

How to request content be certified


What are the approved file storage locations
What are the data retention and purge requirements
What are the requirements for handling sensitive data and personally identifiable
information (PII)

Documentation should be located in your centralized portal, which is a searchable


location where, preferably, users already work. Either Teams or SharePoint works very well. Creating documentation in wiki pages or in documents can work equally well, provided that the content is well organized and easy to find. Shorter documents
that focus on one topic are usually easier to consume than long, comprehensive
documents.

) Important

One of the most helpful pieces of documentation you can publish for the
community is a description of the tenant settings, and the group memberships
required for each tenant setting. Users read about features and functionality online,
and sometimes find that it doesn't work for them. When they are able to quickly
look up your organization's tenant settings, it can save them from becoming
frustrated and attempting workarounds. Effective documentation can reduce the
number of help desk tickets that are submitted. It can also reduce the number of
people who need to be assigned the Power BI administrator role (who might have
this role solely for the purpose of viewing settings).

Over time, you may choose to allow some documentation to be maintained by the
community if you have willing volunteers. In this case, you may want to introduce an
approval process for changes.

When you see questions repeatedly arise in the Q&A forum (as described in the User
support article), during office hours, or during lunch and learns, it's a great indicator that
creating new documentation may be appropriate. When the documentation exists, it
allows colleagues to reference it when needed. It contributes to user enablement and a
self-sustaining community.

 Tip

When creating custom documentation or training materials, reference existing


Microsoft sites using links when possible. Since Power BI is in a continual state of
evolution, it will reduce the level of documentation maintenance needed over time.

Templates
A Power BI template is a .pbit file. It can be provided as a starting point for content
creators. It's the same as a .pbix file, which can contain queries, a data model, and a
report, but with one exception: the template file doesn't contain any data. Therefore, it's
a smaller file that can be shared with the community, and it doesn't present a risk of
inappropriately sharing data.

Providing Power BI template files for your community is a great way to:

Promote consistency.
Reduce learning curve.
Show good examples and best practices.
Increase efficiency.

Power BI template files can improve efficiency and help people learn during the normal
course of their work. A few ways that template files are helpful include:

Reports can use examples of good visualization practices


Reports can incorporate organizational branding and design standards
Datasets can include the structure for commonly used tables, like a date table
Helpful DAX calculations can be included, like a year-over-year (YoY) calculation (see the sketch after this list)
Common parameters can be included, like a data source connection string
An example of report and/or dataset documentation can be included
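For example, a template might ship with a ready-made year-over-year measure similar to the following sketch. The Sales Amount measure, the Sales table, and the 'Date' table are hypothetical placeholders for whatever conformed measures and tables your organization standardizes on, and the time intelligence function assumes the model contains a proper date table.

    Sales YoY % =
    VAR SalesPriorYear =
        CALCULATE ( [Sales Amount], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )    // shift the filter context back one year
    RETURN
        DIVIDE ( [Sales Amount] - SalesPriorYear, SalesPriorYear )    // BLANK rather than an error when there's no prior-year value

Shipping a pattern like this in a template means newer content creators start from a known-good formula instead of rediscovering time intelligence on their own.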

7 Note

Providing templates not only saves your content creators time, it also helps them
move quickly beyond a blank page in an empty solution.

Considerations and key actions

Checklist - Considerations and key actions you can take to establish, or improve,
mentoring and user enablement.

" Consider what mentoring services the COE can support: Decide what types of
mentoring services the COE is capable of offering. Types may include office hours,
co-development projects, and best practices reviews.
" Communicate about mentoring services regularly: Decide how you will
communicate and advertise mentoring services, such as office hours, to the user
community.
" Establish a regular schedule for office hours: Ideally, hold office hours at least once
per week (depending on demand from users as well as staffing and scheduling
constraints).
" Decide what the expectations will be for office hours: Determine what the scope
of allowed topics or types of issues users can bring to office hours. Also, determine
how the queue of office hours requests will work, whether any information should
be submitted ahead of time, and whether any follow up afterwards can be
expected.
" Create a centralized portal: Ensure that you have a well supported centralized hub
where users can easily find Power BI training, documentation, and resources. The
centralized portal should also provide links to other community resources such as
the Q&A forum and how to find help.
" Create documentation and resources: In the centralized portal, being creating and
compiling useful documentation. Identify and promote the top 3-5 resources that
will be most useful to the user community.
" Update documentation and resources regularly: Ensure that content is reviewed
and updated on a regular basis. The objective is to ensure that the information
available in the portal is current and reliable.
" Compile a curated list of reputable training resources: Identify training resources
that target the training needs and interests of your user community. Post the list in
the centralized portal and create a schedule to review and validate the list.
" Consider whether custom in-house training will be useful: Identify whether
custom training courses, developed in-house, will be useful and worth the time
investment. Invest in creating content that's specific to the organization.
" Create goals and metrics: Determine how you'll measure effectiveness of the
mentoring program. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate that the COE's mentoring efforts strengthen the
community and its ability to provide self-service BI.

Maturity levels

The following maturity levels will help you assess the current state of your mentoring
and user enablement.

100: Initial - Some documentation and resources exist. However, they're siloed and inconsistent. Few users are aware of, or take advantage of, available resources.

200: Repeatable - A centralized portal exists with a library of helpful documentation and resources. A curated list of training links and resources is available in the centralized portal. Office hours are available so the user community can get assistance from the COE.

300: Defined - The centralized portal is the primary hub for community members to locate training, documentation, and resources. The resources are commonly referenced by champions and community members when supporting and learning from each other. The COE's skills mentoring program is in place to assist users in the community in various ways.

400: Capable - Office hours have regular and active participation from all business units in the organization. Best practices reviews from the COE are regularly requested by business units. Co-development projects are repeatedly executed with success by the COE and members of business units.

500: Efficient - Training, documentation, and resources are continually updated and improved by the COE to ensure the community has current and reliable information. Measurable and tangible business value is gained from the mentoring program by using KPIs or OKRs.

Next steps
In the next article in the Power BI adoption roadmap series, learn more about the
community of practice.
Power BI adoption roadmap:
Community of practice
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

A community of practice is a group of people with a common interest that interacts with,
and helps, each other on a voluntary basis. Using Power BI to produce effective analytics
is a common interest that can bring people together across an organization.

The following diagram provides an overview of an internal community.

The above diagram shows the following:

The community of practice includes everyone with an interest in Power BI.


The Center of Excellence (COE) forms the nucleus of the community. It oversees
the entire community and interacts most closely with its champions.
Self-service content creators and subject matter experts (SMEs) produce, publish,
and support content that's used by their colleagues, who are consumers.
Content consumers view content produced by both self-service creators and
enterprise BI developers.
Champions are a subset of the self-service content creators. Champions are in an
excellent position to support their fellow content creators to generate effective
Power BI solutions.

Champions are the smallest group among creators and SMEs. Self-service content
creators and SMEs represent a larger number of people. Content consumers represent
the largest number of people.

7 Note

All references to the Power BI community in this adoption series of articles refer to
internal users, unless explicitly stated otherwise. There's an active and vibrant
worldwide community of bloggers and presenters who produce a wealth of
knowledge about Power BI. However, internal users are the focus of this article.

For information about related topics including resources, documentation, and training
provided for the Power BI community, see the Mentoring and user enablement article.

Champions network
One important part of a community of practice is its champions. A champion is a Power
BI content creator who works in a business unit that engages with the COE. A champion
is recognized by their peers as the go-to Power BI expert. A champion continually builds
and shares their knowledge even if it's not an official part of their job role. Power BI
champions influence and help their colleagues in many ways including solution
development, learning, skills improvement, troubleshooting, and keeping up to date.

Champions emerge as leaders of the community of practice who:

Have a deep interest in Power BI being used effectively and adopted successfully
throughout the organization.
Possess strong Power BI skills as well as domain knowledge for their functional
business unit.
Have an inherent interest in getting involved and helping others.
Are early adopters who are enthusiastic about experimenting and learning.
Can effectively translate business needs into solutions.
Communicate well with colleagues.

 Tip

To add an element of fun, some organizations refer to their champions network as


ambassadors, Jedis, ninjas, or rangers. Microsoft has an internal community called BI
Champs.

Often, people aren't directly asked to become champions. Commonly, champions are
identified by the COE and recognized for the activities they're already doing, such as
frequently answering questions in an internal discussion channel or participating in
lunch and learns.

Different approaches will be more effective for different organizations, and each
organization will find what works best for them as their maturity level increases.

) Important

Someone very well may be acting in the role of a champion without even knowing
it, and without a formal recognition. The COE should always be on the lookout for
champions. COE members should actively monitor the discussion channel to see
who is helpful. They should deliberately encourage and support potential
champions, and when appropriate, invite them into a champions network to make
the recognition formal.

Knowledge sharing
The overriding objective of a community of practice is to facilitate knowledge sharing
among colleagues and across organizational boundaries. There are many ways
knowledge sharing occurs. It could be during the normal course of work. Or, it could be
during a more structured activity, such as:

Discussion channel: A Q&A forum where anyone in the community can post and view messages. Often used for help and announcements. For more information, see the User support article.

Lunch and learn sessions: Regularly scheduled sessions where someone presents a short session about something they've learned or a solution they've created. The goal is to get a variety of presenters involved, because it's a powerful message to hear firsthand what colleagues have achieved.

Office hours with the COE: Regularly scheduled times when COE experts are available so the community can engage with them. Community users can receive assistance with minimal process overhead. For more information, see the Mentoring and user enablement article.

Internal blog posts or wiki posts: Short blog posts, usually covering technical how-to topics.

Internal Power BI user group: A subset of the community that chooses to meet as a group on a regularly scheduled basis. User group members often take turns presenting to each other to share knowledge and improve their presentation skills.

Book club: A subset of the community selects a book to read on a schedule. They discuss what they've learned and share their thoughts with each other.

Internal Power BI conferences or events: An annual or semi-annual internal conference that delivers a series of sessions focused on Power BI.

 Tip

Inviting an external presenter can reduce the effort level and bring a fresh
viewpoint for learning and knowledge sharing.

Incentives
A lot of effort goes into forming and sustaining a successful community. It's
advantageous to everyone to empower and reward users who work for the benefit of
the community.

Rewarding community members


Incentives that the entire community (including champions) find particularly rewarding
can include:
Contests with a small gift card or time off: For example, you might hold a
performance tuning event with the winner being the person who successfully
reduced the size of their data model the most.
Ranking based on help points: The more frequently someone participates in Q&A, the higher their status climbs on a leaderboard. This type of gamification promotes healthy competition and excitement. By getting involved in more conversations, the participant learns and grows personally in addition to helping their peers.
Leadership communication: Reach out to a manager when someone goes above
and beyond so that their leader, who may not be active in the Power BI
community, sees the value that their staff member provides.

 Tip

Different types of incentives will appeal to different types of people. Some


community members will be highly motivated by praise and feedback. Some will be
inspired by gamification and a bit of fun. Others will highly value the opportunity to
improve their level of knowledge.

Rewarding champions
Incentives that champions find particularly rewarding can include:

More direct access to the COE: The ability to have connections in the COE is
valuable. It's depicted in the diagram shown earlier in this article.
Champion of the month: Publicly thank one of your champions for something
outstanding they did during the previous month. It could be a fun tradition at the
beginning of a monthly lunch and learn.
A private experts discussion area: A private area for the champions to share ideas
and learn from each other is usually highly valued.
Specialized or deep dive information and training: Access to additional
information to help champions grow their skillsets (as well as help their colleagues)
will be appreciated. It could include attending advanced training classes or
conferences.

Communication plan
Communication with the community occurs through various types of communication
channels. Common communication channels include:
Internal discussion channel or forum.
Announcements channel.
Organizational newsletter.

The most critical communication objectives include ensuring your community members
know that:

The COE exists.


How to get help and support.
Where to find resources and documentation.
Where to find governance guidelines.
How to share suggestions and ideas.

 Tip

Consider requiring a simple Power BI quiz before a user is granted a Power BI


license. Calling it a quiz is a bit of a misnomer because it doesn't focus on any Power BI skills.
Rather, it's a short series of questions to verify that the user knows where to find
help and resources. It sets them up for success. It's also a great opportunity to have
users acknowledge any governance policies or data privacy and protection
agreements you need them to be aware of. For more information, see the System
oversight article.

Types of communication
There are generally four types of communication to plan for:

New employee communications can be directed to new employees (and


contractors). It's an excellent opportunity to provide onboarding materials for new
employees to get started with Power BI. It can include articles on topics like how to
get Power BI Desktop installed, how to request a license, and where to find
introductory training materials. It can also include general data governance
guidelines that all users should be aware of.
Onboarding communications can be directed to employees who are just acquiring
a Power BI license or are getting involved with the Power BI community. It presents
an excellent opportunity to provide the same materials as given to new employee
communications (as mentioned above).
Ongoing communications can include regular announcements and updates
directed to all Power BI users, or subsets of users. It can include announcing
changes that are planned to key organizational content. For example, announcing that changes will be published for a critical shared dataset that's used heavily throughout the
organization. It can also include the announcement of new features from the
Microsoft Power BI blog and Microsoft Power BI release plan updates. For
more information about planning for change, see the System oversight article.
Feature announcements are more likely to receive attention from the reader if the
message includes meaningful context about why it's important. (Although an RSS
feed can be a helpful technique, with the frequent pace of change, it can become
noisy and might be ignored.)
Situational communications can be directed to specific users or groups based on
a specific occurrence discovered while monitoring the platform. For example,
perhaps you notice a significant amount of sharing from the personal workspace of a
particular user, so you choose to send them some information about the benefits
of workspaces and apps.

 Tip

One-way communication to the user community is important. Don't forget to also


include bidirectional communication options to ensure the user community has an
opportunity to provide feedback.

Community resources
Resources for the internal community, such as documentation, templates, and training,
are critical for adoption success. For more information about resources, see the
Mentoring and user enablement article.

Considerations and key actions

Checklist - Considerations and key actions you can take to cultivate your community of practice.

Initiate, grow, and sustain your champions network:

" Clarify goals: Clarify what your specific goals are for cultivating a champions
network. Make sure these goals align with your overall Power BI strategy, and that
your executive sponsor is on board.
" Create a plan for the champions network: Although some aspects of a champions
network will always be informally led, determine to what extent the COE will
purposefully cultivate and support champion efforts throughout individual business
units. Consider how many champions is ideal for each functional business area.
Usually, 1-2 champions per area works well, but it can vary based on the size of the
team, the needs of the self-service community, and how the COE is structured.
" Decide on commitment level for champions: Decide what level of commitment
and expected time investment will be required of Power BI champions. Consider
whether the time investment will vary wildly from person to person, and team to
team due to different responsibilities. Plan to clearly communicate expectations to
people who are interested to get involved. Obtain manager approval when
appropriate.
" Decide how to identify champions: Determine how you will respond to requests to
become a champion, and how the COE will seek out champions. Decide if you will
openly encourage interested employees to self-identify as a champion and ask to
learn more (less common). Or, whether the COE will observe efforts and extend a
private invitation (more common).
" Determine how members of the champions network will be managed: Once
excellent option for managing who the champions are is with a security group.
Consider:
How you will communicate with the champions network (for example, in a Teams
channel, a Yammer group, and/or an email distribution list).
How the champions network will communicate and collaborate with each other
directly (across organizational boundaries).
Whether a private and exclusive discussion forum for champions and COE
members is appropriate.
" Plan resources for champions: Ensure members of the champions network have
the resources they need, including:
Direct access to COE members.
Influence on data policies being implemented (for example, requirements for a
dataset certification policy).
Influence on the creation of best practices and guidance (for example,
recommendations for accessing a specific source system).
" Involve champions: Actively involve certain champions as satellite members of the
COE. For more information about federating the COE, see the Center of Excellence
article.
" Create a feedback loop for champions: Ensure that members of the champions
network can easily provide information or submit suggestions to the COE.
" Routinely provide recognition and incentives for champions: Not only is praise an
effective motivator, but the act of sharing examples of successful efforts can
motivate and inspire others.

Improve knowledge sharing:

" Identify knowledge sharing activities: Determine what kind of activities for


knowledge sharing fit well into the organizational data culture. Ensure that all
planned knowledge sharing activities are supportable and sustainable.
" Confirm roles and responsibilities: Verify who will take responsibility for
coordinating all knowledge sharing activities.

Introduce incentives:

" Identify incentives for champions: Consider what type of incentives you could offer
to members of your champions network.
" Identify incentives for community members: Consider what type of incentives you
could offer to your broader internal community.

Improve communications:

" Establish communication methods: Evaluate which methods of communication fit


well in your data culture. Set up different ways to communicate, including history
retention and search.
" Identify responsibility: Determine who will be responsible for different types of
communication, how, and when.

Maturity levels

The following maturity levels will help you assess the current state of your community of
practice.

100: Initial - Some self-service content creators are doing great work throughout the organization. However, their efforts aren't recognized. Efforts to purposefully share knowledge across organizational boundaries are rare and unstructured. Communication is inconsistent, without a purposeful plan.

200: Repeatable - The first set of champions is identified. The goals for a champions network are identified. Knowledge sharing practices are gaining traction.

300: Defined - Knowledge sharing in multiple forms is a normal occurrence. Information sharing happens frequently and purposefully. Goals for transparent communication with the user community are defined.

400: Capable - Champions are identified for all business units. They actively support colleagues in their self-service efforts. Incentives to recognize and reward knowledge sharing efforts are a common occurrence. Regular and frequent communication occurs based on a predefined communication plan.

500: Efficient - Bidirectional feedback loops exist between the champions network and the COE. Key performance indicators measure community engagement and satisfaction. Automation is in place when it adds direct value to the user experience (for example, automatic access to the community).

Next steps
In the next article in the Power BI adoption roadmap series, learn about user support.
Power BI adoption roadmap: User
support
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

This article addresses user support. It focuses primarily on the resolution of issues.

The first sections of this article focus on user support aspects you have control over
internally within your organization. The final topics focus on external resources that are
available.

For a description of related topics, including skills mentoring, training, documentation,


and co-development assistance provided to the internal Power BI user community, see
the Mentoring and user enablement article. The effectiveness of those activities can
significantly reduce the volume of formal user support requests and increase user
experience overall.

Types of user support


If a user has an issue, do they know what their options are to resolve it?

The following diagram shows some common types of user support that organizations
employ successfully:

The six types of user support shown in the above diagram include:

Intra-team support (internal) is very informal. Support occurs when team members learn from each other during the natural course of their job.

Internal community support (internal) can be organized informally, formally, or both. It occurs when colleagues interact with each other via internal community channels.

Help desk support (internal) handles formal support issues and requests.

Extended support (internal) involves handling complex issues escalated by the help desk.

Microsoft support (external) includes support for licensed users and administrators. It also includes comprehensive Power BI documentation.

Community support (external) includes the worldwide community of Power BI experts, Microsoft Most Valuable Professionals (MVPs), and enthusiasts who participate in forums and publish content.

In some organizations, intra-team and internal community support are most relevant for
self-service BI (content is owned and managed by creators and owners in decentralized
business units). Conversely, the help desk and extended support are reserved for
technical issues and enterprise BI (content is owned and managed by a centralized
business intelligence team or Center of Excellence). In some organizations, all four types
of support could be relevant for any type of content.

 Tip

For more information about business-led self-service BI, managed self-service BI,
and enterprise BI concepts, see the Content ownership and management article.

Each of the four types of internal user support introduced above is described in further detail in this article.

Intra-team support
Intra-team support refers to when team members learn from and help each other during
their daily work. People who emerge as your Power BI champions tend to take on this
type of informal support role voluntarily because they have an intrinsic desire to help.
Although it's an informal support mode, it shouldn't be undervalued. Some estimates
indicate that a large percentage of learning at work is peer learning, which is particularly
helpful for analysts who are creating domain-specific Power BI solutions.
7 Note

Intra-team support does not work well for individuals who are the only data analyst
within a department. It's also not effective for those who don't have very many
connections yet in their organization. When there aren't any close colleagues to
depend on, other types of support, as described in this article, become more
important.

Internal community support


Assistance from your fellow community members often takes the form of messages in a
discussion channel or a forum set up specifically for the Power BI community of practice.
For example, someone posts a message that they're having problems getting a DAX
calculation to work. They then receive a response from someone in the organization
with suggestions or links.

 Tip

The goal of an internal Power BI community is to be self-sustaining, which can lead to reduced formal support demands and costs. It can also facilitate managed self-service BI occurring at a broader scale versus a purely centralized BI approach.
However, there will always be a need to monitor, manage, and nurture the internal
community. Here are two specific tips:

Be sure to cultivate multiple experts in the more difficult topics, like Data Analysis Expressions (DAX) and the Power Query M formula language.
When someone becomes a recognized expert, they may become
overburdened with too many requests for help.
A greater number of community members may readily answer certain types of
questions (for example, report visualizations), whereas a smaller number of
members will answer others (for example, complex DAX). It's important for the
COE to allow the community a chance to respond yet also be willing to
promptly handle unanswered questions. If users repeatedly ask questions and
don't receive an answer, it will significantly hinder growth of the community.
In this case, a user is likely to leave and never return if they don't receive any
responses to their questions.
An internal community discussion channel is commonly set up as a Teams channel or a
Yammer group. The technology chosen should reflect where users already work, so that
the activities occur within their natural workflow.

One benefit of an internal discussion channel is that responses can come from people
that the original requester has never met before. In larger organizations, a community of
practice brings people together based on a common interest. It can offer diverse
perspectives for getting help and learning in general.

Use of an internal community discussion channel allows the Center of Excellence (COE)
to monitor the kind of questions people are asking. It's one way the COE can understand
the issues users are experiencing (commonly related to content creation, but it could
also be related to consuming content).

Monitoring the discussion channel can also reveal additional Power BI experts and
potential champions who were previously unknown to the COE.

) Important

It's a best practice to continually identify emerging Power BI champions, and to engage with them to make sure they're equipped to support their colleagues. As
described in the Community of practice article, the COE should actively monitor
the discussion channel to see who is being helpful. The COE should deliberately
encourage and support community members. When appropriate, invite them into
the champions network.

Another key benefit of a discussion channel is that it's searchable, which allows other
people to discover the information. It is, however, a change of habit for people to ask
questions in an open forum rather than private messages or email. Be sensitive to the
fact that some individuals aren't comfortable asking questions in such a public way. It
openly acknowledges what they don't know, which might be embarrassing. This
reluctance may reduce over time by promoting a friendly, encouraging, and helpful
discussion channel.

 Tip

You may be tempted to create a bot to handle some of the most common,
straightforward questions from the community. A bot can work for uncomplicated
questions such as "How do I request a Power BI license?" or "How do I request a
workspace?" Before taking this approach, consider if there are enough routine and
predictable questions that would make the user experience better rather than
worse. Often, a well-created FAQ (frequently asked questions) works better, and it's
faster to develop and easier to maintain.

Help desk support


The help desk is usually operated as a shared service, staffed by the IT department.
Users who will likely rely on a more formal support channel include those who are:

Less experienced with Power BI.
Newer to the organization.
Reluctant to post a message to the internal discussion community.
Lacking connections and colleagues within the organization.

There are also certain technical issues that can't be fully resolved without IT involvement,
like software installation and upgrade requests when machines are IT-managed.

Busy help desk personnel are usually dedicated to supporting multiple technologies. For
this reason, the easiest types of issues to support are those which have a clear resolution
and can be documented in a knowledgebase. For instance, software installation
prerequisites or requirements to get a license.

Some organizations ask the help desk to handle only very simple break-fix issues. Other
organizations have the help desk get involved with anything that is repeatable, like new
workspace requests, managing gateway data sources, or requesting a new Premium
capacity.

) Important

Your Power BI governance decisions will directly impact the volume of help desk
requests. For example, if you choose to limit workspace creation permissions in
the tenant settings, it will result in users submitting help desk tickets. While it's a
legitimate decision to make, you must be prepared to satisfy the request very
quickly. Respond to this type of request within 1-4 hours, if possible. If you delay
too long, users will use what they already have or find a way to work around your
requirements. That may not be the ideal scenario. Promptness is critical for certain
help desk requests. Consider that automation by using Power Apps and Power
Automate can help make some processes more efficient.

Over time, troubleshooting and problem resolution skills become more effective as help
desk personnel expand their knowledgebase and experience with Power BI. The best
help desk personnel are those who have a good grasp of what users need to accomplish
with Power BI.

 Tip

Purely technical issues, for example data refresh failure or the need to add a new
user to a gateway data source, usually involve straightforward responses
associated with a service level agreement. For instance, there may be an agreement
to respond to blocking issues within one hour and resolve them within eight hours.
It's generally more difficult to define service level agreements (SLAs) for
troubleshooting issues, like data discrepancies.

Extended support
Since the COE has deep insight into how Power BI is used throughout the organization,
they're a great option for extended support should a complex issue arise. Involving the
COE in the support process should be by an escalation path.

Managing requests as purely an escalation path from the help desk gets difficult to
enforce since COE members are often well-known to business users. To encourage the
habit of going through the proper channels, COE members should redirect users to
submit a help desk ticket. It will also improve the data quality for analyzing help desk
requests.

Microsoft support
In addition to the internal user support approaches discussed in this article, there are
valuable external support options directly available to Power BI users and administrators
that shouldn't be overlooked.

Microsoft documentation
Check the Power BI support site for high-priority issues that broadly affect all customers.
Global Microsoft 365 administrators have access to additional support issue details
within the Microsoft 365 portal.

Monitor the Microsoft 365 Twitter account . Microsoft posts timely information and
updates about outages for all Microsoft 365 services.
Refer to the comprehensive Power BI documentation. It's an authoritative resource that can aid troubleshooting and searching for information. You can prioritize results from the Power BI documentation site. For example, enter a site-targeted search request into your web search engine, like power bi dataset site:learn.microsoft.com.

Power BI Pro and Premium Per User end-user support


Users with a Power BI Pro or Premium Per User license are eligible to log a support ticket
with Microsoft .

 Tip

Make it clear to your internal user community whether you prefer technical issues
be reported to the internal help desk. If your help desk is equipped to handle the
workload, having a centralized internal area collect user issues can provide a
superior user experience versus every user trying to resolve issues on their own.
Having visibility and analyzing support issues is also helpful for the COE.

Administrator support
There are several support options available for global and Power BI administrators.

For customers who have a Microsoft Unified Support contract, consider granting help
desk and COE members access to the Microsoft Services Hub . One advantage of the
Microsoft Services Hub is that your help desk and COE members can be set up to
submit and view support requests.

Worldwide community support


In addition to the internal user support approaches discussed in this article, and
Microsoft support options discussed previously, you can leverage the worldwide Power
BI community.

The worldwide community is useful when a question can be easily understood by someone not close to the problem, and when it doesn't involve confidential data or sensitive internal processes.

Publicly available community forums


There are several public Power BI community forums where users can post issues and
receive responses from any Power BI user in the world. Getting answers from anyone,
anywhere, can be very powerful and exceedingly helpful. However, as is the case with
any public forum, it's important to validate the advice and information posted on the
forum. The advice posted on the internet may not be suitable for your situation.

Publicly available discussion areas


It's very common to see people posting Power BI technical questions on platforms like
Twitter. A quick look at the #PowerBI hashtag reveals a vibrant global community of
Power BI enthusiasts. You'll find discussions, announcements, and users helping each other. The #PowerBIHelp hashtag is sometimes used, though less frequently.

Community documentation
The Power BI global community is vibrant. Every day, there are a great number of Power
BI blog posts, articles, webinars, and videos published. When relying on community
information for troubleshooting, watch out for:

How recent the information is. Try to verify when it was published or last updated.
Whether the situation and context of the solution found online truly fits your
circumstance.
The credibility of the information being presented. Rely on reputable blogs and
sites.

Considerations and key actions

Checklist - Considerations and key actions you can take for user support follow.

Improve your intra-team support:

" Provide recognition and encouragement: Provide incentives to your Power BI


champions as described in the Community of practice article.
" Reward efforts: Recognize and praise meaningful grassroots efforts when you see
them happening.
" Create formal roles: If informal intra-team efforts aren't adequate, consider
formalizing the roles you want to enact in this area. Include the expected
contributions and responsibilities in the HR job description, when appropriate.

Improve your internal community support:

" Continually encourage questions: Encourage users to ask questions in the


designated community discussion channel. As the habit builds over time, it will
become normalized to use that as the first option. Over time, it will evolve to
become more self-supporting.
" Actively monitor the discussion area: Ensure that the appropriate COE members
actively monitor this discussion channel. They can step in if a question remains
unanswered, improve upon answers, or make corrections when appropriate. They
can also post links to additional information to raise awareness of existing
resources. Although the goal of the community is to become self-supporting, it still
requires dedicated resources to monitor and nurture it.
" Communicate options available: Make sure your user population knows the
internal community support area exists. It could include the prominent display of
links. You can include a link in regular communications to your user community. You
can also customize the help menu links in the Power BI service to direct users to
your internal resources.
" Set up automation: Ensure that all your Free, Power BI Pro, and Premium Per User
licensed users automatically have access to the community discussion channel. It's
possible to automate license setup using group-based licensing (see the sketch below).
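
As one possible way to approach that automation, the following Python sketch uses the Microsoft Graph assignLicense action to attach a license SKU to an Azure AD security group, so that group membership drives both the license assignment and access to the community channel. The group ID, SKU ID, and token are placeholders you'd replace for your own tenant; treat this as a minimal illustration under those assumptions rather than a production process.

```python
import requests

# Placeholders (hypothetical values): the object ID of the security group, the SKU ID
# of the license to assign (for example, Power BI Pro), and a Microsoft Graph access
# token acquired through your preferred auth flow with appropriate Graph permissions.
GROUP_ID = "00000000-0000-0000-0000-000000000000"
LICENSE_SKU_ID = "11111111-1111-1111-1111-111111111111"
ACCESS_TOKEN = "<graph-access-token>"

# Group-based licensing: assign the license to the group once; Azure AD then
# propagates it to current and future members automatically.
response = requests.post(
    f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/assignLicense",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "addLicenses": [{"skuId": LICENSE_SKU_ID, "disabledPlans": []}],
        "removeLicenses": [],
    },
    timeout=30,
)
response.raise_for_status()
print("License assigned to group:", response.json().get("displayName"))
```

The same group can then be granted membership in the Teams channel or Yammer group used by the community, which keeps licensing and channel access in sync.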

Improve your internal help desk support:

" Determine help desk responsibilities: Decide what the initial scope of Power BI
topics that the help desk will handle.
" Assess the readiness level: Determine whether your help desk is prepared to
handle Power BI support. Identify whether there are readiness gaps to be
addressed.
" Arrange for additional training: Conduct knowledge transfer sessions or training
sessions to prepare the help desk staff.
" Update the help desk knowledgebase: include known Power BI questions and
answers in a searchable knowledgebase. Ensure someone is responsible for regular
updates to the knowledgebase to reflect new and enhanced features over time.
" Set up a ticket tracking system: Ensure a good system is in place to track requests
submitted to the help desk.
" Decide if anyone will be on-call for any issues related to Power BI: If appropriate,
ensure the expectations for 24/7 support are clear.
" Determine what SLAs will exist: When a specific service level agreement (SLA)
exists, ensure that expectations for response and resolution are clearly documented
and communicated.
" Be prepared to act quickly: Be prepared to address specific common issues
extremely quickly. Slow support response will result in users finding workarounds.

Improve your internal COE extended support:

" Determine how escalated support will work: Decide what the escalation path will
be for requests the help desk cannot directly handle. Ensure that the COE (or
equivalent personnel) is prepared to step in when needed. Clearly define where
help desk responsibilities end, and where COE extended support responsibilities
begin.
" Encourage collaboration between COE and system administrators: Ensure that
COE members have a direct escalation path to reach global administrators for
Microsoft 365 and Azure. It's critical to have a communication channel when a
widespread issue arises that's beyond the scope of Power BI.
" Create a feedback loop from the COE back to the help desk: When the COE learns
of new information, the IT knowledgebase should be updated. The goal is for the
primary help desk personnel to continually become better equipped at handling
more issues in the future.
" Create a feedback loop from the help desk to the COE: When support personnel
observe redundancies or inefficiencies, they can communicate that information to
the COE, who might choose to improve the knowledgebase or get involved
(particularly if it relates to governance or security).

Maturity levels

The following maturity levels will help you assess the current state of your Power BI user
support.

Level 100 (Initial): Individual business units find effective ways of supporting each other. However, the tactics and practices are siloed and not consistently applied. An internal discussion channel is available. However, it's not monitored closely. Therefore, the user experience is inconsistent.

Level 200 (Repeatable): The COE actively encourages intra-team support and growth of the champions network. The internal discussion channel gains traction. It becomes known as the default place for Power BI questions and discussions. The help desk handles a small number of the most common Power BI technical support issues.

Level 300 (Defined): The internal discussion channel is popular and largely self-sustaining. The COE actively monitors and manages the discussion channel to ensure that all questions are answered quickly and correctly. A help desk tracking system is in place to monitor support frequency, response topics, and priorities. The COE provides appropriate extended support when required.

Level 400 (Capable): The help desk is fully trained and prepared to handle a broader number of known and expected Power BI technical support issues. SLAs are in place to define help desk support expectations, including extended support. The expectations are documented and communicated so they're clear to everyone involved.

Level 500 (Efficient): Bidirectional feedback loops exist between the help desk and the COE. Key performance indicators measure satisfaction and support methods. Automation is in place to allow the help desk to react faster and reduce errors (for example, use of APIs and scripts).

Next steps
In the next article in the Power BI adoption roadmap series, learn about system
oversight and administration activities.
Power BI adoption roadmap: System
oversight
Article • 08/31/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

System oversight, also known as Power BI administration, comprises the ongoing, day-to-day administrative activities. It's specifically concerned with:

Governance: Enact governance guidelines and policies to support self-service BI and enterprise BI.
User empowerment: Facilitate and support the internal processes and systems that
empower the internal user community to the extent possible, while adhering to the
organization's regulations and requirements.
Adoption: Allow for broader organizational adoption of Power BI with effective
governance and data management practices.

) Important

Your organizational data culture objectives provide direction for your governance
decisions, which in turn dictate how Power BI administration activities take place
and by whom.

System oversight is a broad and deep topic. The goal of this article is to introduce some
of the most important considerations and actions to help you become successful with
your organizational adoption objectives.

Power BI administrators
The Fabric administrator role is a defined role in Microsoft 365, which delegates a subset
of Power BI-specific management activities. Global Microsoft 365 administrators are
implicitly Power BI administrators. Power Platform administrators are also implicitly
Power BI administrators (however, Power BI administrators don't have the same role in
other Power Platform applications).

A key governance decision is who to assign as a Power BI administrator. It's a centralized role that affects your entire Power BI tenant. Ideally, there are 2-4 people in the organization who are capable of managing the Power BI service. Your administrators should operate in close coordination with the Center of Excellence (COE).

High privilege role


The Fabric administrator role is a high privilege role because:

User experience: Settings that are managed by a Power BI administrator have a significant effect on user capabilities and user experience (described in the Tenant settings section below).
Full security access: Power BI administrators can update access permissions for
workspaces in the tenant. The result is that an administrator can allow permission
to view or download data and reports as they see fit (described in the Tenant
settings section below).
Personal workspace access: Administrators can access contents and govern the
personal workspace of any user.
Metadata: Power BI administrators can view all tenant metadata, including all user
activities that occur in the Power BI service (described in the Auditing and
monitoring section below).

) Important

Having too many Power BI administrators is a risk. It increases the probability of unapproved, unintended, or inconsistent management of the tenant.

Roles and responsibilities


The types of activities that an administrator will do on a day-to-day basis will differ
between organizations. What's important, and given priority in your data culture, will
heavily influence what an administrator does to support business-led self-service BI,
managed self-service BI, and enterprise BI. For more information, see the Content
ownership and management article.

 Tip
The best type of person to assign as a Power BI administrator is one who has
enough knowledge about Power BI to understand what self-service users need to
accomplish. With this understanding, the administrator can balance user
empowerment and governance.

There are several types of Power BI administrators. The following table describes the
roles that are used most often on a regular basis.

Power BI administrator (scope: Power BI tenant): Manages tenant settings and other aspects of the Power BI service. All general references to administrator in this article refer to this type of administrator.

Power BI Premium capacity administrator (scope: one capacity): Manages workspaces and workloads, and monitors the health of a Premium capacity.

Power BI gateway administrator (scope: one gateway): Manages gateway data source configuration, credentials, and user assignments. May also handle gateway software updates (or collaborate with the infrastructure team on updates).

Power BI workspace administrator (scope: one workspace): Manages workspace settings and access.

The Power BI ecosystem is broad and deep. There are many ways that the Power BI
service integrates with other systems and platforms. From time to time, it will be
necessary to work with other system administrators and IT professionals, such as:

Global Microsoft 365 administrator
Azure Active Directory (Azure AD) administrator
Teams administrator
OneDrive administrator
SharePoint administrator
Database administrator
Licensing and billing administrator
Intune administrator
Desktop support team
Infrastructure team
Networking team
Security and compliance team

The remainder of this article discusses the most common activities that a Power BI
administrator does. It focuses on those activities that are important to carry out
effectively when taking a strategic approach to Power BI organizational adoption.

Service management
Overseeing the Power BI service is a crucial aspect to ensure that all users have a good
experience with Power BI.

Tenant settings
Proper management of tenant settings in the Power BI service is critical. Tenant settings
are the main way to control which Power BI capabilities are enabled, and for which
groups of users in your organization.

It's essential that tenant settings align with governance guidelines and policies, and with
how the COE makes decisions. If a Power BI administrator independently decides which
settings to enable or disable, that's a clear indicator of an opportunity to improve and
refine your governance processes.

) Important

Changing the tenant settings should go through a change control process with an
approval mechanism. It should document all changes, recording who made the
change, when, and why.

Because content creators and consumers can easily read online about available features
in Power BI, it can be frustrating for users when capabilities don't function as expected.
It can lead to dissatisfied users and less effective organizational adoption, user adoption,
and solution adoption.

Here's a list of common questions asked by confused and frustrated users:

Why can't I create a workspace?
Why can't I export data?
Why doesn't my custom visual work?
Why can't I certify a dataset?

U Caution

An administrator may discover situations that aren't ideal, such as too many data
exports in the activity log. Resist the urge to disable the feature entirely. Prohibiting
features leads to user frustration, and leads users to find workarounds. Before
disabling a setting, find out why users are relying on certain techniques. Perhaps a
solution needs to be redesigned, or additional user education and training could
mitigate the concerns. The bottom line: knowledge sharing is an effective form of
governance.

Since there's no reader role for tenant settings, keeping users informed about which capabilities are enabled can be a challenge in larger organizations. Consider publishing a document to the centralized portal that describes the tenant settings, as described in the Mentoring and user enablement article.

The following activities apply when reviewing and validating each tenant setting:

Tenant setting: Enabled, or Disabled
Tenant setting applicable to: The entire organization, or Limited to specific security group(s)
When limited to security groups: Does a suitable security group already exist, or does a new security group need to be created?
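
To support that review and documentation effort, the COE can generate a settings summary from the service itself. The following Python sketch calls the tenant settings admin API (the same API referenced in the REST API table later in this article) and writes a simple summary file that could be published to the centralized portal. The endpoint path and response field names are assumptions to verify against the current Power BI REST API reference, and the access token placeholder must be replaced with a token for an administrator account or an approved service principal.

```python
import requests

# Placeholder: an Azure AD access token that's permitted to call the Power BI admin APIs.
ACCESS_TOKEN = "<admin-access-token>"

# Assumed endpoint for the tenant settings admin API; verify against the REST API reference.
url = "https://api.powerbi.com/v1.0/myorg/admin/tenantsettings"

response = requests.get(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# Field names can change over time, so read them defensively with .get().
settings = response.json().get("tenantSettings", [])
lines = [
    f"{item.get('title', item.get('settingName', 'Unknown setting'))}: "
    f"{'Enabled' if item.get('enabled') else 'Disabled'}"
    for item in settings
]

# Write a plain-text summary that can be posted to the centralized portal.
with open("tenant-settings-summary.txt", "w", encoding="utf-8") as file:
    file.write("\n".join(lines))

print(f"Documented {len(settings)} tenant settings.")
```

Pairing an export like this with your change control records helps keep the published documentation aligned with what's actually configured.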

Admin portal
As described in the Power BI adoption maturity levels article, organizational adoption
refers to the effectiveness of Power BI governance and data management practices to
support and enable enterprise BI and self-service BI. Actively managing all areas of the
Power BI service in accordance with adoption goals helps ensure that all your users have
a good experience with Power BI.

Key topics for managing the Power BI service include:

Tenant settings
Auditing and monitoring
Workspace management and access
Premium capacity and Premium Per User settings
Embed codes
Organizational visuals
Azure connections
Custom branding
Protection metrics
Featured content
User machines and devices
The management of user machines and devices is usually a responsibility of the IT
department. The adoption of Power BI depends directly on content creators and
consumers having the applications they need.

Here are some questions that can help you plan for user machines and devices.

How will users request access to new tools? Will access to licenses, data, and
training be available to help users use tools effectively?
How will content consumers view content that's been published by others?
How will content creators develop, manage, and publish content? What's your
criteria for deciding which tools and applications are appropriate for each use
case?
How will you install and set up tools? Does that include related prerequisites and
data connectivity components?
How will you manage ongoing updates for tools and applications?

For more information, see User tools and devices.

The following software installations are available for content creators.

Power BI Desktop: Content creators who develop data models and interactive reports for deployment to the Power BI service.

Power BI Desktop Optimized for Report Server: Content creators who develop data models and interactive reports for deployment to Power BI Report Server.

Power BI Report Builder: Content creators who develop paginated reports for deployment to the Power BI service or Power BI Report Server.

Power BI Mobile Application: Content creators or consumers who interact with content that's been published to the Power BI service or Power BI Report Server, using iOS, Android, or Windows 10 applications.

On-premises data gateway (personal mode): Content creators who publish datasets to the Power BI service and manage scheduled data refresh (see more detail in the Gateway architecture and management section of this article).

Third-party tools: Advanced content creators may optionally use third-party tools for advanced data model management.

) Important
Not all the listed software will be necessary for all content creators. Power BI
Desktop is the most common requirement and is the starting point when in doubt.

All content creators who collaborate with others should use the same version of the
software—especially Power BI Desktop, which is updated monthly. Ideally, software
updates are available from the Microsoft Store or installed by an automated IT process.
This way, users don't have to take any specific action to obtain updates.

Because new capabilities are continually released, software updates should be released
promptly. This way, users can take advantage of the new capabilities, and their
experience is aligned to documentation. It's also important to be aware of the update
channel. It provides new (and updated) features for Office apps, such as Excel and Word,
on a regular basis.

Other common items that may need to be installed on user machines include:

Drivers to support data connectivity, for example, Oracle, HANA, or the Microsoft
Access Database Engine
The Analyze in Excel provider
External tools. For example, Tabular Editor, DAX Studio, or ALM Toolkit.
Custom data source connectors

In addition to software installations, user machines may be managed for:

Group policy settings: For example, group policy can specify the allowed usage of custom visuals in Power BI Desktop. The objective is a consistent user experience across Power BI Desktop and the Power BI service, which prevents user frustration (for example, if users were allowed to create content in Power BI Desktop that can't be displayed in the Power BI service).
Registry settings: For example, you can choose to disable the Power BI Desktop sign-in form or tune Query Editor performance (a sketch of the registry approach follows this list).
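
As an illustration of the registry approach, the following Python sketch sets the value that's commonly documented for suppressing the Power BI Desktop sign-in dialog. The key path and value name (ShowLeadGenDialog) are assumptions to verify against current Microsoft documentation, and in practice this kind of change is usually deployed through your endpoint management tooling rather than an ad hoc script.

```python
import winreg

# Assumption: the per-user policy key and value name commonly documented for
# suppressing the Power BI Desktop sign-in dialog. Verify both before deploying.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Microsoft Power BI Desktop"
VALUE_NAME = "ShowLeadGenDialog"

# Create (or open) the policy key under the current user hive and set the DWORD to 0.
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 0)

print("Power BI Desktop sign-in dialog policy value set to 0 (disabled).")
```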

 Tip

Effective management of software, drivers, and settings can make a big difference
to the user experience, and that can translate to increased user adoption and
satisfaction, and reduced user support costs.

Architecture
Data architecture
Data architecture refers to the principles, practices, and methodologies that govern and
define what data is collected, and how it's ingested, stored, managed, integrated,
modeled, and used.

There are many data architecture decisions to make. Frequently the COE engages in
data architecture design and planning. It's common for administrators to get involved as
well, especially when they manage databases or Azure infrastructure.

) Important

Data architecture decisions have a significant impact on Power BI adoption, user satisfaction, and individual project success rates.

A few data architecture considerations that affect adoption of Power BI include:

Where does Power BI fit into the organization's entire data architecture? And, are
there other existing components such as an enterprise data warehouse (EDW) or a
data lake that will be important to factor into plans?
Is Power BI used end-to-end for data preparation, data modeling, and data
presentation? Or, is Power BI used for only some of those capabilities?
Are managed self-service BI patterns followed to find the best balance between
data reusability and report creator flexibility?
Where will users consume the content? Generally, the three main ways to deliver
content are: the Power BI service, Power BI Report Server, and embedded in
custom applications. The Planning a Power BI enterprise deployment whitepaper
includes a section on Power BI architectural choices, which describes when to
consider each of these three main choices. Additionally, Microsoft Teams is a
convenient alternative to the Power BI service, especially for users who spend a lot
of time in Teams.
Who is responsible for managing and maintaining the data architecture? Is it a
centralized team, or a decentralized team? How is the COE represented in this
team? Are certain skillsets required?
What data sources are the most important? What types of data will we be
acquiring?
What connectivity mode and storage mode choices (for example, import, live
connection, DirectQuery, or composite model frameworks) are the best fit for the
use cases?
To what extent is data reusability encouraged using shared datasets?
To what extent is the reusability of data preparation logic and advanced data
preparation encouraged by using dataflows?

When becoming acquainted with Power BI, many system administrators assume it's a
query tool much like SQL Server Reporting Services (SSRS). The breadth of capabilities
for Power BI, however, are vast in comparison. So, it's important for administrators to
become aware of Power BI capabilities before they make architectural decisions.

 Tip

Get into the good habit of completing a technical proof of concept (POC) to test
out assumptions and ideas. Some organizations also call them micro-projects when
the goal is to deliver a small unit of work. The goal of a POC is to address
unknowns and reduce risk as early as possible. A POC doesn't have to be
throwaway work, but it should be narrow in scope. Best practices reviews, as
discussed in the Mentoring and user enablement article, are another useful way to
help content creators with important architectural decisions.

Premium capacity management


Power BI Premium includes features and capabilities to deliver BI solutions at scale.
Premium subscriptions may be purchased by capacity or per user with Premium Per
User (PPU). This section primarily focuses on Premium capacity, which requires more
administrative oversight.

Power BI Premium can play a significant role in your BI strategy. Some top reasons to
invest in Premium include:

Unlimited content distribution to large numbers of read-only users (content consumption with a free Power BI license is available in Premium capacity only, not PPU).
Deployment pipelines to manage the publication of content to development, test,
and production workspaces. They're highly recommended for critical content to
improve release stability.
Paginated reports to deliver highly formatted, pixel-perfect reports. This report
type allows content creators to meet other types of information delivery
requirements.
XMLA endpoint, which is an industry standard protocol for managing and
publishing a dataset, or querying the dataset from any XMLA-compliant tool.
Increased model size limits, including large dataset support.
More frequent data refreshes.
Storage of data in a specific geographic area (multi-geo is available by capacity
only).

This list isn't all-inclusive. For a complete list of Premium features, see Power BI Premium
FAQ.

Managing Premium capacity

Overseeing the health of Power BI Premium capacity is an essential ongoing activity for
administrators. By definition, Premium capacity includes a fixed level of system
resources. It equates to memory and CPU limits that must be managed to achieve
optimal performance.

U Caution

Poor management, or exceeding the limits of Premium capacity, can often result in performance and user experience challenges. If not addressed, both can negatively affect adoption efforts.

Suggestions for managing Premium capacity:

Create a specific set of criteria for content that will be published to Premium
capacity. It's especially relevant when a single capacity is used by multiple business
units because the potential exists to disrupt other users if the capacity isn't well-
managed. For a list of items that may be included in the best practices review (such
as reasonable dataset size and efficient calculations), see the Mentoring and user
enablement article.
Regularly use the Premium monitoring app to understand resource utilization and
patterns for the Premium capacity. Most importantly, look for consistent patterns
of overutilization, which will contribute to user disruptions. An analysis of usage
patterns should also make you aware if the capacity is underutilized, indicating
more value could be gained from the investment.
Configure the tenant setting so Power BI notifies you if the Premium capacity
becomes overloaded , or an outage or incident occurs.

Autoscale
Autoscale is a capability of Power BI Premium. It's intended to handle occasional or
unexpected bursts in Premium usage levels. Autoscale can respond to these bursts by
automatically increasing CPU resources to support the increased workload.
Automated scaling up reduces the risk of performance and user experience challenges
in exchange for a financial impact. If the Premium capacity isn't well-managed, autoscale
may trigger more often than expected. In this case, the Premium monitoring app can
help you to determine underlying issues and do capacity planning.

Decentralized Premium capacity management

Capacity administrators are responsible for assigning workspaces to a specific capacity.

Be aware that workspace administrators can also assign a workspace to PPU if they possess a PPU license. However, all other users of that workspace would also need a PPU license.

It's possible to set up multiple capacities to facilitate decentralized management by different business units. Decentralizing management of certain aspects of Power BI is a great way to balance agility and control.

Here's an example that describes one way you could manage Premium capacity.

Purchase a P3 capacity node in Microsoft 365. It includes 32 virtual cores.
Use 16 cores to create the first capacity. It will be used by the Sales team.
Use 8 cores to create the second capacity. It will be used by the Operations team.
Use the remaining 8 cores to create the third capacity. It will support general use.

The previous example has several advantages.

Separate capacity administrators may be configured for each capacity. Therefore, it facilitates decentralized management situations.
If a capacity isn't well-managed, the effect is confined to that capacity only. The
other capacities aren't impacted.

However, the previous example has disadvantages, too.

The limits per capacity are lower. The maximum memory size allowed for datasets
isn't the entire P3 capacity node size. Rather, it's the assigned capacity size where
the dataset is hosted.
It's more likely one of the smaller capacities will need to be scaled up at some
point in time.
There are more capacities to manage in the Power BI tenant.

Gateway architecture and management


A data gateway facilitates the secure and efficient transfer of data between
organizational data sources and the Power BI service. A data gateway is needed for data
connectivity to on-premises or cloud services when a data source is:

Located within the enterprise data center.
Configured behind a firewall.
Within a virtual network.
Within a virtual machine.

There are three types of gateways.

On-premises data gateway (standard mode) is a gateway service that supports connections to registered data sources for many users to use. The gateway software installations and updates are installed on a machine that's managed by the customer.
On-premises data gateway (personal mode) is a gateway service that supports
data refresh only. This gateway mode is typically installed on the PC of a content
creator. It supports use by one user only. It doesn't support live connection or
DirectQuery connections.
Virtual network data gateway is a Microsoft managed service that supports
connectivity for many users. Specifically, it supports connectivity for datasets and
dataflows stored in workspaces assigned to Premium capacity or Premium Per
User.

 Tip

The decision of who can install gateway software is a governance decision. For
most organizations, use of the data gateway in standard mode, or a virtual network
data gateway, should be strongly encouraged. They're far more scalable,
manageable, and auditable than data gateways in personal mode.

Decentralized gateway management


The On-premises data gateway (standard mode) and Virtual network data gateway
support specific data source types that can be registered, together with connection
details and how credentials are stored. Users can be granted permission to use the
gateway data source so that they can schedule a refresh or run DirectQuery queries.

Certain aspects of gateway management can be done effectively on a decentralized basis to balance agility and control. For example, the Operations group may have a
gateway dedicated to its team of self-service content creators and data owners.
Decentralized gateway management works best when it's a joint effort as follows.

Managed by the decentralized data owners:

Departmental data source connectivity information and privacy levels.
Departmental data source stored credentials (including responsibility for updating
routine password changes).
Departmental data source users who are permitted to use each data source.

Managed by centralized data owners (includes data sources that are used broadly across
the organization; management is centralized to avoid duplicated data sources):

Centralized data source connectivity information and privacy levels.
Centralized data source stored credentials (including responsibility for updating
routine password changes).
Centralized data source users who are permitted to use each data source.

Managed by IT:

Gateway software updates (gateway updates are usually released monthly).
Installation of drivers and custom connectors (the same ones that are installed on
user machines).
Gateway cluster management (number of machines in the gateway cluster for high
availability, disaster recovery, and to eliminate a single point of failure, which can
cause significant user disruptions).
Server management (for example, operating system, RAM, CPU, or networking
connectivity).
Management and backup of gateway encryption keys.
Monitoring of gateway logs to assess when scale-up or scale-out is necessary.
Alerting of downtime or persistent low resources on the gateway machine.

 Tip

Allowing a decentralized team to manage certain aspects of the gateway means they can move faster. The tradeoff of decentralized gateway management does
mean running more gateway servers so that each can be dedicated to a specific
area of the organization. If gateway management is handled entirely by IT, it's
imperative to have a good process in place to quickly handle requests to add data
sources and apply user updates.

User licenses
Every user of the Power BI service needs a commercial license, which is integrated with
an Azure AD identity. The user license may be Free, Power BI Pro, or Power BI Premium
Per User.

A user license is obtained via a subscription, which authorizes a certain number of licenses with a start and end date.

There are two approaches to procuring subscriptions.

Centralized: Microsoft 365 billing administrator purchases a subscription for Power BI Pro or Premium Per User. It's the most common way to manage subscriptions and assign licenses.
Decentralized: Individual departments purchase a subscription via self-service
purchasing.

Self-service purchasing
An important governance decision relates to what extent self-service purchasing will be
allowed or encouraged.

Self-service purchasing is useful for:

Larger organizations with decentralized business units that have purchasing authority and want to handle payment directly with a credit card.
Organizations that intend to make it as easy as possible to purchase subscriptions
on a monthly commitment.

Consider disabling self-service purchasing when:

Centralized procurement processes are in place to meet regulatory, security, and governance requirements.
Discounted pricing is obtained through an Enterprise Agreement (EA).
Existing processes are in place to handle intercompany chargebacks.
Existing processes are in place to handle group-based licensing assignments.
Prerequisites are required for obtaining a license, such as approval, justification,
training, or a governance policy requirement.
There's a valid need, such as a regulatory requirement, to control access to the
Power BI service closely.

User license trials


Another important governance decision is whether user license trials are allowed. By
default, trials are enabled. That means when content is shared with a colleague, if the
recipient doesn't have a Power BI Pro or Premium Per User license, they'll be prompted
to start a trial to view the content (if the content doesn't reside within Premium
capacity). The trial experience is intended to be a convenience that allows users to
continue with their normal workflow.

Generally, disabling trials isn't recommended. It can encourage users to seek workarounds, perhaps by exporting data or working outside of supported tools and processes.

Consider disabling trials only when:

There are serious cost concerns that would make it unlikely to grant full licenses at
the end of the trial period.
Prerequisites are required for obtaining a license (such as approval, justification, or
a training requirement). It's not sufficient to meet this requirement during the trial
period.
There's a valid need, such as a regulatory requirement, to control access to the
Power BI service closely.

 Tip

Don't introduce too many barriers to obtaining a Power BI license. Users who need
to get work done will find a way, and that way may involve workarounds that aren't
ideal. For instance, without a license to use the Power BI service, people may rely far
too much on sharing files on a file system or via email when significantly better
approaches are available.

Cost management
Managing and optimizing the cost of cloud services, like Power BI, is an important
activity. Here are several activities you may want to consider.

Analyze who is using—and, more to the point, not using—their allocated Power BI
licenses and make necessary adjustments. Power BI usage is analyzed using the
activity log.
Analyze the cost effectiveness of Premium capacity or Premium Per User. In
addition to the additional features, perform a cost/benefit analysis to determine
whether Premium licensing is more cost-effective when there are a large number
of consumers. Unlimited content distribution is only available with Premium
capacity, not PPU licensing.
Carefully monitor and manage Premium capacity. Understanding usage patterns
over time will allow you to predict when to purchase more capacity. For example,
you may choose to scale up a single capacity from a P1 to P2, or scale out from
one P1 capacity to two P1 capacities.
If there are occasional spikes in the level of usage, use of autoscale with Power BI
Premium is recommended to ensure the user experience isn't interrupted.
Autoscale will scale up capacity resources for 24 hours, then scale them back down
to normal levels (if sustained activity isn't present). Manage autoscale cost by
constraining the maximum number of v-cores, and/or with spending limits set in
Azure (because autoscale is supported by the Azure Power BI Embedded service).
Due to the pricing model, autoscale is best suited to handle occasional unplanned
increases in usage.
For Azure data sources, co-locate them in the same region as your Power BI tenant
whenever possible. It will avoid incurring Azure egress charges . Data egress
charges are minimal, but at scale can add up to be considerable unplanned costs.

Security, information protection, and data loss prevention
Security, information protection, and data loss prevention (DLP) are joint responsibilities
among all content creators, consumers, and administrators. That's no small task because
there's sensitive information everywhere: personal data, customer data, or customer-
authored data, protected health information, intellectual property, proprietary
organizational information, just to name a few. Governmental, industry, and contractual
regulations may have a significant impact on the governance guidelines and policies
that you create related to security.

The Power BI security whitepaper is an excellent resource for understanding the breadth
of considerations, including aspects that Microsoft manages. This section will introduce
several topics that customers are responsible for managing.

User responsibilities
Some organizations ask Power BI users to accept a self-service user acknowledgment.
It's a document that explains the user's responsibilities and expectations for
safeguarding organizational data.

One way to automate its implementation is with an Azure AD terms of use policy. The
user is required to agree to the policy before they're permitted to visit the Power BI
service for the first time. You can also require it to be acknowledged on a recurring
basis, like an annual renewal.

Data security
In a cloud shared responsibility model, securing the data is always the responsibility of
the customer. With a self-service BI platform, self-service content creators have
responsibility for properly securing the content that they share with colleagues.

The COE should provide documentation and training where relevant to assist content
creators with best practices (particularly situations for dealing with ultra-sensitive data).

Administrators can help by following best practices themselves. Administrators can also raise concerns when they see issues that could be discovered when managing
workspaces, auditing user activities, or managing gateway credentials and users. There
are also several tenant settings that are usually restricted except for a few users (for
instance, the ability to publish to web or the ability to publish apps to the entire
organization).

External guest users


External users—such as partners, customers, vendors, and consultants—are a common
occurrence for some organizations, and rare for others. How you handle external users is
a governance decision.

External user access is controlled by tenant settings in the Power BI service and certain
Azure AD settings. For details of external user considerations, review the Distribute
Power BI content to external guest users using Azure AD B2B whitepaper.

Information protection and data loss prevention


Power BI supports capabilities for information protection and data loss prevention (DLP)
in the following ways.

Information protection: Microsoft Purview Information Protection (formerly known as Microsoft Information Protection) includes capabilities for discovering,
classifying, and protecting data. A key principle is that data can be better protected
once it's been classified. The key building block for classifying data is sensitivity
labels. For more information, see Information protection for Power BI planning.
Data loss prevention for Power BI: Microsoft Purview Data Loss Prevention
(formerly known as Office 365 Data Loss Prevention) supports DLP policies for
Power BI. By using sensitivity labels or sensitive information types, DLP policies for
Power BI help an organization locate sensitive datasets. For more information, see
Data loss prevention for Power BI planning.
Microsoft Defender for Cloud Apps: Microsoft Defender for Cloud Apps (formerly
known as Microsoft Cloud App Security) supports policies that help protect data,
including real-time controls when users interact with the Power BI service. For
more information, see Defender for Cloud Apps for Power BI planning.

Data residency
For organizations with requirements to store data within a geographic region, Premium
capacity (not PPU) can be configured for a specific region that's different from the
region of the Power BI home tenant.

Encryption keys
Microsoft handles encryption of data at rest in Microsoft data centers with transparent
server-side encryption and auto-rotation of certificates. For customers with regulatory
requirements to manage the Premium encryption key themselves, Premium capacity can
be configured to use Azure Key Vault. Using customer-managed keys—also known as
bring-your-own-key or BYOK—is a precaution to ensure that, in the event of a human
error by a service operator, customer data can't be exposed.

Be aware that Premium Per User (PPU) only supports BYOK when it's enabled for the
entire Power BI tenant.

Auditing and monitoring


It's critical that you make use of auditing data to analyze adoption efforts, understand
usage patterns, educate users, support users, mitigate risk, improve compliance, manage
license costs, and monitor performance. For more information about why auditing your
data is valuable, see Auditing and monitoring overview.

There are different ways to approach auditing and monitoring depending on your role
and your objectives. The following articles describe various considerations and planning
activities.

Report-level auditing: Techniques that report creators can use to understand which users are using the reports that they create, publish, and share.
Data-level auditing: Methods that data creators can use to track the performance
and usage patterns of data assets that they create, publish, and share.
Tenant-level auditing: Key decisions and actions administrators can take to create
an end-to-end auditing solution.
Tenant-level monitoring: Tactical actions administrators can take to monitor the
Power BI service, including updates and announcements.

Power BI REST APIs


The Power BI REST APIs provide a wealth of information about your Power BI tenant.
Retrieving data by using the REST APIs should play an important role in managing and
governing a Power BI implementation. For more information about planning for the use
of REST APIs for auditing, see Tenant-level auditing.

You can retrieve auditing data to build an auditing solution, manage content
programmatically, or increase the efficiency of routine actions. The following table
presents some actions you can perform with the REST APIs.

Audit user activities: REST API to get activity events
Audit workspaces, items, and permissions: Collection of asynchronous metadata scanning REST APIs to obtain a tenant inventory
Audit content shared to entire organization: REST API to check use of widely shared links
Audit tenant settings: REST API to check tenant settings
Publish content: REST API to deploy items from a deployment pipeline or clone a report to another workspace
Manage content: REST API to refresh a dataset or take over ownership of a dataset
Manage gateway data sources: REST API to update credentials for a gateway data source
Export content: REST API to export a report
Create workspaces: REST API to create a new workspace
Manage workspace permissions: REST API to assign user permissions to a workspace
Update workspace name or description: REST API to update workspace attributes
Restore a workspace: REST API to restore a deleted workspace
Programmatically retrieve a query result from a dataset: REST API to run a DAX query against a dataset
Assign workspaces to Premium capacity: REST API to assign workspaces to capacity
Programmatically change a data model: Tabular Object Model (TOM) API
Embed Power BI content in custom applications: Power BI embedded analytics client APIs

 Tip

There are many other REST APIs. For a complete list, see Using the Power BI REST
APIs.
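To illustrate the first action in the table, the following Python sketch shows one way to retrieve a day of user activity data with the Get Activity Events admin API. It's a minimal sketch, not a production solution: the PBI_ACCESS_TOKEN environment variable, the hard-coded date window, and the output file name are placeholder assumptions, and you'd still need to handle throttling, retries, and secure storage.

```python
# Minimal sketch: retrieve one day of Power BI activity events with the admin REST API.
# Assumes PBI_ACCESS_TOKEN holds a valid Azure AD token for an account or service principal
# that's permitted to call the Power BI admin APIs.
import json
import os

import requests

ACCESS_TOKEN = os.environ["PBI_ACCESS_TOKEN"]  # placeholder: acquire via MSAL or similar
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# The API accepts a window of up to one day, expressed in UTC.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2023-09-01T00:00:00Z'&endDateTime='2023-09-01T23:59:59Z'"
)

events = []
while url:
    response = requests.get(url, headers=HEADERS, timeout=60)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    # Follow the continuation URI until the API reports the last result set.
    url = None if payload.get("lastResultSet") else payload.get("continuationUri")

# Store the raw results; a curated data model can be built from these files later.
with open("activity_events_2023-09-01.json", "w", encoding="utf-8") as file:
    json.dump(events, file)

print(f"Retrieved {len(events)} activity events.")
```

In practice, you'd parameterize the date window, run the extraction on a schedule, and land the raw files in a secure location, as described in Tenant-level auditing.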

Planning for change


Every month, Microsoft releases new Power BI features and functionality. To be effective,
it's crucial that everyone involved with system oversight stays current. For more
information, see Tenant-level monitoring.

) Important

Don't underestimate the importance of staying current. If you get a few months
behind on announcements, it can become difficult to properly manage the Power
BI service and support your users.

Considerations and key actions

Checklist - Considerations and key actions you can take for system oversight follow.

Improve system oversight:


" Verify who is permitted to be a Power BI administrator: If possible, reduce the
number of people granted the Fabric administrator role if it's more than a few
people.
" Use PIM for occasional administrators: If you have people who occasionally need
Power BI administrator rights, consider implementing Privileged Identity
Management (PIM) in Azure AD. It's designed to assign just-in-time role
permissions that expire after a few hours.
" Train administrators: Check the status of cross-training and documentation in place
for handling Power BI administration responsibilities. Ensure that a backup person is
trained so that needs can be met timely, in a consistent way.

Improve management of the Power BI service:

" Review tenant settings: Conduct a review of all tenant settings to ensure they're
aligned with data culture objectives and governance guidelines and policies. Verify
which groups are assigned for each setting.
" Document the tenant settings: Create documentation of your tenant settings for
the internal Power BI community and post it in the centralized portal. Include which
groups a user would need to request to be able to use a feature.
" Customize the Get Help links: When user resources are established, as described in
the Mentoring and user enablement article, update the tenant setting to customize
the links under the Get Help menu option. It will direct users to your
documentation, community, and help.

Improve management of user machines and devices:

" Create a consistent onboarding process: Review your process for how onboarding
of new content creators is handled. Determine if new requests for software, such as
Power BI Desktop, and user licenses (Power BI Pro or Premium Per User) can be
handled together. It can simplify onboarding since new content creators won't
always know what to ask for.
" Handle user machine updates: Ensure an automated process is in place to install
and update software, drivers, and settings to ensure all users have the same version.

Data architecture planning:

" Assess what your end-to-end data architecture looks like: Make sure you're clear
on:
How Power BI is currently used by the different business units in your
organization versus how you want Power BI to be used. Determine if there's a
gap.
If there are any risks that should be addressed.
If there are any high-maintenance situations to be addressed.
What data sources are important for Power BI users, and how they're
documented and discovered.
" Review existing data gateways: Find out what gateways are being used throughout
your organization. Verify that gateway administrators and users are set correctly.
Verify who is supporting each gateway, and that there's a reliable process in place
to keep the gateway servers up to date.
" Verify use of personal gateways: Check the number of personal gateways that are
in use, and by whom. If there's significant usage, take steps to move towards use of
the standard mode gateway.

Improve management of user licenses:

" Review the process to request a user license: Clarify what the process is, including
any prerequisites, for users to obtain a license. Determine whether there are
improvements to be made to the process.
" Determine how to handle self-service license purchasing: Clarify whether self-
service licensing purchasing is enabled. Update the settings if they don't match
your intentions for how licenses can be purchased.
" Confirm how user trials are handled: Verify user license trials are enabled or
disabled. Be aware that all user trials are Premium Per User. They apply to Free
licensed users signing up for a trial, and Power BI Pro users signing up for a
Premium Per User trial.

Improve cost management:

" Determine your cost management objectives: Consider how to balance cost,


features, usage patterns, and effective utilization of resources. Schedule a routine
process to evaluate costs, at least annually.
" Obtain acitivity log data: Ensure you have access to the activity log data to assist
with cost analysis. It can be used to understand who is—or isn't—using the license
assigned to them.

Improve security and data protection:

" Clarify exactly what the expectations are for data protection: Ensure the
expectations for data protection, such as how to use sensitivity labels, are
documented and communicated to users.
" Determine how to handle external users: Understand and document the
organizational policies around sharing Power BI content with external users. Ensure
that settings in the Power BI service support your policies for external users.
" Set up monitoring: Investigate the use of Microsoft Defender for Cloud Apps to
monitor user behavior and activities in the Power BI service.
Improve auditing and monitoring:

" Plan for auditing needs: Collect and document the key business requirements for
an auditing solution. Consider your priorities for auditing and monitoring. Make key
decisions related to the type of auditing solution, permissions, technologies to be
used, and data needs. Consult with IT to clarify what auditing processes currently
exist, and what preferences of requirements exist for building a new solution.
" Consider roles and responsibilities: Identify which teams will be involved in
building an auditing solution, as well as the ongoing analysis of the auditing data.
" Extract and store user activity data: If you aren't currently extracting and storing
the raw data, begin retrieving user activity data.
" Extract and store snapshots of tenant inventory data: Begin retrieving metadata to
build a tenant inventory, which describes all workspaces and items.
" Extract and store snapshots of users and groups data: Begin retrieving metadata
about users, groups, and service principals.
" Create a curated data model: Perform data cleansing and transformations of the
raw data to create a curated data model that'll support analytical reporting for your
auditing solution.
" Analyze auditing data and act on the results: Create analytic reports to analyze the
curated auditing data. Clarify what actions are expected to be taken, by whom, and
when.
" Include additional auditing data: Over time, determine whether other auditing
data would be helpful to complement the activity log data, such as security data.

 Tip

For more information, see Tenant-level auditing.
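To support the tenant inventory items in the checklist above, the following Python sketch outlines the asynchronous flow of the metadata scanning (scanner) admin APIs: request a scan for a batch of workspaces, poll until it succeeds, and then download the result. This is a sketch under stated assumptions: the access token, workspace IDs, and output file name are placeholders, and some options (like dataset schema and expressions) depend on the related admin tenant settings being enabled.

```python
# Minimal sketch of the metadata scanning (scanner) admin APIs for a tenant inventory snapshot.
# Assumes PBI_ACCESS_TOKEN is a valid Azure AD token permitted to call the admin APIs, and
# workspace_ids is a list of workspace IDs (for example, obtained from the workspaces/modified API).
import json
import os
import time

import requests

ACCESS_TOKEN = os.environ["PBI_ACCESS_TOKEN"]  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}
BASE = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"

workspace_ids = ["00000000-0000-0000-0000-000000000000"]  # placeholder workspace IDs

# 1. Request a scan for a batch of workspaces (up to 100 workspaces per request).
scan = requests.post(
    f"{BASE}/getInfo?datasetSchema=true&datasetExpressions=true&lineage=true",
    headers=HEADERS,
    json={"workspaces": workspace_ids},
    timeout=60,
)
scan.raise_for_status()
scan_id = scan.json()["id"]

# 2. Poll the scan status until it reports "Succeeded".
while True:
    status = requests.get(f"{BASE}/scanStatus/{scan_id}", headers=HEADERS, timeout=60)
    status.raise_for_status()
    if status.json()["status"] == "Succeeded":
        break
    time.sleep(10)

# 3. Retrieve the scan result: workspaces, items, and (where enabled) schema and lineage metadata.
result = requests.get(f"{BASE}/scanResult/{scan_id}", headers=HEADERS, timeout=60)
result.raise_for_status()

with open(f"tenant_inventory_{scan_id}.json", "w", encoding="utf-8") as file:
    json.dump(result.json(), file)
```

Storing these snapshots on a schedule gives you the raw inputs for the curated data model and analytic reports described in the checklist.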

Use the Power BI REST APIs:

" Plan for your use of the REST APIs: Consider what data would be most useful to
retrieve from the Power BI REST APIs.
" Conduct a proof of concept: Do a small proof of concept to validate data needs,
technology choices, and permissions.

Maturity levels
The following maturity levels will help you assess the current state of your Power BI
system oversight.

100: Initial
• Tenant settings are configured independently by one or more administrators based on their best judgment.
• Architecture needs, such as gateways and capacities, are satisfied on an as-needed basis. However, there isn't a strategic plan.
• Power BI activity logs are unused, or selectively used for tactical purposes.

200: Repeatable
• The tenant settings purposefully align with established governance guidelines and policies. All tenant settings are reviewed regularly.
• A small number of specific administrators are selected. All administrators have a good understanding of what users are trying to accomplish in Power BI, so they're in a good position to support users.
• A well-defined process exists for users to request licenses and software. Request forms are easy for users to find. Self-service purchasing settings are specified.
• Sensitivity labels are configured in Microsoft 365. However, use of labels remains inconsistent. The advantages of data protection aren't well understood by users.

300: Defined
• The tenant settings are fully documented in the centralized portal for users to reference, including how to request access to the correct groups.
• Cross-training and documentation exist for administrators to ensure continuity, stability, and consistency.
• Sensitivity labels are assigned to content consistently. The advantages of using sensitivity labels for data protection are understood by users.
• An automated process is in place to export Power BI activity log and API data to a secure location for reporting and auditing.

400: Capable
• Administrators work closely with the COE and governance teams to provide oversight of Power BI. A balance of user empowerment and governance is successfully achieved.
• Decentralized management of data architecture (such as gateways or capacity management) is effectively handled to balance agility and control.
• Automated policies are set up and actively monitored in Microsoft Defender for Cloud Apps for data loss prevention.
• Power BI activity log and API data is actively analyzed to monitor and audit Power BI activities. Proactive action is taken based on the data.

500: Efficient
• The Power BI administrators work closely with the COE and actively stay current. Blog posts and release plans from the Power BI product team are reviewed frequently to plan for upcoming changes.
• Regular cost management analysis is done to ensure user needs are met in a cost-effective way.
• Power BI activity log and API data is actively used to inform and improve adoption and governance efforts.

Next steps
For more information about system oversight and Power BI administration, see the
following resources.

Administer Power BI - Part 1


Administer Power BI - Part 2
Administrator in a Day Training – Day 1
Administrator in a Day Training – Day 2
Power BI security whitepaper
External guest users whitepaper
Planning a Power BI enterprise deployment whitepaper
Power BI adoption framework

In the next article in the Power BI adoption roadmap series, in conclusion, learn about
adoption-related resources that you might find valuable.
Power BI adoption roadmap: Change
management
Article • 09/11/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

When working toward improved business intelligence (BI) adoption, you should plan for
effective change management. In the context of BI, change management includes
procedures that address the impact of change for people in an organization. These
procedures safeguard against disruption and productivity loss due to changes in
solutions or processes.

7 Note

Effective change management is particularly important when you migrate to Power BI.

Effective change management improves adoption and productivity because it:

Helps content creators and consumers use analytics more effectively and sooner.
Limits redundancy in data, analytical tools, and solutions.
Reduces the likelihood of risk-creating behaviors that affect shared resources (like
Premium or Fabric capacity) or organizational compliance (like data security and
privacy).
Mitigates resistance to change that obstructs planning and inhibits user adoption.
Mitigates the impact of change and improves user wellbeing by reducing the
potential for disruption, stress, and conflict.

Effective change management is critical for successful adoption at all levels. To
successfully manage change, consider the key actions and activities described in the
following sections.

) Important
Change management is a fundamental obstacle to success in many organizations.
Effective change management requires that you understand that it's about people
—not tools or processes.

Successful change management involves empathy and communication. Ensure that
change isn't forced and that resistance to change isn't ignored, because both can widen
organizational divides and further inhibit effectiveness.

 Tip

Whenever possible, we recommend that you describe and promote change as
improvement—it's much less threatening. For many people, change implies a cost in
terms of effort, focus, and time. Alternatively, improvement means a benefit
because it's about making something better.

Types of change to manage


When implementing BI, you should manage different types of change. Also, depending
on the scale and scope of your implementation, you should address different aspects of
change.

Consider the following types of change to manage when you plan for Power BI
adoption.

Process-level changes
Process-level changes are changes that affect a broader user community or the entire
organization. These changes typically have a larger impact, and so they require more
effort to manage. Specifically, this change management effort includes specific plans
and activities.

Here are some examples of process-level changes.

Change from centralized BI to decentralized BI (change in content ownership and
management).
Change from enterprise BI to departmental BI, or from team BI to personal BI
(change in content delivery scope).
Change of central team structure (for example, forming a Center of Excellence).
Changes in governance policies.
Migration from other analytics products to Power BI, and the changes this
migration involves, like:
The separation of datasets and reports, and a model-based approach to
analytics.
Transitioning from exports or static reports to interactive analytical reports,
which can involve filtering and cross-filtering.
Moving from distributing reports as PowerPoint files or flat files to accessing
reports directly from the Power BI service.
Shifting from information in tables, paginated reports, and spreadsheets to
interactive visualizations and charts.

7 Note

Typically, giving up export-based processes or Excel reporting is a significant
challenge. That's because these methods are usually deeply engrained in the
organization and are tied to the autonomy and data skills of your users.

Solution-level changes
Solution-level changes are changes that affect a single solution or set of solutions.
These changes limit their impact to the user community of those solutions and their
dependent processes. Although solution-level changes typically have a lower impact,
they also tend to occur more frequently.

7 Note

In the context of this article, a solution is built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a data lakehouse, a
Power BI dataset, or a report. The considerations for change management
described in this article are relevant for all types of solutions, and not only
reporting projects.

Here are some examples of solution-level changes.

Changes in calculation logic for KPIs or measures.


Changes in how master data or hierarchies for business attributes are mapped,
grouped, or described.
Changes in data freshness, detail, format, or complexity.
Introduction of advanced analytics concepts, like predictive analytics or
prescriptive analytics, or general statistics (if the user community isn't already
familiar with these concepts).
Changes in the presentation of data, like:
Styling, colors, and other formatting choices for visuals.
The type of visualization.
How data is grouped or summarized (such as changing between different measures
of central tendency, like average, median, or geometric mean).
Changes in how content consumers interact with data (like connecting to a shared
dataset instead of exporting information for personal BI usage scenarios).

How you prepare change management plans and activities will depend on the types of
change. To successfully and sustainably manage change, we recommend that you
implement incremental changes.

Address change incrementally


Change management can be a significant undertaking. Taking an incremental approach
can help you facilitate change in a way that's sustainable. To adopt an incremental
approach, you identify the highest priority changes and break them into manageable
parts, implementing each part with iterative phases and action plans.

The following steps outline how you can incrementally address change.

1. Define what's changing: Describe the change by outlining the before and after
states. Clarify the specific parts of the process or situation that you'll change,
remove, or introduce. Justify why this change is necessary, and when it should
occur.
2. Describe the impact of the change: For each of these changes, estimate the
business impact. Identify which processes, teams, or individuals the change affects,
and how disruptive it will be for them. Also consider any downstream effects the
change has on other dependent solutions or processes. Downstream effects may
result in other changes. Additionally, consider how long the situation remained the
same before it was changed. Changes to longer-standing processes tend to have a
higher impact, as preferences and dependencies arise over time.
3. Identify priorities: Focus on the changes with the highest potential impact. For
each change, outline a more detailed description of the change and how it will
affect people.
4. Plan how to incrementally implement the change: Identify whether any high-
impact changes can be broken into stages or parts. For each part, describe how it
might be incrementally implemented in phases to limit its impact. Determine
whether there are any constraints or dependencies (such as when changes can be
made, or by whom).
5. Create an action plan for each phase: Plan the actions you will take to implement
and support each phase of the change. Also, plan for how you can mitigate
disruption in high-impact phases. Be sure to include a rollback plan in your action
plan, whenever possible.

 Tip

Iteratively plan how you'll implement each phase of these incremental changes as
part of your quarterly BI tactical planning.

When you plan to mitigate the impact of changes on Power BI adoption, consider the
activities described in the following sections.

Effectively communicate change


Ensure that you clearly and concisely describe planned changes for the user community.
Important communication should originate from the executive sponsor, or another
leader with relevant authority. Be sure to communicate the following details.

What's changing: What the situation is now and what it will be after the change.
Why it's changing: The benefit and value of the change for the audience.
When it's changing: An estimation of when the change will take effect.
Further context: Where people can go for more information.
Contact information: Who people should contact to provide feedback, ask questions,
or raise concerns.

Consider maintaining a history of communications in your centralized portal. That way,
it's easy to find communications, timings, and details of changes after they've occurred.

) Important

You should communicate change with sufficient advanced notice so that people are
prepared. The higher the potential impact of the change, the earlier you should
communicate it. If unexpected circumstances prevent advance notice, be sure to
explain why in your communication.

Plan training and support


Changes to tools, processes, and solutions typically require training to use them
effectively. Additionally, extra support may be required to address questions or respond
to support requests.

Here are some actions you can take to plan for training and support.

Centralize training and support by using a centralized portal. The portal can help
organize discussions, collect feedback, and distribute training materials or
documentation by topic.
Consider incentives to encourage self-sustaining support within a community.
Schedule recurring office hours to answer questions and provide mentorship.
Create and demonstrate end-to-end scenarios for people to practice a new
process.
For high-impact changes, prepare training and support plans that realistically
assess the effort and actions needed to prevent the change from causing
disruption.

7 Note

These training and support actions will differ depending on the scale and scope of
the change. For high-impact, large-scale changes (like transitioning from enterprise
BI to managed self-service BI), you'll likely need to plan iterative, multi-phase plans
that span multiple planning periods. In this case, carefully consider the effort and
resources needed to deliver success.

Involve executive leadership


Executive support is critical to effective change management. When an executive
supports a change, it demonstrates its strategic importance or benefit to the rest of the
organization. This top-down endorsement and reinforcement is particularly important
for high-impact, large-scale changes, which have a higher potential for disruption. For
these scenarios, ensure that you actively engage and involve your executive sponsor to
endorse and reinforce the change.

U Caution

Resistance to change from the executive leadership is often a warning sign that
stronger business alignment is needed between the business and BI strategies. In
this scenario, consider specific alignment sessions and change management actions
with executive leadership.

Involve stakeholders
To effectively manage change, you can also take a bottom-up approach by engaging the
stakeholders, who are the people the change affects. When you create an action plan to
address the changes, identify and engage key stakeholders in focused, limited sessions.
In this way you can understand the impact of the change on the people whose work will
be affected by the change. Take note of their concerns and their ideas for how you
might lessen the impact of this change. Ensure that you identify any potentially
unexpected effects of the change on other people and processes.

Handle resistance to change


It's important to address resistance to change, as it can have substantial negative
impacts on adoption and productivity. When you address resistance to change, consider
the following actions and activities.

Involve your executive sponsor: The authority, credibility, and influence of the
executive sponsor is essential to support change management and resolve
disputes.
Identify blocking issues: When change disrupts the way people work, this change
can prevent people from effectively completing tasks in their regular activities. For
such blocking issues, identify potential workarounds that take the changes into
account.
Focus on data and facts instead of opinions: Resistance to change is sometimes
due to opinions and preferences, because people are familiar with the situation
prior to the change. Understand why people have these opinions and preferences.
Perhaps it's due to convenience, because people don't want to invest time and
effort in learning new tools or processes.
Focus on business questions and processes instead of requirements: Changes
often introduce new processes to address problems and complete tasks. New
processes can lead to a resistance to change because people focus on what they
miss instead of fully understanding what's new and why.

Additionally, you can have a significant impact on change resistance by engaging
promoters and detractors.

Identify and engage promoters


Promoters are vocal, credible individuals in a user community who advocate in favor of a
tool, solution, or initiative. Promoters can have a positive impact on adoption because
they can influence peers to understand and accept change.
To effectively manage change, you should identify and engage promoters early in the
process. You should involve them and inform them about the change to better utilize
and amplify their advocacy.

 Tip

The promoters you identify might also be great candidates for your champions
network.

Identify and engage detractors


Detractors are the opposite of promoters. They are vocal, credible individuals in a user
community who advocate against a tool, solution, or initiative. Detractors can have a
significant negative influence on adoption because they can convince peers that the
change isn't beneficial. Additionally, detractors can advocate for alternative solutions, or
for solutions marked for retirement, making it more difficult to decommission old tools,
solutions, or processes.

To effectively manage change, you should identify and engage detractors early in the
process. That way, you can mitigate the potential negative impact they have.
Furthermore, if you address their concerns, you might convert these detractors into
promoters, helping your adoption efforts.

 Tip

A common source of detractors is content owners for solutions that are going to be
modified or replaced. The change can sometimes threaten these content owners,
who are incentivized to resist the change in the hope that their solution will remain
in use. In this case, identify these content owners early and involve them in the
change. Giving these individuals a sense of ownership of the implementation will
help them embrace, and even advocate in favor, of the change.

Questions to ask
Use questions like those found below to assess change management.

Is there a role or team responsible for change management in the organization? If


so, how are they involved in data and BI initiatives?
Is change seen as an obstacle to achieving strategic success among people in the
organization? Is the importance of change management acknowledged in the
organization?
Are there any significant promoters for BI solutions and processes in the user
community? Conversely, are there any significant detractors?
What communication and training efforts are performed to launch new BI tools
and solutions? How long do they last?
How is change in the user community handled (for example, with new hires or
promoted individuals)? What onboarding activities introduce these new individuals
to existing solutions, processes, and policies?
Do people who create Excel reports feel threatened or frustrated by initiatives to
automate reporting with BI tools? To what extent do people associate their
identities with the tools they use and the solutions they have created and own?
How are changes to existing solutions planned and managed? Are changes
planned, with a visible roadmap, or are they reactive? Do people get sufficient
notification about upcoming changes?
How frequently do changes disrupt existing processes and tools?
How long does it take to decommission legacy systems or solutions when new
ones become available? How long does it take to implement changes to existing
solutions?
To what extent do people agree with the statement I am overwhelmed with the
amount of information I am required to process? To what extent do people agree
with the sentiment things are changing too much, too quickly?

Maturity levels

An assessment of change management evaluates how effectively the organization can
enact and respond to change.

The following maturity levels will help you assess your current state of change
management, as it relates to data and BI initiatives.
100: Initial
• Change is usually reactive, and it's also poorly communicated.
• The purpose or benefits of change aren't well understood, and resistance to change causes conflict and disruption.
• No clear teams or roles are responsible for managing change for data initiatives.

200: Repeatable
• Executive leadership and decision makers recognize the need for change management in BI projects and initiatives.
• Some efforts are taken to plan or communicate change, but they're inconsistent and often reactive. Resistance to change is still common. Change often disrupts existing processes and tools.

300: Defined
• Formal change management plans or roles are in place. These plans include communication tactics and training, but they're not consistently or reliably followed. Change occasionally disrupts existing processes and tools.
• Successful change management is championed by key individuals that bridge organizational boundaries.

400: Capable
• Empathy and effective communication are integral to change management strategies.
• Change management efforts are owned by particular roles or teams, and effective communication results in a clear understanding of the purpose and benefits of change. Change rarely interrupts existing processes and tools.

500: Efficient
• Change is an integral part of the organization. People in the organization understand the inevitability of change, and see it as a source for momentum instead of disruption. Change almost never unnecessarily interrupts existing processes or tools.
• Systematic processes address change as a challenge of people and not processes.

Next steps
In the next article in the Power BI adoption roadmap series, in conclusion, learn about
adoption-related resources that you might find valuable.
Power BI adoption roadmap conclusion
Article • 02/27/2023

7 Note

This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.

This article concludes the series on Power BI adoption. The strategic and tactical
considerations and action items presented in this series will assist you in your Power BI
adoption efforts, and with creating a productive data culture in your organization.

This series covered the following aspects of Power BI adoption.

Adoption overview
Adoption maturity levels
Data culture
Executive sponsorship
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight

The rest of this article includes suggested next actions to take. It also includes other
adoption-related resources that you might find valuable.

Next actions to take


It can be overwhelming to decide where to start. The following series of steps provides a
process to help you approach your next actions.

1. Learn: First, read this series of articles end-to-end. Become familiar with the
strategic and tactical considerations and action items that directly lead to
successful Power BI adoption. They'll help you to build a data culture in your
organization. Discuss the concepts with your colleagues.
2. Assess current state: For each area of the adoption roadmap, assess your current
state. Document your findings. Your goal is to have full clarity on where you are now
so that you can make informed decisions about what to do next.
3. Clarify strategy: Ensure that you're clear on what your organization's goals are for
adopting Power BI. Confirm that your Power BI adoption goals align with your
organization's broader strategic goals for the use of data, analytics, and business
intelligence in general. Focus on what your immediate strategy is for the next 3-12
months.
4. Identify future state: For each area of the roadmap, identify the gaps between
what you want to happen (your future state) and what's happening (your current
state). Focus on the next 3-12 months for identifying your desired future state.
5. Customize maturity levels: Using the information you have on your strategy and
future state, customize the maturity levels for each area of the roadmap. Update or
delete the description for each maturity level so that they're realistic, based on
your goals and strategy. Your current state, priorities, staffing, and funding will
influence the time and effort it will take to advance to the higher maturity levels.
6. Prioritize: Clarify what's most important to achieve in the next 3-12 months. For
instance, you might identify specific user enablement or risk reduction items that
are a higher priority than other items. Determine which advancements in maturity
levels you should prioritize first.
7. Define measurable goals: Create KPIs (key performance indicators) or OKRs
(objectives and key results) to define specific goals. Ensure that the goals are
measurable and achievable. Confirm that each goal aligns with your strategy and
desired future state.
8. Create action items: Add specific action items to your project plan. Action items
will identify who will do what, and when. Include short, medium, and longer-term
(backlog) items in your project plan to make it easy to track and reprioritize.
9. Track action items: Use your preferred project planning software to track
continual, incremental progress of your action items. Summarize progress and
status every quarter for your executive sponsor.
10. Adjust: As new information becomes available—and as priorities change—
reevaluate and adjust your focus. Reexamine your strategy, goals, and action items
once a quarter so you're certain that you're focusing on the right actions.
11. Celebrate: Pause regularly to appreciate your progress. Celebrate your wins.
Reward and recognize people who take the initiative and help achieve your goals.
Encourage healthy partnerships between IT and the different areas of the business.
12. Repeat: Continue learning, experimenting, and adjusting as you progress with your
Power BI implementation. Use feedback loops to continually learn from everyone
in the organization. Ensure that continual, gradual, improvement is a priority.

A few important key points are implied within the previous suggestions.

Focus on the near term: Although it's important to have an eye on the big picture,
we recommend that you focus primarily on the next quarter, next semester, and
next year. It's easier to assess, plan, and act when you focus on the near term.
Progress will be incremental: Changes that happen every day, every week, and
every month add up over time. It's easy to become discouraged and sense a lack
of progress when you're working on a large adoption initiative that takes time. If
you keep track of your incremental progress, you'll be surprised at how much you
can accomplish over the course of a year.
Changes will continually happen: Be prepared to reconsider decisions that you
make, perhaps every quarter. It's easier to cope with continual change when you
expect the plan to change.
Everything correlates together: As you progress through each of the steps listed
above, it's important that everything's correlated from the high-level strategic
organizational objectives, all the way down to more detailed action items. That
way, you'll know that you're working on the right things.

Power BI implementation planning


Successfully implementing Power BI throughout the organization requires deliberate
thought and planning. The Power BI implementation planning series of articles, which is
a work in progress, is intended to complement the Power BI adoption roadmap. It will
include key considerations, actions, decision-making criteria, recommendations, and it
will describe implementation patterns for important common usage scenarios.

Power BI adoption framework


The Power BI adoption framework describes additional aspects of how to adopt Power
BI in more detail. The original intent of the framework was to support Microsoft partners
with a lightweight set of resources for use when helping their customers deploy and
adopt Power BI.

The framework can augment this Power BI adoption roadmap series. The roadmap series
focuses on the why and what of adopting Power BI, more so than the how.

7 Note
When completed, the Power BI implementation planning series (described in the
previous section) will replace the Power BI adoption framework.

Enterprise deployment whitepaper


The Planning a Power BI enterprise deployment whitepaper provides a comprehensive
overview for Power BI implementers. Its primary goal is awareness of options, key
considerations, decisions, and best practices. Because of the breadth of content,
different sections of the whitepaper will appeal to managers, IT professionals, and self-
service authors.

The whitepaper goes deeper into the what and how of adopting Power BI, with a strong
focus on technology. When you've finished reading the series of Power BI adoption
articles, the whitepaper will fill you in with extra information to help put your plans into
action.

7 Note

The Enterprise deployment whitepaper won't be updated again. When completed,
the Power BI implementation planning series (described in the previous section) will
replace the Enterprise deployment whitepaper.

Microsoft's BI transformation
Consider reading about Microsoft's journey and experience with driving a data culture.
This article describes the importance of two terms: discipline at the core and flexibility at
the edge. It also shares Microsoft's views and experience about the importance of
establishing a COE.

Power Platform adoption


The Power Platform team has an excellent set of adoption-related content. Its primary
focus is on Power Apps, Power Automate, and Power Virtual Agents. Many of the ideas
presented in this content can be applied to Power BI also.

The Power CAT Adoption Maturity Model , published by the Power CAT team,
describes repeatable patterns for successful Power Platform adoption.
The Power Platform Center of Excellence Starter Kit is a collection of components and
tools to help you develop a strategy for adopting and supporting Microsoft Power
Platform.

The Power Platform adoption best practices includes a helpful set of documentation and
best practices to help you align business and technical strategies.

The Power Platform adoption framework is a community-driven project with excellent
resources on adoption of Power Platform services at scale.

Microsoft 365 and Azure adoption


You may also find useful adoption-related guidance published by other Microsoft
technology teams.

The Maturity Model for Microsoft 365 provides information and resources to use
capabilities more fully and efficiently.
Microsoft Learn has a learning path for using the Microsoft service adoption
framework to drive adoption in your enterprise.
The Microsoft Cloud Adoption Framework for Azure is a collection of
documentation, implementation guidance, best practices, and tools to accelerate
your cloud adoption journey.

A wide variety of other adoption guides for individual technologies can be found online.
A few examples include:

Microsoft Teams adoption guide .


Microsoft Security and Compliance adoption guide .
SharePoint Adoption Resources .

Industry guidance
The Data Management Book of Knowledge (DMBOK2) is a book available for
purchase from DAMA International. It contains a wealth of information about maturing
your data management practices.

7 Note

The additional resources provided in this article aren't required to take advantage
of the guidance provided in this Power BI adoption series. They're reputable
resources should you wish to continue your journey.
Partner community
Experienced Power BI partners are available to help your organization succeed with
Power BI. To engage a Power BI partner, visit the Power BI partner portal .
Power BI implementation planning
Article • 08/31/2023

In this video, watch Matthew introduce you to the Power BI implementation planning
series of articles.
https://www.microsoft.com/en-us/videoplayer/embed/RWUWA9?postJsllMsg=true

Successfully implementing Power BI throughout the organization requires deliberate


thought and planning. The Power BI implementation planning series provides you with
key considerations, actions, decision-making criteria, and tactical recommendations. The
articles in this series cover key subject areas when implementing Power BI, and they
describe patterns for common usage scenarios.

Subject areas
When you implement Power BI, there are many subject areas to consider. The following
subject areas form part of the Power BI implementation planning series:

BI strategy
User tools and devices
Tenant setup
Subscriptions, licenses, and trials
Roles and responsibilities
Power BI service administration
Workspaces
Data management
Content deployment
Content distribution and sharing
Security
Information protection and data loss prevention
Power BI Premium
Gateways
Integration with other services
Auditing and monitoring
Adoption tracking
Scaling and growing

7 Note
The series is a work in progress. We will gradually release new and updated articles
over time.

Usage scenarios
The series includes usage scenarios that illustrate different ways that creators and
consumers can deploy and use Power BI:

Content collaboration and delivery


Self-service BI
Content management and deployment
Real-time
Embedding and hybrid

Purpose
When completed, the series will:

Complement the Power BI adoption roadmap, which describes considerations for


successful Power BI adoption and a healthy data culture. Power BI implementation
planning guidance that correlates with the adoption roadmap goals will be added
to this series.
Replace the Planning a Power BI enterprise deployment white paper , which was
designed to describe various technical factors when deploying Power BI. Relevant
white paper content will be merged into this series in a new format that's more
discoverable and actionable.
Replace the Power BI adoption framework (together with the Power BI adoption
roadmap), which is a lightweight set of resources (videos and presentation slides)
that were designed to help Microsoft partners deploy Power BI solutions for their
customers. Relevant adoption framework action items will be merged into this
series.

Recommendations
To set yourself up for success, we recommend that you work through the following
steps:

1. Read the complete Power BI adoption roadmap, familiarizing yourself with each
roadmap subject area. Assess your current state of Power BI adoption, and gain
clarity on the data culture objectives for your organization.
2. Explore Power BI implementation planning articles that are relevant to you. Start
with the Power BI usage scenarios, which convey how you can use Power BI in
diverse ways. Be sure to understand which usage scenarios apply to your
organization, and by whom. Also, consider how these usage scenarios may
influence the implementation strategies you decide on.
3. Read the articles for each of the subject areas that are listed above. You might
choose to initially do a broad review of the contents from top to bottom. Or you
may choose to start with subject areas that are your highest priority. Carefully
review the key decisions and actions that are included for each topic (at the end of
each section). We recommend that you use them as a starting point for creating
and customizing your plan.
4. When necessary, refer to Power BI documentation for details on specific topics.

Target audience
The intended audience of this series of articles may be interested in the following
outcomes:

Identifying areas to improve or strengthen their Power BI implementation.


Increasing their ability to efficiently manage and securely deliver Power BI content.
Planning the implementation of Power BI within their organization.
Increasing their organization's return on investment (ROI) in Power BI.

This series is certain to be helpful for organizations that are in their early stages of a
Power BI implementation or are planning an expanded implementation. It may also be
helpful for those who work in an organization with one or more of the following
characteristics:

Power BI has pockets of viral adoption and success in the organization, but it's not
consistently well-managed or purposefully governed.
Power BI is deployed with some meaningful scale, but there are many unrealized
opportunities for improvement.

 Tip

Some knowledge of Power BI and general business intelligence concepts is
assumed. To get the most from this content, we recommend that you become
familiar with the Power BI adoption roadmap first.

Acknowledgments
The Power BI implementation planning articles are written by Melissa Coates, Kurt
Buhler, and Peter Myers. Matthew Roche, from the Fabric Customer Advisory Team,
provides strategic guidance and feedback to the subject matter experts.

Next steps
In the next article in this series, learn about usage scenarios that describe how you can
use Power BI in diverse ways.

Other helpful resources include:

Power BI adoption roadmap


Power BI migration overview

Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Power BI implementation planning: BI
strategy
Article • 09/11/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article introduces the business intelligence (BI) strategy series of articles. The BI
strategy series is targeted at multiple audiences:

Executive leadership: Individuals who are responsible for defining organizational
goals and strategies, like the Power BI executive sponsor or a Chief Executive
Officer (CEO), Chief Information Officer (CIO), or Chief Data Officer (CDO).
BI and analytics directors or managers: Decision makers who are responsible for
overseeing the BI program and BI strategic planning.
Center of Excellence (COE), IT, and BI teams: The teams that are responsible for
tactical planning, measuring, and monitoring progress toward the BI objectives.
Subject matter experts (SMEs) and content owners and creators: The teams and
individuals that champion analytics in a team or department and conduct BI
solution planning. These teams and individuals are responsible for representing
the strategy and data needs of their business area when defining the BI strategy.

Defining your BI strategy is essential to get the most business value from BI initiatives
and solutions. Having a clearly defined BI strategy is important to ensure efforts are
aligned with organizational priorities. In some circumstances, it's particularly important.

We recommend that you pay special attention to these articles if your organization is:

Migrating to, or implementing, Microsoft Fabric or Power BI for the first time: A
clear BI strategy is crucial to the successful implementation of any new platform or
tool.
Experiencing significant growth of Microsoft Fabric or Power BI usage: A BI
strategy brings clarity and structure to organic adoption, helping to enable users
while mitigating risk.
Seeking to become data-driven or achieve digital transformation: A BI strategy is
critical to modernizing your organization and helping you to achieve a competitive
advantage.
Experiencing significant business or technological change: Planning your BI
strategy ensures that your organization can use change as momentum and not as
an obstacle.
Reevaluating your business strategy: Your business strategy should influence your
BI strategy, which in turn can lead to changes in your business strategy. All
strategies should be in alignment in order to achieve your organizational goals.

In short, this series of articles is about defining a BI strategy. It describes what a BI
strategy is, why it's important, and how you can plan your BI strategy. The articles in this
series are intended to complement the Power BI adoption roadmap.

Become data-driven with a BI strategy


Successful adoption and implementation of analytics solutions helps an organization
meet their business goals. To achieve successful adoption and implementation, you
need a BI strategy. A BI strategy is sometimes described as an analytics strategy or
becoming data driven.

A BI strategy is a plan to implement, use, and manage data and analytics to better
enable your users to meet their business goals. An effective BI strategy ensures that data
and analytics support your business strategy.

Relationship between BI strategy and business strategy


Your business strategy should directly inform your BI strategy. As your business
objectives evolve, your BI processes and tools may also need to evolve, especially as
new data needs arise. New opportunities and insights learned from BI solutions can also
lead to changes to your business strategy. Understanding and supporting the
relationship between your business and BI strategies is essential in order to make
valuable BI solutions, and to ensure that people use them effectively.

The following diagram depicts how a BI strategy supports the business strategy by
enabling business users.
The diagram depicts the following concepts.

1. The business strategy describes how the organization will achieve its business goals.
2. The business strategy directly informs the BI strategy. The primary purpose of the BI strategy is to support—and potentially inform—the business strategy.
3. The BI strategy is a plan to implement, use, and manage data and analytics.
4. BI goals define how BI will support the business goals. BI goals describe the desired future state of the BI environment.
5. To make progress toward your BI goals, you identify and describe BI objectives that you want to achieve in a specific time period. These objectives describe paths to your desired future state.
6. To achieve your BI objectives, you plan and implement BI solutions and initiatives. A solution might be developed by a central IT or BI team, or by a member of the community of practice as a self-service solution.
7. The purpose of BI solutions and initiatives is to enable business users to achieve their objectives.
8. Business users use BI solutions and initiatives to make informed decisions that lead to effective actions.
9. Business users follow through on the business strategy with their achieved results. They achieve these results by taking the right actions at the right time, which is made possible in part by an effective BI strategy.

7 Note
In this series, goals are high-level descriptions of what you want to achieve. In
contrast, objectives are specific, actionable targets to help you achieve a goal. While
a goal describes the desired future state, objectives describe the path to get there.

Further, solutions are processes or tools built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a data lakehouse, or
a Power BI dataset or report.

Consider the following, high-level example for a hypothetical organization.

Business strategy: The organizational goal is to improve customer satisfaction and reduce customer churn. One business strategy to achieve this goal is to reduce the number of late customer deliveries.

BI strategy:
• BI goal: To support the business strategy, the BI goal is to improve the effectiveness of orders and deliveries reporting.
• BI objectives: To achieve the BI goal, the organization defines specific BI objectives, like producing a unified view of the order fulfillment process across all distribution centers.
• BI solutions and initiatives: To achieve these BI objectives, the organization plans BI solutions and initiatives, like implementing a consolidated data lakehouse that stores business-ready orders data to support reporting and analytics.

Business users: Enabled by these BI solutions, business users can more effectively identify and mitigate potential late deliveries. These solutions result in fewer late deliveries and improved customer satisfaction, allowing the organization to achieve progress toward its business goals.

Relationship between BI strategy and data strategy


Your BI strategy describes how successful Power BI adoption and implementation will
deliver business value to your organization. However, a BI strategy transcends tools and
technologies. While your BI strategy might start small, it can grow to encompass all of
your analytical data, tools, and processes when you experience success. Furthermore,
the concepts in a BI strategy are also important in a broader data strategy. While a BI
strategy is about the use of data and tools for analytical purposes, a data strategy is
concerned with the wider management and use of data within the organization. Thus,
your BI strategy is a subset of your data strategy, as they share many related concepts.

The following diagram depicts how a BI strategy is a subset of a data strategy, and how
they share concepts related to data culture and technology.
The diagram depicts the following concepts.

1. A data strategy describes the goals and priorities for the wider use and management of data in an organization. A data strategy encompasses more than only BI.
2. The BI strategy is a subset of a data strategy.
3. Data culture is important in both a BI strategy and a data strategy. Different data culture areas describe a vision for behaviors, values, and processes that enable people to work effectively with data. An example of a data culture area is data literacy.
4. Technology is important in both a BI strategy and a data strategy. Different technical areas support the business data needs and use cases. An example of a technical area is data visualization.

A BI strategy can encompass many data culture and technical areas. However, when
planning your BI strategy, you should be cautious not to attempt to address too many
of these areas at first. A successful BI strategy starts small. It focuses on a few prioritized
areas and broadens scope over time, ensuring consistent progress. Later, as you
experience sustainable success with your BI strategy, it can incrementally evolve to
encompass more areas.

) Important

This series of BI strategy articles focuses on the Power BI workload in Fabric.
However, planning a BI strategy is a technology-agnostic exercise. As such, the
concepts described in the articles may apply irrespective of your chosen BI tools
and technologies.

Defining a BI strategy
There are many ways to define a BI strategy. Typically, when you define a BI strategy,
you begin with the priority areas that describe your BI goals. Based on these prioritized
goals, you define specific, measurable objectives. To achieve these objectives, you build
solutions and enact specific initiatives. You then incrementally scale your BI strategy to
include more areas as you experience success.

The following diagram depicts how you can define your BI strategy at three
different planning levels.

The diagram depicts the following three planning levels.

1. Strategic planning: You begin by defining your strategic BI goals and priorities, and how they support your business strategy. These BI goals are high-level descriptions of what you want to achieve and why.
2. Tactical planning: You then identify your specific BI objectives. These objectives are specific, measurable, short-term actions that describe how you'll make progress toward your long-term, strategic BI goals.
3. Solution planning: The BI solutions and initiatives that you create should be a direct result of tactical planning. These solutions enable you to achieve your BI objectives and make progress toward your BI goals.

) Important

Defining a BI strategy requires prioritization, planning, and active involvement from


many teams and individuals across your organization.

BI strategy example
The following high-level, hypothetical example explains how you can transition from
business goals to BI goals. It then explains how to transition from BI goals to objectives,
and then to BI solutions.

Business goals and strategy

In this example, the organization has set a goal to increase sales effectiveness. One
strategy the business uses to achieve this goal is to sell more high-margin products to
its top customers.

BI goals and priorities


To achieve the business strategy, the organization wants the salespeople to adopt data-
driven decision making. To this end, the BI team works with the sales teams to
understand their data needs, and to define long-term, strategic BI goals and priorities.

In this example, the BI goals and priorities are:

Data literacy: Improve the ability of the salespeople to make decisions based on
data and report visualizations.
Content ownership: Clarify who owns the data and reporting items for different
use cases.
Mentoring and user enablement: More effectively enable the salespeople with the
skills and tools to answer questions with data.
Governance: More effectively balance governance risk and enablement of the sales
teams.
Data engineering: Create a unified view of sales and profitability data for analytics.
7 Note

In this example, many other factors might be important. However, the organization
has identified these particular priorities to advance progress towards their business
goals and strategy.

Objectives

To achieve their BI goals, the BI team conducts tactical planning to identify and describe
their short-term objectives. The BI team creates a data literacy program for the
salespeople. Also, the BI team drafts a user enablement plan and an accountability plan
for salespeople who want to perform self-service analytics. These plans allow the
salespeople to request access to data after they've completed specific training materials
and signed a self-service user acknowledgment.

In this example, the BI objectives are:

Data literacy: Ensure that 90 percent of the salespeople complete the data literacy
program.
Content ownership: Adopt the managed self-service BI usage scenario, where
central teams manage central, certified Power BI datasets and reports. Some self-
service content creators can connect to these datasets for their own analysis and
reporting needs.
Mentoring and user enablement: Create a centralized portal to share training
resources, template files, and host weekly office hours Q&A sessions.
Governance: Draft a tenant-wide monitoring solution for user activities based on
data from the Power BI activity log (see the sketch after this list), and identify data
democratization and data discovery priorities for the next quarter.
Data engineering: Design and start a proof of concept for a medallion lakehouse
architecture to store the sales and profitability data.
Data security: Define and implement data security rules for BI solutions.
Information protection and data loss prevention (DLP): Define how content
creators should endorse content by promoting or certifying data items. Conduct
an investigation into whether to use sensitivity labels and implement DLP policies.
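
To make the governance objective more concrete, the following PowerShell sketch shows one possible way to start collecting user activity data from the Power BI activity log with the MicrosoftPowerBIMgmt module. It's a minimal illustration rather than the monitoring solution described in this scenario: the one-day window, the selected columns, and the output file path are assumptions made for the example.

```powershell
# Minimal sketch: export one day of Power BI activity events for later analysis.
# Assumes the MicrosoftPowerBIMgmt module is installed and the signed-in
# account has Power BI administrator permissions.
# Install-Module -Name MicrosoftPowerBIMgmt   # one-time setup

Connect-PowerBIServiceAccount

# The activity event API expects a start and end time within a single UTC day.
$day = (Get-Date).AddDays(-1).ToUniversalTime().ToString('yyyy-MM-dd')

$events = Get-PowerBIActivityEvent `
    -StartDateTime "$($day)T00:00:00" `
    -EndDateTime   "$($day)T23:59:59" |
    ConvertFrom-Json

# Property names vary by activity type; adjust the selected columns as needed.
$events |
    Select-Object CreationTime, UserId, Activity, ItemName, WorkspaceName |
    Export-Csv -Path ".\PowerBIActivity_$day.csv" -NoTypeInformation
```

Scheduling a script like this to run daily and landing the output in a central location is one simple way to build up the history that a tenant-wide monitoring solution needs.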

BI solutions

To achieve its objectives, the organization aims to design and deploy the following BI
solutions.
Central BI teams will work to store profitability data for customers and products in
a unified lakehouse.
Central BI teams will publish an enterprise semantic model as a Power BI dataset
that includes all data required for central reporting and key self-service reporting
scenarios.
Security rules applied to the Power BI dataset enforce that salespeople can only
access data for their assigned customers.
Central BI teams will create central reports that show aggregate sales and
profitability across regions and product groups. These central reports will support
more sophisticated analysis by using interactive visualizations.
Salespeople can connect directly to the BI dataset to perform personal BI and
answer specific, one-off business questions.

7 Note

This example describes a simple scenario for the purpose of explaining the three
planning levels of a BI strategy. In reality, your strategic BI goals, objectives, and BI
solutions are likely to be more complex.

Iteratively plan a BI strategy


Your BI strategy should evolve as you scale and as your organization experiences
change. For these reasons, planning your BI strategy is a continuous, iterative process.

Iteratively planning your BI strategy is beneficial for two reasons.

Incremental progress: Define your BI strategy by focusing on priorities and
breaking them into manageable parts. You can implement these parts in phases
and complete them incrementally over multiple continuous improvement cycles.
With each cycle, you can evaluate progress and lessons learned to sustainably grow
your strategy. In contrast, an all-in approach can become overwhelming, running
out of steam before it produces value.
Overcome change: Keep pace with changes to technology and your business
strategy. Iterative planning and implementation phases can help your strategy
remain relevant to business data needs. In contrast, detailed, multi-year strategic
plans can quickly become outdated.

It's unrealistic to expect all-encompassing, long-term planning to survive beyond 12-18
months. For instance, attempting to create an exhaustive three to five-year plan can
result in over-investment, a failure to keep up with change, and an inability to support
changes in your business strategy. Instead, you should define and operationalize your
strategies by using iterative approaches, with achievable outcomes in a maximum
period of 18 months.

There are many ways to iteratively plan your BI strategy. A common approach is to
schedule planning revisions over periods that align with existing planning processes in
the organization.

The following diagram depicts recommendations for how you can schedule planning
revisions.

The diagram depicts the following concepts for iteratively structuring your BI strategy
planning.

Item Description

Avoid detailed, long-term planning: Long-term, detailed plans can become outdated as
technology and business priorities change.

Strategic planning (every 12-18 months): This high-level planning focuses on aligning
business goals and BI goals. It's valuable to align this strategic planning with other
annualized business processes, like budgeting periods.

Tactical planning (every 1-3 months): Monthly or quarterly planning sessions focus on
evaluating and adjusting existing tactical plans. These sessions assess existing priorities
and tasks, which should take business feedback and changes in business or technology
into account.

Continuous improvement (every month): Monthly sessions focus on feedback and urgent
changes that impact ongoing planning. If necessary, decision makers can make decisions,
take corrective action, or influence ongoing planning.

How to plan a BI strategy


This series of articles presents a structured framework that helps you to plan the three
levels of your BI strategy, as depicted in the following diagram.

The diagram shows three levels of BI strategy planning, which are each described in
separate articles. We recommend that you read these articles in the following order.

1. BI strategic planning: This article describes how you can form a working team to
lead the initiative to define your BI strategy. The working team prepares workshops
with key stakeholders to understand and document the business strategy. The
team then assesses the effectiveness of BI in supporting the business strategy. This
assessment helps to define the strategic BI goals and priorities. After strategic
planning, the working team proceeds to tactical planning.
2. BI tactical planning: This article describes how the working team can identify
specific objectives to achieve the BI goals. As part of these objectives, the working
team creates a prioritized backlog of BI solutions. The team then defines what
success of the BI strategy will look like, and how to measure it. Finally, the working
team commits to revise tactical planning every quarter. After tactical planning, you
proceed to solution planning.
3. BI solution planning: This article describes how you design and build BI solutions
that support the BI objectives. You first assemble a project team that's responsible
for a solution in the prioritized solution backlog. The project team gathers
requirements to define the solution design. Next, it plans for deployment and
conducts a proof of concept (POC) of the solution to validate assumptions. If the
POC is successful, the project team creates and tests content with iterative cycles
that gradually onboard the user community. When ready, the project team deploys
the solution to production, supporting and monitoring it as needed.

 Tip

Before you read the BI strategy articles, we recommend that you're already familiar
with the Power BI adoption roadmap. The adoption roadmap describes
considerations to achieve Power BI adoption and a healthy data culture. These BI
strategy articles build upon the adoption roadmap.

Next steps
In the next article in this series, learn about BI strategic planning.
Power BI implementation planning: BI
strategic planning
Article • 09/11/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article helps you to define your business intelligence (BI) goals and priorities
through strategic planning. It's primarily targeted at:

BI and analytics directors or managers: Decision makers who are responsible for
overseeing the BI program and BI strategic planning.
Center of Excellence (COE), IT, and BI teams: The teams that are responsible for
tactical planning, and for measuring and monitoring progress toward the BI
objectives.
Subject matter experts (SMEs) and content owners and creators: The teams and
individuals that champion analytics in a team or department and conduct BI
solution planning. These teams and individuals are responsible for representing
the strategy and data needs of their business area when you define the BI strategy.

A BI strategy is a plan to implement, use, and manage data and analytics. As described
in the BI strategy overview article, your BI strategy is a subset of your data strategy. It
supports your business strategy by enabling business users to make decisions and take
actions by using data and BI solutions more effectively.

In short, this article describes how you can perform strategic planning to define the
goals and priorities of your BI strategy.

7 Note

In this series, we define goals as high-level descriptions of what you want to
achieve. In contrast, objectives are specific, actionable targets that help you achieve
a goal. While a goal describes the desired future state, objectives describe the path
to get there.

Further, we define solutions as processes or tools built to address specific business
needs for users. A solution can take many forms, such as a data pipeline, a data
lakehouse, a Power BI dataset or report.

The following high-level diagram depicts how to conduct BI strategic planning.

You take the following steps to define your strategic BI goals and priorities.

Step Description

1 Establish a working team to lead the BI strategy initiative.

2 Establish business alignment by conducting research and workshops to gather information
about the business objectives and data needs, and also existing BI solutions and initiatives.

3 Complete a current state assessment by running a series of strategic planning workshops
with key stakeholders.

4 Use the assessments and stakeholder input to decide on the strategic BI goals and
priorities.

This article describes each step of the BI strategic planning process.

Step 1: Assemble a working team


Your first step when defining your BI strategy is to establish a working team. A working
team leads the initiative to describe and plan the BI strategy. It's a cross-functional
group of experts that's enabled by the support of the executive sponsor. The group
should have a deep understanding of technical and business processes across the
organization.
Ideally, the working team should represent each department, business unit, and region
that's in scope for the initiative.

The following diagram depicts the roles that appoint the working team members, and
the types of members in a working team.

The diagram depicts the following roles.

Item Description

The executive sponsor typically provides top-down goals and support of the working
team, including funding. The executive sponsor may also appoint the working team
members together with the Center of Excellence (COE).

A COE or central BI team confers with the executive sponsor to identify and appoint
working team members. The COE may also provide guidance to the working team to
support their activities.

COE members form part of the working team. They're responsible for using their BI
expertise to drive BI information gathering and complete the current state assessments.

Business SMEs form part of the working team. They represent the interests of their
department or business unit. SMEs are responsible for driving business strategy
information gathering.

Functional team members, like those from a master data team, can form part of the
working team. They're responsible for clarifying strategically important processes during
information gathering.

Technical team members, like those from a data engineering team, can form part of the
working team. They're responsible for clarifying strategically important systems during
information gathering.

Security team members form part of the working team. They're responsible for identifying
and clarifying compliance, security, and privacy requirements during information
gathering.

Other IT team members can form part of the working team. They may identify other
technical requirements related to areas such as networking or application management.

7 Note

Not all roles depicted in the diagram have to be present in the working team.
Involve roles that are relevant for the scope and scale of your BI strategy initiative.

) Important

Defining the BI strategy is a significant undertaking. It's important that working
team members understand what's expected of them, and that they have the
resources and time to fulfill their role. An engaged executive sponsor can help by
clarifying priorities and ensuring that all required resources are available.

Working team members are typically appointed and guided by an executive sponsor of
BI and analytics, like the Power BI executive sponsor. Identifying and engaging an
executive sponsor is the first step of a BI strategy initiative.

Identify and engage an executive sponsor


A key role of the executive sponsor is to help formulate strategic BI goals and priorities.
The executive sponsor is an individual in a position of senior, strategic leadership who
has an invested stake in BI efforts and the BI strategy. They provide top-down guidance
and reinforcement by regularly promoting, motivating, and investing in the BI strategy.

In addition to the many activities listed in the adoption roadmap, an executive sponsor
plays a key role in BI strategic planning by:

Supporting the working team and COE: The executive sponsor takes a leading
role in defining the BI strategy by providing top-down guidance and
reinforcement.
Allocating resources: They confirm funding, staffing, roles, and responsibilities for
the working team.
Advocating for the initiative: They advance the strategic initiative by:
Legitimizing working team activities, particularly when the working team faces
resistance to change.
Promoting the BI strategy initiative with announcements or public endorsement.
Motivating action and change to progress the BI strategy initiative.
Representing the working team and sharing the BI strategic plan among C-level
executives to obtain executive feedback.
Making strategic decisions: They make decisions about priorities, goals, and
desired outcomes.

 Tip

Before assembling the working team, you should first identify and engage an
executive sponsor. Work through this checklist to ensure that you take the
necessary actions to ensure a sufficiently engaged executive sponsor.

Decide on the scope of the BI initiative


Because the working team contains members from different business areas, the
composition of the working team will depend on the scope of your BI initiative.
Typically, a BI strategy encompasses many areas of an organization. However, you
should refine this scope to define the specific areas it should address. You might limit
the scope of your BI strategy initiative for two reasons.

Practical reasons: A successful BI strategy starts small and simple, achieving
incremental growth as you experience success. When you first define the BI
strategy, focus on priority areas so that you achieve quick wins and sustainable,
incremental progress.
Strategic reasons: You can have distinct initiatives for different business areas. For
example, different parts of the organization may require independent BI strategies
because their business strategies are sufficiently different. These independent
strategies should align with an overall BI strategy, whenever possible.

As part of the scoping exercise, you should also plan how you'll set expectations with
stakeholders that the BI strategy will be defined iteratively.

Understand the working team purpose and responsibilities

Once you've identified and engaged an executive sponsor and clarified the scope of the
BI initiative, you then assemble the working team. This team leads the initiative to define
and plan the BI strategy.

The responsibilities of the working team include:

Planning and preparation: The working team should plan and prepare the various
aspects of the BI strategy initiative, such as:
Defining the timelines, deliverables, and milestones for the initiative.
Identifying stakeholders who can accurately describe the business goals and
objectives of their respective departments.
Communication with stakeholders, the executive sponsor, and each other.
Information gathering: The working team should gather sufficient information to
accurately assess the current state of the BI implementation. Examples of
information gathering activities include:
Conducting independent research about the business context and existing BI
solutions or initiatives.
Running interactive workshops with stakeholders to elicit input about business
objectives and data needs.
Documenting summarized findings and sharing conclusions.
Feedback and follow-up: The working team summarizes the findings from the
information gathered and proposes BI goals, priorities, and next steps. It gathers
feedback and follows up by:
Assessing the current state of BI adoption and implementation.
Creating a prioritized list of business data needs.
Presenting their conclusions and proposed next steps to stakeholders and
executive leadership.

7 Note

Because the working team communicates with stakeholders and business users,
they're often considered ambassadors for Fabric, Power BI, and other BI initiatives
in your organization. Ensure that your working team members understand and
accept this responsibility.

Assemble and prepare the working team


Working team members should include representatives from different departments and
business units. The following sections describe where you might source working team
members.
) Important

The working team should be composed of people with credibility in the
organization. These people should have knowledge of technical processes and
business processes. A working team that consists only of consultants could indicate
that the BI initiative isn't sufficiently understood or prioritized by the organization.

Center of Excellence members


You can source working team members from the Power BI Center of Excellence (COE), or
a similar group of multidisciplinary BI experts. The main responsibility of COE members
in the working team is to take advantage of their COE expertise to contribute to
information gathering. Further, COE members can share workshop findings back to the
COE to inform tactical planning decisions and actions.

Some organizations don't have a COE, possibly because the role of a COE is performed
by their BI team or IT. In this case, consider adding members from the BI team or IT to
the working team.

7 Note

Ensure that the working team doesn't consist only of members from your COE,
central IT, or BI teams. A BI strategy encompasses many areas of the organization,
and each of these areas should be well represented.

 Tip

If you don't have a COE, consider establishing one while assembling the working
team. Establishing a COE with the working team members could be a natural
evolution of the working team's activities, once the BI strategic vision is defined.
This approach is a good way to capture the knowledge and understanding that the
working team gained during the BI strategy initiative.

Business subject matter experts (SMEs)

Working team members should include business SMEs. The main responsibility of
business SMEs is to represent their business unit. You should include business SMEs in
the working team to avoid assumptions that result in a BI vision that doesn't work for
part of the organization.
Business SMEs in the working team must have a deep understanding of data needs and
business processes within their business unit or department. Ideally, they should also
understand the BI tools and technologies used to address these needs.

7 Note

It may not be practical to include every department, business unit, or region in the
working team. In this case, ensure that you dedicate effort to identifying
assumptions and exceptions for any unrepresented departments, business units, or
regions.

Champions network
Working team members can include users from your existing champions network in the
community of practice. A champion typically has exceptional knowledge of both BI tools
and the data needs of their business area. They're often leaders in the community of
practice. The main responsibility of champions in the working team is to promote
information gathering, and to involve their community of practice in the initiative.

7 Note

Including champions can help to avoid making assumptions that can result in an
inaccurate assessment of the current state of Power BI adoption and
implementation.

Functional, IT, and security team members


A working team may include members from specific functional areas, especially when
other expertise is required. The main responsibility of these members is to bring their
expertise about specific important topics to the BI strategy.

Here are some examples of when you might include members from functional areas in
the working team.

Functional teams: Include relevant representatives from functional teams in the
working team. For example, if your organization uses one or more large enterprise
resource planning systems (ERPs), then you should include an expert of these ERPs
in the working team. This individual would be responsible for clarifying how the
systems are used in the context of feedback provided during information
gathering.
IT teams: Include relevant IT experts in the working team. For example, your
organization may have specific networking requirements, or a complex scenario
involving multiple tenants. The IT experts would be responsible for describing
specific requirements, which is particularly important in tactical planning. They can
also help identify risks or pain points during information gathering.
Security teams: Include members from security teams in the working team. For
example, your organization may have specific compliance, security, or privacy
requirements. These individuals would be responsible for describing security-
related requirements when defining the future state. They can also help identify
compliance risks and security threats during information gathering.

Create a communication hub


Efficient and structured communication between working team members and
stakeholders is critical for your initiative to succeed. One way to improve communication
is by centralizing it in a communication hub. A communication hub is a place that you
use to consolidate communication, documentation, and planning about the BI strategy.
It also helps promote collaboration between the working team and stakeholders.

The following diagram depicts how to use a communication hub to centralize BI
strategic planning and input.

The diagram conveys the following concepts or processes.


Item Description

A communication hub is a central location in Microsoft Teams or a similar platform. Its
purpose is to centralize communication, documentation, and planning.

The working team creates and manages different channels for each business area. The
separation by business area should correspond to the top-level structure of the initiative.
Each channel contains a repository of summarized findings, timelines, and discussions
about the BI strategy.

Designated working team members curate and moderate the communication hub.
Moderation ensures that the communication hub remains useful and current.

Key stakeholders and working team members actively participate in the communication
hub.

The executive sponsor has limited participation. For example, they might resolve disputes as
they arise.

 Tip

We recommend that you use the communication hub beyond the strategic
planning workshops. Because the communication hub is a source of regular input
and discussion from key business stakeholders, it can help your team keep the BI
strategy relevant and up to date.

Communicate consistently and effectively


The working team should maintain and follow a concise, organized, and transparent
process to define the BI strategy by using the communication hub.

Here are some recommendations to get the most value from the communication hub.

Have well-defined working team responsibilities: Ensure that the working team
has well-defined responsibilities for the communication hub, such as curating and
moderating it. Having active and involved moderation ensures that the
communication hub remains current, informative, and useful for the working team
and key stakeholders.
Organize discussions and files: Ensure that it's easy to find files or previous
discussions in the communication hub by creating and maintaining a logical
structure. An organized communication hub encourages its effective use.
Be concise in documents and posts: Avoid overwhelming people with large
volumes of information. Key stakeholders have limited time, so encourage people
to publish posts and documents to the communication hub that are concise and
easy to understand.
Be consistent in communication: Ensure that the communication hub is used
instead of alternative channels, like email. You should also strive to ensure that
documents and updates are consistent in tone, format, and length.
Be transparent and foster a collaborative environment: An effective
communication hub has an active, collaborative social environment. It requires
transparency from the working team who should be sharing regular updates and
findings throughout the initiative.

) Important

Success of strategic planning relies on effective communication. Promoting concise
and consistent communication benefits not only strategic planning, but also the
broader adoption and implementation of BI initiatives across your organization.

Checklist - When establishing a working team, key decisions and actions include:

" Involve an executive sponsor: If there isn't an executive sponsor, identify and


engage one before assembling the working team.
" Decide on the scope of the BI initiative: Together with the executive sponsor,
determine which business areas the BI strategy will cover.
" Communicate the initiative: Have the executive sponsor raise awareness
throughout the organization of the initiative to define the BI strategy.
" Assemble the working team: Appoint members who can provide sufficient
coverage of the relevant business areas, technical areas, and compliance areas.
" Set expectations of the working team members: Clarify the time and effort
requirements, and ensure that team members understand what's expected of them
(and that they have the time and resources to fulfill their role).
" Clarify working team roles and responsibilities: Ensure that everyone in the
working team knows what they should do to drive successful strategic planning.

Step 2: Plan workshops and conduct research


After you assemble the working team (step 1), the team can start the following
activities to establish business alignment.
Conduct independent research: The working team performs research into the
business context and existing BI solutions or initiatives.
Plan workshops: The working team prepares strategic planning workshops to
collect input from key stakeholders about their business objectives and data needs.

These activities are prerequisites for the workshops and complete assessments (step 3).

Conduct independent research


The working team conducts research to document the current state of BI adoption and
implementation. This research is used to complete assessments, but it also helps the
working team to prepare for the workshops.

Research the business context


To define an effective BI strategy, the working team must understand the business goals.
By understanding the business goals, the working team has the right business context to
describe why people use data and BI tools, and comprehension of their desired
outcomes. You should define data needs and use cases with respect to the business
processes they support and the objectives they address.

Business SMEs in the working team should use their expertise to lead the effort to
describe the business context. However, it's important that all members of the working
team participate. It's essential that the working team has a shared understanding of the
business strategy. That way, the BI strategy focuses on addressing business needs
instead of solving abstract, technical problems.

Research existing BI initiatives and solutions

To define an effective BI strategy, the working team must also understand the current
state of BI adoption and implementation. The current state describes how people use
existing data and BI tools, and what data and tools are strategically important. You
should identify the existing BI initiatives and solutions with respect to the business
processes they support and objectives they address. These solutions help illustrate what
business users do today to address their data needs, so that you can assess whether it's
effective.

COE members in the working team should use their expertise to lead the effort to
describe the current state of BI adoption and implementation. An example of an activity
that helps this effort is tenant-level auditing. Auditing allows the working team to collect
an inventory of current BI initiatives and solutions to prepare for workshops.
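
For example, the working team might use a short script like the following PowerShell sketch to build a rough inventory of workspaces and their contents as one input to this research. It assumes the MicrosoftPowerBIMgmt module and Power BI administrator permissions; the output file name and selected item types are arbitrary choices for illustration.

```powershell
# Minimal sketch: inventory workspaces and the reports and datasets they contain.
# Requires the MicrosoftPowerBIMgmt module and a Power BI administrator account.
Connect-PowerBIServiceAccount

# -Scope Organization reads all workspaces in the tenant; -Include All also
# returns the items in each workspace. This call can be slow on large tenants.
$workspaces = Get-PowerBIWorkspace -Scope Organization -Include All -All

$inventory = foreach ($ws in $workspaces) {
    foreach ($report in $ws.Reports) {
        [pscustomobject]@{ Workspace = $ws.Name; ItemType = 'Report'; ItemName = $report.Name }
    }
    foreach ($dataset in $ws.Datasets) {
        [pscustomobject]@{ Workspace = $ws.Name; ItemType = 'Dataset'; ItemName = $dataset.Name }
    }
}

$inventory | Export-Csv -Path '.\BIContentInventory.csv' -NoTypeInformation
```

An inventory like this is only a starting point. Interviews and workshops are still needed to understand which of these items are strategically important and how they're actually used.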

) Important

Ensure that the working team has a good understanding of the compliance
requirements and information protection needed in your organization. Document
these requirements during independent research, and ensure that they're well
understood by everyone in the working team.

Topics to address with independent research


The following diagram depicts topics typically addressed with independent research.

The diagram depicts the following concepts and processes.


Item Description

The working team researches the business context to document and understand the
business strategy. This research is led by business SMEs for their respective departments or
business units.

The working team researches the business context by first identifying the business goals.

The working team identifies specific business objectives that departments or business units
have to make progress toward with their goals.

Business processes are initiatives or plans created to work toward business objectives. The
working team identifies the processes in place to help achieve the business objectives.

Business data needs are the data, tools, and solutions required to support business
processes and strategic objectives. The working team identifies the business data needs.

The working team researches any existing BI initiatives and solutions to understand the
current state of BI adoption and implementation. COE members or BI experts lead this
research.

The working team investigates strategically important BI solutions to understand how the
organization currently addresses business data needs. Specifically, the working team
identifies who the business users are and how they use the solutions. The working team
also documents key data questions or problems that these solutions address, as well as
potential flaws, opportunities, and inefficiencies.

The working team surveys and documents the existing tools and technologies that the
organization uses to address business data needs.

The working team identifies past or parallel initiatives to define the BI strategy. Past
initiatives might contain valuable learnings, while parallel initiatives may be combined to
avoid duplication of effort.

The working team identifies strategically important KPIs and master data. These KPIs and
master data are critical to enabling the business to achieve their business objectives.

The working team assesses the usage and adoption of strategically important BI solutions
among the user community.

The working team identifies any potential governance and compliance risks identified in
existing BI solutions.

) Important

The topics and examples presented in this section are intended to guide you in
conducting your own independent research. These topics aren't an exhaustive or
required list. Use these topics as inspiration. We recommend that you use the
maturity levels documented in the Power BI adoption roadmap to help you
evaluate and prioritize areas that are most important for your organization and its
business context.

Taken together, research on the business context and existing BI initiatives and solutions
describe the current state of BI adoption and implementation. The working team verifies
this research in workshops when capturing stakeholder input.

Plan workshops
While performing independent research, you should plan workshops with stakeholders.
The purpose of these workshops is to gather input about the business objectives and
data needs. You also validate the conclusions from independent research in these
workshops.

7 Note

This article uses the term workshops to describe interactive meetings with key
stakeholders. The objective of the workshops is to gather input so you can
accurately describe and understand the objectives and data needs.

The following sections describe key considerations for planning and preparing
workshops.

Involve the correct key stakeholders


Successful BI strategic planning requires the working team to involve the right
stakeholders in the workshops. The working team should identify key stakeholders who
have sufficient knowledge and credibility to represent their business area. In each
workshop, the role of these stakeholders is to participate in discussions that are led by
the working team. Stakeholders need to describe the business objectives and data
needs for their areas, and the current state of data and analytics initiatives to support
the business objectives.

Identifying the right stakeholders is essential in order to run successful workshops and
gain an accurate understanding of the business areas in scope.

2 Warning

If you engage the wrong stakeholders, there's significant risk that the BI strategy
won't align with strategic business goals or support business users to achieve their
objectives.

The following diagram depicts the process to identify and inform the right key
stakeholders about the BI strategy initiative.

The diagram depicts the following steps.

Item Description

List the functional areas (departments and business units) in scope for the BI strategy
initiative.

For each functional area, identify two to three candidate key stakeholder representatives.

Engage with stakeholders to inform them of the initiative, and validate their selection. At
this stage, candidate stakeholders may decline to participate and might suggest alternative
people.

Select a final list of key stakeholders.

The executive sponsor informs key stakeholders and formally requests their participation.
All further communication with the key stakeholders is posted to the communication hub.

When you initially request key stakeholder participation, ensure that you:

Obtain approval from their manager, as appropriate.


Explain the scope of the initiative, and its objectives, timelines, and deliverables.
Describe specifically why they've been asked to participate and what the desired
outcomes are.
Outline the necessary time commitment and participation that you need from
them.
Communicate clearly and concisely.

) Important

Often, top-down BI initiatives limit stakeholders to executives and decision makers.


While these individuals have a significant role to play (to obtain sufficient executive
buy-in and strategic alignment), they aren't necessarily the right stakeholders. In
this scenario, you risk defining a strategy isolated from the reality experienced by
business users. This misalignment can result in strategies and solutions that don't
meet the needs of day-to-day users, and consequentially they aren't used.

To mitigate this risk, ensure that you involve stakeholders from various levels of the
organization. When selecting key stakeholders, engage with different teams to
briefly introduce the initiative and gather input on who the right stakeholders
might be. This level of engagement not only raises awareness of the initiative, but it
also enables you to involve the right people more easily.

Checklist - When planning workshops and conducting research, key decisions and
actions include:

" Agree on communication values: Encourage all working team members to engage


with concise, clear, and consistent communication throughout the initiative.
" Set up the communication hub: Create a central, structured hub for all
communication, documentation, and planning. Document how the hub can be used
effectively.
" Research the business context: With assistance from business SMEs, describe the
business objectives for each of the business areas that are in scope.
" Research existing BI initiatives and solutions: Conduct tenant-level auditing and
targeted investigation of strategically important solutions to describe the current
state of BI adoption and implementation.
" Select the right key stakeholders: Engage representatives from each business area
who have sufficient knowledge and credibility.
" Invite key stakeholders to the communication hub: When ready, onboard the key
stakeholders to the communication hub and send meeting invitations for the
workshops.

Step 3: Run workshops and complete assessments

After you complete independent research and workshop planning (step 2), you run the
workshops and complete the assessments. The goal of the workshops is to use
stakeholder input to document:
The business goals, strategy, and data needs of the in-scope business areas.
The current state of BI adoption and implementation for the in-scope business
areas.

The working team combines the stakeholder input together with independent research.
These inputs should provide the working team with a sufficient understanding of the
business strategy and the current state of BI adoption and implementation.

With this understanding, the working team evaluates the maturity and effectiveness of
the current state of BI adoption and implementation. This evaluation is summarized in a
data culture assessment and a technical assessment, which are the key outputs of the
workshops. The objective of these assessments is to clearly identify weaknesses and
opportunities in both the data culture and the technical areas that should be prioritized for
the BI strategy.

) Important

If no working team members have experience running and moderating interactive
meetings or workshops, the working team should first undertake training or seek
support to help run the workshops.

Run workshops
The workshops are organized as a series of interactive sessions structured to effectively
elicit and collect information from stakeholders. The number of sessions and how
they're conducted will depend on the number of stakeholders, their location, time
availability, and other factors.

The following sections describe the types of sessions you typically conduct when
running workshops.

Introduction session
The introduction session is run by the working team, and it should involve all
stakeholders and the executive sponsor. It introduces the initiative and clarifies the
scope, objectives, timeline, and deliverables.

The objective of this session is to set expectations about the purpose of the workshops
and what's needed for the BI initiative to succeed.

Workshops
Workshops are interactive meetings between a few members of the working team and
the key stakeholders. A member of the working team moderates the discussion, posing
questions to stakeholders to elicit input. Stakeholders provide input about their business
strategy, existing BI initiatives and solutions, and their data needs.

7 Note

While a moderator should be proficient in eliciting information, they don't require
deep domain knowledge. Ideally, all workshops for a given business area should be
led by the same moderator.

The objective of the workshops is to collect sufficient input from stakeholders to
accurately describe their business objectives and data needs. A successful workshop
concludes with stakeholders feeling that the working team members understand the
business objectives and data needs. This stakeholder input is used together with the
working team's independent research to complete an assessment of the current state of
BI adoption and implementation.

Here are some practical considerations to help you plan and organize effective
workshops.

Keep workshop attendance focused: Don't saturate meetings with too many
attendees. Involving too many people may result in prolonged discussions, or
discussions where only the most assertive personalities provide input.
Keep the discussion focused: Take any debates, excessively specific questions, or
remarks offline to discuss later in short one-on-one meetings. Similarly, identify
and address any resistance directly, and involve the executive sponsor whenever
necessary. Keeping the discussion focused ensures that workshops concentrate on
the overall discussion of strategic planning, and they don't get distracted by small
details.
Be flexible with preparation: Depending on time and preference, you can use
prepared material to conduct more effective discussion. However, understand that
discussions may go in unexpected directions. If a session departs from your
prepared material but still produces helpful input, don't force the discussion back
to a fixed agenda. When stakeholders are focused on a different point, it means
that it's important. Be flexible by addressing these points to capture the most
valuable input.
Document stakeholder input: During the workshops, you should document
stakeholders' inputs about their business objectives and the BI strategy.
Document business data needs: One outcome of workshop information gathering
is a high-level list of the unmet business data needs. You should first organize the
list from the highest to lowest priority. Determine these priorities based on
stakeholder input, and the impact the list items have on business effectiveness.

7 Note

The list of prioritized data needs is a key outcome of strategic planning that later
facilitates tactical planning and solution planning.

Complete assessments
The working team should combine independent research and the stakeholder input into
summarized findings. These objective findings should convey an accurate description of
the current state of BI adoption and implementation (for conciseness, referred to as the
current state). For each business area in scope, these findings should describe:

Business goals.
Business objectives to make progress towards their goals.
Business processes and initiatives to achieve their objectives.
Business data needs to support the processes and initiatives.
BI tools and solutions that people use to address their business data needs.
How people use the tools and solutions, and any challenges that prevent them
from using the tools and solutions effectively.

With an understanding of the current state, the working team should then proceed to
assess the overall BI maturity and its effectiveness in supporting the business strategy.
These assessments address specific data culture and technical areas. They also help you
to define your priorities by identifying weaknesses and opportunities that you'll
prioritize in your BI strategy. To address these weaknesses and opportunities, you define
high-level, strategic BI goals.

To help identify priorities, the working team conducts two types of assessment: a data
culture assessment and a technical assessment.
Contents of an assessment
Making a concise and accurate assessment of the current state is essential. Assessments
should highlight the strengths and challenges of the organization's ability to use data to
drive decisions and take actions.

An effective maturity assessment consists of the following content.

Maturity level: Evaluate the overall maturity level on a five-point scale from 100
(initial) to 500 (efficient). The score represents a high-level, subjective assessment
by the working team of the effectiveness in different areas.
Business cases: Justify and illustrate the maturity level scores in the assessment.
Concrete examples include actions, tools, and processes taken by business users to
achieve their business objectives with data. The working team uses business cases
together with summarized findings to support their assessment. A business case
typically consists of:
A clear explanation of the desired outcome, and business data needs the
current process aims to address.
An as-is description of how the general process is currently done.
Challenges, risks, or inefficiencies in the current process.
Supplemental information: Support the conclusions, or documents significant
details that are relevant to the BI and business strategy. The working team
documents supplemental information to support later decision-making and tactical
planning.

Complete the data culture assessment

The data culture assessment evaluates the current state of BI adoption. In order to
complete this assessment, the working team performs the following tasks.

1. Review summarized findings: The working team reviews the inputs collected from
conducting independent research and running workshops.
2. Evaluate the maturity levels: The working team proceeds through each of the data
culture areas described in this section. Using the Power BI adoption roadmap, they
evaluate the effectiveness of each area by assigning a maturity score.
3. Justify the subjective evaluation with objective evidence: The working team
describes several key business cases and supporting information that justifies their
evaluation of the maturity scores for each area.
4. Identify weaknesses and opportunities: The working team highlights or
documents specific findings that could reflect a particular strength or challenge in
the organization's data culture. It can be the lowest-scoring or highest-scoring
areas, or any areas that they feel have a high impact on the organization's data
culture. These key areas will be used to identify the BI goals and priorities.

 Tip

Use the Power BI adoption roadmap to guide you when completing the data
culture assessment. Also, consider other factors specific to your organizational
culture and the ways your users work. If you're looking for more information,
consult other reputable sources like the Data Management Body of Knowledge
(DMBOK) .

The following diagram depicts how the working team assesses the organizational data
culture in BI strategic planning for specific data culture areas.

The diagram depicts the following data culture areas.

Item Description

Business alignment: How well the data culture and data strategy enable business users to
achieve business objectives.

Executive sponsorship: How effectively a person of sufficient credibility, authority, and
influence supports BI solutions and initiatives to drive successful adoption.

Center of Excellence (COE): How effectively a central BI team enables the user community,
and whether this team has filled all the COE roles.

Data literacy: How effectively users are able to read, interpret, and use data to form
opinions and make decisions.

Data discovery: How discoverable the right data is, at the right time, for the people who
need it.

Data democratization: Whether data is put in the hands of users who are responsible for
solving business problems.

Content ownership and management: Whether there's a clear vision for centralized and
decentralized ways that content creators manage data (such as data models), and how
they're supported by the COE.

Content delivery scope: Whether there's a clear vision of who uses, or consumes,
analytical content (such as reports), and how they're supported by the COE.

Mentoring and user enablement: Whether end users have the resources and training to
effectively use data and improve their data literacy.

Community of practice: How effectively people with a common interest can interact with
and help each other on a voluntary basis.

User support: How effectively users can get help when data, tool, or process issues arise.

Governance: The effectiveness of processes for monitoring user behavior to empower
users, maintain regulatory requirements, and fulfill internal requirements.

System oversight: The effectiveness of everyday administrative activity concerned with
enacting governance guidelines, empowering users, and facilitating adoption.

Change management: How effectively change is handled, including procedures that
safeguard against disruption and productivity loss due to changes in solutions or
processes.

To evaluate these data culture areas, see the Power BI adoption roadmap. Specifically,
refer to the maturity level sections and Questions to ask sections, which guide you to
perform assessments.

Complete the technical assessment

The technical assessment evaluates technical areas that strategically enable the success
of BI implementation. The purpose of this assessment isn't to audit individual technical
solutions or assess the entirety of technical areas related to BI. Instead, the working
team describes the maturity level and general effectiveness for strategically critical areas,
like those described in this section. To complete this assessment, the working team
performs the following tasks.

1. Identify technical areas: The working team identifies specific technical areas that
are relevant and strategically important to the success of BI to include in their
assessment. Some examples of technical areas are described in this section and
shown in the following diagram.
2. Define maturity levels: The working team defines the maturity levels to score the
high-level effectiveness for each technical area in the assessment. These maturity
levels should follow a consistent scale, such as those found in the template
provided in the maturity levels of the Power BI adoption roadmap.
3. Review summarized findings: The working team reviews the collected inputs by
conducting independent research and running workshops.
4. Evaluate the maturity levels: The working team evaluates the effectiveness of each
area by assigning a maturity score.
5. Justify the subjective evaluation with objective evidence: The working team
describes several key business cases and supporting information that justifies their
evaluation of the maturity scores for each area.
6. Identify weaknesses and opportunities: The working team highlights or
documents specific findings that could reflect a particular strength or challenge in
the organization's BI implementation. It can be the lowest-scoring technical areas,
or any areas that they feel have a high impact on the organization's strategic
success with implementing BI tools and processes. These key areas will be used to
identify the BI goals and priorities.

The following diagram depicts technical areas that you might assess when defining your
BI strategy.

7 Note

If you're adopting Microsoft Fabric, be aware that many of these areas are
represented as separate parts of the Fabric analytics platform.

The diagram depicts the following technical areas.


Item Description

Data integration: How effectively tools or systems connect to, ingest, and transform data
from various sources to create harmonized views for analytical purposes. Evaluating data
integration means equally assessing enterprise data pipelines and self-service data
integration solutions, like dataflows in Power BI and Fabric.

Data engineering: How effective the current architectures are at supporting analytical use
cases and adapting to changes in business data needs.

Data science: Whether the organization can use exploratory and sophisticated techniques
to discover new insights and benefit from predictive or prescriptive analytics.

Data warehousing: The effectiveness of relational databases in modeling business logic to
support downstream analytical use cases. Data warehousing is often considered together
with data engineering.

Real-time analytics: Whether the organization can correctly identify, capture, and use low
latency data to provide an up-to-date picture of systems and processes.

Data visualization: Whether visualizations can be used effectively to reduce the time-to-
action of reporting experiences for business users. Effective visualizations follow best
practices, directing attention to important, actionable elements, enabling users to
investigate deeper or take the correct actions.

Actions and automation: How consistently and effectively tasks are automated and data
alerts are used to enable manual intervention at critical moments in a system or process.
You should also evaluate how actionable BI solutions are, meaning how effectively and
directly they enable report users to take the right actions at the right time.

Lifecycle management: How effectively content creators can collaborate to manage and
track changes in BI solutions for consistent, regular releases or updates.

Data security: Whether data assets comply with regulatory and organizational policies to
ensure that unauthorized people can't view, access, or share data. Data security is typically
evaluated together with information protection and data loss prevention.

Information protection: How well the organization mitigates risk by identifying and
classifying sensitive information by using tools like sensitivity labels. Information
protection is typically evaluated together with data security and data loss prevention.

Data loss prevention (DLP): Whether the organization can proactively prevent data from
leaving the organization. For example, by using DLP policies based on a sensitivity label or
sensitive information type. DLP is typically evaluated together with data security and
information protection.

Master data management: Whether quantitative fields and business attributes are
effectively managed, centrally documented, and uniformly maintained across the
organization.

Data quality: Whether BI solutions and data are trustworthy, complete, and accurate
according to the business user community.

Artificial intelligence (AI): Whether the organization makes effective use of generative AI
tools and models to enhance productivity in BI processes. Additionally, whether AI is used
to deliver valuable insights in analytics workloads.

7 Note

The technical areas depicted in the diagram aren't all necessarily part of BI; instead,
some are strategic enablers of a successful BI implementation. Further, these areas
don't represent an exhaustive list. Be sure to identify and assess the technical areas
that are strategically important for your organization.

U Caution

When performing the technical assessment, don't assess details beyond the scope
of strategic planning. Ensure that all activities that investigate the BI
implementation focus directly on defining and evaluating the current state to
define your BI goals and priorities.

Getting too detailed in the technical assessment risks diluting key messages about
the BI strategy. Always keep in mind the big picture questions like: Where do we
want to go? and How can BI effectively support the business?

Checklist - When running workshops and completing assessments, key decisions and
actions include:

" Decide and communicate the workshop format: Outline the number of sessions,
their length, participants, and other relevant details for participating stakeholders.
" Nominate a moderator from the working team: Decide who from the working
team will moderate the workshops. Their goal is to guide discussions and elicit
information.
" Collect input: Organize the workshops so that you collect sufficient input about the
business strategy and the current state of BI implementation and adoption.
" Summarize findings: Document the inputs that justify the assessments. Include
specific business cases that illustrate strategically important processes and
solutions.
" Complete the maturity assessments: Complete the relevant assessments for the
current state of BI adoption and implementation.
" Document business cases and supporting information: Objectively document the
evidence used to justify the maturity levels you assign in each assessment.

Step 4: Decide on the BI goals and priorities


After you run the workshops and complete assessments (step 3), the working team,
together with the executive sponsor, decide on the BI goals and priorities to address in
tactical planning.

7 Note

While the working team should be involved in clarifying and documenting goals
and priorities, it isn't responsible for defining them. The executive sponsor and
equivalent decision makers own these decisions. The executive sponsor and other
decision makers have the authority to decide and allocate resources to deliver on
these goals and priorities.

Decide on strategic priorities


The assessments should clearly identify weaknesses and opportunities in the data
culture or technical areas to prioritize for the BI strategy. From the weaknesses and
opportunities in the assessments, work with key decision makers, like your executive
sponsor, to decide which of the areas are priorities you'll focus on in the short-term. By
prioritizing, you aim for sustainable, incremental progress toward your BI goals.

Decide on strategic BI goals


In the last step of BI strategic planning, for each of the prioritized areas, the working
team usually defines several goals to work toward in the next 12-18 months. Typically,
these goals represent the desired outcomes and maturity level growth.
 Tip

For data culture areas, we recommend that you define your goals by using the
Power BI adoption roadmap. It can help you to identify the maturity level you
should aim to achieve for your desired future state. However, it's not realistic to aim
for a level 500 for each category. Instead, aim for an achievable maturity level
increase in the next planning period.

For technical areas, we recommend that you define your goals by using the maturity
scales described in the technical assessment by the working team.

Examples of strategic BI goals


Here are some examples of strategic BI goals.

Improve executive support of BI initiatives and solutions.
Improve the effectiveness of the COE.
Create a clear content ownership strategy and structure.
Better understand and monitor user behavior with data to improve governance.
Move from descriptive analytics solutions to predictive analytics solutions.
Improve decision-making processes with more effective data visualization.
Expand the number of effective content creators to improve time-to-delivery and
the business value obtained from BI solutions.

Before you conclude strategic planning, the working team should align the decided BI
goals and priorities with stakeholders and executives.

Align with stakeholders and executives


It's critical that the final assessments and decisions be shared with stakeholders. In the
communication hub, stakeholders can asynchronously follow up on the progress of
these deliverables and contribute feedback. However, you should conclude strategic
planning by presenting the assessments and priorities back to stakeholders and
executives.

The following sections describe how you align with stakeholders and executives.

Conduct an alignment session

The alignment session is the final meeting for each business area. Each alignment
session involves key stakeholders and the executive sponsor, who review the
assessments made by the working team.

The objective of this session is to achieve consensus about the conclusions and
assessments, and the agreed BI goals and priorities.

7 Note

Ensure that stakeholders understand that the BI strategy isn't final and unchanging.
Emphasize that the BI strategy evolves alongside the business and technology.
Ideally, the same stakeholders will continue to take part in this iterative exercise.

Prepare and present an executive summary

The executive summary is typically delivered by the executive sponsor to other
executives responsible for the overall business strategy. The executive sponsor describes
the assessment results and outlines the key challenges and opportunities that justify the
priority decisions. Importantly, the executive sponsor describes the next steps to define
the future state.

The objective of this session is to obtain executive alignment and approval on the
outcomes of strategic planning and the next steps.

Proceed with tactical planning


Once you've identified your BI goals and priorities, you have concluded strategic
planning. The next step is to identify objectives to help you make progress towards your
BI goals, which you do by conducting tactical planning.

Checklist – When deciding BI goals and priorities, key decisions and actions include:

" Curate a list of business data needs and opportunities: Create a consolidated,
prioritized list of the business data needs, pain points, and opportunities. This
output is used in tactical planning.
" Decide on the strategic BI goals: Work with your executive sponsor and other
decision makers to identify high-level BI goals for the next 12-18 months.
" Align with stakeholders: Obtain consensus agreement that the assessments and
other deliverables are accurate.
" Align with executives: Obtain approvals on the outcomes of strategic planning and
the next steps.

Next steps
In the next article in this series, learn how to conduct BI tactical planning.
Power BI implementation planning: BI
tactical planning
Article • 09/11/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article helps you to identify your business intelligence (BI) objectives and form
actionable plans to achieve incremental progress toward your strategic BI goals. It's
primarily targeted at:

BI and analytics directors or managers: Decision makers who are responsible for
overseeing the BI program and BI strategic planning.
Center of Excellence (COE), IT, and BI teams: The teams that are responsible for
tactical planning, and for measuring and monitoring progress toward the BI
objectives.
Subject matter experts (SMEs) and content owners and creators: The teams and
individuals that champion analytics in a department and conduct BI solution
planning.

A BI strategy is a plan to implement, use, and manage data and analytics. You define
your BI strategy by starting with BI strategic planning. In strategic planning, you
assemble a working team to research and assess the current state of BI adoption and
implementation to identify your BI goals and priorities. To work toward your BI goals,
the working team defines specific objectives by doing tactical planning. Tactical planning
is based on the assessments done and the BI goals and priorities decided during BI
strategic planning.

In short, this article describes how the working team can perform tactical planning to
define objectives and success for the BI strategy. It also describes how the working team
should prepare to iteratively reevaluate and assess this planning.

7 Note

In this series, we define goals as high-level descriptions of what you want to
achieve. In contrast, objectives are specific, actionable targets that help you achieve
a goal. While a goal describes the desired future state, objectives describe the path
to get there.

Further, we define solutions as processes or tools built to address specific business
needs for users. A solution can take many forms, such as a data pipeline, a data
lakehouse, a Power BI dataset or report.

The following high-level diagram depicts how to conduct BI tactical planning.

You take the following steps to conduct BI tactical planning.

Step Description

1 Identify and describe specific, actionable objectives for your BI goals and priorities.

2 Define what success will look like, and how you'll measure progress toward your desired
outcomes.

3 Prepare to reevaluate and assess planning with continuous improvement cycles.

Step 1: Identify and describe objectives


Your first step to conduct tactical planning involves identifying specific objectives for
your prioritized BI goals. This process shifts the focus from strategic planning to tactical
planning.
Objectives describe paths to your desired future state. They're specific, actionable, and
achievable within a defined period of time. To make progress toward the desired future
state, you should identify objectives that address the business data needs, BI goals, and
priorities that you identified during BI strategic planning.

 Tip

Refer to your assessments from BI strategic planning when you identify and
describe your objectives. Be sure to focus on the key weaknesses to improve, and
the opportunities to leverage.

Ensure that your objectives:

Address one or more of the prioritized goals of your BI strategy.
Result in measurable and achievable outcomes within the tactical planning period.
Relate to both your business strategy and BI strategy.
Follow consistent criteria, like the SMART system, and that they're:
Specific: Target an explicit area of improvement.
Measurable: Use an indicator to monitor progress.
Assignable: Specify who's responsible for the objective.
Realistic: State whether you can achieve the objective, given the current level of
organizational readiness and available resources.
Time-related: Specify when you can achieve the results.

To start, we recommend that you first address time-sensitive, quick-win, and high-
impact objectives.

Identify time-sensitive objectives


Some objectives have a defined window of action, meaning that they must be addressed
before a deadline or specific event occurs. Typically, these objectives address problems
that don't currently impact the business, but will impact the business at some time in
the future (if left unaddressed). Alternatively, these objectives can be linked to
technology or business deadlines. You should identify and address these objectives
before the time window of action expires.
Here are some examples of time-sensitive objectives.

Tools, systems, or features that have a known decommission date.
Business processes or initiatives that have a deadline.
Known flaws or risks inherent in existing solutions or processes.
Processes with a high degree of manual data handling and capacity constraints.
The conclusion of a fiscal or budgeting period.

Identify quick-win and high-impact objectives


When assessing timelines and priorities, you should identify quick wins. Quick wins are
objectives that deliver more benefit than the effort required to implement them. They
typically have few dependencies, and they don't involve significant new designs or
processes. The key benefit of a quick win is that it can quickly demonstrate a return on
the BI strategic initiative for the business. This return creates momentum and can result
in fast agreement to support larger initiatives.

) Important

When you identify objectives, also consider how you can objectively evaluate and
measure their impact. It's critical that you accurately describe the (potential) return
on investment (ROI) for BI initiatives in order to attain sufficient executive support
and resources. You can assess this impact together with your measures of success
for your BI strategy.

Quick wins may also be high-impact objectives. In this case, they're initiatives or
solutions that have the potential to make substantial advancements across many areas
of the business. Typically, identifying high-impact objectives is essential to progress
further in your BI strategy because they can prompt other, downstream objectives.

Here are some examples of quick-win or high-impact objectives.

Minor changes that improve existing solutions for a large number of end users.
Solution audits and optimizations that improve performance and reduce capacity
usage and costs.
Training initiatives for key users.
Setting up a centralized portal to consolidate a user community of practice.
Creating shared, central themes, templates, and design guidelines for reports.

Identify other objectives


Once you've identified time-sensitive, high-priority, and quick-win objectives, you
should next identify and describe objectives for adoption, governance, and
implementation. Identify objectives that you can achieve in the next quarter and that
directly address the weaknesses and opportunities that you identified in your data
culture and technical assessments. Describe how achieving these objectives will help
make progress toward the BI goals in the next 12-18 months.

 Tip

Refer to the relevant sections of the Power BI adoption roadmap and the Power BI
implementation planning to help you identify and describe your objectives.

) Important

When identifying your objectives, remember that the successful implementation of


your BI strategy is more likely when you aim for an evolution instead of a revolution
from your current state. Evolution implies that you strive for gradual change over
time. Small but consistent, sustained progress is better than an abundance of
change that risks disruption to ongoing activities.

Adoption
First, identify your adoption objectives. These objectives can address many areas, but
typically describe the actions you'll take to improve overall organizational adoption and
data culture.

Here are some examples of adoption objectives.

For each enterprise BI solution, document the specific business questions,
objectives, and processes that it supports.
Increase the proportion of business users who respond positively to the question
BI tools and initiatives help me achieve my business objectives.
Create a survey to measure business user data literacy and a training plan to
improve data literacy.
Increase the use of endorsed, centralized Power BI datasets in managed self-
service BI usage scenarios.
Create a process for self-service content creators so they can request mentoring or
support from the COE or the central BI team.

Governance
Next, identify your governance objectives. These objectives should describe how you'll
sustainably enable users to answer business problems with data, while mitigating risks
to data security or compliance. These governance objectives should be motivated by,
and closely tied to, your adoption objectives.

Here are some examples of governance objectives.

Identify overall usage events, including abnormal or unexpected events, such as a
high amount of data export.
Reduce the number of data exports from reports.
Perform a content delivery audit to minimize or eliminate content distribution from
workspaces in favor of publishing content in Power BI apps.
Reduce the number of reports that are shared with the executive leadership.
Create standard processes so users can request access to new data.
Define how endorsement should be used to promote and certify content.

) Important

If you don't have an effective process to monitor user activities and content, you
should make it one of your highest governance priorities. An understanding of
these activities and items informs better governance decisions and actions.
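
As a hedged illustration of where such monitoring could start, the following Python sketch retrieves one day of events from the Power BI activity log through the admin REST API and counts export events. The endpoint and its paging behavior come from the Power BI REST API; how the access token is acquired isn't shown, and the example date and the activity name used in the filter are assumptions to adapt to your tenant.

```python
# Illustrative sketch only: pull one day of Power BI activity events and count exports.
# Assumes an access token with permission to call the admin API is acquired elsewhere
# (for example, with the MSAL library); token acquisition isn't shown here.
import requests

def get_activity_events(token, day):
    """Return all activity events for a single UTC day, following continuation pages."""
    url = (
        "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
        f"?startDateTime='{day}T00:00:00'&endDateTime='{day}T23:59:59'"
    )
    headers = {"Authorization": f"Bearer {token}"}
    events = []
    while url:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        payload = response.json()
        events.extend(payload.get("activityEventEntities", []))
        url = payload.get("continuationUri")  # None (or absent) on the last page
    return events

# Example usage (the activity name filtered on is an assumption; check your tenant's data):
# events = get_activity_events(token, "2024-01-31")
# exports = [e for e in events if e.get("Activity") == "ExportReport"]
# print(f"{len(exports)} export events out of {len(events)} total events")
```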

Implementation

Finally, identify your implementation objectives. These objectives have two purposes.
They:

Support adoption and governance objectives: Describe the solutions you build
and initiatives you enact to achieve your adoption and governance objectives.
These solutions help you work toward improving organizational adoption and user
adoption.
Support business data needs: Describe specific solutions you'll build to address
the prioritized needs that the working team described in BI strategic planning.
With these solutions, you should aim to achieve or improve solution adoption.

Implementation objectives typically describe either initiatives you'll enact or solutions
that you'll build.

Initiatives: Processes, training resources, and policies that support other objectives.
Initiatives are typically non-technical instruments that support users or processes.
Examples of initiatives include:
Processes for self-service content creators so that they can request access to
tools, data, or training.
Governance data policies that describe how certain data should be accessed
and used.
A curated, moderated centralized portal for the user community of practice.
Solutions: Processes or tools built to directly address specific business problems or
data needs for users. Examples of solutions include:
An actionable monitoring solution that allows governance teams to follow up
on governance and adoption objectives.
A unified data lakehouse that delivers business-ready data for consumption by
content creators planning other downstream analytical solutions.
A Power BI app that addresses specific business data needs for content
consumers.

Describing your implementation objectives produces a list of initiatives and solutions.
The working team should use this list to produce an ordered backlog prioritized from
highest to lowest, so that it's clear what will be implemented first. After tactical planning,
you'll work through this backlog to iteratively design and deliver the solutions, which is
described in the BI solution planning article.

When curating this backlog for your implementation objectives, consider the following
points.

Justify the prioritization of the initiative or solution.
Approximate the effort involved, if possible.
Outline the anticipated scope.
Describe relevant timelines and stakeholders.
Refer to any existing documentation, research, or related solutions.
Agree on who will design and implement the solution.

) Important

While your implementation objectives aim to address the business data needs, it's
unlikely you'll be able to address all of these needs immediately. Ensure that you
plan to mitigate the potential impact of unmet business data needs that you won't
address now. Try to assess the impact of these data needs, and plan to either
partially address them with quick wins or use stopgap solutions to at least
temporarily alleviate the business impact.

Define organizational readiness


As described in the previous sections, the objectives you identify must be achievable.
You should assess your organizational readiness to evaluate how prepared the
organization is to achieve the objectives you've identified.

Assess organizational readiness by considering the factors described in the following
sections.

Identify obstacles
For each of your objectives, identify any obstacles or dependencies that could hinder
success or block progress. When you identify obstacles, describe how they could affect
your objectives. Define any relevant timelines, what actions could remove these
obstacles, and who should perform these actions. You should also assess the risk of
possible future obstacles that could prevent you from achieving your objectives.

Here are some examples of obstacles.

System migrations and other ongoing technical initiatives
Business processes and planning, like fiscal year budgets
Business mergers and restructuring
Availability of stakeholders
Availability of resources, including the available time of central team members
Skills of central team members and business users
Communication and change management activities to adequately inform and
prepare business users about the BI strategy

Assess the necessary skills and knowledge


Teams and individuals in the organization should have the necessary skills and
knowledge to achieve your objectives. That's particularly true for central teams, like the
COE or support teams that should lead by example. Confer with these teams about the
objectives you've described. Identify early on whether they require training to
understand and support your objectives.

To appraise the skills and knowledge of teams for organizational readiness, ask yourself
the following questions.

Do central teams (like the COE) understand the objectives, and how they relate to
the high-level strategic goals?
Are special training programs needed for topics like security, compliance, and
privacy?
What new tools or processes require user training? Who will organize this training?
) Important

Improving the skills and competences of internal teams is particularly important
when you migrate to Fabric or Power BI from other technologies. Don't rely
exclusively on external consultants for these migrations. Ensure that internal team
members have sufficient time and resources to upskill, so they'll work effectively
with the new tools and processes.

Anticipate change management efforts


Change management is a crucial part of successful adoption and implementation. It's
essential that you prepare and support people at all levels of the organization to
successfully adopt new behaviors, tools, and processes for working with data. Consider
who will be responsible for change management activities and what resources are
available to effectively follow through on change management.

After you've favorably assessed organizational readiness, you should proceed with step
2 of tactical planning to define success and how it's measured.

Checklist - When identifying your BI objectives, key decisions and actions include:

" Review BI goals and priorities: Ensure that your BI goals are current, and that
they're understood by everyone who participates in tactical planning.
" Review the current state assessments: The weaknesses and opportunities that the
working team identified in the current state assessments directly inform your
objectives.
" Identify time-sensitive objectives: Identify any objectives that have a defined time
period. Clarify the deadline and its impact on the priority of each objective.
" Identify quick-win objectives: Identify objectives that require low effort or time
investment to achieve. Justify why these are quick-win objectives.
" Identify high-impact objectives: Identify objectives that have a significant impact
on your BI strategy. Define why these objectives have a high impact.
" Identify adoption objectives: Identify objectives that will help you realize your data
culture vision and achieve the BI goals for organizational adoption.
" Identify governance objectives: Identify objectives that will help you balance user
enablement and risk mitigation.
" Identify implementation objectives: Identify objectives to either support defined
adoption and governance objectives or specific business data needs. Classify
implementation objectives as either initiatives or solutions.
" Curate the prioritized solution backlog: Create a prioritized list of BI solutions that
you'll implement this quarter. (You will work through this backlog in BI solution
planning.)
" Assess organizational readiness: Evaluate whether the organization is capable of
achieving the objectives you identified and described—and if not, whether you
need to change objectives or perform specific actions to improve organizational
readiness.

Step 2: Define success and how it's measured


Once you've defined your objectives and you're sure that you can achieve them, you're
ready to take the next step. In step 2 of tactical planning, you define success and how
it's measured for each of your objectives.

Define and measure success


You should define what success will look like for both your BI strategy and the specific
objectives you've identified. There are several reasons why you should define and
measure success.

Demonstrate progress: A key element of clear success criteria is the ability to
acknowledge progress and achievements. Good measures of success demonstrate
a clear return on investment (ROI) in BI initiatives. While ROI can be challenging to
measure, doing so drives motivation and allows leadership to acknowledge the
realized business value of the BI strategy.
Continuous improvement: Clear success criteria help you to evaluate your
strategy. This evaluation should motivate your quarterly tactical planning, together
with user feedback and changes to the business or technology.
Corrective action: A good definition of success is backed by measurable outcomes.
Monitoring these measurable outcomes during operations can inform specific
decisions and actions to adjust tactical planning, or intervene if you're heading off
track.

There are two ways to track measurable achievement. Some organizations use KPIs (Key
Performance Indicators), while others use OKRs (Objective Key Results). Both approaches
are equally valid.

KPIs: Evaluate the success of a particular activity against a target.
OKRs: Evaluate key measurable success criteria that track achievement of
objectives. While KPIs typically measure performance, OKRs measure outcomes.

KPIs and OKRs provide measurable success criteria that you monitor to take corrective
or proactive actions when there's significant deviation from your objectives. What's most
important is that you find an approach to measure progress toward your objectives that
works for your teams and your organization.

7 Note

Your measures of success should be closely aligned with business objectives. Ensure
that your success criteria aren't specific to technical tasks or implementations.
Instead, they should focus on better enabling business users to work toward
organizational goals.

U Caution

Measure a limited number of KPIs or OKRs. These metrics are only useful when you
know what they measure and how you should act upon them. It's better to have a
few strategic, valuable KPIs or OKRs than many metrics, which you don't regularly
monitor or follow up.

Identify and describe indicators

You should identify and describe indicators, such as KPIs or OKRs, for your objectives. To
this end, you should first have a clear understanding of the hierarchical relationship
between your BI goals, objectives, and the KPIs or OKRs you want to measure.
Here are some examples of BI goals together with related objectives and the KPIs to
track them.

Example BI goal: Improve executive adoption and support of BI.

Example BI objectives:
• Identify and engage an executive sponsor.
• Create a communication plan with the Center of Excellence (COE), which will involve
distributing a regular newsletter from the executive sponsor to share updates,
announcements, and highlights from BI solutions and initiatives.
• Hold targeted mentoring sessions with the executive sponsor to improve their
knowledge and understanding about relevant BI topics, allowing them to lead by
example.

Example KPIs:
• Executive feedback score: Measures executive endorsement and sentiment. Collected
from a brief survey of executives, including (but not limited to) the executive sponsor.
The survey should ask for quantitative feedback about the effectiveness, usability, and
relevance of BI solutions—a high score indicates progress toward the BI goal.

Example BI goal: Achieve a better balance of user enablement and risk mitigation in BI
governance.

Example BI objectives:
• Perform a tenant-wide audit to gain visibility on general usage trends and anomalies.
• Create a tenant-wide monitoring solution to track critical solutions and risk-creating
behaviors.
• Create a centralized portal to share templates and training materials, and to provide
visibility on governance team activities and policies.

Example KPIs:
• Ratio of Power BI datasets to reports: Measures whether datasets are reused for ad hoc
analysis and reporting, or whether data is duplicated across models—a ratio close to one
indicates that users may be creating a new dataset for each report, which is a
governance risk.
• Ratio of exports to views: Measures how often users export data to files instead of
using existing reports for their analysis—a ratio close to one indicates that users are
regularly exporting data, which is a governance risk.

Example BI goal: Improve data-driven decision making in the user community.

Example BI objectives:
• Create a data literacy training program to improve the data competences of the user
community.
• Create organizational design standards, templates, and theme files for Power BI
reports—adopt these standards in business-critical reporting solutions.
• Hold weekly office hours events to allow users to ask questions about central reports,
or request guidance for their decentralized self-service BI solutions.

Example KPIs:
• Number of users trained in the data literacy program: Measures how many users have
completed data literacy training and have achieved a passing score.
• Time-to-insight: Uses controlled trials to measure how long it takes a random sample
of users to correctly answer typical business questions from available datasets and
reports—a fast (low) time-to-insight indicates effective data-driven decision making.
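
As a hedged illustration of how indicators like these could be computed, the following sketch derives the exports-to-views ratio from activity log data that a monitoring process has already exported to a file. The file name, the column name, and the activity values are assumptions; align them with the data your monitoring process actually collects.

```python
# Illustrative sketch only: compute an exports-to-views ratio from collected activity data.
# The file name, the "Activity" column, and the activity values are assumptions.
import pandas as pd

activity = pd.read_csv("activity_events.csv")  # hypothetical extract of the activity log

views = int((activity["Activity"] == "ViewReport").sum())
exports = int((activity["Activity"] == "ExportReport").sum())

if views:
    print(f"Exports-to-views ratio: {exports / views:.2f}")
else:
    print("No view events found for this period")
```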

) Important

Ensure that your chosen KPIs or OKRs genuinely reflect your desired outcomes.
Regularly evaluate these indicators to avoid incentivizing counterproductive
behaviors. Consider Goodhart's Law, which states: When a measure becomes a
target, it ceases to be a good measure.

Effectively use indicators

Once you implement relevant indicators to measure progress toward your BI objectives,
you should regularly monitor them to track progress and take action where necessary.

Here are some key decisions and considerations to help you successfully use KPIs or
OKRs.

Report your KPIs or OKRs: Create reporting solutions for your indicators that let
you effectively monitor them. Ensure that these reports are highly visible for the
relevant teams and individuals who need this information. In the reports,
communicate how the metric is calculated and which strategic objective it
supports.
Automate data collection: Ensure that data for KPIs and OKRs aren't collected
manually. Find efficient ways to streamline and automate the collection of the data
so that it's current, accurate, and reliable.
Track change: Visualize the current indicator value, but also the trend over time.
Progress is best demonstrated as a gradual improvement. If the indicator exhibits
high volatility or variance, consider using a moving average to better illustrate the
trend (a minimal sketch follows this list).
Assign an owner: Ensure that a team or individual is responsible for measuring the
indicator and keeping its data current.
Define an acceptable range: Establish targets or an acceptable range of values to
assign status (like on track or off track) to the indicator. When values fall outside
the target or range, it should prompt someone to investigate or take corrective
action.
Set up data-driven alerts: Set up automated alerts that notify key teams or
individuals, for example, by using Power Automate. That way, timely action can be
taken when the indicator is off track.
Define actions and interventions: Clearly describe how you'll use this information
to take action, either to address issues or to justify moving to the next step in your
BI strategy.
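
The following sketch illustrates the Track change and Define an acceptable range points above: it smooths a weekly KPI with a four-week moving average and assigns an on-track status against a target. The file, column names, window size, and target value are all assumptions for illustration, not a prescribed implementation.

```python
# Illustrative sketch only: smooth a weekly KPI and assign an on-track status.
# The file, column names, four-week window, and target value are assumptions.
import pandas as pd

kpi = pd.read_csv("kpi_history.csv", parse_dates=["week"]).sort_values("week")

# A moving average dampens week-to-week volatility so the underlying trend is easier to read.
kpi["value_4wk_avg"] = kpi["value"].rolling(window=4, min_periods=1).mean()

TARGET = 0.8  # hypothetical acceptable threshold for this indicator
kpi["status"] = kpi["value_4wk_avg"].apply(
    lambda v: "On track" if v >= TARGET else "Off track"
)

# The most recent rows could feed a report page or a data-driven alert.
print(kpi[["week", "value", "value_4wk_avg", "status"]].tail(8))
```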

Validate tactical planning


When you've clearly defined success for your objectives, you should get approval from
executives and the key stakeholders before enacting your tactical planning. Present the
objectives to executives and key stakeholders, highlighting the expected benefits and
relevant outcomes for the business should tactical planning be successful. Also, explain
how the described BI objectives support the business objectives and data needs
identified in BI strategic planning. Use any feedback to adjust tactical planning, where
necessary.

Checklist - When considering your desired future state, key decisions and actions
include:

" Define success for your BI strategy: Clearly describe the success criteria for each of
your objectives.
" Identify measures of success: For each objective, identify how to measure progress.
Ensure that these measures can be reliably tracked and that they'll effectively
encourage the behaviors that you expect.
" Create KPIs or OKRs to measure progress toward critical goals: Create indicators
that objectively report progress toward your strategic BI goals. Ensure that the
indicators are well documented and have clear owners.
" Create monitoring solutions: Create solutions that automatically collect data for
KPI or OKR reporting. Set up data alerts for when the KPIs or OKRs exceed
thresholds or fall outside of an acceptable range. Agree upon the necessary action
to take when these metrics get off track, and by whom.
" Identify obstacles to success: Ensure that key risks and obstacles to success are
identified and define how you'll address them.
" Evaluate organizational readiness: Assess how prepared the organization is to
adopt and implement the BI strategy and enact your tactical plan.
" Plan to address gaps in skills, knowledge, and executive support: Ensure that gaps
are addressed in objectives and tactical planning.
" Validate tactical planning with executives: Request that the executive sponsor
present an executive summary of the tactical planning to leadership for their input.
" Validate tactical planning with key stakeholders: Obtain feedback from key
stakeholders about the final tactical plan and its objectives.

Step 3: Periodically revise the plan


The business and technology context of your organization regularly changes. As such,
you should periodically reevaluate and reassess your BI strategy and tactical planning.
The goal is to keep them relevant and useful for your organization. In step 3 of tactical
planning, you take practical steps to iteratively reevaluate and reassess planning.

Prepare iterative planning and anticipate change


To ensure BI and business strategic alignment, you should establish continuous
improvement cycles. These cycles should be influenced by the success criteria (your KPIs
or OKRs) and the feedback that you regularly collect to evaluate progress.

We recommend that you conduct tactical planning at regular intervals with evaluation
and assessment, as depicted in the following diagram.
The diagram depicts how you can iteratively revise the BI strategy to achieve
incremental progress.

Item Description

BI strategic planning: Define and reassess your BI goals and priorities every 12-18 months.
In between BI strategic planning sessions, strive for incremental progress toward your BI
goals by achieving your BI objectives defined in tactical planning. Additionally, in between
strategic planning, you should collect feedback to inform future strategic decision-making.

BI tactical planning: Identify and reevaluate your BI objectives every 1-3 months. In
between, you implement these tactical plans by building BI solutions and launching BI
initiatives. Additionally, in between tactical planning, you should collect feedback and
monitor your KPIs or OKRs to inform future tactical decision-making.

Future objectives and priorities defined in your strategic and tactical planning are
informed by using regular feedback and evaluation mechanisms, such as those
described in the following sections.

Collect feedback about the business strategy

Business objectives regularly change, resulting in new business data needs and changing
requirements. For this reason, your tactical planning must be flexible and remain well
aligned with the business strategy. To enable this alignment, you can:

Schedule business alignment meetings: When conducting tactical planning,
schedule strategic meetings with key business and data decision makers to assess
what was done in the previous period. You should schedule these meetings to
align with other key strategic business meetings. Discussions during these
meetings provide an opportunity to revise BI strategic and tactical planning and
also to demonstrate and reflect upon progress.
Review feedback and requests: Feedback and requests from the user community
is valuable input to reevaluate your BI strategy. Consider setting up a
communication hub, possibly with channels like office hours, or feedback forms to
collect feedback.
Couple tactical planning with project planning: Tactical planning can be
integrated with your project planning processes. For example, you might integrate
tactical planning with your agile planning processes. Agile planning is a project
management approach that focuses on delivering value through iterative work
cycles. Coupling tactical and agile planning helps to create a consistent, iterative
structure around the operationalization of your BI strategy.

 Tip

Creating structured processes to handle changing business objectives can help to
avoid reactive or spontaneous planning, especially to meet new, urgent business
requests.

Anticipate change in technology


Tactical planning should address relevant technological changes. Technological changes
can strongly impact your planning and business processes. Many changes are also
opportunities to improve existing or planned implementations. Keep your planning
current so that you can make the best use of the opportunities new technology provides.
Not only does it help people remain effective, it helps your organization remain
competitive in its market.

Here are some examples of technological changes that can affect your tactical planning.

New products, features, or settings (including those in preview release)
Decommissioned tools, systems, or features
Changes in how the user community uses tools or analyzes data (such as generative
AI)

To mitigate the impact of change and capitalize on its opportunities, you should regularly
examine the technological context of your business. Consider the following points about
responding to technological change.

Follow updates: Keep current with new developments and features in Microsoft
Fabric. Read the monthly community blog posts and keep pace with
announcements at conference events.
Document key changes: Ensure that any impactful changes are included in your
tactical planning, and include relevant references. Call attention to any changes
that have a direct or urgent impact on business data needs or BI objectives.
Decide how to handle features in preview: Clarify how you'll use new preview
features that aren't yet generally available. Identify any preview features or tools
that have a strategic impact in your organization or help you achieve strategic
objectives. Consider how you'll benefit from these preview features while
identifying and mitigating any potential risks or limitations.
Decide how to handle new third-party and community tools: Clarify your policy
about third-party and community tools. If these tools are allowed, describe a
process to identify new tools that have a strategic impact in your organization or
help you achieve strategic objectives. Consider how you'll benefit from these tools
while identifying and mitigating any potential risks or limitations.

Proceed with solution planning


A key outcome of tactical planning is the prioritized backlog of solutions that you'll
implement to address business data needs. The next step is to plan and implement
these solutions. Implementing these solutions helps you to achieve your BI objectives
and make incremental progress toward your BI goals.

Checklist - When planning to revise your strategic and tactical planning, key decisions
and actions include:

" Schedule periodic planning workshops: At the end of each planning period,
schedule workshops to assess progress and review the milestones attained.
" Schedule regular workshops to re-align with business strategy: Use workshops to
align the BI strategy with the business strategy by having a cross-functional
discussion between the working team and key stakeholders.
" Create mechanisms for assessment and feedback: Ensure that feedback relevant to
the BI strategy is documented. Create forms, or encourage key stakeholders to use
the communication hub to provide feedback and submit new requests.
" Assign a team to own feedback: Ensure that there's a team that has clear
ownership of user feedback and requests. This team should respond to users to
acknowledge their requests or request more detail.
" Create a schedule to review requests: Review feedback regularly, like every week.
Identify priority requests before they become urgent and disrupt existing planning.
Clearly and transparently communicate any rejected requests to users. Propose
alternatives and workarounds so that users can continue their work without
disruption.

Next steps
In the next article in this series, learn how to conduct BI solution planning.
Power BI implementation planning: BI
solution planning
Article • 09/11/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article helps you to plan solutions that support your business intelligence (BI)
strategy. It's primarily targeted at:

BI and analytics directors or managers: Decision makers who are responsible for
overseeing the BI program and strategically important BI solutions.
Center of Excellence (COE), IT, and BI teams: The teams that design and deploy
enterprise BI solutions for their organization.
Subject matter experts (SMEs) and content owners and creators: The teams and
individuals that champion analytics in a department and design and deploy
solutions for self-service, departmental BI, or team BI usage scenarios.

A BI strategy is a plan to implement, use, and manage data and analytics. You define
your BI strategy by starting with BI strategic planning. Strategic planning helps you to
identify your BI goals and priorities. To determine the path to progress toward your BI
goals, you describe specific objectives by using tactical planning. You then achieve
progress toward your BI objectives by planning and deploying BI solutions.

7 Note

In this series, we define solutions as processes or tools built to address specific
business needs for users. A solution can take many forms, such as a data pipeline, a
data lakehouse, a Power BI dataset or report.

There are many approaches to plan and implement BI solutions. This article describes
one approach that you can take to plan and implement BI solutions that support your BI
strategy.

The following high-level diagram depicts how to conduct BI solution planning.


You take the following steps to conduct BI solution planning.

Step Description

1 Assemble a project team that gathers requirements and defines the design of the solution.

2 Plan for solution deployment by performing initial setup of tools and processes.

3 Conduct a solution proof of concept (POC) to validate assumptions about the design.

4 Create and validate content by using iterative development and validation cycles.

5 Deploy, support, and monitor the solution after it's released to the production
environment.

7 Note

BI solution planning applies to both self-service BI and enterprise BI projects.

For more information, see the Power BI migration series. While the series is concerned
with migration, the key actions and considerations are relevant to solution planning.

Step 1: Gather requirements


You commence solution planning by first gathering requirements and defining the
solution design.

Note: Strategic and tactical planning are led by a working team that owns the overall initiative.
In contrast, solution planning is led by a project team, which consists of content owners
and creators.
Gathering the right requirements is critical to achieve successful solution deployment
and adoption. An effective way to gather requirements is to identify and involve the
right stakeholders, collaboratively define the problem to be solved, and use that shared
understanding of the problem to create a solution design.

Here are some benefits from using a collaborative approach to gather requirements.

User input produces more useful designs: By engaging users in focused
discussions to collect requirements, you can more effectively capture business data
needs. For example, users can demonstrate to content creators how they use
existing solutions and provide feedback about the perceived effectiveness of these
solutions.
Avoid assumptions and mitigate change requests: Discussions with users often
reveal nuances, exceptions, and hidden complexities. These insights reduce the
likelihood of late-stage requests, which can be costly to address.
Onboarding users increases solution adoption: By involving users in design and
early development, you provide them with an opportunity to influence the final
result. Involvement can also give users a sense of intellectual ownership and
accountability for the solution. Highly involved users will be more likely to endorse
the solution and lead their community of practice in using it effectively.
Designs set expectations for stakeholders and business users: By producing
mock-ups or illustrations of the solution design, you can clearly show stakeholders
what the solution will deliver. It also helps by creating a mutual understanding of
the expected project result. This process is also known as design thinking, and it
can be an effective way to approach and understand complex problems.

You can take different approaches to engage users and gather requirements. For
example, you can gather requirements with business design and technical design
(described in detail in later sections of this article).

Business design is an approach to gather business requirements. It focuses on engaging
business users in business design sessions to collaboratively design the solution. The
output of a business design consists of solution mock-ups and descriptive design
documentation.

Technical design is an approach to translate business requirements to technical
requirements, and to address design assumptions. A technical design focuses on
validating the business design and defining a technical approach to use. To validate the
design, content creators typically engage with technical experts in focused discussions
called technical design sessions, where necessary.

) Important

Collecting the wrong requirements is a common reason why implementations fail.
Often, teams collect the wrong requirements because they engaged with the wrong
stakeholders, like decision makers who provide top-down requests for solutions to
be built.

Engaging business users by using collaborative approaches like a business design
can help you collect better requirements. Better requirements often lead to more
efficient development and more robust solutions.

7 Note

For some teams, adopting a structured requirements gathering process is a
momentous change. Ensure that you manage this change, and that it doesn't
disrupt solution planning. We recommend that you find ways to adapt these
approaches to fit with the way your team works.

Prepare for solution planning


You should first prepare for solution planning by considering the factors described in
the following sections.

Identify who will conduct solution planning: As part of the BI tactical planning,
the working team created a prioritized backlog of solutions. In solution planning, a
project team is responsible for designing, developing, and deploying one or more
solutions from the backlog. For each solution in the backlog, you should assemble
a project team that will be responsible for the solution. In addition to running BI
solution planning, the project team should:
Define timelines and milestones for solution planning.
Identify and involve the right stakeholders for requirements gathering.
Set up a centralized location for communication, documentation, and planning.
Engage stakeholders to gather requirements.
Communicate and coordinate with stakeholders and business users.
Orchestrate iterative development and testing cycles with business users.
Document the solution.
Onboard users to the solution by creating and enacting a training plan.
Provide post-deployment solution support.
Address user requests to change or update the solution after deployment.
Conduct solution handover after deployment, if necessary.
Centralize communication and documentation: It's important that the project
team centralizes communication and documentation for BI solution planning. For
example, the project team should centralize requirements, stakeholder
communication, timelines, and deliverables. Consider storing all documentation in
a centralized portal.
Plan requirements gathering: The project team should begin by planning the
business design sessions to gather business requirements. These sessions take the
form of interactive meetings, and they can follow a similar format to the strategic
planning workshops.

 Tip

Consider identifying and involving the support teams responsible for the solution
early in the requirements gathering process. To effectively support the solution, the
support teams will need a comprehensive understanding of the solution, its
purpose, and the users. That's particularly important when the project team is
comprised only of external consultants.

Gather business requirements


Gathering the right business requirements is critical to designing the right solution. To
gather the right requirements and define an effective solution design, the project team
can conduct business design sessions together with the business users.

The purpose of the business design sessions is to:

Confirm the solution scope.
Define and understand the problem the solution should address.
Identify the right key stakeholders for the solution.
Gather the right business requirements.
Prepare a solution design that meets the business requirements.
Prepare supporting design documentation.

The following diagram depicts how to gather business requirements and define the
solution design by using a business design approach.
The diagram depicts the following steps.

Item Description

The project team begins the business design by confirming the solution scope that was
first documented in tactical planning. They should clarify the business areas, systems, and
data covered by the solution.

The project team identifies key stakeholders from the user community who will be
involved in the business design sessions. Key stakeholders are users with sufficient
knowledge and credibility to represent the subject areas of the solution.

The project team plans business design sessions. Planning involves informing stakeholders,
organizing meetings, preparing deliverables, and engaging with business users.

The project team gathers and researches existing solutions that business users currently
use to address existing business data needs. To accelerate this process, the project team
can use relevant research from BI strategic planning, which has been documented in the
communication hub.

The project team runs business design sessions with stakeholders. These sessions are
small, interactive meetings, where the project team guides stakeholders to understand
business data needs and requirements.

The project team concludes the business design by presenting a draft solution design to
stakeholders and other users for feedback and approval. The business design is successful
when the stakeholders agree that the design will help them achieve their business
objectives.

The business design concludes with the following deliverables.

Draft solution designs: Mock-ups, prototypes, or wireframe diagrams illustrate the
solution design. These documents translate the requirements to a concrete design
blueprint.
List of business metrics: Quantitative fields expected in the solution, including
business definitions, and expected aggregations. If possible, rank them by
importance to the users.
List of business attributes: Relevant attributes and data structures expected in the
solution, including business definitions and attribute names. If possible, include
hierarchies and rank the attributes by importance to the users.
Supplemental documentation: Descriptions of key functional or compliance
requirements. This documentation should be as precise as necessary, yet as
concise as possible.

The business design deliverables are used in, and validated by, the technical design.

 Tip

Solution mock-ups, prototypes, or wireframe diagrams can create a clear
understanding of the expected result, both for developers and end users. Creating
effective mock-ups doesn't require artistic skill or talent. You can use simple tools
like Microsoft Whiteboard, PowerPoint, or even just a pen and paper to illustrate
the design.

Gather technical requirements


After completing the business design, the project team validates its outcomes by using a
technical design. The technical design is an approach similar to the business design.
While the business design focuses on business data needs, the technical design focuses
on the technical aspects of a solution. A key outcome of the technical design is the
solution plan, which describes the final solution design and informed estimates of the
effort to implement it.

7 Note

Unlike the business design, the technical design is largely an independent
investigation into source data and systems conducted by content creators and
owners.

The purpose of a technical design is to:

Validate the results of the business design.
Address technical assumptions in the current design.
Identify the relevant data sources in scope, and define the field calculations and
field-source mappings for each data source.
Translate the business requirements to technical requirements.
Produce estimations of the effort required for the implementation.

The project team engages technical or functional stakeholders in limited, focused
technical design sessions. These sessions are interactive meetings with the functional
stakeholders to gather technical requirements. Stakeholders are responsible for specific
functional areas required for the solution to work effectively.

Examples of stakeholders in a technical design could be:

Security and networking teams: Responsible for ensuring security and compliance
of the data.
Functional teams and data stewards: Responsible for curating the source data.
Architects: Owners of specific platforms, tools, or technology.

The project team engages stakeholders in technical design sessions to address technical
aspects of the solution. Technical aspects can include:

Data source connections: Details about how to connect to, and integrate, data
sources.
Networking and data gateways: Details about private networks or on-premises
data sources.
Field source mapping: Data mappings of business metrics and attributes to data
source fields (a minimal sketch follows this list).
Calculation logic: A translation of business definitions to technical calculations.
Technical features: Features or functionality needed to support business
requirements.
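
As a purely hypothetical sketch of how the project team might document field-source mappings and calculation logic in a machine-readable form, consider the structure below. Every system, table, column, metric, and definition in it is invented for illustration only; the actual format should follow whatever documentation standard your team already uses.

```python
# Hypothetical example of documenting field-source mappings and calculation logic.
# All system, table, column, and metric names below are invented for illustration.
field_source_mapping = {
    "Gross Sales": {
        "source_system": "ERP",
        "source_table": "sales.transactions",
        "source_columns": ["quantity", "unit_price"],
        "calculation": "SUM(quantity * unit_price)",
        "business_definition": "Total sales value before discounts and returns.",
    },
    "Customer Segment": {
        "source_system": "CRM",
        "source_table": "crm.accounts",
        "source_columns": ["segment_code"],
        "calculation": None,  # descriptive attribute; no aggregation required
        "business_definition": "Marketing segment assigned to the customer account.",
    },
}
```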

 Tip

The project team who conducted the business design should also conduct the
technical design. However, for practical reasons, different individuals may lead the
technical design. In this case, begin the technical design by reviewing the outcomes
of the business design.

Ideally, the individuals who lead the technical design should have a thorough
understanding of the outcomes and the business users.

The following diagram depicts how to translate business requirements into technical
requirements by using a technical design.
The diagram depicts the following steps.

1. The project team begins the technical design by defining the data source scope based on the results of the business design. To identify the right data sources, the project team consults with the business and functional SMEs.

2. The project team identifies technical or functional stakeholders to involve later in the technical design sessions.

3. The project team plans limited, focused sessions with functional stakeholders to address technical aspects of the solution. Planning involves informing stakeholders, organizing meetings, and preparing deliverables.

4. The project team researches technical requirements. Research includes defining field calculations and data source mappings, and also addressing the business design assumptions with technical analysis and documentation.

5. If necessary, the project team involves stakeholders in technical design sessions. Sessions focus on a specific, technical aspect of the solution, like security or data source connections. In these sessions, the project team gathers qualitative feedback from stakeholders and SMEs.

6. The project team prepares their findings by using a solution plan, which they present to stakeholders and decision makers. The plan is an iteration and extension of the business design outcomes that includes the final design, estimations, and other deliverables.

7. The technical design should conclude with a final meeting with stakeholders and decision makers to decide whether or not to proceed. This meeting provides a final opportunity to evaluate the solution planning before resources are committed to developing the solution.

7 Note
The technical design may reveal unexpected complexity that makes the solution plan infeasible given the current resource availability or organizational readiness. In this case, the solution should be reevaluated in the subsequent tactical planning period. Depending on the urgency of the business data needs, a decision maker, like the executive sponsor, may still want to proceed with a proof of concept, or with only one part of the planned solution.

The technical design concludes with a solution plan, which consists of the following
deliverables.

Tools and technologies: List of the relevant technical instruments needed to implement the solution. The list typically includes relevant new license options (like Fabric capacity or Premium per user), features, and tools.
Defined list of business metrics: Calculations and field-source mappings of the
business metrics for all of the in-scope data sources. To produce this deliverable,
the project team uses the list of business metrics created in the business design.
Defined list of business attributes: Field-source mappings of the business
attributes for all of the in-scope data sources. To produce this deliverable, the
project team uses the list of business attributes created in the business design.
Revised designs: Revisions to the solution design based on changes to, or invalid
assumptions about, the business design. Revised designs are updated versions of
the mock-ups, prototypes, or wireframe diagrams produced in the business design.
If no revisions are necessary, communicate that the technical design validates the
business design.
Estimate of effort: Estimate of the resources needed to develop, support, and
maintain the solution. The estimate informs the final decision about whether to
proceed with implementing the solution, or not.

) Important

Ensure that the project team notifies stakeholders of any changes or unexpected
discoveries from the technical design. These technical design sessions should still
involve relevant business users. However, ensure that stakeholders aren't
unnecessarily exposed to complex technical information.

 Tip

Because business objectives invariably evolve, it's expected that requirements will
change. Don't assume that requirements for BI projects are fixed. If you struggle
with changing requirements, it may be an indication that your requirements
gathering process isn't effective, or that your development workflows don't
sufficiently incorporate regular feedback.

Checklist - When gathering requirements, key decisions and actions include:

" Clarify who owns solution planning: For each solution, ensure that roles and
responsibilities are clear for the project team.
" Clarify the solution scope: The solution scope should already be documented as
part of BI tactical planning. You may need to spend additional time and effort to
clarify the scope before you start solution planning.
" Identify and inform stakeholders: Identify stakeholders for business designs and
technical designs. Inform them in advance about the project and explain the scope,
objectives, required time investment, and deliverables from the business design.
" Plan and conduct business design sessions: Moderate the business design sessions
to elicit information from stakeholders and business users. Request that users
demonstrate how they use existing solutions.
" Document business metrics and attributes: By using existing solutions and input
from stakeholders, create a list of business metrics and attributes. In the technical
designs, map the fields to the data source and describe the calculation logic for
quantitative fields.
" Draft the solution design: Create iterative mock-ups based on stakeholder input
that visually reflect the expected solution result. Ensure that mock-ups accurately
represent and address the business requirements. Communicate to business users
that the mock-ups must still be validated (and possibly revised) during the technical
design.
" Create the solution plan: Research source data and relevant technical
considerations to ensure that the business design is achievable. Where relevant,
describe key risks and threats to the design, and any alternative approaches. If
necessary, prepare a revision of the solution design and discuss it with the
stakeholders.
" Create effort estimates: As part of the final solution plan, estimate the effort to
build and support the solution. Justify these estimates with the information
gathered during the business design and technical design sessions.
" Decide whether to proceed with the plan: To conclude the requirements gathering
process, present the final plan to stakeholders and decision makers. The purpose of
this meeting is to determine whether to proceed with solution development.
Step 2: Plan for deployment
When the project team finishes gathering requirements, creating the solution plan, and
receiving approval to proceed, it's ready to plan for solution deployment.

Deployment planning tasks differ depending on the solution, your development workflow, and your deployment process. A deployment plan typically covers many activities involving the planning and setup of tools and processes for the solution.

Plan to address key areas


The project team should plan for key areas of solution deployment. Typically, planning
should address the following areas.

Compliance: Ensure that all the compliance criteria identified in requirements gathering will be addressed by specific actions. Assign each of these actions to specific people, and clearly define the delivery timeframe.
Security: Decide how different layers of solution access will be managed, and any
data security rule requirements. Review whether the solution security will be more
or less strict than standard content in the tenant.
Data gateways: Evaluate whether the solution needs a data gateway to connect to data sources. Determine whether specific gateway settings or high availability clusters are necessary. Plan who will be able to manage gateway connections via the gateway security roles, and how to monitor the gateways (an inventory sketch follows this list).
Workspaces: Decide how to set up and use workspaces. Determine whether the
solution requires lifecycle management tools like Git integration and deployment
pipelines, and whether it requires advanced logging with Azure Log Analytics.
Support: Establish who's responsible for supporting and maintaining the solution
after production deployment. If the individuals responsible for support are
different than the project team, involve these individuals in development. Ensure
that whoever will support the solution understands the solution design, the
problem it should address, who should use it, and how.
User training: Anticipate the efforts needed to train the user community so they
can effectively use the solution. Consider whether any specific change
management actions are necessary.
Governance: Identify any potential governance risks for the solution. Anticipate the
effort needed to enable users to effectively use the solution, while mitigating any
governance risk (for example, by using sensitivity labels and policies).
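To support the gateway decisions described above, it can help to take a quick inventory of existing gateways and their data source connections. The following Python sketch shows one possible way to do that through the Power BI REST API, assuming you already hold an access token with rights to manage those gateways (the access_token value is a placeholder).

```python
import requests

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"

def list_gateways_and_datasources(access_token: str) -> None:
    """Print each gateway the caller can manage, along with its data source connections."""
    headers = {"Authorization": f"Bearer {access_token}"}

    gateways = requests.get(f"{POWER_BI_API}/gateways", headers=headers).json().get("value", [])
    for gateway in gateways:
        print(f"Gateway: {gateway['name']} (id: {gateway['id']})")

        # Each gateway exposes the data source connections that are defined on it.
        datasources = requests.get(
            f"{POWER_BI_API}/gateways/{gateway['id']}/datasources", headers=headers
        ).json().get("value", [])
        for datasource in datasources:
            print(f"  Data source: {datasource.get('datasourceName')} ({datasource.get('datasourceType')})")
```

An inventory like this can feed the deployment plan by showing which gateway clusters and connections already exist and which still need to be created.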

Conduct initial setup


The project team should perform initial setup to commence development. Initial setup activities can include:

Initial tools and processes: Perform first-time setup for any new tools and
processes needed for development, testing, and deployment.
Identities and credentials: Create security groups and service principals that will
be used to access tools and systems. Effectively and securely store the credentials.
Data gateways: Deploy data gateways for on-premises data sources (enterprise
mode gateways) or data sources on a private network (virtual network, or VNet,
gateways).
Workspaces and repositories: Create and set up workspaces and remote repositories for publishing and storing content (a scripted sketch follows this list).
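Some of this first-time setup can be scripted. The following Python sketch shows one way to create a workspace and grant a security group the Admin role by using the Power BI REST API; the workspace name, security group object ID, and access token are placeholders, and your own process might instead rely on deployment pipelines or other tooling.

```python
import requests

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"

def create_workspace(access_token: str, workspace_name: str, admin_group_id: str) -> str:
    """Create a workspace and grant a security group the Admin role on it."""
    headers = {"Authorization": f"Bearer {access_token}"}

    # Create the workspace (workspaceV2=True requests a new-style workspace).
    response = requests.post(
        f"{POWER_BI_API}/groups",
        params={"workspaceV2": "True"},
        headers=headers,
        json={"name": workspace_name},
    )
    response.raise_for_status()
    workspace_id = response.json()["id"]

    # Add the security group (by its Azure AD object ID) as a workspace admin.
    requests.post(
        f"{POWER_BI_API}/groups/{workspace_id}/users",
        headers=headers,
        json={
            "identifier": admin_group_id,
            "principalType": "Group",
            "groupUserAccessRight": "Admin",
        },
    ).raise_for_status()

    return workspace_id
```

Scripting setup in this way also makes it easier to document and repeat the process for future solutions.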

7 Note

Deployment planning differs depending on the solution and your preferred workflow. This article describes only high-level planning and actionable items.

For more information about deployment planning, see Plan deployment to migrate to
Power BI.

Checklist - When planning solution deployment, key decisions and actions include:

" Plan for key areas: Plan to address the processes and tools that you need to
successfully develop and deploy your solution. Address both technical areas (like
data gateways or workspaces) and also adoption (like user training and
governance).
" Conduct initial setup: Establish the tools, processes, and features that you need to
develop and deploy the solution. Document the setup to help others who will need
to do a first-time setup in the future.
" Test data source connections: Validate that the appropriate components and
processes are in place to connect to the right data to start the proof of concept.
Step 3: Conduct a proof of concept
The project team conducts a solution proof of concept (POC) to validate outstanding
assumptions and to demonstrate early benefits for business users. A POC is an initial
design implementation that's limited in scope and maturity. A well-run POC is
particularly important for large or complex solutions because it can identify and address
complexities (or exceptions) that weren't detected in the technical design.

We recommend factoring in the following considerations when preparing a POC.

Objectives and scope: Describe the purpose of the solution POC and the
functional areas it will address. For example, the project team may decide to limit
the POC to a single functional area, or a specific set of requirements or features.
Source data: Identify what data will be used in the POC. Depending on the
solution, the project team may decide to use different types of data, such as:
Production (real) data
Sample data
Generated synthetic data that resembles actual data volumes and complexity
observed in production environments
Demonstration: Describe how and when the project team will demonstrate the
POC to stakeholders and users. Demonstrations may be given during regular
updates, or when the POC fulfills specific functional criteria.
Environment: Describe where the project team will build the POC. A good
approach is to use a distinct sandbox environment for the POC, and deploy it to a
development environment when it's ready. A sandbox environment has more
flexible policies and fluid content, and it's focused on producing quick results. In
contrast, a development environment follows more structured processes that
enable collaboration, and it focuses on completing specific tasks.
Success criteria: Define the threshold for when the POC is successful and should
move to the next iteration and enter formal development. Before starting the POC,
the project team should identify clear criteria for when the POC is successful. By
setting these criteria in advance, the project team defines when the POC
development ends and when iterative development and validation cycles begin.
Depending on the objectives of the POC, the project team may set different
success criteria, such as:
Approval of the POC by stakeholders
Validation of features or functionality
Favorable review of the POC by peers after a fixed development time
Failure: Ensure that the project team can identify failure of the POC. Identifying
failure early on will help to investigate root causes. It can also help to avoid further
investment in a solution that won't work as expected when it's deployed to
production.

U Caution

When the project team conducts the POC, they should remain alert for assumptions
and limitations. For example, the project team can't easily test solution
performance and data quality by using a small set of data. Additionally, ensure that
the scope and purpose of the POC is clear to the business users. Be sure to
communicate that the POC is a first iteration, and stress that it's not a production
solution.

7 Note

For more information, see Conduct proof of concept to migrate to Power BI.

Checklist - When creating a POC, key decisions and actions include:

" Define the objectives: Ensure that the objectives of the POC are clear to all people
who are involved.
" Define the scope of the POC: Ensure that creating the POC won't take too much
development effort, while still delivering value and demonstrating the solution
design.
" Decide what data will be used: Identify what source data you'll use to make the
POC, justifying your decision and outlining the potential risks and limitations.
" Decide when and how to demonstrate the POC: Plan to show progress by
presenting the POC to decision makers and business users.
" Clarify when the POC ends: Ensure that the project team decides on a clear
conclusion for the POC, and describe how it'll be promoted to formal development
cycles.

Step 4: Create and validate content


When the POC is successful, the project team shifts from the POC to creating and
validating content. The project team can develop the BI solution with iterative
development and validation cycles. These cycles consist of iterative releases, where the
project team creates content in a development environment and releases it to a test
environment. During development, the project team gradually onboards the user
community in a pilot process to early (beta) versions of the solution in the test
environment.

 Tip

Iterative delivery encourages early validation and feedback that can mitigate
change requests, promote solution adoption, and realize benefits before the
production release.

Iterative development and validation cycles proceed until the project team arrives at a
predefined conclusion. Typically, development concludes when there are no more
features to implement or user feedback to address. When the development and
validation cycles conclude, the project team deploys the content to a production
environment with the final production release.

The following diagram depicts how the project team can iteratively deliver BI solutions
with development and validation cycles.
The diagram depicts the following steps.

1. The project team communicates each release to the user community, describing changes and new features. Ideally, communication includes a solution demonstration and Q&A, so users understand what's new in the release, and they can provide verbal feedback.

2. During validation, users provide feedback via a central tool or form. The project team should review feedback regularly to address issues, accept or reject requests, and inform upcoming development phases.

3. The project team monitors usage of the solution to confirm that users are testing it. If there isn't any usage, the project team should engage with the user community to understand the reasons why. Low usage may indicate that the project team needs to take further enablement and change management actions.

4. The project team promptly responds to user feedback. If the project team takes too long to address feedback, users may quickly lose motivation to provide it.

5. The project team incorporates accepted feedback into the solution planning. If necessary, they review the planning priorities to clarify and delegate tasks before the next development phase begins.

6. The project team continues development of the solution for the next release.

7. The project team iterates through all steps until they reach a predefined conclusion, and the solution is ready for production deployment.

The following sections describe key considerations for using iterative development and
validation cycles to deliver BI solutions.
Create content
The project team develops the solution by following their normal development
workflow. However, they should consider the following points when creating content.

During each development cycle, update documentation to describe the solution.
Conclude each development cycle with an announcement to the user community.
Announcements should be posted to the centralized portal, and they should
provide brief descriptions of changes and new features in each release.
With each release, consider organizing sessions to demonstrate changes and new
features to the user community, and to answer any verbal questions.
Define when iterative development and validation cycles will conclude. Ensure that
there's a clear process to deploy the solution to the production environment,
including a transition to support and adoption activities.

Validate content
Each iterative development cycle should conclude with content validation. For BI
solutions, there are typically two kinds of validation.

Developer validation: Solution testing is done by content creators and peers. The
purpose of developer validation is to identify and resolve all critical and visible
issues before the solution is made available to business users. Issues can pertain to
data correctness, functionality, or the user experience. Ideally, content is validated
by a content creator who didn't develop it.
User validation: Solution testing is done by the user community. The purpose of
user validation is to provide feedback for later iterations, and to identify issues that
weren't found by developers. Formal user validation periods are typically referred
to as user acceptance testing (UAT).

) Important

Ensure that any data quality issues are addressed during developer validation
(before UAT). These issues can quickly erode trust in the solution, and they can
harm long-term adoption.

 Tip

When conducting user validation, consider occasional, short calls with key users.
Observe them when they use the solution. Take notes about what they find difficult
to use, or what parts of the solution aren't working as expected. This approach can
be an effective way to collect feedback.

Factor in the following considerations when the project team validates content.

Encourage user feedback: With each release, request users provide feedback, and
demonstrate how they can effectively do so. Consider regularly sharing examples
of feedback and requests that have led to recent changes and new features. By
sharing examples, you're demonstrating that feedback is acknowledged and
valued.
Isolate larger requests: Some feedback items require more effort to address.
Ensure that the project team can identify these items and discuss whether they'll
be implemented, or not. Consider documenting larger requests to discuss in later
tactical planning sessions.
Begin change management activities: Train users how to use the solution. Be sure
to spend extra effort on new processes, new data, and different ways of working.
Investing in change management has a positive return on long-term solution
adoption.

When the solution reaches a predefined level of completeness and maturity, the project
team is ready to deploy it to production. After deployment, the project team transitions
from iterative delivery to supporting and monitoring the production solution.

7 Note

Development and testing differ depending on the solution and your preferred
workflow.

This article describes only high-level planning and actionable items. For more
information about iterative development and testing cycles, see Create content to
migrate to Power BI.

Checklist - When creating and validating content, key decisions and actions include:

" Use an iterative process to plan and assign tasks: Plan and assign tasks for each
release of the solution. Ensure that the process to plan and assign tasks is flexible
and incorporates user feedback.
" Set up content lifecycle management: Use tools and processes to streamline and
automate solution deployment and change management.
" Create a tool to centralize feedback: Automate feedback collection by using a
solution that's simple for you and your users. Create a straightforward form to
ensure that feedback is concise yet actionable.
" Schedule a meeting to review feedback: Meet to briefly review each new or
outstanding feedback item. Decide whether you'll implement the feedback or not,
who will be responsible for the implementation, and what actions to take to close
the feedback item.
" Decide when iterative delivery concludes: Describe the conditions for when the
iterative delivery cycles will conclude, and when you'll release content to the
production environment.

Step 5: Deploy, support, and monitor


When ready, the project team deploys the validated solution to the production
environment. The project team should take key adoption and support actions to ensure
that the deployment is successful.

To ensure a successful deployment, you perform the following support and adoption
tasks.

Communicate the final release: The executive sponsor, a manager, or another person with sufficient authority and credibility should announce the release to the user community. Communication should be clear, concise, and include links to the relevant solutions and support channels.
Conduct training for content consumers: Training should be available for content
consumers during the first weeks after release to production. Training should focus
on clarifying the solution scope, answering user questions, and explaining how to
use the solution.
Address feedback and requests: Consider providing users with a channel to
submit feedback and requests to the project team. Ensure that reasonable
feedback and requests are discussed and, when appropriate, implemented during
the post-deployment support period. Acting on feedback and requests after the
production release is important. It indicates an agile solution that responds to
changing business needs.
Plan to connect with the user community: Even after the post-deployment
support period ends, ensure that solution owners regularly meet with the user
community. These meetings are valuable sources of feedback for revising your BI
strategy. Also, they help support solution adoption by enabling users.
Handover actions: Members of the project team may not be responsible for
maintaining the solution. In this case, the team should identify who's responsible
and perform a handover. The handover should occur soon after the release to
production, and it should address both the solution and the user community.

U Caution

Failing to conduct an effective handover may lead to preventable issues with solution support and adoption during its lifecycle.

After deployment, the project team should plan to proceed to the next solution in the
prioritized solution backlog. Ensure that you collect any new feedback and requests and
make revisions to tactical planning—including the solution backlog—if necessary.

Checklist – When considering solution deployment, key decisions and actions include:

" Create a communication plan: Plan how to communicate the release, training, and
other solution support or adoption actions. Ensure that any outages or issues are
communicated and promptly addressed in the post-deployment support period.
" Follow through with a training plan: Train users to use the solution. Ensure that the
training includes both live and recorded training sessions for several weeks after
release.
" Conduct handover activities: If necessary, prepare a handover from the
development team to the support team.
" Conduct solution office hours: After the post-deployment support period, consider
holding regular office hours sessions to answer questions and collect feedback from
users.
" Set up a continuous improvement process: Schedule a monthly audit of the
solution to review potential changes or improvements over time. Centralize user
feedback and review feedback periodically between audits.
Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.
Power BI implementation planning:
Tenant setup
Article • 08/23/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This tenant setup article introduces important aspects to know about setting up your
Fabric tenant, with an emphasis on the Power BI experience. It's targeted at multiple
audiences:

Fabric administrators: The administrators who are responsible for overseeing


Fabric in the organization.
Azure Active Directory administrators: The team who is responsible for
overseeing and managing Azure Active Directory (Azure AD).

Fabric is part of a larger Microsoft ecosystem. If your organization is already using other
cloud subscription services, such as Azure, Microsoft 365, or Dynamics 365, then Fabric
operates within the same Azure AD tenant. Your organizational domain (for example,
contoso.com) is associated with Azure AD. Like all Microsoft cloud services, your Fabric
tenant relies on your organization's Azure AD for identity and access management.

 Tip

Many organizations have an on-premises Active Directory (AD) environment that they synchronize with Azure AD in the cloud. This setup is known as a hybrid identity solution, which is out of scope for this article. The important concept to
understand is that users, groups, and service principals must exist in Azure AD for a
cloud-based suite of services like Fabric to work. Having a hybrid identity solution
will work for Fabric. We recommend talking to your Azure AD administrators about
the best solution for your organization.

Azure AD tenant
Most organizations have one Azure AD tenant, so it's commonly true that an Azure AD
tenant represents an organization.

Usually, Azure AD is set up before a Fabric implementation begins. However, it's sometimes only when you provision a cloud service that the importance of Azure AD becomes apparent.

 Tip

Because most organizations have one Azure AD tenant, it can be challenging to explore new features in an isolated way. For non-production testing purposes, consider using a free Microsoft 365 E5 instant sandbox. It's available through the Microsoft 365 Developer Program.

Unmanaged tenant
A managed tenant has a global administrator assigned within Azure AD. If an Azure AD
tenant doesn't exist for an organizational domain (for example, contoso.com), when the
first user from that organization signs up for a Fabric trial or account, an unmanaged
tenant is created in Azure AD. An unmanaged tenant is also known as a shadow tenant,
or a self-service-created tenant. It has a basic configuration, allowing the cloud service
to work without assigning a global administrator.

To properly manage, configure, and support Fabric, a managed tenant is required. There's a process that a system administrator can follow to take over an unmanaged tenant so that they can manage it properly on behalf of the organization.

 Tip

The administration of Azure AD is a broad and deep topic. We recommend that you
assign specific people in your IT department as system administrators to securely
manage Azure AD for your organization.

Checklist - When reviewing your Azure AD tenant for use with Fabric, key decisions and
actions include:
" Take over tenant: If applicable, initiate the process to take over an unmanaged
tenant.
" Confirm the Azure tenant is managed: Verify that your system administrators
actively manage your Azure AD tenant.

Tenant ID for external users


You must consider how users will access your tenant when you have external users (such
as customers, partners, or vendors) or when internal users must access another tenant
outside of your organization. To access a different organizational tenant, a modified URL
is used.

Every Azure AD tenant has a globally unique identifier (GUID) known as the tenant ID. In
Fabric, it's known as the customer tenant ID (CTID). The CTID is appended to the end of
the tenant URL. You can find the CTID in the Fabric portal by opening the About
Microsoft Fabric dialog window. It's available from the Help & Support (?) menu, which is
located at the top-right of the Fabric portal.

Knowing your CTID is important for Azure AD B2B scenarios. URLs that you provide to
external users (for example, to view a Power BI report) must append the CTID parameter
in order to access the correct tenant.
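As an illustration only (the workspace ID, report ID, and tenant ID below are hypothetical placeholders), a report link shared with an external user generally takes the following form, with the CTID appended as the ctid query parameter:

```
https://app.powerbi.com/groups/<workspace-id>/reports/<report-id>?ctid=<customer-tenant-id>
```

Without the ctid parameter, an external user who signs in may land in their own home tenant rather than in yours.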

If you intend to collaborate with or provide content to external users, we recommend setting up custom branding. Use of a logo, cover image, and theme helps users identify which organizational tenant they're accessing.

Checklist - When granting external users permission to view your content, or when you
have multiple tenants, key decisions and actions include:

" Include your CTID in relevant user documentation: Record the URL that appends
the tenant ID (CTID) in user documentation.
" Set up custom branding in Fabric: In the Fabric admin portal, set up custom
branding to help users identify the organizational tenant.

Azure AD administrators
Fabric administrators periodically need to work with the Azure AD administrators.
The following list includes some common reasons for collaboration between Fabric
administrators and Azure AD administrators.

Security groups: You'll need to create new security groups to properly manage the
Fabric tenant settings. You may also need new groups to secure workspace content
or for distributing content.
Security group ownership: You may want to assign a group owner to allow more
flexibility in who can manage a security group. For example, it could be more
efficient to permit the Center of Excellence (COE) to manage the memberships of
certain Fabric-specific groups.
Service principals: You may need to create an Azure AD app registration to provision a service principal. Authenticating with a service principal is a recommended practice when a Fabric administrator wants to run unattended, scheduled scripts that extract data by using the admin APIs, or when embedding content in an application (a minimal sketch follows this list).
External users: You'll need to understand how the settings for external (guest)
users are set up in Azure AD. There are several Fabric tenant settings related to
external users, and they rely on how Azure AD is set up. Also, certain security
capabilities for the Power BI workload only work when using the planned invitation
approach for external users in Azure AD.
Real-time control policies: You may choose to set up real-time session control
policies, which involves both Azure AD and Microsoft Defender for Cloud Apps. For
example, you can prohibit the download of a Power BI report when it has a specific
sensitivity label.
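As one illustration of the service principal scenario mentioned in this list, the following Python sketch acquires an app-only token with the MSAL library and calls an admin API to list workspaces in the tenant. The tenant ID, client ID, and client secret are placeholders (retrieve the secret from a secure vault rather than hard-coding it), and the service principal must also be allowed by the relevant Fabric tenant settings before admin API calls will succeed.

```python
import msal
import requests

TENANT_ID = "<your-azure-ad-tenant-id>"        # placeholder
CLIENT_ID = "<app-registration-client-id>"     # placeholder
CLIENT_SECRET = "<app-client-secret>"          # placeholder: retrieve from a secure vault

# Acquire an app-only (service principal) token for the Power BI REST API.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)

# Call an admin API: list the first 100 workspaces in the tenant.
response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/admin/groups",
    params={"$top": 100},
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
response.raise_for_status()
for workspace in response.json().get("value", []):
    print(workspace.get("name"))
```

A script like this is typically scheduled to run unattended, which is exactly the situation where a service principal is preferable to a user account.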

Checklist - When considering how to work with your Azure AD administrators, key
decisions and actions include:

" Identify your Azure AD administrators: Make sure you know the Azure AD
administrators for your organization. Be prepared to work with them as needed.
" Involve your Azure AD administrators: As you work through your implementation
planning process, invite Azure AD administrators to pertinent meetings and involve
them in relevant decision-making.

Location for data storage


When a new tenant is created, resources are provisioned in Azure, which is Microsoft's
cloud computing platform. Your geographic location becomes the home region for your
tenant. The home region is also known as the default data region.

Home region
The home region is important because:

The performance of reports and dashboards depends, in part, on users being in proximity to the tenant location.
There may be legal or regulatory reasons that the organization's data be stored in
a specific jurisdiction.

The home region for the organization's tenant is set to the location of the first user that
signs up. If most of your users are located in a different region, that region might not be
the best choice.

You can determine the home region for your tenant by opening the About Microsoft
Fabric dialog window in the Fabric portal. The region is displayed next to the Your data is
stored in label.

You may discover that your tenant resides in a region that isn't ideal. You can use the Multi-Geo feature by creating a capacity in a specific region (described in the next section), or you can move your tenant. To move your tenant to another region, your global Microsoft 365 administrator should open a support request.

The relocation of a tenant to another region isn't a fully automated process, and some
downtime is involved. Be sure to take into consideration the prerequisites and actions
that are required before and after the move.

 Tip

Because a lot of effort is involved, when you determine that a move is necessary,
we recommend that you do it sooner rather than later.

Checklist - When considering the home region for storing data in your tenant, key
decisions and actions include:

" Identify your home region: Determine the home region for your tenant.
" Initiate the process to move your tenant: If you discover that your tenant is located
in an unsuitable geographic region (that can't be solved with the Multi-Geo
feature), research the process to move your tenant.

Other specific data regions


Some organizations have data residency requirements. Data residency requirements
typically include regulatory or industry requirements for storing data in a specific
geographic region. Data sovereignty requirements are similar, but more stringent
because the data is subject to the laws of the country or region in which the data is
stored. Some organizations also have data localization requirements, which dictate that
data created within certain borders needs to remain within those borders.

Regulatory, industry, or legal requirements can require you to store certain data
elsewhere from the home region (described in the previous section). In these situations,
you can benefit from the Multi-Geo feature by creating a capacity in a specific region. In
this case, you must assign workspaces to the correct capacity to ensure that the
workspace data is stored in the desired geographic location.

Multi-Geo support enables organizations to:

Meet data residency requirements for data at rest.
Improve the ability to locate data near the user base.

7 Note

The Multi-Geo feature is available with any type of capacity license (except shared
capacity). It's not available with Premium Per User (PPU) because data stored in
workspaces assigned to PPU is always stored in the home region (just like shared
capacity).

Checklist - When considering other specific data regions for your tenant, key decisions
and actions include:

" Identify data residency requirements: Determine what your requirements are for
data residency. Identify which regions are appropriate, and which users might be
involved.
" Investigate use of the Multi-Geo feature: For specific situations where data should
be stored elsewhere from the home region, investigate enabling Multi-Geo.

Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.

 Tip

To learn how to manage a Fabric tenant, we recommend that you work through the
Administer Microsoft Fabric module.
Power BI implementation planning: User
tools and devices
Article • 08/31/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article introduces key considerations for planning user tools and managing devices
to enable and support Power BI consumers and authors in the organization. This article
is targeted at:

Center of Excellence (COE) and BI teams: The teams that are responsible for
overseeing Power BI in the organization. These teams include decision makers who
need to decide which tools to use for creating Power BI content.
Fabric administrators: The administrators who are responsible for overseeing
Fabric in the organization.
IT and infrastructure teams: Technical staff who installs, updates, and manages
user devices and machines.
Content creators and content owners: Users who need to communicate with
colleagues and make requests for what they need to have installed.

One important aspect of analytics adoption is ensuring that content consumers and
content creators have the software applications they need. The effective management of
tools–particularly for users who create content–leads to increased user adoption and
reduces user support costs.

Requests for new tools


How you handle requests for new tools and software applications is a governance
decision. Many users who are new to the organization, or are just getting started with
analytics, don't know what to request. To simplify the process, consider handling the
following requests together:

Software requests
User license requests
Training requests
Data access requests

Software installations are usually the responsibility of the IT department. To ensure an optimal user experience, it's critical that IT collaborate with the Center of Excellence (COE) on key decisions and processes, such as:

Process for users to request software installation. There are several ways to
handle software installation requests:
Common tools can be included in a standard machine setup. IT teams
sometimes refer to it as the standard build.
Certain applications might be installed automatically based on job role. The
software that's installed could be based on an attribute in the user profile in
Microsoft Entra ID (Azure Active Directory).
For custom requests, using a standard request form works well. A form (rather
than email) builds up a history of requests. When prerequisites or more licenses
are required, approval can be included in the workflow.
Process for installing software updates. The timely installation of software
updates is important. The goal is to stay as current as possible. Be aware that users
can read online what's possible and might become confused or frustrated when
newer features aren't available to them. For more information, see Client tools later
in this article.

Checklist - When planning for how to handle requests for new tools, key decisions and
actions include:

" Decide how to handle software requests: Clarify who's responsible for receiving
and fulfilling new requests for software installation.
" Confirm whether prerequisites are required: Determine what organizational
prerequisites exist related to training, funding, licensing, and approvals prior to
requesting software to be installed.
" Create a tracking system: Create a system to track the status and history of
software requests.
" Create guidance for users: Provide documentation in the centralized portal for how
to request new tools and software applications. Consider co-locating this guidance
with how to request licenses, training, and access to data.

Plan for consumer tools


In an organization, many users are classified as consumers. A consumer views content
that's been created and published by others.

The most common ways that a consumer can access Power BI content include:

Power BI service: Content consumers view content by using a web browser (such as Microsoft Edge).

Teams: Content consumers who view content that's been published to the Power BI service by using the Power BI app for Microsoft Teams. This option is convenient when users spend a lot of time in Teams. For more information, see Guide to enabling your organization to use Power BI in Microsoft Teams.

Power BI Mobile Application: Content consumers who interact with content that's been published to the Power BI service (or Power BI Report Server) using iOS, Android, or Windows 10 applications.

OneDrive/SharePoint viewer: Content consumers who view Power BI Desktop (.pbix) files that are stored in OneDrive or SharePoint by using a web browser. This option is a useful alternative to sharing the original Power BI Desktop files. The OneDrive/SharePoint viewer is most suitable for informal teams who want to provide a friendly, web-based, report consumer experience without explicitly publishing .pbix files to the Power BI service.

Power Apps solutions: Content consumers who view content from the Power BI service that's embedded in a Power Apps solution.

Custom application: Content consumers who view content from the Power BI service that's been embedded in a custom application for your organization or for your customers.

7 Note

This list isn't intended to be an all-inclusive list of ways to access Power BI content.

Because the user experience can vary slightly between different web browsers, we
recommend that you document browser recommendations in your centralized portal.
For more information, see Supported browsers for Power BI.

Checklist - When planning for consumer tools, key decisions and actions include:
" Use a modern web browser: Ensure that all users have access to a modern web
browser that's supported for Power BI. Confirm that the preferred browser is
updated regularly on all user devices.
" Decide how Teams should be used with Power BI: Determine how users currently
work, and to what extent Teams integration is useful. Set the Enable Teams
integration and the Install Power BI app automatically tenant settings in the Fabric
admin portal according to your decision.
" Enable and install the Teams app: If Teams is a commonly used tool, enable the
Power BI app for Microsoft Teams. Consider pre-installing the app for all users as a
convenience.
" Decide whether viewing Power BI Desktop files is permitted: Consider whether
viewing Power BI Desktop files stored in OneDrive or SharePoint is allowed or
encouraged. Set the Users can view Power BI files saved in OneDrive and SharePoint
tenant setting according to your decision.
" Educate users: Provide guidance and training for content creators on how to make
the best use of each option, and where to securely store files. Include
recommendations, such as preferred web browsers, in your centralized portal.
" Conduct knowledge transfer with the support team: Confirm that the support
team is prepared to answer frequently asked questions from users.

Plan for authoring tools


Some users are considered content creators. A content creator authors and publishes
content that's viewed by consumers.

There are several tools that content creators can use to author Power BI content. Some
tools are targeted at self-service content creators. Other tools are targeted at advanced
content creators.

 Tip

This section introduces the most common authoring tools. However, an author
doesn't need all of them. When in doubt, start by only installing Power BI Desktop.

Available tools for authoring


The following list describes the most common tools and applications that are available to content creators.

Power BI service: Content consumers and creators who develop content by using a web browser.

Power BI Desktop: Content creators who develop data models and interactive reports that will be published to the Power BI service.

Power BI Desktop Optimized for Report Server: Content creators who develop data models and interactive reports that will be published to Power BI Report Server (a simplified on-premises report portal).

Power BI Report Builder: Report creators who develop paginated reports that will be published to the Power BI service or to Power BI Report Server.

Power BI App for Teams: Content creators and consumers who interact with content in the Power BI service, when their preference is to remain within the Microsoft Teams application.

Power BI Mobile Application: Content creators and consumers who interact with and manage content that's been published to the Power BI service (or Power BI Report Server) using iOS, Android, or Windows 10 applications.

Excel: Content creators who develop Excel-based reports in workbooks that may include PivotTables, charts, slicers, and more. Optionally, Excel workbooks can be viewed in the Power BI service when they're stored in SharePoint or OneDrive for work or school.

Third-party tools: Advanced content creators may optionally use third-party tools and extend the built-in capabilities for purposes such as advanced data model management and enterprise content publishing.

Choose an authoring tool


When choosing an authoring tool, there are some key factors you should consider.
Some of the following decisions can be made once, whereas other decisions need to be
evaluated for each project or solution that you create.

Is browser-based authoring desirable? To improve ease of use and reduce friction, Power BI (and other Fabric workloads) supports browser-based functionality for both content consumption and content creation. That's an advantage because a web browser is readily available to all users, regardless of the desktop operating system they use (including Mac users).
What's the desired development experience? Consider that Power BI Desktop can
be used to create data models and interactive reports, whereas Power BI Report
Builder is a design tool for creating paginated reports. Also, third-party tools offer
extra functionality to developers that isn't available in Power BI Desktop. Because
development experience differs among tools, the requirements for each specific
solution should factor into your decision on which tool to use.
What's the desired publishing experience? Advanced content creators and
content owners may prefer to publish content by using a third-party tool (such as
ALM Toolkit to compare and merge models). The requirements for each specific
solution should be considered.
What's the preferred way to access and/or manage datasets? Rather than using the standard Power Query experience, advanced content creators might prefer to read and/or write to datasets with their tool of choice by using the XMLA endpoint (see the example after this list). The requirements for each specific solution should be considered.
How easily can you keep client tools updated? Some organizations find it
challenging to install frequent updates of client applications. In this case, users
might prefer to use a web browser whenever possible.
What are the skills and expertise of the users? There may be existing knowledge
and preferences that impact which tool is selected. This choice impacts both initial
development activities, and also whoever will support users and maintain existing
solutions.
How will versioning be managed? Version control can be accomplished in
multiple ways. When working in a client tool, self-service users might prefer to use
OneDrive or SharePoint, whereas more advanced users might prefer Git integration
with client tools. When working in the Power BI service, Git workspace integration
is available.
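For the XMLA option mentioned in this list, client tools connect to a workspace through its XMLA endpoint address, which generally follows the format below (the workspace name is a hypothetical placeholder):

```
powerbi://api.powerbi.com/v1.0/myorg/<Workspace Name>
```

Tools such as SQL Server Management Studio, Tabular Editor, or DAX Studio accept this address as a server or data source name. Keep in mind that XMLA endpoint connectivity, particularly read/write, depends on the workspace license mode (such as a capacity or Premium Per User).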

 Tip

We recommend that you adopt one method of working and then consistently use
that method. For example, when content creators are inconsistent about using
Power BI Desktop versus the Power BI service for report creation, it becomes much
harder to determine where the original report resides and who's responsible for it.

When to use each authoring tool


The remainder of this section considers when to use the most common authoring tools.

Web-based authoring
The capabilities in the Power BI service for authoring and editing content are continually
evolving (alongside capabilities for viewing, sharing, and distributing content). For
content creators that use a non-Windows operating system (such as macOS, Linux, or
Unix), web-based authoring in the Power BI service is a viable option. Web-based
authoring is also useful for organizations that aren't capable of keeping Power BI
Desktop updated on a timely basis.

7 Note

Because the Power BI service is a web application, Microsoft installs all updates to
ensure it's the latest version. That can be a significant advantage for busy IT teams.
However, it's also important to closely monitor when releases occur so that you're
informed about feature changes.

There are some types of Power BI items that can be created in the web-based
experience, such as:

Dataflows
Datamarts
Paginated reports
Power BI reports
Dashboards
Scorecards

A Fabric solution can be created end-to-end in a browser. The solution may include
Power BI items, and also non-Power BI items (such as a lakehouse).

) Important

When choosing to create content in the browser, it's important that you educate
content creators where to save content. For example, it's easy to save a new report
to a personal workspace, but that's not always an ideal choice. Also, it's important
to consider how versioning will be handled (such as Git integration).

Power BI Desktop

Because it's a free application, Power BI Desktop is a great way for content creators to
get started with developing data models and creating interactive reports. Power BI
Desktop allows you to connect to many data sources, combine data from multiple data
sources, clean and transform data, create a data model, add DAX calculations, and build
reports within a single application. Power BI Desktop is well-suited to building
interactive reports with a focus on exploration.

Here are some points to consider when using Power BI Desktop.


You can create reports within Power BI Desktop or the Power BI service. Due to this
flexibility, a consistent process for how and where to develop content is necessary.
Using version control is considered a best practice. One option for self-service
content creators is to save files created by Power BI Desktop in a location with
versioning enabled (such as OneDrive or SharePoint) that can be secured to
authorized users. Advanced content creators might prefer to use Git integration.
Power BI Desktop is available as a Windows desktop application. Optionally, it's
possible to run Power BI Desktop in a virtualized environment.
Power BI Desktop is usually updated every month. Regular updates allow users to
access new features quickly. However, rolling out frequent updates in a large
organization requires planning. For more information, see Client tools later in this
article.

7 Note

There are many options and settings in Power BI Desktop that significantly affect
the user experience. Not all settings can be programmatically maintained with
group policy or registry settings (described later in this article). One key setting
relates to preview features that users can enable in Power BI Desktop. However,
preview features are subject to change, have limited support, and may not always
work in the same way in the Power BI service (during the preview period).

We recommend that you only use preview features to evaluate and learn new
functionality. Preview features shouldn't be used for mission-critical production
content.

Power BI Desktop for Report Server

Like the standard version of Power BI Desktop, content creators can use Power BI
Desktop for Report Server to create .pbix files. It supports publishing content to Power
BI Report Server. New versions align with the release cadence of Power BI Report Server,
which is usually three times per year.

It's important that content creators use the correct report server version of Power BI
Desktop to avoid compatibility issues after content has been published to Power BI
Report Server. You can manually download and install Power BI Desktop for Report
Server from the Microsoft Download Center.

For users who publish content to both the Power BI service and Power BI Report Server,
there are two options.
Option 1: Only use Power BI Desktop for Report Server because it produces files
that can be published to both the Power BI service and the report server. New
authoring features will become available to users approximately every four months
(to remain consistent with the Power BI Report Server release cadence).
Pros:
Content creators only need to use one tool.
Content creators are assured that the content they publish is compatible with
the report server.
Fewer tools are simpler to manage.
Cons:
Some features that are only supported in the Power BI service aren't available
in the Report Server version of Power BI Desktop. Therefore, content creators
may find it limiting.
New features are slower to become available.
Preview features aren't available.
Option 2: Run both versions—Power BI Desktop, and Power BI Desktop for Report
Server—side by side.
Pros:
All features in standard Power BI Desktop are available to be used.
New features for the standard Power BI Desktop are available more quickly.
Preview features for the standard Power BI Desktop are available to use, at
the content creator's discretion.
Cons:
Content creators must be prepared for complexity because they need to
remember which version to use when, based on the target deployment
location. The risk is that when a .pbix file from the newer version is
inadvertently published to Power BI Report Server, it may not function
correctly. For example, data model queries fail, data refresh fails, or reports
don't render properly.
Content creators need to be aware of the default behavior when they directly
open .pbix files (instead of opening them from within Power BI Desktop).

Microsoft Excel

Many business users are proficient with Microsoft Excel and want to use it for data
analysis by using PivotTables, charts, and slicers. There are other useful Excel features as
well (such as cube functions) that allow greater flexibility and formatting when
designing a grid layout of values. Some content creators might also prefer to use Excel
formulas for some types of calculations (instead of DAX calculations in the data model),
particularly when they perform data exploration activities.
Here are several ways to efficiently use Excel with Power BI.

Connect Excel to a Power BI dataset: This capability is known as an Excel live connection (when you start from Excel) or as Analyze in Excel (when you start from the Power BI service). Connecting Excel to a Power BI dataset is best suited to report creators who prefer using Excel for creating visualizations that are connected to an existing shared dataset. The advantage of this approach is that it's a connection—rather than an export of data—so the data in the Excel workbook is refreshable.
Connect Excel to featured tables in a Power BI dataset: If you prefer to connect
Excel to a subset of tables within a Power BI dataset (rather than the entire shared
dataset), you can use featured tables. This option works well when you need to
relate data that's in Excel to data that's stored in Power BI.
Export to Excel with a live connection: When viewing a visual, you can export a
table of refreshable data to Excel. This technique is useful when you want to
further explore the data by using a PivotTable in Excel.
Create an Excel data model: The Excel data model (formerly known as Power
Pivot) is a native feature of Excel. It uses the same database engine as Power BI for
storing imported datasets, and the same Power Query functionality to get data.
However, in Excel, the functionality is updated much less frequently than Power BI.
It's useful for content creators who create small models and have a strong
preference for working in Excel. Optionally, you can import your workbook from
SharePoint or OneDrive for work or business. That allows you to view the
workbook in the Power BI service. Or you can create a new Power BI dataset that's
synchronized with the data in the workbook (when it's stored in OneDrive for work
or school).

There are other ways to work with Excel. These options are less optimal, and so you
should use them only when necessary.

Export to Excel: Many users have established a habit of exporting data to Excel
from reports or dashboards. While Power BI supports this capability, it should be
used cautiously and in moderation because it results in a static set of data. To
ensure that data exports to Excel aren't overused, users in the organization should be educated on the downsides of exports, and administrators should track exports in the user activity data (a query sketch follows this list).
Get source data from Excel: Excel can be used as a data source when importing
data to Power BI. This capability works best for small projects when a user-friendly
Excel-based solution is required to maintain source data. It can also be useful to
quickly conduct a proof of concept (POC). However, to reduce the risk associated
with Excel data sources, the source Excel file should be stored in a secure, shared
location. Also, column names shouldn't be changed to ensure data refreshes
succeed.
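To support that kind of oversight, the activity log can be queried programmatically. The following Python sketch is illustrative only: it calls the Power BI admin activity events REST API for a single UTC day and prints export-related events. It assumes you already hold an access token with admin API permissions (token acquisition, paging via the continuation URI, and error handling are omitted), and the exact activity names for exports should be verified against the events recorded in your own tenant.

```python
import requests

# Minimal sketch: query the Power BI activity log for export events on one UTC day.
# Assumes a valid access token with admin API permissions (placeholder below).
ACCESS_TOKEN = "<admin-access-token>"

url = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
params = {
    "startDateTime": "'2024-01-15T00:00:00Z'",  # the API expects quoted datetimes
    "endDateTime": "'2024-01-15T23:59:59Z'",
}
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

response = requests.get(url, params=params, headers=headers)
response.raise_for_status()

# Filter client-side for export-related activities. Activity names vary by export
# type, so verify the exact values you see in your own tenant's log.
for event in response.json().get("activityEventEntities", []):
    if "Export" in event.get("Activity", ""):
        print(event.get("UserId"), event.get("Activity"), event.get("ItemName"))
```

In practice, an extract like this is usually scheduled and landed somewhere the Center of Excellence can report on, rather than run interactively.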

 Tip

We recommend that you primarily encourage the use of Excel as a live connection.

Here are some important points to consider when determining whether Excel is an
appropriate authoring tool.

Certain prerequisites must be in place to allow users to connect to a Power BI dataset from Excel.
In some organizations, users have the 32-bit version of Excel installed rather than
the 64-bit version. The 64-bit version of Excel can support larger data volumes, and
generally performs better than the 32-bit version. All data providers must also
align with this choice.
Some features in Power BI Desktop aren't available in the Excel data model, or
they're released on a significantly slower cadence. Therefore, complex modeling
requirements may not be (easily) possible in Excel.
Some connectors and data sources that are available in Power BI Desktop aren't
available in Excel.

 Tip

Many organizations have existing Excel solutions that can be modernized by connecting the Excel file to a Power BI shared dataset (rather than using a data
export). Live connectivity saves users from repeating tedious steps, prevents data
from becoming stale, and ensures data security is consistently applied when users
refresh the Excel data.

Power BI Report Builder


Power BI Report Builder is a tool for creating a paginated report (.rdl) file. Paginated
reports can be deployed to either the Power BI service or Power BI Report Server. If you
have experience creating reports in SQL Server Reporting Services (SSRS), you'll find it's
a similar report creation experience.

Paginated reports are best suited to highly formatted, or pixel-perfect, reports such as
financial statements. They're also suitable for reports that are intended to be printed or
for PDF generation, and when user input (with report parameters) is required.
 Tip

For other scenarios that favor choosing paginated reports, see When to use
paginated reports in Power BI.

Here are some important points to consider when deciding on using Power BI Report
Builder.

Approach working in Power BI Report Builder with a different mindset than when
you work in Power BI Desktop. A paginated report always focuses on the creation
of one individual report (conversely, a dataset created in Power BI Desktop may
serve many different reports).
Developing paginated reports involves more skill than creating Power BI reports.
However, the main benefit is fine-grained control over data retrieval, layout, and
placement.
A paginated report is concerned with both data retrieval and layout. You're
required to develop a query (known as a dataset—not to be confused with a Power
BI dataset) to retrieve data from an external data source, which might involve
writing a native query statement (in DAX, T-SQL, or other language). The dataset
belongs to one report, so it can't be published and used by other paginated
reports.
Report consumers become accustomed to the built-in interactivity of Power BI
reports. However, report interactivity isn't a strength of paginated reports.
Attempting to achieve similar interactivity in paginated reports can be challenging
or impossible.
If you need to access data by using a database stored procedure (such as an Azure
SQL database stored procedure), that's possible with paginated reports.
There are some feature differences and unsupported capabilities depending on
whether the paginated report is published to the Power BI service or Power BI
Report Server. We recommend that you conduct a proof of concept to determine
what's possible for your target environment.

 Tip

For more information, see Paginated reports in Power BI FAQ and Design tips for
reports in Power BI Report Builder.

Third-party tools
Advanced content creators can choose to use third-party tools, especially for enterprise-
scale operations. They can use third-party tools to develop, publish, manage, and
optimize data models. The goal of these tools is to broaden the development and
management capabilities available to dataset creators. Common examples of third-party
tools include Tabular Editor, DAX Studio, and ALM Toolkit. For more information, see the
advanced data model management usage scenario.

7 Note

The use of third-party tools has become prevalent in the global Power BI
community, especially by advanced content creators, developers, and IT
professionals.

There are three main ways to use third-party tools for dataset development and
management.

Use an external tool to connect to a local data model in Power BI Desktop: Some
third-party tools can connect to the data model in an open Power BI Desktop file.
When registered with Power BI Desktop, these tools are known as external tools
and extend the native capabilities of Power BI Desktop.
Use the XMLA endpoint to connect to a remote data model in the Power BI
service: Some third-party tools can use the XML for Analysis (XMLA) protocol to
connect to a dataset that's been published to the Power BI service. Tools that are
compliant with the XMLA protocol use Microsoft client libraries to read and/or write data to a data model by using Tabular Object Model (TOM) operations (a connection sketch follows this list).
Use a template file to connect to a local data model in Power BI Desktop: Some
third-party tools distribute their functionality in a lightweight way by using a Power
BI Desktop template (.pbit) file.
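To illustrate the XMLA endpoint approach referenced above, the following sketch connects to a workspace's XMLA endpoint and lists the datasets it contains by using the Tabular Object Model (TOM). It's written in Python (via pythonnet) purely for illustration; the same TOM calls are more commonly made from .NET-based tools. It assumes the Analysis Services client libraries are installed, XMLA endpoints are enabled for the capacity, and the placeholder service principal has been granted access to the workspace.

```python
# Minimal sketch (not a supported tool): list datasets in a workspace through the
# XMLA endpoint by using the Tabular Object Model (TOM).
# Assumptions: the Microsoft.AnalysisServices.Tabular client library is installed
# and resolvable, pythonnet is available, and the service principal below has
# been added to the workspace.
import clr

clr.AddReference("Microsoft.AnalysisServices.Tabular")
from Microsoft.AnalysisServices.Tabular import Server

workspace = "Sales Analytics"   # hypothetical workspace name
app_id = "<client-id>"          # placeholders for a service principal
tenant_id = "<tenant-id>"
secret = "<client-secret>"

connection_string = (
    f"Data Source=powerbi://api.powerbi.com/v1.0/myorg/{workspace};"
    f"User ID=app:{app_id}@{tenant_id};Password={secret};"
)

server = Server()
server.Connect(connection_string)
try:
    for database in server.Databases:
        # Each database corresponds to a dataset published to the workspace.
        print(database.Name, database.CompatibilityLevel)
finally:
    server.Disconnect()
```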

Some third-party tools are proprietary and require a paid license (such as Tabular Editor
3). Other community tools are free and open source (such as Tabular Editor 2, DAX
Studio, and ALM Toolkit). We recommend that you carefully evaluate the features of
each tool, cost, and its support model so you can sufficiently support your content
creators.

 Tip

Some organizations find it easier to get a new tool approved that's fully supported
(even when there's a licensing cost). However, other organizations find it easier to
get a free open-source tool approved. Your IT department can provide guidance
and help you do the necessary due diligence.

Checklist - When planning for authoring tools, key decisions and actions include:

" Decide which authoring tools to encourage: For self-service creators and advanced
content creators, consider which of the available tools will be actively promoted for
use in the organization.
" Decide which authoring tools will be supported: For self-service creators and
advanced content creators, consider which of the available tools will be supported
and by whom.
" Evaluate the use of third-party tools: Consider which third-party tools will be
allowed or encouraged for advanced content creators. Investigate the privacy
policy, licensing cost, and the support model.
" Create guidance for content creators: Provide guidance and training to help users
choose and use the appropriate authoring tool for their circumstances.

Manage and set up devices


This section describes considerations for installing and updating tools and applications
and setting up user devices.

Client tools
IT often uses the term client tools to refer to software that's installed on client machines
(user devices). The most common Power BI software installed on a user device is Power
BI Desktop.

Because Microsoft usually updates Power BI Desktop every month, it's important to have
a seamless process for managing installations and updates.

Here are several ways that organizations can manage installations and updates of Power
BI Desktop.
Microsoft Store (supports automatic updates: yes): Power BI Desktop is distributed from the Microsoft Store. All updates, including bug fixes, are automatically installed. This option is an easy and seamless approach, provided that your organization doesn't block some (or all) apps from the Microsoft Store for some (or all) users.

Manual installation (supports automatic updates: no): You can manually download and install an executable (.exe) file from the Microsoft Download Center. However, be aware that the user who installs the software must have local administrator rights—in most organizations, those rights are restricted. If you choose to use this approach (and it isn't managed by IT), there's a risk that users will end up with different versions of Power BI Desktop installed, possibly resulting in compatibility issues. Also, with this approach, every user will need to be notified to install quick fix engineering (QFE) releases, also known as bug fixes, when they come out.

IT-managed systems (supports automatic updates: depends on the setup): You can use a variety of IT-managed organizational deployment methods, like Microsoft System Center or Microsoft Application Virtualization (App-V). This option is best suited for organizations that need to manage many installations at scale or in a customized way.

It's important that user devices have adequate system resources. To be productive,
content creators who work with large data volumes may need system resources that
exceed the minimum requirements—especially memory (RAM) and CPU. IT may have
suggested machine specifications based on their experience with other content creators.

All content creators collaborating on Power BI development should use the same
version of the software—especially Power BI Desktop, which is usually updated every
month. We recommend that you make updates automatically available to users because:

Multiple content creators who collaborate on a Power BI Desktop file are assured
of being on the same version. It's essential that creators who work together on the
same .pbix file use the same software version.
Users won't have to take any specific action to obtain updates.
Users can take advantage of new capabilities, and their experience is aligned to
announcements and documentation. It can impact adoption and user satisfaction
when content creators learn about new capabilities and features, yet they
experience long delays between software updates.
Only the latest version of Power BI Desktop is supported by Microsoft. If a user has
an issue and files a support ticket, they'll be asked by Microsoft support to
upgrade their software to the latest version.
In addition to Power BI Desktop (described previously), you may need to install and
manage other Microsoft tools or third-party tools on user devices, including mobile
devices. For a list of possible tools, see Available tools for authoring earlier in this article.

Users who create and manage files located in Fabric OneLake might also benefit from
OneLake File Explorer. This tool allows them to conveniently upload, download, edit, or
delete files in OneLake by using Windows file explorer.

7 Note

Your IT department may have managed device policies in place. These policies
could control what software can be installed, and how it's managed.

Client tool prerequisites


Content creators that have client tools installed, such as Power BI Desktop, may require
specific prerequisite software or packages.

WebView2: (Mandatory) For content creators running Power BI Desktop, the Microsoft Edge WebView2 Runtime is a prerequisite. WebView2 allows the
embedding of web technologies (such as HTML, CSS, and JavaScript) in Power BI
Desktop in a secure way. WebView2 will already be installed if the user device has
the latest version of Windows or has Microsoft 365 applications installed and
monthly updates are enabled.
.NET Framework: (Mandatory) For content creators running Power BI Desktop or a
third-party tool, the .NET Framework is a prerequisite. The .NET Framework is a
technology that supports building and running Windows apps. Power BI Desktop
requires a specific minimum version of the .NET Framework.
Microsoft Edge: (Mandatory) For content creators running Power BI Desktop, the
Edge browser is a prerequisite.
Python and R packages: (Optional) Python and R scripts can be used in multiple
ways with Power BI, when enabled by the tenant setting. Scripts can be used to
create Python visuals or R visuals. Scripts can also be created in the Query Editor; in
this case, a personal gateway is required because Python and R are not supported
in the standard data gateway. Python packages or R packages are a prerequisite.
To avoid incompatibilities, IT should manage which packages get installed, where
they're installed, and that the versions installed match what's supported in the
Power BI service.
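For context, here's the general shape of a script that a Python visual runs. It's a generic illustration with hypothetical column names: Power BI supplies the fields added to the visual as a pandas DataFrame named dataset, and the image produced by matplotlib is rendered as the visual. Any packages the script uses must be installed on the creator's machine and be supported by the Power BI service.

```python
# Example of the kind of script a Python visual executes in Power BI.
# Power BI passes the fields added to the visual as a pandas DataFrame named
# "dataset"; whatever matplotlib renders is displayed as the visual.
# Column names below (Category, Sales) are hypothetical.
import matplotlib.pyplot as plt

summary = dataset.groupby("Category", as_index=False)["Sales"].sum()

plt.bar(summary["Category"], summary["Sales"])
plt.xlabel("Category")
plt.ylabel("Sales")
plt.tight_layout()
plt.show()
```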

Data connectivity components


Depending on your data sources, you may need to install drivers, connectors, or
providers on user devices. These components enable data connectivity when a user
works in a client tool (such as Power BI Desktop) or a third-party tool.

Drivers: A driver is a software component that connects to other systems. For example, to connect to an Oracle database, you may need the Oracle Data Access
Client software. Or, to connect to SAP HANA, you may need an ODBC driver.
Custom connectors: A custom data source connector may be required when
connecting to a legacy or proprietary system.
Excel provider: The Analyze in Excel provider allows users to create visualizations
in Excel while connected to an existing shared dataset that's been published to the
Power BI service.
Analysis Services client libraries: When connecting to an Analysis Services source,
a client library needs to be installed.
Access Database OLE DB provider: When connecting to an Access database, an
OLE DB provider needs to be installed.

) Important

For data sources that require connectivity through a gateway, the same drivers,
connectors, and providers will need to be installed on each data gateway machine.
Missing components on a data gateway are a common reason for data refresh
failures once content has been published to the Power BI service.

 Tip

To simplify delivery to a larger number of users, many IT teams deploy the most
common drivers, connectors, and providers as part of a standard user device setup.

Version control tools


Content creators that have client tools installed, such as Power BI Desktop, should also
have a way to save versions, or historical copies, of files. Access to previous versions is
particularly helpful when a change needs to be rolled back.

There are two main ways to handle versioning of development files.

Teams, OneDrive for Business, SharePoint: Self-service content creators often save
files in Teams, OneDrive for work or school, or SharePoint. Users find these tools
are familiar and simple to use. Shared libraries can be organized, secured for
appropriate coworkers, and versioning is built in.
Source control plug-ins: Advanced content creators may need to integrate with a
source control tool. It typically involves installing Git for source control, then
using a source control management tool like Visual Studio Code to commit
content changes to a remote repository, such as Azure DevOps Repos. For Power
BI Desktop, creators can use developer mode. In this mode, content is saved as a
Power BI project (.pbip) file, which is compatible for use with a preferred source
control system. When working with Fabric, Git integration is supported for working
with a client tool.

For more information, see Strategy for file locations.

Custom visuals
Power BI custom visuals, which developers can create by using the Power BI visuals
SDK, allow Power BI report creators to work beyond the built-in core visuals. A custom
visual can be created and released by Microsoft, software developers, vendors, or
partners.

To use a custom visual in Power BI Desktop, it must first be installed on the machine of
the content creator. There are several ways to distribute visuals to users.

AppSource: AppSource is a marketplace for applications, add-ins, and extensions for Microsoft software. Visuals are distributed in AppSource using a
Power BI visual (.pbiviz) file. A visual might be distributed freely or require a license.
Advantages:
It's simple for users to search for, and locate, visuals in AppSource.
All reports and dashboards are automatically updated to use the latest
version of custom visuals that have been sourced from AppSource.
Supports the use of certified visuals.
Microsoft performs basic validations of visuals published to AppSource. The
extent of the review depends on whether the visual is certified or not.
Potential disadvantages:
When each content creator downloads what they need from AppSource, it
can lead to incompatibilities when users have different versions installed.
A content creator might download a visual that hasn't yet been tested or
approved for use in the organization.
The developer of the visual needs to follow a strict publishing process.
Although it strengthens security and improves stability, the process can make
it challenging to release a bug fix quickly.
Import a visual file: A content creator may import a visual file into Power BI
Desktop.
Advantages:
Visuals that are available publicly, or privately distributed, can be installed.
That includes internally developed visuals or proprietary visuals purchased
from a vendor.
Allows a way to obtain a visual file outside of AppSource.
Potential disadvantages:
Without a centralized system, it can be difficult for content creators to know
what visuals have been approved for use in the organization.
When each content creator imports the visual file they have, it can lead to
incompatibilities when users have different versions installed.
Updates aren't automatically propagated to user devices. Reports in local
Power BI Desktop files aren't updated until each user updates their visual
files.
Doesn't support the use of certified visuals.
Organizational visuals: The organizational visuals repository is a centralized area in
the Fabric admin portal for managing visuals.
Advantages:
Content creators don't have to manage visual files. Instead, a Fabric
administrator centrally manages the version of a visual that's available for all
users. Version consistency is ensured for all users and all reports.
Visuals that are available publicly or privately distributed can be installed.
That includes internally developed visuals or proprietary visuals purchased
from a vendor.
Visuals can be tested and pre-approved for use in the organization. This
verification process reduces the risk that non-approved visuals are used. It
also allows greater flexibility for setting which specific version of a visual is
approved for use.
All reports and dashboards are automatically updated to use the latest
version (when a visual file is updated in the admin portal or made available in
AppSource).
If a visual that's currently in use by the organization is deemed to be no
longer trustworthy, it can be disabled or deleted from the organizational
visuals repository. In this case, the visual won't be rendered in reports and
dashboards.
Allows the use of non-certified visuals from AppSource. That's useful when
you've set the tenant setting to block uncertified visuals, yet a specific non-
certified visual has been validated and approved for use in the organization.
Potential disadvantages:
Organizational visuals need to be managed centrally by a Fabric
administrator.
Centralization correlates to reduced user flexibility and the potential for
delays in updating the version of a visual.
Some features aren't available when a visual isn't certified (which requires
importing from AppSource).

) Important

If your organization is highly concerned about data privacy and data leakage,
consider governing all custom visuals through the organizational visuals repository.

 Tip

How you distribute custom visuals is a governance consideration. We recommend that you carefully evaluate the features of each visual, considering its cost and
support model so you can sufficiently support your content creators.

Also, before you approve the use of a new custom visual, it's critical that you
evaluate any security and data privacy risks because:

Visuals execute JavaScript code and have access to the data that they
visualize.
Visuals can transmit data to an external service. For example, a visual may
need to transmit data to an API to run an AI algorithm or to render a map.
Just because a visual transmits data to an external service, it doesn't mean it's
untrustworthy. A visual that transmits data can't be certified.

Group policy settings


Group policy provides centralized management and configuration of operating systems,
applications, and user settings of Windows machines and the network environment. It
helps IT roll out and manage consistent user accounts and machine settings. For Power
BI Desktop, the most common use of group policy is to manage custom visuals
(described in the previous section).

You can specify whether uncertified visuals are allowed or blocked in Power BI Desktop.
To ensure users have a consistent experience in both Power BI Desktop and the Power BI
service, it's important that custom visuals be managed consistently in two places.
Tenant setting: The Add and use certified visuals only (block uncertified) tenant
setting allows or blocks using custom visuals when users create or edit reports in
the Power BI service.

Group policy: The group policy setting controls the use of custom visuals when
users create or edit reports in Power BI Desktop. If a content creator spent
considerable time creating content in Power BI Desktop that can't be displayed in
the Power BI service (due to a misaligned tenant setting), it would result in a
significant amount of user frustration. That's why it's important to keep them both
aligned.

You can also use group policy to specify whether data exports are allowed or blocked
from custom visuals.

Registry settings
The Windows operating system stores machine information, settings, and options in the
Windows registry. For Power BI Desktop, registry settings can be set to customize user
machines. Registry settings can be updated by group policy, which helps IT set up
default settings that are consistent for all users (or groups of users).

Here are several common uses of registry settings related to Power BI Desktop.

Disable notifications that a software update is available. That's useful when you're
certain that IT will obtain the Power BI Desktop update, perform validations, and
then push updates to user devices through their normal process.
Set the global privacy level. It's wise to set this setting to Organizational as the
default because it can help to avoid data privacy violations when different data
sources are merged.
Disable the Power BI Desktop sign-in form. Disabling the form is useful when
organizational machines are automatically signed in. In this case, the user doesn't
ever need to be prompted.
Tune Query Editor performance. This setting is useful when you need to influence
query execution behavior by changing defaults.
Disable the external tools ribbon tab. You might disable the ribbon tab when you
know you can't approve or support the use of external tools.
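As an example of how such a setting might be applied outside of group policy, the following Python sketch writes a machine-wide registry value that disables the update notification. Treat it as a sketch only: the key path and value name shown are assumptions that you should verify against current Microsoft documentation for Power BI Desktop, and writing to HKEY_LOCAL_MACHINE requires an elevated session.

```python
# Minimal sketch: set a machine-wide Power BI Desktop registry value with Python.
# In practice, these values are usually pushed through group policy or an
# IT-managed deployment tool rather than a script. The key path and value name
# below are assumptions to verify against current Microsoft documentation.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Microsoft Power BI Desktop"  # verify before use
VALUE_NAME = "DisableUpdateNotification"                     # verify before use

# Requires an elevated (administrator) session to write under HKEY_LOCAL_MACHINE.
with winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_WRITE
) as key:
    # 1 disables the in-product notification that a newer version is available.
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)
```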

 Tip

Usually, the goal isn't to significantly limit what users can do with tools. Rather, it's
about improving the user experience and reducing support needs.
Mobile device management
Many users like to interact with Power BI content on a mobile device, such as a tablet or
a phone, whether they're at home or traveling. The Power BI mobile apps for iOS,
Android, and Windows are primarily designed for smaller form factors and touch
screens. They make it easier to interact with content that's been published to the Power
BI service or Power BI Report Server.

You can specify app protection policies and device protection policies for managed and
unmanaged devices by using Microsoft Intune. Intune is a software service that provides
mobile device and application management, and it supports mobile application
management (MAM) policies. Policies can be set at various levels of protection.

Optionally, a mobile device management (MDM) solution from Microsoft 365, or a third
party, may also be used to customize the behavior of Power BI mobile apps. The Power
BI app for Windows also supports Windows Information Protection (WIP).

Here are several ways that you might choose to use MAM and MDM policies.

Specify data protection settings.


Encrypt application data when the app isn't in use.
Selectively wipe data from a device if it gets lost.
Prevent saving of data to a personal storage location.
Restrict actions to cut, copy, and paste.
Prevent printing of organizational data.
Require biometric data, or an access PIN, to open the mobile app.
Specify the default behavior when a user selects or taps in a mobile app.

For more information about securing devices and data, see the Power BI security
whitepaper.

Checklist - When managing devices, key decisions and actions include:

" Determine how Power BI Desktop will be updated: Consider how to install Power
BI Desktop (and other client tools). Whenever possible, ensure that updates are
automatically installed.
" Identify the necessary client tool prerequisites: Ensure that all prerequisite
software and packages are installed and updated regularly.
" Identify the necessary data connectivity components: Ensure that all drivers,
connectors, and providers that are required for data connectivity are installed and
updated regularly.
" Determine how to handle custom visuals: Decide how custom visuals will be
handled from AppSource and other sources. Set the Allow visuals created from the
Power BI SDK tenant setting and the Add and use certified visuals only tenant setting
to align with your decisions. Consider creating a process that allows users to
request a new custom visual.
" Set up group policy settings: Set up group policy to ensure that custom visuals are
managed the same way in Power BI Desktop as they are in the Power BI service.
" Set up registry settings: Set up the registry settings to customize user machines,
when applicable.
" Investigate mobile device management: Consider using app protection policies
and device protection policies for mobile devices, when appropriate.

Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.
Power BI implementation planning:
Workspaces
Article • 08/23/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This workspaces article introduces the Fabric workspace planning articles, which have an
emphasis on the Power BI experience. These articles are targeted at multiple audiences:

Fabric administrators: The administrators who are responsible for overseeing Power BI in the organization.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing analytics throughout the organization.
Content creators and owners: Self-service creators who need to create, publish,
and manage content in workspaces.

Proper workspace planning is an integral part of making an implementation successful.


Inadequate workspace planning can lead to less user flexibility and inferior workarounds
for organizing and securing content.

Fundamentally, a workspace is a container in the Fabric portal for storing and securing
content. Primarily, workspaces are designed for content creation and collaboration.

7 Note

The concept of a workspace originated in Power BI. With Fabric, the purpose of a
workspace has become broader. The result is that a workspace can now contain
items from one or more different Fabric experiences (also known as workloads).
Even though the content scope has become broader than Power BI, most of the
workspace planning activities described in these articles can be applied to Fabric
workspace planning.

The workspace planning content is organized into the following articles:

Tenant-level workspace planning: Strategic decisions and actions that affect all
workspaces in the tenant.
Workspace-level planning: Tactical decisions and actions to take for each
workspace.

Next steps
In the next article in this series, learn about tenant-level workspace planning.
Power BI implementation planning:
Tenant-level workspace planning
Article • 08/23/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article covers tenant-level Fabric workspace planning, with an emphasis on the
Power BI experience. It's primarily targeted at:

Fabric administrators: The administrators who are responsible for overseeing Fabric in the organization.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing analytics and supporting self-service users throughout the
organization.

Secondarily, this article may also be of interest to self-service creators who need to
create, publish, and manage content in workspaces.

Because workspaces can be used in different ways, most tactical decisions will be made
at the workspace level (described in the next article). However, there are some strategic
planning decisions to make at the tenant level, too.

We recommend that you make the tenant-level workspace decisions as early as possible
because they'll affect everything else. Also, it's easier to make individual workspace
decisions when you have clarity on your overall workspace goals and objectives.

7 Note

The concept of a workspace originated in Power BI. With Fabric, the purpose of a
workspace has become broader. The result is that a workspace can now contain
items from one or more different Fabric experiences (also known as workloads).
Even though the content scope has become broader than Power BI, most of the
workspace planning activities described in these articles can be applied to Fabric
workspace planning.
Workspace creation permissions
The decision on who is allowed to create workspaces in the Power BI service is a data
culture and governance decision. Generally, there are two ways to approach this
decision:

All (or most) users are permitted to create new workspaces: This approach
usually aligns with existing decisions for other applications. For example, when
users are permitted to create their own SharePoint sites or Teams channels, it
makes sense that Fabric adopts the same policy.
Limited to a selective set of users who are permitted to create new workspaces:
This approach usually indicates a governance plan is in place or is planned.
Managing this process can be fully centralized (for instance, only IT is permitted to
create a workspace). A more flexible and practical approach is when it's a
combination of centralized and decentralized individuals. In this case, certain
satellite members of the Center of Excellence (COE), champions, or trusted users
have been trained to create and manage workspaces on behalf of their business
unit.

You should set up the Create workspaces tenant setting in the Fabric admin portal
according to your decision on who is allowed to create workspaces.

Checklist - When considering permissions for who can create workspaces, key decisions
and actions include:

" Determine and validate user needs: Schedule collaborative discussions with


relevant stakeholders and interested parties to learn how users currently work. The
goal is to ensure that you have a clear understanding of user needs.
" Decide who is allowed to create workspaces: Determine whether all users, only a
centralized team, or certain centralized and decentralized users will be permitted to
create a new workspace. Ensure this decision is purposefully aligned with your data
culture goals. Be sure to obtain approval from your executive sponsor.
" Create a security group for who is permitted to create workspaces: If a subset of
users will be permitted to create workspaces, a security group is needed. Name the
group clearly, like Fabric workspace creators. Add members who are permitted to
create workspaces to this security group.
" Update the tenant setting: Add the new security group to the Create workspaces
tenant setting in the admin portal. In addition to the Fabric workspace creators
group, other groups that might also be allowed for this tenant setting are the COE,
support, and Fabric administrators.

Workspace naming conventions


Workspace naming conventions are an agreed-upon pattern for how workspaces are
named. Usually, naming conventions are more of a requirement than a suggestion.

It can be difficult to strictly enforce naming conventions when many users possess the
permission to create workspaces. You can mitigate this concern with user education and
training. You can also conduct an auditing process to find workspaces that don't
conform to the naming conventions.
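One way to implement that auditing process is with the Power BI admin REST API, which can return all workspaces in the tenant. The following Python sketch is illustrative: it assumes an access token with admin API permissions, and the prefix pattern is a hypothetical example standing in for your own naming convention.

```python
import re
import requests

# Minimal sketch: flag workspaces whose names don't match the naming convention.
# Assumes an access token with Power BI admin API permissions; paging (the $skip
# parameter) and error handling are omitted for brevity.
ACCESS_TOKEN = "<admin-access-token>"  # placeholder

# Hypothetical convention: an uppercase area prefix, a hyphen, then a description,
# optionally ending with a [Dev] or [Test] suffix.
NAME_PATTERN = re.compile(r"^[A-Z]{2,5}-.+?(\s\[(Dev|Test)\])?$")

response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/admin/groups",
    params={"$top": 5000},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()

for workspace in response.json().get("value", []):
    name = workspace.get("name", "")
    # You might also want to exclude personal workspaces here by inspecting the
    # workspace type returned in the response.
    if not NAME_PATTERN.match(name):
        print(f"Review naming: {name} ({workspace.get('id')})")
```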

The workspace name can convey additional information about the workspace, including:

Purpose: A workspace name should always include a description of its content. For
example, Sales Quarterly Bonus Tracking.
Item types: A workspace name can include a reference to the types of items it
contains. For example, use Sales Data to indicate the workspace stores items like a
lakehouse or datasets. Sales Analytics could indicate that the workspace stores
analytical reports and dashboards.
Stage (environment): A workspace name might include its stage. For example, it's
common to have separate workspaces (development, test, and production) for
lifecycle management.
Ownership and responsibility: A workspace name might include an indication of
who's responsible for managing the content. For example, use of an SLS prefix or
suffix can indicate that the sales team owns and manages the content.

 Tip

To keep workspace names short, you can include additional detail in the workspace
description. However, make sure that the most relevant information is included in
the workspace name, particularly if you anticipate users will search for workspaces.
You can also use a workspace image to augment the workspace name. These
considerations are described further in the workspace settings section in the next
article.

Having consistent workspace names helps everyone. The user experience is improved
because users can find content more easily. Also, administrators can oversee the content
more easily when predictable naming conventions are used.
We recommend that you include the workspace naming conventions in your centralized
portal and training materials.

The following list describes more considerations related to workspace naming.

Use short yet descriptive names: The workspace name should accurately reflect its
contents, with the most important part at the beginning of the name. In the Fabric
portal, long workspace names may become truncated in user interfaces, requiring
the user to hover the cursor over the workspace name to reveal the full name in a
tooltip. Here's an example of a short yet descriptive name: Quarterly Financials.
Use a standard prefix: A standard prefix can arrange similar workspaces together
when sorted. For example: FIN-Quarterly Financials.
Use a standard suffix: You can add a suffix for additional information, such as
when you use different workspaces for development, test, and production. We
recommend appending [Dev] or [Test] suffixes but leaving production as a user-
friendly name without a suffix. For example: FIN-Quarterly Financials [Dev].
Be consistent with the Power BI app name: The workspace name and its Power BI
app can be different, particularly if it improves usability or understandability for
app consumers. We recommend keeping the names similar to avoid confusion.
Omit unnecessary words: The following words may be redundant, so avoid them
in your workspace names:
The word workspace.
The words Fabric or Power BI. Many Fabric workspaces contain items from
various workloads. However, you might create a workspace that's intended to
target only a specific workload (such as Power BI, Data Factory, or Synapse Data
Engineering). In that case, you might choose a short suffix so that the
workspace purpose is made clear.
The name of the organization. However, when the primary audience is external
users, including the organization's name can be helpful.

7 Note

We recommend that you notify users when a workspace name will change. For the
most part, it's safe to rename a workspace in the Fabric portal because the GroupID,
which is the unique identifier of a workspace, doesn't change (it's found in the
workspace URL). However, XMLA connections are impacted because they connect
by using the workspace name instead of the GroupID.
Checklist - When considering creating workspace naming conventions, key decisions
and actions include:

" Determine requirements or preferences for workspace names: Consider how you


want to name your workspaces. Decide whether you want strict naming convention
requirements or more flexible requirements guided by suggestions and examples.
" Review existing workspace names: Update existing workspace names as
appropriate so that they're good examples for users to follow. When users see
existing workspaces being renamed, they'll interpret that as an implied standard to
adopt.
" Create documentation for workspace naming conventions: Provide reference
documentation about workspace naming convention requirements and
preferences. Be sure to include examples that show the correct use of acronyms,
prefixes, and suffixes. Make the information available in your centralized portal and
training materials.

Workspace creation process


If you've decided to limit who can create workspaces, then the broader user population
will need to know what the process is to request a new workspace. In this case, it's
important to establish a request process that's easy and convenient for users to find and
follow.

It's also critical to respond quickly to a request for a new workspace. A service level
agreement (SLA) of 2-4 hours is ideal. If the process to request a new workspace is too
slow or burdensome, people will use the workspaces they have so they can keep
moving. If they elect to skip creating a new workspace, what they use instead may be
suboptimal. They might choose to reuse an existing workspace that isn't well-suited to
the new content, or they might share content from their personal workspace.

 Tip

The goal when creating a new process is to make it easy for people to comply with
the process. As with all data governance decisions, the point is to make it easy for
users to do the right thing.

The following list describes the information to collect in a request for a new workspace, together with an example and the validation that's required.

Workspace name. Example: SLS-Field Sales Analytics. Validation required: Does the name adhere to naming conventions? Does another workspace with the same name exist?

Stages needed. Example: SLS-Field Sales Analytics [Dev], SLS-Field Sales Analytics [Test], and SLS-Field Sales Analytics. Validation required: Are multiple workspaces necessary to properly support the content? If so, should a deployment pipeline be created too?

Description. Example: Customer sales and order history for monthly, quarterly, and yearly analysis. Validation required: Is there an expectation that sensitive data, or regulated data, will be stored? If so, will that affect how the workspace is governed?

Target audience. Example: Global field sales organization. Validation required: How broad is the content delivery scope? How will that affect how the workspace is governed?

License mode assigned to the workspace. Example: A Fabric capacity for the sales team is needed because a large number of the salespeople are viewers only and they have a free license. Validation required: What level of Fabric capacity is required?

Data storage requirements. Example: Data residency in Canada. Validation required: Are there data residency needs that will require Multi-Geo? What are the expected data volumes?

Workspace administrators. Example: FabricContentAdmins-FieldSalesAnalytics. Validation required: Is the administrator (preferably) a group? Are there at least two administrators?

Person submitting the request. Example: [email protected]. Validation required: Does the person submitting the request work in a role or line of business related to the information provided?
The above list includes the minimum amount of information required to set up a
workspace. However, it doesn't include all possible configurations. In most cases, a
workspace administrator will take responsibility for completing the setup once the
workspace is created. For more information, see the Workspace-level settings article.

There are many technology options you can use to create an online form for the
workspace creation request. Consider using Microsoft Power Apps, which is a low-code
software option that's ideal for building simple web-based forms and applications. The
technology you choose to use for creating a web-based form depends on who will be
responsible for creating and maintaining the form.

 Tip

To improve efficiency and accuracy, consider automating the process by using the
Power BI REST API to programmatically create or update a workspace. In this case,
we recommend including review and approval processes rather than automatically
processing each request.
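As a starting point for that kind of automation, the following Python sketch creates a workspace and then adds a security group as a workspace admin by calling the Power BI REST API. It assumes an access token with the appropriate permissions and uses placeholder names; in a real process, it would run only after a request has been reviewed and approved.

```python
import requests

# Minimal sketch: create a workspace and assign a security group as its admin
# after a request has been approved. Assumes an access token with the required
# Power BI REST API permissions; error handling is omitted for brevity.
ACCESS_TOKEN = "<access-token>"  # placeholder
BASE_URL = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Create the workspace (the name follows a hypothetical naming convention).
create_response = requests.post(
    f"{BASE_URL}/groups",
    json={"name": "SLS-Field Sales Analytics"},
    headers=headers,
)
create_response.raise_for_status()
workspace_id = create_response.json()["id"]

# 2. Add a security group as a workspace admin (identifier is the group's
#    Azure AD object ID; the value here is a placeholder).
add_user_response = requests.post(
    f"{BASE_URL}/groups/{workspace_id}/users",
    json={
        "identifier": "<security-group-object-id>",
        "principalType": "Group",
        "groupUserAccessRight": "Admin",
    },
    headers=headers,
)
add_user_response.raise_for_status()
print(f"Created workspace {workspace_id}")
```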

Checklist - When considering the process to request a new workspace, key decisions
and actions include:

" Establish a process for requesting a new workspace: Decide what the specific
process is for requesting a new workspace. Consider the information you'll need,
how to capture the information, and who will process the request.
" Create a standard form for requesting a new workspace: Decide what information
will be included on the form for a new workspace. Consider building a Power Apps
app to collect the information from the user. Ensure links to the form are broadly
available and easy to find in your centralized portal and other common locations.
Include a link to the form in ongoing communications too.
" Decide who will respond to submitted requests, and how quickly: Determine
who'll process requests. Consider what the expected response time is for handling a
request for a new workspace. Verify that you can handle requests quickly so that
self-service users don't experience delays.
" Conduct a knowledge transfer session: If another team will be supporting the
workspace request process, conduct a knowledge transfer session with them so
they have all the information they need.
" Create documentation for how to approve or deny a request: Create
documentation about how to approve a request, targeted at those who will review
or process requests. Also include reasons why a request might be denied, and what
action should be taken.
" Create documentation for how to request a workspace: Create documentation
about how to request a new workspace, targeted at users who can't create their
own workspaces. Include what information is required, and expectations for a
response. Ensure that the information is available in your centralized portal and
training materials.

Workspace governance level


Not all workspaces need the same level of oversight. Certain workspaces might be
considered governed. A governed workspace means that there are more requirements
and expectations for its content. Some organizations use the term managed instead of
governed.

There are four key decision criteria to determine the level of governance:

Who owns and manages the BI content?


What is the scope for delivery of the BI content?
What is the data subject area?
Is the data, and/or the BI solution, considered critical?

7 Note

For more information about the four key decision criteria, see the governance
article that forms part of the Power BI adoption roadmap.

You might start out with two levels of workspaces: governed and ungoverned. We
recommend that you keep the governance levels as simple as possible. However,
depending on your specific circumstances, you might need to subdivide the governed
classification. For example, critical content that's managed by the enterprise BI team
might have one set of governance requirements. Whereas critical content that's owned
and managed directly by business units may be subject to a slightly different set of
requirements. In some cases, decisions will be tailored to individual business units.

The following list describes some of the most common requirements when a workspace is considered governed, grouped by category.

Ownership and support: Ownership is assigned with clear responsibilities for the technical content owner and/or the subject matter expert. A user support team/person is assigned, and users understand how to request help or submit issues. A mechanism is in place for user feedback, questions, and enhancement requests. A communication plan exists to announce important changes to content in the workspace.

Workspace setup: The workspace is well-organized with a well-defined purpose. A specific naming convention is used. Workspace description, image, and contacts are required.

Accuracy: All content is certified. Data validations are automated so that content owners become aware of data quality issues on a timely basis.

Distribution: A Power BI app is used for distributing reports and dashboards.

Security and data protection: Security groups are used (instead of individual accounts) for managing workspace roles. Sensitivity labels are used for information protection. Only sanctioned (or approved) data sources are permitted. All source files reside in a secure location that's backed up.

Change management: Separate development, test, and production workspaces are used. Source control (such as Git integration) is used for all Power BI Desktop files and items in the Fabric portal. Versioning or source control is used for all data source files. Lifecycle management and change management processes, including deployment pipelines and/or DevOps processes, are followed.

Capacity: The workspace is assigned to an appropriate Fabric capacity level. The Fabric capacity is managed and monitored.

Gateway: A data gateway in standard mode (non-personal) is used. All gateway data source credentials use approved credentials.

Auditing and monitoring: Active auditing and monitoring processes are in place for tracking adoption, usage patterns, and performance.

 Tip

Governance requirements usually aren't optional. For this reason, timely auditing is
important, and enforcement becomes necessary in certain situations. For example,
if governed workspaces require all files be in a secure location and an unapproved
file location is detected during auditing, then action should be taken to update the
file location.

Checklist - When considering the workspace governance level, key decisions and actions
include:

" Decide on the workspace governance levels: Determine the levels of governance


that you'll need. Try to keep it as simple as possible.
" Decide on the criteria for how to classify a workspace: Determine what the
decision criteria will be for classifying workspaces to a specific governance level.
" Decide what the workspace governance requirements are: For each governance
level, determine what the specific requirements will be.
" Decide how to designate the workspace governance level: Find the simplest way
to identify the governance level for a workspace. You could record it as part of its
name, part of its description, or stored elsewhere (for example, a SharePoint list that
contains more information about each workspace).
" Create documentation for workspace governance requirements: Create useful
documentation targeted at content creators that includes what their responsibilities
are for managing content in a governed workspace. Make the information available
in your centralized portal and training materials.
" Create workspace auditing processes: For workspaces that are considered
governed, create an auditing process to identify areas of noncompliance with the
most important requirements. Ensure that someone is responsible for contacting
content owners to address compliance issues.
Next steps
In the next article in this series, learn about workspace-level planning.
Power BI implementation planning:
Workspace-level workspace planning
Article • 08/23/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article covers Fabric workspace-level planning, with an emphasis on the Power BI
experience. It's primarily targeted at:

Fabric administrators: The administrators who are responsible for overseeing Fabric in the organization.
Center of Excellence, IT, and BI team: The teams who are also responsible for
overseeing analytics and supporting self-service users throughout the
organization.
Content creators and owners: Self-service creators who need to create, publish,
and manage content in workspaces.

To use workspaces effectively, there are many tactical decisions to be made. Whenever
possible, individual workspace-level decisions should align with your tenant-level
decisions.

7 Note

The concept of a workspace originated in Power BI. With Fabric, the purpose of a
workspace has become broader. The result is that a workspace can now contain
items from one or more different Fabric experiences (also known as workloads).
Even though the content scope has become broader than Power BI, most of the
workspace planning activities described in these articles can be applied to Fabric
workspace planning.

Workspace purpose
When planning for workspaces, it's important to consider not only the type of content it
will store, but also the activities that the workspace is intended to support.
Consider the following two examples of finance-related workspaces. Although they're
both dedicated to the same team, each workspace serves a different purpose:

Financial month-end workspace: The Financial month-end workspace contains reconciliation and month-end closing reports. This workspace is considered an
informal workspace to support collaborative efforts. A Power BI app isn't necessary
for content viewers because the primary use of this workspace is collaboration by a
small group of people who work closely together. Most team members have
permission to edit content in this workspace.
Financial reporting workspace: The Financial reporting workspace contains the
finalized, presentation-level reports. This workspace contains content that's
broadly distributed throughout the organization to many viewers (including
executives) by using a Power BI app. The workspace is closely governed.

With these two examples in mind, consider two specific aspects of workspace purpose:
intent for collaboration, and intent for viewing.

Intent for collaboration


The primary objective of a workspace in the Fabric portal is to facilitate collaboration
among multiple people. There are many ways that collaboration can happen in a
workspace:

Team-based development: Multiple people can work together to build, test, and
publish content. One user may work on the design of a lakehouse. Another user
may work on the design of the dataset, while other users may focus on building
reports.
Testing and validations: Users may need to perform data validations for new
content. Subject matter experts from the business unit may need to perform user
acceptance testing (UAT), or a data quality team may need to validate the accuracy
of the dataset.
Enhancements: Stakeholders and consumers of the content may suggest
enhancements to the content as circumstances change.
Ownership transfer: Another person or team may take over responsibility for
content that was created by someone else.

One of the key areas of the Power BI adoption roadmap is content ownership and
management. The type of collaboration that will occur in a workspace will differ based
on the approach used for content ownership and management:

Business-led self-service BI: Content is owned and managed by the content creators within a business unit or department. In this scenario, most collaboration
in the workspace occurs among users within that business unit.
Managed self-service BI: Data is owned and managed by a centralized team,
whereas various content creators from business units take responsibility for reports
and dashboards. In this scenario, it's highly likely that multiple workspaces will be
needed to securely facilitate collaboration by multiple teams of people.
Enterprise BI: Content is owned and managed by a centralized team, such as IT,
enterprise BI, or the Center of Excellence (COE). In this scenario, collaboration
efforts in the workspace are occurring among users in the centralized team.

Checklist - When considering your intentions for collaboration in a workspace, key decisions and actions include:

" Consider expectations for collaboration: Determine how workspace collaboration


needs to occur and who's involved within a single team or across organizational
boundaries.
" Consider expectations for content ownership and management: Think about how
the different content ownership and management approaches (business-led self-
service BI, managed self-service BI, and enterprise BI) will influence how you design
and use workspaces.

 Tip

When your needs can't be met by a single approach, be prepared to be flexible and
use a different content ownership and management strategy for different
workspaces. The strategy can be based on the scenario as well as the team
members that are involved.

Intent for content viewing


The secondary objective for a workspace is to distribute content to consumers who need
to view the content. For content viewers, the primary Fabric workload is Power BI.

There are several different ways to approach content distribution in the Power BI service:

Reports can be viewed by using a Power BI app: Content stored in a non-personal workspace can be published to a Power BI app. A Power BI app is a more user-
friendly experience than viewing reports directly in a workspace. For this reason,
using a Power BI app is often the best choice for distributing content to
consumers. Audiences for a Power BI app are very flexible. However, sometimes
the goals for how you want to distribute content with an app are a factor in
determining how to organize content in or across workspaces. For more
information about securing Power BI apps, see Report consumer security planning.
Reports can be viewed directly in the workspace: This approach is often
appropriate for informal, collaborative workspaces. Workspace roles define who
can view or edit the content contained in a workspace. For more information about
workspace roles, see Content creator security planning.
Reports can be shared: Use of per-item permissions (links or direct access) is
useful when there's a need to provide read-only access to a single item within a
workspace. We recommend that you use app permissions and workspace roles
more frequently than sharing because they're easier to maintain. For more
information, see Report consumer security planning.
Reports can be embedded in another application and viewed there: Sometimes
the intention is for consumers to view Power BI content that's embedded in
another application. Embedding content is useful when it makes sense for the user
to remain in the application to increase efficiency and stay within its workflow.

Another key area of the Power BI adoption roadmap is content delivery scope. The ways
that a workspace will support content distribution will differ based on the content
delivery scope:

Personal BI: Content is intended for use by the creator. Since sharing content with
others isn't an objective, personal BI is done within a personal workspace
(described in the next topic).
Team BI: Content is shared with a relatively small number of colleagues who work
closely together. In this scenario, most workspaces are informal, collaborative
workspaces.
Departmental BI: Content is distributed to many consumers who belong to a large
department or business unit. In this scenario, the workspace is primarily for
collaboration efforts. In departmental BI scenarios, content is commonly viewed in
a Power BI app (instead of directly viewed in the workspace).
Enterprise BI: Content is delivered broadly across organizational boundaries to the
largest number of target consumers. In this scenario, the workspace is primarily for
collaboration efforts. For enterprise BI scenarios, content is commonly viewed in a
Power BI app (instead of directly viewed in the workspace).

 Tip

When you plan your workspaces, consider the needs of the audience when
determining the workspace license mode. The type of license that's assigned to the
workspace will impact the features that are available, including who can view or
manage workspace content.

Checklist - When considering your expectations for how workspace content will be
viewed, key decisions and actions include:

" Consider expectations for viewing content: Determine how you expect consumers
to view content that's been published to the workspace. Consider whether viewing
will happen directly in the workspace or by using a different method.
" Determine who the content will be delivered to: Consider who the target audience
is. Also consider the workspace license mode, especially when you expect a
significant number of content viewers.
" Evaluate needs for a Power BI app: Consider what the workspace purpose is as it
relates to the content distribution requirements. When a Power BI app is required, it
can influence decisions about creating a workspace.
" Consider expectations for content delivery scope: Consider how the different
content delivery scopes (personal BI, team BI, departmental BI, and enterprise BI)
will influence how you design and use workspaces.

 Tip

Be prepared to be flexible. You can use a different content viewing strategy for
workspaces based on the scenario as well as the team members that are involved.
Also, don't be afraid to use different content delivery scope approaches for
workspaces when it can be justified.

Appropriate use of personal workspaces


There are two types of workspaces:

Personal workspaces: Every user has a personal workspace. A personal workspace may be used for publishing certain types of content to the Fabric portal. Its primary
purpose is to support personal BI usage scenarios.
Workspaces: The primary purpose of a workspace is to support collaboration
among multiple users. Secondarily, a workspace can also be used for viewing
content.
Using a personal workspace for anything other than learning personal BI, temporary
content, or testing purposes can be risky because content in a personal workspace is
managed and maintained by one person. Further, a personal workspace doesn't support
collaboration with others.

To allow the creation of any type of Fabric item (like a lakehouse or warehouse), a
workspace must be added to a Fabric capacity. That's true for both standard workspaces
and personal workspaces. Therefore, you can govern who's able to create certain
types of items within a personal workspace by way of its capacity assignment.

A personal workspace is limited in its options to share content with others. You can't
publish a Power BI app from a personal workspace (and Power BI apps are an important
mechanism for distributing content to the organization). Per-item permissions (links or
direct access) are the only way to share personal workspace content with others.
Therefore, extensive use of per-item permissions involves more effort and increases the
risk of error. For more information, see Report consumer security planning.

Checklist - When considering your expectations for how personal workspaces should be
used, key decisions and actions include:

" Understand current use of personal workspaces: Have conversations with your users and review the activity data to ensure you understand what users are doing with their personal workspaces (a scripted example follows this checklist).
" Decide how personal workspaces should be used: Decide how personal
workspaces should (and should not) be used in your organization. Focus on
balancing risk and ease of use with needs for content collaboration and viewing.
" Relocate personal workspace content when appropriate: For critical content, move
content from personal workspaces to standard workspaces when appropriate.
" Create and publish documentation about personal workspaces: Create useful
documentation or FAQs for your users about how to effectively use personal
workspaces. Make the information available in your centralized portal and training
materials.
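
As an example of the first checklist item, the following Python sketch inventories personal workspaces by calling the Power BI admin REST API (Admin - Groups GetGroupsAsAdmin). It's a minimal sketch, assuming you've already acquired an access token for an account that's permitted to call the admin APIs; the token value, the page size, and the type filter value are assumptions to verify against the API reference and adapt to your environment.

```python
import requests

# Assumptions: ACCESS_TOKEN is a valid Azure AD token for the Power BI admin
# APIs, and the caller holds the Fabric (Power BI) administrator role.
ACCESS_TOKEN = "<access-token>"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# List personal workspaces and expand their reports to see what content users
# are publishing to them. 'PersonalGroup' is the type value the admin API
# reports for personal workspaces; verify it against the API reference.
params = {
    "$filter": "type eq 'PersonalGroup'",
    "$expand": "reports",
    "$top": 500,  # $top is required by this API; page through larger tenants
}

response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/admin/groups",
    headers=headers,
    params=params,
)
response.raise_for_status()

for workspace in response.json().get("value", []):
    report_count = len(workspace.get("reports", []))
    print(f"{workspace['name']}: {report_count} report(s)")
```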

7 Note

For more information, see these Power BI adoption roadmap topics: centralized
portal, training, and documentation.
Workspace ownership
One of the most important things to consider when planning workspaces is determining
the ownership and stewardship roles and responsibilities. The goal is to have clarity on
exactly who is accountable for creating, maintaining, publishing, securing, and
supporting the content in each workspace.

Clarity on ownership is particularly relevant when responsibilities for creating and managing data are decentralized—or distributed—among departments and business
units. This concept is also sometimes referred to as a data mesh architecture. For more
information about data mesh, see What is data mesh?.

In Fabric, decentralized or distributed ownership is enabled through workspaces. Different areas of the organization can work independently, while still contributing to
the same underlying data structure in OneLake. Each workspace can have its own
administrator, access control, and capacity assignment (for billing, geographic data
location, and performance monitoring).

 Tip

An additional way to support workspace ownership in Fabric is with domains, which are described later in this article.

When the intent for collaboration involves decentralization and multiple teams beyond a
single business unit, it can add complexity for managing workspaces. Often, it's helpful
to create separate workspaces to clearly delineate which team is responsible for which
content. Use of multiple workspaces allows you to be specific as to ownership and
management responsibilities, and it can help you to set security according to the
principle of least privilege. For more security considerations, see Content creator
security planning.

 Tip

Your decisions related to accountability and responsibility should correlate directly with your actions related to defining workspace access, which is described later in
this article.
Checklist - When considering workspace ownership responsibilities, key decisions and
actions include:

" Gain a full understanding of how content ownership works: Ensure that you
deeply understand how content ownership and management is happening
throughout the organization. Recognize that there likely won't be a one-size-fits-all
approach to apply uniformly across the entire organization. Understand
decentralized or distributed ownership needs.
" Define and document roles and responsibilities: Ensure that you define and
document clear roles and responsibilities for people who collaborate in workspaces.
Make this information available in onboarding activities, training materials, and your
centralized portal.
" Create a responsibility matrix: Map out who is expected to handle each function
when creating, maintaining, publishing, securing, and supporting content. Have this
information ready when you start planning for workspace access roles.
" Consider co-ownership or multi-team ownership scenarios: Identify when a
scenario exists where it would be helpful to separate out workspaces so that
responsibilities are clear.
" Create workspace management documentation: Educate workspace
administrators and members about how to manage workspace settings and access.
Include responsibilities for workspace administrators, members, and contributors.
Make the information available in your centralized portal and training materials.

Workspace organization
How to organize workspaces is one of the most important aspects of workspace
planning.

Different business units and departments may use workspaces slightly differently
depending on their collaboration requirements. When you need a new workspace, we
recommend that you consider the factors described in this section.

Workspace subject and scope


The following options present some suggestions about how you can organize
workspaces by subject and scope.

In some cases, you may already have some useful Azure Active Directory (Azure AD)
groups in place. You can then use them to manage access to resources for the defined
subject area and scope. However, you might need to create some new groups to suit
this purpose. See the workspace access section below for considerations.
Option 1: Workspace per subject area or project
Creating a workspace for each subject area or project allows you to focus on its
purpose. It's a balanced approach between overly broad and overly narrow workspaces.

Examples: Quarterly Financials or Product Launch Analysis

The advantages of option 1 include:

Managing user access for who is allowed to edit or view content is more
straightforward since it's scoped per subject area.
When content will be accessed by users across organizational boundaries,
structuring workspaces by subject area is more flexible and easier to manage
(compared to option 2 discussed next).
Using a scope per subject area is a good compromise between workspaces that
contain too many items and workspaces that contain too few items.

A disadvantage of option 1 is that depending on how narrow or wide workspaces are defined, there's still some risk that many workspaces will be created. Finding content can
be challenging for users when content is spread across many workspaces.

 Tip

When well-planned and managed, a workspace per subject area or project usually
results in a manageable number of workspaces.

Option 2: Workspace per department or team


Creating a workspace per department or team (or business unit) is a common approach.
In fact, alignment with the organizational chart is the most common way people start
with workspace planning. However, it's not ideal for all scenarios.

Examples: Finance Department or Sales Team Analytics

The advantages of option 2 include:

Getting started with planning is simple. All content needed by the people that
work in that department will reside in one workspace.
It's easy for users to know which workspace to use since all of their content is
published to the workspace associated with their department or team.
Managing security roles can be straightforward, especially when Azure AD groups
are assigned to workspace roles (which is a best practice).
The disadvantages of option 2 include:

The result is often a broad-scoped workspace that contains many items. A broadly
defined workspace scope can make it challenging for users to locate specific items.
Because there's a one-to-one relationship between a workspace and a Power BI
app, a broadly defined workspace can result in apps for users that contain lots of
content. This issue can be mitigated by excluding certain workspace items from the
app, and with good design of the app navigation experience.
When users from other departments need to view specific workspace items,
managing permissions can become more complex. There's a risk that people will
assume that everything in the departmental workspace is for their eyes only.
There's also a risk that the sharing of individual items will become overused in
order to accomplish granular viewing permissions.
If some content creators need permission to edit some items (but not all items), it's
not possible to set those permissions in a single workspace. That's because
workspace roles, which determine edit or view permissions, are defined at the
workspace level.
When you have a large number of workspace items, it often means you'll need to
use strict naming conventions for items so that users are able to find what they
need.
Broad workspaces with many items might run into a technical limitation on the
number of items that can be stored in a workspace.

 Tip

When creating workspaces that align with your organizational chart, you often end
up with fewer workspaces. However, it can result in workspaces that contain a lot of
content. We don't recommend aligning workspaces per department or team when
you expect to have a significant number of items and/or many users.

Option 3: Workspace for a specific report or app

Creating a workspace for each report or type of analysis isn't recommended except for
specific circumstances.

Examples: Daily Sales Summary or Executive Bonuses

Advantages of option 3 include:

The purpose of a narrowly defined workspace is clear.
Ultra-sensitive content can, and often should, be segregated into its own
workspace so that it can be managed and governed explicitly.
Fine-grained workspace permissions are applicable to a few items. This setup is
useful when, for instance, a user is permitted to edit one report but not another.

Disadvantages of option 3 include:

If overused, creating narrowly defined workspaces results in a large number of workspaces.
Having a large number of workspaces to work with involves more effort. While
users can rely on search, finding the right content in the right workspace can be
frustrating.
When a larger number of workspaces exist, there's more work from an auditing
and monitoring perspective.

 Tip

Creating a workspace with a narrow scope, such as an individual report, should be done for specific reasons only. It should be the exception rather than the rule.
Occasionally, separating scorecards into their own workspace is a useful technique.
For example, using a separate workspace is helpful when a scorecard presents goals
that span multiple subject areas. It's also helpful to set up specific permissions for
managing and viewing the scorecard.

Checklist - When considering the subject area and scope of workspace content, key
decisions and actions include:

" Assess how workspaces are currently set up: Review how people currently use
workspaces. Identify what works well and what doesn't work well. Plan for potential
changes and user education opportunities.
" Consider the best workspace scope: Identify how you want people to use
workspaces based on purpose, subject area, scope, and who's responsible for
managing the content.
" Identify where highly sensitive content resides: Determine when creating a specific
workspace for highly sensitive content can be justified.
" Create and publish documentation about using workspaces: Create useful
documentation or FAQs for your users about how to organize and use workspaces.
Make this information available in training materials and your centralized portal.
Workspace item types
Separating data workspaces from reporting workspaces is a common practice for
decoupling data assets from analytical assets.

A data workspace is dedicated to storing and securing data items such as a lakehouse, warehouse, data pipeline, dataflow, or dataset.
A reporting workspace is focused more on the downstream analytical activities. It's
dedicated to storing and securing items such as reports, dashboards, and metrics.
Reporting workspaces primarily (but not necessarily exclusively) include Power BI
content.

 Tip

Each Fabric experience allows you to create various types of items. These items
don't always fit neatly into the concept of what's considered data versus reporting
(or analytical) content. One example is a Fabric notebook that can be used in many
different ways, such as: loading and transforming data in a lakehouse, submitting
Spark SQL queries, or analyzing and visualizing data with PySpark. When the
workspace will contain mixed workloads, we recommend that you focus primarily
on the workspace purpose and ownership of the content as described elsewhere in
this article.

The advantages for separating data workspaces from reporting workspaces include:

Critical organizational data, such as an endorsed lakehouse or dataset, can reside in a specific workspace that's designed to make reusable data available at
enterprise scale. Common examples include:
Report creators can locate and reuse trustworthy shared datasets more easily.
For more information, see the managed self-service BI usage scenario.
Dataset creators can locate trustworthy dataflows or lakehouse tables more
easily. For more information, see the self-service data preparation usage
scenario and the advanced self-service data preparation usage scenario.
Access management can be centralized for critical organizational data. Managing
access separately for the data workspace compared with reporting workspace(s) is
useful when different people are responsible for data and reports. With managed
self-service BI, it's common to have many report creators and fewer data creators.
Limiting who can edit and manage datasets minimizes the risk of unintentional
changes, especially to critical data items that are reused for many purposes or by
many users. Physical separation reduces the chances of inadvertent, or
unapproved, changes. This extra layer of protection is helpful for certified datasets,
which are relied upon for their quality and trustworthiness.
Co-ownership scenarios are clarified. When shared datasets are delivered from a
centralized BI or IT team, while reports are published by self-service content
creators (in business units), it's a good practice to segregate the datasets into a
separate workspace. This approach avoids the ambiguity of co-ownership
scenarios because ownership and responsibility per workspace is more clearly
defined.
Row-level security (RLS) is enforced. When you encourage creators to work in
different workspaces, they won't have unnecessary edit permission to the original
dataset. The advantage is that RLS and/or object-level security (OLS) will be
enforced for content creators (and also content viewers).

The disadvantages for separating data workspaces from reporting workspaces include:

A workspace naming convention is required to be able to distinguish a data workspace from a reporting workspace.
Extra user education is required to ensure that content authors and consumers
know where to publish and find content.
Sometimes it's challenging to clearly delineate the item types that should be
contained within a workspace. Over time, a workspace can end up containing more
types of content than was originally intended.
Use of separate workspaces results in a larger number of workspaces that you
need to manage and audit. As you plan for purpose, scope, and other
considerations (such as the separation of development, test, and production
content) the approach to workspace design can become more complicated.
Extra change management processes may be required to track and prioritize
requested changes to centralized data items, particularly when report creators
have requirements beyond what can be handled by composite models and report-
level measures.

Checklist - When considering the item types to store in a workspace, key decisions and
actions include:

" Determine your objectives for data reuse: Decide how to achieve data reuse as
part of a managed self-service BI strategy.
" Update the tenant setting for who can use datasets across workspaces: Determine
whether this capability can be granted to all users. If you decide to limit who can
use datasets across workspaces, consider using a group such as Fabric approved
report creators.

Workspace access
Since the primary purpose of a workspace is collaboration, workspace access is mostly
applicable to users who create and manage its content. It can also be relevant when the
workspace is used for viewing content (a secondary purpose for workspaces, as
described earlier in this article).

When starting to plan for workspace roles, it's helpful to ask yourself the following
questions.

What are the expectations for how collaboration will occur in the workspace?
Will the workspace be used directly for viewing content by consumers?
Who will be responsible for managing the content in the workspace?
Who will view content that's stored in the workspace?
Is the intention to assign individual users or groups to workspace roles?

It's a best practice to use groups for assigning workspace roles whenever practical.
There are different types of groups you can assign. Security groups, mail-enabled
security groups, distribution groups, and Microsoft 365 groups are all supported for
workspace roles. For more information about using groups, see Tenant-level security
planning.
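
It can be helpful to script role assignments when you manage many workspaces. The following sketch shows one way to assign an Azure AD security group to a workspace role by calling the Power BI REST API (Groups - Add Group User). The workspace ID, group object ID, and access token are placeholders, and the sketch assumes the caller already has permission to manage access for the workspace.

```python
import requests

# Assumptions: ACCESS_TOKEN is a valid token for the Power BI REST API, and
# the caller is an admin of the target workspace.
ACCESS_TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-guid>"
GROUP_OBJECT_ID = "<azure-ad-group-object-id>"  # for example, a workspace members group

payload = {
    "identifier": GROUP_OBJECT_ID,
    "principalType": "Group",          # the principal is a security group
    "groupUserAccessRight": "Member",  # Admin, Member, Contributor, or Viewer
}

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/users",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)
response.raise_for_status()
print("Workspace role assignment added.")
```

The same call can be repeated for each group and role, which makes it practical to standardize role assignments across a collection of workspaces.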

When planning to use groups, you might consider creating one group per role per
workspace. For example, to support the Quarterly Financials workspace, you could
create the following groups:

Fabric workspace admins – Quarterly Financials
Fabric workspace members – Quarterly Financials
Fabric workspace contributors – Quarterly Financials
Fabric workspace viewers – Quarterly Financials
Power BI app viewers – Quarterly Financials

 Tip

Creating the groups listed above provides flexibility. However, it involves creating
and managing many groups. Also, managing a large number of groups can be
challenging when groups are only created and maintained by IT. This challenge can
be mitigated by enabling self-service group management to certain satellite
members. These members can include the Center of Excellence (COE), champions,
or trusted users who have been trained in how to manage role memberships for
their business unit. For more information, see Tenant-level security planning.

When data workspaces are separated from reporting workspaces, as described earlier in
this article, it results in an even larger number of groups. Consider how the number of
groups doubles from five to 10 when you separate data and reporting workspaces:

Fabric data workspace admins – Quarterly Financials
Fabric reporting workspace admins – Quarterly Financials
Fabric data workspace members – Quarterly Financials
Fabric reporting workspace members – Quarterly Financials
Fabric data workspace contributors – Quarterly Financials
Fabric reporting workspace contributors – Quarterly Financials
Fabric data workspace viewers – Quarterly Financials
Fabric reporting workspace viewers – Quarterly Financials
Power BI app viewers – Quarterly Financials

When multiple workspaces exist for development, test, and production, it results in an
even larger number of groups. There's the potential for the number of groups to triple.
For example, for just the data workspace admins, there would be these three groups:

Fabric data workspace admins – Quarterly Financials [Dev]
Fabric data workspace admins – Quarterly Financials [Test]
Fabric data workspace admins – Quarterly Financials

The previous examples are intended to convey that the use of groups that map to
workspace roles can quickly become unmanageable.

 Tip

There are times when fewer groups are needed, particularly in development. For
example, you may not need to specify a workspace viewers group in development;
that group may only be needed for testing and production. Or you might be able
to use the same workspace admins group for development, test, and production.
For more information about development, test, and production, see Workspace
lifecycle management later in this article.

The effective use of groups for workspace roles can require considerable planning. Be
prepared to encounter scenarios when existing groups (that may be aligned with the
organizational chart) don't meet all your needs for managing Fabric content. In this case,
we recommend that you create groups specifically for this purpose. That's why the
words Fabric or Power BI are included in the group name examples shown above. If you
have multiple business intelligence tools, you may choose to use only BI as the prefix
instead. That way, you can use the same groups across multiple tools.

Lastly, the examples show one workspace - Quarterly Financials - but often it's possible
to manage a collection of workspaces with one set of groups. For example, multiple
workspaces owned and managed by the finance team might be able to use the same
groups.

7 Note

You'll often plan security more broadly, taking into consideration dataset Read and
Build permission requirements, and row-level security (RLS) requirements. For
more information about what to consider for supporting report consumers and
content creators, see the security planning articles. For the purposes of this article,
the focus is only on workspace roles as part of the workspace planning process.

Checklist - When considering workspace access, key decisions and actions include:

" Refer to roles and responsibilities: Use the roles and responsibilities information
prepared earlier to plan for workspace roles.
" Identify who'll own and manage the content: Verify that all the items you expect to
store in a single workspace align with the people who'll take responsibility for
owning and managing the content. If there are mismatches, reconsider how the
workspaces could be better organized.
" Identify who'll view content in the workspace: Determine whether people will view
content directly from the workspace.
" Plan for the workspace roles: Determine which people are suited to the Admin,
Member, Contributor, and Viewer roles for each workspace.
" Decide on group or individual role assignments: Determine whether you intend to
assign individual users or groups to workspace roles. Check whether there are
existing groups that you can use for workspace role assignments.
" Determine whether new groups need to be created: Consider carefully whether
you need to create a new group per workspace role. Bear in mind that it can result
in creating and maintaining many groups. Determine what the process is when a
new workspace is created and how related groups will be created.
" Configure and test the workspace role assignments: Verify that users have the
appropriate security settings they need to be productive while creating, editing and
viewing content.

Workspace domain
As described earlier in this article, it's critical to have clarity on workspace ownership.
That clarity is even more essential when responsibilities for creating and managing data
are decentralized among many departments and business units. Sometimes this
approach is referred to as a distributed, federated, or data mesh architecture.

One way to further support workspace ownership in Fabric is with domains. A domain is
a way to logically group multiple workspaces that have similar characteristics. For
example, you might create a domain to group all of your sales workspaces together, and
another domain for your finance workspaces.

Here are two key advantages to domains.

They group similar workspaces into a single management boundary.
They help users find relevant data (for example, by using filters in the OneLake
data hub).

The following table lists several ways you might choose to organize similar workspaces.

Method for organizing workspaces | Example
By subject area/domain/content type | The Finance domain includes each workspace related to finance content.
By the team who owns and manages the content | The Enterprise BI domain includes all workspaces that the team is directly responsible for managing.
By organizational business unit | The European division domain includes all workspaces that are related directly to the operations in Europe.
By project | The Subsidiary acquisition domain includes all workspaces for a highly sensitive project.

Here are some considerations when planning for Fabric domains.

What's the best way to map each workspace to a domain? Once a domain exists,
each workspace may be assigned to only one domain (rather than multiple
domains). You can reassign the domain in the workspace settings, or the admin
portal, if you discover that you need to reorganize workspaces.
Are there specific compliance needs or restrictions, such as geographic area?
Keep in mind that the geographic area for data storage is set for each capacity.
Therefore, consider how workspaces are assigned to a domain and also to a
capacity.
Who are the domain administrators? A domain administrator is authorized to
manage an existing domain. When possible, assign domain administrator(s) who
directly own and manage the content for the domain. The domain administrators
should be experts who are familiar with internal, regional, and governmental
regulations for the subject area. They should also be familiar with all internal
governance and security requirements.
How are domain contributors handled? A domain contributor defines who can
assign a workspace to an existing domain. If you allow the entire organization to
assign workspaces to a domain, you'll need to frequently audit the accuracy of the
assigned groupings. If you allow only specific groups of users, or Fabric admins
and domain admins, you'll have more control over how they're assigned.

Checklist - When planning for workspace domains, key decisions and actions include:

" Validate how content ownership works: Ensure that you deeply understand how
content ownership and management is happening throughout the organization.
Factor that information into needs for grouping and managing workspaces.
" Plan workspace domains: Have discussions to plan how to best organize
workspaces into domains. Confirm all key decisions with the Center of Excellence as
well as your executive sponsor.
" Decide how to handle domain contributors: Consider which users should have
permission to assign workspaces to each domain.
" Educate Fabric administrators: Ensure that your tenant administrators are familiar
with how to create a domain, and how to manage domain administrators.
" Educate domain administrators: Ensure that your domain administrators
understand their expectations for this role in managing a domain.
" Create an auditing process: On a regular basis, validate the assigned domain
groupings are correct.

Workspace settings
There are several settings you can set up for each individual workspace. These settings
can significantly influence how collaboration occurs, who is allowed to access the
workspace, and the level of data reusability across Fabric workloads.

Workspace license mode


Each workspace has a license mode setting. It can be set to Pro, Premium per user
(PPU), Fabric capacity, Embedded, or Trial. The type of license is important for
workspace planning because it determines:

Features: Different features are supported. PPU includes more features (such as
deployment pipelines) that aren't available in Pro. Many more Fabric features (such
as lakehouses) become available for workspaces assigned to a Fabric capacity.
Content access: The license type determines who can access content in the
workspace:
Only users who have a PPU license (in addition to being assigned a workspace
role) can access a PPU workspace.
If you expect to deliver content to content viewers who have a free license,
you'll need a license of F64 or higher.
Data storage location: When you need to store data in a specific geographic
region (outside of your home region), that becomes possible with a workspace
assigned to a capacity (and, accordingly, the capacity is created in that region). For
more information about data storage location, see Tenant setup.
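
Although the license mode is usually set in the workspace settings, assigning a workspace to a capacity can also be scripted. The following sketch is a minimal example that calls the Power BI REST API (Capacities - Groups AssignToCapacity); the workspace ID, capacity ID, and token are placeholders, and it assumes the caller has the required permissions on both the workspace and the capacity.

```python
import requests

# Assumptions: ACCESS_TOKEN is valid for the Power BI REST API, and the caller
# has admin rights on the workspace plus assignment permission on the capacity.
ACCESS_TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-guid>"
CAPACITY_ID = "<capacity-guid>"  # a Fabric or Premium capacity

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/AssignToCapacity",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"capacityId": CAPACITY_ID},
)
response.raise_for_status()
# A successful response means the request was accepted; the assignment itself
# completes asynchronously.
print("Capacity assignment requested.")
```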

Checklist - When considering the workspace license mode, key decisions and actions
include:

" Consider which features are required for each workspace: Determine the feature
requirements of each workspace. Consider differences in workload and which users
you intend to access the workspace.
" Set the workspace license mode: Review and update each workspace license mode
according to which features are needed by each workspace.

Workspace lifecycle management


When content creators collaborate to deliver analytical solutions that are important to
the organization, there are various lifecycle management considerations. These
processes are also known as continuous integration/continuous delivery (CI/CD), which
are one aspect of DevOps.
Several lifecycle management considerations include:

How to ensure timely, reliable, and consistent delivery of content.
How to communicate and coordinate activities between multiple content creators
who are working on the same project.
How to resolve conflicts when multiple content creators edit the same item in the
same project.
How to structure a straightforward and reliable deployment process.
How to roll back deployed content to a previous stable, working version.
How to balance fast releases of new features and bug fixes while safeguarding
production content.

In Fabric, there are two main components of lifecycle management.

Version control of content: Git integration allows content owners and creators to
create versions of their work. It can be used with web-based development in a
workspace, or when developing in a client tool, such as Power BI Desktop. Version
control (also known as source control) is achieved by tracking all revisions to a
project by using branches associated with local and remote repositories in Azure
DevOps. Changes are committed at regular intervals to branches in the remote
repository. When a content creator has completed revisions that are tested and
approved, their branch is merged with the latest version of the solution in the main
remote repository (after resolving any merge conflicts). Git integration can be
specified for each workspace in the Fabric portal, providing the feature has been
enabled in the tenant settings.
Promoting content: Deployment pipelines are primarily focused on release
management in order to maintain a stable environment for users. You can assign a
workspace to a stage (development, test, or production) in a deployment pipeline.
Then, you can easily and systematically promote, or deploy, your content to the
next stage.

When combining the lifecycle management features, there are best practices to consider
during your planning process. For example, you may choose to use Git integration for
your development workspace and deployment pipelines to publish to your test and
production workspaces. Those types of decisions require using the agreed-upon
practice consistently. We recommend that you do a proof of concept to fully test your
setup, processes, and permissions model.
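
As an illustration of the promotion step, the following sketch deploys content from one deployment pipeline stage to the next by calling the Power BI REST API (Pipelines - Deploy All). The pipeline ID, token, and deployment options are placeholders; treat it as a sketch to adapt to your own release process rather than a complete solution.

```python
import requests

# Assumptions: ACCESS_TOKEN is valid for the Power BI REST API, and the caller
# has access to the deployment pipeline and its assigned workspaces.
ACCESS_TOKEN = "<access-token>"
PIPELINE_ID = "<pipeline-guid>"

payload = {
    "sourceStageOrder": 0,  # 0 = development, 1 = test; deploys to the next stage
    "options": {
        "allowCreateArtifact": True,     # create items missing from the target stage
        "allowOverwriteArtifact": True,  # overwrite items that already exist
    },
}

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)
response.raise_for_status()
# The deployment runs asynchronously; the response typically includes an
# operation that you can poll for completion status.
print(response.status_code, response.text)
```
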
Checklist - When planning for workspace lifecycle management, the key decisions and
actions include:

" Determine how users need to use version control: Analyze how your self-service
and advanced content creators work to determine whether file versioning with
OneDrive for Business or SharePoint is appropriate. Introduce Git integration for
advanced users who need more capabilities. Prepare to support both types of users.
" Determine how users need to promote content: Analyze how your self-service and
advanced content creators work to determine whether deployment pipelines are a
good fit for promoting content.
" Decide whether Git integration should be enabled: Consider whether Git
integration with workspaces is a good fit for how your content creators work. Set
the Users can synchronize workspace items with their Git repositories tenant setting
to align with this decision. Review each of the Git integration tenant settings and set
them according to your governance guidelines.
" Do a proof of concept: Conduct a technical proof of concept to clarify how you
intend for Git workspaces and deployment pipelines to work together.
" Decide which workspaces should have Git integration: Consider how your content
creators work, and which workspaces should be assigned to a development, test, or
production (release) branch.
" Verify licenses: Confirm that you have a capacity license available to use Git
integration. Ensure that each workspace is assigned to a Fabric capacity or Power BI
Premium capacity.
" Set up Azure DevOps: Work with your administrator to set up the Azure DevOps
projects, repositories, and branches that you'll need for each workspace. Assign
appropriate access to each repository.
" Connect workspaces: Connect each workspace to the appropriate Azure DevOps
repository.
" Consider who should deploy to production: Make decisions on how and who
should be able to update production content. Ensure that these decisions align with
how workspace ownership is handled in your organization.
" Educate content creators: Ensure that all your content creators understand when to
use lifecycle management features and practices. Educate them on the workflow
and how different workspaces impact lifecycle management processes.

Workspace integration with ADLS Gen2


It's possible to connect a workspace to an Azure Data Lake Storage Gen2 (ADLS Gen2)
account. There are two reasons you might do that:
Storage of Power BI dataflows data: If you choose to bring-your-own-data-lake,
the data for Power BI dataflows (Gen1) could be accessed directly in Azure. Direct
access to dataflow storage in ADLS Gen2 is helpful when you want other users or
processes to view or access the data. It's especially helpful when your goal is to
reuse dataflows data beyond Power BI. There are two choices for assigning
storage:
Tenant-level storage, which is helpful when centralizing all data for Power BI
dataflows into one ADLS Gen2 account is desired.
Workspace-level storage, which is helpful when business units manage their
own data lake or have certain data residency requirements.
Backup and restore for Power BI datasets: The Power BI dataset backup and
restore feature is supported for workspaces that are assigned to capacity or PPU.
This feature uses the same ADLS Gen2 account that's used for storing Power BI
dataflows data (described in the previous bullet point). Dataset backups are helpful
for:
Complying with data retention requirements
Storing routine backups as part of a disaster recovery strategy
Storing backups in another region
Migrating a data model

) Important

Setting Azure connections in the Fabric admin portal doesn't mean that all
dataflows for the entire tenant are stored by default to an ADLS Gen2 account. To
use an explicit storage account (instead of internal storage), each workspace must
be explicitly connected. It's critical that you set the workspace Azure connections
prior to creating any Power BI dataflows in the workspace.
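
The following sketch illustrates that order of operations: it looks up a dataflow storage account that a Fabric administrator has already registered, and then assigns a workspace to it by using the Power BI REST API (Dataflow Storage Accounts operations). The token and workspace ID are placeholders, and the sketch assumes the Azure connection prerequisites described above are already in place.

```python
import requests

# Assumptions: ACCESS_TOKEN is valid for the Power BI REST API, the caller is a
# workspace admin, and a tenant admin has already registered the ADLS Gen2
# account as an Azure connection for dataflow storage.
ACCESS_TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-guid>"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# List the dataflow storage accounts that have been made available.
accounts = requests.get(
    "https://api.powerbi.com/v1.0/myorg/dataflowStorageAccounts",
    headers=headers,
)
accounts.raise_for_status()
storage_id = accounts.json()["value"][0]["id"]  # choose the appropriate account

# Assign the workspace to that storage account. Do this before creating any
# Power BI dataflows in the workspace.
response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/AssignToDataflowStorage",
    headers=headers,
    json={"dataflowStorageId": storage_id},
)
response.raise_for_status()
print("Workspace assigned to dataflow storage.")
```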

Checklist - When considering workspace integration with ADLS Gen2, key decisions and
actions include:

" Decide whether the workspace will be used in ways that require Azure storage:
Consider whether a bring-your-own-data-lake scenario would be useful for the
storage of dataflows and/or whether you have requirements to use the dataset
backup and restore functionality.
" Determine which Azure storage account will be used: Select an Azure Storage
account that has the hierarchical namespace enabled (ADLS Gen2) for tenant-level
(centralized) storage of dataflows data or dataset backups. Ensure you have the
Azure storage account information readily available.
" Configure the tenant-level storage account: In the Fabric admin portal, set the
tenant-level ADLS Gen2 storage account.
" Decide whether workspace administrators may connect a storage account: Have
discussions to understand the needs of decentralized teams, and whether individual
teams are currently maintaining their own Azure Storage accounts. Decide whether
this capability should be enabled.
" Configure the admin setting for workspace-level storage: In the Fabric admin
portal, enable the option that allows workspace administrators to connect their own
storage account.
" Set the workspace-level Azure Storage connections: Specify the Azure Storage
account for each individual workspace. You must set the storage account prior to
creating any Power BI dataflows in the workspace. If you intend to use dataset
backups, ensure the workspace license mode is set to capacity or PPU.
" Update your workspace management documentation: Ensure that your workspace
management documentation includes information about how to assign ADLS Gen2
storage accounts correctly. Make the information available in your centralized
portal and training materials.

Workspace integration with Azure Log Analytics


Azure Log Analytics is a service within Azure Monitor. You can use Azure Log Analytics
to review diagnostic data generated by the Analysis Services engine, which hosts Power
BI datasets. Workspace-level logs are useful for analyzing performance and trends,
performing data refresh analysis, analyzing XMLA endpoint operations, and more. Azure
Log Analytics is available only for workspaces assigned to capacity or PPU.

7 Note

Although the names are similar, the data sent to Azure Log Analytics is different
from the data captured by the Power BI activity log. The data sent to Azure Log
Analytics is concerned with events generated by the Analysis Services engine (for
example, Query begin and Query end events). Conversely, the activity log is
concerned with tracking user activities (for example, View report or Edit report
events).

For more information about dataset event logs, see Data-level auditing.

For more information about how to set up Azure Log Analytics for use with Power BI,
see Configuring Azure Log Analytics for Power BI. Be sure to understand the
prerequisites you must have in place to make the integration work.
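
Once a workspace is connected, the engine events land in the connected Log Analytics workspace, where you can query them with KQL. The following sketch uses the azure-monitor-query package to summarize recent operations; the Log Analytics workspace ID is a placeholder, the query assumes the PowerBIDatasetsWorkspace table schema, and it assumes a successful (non-partial) query result.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Assumption: LOG_ANALYTICS_WORKSPACE_ID is the GUID of the Log Analytics
# workspace that the Fabric workspace sends its engine events to.
LOG_ANALYTICS_WORKSPACE_ID = "<log-analytics-workspace-guid>"

client = LogsQueryClient(DefaultAzureCredential())

# Summarize Analysis Services engine events (such as QueryEnd) captured over
# the last day. Column names can vary by event type; adjust as needed.
kql = """
PowerBIDatasetsWorkspace
| summarize Events = count(), AvgDurationMs = avg(DurationMs) by OperationName
| order by Events desc
"""

result = client.query_workspace(
    workspace_id=LOG_ANALYTICS_WORKSPACE_ID,
    query=kql,
    timespan=timedelta(days=1),
)

for table in result.tables:
    for row in table.rows:
        print(list(row))
```

A query like this can become the basis for routine performance and refresh trend reviews across the workspaces you govern.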

Checklist - When considering workspace integration with Azure Log Analytics, key
decisions and actions include:

" Decide whether workspace administrators can connect to Log Analytics: Determine whether all, or some, workspace administrators will be permitted to use
Azure Log Analytics for analyzing workspace-level logs. If access is to be restricted
to only certain people, decide which group to use.
" Set up the tenant setting for Log Analytics connections: In the Fabric admin portal,
set the tenant setting according to the decision for which workspace administrators
set connections.
" Set the Log Analytics workspace for each workspace: In the workspace settings,
specify the Azure Log Analytics information for each workspace. To capture
workspace-level logs, ensure that the workspace license mode is set to capacity or
PPU.
" Update your workspace management documentation: Ensure that your workspace
management documentation includes information about how to assign a
workspace to Azure Log Analytics.

Other workspace properties


There are several other workspace properties that can provide helpful information. For
governed workspaces, we recommend that you set these properties.

Here are some suggestions for how to set these key settings to improve the experience
for your users.

Workspace description: A good workspace description includes a brief, yet specific, explanation of what type of content can be found in the workspace. You
can use up to 4000 characters to describe:
The purpose for the workspace
The target audience
The type of content published to the workspace
Whether the workspace is considered governed
Whether the workspace includes development, test, or production data
Who to contact should there be any questions (occasionally it's crucial to
display this information as prominently as possible, in addition to the contact
list that's described next)
Workspace contacts: The workspace contact list includes the workspace
administrators by default. If you have technical content owners that are different
from the subject matter experts, you might find it helpful to specify other contacts.
Other contacts could be groups or individuals who can answer questions about the
workspace content.
Workspace image: Consistent use of workspace images can be helpful for users
when they're scanning a list of workspaces. Consider using an image to help users
determine:
The domain or subject area
Which business unit or team owns and manages the content
Whether it's a data workspace (one that's dedicated to storing reusable items,
such as a lakehouse, warehouse, data pipeline, dataflow, or dataset)
Whether it's a reporting workspace (one that's dedicated to storing analytical
items, such as reports, dashboards, or metrics)
Data model settings: Allows workspace members, administrators, and users with
Build permission on the dataset(s) to edit Power BI data models by using the web
interface. This setting is used together with the Users can edit data models in the
Power BI service tenant setting. This setting should align with your decisions and
processes for how content is created, managed, and deployed. Also, consider your
method for version control as described earlier in this article.
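
Workspace descriptions and contacts are normally maintained in the workspace settings, but a Fabric administrator can also script description updates when standardizing many governed workspaces. The following sketch is a hypothetical example that calls the Power BI admin REST API (Admin - Groups UpdateGroupAsAdmin); the workspace ID, token, and description text are placeholders.

```python
import requests

# Assumptions: ACCESS_TOKEN belongs to a Fabric (Power BI) administrator, and
# WORKSPACE_ID identifies the governed workspace to update.
ACCESS_TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-guid>"

new_description = (
    "Certified finance reporting content (production). "
    "Governed workspace; contact the finance BI support group with questions."
)

response = requests.patch(
    f"https://api.powerbi.com/v1.0/myorg/admin/groups/{WORKSPACE_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"description": new_description},
)
response.raise_for_status()
print("Workspace description updated.")
```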

Checklist - When considering the other workspace properties, the key decisions and
actions include:

" Specify the workspace description: Ensure that there's a helpful and thorough
description included in the workspace description.
" Use a helpful image for the workspace: Set a consistent image for the workspace
that'll visually help users understand its subject area, who owns and manages
content in the workspace, and/or the type of content stored in the workspace.
" Identify contacts for the workspace: Verify whether the workspace administrators
should be the workspace contacts, or whether specific users or groups should be
specified.
" Specify data model settings: Consider which workspaces can permit web-based
data model editing. Set the Users can edit data models in the Power BI service tenant
setting according to your preferences for who can edit and manage content.
Other technical factors
There are other technical factors that may influence your workspace setup.

If you integrate content with other tools and services, there may be licensing
implications. For example, if you embed a Power Apps visual in a Power BI report,
you'll need appropriate Power Apps licenses.
There are per-workspace storage limits that apply to the amount of data you can
store in a Pro workspace. If using capacity or PPU isn't an option, consider how to
work within the storage limits during the workspace planning process.
When you install a template app from AppSource, it will create a new workspace
that will have a narrow subject and scope.

Checklist - When considering other technical factors, key decisions and actions include:

" Pay attention to technical factors: As you work through the planning process,
determine whether there's a technical reason (such as per-workspace storage limits)
that could influence your decision-making process.
" Reorganize workspace content: If storage limits could become a problem, create
separate workspaces now and republish content to these new workspaces.

Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.
Power BI implementation planning:
Security
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article introduces a series of articles about Power BI security. The series of articles is
targeted at:

Power BI administrators: The administrators who are responsible for overseeing Power BI in the organization.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They also support self-service users throughout the
organization.
Content creators: Self-service BI creators who set up permissions for the content
they create and publish.

The series of articles is intended to expand upon the content in the Power BI security
white paper. While the Power BI security white paper focuses on key technical topics
such as authentication, data residency, and network isolation, the primary goal of the
series is to provide you with considerations and decisions to help you plan for security
and privacy.

It's important to plan to address challenges related to security, which include:

Identifying and appropriately managing the volume and variety of data that's
stored in many locations.
Ensuring that sensitive data is appropriately stored and shared.
Keeping pace with the regulatory landscape, which is ever-changing.
Educating Power BI content creators on appropriate practices in relation to security
and privacy.

 Tip

Also see the information protection and data loss prevention articles. They
contain information that's complementary to this series of articles.
The focus for this series of articles is on security and privacy. It's organized into the
following articles:

Tenant-level security planning: Strategic decisions and actions you should consider that affect securing the content in your Power BI tenant. The focus is on
strategic decisions that will impact consumers and content creators. It also includes
strategies for file locations, external users, and using groups.
Report consumer security planning: Tactical decisions and actions you should
consider when planning how to deliver secure content to read-only consumers.
The focus is primarily on report viewers and app viewers. It also describes
techniques for how to enforce data security and the Request access workflow for
consumers.
Content creator security planning: Tactical decisions and actions you should
consider when planning for enterprise and self-service creators, who create,
manage, secure, and publish content. It also describes the data discovery
experience and the Request access workflow for content creators.
Capacity and data storage security planning (article not currently available):
Tactical decisions and actions you should consider when planning for Premium
capacities and where to store data.
Gateway security planning (article not currently available): Tactical decisions and
actions you should consider when planning for gateway and data source security.

Next steps
In the next article in this series, learn about tenant-level security planning.
Power BI implementation planning:
Tenant-level security planning
Article • 03/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This tenant-level security planning article is primarily targeted at:

Power BI administrators: The administrators who are responsible for overseeing Power BI in the organization.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They may need to collaborate with Power BI administrators,
information security teams, and other relevant teams.

This article may also be relevant for self-service Power BI creators who create, publish,
and manage content in workspaces.

The series of articles is intended to expand upon the content in the Power BI security
white paper. While the Power BI security white paper focuses on key technical topics
such as authentication, data residency, and network isolation, the primary goal of the
series is to provide you with considerations and decisions to help you plan for security
and privacy.

Because Power BI content can be used and secured in different ways, many tactical
decisions will be made by content creators. However, there are some strategic planning
decisions to make at the tenant level, too. Those strategic planning decisions are the
focus for this article.

We recommend that you make the tenant-level security decisions as early as possible
because they'll affect everything else. Also, it's easier to make other security decisions
once you have clarity on your overall security goals and objectives.

Power BI administration
The Power BI administrator is a high-privilege role that has significant control over
Power BI. We recommend that you carefully consider who's assigned to this role
because a Power BI administrator can perform many high-level functions, including:

Tenant settings management: Administrators can manage the tenant settings in the admin portal. They can enable or disable settings and allow or disallow specific
users or groups within settings. It's important to understand that your tenant
settings have a significant influence on the user experience.
Workspace role management: Administrators can update workspace roles in the
admin portal. They can potentially update workspace security to access any data or
grant rights to other users to access any data in the Power BI service.
Personal workspace access: Administrators can access contents and govern the
personal workspace of any user.
Access to tenant metadata: Administrators can access tenant-wide metadata,
including the Power BI activity logs and activity events retrieved by the Power BI
admin APIs.
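
To illustrate the last point about tenant metadata, the following sketch retrieves a day of activity events by calling the Power BI admin REST API (Admin - Get Activity Events). The token and date values are placeholders; the API returns events for a single UTC day and pages its results through a continuation URI.

```python
import requests

# Assumption: ACCESS_TOKEN belongs to a user or service principal that's
# permitted to call the Power BI admin APIs.
ACCESS_TOKEN = "<access-token>"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# The start and end times must fall within the same UTC day, and the values
# must be wrapped in single quotes.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2023-03-01T00:00:00Z'"
    "&endDateTime='2023-03-01T23:59:59Z'"
)

events = []
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    # Follow the continuation URI until the API signals the last result set.
    url = None if payload.get("lastResultSet") else payload.get("continuationUri")

print(f"Retrieved {len(events)} activity events.")
```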

 Tip

As a best practice, you should assign between two and four users to the Power BI
administrator role. That way, you can reduce risk while ensuring there's adequate
coverage and cross-training.

A Power BI administrator belongs to at least one of these built-in roles:

Power BI admin (Microsoft 365)
Power Platform admin (Microsoft 365)
Global administrator (Azure Active Directory)

7 Note

While a Power Platform administrator can manage the Power BI service, the inverse
isn't true. Someone assigned to the Power BI administrator role can't manage other
applications in the Power Platform.

Checklist - When planning for who will be a Power BI administrator, key decisions and
actions include:
" Identify who's currently assigned the administrator role: Verify who's assigned to
one of the Power BI administrator roles: Power BI admin, Power Platform admin, and
Global administrator.
" Determine who should manage the Power BI service: If there are too many Power
BI administrators, create a plan to reduce the total number. If there are users
assigned as Power BI administrators who aren't well suited to such a high-privilege
role, create a plan to resolve the issue.
" Clarify roles and responsibilities: For each Power BI administrator, ensure that their
responsibilities are clear. Verify that appropriate cross-training has occurred.

Security and privacy strategies


You'll need to make some tenant-level decisions that relate to security and privacy. The
tactics you take and the decisions you make will rely on:

Your data culture. The goal is to encourage a data culture that understands that
security and protection of data is everyone's responsibility.
Your content ownership and management strategies. The level of centralized and
decentralized content management significantly affects how security is handled.
Your content delivery scope strategies. The number of people who will view
content will influence how security should be handled for the content.
Your requirements to comply with global, national/regional, and industry
regulations.

Here are a few examples of high-level security strategies. You might choose to make
decisions that impact the entire organization.

Requirements for row-level security: You can use row-level security (RLS) to
restrict data access for specific users. That means different users will see different
data when accessing the same report. A Power BI dataset or a data source (when
using single sign-on) can enforce RLS. For more information, see the Enforce data
security based on consumer identity section in the Report consumer security
planning article.
Data discoverability: Determine the extent to which data discoverability should be
encouraged in Power BI. Discoverability affects who can find datasets or datamarts
in the data hub, and whether content authors are allowed to request access to
those items (by using the Request access workflow). For more information, see
the customizable managed self-service BI usage scenario.
Data that's permitted to be stored in Power BI: Determine whether there are
certain types of data that shouldn't be stored in Power BI. For example, you might
specify that certain sensitive information types, like bank account numbers or
social security numbers, aren't allowed to be stored in a dataset. For more
information, see the Information protection and data loss prevention article.
Inbound private networking: Determine whether there are requirements for
network isolation by using private endpoints to access Power BI. When you use
Azure Private Link, data traffic is sent by using the Microsoft private network
backbone instead of going across the internet.
Outbound private networking: Determine whether more security is required when
connecting to data sources. The Virtual Network (VNet) data gateway enables
secure outbound connectivity from Power BI to data sources within a VNet. You
can use an Azure VNet data gateway when content is stored in a Premium
workspace.

) Important

When considering network isolation, work with your IT infrastructure and networking teams before you change any of the Power BI tenant settings. Azure
Private Link allows for enhanced inbound security through private endpoints, while
an Azure VNet gateway allows for enhanced outbound security when connecting to
data sources. Azure VNet gateway is Microsoft-managed rather than customer-
managed, so it eliminates the overhead of installing and monitoring on-premises
gateways.

Some of your organizational-level decisions will result in firm governance policies,
particularly when they relate to compliance. Other organizational-level decisions may
result in guidance that you can provide to content creators who are responsible for
managing and securing their own content. The resulting policies and guidelines should
be included in your centralized portal, training materials, and communication plan.

 Tip

See the other articles in this series for additional suggestions that relate to security
planning for report consumers and content creators.

Checklist - When planning your high-level security strategies, key decisions and actions
include:
" Identify regulatory requirements related to security: Investigate and document
each requirement, including how you'll ensure compliance.
" Identify high-level security strategies: Determine which security requirements are
important enough that they should be included in a governance policy.
" Collaborate with other administrators: Contact the relevant system administrator(s)
to discuss how to meet security requirements and what technical prerequisites exist.
Plan for doing a technical proof of concept.
" Update Power BI tenant settings: Set up each relevant Power BI tenant setting.
Schedule follow-up reviews regularly.
" Create and publish user guidance: Create documentation for the high-level
security strategies. Include details about the process and how a user may request
an exemption from the standard process. Make this information available in your
centralized portal and training materials.
" Update training materials: For the high-level security strategies, determine which
requirements or guidelines you should include in user training materials.

Integration with Azure AD


Power BI security is built upon the foundation of an Azure Active Directory (Azure AD)
tenant. The following Azure AD concepts are relevant to the security of a Power BI
tenant.

User access: Access to the Power BI service requires a user account (in addition to
a Power BI license: Free, Power BI Pro, or Premium Per User - PPU). You can add
both internal users and guest users to Azure AD, or they can be synchronized with
an on-premises Active Directory (AD). For more information about guest users, see
Strategy for external users.
Security groups: Azure AD security groups are required when making certain
features available in the Power BI tenant settings. You may also need groups to
effectively secure Power BI workspace content or for distributing content. For more
information, see Strategy for using groups.
Conditional access policies: You can set up conditional access to the Power BI
service and the Power BI mobile app. Azure AD conditional access can restrict
authentication in various situations. For example, you could enforce policies that:
Require multi-factor authentication for some or all users.
Allow only devices that comply with organizational policies.
Allow connectivity from a specific network or IP range(s).
Block connectivity from a non-domain-joined machine.
Block connectivity for a risky sign-on.
Allow only certain types of devices to connect.
Conditionally allow or deny access to Power BI for specific users.
Service principals: You may need to create an Azure AD app registration to
provision a service principal. Service principal authentication is a recommended
practice when a Power BI administrator wants to run unattended, scheduled
scripts that extract data by using the Power BI admin APIs (see the sketch after
this list). Service principals are also useful when embedding Power BI content in a
custom application.
Real-time policies: You may choose to set up real-time session control or access
control policies, which involves both Azure AD and Microsoft Defender for Cloud
Apps. For example, you can prohibit the download of a report in the Power BI
service when it has a specific sensitivity label. For more information, see the
information protection and data loss prevention articles.
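
To make the service principal concept more concrete, the following sketch shows one way an
unattended script might authenticate with a service principal and call a read-only admin API.
It's a minimal illustration rather than an official sample: the tenant ID, client ID, and secret
are placeholders, and it assumes your tenant settings allow the service principal to use
read-only admin APIs (typically by adding it to a permitted security group).

```python
# Minimal sketch (not an official sample): acquire an app-only token with a
# service principal, then call a Power BI read-only admin API.
# Assumes: an Azure AD app registration (client ID and secret below are
# placeholders), and that the service principal is allowed to use read-only
# admin APIs via the relevant tenant setting and security group.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # placeholder
CLIENT_SECRET = "<stored-in-a-key-vault>"            # placeholder

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# App-only (unattended) token for the Power BI service.
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Example read-only admin call: list workspaces in the tenant ($top is required).
response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/admin/groups?$top=100",
    headers=headers,
)
response.raise_for_status()
for workspace in response.json().get("value", []):
    print(workspace["id"], workspace.get("name"))
```

In practice, store the secret (or, preferably, a certificate) in a secure location such as Azure
Key Vault rather than in the script itself.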

It may be difficult to find the right balance between unrestricted access and overly
restrictive access (which frustrates users). The best strategy is to work with your Azure
AD administrator to understand what's currently set up. Try to remain responsive to the
needs of the business while being mindful of necessary restrictions.

 Tip

Many organizations have an on-premises Active Directory (AD) environment that
they synchronize with Azure AD in the cloud. This setup is known as a hybrid
identity solution, which is out of scope for this article. The important concept to
understand is that users, groups, and service principals must exist in Azure AD for
cloud-based services like Power BI to work. Having a hybrid identity solution will
work for Power BI. We recommend talking to your Azure AD administrators about
the best solution for your organization.

Checklist - When identifying needs for Azure AD integration, key decisions and actions
include:

" Work with Azure AD administrators: Collaborate with your Azure AD


administrators to find out what existing Azure AD policies are in place. Determine
whether there are any policies (current or planned) that'll affect the user experience
in the Power BI service and/or in the Power BI mobile applications.
" Decide when user access versus service principal should be used: For automated
operations, decide when to use a service principal instead of user access.
" Create or update user guidance: Determine whether there are security topics that
you'll need to document for the Power BI user community. That way, they'll know
what to expect for using groups and conditional access policies.

Strategy for external users


Power BI supports Azure AD Business-to-Business (Azure AD B2B). External users, for
instance from a customer or partner company, can be invited as guest users in Azure AD
for collaboration purposes. External users can work with Power BI and many other Azure
and Microsoft 365 services.

) Important

The Azure AD B2B white paper is the best resource for learning about strategies
for handling external users. This article is limited to describing the most important
considerations that are relevant to planning.

There are advantages when an external user is from another organization that also has
Azure AD set up.

Home tenant manages the credentials: The user's home tenant stays in control of
their identity and management of credentials. You don't need to synchronize
identities.
Home tenant manages the user's status: When a user leaves that organization
and their account is removed or disabled, the user immediately loses access to
your Power BI content. It's a significant advantage because you may not know
when someone has left their organization.
Flexibility for user licensing: There are cost-effective licensing options. An external
user may already have a Power BI Pro or PPU license, in which case you don't need
to assign one to them. It's also possible to grant them access to content in a
Premium capacity workspace by assigning a Free license to them.

Key settings
There are two aspects to enabling and managing how external user access will work:

Azure AD settings that are managed by an Azure AD administrator. These Azure
AD settings are a prerequisite.
Power BI tenant settings that are managed by a Power BI administrator in the
admin portal. These settings will control the user experience in the Power BI
service.
 Tip

Most external users are read-only consumers. However, sometimes you may want
to allow an external user to edit and manage content in Power BI. In that situation,
you must work within some limitations. We recommend that you thoroughly test
the capabilities that you intend to implement.

Guest invitation process


There are two ways to invite guest users to your tenant.

Planned invitations: You can set up external users ahead of time in Azure AD. That
way, the guest account is ready whenever a Power BI user needs to use it for
assigning permissions (for example, app permissions). Although it requires some
up-front planning, it's the most consistent process because all Power BI security
capabilities are supported. An administrator can use PowerShell to efficiently add a
large number of external users (see the sketch after this list for one scripted
approach).
Ad hoc invitations: A guest account can be automatically generated in Azure AD at
the time that a Power BI user shares or distributes content to an external user (who
wasn't previously set up). This approach is useful when you don't know ahead of
time who the external users will be. However, this capability must first be enabled
in Azure AD. The ad hoc invitation approach works for ad hoc per-item
permissions and app permissions.
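
As noted above for planned invitations, guest account creation can be scripted. The article
mentions PowerShell; as an alternative illustration, the following hedged sketch calls the
Microsoft Graph invitations endpoint from Python. It assumes an Azure AD app registration
that's been granted the User.Invite.All application permission with admin consent. All IDs,
secrets, and email addresses below are placeholders.

```python
# Minimal sketch (not an official sample): create planned B2B guest invitations
# by calling the Microsoft Graph /invitations endpoint.
# Assumes an app registration with the User.Invite.All application permission
# (admin consented); IDs, secrets, and addresses are placeholders.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # placeholder
CLIENT_SECRET = "<stored-in-a-key-vault>"            # placeholder

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Placeholder list of external users to invite ahead of time.
external_users = ["partner.analyst@contoso-partner.com", "vendor.report@fabrikam.com"]

for email in external_users:
    invitation = {
        "invitedUserEmailAddress": email,
        "inviteRedirectUrl": "https://app.powerbi.com",  # where the guest lands after redeeming
        "sendInvitationMessage": True,
    }
    response = requests.post(
        "https://graph.microsoft.com/v1.0/invitations",
        headers=headers,
        json=invitation,
    )
    response.raise_for_status()
    print(email, "->", response.json()["invitedUser"]["id"])
```

Whichever tooling you choose, coordinate with your Azure AD administrators so that scripted
invitations follow the same approval process as manually created guest accounts.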

 Tip

Not every security option in the Power BI service supports triggering an ad hoc
invitation. For this reason, there's an inconsistent user experience when assigning
permissions (for example workspace security versus per-item permissions versus
app permissions). Whenever possible, we recommend that you use the planned
invitation approach because it results in a consistent user experience.

Customer tenant ID
Every Azure AD tenant has a globally unique identifier (GUID) known as the tenant ID. In
Power BI, it's known as the customer tenant ID (CTID). The CTID allows the Power BI
service to locate content from the perspective of a different organizational tenant. You
need to append the CTID to URLs when sharing content with an external user.
Here's an example of appending the CTID to a URL:
https://app.powerbi.com/Redirect?action=OpenApp&appId=abc123&ctid=def456

When you need to provide the CTID for your organization to an external user, you can
find it in the Power BI service by opening the About Power BI dialog window. It's
available from the Help & Support (?) menu located at the top-right of the Power BI
service. The CTID is appended to the end of the tenant URL.
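
If your user documentation includes many links, a small helper can reduce mistakes when
appending the CTID. The following sketch is illustrative only; the function name and GUID are
placeholders, not part of any Power BI API.

```python
# Minimal sketch: append your organization's CTID to a Power BI link before
# sharing it with an external user. The GUID below is a placeholder.
from urllib.parse import urlencode, urlparse

CTID = "def45678-0000-0000-0000-000000000000"  # placeholder customer tenant ID

def add_ctid(url: str, ctid: str = CTID) -> str:
    """Append ctid as a query parameter, respecting any existing query string."""
    separator = "&" if urlparse(url).query else "?"
    return f"{url}{separator}{urlencode({'ctid': ctid})}"

print(add_ctid("https://app.powerbi.com/Redirect?action=OpenApp&appId=abc123"))
# -> https://app.powerbi.com/Redirect?action=OpenApp&appId=abc123&ctid=def45678-...
```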

Organizational branding
When external guest access happens frequently in your organization, it's a good idea to
use custom branding. It helps users identify which organizational tenant they're
accessing. Custom branding elements include a logo, cover image, and theme color.

The following screenshot shows what the Power BI service looks like when accessed by a
guest account. It includes a Guest content option, which is available when the CTID is
appended to the URL.
External data sharing
Some organizations have a requirement to do more than share reports with external
users. They intend to share datasets with external users, such as partners, customers, or
vendors.

The goal for in-place dataset sharing (also known as cross-tenant dataset sharing) is to
allow external users to create their own customized reports and composite models by
using data you create, manage, and provide. The original shared dataset (created by
you) remains in your Power BI tenant. The dependent reports and models are stored in
the external user's Power BI tenant.

There are several security aspects for making in-place dataset sharing work.

Tenant setting: Allow guest users to work with shared datasets in their own
tenants: This setting specifies whether the external data sharing feature can be
used. It needs to be enabled for either of the other two settings (shown next) to
take effect. It's enabled or disabled for the entire organization by the Power BI
administrator.
Tenant setting: Allow specific users to turn on external data sharing: This setting
specifies which groups of users may share data externally. The groups of users
permitted here will be allowed to use the third setting (described next). This setting
is managed by the Power BI administrator.
Dataset setting: External sharing: This setting specifies whether that specific
dataset can be used by external users. This setting is managed by content creators
and owners for each specific dataset.
Dataset permission: Read and Build: The standard dataset permissions to support
content creators are still in place.

) Important

Typically, the term consumer is used to refer to view-only users who consume
content that's produced by others in the organization. However, with in-place
dataset sharing, there's a producer of the dataset and a consumer of the dataset. In
this situation, the consumer of the dataset is usually a content creator in the other
organization.

If row-level security is specified for your dataset, it's honored for external users. For
more information, see the Enforce data security based on consumer identity section in the
Report consumer security planning article.
External user subscriptions
It's most common for external users to be managed as guest users in Azure AD, as
previously described. In addition to this common approach, Power BI provides other
capabilities for distributing report subscriptions to users outside the organization.

The Power BI Allow email subscriptions to be sent to external users tenant setting
specifies whether users are permitted to send email subscriptions to external users who
aren't yet Azure AD guest users. We recommend that you set this tenant setting to align
with how strictly, or flexibly, your organization prefers to manage external user accounts.

 Tip

Administrators can verify which external users are being sent subscriptions by using
the Get Report Subscriptions as Admin API. The email address for the external
user is shown. The principal type is unresolved because the external user isn't set up
in Azure AD.
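
The following sketch illustrates how an administrator might call the Get Report Subscriptions
as Admin API mentioned in the tip above to review subscription recipients. It's a hedged
example: the report ID is a placeholder, the token is assumed to be acquired as shown earlier
in this article, and the exact response field names should be verified against the REST API
reference for your tenant.

```python
# Minimal sketch: list subscriptions for a report by using the
# Get Report Subscriptions as Admin REST API, then flag recipients whose
# principal type is unresolved (typically external users).
# Assumes `access_token` was acquired as in the earlier service principal
# sketch; the report ID is a placeholder.
import requests

access_token = "<acquired-with-msal>"                 # placeholder
report_id = "22222222-2222-2222-2222-222222222222"    # placeholder

response = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/admin/reports/{report_id}/subscriptions",
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()

for subscription in response.json().get("value", []):
    for user in subscription.get("users", []):
        # The exact value reported for unresolved external users may vary;
        # inspect the API response in your tenant before relying on it.
        if user.get("principalType") == "Unknown":
            print("External subscription recipient:", user.get("emailAddress"))
```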

Checklist - When planning for how to handle external guest users, key decisions and
actions include:

" Identify requirements for external users in Power BI: Determine what use cases
there are for external collaboration. Clarify the scenarios for using Power BI with
Azure AD B2B. Determine whether collaboration with external users is a common or
rare occurrence.
" Determine the current Azure AD settings: Collaborate with your Azure AD
administrator to find out how external collaboration is currently set up. Determine
what the impact will be on using B2B with Power BI.
" Decide how to invite external users: Collaborate with your Azure AD administrators
to decide how guest accounts will be created in Azure AD. Decide whether ad hoc
invitations will be allowed. Decide to what extent the planned invitation approach
will be used. Ensure that the entire process is understood and documented.
" Create and publish user guidance about external users: Create documentation for
your content creators that will guide them on how to share content with external
users (particularly when the planned invitation process is required). Include
information about limitations that external users will face if they intend to have
external users edit and manage content. Publish this information to your centralized
portal and training materials.
" Determine how to handle external data sharing: Decide whether external data
sharing should be allowed, and whether it's limited to a specific set of approved
content creators. Set the Allow guest users to work with shared datasets in their own
tenants tenant setting and the Allow specific users to turn on external data sharing
tenant setting to align with your decision. Provide information about external data
sharing for your dataset creators. Publish this information to your centralized portal
and training materials.
" Determine how to handle Power BI licenses for external users: If the guest user
doesn't have an existing Power BI license, decide on the process to assign them a
license. Ensure that the process is documented.
" Include your CTID in relevant user documentation: Record the URL that appends
the tenant ID (CTID) in user documentation. Include examples for creators and
consumers on how to use URLs that append the CTID.
" Set up custom branding in Power BI: In the admin portal, set up custom branding
to help external users identify which organizational tenant they're accessing.
" Verify or update tenant settings: Check how the tenant settings are currently set
up in the Power BI service. Update them as necessary based on the decisions made
for managing external user access.

Strategy for file locations


There are different types of files that should be appropriately stored. So, it's important
to help users understand expectations for where files and data should be located.

There can be risk associated with Power BI Desktop files and Excel workbooks because
they can contain imported data. This data could include customer information,
personally identifiable information (PII), proprietary information, or data that's subject to
regulatory or compliance requirements.

 Tip

It's easy to overlook the files that are stored outside of the Power BI service. We
recommend that you consider them when you're planning for security.

Here are some of the types of files that may be involved in a Power BI implementation.

Source files
Power BI Desktop files: The original files (.pbix) for content that's published to
the Power BI service. When the file contains a data model, it may contain
imported data.
Excel workbooks: Excel workbooks (.xlsx) may include connections to datasets
in the Power BI service. They may also contain exported data. They may even be
original workbooks for content that's published to the Power BI service (as a
workbook item in a workspace).
Paginated report files: The original report files (.rdl) for content that's
published to the Power BI service.
Source data files: Flat files (for example, .csv or .txt) or Excel workbooks that
contain source data that's been imported into a Power BI model.
Exported and other files
Power BI Desktop files: The .pbix files that have been downloaded from the
Power BI service.
PowerPoint and PDF files: The PowerPoint presentations (.pptx) and PDF
documents downloaded from the Power BI service.
Excel and CSV files: Data exported from reports in the Power BI service.
Paginated report files: The files exported from paginated reports in the Power
BI service. Excel, PDF, and PowerPoint are supported. Other export file formats
exist for paginated reports as well, including Word, XML, or web archive. When
using the export files to reports API, image formats are also supported.
Email files: Email images and attachments from subscriptions.

You'll need to make some decisions about where users can or can't store files. Typically,
that process involves creating a governance policy that users can refer to. The locations
for source files and exported files should be secured to ensure appropriate access by
authorized users.

Here are some recommendations for working with files.

Store files in a shared library: Use a Teams site, a SharePoint library, or a OneDrive
for work or school shared library. Avoid using personal libraries and drives. Ensure
that the storage location is backed up. Also ensure that the storage location has
versioning enabled so that it's possible to roll back to a previous version.
Use the Power BI service as much as possible: Whenever possible, use the Power
BI service for sharing and distribution of content. That way, access is always fully
audited. Storing and sharing files on a file system should be reserved for the
small number of users who are collaborating on content.
Don't use email: Discourage the use of email for sharing files. When someone
emails an Excel workbook or a Power BI Desktop file to 10 users, it results in 10
copies of the file. There's always the risk of including an incorrect (internal or
external) email address. Also, there's a greater risk the file will be forwarded to
someone else. (To minimize this risk, work with your Exchange Online administrator
to implement rules to block attachments based on conditions of size or type of file
extension. Other data loss prevention strategies for Power BI are described in the
information protection and data loss prevention articles.)
Use template files: Occasionally, there's a legitimate need to share a Power BI
Desktop file with someone else. In this case, consider creating and sharing a Power
BI Desktop template (.pbit) file. A template file only contains metadata, so it's
smaller in size than the source file. This technique will require the recipient to input
data source credentials to refresh the model data.

There are tenant settings in the admin portal that control which export formats users are
permitted to use when exporting from the Power BI service. It's important to review and
set these settings. It's a complementary activity to planning for the file locations that
should be used for the exported files.

 Tip

Certain export formats support end-to-end information protection by using
encryption. Due to regulatory requirements, some organizations have a valid need
to restrict which export formats users may use. The Information protection for
Power BI article describes factors to consider when deciding which export formats
to enable or disable in the tenant settings. In most cases, we recommend that you
restrict exporting capabilities only when you must meet specific regulatory
requirements. You can use the Power BI activity log to identify which users are
performing many exports. You can then teach these users about more efficient and
secure alternatives.
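
As an illustration of that last point, the following sketch uses the Get Activity Events admin
API to count export activity per user for a single UTC day. It's a minimal example, not an
official sample: the token is assumed to be acquired as shown earlier, and the activity name
and response fields should be verified against the REST API reference.

```python
# Minimal sketch: use the Power BI activity log (Get Activity Events admin API)
# to count report export activity per user for one UTC day.
# Assumes `access_token` was acquired as in the earlier service principal sketch.
from collections import Counter
import requests

access_token = "<acquired-with-msal>"  # placeholder

# The API requires start and end times within the same UTC day.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2023-02-01T00:00:00Z'&endDateTime='2023-02-01T23:59:59Z'"
    "&$filter=Activity eq 'ExportReport'"
)
headers = {"Authorization": f"Bearer {access_token}"}

exports_per_user = Counter()
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    for event in payload.get("activityEventEntities", []):
        exports_per_user[event.get("UserId", "unknown")] += 1
    url = payload.get("continuationUri")  # None when there are no more pages

for user, count in exports_per_user.most_common(10):
    print(user, count)
```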

Checklist - When planning for file locations, the key decisions and actions include:

" Identify where files should be located: Decide where files should be stored.
Determine whether there are specific locations that shouldn't be used.
" Create and publish documentation about file locations: Create user
documentation that clarifies the responsibilities for managing and securing files. It
should also describe any locations where files should (or shouldn't) be stored.
Publish this information to your centralized portal and training materials.
" Set the tenant settings for exports: Review and set each tenant setting related to
export formats you want to support.

Strategy for using groups


We recommend using Azure AD security groups to secure Power BI content for the
following reasons.

Reduced maintenance: The security group membership can be modified without
the need to modify the permissions for the Power BI content. New users can be
added to the group, and unnecessary users can be removed from the group.
Improved accuracy: Because the group membership changes are made once, it
results in more accurate permission assignments. If an error is detected, it can be
more easily corrected.
Delegation: You can delegate the responsibility of managing group membership
to the group owner.

High-level group decisions


There are some strategic decisions to be made regarding how groups will be used.

Permission to create and manage groups

There are two key decisions to make about creating and managing groups.

Who's allowed to create a group? Commonly, only IT can create security groups.
However, it's possible to add users to the built-in Groups administrator Azure AD
role. That way, certain trusted users, like Power BI champions or satellite members
of your COE, can create groups for their business unit.
Who's allowed to manage members of a group? It's common that IT manages
group membership. However, it's possible to specify one or more group owners
who are permitted to add and remove group members. Using self-service group
management is helpful when a decentralized team or satellite members of the COE
are permitted to manage the membership of Power BI-specific groups.

 Tip

Allowing self-service group management and specifying decentralized group
owners are great ways to balance efficiency and speed with governance.

Planning for Power BI groups


It's important that you create a high-level strategy for how to use groups for securing
Power BI content and many other uses.
Various use cases for groups
Consider the following use cases for groups.

Communicating with the Power BI Center of Excellence (COE): Includes all users
associated with the COE, including all core and satellite members of the COE.
Depending on your needs, you might also create a separate group for only the core
members. It's likely to be a Microsoft 365 group that's correlated with a Teams site.
Example group name: Power BI Center of Excellence.

Communicating with the Power BI leadership team: Includes the executive sponsor
and representatives from business units who collaborate on leading the Power BI
initiative in the organization. Example group name: Power BI steering committee.

Communicating with the Power BI user community: Includes all users who are
assigned any type of Power BI user license. It's useful for making announcements to
all Power BI users in your organization. It's likely to be a Microsoft 365 group that's
correlated with a Teams site. Example group name: Power BI community.

Supporting the Power BI user community: Includes help desk users who directly
interact with the user community to handle Power BI support issues. This email
address (and Teams site, if applicable) is available and visible to the user population.
Example group name: Power BI user support.

Providing escalated support: Includes specific users, usually from the Power BI COE,
who provide escalated support. This email address (and Teams site, if applicable) is
typically private, for use only by the user support team. Example group name:
Power BI escalated user support.

Administering the Power BI service: Includes specific users who are allowed to
administer the Power BI service. Optionally, members of this group can be correlated
to the role in Microsoft 365 to simplify management. Example group name:
Power BI administrators.

Notifying allowed features and gradual rollouts of features: Includes users allowed
for a specific tenant setting in the admin portal (if the feature will be limited), or if
the feature is to be rolled out gradually to groups of users. Many of the tenant
settings will require you to create a new Power BI-specific group. Example group
names: Power BI workspace creators; Power BI external data sharing.

Managing data gateways: Includes one or more groups of users who are allowed to
administer a gateway cluster. There may be several groups of this type when there
are multiple gateways or when decentralized teams manage gateways. Example
group names: Power BI gateway administrators; Power BI gateway data source
creators; Power BI gateway data source owners; Power BI gateway data source users.

Managing Premium capacities: Includes users allowed to manage a Premium
capacity. There may be several groups of this type when there are multiple capacities
or when decentralized teams manage capacities. Example group name: Power BI
capacity contributors.

Securing workspaces, apps, and items: Many groups that are based on subject areas
and allowed access for managing security of Power BI workspace roles, app
permissions, and per-item permissions. Example group names: Power BI workspace
administrators; Power BI workspace members; Power BI workspace contributors;
Power BI workspace viewers; Power BI app viewers.

Deploying content: Includes the users that can deploy content by using a Power BI
deployment pipeline. This group is used in conjunction with workspace permissions.
Example group name: Power BI deployment pipeline administrators.

Automating administrative operations: Includes the service principals that are
allowed to use Power BI APIs for embedding or administrative purposes. Example
group name: Power BI service principals.

Groups for Power BI tenant settings

Depending on the internal processes you have in place, you'll have other groups that
are necessary. Those groups are helpful when managing the tenant settings. Here are
some examples.

Power BI workspace creators: Useful when you need to limit who can create
workspaces. It's used to set up the Create workspaces tenant setting.
Power BI certification subject matter experts: Useful to specify who's permitted to
use the certified endorsement for content. It's used to set up the Certification
tenant setting.
Power BI approved content creators: Useful when you require approval, training,
or a policy acknowledgment before installing Power BI Desktop or obtaining a
Power BI Pro or PPU license. It's used by tenant settings that enable content
creation capabilities, such as Allow DirectQuery connections to Power BI datasets,
Push apps to end users, Allow XMLA endpoints, and others.
Power BI external tool users: Useful when you allow the use of external tools for a
select group of users. It's used by group policy, or when software installations or
requests must be carefully controlled.
Power BI custom developers: Useful when you need to control who's permitted to
embed content in other applications outside of Power BI. It's used to set up the
Embed content in apps tenant setting.
Power BI public publishing: Useful when you need to limit who can publish data
publicly. It's used to set up the Publish to web tenant setting.
Power BI share to entire organization: Useful when you need to restrict who can
share a link with everyone in the organization. It's used to set up the Allow
shareable links to grant access to everyone in your organization tenant setting.
Power BI external data sharing: Useful when you need to allow certain users to
share datasets with external users. It's used to set up the Allow specific users to turn
on external data sharing tenant setting.
Power BI guest user access licensed: Useful when you need to group approved
external users who are granted a license by your organization. It's used to set up
the Allow Azure Active Directory guest users access to Power BI tenant setting.
Power BI guest user access BYOL: Useful when you need to group approved
external users who bring their own license (BYOL) from their home organization.
It's used to set up the Allow Azure Active Directory guest users access to Power BI
tenant setting.

 Tip

For considerations about using groups when planning for workspace access, see
the Workspace-level planning article. For information about planning for securing
workspaces, apps, and items, see the Report consumer security planning article.

Type of group
You can create different types of groups.

Security group: A security group is the best choice when your primary goal is to
grant access to a resource.
Mail-enabled security group: When you need to grant access to a resource and
distribute messages to the entire group by email, a mail-enabled security group is
a good choice.
Microsoft 365 group: This type of group has a Teams site and an email address.
It's the best choice when the primary goal is communication or collaboration in a
Teams site. A Microsoft 365 group only has members and owners; there isn't a
viewer role. For this reason, its primary purpose is collaboration. This type of group
was formerly known as an Office 365 group, modern group, or unified group.
Distribution group: You can use a distribution group to send a broadcast
notification to a list of users. Today, it's considered to be a legacy concept that
provides backwards compatibility. For new use cases, we recommend that you
create a mail-enabled security group instead.

When you request a new group, or you intend to use an existing group, it's important to
be aware of its type. The type of group can determine how it's used and managed.

Power BI permissions: Not every type of group is supported for every type of
security operation. Security groups (including mail-enabled security groups) offer
the highest coverage when it comes to setting Power BI security options. Microsoft
documentation generally recommends Microsoft 365 groups. However, in the case
of Power BI, they aren't as capable as security groups. For more information about
Power BI permissions, see the later articles in this series on security planning.
Power BI tenant settings: You can only use security groups (including mail-
enabled security groups) when allowing or disallowing groups of users to work
with Power BI tenant settings.
Advanced Azure AD features: Certain types of advanced features aren't supported
for all group types. For example, you might want to manage group membership
dynamically based on an attribute in Azure AD (such as the department for a user,
or even a custom attribute). Only Microsoft 365 groups and security groups
support dynamic group memberships. Or, if you want to nest a group within a
group, be aware that Microsoft 365 groups don't support that capability.
Managed differently: Your request to create or manage a group might be routed
to a different administrator based on the type of group (mail-enabled security
groups and distribution groups are managed in Exchange). Therefore, your internal
process will differ depending on the type of group.

Group naming convention


It's likely that you'll end up with many groups in Azure AD to support your Power BI
implementation. Therefore, it's important to have an agreed-upon pattern for how
groups are named. A good naming convention will help to determine the purpose of
the group and make it simpler to manage.

Consider using the following standard naming convention:
<Prefix> <Purpose> - <Topic/Scope/Department> <[Environment]>

The following list describes each part of the naming convention.

Prefix: Used to group all Power BI groups together. When the group will be used
for more than one analytical tool, your prefix might be just BI, rather than Power BI.
In that case, the text that describes the purpose will be more generic so that it
relates to more than one analytical tool.
Purpose: The purpose will vary. It could be for a workspace role, app permissions,
item-level permissions, row-level security, or other purpose. Sometimes multiple
purposes can be satisfied with a single group.
Topic/Scope/Department: Used to clarify who the group applies to. It will often
describe the group membership. It can also refer to who manages the group.
Sometimes a single group can be used for multiple purposes. For example, a
collection of finance workspaces could be managed with a single group.
Environment: Optional. Useful to differentiate between development, test, and
production.

Here are some example group names that apply the standard naming convention.

Power BI workspace admins - Finance [Dev]
Power BI workspace members - Finance [Dev]
Power BI workspace contributors - Finance [Dev]
Power BI workspace viewers - Finance [Dev]
Power BI app viewers - Finance
Power BI gateway administrators - Enterprise BI
Power BI gateway administrators - Finance
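
If you script group provisioning, a small helper can keep names consistent with the
convention described above. The following sketch is illustrative only; the function name and
defaults aren't part of any standard API.

```python
# Minimal sketch: assemble group names that follow the convention
# <Prefix> <Purpose> - <Topic/Scope/Department> <[Environment]>.
# The function name and defaults are illustrative placeholders.
from typing import Optional

def build_group_name(purpose: str, scope: str,
                     environment: Optional[str] = None,
                     prefix: str = "Power BI") -> str:
    name = f"{prefix} {purpose} - {scope}"
    if environment:
        name += f" [{environment}]"   # environment suffix is optional
    return name

print(build_group_name("workspace admins", "Finance", "Dev"))
# -> Power BI workspace admins - Finance [Dev]
print(build_group_name("gateway administrators", "Enterprise BI"))
# -> Power BI gateway administrators - Enterprise BI
```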

Decisions per group


When planning for which groups you'll need, several decisions must be made.

When a content creator or owner requests a new group, ideally they use a form to
provide the following information.

Name and purpose: A suggested group name and its intended purpose. Consider
including Power BI (or just BI when you have multiple BI tools) in the group name
to clearly indicate the scope of the group.
Email address: An email address when communication is also required for the
group members. Not all types of groups need to be mail-enabled.
Type of group: Options include security group, mail-enabled security group,
Microsoft 365 group, and distribution group.
Group owner: Who's allowed to own and manage the members of the group.
Group membership: The intended users who will be members of the group.
Consider whether external users and internal users can be added, or whether
there's a justification for putting external users into a different group.
Use of just-in-time group member assignment: You can use Privileged Identity
Management (PIM) to allow time-boxed, just-in-time, access to a group. This
service can be helpful when users require temporary access. PIM is also helpful for
Power BI administrators who need occasional access.

 Tip

Existing groups that are based on the organizational chart don't always work well
for Power BI purposes. Use existing groups when they meet your needs. However,
be prepared to create Power BI-specific groups when the need arises.

Checklist - When creating your strategy for how to use groups, key decisions and
actions include:
" Decide on the strategy for the use of groups: Determine the use cases and
purposes you'll need to use groups. Be specific about when security may be applied
by using user accounts versus when a group is required or preferred.
" Create a naming convention for Power BI-specific groups: Ensure that a consistent
naming convention is in use for groups that will support Power BI communication,
features, administration, or security.
" Decide who is allowed to create groups: Clarify whether all group creation is
required to go through IT. Or whether certain individuals (like satellite members of
the COE) can be granted permission to create groups for their business unit.
" Create a process for how to request a new group: Create a form for users to
request the creation of a new group. Ensure that there's a process in place to
respond quickly to new requests. Bear in mind that if requests are delayed, users
might be tempted to start assigning permissions to individual accounts.
" Decide when decentralized group management is allowed: For groups that apply
to a specific team, decide when it's acceptable for a group owner (outside of IT) to
manage members in the group.
" Decide whether just-in-time group membership will be used: Determine whether
Privileged Identity Management will be useful. If so, determine which groups it can
be used for (such as the Power BI administrator group).
" Review which groups currently exist: Determine which existing groups can be
used, and which groups need to be created.
" Review each tenant setting: For each tenant setting, determine whether it'll be
allowed or disallowed for a specific set of users. Determine whether a new group
needs to be created to set up the tenant setting.
" Create and publish guidance for users about groups: Include documentation for
content creators that includes requirements, or preferences, for using groups.
Ensure that they know what to ask for when they request a new group. Publish this
information to your centralized portal and training materials.

Next steps
In the next article in this series, learn about ways to securely deliver content to read-only
report consumers.
Power BI implementation planning:
Report consumer security planning
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This security planning article describes strategies for read-only consumers. The focus is
on viewer permissions for reports and apps, and how to enforce data security. It's
primarily targeted at:

Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They may need to collaborate with Power BI administrators,
information security teams, and other relevant teams.
Content creators and owners: Self-service BI creators who need to create, publish,
secure, and manage content that other users consume.

The series of articles is intended to expand upon the content in the Power BI security
white paper. While the Power BI security white paper focuses on key technical topics
such as authentication, data residency, and network isolation, the primary goal of the
series is to provide you with considerations and decisions to help you plan for security
and privacy.

In an organization, many users are classified as consumers. Consumers view content that
other users have created and published. Consumers are the focus of this article. For
security planning focused on content creators and owners, see the Content creator
security planning article.

To get the most from this article, it's helpful to understand the meaning of the terms
sharing and distribution in the context of Power BI.

Sharing is where one user gives another user (or group of users) access to a specific item
of content. The sharing capability in the Power BI service is scoped to one item. It most
commonly takes place between individuals who know each other and work closely
together.
Distribution is where content is delivered to other users, who are known as recipients. It
often involves a larger number of users across multiple teams. Recipients may not have
explicitly requested the content, but it's recognized that they need it to perform their
role. Recipients who consume distributed content may or may not know the original
creator of the content. As such, distribution as a concept is more formal than sharing.

When you talk with other people, determine whether they're using the term sharing in a
general way, or literally. Use of the term sharing can be interpreted in two ways.

The term sharing is often used in a general way related to sharing content with
colleagues. There are several techniques for delivering read-only content, which
are described in this article.
Sharing is also a specific feature in Power BI. It's a capability where a user or group
is granted access to a single item. Sharing links and direct access sharing are
described in this article.

Strategy for read-only consumers


In the Power BI service, consumers can view a report or dashboard when they have
permission to both:

View the Power BI item that contains the visualizations (such as a report or
dashboard).
Read the underlying data (dataset or other source).

You can provide read-only access to consumers by using different techniques. The
common techniques used by self-service content creators include:

Granting users and groups access to a Power BI app.
Adding users and groups to a Power BI workspace Viewer role.
Providing users and groups per-item permissions by using a sharing link.
Providing users and groups per-item permissions by using direct access.

The Power BI app and Power BI workspace Viewer role options involve managing
permissions for a set of items. The two per-item permissions techniques involve
managing permissions for one individual item.

 Tip

Generally, it's a best practice to use a Power BI app for most consumers.
Occasionally the workspace Viewer role may also be appropriate. Both Power BI
apps and the workspace Viewer role allow managing permissions for many items,
and should be used whenever possible. Managing permissions for individual items
can be tedious, time consuming, and error prone. In contrast, managing a set of
items reduces maintenance and improves accuracy.

When reviewing security settings for an item, you may see that its permissions are
either:

Inherited from the workspace or an app.
Applied directly to the item.

In the following screenshot, the Direct access permissions are shown for a report. In this
instance, the workspace Admin and Member roles are each assigned to a group. These
roles are shown for the report because the report-level access is inherited from the
workspace. There's also one user who has Read permissions applied directly to the
report.

The strategy you choose for read-only consumers may be different, and it should be
based on the individual solution, the preferences of who manages the solution, and the
needs of the consumer. The remainder of this section describes when to consider using
each of the available techniques.

Checklist - When creating your strategy for how to provide content to read-only
consumers, key decisions and actions include:

" Assess your existing strategy for read-only consumers: Verify how content is
currently distributed and shared to consumers. Identify whether there are
opportunities for improvement.
" Decide on your strategy for read-only consumers: Consider what your preferences
are for using app permissions, workspace roles, or per-item permissions. If changes
are necessary to meet these preferences, create a plan for making improvements.

Power BI app permissions


A Power BI app delivers a collection of reports, dashboards, and workbooks to
consumers. An app provides the best user experience for consumers because:

The app's navigation pane provides a simple and intuitive user experience. It's a
nicer experience than accessing content directly in a workspace.
Content can be logically organized into sections (which are like folders) in the
app's navigation pane.
Consumers only have access to specific items that have been explicitly included in
the app for their audience.
Links to additional information, documentation, or other content can be added to
the navigation pane for their audience.
There's a built-in Request access workflow.

7 Note

All references to an app in this article refer to a Power BI app. It's a different
concept from Power Apps. It's also a different concept than the Power BI mobile
applications. In this section, the focus is on organizational apps rather than
template apps.

You can create one app for each workspace as a formal way to distribute some, or all,
workspace content. Apps are a good way to distribute content broadly within an
organization, especially to users that you don't know or don't collaborate with closely.

 Tip

For more information about using a Power BI app for broad content distribution,
see the enterprise BI usage scenario. We recommend that content creators who
need to distribute content consider creating an app as their first choice.

You manage app permissions separately from workspace roles. The separation of
permissions has two advantages. It encourages:

Granting workspace access to content creators. It includes users that are actively
collaborating on the content, like dataset creators, report creators, and testers.
Granting app permissions to consumers. Unlike workspace permissions, app
permissions are always read-only (or none).

All users with workspace access may automatically view the app (when a Power BI app
has been published for the workspace). Due to this behavior, you can conceptually think
of workspace roles as being inherited by each app audience. Some users with workspace
access may also update the Power BI app, depending on their assigned workspace role.

 Tip

For more information about workspace access, see the Content creator security
planning article.

Using an app to distribute content to read-only consumers is the best choice when:

You want users to be able to view only specific items that are visible for that
audience (rather than all items within the underlying workspace).
You want to manage read-only permissions for the app separately from the
workspace.
You want simpler permission management for read-only users than per-item
permissions.
You want to ensure that row-level security is enforced for consumers (when they
have read-only permission on the underlying dataset).
You want to ensure that consumers can't view new and changed reports until the
app is republished.

While it's true that changes to reports and dashboards aren't visible to users of the app
until the app is republished, there are two considerations that require caution.

Immediate dataset changes: Dataset changes always take effect immediately. For
example, if you introduce breaking changes to a dataset in the workspace, it could
inadvertently result in reports becoming unstable (even though they haven't been
republished in the app). There are two ways to mitigate this risk: First, do all
development work in Power BI Desktop (separate from the workspace). Second,
insulate the production app by using separate workspaces for development and
test. (Optionally, you can achieve a higher level of control over deploying
workspace content from development to test and production by using deployment
pipelines.)
Content and permissions are published together: When you publish an app, its
permissions are published at the same time as the content. For example, you may
have report changes in a workspace that aren't yet complete, fully tested, or
approved. So, you can't republish the app merely to update permissions. To
mitigate this risk, assign app permissions to security group(s), and use security
group memberships (instead of individual users) when granting app permissions.
Avoid republishing an app merely to apply permission changes.

App audience
Each workspace in the Power BI service can have only one Power BI app. However, within
the app you can create one or more audiences. Consider the following scenario.

You have five sales reports that are distributed to many users throughout your
global sales organization.
One audience is defined in the app for the sales representatives. This audience can
view three of the five reports.
Another audience is defined in the app for the sales leadership team. This audience
can view all five reports, including the two reports that aren't available to sales
representatives.

This capability to mix and match content and audiences has the following advantages.

Certain reports can be available for viewing by multiple audiences. So, creating
multiple audiences removes the need to duplicate content across different
workspaces.
Certain reports may be available to only one audience. So, content for that one
audience can reside in the same workspace as other related content.

The following screenshot shows an app with two audiences: Sales Leadership and Sales
Reps. The Manage Audience Access pane provides access to the Sales Leadership
audience group for two security groups: Sales Leadership-North America and Sales
Leadership-Europe. The Gross Margin Analysis report that's shown in the screenshot for
the Sales Leadership audience group isn't available to the Sales Reps audience group.
7 Note

The term audience group is sometimes used. It isn't a direct reference to the use of
security groups. It includes members of the target audience who will see the
collection of content within a Power BI app. While you can assign individual users
to an audience, it's a best practice to assign security groups, Microsoft 365 groups,
or distribution groups whenever practical. For more information, see the strategy
for using groups in the Tenant-level security planning article.

When you manage permissions for an app, on the Direct Access page you can view the
members of each audience. You can also see users with a workspace role listed under
the All audience. You can't update the app permissions from the Direct Access page.
Instead, you must republish the app. You can, however, update app permissions from
the Pending page when there are open access requests for the app.

 Tip

The primary use case for using app audiences is to define specific permissions for
different sets of users. However, you can get a little creative when using audiences.
A user can be a member of multiple audiences, and each audience is shown to
viewers of the app as a secondary set of menus. For example, you can create an
audience named Start Here that contains information about how to use the app,
who to contact, how to provide feedback, and how to get help. Or, you can create
an audience named KPI Definitions that includes a data dictionary. Providing this
type of information helps new users and improves solution adoption efforts.

App permission options

When you create (or republish) an app, each audience has a Manage Audience Access
pane. In that pane, the following permissions are available.

Grant access to: For each audience, you can grant access to individual users and
groups. It's possible to publish the app to the entire organization when it's enabled
by the Publish content packs and apps to the entire organization tenant setting, and
the app isn't installed automatically. Whenever possible, we recommend that you
assign groups to audiences because adding or removing users involves
republishing the app. Everyone with workspace access automatically has
permission to view or update the app depending on their workspace role.
Dataset permissions: Two types of dataset permissions can be granted while
publishing an app:
Dataset Reshare: When enabled, app users are granted the Reshare permission
to the underlying dataset(s) with others. It makes sense to enable this option
when the underlying dataset(s) can be readily reshared with anyone. We
recommend that you get approval from the dataset owner(s) before granting
the Reshare permission to an app audience.
Dataset Build: When enabled, app users are granted the Build permission for
the datasets. Build permission allows users to create new reports, export
underlying data from reports, and more. We recommend that you get approval
from the dataset owner(s) before granting Build permission to an app audience.

The capability to add the dataset Reshare or Build permissions while publishing an app
is convenient. However, we recommend that you consider managing app permissions
and dataset permissions separately. Here are the reasons why.

Shared datasets might be in a separate workspace: If the dataset is published to a
separate workspace from the app, you'll need to manage its permissions directly.
The ability to add Read, Build or Reshare permissions while publishing an app only
works for datasets that are in the same workspace as the app. For this reason, we
recommend that you get into the habit of managing dataset permissions
independently.
Dataset permissions are managed separately: If you remove or change
permissions for an app, that action only affects the app. It doesn't automatically
remove any dataset permissions that were previously assigned. In this way, you can
think of the app permissions and dataset permissions as being decoupled. You'll
need to manage the dataset directly, separately from the app, when dataset
permissions change or need to be removed.
Dataset permissions should be controlled: Granting dataset permissions through
an app removes control from the dataset owner. Granting the Reshare permission
relies on good judgment by users who are choosing to reshare the dataset(s). Your
internal governance or security guidelines can become more difficult to manage
when resharing is allowed.
Consumers and creators have different goals: Typically, there are many more
content consumers than creators in an organization. In line with the principle of
least privilege, consumers only need Read permission for the underlying dataset.
They don't need Build permission unless they intend to create new reports.

 Tip

For more information about when to use separate data workspaces and reporting
workspaces, see the Workspace-level planning article.

App pre-installation rights

After you publish a Power BI app, a user typically needs to install it so they can open it. A
user can install an app from the Apps page in the Power BI service, or by using a link
they've received from another user. They'll be able to find (and install) an app when
they're included in at least one audience of the app.

An alternative approach to installing an app is to push it to app consumers. It results
in the pre-installation of the app so that it automatically shows up in the Apps page
in the Power BI service. This approach is a convenience for consumers because they
don't need to find and install the app. However, pre-installed apps can become an
annoyance for users because they may become overwhelmed by too many apps that
aren't relevant to them.

The Push apps to end users tenant setting controls who's allowed to automatically install
apps. We recommend that you use this feature because it's convenient for users.
However, we also recommend that you educate your content creators on when to use it
so that it isn't overused.

 Tip

When publishing an app, if you select the option to install the app automatically,
you can't set the audience to be the entire organization (if enabled by the Push
apps to end users tenant setting).
Checklist - When creating your strategy for using apps for content viewers, key
decisions and actions include:

" Decide on the strategy for use of apps: Define your preferences for how to use
apps. Ensure that it aligns with your overall strategy for read-only consumers.
" Decide who can publish apps to the entire organization: Decide which report
creators are able to publish to the entire organization. Set the Publish content packs
and apps to the entire organization tenant setting to align with this decision.
" Decide who can push apps to end users: Decide which Power BI report creators
can pre-install apps. Set the Push apps to end users tenant setting to align with this
decision.
" Create and publish guidance for content creators: Provide documentation and
training for content creators. Include requirements and preferences for how to use
apps most effectively.
" Determine how to handle app access requests: Ensure that a process is in place to
assign contacts and handle app access requests in a timely manner.

Workspace Viewer role


As described in the Workspace planning articles, the primary purpose of a workspace is
collaboration. Workspace collaborators, such as dataset creators, report creators, and
testers, should be assigned to one of three roles: Contributor, Member, or Admin. These
roles are described in the Content creator security planning article.

You can assign the workspace Viewer role to consumers. Allowing consumers to access
content directly in a workspace can make sense for small teams and informal teams who
work closely together.

Allowing consumers to access workspace content directly is a good choice when:

The formality of an app, with its separate permissions, isn't necessary.
Viewers are permitted to view all items stored within the workspace.
You want simpler permissions management than per-item permissions.
Workspace users may also view an app (when an app is published for the
workspace).
The intention is for viewers to review content before it's published in an app.

Here are some suggestions to support workspace viewers.

Organize content in each workspace so that the items are easily located by report
consumers and so they align well with security. Workspace organization by subject
area or project usually works well.
Separate development and test content from production content so that work-in-
progress items can't be accessed by viewers.
Use apps (or per-item permissions when appropriate) when you expect to have
many access requests to process. There isn't a Request access workflow for
workspaces.

Checklist - When creating your strategy for using workspaces for content viewers, key
decisions and actions include:

" Decide on a strategy for using the workspace Viewer role: Define what your
preferences are for how to use workspaces for consumers. Ensure that it aligns with
your overall strategy for read-only consumers.
" Create and publish guidance for content creators: Provide documentation and
training for content creators. Include requirements and preferences for how to use
workspace permissions most effectively.

Per-item permissions
Individual item sharing grants permission to a single item. Less experienced content
creators commonly use this technique as the primary sharing technique because the
sharing commands are prominently displayed in the Power BI service. For this reason, it's
important to educate your content creators on the different sharing options, including
when to use app permissions instead of workspace roles.

Per-item permissions are a good choice when:

You want to provide read-only access to one item (report or dashboard).
You don't want the consumer to view all content published to a workspace.
You don't want the consumer to view all content published to an app audience.

Use per-item permissions sparingly because sharing grants Read permission to a single
item. In a way, you can think of per-item permissions as an override of workspace roles
or app permissions.
 Tip

We recommend that you use app permissions whenever possible. Next, consider
using workspace roles to enable direct workspace access. Lastly, use per-item
permissions when they meet the above criteria. App permissions and workspace
roles both specify security for a collection of content (rather than individual items),
which is a better security practice.

Sharing many items by using per-item permissions can be tedious and error prone,
especially when sharing to individual users instead of groups. Consider this scenario: You
have 40 reports that you've shared to colleagues by using their individual user accounts.
When one colleague transfers to a different department, you'll need to revoke their
access, which will involve editing permissions for all 40 reports.

) Important

Sharing content from a personal workspace should be done infrequently. Personal
workspaces are best suited to non-critical, informal, or temporary content. If you
have a situation where content creators frequently share important or critical
content from their personal workspaces, you should take appropriate action to
move that content to a standard workspace. For more information, see the
personal BI usage scenario.

When you share an individual item, you have several permission options.

Reshare permission: When enabled, users can share the item with other users,
including its underlying datasets. It makes sense to grant this permission when the
item can be readily shared with anyone. It removes control from the person or
team that manages the item. So, it relies on good judgment by users who are
granted the Reshare permission. However, your internal governance or security
guidelines can become more difficult to manage when resharing is allowed.
Build permission: When enabled, users are granted Build permission for the
underlying dataset. Build permission allows users to create new content that's
based on the dataset. It also allows them to export underlying data from reports,
and more. Considerations for granting Build permission are described in the
Content creator security planning article.

Per-item permissions for reports and dashboards can make sense for informal scenarios
when content is shared with a few users. It's a good idea to educate users on managing
permissions with apps and workspaces instead, especially when they're sharing content
to large numbers of users or users outside their team. It's important to emphasize the
following points.

It becomes more difficult to determine which content has been shared with which
users, because the permissions on each report and dashboard must be reviewed
individually.
In many instances, Reshare permission is set because the user experience enables
this option by default. So, there's a risk that content is shared to a wider set of
users than intended. This outcome can be prevented by unchecking the Allow
recipients to share this report option when sharing. Minimizing oversharing in this
way is a user training issue. The content creator that's setting the sharing
permissions should consider this choice every time.
All changes to reports and dashboards are viewable by others immediately, which
may confuse users when content modifications are a work in progress. This
concern can be mitigated by distributing content in an app, or by using separate
workspaces to segregate development, test, and production content. For more
information, see the self-service content publishing usage scenario.
When a user shares content from their personal workspace and they leave the
organization, IT usually disables their user account. In this case, all recipients of the
shared content will immediately lose access to the content.

There are three specific types of sharing: sharing links, direct access sharing, and shared
views.

Per-item permission links

When you share an individual item, the default experience results in a sharing link. There
are three types of sharing links.

People in your organization: When enabled in your Power BI tenant settings, this
type of sharing link is a straightforward way to provide read-only access to
everyone within the organization. However, the sharing link won't work for external
users. This option is best suited to when anyone can view the content, and the link
can be freely shared throughout the organization. Unless it's disabled by the Allow
shareable links to grant access to everyone in your organization tenant setting, this
type of sharing is the default.
People with existing access: This option doesn't create a new sharing link. Rather,
it allows you to retrieve the URL so you can send it to someone who already has
access.
Specific people: This option produces a sharing link for specific users or groups.
We recommend that you use this option most of the time because it provides
specific access. If you commonly work with external users, you can use this type of
link for guest users who already exist in Azure Active Directory (Azure AD). For
more information about the planned invitation process to create guest users, see
the Tenant-level security planning article.

) Important

We recommend that you consider restricting the Allow shareable links to grant
access to everyone in your organization tenant setting to members of a group. You
can create a group name like Power BI Share to Entire Organization, and then add a
small number of users who understand the implications of organization-wide
sharing. If you're concerned about existing organization-wide links, you can use the
admin API to find all items that have been shared with the entire organization.

A sharing link adds Read permission to the item. The Reshare permission is selected by
default. It's also possible to add Build permission to the underlying dataset at the same
time that the sharing link is created.

 Tip

We recommend that you teach content creators to enable the Build permission
option only when the consumer of the report is also a content creator who might
need to create reports, export data, or create a composite model from the
underlying dataset.

Sharing links are easier to maintain than direct access sharing, particularly when you
need to do bulk changes. The advantage is even greater when sharing permissions are
granted to individual users rather than groups (which commonly occurs when self-service
users are responsible for managing permissions). Consider the following comparisons.

Sharing link: 20 individual users are granted access with a sharing link. With a
single change to the link, it affects all 20 users.
Direct access: 20 individuals are granted direct access to an item. To make a
change, all 20 user permissions must be modified.

Per-item direct access permissions

You can also achieve per-item permissions by using direct access. Direct access involves
setting up the permissions for a single item. You can also determine any inherited
permissions derived from workspace roles.
When you grant a user direct access, they're granted Read permission for the item. The
Reshare permission is selected by default, as is the Build permission for the underlying
dataset. We recommend that you teach content creators to enable Build permission only
when the consumer of this report is also a content creator who might need to create
reports, export data, or create composite models from the underlying dataset.

 Tip

The user experience makes granting Reshare and Build permissions very
straightforward, but the user doing the sharing should always verify whether those
permissions are necessary.

Shared views

Use a shared view to share a filtered perspective of a report with another user. You can
publish a shared view by using a sharing link or by direct access.

Shared views are a temporary concept. They automatically expire after 180 days. For this
reason, shared views are best suited to informal and temporary sharing scenarios. Be
sure your users are aware of this limitation.

Checklist - When creating your strategy for using per-item permissions, key decisions
and actions include:

" Decide on the strategy for use of the sharing feature: Define what your
preferences are for how to use per-item permissions. Ensure that it aligns with your
overall strategy for read-only consumers.
" Decide who can publish links to the entire organization: Decide which report
creators are able to publish links for the entire organization. Set the Allow shareable
links to grant access to everyone in your organization tenant setting to align with
this decision.
" Create and publish guidance for content creators: Provide documentation and
training for content creators that includes requirements and preferences for how to
use per-item permissions most effectively. Ensure they're clear on the advantages
and disadvantages of per-item permissions. Include guidance for when to use
sharing links and when to use direct access sharing.
Other consumer query techniques
The most common ways for consumers to interact with Power BI are with apps,
workspaces, and per-item permissions (previously described in this article).

There are other techniques that consumers can use to query Power BI data. Each of the
following query techniques requires dataset or datamart Build permission.

Analyze in Excel: Consumers who prefer to use Excel can query a Power BI dataset
by using Analyze in Excel. This capability is a great alternative to exporting data to
Excel because the data isn't duplicated. With a live connection to the dataset, users
can create PivotTables, charts, and slicers. They can then publish the workbook to a
workspace in the Power BI service which allows consumers to open it and interact
with it.
XMLA endpoint: Consumers can query a dataset by connecting to the XMLA
endpoint. An application that's XMLA-compliant can connect to, query, and
consume a dataset that's stored in a Premium workspace. This capability is helpful
when consumers want to use a Power BI dataset as their data source for a data
visualization tool outside of the Microsoft ecosystem. A sketch of such a query
appears after this list.
Datamart editor: Consumers can query a Power BI datamart by using the datamart
editor. It's a web-based visual query editor for creating no-code queries. There's
also a web-based SQL editor for when consumers prefer to write SQL queries. Both
editors query the managed Azure SQL Database that underlies the Power BI
datamart (rather than the built-in dataset).
SQL endpoint: Consumers can query a Power BI datamart by using the SQL
endpoint. They can use tools like Azure Data Studio or SQL Server Management
Studio (SSMS) to run SQL queries. The SQL endpoint directs queries to the
managed Azure SQL Database that underlies the Power BI datamart (rather than
the built-in dataset).
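As an illustration of the XMLA endpoint technique, the following DAX query is a minimal sketch of what a consumer with Build permission might run from an XMLA-capable tool, such as SQL Server Management Studio or DAX Studio. The 'Date' table, its Calendar Year column, and the [Total Sales] measure are assumptions for this sketch rather than objects described in this article.

    // Summarize an assumed [Total Sales] measure by year for a dataset
    // that's exposed through the XMLA endpoint.
    EVALUATE
    SUMMARIZECOLUMNS (
        'Date'[Calendar Year],
        "Sales Amount", [Total Sales]
    )
    ORDER BY 'Date'[Calendar Year]

Because the query runs under the consumer's identity, any row-level security defined on the dataset is enforced for users who have only Read and Build permissions (as described later in this article).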

For more information about the Build permission, see the Content creator security
planning article.

Checklist - When planning which query techniques consumers will use, key decisions
and actions include:

" Create guidance for users on using Analyze in Excel: Provide documentation and
training for consumers on the best way to reuse existing datasets with Excel.
" Create guidance for users on using the XMLA endpoint: Provide documentation
and training for consumers on the best way to reuse existing datasets with the
XMLA endpoint.
" Create guidance for users on datamart queries: Provide documentation and
training for consumers on the available techniques for querying Power BI datamarts.

Request access workflow for consumers


When sharing content, it's common that one user forwards a link (URL) to another user.
When the recipient tries to view the content, and discovers that they don't have access
to it, they can select the Request access button. This action initiates the Request access
workflow. The user is then asked to provide a message to explain why they want access.

A Request access workflow exists for:

Access to a Power BI app.
Access to an item, like a report or dashboard.
Access to a dataset. For more information about the Request access workflow when
a dataset is discoverable, see the Content creator security planning article.

App access requests


There are two ways to learn about pending access requests that have been submitted
for an app.

Email: The contact(s) for the app receive an email notification. By default, this
contact is the app publisher. To provide better support for critical apps, we
recommend that you set the contact to a group that's able to respond quickly to
access requests.
Manage permissions menu: Workspace administrators and members can view,
approve, or decline access requests. The Manage permissions page is available on
the Apps page, and can be opened for each app. This capability is also available to
workspace contributors when the Allow contributors to update the app for this
workspace setting is enabled.

Pending access requests for an app show the message provided by the user. Each
pending request can be approved or declined. When choosing to approve a request, an
app audience must be selected.

The following screenshot shows a pending access request from a user. To approve it,
one of the two app audiences, Sales Reps or Sales Leadership, must be selected.

When you publish an app, the content and the permissions are published at the same
time. As previously described, it's not possible to publish only the app permissions
without content changes too. However, there's one exception: When you approve a
pending access request (such as the one shown in the previous screenshot), the
permission change occurs without publishing the latest content in the workspace.

Workspace access requests


Workspace access is granted by users who belong to the Admin role or Member role.

A user who is attempting to view a workspace receives an access denied message when
they aren't a member of a workspace role. Since there isn't a built-in Request access
workflow for workspaces, they're best used for small teams and informal teams who
work closely together. That's one reason why a Power BI app is better suited to larger
teams and broader content distribution scenarios.

Per-item access requests


There are two ways to learn about pending access requests that have been submitted
for an individual item, like a report.

Email: The contact(s) for the item receive an email notification. To provide better
support for critical reports, we recommend that you set the contact to a group
that's able to respond quickly to access requests.
Manage permissions menu: Workspace administrators and members may access
the Manage permissions page for each item. They can view, approve, or decline
access pending requests.

Manage access requests with groups


When a user submits the Request access form for a Power BI item (like a report or
dataset) or a Power BI app, the request is submitted for an individual user. However,
many large organizations need to use groups to comply with their internal security
policies.

We recommend that you use groups, rather than individuals, for securing content
whenever practical. For more information about planning for groups, see the Tenant-
level security planning article.

If you intend to provide access to groups instead of individual users, the content owner
or administrator that's processing the request for access will need to complete the
request in multiple steps:

1. Decline the pending request in Power BI (because it's associated with an individual
user).
2. Add the requestor to the correct group according to your current process.
3. Notify the requestor that they now have access.

 Tip

See Request access workflow for creators for information about responding to
requests for Build access from content creators. It also includes recommendations
about using a form for access requests.

Checklist - When planning the Request access workflow, key decisions and actions
include:

" Determine who should handle app access requests: Ensure that a process is in
place to handle app access requests in a timely manner. Ensure that app contacts
are assigned to support the process.
" Determine who should handle per-item requests: Ensure that a process is in place
to handle access requests in a timely manner. Ensure that contacts are assigned to
each item to support the process.
" Include in documentation and training for content creators: Ensure that content
creators understand how to handle access requests in a timely manner. Make them
aware of how to handle requests when a group should be used instead of an
individual user.
" Include in documentation and training: Include guidance for your content creators
on how to manage access requests effectively. Also include guidance for consumers
on what information to include in their access request message.
Enforce data security based on consumer
identity
You can plan to create fewer datasets and reports by enforcing data security. The
objective is to enforce data security based on the identity of the user who's viewing the
content.

For example, consider that you can share a single sales report with all salespeople
(consumers), knowing that each salesperson will only see sales results for their region.
This approach allows you to avoid the complexity of creating separate reports per region
that would need to be shared with the salespeople from that sales region.

Some organizations have specific requirements for endorsed (certified or promoted)
datasets or datamarts. For data that will be widely used, there might be a requirement
to use data security.

You can accomplish data security in multiple ways.

Power BI dataset: As a Power BI data creator, you can enforce row-level security
(RLS) and object-level security (OLS). RLS involves defining roles and rules that
filter data model rows, while OLS restricts access to specific tables or columns.
Both techniques are described later in this section.
Analysis Services: A live connection dataset can connect to a remote data model,
which is hosted by either Azure Analysis Services (AAS) or SQL Server Analysis
Services (SSAS). The remote model can enforce RLS or OLS based on the consumer
identity.
Data source: Some data sources, like Azure SQL Database, can enforce RLS. In this
case, the Power BI model may take advantage of the existing security rather than
redefining it. That approach can be a significant advantage when RLS defined in
the source is complex. You can develop and publish a DirectQuery model and set
the data source credentials of the dataset in the Power BI service to enable single
sign-on (SSO). When a report consumer opens a report, Power BI passes their
identity to the data source. The data source then enforces RLS based on the
identity of the report consumer. For more information about Azure SQL Database
RLS, see this article.

7 Note

Source systems, like Azure SQL Database, can also use techniques like views to
narrow down what the user can see. While that's a valid technique, it's not relevant
to the focus of this section.
Row-level security
Row-level security (RLS) allows a data modeler to restrict access to a subset of data. It's
typically used to ensure that some report consumers can't see specific data, like sales
results of other sales regions.

 Tip

If you've noticed someone creating multiple data models to support different
groups of consumers, check whether RLS will satisfy their requirements. It's typically
better to create, test, and maintain one data model rather than multiple data
models.

There are two steps for setting up RLS: rules and role mappings.

RLS rules

For datasets, a data modeler can set up RLS in Power BI Desktop by creating one or
more roles. A role has a unique name in the model, and it usually includes one or more
rules. Rules enforce filters on model tables by using Data Analysis Expressions (DAX)
filter expressions. By default, a model has no roles.

) Important

A model without roles means that users (who have permission to query the data
model) have access to all model data.

Rule expressions are evaluated within row context. Row context means the expression is
evaluated for each row by using the column values of that row. When the expression
returns TRUE, the user can see the row. You can define rules that are either static or
dynamic.

Static rules: Use DAX expressions that refer to constants, like [Region] =
"Midwest".
Dynamic rules: Use specific DAX functions that return environmental values (as
opposed to constants). Environmental values are returned from three specific DAX
functions: USERNAME, USERPRINCIPALNAME, and CUSTOMDATA. Defining
dynamic rules is simple and effective when a model table stores username values.
They allow you to enforce a data-driven RLS design. The sketch after this list shows
an example of each type of rule.
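To make the two rule types concrete, here's a minimal sketch of one static and one dynamic filter expression. The 'Sales Territory' and 'Salesperson' table and column names are assumptions for this sketch; each expression would be entered as the rule for the corresponding table within a role.

    // Static rule on an assumed 'Sales Territory' table: every member of the
    // role sees only rows for the Midwest region.
    'Sales Territory'[Region] = "Midwest"

    // Dynamic rule on an assumed 'Salesperson' table: each user sees only the
    // rows that match the user principal name (UPN) they signed in with.
    'Salesperson'[Email Address] = USERPRINCIPALNAME()

Filters applied by RLS rules propagate through model relationships, so a rule on a dimension table typically also restricts the related fact table rows that a consumer can see.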
RLS role mappings
After you publish the model to the Power BI service, you must set up role mappings in
advance of users accessing related reports. Role mapping involves assigning Azure AD
security objects to roles. Security objects can be user accounts or security groups.

Whenever possible, it's a best practice to map roles to security groups. That way, there
will be fewer mappings, and group membership management can be handled by the
owner of the group.

We recommend that you make security account information from Azure AD available to
your content creators. One option is to create a dataflow with data that's kept in sync
with Azure AD. That way, content creators can integrate the dataflow data to produce a
data-driven dataset.
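For example, a data-driven design could place a single dynamic rule on a security mapping table that's loaded from such a dataflow. The 'User Security' table, its columns, and the relationship described in the comments are assumptions for this sketch rather than a prescribed design.

    // Rule on an assumed 'User Security' mapping table that stores one row per
    // combination of user and region (for example, populated from an Azure AD
    // dataflow). Each user is filtered to their own mapping rows.
    'User Security'[UserPrincipalName] = USERPRINCIPALNAME()
    // Assumption: a relationship from 'User Security'[Region] to the region
    // dimension, with the "Apply security filter in both directions" option
    // enabled, propagates this filter to the dimension and fact tables, so no
    // other rules are required.

With this pattern, granting a user access to another region is a data change (adding a row to the mapping source) rather than a change to the model or its roles.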

 Tip

It's possible to define a role that has no rules. In this case, the role provides access
to all rows of all model tables. Setting up this type of role is suitable when an
administrator or user is allowed to view all data in the model.

RLS user experience


Some organizations choose to purposefully use RLS as a secondary layer of security, in
addition to standard Power BI permissions. Consider the following scenario: You share a
link to a report with the entire organization. Any user who views the report must be
mapped to an RLS role to be able to see data in the report. If they aren't mapped to an
RLS role, they won't see any data.

The presence of RLS changes the default experience for consumers.

When RLS isn't defined for the dataset: Creators and consumers with at least Read
permission on the dataset can view all data in the dataset.
When RLS is defined on the dataset: Creators and consumers with only Read
permission on the dataset will only be able to view the data they're allowed to see
(based on their RLS role mapping).

7 Note

Some organizations enforce RLS as an additional layer of security, especially when
sensitive data is involved. For this reason, you might choose to require RLS for
datasets that are certified. That requirement can be accomplished with an internal
review and approval process prior to certifying the dataset.

When a user views a report in either a workspace or an app, RLS may or may not be
enforced depending on their dataset permissions. For this reason, it's critical that
content consumers and creators only possess Read permission on the underlying
dataset when RLS must be enforced.

Here are the permission rules that determine whether RLS is enforced.

User has Read permission on the dataset: RLS is enforced for the user.
User has Read and Build permissions on the dataset: RLS is enforced for the user.
User has Write permission on the dataset: RLS isn't enforced for the user,
meaning that they can see all data in the dataset. The Write permission provides
the ability to edit a dataset. It can be granted in one of two ways:
With the Contributor, Member, or Admin workspace roles (for the workspace
where the dataset is stored).
With the per-item Write dataset permission.

 Tip

For more information about how to use separate workspaces so that RLS works for
content creators, see the managed self-service BI usage scenario.

For more information about RLS, see Restrict access to Power BI model data.

RLS for datamarts

Power BI datamarts can also enforce RLS. However, the implementation is different.

The main difference is that RLS for datamarts is set up in the Power BI service, rather
than in Power BI Desktop.

Another difference is that datamarts enforce RLS on both the dataset and the managed
Azure SQL Database that's associated with the datamart. Enforcing RLS at both layers
provides consistency and flexibility. The same RLS filters are applied regardless of how
the user queries the data, whether it's by connecting to the dataset or to the managed
Azure SQL Database.

For more information, see RLS for datamarts.


Object-level security
Object-level security (OLS) allows a data modeler to restrict access to specific tables and
columns, and their metadata. You typically use OLS to ensure sensitive columns, like
employee salary, aren't visible to certain users. While it isn't possible to restrict access to
measures, any measure that references a restricted column will itself be restricted.

Consider an example of an employee table. It contains columns that store the employee
name and phone number, and also salary. You can use OLS to ensure that only certain
users, like senior Human Resources staff, can see salary values. For those users that can't
see salary values, it's as if that column doesn't exist.

Take care, because if a Power BI report visual includes salary, users that don't have
access to that field will receive an error message. The message will inform them that the
object doesn't exist. To these users, it looks like the report is broken.

7 Note

You can also define perspectives in a data model. A perspective defines viewable
subsets of model objects to help provide a specific focus for report creators.
Perspectives aren't intended to restrict access to model objects. A user can still
query a table or column even when it's not visible to them. Therefore, consider
perspectives as a user convenience rather than a security feature.

There isn't currently an interface in Power BI Desktop to set up OLS. You can use Tabular
Editor, which is a third-party tool for creating, maintaining, and managing models. For
more information, see the advanced data model management usage scenario.

For more information about OLS, see Restrict access to Power BI model objects.

Checklist - When planning for RLS and OLS, key decisions and actions include:

" Decide on the strategy for use of RLS: Consider for which use cases and purposes
you intend to use row-level security.
" Decide on the strategy for use of OLS: Consider for which use cases and purposes
you intend to use object-level security.
" Consider requirements for certified content: If you have a process for what's
required to certify a dataset, decide whether to include any specific requirements
for using RLS or OLS.
" Create and publish user guidance: Create documentation for users that includes
requirements and preferences for using RLS and OLS. Describe how to obtain user
mapping information if it exists in a centralized location.
" Update training materials: Include key information about requirements and
preferences for RLS and OLS in user training materials. Provide examples for users
to understand when it's appropriate to use either data security technique.

Next steps
In the next article in this series, learn about security planning for content creators who
are responsible for creating datasets, dataflows, datamarts, reports, or dashboards.
Power BI implementation planning:
Content creator security planning
Article • 03/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This security planning article describes strategies for content creators who are
responsible for creating datasets, dataflows, datamarts, reports, or dashboards. It's
primarily targeted at:

Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They may need to collaborate with Power BI administrators,
information security teams, and other relevant teams.
Content creators and owners: Self-service BI creators who need to create, publish,
secure, and manage content that others consume.

The series of articles is intended to expand upon the content in the Power BI security
white paper. While the Power BI security white paper focuses on key technical topics
such as authentication, data residency, and network isolation, the primary goal of the
series is to provide you with considerations and decisions to help you plan for security
and privacy.

In an organization, many users are content creators. Content creators produce and
publish content that's viewed by others. Content creators are the focus of this article.

 Tip

We recommend that you review the Report consumer security planning article
first. It describes strategies for securely providing content to read-only consumers
including how to enforce data security.

Strategy for content creators


The foundation of a well-governed self-service BI system begins with content creators
and owners. They create and validate datasets and reports. In many cases, content
creators also set up permissions to manage security for their content.

 Tip

We recommend that you foster a data culture that makes security and protection
of data a normal part of everyone's role. To achieve that objective, user education,
support, and training is essential.

For purposes of security and permissions, consider that there are two types of content
creators: data creators and report creators. They can be responsible for creating and
managing enterprise BI or self-service BI content.

Data creators
A data creator is any Power BI user who creates datasets, dataflows, or datamarts.

Here are some common data creator scenarios.

Create a new dataset: Create and test a new data model in Power BI Desktop. It's
then published to the Power BI service so that it can be used as a shared dataset
for many reports. For more information about reusing shared datasets, see the
managed self-service BI usage scenario.
Extend and customize a dataset: Create a live connection to an existing shared
dataset in Power BI Desktop. Convert the live connection to a local model, which
allows extending the model design with new tables or columns. For more
information about extending and customizing shared datasets, see the
customizable managed self-service BI usage scenario.
Create a new dataflow: In the Power BI service, create a new dataflow so that it
can be used as a source by many datasets. For more information about reusing
data preparation activities, see the self-service data preparation usage scenario.
Create a new datamart: In the Power BI service, create a new datamart.

Data creators are often found in enterprise BI teams and in the Center of Excellence
(COE). They also have a key role to play in decentralized business units and departments
that maintain and manage their own data.

For other considerations about business-led BI, managed self-service BI, and enterprise
BI, see the Content ownership and management article.
Report creators
Report creators create reports and dashboards to visualize data that's sourced from
existing datasets.

Here are some common report creator scenarios.

Create a new report including a data model: Create and test a new report and
data model in Power BI Desktop. The Power BI Desktop file that contains one or
more report pages and includes a data model is published to the Power BI service.
New content creators commonly use this method before they're aware of using
shared datasets. It's also appropriate for narrow use cases that have no need for
data reuse.
Create a live connection report: Create a new Power BI report that connects to a
shared dataset in the Power BI service. For more information about reusing shared
datasets, see the managed self-service BI usage scenario.
Create a connected Excel workbook: Create a new Excel report that connects to a
shared dataset in the Power BI service. Connected Excel experiences, rather than
downloads of data, are highly encouraged.
Create a DirectQuery report: Create a new Power BI report that connects to a
supported data source in DirectQuery mode. One situation when this method is
useful is when you want to take advantage of user security that's implemented by
the source system.

Report creators are found throughout every business unit in the organization. There are
usually many more report creators in an organization than data creators.

 Tip

While not every dataset is a shared dataset, it's still worth adopting a managed
self-service BI strategy. This strategy reuses shared datasets whenever possible. In
that way, report creation and data creation are decoupled. Any content creator from
any business unit can effectively use this strategy.

Permissions for creators


This section describes the most common permissions for data creators and report
creators.

This section isn't intended to be an all-inclusive list of every possible permission. Rather,
it's intended to help with planning your strategy for supporting different types of
content creators. Your goal should be to follow the principle of least privilege. This
principle allows enough permissions for users to be productive, without over-
provisioning permissions.

Creating new content


The following permissions are commonly required for creating new content.

Permission | Report creator | Dataset creator | Dataflow creator | Datamart creator

Access to the underlying data source

Dataset Read and Build permissions

Dataflow Read permission (when a dataflow is used as a source, via the workspace Viewer role)

Access where original Power BI Desktop file is stored

Permission to use custom visuals

Publishing content permissions


The following permissions are commonly required for publishing content.

Permission | Report creator | Dataset creator | Dataflow creator | Datamart creator

Workspace role: Contributor, Member, or Admin

Dataset Write permission (when the user doesn't belong to a workspace role)

Deployment pipeline role to publish items (optional)

Refreshing data
The following permissions are commonly required for refreshing data.

Permission | Report creator | Dataset creator | Dataflow creator | Datamart creator

Owner assigned (who has set up settings or taken over the item)

Access to the underlying data source (when a gateway isn't used)

Access to the data source in a gateway (when the source is on-premises or in a virtual network)

The remainder of this article describes considerations for content creator permissions.

 Tip

For permissions related to viewing content, see the Report consumer security
planning article.

Checklist - When planning your security strategy for content creators, key decisions and
actions include:

" Determine who your data creators are: Ensure that you're familiar with who's
creating datasets, dataflows, and datamarts. Verify that you understand what their
needs are before starting your security planning activities.
" Determine who your report creators are: Ensure that you're familiar with who's
creating reports, dashboards, workbooks, and scorecards. Verify that you
understand what their needs are before starting your security planning activities.

Discover content for creators


Users can rely on data discovery to find datasets and datamarts. Data discovery is a
Power BI feature that allows content creators to locate existing data assets—even when
they don't have any permissions for that content.

Discovery of existing data is useful for:

Report creators who want to use an existing dataset for a new report.
Report creators who want to query data from an existing datamart.
Dataset creators who want to use an existing dataset for a new composite model.

7 Note

Data discovery in Power BI isn't a data security permission. It's a setting that allows
report creators to read metadata, helping them to discover data and request access
to it.

You can set up a dataset or datamart as discoverable when it's been endorsed (certified
or promoted). When it's discoverable, content creators can find it in the data hub.

A content creator can also request access to the dataset or datamart. Essentially, an
access request asks for Build permission, which is required to create new content based
on it. When responding to requests for access, consider using groups instead of
individual users. For more information about how to use groups for this purpose, see
Request access workflow for consumers.

Consider the following three examples.

The Sales Summary dataset is certified. It's the trusted and authoritative source for
sales tracking. Many self-service report creators throughout the organization use
this dataset. So, there are many existing reports and composite models based on
the dataset. To encourage other creators to find and use the dataset, it's set as
discoverable.
The Inventory Stats dataset is certified. It's a trusted and authoritative source for
inventory analysis. The dataset and related reports are maintained and distributed
by the enterprise BI team. Due to the complex design of the dataset, only the
enterprise BI team is allowed to create and maintain inventory content. Since the
goal is to discourage report creators from using the dataset, it isn't set as
discoverable.
The Executive Bonuses dataset contains highly confidential information.
Permissions to view or update this dataset are restricted to a few users. This
dataset isn't set as discoverable.

The following screenshot shows a dataset in the data hub in the Power BI service.
Specifically, it shows an example of a Request access message for a discoverable dataset.
This message is shown when the user doesn't currently have access. The Request access
message has been customized in the dataset settings.

The Request access message reads: For standard sales reporting of MTD/QTD/YTD, this
dataset is the authoritative and certified source. Please request access to the dataset by
completing the form located at https://COE.contoso.com/RequestAccess. You will be
asked for a brief business justification, and the manager of the Center of Excellence will be
required to approve the request as well. Access will be audited every six months.

7 Note

Your data culture and your stance on data democratization should strongly
influence whether you enable data discovery. For more information about data
discovery, see the customizable managed self-service BI usage scenario.

There are three tenant settings related to discovery.

The Discover content tenant setting allows Power BI administrators to set which
groups of users are allowed to discover data. It's primarily targeted at report
creators who may need to locate existing datasets when creating reports. It's also
useful for dataset creators who might look for existing data that they can use in
their composite model development. While it's possible to set it for specific
security groups, it's a good idea to enable the setting for the entire organization.
The discovery setting on individual datasets and dataflows will control what's
discoverable. Less commonly, you might consider restricting this capability only to
approved content creators.
The Make certified content discoverable tenant setting allows Power BI
administrators to set which groups can set content to be discoverable (when they
also have permission to edit the item as well as permission to certify content,
which is granted by the Certification tenant setting). The ability to certify content
should be tightly controlled. In most cases, the same users who are allowed to
certify content should be allowed to set it as discoverable. In some situations, you
might want to restrict this capability only to approved data creators.
The Make promoted content discoverable tenant setting allows Power BI
administrators to set which groups can set the content as discoverable (when they
also have permissions to edit the data). Because the ability to promote content is
open to all content creators, in most cases, this capability should be available to all
users. Less commonly, you might consider restricting this capability to only
approved content creators.

Checklist - When planning data discovery for your content creators, key decisions and
actions include:

" Clarify needs for data discovery: Consider what your organization's position is on
encouraging content creators to find existing datasets and datamarts. When
appropriate, create a governance policy about how data discovery should be used.
" Decide who can discover content: Decide whether any Power BI user is allowed to
discover content, or whether discovery should be limited to certain groups of users
(for example, a group of approved content creators). Set the Discover content tenant
setting to align with this decision.
" Decide who can set certified content to be discoverable: Decide whether any
Power BI user (who has permission to edit the dataset or datamart, as well as
permission to certify it) can set it as discoverable. Set the Make certified content
discoverable tenant setting to align with this decision.
" Decide who can set promoted content to be discoverable: Decide whether any
Power BI user (who has permission to edit the dataset or datamart) can set it as
discoverable. Set the Make promoted content discoverable tenant setting to align
with this decision.
" Include in documentation and training for dataset creators: Include guidance for
your dataset creators about when it's appropriate to use data discovery for the
datasets and datamarts they own and manage.
" Include in documentation and training for report creators: Include guidance for
your content creators about how data discovery works, and what they can expect.

Request access workflow for creators


A user can request access to content in two ways.

For content consumers: A user receives a link to an existing report or app in the
Power BI service. To view the item, the consumer can select the Request access
button. For more information, see the Report consumer security planning article.
For content creators: The user discovers a dataset or datamart in the data hub. To
create a new report or composite model based on the existing data, the content
creator can select the Request access button. This experience is the focus of this
section.

By default, a request for access to a dataset or a datamart goes to the owner. The owner
is the user who last scheduled data refresh or input credentials. Relying on one user to
process access requests might be acceptable for team datasets. However, that may not
be practical or reliable.

Instead of relying on one owner, you can define custom instructions that are presented
to users when they request access to a dataset or datamart. Custom instructions are
helpful when:

The dataset is set as discoverable.
Approval of the access request will be done by someone other than the data
owner.
There's an existing process in place that needs to be followed for access requests.
Tracking of who requested access, when, and why is necessary for auditing or
compliance reasons.
Explanation is necessary for how to request access, and to set expectations.

The following screenshot shows an example of setting up custom instructions that a
user sees when they request the Build permission. The custom instructions read: For
standard sales reporting of MTD/QTD/YTD, this dataset is the authoritative and certified
source. Please request access to the dataset by completing the form located at
https://COE.contoso.com/RequestAccess. You will be asked for a brief business
justification, and the manager of the Center of Excellence will be required to approve the
request as well. Access will be audited every six months.
There are many options to create a form. Power Apps and Microsoft Forms are both
low-code, easy-to-use options. We recommend that you create a form in a way that's
independent of a single user. It's crucial that your form is created, managed, and
monitored by the proper team.

We recommend that you create helpful information for:

Content creators so they know what to expect when they request access.
Content owners and administrators so they know how to manage requests that are
submitted.

 Tip

For more information about responding to requests for read access from
consumers, see Request access workflow for consumers. It also includes
information about using groups (instead of individual users).

Checklist - When planning the Request access workflow, key decisions and actions
include:

" Clarify preferences for how to handle access requests: Determine for which
situations it's acceptable for owner approval, and when a different process should
be used. When appropriate, create a governance policy about how access requests
should be handled.
" Include in documentation and training for dataset and datamart creators: Include
guidance for your dataset and datamart creators about how and when to set
custom instructions for access requests.
" Include in documentation and training for report creators: Include guidance for
your report creators about what they can expect when requesting Build permissions
for datasets and datamarts.

Create and publish content


This section includes security aspects that apply to content creators.

7 Note

For consumers who view reports, dashboards, and scorecards, see the Report
consumer security planning article. Considerations related to app permissions are
covered in that article, too.

Workspace roles
You grant workspace access by adding users or groups (including security groups,
Microsoft 365 groups, and distribution lists) to workspace roles. Assigning users to
workspace roles allows you to specify what they can do with the workspace and its
content.

7 Note

For more information about workspace planning considerations, see the workspace
planning articles. For more information about groups, see the Tenant-level security
planning article.

Because the primary purpose of a workspace is collaboration, workspace access is
mostly relevant for the users who own and manage its content. When starting to plan
for workspace roles, it's helpful to ask yourself the following questions.

What are the expectations for how collaboration will occur in the workspace?
Who will be responsible for managing the content in the workspace?
Is the intention to assign individual users or groups to workspace roles?
There are four Power BI workspace roles: Admin, Member, Contributor, and Viewer. The
first three roles are relevant to content creators, who create and publish content. The
Viewer role is relevant to read-only consumers.

The four workspace role permissions are nested. That means that workspace
administrators have all the capabilities available to members, contributors, and viewers.
Likewise, members have all the capabilities available to contributors and viewers.
Contributors have all the capabilities available to viewers.

 Tip

See the workspace roles documentation for the authoritative reference for each of
the four roles.

Workspace administrator
Users assigned to the Admin role become the workspace administrators. They can
manage all settings and perform all actions, including adding or removing users
(including other workspace administrators).

Workspace administrators can update or delete the Power BI app (if one exists). They
can, optionally, allow contributors to update the app for the workspace. For more
information, see Variations to workspace roles later in this article.

 Tip

When referring to an administrator, be sure to clarify whether you're speaking
about a workspace administrator or a Power BI tenant-level administrator.

Take care to ensure that only trusted and reliable individuals are workspace
administrators. A workspace administrator has high privileges. They have access to view
and manage all content in the workspace. They can add and remove users (including
other administrators) to any workspace role. They can also delete the workspace.

We recommend that there are at least two administrators so that one serves as a backup
should the primary administrator be unavailable. A workspace that doesn't have an
administrator is known as an orphaned workspace. The orphaned status occurs when a
user leaves the organization and there's no alternative administrator assigned to the
workspace. For more information about how to detect and rectify orphaned workspaces,
see the View workspaces article.
Ideally, you should be able to determine who's responsible for the workspace content
by who the workspace administrators and members are (and the contacts specified for
the workspace). However, some organizations adopt a content ownership and
management strategy that restricts workspace creation to specific users or groups. They
typically have an established workspace creation process that may be managed by the
IT department. In this case, the workspace administrators may be the IT department
rather than the users who directly create and publish the content.

Workspace member

Users assigned to the Member role can add other workspace users (but not
administrators). They can also manage permissions for all content in the workspace.

Workspace members can publish or unpublish the app for the workspace, share a
workspace item or the app, and allow other users to share workspace items of the app.

Workspace members should be limited to the users who need to manage the workspace
content creation and publish the app. In some cases, the workspace administrators fulfill
that purpose, so you might not need to assign any users or groups to the Member role.
When the workspace administrators aren't directly related to the workspace content (for
example, because IT manages the workspace creation process), the workspace members
might be the true owners responsible for the workspace content.

Workspace contributor
Users assigned to the Contributor role can create, edit, or delete workspace content.

Contributors can't update the Power BI app (when one exists for the workspace) unless
that's been allowed by the workspace setting. For more information, see Variations to
workspace roles later in this article.

Most content creators in the organization are workspace contributors.

Workspace viewer
Users assigned to the Viewer role can view and interact with all workspace content.

The Viewer role is relevant to read-only consumers for small teams and informal
scenarios. It's fully described in the Report consumer security planning article.

Workspace ownership considerations


Consider an example where the following actions are taken to set up a new workspace.

1. Specific Power BI champions and satellite members of the Center of Excellence
(COE) have been granted permission in the tenant settings to create new
workspaces. They've been trained in content organization strategies and naming
standards.
2. You (a content creator) submit a request to create a workspace for a new project
that you'll manage. The workspace will include reports that track the progress of
your project.
3. A Power BI champion for your business unit receives the request. They determine
that a new workspace is justified. They then create a workspace and assign the
Power BI champions security group (for their business unit) to the workspace
Admin role.
4. The Power BI champion assigns you (the content creator) to the workspace
Member role.
5. You assign a trusted colleague to the workspace Member role to ensure there's a
backup should you be away.
6. You assign other colleagues to the workspace Contributor role because they'll be
responsible for creating the workspace content, including datasets and reports.
7. You assign your manager to the workspace Viewer role because they've requested
access to monitor the progress of the project. Your manager would like to review
content in the workspace before you publish an app.
8. You take responsibility for managing other workspace properties such as
description and contacts. You also take responsibility for managing workspace
access on an ongoing basis.

The previous example shows an effective way to allow a decentralized business unit the
ability to act independently. It also shows the principle of least privilege.

For governed content, or critical content that's more tightly managed, it's a best practice
to assign groups rather than individual user accounts to workspace roles. That way, you
can manage the group membership separately from the workspace. However, when you
assign groups to roles, it's possible that users may become assigned to multiple
workspace roles (because the user belongs to multiple groups). In that case, their
effective permissions are based on the highest role that they're assigned to. For more
considerations, see Strategy for using groups.

When a workspace is co-owned by multiple individuals or teams, it can make
management of the content complicated. Try to avoid multi-team ownership scenarios
by separating out workspaces. That way, responsibilities are clear and role assignments
are straightforward to set up.
Variations to workspace roles
There are two variations to the four workspace roles (described previously).

By default, only workspace administrators and members can create, publish, and
update the app for the workspace. The Allow contributors to update the app option
for this workspace setting is a workspace-level setting, which lets workspace
administrators delegate the ability to update the app for the workspace to
contributors. However, contributors can't publish a new app or change who has
permission to edit it. This setting is useful when you want contributors to be able
to update the app (when one exists for the workspace), yet not grant the other
permissions available to members.
The Block republish and disable package refresh tenant setting only allows dataset
owners to publish updates. When enabled, workspace administrators, members,
and contributors can't publish changes unless they first take over the dataset as its
owner. Because this setting applies to the entire organization, enable it with a
measure of caution because it affects all datasets for the tenant. Be sure to
communicate to your dataset creators what to expect because it changes the
normal behavior of workspace roles.

) Important

Per-item permissions can also be thought of as an override of the standard
workspace roles. For more information about per-item permissions, see the Report
consumer security planning article.

Checklist - When planning for workspace roles, key decisions and actions include:

" Create a responsibility matrix: Map out who is expected to handle each function
when creating, maintaining, publishing, securing, and supporting content. Use this
information when planning your workspace roles.
" Decide on your strategy for assigning workspace roles for content creators:
Determine which users should be an administrator, member, or contributor, and in
what circumstances (such as job role or subject area). If there are mismatches that
cause a security concern, reconsider how your workspaces could be better
organized.
" Determine how security groups versus individuals should be used for workspace
roles: Determine the use cases and purposes for which you'll need to use groups. Be specific
about when security may be applied by using user accounts versus when a group is
required or preferred.
" Provide guidance for content creators about managing workspace roles: Include
documentation for content creators about how to manage workspace roles. Publish
this information to your centralized portal and training materials.
" Set up and test workspace role assignments: Verify that content creators have the
functionality they need for editing and publishing content.
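
To support the last checklist item, it can help to script workspace role assignments so that they're repeatable and easier to audit. The following is a minimal sketch that assigns a security group to the workspace Member role by calling the Power BI REST API (Groups - Add Group User). The access token, workspace ID, and group object ID are hypothetical placeholders, and you should verify the request body against the current REST API reference before relying on it.

```python
import requests

# Minimal sketch: assign a security group to a workspace role by using the
# Power BI REST API (Groups - Add Group User). Assumes an Azure AD access token
# with the Power BI API scope has already been acquired; the IDs below are
# hypothetical placeholders.
ACCESS_TOKEN = "<azure-ad-access-token>"
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"       # target workspace ID
SECURITY_GROUP_ID = "11111111-1111-1111-1111-111111111111"  # Azure AD group object ID

url = f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/users"
payload = {
    "identifier": SECURITY_GROUP_ID,
    "principalType": "Group",          # could also be "User" with an email address
    "groupUserAccessRight": "Member",  # Admin | Member | Contributor | Viewer
}

response = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
print("Role assignment created:", response.status_code)
```

Running the same script against a test workspace is also a simple way to confirm that the group membership and resulting effective permissions behave as expected before applying them to governed content.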

App creator permissions


Content creators who are workspace administrators or members can create and publish
a Power BI app.

A workspace administrator may also specify a setting in the workspace which allows
workspace contributors to update the app. It's a variation to workspace role security
because it grants contributors an additional permission they wouldn't normally have. This
setting is set on a per-workspace basis.

 Tip

For more information about delivering content to read-only consumers, see the
Report consumer security planning article. This article includes information about
app permissions for app consumers, including audiences for the app.

Checklist - When planning for app creator permissions, key decisions and actions
include:

" Decide on your strategy for who can create and publish Power BI apps: Clarify
who should be permitted to create and publish Power BI apps.
" Determine when contributors can update Power BI apps: Clarify the situations
when a contributor should be permitted to update Power BI apps. Update the
workspace setting when this capability is required.

Data source permissions


When a data creator starts a new project, permissions required to access external data
sources are one of their first security-related considerations. They may also need
guidance on other data source related matters, including privacy levels, native database
queries, and custom connectors.

Access to data source

When a data creator creates a dataset, dataflow, or datamart, they must authenticate
with data sources to retrieve data. Usually, authentication involves user credentials
(account and password), which could be for a service account.

Sometimes it's useful to create specific service accounts for accessing data sources.
Check with your IT department for guidance on how service accounts should be used in
your organization. When they're permitted, the use of service accounts can:

Centralize permissions needed for data sources.
Reduce the number of individual users that need permissions to a data source.
Avoid data refresh failures when a user leaves the organization.

 Tip

If you choose to use service accounts, we recommend that you tightly control who
has access to the credentials. Rotate passwords on a regular basis (such as every
three months) or when someone that has access leaves the organization.

When accessing data sources, apply the principle of least privilege to ensure that users
(or service accounts) have permission to read only the data they need. They should
never have permission to perform data modifications. Database administrators who
create these service accounts should inquire about expected queries and workloads and
take steps to ensure adequate optimizations (like indexes) and resources are in place.

 Tip

If it's difficult to provide direct data source access to self-service data creators,
consider using an indirect approach. You can create dataflows in the Power BI
service and allow self-service data creators to source data from them. This
approach has the added benefits of reducing the query load on the data source
and delivering a consistent snapshot of data. For more information, see the self-
service data preparation and advanced data preparation usage scenarios.

The credentials (account and password) can be applied in one of two ways.
Power BI Desktop: Credentials are encrypted and stored locally on the user
machine.
Power BI service: Credentials are encrypted and securely stored for either:
The dataset (when a data gateway isn't in use to reach the data source).
The gateway data source (when a standard gateway or a virtual network
gateway service is in use to reach the data source).

 Tip

When you've already entered credentials for a dataset data source, the Power BI
service will automatically bind those credentials to other dataset data sources when
there's an exact match of connection string and database name. Both the Power BI
service and Power BI Desktop make it look like you're entering credentials for each
data source. However, it can apply the same credentials to matching data sources
that have the same owner. In that respect, dataset credentials are scoped to the
owner.

Credentials are encrypted and stored separately from the data model in both Power BI
Desktop and the Power BI service. This data separation has the following security
advantages.

It facilitates the reuse of credentials for multiple datasets, dataflows, and


datamarts.
When someone parses the metadata of a dataset, they can't extract the
credentials.
In Power BI Desktop, another user can't connect to the original data source to
refresh data without first applying the credentials.

Some data sources support single sign-on (SSO), which can be set when entering
credentials in the Power BI service (for dataset or gateway data sources). When you
enable SSO, Power BI sends the authenticated user's credentials to the data source. This
option enables Power BI to honor the security settings that are set up in the data source,
such as row-level security. SSO is especially useful when tables in the data model use
DirectQuery storage mode.

Privacy levels

Data privacy levels define the degree to which one data source is isolated from
other data sources. When appropriately set, they ensure that Power Query
only transmits compatible data between sources. When Power Query can transmit data
between data sources, it can result in more efficient queries that reduce the volume of
data that's sent to Power BI. When it can't transmit data between data sources, it can
result in slower performance.

There are three privacy levels.

Private: Includes sensitive or confidential data that must be isolated from all other
data sources. This level is the most restrictive. Private data source data can't be
shared with any other data sources. For example, a human resources database that
contains employee salary values should be set to the Private privacy level.
Organizational: Isolated from public data sources but is visible to other
organizational data sources. This level is the most common. Organizational data
source data can be shared with private data sources or other organizational data
sources. Most internal operational databases can be set with the Organizational
privacy level.
Public: Non-sensitive data that could be made visible to any data source. This level
is the least restrictive. Public data source data can be shared with any other data
source. For example, a census report obtained from a government website can be
set to the Public privacy level.

When you combine queries from different data sources, it's important to set the
correct privacy levels. When privacy levels are set correctly, Power Query can transmit
data from one data source to another so that it can query data efficiently.

Consider a scenario where a dataset creator has two data sources: an Excel workbook
and a table in an Azure SQL Database. They want to filter the data in the Azure SQL
Database table by using a value sourced from the Excel workbook. The most efficient
way for Power Query to generate a SQL statement for the Azure SQL Database is to
apply a WHERE clause to perform the necessary filtering. However, that SQL statement
will contain a WHERE clause predicate with a value sourced from the Excel workbook. If
the Excel workbook contains sensitive data, it may represent a security breach because
the database administrator could view the SQL statement by using a tracing tool. While
less efficient, the alternative is for the Power Query mashup engine to download the
entire result set of the database table and perform the filtering itself in the Power BI
service. This approach is slower, but it's secure.

Privacy levels can be set for each data source:

By data modelers in Power BI Desktop.
By dataset owners in the Power BI service (for cloud data sources, which don't
require a gateway).
By gateway data source creators and owners in the Power BI service (for gateway
data sources).

) Important

The privacy levels that you set in Power BI Desktop aren't transferred to the Power
BI service.

There's a Power BI Desktop security option that allows ignoring privacy levels to
improve performance. You might use this option to improve query performance while
developing a data model when there's no risk of breaching data security (because you're
working with development or test data that isn't sensitive). However, this setting isn't
honored by the Power BI service.

For more information, see Power BI Desktop privacy levels.

Native database queries


To create efficient Power Query queries, you can use a native query to access data. A
native query is a statement written in a language supported by the data source. Native
queries are only supported by specific data sources, which are typically relational
databases like Azure SQL Database.

Native queries can pose a security risk because they could run a malicious SQL
statement. A malicious statement could perform data modifications or delete database
records (when the user has the required permissions in the data source). For this reason,
by default, native queries require user approval to run in Power BI Desktop.

There's a Power BI Desktop security option that allows you to disable the requirement
for pre-approval. We recommend that you leave the default setting that requires user
approval, especially when you anticipate that the Power BI Desktop file could be
refreshed by other users.

Custom connectors

Developers can use the Power Query SDK to create custom connectors. Custom
connectors allow access to proprietary data sources or implement specific
authentication with custom data extensions. Some custom connectors are certified and
distributed by Microsoft as certified connectors. Certified connectors have been audited
and reviewed to ensure that they meet certain specified code requirements that
Microsoft has tested and approved.

There's a Power BI Desktop data extension security option that restricts the use of non-
certified connectors. By default, an error is raised when an attempt is made to load a
non-certified connector. If you set this option to allow non-certified connectors, custom
connectors will load without validation or warning.

We recommend that you keep your data extension security level at the higher level,
which prevents loading of non-certified code. However, there may be cases where you
want to load specific connectors, perhaps connectors that you've developed, or
connectors provided to you by a trusted consultant or vendor outside the Microsoft
certification path.

7 Note

Developers of in-house-developed connectors can take steps to sign a connector
with a certificate, allowing you to use the connector without the need to change
your security settings. For more information, see Trusted third-party connectors.

Checklist - When planning data source permissions, key decisions and actions include:

" Decide who can directly access each data source: Determine which data creators
are permitted to access a data source directly. If there's a strategy to reduce the
number of people with direct access, clarify what the preferred alternative is
(perhaps by using dataflows).
" Decide how data sources should be accessed: Determine whether individual user
credentials will be used for accessing a data source, or whether a service account
should be created for that purpose. Determine when single sign-on is appropriate.
" Provide guidance for dataset creators about accessing data sources: Include
documentation for content creators about how to access organizational data
sources. Publish the information to your centralized portal and training materials.
" Provide guidance for dataset creators about privacy levels: Provide guidance to
dataset creators to make them aware of privacy levels, and their implications when
working with sensitive or confidential data. Publish this information to your
centralized portal and training materials.
" Provide guidance for gateway connection creators about privacy levels: Provide
guidance to gateway connection creators to make them aware of privacy levels and their
implications when working with sensitive or confidential data. Publish this
information to your centralized portal and training materials.
" Decide on the strategy for using native database queries: Consider your strategy
for using native database queries. Educate dataset creators on how and when to set
the Power BI Desktop native database queries option to disable pre-approval when
Power Query runs native queries.
" Decide on the strategy for using custom connectors: Consider your strategy for
using custom connectors. Determine whether the use of non-certified connectors is
justified, in which case educate dataset creators on how and when to set the Power
BI Desktop data extension option.

Dataset creator permissions


You can assign permission to edit a dataset to a user or group in different ways.

Workspace role: Assignment to any of the workspace roles provides access to all
datasets in the workspace. The ability to view or edit an existing dataset depends
on the workspace role that you assign. Administrators, members, and contributors
can publish or edit content within a workspace.
Per-item permission links: If a sharing link was created for a report, permission to
read the dataset (and optionally, build, write, and/or reshare) is also indirectly
granted by the link.
Per-item direct access permissions: You can assign the direct access permission to
a specific dataset.

In the following screenshot, notice the permissions assigned to the Call Center Data
dataset. One user has Read permission, which was granted by using per-item direct
access permissions. The remaining users and groups have permissions because they're
assigned to workspace roles.

 Tip

Using per-item permissions (links or direct access) works best when the intention is
for a user or group to view or edit one specific item in the workspace. It's best
suited when the user isn't permitted to access all items in the workspace. In most
cases, we recommend that you design your workspaces so that security is simpler
to manage with workspace roles. Avoid setting per-item permissions whenever
possible.

Dataset permissions

You can assign the following dataset permissions.

Read: Targeted primarily at report consumers, this permission allows a report to
query data in the dataset. For more information about permissions for viewing
read-only content, see the Report consumer security planning article.
Build: Targeted at report creators, this permission allows users to create new
reports based on the shared dataset. For more information, see the Report creator
permissions section later in this article.
Write: Targeted at dataset creators who create, publish, and manage datasets, this
permission allows users to edit the dataset. It's described later in this section.
Reshare: Targeted at anyone with existing permission to the dataset, this
permission allows users to share the dataset with another user. It's described later
in this section.

A workspace administrator or member can edit the permissions for a dataset.

Dataset Read permission


The dataset Read permission is primarily targeted at consumers. This permission is
required for users to be able to view data that's displayed in reports. Be aware that
reports based on the dataset must also have Read permission; otherwise, the report will
fail to load. For more information about setting report Read permissions, see the Report
consumer security planning article.

Dataset Build permission


In addition to dataset Read permission, content creators also need the dataset Build
permission. Specifically, the Build permission allows report creators to:

Create new Power BI reports based on the dataset.
Connect to the dataset by using Analyze in Excel.
Query the dataset by using the XMLA endpoint.
Export Power BI report visual underlying data (instead of the summarized data
retrieved by the visual).
Create a DirectQuery connection to a Power BI dataset. In this case, the new
dataset connects to one or more existing Power BI datasets (known as chaining). To
query chained datasets, the dataset creator will need Build permission for all
upstream datasets. For more information, see Chained datasets later in this article.

You can grant Build permission to a user or group, directly or indirectly, in different
ways.

Grant Build directly by:
Setting dataset permissions on the dataset settings page in the Power BI service.
Setting dataset permissions by using the Power BI REST API (see the sketch later in
this section).
Grant Build indirectly by:
Sharing a report or dashboard and setting the option to grant dataset Build
permission.
Publishing a Power BI app and setting the advanced option (for an audience) to
grant Build permission on the related datasets.
Assigning users to the Admin, Member, or Contributor workspace roles.

Setting Build permission directly for a dataset is appropriate when you want to manage
security on a granular, per-item basis. Setting Build permission indirectly is appropriate
when the users who will view or use the content through one of the indirect methods
will also create new content.
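
For the REST API option mentioned in the list above, the following is a minimal sketch that grants a user Read and Build permission on a single dataset by calling the Datasets - Post Dataset User In Group operation. The access token, IDs, and email address are hypothetical placeholders, and the access right value shown is an assumption that you should verify against the REST API reference.

```python
import requests

# Minimal sketch: grant a user Read and Build permission on a specific dataset by
# using the Power BI REST API (Datasets - Post Dataset User In Group). Assumes an
# Azure AD access token has already been acquired; the IDs and email address are
# hypothetical placeholders.
ACCESS_TOKEN = "<azure-ad-access-token>"
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
DATASET_ID = "22222222-2222-2222-2222-222222222222"

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/users"
)
payload = {
    "identifier": "report.creator@contoso.com",
    "principalType": "User",
    # "ReadExplore" is assumed to map to Read + Build; verify the accepted
    # datasetUserAccessRight values against the REST API reference.
    "datasetUserAccessRight": "ReadExplore",
}

response = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
print("Dataset permission granted:", response.status_code)
```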

 Tip

Frequently, the users who view a report or a Power BI app are different from the
users who create new content by using the underlying dataset(s). Most consumers
are viewers only, so they don't need to create new content. We recommend that
you educate your content creators to grant the least number of permissions that
are required.

Dataset Write permission


Usually, setting permissions for who can edit and manage datasets will be done by
assigning users to either the Admin, Member, or Contributor workspace role. However,
it's also possible to set Write permission for a specific dataset.

We recommend that you use workspace roles whenever possible because it's the
simplest way to manage and audit permissions. Use the dataset Write permissions on a
per-item basis when you've chosen to create fewer workspaces, and a workspace
contains datasets for different subject areas that require different permissions
management.

 Tip

For guidance on how to organize workspaces, see the workspace planning articles.

Dataset Reshare permission

The dataset Reshare permission allows a user with existing permission to share the
dataset with other users. You can grant this permission when content in the dataset can
be freely shared, based on user discretion.

In many cases, we recommend limiting the use of the Reshare permission to ensure that
dataset permissions are carefully controlled. Get approval from the dataset owner(s)
before granting the Reshare permission.

Dataset data security

You can plan to create fewer datasets and reports by enforcing data security. The
objective is to enforce data security based on the identity of the user who's viewing the
content.

A dataset creator can enforce data security in two ways.

Row-level security (RLS) allows a data modeler to restrict access to a subset of
data.
Object-level security (OLS) allows a data modeler to restrict access to specific
tables and columns, and their metadata.

The implementation of RLS and OLS is targeted at report consumers. For more
information, see the Report consumer security planning article. It describes how and
when RLS and OLS are enforced for consumers who have view-only permission to the dataset.

For RLS and OLS targeted at other report creators, see data security in the Report
creator permissions section later in this article.

Chained datasets
Power BI datasets can connect to other datasets in a process known as chaining. A
chained dataset connects to one or more upstream datasets. For more information, see
Using DirectQuery for Power BI datasets and Analysis Services.

The Allow DirectQuery connections to Power BI datasets tenant setting allows Power BI
administrators to set up which groups of content creators can create chained datasets. If
you don't want to restrict dataset creators from chaining datasets, you can leave this
setting enabled for the entire organization and rely on workspace access and dataset
permissions. In some cases, you may consider restricting this capability to approved
content creators.

7 Note

As a dataset creator, you can restrict chaining to your dataset. It's done by enabling
the Discourage DirectQuery connection to this dataset option in Power BI Desktop.
For more information, see Manage DirectQuery connections to a published
dataset.

Dataset API queries


In some situations, you might want to execute a DAX query by using the Power BI REST
API. For example, you might want to perform data quality validations. For more
information, see Datasets - Execute Queries.

The Dataset Execute Queries REST API tenant setting allows Power BI administrators to
set which groups of users can send DAX queries by using the Power BI REST API. In most
cases, you can leave this setting enabled for the entire organization and rely on
workspace access and dataset permissions. In some cases, you may consider restricting
this capability to approved content creators.
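
As an example of this kind of validation, the following is a minimal sketch that runs a simple row count check against a published dataset by calling the executeQueries endpoint. The access token, dataset ID, and DAX query are hypothetical placeholders; the caller must be permitted by the Dataset Execute Queries REST API tenant setting and have the appropriate dataset permissions.

```python
import requests

# Minimal sketch: run a DAX query against a published dataset by using the
# Power BI REST API (Datasets - Execute Queries), for example as a scheduled
# data quality check. Assumes an Azure AD access token; the dataset ID, table
# name, and DAX query are hypothetical placeholders.
ACCESS_TOKEN = "<azure-ad-access-token>"
DATASET_ID = "22222222-2222-2222-2222-222222222222"

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"
payload = {
    "queries": [
        {"query": "EVALUATE ROW(\"RowCount\", COUNTROWS('Sales'))"}
    ],
    "serializerSettings": {"includeNulls": True},
}

response = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()

# The response contains one result set per query; print the rows of the first
# table (shape as documented for the Execute Queries response).
results = response.json()["results"]
print(results[0]["tables"][0]["rows"])
```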

Checklist - When planning for dataset creator permissions, key decisions and actions
include:

" Decide on the strategy for dataset creator permissions: Determine what
preferences and requirements exist for managing security for dataset creators.
Consider the subject area and level of data sensitivity. Also consider who's allowed
to take responsibility for managing data and permissions in centralized and
decentralized business units.
" Review how workspace roles are handled for dataset creators: Determine what the
impact is on your workspace design process. Create separate data workspaces for
each subject area so that you can more easily manage the workspace roles and
dataset security for each subject area.
" Provide guidance for dataset creators about managing permissions: Include
documentation for dataset creators about how to manage dataset permissions.
Publish this information to your centralized portal and training materials.
" Decide who can use DirectQuery connections for Power BI datasets: Decide
whether there should be any limitations for which Power BI dataset creators (with
existing Build permission for a dataset) can create a connection to a Power BI
dataset. Set the Allow DirectQuery connections to Power BI datasets tenant setting to
align with this decision. If you decide to limit this capability, consider using a group
such as Power BI approved dataset creators.
" Decide who can query Power BI datasets by using the REST API: Decide whether
to restrict Power BI content creators from querying Power BI datasets by using the
Power BI REST API. Set the Dataset Execute Queries REST API tenant setting to align
with this decision. If you decide to limit this capability, consider using a group such
as Power BI approved report creators.
" Decide on the strategy for the use of RLS or OLS for dataset creators: Consider
which use cases and purposes you intend to use RLS or OLS. Factor in the
workspace design strategy, and who has read versus edit permissions, when you
want to enforce RLS or OLS for dataset creators.

Report creator permissions


Report creators need workspace access to create reports in the Power BI service or
publish them from Power BI Desktop. They must be either an administrator, member, or
contributor in the target workspace.

Whenever possible, report creators should use an existing shared dataset (via a live
connection or DirectQuery). That way, the report creation process is decoupled from the
dataset creation process. This type of separation provides many benefits for security and
team development scenarios.

Unlike datasets, there isn't a Write permission for reports, so report creators must be
granted access through workspace roles. For this reason, optimal workspace design is
important to balance content organization and security needs.

 Tip

For permissions to support report consumers (including the Read and Reshare per-
item permissions), see the Report consumer security planning article.
Read and Build permissions for underlying dataset

Report creators must have Read and Build permissions on the datasets that their reports
will use, which includes chained datasets. That permission can be granted explicitly on
the individual datasets, or it can be granted implicitly for workspace datasets when the
report creator is a workspace administrator, member, or contributor.

The Use datasets across workspaces tenant setting allows Power BI administrators to set
up which groups of users can create reports that use datasets located in other
workspaces. This setting is targeted at dataset and report creators. Usually, we
recommend that you leave this setting enabled for the entire organization and rely
on workspace access settings and dataset permissions. That way, you can encourage the
use of existing datasets. In some cases, you may consider restricting this capability only
to approved content creators.

There's also the Allow live connections tenant setting, which allows Power BI
administrators to set up which groups of users can create live connections to datasets in
Power BI Desktop or Excel. It's targeted specifically at report creators, and it also
requires that they're granted Read and Build permission on the dataset that the report
will use. We recommend that you leave this setting enabled for the entire organization
and rely on workspace access and dataset permissions. That way, you can encourage the
use of existing datasets. In some cases, you may consider restricting this capability only
to approved content creators.

Data security for underlying dataset


RLS and OLS (described previously in this article) are targeted at report consumers.
However, sometimes they also need to be enforced for report creators. Creating separate
workspaces is justified when RLS needs to be enforced for report creators and also
report consumers.

Consider the following scenario.

Centralized shared datasets with RLS: The enterprise BI team published sales
datasets to the Sales Data workspace. These datasets enforce RLS to show sales
data for the assigned sales region of the report consumer.
Decentralized self-service report creators: The sales and marketing business unit
has many capable analysts who create their own reports. They publish their reports
to the Sales Analytics workspace.
Read and Build permissions for datasets: Whenever possible, the analysts use the
datasets from the Sales Data workspace to avoid the unnecessary duplication of
data. Because the analysts only have Read and Build permissions on these datasets
(with no write or edit permissions), RLS is enforced for the report creators (and also
the report consumers).
Edit permissions for reporting workspace: The analysts have more rights in the
Sales Analytics workspace. The administrator, member, or contributor workspace
roles allow them to publish and manage their reports.

For more information about RLS and OLS, see the Report consumer security planning
article. It describes how and when RLS and OLS are enforced for consumers who have
view-only permission to the dataset.

Connecting to external datasets


When a report creator connects to a shared dataset for their report, they usually connect
to a shared dataset that's been published in their own Power BI tenant. When
permission has been granted, it's also possible to connect to a shared dataset in another
tenant. The other tenant could be a partner, customer, or vendor.

This functionality is known as in-place dataset sharing (also known as cross-tenant
dataset sharing). The reports created by the report creator (or new composite models
created by a dataset creator) are stored and secured in your Power BI tenant by using
your normal process. The original shared dataset remains in its original Power BI tenant,
and all permissions are managed there.

For more information, see the Tenant-level security planning article. It includes
information about the tenant settings and dataset settings that make external sharing
work.

Featured tables

In Power BI Desktop, dataset creators can set a model table to become a featured table.
When the dataset is published to the Power BI service, report creators can use the Data
Types Gallery in Excel to find the featured table, allowing them to add featured table
data to augment their Excel worksheets.

The Allow connections to featured tables tenant setting allows Power BI administrators to
set up which groups of users can access featured tables. It's targeted at Excel users who
want to access Power BI featured tables in Excel organization data types. We
recommend that you leave this setting enabled for the entire organization and rely on
workspace access and dataset permissions. That way, you can encourage the use of
featured tables.

Custom visual permissions

In addition to core visuals, Power BI report creators can use custom visuals. In Power BI
Desktop, custom visuals can be downloaded from Microsoft AppSource. They can also
be developed in-house by using the Power BI SDK, and installed by opening the visual
file (.pbiviz).

Some visuals available for download from AppSource are certified visuals. Certified
visuals meet certain specified code requirements that the Power BI team has tested and
approved. The tests check that visuals don't access external services or resources.

The Allow visuals created by the Power BI SDK tenant setting allows Power BI
administrators to control which groups of users can use custom visuals.

There's also the Add and use certified visuals only tenant setting, which allows Power BI
administrators to block the use of non-certified visuals in the Power BI service. This
setting can be enabled or disabled for the entire organization.

7 Note

If you block the use of non-certified visuals, it only applies to the Power BI service.
If you want to restrict their use in Power BI Desktop, ask your system administrators
to use a group policy setting to block their use in Power BI Desktop. Taking this
step will ensure that report creators don't waste time and effort creating a report
that won't work when published to the Power BI service. We highly recommend
that you set up your users to have consistent experiences in the Power BI service
(with the tenant setting) and Power BI Desktop (with group policy).

Power BI Desktop has an option to show a security warning when a report creator adds
a custom visual to the report. Report creators can disable this option. This option
doesn't test whether the visual is certified.

Power BI administrators can approve and deploy custom visuals for their organization.
Report creators can then easily discover, update, and use these visuals. Administrators
can then manage these visuals by updating versions or disabling and enabling specific
custom visuals. This approach is useful when you want to make an in-house-developed
visual available to your report creators, or when you acquire a custom visual from a
vendor that isn't in AppSource. For more information, see Power BI organizational
visuals.
Consider a balanced strategy of enabling only certified custom visuals in your
organization (with the tenant setting and group policy previously described), while
deploying organizational visuals to handle any exceptions.

Checklist - When planning for report creator permissions, key decisions and actions
include:

" Decide on the strategy for report creator permissions: Determine what
preferences and requirements exist for managing security for report creators.
Consider the subject area and level of data sensitivity. Also, consider who's allowed
to take responsibility for creating and managing reports in centralized and
decentralized business units.
" Review how workspace roles are handled for report creators: Determine what the
impact is on your workspace design process. Create separate data workspaces and
reporting workspaces for each subject area, so the workspace roles (and underlying
dataset security) are simplified for the subject area.
" Provide guidance for report creators about managing permissions: Include
documentation for report creators about how to manage permissions for report
consumers. Publish this information to your centralized portal and training
materials.
" Decide who can use shared datasets: Decide whether there should be any
limitations for which Power BI report creators (who already have Read and Build
permissions for a dataset) can use datasets across workspaces. Set the Use datasets
across workspaces tenant setting to align with this decision. If you decide to limit
this capability, consider using a group such as Power BI approved report creators.
" Decide who can use live connections: Decide whether there should be any
limitations for which Power BI report creators (who already have Read and Build
permission for a dataset) can use live connections. Set the Allow live connections
tenant setting to align with this decision. If you decide to limit this capability,
consider using a group such as Power BI approved report creators.
" Decide on the strategy for use of RLS for report creators: Consider which use
cases and purposes you intend to use row-level security. Factor in the workspace
design strategy to ensure that RLS is enforced for report creators.
" Decide on the strategy for use of custom visuals: Consider your strategy for which
report creators can use custom visuals. Set the Allow visuals created by the Power BI
SDK tenant setting to align with this decision. Create a process for using
organizational visuals, when appropriate.
Dataflow creator permissions
Dataflows are helpful for centralizing data preparation so that the work done in Power
Query isn't repeated across many datasets. They're a building block for achieving a
single source of truth, for preventing analysts from needing direct access to data
sources, and for performing extract, transform, and load (ETL) operations at scale.

A dataflow creator needs to be a workspace administrator, member, or contributor.

To consume a dataflow (for instance, from a new data model created in Power BI
Desktop or in another workspace), a dataset creator can belong to any workspace role,
including the Viewer role. There's no concept of RLS for dataflows.

In addition to workspace roles, the Create and use dataflows tenant setting must be
enabled. This tenant setting applies to the entire organization.

Consider the following scenario.

Many datasets in the organization need to enforce dynamic RLS. It requires that
user principal names (UPNs) be stored in the dataset (to filter by the identity of the
report consumer).
A dataflow creator, who belongs to the Human Resources department, creates a
dataflow of current employee details including their UPNs. They set up the
dataflow to refresh daily.
Dataset creators then consume the dataflow in their model designs to set up RLS.

For more information about using dataflows, see the self-service data preparation and
advanced data preparation usage scenarios.

Checklist - When planning for dataflow creator permissions, key decisions and actions
include:

" Decide on the strategy for dataflow creator permissions: Determine what
preferences and requirements exist for managing security for dataflow creators.
Consider who's allowed, or encouraged, to take responsibility for managing data
preparation activities in centralized and decentralized business units.
" Decide who can create dataflows: Decide whether there should be any limitations
for which Power BI data creators can create dataflows. Set the Create and use
dataflows tenant setting to align with this decision.
" Review how workspace roles are handled for dataflow creators: Determine what
the impact is on your workspace design process. Create separate dataflow
workspaces per subject area so that you can handle workspace roles and
permissions separately for each subject area, when appropriate.

Datamart creator permissions


A datamart is a self-service analytics solution that enables users to store and explore
data that's loaded in a fully managed relational database. It also comprises an auto-
generated dataset.

Datamarts provide a simple low-code experience to ingest data from different data
sources, and to extract, transform, and load (ETL) the data by using Power Query Online.
The data is loaded into an Azure SQL Database that's fully managed and requires no
tuning or optimization. The auto-generated dataset is always synchronized with the
managed database because it's in DirectQuery mode.

You can create a datamart when you're either a workspace administrator, member, or
contributor. Workspace roles also get mapped to database-level roles in the Azure SQL
Database (however, because the database is fully managed, user permissions can't be
edited or managed in the relational database).

The Create datamarts tenant setting allows Power BI administrators to set up which
groups of users can create datamarts.

Datamart sharing
For datamarts, the term sharing takes on a meaning that's different to other Power BI
content types. Usually, a sharing operation is targeted at a consumer because it provides
read-only permission to one item, like a report.

Sharing a datamart is targeted at content creators (rather than consumers). It grants the
Read and Build permissions, which allows users to query either the dataset or the
relational database, whichever they prefer.

Sharing a datamart allows content creators to:

Build content by using the auto-generated dataset. The dataset is the semantic
layer on which Power BI reports can be built. Most report creators should use the
dataset.
Connect to and query the Azure SQL Database. The relational database is useful
for content creators who want to create new datasets or paginated reports. They
can write structured query language (SQL) queries to retrieve data by using the
SQL endpoint.

Datamart row-level security

You can define RLS for datamarts to restrict data access for specified users. RLS is set up
in the datamart editor in the Power BI service, and it's automatically applied to the auto-
generated dataset and the Azure SQL Database (as security rules).

Regardless of how a user chooses to connect to the datamart (to the dataset or the
database), identical RLS permissions are enforced.

Checklist - When planning for datamart creator permissions, key decisions and actions
include:

" Decide on the strategy for datamart creator permissions: Determine what
preferences and requirements exist for managing security for datamart creators.
Consider who's allowed, or encouraged, to take responsibility for managing data in
centralized and decentralized business units.
" Decide who can create datamarts: Decide whether there should be any limitations
for which Power BI data creators can create a datamart. Set the Create datamarts
tenant setting to align with this decision. If you decide to limit who can create
datamarts, consider using a group such as Power BI approved datamart creators.
" Review how workspace roles are handled for datamart creators: Determine what
the impact is on your workspace design process. Create separate data workspaces
per subject area so the workspace roles and dataset security can be simplified for
the subject area.
" Provide guidance for datamart creators about managing permissions: Include
documentation for datamart creators about how to manage datamart permissions.
Publish this information to your centralized portal and training materials.
" Decide on the strategy for using RLS in datamarts: Consider which use cases and
purposes you intend to use RLS within a datamart.

Scorecard creator permissions


Metrics in Power BI let you curate specific metrics and track them against key business
objectives. Metrics are added to scorecards, which can be shared with other users and
viewed in a single pane.
Scorecards can be secured with three levels of permissions:

Workspace.
Scorecard (per-item) permissions.
Metrics (within the scorecard).

A user who creates, or fully manages, a scorecard needs to be a workspace
administrator, member, or contributor.

Because metrics often span multiple subject areas, we recommend that you create a
separate workspace so that you can independently manage permissions for creators and
consumers.

The Create and use metrics tenant setting allows Power BI administrators to set up which
groups of users can create scorecard metrics.

Scorecard permissions
You can assign the following scorecard permissions.

Read: This permission allows a user to view the scorecard.
Reshare: Targeted at anyone with existing permission to the scorecard, this
permission allows users to share the scorecard with another user.

Consistent with other content types in the Power BI service, the per-item permissions
are useful when the intention is to share one item with another user. We recommend
using workspace roles and app permissions whenever possible.

Metric-level permissions

Each scorecard has a set of metric-level permissions that you can set up in the scorecard
settings. The metric-level permissions (within a scorecard) may be granted differently
from the workspace or the scorecard (per-item) permissions.

The metric-level roles allow you to set:

Who can view individual metrics on a scorecard.
Who can update individual metrics by:
Updating the status during a check-in.
Adding notes during a check-in.
Updating the current value during a check-in.

To reduce the level of future maintenance, it's possible to set default permissions that
will be inherited by submetrics you create in the future.
Checklist - When planning for metric creator permissions, key decisions and actions
include:

" Decide on the strategy for scorecard creator permissions: Determine what
preferences and requirements exist for managing security for scorecard creators.
Consider who's allowed, or encouraged, to take responsibility for managing data in
centralized and decentralized business units.
" Decide who can create scorecards: Decide whether there should be any limitations
for which Power BI data creators can create scorecards. Set the Create and use
Metrics tenant setting to align with this decision. If you decide to limit who can
create scorecards, consider using a group such as Power BI approved scorecard
creators.
" Review how workspace roles are handled for scorecard creators: Determine what
the impact is on your workspace design process. Consider creating separate
workspaces for scorecards when the content spans subject areas.
" Provide guidance for scorecard creators about managing permissions: Include
documentation for scorecard creators about how to manage metric-level
permissions. Publish this information to your centralized portal and training
materials.

Publishing content
This section includes topics related to publishing content that are relevant to content
creators.

Workspaces

Content creators will need administrator, member, or contributor role access to publish
content to a workspace. For more information, see the workspace roles described earlier
in this article.

Except for personal BI, content creators should be encouraged to publish content to
standard workspaces, instead of their personal workspace.

The Block republish and disable package refresh tenant setting changes the behavior for
publishing datasets. When enabled, workspace administrators, members, or contributors
can't publish changes to a dataset. Only the dataset owner is permitted to publish an
update (forcing the takeover of a dataset before publishing an updated dataset).
Because this tenant setting applies to the entire organization, enable it with a measure
of caution because it affects all datasets for the entire tenant. Be sure to communicate
to your dataset creators what to expect because it changes the normal behavior of
workspace roles.

Power Apps synchronization

It's possible to create a Power Apps solution that includes embedded Power BI reports.
The Power Apps process will automatically create a dedicated Power BI workspace for
storing and securing the Power BI reports and datasets. To manage items that exist in
both Power Apps and Power BI, there's a synchronization process.

The process synchronizes security roles to ensure that Power BI inherits the same roles
that were initially set up in Power Apps. It also allows the content creator to manage
permissions for who can view the Power BI reports (and related datasets) that are
embedded in a Power App.

For more information about synchronizing Power Apps roles with Power BI workspace
roles, see Permission sync between Power Apps environment and Power BI workspace.

Deployment pipeline access


Content creators and owners can use Power BI deployment pipelines for self-service
content publishing. Deployment pipelines simplify the publication process and improve
the level of control when releasing new content.

You manage pipeline permissions (for users who can deploy content with a deployment
pipeline) separately from the workspace roles. Access to both the workspace and the
deployment pipeline are required for the users conducting a deployment.

Content creators might also need:

Workspace creation permissions (when workspaces need to be created by the
pipeline).
Premium capacity permissions (when workspaces are assigned by the pipeline).

For more information, see Deployment pipeline access.
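
Deployments can also be triggered programmatically. The following is a minimal sketch that promotes content from one pipeline stage to the next by calling the Pipelines - Deploy All operation. The access token and pipeline ID are hypothetical placeholders, and the request body field names are assumptions that you should confirm against the REST API reference; the caller still needs pipeline access and the required workspace roles.

```python
import requests

# Minimal sketch: trigger a deployment from one pipeline stage to the next by
# using the Power BI REST API (Pipelines - Deploy All). Assumes an Azure AD
# access token, plus pipeline access and workspace roles for the caller. The
# pipeline ID is a placeholder, and the body field names are assumptions to
# verify against the REST API reference.
ACCESS_TOKEN = "<azure-ad-access-token>"
PIPELINE_ID = "33333333-3333-3333-3333-333333333333"

url = f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll"
payload = {
    "sourceStageOrder": 0,  # 0 = Development stage
    "options": {
        "allowCreateArtifact": True,
        "allowOverwriteArtifact": True,
    },
}

response = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
print("Deployment started:", response.status_code)
```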

XMLA endpoint
The XMLA endpoint uses the XMLA protocol to expose all features of a tabular data
model, including some data modeling operations that aren't supported by Power BI
Desktop. You can use the Tabular Object Model (TOM) API to make programmatic
changes to a data model.

The XMLA endpoint also provides connectivity. You can only connect to a dataset when
the workspace has its license mode set to Premium per user, Premium per capacity, or
Embedded. Once a connection is made, an XMLA-compliant tool can operate on the
data model to read or write data. For more information about how you can use the
XMLA endpoint for managing a dataset, see the advanced data model management
usage scenario.

Access through the XMLA endpoint will honor existing permissions. Workspace
administrators, members, and contributors implicitly have dataset Write permission,
which means they can deploy new datasets from Visual Studio and execute Tabular
Modeling Scripting Language (TMSL) scripts in SQL Server Management Studio (SSMS).

Users with the dataset Build permission can use the XMLA endpoint to connect to and
browse datasets for data consumption and visualization. RLS rules are honored, and
users can't see internal dataset metadata.

The Allow XMLA endpoints and Analyze in Excel with on-premises datasets tenant setting
refers to two capabilities: It controls which groups of users can use the XMLA endpoint
to query and/or maintain datasets in the Power BI service. It also determines whether
Analyze in Excel can be used with on-premises SQL Server Analysis Services (SSAS)
models.

7 Note

The Analyze in Excel aspect of that tenant setting only applies to on-premises SQL
Server Analysis Services (SSAS) models. The standard Analyze in Excel functionality,
which connects to a Power BI dataset in the Power BI service, is controlled by the
Allow Live Connections tenant setting.

Publish to web
Publish to web is a feature that provides access to Power BI reports to anyone on the
internet. It doesn't require authentication and access isn't logged for auditing purposes.
Because report consumers don't need to belong to the organization or have a Power BI
license, this technique is well suited to data journalism, a process where reports are
embedded in blog posts, websites, emails, or social media.

U Caution
Publish to web has the potential to expose sensitive or confidential data, whether
accidentally or intentionally. For this reason, this feature is disabled by default.
Publish to web should only be used for reports that contain data that can be
viewed by the public.

The Publish to web tenant setting allows Power BI administrators to control which
groups of users can publish reports to the web. To maintain a higher level of control, we
recommend that you don't include other groups in this tenant setting (like Power BI
administrators or other types of content creators). Instead, enforce specific user access
by using a security group such as Power BI public publishing. Ensure that the security
group is well managed.

Embedding in custom apps


The Embed content in apps tenant setting allows Power BI administrators to control
which users can embed Power BI content outside of the Power BI service. When you plan
to embed Power BI content in custom applications, enable this setting for specific
groups, such as app developers.

Embedding in PowerPoint
The Enable Power BI add-in for PowerPoint tenant setting allows Power BI administrators
to control which users can embed Power BI report pages in PowerPoint presentations.
When appropriate, enable this setting for specific groups, such as report creators.

7 Note

For this capability to work, users must install the Power BI add-in for PowerPoint. To
use the add-in, users must either have access to the Office add-in store, or the add-
in must be made available to them as an admin managed add-in. For more
information, see Power BI add-in for PowerPoint.

Educate report creators to be cautious about where they save their PowerPoint
presentations and who they share them with. That's because an image of the Power BI
report visuals is shown to users when they open the presentation. That image is
captured from the last time the PowerPoint file was connected. However, the image may
inadvertently reveal data that the receiving user doesn't have permission to see.

7 Note
The image can be useful when the receiving user doesn't yet have the add-in, or
until the add-in connects to the Power BI service to retrieve data. Once the user
connects, only data the user can see (enforcing any RLS) is retrieved from Power BI.

Template apps
Template apps enable Power BI partners and software vendors to build Power BI apps
with little or no coding, and deploy them to any Power BI customer.

The Publish template apps tenant setting allows Power BI administrators to control which
users can publish template apps outside of the organization, such as through Microsoft
AppSource. For most organizations, this tenant setting should be disabled or tightly
controlled. Consider using a security group such as Power BI external template app
creators.

Email subscriptions

You can subscribe yourself and others to Power BI reports, dashboards, and paginated
reports. Power BI will then send an email on a schedule you set. The email will contain a
snapshot and link to the report or dashboard.

You can create a subscription that includes other users when you're a workspace
administrator, member, or contributor. If the report is in a Premium workspace, you can
subscribe groups (whether they're in your domain or not) and external users. When
setting up the subscription, there's also the option to grant permissions to the item. Per-
item direct access permissions are used for this purpose, which are described in the
Report consumer security planning article.

U Caution

The email subscription feature has the potential to share content to internal and
external audiences. Also, when RLS is enforced on the underlying dataset,
attachments and images are generated by using the security context of the
subscribing user.

The Email subscriptions tenant setting allows Power BI administrators to control whether
this feature is enabled or disabled for the entire organization.

There are some limitations concerning attachments related to licensing and tenant
setting restrictions. For more information, see Email subscriptions for reports and
dashboards in the Power BI service.
Checklist - When planning for publishing content, key decisions and actions include:

" Decide on the strategy for where content should be published, how, and by
whom: Determine what preferences and requirements exist for where content gets
published.
" Verify workspace access: Confirm the workspace design approach. Verify how to
use the workspace access roles to support your strategy for where content should
be published.
" Determine how to handle deployment pipeline permissions: Decide which users
are permitted to publish content by using a deployment pipeline. Set the
deployment pipeline permissions accordingly. Ensure that workspace access is also
provided.
" Decide who can connect to datasets by using the XMLA endpoint: Decide which
users are permitted to query or manage datasets by using the XMLA endpoint. Set
the Allow XMLA endpoints and Analyze in Excel with on-premises datasets tenant
setting to align with this decision. When you decide to limit this capability, consider
using a group such as Power BI approved content creators.
" Decide who can publish reports publicly: Decide which users are permitted to
publish Power BI reports publicly, if any. Set the Publish to web tenant setting to
align with this decision. Use a group such as Power BI public publishing.
" Decide who can embed content in custom apps: Determine who should be
allowed to embed content outside of the Power BI service. Set the Embed content in
apps tenant setting to align with this decision.
" Decide who can embed content in PowerPoint: Determine who should be allowed
to embed content in PowerPoint. Set the Enable Power BI add-in for PowerPoint
tenant setting to align with this decision.
" Decide who can publish template apps: Determine what your strategy is for using
template apps outside of the organization. Set the Publish template apps tenant
setting to align with this decision.
" Decide whether to enable subscriptions: Confirm what your strategy is for using
subscriptions. Set the Email Subscriptions tenant setting to align with this decision.

Refresh data
Once published, data creators should ensure that their datasets and dataflows (that
contain imported data) are periodically refreshed. You should also decide on
appropriate strategies for the dataset and dataflow owners.
Dataset owner
Each dataset has an owner, which is a single user account. The dataset owner is required
to set up dataset refresh and set dataset parameters.

By default, the dataset owner also receives access requests from report creators who
want Build permissions (unless the Request access settings for the dataset are set to
provide custom instructions). For more information, see the Request access workflow for
creators section in this article.

There are two ways that Power BI can obtain credentials to refresh a dataset.

The dataset owner stores credentials in the dataset settings.


The dataset owner references a gateway in the dataset settings (that contains a
data source with stored credentials).

If a different user needs to set up refresh or set dataset parameters, they must take
ownership of the dataset. Dataset ownership can be taken over by a workspace
administrator, member, or contributor.

U Caution

Taking dataset ownership permanently removes any stored credentials for the
dataset. Credentials must be re-entered to allow data refresh operations to resume.

Ideally, the dataset owner is the user who's responsible for the dataset. You should
update the dataset owner when they leave the organization or change roles. Also, be
aware that when the dataset owner's user account is disabled in Azure Active Directory
(Azure AD), data refresh is automatically disabled. In this case, another user must take
ownership of the dataset to allow data refresh operations to resume.
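
If responsibility for a dataset needs to move to a different person or to a service
account, the handover can be scripted. The following minimal Python sketch, which uses the
Power BI REST API, takes over a dataset in a workspace and then triggers a refresh. The
access token, workspace ID, and dataset ID are placeholders; the caller needs an Azure AD
token with the appropriate Power BI scopes and a workspace role that allows taking
ownership. Treat it as a sketch to adapt, not a production-ready script.

```python
# Minimal sketch: take over a dataset and trigger a refresh with the Power BI REST API.
# Assumes an Azure AD access token with Power BI scopes was already acquired (for
# example, with MSAL). All IDs below are placeholders.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"   # placeholder
WORKSPACE_ID = "<workspace-guid>"          # placeholder
DATASET_ID = "<dataset-guid>"              # placeholder

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
base = f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/datasets/{DATASET_ID}"

# Take ownership of the dataset. Remember that taking ownership removes any stored
# credentials, so they must be re-entered (dataset settings or gateway) before refresh
# operations will succeed.
requests.post(f"{base}/Default.TakeOver", headers=headers).raise_for_status()

# Trigger an on-demand refresh once credentials are in place.
response = requests.post(
    f"{base}/refreshes",
    headers=headers,
    json={"notifyOption": "MailOnFailure"},
)
response.raise_for_status()
print("Refresh request accepted with status", response.status_code)
```

A similar pattern applies to dataflows, which have their own owner and their own refresh
endpoint in the REST API.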

Dataflow owner
Like datasets, dataflows also have an owner, which is a single user account. The
information and guidance provided in the previous topic about dataset owners also
applies to dataflow owners.
Checklist - When planning for security related to data refresh processes, key decisions
and actions include:

" Decide on the strategy for dataset owners: Determine what preferences and
requirements exist for managing dataset owners.
" Decide on the strategy for dataflow owners: Determine what preferences and
requirements exist for managing dataflow owners.
" Include in documentation and training for dataset creators: Include guidance for
your data creators about how to manage owners for each type of item.

Next steps
For other considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see the subject areas to consider.
Power BI implementation planning:
Information protection and data loss
prevention
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article introduces the Power BI information protection and data loss prevention
(DLP) articles. These articles are targeted at multiple audiences:

Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization. Power BI administrators need to collaborate with
information security teams and other relevant teams.
Center of Excellence, IT, and BI teams: The teams that are responsible for
overseeing Power BI in the organization. They may need to collaborate with Power
BI administrators, information security teams, and other relevant teams.

) Important

Information protection and DLP is a significant organization-wide undertaking. Its
scope and impact are far greater than Power BI alone. This type of initiative requires
funding, prioritization, and planning. Expect to involve several cross-functional
teams in your planning, usage, and oversight efforts.

As someone who manages Power BI for your organization, you won't usually be directly
responsible for most aspects of information protection and DLP. It's likely those
responsibilities will fall to the information security team and other system
administrators.

The focus for this set of articles includes:

Why: Why these capabilities are important for compliance and auditing.
What: An overview of what the end-to-end process involves.
Who: Which teams participate in the end-to-end process.
Prerequisites: The things that need to be in place before the information
protection and DLP capabilities can be enabled for Power BI.

Protect organizational data


Data exists in many applications and services. It's stored in source databases and files.
It's published to the Power BI service. It also exists outside the Power BI service as
original files, downloaded files, and exported data. When data becomes more accessible
and across more resources, the way you go about protecting data becomes increasingly
important.

In short, protecting data is about:

Safeguarding organizational data.


Reducing the risk of unauthorized or unintentional sharing of sensitive information.
Strengthening compliance status for regulatory requirements.

Protecting data is a complex subject. At a high level, topics relevant to Power BI include:

Responsible actions taken by users: Users who have received guidance and
training, and clearly understand what's expected of them, can act ethically. They
can enact a culture that values security, privacy, and compliance during the normal
course of their work.
Right-sized user security permissions: In Power BI, securing data and reports is
separate and distinct from the information protection and DLP activities described
in these articles. Security methods in Power BI include techniques such as
workspace roles, sharing, app permissions, and row-level security (RLS). Security
techniques, such as workspace roles, app permissions, per-item sharing, and RLS,
are covered in the security planning articles.
Data lifecycle management: Processes such as backups and version control are
important for protecting data. The setup of encryption keys and geographic
locations for data storage also are considerations.
Information protection: Labeling and classifying content by using sensitivity labels
is the first step towards being able to protect it. Information protection is covered
in this series of articles.
Data loss prevention policies: DLP refers to controls and policies that reduce the
risk of data leakage. Data loss prevention is covered in this series of articles.

The information protection and DLP series of articles focus on the final two bullet points:
information protection and DLP, and specifically how they relate to Power BI.
We recommend that you also become familiar with the full Microsoft Purview
Information Protection framework: know your data, protect your data, prevent data loss,
and govern your data.

 Tip

Your organization's IT department will have existing processes in place that are
considered information protection, but they're out of scope for this series of
articles. Processes could include high availability and disaster recovery efforts
related to source database systems. They could also include protecting mobile
devices. Be sure to identify and involve relevant technology and governance teams
in all your planning efforts.

Common use cases


Power BI compliance challenges and regulatory reporting requirements are frequently a
driving factor for getting started with information protection and DLP.

 Tip

Data leakage refers to the risk of data being viewed by unauthorized users. The
term is often used when referring to external users. However, it can apply to
internal users too. Reducing the risk of data leakage is usually a top priority for
information protection and DLP efforts. All the use cases listed in this section can
help reduce data leakage.

This section includes common use cases that would compel an organization to
implement information protection and DLP. The use cases focus primarily on Power BI,
although the advantages to the organization are much broader.

Classify and label data


Organizations commonly have external or internal requirements for classifying and
labeling content. The use of sensitivity labels in Power BI (and in other organizational
applications and services as well) is a key factor in meeting compliance requirements.

Once you assign a sensitivity label to content in Power BI, you're able to gain knowledge
and insight about:

Whether sensitive data is contained in a Power BI workspace.


Whether a particular Power BI item, like a dataset, is considered confidential.
Who can access Power BI items that are considered sensitive.
Who has accessed sensitive data in the Power BI service.

With end-to-end protection, sensitivity labels can (optionally) be automatically inherited
from data sources. Label inheritance reduces the risk of users accessing and sharing
sensitive data with unauthorized users because it wasn't labeled.

When exported from the Power BI service, sensitivity labels are retained when content is
exported to supported file types. The retention of the label when content is exported is
another key factor in reducing data leakage.

For more information about labeling and classifying Power BI content, see Information
protection for Power BI.

Educate users
As stated previously, one aspect of protecting data involves responsible actions taken by
users.

Because sensitivity labels are clearly displayed in plain text, they serve as helpful
reminders to users. During the normal course of their work, labels raise awareness about
how users should interact with data according to organizational guidelines and policies.

For example, when a user sees a Highly Confidential sensitivity label, it should prompt
them to take extra care with their decisions about downloading, saving, or sharing the
content with others. In this way, sensitivity labels help users responsibly handle sensitive
data and reduce risk that it's shared by mistake with unauthorized users.

For more information, see Information protection for Power BI.

Detect sensitive data


The ability to detect where sensitive data is stored is another important aspect of
reducing data leakage.

When a dataset has been published to the Power BI service and it's in a Premium
workspace, you can use DLP for Power BI to detect the existence of certain sensitive
information types within it. This capability is helpful to find sensitive data (such as
financial data or personal data) that are stored in Power BI datasets.

This type of DLP policy for Power BI allows security administrators to monitor and detect
when unauthorized sensitive data is uploaded to the Power BI service. They can depend
on alerts to act quickly. Policy tips are also used to guide content creators and owners
on how to properly handle sensitive data. For more information about DLP for Power BI,
see Data loss prevention for Power BI.

 Tip

Having properly classified data allows you to correlate, analyze, and report on it. In
most cases, you'll need to correlate data from multiple sources to form a complete
understanding. You can capture data by using tools like the Power BI scanner APIs
and the Power BI activity log. For more information about these topics, as well as
audit logs in the Microsoft Purview compliance portal, see Auditing of information
protection and data loss prevention for Power BI.
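
If you want to experiment with capturing this kind of audit data programmatically, the
following minimal Python sketch calls the Get Activity Events admin API for a single day
and keeps the label-related events. It assumes the access token belongs to a Power BI
administrator (or to a service principal allowed by the admin API tenant settings), and
the activity-name filter is illustrative; verify the exact activity names that appear in
your own log.

```python
# Minimal sketch: retrieve one UTC day of Power BI activity log events and keep the
# events related to sensitivity labels. The access token is a placeholder.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"   # placeholder

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-15T00:00:00Z'&endDateTime='2024-01-15T23:59:59Z'"
)

events = []
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    url = payload.get("continuationUri")  # None once all pages have been returned

# Keep only label-related activities for correlation with other audit sources.
label_events = [e for e in events if "SensitivityLabel" in str(e.get("Activity", ""))]
print(f"Retrieved {len(events)} events; {len(label_events)} relate to sensitivity labels.")
```

Output like this can then be correlated with metadata from the scanner (admin) APIs to
connect label activity back to the specific items involved.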

Use data encryption


Files that are classified with a sensitivity label can (optionally) include protection. When a
file is protected with encryption, it reduces the risk of data leakage and oversharing. The
encryption setting follows the file, regardless of device or user. Unauthorized users
(internal and external to the organization) are unable to open, decrypt, or view the file
contents.

) Important

There are trade-offs you should understand when implementing encryption. For
more information, including encryption considerations, see Information protection
for Power BI.

For more information about the types of controls you can implement to reduce data
leakage, see Defender for Cloud Apps for Power BI.

Control activity in real time


To augment existing security settings in Power BI, you can implement real-time controls
to reduce the risk of data leakage.

For example, you can restrict users from downloading highly sensitive data and reports
from the Power BI service. This type of real-time control is helpful when someone is
allowed to view content themselves, but they should be prevented from downloading
and distributing it to others.
For more information about the types of controls you can implement, see Defender for
Cloud Apps for Power BI.

 Tip

For additional considerations related to strengthening Power BI compliance, see the
security planning articles.

Information protection and DLP services


Many features and services related to information protection and DLP have been
rebranded and now form part of Microsoft Purview. The Microsoft 365 security and
compliance functionality has also become part of Microsoft Purview.

The features and services that are most pertinent for this series of articles include:

Microsoft Purview Information Protection (formerly known as Microsoft
Information Protection): Microsoft Purview Information Protection includes
capabilities for discovering, classifying, and protecting data. A key principle is that
data can be better protected once it's classified. The key building blocks for
classifying data are sensitivity labels, which are described in the Information
protection for Power BI article.
Microsoft Purview compliance portal (formerly known as the Microsoft 365
compliance center): The portal is where you set up sensitivity labels. It's also where
you set up Power BI for DLP, which is described in the Data loss prevention for
Power BI article.
Microsoft Purview Data Loss Prevention (formerly known as Office 365 Data Loss
Prevention): DLP activities focus primarily on reducing data leakage. By using
sensitivity labels or sensitive information types, Microsoft Purview Data Loss
Prevention policies help an organization locate sensitive data and protect it.
Capabilities relevant to Power BI are described in the Data loss prevention for
Power BI article.
Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App
Security): Policies in Microsoft Defender for Cloud Apps (which are defined in a
separate application) also help protect data, including real-time controls.
Capabilities relevant to Power BI are described in the Defender for Cloud Apps for
Power BI article.

The above list isn't exhaustive. Microsoft Purview includes a broad set of capabilities that
far exceeds the scope of this series of articles. For example, the Microsoft Purview data
cataloging and governance features are important; however, they're not directly in
scope for this series of articles.

 Tip

If you have questions about services, features, or licensing, contact your Microsoft
account team. They're in the best position to clarify what's available for your
organization.

The remainder of the information protection and DLP content is organized into the
following articles:

Organization-level information protection


Information protection for Power BI
Data loss prevention for Power BI
Defender for Cloud Apps for Power BI
Auditing of information protection and data loss prevention for Power BI

Next steps
In the next article in this series, learn about getting started with information protection
with organization-level planning activities for Power BI.
Power BI implementation planning:
Organization-level information
protection
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article describes the initial assessment and preparatory activities for information
protection in Power BI. It's targeted at:

Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization. Power BI administrators need to collaborate with
information security and other relevant teams.
Center of Excellence, IT, and BI teams: The teams that are responsible for
overseeing Power BI in the organization. They may need to collaborate with Power
BI administrators, information security teams, and other relevant teams.

) Important

Information protection and data loss prevention (DLP) is a significant organization-
wide undertaking. Its scope and impact are far greater than Power BI alone. This
type of initiative requires funding, prioritization, and planning. Expect to involve
several cross-functional teams in your planning, usage, and oversight efforts.

Current state assessment


Before you get started with any setup activities, assess what's currently happening
within your organization. It's critical to understand the extent to which information
protection is currently implemented (or being planned).

Generally, there are two cases of sensitivity label use.

Sensitivity labels are currently in use: In this case, sensitivity labels are set up and
used to classify Microsoft Office files. In this situation, the amount of work that will
be required to use sensitivity labels for Power BI will be significantly lower. The
timeline will be shorter, and it will be more straightforward to set up quickly.
Sensitivity labels not yet in use: In this case, sensitivity labels aren't used for
Microsoft Office files. In this situation, an organization-wide project to implement
sensitivity labels will be required. For some organizations, this project can
represent a significant amount of work and a considerable time investment. That's
because labels are intended to be used across the organization by various
applications (rather than one application, such as Power BI).

The following diagram shows how sensitivity labels are used broadly across the
organization.

The above diagram depicts the following items:

1. Sensitivity labels are set up in the Microsoft Purview compliance portal.

2. Sensitivity labels can be applied to many types of items and files, such as Microsoft Office
files, items in the Power BI service, Power BI Desktop files, and emails.

3. Sensitivity labels can be applied for Teams sites, SharePoint sites, and Microsoft 365
groups.

4. Sensitivity labels can be applied to schematized data assets that are registered in the
Microsoft Purview Data Map.

In the diagram, notice that items in the Power BI service, and Power BI Desktop files, are
just some of many resources that allow assigning sensitivity labels. Sensitivity labels are
defined centrally in Microsoft Purview Information Protection. Once defined, the same
labels are used by all supported applications throughout the organization. It's not
possible to define labels for use in only one application, such as Power BI. Therefore,
your planning process needs to consider a broader set of usage scenarios to define
labels that can be used in multiple contexts. Because information protection is intended
to be used consistently across applications and services, it's critical to begin with
assessing what sensitivity labels are currently in place.

The activities for implementing sensitivity labels are described in the Information
protection for Power BI article.

7 Note

Sensitivity labels are the first building block towards implementing information
protection. DLP occurs after information protection is set up.

Checklist - When assessing the current state of information protection and DLP in your
organization, key decisions and actions include:

" Determine whether information protection is currently in use: Find out what
capabilities are currently enabled, how they're being used, by which applications,
and by whom.
" Identify who is currently responsible for information protection: While assessing
the current capabilities, determine who is currently responsible. Involve that team in
all activities going forward.
" Consolidate information protection projects: If applicable, combine the
information protection methods currently in use. If possible, consolidate projects
and teams to gain efficiency and consistency.

Team staffing
As previously stated, many of the information protection and DLP capabilities that will
be set up will have an impact across the entire organization (well beyond Power BI).
That's why it's critical to assemble a team that includes all relevant people. The team will
be crucial in defining the goals (described in the next section) and guiding the overall
effort.
As you define roles and responsibilities for your team, we recommend that you include
people who can capably translate requirements and communicate well with
stakeholders.

Your team may include pertinent stakeholders involving different individuals and groups
in the organization, including:

Chief information security officer / data protection officer


Information security / cyber security team
Legal
Compliance
Risk management
Enterprise data governance committee
Chief data officer / chief analytics officer
Internal audit team
Analytics Center of Excellence (COE)
Enterprise analytics / business intelligence (BI) team
Data stewards and domain data owners from key business units

Your team should also include the following system administrators:

Microsoft Purview administrator


Microsoft 365 administrator
Azure Active Directory (Azure AD) administrator
Defender for Cloud Apps administrator
Power BI administrator

 Tip

Expect the planning and implementation of information protection to be a
collaborative effort that will take time to get right.

The task of planning and implementing information protection is usually a part-time
responsibility for most people. It's typically one of many pressing priorities. Therefore,
having an executive sponsor will help to clarify priorities, set deadlines, and provide
strategic guidance.

Clarity on roles and responsibilities is necessary to avoid misunderstandings and delays
when working with cross-functional teams across organizational boundaries.
Checklist - When putting together your information protection team, key decisions and
actions include:

" Assemble the team: Involve all pertinent technical and non-technical stakeholders.
" Determine who the executive sponsor is: Ensure you're clear on who is the leader
of the planning and implementation effort. Involve this person (or group) for
prioritization, funding, reaching consensus, and decision-making.
" Clarify roles and responsibilities: Make sure everyone involved is clear on their role
and responsibilities.
" Create a communication plan: Consider how and when you'll communicate with
users throughout the organization.

Goals and requirements


It's important to consider what your goals are for implementing information protection
and DLP. Different stakeholders from the team you've assembled are likely to have
different viewpoints and areas of concern.

At this point, we recommend that you focus on the strategic goals. If your team has
started by defining implementation level details, we suggest that you step back and
define the strategic goals. Well-defined strategic goals will help you to deliver a
smoother implementation.

Your information protection and DLP requirements may include the following goals.

Self-service user enablement: Allow self-service BI content creators and owners to
collaborate, share, and be as productive as possible–all within the guardrails
established by the governance team. The goal is to balance self-service BI with
centralized BI and make it easy for self-service users to do the right thing, without
negatively impacting their productivity.
Data culture that values protecting trusted data: Implement information
protection in a way that's low friction and doesn't get in the way of user
productivity. When implemented in a balanced way, users are far more likely to
work within your systems than around them. User education and user support are
essential.
Risk reduction: Protect the organization by reducing its risks. Risk reduction goals
often include minimizing the possibility of data leakage outside of the organization
and protecting data against unauthorized access.
Compliance: Support compliance efforts for industry, regional, and governmental
regulations. Additionally, your organization may also have internal governance and
security requirements that are deemed critical.
Auditability and awareness: Understand where the sensitive data is located
throughout the organization and who's using it.

Be aware that an initiative to introduce information protection is complementary to
other related approaches that involve security and privacy. Coordinate information
protection initiatives with other efforts, such as:

Access roles, permissions, sharing, and row-level security (RLS) for Power BI
content
Data residency requirements
Network security requirements
Data encryption requirements
Data cataloging initiatives

For more information about securing content in Power BI, see the security planning
articles.

Checklist - When considering your information protection goals, key decisions and
actions include:

" Identify applicable data privacy regulations and risks: Ensure that your team is
aware of the data privacy regulations that your organization is subject to for your
industry or geographic region. If necessary, conduct a data privacy risk assessment.
" Discuss and clarify your goals: Have initial discussions with relevant stakeholders
and interested people. Make sure you're clear on your information protection
strategic goals. Ensure that you can translate these goals into business
requirements.
" Approve, document, and prioritize your goals: Ensure that your strategic goals are
documented and prioritized. When you need to make complex decisions, prioritize,
or make trade-offs, refer to these goals.
" Verify and document regulatory and business requirements: Ensure that all
regulatory and business requirements for data privacy are documented. Refer to
them for prioritization and compliance needs.
" Begin creating a plan: Start building a project plan by using the prioritized strategic
goals and documented requirements.
Next steps
In the next article in this series, learn about labeling and classification of data assets for
use with Power BI.
Power BI implementation planning:
Information protection for Power BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article describes the planning activities related to implementing information
protection in Power BI. It's targeted at:

Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization. Power BI administrators need to collaborate with
information security and other relevant teams.
Center of Excellence, IT, and BI teams: Others who are responsible for overseeing
Power BI in the organization. They may need to collaborate with Power BI
administrators, information security teams, and other relevant teams.

) Important

Information protection and data loss prevention (DLP) is a significant organization-
wide undertaking. Its scope and impact are far greater than Power BI alone. This
type of initiative requires funding, prioritization, and planning. Expect to involve
several cross-functional teams in your planning, usage, and oversight efforts.

Labeling and classification activities extend beyond Power BI and even data assets. The
decisions discussed in this article apply to assets for the entire organization, including
files and emails, and not just to Power BI. This article introduces topics that apply to
labeling and classification in general, because making the right organizational decisions
is critical for the success of data loss prevention in Power BI.

This article also includes introductory guidance about defining a sensitivity label
structure. Technically, the sensitivity label structure is a prerequisite for implementation
of sensitivity labels in Power BI. The purpose of including some basic information in this
article is to help you understand what's involved. It's crucial that you collaborate with IT
for planning and implementing information protection in the organization.
Purpose of sensitivity labels
The use of Microsoft Purview Information Protection sensitivity labels is about classifying
content. Think of a sensitivity label like a tag that you apply to an item, file, site, or data
asset.

There are many advantages to using information protection. Classifying and labeling
content helps organizations to:

Understand where sensitive data resides.


Track external and internal compliance requirements.
Protect content from unauthorized users.
Educate users on how to responsibly handle data.
Implement real-time controls to reduce the risk of data leakage.

For more use cases for information protection, see Information protection and DLP
(Common use cases).

 Tip

It helps to remember that Microsoft Purview Information Protection is the product.


Sensitivity labels are a specific feature of that product.

A sensitivity label is a brief description in clear text. Conceptually, you can think of a
sensitivity label like a tag. Only one label can be assigned to each item (like a Power BI
dataset in the Power BI service) or each file (like a Power BI Desktop file).

A label has the following purposes.

Classification: It provides a classification for describing the sensitivity level.


User education and awareness: It helps users understand how to appropriately
work with the content.
Policies: It forms the basis for applying and enforcing policies and DLP.

Prerequisites for Power BI information protection
By now, you should have completed the organization-level planning steps that are
described in the Organization-level information protection planning article. Before
proceeding, you should have clarity on:
Current state: The current state of information protection in your organization. You
should have an understanding whether sensitivity labels are already in use for
Microsoft Office files. In this case, the scope of work to add Power BI is much
smaller than if you're bringing information protection to the organization for the
first time.
Goals and requirements: The strategic goals for implementing information
protection in your organization. Understanding the goals and requirements will
serve as a guide for your implementation efforts.

If information protection isn't in use by your organization, the remainder of this section
provides information to help you collaborate with others to introduce information
protection to your organization.

If information protection is actively in use within your organization, we recommend that
you use this article to verify that the prerequisites are met. If sensitivity labels are
actively in use, then most (or all) activities in rollout phases 1-4 (in the next section) will
already be complete.

Rollout phases
We recommend that you plan to enact a gradual rollout plan for implementing and
testing information protection. The objective for a gradual rollout plan is to set yourself
up to learn, adjust, and iterate as you go. The advantage is that fewer users are
impacted during the early stages (when changes are more likely), until information
protection is eventually rolled out to all users in the organization.

Introducing information protection is a significant undertaking. As described in the
Organization-level information protection planning article, if your organization has
already implemented information protection for Microsoft Office documents, many of
these tasks will already be complete.

This section provides an overview of the phases that we recommend you include in your
gradual rollout plan. It should give you a sense for what to expect. The remainder of this
article describes other decision-making criteria for the key aspects that affect Power BI
most directly.

Phase 1: Plan, decide, prepare


In the first phase, focus on planning, decision-making, and preparatory activities. Most
of the remainder of this article focuses on this first phase.
As early as possible, clarify where the initial testing will occur. That choice will impact
where you'll initially set up, publish, and test. For the initial testing, you may want to use
a non-production tenant (if you have access to one).

 Tip

Most organizations have access to one tenant, so it can be challenging to explore
new features in an isolated way. For those organizations that have a separate
development or test tenant, we recommend that you use it for the initial testing
phase. For more information about managing tenants and how to create a trial
tenant to test out new features, see Tenant setup.

Phase 2: Set up supporting user resources


The second phase includes steps to set up the resources for supporting users. Resources
include your data classification and protection policy and the custom help page.

It's important to have some of the user documentation published early. It's also
important to have the user support team prepared early.

Phase 3: Set up labels and publish


The third phase focuses on defining sensitivity labels. When all the decisions have been
made, the setup isn't difficult or time-consuming. Sensitivity labels are set up in the
Microsoft Purview compliance portal in the Microsoft 365 admin center.

Phase 4: Publish label policy


Before a label can be used, you must publish it as part of a label policy. Label policies
allow certain users to use a label. Label policies are published in the Microsoft Purview
compliance portal in the Microsoft 365 admin center.

7 Note

Everything up until this point is a prerequisite for implementing information
protection for Power BI.

Phase 5: Enable Power BI tenant settings


There are several information protection tenant settings in the Power BI admin portal.
They're required to enable information protection in the Power BI service.

) Important

You should set the tenant settings after you've set up and published the labels in
the Microsoft Purview compliance portal.

Phase 6: Initial testing


In the sixth phase, you perform initial tests to verify that everything behaves as
expected. For initial testing purposes, you should publish the label policy only for the
implementation team.

During this phase, be certain to test:

Microsoft Office files


Power BI items in the Power BI service
Power BI Desktop files
Exporting files from the Power BI service
Other scopes included in the configuration, such as Teams sites or SharePoint

Be sure to check the functionality and user experience by using a web browser and also
mobile devices that are commonly used. Update your user documentation accordingly.

) Important

Even if only a few members of the team are authorized to set a sensitivity label, all
users will be able to see labels that are assigned to content. If you're using your
production tenant, users may wonder why they see labels assigned to items in a
workspace in the Power BI service. Be ready to support and respond to user
questions.

Phase 7: Gather user feedback


The goal for this phase is to obtain feedback from a small group of key users. The
feedback should identify areas of confusion, gaps in the label structure, or technical
issues. You might also find reasons to improve the user documentation.

To this end, you should publish (or republish) the label policy to a small subset of users
who are willing to provide feedback.
 Tip

Be sure to factor sufficient time into your project plan. For labels and label policy
settings, the product documentation recommends allowing 24 hours for the
changes to take effect. This time is required to ensure all changes propagate
through to related services.

Phase 8: Release iteratively


The implementation phase is usually an iterative process.

Often, the initial objective is to get to a state where all Power BI content has a sensitivity
label assigned. To achieve this objective, you might introduce a mandatory label policy
or a default label policy. You might also use the information protection admin APIs to
programmatically set or remove sensitivity labels.
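
As an illustration only, the following Python sketch shows how such a bulk label
assignment might be scripted. The endpoint path and request body shape are assumptions to
confirm against the Admin - InformationProtection SetLabelsAsAdmin API reference, and the
access token, label GUID, and dataset IDs are placeholders. The label GUID is the
identifier of a published sensitivity label from the Microsoft Purview compliance portal.

```python
# Minimal sketch: bulk-assign a sensitivity label to datasets through the information
# protection admin API. The endpoint and payload shape should be verified against the
# official API reference; all IDs below are placeholders.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"                # placeholder
LABEL_ID = "<sensitivity-label-guid>"                   # placeholder
DATASET_IDS = ["<dataset-guid-1>", "<dataset-guid-2>"]  # placeholders

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
body = {
    "artifacts": {"datasets": [{"id": dataset_id} for dataset_id in DATASET_IDS]},
    "labelId": LABEL_ID,
}

response = requests.post(
    "https://api.powerbi.com/v1.0/myorg/admin/informationprotection/setLabels",
    headers=headers,
    json=body,
)
response.raise_for_status()
print("Label assignment request submitted with status", response.status_code)
```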

You can gradually include more groups of users until the entire organization is included.
This process involves republishing each label policy to increasingly larger groups of
users.

Throughout this process, be sure to prioritize providing guidance, communications, and
training to your users so they'll understand the process and what's expected of them.

Phase 9: Monitor, audit, adjust, integrate


There are other steps to do after the initial rollout. You should have a primary team to
monitor information protection activities and tune them over time. As labels are applied,
you'll be able to assess their usefulness and identify areas for adjustments.

There are many aspects to auditing information protection. For more information, see
Auditing of information protection and data loss prevention for Power BI.

The investments you make in setting up information protection can be used in DLP
policies for Power BI, which are set up in the Microsoft Purview compliance portal. For
more information, including a description of DLP capabilities, see Data loss prevention
for Power BI.

Information protection can also be used to create policies in Microsoft Defender for
Cloud Apps. For more information, including a description of capabilities that you may
find helpful, see Defender for Cloud Apps for Power BI.
Checklist - When preparing your information protection rollout phases, the key
decisions and actions include:

" Create a gradual rollout plan: Define the phases for your rollout plan. Clarify what
the specific objectives are for each phase.
" Identify where to do testing: Determine where the initial testing can be done. To
minimize the impact on users, use a non-production tenant, if possible.
" Create a project plan: Build a project plan that includes all the key activities,
estimated timeline, and who will be responsible.

Sensitivity label structure


The sensitivity label structure is a prerequisite for implementing sensitivity labels in
Power BI. This section includes some basic information to help you understand what's
involved if you take part in creating the label structure.

This section isn't an exhaustive list of all possible sensitivity label considerations for all
possible applications. Instead, its focus is on considerations and activities that directly
affect the classification of Power BI content. Ensure that you work with other
stakeholders and system administrators to make decisions that work well for all
applications and use cases.

The foundation for implementing information protection begins with a set of sensitivity
labels. The end goal is to create a set of sensitivity labels that are clear and
straightforward for users to work with.

The sensitivity label structure that's used in an organization represents a label taxonomy.
It's also commonly referred to as a data classification taxonomy because the goal is to
classify data. Sometimes it's referred to as a schema definition.

There isn't a standard, or built-in, set of labels. Each organization must define and
customize a set of labels to suit their needs. The process of arriving at the right set of
labels involves extensive collaboration. It requires thoughtful planning to ensure that the
labels will meet goals and requirements. Remember, labels will be applied to more than
just Power BI content.

 Tip
Most organizations start out by assigning labels to Microsoft Office files. They then
evolve to classifying other content, such as Power BI items and files.

A label structure includes:

Labels: Labels form a hierarchy. Each label indicates the level of sensitivity for an
item, file, or data asset. We recommend that you create between three and seven
labels. The labels should rarely change.
Sub-labels: Sub-labels indicate variations in protection or scope within a specific
label. By including them in different label policies, you can scope sub-labels to a
certain set of users or to users involved with a specific project.

 Tip

While sub-labels offer flexibility, they should be used in moderation only to satisfy
critical requirements. Creating too many sub-labels results in increased
management overhead. They can also overwhelm users with too many options.

Labels form a hierarchy, ranging from the least sensitive classification to the most
sensitive classification.

Sometimes Power BI content contains data that spans multiple labels. For example, a
dataset may contain product inventory information (General Internal Use) and the
current quarter sales figures (Restricted). When choosing which label to assign to the
Power BI dataset, users should be taught to apply the most restrictive label.

 Tip

The next section describes the Data classification and protection policy that can
provide users with guidance on when to use each label.

) Important

Assigning a label or a sub-label doesn't directly affect access to Power BI content in
the Power BI service. Instead, the label provides a useful category that can guide
user behavior. Data loss prevention policies can also be based on the assigned
label. However, nothing changes for how access to Power BI content is managed,
except when a file is encrypted. For more information, see Use of encryption
protection.
Be deliberate with the labels you create because it's challenging to remove or delete a
label once you've progressed beyond the initial testing phase. Because sub-labels can
(optionally) be used for a particular set of users, they can change more often than labels.

Here are some best practices for defining a label structure.

Use intuitive, unambiguous terms: Clarity is important to ensure that users know
what to choose when classifying their data. For example, having a Top Secret label
and a Highly Confidential label is ambiguous.
Create a logical hierarchical order: The order of the labels is crucial to making
everything work well. Remember that the last label in the list is the most sensitive.
The hierarchical order, in combination with well-selected terms, should be logical
and intuitive for users to work with. A clear hierarchy will also make policies easier
to create and maintain.
Create just a few labels that apply across the organization: Having too many
labels for users to choose from will be confusing. It will also lead to less accurate
label selection. We recommend that you create just a few labels for the initial set.
Use meaningful, generic names: Avoid using industry jargon or acronyms in your
label names. For example, rather than creating a label named Personally
Identifiable Information, use names like Highly Restricted or Highly Confidential
instead.
Use terms that are easily localized into other languages: For global organizations
with operations in multiple countries/regions, it's important to choose label terms
that won't be confusing or ambiguous when they're translated into other
languages.

 Tip

If you find yourself planning for many labels that are highly specific, step back and
reassess your approach. Complexity can lead to user confusion, reduced adoption,
and less effective information protection. We recommend that you begin with an
initial set of labels (or use what you already have). After you've gained more
experience, cautiously expand the set of labels by adding more specific ones when
needed.

Checklist - When planning your sensitivity label structure, key decisions and actions
include:
" Define an initial set of sensitivity labels: Create an initial set of between three and
seven sensitivity labels. Ensure that they have broad use for a wide range of
content. Plan to iterate on the initial list as you finalize the data classification and
protection policy.
" Determine whether you need sub-labels: Decide whether there's a need to use
sub-labels for any of the labels.
" Verify localization of label terms: If labels will be translated into other languages,
have native speakers confirm that localized labels convey the intended meaning.

Sensitivity label scope


A sensitivity label scope limits the use of a label. While you can't specify Power BI
directly, you can apply labels to various scopes. Possible scopes include:

Items (such as items published to the Power BI service, and files and emails)
Groups and sites (such as a Teams channel or a SharePoint site)
Schematized data assets (supported sources that are registered in the Purview
Data Map)

) Important

It's not possible to define a sensitivity label with a scope of Power BI only. While
there are some settings that apply specifically to Power BI, scope isn't one of them.
The items scope is used for the Power BI service. Sensitivity labels are handled
differently from DLP policies, which are described in the Data loss prevention for
Power BI planning article, in that some types of DLP policies can be defined
specifically for Power BI. If you intend to use sensitivity label inheritance from data
sources in Power BI, there are specific requirements for the label scope.

Events related to sensitivity labels are recorded in the activity explorer. Logged details of
these events will be significantly richer when the scope is broader. You'll also be better
prepared to protect data across a broader spectrum of applications and services.

When defining the initial set of sensitivity labels, consider making the initial set of labels
available to all scopes. That's because it can become confusing for users when they see
different labels in different applications and services. Over time, you may discover
use cases for more specific sub-labels. However, it's safer to start with a
consistent and simple set of initial labels.

The label scope is set in the Microsoft Purview compliance portal when the label is set
up.
Checklist - When planning for the label scope, key decisions and actions include:

" Decide the label scope: Discuss and decide whether each of your initial labels will
be applied to all scopes.
" Review all prerequisites: Investigate prerequisites and necessary setup steps that
will be required for each scope that you intend to use.

Use of encryption protection


There are multiple options for protection with a sensitivity label.

Encryption: Encryption settings pertain to files or emails. For example, a Power BI
Desktop file may be encrypted.
Markings: Refers to headers, footers, and watermarks. Markings are useful for
Microsoft Office files, but they aren't shown on Power BI content.

 Tip

Usually when someone refers to a label as protected, they're referring to encryption.


It may be sufficient that only higher-level labels, like Restricted and Highly
Restricted, are encrypted.

Encryption is a way to cryptographically encode information. Encryption has several key
advantages.

Only authorized users (for example, internal users within your organization) can
open, decrypt, and read a protected file.
Encryption remains with a protected file, even when the file is sent outside the
organization or is renamed.
The encryption setting is obtained from the original labeled content. Consider a
report in the Power BI service that has a sensitivity label of Highly Restricted. If it's
exported to a supported export path, the label remains intact, and encryption is
applied on the exported file.

The Azure Rights Management Service (Azure RMS) is used for file protection with
encryption. There are some important prerequisites that must be met in order to use
Azure RMS encryption.
) Important

There's a limitation to consider: An offline user (without an internet connection)
can't open an encrypted Power BI Desktop file (or other type of Azure RMS-
protected file). That's because Azure RMS must synchronously verify that a user is
authorized to open, decrypt, and view the file contents.

Encrypted labels are handled differently depending on where the user is working.

In the Power BI service: The encryption setting doesn't have a direct effect on user
access in the Power BI service. Standard Power BI permissions (such as workspace
roles, app permissions, or sharing permissions) control user access in the Power BI
service. Sensitivity labels don't affect access to content within the Power BI service.
Power BI Desktop files: An encrypted label can be assigned to a Power BI Desktop
file. The label is also retained when it's exported from the Power BI service. Only
authorized users will be able to open, decrypt, and view the file.
Exported files: Microsoft Excel, Microsoft PowerPoint, and PDF files exported from
the Power BI service retain their sensitivity label including encryption protection.
For supported file formats, only authorized users will be able to open, decrypt, and
view the file.

) Important

It's critical that users understand the distinctions between the Power BI service and
files, which are easily confused. We recommend that you provide a FAQs document,
together with examples, to help users understand the differences.

To open a protected Power BI Desktop file, or an exported file, a user must meet the
following criteria.

Internet connectivity: The user must be connected to the internet. An active
internet connection is required to communicate with Azure RMS.
RMS permissions: The user must have RMS permissions, which are defined within
the label (rather than within the label policy). RMS permissions allow authorized
users to decrypt, open, and view supported file formats.
Allowed user: Users or groups must be specified in the label policy. Typically,
assigning authorized users is only required for content creators and owners so
they can apply labels. However, when using encryption protection there's another
requirement. Each user that needs to open a protected file must be specified in the
label policy. This requirement means that information protection licensing may be
required for more users.
 Tip

The Allow workspace administrators to override automatically applied sensitivity
labels tenant setting enables workspace administrators to change a label that was
automatically applied, even if protection (encryption) is enabled for the label. This
capability is particularly helpful when a label was automatically assigned or
inherited but the workspace administrator isn't an authorized user.

Label protection is set in the Microsoft Purview compliance portal when you set up the
label.

Checklist - When planning for the use of label encryption, key decisions and actions
include:

" Decide which labels should be encrypted: For each sensitivity label, decide
whether it should be encrypted (protected). Carefully consider the limitations that
are involved.
" Identify the RMS permissions for each label: Determine what the user permissions
will be for accessing and interacting with encrypted files. Create a mapping of users
and groups for each sensitivity label to help with the planning process.
" Review and address RMS encryption prerequisites: Ensure that technical
prerequisites for using Azure RMS encryption are met.
" Plan to conduct thorough testing of encryption: Due to the differences between
Office files and Power BI files, ensure that you commit to a thorough testing phase.
" Include in user documentation and training: Ensure that you include guidance in
your documentation and training about what users should expect for files that are
assigned a sensitivity label that's encrypted.
" Conduct knowledge transfer with support: Make specific plans to conduct
knowledge transfer sessions with the support team. Due to the complexity of
encryption, they'll likely get questions from users.

Inheritance of labels from data sources


When importing data from supported data sources (such as Azure Synapse Analytics,
Azure SQL Database, or an Excel file), a Power BI dataset can, optionally, inherit the
sensitivity label applied to the source data. Inheritance helps to:
Promote consistency of labeling.
Reduce user effort when assigning labels.
Reduce the risk of users accessing and sharing sensitive data with unauthorized
users because it wasn't labeled.

 Tip

There are two types of inheritance for sensitivity labels. Downstream inheritance
refers to downstream items (like reports) that automatically inherit a label from its
Power BI dataset. However, the focus of this section is on upstream inheritance.
Upstream inheritance refers to a Power BI dataset that inherits a label from a data
source that's upstream from the dataset.

Consider an example where the organization's working definition for the sensitivity label
of Highly Restricted includes financial account numbers. Because financial account
numbers are stored in an Azure SQL Database, the Highly Restricted sensitivity label has
been assigned to that source. When data from the Azure SQL Database is imported to
Power BI, the intent is for the dataset to inherit the label.

You can assign sensitivity labels to a supported data source in different ways.

Data discovery and classification: You can scan a supported database to identify
columns that may contain sensitive data. Based on the scan results, you can apply
some or all the label recommendations. Data Discovery & Classification is
supported for databases such as Azure SQL Database, Azure SQL Managed
Instance, and Azure Synapse Analytics. SQL Data Discovery & Classification is
supported for on-premises SQL Server databases.
Manual assignments: You can assign a sensitivity label to an Excel file. You may
also manually assign labels to database columns in Azure SQL Database or SQL
Server.
Auto-labeling in Microsoft Purview: Sensitivity labels can be applied to supported
data sources that are registered as assets in the Microsoft Purview Data Map.

2 Warning

The details for how to assign sensitivity labels to a data source are out of scope for
this article. The technical capabilities are evolving with respect to what's supported
for inheritance in Power BI. We recommend that you conduct a technical proof of
concept to verify your goals, ease of use, and whether the capabilities meet your
requirements.
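
As one possible starting point for such a proof of concept, the following Python sketch
manually classifies a column in Azure SQL Database or SQL Server by running the T-SQL
ADD SENSITIVITY CLASSIFICATION statement through pyodbc. The server, database, table,
column, and label GUID are placeholders, and whether the classification maps to your
published Microsoft Purview sensitivity label (so that Power BI can inherit it) is exactly
the kind of detail to confirm during testing.

```python
# Minimal sketch: manually classify a database column so that the sensitivity label can
# potentially be inherited by a Power BI dataset that imports it. Connection details,
# object names, and the label GUID are placeholders.
import pyodbc

connection = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<server>.database.windows.net;"      # placeholder
    "Database=<database>;"                       # placeholder
    "Authentication=ActiveDirectoryInteractive;"
)

classify_sql = """
ADD SENSITIVITY CLASSIFICATION TO dbo.Customer.FinancialAccountNumber
WITH (
    LABEL = 'Highly Restricted',            -- display name (placeholder)
    LABEL_ID = '<sensitivity-label-guid>',  -- placeholder GUID
    INFORMATION_TYPE = 'Financial',
    RANK = HIGH
)
"""

with connection:
    # The connection context manager commits the transaction when it exits cleanly.
    connection.execute(classify_sql)
print("Sensitivity classification applied to the column.")
```
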
Inheritance will happen only when you enable the Apply sensitivity labels from data
sources to their data in Power BI tenant setting. For more information about tenant
settings, see the Power BI tenant settings section later in this article.

 Tip

You'll need to become familiar with the inheritance behavior. Be sure to include
various circumstances in your test plan.

Checklist - When planning for the inheritance of labels from data sources, key decisions
and actions include:

" Decide whether Power BI should inherit labels from data sources: Decide whether
Power BI should inherit these labels. Plan to enable the tenant setting to allow this
capability.
" Review technical prerequisites: Determine whether you need to take extra steps to
assign sensitivity labels to data sources.
" Test label inheritance functionality: Complete a technical proof of concept to test
how inheritance works. Verify that the feature works as you expect in various
circumstances.
" Include in user documentation: Ensure that information about label inheritance is
added to guidance provided to users. Include realistic examples in the user
documentation.

Published label policies


After you define a sensitivity label, the label can be added to one or more label policies.
A label policy is how you publish the label so it can be used. It defines which labels can
be used by which set of authorized users. There are other settings as well, such as the
default label and mandatory label.

Using multiple label policies can be helpful when you need to target different sets of
users. You can define a sensitivity label once and then include it in one or more label
policies.

 Tip
A sensitivity label isn't available for use until a label policy that contains the label is
published in the Microsoft Purview compliance portal.

Authorized users and groups


When you create a label policy, one or more users or groups must be selected. The label
policy determines which users can use the label. It allows users to assign that label to
specific content, such as a Power BI Desktop file, an Excel file, or an item published to
the Power BI service.

We recommend that you keep the authorized users and groups as simple as possible. A
good rule of thumb is for the primary labels to be published for all users. Sometimes it's
appropriate for a sub-label to be assigned, or scoped, to a subset of users.

We recommend that you assign groups instead of individuals whenever possible. The
use of groups simplifies management of the policy and reduces how often it needs to
be republished.

2 Warning

Authorized users and groups for a label are different from the users assigned to
Azure RMS for a protected (encrypted) label. If a user is having issues opening an
encrypted file, investigate the encryption permissions for specific users and
groups (which are set up within the label configuration, rather than within the label
policy). In most situations, we recommend that you assign the same users to both.
This consistency will avoid confusion and reduce support tickets.

Authorized users and groups are set in the Microsoft Purview compliance portal when
the label policy is published.

Checklist - When planning for authorized users and groups in your label policy, key
decisions and actions include:

" Determine which labels apply to all users: Discuss and decide which sensitivity
labels should be available for use by all users.
" Determine which sub-labels apply to a subset of users: Discuss and decide
whether there are any sub-labels that will be available for use only by a specific set
of users or groups.
" Identify whether any new groups are needed: Determine whether any new Azure
Active Directory (Azure AD) groups will need to be created to support the
authorized users and groups.
" Create a planning document: If the mapping of authorized users to sensitivity
labels is complicated, create a mapping of users and groups for each label policy.

Default label for Power BI content


When creating a label policy, you can choose a default label. For example, the General
Internal Use label could be set as the default label. This setting will affect new Power BI
items created in either Power BI Desktop or the Power BI service.

You can set up the default label in the label policy specifically for Power BI content,
which is separate from other items. Most information protection decisions and settings
apply more broadly. However, the default label setting (and the mandatory label setting
that's described next) applies only to Power BI.

 Tip

While you can set different default labels (for Power BI and non-Power BI content),
consider whether that's the best option for users.

It's important to understand that a new default label policy will apply to content created,
or edited, after the label policy is published. It won't retroactively assign the default
label to existing content. Your Power BI administrator can use the information protection
APIs to set sensitivity labels in bulk to ensure that existing content is assigned to a
default sensitivity label.
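
As an illustration of that bulk-labeling approach, the following is a minimal Python sketch that calls the Power BI admin information protection API. It assumes a service principal that's permitted to call the Power BI admin APIs; the tenant ID, client ID, secret, label GUID, and dataset GUIDs are placeholders, and the exact endpoint and payload shape should be verified against the current Power BI REST API reference before you rely on it.

```python
# Illustrative sketch: bulk-assign a default sensitivity label to existing datasets.
# Requires the `requests` and `msal` packages. The endpoint and payload shape follow
# the Power BI admin "information protection set labels" API; verify both against
# the current Power BI REST API reference.
import requests
import msal

TENANT_ID = "<tenant-id>"               # placeholder
CLIENT_ID = "<app-client-id>"           # placeholder
CLIENT_SECRET = "<app-secret>"          # placeholder
LABEL_ID = "<sensitivity-label-guid>"   # GUID of the default label (placeholder)
DATASET_IDS = ["<dataset-guid-1>", "<dataset-guid-2>"]  # items to label (placeholders)

# Acquire an app-only token for the Power BI service.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)

# Assumed endpoint and body shape -- confirm in the REST API reference.
response = requests.post(
    "https://api.powerbi.com/v1.0/myorg/admin/informationprotection/setLabels",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json={
        "artifacts": {"datasets": [{"id": d} for d in DATASET_IDS]},
        "labelId": LABEL_ID,
    },
)
response.raise_for_status()
print(response.json())  # per-item success/failure details
```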

The default label options are set in the Microsoft Purview compliance portal when the
label policy is published.

Checklist - When planning whether a default label for Power BI content will be applied,
key decisions and actions include:

" Decide whether a default label will be specified: Discuss and decide whether a
default label is appropriate. If so, determine which label is best suited as the default.
" Include in user documentation: If necessary, ensure information about the default
label is mentioned in the guidance provided for users. The goal is for users to
understand how to determine whether the default label is appropriate, or whether it
should be changed.

Mandatory labeling of Power BI content


Data classification is a common regulatory requirement. To meet this requirement, you
can choose to require users to label all Power BI content. This mandatory labeling
requirement takes effect when users create or edit Power BI content.

You might choose to implement mandatory labels, default labels (described in the
previous section), or both. You should consider the following points.

A mandatory label policy ensures the label won't be empty
A mandatory label policy requires users to choose what the label should be
A mandatory label policy prevents users from removing a label entirely
A default label policy is less intrusive for users because it doesn't require them to
take action
A default label policy can result in content that's mis-labeled since a user didn't
expressly make the choice
Enabling both a default label policy and a mandatory label policy can provide
complementary benefits

 Tip

If you choose to implement mandatory labels, we recommend that you also
implement default labels.

You can set up the mandatory label policy specifically for Power BI content. Most
information protection settings apply more broadly. However, the mandatory label
setting (and default label setting) applies specifically to Power BI.

 Tip

A mandatory label policy isn't applicable to service principals or APIs.

The mandatory labeling options are set in the Microsoft Purview compliance portal
when the label policy is published.
Checklist - When planning whether mandatory labeling of Power BI content will be
required, key decisions and actions include:

" Decide whether labels will be mandatory: Discuss and decide whether mandatory
labels are necessary for compliance reasons.
" Include in user documentation: If necessary, ensure that information about
mandatory labels is added to guidance provided for users. The goal is for users to
understand what to expect.

Licensing requirements
Specific licenses must be in place to work with sensitivity labels.

A Microsoft Purview Information Protection license is required for:

Administrators: The administrators who will set up, manage, and oversee labels.
Users: The content creators and owners who will be responsible for applying labels
to content. Users also includes those who need to decrypt, open, and view
protected (encrypted) files.

You might already have these capabilities because they're included in license suites, such
as Microsoft 365 E5 . Alternatively, Microsoft 365 E5 Compliance capabilities may be
purchased as a standalone license.

A Power BI Pro or Premium Per User (PPU) license is also required for users who will
apply and manage sensitivity labels in the Power BI service or Power BI Desktop.

 Tip

If you need clarifications about licensing requirements, talk to your Microsoft
account team. Be aware that the Microsoft 365 E5 Compliance license includes
additional capabilities that are out of scope for this article.
Checklist - When evaluating licensing requirements for sensitivity labels, key decisions
and actions include:

" Review product licensing requirements: Ensure that you review all the licensing
requirements.
" Review user licensing requirements: Verify that all users you expect to assign labels
have Power BI Pro or PPU licenses.
" Procure additional licenses: If applicable, purchase more licenses to unlock the
functionality that you intend to use.
" Assign licenses: Assign a license to each user who will assign, update, or manage
sensitivity labels. Assign a license to each user who will interact with encrypted files.

Power BI tenant settings


There are several Power BI tenant settings that are related to information protection.

) Important

The Power BI tenant settings for information protection shouldn't be set until after
all prerequisites are met. The labels and the label policies should be set up and
published in the Microsoft Purview compliance portal. Until this time, you're still in
the decision-making process. Before setting the tenant settings, you should first
determine a process for how to test the functionality with a subset of users. Then
you can decide how to do a gradual roll out.

Users who can apply labels


You should decide who will be allowed to apply sensitivity labels to Power BI content.
This decision will determine how you set the Allow users to apply sensitivity labels for
content tenant setting.

It's typically the content creator or owner who assigns the label during their normal
workflow. The most straightforward approach is to enable this tenant setting, which
allows all Power BI users to apply labels. In this case, standard workspace roles will
determine who can edit items in the Power BI service (including applying a label). You
can use the activity log to track when a user assigns or changes a label.
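
As a starting point for that kind of tracking, here's a minimal Python sketch that retrieves one day of events from the Power BI Get Activity Events admin API and keeps only the label-related entries. The operation names and response field names shown are assumptions to confirm against your own activity log output.

```python
# Minimal sketch: pull one day of Power BI activity events and keep the entries
# related to sensitivity labels. Requires an admin-scope access token (see the
# earlier sketch) and the `requests` package.
import requests

ACCESS_TOKEN = "<admin-api-access-token>"   # placeholder
# Assumed operation names -- confirm the exact values in your activity log.
LABEL_ACTIVITIES = {"SensitivityLabelApplied", "SensitivityLabelChanged", "SensitivityLabelRemoved"}

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-15T00:00:00Z'&endDateTime='2024-01-15T23:59:59Z'"
)
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
label_events = []

while url:
    data = requests.get(url, headers=headers).json()
    label_events += [
        e for e in data.get("activityEventEntities", [])
        if e.get("Activity") in LABEL_ACTIVITIES
    ]
    # Follow pagination; stop when no continuation URI is returned
    # (verify the pagination fields against the API reference).
    url = data.get("continuationUri")

for event in label_events:
    # Field names assumed from the activity log schema; adjust as needed.
    print(event.get("CreationTime"), event.get("UserId"),
          event.get("Activity"), event.get("ItemName"))
```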

Labels from data sources


You should decide whether you want sensitivity labels to be inherited from supported
data sources that are upstream from Power BI. For example, if columns in an Azure SQL
Database have been defined with the Highly Restricted sensitivity label, then a Power BI
dataset that imports data from that source will inherit that label.

If you decide to enable inheritance from upstream data sources, set the Apply
sensitivity labels from data sources to their data in Power BI tenant setting. We
recommend that you plan to enable inheritance of data source labels to promote
consistency and reduce effort.

Labels for downstream content


You should decide whether sensitivity labels should be inherited by downstream
content. For example, if a Power BI dataset has a sensitivity label of Highly Restricted,
then all downstream reports will inherit this label from the dataset.

If you decide to enable inheritance by downstream content, set the Automatically apply
sensitivity labels to downstream content tenant setting. We recommend that you plan
to enable inheritance by downstream content to promote consistency and reduce effort.

Workspace administrator overrides


This setting applies for labels that were applied automatically (such as when default
labels are applied, or when labels are automatically inherited). When a label has
protection settings, Power BI allows only authorized users to change the label. This
setting enables workspace administrators to change a label that was applied
automatically, even if there are protection settings on the label.

If you decide to allow label updates, set the Allow workspace administrators to
override automatically applied sensitivity labels tenant setting. This setting applies to
the entire organization (not individual groups). It allows workspace administrators to
change labels that were automatically applied.

We recommend that you consider allowing Power BI workspace administrators to
update labels. You can use the activity log to track when they assign or change a label.

Disallow sharing of protected content


You should decide whether protected (encrypted) content can be shared with everyone
in your organization.
If you decide to disallow sharing of protected content, set the Restrict content with
protected labels from being shared via link with everyone in your organization tenant
setting. This setting applies to the entire organization (not individual groups).

We strongly recommend that you plan to enable this tenant setting to disallow sharing
of protected content. When enabled, it disallows sharing operations with the entire
organization for more sensitive content (defined by the labels that have encryption
defined). By enabling this setting, you'll reduce the possibility of data leakage.

) Important

There's a similar tenant setting named Allow shareable links to grant access to
everyone in your organization. Although it has a similar name, its purpose is
different. It defines which groups can create a sharing link for the entire
organization, regardless of the sensitivity label. In most cases, we recommend this
capability be limited in your organization. For more information, see the Report
consumer security planning article.

Supported export file types


In the Power BI admin portal, there are many export and sharing tenant settings. In most
circumstances, we recommend that the ability to export data be available to all (or most)
users so as not to limit user productivity.

However, highly regulated industries may have a requirement to restrict exports when
information protection can't be enforced for the output format. A sensitivity label that's
applied in the Power BI service follows the content when it's exported to a supported file
format. Supported formats include Excel, PowerPoint, PDF, and Power BI Desktop files. Since the sensitivity
label stays with the exported file, protection benefits (encryption that prevents
unauthorized users from opening the file) are retained for these supported file formats.

2 Warning

When exporting from Power BI Desktop to a PDF file, the protection is not retained
for the exported file. We recommend that you educate your content creators to
export from the Power BI service to achieve maximum information protection.

Not all export formats support information protection. Unsupported formats, such as
.csv, .xml, .mhtml, or .png files (available when using the ExportToFile API), may be
disabled in the Power BI tenant settings.
 Tip

We recommend that you restrict exporting capabilities only when you must meet
specific regulatory requirements. In typical scenarios, we recommend that you use
the Power BI activity log to identify which users are performing exports. You can
then teach these users about more efficient and secure alternatives.
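
Building on the activity-event pattern shown earlier in this article, the following hypothetical sketch summarizes export operations per user so you know who to reach out to. The export operation names listed are assumptions to verify against your own activity log output.

```python
# Minimal sketch: count export operations per user over one day of activity
# events, so you can coach frequent exporters before restricting formats.
from collections import Counter
import requests

ACCESS_TOKEN = "<admin-api-access-token>"   # placeholder
EXPORT_ACTIVITIES = {"ExportReport", "ExportArtifact"}  # assumed operation names

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-15T00:00:00Z'&endDateTime='2024-01-15T23:59:59Z'"
)
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
exports_by_user = Counter()

while url:
    data = requests.get(url, headers=headers).json()
    for event in data.get("activityEventEntities", []):
        if event.get("Activity") in EXPORT_ACTIVITIES:
            exports_by_user[event.get("UserId")] += 1
    url = data.get("continuationUri")  # follow pagination until exhausted

for user, count in exports_by_user.most_common(10):
    print(f"{user}: {count} exports")
```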

Checklist - When planning how tenant settings will be set up in the Power BI admin
portal, key decisions and actions include:

" Decide which users can apply sensitivity labels: Discuss and decide whether
sensitivity labels can be assigned to Power BI content by all users (based on
standard Power BI security) or only by certain groups of users.
" Determine whether labels should be inherited from upstream data sources:
Discuss and decide whether labels from data sources should be automatically
applied to Power BI content that uses the data source.
" Determine whether labels should be inherited by downstream items: Discuss and
decide whether labels assigned to existing Power BI datasets should be
automatically applied to related content.
" Decide whether Power BI workspace administrators can override labels: Discuss
and decide whether it's appropriate for workspace administrators to be able to
change protected labels that were automatically assigned.
" Determine whether protected content can be shared with the entire organization:
Discuss and decide whether sharing links for "people in your organization" can be
created when a protected (encrypted) label has been assigned to an item in the
Power BI service.
" Decide which export formats are enabled: Identify regulatory requirements that
will affect which export formats are available. Discuss and decide whether users will
be able to use all export formats in the Power BI service. Determine whether certain
formats need to be disabled in the tenant settings when the export format doesn't
support information protection.

Data classification and protection policy


Setting up your label structure and publishing your label policies are necessary first
steps. However, there's more to do to help your organization be successful with
classifying and protecting data. It's critical that you provide guidance to users about
what can and can't be done with content that's been assigned to a certain label. That's
where a data classification and protection policy is helpful. You can think of it
as your label guidelines.

7 Note

A data classification and protection policy is an internal governance policy. You
might choose to call it something different. What matters is that it's documentation
that you create and provide to your users so they know how to use the labels
effectively. Since the label policy is a specific page in the Microsoft Purview
compliance portal, try to avoid calling your internal governance policy by the same
name.

We recommend that you iteratively create your data classification and protection policy
while you're in the decision-making process. It will mean everything is clearly defined
when it's time to set up the sensitivity labels.

Here are some of the key pieces of information that you might include in your data
classification and protection policy.

Description of the label: Beyond the label name, provide a full description of the
label. The description should be clear yet brief. Here are some example
descriptions:
General Internal Use - for private, internal, business data
Restricted - for sensitive business data that would cause harm if compromised or
is subject to regulatory or compliance requirements
Examples: Provide examples to help explain when to use the label. Here are some
examples:
General Internal Use - for most internal communications, non-sensitive support
data, survey responses, reviews, ratings, and imprecise location data
Restricted - for personally identifiable information (PII) data such as name,
address, phone, email, government ID number, race, or ethnicity. Includes
vendor and partner contracts, non-public financial data, employee, and human
resources (HR) data. Also includes proprietary information, intellectual property,
and precise location data.
Label required: Describes whether assigning a label is mandatory for all new and
changed content.
Default label: Describes whether this label is a default label that's automatically
applied to new content.
Access restrictions: Additional information that clarifies whether internal and/or
external users are permitted to see content assigned to this label. Here are some
examples:
All users, including internal users, external users, and third parties with active
confidentiality agreement (NDA) in place may access this information.
Internal users only may access this information. No partners, vendors,
contractors, or third parties, regardless of NDA or confidentiality agreement
status.
Internal access to information is based on job role authorization.
Encryption requirements: Describes whether data is required to be encrypted at rest and in transit. This information will correlate to how the sensitivity label is set
up and will affect the protection policies that may be implemented for file (RMS)
encryption.
Downloads allowed and/or offline access: Describes whether offline access is
permitted. It can also define whether downloads are permitted to organizational or
personal devices.
How to request an exception: Describes whether a user can request an exception
to the standard policy, and how that can be done.
Audit frequency: Specifies the frequency of compliance reviews. More sensitive
labels should involve more frequent and thorough auditing processes.
Other metadata: A data policy requires more metadata, such as policy owner,
approver, and effective date.

 Tip

When creating your data classification and protection policy, focus on making it a
straightforward reference for users. It should be as short and clear as possible. If it's
too complex, users won't always take the time to understand it.

One way to automate the implementation of a policy, such as the data classification and
protection policy, is with Azure AD terms of use. When a terms of use policy is set up,
users are required to acknowledge the policy before they're permitted to visit the Power
BI service for the first time. It's also possible to ask them to agree again on a recurring
basis, for example every 12 months.

Checklist - When planning the internal policy to govern expectations for usage of
sensitivity labels, key decisions and actions include:
" Create a data classification and protection policy: For each sensitivity label in your
structure, create a centralized policy document. This document should define what
can or can't be done with content that's been assigned each label.
" Obtain consensus on the data classification and protection policy: Ensure that all
necessary people in the team you assembled have agreed to the provisions.
" Consider how to handle exceptions to the policy: Highly decentralized
organizations might need to consider whether exceptions may arise. Though it's
preferable to have a standardized data classification and protection policy, decide
how you'll address exceptions when new requests are made.
" Consider where to locate your internal policy: Give some thought to where the
data classification and protection policy should be published. Ensure that all users
can easily access it. Plan to include it on the custom help page when you publish
the label policy.

User documentation and training


Before rolling out information protection functionality, we recommend that you create
and publish guidance documentation for your users. The goal of the documentation is
to achieve a seamless user experience. Preparing the guidance for your users will also
help you make sure you've considered everything.

You can publish the guidance as part of the sensitivity label's custom help page. A
SharePoint page or a wiki page in your centralized portal can work well because it will
be easy to maintain. A document uploaded to a shared library or Teams site is also a
good approach. The URL for the custom help page is specified in the Microsoft Purview
compliance portal when you publish the label policy.

 Tip

The custom help page is an important resource. Links to it are made available in
various applications and services.

The user documentation should include the data classification and protection policy
described earlier. That internal policy is targeted at all users. Interested users include
content creators and consumers who need to understand the implications of labels that
have been assigned by other users.

In addition to the data classification and protection policy, we recommend that you
prepare guidance for your content creators and owners about:
Viewing labels: Information about what each label means. Correlate each label
with your data classification and protection policy.
Assigning labels: Guidance on how to assign and manage labels. Include
information they'll need to know, such as mandatory labels, default labels, and
how label inheritance works.
Workflow: Suggestions for how to assign and review labels as part of their normal
workflow. Labels can be assigned in Power BI Desktop as soon as development
begins, which protects the original Power BI Desktop file during the development
process.
Situational notifications: Awareness about system-generated notifications that
users might receive. For example, a SharePoint site is assigned to a certain
sensitivity label, but an individual file has been assigned a more sensitive (higher)
label. The user who assigned the higher label will receive an email notification that
the label assigned to the file is incompatible with the site where it's stored.

Include information about who users should contact if they have questions or technical
issues. Since information protection is an organization-wide project, support is often
provided by IT.

FAQs and examples are especially helpful for user documentation.

 Tip

Some regulatory requirements include a specific training component.

Checklist - When preparing user documentation and training, key decisions and actions
include:

" Identify what information to include: Determine what information should be


included so all relevant audiences understand what's expected of them when it
comes to protecting data on behalf of the organization.
" Publish the custom help page: Create and publish a custom help page. Include
guidance about labeling in the form of FAQs and examples. Include a link to access
the data classification and protection policy.
" Publish the data classification and protection policy: Publish the policy document
that defines what exactly can or can't be done with content that's been assigned to
each label.
" Determine whether specific training is needed: Create or update your user training
to include helpful information, especially if there's a regulatory requirement to do
so.

User support
It's important to verify who will be responsible for user support. It's common that
sensitivity labels are supported by a centralized IT help desk.

You may need to create guidance for the help desk (sometimes known as a runbook).
You may also need to conduct knowledge transfer sessions to ensure that the help desk
is ready to respond to support requests.

Checklist - When preparing for the user support function, key decisions and actions
include:

" Identify who will provide user support: When you're defining roles and
responsibilities, make sure to include how users will get help for issues related to
information protection.
" Ensure the user support team is ready: Create documentation and conduct
knowledge transfer sessions to ensure that the help desk is ready to support
information protection. Emphasize complex aspects that might confuse users, such
as encryption protection.
" Communicate between teams: Discuss the process and expectations with the
support team, as well as your Power BI administrators and Center of Excellence.
Make sure that everyone involved is prepared for potential questions from Power BI
users.

Implementation summary
After the decisions have been made and prerequisites have been met, it's time to begin
implementing information protection according to your gradual rollout plan.

The following checklist includes a summarized list of the end-to-end implementation
steps. Many of the steps have other details that were covered in previous sections of this
article.
Checklist - When implementing information protection, key decisions and actions
include:

" Verify current state and goals: Ensure that you have clarity on the current state of
information protection in the organization. All goals and requirements for
implementing information protection should be clear and actively used to drive the
decision-making process.
" Make decisions: Review and discuss all the decisions that are required. This task
should occur prior to setting up anything in production.
" Review licensing requirements: Ensure that you understand the product licensing
and user licensing requirements. Procure and assign more licenses, if necessary.
" Publish user documentation: Publish your data classification and protection policy.
Create a custom help page that contains the relevant information that users will
need.
" Prepare the support team: Conduct knowledge transfer sessions to ensure that the
support team is ready to handle questions from users.
" Create the sensitivity labels: Set up each of the sensitivity labels in the Microsoft
Purview compliance portal.
" Publish a sensitivity label policy: Create and publish a label policy in the Microsoft
Purview compliance portal. Start by testing with a small group of users.
" Set the Power BI tenant settings: In the Power BI admin portal, set the information
protection tenant settings.
" Perform initial testing: Perform an initial set of tests to verify everything is working
correctly. Use a non-production tenant for initial testing, if available.
" Gather user feedback: Publish the label policy to a small subset of users who are
willing to test the functionality. Obtain feedback on the process and user
experience.
" Continue iterative releases: Publish the label policy to other groups of users.
Onboard more groups of users until the entire organization is included.

 Tip

These checklist items are summarized for planning purposes. For more details
about these checklist items, see the previous sections of this article.

Ongoing monitoring
After you've completed the implementation, you should direct your attention to
monitoring and tuning sensitivity labels.

Power BI administrators and security and compliance administrators will need to
collaborate from time to time. For Power BI content, there are two audiences who are
concerned with monitoring.

Power BI administrators: An entry in the Power BI activity log is recorded each
time a sensitivity label is assigned or changed. The activity log entry records details
of the event, including user, date and time, item name, workspace, and capacity.
Other activity log events (such as when a report is viewed) will include the
sensitivity label ID that's assigned to the item.
Security and compliance administrators: The organization's security and
compliance administrators will typically use Microsoft Purview reports, alerts, and
audit logs.

Checklist - When monitoring information protection, key decisions and actions include:

" Verify roles and responsibilities: Ensure that you're clear on who is responsible for
which actions. Educate and communicate with your Power BI administrators or
security administrators, if they'll be directly responsible for some aspects.
" Create or validate your process for reviewing activity: Make sure the security and
compliance administrators are clear on the expectations for reviewing the activity
explorer regularly.

 Tip

For more information about auditing, see Auditing of information protection and
data loss prevention for Power BI.

Next steps
In the next article in this series, learn about data loss prevention for Power BI.
Power BI implementation planning:
Data loss prevention for Power BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article describes the planning activities related to implementing data loss
prevention (DLP) in Power BI. It's targeted at:

Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization. Power BI administrators need to collaborate with
information security teams and other relevant teams.
Center of Excellence, IT, and BI teams: Others who are responsible for overseeing
Power BI in the organization. They may need to collaborate with Power BI
administrators, information security teams, and other relevant teams.

) Important

Data loss prevention (DLP) is a significant organization-wide undertaking. Its scope
and impact are far greater than Power BI alone. This type of initiative requires
funding, prioritization, and planning. Expect to involve several cross-functional
teams in your planning, usage, and oversight efforts.

We recommend that you follow a gradual, phased approach to rolling out DLP for Power
BI. For a description of the types of rollout phases that you should consider, see
Information protection for Power BI (Rollout phases).

Purpose of DLP
Data loss prevention (DLP) refers to activities and practices that safeguard the
organization's data. The goal for DLP is to reduce the risk of data leakage, which can
happen when sensitive data is shared with unauthorized people. Although responsible
user behavior is a critical part of safeguarding data, DLP usually refers to policies that
are automated.
DLP allows you to:

Detect and inform administrators when risky, inadvertent, or inappropriate sharing
of sensitive data has occurred. Specifically, it allows you to:
Improve the overall security setup of your Power BI tenant, with automation and
information.
Enable analytical use cases that involve sensitive data.
Provide auditing information to security administrators.
Provide users with contextual notifications. Specifically, it allows you to:
Help users make the right decisions during their normal workflow.
Guide users to follow your data classification and protection policy without
negatively affecting their productivity.

DLP services
Broadly, there are two different services that can implement data loss prevention.

Microsoft Purview DLP policies for Power BI
Microsoft Defender for Cloud Apps

Microsoft Purview DLP policies for Power BI


A DLP policy for Power BI is set up in the Microsoft Purview compliance portal. It can
detect sensitive data in a dataset that's been published to a Premium workspace in the
Power BI service.

The goal for this type of DLP policy is to bring awareness to users and inform
administrators of where sensitive data is stored. The DLP policy can generate user
notifications and administrator alerts based on sensitive information types or sensitivity
labels. For example, you can determine whether credit card information or personally
identifiable information (PII) is stored in a dataset.

7 Note

DLP for Power BI is the focus of this article.

Microsoft Defender for Cloud Apps


Microsoft Defender for Cloud Apps is a tool with many capabilities. Some policies that
can be set up in Microsoft Defender for Cloud Apps (with integration with Azure Active
Directory) include DLP. These policies can block, log, or alert when certain user activities
happen. For example, when a user attempts to download a report from the Power BI
service that's been assigned a Highly Restricted sensitivity label, the download action is
blocked.

The Defender for Cloud Apps for Power BI article covers using Defender for Cloud Apps
for monitoring the Power BI service. The remainder of this article focuses on DLP for
Power BI.

) Important

A DLP policy for Power BI that's set up in the Microsoft Purview compliance portal
may be applied only for content that's stored in a Power BI Premium workspace.
However, policies that are set up in Defender for Cloud Apps don't have a similar
Power BI Premium prerequisite. Be aware that the functionality, purpose, and
available actions differ for the two toolsets. To achieve maximum effect, we
recommend that you consider using both toolsets.

Prerequisites for DLP for Power BI


By now, you should have completed the organization-level planning steps that are
described in the Information protection for Power BI article. Before proceeding, you
should have clarity on:

Current state: The current state of DLP in your organization. You should have an
understanding of the extent to which DLP is already in use, and who's responsible for
managing it.
Goals and requirements: The strategic goals for implementing DLP in your
organization. Understanding the goals and requirements will serve as a guide for
your implementation efforts.

Usually, you implement information protection (described in the Information protection
for Power BI article) before you implement DLP. However, that isn't a prerequisite for
using DLP for Power BI. If sensitivity labels are published, they can be used with DLP for
Power BI. You can also use sensitive information types with DLP for Power BI. Both types
are described in this article.

Key decisions and actions


The intention of a DLP policy is to set up an automated action, based on rules and
conditions, on the content you intend to protect. You'll need to make some decisions
about the rules and conditions that will support your goals and requirements.

The advantage of defining separate rules within a single DLP policy is that you can
enable customized alerts or user notifications.

There's a hierarchical precedence to consider for the list of DLP policies, as well as for DLP policy
rules. The precedence order affects which policy (or rule) is invoked first when content
matches more than one.

U Caution

This section isn't an exhaustive list of all possible DLP decisions for all possible
applications. Ensure that you work with other stakeholders and system
administrators to make decisions that work well for all applications and use cases.
For example, we recommend that you investigate additional DLP policies for
protecting source files and exported files that are stored in OneDrive or SharePoint.
This set of articles focuses only on content in the Power BI service.

Type of sensitive data


A DLP policy for Power BI that's set up in the Microsoft Purview compliance portal can
be based on either a sensitivity label or a sensitive information type.

) Important

Although you can assign sensitivity labels to most types of items in Power BI, the
DLP policies described in this article are focused specifically on datasets. The
dataset must be published to a Premium workspace.

Sensitivity label

You can use sensitivity labels to classify content, ranging from less sensitive to more
sensitive.

When a DLP policy for Power BI is invoked, a sensitivity label rule checks datasets (that
are published to the Power BI service) for the presence of a certain sensitivity label. As
described in the Information protection for Power BI article, a label can be assigned
either by a user or by an automated process (for example, an inherited label or a default
label).
Here are some examples of when you might create a DLP rule based on a sensitivity
label.

Regulatory compliance: You have a sensitivity label that's reserved for data that's
subject to a particular regulatory requirement. You want to raise an alert for your
security administrators when users assign that sensitivity label to a dataset in the
Power BI service.
Reminders for content creators about confidential data: You have a sensitivity
label that's used for confidential data. You want to generate a user notification
when a user views the dataset details page within the data hub in the Power BI
service. For instance, you could remind users about how to appropriately handle
confidential data.

Other considerations about user notifications and alerts are described later in this
article.

Checklist - When considering the needs for sensitivity label rules, key decisions and
actions include:

" Verify current state of information protection: Ensure that sensitivity labels are
deployed in the organization, and that they're ready for use by DLP policies.
" Compile use cases for DLP based on sensitivity labels: Determine which sensitivity
labels would benefit from having DLP policies in place. Consider your goals,
regulations, and internal requirements.
" Prioritize the list of use cases for DLP based on sensitivity labels: Discuss the top
priorities with your team. Identify which items to prioritize on your project plan.

7 Note

DLP policies are typically automated. However, responsible user actions also play a
crucial role in protecting data.

For more information, see Information protection for Power BI (Data classification and
protection policy). It describes an internal governance policy that provides guidance
about what users can and can't do with content that's been assigned to a certain
sensitivity label.

Sensitive information types


Not all types of data are the same; certain types of data are inherently more sensitive
than others. There are many different sensitive information types (SITs). Depending on
your industry and the compliance requirements, only some SITs will be applicable to
your organization.

Some common examples of SITs include:

Passport, social security, and driver's license numbers
Bank account and routing numbers
Credit card and debit card numbers
Tax identification and national ID numbers
Health ID numbers and medical information
Physical addresses
Account keys, passwords, and database connection strings

 Tip

If sensitive data doesn't have analytical value, ask yourself whether it should reside
in an analytical system. We recommend that you educate your content creators to
help them make good decisions about what data to store in Power BI.

SITs are pattern-based classifiers. They'll look for a known pattern in text by using
regular expressions.

You'll find many pre-configured SITs in the Microsoft Purview compliance portal. When
they meet your requirements, you should use a pre-configured SIT to save time.
Consider the credit card number pre-configured SIT: It detects the correct patterns for
all major card issuers, ensures the validity of the checksum, and searches for a relevant
keyword within proximity to the credit card number.
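
As a concrete illustration of the checksum validation mentioned above, the following Python snippet implements the Luhn algorithm that credit card numbers must satisfy. It isn't how you configure a SIT (that happens in the Microsoft Purview compliance portal); it only shows why a random 16-digit number typically won't trigger the classifier.

```python
# Worked illustration of the checksum check used for credit card numbers (Luhn).
def luhn_is_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 when the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_is_valid("4111 1111 1111 1111"))  # True - a common test card number
print(luhn_is_valid("4111 1111 1111 1112"))  # False - fails the checksum
```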

If the pre-configured SITs don't meet your needs, or you have proprietary data patterns,
you can create a custom SIT. For example, you can create a custom SIT to match the
pattern of your employee ID number.
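
To illustrate the pattern-matching idea behind a custom SIT, here's a minimal Python sketch that combines a regular expression with supporting keywords. The EMP-###### employee ID format and the keywords are hypothetical; an actual custom SIT is defined in the Microsoft Purview compliance portal (or through its PowerShell cmdlets), not in application code.

```python
# Minimal sketch of a pattern-based classifier, similar in spirit to a custom SIT.
import re

EMPLOYEE_ID_PATTERN = re.compile(r"\bEMP-\d{6}\b")        # hypothetical ID format
SUPPORTING_KEYWORDS = ("employee id", "employee number", "badge")

def looks_like_employee_id(text: str) -> bool:
    """Return True when the pattern appears alongside a supporting keyword."""
    if not EMPLOYEE_ID_PATTERN.search(text):
        return False
    return any(keyword in text.lower() for keyword in SUPPORTING_KEYWORDS)

print(looks_like_employee_id("Employee ID: EMP-104233 (HR record)"))  # True
print(looks_like_employee_id("Order reference EMP-104233"))           # False - no keyword
```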

Once the SIT is set up, a DLP policy for Power BI is invoked when a dataset is uploaded
or refreshed. At that time, a sensitive information type rule will check datasets (in the
Power BI service) for the presence of sensitive information types.

Here are some examples of when you might create a DLP rule based on a sensitive
information type.

Regulatory compliance: You have a sensitive information type that's subject to
regulatory requirements. You want to generate an alert for your security
administrators when that type of data is detected within a dataset in the Power BI
service.
Internal requirements: You have a sensitive information type that needs special
handling. To meet internal requirements, you want to generate a user notification
when a user views the dataset settings or the dataset details page in the data hub
(in the Power BI service).

Checklist - When considering the needs for sensitive information types, key decisions
and actions include:

" Compile use cases for DLP based on sensitive information types: Determine which
sensitive information types would benefit from having DLP policies in place.
Consider your goals, regulations, and internal requirements.
" Test existing sensitive information types: Work with the information security team
to verify that the pre-configured SITs will meet your needs. Use test data to confirm
that patterns and keywords are correctly detected.
" Create custom sensitive information types: If applicable, work with your
information security team to create SITs so they can be used in DLP for Power BI.
" Prioritize the list of use cases: Discuss the top priorities with your team. Identify
which items to prioritize on your project plan.

User notifications
When you've identified use cases for DLP with sensitivity labels and SITs, you should
next consider what happens when a DLP rule match occurs. Typically, it involves a user
notification.

User notifications for DLP policies are also known as policy tips. They're useful when you
want to provide more guidance and awareness to your users during the normal course
of their work. It's more likely that users will read and absorb user notifications if they're:

Specific: Correlating the message to the rule makes it far easier to understand.
Actionable: Offering a suggestion for what the user needs to do, or how to find
more information.

For DLP in Power BI, user notifications appear in the dataset settings. They also appear
across the top of the dataset details page in the data hub. For example, a notification
might read: This data contains credit cards. That type of data is not permitted in Power BI
per the Data Classification, Protection, and Usage policy.

You can define one or more rules for each DLP policy. Each rule can optionally have a
different policy tip that will be displayed to users.

Consider the following example of how you could define a DLP policy for detecting
financial data stored within datasets in the Power BI service. The DLP policy uses SITs
and it has two rules.

Rule 1: The first rule detects credit card numbers. The customized policy tip text
reads: This data contains credit card numbers. That type of data is not permitted in
Power BI per the Data Classification and Protection policy.
Rule 2: The second rule detects financial accounts. The customized policy tip text
reads: This data contains sensitive financial information. It requires the use of the
Highly Restricted label. Please refer to the Data Classification and Protection Policy
for requirements when storing financial data.

Rule 1 is more urgent than rule 2. Rule 1 is intended to communicate that there's a
problem that requires action. The second rule is more informational. For urgent issues,
it's a good idea to set up alerting. Alerting for administrators is described in the next
section.

When deciding what notifications users should receive, we recommend that you focus
on showing only highly important notifications. If there are too many policy
notifications, users may become overwhelmed by them. The result is that some
notifications may be overlooked.
Users can take action by reporting an issue when they believe it's a false positive
(misidentified). It's also possible to allow the user to override the policy. These
capabilities are intended to allow communication between Power BI users and the
security administrators who manage DLP for Power BI.

Checklist - When considering DLP user notifications, key decisions and actions include:

" Decide when user notifications are needed: For each DLP rule you intend to create,
determine whether a custom user notification is required.
" Create customized policy tips: For each notification, define what message should
be shown to users. Plan to correlate the message to the DLP rule so that it's specific
and actionable.

Administrator alerting
Alerting is useful for certain DLP rules when you want to track incidents of policy
violations. When you define DLP policy rules, consider whether alerts should be
generated.

 Tip

Alerts are designed to call the attention of an administrator to certain situations.
They're best suited when you intend to actively investigate and resolve important
alerts. You can find all DLP rule matches in the activity explorer in the Microsoft
Purview compliance portal.

Alerting is useful when you want to:

Make your security and compliance administrators aware that something occurred
via the DLP alert management dashboard. Optionally, you can also send an email
to a specific set of users.
See more details for an event that occurred.
Assign an event to someone to investigate it.
Manage the status of an event or add comments to it.
View other alerts generated for activity by the same user.

Each alert can be defined by a severity level, which can be low, medium, or high. The
severity level helps to prioritize the review of open alerts.
Here are two examples of how alerts can be used.

Example 1: You've defined a DLP policy for detecting financial data stored in datasets in
the Power BI service. The DLP policy uses sensitive information types. It has two rules.

Rule 1: This rule detects credit card numbers. Alerting is enabled with a high
severity. An email is generated too.
Rule 2: This rule detects financial accounts. Alerting is enabled with a high severity.

Example 2: You've defined a DLP policy that's invoked when the Highly
Restricted\Executive Committee and Board Members sensitivity label is assigned to a
dataset in the Power BI service. It doesn't generate a user notification. In this situation,
you may not want to generate an alert because you only want to log the occurrence. If
needed, you can obtain more information from the activity explorer.

When an email alert is required, we recommend that you use a mail-enabled security
group. For example, you might use a group named Security and Privacy Admin Alerting.

 Tip

Keep in mind that DLP rules for Power BI are checked every time that a dataset is
uploaded or refreshed. That means an alert could be generated each time the
dataset is refreshed. Regular or frequent data refreshes could result in an
overwhelming number of logged events and alerts.

Checklist - When considering DLP alerting for administrators, key decisions and actions
include:

" Decide when alerts are required: For each DLP rule you intend to create, determine
which situations warrant using alerts.
" Clarify roles and responsibilities: Determine the expectations and the specific
actions that should be taken when an alert is generated.
" Determine who will receive alerts: Decide which security and compliance
administrators will handle open alerts. Confirm that permissions and licensing
requirements are met for each administrator that will use the Microsoft Purview
compliance portal.
" Create email groups: If necessary, create new mail-enabled security groups to
handle alerting.
Workspaces in scope
A DLP policy for Power BI that's set up in the Microsoft Purview compliance portal is
intended to target datasets. Specifically, it supports scanning datasets that have been
published to a Premium workspace.

You can set up the DLP policy to scan all Premium workspaces. Optionally, you can
choose to include, or exclude, specific workspaces. For example, you might exclude
certain development or test workspaces that are considered lower risk (particularly if
they don't contain real production data). Alternatively, you might create separate
policies for certain development or test workspaces.

 Tip

If you decide that only a subset of your Premium workspaces will be included for
DLP, consider the level of maintenance. DLP rules are easier to maintain when all
Premium workspaces are included. If you decide to include only a subset of
Premium workspaces, make sure that you have an auditing process in place so you
can quickly identify whether a new workspace is missing from the DLP policy.

For more information about workspaces, see the workspace planning articles.

Checklist - When considering which workspaces to include in scope for DLP, key
decisions and actions include:

" Decide which Premium workspaces should have DLP applied: Consider whether
the DLP policies should affect all Power BI Premium workspaces or only a subset of
them.
" Create documentation for workspace assignments: If applicable, document which
workspaces are subject to DLP. Include the criteria and reasons why workspaces are
included or excluded.
" Correlate DLP decisions with your workspace governance: If applicable, update
your workspace governance documentation to include details about how DLP is
handled.
" Consider other important file locations: In addition to the Power BI service,
determine whether it's necessary to create other DLP policies to protect source files
and exported files that are stored in OneDrive or SharePoint.
Licensing requirements
To use DLP, there are several licensing requirements. A Microsoft Purview Information
Protection license is required for the administrators who will set up, manage, and
oversee DLP. You might already have these licenses because they're included in some
license suites, such as Microsoft 365 E5 . Alternatively, Microsoft 365 E5 Compliance
capabilities may be purchased as a standalone license.

Also, DLP policies for Power BI require Power BI Premium. This licensing requirement can
be met with a Premium capacity or a Premium Per User (PPU) license.

 Tip

If you need clarifications about licensing requirements, talk to your Microsoft
account team. Note that the Microsoft 365 E5 Compliance license includes other
DLP capabilities that are out of scope for this article.

Checklist - When evaluating DLP licensing requirements, key decisions and actions
include:

" Review product licensing requirements: Ensure that you've reviewed all the
licensing requirements for DLP.
" Review Premium licensing requirements: Verify that the workspaces you want to
configure for DLP are Premium workspaces.
" Procure additional licenses: If applicable, purchase more licenses to unlock the
functionality that you intend to use.
" Assign licenses: Assign a license to each of your security and compliance
administrators who will need one.

User documentation and training


Before rolling out DLP for Power BI, we recommend that you create and publish user
documentation. A SharePoint page or a wiki page in your centralized portal can work
well because it will be easy to maintain. A document uploaded to a shared library or
Teams site is a good solution, too.
The goal of the documentation is to achieve a seamless user experience. Preparing user
documentation will also help you make sure you've considered everything.

Include information about who to contact when users have questions or technical issues.
Because information protection is an organization-wide project, support is often
provided by IT.

FAQs and examples are especially helpful for user documentation.

 Tip

For more information, see Information protection for Power BI (Data classification
and protection policy). It describes suggestions for creating a data classification
and protection policy so that users understand what they can and can't do with
sensitivity labels.

Checklist - When preparing user documentation and training, key decisions and actions
include:

" Update documentation for content creators and consumers: Update your FAQs
and examples to include relevant guidance about DLP policies.
" Publish how to get help: Ensure that your users know how to get help when they're
experiencing something unexpected or that they don't understand.
" Determine whether specific training is needed: Create or update your user training
to include helpful information, especially if there's a regulatory requirement to do
so.

User support
It's important to verify who will be responsible for user support. It's common that DLP is
supported by a centralized IT help desk.

You may need to create guidance for the help desk (sometimes known as a runbook).
You may also need to conduct knowledge transfer sessions to ensure that the help desk
is ready to respond to support requests.
Checklist - When preparing for the user support function, key decisions and actions
include:

" Identify who will provide user support: When you're defining roles and
responsibilities, make sure to account for how users will get help with issues related
to DLP.
" Ensure the user support team is ready: Create documentation and conduct
knowledge transfer sessions to ensure that the help desk is ready to support DLP.
" Communicate between teams: Discuss user notifications and the process to resolve
DLP alerts with the support team, as well as your Power BI administrators and
Center of Excellence. Make sure that everyone involved is prepared for potential
questions from Power BI users.

Implementation and testing summary


After the decisions have been made and prerequisites have been met, it's time to begin
implementing and testing DLP for Power BI.

DLP policies for Power BI are set up in the Microsoft Purview compliance portal
(formerly known as the Microsoft 365 compliance center) in the Microsoft 365 admin
center.

 Tip

The process to set up DLP for Power BI in the Microsoft Purview compliance portal
involves just one step, instead of two, to set up the policy. This process is different
from when you set up information protection in the Microsoft Purview compliance
portal (described in the Information protection for Power BI article). In that case,
there were two separate steps to set up the label and publish a label policy. In this
case for DLP, there's just one step in the implementation process.

The following checklist includes a summarized list of the end-to-end implementation
steps. Many of the steps have other details that were covered in previous sections of this
article.
Checklist - When implementing DLP for Power BI, key decisions and actions include:

" Verify current state and goals: Ensure that you have clarity on the current state of
DLP for use with Power BI. All goals and requirements for implementing DLP should
be clear and actively used to drive the decision-making process.
" Make decisions: Review and discuss all the decisions that are required. This task
should occur prior to setting up anything in production.
" Review licensing requirements: Ensure that you understand the product licensing
and user licensing requirements. If necessary, procure and assign more licenses.
" Publish user documentation: Publish information that users will need to answer
questions and clarify expectations. Provide guidance, communications, and training
to your users so they're prepared.
" Create DLP policies: In the Microsoft Purview compliance portal, create and set up
each DLP policy. Refer to all the decisions previously made for setting up the DLP
rules.
" Perform initial testing: Perform an initial set of tests to verify everything is set up
correctly. Use test mode with some sample data to determine whether everything
behaves as you'd expect, while minimizing the impact on users. Use a small subset
of Premium workspaces initially. Consider using a non-production tenant when you
have access to one.
" Gather user feedback: Obtain feedback on the process and user experience.
Identify areas of confusion, or unexpected results with sensitive information types,
and other technical issues.
" Continue iterative releases: Gradually add more Premium workspaces to the DLP
policy until they're all included.
" Monitor, tune, and adjust: Invest resources to review policy match alerts and audit
logs on a frequent basis. Investigate false positives and adjust policies when
necessary.

 Tip

These checklist items are summarized for planning purposes. For more details
about these checklist items, see the previous sections of this article.

For other steps to take beyond the initial rollout, see Defender for Cloud Apps with
Power BI.
Ongoing monitoring
After you've completed the implementation, you should direct your attention to
monitoring, enforcing, and adjusting DLP policies based on their use.

Power BI administrators and security and compliance administrators will need to
collaborate from time to time. For Power BI content, there are two audiences for
monitoring.

Power BI administrators: An entry in the Power BI activity log is recorded each
time there's a DLP rule match. The Power BI activity log entry records details of the
DLP event, including user, date and time, item name, workspace, and capacity. It
also includes information about the policy, such as the policy name, rule name,
severity, and the matched condition.
Security and compliance administrators: The organization's security and
compliance administrators will typically use Microsoft Purview reports, alerts, and
audit logs.
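As a minimal sketch of how a Power BI administrator might retrieve these activity log entries, the following Python script calls the Get Activity Events admin REST API for one day and keeps only DLP-related entries. The token acquisition step and the activity-name filter are assumptions (verify the exact operation name that your tenant records for DLP rule matches before relying on it).

```python
# Sketch: pull one day of Power BI activity log events and keep DLP-related entries.
# Assumptions: you already have an admin access token, and DLP rule matches appear
# with "DLP" in the Activity field; confirm the exact activity name in your tenant.
import datetime
import requests

TOKEN = "<access token for a Power BI admin or service principal>"  # assumption
BASE = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"

day = datetime.date.today() - datetime.timedelta(days=1)
params = {
    "startDateTime": f"'{day}T00:00:00'",
    "endDateTime": f"'{day}T23:59:59'",
}
headers = {"Authorization": f"Bearer {TOKEN}"}

events = []
url = BASE
while url:
    response = requests.get(url, headers=headers, params=params)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    url = payload.get("continuationUri")  # follow pagination until exhausted
    params = None  # the continuation URI already carries the query string

dlp_events = [e for e in events if "DLP" in str(e.get("Activity", ""))]
for e in dlp_events:
    print(e.get("CreationTime"), e.get("UserId"), e.get("Activity"))
```

A scheduled job like this can feed a simple report that your security and compliance administrators review alongside their Microsoft Purview alerts.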

2 Warning

Monitoring for DLP for Power BI policies doesn't occur in real time because it takes
time for DLP logs and alerts to be generated. If your goal is real-time enforcement,
see Defender for Cloud Apps for Power BI (Real-time policies).

Checklist - When monitoring DLP for Power BI, key decisions and actions include:

" Verify roles and responsibilities: Ensure that you're clear on who is responsible for
which actions. Educate and communicate with your Power BI administrators or
security administrators, if they'll be directly responsible for some aspects of DLP
monitoring.
" Create or validate your process for reviewing activity: Make sure that the security
and compliance administrators are clear on the expectations for regularly reviewing
the activity explorer.
" Create or validate your process for resolving alerts: Ensure that your security and
compliance administrators have a process in place to investigate and resolve DLP
alerts when a policy match occurs.
 Tip

For more information about auditing, see Auditing of information protection and
data loss prevention for Power BI.

Next steps
In the next article in this series, learn about using Defender for Cloud Apps with Power
BI.
Power BI implementation planning:
Defender for Cloud Apps for Power BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article describes the planning activities related to implementing Defender for Cloud
Apps as it relates to monitoring Power BI. It's targeted at:

Power BI administrators: The administrators who are responsible for overseeing Power BI in the organization. Power BI administrators need to collaborate with information security and other relevant teams.
Center of Excellence, IT, and BI teams: Others who are responsible for overseeing
Power BI in the organization. They may need to collaborate with Power BI
administrators, information security teams, and other relevant teams.

) Important

Monitoring and data loss prevention (DLP) is a significant organization-wide undertaking. Its scope and impact are far greater than Power BI alone. These types of initiatives require funding, prioritization, and planning. Expect to involve several cross-functional teams in planning, usage, and oversight efforts.

We recommend that you follow a gradual, phased approach to rolling out Defender for
Cloud Apps for monitoring Power BI. For a description of the types of rollout phases
that you should consider, see Information protection for Power BI (Rollout phases).

Purpose of monitoring
Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App Security) is
a Cloud Access Security Broker (CASB) that supports various deployment modes. It has a
broad set of capabilities that extend well beyond the scope of this article. Some
capabilities are real-time while others aren't real-time.
Here are some examples of real-time monitoring you can implement.

Block downloads from the Power BI service: You can create a session policy to
block certain types of user activities. For example, when a user tries to download a
report from the Power BI service that's been assigned a Highly Restricted sensitivity
label, the download action can be blocked in real-time.
Block access to the Power BI service by an unmanaged device: You can create an
access policy to prevent users from accessing certain applications unless they're
using a managed device. For example, when a user tries to access the Power BI
service from their personal mobile phone, that action can be blocked.

Here are some examples of other capabilities that aren't real time.

Detect and alert on certain activities in the Power BI service: You can create an
activity policy to generate an alert when certain types of activities occur. For
example, when an administrative activity occurs in the Power BI service (indicating
that a tenant setting was changed), you can receive an email alert.
Monitor advanced security activities: You can view and monitor sign-ins and
security activities, anomalies, and violations. Alerts can be raised for situations such
as suspicious activity, unexpected locations, or a new location.
Monitor user activities: You can view and monitor user activities. For example, a
Power BI administrator could be assigned permission to view the Power BI activity
log, in addition to user sign-in frequency within Defender for Cloud Apps.
Detect and alert on unusual behavior in the Power BI service: There are built-in
policies for anomaly detection. For example, when a user downloads or exports
content from the Power BI service significantly more often than normal patterns,
you can receive an email alert.
Find unsanctioned applications: You can find unsanctioned applications in use
within the organization. For example, you may become concerned about users
sharing files (like Power BI Desktop files or Excel files) on a third-party file sharing
system. You can block use of an unsanctioned application, and then contact users
to educate them on appropriate ways to share and collaborate with others.

 Tip

The portal in Defender for Cloud Apps is a convenient place to view activities and
alerts without creating a script to extract and download the data. This advantage
includes viewing data from the Power BI activity log.

Power BI is one of many applications and services that can be integrated with Defender
for Cloud Apps. If you're already using Defender for Cloud Apps for other purposes, it
can be used to monitor Power BI too.

Policies created in Defender for Cloud Apps are a form of DLP. The Data loss prevention
for Power BI article covers DLP policies for Power BI that are set up in the Microsoft
Purview compliance portal. We recommend that you use DLP policies for Power BI with
the capabilities described in this article. Although there's some overlap conceptually, the
capabilities are different.

U Caution

This article focuses on capabilities in Microsoft Defender for Cloud Apps that can
be used to monitor and protect Power BI content. There are many other capabilities
in Defender for Cloud Apps that aren't covered in this article. Be sure to work with
other stakeholders and system administrators to make decisions that work well for
all applications and use cases.

Prerequisites for Defender for Cloud Apps for Power BI
By now, you should have completed the organization-level planning steps that were
described in the Data loss prevention for Power BI article. Before proceeding, you should
have clarity on:

Current state: The current state of DLP in your organization. You should have an
understanding to what extent DLP is already in use, and who's responsible for
managing it.
Goals and requirements: The strategic goals for implementing DLP in your
organization. Understanding the goals and requirements will serve as a guide for
your implementation efforts.

Usually, information protection is already implemented before DLP is implemented. If sensitivity labels are published (described in the Information protection for Power BI article), they can be used in certain policies within Defender for Cloud Apps.

You might have already implemented DLP for Power BI (described in the Data loss prevention for Power BI article). Those DLP capabilities are managed in the Microsoft Purview compliance portal and are different from the capabilities covered here. All DLP capabilities described in this article are managed in the Defender for Cloud Apps portal.

Key decisions and actions


You'll need to make some key decisions before you're ready to set up policies in
Defender for Cloud Apps.

The decisions related to Defender for Cloud Apps policies should directly support the
goals and requirements for protecting the data that you've previously identified.

Policy type and activities


You'll need to consider which user activities you're interested in monitoring, blocking, or
controlling. The policy type in Defender for Cloud Apps influences:

What you're able to accomplish.
Which activities can be included in the configuration.
Whether the controls will occur in real time or not.

Real-time policies

Access policies and session policies created in Defender for Cloud Apps allow you to
monitor, block, or control user sessions in real time.

Access policies and session policies allow you to:

Programmatically respond in real time: Detect, inform, and block risky, inadvertent, or inappropriate sharing of sensitive data. These actions allow you to:
Improve the overall security setup of your Power BI tenant, with automation and information.
Enable analytical use cases that involve sensitive data in a way that can be audited.
Provide users with contextual notifications: This capability allows you to:
Help users make the right decisions during their normal workflow.
Guide users to follow your data classification and protection policy without
affecting their productivity.

To provide real-time controls, access policies and session policies work with Azure Active
Directory (Azure AD), relying on the reverse proxy capabilities of Conditional Access App
Control. Instead of user requests and responses going through the app (the Power BI
service in this case), they go through a reverse proxy (Defender for Cloud Apps).

Redirection doesn't affect the user experience. However, the URL for the Power BI
service will change to https://app.powerbi.com.mcas.ms once you've set up Azure AD
for conditional access app control with Power BI. Also, users will receive a notification
when they sign in to the Power BI service that announces that the app is monitored by
Defender for Cloud Apps.
) Important

Access policies and session policies operate in real time. Other policy types in
Defender for Cloud Apps involve a short delay in alerting. Most other types of DLP
and auditing also experience latency, including DLP for Power BI and the Power BI
activity log.

Access policies

An access policy created in Defender for Cloud Apps controls whether a user is allowed
to sign in to a cloud application like the Power BI service. Organizations that are in
highly regulated industries will be concerned with access policies.

Here are some examples of how you might use access policies to block access to the
Power BI service.

Unexpected user: You can block access for a user who isn't a member of a specific
security group. For example, this policy could be helpful when you have an
important internal process that tracks approved Power BI users via a specific group.
Non-managed device: You can block access for a personal device that isn't
managed by the organization.
Updates needed: You can block access for a user who's using an outdated browser
or operating system.
Location: You can block access for a location where you don't have offices or users,
or from an unknown IP address.

 Tip

If you have external users that access your Power BI tenant or employees who travel
frequently, that may affect how you define your access control policies. These types
of policies are usually managed by IT.

Session policies

A session policy is useful when you don't want to allow or block access completely
(which can be done with an access policy as previously described). Specifically, it allows
access for the user while monitoring or limiting what actively occurs during their session.

Here are some examples of ways that you can use session policies to monitor, block, or
control user sessions in the Power BI service.
Block downloads: Block downloads and exports when a specific sensitivity label,
like Highly Restricted, is assigned to the item in the Power BI service.
Monitor sign-ins: Monitor when a user, who meets certain conditions, signs in. For
example, the user could be a member of a specific security group or they're using
a personal device that isn't managed by the organization.

 Tip

Creating a session policy (for example, to prevent downloads) for content that's
assigned to a particular sensitivity label, like Highly Restricted, is one of the most
effective use cases for real-time session controls with Power BI.

It's also possible to control file uploads with session policies. However, typically you
want to encourage self-service BI users to upload content to the Power BI service
(instead of sharing Power BI Desktop files). Therefore, think carefully about blocking file
uploads.

Checklist - When planning your real-time policies in Defender for Cloud Apps, key
decisions and actions include:

" Identify use cases to block access: Compile a list of scenarios for when blocking
access to the Power BI service is appropriate.
" Identify use cases to monitor sign-ins: Compile a list of scenarios for when
monitoring sign-ins to the Power BI service is appropriate.
" Identify use cases to block downloads: Determine when downloads from the
Power BI service should be blocked. Determine which sensitivity labels should be
included.

Activity policies
Activity policies in Defender for Cloud Apps don't operate in real time.

You can set up an activity policy to check events recorded in the Power BI activity log.
The policy can act on a single activity, or it can act on repeated activities by a single user
(when a specific activity occurs more than a set number of times within a set number of
minutes).
You can use activity policies to monitor activity in the Power BI service in different ways.
Here are some examples of what you can achieve.

Unauthorized or unexpected user views privileged content: A user who isn't a member of a specific security group (or an external user) has viewed a highly privileged report that's provided to the board of directors.
Unauthorized or unexpected user updates tenant settings: A user who isn't a
member of a specific security group, like the Power BI Administrators group, has
updated the tenant settings in the Power BI service. You can also choose to be
notified anytime a tenant setting is updated.
Large number of deletes: A user has deleted more than 20 workspaces or reports
in a time period that's less than 10 minutes.
Large number of downloads: A user has downloaded more than 30 reports in a
time period that's less than five minutes.

The types of activity policy alerts described in this section are commonly handled by
Power BI administrators as part of their oversight of Power BI. When setting up alerts
within Defender for Cloud Apps, we recommend that you focus on situations that
represent significant risk to the organization. That's because each alert will need to be
reviewed and closed by an administrator.

2 Warning

Because Power BI activity log events aren't available in real-time, they can't be used
for real-time monitoring or blocking. You can, however, use operations from the
activity log in activity policies. Be sure to work with your information security team
to verify what's technically feasible before you get too far into the planning
process.

Checklist - When planning your activity policies, key decisions and actions include:

" Identify use cases for activity monitoring: Compile a list of specific activities from
the Power BI activity log that represent significant risk to the organization.
Determine whether the risk relates to a single activity or repeated activities.
" Coordinate effort with Power BI administrators: Discuss the Power BI activities that
will be monitored in Defender for Cloud Apps. Ensure that there's not a duplication
of effort between different administrators.
Users impacted
One of the compelling reasons to integrate Power BI with Defender for Cloud Apps is to
benefit from real-time controls when users interact with the Power BI service. This type
of integration requires conditional access app control in Azure AD.

Before setting up conditional access app control in Azure AD, you'll need to consider
which users will be included. Usually, all users are included. However, there may be
reasons to exclude specific users.

 Tip

When setting up the conditional access policy, it's likely that your Azure AD
administrator will exclude specific administrator accounts. That approach will
prevent locking out administrators. We recommend that the excluded accounts are
Azure AD administrators rather than standard Power BI users.

Certain types of policies in Defender for Cloud Apps can apply to certain users and
groups. Most often, these types of policies are applicable to all users. However, it's
possible that you'll encounter a situation when you'll need to purposefully exclude
certain users.

Checklist - When considering which users are affected, key decisions and actions
include:

" Consider which users are included: Confirm whether all users will be included in
your Azure AD conditional access app control policy.
" Identify which administrator accounts should be excluded: Determine which
specific administrator accounts should be purposefully excluded from the Azure AD
conditional access app control policy.
" Determine whether certain Defender policies apply to subsets of users: For valid
use cases, consider whether they should be applicable to all or some users (when
possible).

User messaging
Having identified use cases, you'll need to consider what should happen when there's
user activity that matches the policy.
When an activity is blocked in real time, it's important to provide the user with a
customized message. The message is useful when you want to provide more guidance
and awareness to your users during their normal workflow. It's more likely that users will
read and absorb user notifications when they're:

Specific: Correlating the message to the policy makes it simple to understand.
Actionable: Offering a suggestion for what they need to do, or how to find more information.

Some types of policies in Defender for Cloud Apps can have a customized message.
Here are two examples of user notifications.

Example 1: You can define a real-time session control policy that prevents all exports
and downloads when the sensitivity label for the Power BI item (like a report or dataset)
is set to Highly Restricted. The customized block message in Defender for Cloud Apps
reads: Files with a Highly Restricted label are not permitted to be downloaded from the
Power BI service. Please view the content online in the Power BI service. Contact the Power
BI support team with any questions.

Example 2: You can define a real-time access policy that prevents a user from signing in
to the Power BI service when they're not using a machine managed by the organization.
The customized block message in Defender for Cloud Apps reads: The Power BI service
may not be accessed on a personal device. Please use the device provided by the
organization. Contact the Power BI support team with any questions.

Checklist - When considering user messages in Defender for Cloud Apps, key decisions
and actions include:

" Decide when a customized block message is needed: For each policy you intend
to create, determine whether a customized block message will be required.
" Create customized block messages: For each policy, define what message should
be displayed to users. Plan to relate each message to the policy so that it's specific
and actionable.

Administrator alerting
Alerting is useful when you want to make your security and compliance administrators
aware that a policy violation has occurred. When you define policies in Defender for
Cloud Apps, consider whether alerts should be generated. For more information, see
alert types in Defender for Cloud Apps.

Optionally, you can set up an alert to send an email to multiple administrators. When an
email alert is required, we recommend that you use a mail-enabled security group. For
example, you might use a group named Security and Compliance Admin Alerting.

For high priority situations, it's possible to send alerts by text message. It's also possible
to create custom alert automation and workflows by integrating with Power Automate.

You can set up each alert with a low, medium, or high severity. The severity level is
helpful when prioritizing the review of open alerts. An administrator will need to review
and action each alert. An alert can be closed as true positive, false positive, or benign.

Here are two examples of administrator alerts.

Example 1: You can define a real-time session control policy that prevents all exports
and downloads when the sensitivity label for the Power BI item (like a report or dataset)
is set to Highly Restricted. It has a helpful customized block message for the user.
However, in this situation there isn't a need to generate an alert.

Example 2: You can define an activity policy that tracks whether an external user has
viewed a highly privileged report that's provided to the board of directors. A high
severity alert can be set up to ensure that the activity is promptly investigated.

 Tip

Example 2 highlights the differences between information protection and security.


Its activity policy can help identify scenarios where self-service BI users have
permission to manage security for content. Yet these users may take actions that
are discouraged by the organizational policy. We recommend that you set up these
types of policies only in specific circumstances when the information is especially
sensitive.

Checklist - When considering alerting for administrators in Defender for Cloud Apps,
key decisions and actions include:

" Decide when alerts are required: For each policy you intend to create, decide
which situations warrant using alerts.
" Clarify roles and responsibilities: Determine expectations and the action that
should be taken when an alert is generated.
" Determine who will receive alerts: Decide which security and compliance
administrators will review and action open alerts. Confirm permissions and licensing
requirements are met for each administrator who will use Defender for Cloud Apps.
" Create a new group: When necessary, create a new mail-enabled security group to
use for email notifications.

Policy naming convention


Before you create policies in Defender for Cloud Apps, it's a good idea to first create a
naming convention. A naming convention is helpful when there are many types of
policies for many types of applications. It's also useful when Power BI administrators
become involved in monitoring.

 Tip

Consider granting Defender for Cloud Apps access to your Power BI administrators.
Use the admin role, which allows viewing the activity log, sign-in events, and events
related to the Power BI service.

Consider a naming convention template that includes component placeholders:


<Application> - <Description> - <Action> - <Type of Policy>

Here are some naming convention examples.

Type of policy Real-time Policy name

Session policy Yes Power BI - Highly restricted label - Block downloads - RT

Access policy Yes All - Unmanaged device - Block access - RT

Activity policy No Power BI - Administrative activity

Activity policy No Power BI - External user views executive report

The components of the naming convention include:

Application: The application name. The Power BI prefix helps to group all the
Power BI-specific policies together when sorted. However, some policies will apply
to all cloud apps rather than just the Power BI service.
Description: The description portion of the name will vary the most. It might
include sensitivity labels affected or the type of activity being tracked.
Action: (Optional) In the examples, one session policy has an action of Block
downloads. Usually, an action is only necessary when it's a real-time policy.
Type of policy: (Optional) In the example, the RT suffix indicates that it's a real-time policy. Designating whether it's real-time or not helps to manage expectations.

There are other attributes that don't need to be included in the policy name. These
attributes include the severity level (low, medium, or high), and the category (such as
threat detection or DLP). Both attributes can be filtered on the alerts page.

 Tip

You can rename a policy in Defender for Cloud Apps. However, it's not possible to
rename the built-in anomaly detection policies. For example, the Suspicious Power
BI report sharing is a built-in policy that can't be renamed.

Checklist - When considering the policy naming convention, key decisions and actions
include:

" Choose a naming convention: Use your first policies to establish a consistent


naming convention that's straightfoward to interpret. Focus on using a consistent
prefix and suffix.
" Document the naming convention: Provide reference documentation about the
policy naming convention. Make sure your system administrators are aware of the
naming convention.
" Update existing policies: Update any existing Defender policies to comply with the
new naming convention.

Licensing requirements
Specific licenses must be in place to monitor a Power BI tenant. Administrators must
have one of the following licenses.

Microsoft Defender for Cloud Apps: Provides Defender for Cloud Apps
capabilities for all supported applications (including the Power BI service).
Office 365 Cloud App Security: Provides Defender for Cloud Apps capabilities for
Office 365 apps that are part of the Office 365 E5 suite (including the Power BI
service).

Also, if users need to use real-time access policies or session policies in Defender for
Cloud Apps, they will need an Azure AD Premium P1 license.

 Tip

If you need clarifications about licensing requirements, talk to your Microsoft account team.

Checklist - When evaluating licensing requirements, key decisions and actions include:

" Review product licensing requirements: Ensure that you've reviewed all the
licensing requirements for working with Defender for Cloud Apps.
" Procure additional licenses: If applicable, purchase more licenses to unlock the
functionality that you intend to use.
" Assign licenses: Assign a license to each of your security and compliance
administrators who will use Defender for Cloud Apps.

User documentation and training


Before rolling out Defender for Cloud Apps, we recommend that you create and publish
user documentation. A SharePoint page or a wiki page in your centralized portal can
work well because it will be easy to maintain. A document uploaded to a shared library
or Teams site is a good solution, too.

The goal of the documentation is to achieve a seamless user experience. Preparing user
documentation will also help you make sure you've considered everything.

Include information about who to contact when users have questions or technical issues.

FAQs and examples are especially helpful for user documentation.

Checklist - When preparing user documentation and training, key decisions and actions
include:
" Update documentation for content creators and consumers: Update your FAQs
and examples to include relevant information about policies that users might
encounter.
" Publish how to get help: Ensure that your users know how to get help when they're
experiencing something unexpected or that they don't understand.
" Determine whether specific training is needed: Create or update your user training
to include helpful information, especially if there's a regulatory requirement to do
so.

User support
It's important to verify who will be responsible for user support. When Defender for Cloud Apps is used to monitor Power BI, support is commonly handled by a centralized IT help desk.

You may need to create documentation for the help desk and conduct some knowledge
transfer sessions to ensure the help desk is ready to respond to support requests.

Checklist - When preparing for the user support function, key decisions and actions
include:

" Identify who will provide user support: When you're defining roles and
responsibilities, make sure to account for how users will get help with issues that
they may encounter.
" Ensure the user support team is ready: Create documentation and conduct
knowledge transfer sessions to ensure that the help desk is ready to support these
processes.
" Communicate between teams: Discuss messages users might see and the process
to resolve open alerts with your Power BI administrators and Center of Excellence.
Make sure that everyone involved is prepared for potential questions from Power BI
users.

Implementation summary
After the decisions have been made, and a rollout plan has been prepared, it's time to
start the implementation.
If you intend to use real-time policies (session policies or access policies), your first task
is to set up Azure AD conditional access app control. You'll need to set up the Power BI
service as a catalog app that will be controlled by Defender for Cloud Apps.

When Azure AD conditional access app control is set up and tested, you can then create
policies in Defender for Cloud Apps.

) Important

We recommend that you introduce this functionality to a small number of test users first. There's also a monitor-only mode that you may find helpful for introducing this functionality in an orderly way.

The following checklist includes a summarized list of the end-to-end implementation steps. Many of the steps have other details that were covered in previous sections of this article.

Checklist - When implementing Defender for Cloud Apps with Power BI, key decisions
and actions include:

" Verify current state and goals: Ensure that you have clarity on the current state of
DLP for use with Power BI. All goals and requirements for implementing DLP should
be clear and actively used to drive the decision-making process.
" Carry out the decision-making process: Review and discuss all the decisions that
are required. This task should occur prior to setting up anything in production.
" Review licensing requirements: Ensure that you understand the product licensing
and user licensing requirements. If necessary, procure and assign more licenses.
" Publish user documentation: Publish information that users will need to answer
questions and clarify expectations. Provide guidance, communications, and training
to your users so they're prepared.
" Create an Azure AD conditional access policy: Create a conditional access policy in
Azure AD to enable real-time controls for monitoring the Power BI service. At first,
enable the Azure AD conditional access policy for a few test users.
" Set Power BI as a connected app in Defender for Cloud Apps: Add or verify that
Power BI appears as a connected app in Defender for Cloud Apps for conditional
access app control.
" Perform initial testing: Sign in to the Power BI service as one of the test users.
Verify that access works. Also verify that the message displayed informs you that
the Power BI service is monitored by Defender for Cloud Apps.
" Create and test a real-time policy: Using the use cases already compiled, create an
access policy or a session policy in Defender for Cloud Apps.
" Perform initial testing: As a test user, perform an action that will trigger the real-
time policy. Verify the action is blocked (if appropriate) and that the expected alert
messages are displayed.
" Gather user feedback: Obtain feedback on the process and user experience.
Identify areas of confusion, unexpected results with sensitive information types, and
other technical issues.
" Continue iterative releases: Gradually add more policies in Defender for Cloud
Apps until all use cases are addressed.
" Review the built-in policies: Locate the built-in anomaly detection policies in
Defender for Cloud Apps (that have Power BI in their name). Update the alert
settings for the built-in policies, when necessary.
" Proceed with a broader rollout: Continue to work through your iterative rollout
plan. Update the Azure AD conditional access policy to apply to a broader set of
users, as appropriate. Update individual policies in Defender for Cloud Apps to
apply to a broader set of users, as appropriate.
" Monitor, tune, and adjust: Invest resources to review policy match alerts and audit
logs on a frequent basis. Investigate any false positives and adjust policies when
necessary.

 Tip

These checklist items are summarized for planning purposes. For more details
about these checklist items, see the previous sections of this article.

For more specific information about deploying Power BI as a catalog application in Defender for Cloud Apps, see the steps to deploy catalog apps.

Ongoing monitoring
After you've completed the implementation, you should direct your attention to
monitoring, enforcing, and adjusting Defender for Cloud Apps policies based on their
usage.

Power BI administrators and security and compliance administrators will need to collaborate from time to time. For Power BI content, there are two audiences for monitoring.
Power BI administrators: In addition to alerts generated by Defender for Cloud
Apps, activities from the Power BI activity log are also displayed in the Defender for
Cloud Apps portal.
Security and compliance administrators: The organization's security and
compliance administrators will typically use Defender for Cloud Apps alerts.

It's possible to provide your Power BI administrators with a limited view in Defender for
Cloud Apps. It uses a scoped role to view the activity log, sign-in events, and events
related to the Power BI service. This capability is a convenience for Power BI
administrators.

Checklist - When monitoring Defender for Cloud Apps, key decisions and actions
include:

" Verify roles and responsibilities: Ensure that you're clear on who is responsible for
which actions. Educate and communicate with your Power BI administrators if they'll
be responsible for any aspect of monitoring.
" Manage access for Power BI administrators: Add your Power BI administrators to
the scoped admin role in Defender for Cloud Apps. Communicate with them so
they're aware of what they can do with this extra information.
" Create or validate your process for reviewing activity: Make sure your security and
compliance administrators are clear on the expectations for regularly reviewing the
activity explorer.
" Create or validate your process for resolving alerts: Ensure that your security and
compliance administrators have a process in place to investigate and resolve open
alerts.

Next steps
In the next article in this series, learn about auditing for information protection and data
loss prevention for Power BI.
Power BI implementation planning:
Auditing of information protection and
data loss prevention for Power BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article describes the type of auditing you can do after implementing information
protection and data loss prevention (DLP). It's targeted at:

Power BI administrators: The administrators who are responsible for overseeing Power BI in the organization. Power BI administrators need to collaborate with information security and other relevant teams.
Center of Excellence, IT, and BI teams: Others who are responsible for overseeing
Power BI in the organization. They may need to collaborate with Power BI
administrators, information security teams, and other relevant teams.

It's important to understand how information protection and data loss prevention is
used in your organization. You can achieve that by performing auditing, which can:

Track usage patterns, activities, and adoption
Support governance and security requirements
Find non-compliance issues with specific requirements
Document the current setup
Identify user education and training opportunities

Checklist - When considering auditing for information protection and DLP, key decisions
and actions include:

" Decide what's most important to audit: Consider what's most important from an
auditing perspective. Prioritize areas of risk, major inefficiencies, or non-compliance
with regulatory requirements. When a situation arises that could be improved,
educate users on appropriate ways to do things.
" Implement relevant auditing processes: Put processes in place to extract, integrate,
model, and create reports so that auditing can be done.
" Take appropriate action: Using the information obtained from the auditing
processes, make sure that someone has the authority and time to take appropriate
action. Depending on the situation, it may involve adjusting which sensitivity labels
are assigned to content. Other situations might involve user education or training.

The remainder of this article describes useful auditing processes and suggestions.

Power BI activity log


To help with information protection, you can use the Power BI activity log to track
activities related to sensitivity labels.

When you've implemented DLP for Power BI, the activity log tracks when there's a DLP
rule match.

What to look for: You can determine when specific activities occur, such as:
Sensitivity labels were applied, changed, deleted, and by which users
Whether labels were applied manually
Whether labels were applied automatically (for example, by inheritance or a
deployment pipeline)
Whether a changed label was upgraded (to a more sensitive label) or
downgraded (to a less sensitive label)
How frequently DLP events are triggered, where, and by which users
Actions to take: Ensure that data from the activity log data is extracted regularly
by an administrator who has permission to extract tenant-level metadata.
Determine how to classify activities to support your auditing needs. Some activities
might justify review by an administrator or content owner (for example, when a
label is deleted). Other activities might justify being included in regular audit
reviews (for example, when labels are downgraded, or when DLP rule matches
occur).
Where to find this data: Power BI administrators can use the Power BI activity log
to view activities related to Power BI content. Alternatively, in Defender for Cloud
Apps, you can grant your Power BI administrators a limited view so they can see
activity log events, sign-in events, and other events related to the Power BI service.
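To illustrate the "Actions to take" point above, the following sketch classifies previously extracted activity log events by sensitivity label action. The input file name and the "SensitivityLabel*" activity-name pattern are assumptions; verify the exact activity names recorded in your tenant before relying on them.

```python
# Sketch: count label-related activities from a saved activity log extract.
# Assumptions: events were exported to JSON by an earlier extract job, and
# label activities start with "SensitivityLabel"; confirm names in your tenant.
import json
from collections import Counter

with open("activity_events_2023-06-01.json", encoding="utf-8") as f:  # hypothetical file
    events = json.load(f)

label_actions = Counter(
    e["Activity"] for e in events if str(e.get("Activity", "")).startswith("SensitivityLabel")
)

for activity, count in label_actions.most_common():
    print(f"{activity}: {count}")
```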

Power BI protection metrics


The data protection metrics report is a dedicated report in the Power BI admin portal. It
summarizes how sensitivity labels are assigned to content in your Power BI tenant.

What to look for: You can gain a quick sense for how frequently sensitivity labels
are applied to each type of item (for example, dataset or report) in the Power BI
service.
Actions to take: Review this report to become familiar with how much content
doesn't have a label applied.
Where to find this data: Power BI administrators can find the data protection
metrics report in the Power BI admin portal.

 Tip

The data protection metrics report is a summary report. You can also use the
scanner APIs, which are described in the next section, to perform deeper analysis.

Power BI scanner APIs


The Power BI scanner APIs allow you to scan the metadata in your Power BI tenant. The
metadata of Power BI items, like datasets and reports, can help you to monitor and
review self-service user activity.

For example, you might discover that content in a financial workspace has been
assigned to three different sensitivity labels. If any of these labels aren't appropriate for
financial data, you can apply more suitable labels.

What to look for: You can create an inventory of Power BI items in your tenant,
including the sensitivity label of each item.
Actions to take: Create a process to scan your tenant on a weekly or monthly
basis. Use the metadata retrieved by the scanner APIs to understand how Power BI
content has been labeled. Investigate further if you find that some labels don't
meet expectations for the workspace. Correlate metadata from the scanner APIs
with events from the Power BI activity log to determine when a sensitivity label was
applied, changed, deleted, and by which user.
Where to find this data: Power BI administrators can use the Power BI scanner APIs
to retrieve a snapshot of the sensitivity labels applied to all Power BI content. If you
prefer to build your own inventory reports, you can use the APIs directly by writing
scripts. Alternatively, you can use the APIs indirectly by registering Power BI in the
Microsoft Purview Data Map (which uses the Power BI scanner APIs to scan the
Power BI tenant).
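As a concrete illustration of the scan process described above, the following Python sketch walks through the documented scanner API workflow: request a scan for a set of workspaces, poll until it completes, then print the sensitivity label of each item. The token, the workspace IDs, and the response field names used for labels are assumptions to verify against the current API reference.

```python
# Sketch: metadata scanning (scanner) API workflow for label auditing.
# Assumptions: an admin token with the required scope, hypothetical workspace IDs,
# and the sensitivityLabel/labelId fields in the scan result.
import time
import requests

TOKEN = "<access token for a Power BI admin or service principal>"  # assumption
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
ADMIN = "https://api.powerbi.com/v1.0/myorg/admin"

workspace_ids = ["<workspace-id-1>", "<workspace-id-2>"]  # hypothetical IDs

# 1. Start the scan (lineage and datasource details are optional flags).
scan = requests.post(
    f"{ADMIN}/workspaces/getInfo?lineage=True&datasourceDetails=True",
    headers=HEADERS,
    json={"workspaces": workspace_ids},
)
scan.raise_for_status()
scan_id = scan.json()["id"]

# 2. Poll the scan status until it succeeds (or fails).
while True:
    status = requests.get(f"{ADMIN}/workspaces/scanStatus/{scan_id}", headers=HEADERS)
    status.raise_for_status()
    state = status.json().get("status")
    if state == "Succeeded":
        break
    if state == "Failed":
        raise RuntimeError("Scan failed")
    time.sleep(10)

# 3. Read the scan result and print each item's sensitivity label (if any).
result = requests.get(f"{ADMIN}/workspaces/scanResult/{scan_id}", headers=HEADERS).json()
for workspace in result.get("workspaces", []):
    for item_type in ("reports", "datasets", "dataflows", "dashboards"):
        for item in workspace.get(item_type, []):
            label = item.get("sensitivityLabel", {}).get("labelId", "none")
            print(workspace.get("name"), item_type, item.get("name"), label)
```

Running a job like this weekly or monthly, and storing the output, gives you the labeled inventory that the investigation steps above depend on.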
Microsoft Purview activity explorer
Activity explorer in the Microsoft Purview compliance portal aggregates useful auditing
data. This data can help you to understand the activities across applications and
services.

 Tip

Activity explorer exposes only certain types of Power BI events. Plan to use both the
Power BI activity log and the activity explorer to view events.

What to look for: You can use activity explorer to view sensitivity label activity
from various applications, including Teams, SharePoint Online, OneDrive, Exchange
Online, and Power BI. It's also possible to see when a file was read, where, and by
which user. Certain types of DLP policy events are also shown in the activity
explorer. When a justification is provided to explain a change of sensitivity label,
you can view the reason in activity explorer.
Actions to take: Regularly review activity explorer events to identify whether there
are areas of concern or events that warrant further investigation. Some events
might justify review by an administrator or content owner (for example, when a
label is removed). Other events might justify being included in regular audit
reviews (for example, when labels are downgraded).
Where to find this data: Microsoft 365 administrators can use activity explorer in
the Microsoft Purview compliance portal to view all sensitivity label activities.

Microsoft Purview content explorer


Content explorer in the Microsoft Purview compliance portal provides a snapshot of
where sensitive information is located across a broad spectrum of applications and
services.

 Tip

It's not possible to see Power BI Desktop (.pbix) files in content explorer. However,
you can use content explorer to see certain types of supported files that were
exported from the Power BI service, such as Excel files.

What to look for: You can use content explorer to determine what sensitive data is
found in various locations such as Teams, SharePoint Online, OneDrive, and
Exchange Online.
Actions to take: Review content explorer when you need to gain an understanding
of what content exists and where it resides. Use this information to assess the
decisions you've made, and whether other actions should be taken.
Where to find this data: Microsoft 365 administrators can use content explorer in
the Microsoft Purview compliance portal to locate where sensitive data currently
resides.

Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see the Power BI implementation
planning subject areas.
Power BI implementation planning:
Auditing and monitoring
Article • 08/31/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This article introduces the Power BI auditing and monitoring articles. These articles are
targeted at multiple audiences:

Power BI administrators: The administrators who are responsible for overseeing Power BI in the organization. Power BI administrators may need to collaborate with information security and other relevant teams.
Center of Excellence, IT, and BI teams: The teams that are also responsible for overseeing
overseeing Power BI. They may need to collaborate with Power BI administrators,
information security teams, and other relevant teams.
Content creators and workspace administrators: Users who need to understand
usage and adoption for the content that they've created, published, and shared
with others in the organization.

The terms auditing and monitoring are closely related.

Auditing: Actions taken to understand a system, its user activities, and related
processes. Auditing activities can be manual, automated, or a combination. An
auditing process might focus on one specific aspect (for example, auditing security
for a workspace). Or it might reference an end-to-end auditing solution, which
involves extracting, storing, and transforming data so that you can analyze and act
upon it.
Monitoring: Ongoing activities that inform you about what's occurring. Monitoring
usually involves alerting and automation, though sometimes monitoring is done
manually. Monitoring can be set up for a process you've selected to audit (for
example, notifications when a specific tenant setting changes).

The primary purpose of this series of auditing and monitoring articles is to help you
understand how Power BI is used to oversee and govern your Power BI implementation.
Troubleshooting and performance tuning are important components of auditing and
monitoring your data assets. However, providing deep performance tuning guidance
isn't a goal of these articles. Also, these articles aren't intended to provide a complete
reference of all options available to developers.

) Important

We recommend that you closely follow the Power BI release plan to learn about
future enhancements of the auditing and monitoring capabilities.

Value of auditing and monitoring


The data that's produced from auditing is incredibly valuable for many reasons. Most
people think of auditing as an oversight and control function. While that's true, you can also use auditing data to improve the user experience.

This article describes some valuable ways you can use auditing data.

Analyze adoption efforts


As described in the Power BI adoption roadmap, adoption isn't just about using the
technology regularly; it's also about using it effectively. Adopting a technology like
Power BI can be considered from three inter-related perspectives:

Organizational adoption: The effectiveness of Power BI governance. It also refers to data management practices that support and enable business intelligence (BI) efforts.
User adoption: The extent to which Power BI consumers and creators continually
increase their knowledge. It's concerned with whether they're actively using Power
BI, and whether they're using it in an effective way.
Solution adoption: The impact and business value achieved for individual
requirements and deployed Power BI items (like datasets and reports).

All types of auditing data can be used in many ways to assess and contribute towards
actions that improve each aspect of adoption.

Understand usage patterns


Analyzing usage patterns is primarily about understanding the user activities that occur
in your Power BI tenant.
It's helpful to have data that shows whether actual user behavior meets expectations.
For example, a manager might be under the impression that a set of reports is critical,
whereas the auditing data shows that the reports aren't regularly accessed.

) Important

If you're not already extracting and storing user activities data, make that an urgent
priority. Even if you're not ready to build an end-to-end auditing solution, ensure
that you're extracting and storing all the activity log data. For more information
about the decisions and activities involved, and the different ways to obtain the
data, see Access user activity data.

The following sections describe some of the most common usage patterns you should
understand.

Content usage
It's valuable to understand the extent to which content is used. The types of questions
you might ask include:

What content is viewed most frequently?
What content is viewed by the greatest number of users?
What content is considered most critical (and is therefore vital to the organization)
based on its usage patterns?
What content is frequently used by executives and senior leadership?
What content requires the most stability and support (due to a high level of usage
or usage by a critical user audience)?
What content should be endorsed (certified or promoted) based on its usage
patterns?
What percentage of content is endorsed (certified or promoted)?
Are there high numbers of report views for non-certified reports?
What content has consistent usage versus sporadic usage?
What content is updated most frequently, when, and by which users?
What content isn't used, with the potential to be retired? (For more information
about creating an inventory of data assets, see Access tenant inventory data.)
What type of devices are used to view reports?
Are there unexpected, or irregular, usage patterns that raise concerns?

User activities
It's useful to understand which users are most active. The types of questions you might
ask include:

Which content consumers are most active?
Which content creators are most active?
Which content creators publish the most content?
Which content creators publish the content that's used by the most content
consumers?
How many distinct (licensed) users are there? What percentage of those users are
active?
Are there content creators who are assigned a Power BI Pro or Power BI Premium
Per User (PPU) license, but aren't actively using that license?
Are the most active users members of your Power BI champions network?

 Tip

For analytical reporting, it's important that you add classifications to the data
model to analyze users based on their level of usage, or to analyze content based
on its level of usage. For more information, see Create classifications.

For more information about the Power BI activity log, see Access user activity data. For
more information about pre-built reports, see What is the admin monitoring
workspace?.
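To illustrate the classification idea from the preceding tip, the following sketch buckets each user by how many days they were active in the last 90 days, based on previously extracted activity log events. The input file and the frequent/occasional/dormant thresholds are illustrative assumptions, not an official classification.

```python
# Sketch: add a usage classification per user from a saved activity log extract.
# Assumptions: a hypothetical JSON export of 90 days of events and illustrative thresholds.
import json
from collections import defaultdict

with open("activity_events_last_90_days.json", encoding="utf-8") as f:  # hypothetical file
    events = json.load(f)

active_days = defaultdict(set)
for e in events:
    user = e.get("UserId")
    day = str(e.get("CreationTime", ""))[:10]  # keep only the date portion
    if user and day:
        active_days[user].add(day)

def classify(days_active: int) -> str:
    if days_active >= 30:
        return "Frequent"
    if days_active >= 5:
        return "Occasional"
    return "Dormant"

for user, days in sorted(active_days.items()):
    print(user, len(days), classify(len(days)))
```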

Understand published items


It's helpful to have an inventory of the content that exists in your Power BI tenant.
Whereas the previous section is concerned with user activities, this section is concerned
with a tenant inventory.

A tenant inventory is a snapshot of metadata at a given point in time. It describes what's been published to the Power BI service, and it includes an inventory of all workspaces, all reports, all datasets, and more. It can also include metadata for data sources or supporting resources, like gateways and capacities.

A tenant inventory is helpful to:

Understand how much content you have and where: Because the tenant
inventory includes all published items, it represents a complete inventory at that
time. You can use it to identify where content is published, and its dependencies
and lineage.
Examine the ratio of datasets to reports: You can use lineage information from
the tenant inventory to determine the extent to which data reuse occurs. For
example, if you discover many duplicate datasets, it could justify user training on
shared datasets. You might also decide that a consolidation project to reduce the
number of datasets is justified.
Understand trends between points in time: You can compare multiple snapshots
to identify trends. For example, you might find that a large number of new items
are published every month. Or you might discover that users are republishing a
new report (with a different report name) every time they modify it. Those types of
discoveries should prompt you to improve user training.
Scrutinize user permissions: You can gain valuable insight into how user
permissions are assigned. For example, you might routinely analyze which users
have access to what content. You might undertake further investigation to
determine whether asset oversharing is occurring. One approach you can take is to
correlate certain sensitivity labels (such as Highly Restricted) with high numbers of
user permission assignments.
Understand lineage and find highly used data sources and gateways: By
correlating the lineage information from the tenant inventory with user activities,
you can identify the most frequently used data sources and gateways.
Find unused content: You can compare your tenant inventory to the activity log to
find unused (or under-utilized) content. For example, there are 20 reports in a
workspace, yet only 12 reports have recent data in the activity log. You can
investigate why the other eight reports aren't used, and whether they should be
retired. Discovery of unused content might help you detect Power BI solutions that
need further development because they aren't useful.

 Tip

We recommend that you store your tenant inventory snapshot every week or every
month. Also, by combining activity log data with your tenant inventory snapshot,
you can produce a complete picture and enhance the value of the data.

For more information about the tenant inventory, see Access tenant inventory data.
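As a minimal sketch of capturing a dated tenant inventory snapshot, the following script calls the Get Groups As Admin API, expands the items in each workspace, and writes the result to a date-stamped file so that later snapshots can be compared for trends. The token and the chosen $expand set are assumptions; the API also supports paging with $skip when your tenant has more workspaces than $top returns.

```python
# Sketch: save a dated tenant inventory snapshot with the Get Groups As Admin API.
# Assumptions: an admin token, and that $top=5000 covers the tenant (page with $skip otherwise).
import datetime
import json
import requests

TOKEN = "<access token for a Power BI admin or service principal>"  # assumption
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/groups"
    "?$top=5000&$expand=reports,datasets,dashboards,dataflows"
)
response = requests.get(url, headers=HEADERS)
response.raise_for_status()
workspaces = response.json().get("value", [])

snapshot_name = f"tenant_inventory_{datetime.date.today()}.json"
with open(snapshot_name, "w", encoding="utf-8") as f:
    json.dump(workspaces, f, indent=2)

print(f"Saved {len(workspaces)} workspaces to {snapshot_name}")
```

Comparing two snapshot files, and joining them to the activity log, supports the trend and unused-content analysis described above.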

Educate and support users


Auditing data allows you to gain a deep understanding of how users in your
organization are using Power BI. In turn, your ability to educate and support users can
improve dramatically.
Based on actual user data, here are some examples of the types of actions you might
take.

Provide helpful information to users: When you see an activity for the first time
(for example, the first time a user publishes a report), you can send them an email
with information about your internal best practices for workspaces and security.
Teach users a better way: When you see certain activities that concern you (for
example, a significant and recurring number of report exports), you can contact the
user. You can then explain the downsides of their actions and provide them with a
better alternative.
Include in office hours: Based on recent activities, you can choose to discuss a
relevant topic during office hours.
Improve training curriculum: To better prepare new users (and avoid the same
missteps you see happening with existing users), you can improve or expand your
training content. You might also conduct some cross-training sessions for your
support team.
Improve the centralized portal: To improve consistency, you can invest time by
adding or changing the guidance and resources available in your centralized
portal.

Mitigate risk
Auditing data helps you understand what's happening in your Power BI tenant. This data
allows you to mitigate risk in various ways.

Auditing data helps you to:

Govern the Power BI tenant: Find out whether users are following your
governance guidelines and policies. For example, you might have a governance
policy that requires all content that's published to a particular workspace be
certified. Or, you might have guidelines for when groups (rather than users) should
be used for security. The activity log, your tenant inventory (described previously),
and admin APIs are helpful resources to help you govern your Power BI tenant.
Review security: Determine whether there are security concerns. For example, you
might see overuse of sharing from a personal workspace. Or you may see
unrelated content published to a single workspace (which leads to more
complicated security for the items in such a broadly defined workspace). The
activity log, your tenant inventory (described previously), and the admin APIs are
helpful for security auditing.
Minimize security issues: Use the activity log data to avoid or minimize the effect
of security issues. For example, you might detect that an organization-wide sharing
link was used in an unexpected way. By noticing this event in the activity log
shortly after it happens, you can take action to resolve the issue before the link is
used inappropriately.
Monitor tenant setting changes: Use the activity log data to determine when a tenant setting has changed. If you see that an unexpected change occurred, or that it was done by an unexpected user, you can act quickly to correct or revert the setting. You can also use the Get Tenant Settings REST API to regularly extract a snapshot of the tenant settings.
Review data sources: Determine whether unknown or unexpected data sources are
used by datasets, dataflows, or datamarts. You might also determine what types of
data source are in use (such as files or databases). You might also check whether
files are stored in an appropriate location (such as OneDrive for work or school).
Information protection: Review how sensitivity labels are used to reduce the risk
of data leakage and misuse of data. For more information, see the information
protection and data loss prevention series of articles.
Mentoring and user enablement: Take action to change user behaviors when
necessary. As you gain more knowledge about what users need and what actions
they are taking, you can influence mentoring and user enablement activities.
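To support the tenant setting monitoring described above, the following PowerShell sketch captures a point-in-time snapshot of tenant settings through the admin REST API by using the Power BI Management module. It's a minimal sketch, not a definitive implementation: the relative endpoint path (admin/tenantsettings), the output folder, and the response shape are assumptions that you should verify against the current REST API reference.

```powershell
# One-time setup: Install-Module -Name MicrosoftPowerBIMgmt -Scope CurrentUser
# Requires a Power BI administrator account.
Connect-PowerBIServiceAccount | Out-Null

# The 'admin/tenantsettings' path is an assumption based on the Get Tenant Settings API;
# confirm the exact endpoint in the current REST API reference.
$response = Invoke-PowerBIRestMethod -Url 'admin/tenantsettings' -Method Get

# Save a dated snapshot so that changes can be compared over time.
$snapshotPath = "C:\Audit\TenantSettings_$(Get-Date -Format 'yyyyMMdd').json"
$response | Out-File -FilePath $snapshotPath -Encoding utf8

Disconnect-PowerBIServiceAccount
```

Comparing successive snapshot files (for example, with a diff tool) is one simple way to spot unexpected tenant setting changes between audits.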

Improve compliance
Auditing data is critical to strengthen your compliance status and respond to formal
requests in different scenarios.

The following examples describe several types of requirements.

Regulatory requirements: You need data to strengthen your compliance status for industry, governmental, or regional regulatory requirements.
Example: You have a compliance requirement to track where personal data exists. Data Loss Prevention (DLP) for Power BI is one option for detecting certain types of sensitive data that's stored in published datasets. The metadata scanning APIs are another option for locating personal data. For example, you could check for certain column names or measure names that exist within published datasets.

Organizational requirements: You have internal governance, security, or data management requirements.
Example: You have an internal requirement to use row-level security (RLS) on all certified datasets. The Get Datasets As Admin API can help you verify whether this requirement is being met. Or, you have an internal requirement that sensitivity labels are mandatory for all content in Power BI. You can use the metadata scanning APIs to verify which sensitivity labels have been applied to Power BI content.

Contractual requirements: You have contractual requirements with partners, vendors, or customers.
Example: You have a customer that provides your organization with data. According to your agreement with the customer, the data must be stored in a specific geographic region. You can use the Get Capacities API to verify which region a capacity is assigned to (a sketch of this check follows these examples). You can use the metadata scanning APIs or the Get Groups As Admin API to verify which capacity a workspace is assigned to.

Internal audit requests: You need to fulfill requests made by internal auditors.
Example: Your organization conducts an internal security audit every quarter. You can use several APIs to audit requests for details about permissions in Power BI. Examples include the metadata scanning APIs, the Get User Artifact Access As Admin API for report sharing, the Get Group Users As Admin API for workspace roles, and the Get App Users As Admin API for Power BI app permissions.

External audit requests: You need to respond to requests made by external auditors.
Example: You receive a request from the auditors to summarize how all your Power BI data assets are classified. The metadata scanning APIs are one way to compile the sensitivity labels that are applied to Power BI content.
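To illustrate the contractual data-residency example above, the following PowerShell sketch lists the capacities in the organization and the region each one is assigned to by calling the Get Capacities admin API. The property names in the response (for example, displayName and region) are assumptions; inspect the actual response and adjust as needed.

```powershell
# Requires the MicrosoftPowerBIMgmt module and Power BI administrator permissions.
Connect-PowerBIServiceAccount | Out-Null

# The Get Capacities admin API returns the capacities in the organization.
$capacitiesJson = Invoke-PowerBIRestMethod -Url 'admin/capacities' -Method Get
$capacities = ($capacitiesJson | ConvertFrom-Json).value

# Property names (displayName, sku, region) are assumptions; verify against the API response.
$capacities |
    Select-Object displayName, sku, region |
    Sort-Object region |
    Format-Table -AutoSize

Disconnect-PowerBIServiceAccount
```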

Manage licenses and costs


Because auditing data contains information about actual user activities, it can help you
manage costs in various ways.

You can use the auditing data to:

Understand the mix of user licenses: To reduce cost, consider reassigning unused
user licenses (Pro or PPU) to other users. You may also be able to reassign a user to
a Free user license. When there are many consumers who only view content, it can
be more cost effective to use a Power BI Premium (P SKU) capacity with Free user
licenses (known as unlimited content distribution).
Assess the use of capacity licenses: Assess whether a Power BI capacity (purchased
with a P SKU, EM SKU, or A SKU) is sized appropriately for your workload and
usage patterns. To balance cost with decentralized management needs, you might
consider using multiple decentralized capacities (for example, three P1 capacities
that are each managed by a different team). To reduce cost, you might provision
one larger capacity (for example, a P3 capacity that's managed by a centralized
team).
Monitor the use of capacity autoscale: Determine whether Autoscale (available
with Power BI Premium) is set up in a cost-effective way. If autoscale is invoked too
frequently, it can be more cost-effective to scale up to a higher P SKU (for
example, from a P1 capacity to a P2 capacity). Or, you could scale out to more
capacities (for example, provision another P1 capacity).
Perform chargebacks: Perform intercompany chargebacks of Power BI costs based
on which users are using the service. In this situation, you should determine which
activities in the activity log are important, and correlate those activities to business
units or departments (a sketch follows this list).
View trials: The activity log records when users sign up for a PPU trial. That
information can prepare you to purchase a full license for those users before their
trial period ends.
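As a starting point for chargeback analysis, the following PowerShell sketch retrieves one day of activity log events and summarizes activity counts by user. Mapping users to business units or departments (for example, by joining to an HR or Microsoft Entra extract) is left out here; how you implement that mapping is your own design decision.

```powershell
# Requires the MicrosoftPowerBIMgmt module and Power BI administrator permissions.
Connect-PowerBIServiceAccount | Out-Null

# The activity log can be retrieved one UTC day at a time.
$day = (Get-Date).AddDays(-1).ToString('yyyy-MM-dd')
$eventsJson = Get-PowerBIActivityEvent `
    -StartDateTime "$($day)T00:00:00" `
    -EndDateTime   "$($day)T23:59:59"

$events = $eventsJson | ConvertFrom-Json

# Count activities per user as a simple proxy for usage-based cost allocation.
# Property names (UserId, Activity) reflect the activity event schema; verify on your tenant.
$events |
    Group-Object -Property UserId |
    Select-Object @{n = 'User'; e = { $_.Name }},
                  @{n = 'ActivityCount'; e = { $_.Count }} |
    Sort-Object ActivityCount -Descending

Disconnect-PowerBIServiceAccount
```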

Performance monitoring
Certain types of auditing data include information that you can use as an input to
performance tuning activities.

You can use auditing data to monitor:

The levels of dataset usage, when, and by which users.


The queries submitted by users who open live connection reports.
Which composite models depend on a shared dataset.
Details of data refresh operations.
When a data gateway is used for queries or data refresh operations.
Which data sources are used, how frequently, and by which users.
When query caching is enabled for datasets.
When query folding isn't occurring.
The number of active DirectQuery connections for data sources.
The data storage mode(s) used by datasets.

For more information, see Data-level auditing.

Related implementation planning content


The remainder of the auditing and monitoring content is organized into the following
articles.
Report-level auditing: Techniques that report creators can use to understand
which users are using the reports that they create, publish, and share.
Data-level auditing: Methods that data creators can use to track the performance
and usage patterns of data assets that they create, publish, and share.
Tenant-level auditing: Key decisions and actions administrators can take to create
an end-to-end auditing solution.
Tenant-level monitoring: Tactical actions administrators can take to monitor the
Power BI service, updates, and announcements.

Next steps
In the next article in this series, learn about report-level auditing.
Power BI implementation planning:
Report-level auditing
Article • 04/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This report-level auditing article is targeted at multiple audiences:

Report creators: Users who need to understand usage, adoption, and performance
of the reports that they've created, published, and shared.
Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization. Power BI administrators may need to collaborate with
IT, security, internal audit, and other relevant teams.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They may need to collaborate with Power BI administrators
and other relevant teams.

The concepts covered in this article apply primarily to solutions created for three
content delivery scopes, specifically enterprise BI, departmental BI, and team BI. Creators
of personal BI solutions may find the information in this article useful as well; however,
they're not the primary target.

This article focuses on auditing and monitoring of reports and visuals. However,
achieving good performance for a report and its visuals isn't possible when the
underlying dataset and/or data source doesn't perform well. For information about
auditing and monitoring of datasets, dataflows, and datamarts, see Data-level auditing.

This article is the first article in the auditing and monitoring series because it describes
built-in tools that content creators are likely to discover first. Ideally, you create shared
datasets (intended for reuse among many reports) before users create reports.
Therefore, we recommend that you read this article together with the Data-level
auditing article.

 Tip
Whether you're conversing with colleagues or reading online, you'll need to discern
whether the term report is used literally or more generally. Often, it's used in a
general way to refer to a single Power BI Desktop file (.pbix). The file might contain
a data model (which when published becomes a dataset), a report, or both. The
term can be used literally to refer to a report only (for example, a report with a Live
Connection to a dataset). In this article, the term is used literally.

Report performance targets


To effectively monitor reports, we recommend that you define what report performance
targets, like excellent performance, good performance, and poor performance, mean to
your organization. There aren't any universal definitions. You should always consider
these targets from the consumer's perspective.

Ideally, performance is a primary concern during the report design process. Here are
several situations when you might choose to set performance targets.

When validating or reviewing a new report (especially when you expect it to have a
content delivery scope to a large number of users).
Before endorsing a report (particularly when it's to be certified).
Prior to publishing a report to a production workspace.
When including a report in a Power BI app.

You might choose to create a standard performance target that's intended to apply to
all reports throughout the organization. For example, the first report page should render
within five seconds. However, because there are so many different considerations, it's
not typically realistic to expect that every solution should meet the same target.
Consider ranges for your performance targets that factor in the complexity level of the
solution.

Checklist - When considering how report creators should verify report performance, key
decisions and actions include:

" Identify report performance targets: Ensure that you have a good understanding
of what acceptable report performance means from the consumer's perspective.
" Document and communicate performance targets: If there are specific targets,
make sure they are communicated to the report creators in your organization.
Provide helpful information so that report creators understand how to measure
performance, and how to apply design techniques that improve performance.

The remainder of this article describes techniques that you can use to audit and monitor
report performance.

Report usage metrics


The main auditing resources available to report creators are the usage metrics reports,
which are built into the Power BI service.

The primary objective of the usage metrics reports is to assess the impact of one report,
or of all reports in a workspace. Because they focus on report views and the performance of
reports and dashboards (rather than other items, such as datasets and dataflows), they're
targeted at report creators.

Use the usage metrics reports to:

Determine which users are most actively viewing reports.


Understand how often reports are viewed and rank those reports by popularity
(based on usage).
Determine which report pages users access most frequently.
Find reports that haven't been viewed recently.
View high-level report performance statistics. These statistics can help guide report
design optimization efforts, and identify reports that may have intermittent or
consistent performance issues.
Understand which consumption methods (for example, browser or Power BI
mobile app) report consumers use. This information can help report creators
decide how much effort to put into optimizing reports for mobile use.

 Tip

Power BI captures usage metrics for activity that occurs for content that's been
published to the Power BI service (including when it's rendered by using Power BI
Embedded). Access to usage metrics is just one reason to encourage report
creators to publish their reports to the Power BI service, rather than sharing Power
BI Desktop files.

Usage metrics are built into the Power BI service, which is a key advantage because
report creators don't need to set up a process to extract and store the usage data. It's
fast and simple for them to get started.
Another advantage of the usage metrics is that the internal dataset (that contains the
usage metrics data) includes information that's not easily found elsewhere. For example,
it includes views per report page and report opening time duration. The report page
views are obtained by using client telemetry, which has limitations. Client telemetry
(used by report usage metrics) is different from server-side telemetry data (used by the
activity log).

Usage metrics include an internal dataset and a report. While the internal dataset can't
be edited or customized, you can customize the usage metrics report. You can also
update the report filters to learn about usage for all reports in a workspace (rather than
just one report). By using this approach, the broadest range available is one workspace.
You can view up to 30 days of history, including the most recent fully completed day.

) Important

The Power BI activity log is a better alternative when you want to:

Retrieve user activities for more than one workspace.


Extract and retain activity data for more than 30 days.
Analyze all activities that users perform in the Power BI service.

For more information about the activity log, see Tenant-level auditing.

The usage metrics reports are available to report creators and owners who are assigned
to the Contributor, Member, or Admin workspace role. To make the usage metrics
reports visible to workspace viewers (content consumers), you can create a copy of the
usage report and customize it.

 Tip

For more information about workspace roles, see the Content creator security
planning article.

There are two tenant settings related to usage metrics.

The Usage metrics for content creators tenant setting controls which groups of
report creators (who also have the necessary workspace role) may generate and
view the usage metrics reports. Commonly, Power BI administrators leave this
setting enabled for the entire organization. That way, all self-service report creators
can view the usage patterns for their content.
The Per-user data in usage metrics for content creators tenant setting determines
whether the names and email addresses of report consumers are displayed in the
usage metrics reports. When this setting is disabled (for some or all report
creators), Power BI suppresses names and email addresses in the usage metrics
reports, which is referred to as user masking. Most often, Power BI administrators
leave this setting enabled so that report creators can understand exactly who's
using their reports. Also, the ability to contact other users directly for feedback
about the content is valuable because it can help to improve the content.
Occasionally, you might have a security need to mask user information for certain
groups of report creators. When the setting is disabled, the report creator sees
unnamed user in place of the user details.

The ViewUsageMetrics operation in the Power BI activity log allows Power BI
administrators to monitor which content creators and owners are using the usage
metrics reports. You can use that information to guide training and documentation
efforts.
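For example, the following PowerShell sketch filters one day of activity log data to the ViewUsageMetrics operation so that you can see who's generating or viewing usage metrics reports. It assumes you have Power BI administrator permissions and the Power BI Management module installed, and the property names shown should be verified against the events returned in your tenant.

```powershell
# Retrieve yesterday's ViewUsageMetrics events from the activity log.
Connect-PowerBIServiceAccount | Out-Null

$day = (Get-Date).AddDays(-1).ToString('yyyy-MM-dd')
$eventsJson = Get-PowerBIActivityEvent `
    -StartDateTime "$($day)T00:00:00" `
    -EndDateTime   "$($day)T23:59:59" `
    -ActivityType  'ViewUsageMetrics'

# Property names (CreationTime, UserId, Activity) are based on the activity event schema.
$eventsJson | ConvertFrom-Json |
    Select-Object CreationTime, UserId, Activity |
    Sort-Object CreationTime

Disconnect-PowerBIServiceAccount
```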

Checklist - When planning for using the usage metrics report, key decisions and actions
include:

" Confirm usage metrics are enabled: Decide whether any Power BI report creator
(who has permission to edit the report) can view usage metrics. Set the Usage
metrics for content creators tenant setting to align with this decision.
" Decide whether per-user data is displayed in usage metrics: Determine whether
names and email can be shown to all or some users. Set the Per-user data in usage
metrics for content creators tenant setting to align with this decision.
" Verify workspace roles: Validate the workspace role assignments. Ensure that
appropriate report creators and owners have permission to edit content in the
workspace (thereby making the usage metrics reports available).
" Create and customize the usage metrics reports: For content you want to analyze,
generate a usage metrics report. When appropriate, customize the usage metrics
report to include all reports in the workspace.
" Include in documentation and training for report creators: Include guidance for
your report creators about how they can take advantage of the usage metrics
reports. Ensure that report creators understand the use cases and key limitations.
Include examples of key metrics they can track, and how they can use the
information to continually improve the solutions they create and publish.
" Monitor who's using usage metrics: Use the Power BI activity log to track which
content creators and owners are using the usage metrics reports.
" Determine whether usage metrics is sufficient: Consider the situations when the
built-in usage metrics report would be sufficient. Determine whether data-level and
tenant-level auditing solutions (described in other articles in this series) would be
more appropriate.

Performance Analyzer
Performance Analyzer is a tool available in Power BI Desktop to help you investigate and
monitor report performance. It can help report creators understand the performance of
visuals and DAX formulas.

 Tip

In addition to Performance Analyzer, there are other tools that you can use to
troubleshoot report performance issues. For example, you can troubleshoot specific
report consumption issues that impact a Premium capacity by using the Premium
utilization and metrics app or the dataset event logs that are sent to Azure Log
Analytics. For more information about these tools (and other tools), see Data-level
auditing.

Performance Analyzer captures operations while a user interacts with a report in Power
BI Desktop. It produces a log that records how each report element performs and for
each interaction. For example, when you interact with a report slicer, cross-filter a visual,
or select a page, the action and time duration are recorded in the log. Depending on the
type of operation, other details are recorded too.

Summarized information is available in the Performance Analyzer pane. You can export
log results to a JSON file, allowing you to follow through with more in-depth analysis.
The export file contains more information about the logged operations. For more
information about using the export file, see the Performance Analyzer documentation
on GitHub.
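As a hedged illustration of that deeper analysis, the following PowerShell sketch summarizes an exported Performance Analyzer file by total duration per logged operation. The property names used here (events, name, start, end) are assumptions about the export format; open your own .json export and adjust the names to match what you find.

```powershell
$exportPath = 'C:\Temp\PowerBIPerformanceData.json'   # adjust to your own export file

$log = Get-Content -Path $exportPath -Raw | ConvertFrom-Json

# Assumption: the export contains an 'events' array where each event has
# 'name', 'start', and 'end' properties. Verify against your file and adjust.
$durations = $log.events |
    Where-Object { $_.start -and $_.end } |
    ForEach-Object {
        [pscustomobject]@{
            Operation  = $_.name
            DurationMs = ([datetime]$_.end - [datetime]$_.start).TotalMilliseconds
        }
    }

# Total duration per operation, slowest first.
$durations |
    Group-Object Operation |
    ForEach-Object {
        [pscustomobject]@{
            Operation       = $_.Name
            TotalDurationMs = [math]::Round(($_.Group | Measure-Object DurationMs -Sum).Sum)
        }
    } |
    Sort-Object TotalDurationMs -Descending
```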

) Important

Keep in mind that Performance Analyzer runs within Power BI Desktop. The
environment of the report creator's machine may differ from the environment of
the Power BI service.

Some common differences that you should account for include:


Data volume in the underlying dataset
The number of concurrent users viewing the report
Table storage mode(s)
Whether a data gateway is used
Whether a Power BI Premium capacity is involved
Whether query caching is enabled
Whether query parallelization is used
The number of active connections
Whether row-level security (RLS) is enforced by the Power BI service.

Data is logged when a user interacts with a report element. Logged data includes more
than the visual display elements. It also includes:

Visual display activity.


DAX queries (when the visual retrieves data from the data model instead of the
cache).
DirectQuery activity (when applicable).
Other activities performed by a visual, such as query preparation, background
processing activities, and wait time.

Depending on their experience level, and how roles and responsibilities are divided, a
report creator may need assistance to resolve performance issues. That's especially true
when trying to understand why a query or calculation is slow. Assistance for a report
creator could come in the form of:

Collaborating with a data creator: The root cause of performance issues is often
related to the design of the data model.
User support: Assistance is often intra-team support from close colleagues or
internal community support from other Power BI users in the organization. In some
situations, it could also involve help desk support.
Skills mentoring from the Center of Excellence: Assistance could also be in the
form of skills mentoring activities, such as office hours.

Some organizations have specific requirements for endorsed (certified or promoted)
reports. That's particularly true for reports that are widely used throughout the
organization. In that case, you might be required (or encouraged) to verify Performance
Analyzer results before publishing the report, or before it's certified.

 Tip
Well-performing reports have a positive impact on solution adoption. We
recommend that you encourage report creators to test report performance before
publishing a new solution to the Power BI service. You should also encourage them
to retest performance when significant changes are made to an existing solution
(report or dataset).

For more information about optimization techniques, see Optimization guide for
Power BI.

Checklist - When considering how report creators should use Performance Analyzer, key
decisions and actions include:

" Create documentation and training for report creators: Include guidance for your
report creators about what performance targets exist and how they can validate,
measure, and test performance. Provide guidance to your report creators about
how to create well-performing reports. Help new report creators adopt good
design habits early.
" Ensure support and skills mentoring are available: Ensure that your report creators
know how to get assistance to resolve performance issues.
" Include in requirements to certify reports: Decide whether you want to include
Performance Analyzer results as a prerequisite to certifying (endorsing) reports. If
so, ensure that this requirement is documented and communicated to report
creators.

Next steps
In the next article in this series, learn about data-level auditing.
Power BI implementation planning:
Data-level auditing
Article • 06/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This data-level auditing article is targeted at multiple audiences:

Data creators and workspace administrators: Users who need to understand
usage, adoption, and performance of the datasets, dataflows, and datamarts that
they create, publish, and share.
Power BI administrators: The administrators who are responsible for overseeing
Power BI in the organization. Power BI administrators may need to collaborate with
IT, security, internal audit, and other relevant teams. Power BI administrators may
also need to collaborate with content creators when troubleshooting performance.
Power BI capacity administrators: The administrators responsible for overseeing
Premium capacity in the organization. Power BI capacity administrators may need
to collaborate with content creators when troubleshooting performance.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They may need to collaborate with Power BI administrators
and other relevant teams.
System administrators: The team that's responsible for creating and securing
Azure Log Analytics resources, and the database administrators who manage data
sources.

The concepts covered in this article apply primarily to solutions created for three
content delivery scopes, specifically enterprise BI, departmental BI, and team BI. Creators
of personal BI solutions may find the information in this article useful as well; however,
they're not the primary target.

Achieving good performance in reports and visuals isn't possible when the underlying
dataset and/or data source isn't performing well. This article focuses on auditing and
monitoring of datasets, dataflows, and datamarts. It's the second article in the auditing
and monitoring series because the tools and techniques are more complex than what's
described in the Report-level auditing article. Ideally, you create shared datasets
(intended for reuse among many reports) before users create reports. Therefore, we
recommend that you read this article together with the Report-level auditing article.

Because Power BI datasets are built upon the Analysis Services tabular engine, you can
connect to a local data model (in Power BI Desktop) or a Premium dataset (in the Power
BI service) as if it's an Analysis Services database. Therefore, many of the auditing and
monitoring capabilities of Analysis Services are supported for Power BI Premium
datasets.

7 Note

For more information about models hosted in Analysis Services, see Monitoring
overview.

The remainder of this article primarily focuses on models published to the Power BI
service.

Dataset event logs


Over time, data creators and owners may experience situations with their datasets. A
dataset can:

Become more complex and include complex measures.


Grow larger in data volume.
Consume more memory (sometimes unnecessarily when poor design decisions
were made).
Use more diverse data sources, and more complex table relationships.
Include more row-level security (RLS) rules. For more information, see Enforce data
security based on consumer identity.
Have more reports that depend on it. For more information about using live
connections with a shared dataset, see the managed self-service BI usage scenario.
Have more downstream data models that depend on it. For more information
about using DirectQuery for Power BI datasets and Analysis Services with a shared
dataset, see the customizable managed self-service BI usage scenario.
Experience slower query execution and slower data refresh times.
Contribute to slower rendering of reports and visuals.

To ensure usability, good performance, and adoption of the content they create, you
should audit the usage and performance of the data assets you're responsible for
managing. You can use the dataset event logs, which capture user-generated and
system-generated activities that occur for a dataset. They're also referred to as trace
events, dataset logs, or dataset activity logs. System administrators often call them low-
level trace events because they're detailed.

You should analyze dataset trace events to:

Audit all activities that occurred on a dataset.


Troubleshoot and optimize dataset performance, memory usage, and query
efficiency.
Investigate dataset refresh details and duration.
Monitor Power Query formula language (M queries) sent by Power Query.
Monitor DAX formulas and expressions sent to the dataset (Analysis Services
engine).
Verify whether the correct storage mode was selected based on the workloads and
the need to balance fresh data and optimal performance.
Audit which row-level security roles are invoked, for which users, and on which
datasets.
Understand the number of concurrent users.
Validate a dataset (for example, to verify data quality and performance before
endorsing a dataset, or before publishing it to a production workspace).

The events generated by a Power BI dataset are derived from existing diagnostic logs
available for Azure Analysis Services. There are many types of trace events that you can
capture and analyze, which are described in the following sections.

Azure Log Analytics


Azure Log Analytics is a component of the Azure Monitor service. Azure Log Analytics
integration with Power BI allows you to capture dataset events from all datasets in a
Power BI workspace. It's supported only for Premium workspaces. After you set up
integration and the connection is enabled (for a Power BI Premium workspace), dataset
events are automatically captured and continually sent to an Azure Log Analytics
workspace. The dataset logs are stored in Azure Data Explorer, which is an append-only
database that's optimized for capturing high-volume, near-real time telemetry data.

You assign a Power BI Premium workspace to a Log Analytics workspace in Azure. You
must create a new Log Analytics resource in your Azure subscription to enable this type
of logging.

Logs from one or more Power BI workspaces will be sent to a target Log Analytics
workspace. Here are some ways you can choose to organize the data.

One target workspace for all audit data: Store all the data in one Log Analytics
workspace. That's helpful when the same administrator or users will access all data.
Target workspaces organized by subject area: Organize the content by subject
area. This technique is particularly helpful when different administrators or users
are permitted to access the audit data from Azure Log Analytics. For example,
when you need to segregate sales data from operations data.
One target workspace for each Power BI workspace: Set up a one-to-one
relationship between a Power BI workspace and an Azure Log Analytics workspace.
That's useful when you have particularly sensitive content, or when the data is
subject to specific compliance or regulatory requirements.

 Tip

Thoroughly review the documentation and frequently asked questions on this
functionality so that you're clear on what's possible and that you understand the
technical requirements. Before making this functionality broadly available to
workspace administrators in your organization, consider doing a technical proof of
concept (POC) with one Power BI workspace.

) Important

Although the names are similar, the data captured by Azure Log Analytics isn't the
same as the Power BI activity log. Azure Log Analytics captures detail-level trace
events from the Analysis Services engine. Its sole purpose is to help you analyze
and troubleshoot dataset performance. Its scope is at the workspace level.
Conversely, the purpose of the activity log is to help you understand how often
certain user activities occur (such as editing a report, refreshing a dataset, or
creating an app). Its scope is the entire Power BI tenant.

For more information about the user activities you can audit for your Power BI
tenant, see Tenant-level auditing.

The Azure Log Analytics connection for workspace administrators tenant setting controls
which groups of users (who also have the necessary workspace admin role) can connect
a Power BI workspace to an existing Azure Log Analytics workspace.

Before you can set up integration, you must meet security prerequisites. Therefore,
consider enabling the Power BI tenant setting only for Power BI workspace
administrators who also have the required permissions in Azure Log Analytics, or who
can obtain those permissions upon request.

 Tip
Collaborate with your Azure administrator early in the planning process, especially
when getting approval to create a new Azure resource is a challenge in your
organization. You'll also need to plan for the security prerequisites. Decide whether
to grant permission to your Power BI workspace administrator in Azure, or whether
to grant permission to the Azure administrator in Power BI.

The dataset logs captured by Azure Log Analytics include the dataset queries, query
statistics, detailed refresh activity, CPU time consumed on Premium capacities, and
more. Because they're detail-level logs from the Analysis Services engine, the data can
be verbose. Large data volumes are common for large workspaces that experience high
dataset activity.

To optimize cost when using Azure Log Analytics with Power BI:

Connect Power BI workspaces to Azure Log Analytics only when you're actively
troubleshooting, testing, optimizing, or investigating dataset activity. When
connected, a trace runs on all the datasets in the workspace.
Disconnect Azure Log Analytics from a Power BI workspace when you no longer
need to actively troubleshoot, test, optimize, or investigate dataset activity. By
disconnecting, you're terminating the trace from running on all the datasets in the
workspace.
Make sure you understand the cost model for how Azure Log Analytics bills for
data ingestion, storage, and queries.
Don't store the data in Log Analytics for longer than the default 30-day retention
period. That's because dataset analysis typically focuses on immediate
troubleshooting activities.

There are several ways to access the events that are sent to Azure Log Analytics. You can
use:

The prebuilt Log Analytics for Power BI Datasets template app.


The Power BI Desktop connector for Azure Data Explorer (Kusto). Use the Kusto
Query Language (KQL) to analyze the data that's stored in Log Analytics. If you
have SQL query experience, you'll find many similarities with KQL.
The web-based query experience in Azure Data Explorer.
Any query tool that can run KQL queries (a sample query follows this list).
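As one example, the following PowerShell sketch runs a KQL query against a Log Analytics workspace by using the Az.OperationalInsights module, returning the slowest queries from the last day. The table name (PowerBIDatasetsWorkspace), the column names, and the workspace ID placeholder are assumptions based on the Power BI integration; check the tables in your own Log Analytics workspace and adjust the query.

```powershell
# Requires the Az.Accounts and Az.OperationalInsights modules, plus read access
# to the Log Analytics workspace.
Connect-AzAccount | Out-Null

$workspaceId = '<log-analytics-workspace-guid>'   # hypothetical placeholder

# Assumption: dataset events land in the PowerBIDatasetsWorkspace table with
# OperationName, DurationMs, ArtifactName, and EventText columns. Adjust to your schema.
$kql = @'
PowerBIDatasetsWorkspace
| where TimeGenerated > ago(1d)
| where OperationName == "QueryEnd"
| project TimeGenerated, ArtifactName, DurationMs, EventText
| top 20 by DurationMs desc
'@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table -AutoSize
```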

 Tip
Because there's a high volume of dataset trace events, we recommend that you
develop a DirectQuery model to analyze the data. A DirectQuery model allows you
to query the data in near-real time. The events usually arrive within five minutes.

Checklist - When planning to use Azure Log Analytics, key decisions and actions include:

" Consider a technical POC: Plan for a small project to ensure that you fully
understand the technical requirements, security requirements, which events to
capture, and how to analyze the logs.
" Decide which workspaces should be integrated with Log Analytics: Determine
which Premium workspaces contain datasets that you're interested to analyze.
" Decide whether Log Analytics should be enabled full-time for any workspaces:
For cost optimization, determine whether there are situations (or specific
workspaces) where logging should be enabled permanently. Decide whether
workspaces should be disconnected when troubleshooting isn't occurring.
" Decide how long to retain Log Analytics data: Determine whether there's a need
to set a longer retention period than the 30-day default.
" Clarify the process for requesting new Log Analytics workspace: Collaborate with
your Azure administrator to clarify how requests for a new Log Analytics resource
should be submitted by Power BI workspace administrators.
" Decide how security will work: Collaborate with your Azure administrator to decide
whether it's more feasible for a Power BI workspace administrator to be granted
rights to an Azure Log Analytics workspace, or for a Azure administrator to be
granted rights to a Power BI workspace. As you make this security decision,
consider your plan to connect and disconnect workspaces regularly (for cost
optimization).
" Decide how to organize the target Log Analytics workspaces: Consider how many
Azure Log Analytics workspaces will be appropriate to organize the data from one
or more Power BI workspaces. Align this decision with your security decisions for
who may access the log data.
" Decide which workspace administrators are allowed to connect: Determine which
groups of workspace administrators can connect a Power BI workspace to a Log
Analytics workspace. Set the Azure Log Analytics connection for workspace
administrators tenant setting to align with this decision.
" Create the Azure Log Analytics resource: Collaborate with your Azure administrator
to create each Log Analytics workspace. Verify and update the permissions that are
assigned in Azure to ensure that the Power BI configuration can occur without any
issues. Validate that the data stored in Azure is in the correct geographic region.
" Set the Log Analytics connection for each Power BI workspace: Collaborate with
your Power BI workspace administrators to set up the connection to Log Analytics
for each Power BI workspace. Verify that the log data is flowing correctly to the Log
Analytics workspace.
" Create queries to analyze the data: Set up KQL queries to analyze the data in Log
Analytics based on your use case and current needs.
" Include guidance for Power BI workspace administrators: Provide information and
prerequisites to your Power BI workspace administrators for how to request a new
Log Analytics workspace and how to connect to a Power BI workspace. Also, explain
when it's appropriate to disconnect a Power BI workspace.
" Provide guidance and sample queries for analyzing the data: Create KQL queries
for workspace administrators to make it easier for them to get started with
analyzing the data that's been captured.
" Monitor costs: Collaborate with your Azure administrator to monitor Log Analytics
costs on an ongoing basis.

SQL Server Profiler


You can use SQL Server Profiler (SQL Profiler) to capture Power BI dataset events. It's a
component of SQL Server Management Studio (SSMS). Connectivity to a Power BI
dataset is supported with SSMS because it's based on the Analysis Services architecture
that originated in SQL Server.

You can use SQL Profiler during different stages of the lifecycle of a dataset.

During data model development: SQL Profiler can connect to a data model in
Power BI Desktop as an external tool. This approach is useful for data modelers
who want to validate their data model, or do performance tuning.
After the dataset is published to the Power BI service: SQL Profiler can connect to
a dataset in a Premium workspace. SSMS is one of many supported client tools
that can use the XMLA endpoint for connectivity. This approach is useful when you
want to audit, monitor, validate, troubleshoot, or tune a published dataset in the
Power BI service.

It's also possible to use SQL Profiler as an external tool within DAX Studio. You can use
DAX Studio to start a profiler trace, parse the data, and format the results. Data
modelers who use DAX Studio often prefer this approach versus using SQL Profiler
directly.
7 Note

Using SQL Profiler is a different use case to the activity of profiling data. You profile
data in the Power Query Editor to gain a deeper understanding of its
characteristics. While data profiling is an important activity for data modelers, it's
not in scope for this article.

Consider using SQL Profiler instead of Azure Log Analytics when:

Your organization doesn't allow you to use or create Azure Log Analytics resources
in Azure.
You want to capture events for a data model in Power BI Desktop (that hasn't been
published to a Premium workspace in the Power BI service).
You want to capture events for one dataset for a short period of time (rather than
all datasets in a Premium workspace).
You want to capture certain events only during a trace (such as only the Query End
event).
You want to start and stop traces on a frequent basis (like when you need to
capture dataset events that are occurring now).

Like Azure Log Analytics (described earlier in this article), dataset events captured by
SQL Profiler are derived from existing diagnostic logs available for Azure Analysis
Services. However, there are some differences in the events that are available.

 Tip

The use of SQL Profiler for monitoring Analysis Services is covered in many books,
articles, and blog posts. Most of that information is relevant for monitoring a Power
BI dataset.

) Important

You can also use SQL Profiler to monitor queries sent from the Power BI service to
the underlying data sources (for example, to a SQL Server relational database).
However, the capability to trace a relational database is deprecated. Connecting to
the Analysis Services engine is supported and not deprecated. If you're familiar
with Analysis Services extended events and you prefer to use them, connectivity
from SSMS is possible for a data model in Power BI Desktop. However, it's not
supported for Power BI Premium. Therefore, this section focuses only on standard
SQL Profiler connectivity.
The Allow XMLA endpoints and Analyze in Excel with on-premises datasets tenant setting
controls which groups of users (who are also assigned to the Contributor, Member, or
Admin workspace role, or the Build permission for the individual dataset) can use the
XMLA endpoint to query and/or maintain datasets in the Power BI service. For more
information about using the XMLA endpoint, see the advanced data model
management usage scenario.

7 Note

You can also use SQL Profiler to help debug and troubleshoot specific DAX
expressions. You can connect SQL Profiler to Power BI Desktop as an external tool.
Look for the DAX Evaluation Log event class to view intermediary results of a DAX
expression. That event is generated when you use the EVALUATEANDLOG DAX
function in a model calculation.

This function is only intended for development and test purposes. You should
remove it from your data model calculations before publishing the data model to a
production workspace.

Checklist - When planning to use SQL Profiler, key decisions and actions include:

" Decide who may have SSMS or DAX Studio installed: Determine whether you'll
allow all the Power BI content creators in your organization to install SSMS and/or
DAX Studio so they can use SQL Profiler. Decide whether these ancillary tools are
installed upon request, or part of a standard set of software that's installed for
approved data creators in the organization.
" Add SQL Profiler to the External Tools menu in Power BI Desktop: If data creators
will use SQL Profiler often, ask IT to automatically add it to the External Tools menu
in Power BI Desktop for these users.
" Decide who can use the XMLA endpoint: Determine whether all users are
permitted to connect to published datasets by using the XMLA endpoint, or
whether it's limited to approved data creators only. Set the Allow XMLA endpoints
and Analyze in Excel with on-premises datasets tenant setting to align with this
decision.
" Provide guidance and sample queries for analyzing the data: Create
documentation for your data creators so they understand the recommended way to
audit and monitor datasets. Provide guidance for common use cases to make it
easier for them to get started gathering and analyzing trace data.

Data model metadata


Because Power BI datasets are built upon the Analysis Services engine, you have access
to the tools that can query the metadata of a data model. Metadata includes everything
about the data model, including table names, column names, and measure expressions.

Dynamic management views


The Analysis Services Dynamic Management Views (DMVs) can query the data model
metadata. You can use the DMVs to audit, document, and optimize your data models at
a point in time.

Specifically, you can:

Audit the data sources used by a model.


Discover which objects are consuming the most memory in a model.
Determine how efficiently column data can be compressed.
Find columns in a model that aren't used.
Audit active user sessions and connections.
Verify the structure of the model.
Review DAX expressions used by calculated tables, calculated columns, measures,
and row-level security (RLS) rules.
Identify dependencies between objects and measures.

 Tip

The DMVs retrieve information about the current state of a dataset. Think of the
data returned by DMVs as a snapshot of what's occurring at a point in time.
Conversely, the dataset event logs (described earlier in this article) retrieve
information about what activities occurred for a dataset while a trace connection
was active.

SSMS is a tool commonly used to run DMV queries. You can also use the Invoke-ASCmd
PowerShell cmdlet to create and execute XMLA scripts that query the DMVs.
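As a hedged example, the following PowerShell sketch uses the Invoke-ASCmd cmdlet (from the SqlServer module) to run a DMV query against a published dataset through the XMLA endpoint. The workspace name, dataset name, and output path are hypothetical placeholders, and depending on your environment you may need to supply credentials (for example, with the -Credential parameter or a service principal). The result is raw XML that you'd typically parse or save for further analysis.

```powershell
# Requires the SqlServer PowerShell module and a Premium workspace with the
# XMLA endpoint enabled. Workspace and dataset names below are placeholders.
Import-Module SqlServer

$xmlaEndpoint = 'powerbi://api.powerbi.com/v1.0/myorg/Sales Analytics'  # hypothetical workspace
$datasetName  = 'Sales Model'                                           # hypothetical dataset

# DMV query that lists the tables in the data model.
$dmvQuery = 'SELECT * FROM $SYSTEM.TMSCHEMA_TABLES'

# Invoke-ASCmd returns the result as XML; save it for review or parse it further.
$resultXml = Invoke-ASCmd -Server $xmlaEndpoint -Database $datasetName -Query $dmvQuery
$resultXml | Out-File -FilePath 'C:\Audit\DatasetTables.xml' -Encoding utf8
```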

Third-party tools and external tools are also popular with the Power BI community.
These tools use the publicly documented DMVs to simplify access and to work with data
returned by the DMVs. One example is DAX Studio, which includes explicit
functionality to access the DMVs. DAX Studio also includes a built-in View Metrics
feature, which is commonly known as Vertipaq Analyzer. Vertipaq Analyzer has a user
interface for analyzing the structure and size of tables, columns, relationships, and
partitions in a data model. You can also export (or import) the data model metadata to a
.vpax file. The exported file only contains metadata about the data model structure and
size, without storing any model data.

 Tip

Consider sharing a .vpax file with someone when you need assistance with a data
model. That way, you won't share the model data with that person.

You can use DMV queries during different stages of the lifecycle of a dataset.

During data model development: Your tool of choice can connect to a data model
in Power BI Desktop as an external tool. This approach is useful for data modelers
who want to validate their data model, or do performance tuning.
After the dataset is published to the Power BI service: Your tool of choice can
connect to a dataset in a Premium workspace. SSMS is one of many supported
client tools that use the XMLA endpoint for connectivity. This approach is useful
when you want to audit or validate a published dataset in the Power BI service.

 Tip

If you decide to write your own DMV queries (for example, in SSMS), be aware that
the DMVs don't support all SQL operations. Also, some DMVs aren't supported in
Power BI (because they require Analysis Services server administrator permissions
that aren't supported by Power BI).

The Allow XMLA endpoints and Analyze in Excel with on-premises datasets tenant setting
controls which groups of users (who are also assigned to the Contributor, Member, or
Admin workspace role, or the Build permission for the individual dataset) can use the
XMLA endpoint to query and/or maintain datasets in the Power BI service.

For more information about using the XMLA endpoint, third-party tools, and external
tools, see the advanced data model management usage scenario.

Best Practice Analyzer


Best Practice Analyzer (BPA) is a feature of Tabular Editor, which is a third-party tool
that's achieved widespread adoption by the Power BI community. BPA includes a set of
customizable rules that can help you audit the quality, consistency, and performance of
your data model.

 Tip

To set up BPA, download the set of best practice rules, which are provided by
Microsoft on GitHub.

Primarily, BPA can help you improve consistency of models by detecting suboptimal
design decisions that can reduce performance issues. It's helpful when you have
self-service data modelers distributed throughout different areas of the organization.
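One way to apply BPA at scale is to run Tabular Editor from the command line, for example as part of a scheduled script or a deployment pipeline. The following PowerShell sketch is a minimal illustration only; the installation path, file names, and the -A switch for running Best Practice Analyzer rules are assumptions that you should verify against the Tabular Editor command-line documentation.

```powershell
# Paths below are assumptions; adjust them to your environment.
$tabularEditorExe = 'C:\Program Files (x86)\Tabular Editor\TabularEditor.exe'
$modelFile        = 'C:\Models\SalesModel.bim'        # hypothetical model metadata file
$bpaRulesFile     = 'C:\Models\BPARules.json'         # rules file downloaded from GitHub

# The -A switch (run Best Practice Analyzer with the given rules file) is an assumption;
# confirm the switch name in the Tabular Editor command-line documentation.
& $tabularEditorExe $modelFile -A $bpaRulesFile

if ($LASTEXITCODE -ne 0) {
    Write-Warning "Best Practice Analyzer reported rule violations or an error (exit code $LASTEXITCODE)."
}
```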

BPA can also help you audit and govern your data models. For example, you can verify
whether a data model includes any row-level security (RLS) roles. Or, you can validate
whether all model objects have a description. That's helpful when, for example, your
goal is to ensure that a data model includes a data dictionary.

BPA can expose design issues that can help the Center of Excellence determine whether
more training or documentation is necessary. It can take action to educate data creators
on best practices and organizational guidelines.

 Tip

Keep in mind that BPA can detect the existence of a characteristic (such as row-level
security). However, it may be difficult to determine whether it's set up correctly. For
that reason, a subject matter expert may need to conduct a review. Conversely, the
non-existence of a particular characteristic doesn't necessarily mean a bad design;
the data modeler may have a good reason for producing a particular design.

Checklist - When planning to access metadata for data models, key decisions and
actions include:

" Decide who may have SSMS installed: Determine whether you'll allow all Power BI
content creators in your organization to install SSMS so that they can connect to
published datasets. Decide whether it's installed upon request, or as part of a
standard set of software that's installed for approved data creators in the
organization.
" Decide who may have third-party tools installed: Determine whether you'll allow
all Power BI content creators in your organization to install third-party tools (such
as DAX Studio and Tabular Editor) so that they can monitor local data models
and/or published datasets. Decide whether they're installed upon request, or as part
of a standard set of software that's installed for approved data creators in the
organization.
" Set up best practice rules: Decide which Best Practice Analyzer rules can scan the
data models in your organization.
" Decide who can use the XMLA endpoint: Determine whether all users are
permitted to connect to datasets by using the XMLA endpoint, or whether it's
limited to approved data creators only. Set the Allow XMLA endpoints and Analyze
in Excel with on-premises datasets tenant setting to align with this decision.
" Provide guidance for content creators: Create documentation for your data
creators so that they understand the recommended way(s) to analyze datasets.
Provide guidance for common use cases to make it easier for them to start
gathering and analyzing DMV results and/or using Best Practice Analyzer.

Data model and query performance


Power BI Desktop includes several tools that help data creators troubleshoot and
investigate their data models. These capabilities are targeted at data modelers who want
to validate their data model, and do performance tuning before publishing to the Power
BI service.

Performance Analyzer
Use Performance Analyzer, which is available in Power BI Desktop, to audit and
investigate performance of a data model. Performance Analyzer helps report creators
measure the performance of individual report elements. Commonly, however, the root
cause of performance issues is related to data model design. For this reason, a dataset
creator can benefit from using Performance Analyzer too. If there are different content
creators responsible for creating reports versus datasets, it's likely that they'll need to
collaborate when troubleshooting a performance issue.

 Tip

You can use DAX Studio to import and analyze the log files generated by
Performance Analyzer.

For more information about Performance Analyzer, see Report-level auditing.


Query Diagnostics
Use Query Diagnostics, which are available in Power BI Desktop, to investigate the
performance of Power Query. They're useful for troubleshooting, and for when you need
to understand what the Power Query engine is doing.

The information you can gain from Query Diagnostics includes:

Extra detail related to error messages (when an exception occurs).


The queries that are sent to a data source.
Whether query folding is or isn't occurring.
The number of rows returned by a query.
Potential slowdowns during a data refresh operation.
Background events and system-generated queries.

Depending on what you're looking for, you can enable one or all the logs: aggregated,
detailed, performance counters, and data privacy partitions.

You can start session diagnostics in Power Query Editor. Once enabled, query and
refresh operations are collected until diagnostic tracing is stopped. The data is
populated directly in the query editor as soon as the diagnostics are stopped. Power
Query creates a Diagnostics group (folder), and adds several queries to it. You can then
use standard Power Query functionality to view and analyze the diagnostics data.

Alternatively, you can enable a trace in Power BI Desktop in the Diagnostics section of
the Options window. Log files are saved to a folder on your local machine. These log files
are populated with the data after you close Power BI Desktop, at which time the trace is
stopped. Once Power BI Desktop is closed, you can open the log files with your
preferred program (such as a text editor) to view them.

Query evaluation and folding


Power Query supports various capabilities to help you understand query evaluation,
including the query plan. It can also help you determine whether query folding is
occurring for an entire query, or for a subset of steps in a query. Query folding is one of
the most important aspects of performance tuning. It's also helpful to review the native
queries sent by Power Query when you're monitoring a data source, which is described
later in this article.

Premium metrics app


When troubleshooting, it can be helpful to collaborate with your Power BI Premium
capacity administrator. The capacity administrator has access to the Power BI Premium
utilization and metrics app. This app can provide you with a wealth of information about
activities that occur in the capacity. That information can help you troubleshoot dataset
issues.

 Tip

Your Premium capacity administrator can grant access to additional users (non-
capacity administrators) to allow them to access the Premium metrics app.

The Premium metrics app comprises an internal dataset and an initial set of reports. It
helps you perform near-real-time monitoring of a Power BI Premium capacity (P SKU) or
Power BI Embedded (A SKU) capacity. It includes data for the last two to four weeks
(depending on the metric).

Use the Premium metrics app to troubleshoot and optimize datasets. For example, you
can identify datasets that have a large memory footprint or that experience routinely
high CPU usage. It's also a useful tool to find datasets that are approaching the limit of
your capacity size.

Checklist - When considering approaches to use for monitoring data model and query
performance, key decisions and actions include:

" Identify dataset query performance targets: Ensure that you have a good
understanding of what good dataset performance means. Determine when you'll
require specific query performance targets (for example, queries to support reports
must render within five seconds). If so, make sure the targets are communicated to
the data creators in your organization.
" Identify dataset refresh performance targets: Determine when you'll require
specific data refresh targets (for example, completion of a data refresh operation
within 15 minutes and prior to 5am). If so, make sure the targets are communicated
to the data creators in your organization.
" Educate your support team: Ensure that your internal user support team is familiar
with the diagnostic capabilities so they're ready to support Power BI users when
they need help.
" Connect your support team and database administrators: Make sure that your
support team knows how to contact the correct administrators for each data source
(when troubleshooting query folding, for example).
" Collaborate with your Premium capacity administrator: Work with your capacity
administrator to troubleshoot datasets that reside in a workspace that's assigned to
Premium capacity or Power BI Embedded capacity. When appropriate, request
access to the Premium metrics app.
" Provide guidance for content creators: Create documentation for your data
creators so that they understand what actions to take when troubleshooting.
" Include in training materials: Provide guidance to your data creators about how to
create well-performing data models. Help them adopt good design habits early.
Focus on teaching data creators how to make good design decisions.

Data source monitoring


Sometimes it's necessary to directly monitor a specific data source that Power BI
connects to. For example, you may have a data warehouse that's experiencing an
increased workload, and users are reporting performance degradation. Typically, a
database administrator or system administrator monitors data sources.

You can monitor a data source to:

Audit which users are sending queries to the data source.


Audit which applications (like Power BI) are sending queries to the data source.
Review what query statements are sent to the data source, when, and by which
users.
Determine how long it takes for a query to run.
Audit how row-level security is invoked by the source system when it's using single
sign-on (SSO).

There are many actions that a Power BI content creator might take once they analyze
monitoring results. They could:

Tune and refine the queries that are sent to the data source so they're as efficient
as possible.
Validate and tune the native queries that are sent to the data source.
Reduce the number of columns that are imported into a data model.
Remove high precision and high cardinality columns that are imported into a data
model.
Reduce the amount of historical data that's imported into a data model.
Adjust the Power BI data refresh times to help spread out the demand for the data
source.
Use incremental data refresh to reduce the load on the data source.
Reduce the number of Power BI data refreshes by consolidating multiple datasets
into a shared dataset.
Adjust automatic page refresh settings to reduce the refresh frequency (that is,
lengthen the refresh interval), and therefore reduce the load on the data source.
Simplify calculations to reduce the complexity of queries sent to the data source.
Change the data storage mode (for example, to import mode instead of
DirectQuery) to reduce the consistent query load on the data source.
Use query reduction techniques to reduce the number of queries that are sent to
the data source.

System administrators might take other actions. They could:

Introduce an intermediary data layer, such as Power BI dataflows (when a data


warehouse isn't a viable option). Power BI content creators can use the dataflows
as their data source instead of connecting directly to data sources. An intermediary
data layer can reduce the load on a source system. It also has the added benefit of
centralizing data preparation logic. For more information, see the self-service data
preparation usage scenario.
Change the data source location to reduce the impact of network latency (for
example, use the same data region for the Power BI service, data sources, and
gateways).
Optimize the data source so it more efficiently retrieves data for Power BI. Several common techniques include creating table indexes, creating indexed views, creating persisted computed columns, maintaining statistics, using in-memory or columnstore tables, and creating materialized views.
Direct users to use a read-only replica of the data source, rather than an original
production database. A replica might be available as part of a high availability (HA)
database strategy. One advantage of a read-only replica is to reduce contention on
the source system.

The tools and techniques that you can use to monitor data sources depend on the
technology platform. For example, your database administrator can use extended events
or the Query Store for monitoring Azure SQL Database and SQL Server databases.
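To illustrate, here's a minimal PowerShell sketch of the kind of check a database administrator might run against the Query Store. It's only a sketch: it assumes the SqlServer PowerShell module is installed, that Query Store is enabled on the target database, and that the server and database names shown are placeholders.

```powershell
# A sketch only: assumes the SqlServer module and a database with Query Store enabled.
# <server-name> and <database-name> are placeholders.
$query = @"
SELECT TOP (10)
       qt.query_sql_text,
       SUM(rs.count_executions)      AS executions,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q          ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p           ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text
ORDER BY avg_duration_ms DESC;
"@

# List the ten slowest queries (by average duration) recorded by the Query Store.
Invoke-Sqlcmd -ServerInstance "<server-name>" -Database "<database-name>" -Query $query |
    Format-Table -AutoSize
```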

Sometimes, Power BI accesses a data source through a data gateway. Gateways handle
connectivity from the Power BI service to certain types of data sources. However, they
do more than just connect to data. A gateway includes a mashup engine that performs
processing and data transformations on the machine. It also compresses and encrypts
the data so that it can be efficiently and securely transmitted to the Power BI service.
Therefore, an unmanaged, or non-optimized, gateway can contribute to performance
bottlenecks. We recommend that you talk to your gateway administrator for help with
monitoring gateways.
 Tip

Your Power BI administrator can compile a full tenant inventory (which includes
lineage) and access user activities in the activity log. By correlating the lineage and
user activities, administrators can identify the most frequently used data sources
and gateways.

For more information about the tenant inventory and the activity log, see Tenant-
level auditing.

Checklist - When planning to monitor a data source, key decisions and actions include:

" Determine specific goals: When monitoring a data source, get clarity about exactly
what you need to accomplish and the goals for troubleshooting.
" Collaborate with database administrators: Work with your database or system
administrator(s) to get their help when monitoring a specific data source.
" Collaborate with gateway administrators: For data sources that connect through a
data gateway, collaborate with the gateway administrator when troubleshooting.
" Connect your support team and database administrators: Make sure that your
support team knows how to contact the correct administrators for each data source
(for example, when troubleshooting query folding).
" Update training and guidance: Include key information and tips for data creators
about how to work with organizational data sources. Include information about
what to do when something goes wrong.

Data refresh monitoring


A data refresh operation involves importing data from underlying data source(s) into a
Power BI dataset, dataflow, or datamart. You can schedule a data refresh operation or
run it on-demand.

Service-level agreement
IT commonly uses service-level agreements (SLAs) to document the expectations for
data assets. For Power BI, consider using an SLA for critical content or enterprise-level
content. It commonly includes when users can expect updated data in a dataset to be
available. For example, you could have an SLA that all data refreshes must complete by
7am every day.

Dataset logs
The dataset event logs from Azure Log Analytics or SQL Profiler (described previously in
this article) include detailed information about what's happening in a dataset. The
captured events include dataset refresh activity. The event logs are especially useful
when you need to troubleshoot and investigate dataset refreshes.

Premium capacity datasets


When you have content that's hosted in a Power BI Premium capacity, you have more
capabilities to monitor data refresh operations.

The Power BI refresh summaries page in the admin portal includes a summary of
the refresh history. This summary provides information about refresh duration and
error messages.
The Power BI Premium utilization and metrics app also includes helpful refresh
information. It's useful when you need to investigate refresh activity for a Power BI
Premium capacity (P SKU) or Power BI Embedded (A SKU) capacity.

Enhanced dataset refreshes


Content creators can initiate dataset refreshes programmatically by using enhanced
refresh with the Refresh Dataset in Group Power BI REST API. When you use enhanced
refresh, you can monitor the historical, current, and pending refresh operations.

Data refresh schedule monitoring


Power BI administrators can monitor data refresh schedules in the tenant to determine
whether there are many refresh operations scheduled concurrently during a specific
timeframe (for example, between 5am and 7am, which could be a particularly busy data
refresh time). Administrators have permission to access the dataset refresh schedule
metadata from the metadata scanning APIs, which are known as the scanner APIs.

Power BI REST APIs


For critical datasets, don't rely solely on email notifications for monitoring data refresh
issues. Consider compiling the data refresh history in a centralized store where you can
monitor, analyze, and act upon it.

You can retrieve data refresh history by using:

The Get Refresh History in Group REST API to retrieve refresh information for a
workspace.
The Get Refreshables for Capacity REST API to retrieve refresh information for a
capacity.
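
To illustrate the pattern, here's a minimal PowerShell sketch that calls the Get Refresh History in Group API and saves the results to a file. It's a sketch rather than a production solution: it assumes the Power BI Management (MicrosoftPowerBIMgmt) module is installed, and the workspace ID, dataset ID, and output path are placeholders.

```powershell
# A sketch only: assumes the MicrosoftPowerBIMgmt module. The IDs and path are placeholders.
Connect-PowerBIServiceAccount   # interactive sign-in

$workspaceId = "<workspace-id>"
$datasetId   = "<dataset-id>"

# Call the Get Refresh History in Group API for the last 30 refresh operations.
$response = Invoke-PowerBIRestMethod -Method Get `
    -Url "groups/$workspaceId/datasets/$datasetId/refreshes?`$top=30"

# Convert the JSON response and keep the columns that matter for monitoring.
($response | ConvertFrom-Json).value |
    Select-Object refreshType, status, startTime, endTime |
    Export-Csv -Path "C:\Audit\RefreshHistory.csv" -NoTypeInformation
```

Running a script like this on a schedule, and appending the results to a central store, is one simple way to build up the refresh history described above.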

 Tip

We strongly recommend that you monitor the refresh history of your datasets to
ensure that current data is available to reports and dashboards. It also helps you to
know whether SLAs are being met.

Checklist - When planning for data refresh monitoring, key decisions and actions
include:

" Determine specific goals: When monitoring data refreshes, get clarity about exactly
what you need to accomplish and what the scope of monitoring should be (for
example, production datasets, certified datasets, and others).
" Consider setting up an SLA: Determine whether an SLA would be useful to set
expectations for data availability and when data refresh schedules should run.
" Collaborate with database and gateway administrators: Work with your database
or system administrator(s), and gateway administrators, to monitor or troubleshoot
data refresh.
" Knowledge transfer for support team: Make sure that your support team knows
how to help content creators when data refresh issues arise.
" Update training and guidance: Include key information and tips for data creators
about how to refresh data from organizational data sources and common data
sources. Include best practices and organizational preferences for how to manage
data refresh.
" Use a support email address for notifications: For critical content, set up refresh
notifications to use a support email address.
" Set up centralized refresh monitoring: Use the Power BI REST APIs to compile data
refresh history.
Dataflow monitoring
You create a Power BI dataflow with Power Query Online. Many of the query performance features and Power Query diagnostics described earlier in this article also apply to dataflows.

Optionally, you can set workspaces to use Azure Data Lake Storage Gen2 for dataflow
storage (known as bring-your-own-storage) rather than internal storage. When you use
bring-your-own-storage, consider enabling telemetry so that you can monitor metrics
for the storage account. For more information, see the self-service data preparation
usage scenario, and the advanced data preparation usage scenario.

You can use the Power BI REST APIs to monitor dataflow transactions. For example, use
the Get Dataflow Transactions API to check the status of dataflow refreshes.
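
As a sketch of how you might check dataflow refresh status, the following PowerShell example calls the Get Dataflow Transactions API. It assumes the Power BI Management module is installed and that you have sufficient permissions on the workspace; the workspace and dataflow IDs are placeholders.

```powershell
# A sketch only: assumes the MicrosoftPowerBIMgmt module and permission to the workspace.
Connect-PowerBIServiceAccount

$workspaceId = "<workspace-id>"
$dataflowId  = "<dataflow-id>"

# Call the Get Dataflow Transactions API to review recent dataflow refresh runs.
$response = Invoke-PowerBIRestMethod -Method Get `
    -Url "groups/$workspaceId/dataflows/$dataflowId/transactions"

($response | ConvertFrom-Json).value |
    Select-Object id, refreshType, status, startTime, endTime
```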

You can track user activities for Power BI dataflows with the Power BI activity log. For
more information, see Tenant-level auditing.

 Tip

There are many best practices that you can adopt to optimize your dataflow
designs. For more information, see Dataflows best practices.

Datamart monitoring
A Power BI datamart includes several integrated components, including a dataflow, a
managed database, and a dataset. Refer to the previous sections of this article to learn
about auditing and monitoring of each component.

You can track user activities for Power BI datamarts by using the Power BI activity log.
For more information, see Tenant-level auditing.

Next steps
In the next article in this series, learn about tenant-level auditing.
Power BI implementation planning:
Tenant-level auditing
Article • 08/31/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This tenant-level audit planning article is primarily targeted at:

Power BI administrators: The administrators who are responsible for overseeing


Power BI in the organization. Power BI administrators may need to collaborate with
IT, security, internal audit, and other relevant teams.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They may need to collaborate with Power BI administrators
and other relevant teams.

) Important

We recommend that you closely follow the Power BI release plan to learn about
future enhancements of the auditing and monitoring capabilities.

The purpose of a tenant-level audit solution is to capture and analyze data for all users,
activities, and solutions deployed to a Power BI tenant. This tenant-level auditing data is
valuable for many purposes, allowing you to analyze adoption efforts, understand usage
patterns, educate users, support users, mitigate risk, improve compliance, manage
license costs, and monitor performance.

Creating an end-to-end auditing solution that's secure and production-ready is a


significant project that takes time. Think of it as building business intelligence on
business intelligence (BI on BI). For information about why the auditing data is so
valuable, see Auditing and monitoring overview.

For report-level auditing, which is targeted at report creators, see Report-level auditing.

For auditing data assets, which is targeted at data creators, see Data-level auditing.

The remainder of this article focuses on tenant-level auditing.


 Tip

The primary focus of this article is to help you plan to create an end-to-end
auditing solution. Because the content in this article is organized into four phases,
you'll need information across multiple phases. For example, you'll find information
in Phase 1 about planning to use a service principal, and information in Phase 2
about prerequisites and setup.

Therefore, we recommend that you read this entire article first so that you'll be
familiar with what's involved. Then you should be well-prepared to plan and build
your auditing solution in an iterative manner.

When you plan to build a tenant-level auditing solution, plan to spend time on the
following four phases.

Phase 1: Planning and decisions
    Requirements and priorities
    Data needs
    Type of auditing solution
    Permissions and responsibilities
    Technical decisions
    Data source decisions
Phase 2: Prerequisites and setup
    Create storage account
    Install software and set up services
    Register an Azure AD application
    Set Power BI tenant settings
Phase 3: Solution development and analytics
    Extract and store the raw data
    Create the curated data
    Create a data model
    Enhance the data model
    Create analytical reports
    Take action based on the data
Phase 4: Maintain, enhance, and monitor
    Operationalize and improve
    Documentation and support
    Enable alerting
    Ongoing management
Phase 1: Planning and decisions
The first phase focuses on planning and decision-making. There are many requirements
and priorities to consider, so we recommend that you spend sufficient time to
understand the topics introduced in this section. The goal is to make good upfront
decisions so that the downstream phases run smoothly.

Requirements and priorities


In the initial phase, the goal is to identify what's most important. Focus on what matters
most, and how you're going to measure impact in your organization. Strive to define
business-oriented requirements rather than technology-oriented requirements.

Here are some questions you should answer.

What key questions do you need to answer? There's a large volume of auditing
data you can explore. The most effective way to approach auditing is to focus on
answering specific questions.
What are your adoption and data culture goals? For example, perhaps you have a
goal to increase the number of self-service BI content creators in the organization.
In that case, you'll need to measure user activities related to creating, editing, and
publishing content.
What immediate risks are present? For example, you might know oversharing of
content has occurred in the past. User training has since been enhanced, and you
now want to audit security settings and activities on an ongoing basis.
Are there current key performance indicators (KPIs) or organizational goals? For
example, perhaps you have an organizational KPI that relates to digital
transformation or becoming a more data-driven organization. Tenant-level
auditing data can help you measure whether your Power BI implementation is
aligned with these goals.

 Tip
Verify auditing requirements and set priorities with your executive sponsor and
Center of Excellence. It's tempting to extract auditing data and start creating
reports based on a lot of interesting data. However, it's unlikely that you'll derive
high value from your auditing solution when it isn't driven by questions you need
to answer and actions you intend to take.

For more ideas about ways that you can use auditing data, see Auditing and monitoring
overview.

Checklist - When identifying requirements and priorities, key decisions and actions
include:

" Identify requirements: Collect and document the key business requirements for
auditing your Power BI tenant.
" Prioritize requirements: Set priorities for the requirements, classifying them as
immediate, short-term, medium-term, and long-term (backlog).
" Make a plan for the immediate priorities: Put a plan in place to begin collecting
data so that it's available when you need it.
" Reassess requirements regularly: Create a plan to reassess requirements on a
regular basis (for example, twice per year). Verify whether the auditing and
monitoring solution meets the stated requirements. Update action items on your
plan as necessary.

Data needs
Once you've defined the requirements and priorities (described previously), you're ready
to identify the data that you'll need. Four categories of data are covered in this section.

User activity data


Tenant inventory
Users and groups data
Security data

Most of the data that you'll need comes from the admin APIs, which provide a wealth of
metadata about the Power BI tenant. For more information, see Choose a user API or
admin API later in this article.
User activity data
Make it your first priority to obtain data about user activities. The user activities, which
are also called events or operations, are useful for a wide variety of purposes.

Most often, this data is referred to as the activity log or the event log. Technically, there
are several ways to acquire user activity data (the activity log being one method). For
more information about the decisions and activities involved, see Access user activity
data later in this article.

Here are some common questions that user activity data can answer.

Find top users and top content


What content is viewed most often and by which users?
What are the daily, weekly, and monthly trends for viewing content?
Are report views trending up or down?
How many users are active?
Understand user behavior
What type of security is used most often (apps, workspaces, or per-item
sharing)?
Are users using browsers or mobile apps most often?
Which content creators are most actively publishing and updating content?
What content is published or updated, when, and by which users?
Identify user education and training opportunities
Who is doing (too much) sharing from their personal workspace?
Who is doing a significant amount of exporting?
Who is regularly downloading content?
Who is publishing many new datasets?
Who is using subscriptions heavily?
Improve governance and compliance efforts
When are tenant settings changed, and by which Power BI administrator?
Who started a Power BI trial?
What content is accessed by external users, who, when, and how?
Who added or updated a sensitivity label?
What percentage of report views are based on certified datasets?
What percentage of datasets support more than one report?
When is a gateway or data source created or updated, and by which user?

For more information about ways to use this data, see Understand usage patterns.

) Important
If you're not already extracting and storing user activities data, make that an urgent
priority. At a minimum, ensure that you extract the raw data and store it in a secure
location. That way, the data will be available when you're ready to analyze it.
History is available for 30 days or 90 days, depending on the source you use.

For more information, see Access user activity data later in this article.
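
For example, the following PowerShell sketch retrieves one full day of activity events and stores the raw JSON. It assumes the Power BI Management module is installed, that you have Power BI administrator permissions, and that the output folder is a placeholder you'd replace with your own secure location.

```powershell
# A sketch only: assumes the MicrosoftPowerBIMgmt module and Power BI administrator
# permissions. The output folder is a placeholder.
Connect-PowerBIServiceAccount

# Both datetimes must fall within the same UTC day for this cmdlet.
$day = (Get-Date).AddDays(-1).ToString("yyyy-MM-dd")

$activities = Get-PowerBIActivityEvent `
    -StartDateTime "$($day)T00:00:00" `
    -EndDateTime   "$($day)T23:59:59" `
    -ResultType JsonString

# Store the raw JSON so that history accumulates beyond the retention window.
$activities | Out-File -FilePath "C:\Audit\Raw\ActivityLog-$day.json"
```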

Tenant inventory
Often, the second priority is to retrieve the data to create a tenant inventory. Sometimes
it's referred to as workspace inventory, workspace metadata, or tenant metadata.

A tenant inventory is a snapshot at a given point in time. It describes what's published in


the tenant. The tenant inventory doesn't include the actual data or the actual reports.
Rather, it's metadata that describes all workspaces and their items (such as datasets and
reports).

Here are some common questions that the tenant inventory can answer.

Understand how much content you have and where


Which workspaces have the most content?
What type of content is published in each workspace (particularly when you're
separating reporting workspaces and data workspaces)?
What dependencies exist between items (such as dataflows that support many
datasets, or a dataset that's a source for other composite models)?
What is the data lineage (dependencies between Power BI items, including
external data sources)?
Are there orphaned reports (which are disconnected from their dataset)?
Understand the ratio of datasets to reports
How much dataset reuse is occurring?
Are there duplicate, or highly similar, datasets?
Are there opportunities to consolidate datasets?
Understand trends between points in time
Is the number of reports increasing over time?
Is the number of datasets increasing over time?
Find unused content
Which reports are unused (or under-utilized)?
Which datasets are unused (or under-utilized)?
Are there opportunities to retire content?

A tenant inventory helps you to use current names when analyzing historical activities.
The snapshot of the items in your tenant contains the names of all items at that point in
time. It's helpful to use current item names for historical reporting. For example, if a
report was renamed three months ago, the activity log at that time records the old
report name. With a properly defined data model, you can use the latest tenant
inventory to locate an item by its current name (rather than its former name). For more
information, see Create a data model later in this article.

For more information about ways to use a tenant inventory, see Understand published
content.

 Tip

You'll often combine the user activity events (described in the previous section) with the tenant inventory to produce a complete picture. That way, you enhance the value of all of the data.

Because a tenant inventory is a snapshot at a given point in time, you'll need to decide
how often to extract and store the metadata. A weekly snapshot is useful for making
comparisons between the two points in time. For example, if you want to investigate
security settings for a workspace, you'll need the previous tenant inventory snapshot,
the activity log events, and the current tenant inventory snapshot.

There are two main ways to build a tenant inventory. For more information about the
decisions and activities involved, see Access tenant inventory data later in this article.
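
As a simple illustration, the following PowerShell sketch snapshots all workspaces and their items by using an admin cmdlet from the Power BI Management module. It's only a sketch: it assumes Power BI administrator permissions, and the output path is a placeholder.

```powershell
# A sketch only: assumes the MicrosoftPowerBIMgmt module and Power BI administrator
# permissions. The output path is a placeholder.
Connect-PowerBIServiceAccount

$snapshotDate = Get-Date -Format "yyyy-MM-dd"

# Snapshot all workspaces in the tenant, including the items they contain.
Get-PowerBIWorkspace -Scope Organization -Include All -All |
    ConvertTo-Json -Depth 10 |
    Out-File -FilePath "C:\Audit\Raw\TenantInventory-$snapshotDate.json"
```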

Users and groups data


As your analytical needs grow, you'll likely determine that you'd like to include data
about users and groups in your end-to-end auditing solution. To retrieve that data, you
can use Microsoft Graph, which is the authoritative source for information about Azure
Active Directory (Azure AD) users and groups.

Data that's retrieved from the Power BI REST APIs includes an email address to describe
the user, or the name of a security group. This data is a snapshot at a given point in
time.

Here are some common questions that Microsoft Graph can answer.

Identify users and groups


What's the full username (in addition to the email address), department, or
location sourced from their user profile?
Who are the members of a security group?
Who's the owner of a security group (to manage members)?
Obtain additional user information
Which licenses—Power BI Pro or Power BI Premium Per User (PPU)—are
assigned to users?
Which users sign in most frequently?
Which users haven't signed in to the Power BI service recently?

2 Warning

Some older methods (which are extensively documented online) for accessing users
and groups data are deprecated and shouldn't be used. Whenever possible, use
Microsoft Graph as the authoritative source of users and groups data.

For more information and recommendations about how to access data from Microsoft
Graph, see Access user and groups data later in this article.
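
For example, the following PowerShell sketch retrieves selected user profile attributes with the Microsoft Graph PowerShell module. It's a sketch: it assumes the Microsoft.Graph module is installed, that you can consent to the User.Read.All scope, and that the output path is a placeholder.

```powershell
# A sketch only: assumes the Microsoft.Graph module and consent to the User.Read.All scope.
# The output path is a placeholder.
Connect-MgGraph -Scopes "User.Read.All"

# Retrieve selected profile attributes to enrich the auditing data model.
Get-MgUser -All -Property Id, DisplayName, Mail, Department, OfficeLocation |
    Select-Object Id, DisplayName, Mail, Department, OfficeLocation |
    Export-Csv -Path "C:\Audit\Raw\Users.csv" -NoTypeInformation
```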

Security data
You may have a requirement to perform regular security audits.

Here are some common questions that the Power BI REST APIs can answer.

Identify people and applications


Which reports does a user, group, or service principal have access to?
Which users, groups, or service principals are subscribers to receive reports with
an email subscription?
Understand content permissions
Which workspace roles are assigned to which users and groups?
Which users and groups are assigned to each Power BI app audience?
Which per-item permissions are assigned, for which reports, and for which
users?
Which per-item permissions are assigned, for which datasets, and for which
users?
Which datasets and datamarts have row-level security (RLS) defined?
Which items are shared to people in the entire organization?
Which items are published publicly on the internet?
Understand other permissions
Who has permission to publish by using a deployment pipeline?
Who has permission to manage gateways and data connections?
Who has permission to manage a Premium capacity?

 Tip
For more considerations about security, see the security planning articles.

These common questions aren't an exhaustive list; there are a wide variety of Power BI
REST APIs available. For more information, see Using the Power BI REST APIs.

For more information about using the admin APIs versus user APIs (including how it
affects permissions that are required for the user who's extracting the data), see Choose
a user API or admin API later in this article.
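
As an example of answering one of the security questions above, the following PowerShell sketch lists workspace role assignments by calling the Groups As Admin API with the users expanded. It assumes the Power BI Management module and Power BI administrator permissions, and it retrieves only the first 100 workspaces for brevity.

```powershell
# A sketch only: assumes the MicrosoftPowerBIMgmt module and Power BI administrator
# permissions. Only the first 100 workspaces are retrieved here.
Connect-PowerBIServiceAccount

$response = Invoke-PowerBIRestMethod -Method Get `
    -Url "admin/groups?`$top=100&`$expand=users"

$workspaces = ($response | ConvertFrom-Json).value

# Flatten the workspace role assignments into one row per workspace and principal.
foreach ($workspace in $workspaces) {
    foreach ($user in $workspace.users) {
        [PSCustomObject]@{
            Workspace = $workspace.name
            Principal = $user.identifier
            Role      = $user.groupUserAccessRight
        }
    }
}
```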

Checklist - When identifying the data that's needed for auditing, key decisions and
actions include:

" Identify specific data needs for user activity data: Validate the requirements you've
collected. Identify which auditing data is necessary to meet each requirement for
user activity data.
" Identify specific data needs for tenant inventory data: Validate the requirements
you've collected. Identify which auditing data is necessary to compile a tenant
inventory.
" Identify specific data needs for users and groups data: Validate the requirements
you've collected. Identify which auditing data is necessary to meet each
requirement for users and groups data.
" Identify specific data needs for security data: Validate the requirements you've
collected. Identify which auditing data is necessary to meet each requirement for
security data.
" Verify priorities: Verify the order of priorities so you know what to develop first.
" Decide how often to capture activities: Decide how frequently to capture activity
events (such as once per day).
" Decide how often to capture snapshots: Decide what interval to capture snapshot
data, such as a tenant inventory or the users and groups data.

Type of auditing solution


Tenant-level auditing is either done manually or with automated processes.

Most new auditing processes start off as a manual process, particularly during
development and while testing occurs. A manual auditing process may evolve to
become an automated process. However, not every auditing process needs to be fully
automated.
Manual auditing processes
Manual auditing relies on scripts and processes that are run on-demand by a user
(usually a Power BI administrator). When needed, the user runs queries interactively to
retrieve auditing data.

Manual auditing is best suited to:

New scripts that are being developed and tested.


Occasional, one-off auditing needs.
Exploratory auditing needs.
Nonessential auditing processes that don't require full support.

Automated auditing processes

Auditing data that's needed on a recurring basis should be automated. It's extracted and
processed on a regular schedule, and it's known as an automated process. Sometimes it's
referred to as an unattended process.

The goals of an automated process are to reduce manual effort, reduce risk of error,
increase consistency, and eliminate the dependency on an individual user to run it.

Automated auditing is best suited to:

Auditing processes that run in production.


Unattended processes that run on a regular schedule.
Scripts that have been fully tested.
Essential auditing processes that have other reports (or other processes) that
depend on it as a data source.
Auditing processes that are documented and supported.

The type of process—manual or automated—might impact how you handle


authentication. For example, a Power BI administrator might use user authentication for
a manual auditing process but use a service principal for an automated process. For
more information, see Determine the authentication method later in this article.

 Tip

If there's a regular, recurring, need to obtain auditing data that's currently handled
manually, consider investing time to automate it. For example, if a Power BI
administrator manually runs a script every day to retrieve data from the Power BI
activity log, there's a risk of missing data should that person be ill or away on
vacation.

Checklist - When considering the type of auditing solution to develop, key decisions
and actions include:

" Determine the primary purpose for each new auditing requirement: Decide
whether to use a manual or automated process for each new requirement. Consider
whether that decision is temporary or permanent.
" Create a plan for how to automate: When it's appropriate for a particular auditing
requirement, create a plan for how to automate it, when, and by whom.

Permissions and responsibilities


At this point, you should have a clear idea of what you want to accomplish and the data
you'll initially need. Now's a good time to make decisions about user permissions as well
as roles and responsibilities.

User permissions
As you plan to build an end-to-end auditing solution, consider user permissions for
other users who will need to access the data. Specifically, decide who will be permitted
to access and view auditing data.

Here are some considerations to take into account.

Direct access to auditing data

You should decide who can access the auditing data directly. There are multiple ways to
think about it.

Who should be a Power BI administrator? A Power BI administrator has access to


all tenant metadata, including the admin APIs. For more information about
deciding who should be a Power BI administrator, see Tenant-level security
planning.
Who can use an existing service principal? To use a service principal (instead of user permissions) to access auditing data, a secret must be provided to authorized users so they can perform password-based authentication. Access to secrets (and keys) should be very tightly controlled.
Does access need to be tightly controlled? Certain industries with regulatory and
compliance requirements must control access more tightly than other industries.
Are there large activity data volumes? To avoid reaching API throttling limits, you
may need to tightly control who's allowed to directly access the APIs that produce
auditing data. In this case, it's helpful to ensure that there aren't duplicate or
overlapping efforts.

Access to extracted raw data

You should decide who can view the raw data that's extracted and written to a storage
location. Most commonly, access to raw data is restricted to administrators and auditors.
The Center of Excellence (COE) might require access as well. For more information, see
Determine where to store audit data later in this article.

Access to curated analytical data

You should decide who can view the curated and transformed data that's ready for
analytics. This data should always be made available to administrators and auditors.
Sometimes a data model is made available to other users in the organization,
particularly when you create a Power BI dataset with row-level security. For more
information, see Plan to create curated data later in this article.

Roles and responsibilities

Once you've decided how user permissions work, you're in a good position to start
thinking about roles and responsibilities. We recommend that you involve the right
people early on, especially when multiple developers or teams will be involved in
building the end-to-end auditing solution.

Decide which users or team will be responsible for the following activities.

Communicate to stakeholders: Communication activities and requirements gathering. Typically performed by the COE in partnership with IT; may also include select people from key business units.

Decide priorities, and provide direction on requirements: Decision-making activities related to the end-to-end auditing solution, including how to meet requirements. Typically performed by the team that oversees Power BI in the organization, such as the COE. The executive sponsor or a data governance steering committee may become involved to make critical decisions or escalate issues.

Extract and store the raw data: Creation of scripts and processes to access and extract the data. Also involves writing the raw data to storage. Typically performed by data engineering staff, usually in IT and in partnership with the COE.

Create the curated data: Data cleansing, transformation, and the creation of the curated data in a star schema design. Typically performed by data engineering staff, usually in IT and in partnership with the COE.

Create a data model: Creation of an analytical data model, based on the curated data. Typically performed by Power BI content creator(s), usually in IT or the COE.

Create analytical reports: Creation of reports to analyze the curated data, with ongoing changes to reports based on new requirements and when new auditing data becomes available. Typically performed by Power BI report creator(s), usually in IT or the COE.

Analyze the data and act on the results: Analyze the data and act in response to the audit data. Typically performed by the team that oversees Power BI in the organization, usually the COE. Might also include other teams depending on the audit results and the action, such as security, compliance, legal, risk management, or change management.

Test and validate: Test to ensure that auditing requirements are met and that the data presented is accurate. Typically performed by Power BI content creator(s), in partnership with the team that oversees Power BI in the organization.

Secure the data: Set and manage security for each storage layer, including the raw data and the curated data. Manage access to datasets that are created for analyzing the data. Typically performed by a system administrator for the system that stores the data, in partnership with the team that oversees Power BI in the organization.

Scheduling and automation: Operationalize the solution and schedule the process with the tool of choice. Typically performed by data engineering staff, usually in IT and in partnership with the COE.

Support the solution: Monitor the audit solution, including job runs, errors, and support for technical issues. Typically performed by the team that handles Power BI system support, which is usually IT or the COE.
Many of the above roles and responsibilities can be consolidated if they're going to be
performed by the same team or the same person. For example, your Power BI
administrators might perform some of these roles.

It's also possible that different people perform different roles, depending on the
circumstance. Actions will be contextual depending on the audit data and the proposed
action.

Consider several examples.

Example 1: The audit data shows that some users frequently export data to Excel.
Action taken: The COE contacts the specific users to understand their needs and to
teach them better alternatives.
Example 2: The audit data shows external user access occurs in an unexpected
way. Actions taken: An IT system administrator updates the relevant Azure AD
settings for external user access. The Power BI administrator updates the tenant
setting related to external user access. The COE updates documentation and
training for content creators who manage security. For more information, see
Strategy for external users.
Example 3: The audit data shows that several data models define the same
measure differently (available from the metadata scanning APIs, if allowed by the
detailed metadata tenant settings). Action taken: The data governance committee
starts a project to define common definitions. The COE updates documentation
and training for content creators who create data models. The COE also works with
content creators to update their models, as appropriate. For more information
about retrieving detailed metadata, see Access tenant inventory data later in this
article.

7 Note

The setup of data extraction processes is usually a one-time effort with occasional
enhancements and troubleshooting. Analyzing and acting on the data is an
ongoing effort that requires continual effort on a recurring basis.

Checklist - When considering permissions and responsibilities, key decisions and actions
include:
" Identify which teams are involved: Determine which teams will be involved with
the end-to-end creation and support of the auditing solution.
" Decide user permissions: Determine how user permissions will be set up for
accessing audit data. Create documentation of key decisions for later reference.
" Decide roles and responsibilities: Ensure that expectations are clear for who does
what, particularly when multiple teams are involved. Create documentation for roles
and responsibilities, which includes expected actions.
" Assign roles and responsibilities: Assign specific roles and responsibilities to
specific people or teams. Update individual job descriptions with Human Resources,
when appropriate.
" Create a training plan: Establish a plan for training personnel when they require
new skills.
" Create a cross-training plan: Ensure that cross-training and backups are in place for
key roles.

Technical decisions
When you plan for a tenant-level auditing solution, you'll need to make some important
technical decisions. To improve consistency, avoid rework, and improve security, we
recommend that you make these decisions as early as possible.

The first planning phase involves making the following decisions.

Select a technology to access audit data


Determine the authentication method
Choose a user API or admin API
Choose APIs or PowerShell cmdlets
Determine where to store audit data
Plan to create curated data

Select a technology to access audit data

The first thing you need to decide is how to access the audit data.

The easiest way to get started is to use the pre-built reports available in the admin
monitoring workspace.

When you need to access the data directly and build your own solution, you'll use an
API (application program interface). APIs are designed to exchange data over the
internet. The Power BI REST APIs support requests for data by using the HTTP protocol.
Any language or tool can invoke Power BI REST APIs when it's capable of sending an
HTTP request and receiving a JSON response.
 Tip

The PowerShell cmdlets use the Power BI REST APIs to access the audit data. For
more information, see Choose APIs or PowerShell cmdlets later in this article.

This section focuses on your technology choice. For considerations about the source for
accessing specific types of audit data, see Data source decisions later in this article.

Platform options

There are many different ways to access audit data. Depending on the technology you
choose, you might lean toward a preferred language. You might also need to make a
specific decision on how to schedule the running of your scripts. Technologies differ
widely in their learning curve and ease of getting started.

Here are some technologies you can use to retrieve data by using the Power BI REST
APIs.

Some technologies are well suited to manual auditing processes, while others are also a good choice for automated auditing processes.

Admin monitoring workspace
Try-it
PowerShell
Power BI Desktop
Power Automate
Preferred Azure service
Preferred tool/language
Microsoft Purview audit log search
Defender for Cloud Apps
Microsoft Sentinel
Postman

The remainder of this section provides a brief introduction to each of these options, including whether it's a good choice for manual or automated auditing processes.
Admin monitoring workspace

The admin monitoring workspace contains pre-defined reports and datasets in the
Power BI service. It's a convenient way for Fabric administrators and global
administrators to view recent audit data and help them understand user activities.

Try-it in API documentation

Try-it is an interactive tool that allows you to experience the Power BI REST API
directly in Microsoft documentation. It's an easy way to explore the APIs because it
doesn't require other tools or any setup on your machine.

You can use Try-it to:

Manually send a request to an API to determine whether it returns the information


that you want.
Learn how the URL is constructed before you write a script.
Check data in an informal way.

7 Note

You must be a licensed and authenticated Power BI user to use Try-it. It follows the
standard permissions model, meaning that the user APIs rely on user permissions,
and the admin APIs require Power BI administrator permissions. You can't
authenticate with Try-it by using a service principal.

Because it's interactive, Try-it is best suited to learning, exploration, and new manual
auditing processes.

PowerShell and Azure Cloud Shell

You can create and run PowerShell scripts in multiple ways.

Here are several common options.

Visual Studio Code: A modern, lightweight source code editor. It's a freely
available open-source tool that's supported on multiple platforms, including
Windows, macOS, and Linux. You can use Visual Studio Code with many languages,
including PowerShell (by using the PowerShell extension).
Azure Data Studio: A tool for creating scripts and notebooks. It's built on top of
Visual Studio Code. Azure Data Studio is available independently, or with SQL
Server Management Studio (SSMS). There are many extensions, including an
extension for PowerShell.
Azure Cloud Shell: An alternative to working with PowerShell locally. You can
access Azure Cloud Shell from a browser.
Azure Functions: An alternative to working with PowerShell locally. Azure
Functions is an Azure service that lets you write and run code in a serverless
environment. PowerShell is one of several languages that it supports.

) Important

We recommend that you use the latest version of PowerShell (PowerShell Core) for
all new development work. You should discontinue using Windows PowerShell
(which can't run PowerShell Core) and instead use one of the modern code editors
that support PowerShell Core. Take care when referring to many online examples
that use Windows PowerShell (version 5.1) because they might not use the latest
(and better) code approaches.

You can use PowerShell in several different ways. For more information about this
technical decision, see Choose APIs or PowerShell cmdlets later in this article.

There are many online resources available for using PowerShell, and it's often possible
to find experienced developers who can create and manage PowerShell solutions.
PowerShell is a good choice for creating both manual and automated auditing
processes.
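
For example, here's a minimal interactive sketch of a manual auditing process that relies only on the signed-in user's own permissions (a user API rather than an admin API). It assumes the Power BI Management module is installed, and the export path is a placeholder.

```powershell
# A sketch only: a manual, interactive process that uses the signed-in user's own
# permissions. The export path is a placeholder.
Connect-PowerBIServiceAccount   # prompts for interactive sign-in

# List the workspaces that the signed-in user can access.
Get-PowerBIWorkspace |
    Select-Object Id, Name, Type, IsOnDedicatedCapacity |
    Export-Csv -Path "C:\Audit\MyWorkspaces.csv" -NoTypeInformation
```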

Power BI Desktop

Because Power BI Desktop can connect to APIs, you might be tempted to use it to build
your auditing solution. However, there are some significant drawbacks and complexities.

Accumulating history: Because the Power BI activity log provides up to 30 days of


data, creating your entire auditing solution involves accumulating a set of files over
time that store all historical events. Accumulating historical files is simpler to
accomplish with other tools.
Handling credentials and keys securely: Many solutions you find online from
community bloggers aren't secure because they hard-code credentials and keys in
plaintext within Power Query queries. While that approach is easy, it's not
recommended because anyone who obtains your Power BI Desktop file can read
the values. You can't store the password (when using user authentication) or the
secret (when using a service principal) securely in Power BI Desktop unless you:
Use a custom connector that was developed with the Power Query SDK. For
example, a custom connector could read confidential values from Azure Key
Vault or another service that securely stores the encrypted value. There are also
various custom connector options available from the global Power BI
community.
Use the ApiKeyName option, which works well in Power BI Desktop. However, it
isn't possible to store the key value in the Power BI service.
Types of requests: You might run into some limitations for what you can run in Power BI Desktop. Power Query doesn't support every type of API request; for example, only GET and POST requests are supported when using the Web.Contents function. For auditing, you typically send GET requests.
Ability to refresh: You need to follow specific Power Query development patterns
to successfully refresh a dataset in the Power BI service. For example, the
RelativePath option must be present when using Web.Contents so that Power BI

can properly verify the location of a data source without generating an error in the
Power BI service.

In most cases, we recommend that you use Power BI Desktop only for two purposes.
You should use it to:

Build your data model based on the existing curated data (rather than using it to
initially extract the auditing data).
Create analytical reports based on your data model.

Power Automate

You can use Power Automate with Power BI in many ways. In relation to auditing, you
can use Power Automate to send requests to the Power BI REST APIs and then store the
extracted data in the location of your choice.

 Tip

To send requests to the Power BI REST APIs, you can use a custom connector for
Power Automate to securely authenticate and extract the audit data. Alternatively,
you can use Azure Key Vault to reference a password or secret. Both options are
better than storing passwords and secrets in plaintext within the Power Automate
flow.

For other ideas on ways to use Power Automate, search for Power BI in the Power
Automate templates.
Preferred Azure service

There are several Azure services that can send requests to the Power BI REST APIs. You
can use your preferred Azure service, such as:

Azure Functions
Azure Automation
Azure Data Factory
Azure Synapse Analytics

In most cases, you can combine a compute service that handles the logic for the data
extraction with a storage service that stores the raw data (and curated data, when
appropriate). Considerations for making technical architecture decisions are described
later in this article.

Preferred tool and/or language

You can use your preferred tool and your preferred language to send requests to the
Power BI REST APIs. It's a good approach when your team has expertise with a specific
tool or language, such as:

Python
C# on the .NET framework. Optionally, you can use the Power BI .NET SDK, which
acts as a wrapper on top of the Power BI REST APIs in order to simplify some
processes and increase productivity.
JavaScript

Microsoft Purview audit search

The Microsoft Purview compliance portal (formerly the Microsoft 365 compliance center)
includes a user interface that allows you to view and search the audit logs. The unified
audit logs include Power BI, and all other services that support Microsoft 365 unified
audit logs.

The data captured in the unified audit log is the exact same data that's contained in the
Power BI activity log. When you do an audit log search in the Microsoft Purview
compliance portal, it adds your request to the queue. It may take a few minutes (or
longer, depending on the volume of data) for the data to be ready.

Here are some common ways to use the audit log search tool. You can:

Select the Power BI workload to view events for a series of dates.


Look for one or more specific activities, such as:
Deleted Power BI report
Added admin access to a personal workspace in Power BI
View activities of one or more users.

For administrators with sufficient permissions, the audit log search tool in the Microsoft
Purview compliance portal is a good option for manual auditing processes. For more
information, see Microsoft Purview compliance portal later in this article.

Defender for Cloud Apps user interface

Defender for Cloud Apps is available to administrators in Microsoft 365 Defender


(Microsoft 365 portal). It includes a user interface to view and search activity logs for
various cloud services, including Power BI. It displays the same log events that originate
in the Microsoft Purview compliance portal, in addition to other events like user sign-in
activity from Azure AD.

The audit log interface in Defender for Cloud Apps is a good option for manual auditing
processes. For more information, see Defender for Cloud Apps later in this article.

Microsoft Sentinel and KQL

Microsoft Sentinel is an Azure service that allows you to collect, detect, investigate, and respond to security incidents and threats. Power BI can be set up in Microsoft Sentinel as a data connector so that audit logs are streamed from Power BI into Azure Log Analytics (which is a component of the Azure Monitor service). You can use the Kusto Query Language (KQL) to analyze the activity log events that are stored in Azure Log Analytics.

Using KQL to search the data in Azure Monitor is a good option for viewing part of the
audit log. It's a good option for manual auditing processes, too. For more information,
see Microsoft Sentinel later in this article.

Postman

Postman is a platform for building and using APIs. Many Power BI community members like to use Postman for learning purposes and for automated scripts. Developers often prefer a tool like Postman because it supports advanced API development and testing.

Platform considerations

Here are some considerations for how you might access audit data.
Are you implementing a manual or automated auditing process? Certain tools
align strongly with manual processing or automated processing. For example, a
user always runs the Try-it functionality interactively, so it's suited only to manual
auditing processes.
How will you authenticate? Not all options support every authentication option.
For example, the Try-it functionality only supports user authentication.
How will you store credentials securely? Different technologies have different
options for storing credentials. For more information, see Determine the
authentication method later in this article.
Which technology is your team already proficient with? If there's a tool that your
team prefers and is comfortable using, start there.
What is your team capable of supporting? If a different team will support the
deployed solution, consider whether that team is able to adequately support it.
Also consider that your internal teams might support development that's done by
consultants.
What tool do you have approval to use? Certain tools and technologies might
require approval. It could be due to the cost. It could also be due to IT security
policies.
What's your preference for scheduling? Some technologies make the decision for
you. Other times it will be another decision you must make. For example, if you
choose to write PowerShell scripts, there are various ways you can schedule
running them. Conversely, if you use a cloud service such as Azure Data Factory,
scheduling is built in.

We recommend that you review the remainder of this article before choosing a
technology to access audit data.

Checklist - When deciding which technology to use to access audit data, key decisions
and actions include:

" Discuss with your IT staff: Talk to your IT teams to find out whether there are
certain tools that are approved or preferred.
" Explore options with a small proof of concept (POC): Do some experimentation
with a technical POC. Focus initially on learning. Then focus on your decision for
what to use going forward.

Determine the authentication method


How you plan to set up authentication is a critical decision. It's often one of the most
difficult aspects when you get started with auditing and monitoring. You should carefully
consider available options to make an informed decision.

) Important

Consult with your IT and security teams on this matter. Take the time to learn the
basics of how security in Azure AD works.

Not every API on the internet requires authentication. However, all the Power BI REST
APIs require authentication. When you access tenant-level auditing data, the
authentication process is managed by the Microsoft identity platform. It's an evolution
of the Azure AD identity service that's built on industry standard protocols.

 Tip

The Microsoft identity platform glossary of terms is a resource that helps you
become familiar with the basic concepts.

Before you retrieve audit data, you must first authenticate, which is known as signing in.
The concepts of authentication and authorization are separate, yet related. An
authentication process verifies the identity of who is making the request. An
authorization process grants (or denies) access to a system or resource by verifying what
the user or service principal has permission to do.

When a user or service principal authenticates, an Azure AD access token is issued for a resource server, such as a Power BI REST API or a Microsoft Graph API. By default, an access token expires after one hour. A refresh token can be requested, if needed.

There are two types of access tokens:

User tokens: Issued on behalf of a user with a known identity.


App-only tokens: Issued on behalf of an Azure AD application (service principal).

For more information, see Application and service principal objects in Azure Active
Directory.

7 Note

An Azure AD application is an identity configuration that allows an automated


process to integrate with Azure AD. In this article, it's referred to as an app
registration. The term service principal is also used commonly in this article. These
terms are described in more detail later in this section.

 Tip

The easiest way to authenticate is to use the Connect-PowerBIServiceAccount cmdlet from the Power BI Management module. You may see other authentication-related cmdlets in articles and blog posts online. For example, there are the Login-PowerBI and Login-PowerBIServiceAccount cmdlets, which are aliases for the Connect-PowerBIServiceAccount cmdlet. There's also the Disconnect-PowerBIServiceAccount cmdlet that you can use to explicitly sign out at the end of a process.

There are two authentication options.

User authentication: Sign in as a user by using the user principal name (email address) and a password. User authentication is intended for occasional, interactive use, which makes it a good choice for manual auditing processes.

Service principal authentication: Sign in as a service principal by using a secret (key) assigned to an app registration. Service principal authentication is intended for ongoing, scheduled, unattended operations, which makes it a good choice for automated auditing processes.

Each authentication option is described in more detail in the following sections.

User authentication

User authentication is useful in the following situations.

Manual auditing processes: You want to run a script by using your user
permissions. Permissions could be at one of two levels, depending on the type of
API request:
Administrator permissions for admin APIs: You need to use your Power BI
administrator permissions to send a request to an admin API.
User permissions for user APIs: You need to use your Power BI user permissions
to send a request to a user API. For more information, see Choose a user API or
admin API.
Interactive sign in: You want to sign in interactively, which requires you to input
your email address and password. For example, you must sign in interactively to
use the Try-it experience, which is found in Microsoft API documentation.
Track who accessed tenant metadata: When individual users and administrators
send API requests, you might want to audit who those individuals are. When you
authenticate with a service principal (described next), the user ID captured by the
activity log is the Application ID. If multiple administrators authenticate with the
same service principal, it might be difficult to know which administrator accessed
the data.
Shared administrator account: Some IT teams use a shared administrator account.
Depending on how it's implemented and controlled, it may not be a best practice.
Instead of a shared account, you should consider using Azure AD Privileged
Identity Management (PIM), which can grant occasional and temporary Power BI
administrator rights for a user.
APIs that only support user authentication: Occasionally, you might need to use
an API that doesn't support authentication by a service principal. In
documentation, each API notes whether a service principal can call it.

) Important

Whenever possible, we recommend that you use service principal authentication


for automated processes.

Service principal authentication

Most organizations have one Azure AD tenant, so the terms app registration and service
principal tend to be used interchangeably. When you register an Azure AD app, there
are two components.

App registration: An app registration is globally unique. It's the global definition of
a registered Azure AD app that can be used by multiple tenants. An app
registration is also known as a client application or the actor. Typically, the term
client application implies a user machine. However, that's not the situation for app
registrations. In the Azure portal, Azure AD applications are found on the App
registrations page in Azure AD.
Service principal: A service principal is the local representation of the app
registration for use in your specific tenant. The service principal is derived from the
registered Azure AD app. For organizations with multiple Azure AD tenants, the
same app registration can be referenced by more than one service principal. In the
Azure portal, service principals can be found on the Enterprise applications page in
Azure AD.

Because the service principal is the local representation of the app registration in your tenant, the term service principal authentication is the most common way to refer to this type of sign-in.

 Tip

Your Azure AD administrator can tell you who's allowed to create, and consent to,
an app registration in your organization.

Service principal authentication is useful in the following situations.

Automated auditing processes: You want to run an unattended process on a
schedule.
No need to sign in to the Power BI service: You only need to connect and extract
data. A service principal can't sign in to the Power BI service.
Shared access to data: You (personally) aren't a Power BI administrator, but you're
authorized to use a service principal. The service principal has permission to run
admin APIs to retrieve tenant-level data (or other specific permissions).
Use by multiple administrators: You want to allow multiple administrators or users
to use the same service principal.
Technical blockers: There's a technical blocker that prevents the use of user
authentication. Common technical blockers include:
Multi-factor authentication (MFA): Automated auditing processes (that run
unattended) can't authenticate by using a user account when multi-factor
authentication is enabled.
Password hash synchronization: Azure AD requires password hash
synchronization for a user account to authenticate. Sometimes, IT or a
cybersecurity team might disable this functionality.

App registration purpose and naming convention

Each app registration should have a clear name that describes its purpose and the target
audience (who can use the app registration).

Consider implementing a naming convention such as: <Workload> - <Purpose> - <Target audience> - <Object type>

Here are the parts of the naming convention.


Workload: Usually equivalent to a data source (for example, Power BI data or
Microsoft Graph data).
Purpose: Similar to the level of permissions (for example, Read versus ReadWrite).
As described below, the purpose helps you to follow the principle of least privilege
when you create separate app registrations that can only read data.
Target audience: Optional. Depending on the workload and purpose, the target
audience allows you to determine the intended users of the app registration.
Object type: AADApp is included for clarity.

Your naming convention can be more specific when you have multiple tenants or
multiple environments (such as development and production).

The following table shows examples of app registrations that could be used to extract
tenant-level auditing data:

| App registration name | Purpose | Target audience |
| --- | --- | --- |
| PowerBIData-Read-AdminAPIs-AADApp | Extract tenant-wide metadata for auditing and administration of Power BI. Admin APIs are always read-only and may not have permissions granted in Azure AD (Power BI tenant setting only). | Power BI administrators and the Center of Excellence |
| PowerBIData-ReadWrite-PowerBIAdministrators-AADApp | Manage the Power BI tenant. Requires read/write permissions to create or update resources. | Power BI administrators and the Center of Excellence |
| GraphData-Read-PowerBIAdministrators-AADApp | Extract users and groups data for auditing and administration of Power BI. Requires read permissions. | Power BI administrators and the Center of Excellence |

Managing multiple app registrations for different purposes involves more effort.
However, you can benefit from several advantages.

See what the app registration is intended to be used for and why: When you see
the client ID from the app registration show up in your auditing data, you'll have
an idea of what it was used for and why. That's a significant advantage of creating
separate app registrations (rather than using only one for all purposes).
See who the app registration is intended to be used by: When you see the client
ID from the app registration show up in your auditing data, you'll have an idea of
who was using it.
Avoid overprovisioning permissions: You can follow the principle of least privilege
by providing separate app registrations to different sets of people who have
different needs. You can avoid overprovisioning permissions by using a read-only
app registration when write permissions aren't necessary. For example, you may
have some highly capable self-service users who want to gather data about their
datasets; they need access to a service principal with read permission. Whereas
there might be satellite members of the Center of Excellence working for the
Finance team who programmatically manage datasets; they need access to a
service principal with write permission.
Know who should be in possession of the secret: The secret for each app
registration should only be shared with the people who need it. When the secret is
rotated (described later in this section), the impact is smaller when you use
separate app registrations for different needs. That's helpful when you need to
rotate the secret because a user leaves the organization or changes roles.

App registration permissions

There are two main ways that an app registration can access data and resources. It
involves permissions and consent.

App-only permissions: Access and authorization are handled by the service
principal without a signed-in user. That method of authentication is helpful for
automation scenarios. When using app-only permissions, it's critical to understand
that permissions are not assigned in Azure AD. Rather, permissions are assigned in
one of two ways:
For running admin APIs: Certain Power BI tenant settings allow access to the
endpoints for the admin APIs (that return tenant-wide auditing metadata). For
more information, see Set Power BI tenant settings later in this article.
For running user APIs: Power BI workspace and item permissions apply.
Standard Power BI permissions control what data can be returned to a service
principal when running user APIs (that return auditing data based on
permissions of the user or service principal that's invoking the API).
Delegated permissions: Scope-based permissions are used. The service principal
accesses data on behalf of the user that's currently signed in. It means that the
service principal can't access anything beyond what the signed-in user can access.
Be aware that it's a different concept from user-only authentication, described
previously. Because a custom application is required to properly handle the
combination of user and app permissions, delegated permissions are rarely used
for auditing scenarios. This concept is commonly misunderstood, leading to many
app registrations that are overprovisioned with permissions.

) Important
In this article, the focus is only on user authentication or app-only authentication.
Delegated permissions (with an interactive user and the service principal) could
play an important role when programmatically embedding Power BI content.
However, we strongly discourage you from setting up delegated permissions for
extracting auditing data. Every API limits how frequently it can be run (with
throttling in place), so it isn't practical to have different users directly accessing the
raw audit data. Instead, we recommend that you extract the raw audit data once
(with a centralized process), and then make the curated audit data available to
authorized users downstream.

For more information, see Register an Azure AD app later in this article.

App registration secrets

An app registration often has a secret assigned to it. The secret is used as a key for
authentication, and it has an expiration date. Therefore, you need to plan how to rotate
the key regularly and how to communicate the new key to authorized users.

You also need to concern yourself with where to securely store the secret. An
organizational password vault, such as Azure Key Vault, is an excellent choice.

 Tip

As an alternative approach to using a secret, you can use a managed identity. A
managed identity eliminates the need to manage credentials. If you intend to use
services like Azure Functions or Azure Data Factory to extract the data, a managed
identity is a good option to investigate.

Securely manage credentials

Regardless of whether you use user authentication or service principal authentication,
one of the biggest challenges is how to securely manage passwords, secrets, and keys.

U Caution

The first rule is one that many people violate: passwords and keys should never
appear in plaintext in a script. Many articles, blogs, and examples online do that for
simplicity. However, it's a poor practice that should be avoided. Anyone that
obtains the script (or the file) can potentially gain access to the same data that the
author has access to.
Here are several secure methods you can use to manage passwords, secrets, and keys.

Integrate with Azure Key Vault or another organizational password vault system,
whenever possible.
Use credentials and secure strings in your PowerShell scripts. A credential works for
both user authentication and service principal authentication. However, you can't
use a credential when multi-factor authentication is enabled for a user account.
Prompt for a password in your PowerShell script, or use interactive authentication
with a browser.
Use the Secret Management module for PowerShell that's published by
Microsoft, as shown in the sketch after this list. It can store values by using a local
vault. It can also integrate with a remote Azure Key Vault, which is useful when there
are multiple administrators in your organization who work with the same scripts.
Create a custom connector for Power BI Desktop so that it can securely handle
credentials. Some custom connectors are available from community members
(usually on GitHub).
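
The following sketch combines two of these methods: retrieving an app registration secret from a SecretManagement vault and then signing in as a service principal. The secret name, tenant ID, and client ID are placeholder values, not values from this article.

PowerShell

# A minimal sketch, assuming a SecretManagement vault already stores the app
# secret under the hypothetical name "PowerBI-Auditing-AppSecret".
$tenantId = "00000000-0000-0000-0000-000000000000"   # placeholder tenant ID
$clientId = "11111111-1111-1111-1111-111111111111"   # placeholder app (client) ID

# Retrieve the secret as a SecureString so it never appears in plaintext.
$appSecret = Get-Secret -Name "PowerBI-Auditing-AppSecret"

# Build a credential object (user name = client ID, password = secret).
$credential = New-Object System.Management.Automation.PSCredential($clientId, $appSecret)

# Sign in to the Power BI service as the service principal.
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $credential -Tenant $tenantId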

 Tip

We recommend that you consult with your IT and security teams to help choose
the best method, or validate that your solution manages credentials in a secure
way.

Microsoft Authentication Library

Support for Azure AD Authentication Library (ADAL) ended in December 2022. Going
forward, you should use Microsoft Authentication Library (MSAL). The security team in
your organization should be familiar with this significant change.

While it's not important for Power BI professionals to deeply understand the transition
to MSAL, you should adhere to the following recommendations.

Use the latest version of the Power BI Management module when you authenticate
with the Connect-PowerBIServiceAccount PowerShell cmdlet. It switched from
ADAL to MSAL in version 1.0.946 (February 26, 2021).
Use the Azure AD V2 endpoint when you authenticate directly in your script. The
Azure AD V2 endpoint uses MSAL, whereas the Azure AD V1 endpoint uses ADAL
(see the sketch after this list).
Discontinue the use of APIs and modules that are deprecated. For more
information, see Deprecated APIs and modules later in this article.
If you find a community solution online, be sure that it's using MSAL for
authentication rather than ADAL.
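
For illustration only, here's a minimal sketch of authenticating directly against the Azure AD V2 endpoint by using the client credentials flow. The tenant ID and client ID are placeholders, and the secret is prompted for rather than hard-coded. The resulting token is used as a bearer token for Power BI REST API requests.

PowerShell

# A minimal sketch of the client credentials flow against the Azure AD V2
# (MSAL-based) endpoint. Replace the placeholder IDs with your own values.
$tenantId  = "00000000-0000-0000-0000-000000000000"   # placeholder
$clientId  = "11111111-1111-1111-1111-111111111111"   # placeholder
$appSecret = Read-Host -Prompt "App secret" -AsSecureString

$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = [System.Net.NetworkCredential]::new("", $appSecret).Password
    scope         = "https://analysis.windows.net/powerbi/api/.default"
}

# The /oauth2/v2.0/token path is the Azure AD V2 endpoint.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body

# Use the access token as a bearer token when calling the Power BI REST APIs.
$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }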
Checklist – When deciding how to manage authentication, key decisions and actions
include:

" Decide which type of authentication to use: Determine whether user


authentication or service principal authentication is most suitable, depending on
the type of auditing solution you plan to create.
" Plan for what app registrations you need: Consider your requirements, workloads,
and target audience (users of each app registration). Plan for how many app
registrations you need to create to support these needs.
" Create a naming convention for app registrations: Set up a naming convention
that makes it easy to understand the intended purpose and intended users of each
app registration.
" Plan for how to securely manage credentials: Consider ways to securely manage
passwords, secrets, and keys. Consider what documentation and training might be
necessary.
" Confirm the use of MSAL: Determine whether there are any existing manual or
automated auditing solutions that rely on ADAL. If necessary, create a plan to
rewrite these solutions to use the newer MSAL authentication library.

Choose a user API or admin API


When planning to retrieve auditing data, it's important to use the right type of API.

There are two types of APIs to consider.

User APIs: Rely on the permissions of the signed-in user (or service principal) to
send requests to the API. For example, one way to return a list of datasets in a
workspace is with a user API. Permission to read datasets can be granted either by
workspace role or per-item permissions. There are two ways to run user APIs:
User authentication: The signed-in user must have permission to access the
workspace or item.
Service principal authentication: The service principal must have permission to
access the workspace or item.
Admin APIs: Retrieve metadata from the tenant. It's sometimes referred to as the
organizational scope. For example, to return a list of all datasets in the tenant, you
use an admin API. There are two ways to run admin APIs:
User authentication: The signed-in user must be a Power BI administrator to
run admin APIs.
Service principal authentication: The service principal must have permission to
run admin APIs (without relying on security permissions like a user API does).

 Tip

All admin APIs belong to the Admin operation group. Any API that has the As
Admin suffix is an admin API; all remaining APIs are user APIs.

Consider an example where you need to obtain a list of datasets. The following table
shows six API options that you can use to do that. Four options are user APIs, and two
options are admin APIs. The scope and number of items that are returned are different,
depending on the API you choose.

| API name | Type of API | Scope | Number of datasets |
| --- | --- | --- | --- |
| Get Dataset | User | Personal workspace | One |
| Get Datasets | User | Personal workspace | All |
| Get Dataset in Group | User | One workspace | One, provided the signed-in user has permission to read the dataset |
| Get Datasets in Group | User | One workspace | All, provided the signed-in user has permission to read each dataset |
| Get Datasets in Group as Admin | Admin | One workspace | All |
| Get Datasets as Admin | Admin | Entire tenant | All |
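
As a minimal sketch (assuming an interactive sign-in and, for the admin call, that you hold the Power BI administrator role), the difference between a user API and an admin API is often just the admin prefix in the relative URL:

PowerShell

Connect-PowerBIServiceAccount   # interactive sign-in

# User API: datasets in your personal workspace (My workspace).
$myDatasets = Invoke-PowerBIRestMethod -Method Get -Url "datasets" | ConvertFrom-Json

# Admin API: every dataset in the tenant (requires Power BI administrator rights).
$allDatasets = Invoke-PowerBIRestMethod -Method Get -Url "admin/datasets" | ConvertFrom-Json

$allDatasets.value | Select-Object -First 5 id, name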

7 Note

Some of the API names include the term Group as a reference to a workspace. That
term originated from the original Power BI security model, which relied on
Microsoft 365 groups. While the Power BI security model has evolved significantly
over the years, the existing API names remain unchanged to avoid breaking
existing solutions.

For information about tenant settings, see Set Power BI tenant settings later in this
article.
Checklist - When choosing whether to use a user API or an admin API, key decisions and
actions include:

" Refer to data requirements: Collect and document the key business requirements
for an auditing solution. Based on the type of data that's needed, determine
whether a user scope or organizational scope is appropriate.
" Test the results: Do some experimentation. Test the accuracy of the results that are
returned by the different APIs.
" Review permissions: For any existing auditing solutions, confirm that permissions
are assigned correctly for Power BI administrators and service principals. For new
auditing solutions, confirm which method of authentication will be used.

Choose APIs or PowerShell cmdlets


A key decision to make is whether to use PowerShell cmdlets when they're available for
the specific data that you want to retrieve. This decision is important if you've decided
that PowerShell is one of the technologies you'll use to access audit data.

PowerShell is a task automation solution. The term PowerShell is a collective term that
refers to the scripting language, a command-line shell application, and a configuration
management framework. PowerShell Core is the newest version of PowerShell, which
runs on Windows, Linux, and macOS.

A PowerShell cmdlet is a command that performs an action. A PowerShell module is a
package that contains PowerShell members, such as cmdlets, providers, functions,
workflows, variables, and aliases. You download and install modules.

A PowerShell module creates a layer of abstraction over the APIs. For example, the
Get-PowerBIActivityEvent cmdlet retrieves (or gets) audit events for a Power BI tenant. This
PowerShell cmdlet retrieves its data from the Get Activity Events REST API. Generally, it's
simpler to get started by using the cmdlets because they simplify access to the underlying
APIs.

Only a subset of the APIs are available as PowerShell cmdlets. As you continue to
expand your entire auditing solution, it's likely that you'll use a combination of cmdlets
and APIs. The remainder of this section helps you to decide which way you'll access the
data.
PowerShell modules

Microsoft has published two PowerShell modules that contain cmdlets related to Power
BI. They're the Power BI Management module and the Data Gateway module. These
PowerShell modules act as an API wrapper on top of the underlying Power BI REST APIs.
Using these PowerShell modules can make it easier to write scripts.

 Tip

In addition to the modules published by Microsoft, there are freely available
PowerShell modules and scripts published (usually on GitHub) by Power BI
community members, partners, and Microsoft Most Valuable Professionals (MVPs).

Starting with a community solution can play an important role in building your
end-to-end auditing solution. If you choose to use a freely available solution, be
sure to fully test it. Become familiar with how it works so you can effectively
manage your solution over time. Your IT department might have criteria for using
publicly available community solutions.

Power BI Management module

The Power BI Management module is a rollup module that contains specific Power BI
modules for administration, capacities, workspaces, and more. You can choose to install
the rollup module to obtain all of the modules, or you can install specific modules. All of
the Power BI Management modules are supported on both Windows PowerShell and
PowerShell Core.

) Important

We recommend that you discontinue using Windows PowerShell (which can't run
PowerShell Core). Instead, use one of the modern code editors that supports
PowerShell Core. The Windows PowerShell ISE (integrated script environment) is
only capable of running PowerShell version 5.1, which is no longer updated.
Windows PowerShell is now deprecated, so we recommend that you use
PowerShell Core for all new development work.

Here are several commonly used cmdlets that you can use to retrieve auditing data.
| Management module | Cmdlet | Purpose |
| --- | --- | --- |
| Profile | Connect-PowerBIServiceAccount | Authenticate a Power BI user or service principal. |
| Admin | Get-PowerBIActivityEvent | View or extract Power BI activity log events for the tenant. |
| Workspaces | Get-PowerBIWorkspace | Compile a list of workspaces. |
| Reports | Get-PowerBIReport | Compile a list of reports for a workspace or the tenant. |
| Data | Get-PowerBIDataset | Compile a list of datasets for a workspace or the tenant. |
| Profile | Invoke-PowerBIRestMethod | Send a REST API request (command). This cmdlet is a good option because it can invoke any of the Power BI REST APIs. It's also a good choice when you want to use the simpler form of authentication by using the Connect-PowerBIServiceAccount cmdlet and be able to invoke an API that doesn't have a corresponding PowerShell cmdlet. For more information, see Considerations for using PowerShell cmdlets later in this section. |

 Tip

There are other cmdlets available for managing and publishing content. However,
the focus of this article is on retrieving auditing data.

You can download the Power BI Management module from the PowerShell Gallery .
The simplest way to install modules is to use PowerShellGet.

We recommend that you install the Power BI Management rollup module. That way, all
the cmdlets you might need are available. At a minimum, you need the Profile module
(for authentication) and any specific modules that contain the cmdlets that you want to
use.

U Caution

Be sure you keep the modules up to date on every server, user machine, and cloud
service (such as Azure Automation) where you use PowerShell. If the modules aren't
updated regularly, unpredictable errors or issues could arise. Some tools (like Visual
Studio Code) help you keep the modules updated. Also, be aware that
PowerShellGet doesn't automatically uninstall older versions of a module when you
install or update a newer version. It installs newer versions alongside the older
versions. We recommend that you check for installed versions and periodically
uninstall older versions.

Data Gateway module

The Data Gateway module contains cmdlets that retrieve data from an on-premises data
gateway and its data sources. The Data Gateway module is supported only on
PowerShell Core. It's not supported on Windows PowerShell.

Unlike the Power BI Management module (previously described), the Data Gateway
module doesn't include any admin cmdlets. Many of the cmdlets (and the
corresponding gateway APIs), require that the user has gateway administrator rights.

2 Warning

It's not possible to assign a service principal as gateway administrator (even when
the service principal is a member of a security group). Therefore, plan to use user
credentials when retrieving information about data gateways.

Here are several cmdlets from the Data Gateway module that you can use to retrieve
auditing data.

| Cmdlet | Purpose |
| --- | --- |
| Connect-DataGatewayServiceAccount | Authenticate a Power BI user (usually requires that the user belongs to the gateway admin role). |
| Get-DataGatewayCluster | Compile a list of gateway clusters. |
| Get-DataGatewayClusterDataSource | Compile a list of data sources for a gateway cluster. |
| Get-DataGatewayInstaller | Verify which users are allowed to install and register gateways in the organization. |

You can download the Data Gateway module from the PowerShell Gallery .
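
Here's a minimal sketch of compiling a gateway inventory with these cmdlets. It assumes you sign in with a user account that holds gateway admin rights, and that the returned cluster objects expose an Id property.

PowerShell

# Sign in with a user account (service principals can't be gateway administrators).
Connect-DataGatewayServiceAccount

# List gateway clusters, then the data sources defined on each cluster.
$clusters = Get-DataGatewayCluster
foreach ($cluster in $clusters) {
    Get-DataGatewayClusterDataSource -GatewayClusterId $cluster.Id
}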

Techniques to use PowerShell

You can use PowerShell in several different ways. The decision that you make is an
important one.
The following table describes three different techniques for using PowerShell.

| Technique | Description | Example |
| --- | --- | --- |
| Use PowerShell cmdlets to simplify authentication, and to call APIs directly. Recommended approach. | With this technique, there's a balance of simplicity and flexibility. The Invoke-PowerBIRestMethod cmdlet is available from the Power BI Management Profile module. This cmdlet is often referred to as a Swiss Army Knife because you can use it to call any of the Power BI REST APIs. The advantage of using this technique is you can simplify authentication, and then call any of the Power BI REST APIs. We strongly recommend that you use this technique. | After authenticating with the Connect-PowerBIServiceAccount cmdlet, use the Invoke-PowerBIRestMethod cmdlet to retrieve data from your preferred API (for example, Get Pipeline Users As Admin). |
| Use cmdlets from PowerShell as much as possible, both for authentication and for retrieving data. | With this technique, PowerShell is used extensively. PowerShell is used to coordinate running the script, and PowerShell modules are used whenever possible for accessing the data. This creates a greater dependency on PowerShell from multiple aspects. | After authenticating with the Connect-PowerBIServiceAccount cmdlet, use a cmdlet (for example, Get-PowerBIActivityEvent) to retrieve data. |
| Use PowerShell only to coordinate running the process. PowerShell modules are used as little as possible. | With this technique, there's less dependency on PowerShell as a tool since its primary use is to orchestrate invoking API calls. Only one generic PowerShell module is used to access APIs (none of the Power BI Management modules are used). | After authenticating by using the Microsoft Authentication Library (MSAL), call your preferred API (for example, Get Pipeline Users As Admin) by using the generic Invoke-RestMethod cmdlet to retrieve data. |

In the above table, the first technique describes an approach that balances ease of use
with flexibility. This approach provides the best balance for the needs of most
organizations, which is why it's recommended. Some organizations might have existing
IT policies and staff preferences that influence how you decide to use PowerShell.
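
Here's a minimal sketch of the recommended technique. It assumes the signed-in user is a Power BI administrator; the admin API shown (which returns workspaces and requires a $top parameter) is just one example of an API you could invoke.

PowerShell

# Authenticate once with the cmdlet, then call any REST API by its relative URL.
Connect-PowerBIServiceAccount

# Example: an admin API that returns up to 100 workspaces in the tenant.
$response = Invoke-PowerBIRestMethod -Method Get -Url 'admin/groups?$top=100'

# The cmdlet returns a JSON string; convert it to objects for further processing.
$workspaces = ($response | ConvertFrom-Json).value
$workspaces | Select-Object -First 5 id, name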

 Tip

You can use the Invoke-ASCmd PowerShell cmdlet to create and execute DAX,
XMLA, and TMSL scripts. However, these use cases are out of scope for this article.
For more information about querying Dynamic Management Views (DMVs), see
Data-level auditing.
Technical users (and administrators) can use the Power BI Management modules to
retrieve data or automate certain aspects of content management.

For administrators: Set the -Scope parameter to Organization to access entities
across the entire tenant (for example, to retrieve a list of all workspaces).
For technical users: Set the -Scope parameter to Individual (or omit the
parameter) to access entities that belong to the user (for example, to retrieve a list
of workspaces that the user has permission to access).

Consider a scenario where you need to obtain a list of datasets. If you choose to work
directly with the APIs, you must specify which API to call. However, if you choose to use
the Get-PowerBIDataset cmdlet, you can set the -Scope parameter to retrieve a user-
specific or tenant-wide list of datasets. The choice you make depends on your decision
for how to use PowerShell (covered in the previous table).
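
Here's a minimal sketch of that choice in practice, assuming the signed-in user is a Power BI administrator for the tenant-wide call.

PowerShell

Connect-PowerBIServiceAccount

# User scope: datasets in your personal workspace (My workspace).
$myDatasets = Get-PowerBIDataset -Scope Individual

# Organizational scope: all datasets in the tenant (admin permissions required).
$allDatasets = Get-PowerBIDataset -Scope Organization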

The following table describes how the parameters used in the PowerShell cmdlet
translate to the API that the cmdlet calls.

| Cmdlet parameters | API that the cmdlet uses | Type of API | Scope | Retrieved items |
| --- | --- | --- | --- | --- |
| -DatasetID {DatasetID} -Scope Individual | Get Dataset | User | Personal workspace | One dataset |
| -Scope Individual | Get Datasets | User | Personal workspace | All datasets |
| -DatasetID {DatasetID} -GroupID {WorkspaceID} | Get Dataset in Group | User | One workspace | One dataset, provided the signed-in user has permission to read the dataset |
| -GroupID {WorkspaceID} | Get Datasets in Group | User | One workspace | All datasets, provided the signed-in user has permission to access the workspace and read the dataset |
| -GroupID {WorkspaceID} -Scope Organization | Get Datasets in Group as Admin | Admin | One workspace | All datasets |
| -Scope Organization | Get Datasets as Admin | Admin | Entire tenant | All datasets |
Considerations for using PowerShell cmdlets

The Power BI PowerShell cmdlets are simpler to use because they abstract some of the
complexity of the REST API calls.

Here are some of the ways that the cmdlets simplify accessing auditing data.

Authentication: The simplest way to authenticate in a PowerShell script is to use
the Connect-PowerBIServiceAccount cmdlet.
Simplicity: It's simpler to get started because the techniques to retrieve auditing
data are straightforward. Consider that when you use the Get-PowerBIActivityEvent
cmdlet, you retrieve one day of audit data. However, when you directly call the Get
Activity Events REST API, you retrieve one hour of audit data. So, when you use the
REST API to retrieve one day of audit data you must use a loop to call the API 24
times to extract each hour of the day.
Pagination of large result sets: Large result sets are efficiently retrieved by
pagination. Pagination involves retrieving one chunk of the results at a time. The
cmdlets automatically manage pagination for you. However, when you directly use
the REST APIs, your script must check a continuation token to determine whether
there are any more results to come. The script should continue retrieving chunks of
results until no continuation token is received. For an example of using a
continuation token, see Activity Events REST API and the sketch after this list.
Access token expirations: Upon authenticating, an access token is returned. By
default, it expires after one hour. The cmdlets handle access token expirations for
you. That way, you don't need to write logic to request a refresh token.
Formatting: The data returned by a cmdlet is formatted slightly nicer than the data
returned by the REST API. The output is more user-friendly. For automated
auditing processes, that's not likely to be an issue. However, for manual auditing
processes the nicer formatting might be helpful.
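
To illustrate what the cmdlets handle for you, here's a minimal sketch of manual pagination when you call the Get Activity Events REST API directly. It assumes you've already authenticated (for example, with the Connect-PowerBIServiceAccount cmdlet) and it relies on the continuationUri field that the API returns; the date is a placeholder.

PowerShell

# Manual pagination: keep requesting until no continuation URI is returned.
$day = "2023-04-23"   # placeholder date (must be a single UTC day)
$url = "admin/activityevents?startDateTime='${day}T00:00:00'&endDateTime='${day}T23:59:59'"

$allEvents = @()
do {
    $response   = Invoke-PowerBIRestMethod -Method Get -Url $url | ConvertFrom-Json
    $allEvents += $response.activityEventEntities
    $url        = $response.continuationUri   # absolute URL for the next chunk
} while ($null -ne $url)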

Considerations for using the REST APIs directly

Sometimes there are advantages to calling the Power BI REST APIs directly.

Many more APIs available: There are more REST APIs available than PowerShell
cmdlets. However, don't overlook the flexibility of the Invoke-PowerBIRestMethod
cmdlet, which you can use to call any of the Power BI REST APIs.
Frequency of updates: Microsoft updates the REST APIs more frequently than it
updates the PowerShell modules. For example, if a new attribute is added to the
Get Dataset API response, it could take some time before it becomes available in
the results of the Get-PowerBIDataset cmdlet.
Less language/tool dependency: When you use the REST APIs directly (instead of
a cmdlet), you don't need to use PowerShell. Or, you might choose to use
PowerShell only to orchestrate calling many APIs in a script. By removing (or
avoiding) any dependency on PowerShell, at some time in the future you can
rewrite your logic in a different language. When your IT or developer team has
strong preferences for tools and languages, less dependency can be an important
consideration to take into account.

Regardless of which method you choose to use, the REST APIs can change over time.
When a technology evolves, APIs (and PowerShell modules) can be deprecated and
replaced. Therefore, we recommend that you purposefully create scripts that are simple
to maintain and resilient to change.

Checklist - When choosing whether to access the REST APIs directly or by using
PowerShell cmdlets, key decisions and actions include:

" Consult with IT on the use of PowerShell: Contact the relevant IT team(s) to


determine whether there are existing internal requirements or preferences on how
PowerShell can be used.
" Decide how you want to use PowerShell: Determine which PowerShell techniques
to use, and whether you want to intentionally increase or reduce your dependency
on PowerShell. Consider whether internal communication or training is necessary.
" Upgrade to PowerShell Core: Research using PowerShell Core. Determine whether
administrator machines need to be updated. Consider which existing scripts need
to be tested. Determine whether administrators or developers need additional
training to upgrade their skills.
" Determine which PowerShell modules to use: Consider whether the Power BI
Management modules and/or the Data Gateway module will be used.
" Decide whether community projects are permitted: Determine whether you're
allowed (or even encouraged) to use PowerShell modules available online (versus
modules published by Microsoft). Consult with IT to determine whether there are
criteria for community projects obtained online.
" Clarify how to support community projects: If PowerShell community projects are
permitted, consider who'll be responsible for supporting them once they're used
internally.
" Complete a small proof of concept (POC): Experiment with a technical POC.
Confirm your preferred client tools and method of accessing data.
" Determine how to support advanced users: Consider how you're going to support
technical users who create automation on their own by using the user APIs.

Determine where to store audit data

When you plan to build an end-to-end auditing solution, you'll need to decide where to
store the raw data (and the curated data, which is covered in the next section). Your
decisions about data storage are related to your preferences for how to handle data
integration.

When you extract raw auditing data, keep it simple. We recommend that you don't
select specific attributes, perform transformations, or apply any formatting when you
initially extract the data. Instead, extract all available attributes and store the data in its
original state.

Here are several reasons why storing the raw data in its original state is a best practice.

All data available in history: New attributes and new event types will become
available over time. Storing all the raw data is a good way to ensure that you'll
always have access to whatever data was available at the time you extracted it.
Even when it takes you time—which could be weeks or months—to realize that
new data attributes are available, it's possible to analyze them historically because
you captured them in the raw data.
Resilient to change: If the raw data format changes, the process that extracts the
data isn't impacted. Because some auditing data is time-sensitive, it's important to
make sure that you design a data extraction process that won't fail when a change
occurs in the source.
Roles and responsibilities: Different team members (such as data engineers or
global administrators) might be responsible for creating the processes to access,
extract, and store the raw audit data. Simplifying the data extraction process makes
it easier for multiple teams to work together.

Here are some options for storing raw data.

Data lake or object storage: A cloud service such as OneLake that specializes in
storing files of any structure. It's an ideal choice for storing the raw audit data. In a
data lake architecture, raw data should be stored in the bronze layer.
File system: A file system (such as Windows or Linux) is a common choice for
storing the raw audit data.
Database: It's possible to store JSON data in a relational database, like Azure SQL
Database. However, it's less common than storing the data in a data lake or file
system. That's because querying and maintaining JSON data can become
challenging and expensive, especially as data volumes grow.

 Tip

When you use a data lake, object storage, or a file system, we recommend that you
store the data in a way that's easy to organize and secure. It's also important to be
clear about whether the data comprises events (which are typically appended) or
whether it's a point-in-time snapshot (which requires selecting a date to analyze).

Consider an example involving a raw data zone of a data lake. The zone has a folder
hierarchy for storing activity log data: Raw > ActivityLog > YYYY > MM. The folders are
partitioned by year and month. A file stored in the month folder uses the following
format: PBIActivityLog-YYYYMMDD-YYYYMMDDTTTT.json. YYYYMMDD represents the
date of the activity data, and YYYYMMDDTTTT represents the extraction time stamp. By
including the extraction time stamp, you can determine which file is the latest (in case
you need to extract the data again). For example, if you extract data at 9am (UTC) on
April 25, 2023 for activity on April 23, 2023, the file name would be
PBIActivityLog-20230423-202304250900.json.
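
Here's a minimal sketch of that convention in PowerShell. The folder path is hypothetical, and it assumes you've already signed in (for example, with the Connect-PowerBIServiceAccount cmdlet).

PowerShell

# Extract one full day of activity (two days ago, UTC) and save it by using the
# Raw > ActivityLog > YYYY > MM folder hierarchy and file naming convention.
$activityDate = [datetime]::UtcNow.Date.AddDays(-2)
$extractedAt  = [datetime]::UtcNow

$folder = "C:\Raw\ActivityLog\{0:yyyy}\{0:MM}" -f $activityDate   # hypothetical path
New-Item -ItemType Directory -Path $folder -Force | Out-Null

$fileName = "PBIActivityLog-{0:yyyyMMdd}-{1:yyyyMMddHHmm}.json" -f $activityDate, $extractedAt

Get-PowerBIActivityEvent `
    -StartDateTime ("{0:yyyy-MM-dd}T00:00:00" -f $activityDate) `
    -EndDateTime   ("{0:yyyy-MM-dd}T23:59:59" -f $activityDate) |
    Out-File -FilePath (Join-Path $folder $fileName)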

We strongly recommend that you use a technology that allows you to write the raw data
to immutable storage. Immutable storage guarantees that once the data is written, it
can't be overwritten or deleted. That distinction is important to auditors, and it allows
you to trust that the raw data is reliable.

You also need to consider how to securely store the raw data. Typically, very few users
require access to the raw data. Access to raw data is usually provided on a needs basis,
such as for data engineers and system administrators. Your internal auditors may also
need access. The team members who are responsible for creating the curated data
(described next) also require access to the raw data.

Here are some considerations to help you choose your raw data storage.

Is it a manual or automated auditing process? An automated auditing process
typically has stricter requirements for how and where data is stored.
Is the subject area particularly sensitive? Depending on the type of data and how
sensitive it is, your organization might have a requirement for how and where it's
stored. Power BI audit data contains identifying user information and IP addresses,
so it should be stored in a secure area.
Is there a preferred storage technology? There may be an existing storage
technology that's in use within the organization, so it's a logical choice.
Will users access the storage directly? The security model is an important
consideration. Usually, very few users are granted access to raw data. Access to the
curated data is typically made available to Power BI content creators who are
responsible for creating the centralized data model (the dataset that serves as the
semantic layer for reporting).
Do you have data residency requirements? Some organizations have legal or
regulatory data residency requirements to store data in a specific data region.
How will the data be organized? Use of the medallion architecture is a common
practice, particularly in data lake and lakehouse implementations. The goal is to
store your data in bronze, silver, and gold data layers. The bronze layer contains
the original raw data. The silver layer contains validated and deduplicated data
used for transformations. The gold layer contains the curated, highly refined data
that's ready for analysis.

Checklist - When planning the location to store raw data, key decisions and actions
include:

" Consult with IT: Contact the relevant IT team(s) to determine whether there are
existing processes running to extract the raw audit data. If so, find out whether you
can access the existing data. If a new data extract process is required, determine
whether there are preferences or requirements for where the data should be stored.
" Decide where to store raw data: Determine the best storage location and structure
for storing the raw data.
" Determine data residency requirements: Find out whether there are legal or
regulatory requirements for where the data can be stored.
" Create a folder structure and naming convention: Determine what naming
convention to use for raw data files, including the folder structure. Include these
details in your technical documentation.
" Consider how security for raw data will work: While you design the raw data
storage location, consider who'll need to access the data and how access will be
granted.

Plan to create curated data


Curated data supports analysis and is designed to be user-friendly. You need to make
some decisions on how and where curated data will be created. The technology you
choose to store the curated data might be any of the technologies listed in the previous
section.

The goal of curated data is to transform the data into a more friendly format that's
optimized for analysis and reporting. It forms the data source of a Power BI data model,
so that means you use a Power BI data model to analyze the usage of Power BI in your
organization. Fundamental data model guidance applies: You should adopt a star
schema design, which is optimized for performance and usability.

You can choose to create curated data in different ways. You need to decide how to do
data integration and how far upstream to apply the transformations that prepare the
data for loading into a star schema.

Data lake

You can apply transformations and create curated data files in a data lake. You should
use a gold layer for curated data so that it's logically separate from the raw data that's
stored in the bronze layer. Separating the data in layers also simplifies setting different
user access permissions.

Use a data lake to transform and curate the data when:

You expect that there'll be multiple Power BI data models that serve reporting
(which justifies the creation of an intermediary silver layer).
You need to support multiple self-service content creators who need access to
tenant-level audit data.
You have existing tools and processes that you want to use to transform and load
data.
You want to minimize the data preparation that needs to be done (and potentially
duplicated) within one or more Power BI data models.

Power BI data model

You may be able to complete all the transformation work in Power BI. Use Power Query
to access and transform the data from its raw state to the curated format.

Use a Power BI data model when:

You want to simplify the end-to-end architecture and load the data model directly
from the raw data. (Incremental refresh can help reduce refresh durations.)
Your preference is to do all the transformation work while loading the data model.
You expect to have one centralized data model for the tenant-level audit data. All
reports (or other data models) can use the centralized Power BI dataset as its
source.
You want to provide user access only to the centralized Power BI dataset (and not
to any of the source data).

 Tip

When you create the curated data, set it up in a way so you can easily re-run the
process for earlier date ranges. For example, if you discover that a new attribute
appeared in the audit logs three months ago, you will want to be able to analyze it
since it first appeared. The ability to reload the curated data history is one of
several reasons why it's important to store the raw data in its original state
(described earlier in this article).

For more information about what dimension tables and fact tables you might include in
your star schema, see Create a data model later in this article.

Checklist - When planning how to create curated data, key decisions and actions
include:

" Decide where transformations should be done: Consider how far upstream the
transformation work should be done, and where that fits into your plan for the
entire architecture.
" Decide what tool to use to transform the data into a star schema: Confirm which
tools and services will be used for transforming the raw data into curated data.
" Decide where curated data should be stored: Determine the best choice for
storing the raw data that's been transformed into a star schema format. Decide
whether an intermediary silver layer is useful in the end-to-end architecture.
" Create a naming convention: Determine what naming convention to use for
curated data files and folders (if applicable). Include the details in your technical
documentation.
" Consider how security for curated data will work: While designing the curated
data storage location, consider who'll need to access the data and how security will
be assigned.

Data source decisions


At this point, you should be clear on requirements, data needs, and permissions. Key
technical decisions have been made. You now need to make some decisions about the
approach for how you'll obtain certain types of data.

Access user activity data


The Power BI user activity data, which is sometimes referred to as the activity log or
audit logs, includes a wealth of information to help you understand what's happening in
your Power BI tenant. For more information about identifying your data needs, see User
activity data earlier in this article.

Power BI integrates its logs with the Microsoft Purview unified audit log (formerly known
as the Microsoft 365 unified audit log). This integration is an advantage because it
means every service within Microsoft 365 doesn't have to implement its own unique way
of logging. It's enabled by default.

) Important

If you're not already extracting user activity data, make that an urgent priority. The
user activity data is available for the most recent 30 or 90 days (depending on the
source). Even when you're not ready to do in-depth analytics, it's important to
begin extracting and storing the data as soon as possible. That way, valuable
history will be available to analyze.

You have several options to retrieve user activity data.

| Technique | Description | Default days of history available |
| --- | --- | --- |
| Manual (user interface) | Microsoft Purview compliance portal | 90 |
| Manual (user interface) | Defender for Cloud Apps | 30 |
| Programmatic (API or PowerShell cmdlet) | Power BI activity log | 30 |
| Programmatic | Office 365 Management Activity API | 7 |
| Programmatic | Microsoft Sentinel (Azure Monitor) | 30 |

The remainder of this section introduces each of the techniques presented in the table.

Microsoft Purview compliance portal

The audit search tool in the Microsoft Purview compliance portal (formerly known as the
Microsoft 365 compliance center) is a convenient place to view activities and alerts. This
tool doesn't require you to create a script to extract and download the data. You can
choose a Power BI workload to view all the audit log records for a range of time. You can
also narrow down the results by selecting specific activities and users. For example, a
manager asks you to find out who deleted a workspace earlier that day so they can
quickly contact the person to discuss the situation.

The most recent 90 days of history are available with Audit (Standard). With Audit
(Premium), long-term retention allows you to (optionally) retain audit logs longer. Since
long-term retention only applies to licensed E5 users, it excludes audit records for non-
E5 users and guest users. Therefore, we recommend that you only rely on the default
90-day history to ensure that all activity is captured.

There are licensing requirements to access the audit logs in the Microsoft Purview
compliance portal. Elevated Exchange Online permissions are also required in order to
access the audit logs.

 Tip

By default, Power BI administrators don't have permission to access the unified
audit log search in the Microsoft Purview compliance portal. Because it contains
data for many Microsoft 365 services, it's a high-privilege role. In most
organizations, those permissions are reserved for a small number of IT
administrators. There are other options available to Power BI administrators, which
are described later in this section.

The user interface in the Microsoft Purview compliance portal is useful for manual
auditing processes and occasional investigations of user activities.

Defender for Cloud Apps

The portal in Defender for Cloud Apps is a convenient place to view activities and alerts
without the need to create a script to extract and download the data. It also allows you
to view data from the Power BI activity log and user sign-ins.

Defender for Cloud Apps includes a user interface to view and search activity logs for
various cloud services, including Power BI. It displays the same log events that originate
in the Microsoft Purview unified audit log, and it includes other events like user sign-in
activity from Azure AD.

Like the Microsoft Purview compliance portal (described in the previous section),
Defender for Cloud Apps has a searchable user interface. However, the filter options are
different. In addition to the standard 30-day history, you can use Defender for Cloud
Apps to view up to six months of activity log history (in seven-day increments).

There are licensing requirements to access Defender for Cloud Apps. Separate
permissions are also required.

 Tip

By default, Power BI administrators don't have permission to access Defender for
Cloud Apps. There's a Power BI role in Defender for Cloud Apps. Adding Power BI
administrators to this role is a good way to grant them access to a limited set of
data.

The user interface in Defender for Cloud Apps is useful for manual auditing processes
and one-off investigations of user activities.

Power BI activity log

The Power BI activity log originates from the unified audit log. It contains only user
activities related to the Power BI service.

 Tip
For organizations that are planning to extract user activities, we recommend they
start with the Power BI activity log. It's the simplest source to programmatically
access.

You have two options to access the Power BI activity log.

Call the Get Activity Events REST API by using the tool of your choice.
Use the Get-PowerBIActivityEvent PowerShell cmdlet. It's available from the Power
BI Management Admin module.

For information about which option to use, see Choose APIs or PowerShell cmdlets
earlier in this article.
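
Here's a minimal sketch that uses the cmdlet for a one-off investigation, assuming the signed-in user is a Power BI administrator. The activity name shown (ViewReport) is just an example; verify activity names against your own log data.

PowerShell

Connect-PowerBIServiceAccount   # requires Power BI administrator permissions

# One UTC day of activity, filtered to a single activity type.
$events = Get-PowerBIActivityEvent `
    -StartDateTime '2023-04-23T00:00:00' `
    -EndDateTime   '2023-04-23T23:59:59' `
    -ActivityType  'ViewReport' |
    ConvertFrom-Json

$events | Select-Object -First 5 CreationTime, UserId, Activity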

 Tip

For examples of how to access the Power BI activity log with PowerShell, see Access
the Power BI activity log.

Power BI activity log data is available to all Power BI administrators, Power Platform
administrators, and global administrators. The data can be accessed from the unified
audit log, available to certain Exchange Online roles. For more information, see Audit log
requirements.

We recommend that you use the Power BI activity log when your intention is to only
retrieve Power BI audit log records.

7 Note

In the audit log data, workspaces are referred to as folders. In the Power BI REST
API, workspaces are referred to as groups.

Office 365 Management Activity API

You can extract data from the unified audit log for other services, such as SharePoint
Online, Teams, Exchange Online, Dynamics 365, and Power BI. When your goal is to
implement an auditing and monitoring solution for multiple services, we recommend
that you use the Office 365 Management Activity API. Because this API returns data from
many services, it's known as the Unified Auditing API (or the unified audit log). It's one of
the Office 365 Management APIs.
Before you build a new solution, we recommend that you contact your IT department to
determine whether existing Power BI audit records are available. It's possible that a
process already extracts and stores the data. It's also possible that you might obtain
permission to access this data rather than build a new solution.

You can access the data one of two ways.

Call the Office 365 Management Activity API directly by using the tool of your
choice. By default, the API returns 24 hours of data. You can retrieve a maximum of
seven days of history.
Use the Search-UnifiedAuditLog PowerShell cmdlet. It's an Exchange Online
PowerShell cmdlet. You can retrieve a maximum of 90 days of history.

) Important

The Search-UnifiedAuditLog cmdlet doesn't scale well and it requires you to
implement a looping strategy to overcome its 5,000-row limit. For these reasons,
it's suited to occasional queries, or for small organizations that experience low
activity. When you only need Power BI data, we recommend that you use the Get-
PowerBIActivityEvent cmdlet instead.
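
For an occasional query, here's a minimal sketch that uses the Exchange Online cmdlet. It assumes the ExchangeOnlineManagement module is installed and that your account holds a role with access to the unified audit log.

PowerShell

# Occasional query of the unified audit log for Power BI records only.
Connect-ExchangeOnline

$records = Search-UnifiedAuditLog `
    -StartDate  (Get-Date).AddDays(-7) `
    -EndDate    (Get-Date) `
    -RecordType PowerBIAudit `
    -ResultSize 5000   # maximum page size; larger extracts need a looping strategy

$records | Select-Object -First 5 CreationDate, UserIds, Operations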

In general, getting started with the Office 365 Management Activity API is more
challenging than using the Power BI activity log (described previously). That's because
the API returns content blobs, and each content blob contains individual audit records.
For large organizations, there could be thousands of content blobs per day. Because
customers and third-party applications heavily use this API, Microsoft implements
throttling to ensure that their use of the service doesn't negatively impact others (known
as the noisy neighbor problem). Therefore, retrieving large volumes of history can be a
challenge in larger organizations. For more information, see this troubleshooting article.

To use this API, you must authenticate with a service principal that's been granted
permission to retrieve data from the Office 365 Management Activity API.

 Tip

By default, Power BI administrators don't have permission to access the Office 365
Management Activity API. Because it contains data for many Microsoft 365 services,
access requires the high-privilege administrator or audit roles. In most
organizations, these roles are reserved for a small number of IT administrators. The
Power BI activity log is intended for use by Power BI administrators.
Retrieving the audit data programmatically from the Office 365 Management Activity
API is a good choice when IT needs to extract and store audit logs from various services
(beyond Power BI).

Microsoft Sentinel

Microsoft Sentinel is an Azure service that allows you to collect, detect, investigate, and
respond to security incidents and threats. You can set up Power BI in Microsoft Sentinel
as a data connector. When connected, audit logs (that contain a subset of data) are
streamed from Power BI into Azure Log Analytics, which is a capability of Azure Monitor.

 Tip

Azure Log Analytics is based on the same architecture used by the workspace-level
dataset event logs. For more information about dataset events logs, see Data-level
auditing.

Because it's a separate Azure service, any administrator or user that you want to run
Kusto Query Language (KQL) queries must be granted permissions in Azure Log
Analytics (Azure Monitor). When they have permission, they can query the audit data
stored in the PowerBIActivity table. However, keep in mind that in most cases you'll
grant users access to the curated data in the gold layer rather than the raw data in the
bronze layer.

You use KQL to analyze the activity log events that are stored in Azure Log Analytics. If
you have SQL experience, you'll find many similarities with KQL.

There are several ways to access the events stored in Azure Log Analytics. You can use:

The prebuilt Log Analytics for Power BI Datasets template app.
Power BI Desktop connector for Azure Data Explorer (Kusto).
Web-based query experience in Azure Data Explorer.
Any query tool that can send KQL queries.

U Caution

Be aware that only a subset of the Power BI activity log data is stored in Azure
Monitor. When you plan to do comprehensive analysis of Power BI usage and
adoption in the organization, we recommend that you use other options (described
earlier in this section) to retrieve activity data.
The period of history that you can retrieve depends on the data retention policy for the
Azure Log Analytics workspace. Consider cost and access to raw data when deciding on
how much data to retain.

Microsoft Sentinel and Azure Monitor are good options when IT has already made an
investment with Microsoft Sentinel, the level of detail captured meets your needs, and
activities such as threat detection are a priority. However, because Microsoft Sentinel
doesn't capture all Power BI activity data, it doesn't support performing comprehensive
analysis of Power BI usage and adoption.

User activity data considerations

Here are some considerations to help you choose your source for user activity data.

Will it be a manual or automated auditing process? The user interface options
work well for manual auditing processes and occasional one-off queries, especially
when you need to investigate a specific activity. An automated auditing process
that extracts the user activity data on a schedule will also be necessary to support
historical data analysis.
How often do you need the user activity data? For automated auditing processes,
plan to extract data that's at least one day before the current date by using
Coordinated Universal Time (UTC), rather than your local time. For example, if your
process runs on Friday morning (UTC time), you should extract data from
Wednesday. There are several reasons why you should extract data with one day
latency.
Your files will be simpler to work with when you always extract a full 24 hours at
a time, which avoids partial day results.
You'll minimize the risk of missing some audit events. Usually, audit events
arrive within 30 minutes. Occasionally some events can take up to 24 hours to
arrive in the unified audit log.
Do you need more than Power BI data? Consider the most efficient way to access
what you need.
When you only need Power BI user activity data, use the Power BI activity log.
When you need audit logs from multiple services, use the Office 365
Management Activity API to access the unified audit log.
Who will develop the solution? Plan to invest some time to build out the solution.
It can take significant effort to produce a production-ready script.
Checklist - When planning how to access user activity data, key decisions and actions
include:

" Clarify scope of needs: Determine whether you need to access only Power BI audit
records, or audit data for multiple services.
" Consult with IT: Find out whether IT currently extracts audit logs, and whether it's
possible to obtain access to the raw data so that you don't need to build a new
solution.
" Decide on a data source: Determine the best source to use to access user activity
data.
" Complete a proof of concept: Plan to complete a small technical proof of concept
(POC). Use it to validate your decisions about how the final solution will be built.
" Track additional data needs: You can correlate activity log data with other sources
to enhance the value of the data. Keep track of these opportunities as they arise so
they can added to the scope of your project.

Access tenant inventory data

A tenant inventory is an invaluable, and necessary, part of a mature auditing and
monitoring solution. It helps you understand what workspaces and content (datasets,
reports, and other items) exist in Power BI, and it's an excellent complement to the user
activity data (described in the previous section). For more information about identifying
your data needs, see Tenant inventory earlier in this article.

User activities (previously described) are audited events; they're a record of actions that
a user took at a specific date and time. However, the tenant inventory is different. The
tenant inventory is a snapshot at a given point in time. It describes the current state of
all published items in the Power BI service at the time you extracted it.

7 Note

Power BI REST API documentation refers to published items as artifacts.

 Tip

Many organizations find it helpful to extract the tenant inventory at the same time
of day every week.

To properly analyze what's happening in your Power BI tenant, you need both the user
activity data and the tenant inventory. Combining them allows you to understand how
much content you have and where it's located. It also allows you to find unused or
under-utilized content (because there won't be any events for it in the activity log). The
tenant inventory also helps you compile a list of current names for all items, which is
helpful when item names change.

For more information about the value of the tenant inventory, see Tenant inventory
earlier in this article.

 Tip

You can use the Get Unused Artifacts as Admin API to search for items that don't
have any user activity in the last 30 days. However, you can't customize that time
period. For example, you might have critical content that's only used quarterly. By
combining your tenant inventory with the user activity data, you can find unused
items in ways that you can customize.

You can obtain tenant inventory data in two different ways.

Groups APIs or workspaces cmdlet (programmatic): The Get Groups as Admin API or the
Get-PowerBIWorkspace PowerShell cmdlet can provide a list of workspaces for the entire
tenant. Optionally, individual items (like datasets and reports) per workspace can be
included. Most suited to smaller organizations or low data volume.

Metadata scanning APIs (programmatic): A set of asynchronous admin APIs, collectively
known as the metadata scanning APIs or scanner APIs, can return a list of workspaces
and individual items. Optionally, detailed metadata (like tables, columns, and measure
expressions) can be included as well. Most suited to organizations with high data volume
and/or a need to obtain detailed results.

The remainder of this section introduces each of the techniques presented above.

Groups APIs or workspaces cmdlet

To retrieve a list of workspaces, you can use one of several REST APIs, such as the Get
Groups as Admin API (using the tool of your choice). The results include extra metadata
such as the description and whether the workspace is stored in a Premium capacity.

7 Note

The API name includes the term group as a reference to a workspace. That term
originated from the original Power BI security model, which relied on Microsoft 365
groups. While the Power BI security model has evolved significantly over the years,
the existing API names remain unchanged to avoid breaking existing solutions.

The Get Groups as Admin REST API includes the helpful $expand parameter. This
parameter allows you to optionally expand the results so that they include a list of items
published to the workspace (such as datasets and reports). You can also pass the users
data type to the $expand parameter to include the users who are assigned to a
workspace role.

You can also use the Get-PowerBIWorkspace PowerShell cmdlet. It's from the Power BI
Management Workspaces module. When you set the -Scope parameter to Organization, it
functions like the Get Groups as Admin API. The cmdlet also accepts the -Include
parameter (which is the equivalent of the $expand parameter in the REST API) to include
items published in the workspace (such as datasets and reports).
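
Here's a minimal sketch of how you might use the cmdlet to capture a tenant-wide
inventory. It's not a production-ready script: it assumes that you've already signed in with
Connect-PowerBIServiceAccount by using an identity that's permitted to call admin APIs,
and the output path is a placeholder.

```powershell
# Retrieve every workspace in the tenant, including the items published to each one.
$workspaces = Get-PowerBIWorkspace -Scope Organization -Include All -All

# Save the snapshot as raw JSON with a date stamp (placeholder path).
$workspaces | ConvertTo-Json -Depth 10 |
    Out-File -FilePath ("C:\PowerBIAudit\Raw\TenantInventory_{0:yyyyMMdd}.json" -f (Get-Date).ToUniversalTime())
```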

 Tip

By passing in parameters, you can use the cmdlet in different ways. This section
focuses on retrieving a tenant-wide inventory. For more information about using
the -Scope parameter, see Choose a user API or admin API earlier in this article.

To help you choose which option to use, see Choose APIs or PowerShell cmdlets earlier
in this article.

The Get Groups as Admin REST API or the Get-PowerBIWorkspace PowerShell cmdlet is
best suited to manual auditing processes and one-off investigations. Larger
organizations with high data volume typically find these options challenging to use. The
REST API and cmdlet always extract a full set of data, so they can take a long time to
run. So, it's also likely that larger organizations will run into limitations on the number of
requests allowed per hour (known as throttling, which is done to protect the health of
the service). The metadata scanning APIs (described next) provide a more reliable and
scalable alternative.

Metadata scanning APIs

The metadata scanning APIs, commonly called the scanner APIs, are a set of APIs that
return a list of workspaces and their Power BI items (datasets, reports, and other items).
Conceptually, they provide the same data (and more) as the Groups APIs or the
workspace cmdlet, which are described in the previous section. However, the method
you use to retrieve the data is different and better suited to extracting the tenant
inventory.

7 Note

Take notice of how some people use the term scanner APIs or the phrase scanning
the tenant. They often use those terms to mean compiling a tenant inventory,
distinguishing it from the user activity events. They may, or may not, be literally
referring to the use of the metadata scanning APIs.

There are two primary reasons why you should consider using the metadata scanning
APIs instead of the Get Groups as Admin REST API or the Get-PowerBIWorkspace cmdlet
(described previously).

Incremental data extract: The scanner APIs only extract data that's changed since
the last time they were run. Conversely, the Get Groups as Admin REST API and the
Get-PowerBIWorkspace cmdlet are synchronous operations that extract the full set
of data every time they run.


More granular level of detail: The scanner APIs can retrieve granular detail (like
tables, columns, and measure expressions), provided that permission is granted by the
two tenant settings for detailed metadata. Conversely, the lowest level of detail
available with the Get Groups as Admin REST API and the Get-PowerBIWorkspace
cmdlet is by item type (for example, a list of reports in a workspace).

Using the scanner APIs involves more effort because you need to call a series of APIs.
Also, they run as an asynchronous process. An asynchronous process runs in the
background, and so you don't have to wait for it to finish. You might need to collaborate
with IT, or a developer, when it's time to create a production-ready script that will be
scheduled.

Here's the sequence of steps your process should follow when using the scanner APIs.
1. Sign in: Sign in to the Power BI service by using a Power BI administrator account
or a service principal that has permission to run admin APIs. For more information,
see Determine the authentication method earlier in this article.
2. Get the workspace IDs: Send a request to retrieve a list of workspace IDs. The first
time it's run, you won't have a modified date, so it will return a complete list of
workspaces. Usually, you'll pass the modified date to retrieve only workspaces that
have changed since that point in time. The modified date must be within the last
30 days.
3. Initiate a metadata scan: Initiate a call to request a scan of workspace information
by passing in the workspace IDs retrieved in Step 2. Note that this is a POST API
(instead of a GET API, which is more common when retrieving audit data). A POST
API is an HTTP request that creates or writes data for a specified resource. This
query specifies the level of detail that will be extracted, such as data source details,
dataset schema, dataset expressions, or users. When submitted, an ID for the scan
is returned by the API. It also returns the date and time of the scan, which you
should record as the snapshot date.
4. Check the scan status: Use the scan ID obtained in Step 3 to get the scan status.
You might need to call this API more than once. When the returned status is
Succeeded, you can proceed to the next step. The time it takes to scan depends on how
much data you request.
5. Get the results: Use the scan ID obtained in Step 3 to get the scan result and
extract the data. You must do this step within 24 hours of completing Step 4. Keep
in mind that the snapshot time stamp should be correlated with Step 3 rather than
Step 5 (when there's a significant delay). That way, you'll have an accurate
snapshot time stamp. Save the results in your preferred data storage location. As
already described in this article, we recommend that you store the raw data in its
original state.
6. Prepare for the next process: Record the time stamp of the scan from Step 3 so
you can use it in Step 2 the next time you run the process. That way, you'll only
extract data that's changed since that point in time. Moving forward, all
subsequent data extracts will be incremental changes rather than full snapshots.
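
Here's a minimal sketch of that sequence, using the Invoke-PowerBIRestMethod cmdlet to
call the scanner APIs. It's a simplified example rather than a production-ready script: it
assumes that you've already signed in with an identity that can call admin APIs, that the
detailed metadata tenant settings are enabled, that no more than 100 workspaces have
changed (the getInfo API accepts up to 100 workspace IDs per request), and that the
output path is a placeholder.

```powershell
$snapshotTime = (Get-Date).ToUniversalTime()

# Step 2: Get the workspace IDs (all workspaces on the first run; pass the
# modifiedSince parameter on later runs to get only what's changed).
$modified = Invoke-PowerBIRestMethod -Url 'admin/workspaces/modified' -Method Get |
    ConvertFrom-Json
$workspaceIds = $modified.id | Select-Object -First 100

# Step 3: Initiate a metadata scan (a POST request) for those workspaces.
$body = @{ workspaces = @($workspaceIds) } | ConvertTo-Json
$scan = Invoke-PowerBIRestMethod -Method Post -Body $body `
    -Url 'admin/workspaces/getInfo?lineage=True&datasourceDetails=True&datasetSchema=True&datasetExpressions=True' |
    ConvertFrom-Json

# Step 4: Poll the scan status until it reports Succeeded.
do {
    Start-Sleep -Seconds 10
    $status = Invoke-PowerBIRestMethod -Url "admin/workspaces/scanStatus/$($scan.id)" -Method Get |
        ConvertFrom-Json
} while ($status.status -ne 'Succeeded')

# Step 5: Get the results and store the raw JSON with the snapshot time stamp.
Invoke-PowerBIRestMethod -Url "admin/workspaces/scanResult/$($scan.id)" -Method Get |
    Out-File -FilePath ("C:\PowerBIAudit\Raw\Scan_{0:yyyyMMddHHmm}.json" -f $snapshotTime)
```
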

Checklist - When planning for access to the tenant inventory data, key decisions and
actions include:

" Confirm requirements: Clarify needs for compiling a tenant inventory and the
analytical requirements for the data.
" Decide on the frequency for extracting tenant inventory: Confirm how often you'll
need a new tenant inventory (such as every week).
" Decide how to extract the tenant inventory: Confirm which method you will use to
obtain the tenant inventory data (API, cmdlet, or metadata scanning APIs).
" Complete a proof of concept: Plan to complete a small technical proof of concept
(POC). Use it to validate your decisions about how the final solution will be built.

Access user and group data


In addition to the identifying information (like an email address) that's included in the
user activity data, it's valuable to retrieve additional information such as location or
organizational details. You can use Microsoft Graph to retrieve data about users, groups,
service principals, and licenses. Microsoft Graph comprises a set of APIs and client
libraries that allow you to retrieve audit data from a wide variety of services.

Here are some details about the Azure AD objects that you can access.

User: An identity that exists in Azure AD as a work, school, or Microsoft account.
The term domain user is often used to describe organizational users, while the
formal term is user principal name (UPN). A UPN is usually the same value as the
user's email address (however, if an email address changes, the UPN doesn't
change because it's immutable). There's also a unique Microsoft Graph ID for each
user. Often, a user account is associated with one person. Some organizations
create users that are service accounts that are used for automated activities or for
administrative tasks.
Service principal: A different type of identity that's provisioned when you create
an app registration. A service principal is intended for unattended, automated
activities. For more information, see Determine the authentication method earlier
in this article.
Group: A collection of users and service principals. There are several types of
groups that you can use to simplify permissions management. For more
information, see Strategy for using groups.

7 Note

When this article refers to users and groups, this term includes service principals as
well. This shorter term is used for brevity.

The users and groups data that you retrieve is a snapshot that describes the current
state at a given point in time.
 Tip

For more information about users, service principals, and groups, see Integration
with Azure AD.

Analytical attributes

For Power BI tenant-level auditing, you might extract and store the following attributes
from Microsoft Graph.

Full name of users: Many data sources only include the email address of the user
that performed an activity or who's assigned to a role. Use this attribute when you
want to display the full name (display name) in analytical reports. Using the full
name makes reports more user-friendly.
Other user properties: Other descriptive attributes about users may be available in
Azure AD. Some examples of built-in user profile attributes that have analytical
value include job title, department, manager, region, and office location.
Members of a security group: Most data sources provide the name of a group (for
example, the Power BI activity log records that a security group was assigned to a
workspace role). Retrieving the group membership improves your ability to fully
analyze what an individual user is doing or has access to.
User licenses: It's useful to analyze which user licenses—free, Power BI Pro, or
Power BI Premium Per User (PPU)—are assigned to users. This data can help you to
identify who's not using their license. It also helps you to analyze all users (distinct
users with a license) versus active users (with recent activities). If you're considering
adding or expanding your Power BI Premium licenses, we recommend that you
analyze the user license data together with user activity data to perform a cost
analysis.
Members of the administrator roles: You can compile a list of who your Power BI
administrators are (which includes Power Platform administrators and global
administrators).

For the authoritative reference of Power BI license information that you can find in the
audit data from Microsoft Graph, see Product names and service plan identifiers for
licensing.

 Tip

Retrieving members from groups can be one of the most challenging aspects of
obtaining audit data. You'll need to do a transitive search to flatten out all nested
members and nested groups. This type of search is especially challenging when
there are thousands of groups in your organization. In that case, you might
consider better alternatives to retrieve the data. One option is to extract all groups
and group members from Microsoft Graph. However, that may not be practical when
only a small number of groups is used for Power BI security. Another option is to
pre-build a reference list of groups that are used by any type of Power BI security
(workspace roles, app permissions, per-item sharing, row-level security, and others).
You can then loop through the reference list to retrieve group members from
Microsoft Graph. A sketch of a transitive membership query appears below.
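
Here's a minimal sketch of the reference-list approach, assuming the Microsoft Graph
PowerShell modules are installed and that the group IDs and output path are
placeholders.

```powershell
# Loop through a pre-built list of security groups used for Power BI security and
# retrieve the flattened (transitive) membership of each one.
Connect-MgGraph -Scopes 'GroupMember.Read.All'

$powerBiGroupIds = @('00000000-0000-0000-0000-000000000001')   # placeholder IDs

$memberships = foreach ($groupId in $powerBiGroupIds) {
    # Transitive members include users nested inside other groups.
    Get-MgGroupTransitiveMember -GroupId $groupId -All | ForEach-Object {
        [PSCustomObject]@{
            GroupId  = $groupId
            MemberId = $_.Id
        }
    }
}

$memberships | ConvertTo-Json | Out-File 'C:\PowerBIAudit\Raw\GroupMembers.json'
```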

Here are some other attributes you might extract and store.

User type: Users are either members or guests. Most commonly, members are
internal users and guests are external (B2B) users. Storing user type is useful when
you need to analyze the activities of external users.
Role changes: When you perform a security audit, it's useful to understand when a
user changed roles in the organization (for example, when they transfer to a
different department). That way, you can verify whether their Power BI security
settings have been—or should be—updated.
Disabled users: When a user leaves the organization, usually an administrator
disables their account. You can create a process to check whether disabled users
are workspace administrators or dataset owners.

 Tip

The Power BI activity log includes an event that records when a user signs up for a
trial license. You can combine that event with the user license data (sourced from
Azure AD) to produce a complete picture.

Retrieve users and groups data

You can retrieve data about users and groups in different ways.

Graph Explorer (manual technique)
Microsoft Graph APIs and SDKs (programmatic technique)
Az PowerShell module (programmatic technique)

The remainder of this section introduces each of the techniques presented above.
Other techniques, which are deprecated and shouldn't be used for new solutions, are
described at the end of this section.

7 Note

Be careful when you read information online because many tool names are similar.
Some tools in the Microsoft ecosystem include the term Graph in their name, like
Azure Resource Graph, GraphQL, and the Microsoft Security Graph API. Those tools
aren't related to Microsoft Graph, and they're out of scope for this article.

Microsoft Graph Explorer

Microsoft Graph Explorer is a developer tool that lets you learn about Microsoft Graph
APIs. It's a simple way to get started because it requires no other tools or setup on your
machine. You can sign in to retrieve data from your tenant, or retrieve sample data from
a default tenant.

You can use Graph Explorer to:

Manually send a request to a Microsoft Graph API to check whether it returns the
information that you want.
See how to construct the URL, headers, and body before you write a script.
Check data in an informal way.
Determine which permissions are required for each API. You can also provide
consent for new permissions.
Obtain code snippets to use when you create scripts.

Use this link to open Graph Explorer.

Microsoft Graph APIs and SDKs

Use the Microsoft Graph APIs to retrieve users and groups data. You can also use it to
retrieve data from services such as Azure AD, SharePoint Online, Teams, Exchange,
Outlook, and more.
The Microsoft Graph SDKs act as an API wrapper on top of the underlying Microsoft
Graph APIs. An SDK is a software development kit that bundles tools and functionality
together. The Microsoft Graph SDKs expose the entire set of Microsoft Graph APIs.

You can choose to send requests directly to the APIs. Alternatively, you can add a layer
of simplification by using your preferred language and one of the SDKs. For more
information, see Choose APIs or PowerShell cmdlets earlier in this article.

The Microsoft Graph SDKs support several languages, and there's also the Microsoft
Graph PowerShell modules. Other SDKs are available for Python, C#, and other
languages.

) Important

The Microsoft Graph PowerShell module replaces the Azure AD PowerShell
modules and MSOnline (MSOL) modules, which are both deprecated. You shouldn't
create new solutions with deprecated modules. The Microsoft Graph PowerShell
module has many features and benefits. For more information, see Upgrade from
Azure AD PowerShell to Microsoft Graph PowerShell.

You can install the Microsoft Graph PowerShell modules from the PowerShell Gallery .
Because Microsoft Graph works with many Microsoft 365 services, there are many
PowerShell modules that you install.

For Power BI tenant-level auditing, here are the most common PowerShell modules
you'll need to install.

Authentication (for signing in)
Users
Groups
Applications (and service principals)
Directory objects (and license details)
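
Here's a minimal sketch that installs those modules and retrieves a snapshot of users with
a few analytical attributes. The property list and output path are assumptions to adapt to
your own requirements.

```powershell
# Install the commonly needed Microsoft Graph PowerShell modules.
Install-Module Microsoft.Graph.Authentication, Microsoft.Graph.Users,
    Microsoft.Graph.Groups, Microsoft.Graph.Applications,
    Microsoft.Graph.Identity.DirectoryManagement -Scope CurrentUser

Connect-MgGraph -Scopes 'User.Read.All'

# Retrieve users with attributes that are useful for auditing reports.
Get-MgUser -All -Property Id, DisplayName, UserPrincipalName, JobTitle,
    Department, OfficeLocation, AccountEnabled, UserType |
    Select-Object Id, DisplayName, UserPrincipalName, JobTitle, Department,
        OfficeLocation, AccountEnabled, UserType |
    ConvertTo-Json | Out-File 'C:\PowerBIAudit\Raw\Users.json'
```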

 Tip

Microsoft updates the Microsoft Graph PowerShell modules regularly. Sometimes,
preview modules are made available on a pre-release basis or a beta endpoint. You
might want to specify the version you're interested in when you install and update
the modules. Also, when you review online documentation, note that the current
version number is automatically appended to the end of the URL (so be careful
when saving or sharing URLs).
Az PowerShell module

You can also use the Az PowerShell module to retrieve users and groups data. It focuses
on Azure resources.

) Important

The Az PowerShell module replaces the AzureRM PowerShell modules, which are
deprecated. You shouldn't create new solutions with deprecated modules.

You can use the Invoke-AzRestMethod cmdlet when there isn't a corresponding cmdlet
for an API. It's a flexible approach that allows you to send a request to any Microsoft
Graph API by using the Az PowerShell module.
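
As a simple illustration, here's a hedged sketch that sends a request to a Microsoft Graph
endpoint by using Invoke-AzRestMethod. It assumes that you've already signed in with
Connect-AzAccount.

```powershell
# Send a GET request to Microsoft Graph when no dedicated cmdlet fits your need.
$response = Invoke-AzRestMethod -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/users?$select=id,displayName,userPrincipalName&$top=100'

# The Content property holds the raw JSON returned by Microsoft Graph.
($response.Content | ConvertFrom-Json).value | Select-Object displayName, userPrincipalName
```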

Beginning with Az version 7, the Az cmdlets now reference the Microsoft Graph API.
Therefore, the Az module acts as an API wrapper on top of Microsoft Graph. The Az
modules have a subset of functionality that's contained in the Microsoft Graph APIs and
PowerShell modules. For new solutions, we recommend that you use the Microsoft
Graph PowerShell SDK.

Deprecated APIs and modules

You might find articles and blog posts online that suggest alternative options that aren't
presented in this section. We strongly recommend that you do not create new solutions
(and/or migrate your existing solutions) by using any of the following APIs or modules.

AzureRM PowerShell modules: Deprecated and will be retired. They've been
replaced with the Az PowerShell module.
Azure AD Graph API and Azure AD PowerShell module: Deprecated and will be
retired. This change is the result of the migration from Azure AD Graph to
Microsoft Graph (note that Graph appears in both names, but Microsoft Graph is
the future direction). All future PowerShell investments will be made in the
Microsoft Graph PowerShell SDK.
MS Online (MSOL) PowerShell module: Deprecated and will be retired. All future
PowerShell investments will be made in the Microsoft Graph PowerShell SDK.

U Caution

Be sure to confirm the status of any deprecated API or module that you're currently
using. For more information about the retirement of AzureRM, see this
announcement. For more information about Azure AD and MSOL, see this retirement
announcement post.

If you have questions or require clarification on the future direction of


programmatic data access, contact your Microsoft account team.

Checklist - When planning to access users and groups data, key decisions and actions
include:

" Confirm requirements: Clarify needs for compiling data related to users, groups,
and licenses.
" Prioritize requirements: Confirm what the top priorities are so you know what to
spend time on first.
" Decide on the frequency for extracting data: Determine how often you'll need a
new snapshot of the users and groups data (such as every week or every month).
" Decide how to extract data with Microsoft Graph: Determine which method you'll
use to retrieve the data.
" Complete a proof of concept: Plan to complete a small technical proof of concept
(POC) to extract users and groups data. Use it to validate your decisions about how
the final solution will be built.

Access data from Power BI REST APIs


Perhaps as a lower priority, you can also retrieve other data by using the Power BI REST
APIs.

For example, you can retrieve data about:

All apps in the organization.
All imported datasets in the organization.
All deployment pipelines in the organization.
All Premium capacities in the organization.

During a security audit, you might want to identify:

Items that have been widely shared with the entire organization.
Items that are available on the public internet by using publish to the web.
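
As an example of the kind of security-audit query you might run, here's a hedged sketch
that retrieves widely shared items by using the admin widely shared artifacts APIs. It
assumes that you've already signed in with an identity that can call admin APIs, and the
output paths are placeholders.

```powershell
# Items published to the public internet (publish to the web).
Invoke-PowerBIRestMethod -Method Get -Url 'admin/widelySharedArtifacts/publishedToWeb' |
    Out-File 'C:\PowerBIAudit\Raw\PublishedToWeb.json'

# Items shared by link with the entire organization.
Invoke-PowerBIRestMethod -Method Get -Url 'admin/widelySharedArtifacts/linksSharedToWholeOrganization' |
    Out-File 'C:\PowerBIAudit\Raw\SharedToWholeOrg.json'
```
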
For more information about the other types of APIs, see Choose a user API or admin API
earlier in this article.

Checklist - When planning for accessing data from the Power BI APIs, key decisions and
actions include:

" Obtain requirements: Compile analytical requirements as they arise. Keep track of
them in your backlog.
" Prioritize requirements: Set priorities for the new requirements that arise.

Phase 2: Prerequisites and setup


The second phase of planning and implementing a tenant-level auditing solution
focuses on prerequisites and setup that must be done before you begin solution
development.

Create storage account


At this point, you've decided on a location to store raw data and how to create curated
data. Based on those decisions, you're now ready to create a storage account.
Depending on your organization's processes, it might involve submitting a request to IT.

As described earlier, we recommend using a technology that allows you to write the raw
data to immutable storage. Once the data is written, it can't be changed or deleted. You
can then have confidence in the raw data because you know that an administrator with
access can't alter it in any way. The curated data, however, doesn't (usually) need to be
stored in immutable storage. Curated data might change or it can be regenerated.
Checklist – When creating a storage account, key decisions and actions include:

" Create a storage account: Create a storage account, or submit a request for one.
Use immutable storage settings for the raw data, whenever possible.
" Set permissions: Determine which permissions should be set for the storage
account.
" Test access: Do a small test to ensure that you can read and write to the storage
account.
" Confirm responsibilities: Ensure that you're clear on who'll manage the storage
account on an ongoing basis.

Install software and set up services


At this point, you've made your decisions about which technology to use to access
audit data. Based on those decisions, you're now ready to install software and set up
services.

Set up the preferred development environment for each administrator. The
development environment will allow them to write and test scripts. Visual Studio Code
is a modern tool for developing scripts, so it's a good option. Also, many extensions are
available to work with Visual Studio Code.

If you've made the decision (previously described) to use PowerShell, you should install
PowerShell Core and the necessary PowerShell modules on:

The machine of each administrator/developer who will write or test auditing
scripts.
Each virtual machine or server that will run scheduled scripts.
Each online service (such as Azure Functions or Azure Automation).

If you've chosen to use any Azure services (such as Azure Functions, Azure Automation,
Azure Data Factory, or Azure Synapse Analytics), you should provision and set them up
as well.
Checklist – When installing software and setting up services, key decisions and actions
include:

" Set up administrator/developer machines: If applicable, install the necessary tools
that will be used for development.
" Set up servers: If applicable, install the necessary tools on any servers or virtual
machines that are present in your architecture.
" Set up Azure services: If applicable, provision and set up each Azure service. Assign
permissions for each administrator/developer who'll be doing development work.

Register an Azure AD application


At this point, you've decided how to authenticate. We recommend that you register an
Azure AD application (service principal). Commonly referred to as an app registration, it
should be used for unattended operations that you'll automate.

For more information about how to create an app registration to retrieve tenant-level
auditing data, see Enable service principal authentication for read-only admin APIs.
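
Once the app registration exists, here's a minimal sketch of a test that signs in with the
service principal and calls an admin API. The tenant ID and app ID are placeholders;
retrieve the secret from your organizational password vault rather than hard-coding it in
scripts.

```powershell
$tenantId = '<tenant-id>'      # placeholder
$appId    = '<app-client-id>'  # placeholder
$secret   = Read-Host -Prompt 'App secret' -AsSecureString

# Sign in as the service principal.
$credential = New-Object System.Management.Automation.PSCredential ($appId, $secret)
Connect-PowerBIServiceAccount -ServicePrincipal -Tenant $tenantId -Credential $credential

# A quick test call: return the first workspace visible to the admin APIs.
Invoke-PowerBIRestMethod -Url 'admin/groups?$top=1' -Method Get
```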

Checklist - When registering an Azure AD application, key decisions and actions include:

" Check whether an existing app registration exists: Verify with IT whether an
existing app registration is available for the purpose of running admin APIs. If so,
determine whether the existing one should be used, or whether a new one should
be created.
" Create a new app registration: Create an app registration when appropriate.
Consider using an app name such as PowerBI-AdminAPIs-AADApp, which clearly
describes its purpose.
" Create a secret for the app registration: Once the app registration exists, create a
secret for it. Set the expiration date based on how often you intend to rotate the
secret.
" Securely save the values: Store the secret, app ID (client ID), and the tenant ID
because you'll need them to authenticate with the service principal. Use a location
that's secure, such as an organizational password vault. (If you need to request
creation of an app registration from IT, specify that you need these values returned
to you.)
" Create a security group: Create (or request) a security group that will be used for
Power BI. Consider using a group name such as Power BI admin service principals,
which signifies that the group will be used to access tenant-wide metadata.
" Add the service principal as a member of the security group: Use the app ID
(client ID) to add the new (or existing) service principal as a member of the new
security group.
" Update admin API tenant setting in Power BI: In the Power BI admin portal, add
the security group to the Allow service principals to use read-only Power BI admin
APIs tenant setting.
" Skip assigning permissions in Azure: Don't delegate any permissions to the app
registration (it'll gain access to the Power BI tenant-level audit data by way of the
Power BI Allow service principals to use read-only Power BI admin APIs tenant
setting).
" Decide whether the app registration should access detailed metadata: Determine
whether you want to extract detailed information about dataset tables, columns,
and measure expressions when you build your tenant inventory.
" Update the detailed metadata tenant settings in Power BI: Optionally, in the
Power BI admin portal, add the security group to the Enhance admin API responses
with detailed metadata tenant setting and also the Enhance admin API responses
with DAX and mashup expressions tenant setting.
" Test the service principal: Create a simple script to sign in by using the service
principal and test that it can retrieve data from an admin API.
" Create a process to manage app registration secrets: Create documentation and a
process to regularly rotate secrets. Determine how you'll securely communicate a
new secret to any administrators and developers who need it.

Set Power BI tenant settings


There are several tenant settings in the Power BI admin portal that are relevant for
extracting tenant-level auditing data.

Admin APIs
There are three tenant settings that are relevant for running admin APIs.

) Important

Because these settings grant metadata access for the entire tenant (without
assigning direct Power BI permissions), you should control them tightly.

The Allow service principals to use read-only Power BI admin APIs tenant setting allows
you to set which service principals can call admin APIs. This setting also allows Microsoft
Purview to scan the entire Power BI tenant so that it can populate the data catalog.

7 Note

You don't need to explicitly assign other Power BI administrators to the Allow
service principals to use read-only Power BI admin APIs tenant setting because they
already have access to tenant-wide metadata.

The Enhance admin API responses with detailed metadata tenant setting allows you to
set which users and service principals can retrieve detailed metadata. Metadata is
retrieved by using the metadata scanning APIs, and it includes tables, columns, and
more. This setting also allows Microsoft Purview to access schema-level information
about Power BI datasets so that it can store it in the data catalog.

The Enhance admin API responses with DAX and mashup expressions tenant setting
allows you to set which users and service principals can retrieve detailed metadata.
Metadata is retrieved from the metadata scanning APIs, and it can include queries and
dataset measure expressions.

7 Note

The Allow service principals to use read-only Power BI admin APIs tenant setting is
specifically about accessing admin APIs. Its name is very similar to the tenant
setting that's intended for accessing user APIs (described next). For more
information about the difference between admin APIs and user APIs, see Choose a
user API or admin API earlier in this article.

User APIs
There's one tenant setting that applies to calling user APIs. In this situation, Power BI
permissions are also required for the service principal (for example, a workspace role).

The Allow service principals to use Power BI APIs tenant setting allows you to set which
service principals have access to run REST APIs (excluding the admin APIs, which are
granted by a different tenant setting, described above).

There are other tenant settings related to APIs, which allow access to the embedding
APIs, streaming APIs, export APIs, and the execute queries API. However, these APIs are
out of scope for this article.
For more information about the tenant settings for usage metrics, see Report-level
auditing.

Checklist - When setting up the Power BI tenant settings, key decisions and actions
include:

" Verify that each service principal is in the correct group: Confirm that the Power BI
admin service principals group includes the correct service principals.
" Update the admin API tenant setting in Power BI: Add the security group to the
Allow service principals to use read-only Power BI admin APIs tenant setting, which
allows using the admin APIs to retrieve tenant-wide metadata.
" Update the detailed metadata tenant settings in Power BI: If necessary, add the
security group to the Enhance admin API responses with detailed metadata tenant
setting and the Enhance admin API responses with DAX and mashup expressions
tenant setting.
" Confirm which user APIs will be accessed: Verify whether one or more user APIs
will be needed (in addition to using the admin APIs).
" Update the user API tenant setting in Power BI: Add the security group to the
Allow service principals to use Power BI APIs tenant setting, which is intended for
user APIs.

Phase 3: Solution development and analytics


The third phase of planning and implementing a tenant-level auditing solution focuses
on solution development and analytics. At this point, all the planning and decision-
making has occurred, and you've met prerequisites and completed the setup. You're
now ready to begin solution development so that you can perform analytics and gain
insights from your auditing data.
Extract and store the raw data
At this point, your requirements and priorities should be clear. The key technical
decisions have been made about how to access audit data and the location to store
audit data. Preferred tools have been selected, and the prerequisites and settings have
been set up. During the previous two phases, you might have completed one or more
small projects (proofs of concept) to prove feasibility. The next step is to set up a
process to extract and store the raw auditing data.

Like with data returned by most Microsoft APIs, auditing data is formatted as JSON.
JSON-formatted data is self-describing because it's human-readable text that contains
structure and data.

JSON represents data objects that consist of attribute-value pairs and arrays. For
example, "state": "Active" is an example where the state attribute value is Active. A
JSON array contains an ordered list of elements separated by a comma and which are
enclosed within brackets ([ ]). It's common to find nested arrays in JSON results. Once
you become familiar with the hierarchical structure of a JSON result, it's straightforward
to understand the data structure, like a list (array) of datasets in a workspace.
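
Here's a small, hypothetical JSON fragment (simplified for illustration) parsed with
PowerShell, showing attribute-value pairs and a nested array.

```powershell
$json = @'
{
  "state": "Active",
  "workspaceName": "Sales Analytics",
  "datasets": [
    { "name": "Sales", "configuredBy": "user1@contoso.com" },
    { "name": "Returns", "configuredBy": "user2@contoso.com" }
  ]
}
'@

# ConvertFrom-Json turns the text into objects that you can navigate.
$workspace = $json | ConvertFrom-Json
$workspace.datasets | ForEach-Object { $_.name }   # enumerates the nested array
```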

Here are some considerations for when you extract and store the raw data retrieved
from the APIs.

What naming convention will be used? For a file-based system, a naming
convention is necessary for files, folders, and data lake zones. For a database, a
naming convention is necessary for tables and columns.
What format will be used to store the raw data? As Power BI continues to
introduce new features, new audit events will appear that don't exist today. For this
reason, we recommend that you extract and retain the original JSON results. Don't
parse, filter, or format the data while it's extracted. That way, you can use the
original raw data to regenerate your curated audit data.
What storage location will be used? A data lake or blob storage is commonly
used to store raw data. For more information, see Determine where to store audit
data earlier in this article.
How much history will you store? Export the audit data to a location where you
can store the history. Plan to accumulate at least two years of history. That way,
you can analyze trends and changes beyond the default 30-day retention period.
You might choose to store the data indefinitely, or you might decide on a shorter
period, depending on the data retention policies for your organization.
How will you track when the process last ran? Set up a configuration file, or the
equivalent, to record the time stamps when a process starts and finishes. The next
time the process runs, it can retrieve these time stamps. It's especially important
that you store time stamps when you extract data by using the metadata scanning
APIs.
How will you handle throttling? Some Power BI REST APIs and the Microsoft
Graph API implement throttling. You'll receive a 429 error (too many requests) if
your API request is throttled. Throttling can be a common problem for larger
organizations that need to retrieve a large volume of data. How you avoid failed
attempts due to throttling depends on the technology you use to access and
extract the data. One option is to develop logic in your scripts that responds to a
429 "Too many requests" error by retrying after a wait period (see the sketch after this list).
Is the audit data needed for compliance? Many organizations use the raw audit
log records to do compliance audits or to respond to security investigations. In
these cases, we strongly recommend that you store the raw data in immutable
storage. That way, once the data is written, it can't be changed or deleted by an
administrator or other user.
What storage layers are necessary to support your requirements? The best places
to store raw data are a data lake (like Azure Data Lake Storage Gen2) or object
storage (like Azure Blob Storage). A file system can also be used if cloud services
aren't an option.
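
Here's a minimal sketch of the retry approach mentioned in the throttling consideration
above. The helper function name, wait period, and error check are assumptions to adapt
to your own process.

```powershell
function Invoke-WithRetry {
    param (
        [scriptblock] $Request,
        [int] $MaxAttempts = 5,
        [int] $WaitSeconds = 60
    )
    for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
        try {
            return & $Request
        }
        catch {
            # Retry only when the error indicates throttling (HTTP 429).
            if ($attempt -lt $MaxAttempts -and $_.ToString() -match '429') {
                Start-Sleep -Seconds ($WaitSeconds * $attempt)   # simple linear backoff
            }
            else {
                throw
            }
        }
    }
}

# Example usage with a Power BI admin API call.
$result = Invoke-WithRetry -Request { Invoke-PowerBIRestMethod -Url 'admin/groups?$top=100' -Method Get }
```
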

Checklist - When extracting and storing the raw data, key decisions and actions include:

" Confirm requirements and priorities: Clarify the requirements and priorities for the
data that you'll extract first.
" Confirm the source of data to be extracted: Verify the source for each type of data
you need.
" Set up a process to extract the data and load it to the raw data zone: Create the
initial process to extract and load the raw data in its original state, without any
transformations. Test that the process works as intended.
" Create a schedule to run the processes: Set up a recurring schedule to run the
extract, load, and transform processes.
" Verify that credentials are managed securely: Confirm that passwords, secrets, and
keys are stored and communicated in secure ways.
" Confirm security is set up correctly: Verify that access permissions are set up
correctly for the raw data. Ensure that administrators and auditors can access the
raw data.

For more information about how an auditing and monitoring solution grows over time,
see Operationalize and improve later in this article.
Create the curated data
At this point, the raw data is extracted and stored. The next step is to create a separate
gold data layer for the curated data. Its goal is to transform and store the data files in a
star schema. A star schema comprises dimension tables and fact tables, and it's
intentionally optimized for analysis and reporting. The files in the curated data layer
become the source of a Power BI data model (described in the next topic).

 Tip

When you expect there to be more than one data model, investing in a centralized
curated data layer is particularly useful.

Checklist - When creating the curated data layer, key decisions and actions include:

" Confirm requirements and priorities: If you intend to use an intermediary silver
layer for the transformed data, clarify the requirements and objectives for what you
need to accomplish.
" Set up a process to transform the raw data and load it into the curated data zone:
Create a process to transform and load the data into a star schema. Test that the
process works as intended.
" Create a schedule to run the processes: Set up a recurring schedule to populate
the curated data layer.
" Confirm security is set up correctly: Verify that access permissions are set up
correctly for the curated data. Ensure that developers of the data model can access
the curated data.

Create a data model


This topic is about setting up an analytical data model. A data model is a queryable data
resource that's optimized for analytics. Sometimes it's referred to as a semantic model, or
simply a model. For your auditing and monitoring solution, the data model will likely be
implemented as a Power BI dataset.

In the context of auditing and monitoring, a data model sources data from the curated
(gold) data layer. If you choose not to create a curated data layer, the data model
sources its data directly from the raw data.
We recommend that your Power BI data model implements a star schema design. When
the source data is the curated data layer, the Power BI data model star schema should
mirror the curated data layer star schema.

 Tip

For an overview of star schema design, see Understand star schema and the
importance for Power BI.

As with any Power BI project, creating a data model is an iterative process. You can add
new measures as needed. You can also add new tables and columns as new audit events
become available. In time, you might even integrate new data sources.

Here are some useful dimension tables that you can include in the data model.

Date: A set of date attributes to enable analysis (slicing and dicing) of data by day,
week, month, quarter, year, and other relevant time periods.
Time: If you need to analyze by time of day and you have a very large volume of
audit data, consider storing the time part separately from the date. This approach
can help to improve query performance.
Users: Attributes that describe users (such as department and geographic region)
that can filter many subjects of auditing data. The goal is to remove all user details
from the fact tables and store them in this dimension table so that they can filter
many fact tables. You can also store service principals in this table.
Activity events: Attributes that group and describe the activity events (operations).
To enhance your reporting, you might create a data dictionary that describes each
activity event. You might also create a hierarchy that groups and classifies similar
activity events. For example, you might group all item creation events, delete
events, and so on.
Workspaces: A list of workspaces in the tenant and workspace properties, such as
type (personal or standard) and description. Some organizations record more
details about workspaces (possibly using a SharePoint list). You can integrate these
details into this dimension table. You need to decide whether this dimension table
stores only the current state of workspace, or whether it stores versioned data that
reflects significant workspace changes over time. For example, when a workspace
name changes, does historical reporting show the current workspace name or the
workspace name that was current at that time? For more information about
versioning, see Slowly changing dimensions.
Item types: A list of Power BI item types (datasets, reports, and others).
Capacities: A list of Premium capacities in the tenant.
Gateways: A list of data gateways in the tenant.
Data sources: A list of data sources that are used by any dataset, dataflow, or
datamart.

Here are some useful fact tables (subjects) that you can include in the data model.

User activities: The fact data that's sourced from the original JSON data. Any
attributes that have no analytical value are removed. Any attributes that belong in
the dimension tables (above) are removed too.
Tenant inventory: A point-in-time snapshot of all items published in the tenant.
For more information, see Tenant inventory earlier in this article.
Datasets: Includes user activity involving datasets (like dataset changes), or related
data sources.
Dataset refreshes: Stores data refresh operations, including details about type
(scheduled or on-demand), duration, status, and which user initiated the operation.
Workspace roles: A point-in-time snapshot of workspace role assignments.
User licenses: A point-in-time snapshot of user licenses. While you might be
tempted to store the user license in the Users dimension table, that approach
won't support the analysis of license changes and trends over time.
User group memberships: A point-in-time snapshot of users (and service
principals) assigned to a security group.
Community activities: Includes community-related facts such as training events.
For example, you could analyze Power BI user activities compared to training
attendance. This data could help the Center of Excellence identify potential new
champions.

Fact tables shouldn't include columns that report creators will filter. Instead, those
columns belong to related dimension tables. Not only is this design more efficient for
queries, but it also promotes reuse of dimension tables by multiple facts (known as drill
across). That last point is important to produce a useful and user-friendly data model
that's extensible when you add new fact tables (subjects).

For example, the Users dimension table will be related to every fact table. It should be
related to the User activities fact table (who performed the activity), Tenant inventory
fact table (who created the published item), and all other fact tables. When a report
filters by a user in the Users dimension table, visuals in that report can show facts for
that user from any related fact table.

When you design your model, ensure that an attribute is visible once, and only once, in
the model. For example, the user email address should only be visible in the Users
dimension table. It will exist in other fact tables too (as a dimension key to support a
model relationship). However, you should hide it in each fact table.
We recommend that you create your data model separate from reports. The decoupling
of a dataset and its reports results in a centralized dataset that can serve many reports.
For more information about using a shared dataset, see the managed self-service BI
usage scenario.

Consider setting up row-level security (RLS) so that other users—beyond the Center of
Excellence, auditors, and administrators—can analyze and report on auditing data. For
example, you could use RLS rules to allow content creators and consumers to report on
their own user activities or development efforts.

Checklist - When creating the data model, key decisions and actions include:

" Plan and create the data model: Design the data model as a star schema. Validate
relationships work as intended. As you develop the model, iteratively create
measures and add additional data based on analytical requirements. Include future
improvements on a backlog, when necessary.
" Set up RLS: If you intend to make the data model available to other general users,
set up row-level security to restrict data access. Validate that the RLS roles return
the correct data.

Enhance the data model


To effectively analyze content usage and user activities, we recommend that you enrich
your data model. Data model enhancements can be done gradually and iteratively over
time as you discover opportunities and new requirements.

Create classifications
One way to enhance the model and increase the value of your data is to add
classifications to the data model. Ensure that these classifications are used consistently
by your reports.

You might choose to classify users based on their level of usage, or to classify content
based on its level of usage.

User usage classification

Consider the following classifications for user usage.


Frequent user: Activity recorded in the last week, and in nine of the last 12
months.
Active user: Activity recorded in the past month.
Occasional user: Activity recorded in the last nine months, but without activity in
the past one month.
Inactive user: No activity recorded in the last nine months.

 Tip

It's helpful to know who your occasional or inactive users are, especially when they
have Pro or PPU licenses (which involve cost). It's also helpful to know who your
frequent and most active users are. Consider inviting them to join office hours or
attend training. Your most active content creators might be candidates to join your
champions network.

Content usage classification

Consider the following classifications for content usage.

Frequently used content: Activity recorded in the last week, and in nine of the last
12 months.
Actively used content: Activity recorded in the past month.
Occasionally used content: Activity recorded in the last nine months, but without
activity in the past one month.
Unused content: No activity recorded in the last nine months.

User type classification

Consider the following classifications for user type.

Content creator: Activity recorded in the past six months that created, published,
or edited content.
Content viewer: Activity recorded in the past six months that viewed content, but
without any content creation activity.

Consider recency vs. trends

You should decide whether the usage classifications for users or content should be
based only on how recently an activity occurred. You might want to also consider
factoring in average or trending usage over time.
Consider some examples that demonstrate how simple classification logic might
misrepresent reality.

A manager viewed one report this week. However, prior to that week, the manager
hadn't viewed any reports in the last six months. You shouldn't consider this
manager to be a frequent user based on recent usage alone.
A report creator publishes a new report every week. When you analyze usage by
frequent users, the report creator's regular activity appears to be positive.
However, upon further investigation you discover that this user has been
republishing a new report (with a new report name) every time they edit the
report. It would be useful for the report creator to have more training.
An executive views a report sporadically, and so their usage classification changes
frequently. You might need to analyze certain types of users, such as executives,
differently.
An internal auditor views critical reports once per year. The internal auditor may
appear to be an inactive user because of their infrequent usage. Someone might
take steps to remove their Pro or PPU license. Or, someone might believe that a
report should be retired since it's used infrequently.

 Tip

You can calculate averages and trends by using the DAX time intelligence functions.
To learn how to use these functions, work through the Use DAX time intelligence
functions in Power BI Desktop models learning module.

Checklist – When creating usage classifications, key decisions and actions include:

" Get consensus on classification definitions: Discuss classification definitions with
the relevant stakeholders. Make sure there's agreement when making the decisions.
" Create documentation: Ensure that the classification definitions are included in
documentation for content creators and consumers.
" Create a feedback loop: Make sure there's a way for users to ask questions or
propose changes to the classification definitions.

Create analytical reports


At this point, the auditing data has been extracted and stored, and you've published a
data model. The next step is to create analytical reports.

Focus on the key information that's most relevant for each audience. You may have
several audiences for your auditing reports. Each audience will be interested in different
information, and for different purposes. The audiences you might serve with your
reports include:

Executive sponsor
Center of Excellence
Power BI administrators
Workspace administrators
Premium capacity administrators
Gateway administrators
Power BI developers and content creators
Auditors

Here are some of the most common analytical requirements that you may want to start
with when creating your auditing reports.

Top content views: Your executive sponsor and leadership teams may
predominantly be interested in summary information and trends over time. Your
workspace administrators, developers, and content creators will be more interested
in the details.
Top user activities: Your Center of Excellence will be interested in who's using
Power BI, how, and when. Your Premium capacity administrators will be interested
in who's using the capacity to ensure its health and stability.
Tenant inventory: Your Power BI administrators, workspace administrators, and
auditors will be interested in understanding what content exists, where, lineage,
and its security settings.

This list isn't all-inclusive. It's intended to provide you with ideas about how to create
analytical reports that target specific needs. For more considerations, see Data needs
earlier in this article, and Auditing and monitoring overview. These resources include
many ideas for how you can use auditing data, and the types of information you might
choose to present in your reports.

 Tip

While it's tempting to present a lot of data, only include information that you're
prepared to act on. Ensure that every report page is clear about its purpose, what
action should be taken, and by whom.
To learn how to create analytical reports, work through the Design effective
reports in Power BI learning path.

Checklist - When planning for analytical auditing reports, key decisions and actions
include:

" Review requirements: Prioritize creating reports based on known needs and
specific questions that should be answered.
" Confirm your audience(s): Clarify who'll use the auditing reports, and what their
intended purpose will be.
" Create and deploy reports: Develop the first set of core reports. Extend and
enhance them gradually over time.
" Distribute reports in an app: Consider creating an app that includes all your
auditing and monitoring reports.
" Verify who has access to reports: Ensure that the reports (or the app) are made
available to the correct set of users.
" Create a feedback loop: Make sure there's a way for report consumers to provide
feedback or suggestions, or report issues.

Take action based on the data


Auditing data is valuable because it helps you to understand what's happening in your
Power BI tenant. While it might seem obvious, explicitly acting on what you learn from
the audit data can be easily overlooked. For that reason, we recommend that you assign
someone who's responsible for tracking measurable improvements, rather than just
reviewing auditing reports. That way, you can make gradual, measurable advances in
your adoption and level of maturity with Power BI.

You can take many different actions based on your goals and what you learn from the
auditing data. The remainder of this section provides you with several ideas.

Content usage
Here are some actions you might take based on how content is used.

Content is frequently used (daily or weekly): Verify that any critical content is
certified. Confirm that ownership is clear, and the solution is adequately supported.
High number of workspace views: When a high number of workspace views occur,
investigate why Power BI apps aren't in use.
Content is rarely used: Contact the target users to determine whether the solution
meets their needs, or whether their requirements have changed.
Refresh activity occurs more frequently than views: Contact the content owner to
understand why a dataset is refreshed regularly without any recent use of the
dataset or related reports.

User activities

Here are some actions you might take based on user activities.

First publishing action by a new user: Identify when a user changes from
consumer to creator, which is indicated by them publishing content for the first
time. It's a great opportunity to send them a standard email that provides
guidance and links to useful resources.
Engagement with the most frequent content creators: Invite your most active
creators to join your champions network, or to get involved with your community
of practice.
License management: Verify whether inactive creators still need a Pro or PPU
license.
User trial activation: A trial license activation can prompt you to assign a
permanent license to the user before their trial ends.

User training opportunities


Here are some user training opportunities that you might identify from the auditing
data.

Large number of datasets published by the same content creator: Teach users
about shared datasets and why it's important to avoid creating duplicate datasets.
Excessive sharing from a personal workspace: Contact a user who's doing a lot of
sharing from their personal workspace. Teach them about standard workspaces.
Significant report views from a personal workspace: Contact a user who owns
content that has a high number of report views. Teach them how standard
workspaces are better than personal workspaces.

 Tip

You can also improve your training content or documentation by reviewing
questions answered by your internal Power BI community and issues submitted to
the help desk.

Security
Here are some security situations you may want to actively monitor.

Too many users assigned to the high-privilege Fabric administrator role.
Too many workspace administrators (when the Member, Contributor, or Viewer
workspace role would be sufficient).
Excessive Build permissions assigned to datasets (when Read permission would be
sufficient).
High use of per-item permissions, when Power BI app permissions or the
workspace Viewer role would be a better choice for content consumers.
How content is shared with external users.

For more information, see the Security planning articles.
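
As a starting point for the workspace administrator check, the following minimal sketch calls the Power BI admin REST API to list workspaces with their users and flags workspaces that have more than a few users in the Admin role. It assumes an admin-scoped access token in a POWERBI_ACCESS_TOKEN environment variable, and the threshold is arbitrary; adapt both to your environment and governance policy.

```python
import os
import requests

API = "https://api.powerbi.com/v1.0/myorg/admin/groups"
TOKEN = os.environ["POWERBI_ACCESS_TOKEN"]  # assumption: admin-scoped token acquired separately

response = requests.get(
    API,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$top": 5000, "$expand": "users"},  # return workspaces together with their users
    timeout=60,
)
response.raise_for_status()

for workspace in response.json().get("value", []):
    admins = [
        u.get("emailAddress") or u.get("identifier")
        for u in workspace.get("users", [])
        if u.get("groupUserAccessRight") == "Admin"
    ]
    if len(admins) > 3:  # arbitrary threshold; align it with your governance policy
        print(f"{workspace.get('name')}: {len(admins)} admins -> {admins}")
```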

Governance and risk mitigation

Here are some situations that you may encounter. Consider explicitly looking for these
types of situations in your auditing reports, so you're prepared to act quickly.

High number of views for reports (and underlying datasets) that aren't endorsed.
Significant use of unknown or unsanctioned data sources.
File locations that don't align with governance guidelines for where source files
should be located.
Workspace names don't align with governance requirements.
Sensitivity labels that aren't used for information protection.
Consistent data refresh failures.
Significant and recurring use of printing.
Unexpected or excessive use of subscriptions.
Unexpected use of personal gateways.

The specific actions to be taken in each situation will depend on your governance
policies. For more information, see Governance in the Power BI adoption roadmap.

Checklist - When planning for potential actions based on the auditing data, key
decisions and actions include:
" Clarify expectations: Create auditing reports with a clear set of expectations for
what actions are expected.
" Clarify who'll be responsible for actions: Ensure that roles and responsibilities are
assigned.
" Create automation: When possible, automate known actions that are repeatable.
" Use a tracking system: Use a system to track an observed situation, including
contact made, next planned action, who's responsible, resolution, and status.

Phase 4: Maintain, enhance, and monitor


The fourth phase of planning and implementing a tenant-level auditing solution focuses
on maintenance, enhancements, and monitoring. At this point, your auditing solution is
in production use. You're now primarily concerned with maintaining, enhancing, and
monitoring the solution.

Operationalize and improve


Auditing processes are typically considered to be running in production when initial
development and testing are complete and you've automated the process. Automated
auditing processes running in production have greater expectations (than manual
processes) for quality, reliability, stability, and support.

A production-level auditing process has been operationalized. An operationalized
solution commonly includes many of the following characteristics (a minimal sketch
illustrating a few of them follows the list).

Secure: Credentials are stored and managed securely. Scripts don't contain
passwords or keys in plaintext.
Scheduling: A reliable scheduling system is in place.
Change management: Change management practices and multiple environments
are used to ensure that the production environment is safeguarded. You might
work with development and test environments, or just a development
environment.
Support: The team that supports the solution is clearly defined. Team members
have been trained, and they understand the operational expectations. Backup
members have been identified and cross-training happens when appropriate.
Alerting: When something goes wrong, alerts notify the support team
automatically. Preferably, alerting includes both logs and email (rather than email
only). A log is useful for analyzing logged errors and warnings.
Logging: Processes are logged so there's a history of when the auditing data was
updated. Logged information should record start time, end time, and the identity
of user or app that ran the process.
Error handling: Scripts and processes gracefully handle and log errors (such as
whether to exit immediately, continue, or wait and try again). Alert notifications are
sent to the support team when an error occurs.
Coding standards: Good coding techniques that perform well are used. For
example, loops are purposefully avoided except when necessary. Consistent coding
standards, comments, formatting, and syntax are used so that the solution is easier
to maintain and support.
Reuse and modularization: To minimize duplication, code and configuration values
(like connection strings or email addresses for notifications) are modularized so
that other scripts and processes can reuse them.
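
As a minimal sketch of several of these characteristics, the following Python script extracts one day of activity events from the Power BI admin REST API with basic logging, error handling, and a single retry. It assumes you've already acquired an Azure AD access token with admin API permissions and stored it in a POWERBI_ACCESS_TOKEN environment variable; the variable name, dates, and retry policy are placeholders to adapt, not a definitive implementation.

```python
import logging
import os
import time
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

API = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
TOKEN = os.environ["POWERBI_ACCESS_TOKEN"]  # assumption: token acquired separately (for example, with MSAL)

def get_activity_events(start_utc: str, end_utc: str) -> list[dict]:
    """Download one UTC day of activity events, following continuation links."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    params = {"startDateTime": f"'{start_utc}'", "endDateTime": f"'{end_utc}'"}
    events, url = [], API
    while url:
        try:
            response = requests.get(url, headers=headers, params=params, timeout=60)
            response.raise_for_status()
        except requests.RequestException as ex:
            logging.error("Request failed: %s; retrying once in 30 seconds", ex)
            time.sleep(30)
            response = requests.get(url, headers=headers, params=params, timeout=60)
            response.raise_for_status()
        payload = response.json()
        events.extend(payload.get("activityEventEntities", []))
        # The continuation URL already contains the parameters, so drop them.
        url, params = payload.get("continuationUri"), None
        logging.info("Collected %s events so far", len(events))
    return events

if __name__ == "__main__":
    logging.info("Process started")
    data = get_activity_events("2024-01-01T00:00:00Z", "2024-01-01T23:59:59Z")
    logging.info("Process finished with %s events", len(data))
```

In a production process, you'd typically replace the environment variable with a secret store such as Azure Key Vault, write the events to durable storage, and route the log output to your alerting system.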

 Tip

You don't have to do everything listed above all at once. As you gain experience,
you can incrementally improve the solution so that it becomes complete and
robust. Be aware that most examples you find online are simple, one-off script
snippets that may not be production quality.

Checklist - When planning to operationalize and improve an auditing solution, key


decisions and actions include:

" Assess the level of existing solutions: Determine whether there are opportunities
to improve and stabilize existing auditing solutions that are automated.
" Establish production-level standards: Decide what standards you want to have for
your automated auditing processes. Factor in improvements that you can
realistically add over time.
" Create a plan for improvement: Plan to improve the quality and stability of
production auditing solutions.
" Determine whether a separate development environment is necessary: Assess the
level of risk and reliance on the production data. Decide when to create separate
development and production (and test) environments.
" Consider a data retention strategy: Determine whether you need to implement a
data retention process that purges data after a certain period of time, or upon
request.
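
If you decide to purge raw extract files after a retention period, a small sketch like the following might be a starting point. The folder name and 90-day retention period are assumptions; align them with your organization's retention policy before deleting anything.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Assumptions: raw extracts are stored as dated JSON files in this folder,
# and your retention policy allows purging after 90 days.
RAW_FOLDER = Path("raw-activity-events")
RETENTION_DAYS = 90

cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
for file in RAW_FOLDER.glob("*.json"):
    modified = datetime.fromtimestamp(file.stat().st_mtime, tz=timezone.utc)
    if modified < cutoff:
        print(f"Purging {file.name} (last modified {modified:%Y-%m-%d})")
        file.unlink()
```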

Documentation and support


Documentation and support are critical for any production-level solution. It's helpful to
create several types of documentation, depending on the target audience and purpose.

Technical documentation
Technical documentation is targeted at the technical team who built the solution and
who will gradually improve and expand the solution over time. It's also a useful resource
for the support team.

Technical documentation should include:

A summary of architecture components and prerequisites.


Who owns and manages the solution.
Who supports the solution.
An architecture diagram.
Design decisions, including goals, reasons why certain technical choices were
made, and constraints (such as cost or skills).
Security decisions and choices.
Naming conventions used.
Coding and technical standards and guidelines.
Change management requirements.
Deployment, setup, and installation instructions.
Known areas of technical debt (areas that can be improved if there's opportunity
to do so).

Support documentation

Depending on how critical your auditing solution is, you might have a help desk or
support team available should urgent issues arise. They might be available all day,
every day.
Support documentation is sometimes referred to as a knowledge base or a runbook. This
documentation is targeted at your help desk or support team, and it should include:

Troubleshooting guidance for when something goes wrong. For example, when
there's a data refresh failure.
Actions to take on an ongoing basis. For example, there may be some manual
steps that someone needs to do regularly until an issue is resolved.

Content creator documentation


You derive more value from your auditing solution by providing usage and adoption
analytics to other teams throughout the organization (with row-level security enforced,
if necessary).

Content creator documentation is targeted at self-service content creators who create


reports and data models that source the curated auditing data. It includes information
about the data model, including common data definitions.

Checklist – When planning for documentation and support for your auditing solution,
key decisions and actions include:

" Confirm who's expected to support the solution: Determine who'll support


auditing solutions that are considered production-level (or have downstream report
dependencies).
" Ensure support team readiness: Verify that the support team is prepared to
support the auditing solution. Identify whether there are any readiness gaps that
need addressing.
" Arrange for cross-training: Conduct knowledge transfer sessions or cross-training
sessions for the support team.
" Clarify support team expectations: Ensure that expectations for response and
resolution are clearly documented and communicated. Decide whether anyone
needs to be on call to quickly resolve issues related to the auditing solution.
" Create technical documentation: Create documentation about the technical
architecture and design decisions.
" Create support documentation: Update the help desk knowledgebase to include
information about how to support the solution.
" Create documentation for content creators: Create documentation to help self-
service creators use the data model. Describe common data definitions to improve
consistency of their use.

Enable alerting
You might want to raise alerts based on specific conditions in the auditing data. For
example, you can raise an alert when someone deletes a gateway or when a Power BI
administrator changes a tenant setting.
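
If you build your own alerting on top of activity data you've already extracted, a simple condition check might look like the following minimal sketch. The operation names, file name, SMTP host, and email addresses are assumptions for illustration; confirm the exact operation names against your own activity data, and note that services such as Microsoft Defender for Cloud Apps can raise similar alerts without custom code.

```python
import json
import smtplib
from email.message import EmailMessage

# Operations to alert on; confirm the exact names against your own activity data.
WATCHED_OPERATIONS = {"DeleteGateway", "UpdatedAdminFeatureSwitch"}

def find_alert_events(events: list[dict]) -> list[dict]:
    """Return events whose Operation field matches a watched operation."""
    return [e for e in events if e.get("Operation") in WATCHED_OPERATIONS]

def send_alert(events: list[dict]) -> None:
    """Email a summary to the support team (SMTP host and addresses are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = f"Power BI audit alert: {len(events)} watched event(s) detected"
    msg["From"] = "auditing@contoso.com"
    msg["To"] = "powerbi-support@contoso.com"
    msg.set_content(json.dumps(events, indent=2, default=str))
    with smtplib.SMTP("smtp.contoso.com") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    # Assumes events were already downloaded (for example, by the extraction sketch
    # shown earlier) and saved as JSON.
    with open("activity_events.json", encoding="utf-8") as f:
        todays_events = json.load(f)
    hits = find_alert_events(todays_events)
    if hits:
        send_alert(hits)
```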

For more information, see Tenant-level monitoring.

Ongoing management
You need to perform ongoing management of the entire auditing solution. You might
need to extend or change your auditing solution when:

The organization discovers new data requirements.


New audit events appear in the raw data you extract from the Power BI REST APIs.
Microsoft makes changes to the Power BI REST APIs.
Employees identify ways to improve the solution.

) Important

Breaking changes are rare, but they can occur. It's important that you have
someone available who can quickly troubleshoot issues or update the auditing
solution when necessary.

Checklist - When planning for ongoing management of the auditing solution, key
decisions and actions include:

" Assign a technical owner: Ensure that there's clear ownership and responsibility for
the entire auditing solution.
" Verify that a backup exists: Make sure there's a backup technical owner who can
get involved should an urgent issue arise that support can't solve.
" Keep a tracking system: Ensure that you have a way to capture new requests and a
way to prioritize immediate priorities, and also short-term, medium-term, and long-
term (backlog) priorities.
Next steps
In the next article in this series, learn about tenant-level monitoring.
Power BI implementation planning:
Tenant-level monitoring
Article • 04/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This tenant-level monitoring article is primarily targeted at:

Power BI administrators: The administrators who are responsible for overseeing


Power BI in the organization. Power BI administrators may need to collaborate with
IT, information security, internal audit, and other relevant teams.
Center of Excellence, IT, and BI team: The teams that are also responsible for
overseeing Power BI. They may need to collaborate with Power BI administrators,
information security, and other relevant teams.

The terms auditing and monitoring are closely related.

Auditing: Actions you take to understand a system, its user activities, and related
processes. Auditing activities can be manual, automated, or a combination of both.
An auditing process might focus on one specific aspect (for example, auditing
security for a workspace), or it can refer to an end-to-end auditing solution. An
auditing solution consists of extracting, storing, and transforming data so that it
can be analyzed and acted upon.
Monitoring: Ongoing activities that inform you about what's occurring. Monitoring
usually involves alerting and automation, though sometimes monitoring is done
manually. Monitoring can be set up for a process you've selected to audit (for
example, notifications that appear when a specific tenant setting changes).

This article describes the different ways you can monitor a Power BI tenant.

7 Note

For more information about tracking activities that users perform, see Tenant-level
auditing.
Monitor and protect content
There are several monitoring aspects related to information protection and data loss
prevention.

Information protection for Power BI


Information protection focuses on safeguarding organizational data, reducing the risk of
sharing sensitive information, and strengthening compliance status for regulatory
requirements. Information protection begins with the use of sensitivity labels.

When you classify and label content, it helps the organization to:

Understand where sensitive data resides.


Track external and internal compliance requirements.
Protect content from unauthorized users.
Educate users on how to responsibly handle data.
Implement real-time controls to reduce the risk of data leakage.

 Tip

For information about implementing sensitivity labels, see Information protection


for Power BI.

Data loss prevention for Power BI


Data loss prevention (DLP) refers to activities and practices that safeguard data in the
organization. The goal for DLP is to reduce the risk of data leakage, which can happen
when unauthorized people share sensitive data. While responsible user behavior is a
critical part of safeguarding data, DLP usually refers to policies that are automated.

DLP allows you to:

Detect and inform administrators when risky, inadvertent, or inappropriate sharing
of sensitive data occurs. Specifically, it allows them to:
    Improve the overall security setup of your Power BI tenant, with automation and
    information.
    Enable analytical use cases that involve sensitive data.
    Provide auditing information to security administrators.
Provide users with contextual notifications. Specifically, it supports them to:
    Make the right decisions during their normal workflow.
    Adhere to your data classification and protection policy, without negatively
    affecting their productivity.

 Tip

For information about implementing DLP, see Data loss prevention for Power BI.

Monitor security and threats


There are several monitoring aspects related to security and threats.

Defender for Cloud Apps for Power BI


Microsoft Defender for Cloud Apps is a cloud access security broker (CASB) that
allows administrators to:

Monitor and raise alerts based on specific activities.


Create DLP policies.
Detect unusual behaviors and risky sessions.
Limit activities performed by applications (with Azure AD conditional access app
control).

Some powerful Power BI monitoring and protection capabilities are available with
Defender for Cloud Apps. For example, you can:

Prohibit all—or certain users—from downloading a file from the Power BI service
when a specific sensitivity label is assigned.
Receive an alert when an administrative activity is detected, such as when a tenant
setting is updated in the Power BI service.
Detect when suspicious or unusual behaviors occur, such as large file downloads or
an unusual number of sharing operations in the Power BI service.
Search the activity log for specific activities relating to content with a specific
sensitivity label, such as data exports from the Power BI service.
Receive notifications as risky sessions occur, such as when the same user account
connects from different geographical areas in a short time period.
Determine when someone outside of a predefined security group views specific
content in the Power BI service.

For more information, see Defender for Cloud Apps for Power BI.

Microsoft Sentinel
Microsoft Sentinel is a security information and event management (SIEM) service. It's
an Azure service that allows you to:

Collect logs and security data for users, devices, applications, and infrastructure.
You can capture logs and user activities from the Power BI service and many other
areas across the enterprise.
Detect potential threats. You can create rules and alerts to refine what's important
to track. Automated threat intelligence exists as well, which can help reduce the
manual effort.
Analyze data and investigate the scope and root cause of incidents. You can use
built-in visualization tools or Kusto Query Language (KQL) queries. You can also
use Power BI to visualize and analyze the data that you collect.
Respond to security incidents and threats. You can handle responses directly. Or,
you can automate responses and remediations by using playbooks, which are
workflows based on Azure Logic Apps.

Microsoft Sentinel stores its data in Azure Log Analytics (a component of Azure
Monitor). It's based on the same architecture as the dataset event logs, which capture
user-generated and system-generated activities that occur for a dataset.
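
Before building a Power BI report on this data, you might want to validate a KQL query against the Log Analytics workspace programmatically. The following minimal sketch uses the azure-monitor-query and azure-identity Python packages; the workspace ID is a placeholder, and the table name depends on which connector or diagnostic setting you've enabled, so verify both in your environment.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Assumptions: replace the workspace ID, and verify the table name for your setup.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
QUERY = """
PowerBIActivity            // assumption: table name depends on the connector you enabled
| where TimeGenerated > ago(7d)
| summarize Events = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
"""

# The signed-in identity needs reader access to the Log Analytics workspace.
client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```

Once the query returns what you expect, you can reuse the same KQL when you build the Power BI report on this data.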

You can use Microsoft Sentinel with Power BI in two different ways.

Use the Power BI data connector in Sentinel: A subset of the attributes from the
Power BI audit logs is streamed into Azure Log Analytics (Azure Monitor). It's one
way to obtain audit logs for tracking user activities in your Power BI environment.
For more information, see Tenant-level auditing.
Use Power BI as an analytical tool: Power BI connects to the data that Microsoft
Sentinel (and, accordingly, Azure Monitor and Azure Log Analytics) collects from a
wide variety of data connectors. You can then use standard Power BI functionality
to model, analyze, and visualize data. For more information, see Create a Power BI
report from Microsoft Sentinel data.

) Important

Microsoft Sentinel, Azure Monitor, Azure Log Analytics, and Defender for Cloud
Apps are separate services. They have their own pricing models and security
models, which are separate from Power BI. Power BI administrators don't
automatically have access to these services. We recommend that you work with
your infrastructure team to plan which services are best to use.

For more information about the different options for capturing user activities, see
Tenant-level auditing.
Monitor Power BI service health
There are several ways to obtain information about service health, incidents, and issues.

 Tip

Before you submit a Power BI support request to Microsoft, we recommend that


you first check with the resources listed in this section.

Power BI support site


When there's an apparent service outage or degradation, Power BI administrators and
users should refer to the Power BI support site. It's a publicly available site that
displays information about high-priority issues concerning Power BI. This site shows:

The status for the Power BI service.


Service level outages or degradation notifications.
Informational messages for broad awareness.

7 Note

Microsoft typically communicates issues related to the national/regional clouds in


the Microsoft 365 admin center rather than the Power BI support site. If you work
with national/regional clouds, work with your Microsoft 365 administrator to
monitor Power BI issues.

For more information about Power BI support, see How to contact support.
For more information about how to support users in your organization, see User
support.

Power BI email notifications


You can receive alert notifications by email to inform you when there's a service outage,
interruption, or degradation occurring in your Power BI tenant. These notifications are
available only for Premium workspaces.

To set up email alerts, enable the Receive email notifications for service outages or
incidents tenant setting. Because its purpose is to send email, you must assign a mail-
enabled security group to this setting. We recommend that you use a group name like
Power BI System Support. You should add your Power BI administrators, key personnel
from your Center of Excellence (COE), and your help desk that handles user support to
this group.

 Tip

When you need to notify your internal users, we recommend that your COE sends a
customized message that uses non-technical language. That way, the message can
include additional context, and use the preferred communication platform, like a
Teams channel.

For more information, see Enable service interruption notifications.

Power BI known issues


Power BI administrators and users can also monitor the Power BI known issues page.
This page includes information about currently active known issues and recently closed
known issues.

Known issues may include software bugs that have been reported to Microsoft support
by other customers. An issue may also include functionality that's by design, but
because Microsoft Support has received a significant number of tickets, an explanation
is warranted.

Microsoft 365 admin center


The Microsoft 365 admin center displays service health information, incident
summaries, and advisory messages for Microsoft 365 services. Also, service incidents
and certain types of major updates for the Power BI service are posted in the Microsoft
365 admin center.

 Tip

Access to the Microsoft 365 admin center is available to administrators who have
sufficient permissions. Power BI administrators have a limited view of the Microsoft 365
service health and message center pages.

There are two types of message.

Advisory message: An issue is affecting only some customers. The service is
available; however, the issue may be intermittent or limited in scope and user
impact.
Active incident: An issue is presently causing the service, or a major function, to be
unavailable or severely degraded for multiple customers.

Microsoft 365 Service health

The Microsoft 365 service health page shows notifications about advisories and
incidents. It also has information about active incidents, including:

Description
User impact
Status
Duration (if closed) or estimated time to resolve (if open)
Next status update (if open)
Root cause (if closed)

The issue history page shows incidents and advisories that were resolved in the past 30
days.

Microsoft 365 message center


The Microsoft 365 message center publishes planned changes to Microsoft 365
services, allowing administrators to prepare in advance. For active incidents, each
message links to more details in the Microsoft 365 service health page, as previously
described.

Sometimes, a message is based on telemetry gathered by Microsoft. For example,


Microsoft's telemetry knows which type of browser users are using to connect to the
Power BI service. If Internet Explorer use is detected, you might receive a message
reminding you that Power BI no longer supports that browser.

Power BI issues site


You can check the publicly available Power BI issues site. It's a place for users to
publicly report issues they've encountered.

Monitor updates and fixes


By managing how Power BI Desktop is installed on user machines, you can control when
updates and fixes are installed. However, you can't control when changes to the Power
BI service are released in your tenant.

We recommend that you closely monitor releases for the Power BI service and Power BI
Desktop. Monitoring releases ensures that you can be prepared, test key feature
changes, and announce important changes to your users.

Updates
The Power BI service is a cloud service that's updated continually and frequently. Power
BI Desktop, which must be installed on a Windows machine, is usually updated monthly
(except for fixes, which are described next). Microsoft posts release announcements on
the Power BI blog .

For information about version numbers and links to more information for the current
release, see What's new in Power BI.

For information about previous releases, see Power BI updates archive.

QFE releases
Depending on the severity, Microsoft may do a quick-fix engineering (QFE) release,
which is commonly known as a bug fix or hotfix. QFE releases occur when Power BI
Desktop updates are made outside of the normal monthly release cadence.

For history of previous QFE releases, see Change log for Power BI Desktop.

Monitor Power BI announcements


To effectively support Power BI in your organization, you should continually watch for
announcements and new features.

Power BI release plan


You can find the public roadmap for future features including estimated dates in the
release plan .

Sometimes a change that's coming is so important that you'll want to plan for it in
advance. The planning cycle is divided into two semesters each year: April through
September, and October through March.

Power BI blog
Subscribe to the Power BI blog to follow posts about important public
announcements and new releases. Some blog posts also provide information about
upcoming features that can help you to plan ahead.

 Tip
It's particularly important to read the monthly blog post announcement. It includes
a summary of new features and planned changes to the Power BI service, Power BI
Desktop, and the Power BI mobile apps.

Power BI ideas
Consider routinely monitoring the Power BI ideas site . This site informs you about top
ideas that other customers have requested. You can also influence the future direction
of Power BI by submitting new ideas and voting for posted ideas that you want to
support.

Monitor related Azure services


The Azure status page shows the status for Azure services. There are many Azure
services that might potentially integrate with your Power BI tenant.

Common Azure services that integrate with Power BI include:

Azure Active Directory: Your Power BI tenant relies on Azure Active Directory
(Azure AD) for identity and access management.
Azure Power BI Embedded: Azure Power BI Embedded supports programmatic
embedding of Power BI content in apps for your customers. Power BI Embedded is
also applicable for customers who have enabled autoscale for their Power BI
Premium capacity. For more information about when to use Power BI Embedded,
see the embed for your customers usage scenario.
Azure storage accounts: Azure Data Lake Storage Gen2 (ADLS Gen2) can be used
for workspace-level data storage, including dataflows storage and dataset backups.
For more information about dataflow storage, see the self-service data preparation
usage scenario.
Azure Log Analytics: You can enable workspace auditing to capture dataset event
logs. For more information, see Data-level auditing.
Azure Files: When the large dataset format is enabled for a workspace, the data is
stored in Azure Files.
Data sources: It's likely that you have many types of data sources that Power BI
connects to. Data sources could be Azure Analysis Services, Azure SQL Database,
Azure Synapse Analytics, Azure storage, and others.
Virtual machines: A data gateway for Power BI could run on a virtual machine (VM)
in Azure. Or, a database containing data that's used as a data source for Power BI
may run on a VM in Azure.
Virtual network data gateway: A virtual network (VNet) data gateway could be
implemented to securely access data sources in a private network.
Azure Key Vault: One common way to use Azure Key Vault is for customer
management of the encryption keys for data at-rest in the Power BI service. For
more information, see bring your own key (BYOK) and customer-managed keys
(CMK).
Microsoft Purview: Used by Microsoft Purview Information Protection, or by
Microsoft Purview Data Catalog to scan your Power BI tenant, to extract metadata.

Checklist - When planning for tenant-level monitoring, key decisions and actions
include:

" Educate administrators and key personnel: Make sure that Power BI administrators
and key personnel in the COE are aware of the resources available for monitoring
service health, updates, and announcements.
" Create a monitoring plan: Determine how, and who will monitor service health,
updates, and announcements. Ensure that expectations are clear for how to gather,
communicate, plan, and act on the information.
" Create a user communication plan: Clarify which situations warrant communicating
to others in the organization. Determine how, and who will be responsible for
communicating to users in the organization, and in which circumstances.
" Decide who should receive email notifications: Determine who should receive
email notifications from Microsoft when there's a Power BI issue. Update the Receive
email notifications for service outages or incidents tenant setting to align with your
decision.
" Review administrator roles: Review roles and permissions necessary for viewing
service health in the M365 admin center.
" Investigate information protection and DLP requirements: Explore requirements
for using sensitivity labels in Microsoft Purview Information Protection to classify
data (the first building block of information protection). Consider requirements for
implementing DLP for Power BI, and the associated monitoring processes.
" Investigate Defender for Cloud Apps capabilities: Explore requirements for using
Microsoft Defender for Cloud Apps to monitor user behavior and activities.

Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.
Power BI usage scenarios
Article • 05/29/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

The Power BI ecosystem is diverse and can be implemented in different ways. In this
series of articles, common usage scenarios are provided to illustrate different ways that
Power BI can be deployed and utilized by creators and consumers. Understanding how
these usage scenarios are used in your organization, and by whom, can influence the
implementation strategies you decide to take.

7 Note

The most prevalent components of Power BI are identified in each scenario based
on how Power BI is intended to be used for that scenario. The objective is not to
call out every possible option for each usage scenario. Rather, each scenario
diagram depicts the primary features that are most relevant for that scenario.

How to use the scenarios


Use the scenarios to help you with Power BI architecture planning and implementation
decisions. Here are some suggestions:

Initially read the scenarios in the order they're documented. Become familiar with
the concepts and how the scenarios build upon each other.
Focus on the scenarios that align well with your data culture. Also consider how
content ownership and management is handled, as well as content delivery scope
when determining which usage scenarios are a good fit.
Consider which areas of your BI operations could be strengthened in your
organization. For example, if your goal is to reduce the level of data duplication,
focus on the managed self-service BI scenario. If your goal is to improve efficiency
of data preparation efforts, focus on the self-service data preparation scenario.
Determine if there are ways to use Power BI that will bring additional value or
reduce risk for your organization. For example, if your goal is to achieve a balance
between centralization and decentralization (described further in the content
ownership and management articles), consider the customizable managed self-
service BI scenario.
After understanding the areas of your BI operations that you want to implement or
strengthen, create a project plan that defines tactical steps to arrive at your desired
future state.

 Tip

You may need to mix and match the ideas described in the usage scenarios to
create a Power BI implementation strategy that fits your circumstances. To support
the needs of users from different departments and business units, expect to draw
from multiple Power BI implementation methods simultaneously. That way, you'll
be able to support diverse content creators and various solutions.

Content collaboration and delivery scenarios


The following usage scenarios are about content collaboration and delivery. These initial
four scenarios focus primarily on content ownership and management, and content
delivery scope. They are inter-related, building upon each other in a way that aligns with
how business intelligence teams evolve and grow over time. They can be thought of as
the building blocks that other scenarios build upon—particularly the self-service BI
scenarios that are described in the next section. Therefore, it's a good idea to review
those scenarios first.

Personal BI: The content creator has a lot of freedom and flexibility to create
content for individual usage. This scenario describes using a personal workspace
for private usage.

Team BI: The primary focus is on informal collaboration among team members
who work closely together on a team. This scenario describes using a workspace
for both collaboration and distribution. It also showcases the value of using
Microsoft Teams for collaboration between Power BI creators and consumers.

Departmental BI: There's a focus on distributing content to a larger number of


users within a department or business unit. This scenario describes using a Power
BI app for distributing content.

Enterprise BI: The primary focus is on content distribution at scale. This scenario
describes using Premium capacity to distribute content to a larger number of read-
only consumers who have a Power BI free license.
7 Note

Additional information about content ownership and management and


content delivery scope are described in the Power BI adoption roadmap.

Self-service BI scenarios
The following usage scenarios focus on supporting self-service BI activities, in which analytical
responsibilities are handled by people throughout many areas of the organization. The
content collaboration and delivery scenarios (described in the previous group of
scenarios) also include aspects of self-service BI but from a slightly different viewpoint.
The intention of this set of scenarios is to focus on several important aspects to plan for
in a Power BI implementation.

The self-service BI scenarios presented here primarily emphasize the use of managed
self-service BI in which data management is centralized. Reusability of this centralized
data is one of the primary goals. Business users take responsibility for creation of
reports and dashboards.

Managed self-service BI: The goal is for many report creators to reuse shared
datasets. This scenario describes decoupling the report creation process from the
dataset creation process. To encourage report authors to find and reuse an existing
shared dataset, it should be endorsed and made discoverable in the data hub in
the Power BI service.

Customizable managed self-service BI: The focus is on the dataset creator


customizing or extending an existing dataset to satisfy new requirements. This
scenario describes publishing a customized data model where some tables are new
while others are dependent on the existing shared dataset.

Self-service data preparation: The focus is on centralizing data preparation


activities to improve consistency and reduce effort. This scenario describes creating
Power BI dataflows to avoid repeating data preparation Power Query logic in many
different Power BI Desktop files. A dataflow can be consumed as a data source by
numerous datasets.

Advanced data preparation: The focus is on improving the reach and reusability of
dataflows for multiple users, teams, and use cases. This scenario describes use of
multiple workspaces based on purpose: staging, cleansed, and final.
Prototyping and sharing: Prototyping techniques are very useful for validating
requirements for visuals and calculations by subject matter experts. Prototyping
solutions may be temporary, short-lived solutions, or they may ultimately evolve
into a solution that's fully validated and released. This scenario describes using
Power BI Desktop during an interactive prototyping session. It's followed by
sharing in the Power BI service when additional feedback is needed from a subject
matter expert.

7 Note

Additional information about content ownership and management, and


content delivery scope, which affect self-service BI activities and decisions,
are described in the Power BI adoption roadmap.

Content management and deployment


scenarios
The following content management and deployment scenarios describe approaches for
how content creators and owners use methodical and disciplined lifecycle management
processes to reduce errors, minimize inconsistencies, and improve the user experience
for consumers.

Self-service content publishing: The focus is on ensuring that content is stable for
consumers. This scenario describes using a Power BI deployment pipeline to
publish content through development, test, and production workspaces. It also
describes how (optionally) Premium per user license mode can be used for
development and test workspaces, and Premium per capacity license mode for the
production workspace.
Enterprise content publishing: The focus is on using more sophisticated and
programmatic techniques to publish content through development, test, and
production workspaces. In this scenario, it describes how you can use Azure
DevOps to orchestrate collaboration and content publication.
Advanced data model management: The focus is on empowering creators with
advanced data modeling and publishing capabilities. This scenario describes
managing a data model by using Tabular Editor, which is a third-party tool. Data
modelers publish their models to the Power BI service by using the XMLA
endpoint, which is available with Power BI Premium.

Real-time scenarios
Real-time scenarios describe different techniques to allow presenting data updates in
near real-time. Monitoring data in real-time allows the organization to react faster when
time-sensitive decisions must be made.

Self-service real-time analytics: The focus is on how a business analyst can


produce real-time Power BI reports.
Programmatic real-time analytics (usage scenario article not currently available):
The focus is on how a developer can produce real-time Power BI reports.

Embedding and hybrid scenarios


There are two embedding and hybrid scenarios: content embedding and on-premises
reporting. They describe ways to deploy and distribute content that can be used in
addition to, or instead of, the Power BI service.

Embed for your organization: The focus is on making analytical data easier for
business users to access by integrating visuals within the tools and applications
they use every day. This scenario describes using the Power BI REST APIs to embed
content in a custom application for users who have permission and appropriate
licenses to access Power BI content in your organization.
Embed for your customers: This scenario describes using the Power BI REST APIs
to embed content in a custom application for users who don't have permission or
appropriate licenses to access Power BI content in your organization. The custom
application requires an embedding identity that has permission and an appropriate
license to access Power BI content. The custom application could be a multitenancy
application. (A minimal token-generation sketch follows this list.)
On-premises reporting: The focus is on using a basic portal for publishing,
sharing, and consuming business intelligence content within your organizational
network. This scenario describes using Power BI Report Server for this purpose.
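
To illustrate the embed-for-your-customers approach, the following minimal sketch requests an embed token for a single report by using the Power BI REST API. The workspace and report IDs are placeholders, and it assumes a service principal access token (acquired separately, for example with the MSAL client credentials flow) in a POWERBI_ACCESS_TOKEN environment variable; the returned token is then passed to the client-side embedding SDK in your custom application.

```python
import os
import requests

# Assumptions: placeholder IDs; the embedding identity must have access to the report.
GROUP_ID = "11111111-1111-1111-1111-111111111111"
REPORT_ID = "22222222-2222-2222-2222-222222222222"
TOKEN = os.environ["POWERBI_ACCESS_TOKEN"]

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{GROUP_ID}/reports/{REPORT_ID}/GenerateToken"
)
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"accessLevel": "View"},  # read-only access for report consumers
    timeout=60,
)
response.raise_for_status()
embed_token = response.json()["token"]
print("Embed token (pass to the client-side embedding SDK):", embed_token[:20], "...")
```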

Next steps
In the next article in this series, learn about enabling private analytics for an individual
with the personal BI usage scenario.
Power BI usage scenarios: Personal BI
Article • 03/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

As described in the Power BI adoption roadmap, personal BI is about enabling an


individual to gain analytical value. It's also about allowing them to perform business
tasks more efficiently with the use of data, information, and analytics. Personal BI is
sometimes thought of as the entry point for self-service BI.

In personal BI scenarios, the content creator has a lot of freedom and flexibility to create
content for individual usage. Simplicity and speed are usually high priorities. There's no
sharing or collaboration in this usage scenario—those topics are covered in the team BI,
departmental BI, and enterprise BI scenario articles.

7 Note

There are four content collaboration and delivery scenarios that build upon each
other. The personal BI scenario is the first of the four scenarios. A list of all
scenarios can be found in the Power BI usage scenarios overview article.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support personal BI. The focus is on private analytics for
an individual.

The scenario diagram depicts the following user actions, tools, and features:

Item 1: The Power BI content creator develops a BI solution using Power BI Desktop.

Item 2: Power BI Desktop connects to data from one or more data sources. Queries and data mashups, which combine multiple sources, are developed in the Power Query Editor.

Item 3: Data model development and report creation are done in Power BI Desktop. In a personal BI solution, the primary intention is typically data exploration and analysis.

Item 4: When ready, the content creator publishes the Power BI Desktop file (.pbix) to the Power BI service.

Item 5: Since the primary intention is personal usage, the content is published to the content creator's personal workspace. Some advantages of using the Power BI service (instead of remaining solely in Power BI Desktop) include scheduled data refresh, dashboard alerts, and the ability to consume content using a mobile app.

Item 6: The content creator views and interacts with the content published. One option is to sign in to the Power BI service using a web browser.

Item 7: The content creator can also use a Power BI mobile app to view published content.

Item 8: Scheduled data refresh can be set up in the Power BI service to keep imported data up to date.

Item 9: To connect to data sources that reside within a private organizational network, an On-premises data gateway is required for data refresh.

Item 10: Power BI administrators oversee and monitor activity in the Power BI service. Personal workspaces are usually governed to a much lesser extent than workspaces that are intended for collaboration and distribution.
Key points
The following are some key points to emphasize about the personal BI scenario.

Choice of authoring tools


Power BI Desktop is the authoring tool to develop queries, models, and Power BI
reports. It's possible to use different tools to create Excel reports and Power BI
paginated reports (not depicted in the scenario diagram).

Reliance on personal workspace


Use of the personal workspace can be thought of like an analytical sandbox. For many
organizations, personal content is subject to little governance or formal oversight.
However, it's still wise to educate content creators on guidelines to be successful with
personal BI. Use of the sharing feature available within a personal workspace isn't
depicted in this usage scenario since the focus is individual analytics.

) Important

Limit the use of personal workspaces and ensure no mission-critical content is


stored in them. Although a Power BI administrator can access and govern a user's
personal workspace, storing critical content in personal workspaces does represent
risk to the organization.

Use of Power BI free license


For personal use, which by definition means there's no sharing or collaboration with
others, only certain capabilities of the Power BI service are available to a user with a
Power BI free license. When using a free license, most activities to create and publish
content to the Power BI service are limited to their personal workspace.

 Tip

The enterprise BI scenario describes how users with a Power BI free license can
view content when it's hosted in a Premium capacity.
Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data and to serve reports that query a
live connection or DirectQuery dataset (not depicted in the scenario diagram).

7 Note

A data gateway in personal mode is most frequently installed on the machine of an


individual user. Therefore, a data gateway in personal mode is best-suited to
personal BI usage scenarios. Your organization may restrict individuals from
installing data gateways, in which case the content creator can use a data gateway
in standard mode (typically set up and managed by IT).

Information protection
Information protection policies can be applied to content in the Power BI service. Some
organizations have a mandatory label policy that requires a sensitivity label be assigned,
even within a personal workspace.

System oversight
The activity log records user activities that occur in the Power BI service, and it extends
to personal workspaces. Power BI administrators can use the activity log data that's
collected to perform auditing to help them understand usage patterns and detect risky
activities. Auditing and governance requirements are typically less stringent for personal
BI scenarios.

Next steps
In the next article in this series, learn about small team collaboration with the team BI
usage scenario.
Power BI usage scenarios: Team BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

Once a valuable BI solution is created, it's time to collaborate with colleagues. The goal
is to deliver additional value beyond what can be achieved with the personal BI scenario.

As described in the Power BI adoption roadmap, team BI focuses on a small team of


people who work closely together. Collaborating and sharing content with each other in
an informal way is usually a key objective of team BI (more formal delivery of content is
covered in the departmental BI and enterprise BI scenarios).

Sometimes when working with close colleagues, collaboration for small teams can be
done simply within a workspace. A workspace can be thought of as a way to informally
view content (without the formality of publishing a Power BI app, which is covered in the
departmental BI scenario) by members of a small team.

7 Note

There are four content collaboration and delivery usage scenarios that build upon
each other. The team BI scenario is the second of the four scenarios. A list of all
scenarios can be found in the Power BI usage scenarios article.

The managed self-service BI scenario introduces an important concept about


decoupling dataset and report development. For simplicity, this concept isn't
explicitly discussed in this article. You're encouraged to apply the concepts
discussed in the managed self-service BI scenario whenever possible.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support team BI. The primary focus is small team
collaboration.

The scenario diagram depicts the following user actions, tools, and features:

Item 1: Power BI content creators develop BI solutions using Power BI Desktop. In a team BI scenario, it's common for creators to work within a decentralized team, department, or business unit.

Item 2: Power BI Desktop connects to data from one or more data sources. Queries and data mashups, which combine multiple sources, are developed in the Power Query Editor.

Item 3: Data model development and report creation are done in Power BI Desktop. In a team BI solution, the purpose is to help team members understand the meaning and significance of data by placing it in a visual context.

Item 4: When ready, content creators publish their Power BI Desktop file (.pbix) to the Power BI service.

Item 5: The content is published to a workspace. Its primary purpose is to provide information and enable collaboration for a small team.

Item 6: All users assigned to a workspace role (viewer or higher) view and interact with content in the workspace. One option is to sign in to the Power BI service using a web browser.

Item 7: The Power BI mobile apps are also available for viewing published content.

Item 8: Users who frequently work in Microsoft Teams might find it convenient to manage or view Power BI content directly in Teams. They can use the Power BI app for Microsoft Teams or view reports that are embedded within a team channel. Users can also have private chats with each other and receive notifications directly in Teams.

Item 9: Users assigned to the administrator, member, or contributor workspace role can publish and manage workspace content.

Item 10: Scheduled data refresh can be set up in the Power BI service to keep imported data—in datasets or dataflows—up to date.

Item 11: To connect to data sources that reside within a private organizational network, an On-premises data gateway is required for data refresh.

Item 12: Other self-service content creators can author new reports using an existing dataset. They can choose to use Power BI Desktop, Excel, or Power BI Report Builder (not depicted in the scenario diagram). The reuse of existing datasets in this manner is highly encouraged.

Item 13: Power BI administrators oversee and monitor activity in the Power BI service. Team BI solutions may be subject to more governance requirements than personal BI, but fewer than departmental BI and enterprise BI solutions.

Key points
The following are some key points to emphasize about the team BI scenario.

Source file storage


Power BI Desktop is the authoring tool to develop queries, models, and interactive
reports. Because collaboration is a high priority for team BI, it's important to store the
source Power BI Desktop file in a secure, shared location. Locations such as OneDrive for
work or school or SharePoint (not depicted in the scenario diagram) are useful due to
built-in version history and automatic file synchronization. A shared library is securable,
easily accessible by colleagues, and has built-in versioning capabilities.

When the co-management of a BI solution involves multiple people with different


skillsets, consider decoupling the model and reports into separate Power BI Desktop
files (described in the managed self-service BI scenario). This approach encourages
reuse of the dataset and is more efficient than continually alternating between the
people who are editing the Power BI Desktop file. That's particularly helpful when, for
instance, one person works on the dataset while another person works on the reports.

Workspaces
A Power BI workspace serves as a logical container in the Power BI service for storing
related Power BI items, such as datasets and reports. In a team BI scenario, it's practical
and simple to use the workspace for collaboration as well as for the viewing of reports
by a small number of users. The distribution of content as a Power BI app is described in
the departmental BI scenario.

Workspace access and sharing


In addition to organizing content, a workspace forms a security boundary. Assign users
to workspace roles when a team member needs to edit or view all items published to a
workspace. The four workspace roles (administrator, member, contributor, and viewer)
support productivity for self-service content creators and consumers, without over-
provisioning permissions.

7 Note

Alternatively, workspace users can share individual reports and dashboards (not
depicted in the scenario diagram). Sharing can grant read-only access to someone
who isn't assigned to a workspace role. However, try to limit sharing because it can
be tedious to set up for many items or many users.

Power BI user licenses


When collaborating in a workspace, all users must have a Power BI Pro or Power BI
Premium Per User (PPU) license .

7 Note

There's one exception to the requirement of a Power BI Pro or PPU license: When
the workspace is assigned to Premium capacity, Power BI free license users (with
proper permissions) can view the workspace (and/or Power BI app) content. This
approach is described in the enterprise BI scenario.

Reuse existing datasets


The reuse of existing datasets is important for team collaboration. It helps to promote a
single version of the truth. It's particularly important when a small number of dataset
creators support many report creators. A Power BI Desktop live connection can connect
a report to an existing dataset, avoiding the need to create another dataset.
Alternatively, when users prefer to create an Excel report, they can use the Analyze in
Excel feature. This type of connectivity is preferred to exporting data to Excel because it:
Avoids creating duplicate datasets.
Reduces the risk of inconsistent data and calculations.
Supports all slicing, dicing, and pivoting capabilities within the visuals while
remaining connected to the dataset that's stored in the Power BI service.

To access an existing dataset, the content creator must have Build permission for the
dataset. It can be granted directly or indirectly when the user is assigned to a workspace
role (contributor or higher) or granted when publishing a Power BI app or sharing a
Power BI item. The managed self-service BI scenario explores the reuse of shared
datasets further.

Power BI integration with Microsoft Teams


Using a modern collaboration tool like Microsoft Teams engages users to make data-
driven decisions. Microsoft Teams supports collaborative discussions about data while
viewing Power BI content within a natural workflow. To learn about more collaboration
options, see Collaborate in Microsoft Teams with Power BI.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data and to serve reports that query a
live connection or DirectQuery dataset (not depicted in the scenario diagram).

7 Note

For team, departmental, and enterprise BI scenarios, a centralized data gateway in


standard mode is strongly recommended over gateways in personal mode. In
standard mode, the data gateway supports live connection and DirectQuery
operations (in addition to scheduled data refresh operations).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. The activity log is also valuable for
supporting governance efforts, security audits, and compliance requirements.
Next steps
In the next article in this series, learn about distributing content to a larger number of
viewers in the departmental BI usage scenario.
Power BI usage scenarios: Departmental
BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

As described in the Power BI adoption roadmap, departmental BI focuses on distributing


content to a larger number of users. These users are typically members of a department
or business unit.

When teams grow larger, it becomes impractical to use a workspace effectively for the
distribution of all reports (as described in the team BI scenario). A more effective way to
handle larger departmental BI scenarios is to use the workspace for collaboration and
distribute workspace content as an app to consumers.

7 Note

There are four content collaboration and delivery usage scenarios that build upon
each other. The departmental BI scenario is the third of the four scenarios. A list of
all scenarios can be found in the Power BI usage scenarios article.

The managed self-service BI scenario introduces an important concept about


decoupling dataset and report development. For simplicity, this concept isn't
explicitly discussed in this article. You're encouraged to apply the concepts
discussed in the managed self-service BI scenario whenever possible.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support departmental BI. The primary focus is on using a
Power BI app for content distribution to a large consumer audience.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

Power BI content creators develop BI solutions using Power BI Desktop. In a departmental BI scenario, it's common for creators to work within a decentralized team, department, or
business unit.

Power BI Desktop connects to data from one or more data sources. Queries and data
mashups, which combine multiple sources, are developed in the Power Query Editor.

Data model development and report creation are done in Power BI Desktop. In a
departmental BI solution, the purpose is to help colleagues understand the meaning and
significance of data by placing it in a visual context.

When ready, content creators publish their Power BI Desktop file (.pbix) to the Power BI
service.

The content is published to a workspace. Its primary purpose is to provide a collaboration area for people who are responsible for creating, managing, and validating content.

Some, or all, reports and dashboards are published as a Power BI app. The purpose of the
app is to provide a set of related content for consumers to view in a user-friendly way.

Power BI app users are assigned read-only permissions. App permissions are managed
separately from the workspace.

The Power BI mobile apps are also available for viewing apps and workspace content.

Users who frequently work in Microsoft Teams might find it convenient to manage or view
Power BI content directly in Teams.

Users assigned to the administrator, member, or contributor workspace roles can publish
and manage workspace content.

Scheduled data refresh is set up in the Power BI service to keep imported data—in
datasets or dataflows—up to date.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for data refresh.

Other self-service content creators can author new reports using an existing dataset. They
can choose to use Power BI Desktop, Excel, or Power BI Report Builder (not depicted in the
scenario diagram). The reuse of existing datasets in this manner is highly encouraged.

Power BI administrators oversee and monitor activity in the Power BI service. Departmental
BI solutions may be subject to more governance requirements than team BI solutions, but
fewer requirements than enterprise BI solutions.

Key points
The following are some key points to emphasize about the departmental BI scenario.

Source file storage


Power BI Desktop is the authoring tool to develop queries, models, and interactive
reports. For departmental BI, it's important to store the source Power BI Desktop file in a
secure, shared location. Locations such as OneDrive for work or school or SharePoint
(not depicted in the scenario diagram) are useful. A shared library is securable, easily
accessible by colleagues, and has built-in versioning capabilities.

When the co-management of a BI solution involves multiple people with different skillsets, consider decoupling the model and reports into separate Power BI Desktop
files (described in the managed self-service BI scenario). This approach encourages
reuse of the dataset and is more efficient than continually alternating between the
people who are editing the Power BI Desktop file. That's particularly helpful when, for
instance, one person works on the dataset while another person works on the reports.

Workspaces
A Power BI workspace serves as a logical container in the Power BI service for storing
related Power BI items, such as datasets and reports. Although this scenario depicts one
workspace, multiple workspaces are commonly required to satisfy all workspace
planning requirements.
The managed self-service BI scenario describes the use of separate workspaces.

Power BI app publication


For departmental BI, a Power BI app works well for content distribution to consumers
(rather than direct workspace access, which is described in the team BI scenario). A
Power BI app provides the best experience for consumers because it presents a set of
related content with a user-friendly navigation experience. A Power BI app is particularly
useful in situations where there's a larger and more diverse number of consumers, or
when the content developer doesn't work closely with the app consumers.

Power BI app permissions


Power BI app users are granted read-only permission to the app, and these permissions
are managed separately from the workspace. This additional level of flexibility is useful
for managing who can view the content.

For departmental BI, it's a best practice to limit workspace access to those who are
responsible for content authoring, development, and quality assurance activities.
Typically, only a small number of people genuinely require workspace access.
Consumers can access the content by opening the Power BI app, rather than opening
the workspace.

Power BI user licenses


All content creators and consumers of the workspace or the Power BI app must have a
Power BI Pro or Power BI Premium Per User (PPU) license .

7 Note

There's one exception to the requirement of a Power BI Pro or PPU license: When
the workspace is assigned to Premium capacity, Power BI free license users (with
proper permissions) can view the workspace (and/or app) content. This approach is
described in the enterprise BI scenario.

Reuse existing datasets


The reuse of existing datasets is important for team collaboration. It helps to promote a
single version of the truth. It's particularly important when a small number of dataset
creators support many report creators. A Power BI Desktop live connection can connect
a report to an existing dataset, avoiding the need to create another dataset.
Alternatively, when users prefer to create an Excel report, they can use the Analyze in
Excel feature. Retaining connectivity to the dataset is preferred to exporting data to
Excel because it:

Avoids creating duplicate datasets.


Reduces the risk of inconsistent data and calculations.
Supports all slicing, dicing, and pivoting capabilities within the visuals while
remaining connected to the dataset that's stored in the Power BI service.

To access an existing dataset, the content creator must have Build permission for the
dataset. It can be granted directly or indirectly when the user is assigned to a workspace
role (contributor or higher) or granted when publishing a Power BI app or sharing a
Power BI item. The managed self-service BI scenario explores the reuse of shared
datasets further.
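
Dataset reuse isn't limited to report authoring tools. As a hedged illustration, the following Python sketch runs a DAX query against an existing shared dataset through the executeQueries REST endpoint, so no data is duplicated. The dataset ID, table, and measure names are placeholders, and the caller needs Build permission on the dataset.

```python
# Minimal sketch; dataset ID, table, and measure names are hypothetical.
import os
import requests

API = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {os.environ['POWERBI_TOKEN']}"}
dataset_id = "00000000-0000-0000-0000-000000000000"  # hypothetical shared dataset

body = {
    "queries": [{
        "query": "EVALUATE SUMMARIZECOLUMNS('Date'[Year], \"Sales\", [Total Sales])"
    }],
    "serializerSettings": {"includeNulls": True},
}

response = requests.post(f"{API}/datasets/{dataset_id}/executeQueries",
                         headers=headers, json=body)
response.raise_for_status()

# Each query returns a result set containing one or more tables of rows.
for row in response.json()["results"][0]["tables"][0]["rows"]:
    print(row)
```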

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data and to serve reports that query a
live connection or DirectQuery dataset (not depicted in the scenario diagram).

7 Note

For team, departmental, and enterprise BI scenarios, a centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In
standard mode, the data gateway supports live connection and DirectQuery
operations (in addition to scheduled data refresh operations).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. The activity log is also valuable for
supporting governance efforts, security audits, and compliance requirements.

Next steps
In the next article in this series, learn about organization-wide content distribution at
scale in the enterprise BI scenario.
Power BI usage scenarios: Enterprise BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

As described in the Power BI adoption roadmap, enterprise BI is characterized by having a significantly larger number of content consumers, compared to a much smaller
number of authors who create and publish content.

The distinction between the enterprise BI and the departmental BI scenarios is the use of
Power BI Premium capacity, which allows content to be widely distributed to consumers
who have a Power BI free license. Consumers can include users within the organization,
as well as guest users who are external to the organization.

Large enterprise BI implementations often employ a centralized approach. Enterprise Power BI content is commonly maintained by a centralized team, for use broadly
throughout the organization. The centralized team responsible for content management
is usually IT, BI, or the Center of Excellence (COE).

7 Note

There are four content collaboration and delivery usage scenarios that build upon
each other. The enterprise BI scenario is the fourth scenario. A list of all scenarios
can be found in the Power BI usage scenarios article.

The managed self-service BI scenario introduces an important concept about decoupling dataset and report development. For simplicity, this concept isn't
explicitly discussed in this article. You're encouraged to apply the concepts
discussed in the managed self-service BI scenario whenever possible.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support enterprise BI. The primary focus is on
organization-wide content distribution at scale including the use of Power BI Premium
capacity. This scenario also depicts developing Power BI paginated reports.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

Power BI content creators develop BI solutions using Power BI Desktop. In an enterprise BI scenario, it's common that creators are members of a centralized team (such as IT, BI, or
the COE) that supports users across organizational boundaries.

Power BI Desktop connects to data from one or more data sources. Queries and data
mashups, which combine multiple sources, are developed in the Power Query Editor.

Data model development and report creation are done in Power BI Desktop. The purpose
is to help colleagues understand the meaning and significance of data by placing it in a
visual context.

When ready, content creators publish their Power BI Desktop file (.pbix) to the Power BI
service.

Report creators develop paginated reports by using Power BI Report Builder.

Power BI Report Builder queries data from one or more data source types. A paginated
report is produced to meet requirements for a highly formatted, print-ready report.

When ready, report creators publish their Power BI Report Builder file (.rdl) to the Power BI
service.

Multiple Power BI item types can be published to a Premium workspace.



In the enterprise BI scenario, use of Premium capacity (rather than Premium Per User) is
depicted. This choice is made to support content delivery to many content viewers who
have a free Power BI license.

Some, or all, reports and dashboards are published as a Power BI app. The purpose of the
app is to provide a set of related content for consumers to view in a user-friendly way.

Power BI app users are assigned read-only permissions. App permissions are managed
separately from the workspace. In an enterprise BI scenario, users with any type of Power
BI license (free, Power BI Pro, or PPU) can be assigned as a viewer of the app. This feature
applies only when the workspace is assigned a license mode of Premium per capacity
(free users cannot access workspace content when it's assigned a license mode of
Premium per user or Embedded).

The Power BI mobile apps are also available for viewing app and workspace content.

Users who frequently work in Microsoft Teams might find it convenient to manage or view
Power BI content directly in Teams.

Users assigned to the administrator, member, or contributor workspace roles can publish
and manage workspace content.

Scheduled data refresh is set up in the Power BI service to keep imported data—in
datasets or dataflows—up to date.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for data refresh.

Other self-service content creators can author new reports using an existing dataset. They
can choose to use Power BI Desktop, Excel, or Power BI Report Builder. The reuse of
existing datasets in this manner is highly encouraged. The managed self-service BI scenario
explores dataset reuse further.

Power BI administrators oversee and monitor activity in the Power BI service. Enterprise BI
solutions are often subject to stricter governance requirements than team BI or
departmental BI solutions.

Key points
The following are some key points to emphasize about the enterprise BI scenario.

Choice of report authoring tools


Power BI Desktop is a tool to develop highly interactive reports, whereas Power BI
Report Builder is a tool to develop paginated reports. For more information about when
to use paginated reports, see When to use paginated reports in Power BI.

Excel reports can also be published to the Power BI service (not depicted in the scenario
diagram) when a PivotTable or PivotChart better meets reporting requirements.

Source file storage


For enterprise BI, it's important to store the source Power BI Desktop files and Power BI
Report Builder files in a secure, shared location. Locations such as OneDrive for work or
school or SharePoint (not depicted in the scenario diagram) are useful. A shared library
is securable, easily accessible by colleagues, and has built-in versioning capabilities.

When the co-management of a BI solution involves multiple people with different skillsets, consider decoupling the model and reports into separate Power BI Desktop
files (described in the managed self-service BI scenario). This approach encourages
reuse of the dataset, and is more efficient than continually alternating between the
people who are editing the Power BI Desktop file. That's particularly helpful when, for
instance, one person works on the dataset while another person works on the reports.

Workspaces
A Power BI workspace serves as a logical container in the Power BI service for storing
related Power BI items, such as datasets and reports. Although this scenario depicts one
workspace, multiple workspaces are commonly required to satisfy all workspace
planning requirements.

The managed self-service BI scenario describes the use of separate workspaces.

Workspace license mode


A workspace license mode can be assigned to Pro, Premium per user (PPU), Premium
per capacity, or Embedded. This choice impacts feature availability, as well as which
users can access the content in the workspace and the associated Power BI app. An
enterprise BI scenario often involves many consumers of the content. So, it can be cost
effective to use the Premium per capacity license mode to distribute content to users
with a free license.
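
Workspace assignment to a capacity is typically done in the workspace settings, but it can also be scripted. The following Python sketch (IDs are placeholders) assigns a workspace to a Premium capacity through the REST API; it assumes the caller is a workspace admin with assignment permissions on the capacity.

```python
# Minimal sketch; workspace and capacity IDs are hypothetical.
import os
import requests

API = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {os.environ['POWERBI_TOKEN']}"}

# Capacities the caller has assignment rights to.
capacities = requests.get(f"{API}/capacities", headers=headers).json()["value"]
print([c.get("displayName") for c in capacities])

workspace_id = "11111111-1111-1111-1111-111111111111"  # hypothetical workspace
capacity_id = capacities[0]["id"]

r = requests.post(f"{API}/groups/{workspace_id}/AssignToCapacity",
                  headers=headers, json={"capacityId": capacity_id})
r.raise_for_status()  # success means the assignment request was accepted
```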

Power BI app publication


For enterprise BI, a Power BI app works well for content distribution to consumers
(rather than direct workspace access, which is described in the team BI scenario). A
Power BI app provides the best experience for consumers because it presents a set of
related content with a user-friendly navigation experience. A Power BI app is particularly
useful in situations where there's a larger and more diverse number of consumers, or
when the content developer doesn't work closely with the app consumers.

Power BI app permissions


Power BI app users are granted read-only permission to the app, and these permissions
are managed separately from the workspace. This additional level of flexibility is useful
for managing who can view the content.

For enterprise BI, it's a best practice to limit workspace access to those who are
responsible for content authoring, development, and quality assurance activities.
Typically, only a small number of people genuinely require workspace access.
Consumers can access the content by opening the Power BI app, rather than opening
the workspace.

Distribute content to Power BI free license users


Users with a Power BI free license (or Power BI Pro or PPU license) can view content
when they're granted app access or added to a workspace role—provided the workspace is
assigned to Premium capacity. This ability to distribute content to users with a free
license is not available for any of the other workspace license modes, including Pro,
Premium per user, or Embedded.

Power BI Premium capacity license


Use of a P SKU (such as P1, P2, P3, P4, or P5) is described in this scenario. A P SKU is
required for typical production scenarios and is appropriate for the enterprise BI
scenario described in this article.

Manage lifecycle of content


Generally, enterprise BI solutions require stability for production content. One aspect is
controlling when and how content is deployed to production. Use of deployment
pipelines is described in the self-service content publishing scenario.

Reuse existing datasets


The reuse of existing datasets is important for team collaboration. It helps to promote a
single version of the truth. It's particularly important when a small number of dataset
creators support many report creators. A Power BI Desktop live connection can connect
a report to an existing dataset, avoiding the need to create another dataset.
Alternatively, when users prefer to create an Excel report, they can use the Analyze in
Excel feature. Retaining connectivity to the dataset is preferred to exporting data to
Excel because it:

Avoids creating duplicate datasets.


Reduces the risk of inconsistent data and calculations.
Supports all slicing, dicing, and pivoting capabilities within the visuals while
remaining connected to the dataset that's stored in the Power BI service.

To access an existing dataset, the content creator must have Build permission for the
dataset. It can be granted directly or indirectly when the user is assigned to a workspace
role (contributor or higher) or granted when publishing a Power BI app or sharing a
Power BI item. The managed self-service BI scenario explores the reuse of shared
datasets further.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data and to serve reports that query a
live connection or DirectQuery dataset (not depicted in the scenario diagram).

7 Note

For team, departmental, and enterprise BI scenarios, a centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In
standard mode, the data gateway supports live connection and DirectQuery
operations (in addition to scheduled data refresh operations).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. The activity log is also valuable for
supporting governance efforts, security audits, and compliance requirements.
Next steps
In the next article in this series, learn more about the importance of reusing datasets in
the managed self-service BI scenario.
Power BI usage scenarios: Managed
self-service BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

As described in the Power BI adoption roadmap, managed self-service BI is characterized by a blended approach that emphasizes discipline at the core and flexibility at the edge.
The data architecture is usually maintained by a single team of centralized BI experts,
while reporting responsibility belongs to creators within departments or business units.

Usually, there are many more report creators than dataset creators. These report
creators can exist in any area of the organization. Because self-service report creators
often need to quickly produce content, a blended approach allows them to focus on
producing reports that support timely decision-making without the additional effort of
creating a dataset.

7 Note

The managed self-service BI scenario is the first of the self-service BI scenarios. For
a complete list of the self-service BI scenarios, see the Power BI usage scenarios
article.

For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support managed self-service BI. The primary objective is
for many report creators to reuse centralized shared datasets. To accomplish that, this
scenario focuses on decoupling the model development process from the report
creation process.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

Dataset creators develop models using Power BI Desktop. For datasets that are intended
for reuse, it's common (but not required) for creators to belong to a centralized team that
supports users across organizational boundaries (such as IT, enterprise BI, or the Center of
Excellence).

Power BI Desktop connects to data from one or more data sources.

Data model development is done in Power BI Desktop. Additional effort is made to create
a well-designed and user-friendly model because it will be used as a data source by many
self-service report creators.

When ready, dataset creators publish their Power BI Desktop file (.pbix) that contains only a
model to the Power BI service.

The dataset is published to a workspace dedicated to storing and securing shared datasets. Since the dataset is intended for reuse, it's endorsed (certified or promoted, as
appropriate). The dataset is also marked as discoverable to further encourage its reuse.
The lineage view in the Power BI service can be used to track dependencies that exist
between Power BI items, including reports connected to the dataset.

Dataset discovery in the data hub is enabled because the dataset is marked as
discoverable. Discoverability allows the existence of a dataset to be visible in the data hub
by other Power BI content creators who are looking for data.

Report creators use the data hub in the Power BI service to search for discoverable
datasets.

If report creators don't have permission, they can request Build permission on the dataset.
This starts a workflow to request Build permission from an authorized approver.

Report creators create new reports using Power BI Desktop. Reports use a live connection
to a shared dataset.

Report creators develop reports in Power BI Desktop.

When ready, report creators publish their Power BI Desktop file to the Power BI service.

Reports are published to a workspace dedicated to storing and securing reports and
dashboards.

Published reports remain connected to the shared datasets that are stored in a different
workspace. Any changes to the shared dataset affect all reports connected to it.

Other self-service report creators can author new reports using the existing shared dataset.
Report creators can choose to use Power BI Desktop, Power BI Report Builder, or Excel.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for data refresh.

Power BI administrators oversee and monitor activity in the Power BI service.

Key points
The following are some key points to emphasize about the managed self-service BI
scenario.

Shared dataset
The key aspect of making managed self-service BI work is to minimize the number of
datasets. This scenario is about shared datasets that help achieve a single version of the
truth.

7 Note

For simplicity, the scenario diagram depicts just one shared dataset. However, it's
not usually practical to model all organizational data in a single dataset. The other
extreme is to create a new dataset for every report, as less experienced content
creators often do. The goal of managed self-service BI is to find the right balance,
leaning toward relatively few datasets and creating new datasets when it makes
sense to do so.

Decouple dataset and reports


When the dataset is decoupled from reports, it facilitates the separation of effort and
responsibility. A shared dataset is commonly maintained by a centralized team (like IT,
BI, or Center of Excellence), while reports are maintained by subject matter experts in
the business units. However, that's not required. For example, this pattern can be
adopted by any content creator that wants to achieve reusability.

7 Note

For simplicity, dataflows aren't depicted in the scenario diagram. To learn about
dataflows, see the self-service data preparation scenario.

Dataset endorsement
Because shared datasets are intended for reuse, it's helpful to endorse them. A certified
dataset conveys to report creators that the data is trustworthy and meets the
organization's quality standards. A promoted dataset highlights that the dataset owner
believes the data is valuable and worthwhile for others to use.

 Tip

It's a best practice to have a consistent, repeatable, rigorous process for endorsing
content. Certified content should indicate that data quality has been validated. It
should also follow change management rules, have formal support, and be fully
documented. Because certified content has passed rigorous standards, the
expectations for trustworthiness are higher.

Dataset discovery
The data hub helps report creators find, explore, and use datasets across the
organization. In addition to dataset endorsement, enabling dataset discovery is critical
for promoting its reuse. A discoverable dataset is visible in the data hub for report
creators who are searching for data.

7 Note

If a dataset isn't configured to be discoverable, only Power BI users with Build permission can find it.

Request dataset access


A report creator may find a dataset in the data hub that they want to use. If they don't
have Build permission for the dataset, they can request access. Depending on the
request access setting for the dataset, an email will be submitted to the dataset owner
or custom instructions will be presented to the person who is requesting access.

Live connection to the shared dataset


A Power BI Desktop live connection connects a report to an existing dataset. Live
connections avoid the need to create a new data model in the Power BI Desktop file.

) Important

When using a live connection, all data that the report creator needs must reside
within the connected dataset. However, the customizable managed self-service BI
scenario describes how a dataset can be extended with additional data and
calculations.

Publish to separate workspaces


There are several advantages to publishing reports to a workspace different from where
the dataset is stored.

First, there's clarity on who's responsible for managing content in which workspace.
Second, report creators have permissions to publish content to a reporting workspace
(via workspace admin, member, or contributor roles). However, they only have Read and
Build permissions for specific datasets. This technique allows row-level security (RLS) to
take effect when necessary for users assigned to the viewer role.

) Important
When you publish a Power BI Desktop report to a workspace, the RLS roles are
applied to members who are assigned to the viewer role in the workspace. Even if
viewers have Build permission to the dataset, RLS still applies. For more
information, see Using RLS with workspaces in Power BI.

Dependency and impact analysis


When a shared dataset is used by many reports, those reports can exist in many
workspaces. The lineage view helps identify and understand the downstream
dependencies. When planning a dataset change, first perform impact analysis to
understand which dependent reports may require editing or testing.
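
Beyond the lineage view, the admin metadata scanning APIs can support impact analysis at scale. The following Python sketch (admin rights required; response field names are assumptions based on the scanner output) finds the reports in a set of workspaces that depend on a given shared dataset.

```python
# Minimal sketch; workspace and dataset IDs are hypothetical.
import os
import time
import requests

API = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"
headers = {"Authorization": f"Bearer {os.environ['POWERBI_ADMIN_TOKEN']}"}

workspace_ids = ["11111111-1111-1111-1111-111111111111"]    # hypothetical workspaces
target_dataset_id = "00000000-0000-0000-0000-000000000000"  # the shared dataset

# Start a metadata scan that includes lineage information.
scan = requests.post(f"{API}/getInfo?lineage=True", headers=headers,
                     json={"workspaces": workspace_ids}).json()

# Poll until the scan completes, then read the result.
while requests.get(f"{API}/scanStatus/{scan['id']}",
                   headers=headers).json().get("status") != "Succeeded":
    time.sleep(5)
result = requests.get(f"{API}/scanResult/{scan['id']}", headers=headers).json()

# Reports that depend on the target dataset may need editing or testing.
for ws in result.get("workspaces", []):
    for report in ws.get("reports", []):
        if report.get("datasetId") == target_dataset_id:
            print(f"Dependent report: {report.get('name')} (workspace {ws.get('name')})")
```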

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data and to serve reports that query a
live connection or DirectQuery dataset.

7 Note

For managed self-service BI scenarios, a centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In standard mode, the
data gateway supports live connection and DirectQuery operations (in addition to
scheduled data refresh operations).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. The activity log is also valuable for
supporting governance efforts, security audits, and compliance requirements. With a
managed self-service BI scenario, it's particularly helpful to track usage of shared
datasets. A high report-to-dataset ratio indicates good reuse of datasets.
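
As a rough illustration, the following Python sketch approximates that ratio from the admin REST APIs. It assumes admin rights, and field names such as datasetId should be treated as assumptions for your tenant.

```python
# Minimal sketch of a report-to-dataset ratio computed from the admin APIs.
import os
from collections import Counter
import requests

API = "https://api.powerbi.com/v1.0/myorg/admin"
headers = {"Authorization": f"Bearer {os.environ['POWERBI_ADMIN_TOKEN']}"}

reports = requests.get(f"{API}/reports", headers=headers).json()["value"]
datasets = requests.get(f"{API}/datasets", headers=headers).json()["value"]

reports_per_dataset = Counter(r["datasetId"] for r in reports if r.get("datasetId"))
ratio = len(reports) / max(len(datasets), 1)

print(f"{len(reports)} reports across {len(datasets)} datasets (ratio {ratio:.1f})")
for dataset_id, count in reports_per_dataset.most_common(5):
    print(f"Dataset {dataset_id} is used by {count} report(s)")
```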

Next steps
In the next article in this series, learn about ways to customize and extend a shared
dataset to meet additional types of requirements.
Power BI usage scenarios: Customizable
managed self-service BI
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

As described in the Power BI adoption roadmap, managed self-service BI is characterized by a blended approach that emphasizes discipline at the core and flexibility at the edge.
The data architecture is usually maintained by a single team of centralized BI experts,
while reporting responsibility belongs to creators within departments or business units.

However, when the core data architecture doesn't include all data required, dataset
creators can extend, personalize, or customize existing shared datasets. New specialized
datasets can be created that meet business requirements not met by existing centrally
delivered datasets. Importantly, there's no duplication of core data. This usage scenario
is called customizable managed self-service BI.

7 Note

This customizable managed self-service BI scenario is the second of the self-service BI scenarios. This scenario builds upon what can be done with a centralized shared
dataset (that was introduced in the managed self-service BI scenario). A list of all
scenarios can be found in the Power BI usage scenarios article.

For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components to support customizable managed self-service BI. The primary
focus is on providing content creators in the business units with the ability to create a
specialized data model by extending an existing shared dataset. The goal is to achieve
reusability whenever possible and to allow flexibility to meet additional analytical
requirements.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

Dataset creator A develops a model using Power BI Desktop. For a dataset that's intended
for reuse, it's common (but not required) for the creator to belong to a centralized team
that supports users across organizational boundaries (such as IT, enterprise BI, or the
Center of Excellence).

Power BI Desktop connects to data from one or more data sources.

Data model development is done in Power BI Desktop. Additional effort is made to create
a well-designed and user-friendly model because it may be used as a data source by many
self-service report creators.

When ready, dataset creator A publishes their Power BI Desktop file (.pbix) that contains
only a model to the Power BI service.

The dataset is published to a workspace dedicated to storing and securing shared datasets. Since the dataset is intended for reuse, it's endorsed (certified or promoted, as
appropriate). The dataset is also marked as discoverable to further encourage its reuse.
The lineage view in the Power BI service can be used to track dependencies that exist
between Power BI items.

Data discovery in the data hub is enabled because the dataset is marked as discoverable.
Discoverability allows the existence of a dataset to be visible in the data hub by other
Power BI content creators who are looking for data.

Dataset creator B uses the data hub in the Power BI service to search for discoverable
datasets.

If dataset creator B doesn't have permission, they can request Build permission on the
dataset. This starts a workflow to request Build permission from an authorized approver.

In Power BI Desktop, dataset creator B creates a live connection to the original shared
dataset that's located in the Power BI service. Since the intention is to extend and
customize the original dataset, the live connection is converted to a DirectQuery model.
This action results in a local model in the Power BI Desktop file.

Power BI Desktop connects to data from additional data sources. The goal is to augment
the shared dataset so that additional analytical requirements are met by the new
specialized dataset.

Relationships are created in Power BI Desktop between the existing tables (from the
shared dataset, also known as the remote model) and new tables just imported (stored in
the local model). Additional calculations and modeling work is done in Power BI Desktop
to complete the design of the specialized model.

When ready, dataset creator B publishes their Power BI Desktop file to the Power BI
service.

The new specialized dataset is published to a workspace dedicated to storing and securing
datasets that are owned and managed by the department.

The specialized dataset remains connected to the original Power BI shared dataset. Any
changes to the original shared dataset will affect downstream specialized datasets that
have dependency on it.

Other self-service report creators can author new reports connected to the specialized
dataset. Report creators can choose to use Power BI Desktop, Power BI Report Builder, or
Excel.

Report creators create new reports using Power BI Desktop.

Reports are published to a workspace dedicated to storing and securing reports and
dashboards.

Published reports remain connected to the specialized dataset that's stored in a different
workspace. Any changes to the specialized dataset affect all reports connected to it.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for data refresh.

Power BI administrators oversee and monitor activity in the Power BI service.

Key points
The following are some key points to emphasize about the customizable managed self-
service BI scenario.

Shared dataset
The key aspect of making managed self-service BI work is to minimize the number of
datasets. This scenario depicts a shared dataset that contributes towards achieving a
single version of the truth.

7 Note

For simplicity, the scenario diagram depicts just one shared dataset. However, it's
not usually practical to model all organizational data in a single dataset. The other
extreme is to create a new dataset for every report, as less experienced content
creators often do. The goal is to find the right balance, leaning toward relatively few
datasets and creating new datasets when it makes sense to do so.

Augment the initial shared dataset


Sometimes self-service creators need to augment an existing dataset with, for instance,
additional data that's specific to their department. In this case, they can use DirectQuery
connections to Power BI datasets. This feature allows for an ideal balance of self-service
enablement while taking advantage of the investment in centrally managed data assets.
The scenario diagram depicts a DirectQuery connection. The act of converting a live
connection to a DirectQuery connection creates a local model that allows new tables to
be added. Relationships can be created between tables from the original shared dataset
(the remote model) and new tables just added (the local model). Additional calculations
and data modeling can be done to customize the new data model.

 Tip

This scenario highlights reusing a shared dataset. However, sometimes there are
situations when data modelers want to limit the creation of downstream data
models. In that case, they can enable the Discourage DirectQuery connections
property in the Power BI Desktop settings.

Dataset endorsement
Because shared datasets are intended for reuse, it's helpful to endorse them. A certified
dataset conveys to report creators that the data is trustworthy and meets the
organization's quality standards. A promoted dataset highlights that the dataset owner
believes the data is valuable and worthwhile for others to use.

 Tip

It's a best practice to have a consistent, repeatable, rigorous process for endorsing
content. Certified content should indicate that data quality has been validated. It
should also follow change management rules, have formal support, and be fully
documented. Because certified content has passed rigorous standards, the
expectations for trustworthiness are higher.

Dataset discovery
The data hub helps report creators find, explore, and use datasets across the
organization. In addition to dataset endorsement, enabling dataset discovery is critical
for promoting its reuse. A discoverable dataset is visible in the data hub for report
creators who are searching for data.

7 Note

If a dataset isn't configured to be discoverable, only Power BI users with Build permission can find it.

Request dataset access


A report creator may find a dataset in the data hub that they want to use. If they don't
have Build permission for the dataset, they can request access. Depending on the
request access setting for the dataset, an email will be submitted to the dataset owner
or custom instructions will be presented to the person who is requesting access.

Publish to separate workspaces


There are several advantages to publishing reports to a workspace different from where
the dataset is stored.

First, there's clarity on who's responsible for managing content in which workspace.
Second, report creators have permissions to publish content to a reporting workspace
(via workspace admin, member, or contributor roles). However, they only have Read and
Build permissions for specific datasets. This technique allows row-level security (RLS) to
take effect when necessary for users assigned to the viewer role.

Dependency and impact analysis


When a shared dataset is used by other datasets or reports, those dependent objects
can exist in many workspaces. The lineage view helps identify and understand the
downstream dependencies. When planning a dataset change, first perform impact
analysis to understand which datasets or reports should be edited or tested.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data and to serve reports that query a
live connection or DirectQuery dataset.

7 Note

For customizable managed self-service BI scenarios, a centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In
standard mode, the data gateway supports live connection and DirectQuery
operations (in addition to scheduled data refresh operations).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. The activity log is also valuable for
supporting governance efforts, security audits, and compliance requirements. With a
customizable managed self-service BI scenario, it's particularly helpful to track usage of
the original shared dataset as well as dependent datasets.
Next steps
In the next article in this series, learn about reusing data preparation work with
dataflows in the self-service data preparation scenario.
Power BI usage scenarios: Self-service
data preparation
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

Data preparation (sometimes referred to as ETL, which is an acronym for Extract, Transform, and Load) often involves a significant amount of work depending on the
quality and structure of source data. The self-service data preparation usage scenario
focuses on the reusability of data preparation activities by business analysts. It achieves
this goal of reusability by relocating the data preparation work from Power Query
(within individual Power BI Desktop files) to Power Query Online (using a Power BI
dataflow). The centralization of the logic helps achieve a single source of the truth and
reduces the level of effort required by other content creators.

Dataflows are created by using Power Query Online in one of several tools: the Power
BI service, Power Apps, or Dynamics 365 Customer Insights. A dataflow created in Power
BI is referred to as an analytical dataflow. Dataflows created in Power Apps can either be
one of two types: standard or analytical. This scenario only covers using a Power BI
dataflow that's created and managed within the Power BI service.

7 Note

The self-service data preparation scenario is one of the self-service BI scenarios. For
a complete list of the self-service scenarios, see the Power BI usage scenarios
article.

For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support self-service data preparation. The primary focus
is on creating a dataflow in Power Query Online that becomes a source of data for
multiple datasets. The goal is for many datasets to leverage the data preparation that's
done once by the dataflow.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

The dataflow creator develops a collection of tables within a Power BI dataflow. For a
dataflow that's intended for reuse, it's common (but not required) for the creator to
belong to a centralized team that supports users across organizational boundaries (such as
IT, enterprise BI, or the Center of Excellence).

The dataflow connects to data from one or more data sources.

Dataflows are developed using Power Query Online, which is a web-based version of
Power Query. The familiar Power Query interface in Power Query Online makes the
transition from Power BI Desktop easy.

The dataflow is saved as an item in a workspace that's dedicated to storing and securing
dataflows. A dataflow refresh schedule is required to keep the data current (not depicted
in the scenario diagram).

The dataset creator develops a new data model using Power BI Desktop.

The dataflow is a data source for the new data model.

The dataset creator can use the full capabilities of Power Query within Power BI Desktop.
They can optionally apply additional query steps to further transform the dataflow data or
merge the dataflow output.

When ready, the dataset creator publishes the Power BI Desktop file (.pbix) that contains
the data model to the Power BI service. Refresh for the dataset is managed separately from
the dataflow (not depicted in the scenario diagram).

The dataflow can be reused as a data source by other datasets that could reside in
different workspaces.

Power BI administrators manage settings in the Admin portal.

In the Admin portal, Power BI administrators can configure Azure connections to store
dataflow data in their Azure Data Lake Storage Gen2 (ADLS Gen2) account. Settings
include assigning a tenant-level storage account and enabling workspace-level storage
permissions.

By default, dataflows store data using internal storage that's managed by the Power BI
service. Optionally, data output by the dataflow can be stored in the organization's ADLS
Gen2 account. This type of storage is sometimes called bring your own data lake. A benefit
of storing dataflow data in the data lake is that it can be accessed and consumed by other
BI tools.

Dataflow data in ADLS Gen2 is stored within a Power BI-specific container known as
filesystem. Within this container, a folder exists for each workspace. A subfolder is created
for each dataflow, as well as for each table. Power BI generates a snapshot each time the
dataflow data is refreshed. Snapshots are self-describing, comprising metadata and data
files.

Other self-service dataset creators can create new data models in Power BI Desktop using
the dataflow as a data source.

Azure administrators manage permissions for the organization's ADLS Gen2 account.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for data refresh.

Power BI administrators oversee and monitor activity in the Power BI service.

 Tip
We recommend that you review the advanced data preparation usage scenario
too. It builds upon concepts introduced in this scenario.

Key points
The following are some key points to emphasize about the self-service data preparation
scenario.

Dataflows
A dataflow comprises a collection of tables (also known as entities). All work to create a
dataflow is done in Power Query Online . You can create dataflows in multiple
products, including Power Apps, Dynamics 365 Customer Insights, and Power BI.

7 Note

You can't create dataflows in a personal workspace in the Power BI service.

Support dataset creators


The scenario diagram depicts using a Power BI dataflow to provide prepared data to
other self-service dataset creators.

7 Note

Datasets use the dataflow as a data source. A report can't connect directly to a
dataflow.

Here are some advantages of using Power BI dataflows:

Dataset creators use the same familiar Power Query interface found in Power BI
Desktop.
Data preparation and data transformation logic defined by a dataflow can be
reused many times because it's centralized.
When data preparation logic in the dataflow changes, dependent data models don't
always require updating. However, removing or renaming columns, or changing
column data types, does require updating dependent data models.
Pre-prepared data can easily be made available to Power BI dataset creators. Reuse
is particularly helpful for commonly used tables—especially dimension tables, like
date, customer, and product.
The level of effort required by dataset creators is reduced because the data
preparation work has been decoupled from the data modeling work.
Fewer dataset creators need direct access to source systems. Source systems can
be complex to query and may require specialized access permissions.
The number of refreshes executed on source systems is reduced because dataset
refreshes connect to dataflows, and not to the source systems from which
dataflows extract data.
Dataflow data represents a snapshot in time, and promotes consistency when used
by many datasets.
Decoupling data preparation logic into dataflows can help improve dataset refresh
success. If a dataflow refresh fails, datasets will refresh using the last successful
dataflow refresh.

 Tip

Create dataflow tables by applying star schema design principles. A star schema
design is well-suited to creating Power BI datasets. Also, refine the dataflow output
to apply friendly names and use specific data types. These techniques promote
consistency in dependent datasets and helps reduce the amount of work that
dataset creators need to do.

Dataset creator flexibility


When a dataset creator connects to a dataflow in Power BI Desktop, the creator isn't
limited to using the exact dataflow output. They still have the full functionality of Power
Query available to them. This functionality is useful if additional data preparation work is
required, or the data requires further transformation.

Dataflow advanced features


There are many design techniques, patterns, and best practices for dataflows that can
take them from self-service to enterprise-ready. Dataflows in a workspace that has its
license mode set to Premium per user or Premium per capacity can benefit from
advanced features.

7 Note

One of the advanced features is incremental refresh for dataflows. Although incremental refresh for datasets is a Power BI Pro feature, incremental refresh for
dataflows is a Premium feature.

To learn more about dataflow advanced features, see the advanced data
preparation usage scenario.

Dataflow and dataset refresh


As previously mentioned, a dataflow is a source of data for datasets. In most cases,
multiple data refresh schedules are involved: one for the dataflow and one for each
dataset. Alternatively, it's possible to use DirectQuery from the dataset to the dataflow,
which is a Premium feature (not depicted in the scenario diagram).
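
When schedules alone aren't enough, both refreshes can be orchestrated with the REST API. The following Python sketch (IDs are placeholders) triggers the dataflow refresh first and then the dependent dataset refresh; in practice you'd poll the dataflow refresh status rather than wait a fixed time.

```python
# Minimal sketch; workspace, dataflow, and dataset IDs are hypothetical.
import os
import time
import requests

API = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {os.environ['POWERBI_TOKEN']}"}

dataflow_workspace_id = "11111111-1111-1111-1111-111111111111"
dataflow_id = "22222222-2222-2222-2222-222222222222"
dataset_workspace_id = "33333333-3333-3333-3333-333333333333"
dataset_id = "44444444-4444-4444-4444-444444444444"

# Kick off the dataflow refresh first.
requests.post(f"{API}/groups/{dataflow_workspace_id}/dataflows/{dataflow_id}/refreshes",
              headers=headers, json={"notifyOption": "MailOnFailure"}).raise_for_status()

time.sleep(600)  # placeholder wait; replace with polling of the dataflow refresh status

# Then refresh the dataset that sources its data from the dataflow.
requests.post(f"{API}/groups/{dataset_workspace_id}/datasets/{dataset_id}/refreshes",
              headers=headers, json={"notifyOption": "MailOnFailure"}).raise_for_status()
```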

Azure Data Lake Storage Gen2


In Microsoft Azure, an ADLS Gen2 account is a specific type of Azure Storage account
that has the hierarchical namespace enabled. ADLS Gen2 has performance,
management, and security advantages for operating analytical workloads. By default,
Power BI dataflows use internal storage, which is a built-in data lake account managed
by the Power BI service. Optionally, organizations may bring their own data lake by
connecting to their organization's ADLS Gen2 account.

Here are some advantages of using the organization's data lake account:

The data stored by a Power BI dataflow can (optionally) be accessed from the data
lake by other users or processes. That's helpful when dataflow reuse occurs beyond
Power BI. For example, the data could be accessed by Azure Data Factory.
The data in the data lake can (optionally) be managed by other tools or systems. In
this case, Power BI could consume the data rather than manage it (not depicted in
the scenario diagram).

Tenant-level storage
The Azure connections section of the Admin portal includes a setting to configure a
connection to an ADLS Gen2 account. Configuring this setting enables bring your own
data lake. Once configured, you may set workspaces to use that data lake account.

) Important

Setting Azure connections does not mean that all dataflows in the Power BI tenant
are stored in this account by default. In order to use an explicit storage account
(instead of internal storage), each workspace must be specifically connected.
It's critical to set the workspace Azure connections prior to creating any dataflows in
the workspace. The same Azure storage account is used for Power BI dataset
backups.

Workspace-level storage
A Power BI administrator can configure a setting to allow workspace-level storage
permissions (in the Azure connections section of the Admin portal). When enabled, this
setting allows workspace administrators to use a different storage account than the one
defined at the tenant-level. Enabling this setting is particularly helpful for decentralized
business units who manage their own data lake in Azure.

7 Note

The workspace-level storage permission in the Admin portal applies to all workspaces in the Power BI tenant.

Common Data Model format


The data in an ADLS Gen2 account is stored in the Common Data Model (CDM) structure.
The CDM structure is a metadata format that dictates how the self-describing schema,
as well as the data, is stored. The CDM structure enables semantic consistency in a
format that's standardized for sharing data across numerous applications (not depicted
in the scenario diagram).
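
As a hedged illustration of consuming that output outside Power BI, the following Python sketch reads a dataflow's model.json from the organization's ADLS Gen2 account and lists each entity's snapshot partitions. The storage account, workspace, and dataflow names are placeholders, and the exact folder layout should be verified against your tenant.

```python
# Minimal sketch; assumes the azure-identity and azure-storage-file-datalake
# packages, and that the caller has read access to the dataflow storage.
import json

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

account_url = "https://yourlake.dfs.core.windows.net"    # hypothetical storage account
service = DataLakeServiceClient(account_url, credential=DefaultAzureCredential())
filesystem = service.get_file_system_client("powerbi")   # container used by dataflows

# Each workspace has a folder; each dataflow folder contains a model.json that
# describes its entities (tables) and snapshot partitions.
path = "Sales Workspace/Sales Dataflow/model.json"       # hypothetical workspace/dataflow
model = json.loads(filesystem.get_file_client(path).download_file().readall())

for entity in model.get("entities", []):
    partitions = entity.get("partitions", [])
    print(f"{entity['name']}: {len(partitions)} snapshot partition(s)")
    for p in partitions:
        print("  ", p.get("location"))  # URL of a CSV snapshot file
```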

Publish to separate workspaces


There are several advantages to publishing a dataflow to a workspace that's separate
from where the dependent datasets are stored. One advantage is clarity on who's
responsible for managing which types of content (if you have different people handling
different responsibilities). Another advantage is that specific workspace permissions can
be assigned for each type of content.

7 Note

You can't create dataflows in a personal workspace in the Power BI service.

The advanced data preparation usage scenario describes how to set up multiple
workspaces to provide better flexibility when supporting enterprise-level self-
service creators.

Gateway setup
Typically, an On-premises data gateway is required for connecting to data sources that
reside within a private organizational network or a virtual network.

A data gateway is required when:

Authoring a dataflow in Power Query Online that connects to private organizational data.
Refreshing a dataflow that connects to private organizational data.

 Tip

Dataflows require a centralized data gateway in standard mode. A gateway in personal mode isn't supported when working with dataflows.

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. The activity log is also valuable for
supporting governance efforts, security audits, and compliance requirements. With a
self-service data preparation scenario, it's particularly helpful to track usage of
dataflows.

Next steps
In the next article in the series, learn about the advanced data preparation usage
scenario.
Power BI usage scenarios: Advanced
data preparation
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

Data preparation (sometimes referred to as ETL, which is an acronym for Extract, Transform, and Load) activities often involve a large effort. The time, skill, and effort
involved with collecting, cleaning, combining, and enriching data depends on the quality
and structure of source data.

Investing time and effort in centralized data preparation helps to:

Enhance reusability and gain maximum value from data preparation efforts.
Improve the ability to provide consistent data to multiple teams.
Reduce the level of effort required by other content creators.
Achieve scale and performance.

The advanced data preparation usage scenario expands on the self-service data
preparation scenario. Advanced data preparation is about increasing dataflow reuse by
multiple users across various teams and for various use cases.

Separate workspaces, organized by dataflow purpose, are helpful when dataflow output
is provided to multiple dataset creators, especially when they are on different teams in
the organization. Separate workspaces are also helpful for managing security roles when
the people who create and manage dataflows are different from the people who consume
them.

7 Note

The advanced data preparation scenario is the second of the data preparation
scenarios. This scenario builds upon what can be done with centralized dataflows as
described in the self-service data preparation scenario.

The advanced data preparation scenario is one of the self-service BI scenarios.


However, a centralized team member can use the techniques in a similar way to
what's described in the managed self-service BI scenario. For a complete list of the
self-service scenarios, see the Power BI usage scenarios article.

For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram

 Tip

We recommend that you review the self-service data preparation usage scenario if
you're not familiar with it. The advanced self-service data preparation scenario
builds upon that scenario.

The focus of this advanced data preparation scenario is on:

The use of separate dataflows based on purpose: staging, transformation, or final. We recommend using composable building blocks to obtain greater reuse, in various combinations, to support specific user requirements. Composable building blocks are described later in this article.
The use of separate workspaces that support dataflow creators or dataflow consumers. Data modelers, who consume dataflows, may be on different teams and/or have different use cases.
The use of linked tables (also known as linked entities), computed tables (also known as computed entities), and the enhanced compute engine.

7 Note

Sometimes the terms dataset and data model are used interchangeably. Generally,
from a Power BI service perspective, it's referred to as dataset. From a development
perspective, it's referred to as a data model (or model for short). In this article, both
terms have the same meaning. Similarly, a dataset creator and a data modeler have
the same meaning.

The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support the advanced data preparation scenario.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

The dataflow creator develops a collection of tables within a dataflow. For a dataflow that's
intended for reuse, it's common (but not required) that the creator belongs to a
centralized team that supports users across organizational boundaries (such as IT,
enterprise BI, or the Center of Excellence).

The dataflow connects to data from one or more data sources.

Dataflow creators develop dataflows by using Power Query Online, which is a web-based
version of Power Query.

A staging dataflow is created in a workspace that's dedicated to the centralized management of dataflows. A staging dataflow copies the raw data as-is from the source. Few, if any, transformations are applied.

A transformation dataflow (also known as a cleansed dataflow) is created in the same workspace. It sources data by using linked table(s) to the staging dataflow. Computed table(s) include transformation steps that prepare, cleanse, and reshape the data.

Dataflow creators have access to manage content in the workspace that's dedicated to the
centralized management of dataflows.

One or more other workspaces exist that are intended to provide access to the final
dataflow, which delivers production-ready data to data models.

The final dataflow is created in a workspace available to data modelers. It sources data by
using linked table(s) to the transformation dataflow. Computed table(s) represent the
prepared output that's visible to data modelers who are granted the workspace viewer
role.

Dataset creators (who consume the dataflow output) have viewer access to the workspace
that contains the final dataflow output. Dataflow creators also have access to manage and
publish content in the workspace (not depicted in the scenario diagram).

All of the workspaces involved have their license mode set to Premium per user, Premium
per capacity, or Embedded. These license modes allow for the use of linked tables and
computed tables across workspaces, which are required in this scenario.

Dataset creators use the final dataflow as a data source when developing a data model in
Power BI Desktop. When ready, the dataset creator publishes the Power BI Desktop file
(.pbix) that contains the data model to the Power BI service (not depicted in the scenario
diagram).

Power BI administrators manage settings in the Admin portal.

In the Admin portal, Power BI administrators can configure Azure connections to store
dataflow data in their Azure Data Lake Storage Gen2 (ADLS Gen2) account. Settings
include assigning a tenant-level storage account and enabling workspace-level storage
permissions.

By default, dataflows store data by using internal storage that's managed by the Power BI
service. Optionally, data output by the dataflow can be stored in the organization's ADLS
Gen2 account.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for authoring the dataflow in Power Query Online. The
data gateway is also used for refreshing the dataflow.

Power BI administrators oversee and monitor activity in the Power BI service.

Key points
The following are some key points to emphasize about the advanced data preparation
scenario.

Dataflows
A dataflow comprises a collection of tables (also known as entities). Each table is defined
by a query, which contains the data preparation steps required to load the table with
data. All work to create a dataflow is done in Power Query Online. You can create a
dataflow in multiple products, including Power Apps, Dynamics 365 Customer Insights,
and Power BI.

7 Note

You can't create dataflows in a personal workspace in the Power BI service.

Types of dataflows
Use of composable building blocks is a design principle that allows you to manage,
deploy, and secure system components, and then use them in various combinations.
Creating modular, self-contained dataflows that are specific to a purpose is a best
practice. They help to achieve data reuse and enterprise scale. Modular dataflows are
also easier to manage and test.

Three types of dataflows are shown in the scenario diagram: staging dataflow,
transformation dataflow, and final dataflow.

Staging dataflow

A staging dataflow (sometimes called a data extraction dataflow) copies raw data as-is
from the source. Having the raw data extracted with minimal transformation means that
downstream transformation dataflows (described next) can use the staging dataflow as
their source. This modularity is useful when:

Access to a data source is restricted to narrow time windows and/or to a few users.
Temporal consistency is desired to ensure that all downstream dataflows (and
related datasets) deliver data that was extracted from the data source at the same
time.
Reducing the number of queries submitted to the data source is necessary due to
source system restrictions or its ability to support analytical queries.
A copy of the source data is useful for reconciliation processes and data quality
verifications.

Transformation dataflow
A transformation dataflow (sometimes called a cleansed dataflow) sources its data from
linked tables that connect to the staging dataflow. It's a best practice to separate out
transformations from the data extraction process.

A transformation dataflow includes all the transformation steps required to prepare and
restructure the data. However, there's still a focus on reusability at this layer to ensure
the dataflow is suitable for multiple use cases and purposes.

Final dataflow
A final dataflow represents the prepared output. Some additional transformations may
occur based on the use case and purpose. For analytics, a star schema table (dimension
or fact) is the preferred design of the final dataflow.

Computed tables are visible to data modelers that are granted the workspace viewer
role. This table type is described in the types of dataflow tables topic below.

7 Note

Data lakes often have zones, like bronze, silver, and gold. The three types of
dataflows represent a similar design pattern. To make the best possible data
architecture decisions, give thought to who will maintain the data, the expected use
of the data, and the skill level required by people accessing the data.

Workspaces for dataflows


If you were to create all dataflows in a single workspace, it would significantly limit the
extent of reusability. Using a single workspace also limits the security options available
when supporting multiple types of users across teams and/or for different use cases. We
recommend using multiple workspaces. They provide better flexibility when you need to
support self-service creators from various areas of the organization.

The two types of workspaces shown in the scenario diagram include:

Workspace 1: It stores centrally managed dataflows (sometimes referred to as a backend workspace). It contains both the staging and transformation dataflows because they're managed by the same people. Dataflow creators are often from a centralized team, such as IT, BI, or the Center of Excellence. They should be assigned to either the workspace admin, member, or contributor role.
Workspace 2: It stores and delivers the final dataflow output to consumers of the
data (sometimes referred to as a user workspace). Dataset creators are often self-
service analysts, power users, or citizen data engineers. They should be assigned to
the workspace viewer role because they only need to consume the output of the
final dataflow. To support dataset creators from various areas of the organization,
you can create numerous workspaces like this one, based on use case and security
needs.

 Tip

We recommend reviewing ways to support dataset creators as described in the self-service data preparation usage scenario. It's important to understand that dataset creators can still use the full capabilities of Power Query within Power BI Desktop. They can choose to add query steps to further transform the dataflow data or merge the dataflow output with other sources.

Types of dataflow tables


Three types of dataflow tables (also known as entities) are depicted in the scenario
diagram.

Standard table: Queries an external data source, such as a database. In the scenario diagram, standard tables are depicted in the staging dataflow.
Linked table: References a table from another dataflow. A linked table doesn't
duplicate the data. Rather, it allows the reuse of a standard table multiple times for
multiple purposes. Linked tables aren't visible to workspace viewers since they
inherit permissions from the original dataflow. In the scenario diagram, linked
tables are depicted twice:
In the transformation dataflow for accessing the data in the staging dataflow.
In the final dataflow for accessing the data in the transformation dataflow.
Computed table: Performs additional computations by using a different dataflow
as its source. Computed tables allow customizing the output as needed for
individual use cases. In the scenario diagram, computed tables are depicted twice:
In the transformation dataflow for performing common transformations.
In the final dataflow for delivering output to dataset creators. Since computed
tables persist the data again (after the dataflow refresh), data modelers can
access the computed tables in the final dataflow. In this case, data modelers
should be granted access with the workspace viewer role.

7 Note
There are many design techniques, patterns, and best practices that can take
dataflows from self-service to enterprise-ready. Also, dataflows in a workspace that
has its license mode set to Premium per user or Premium capacity can benefit
from advanced features. Linked tables and computed tables (also known as
entities) are two advanced features that are essential for increasing the reusability
of dataflows.

Enhanced compute engine


The enhanced compute engine is an advanced feature available with Power BI Premium.
The enhanced compute engine improves performance of linked tables (within the same
workspace) that reference (link to) the dataflow. To get maximum benefit from the
enhanced compute engine:

Split out the staging and transformation dataflows.
Use the same workspace to store the staging and transformation dataflows.
Apply complex operations that support query folding early in the query steps. Prioritizing foldable operations can help to achieve the best refresh performance.
Use incremental refresh to reduce refresh durations and resource consumption.
Perform testing early and frequently during the development phase.

Dataflow and dataset refresh


A dataflow is a source of data for datasets. In most cases, multiple data refresh
schedules are involved: one for each dataflow and one for each dataset. Alternatively, it's
possible to use DirectQuery from the dataset to the dataflow, which requires Power BI
Premium and the enhanced compute engine (not depicted in the scenario diagram).
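
To illustrate how these refresh operations can be orchestrated, the following Python sketch chains a dataflow refresh and a dependent dataset refresh by calling the Power BI REST APIs. It's a minimal sketch: the workspace, dataflow, and dataset IDs are placeholders, and it assumes you've already acquired an Azure AD access token for the Power BI REST API (for example, with a service principal).

    # Sketch: trigger a dataflow refresh, then a dependent dataset refresh,
    # by calling the Power BI REST APIs. IDs and the token are placeholders.
    import requests

    ACCESS_TOKEN = "<Azure AD access token for the Power BI REST API>"
    WORKSPACE_ID = "<workspace (group) ID>"
    DATAFLOW_ID = "<dataflow ID>"
    DATASET_ID = "<dataset ID>"

    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    base = f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"

    # Start the dataflow refresh (an asynchronous operation).
    requests.post(
        f"{base}/dataflows/{DATAFLOW_ID}/refreshes",
        headers=headers,
        json={"notifyOption": "MailOnFailure"},
    ).raise_for_status()

    # In a real orchestration, poll the dataflow refresh status until it
    # completes before triggering the dependent dataset refresh.
    requests.post(
        f"{base}/datasets/{DATASET_ID}/refreshes",
        headers=headers,
    ).raise_for_status()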

Azure Data Lake Storage Gen2


An ADLS Gen2 account is a specific type of Azure storage account that has the
hierarchical namespace enabled. ADLS Gen2 has performance, management, and
security advantages for operating analytical workloads. By default, Power BI dataflows
use internal storage, which is a built-in data lake account managed by the Power BI
service. Optionally, organizations may bring their own data lake by connecting to an
ADLS Gen2 account in their organization.

Here are some advantages of using your own data lake:

Users (or processes) can directly access the dataflow data stored in the data lake.
That's helpful when dataflow reuse occurs beyond Power BI. For example, Azure
Data Factory could access the dataflow data.
Other tools or systems can manage the data in the data lake. In this case, Power BI
could consume the data rather than manage it (not depicted in the scenario
diagram).

When using linked tables or computed tables, make sure that each workspace is
assigned to the same ADLS Gen2 storage account.

7 Note

Dataflow data in ADLS Gen2 is stored within a Power BI-specific container. This
container is depicted in the self-service data preparation usage scenario diagram.
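
As a hedged illustration of how dataflow data in the data lake can be consumed outside of Power BI, the following Python sketch lists and reads dataflow output files from an ADLS Gen2 account by using the Azure Storage SDK. The storage account URL, the filesystem name (assumed here to be powerbi), and the workspace/dataflow folder layout are assumptions to verify against your own tenant, and the caller needs appropriate data lake permissions.

    # Sketch: read dataflow output directly from your organization's ADLS Gen2
    # account (bring-your-own data lake). The filesystem and folder names are
    # assumptions; dataflow output is typically stored as CDM folders
    # (a model.json file plus CSV snapshot files).
    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    ACCOUNT_URL = "https://<storageaccount>.dfs.core.windows.net"
    FILESYSTEM = "powerbi"                       # assumed container name
    FOLDER = "<workspace name>/<dataflow name>"  # assumed folder layout

    service = DataLakeServiceClient(account_url=ACCOUNT_URL, credential=DefaultAzureCredential())
    filesystem = service.get_file_system_client(FILESYSTEM)

    # List the files that the dataflow wrote to the data lake.
    for path in filesystem.get_paths(path=FOLDER):
        print(path.name)

    # Download the model.json metadata file that describes the dataflow tables.
    model_json = filesystem.get_file_client(f"{FOLDER}/model.json").download_file().readall()
    print(model_json.decode("utf-8")[:500])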

Admin portal settings


There are two important settings to manage in the Admin portal:

Azure connections: The Azure connections section of the Admin portal includes a
setting to set up a connection to an ADLS Gen2 account. This setting allows a
Power BI administrator to bring your own data lake to dataflows. Once configured,
workspaces can use that data lake account for storage.
Workspace-level storage: A Power BI administrator can set workspace-level
storage permissions. When enabled, the setting allows workspace administrators
to use a different storage account to the one set at tenant-level. Enabling this
setting is helpful for decentralized business units that manage their own data lake
in Azure.

Gateway setup
Typically, an On-premises data gateway is required for connecting to data sources that
reside within a private organizational network or a virtual network.

A data gateway is required when:

Authoring a dataflow in Power Query Online that connects to private organizational data.
Refreshing a dataflow that connects to private organizational data.

 Tip
Dataflows require a centralized data gateway in standard mode. A gateway in
personal mode isn't supported when working with dataflows.

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. The activity log is also valuable for
supporting governance efforts, security audits, and compliance requirements. In the
advanced data preparation scenario, the activity log data is helpful to track the
management and use of dataflows.
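
As an illustration, the following Python sketch retrieves one day of activity log events with the Get Activity Events admin REST API and filters for dataflow-related activities. It's a minimal sketch: it assumes the caller has Power BI administrator permissions (or is a service principal that's allowed to call read-only admin APIs) and that a valid access token has already been acquired.

    # Sketch: pull one day of activity log events with the Power BI admin REST
    # API and keep only dataflow-related activities. Dates and the token are
    # placeholders.
    import requests

    ACCESS_TOKEN = "<Azure AD access token for the Power BI REST API>"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    url = (
        "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
        "?startDateTime='2023-03-01T00:00:00Z'&endDateTime='2023-03-01T23:59:59Z'"
    )

    events = []
    while url:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        payload = response.json()
        events.extend(payload.get("activityEventEntities", []))
        url = payload.get("continuationUri")  # follow continuation until exhausted

    dataflow_events = [e for e in events if "Dataflow" in e.get("Activity", "")]
    print(f"{len(dataflow_events)} dataflow-related events found")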

Next steps
For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Power BI usage scenarios: Prototyping
and sharing
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

As described in the Power BI adoption roadmap, the purpose of phase 1 of solution adoption is exploration, experimentation, and obtaining useful feedback from a small group of users.

A prototype, or proof of concept (POC), is a Power BI solution that's intended to address unknowns and mitigate risk. This solution may be shared with others to get feedback during development iterations. The solution may be temporary and short-lived, or it may ultimately evolve into a solution that's fully validated and released.
Creating a prototype is commonly done for departmental BI and enterprise BI scenarios
(and may occasionally be done for team BI scenarios).

Prototyping often occurs naturally during self-service BI development efforts. Alternatively, a prototype might be a small project that has specific goals and a scope.

7 Note

The prototyping and sharing scenario is one of the self-service BI scenarios. For a
complete list of the self-service scenarios, see the Power BI usage scenarios article.

For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components to support prototyping activities. The focus is on using Power
BI Desktop during an interactive prototyping session. Focus can also be on sharing in
the Power BI service when additional feedback is needed from subject matter experts.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

Power BI content creators develop BI solutions using Power BI Desktop.

Power BI Desktop connects to data from one or more data sources. Queries and data
mashups, which combine multiple sources, are developed in the Power Query Editor.

Data model development and report creation are done in Power BI Desktop. The purpose
is to help team members understand the meaning and significance of data by placing it in
a visual context.

Subject matter experts provide feedback during an interactive prototyping session. Based
on feedback from the subject matter experts (and other team members), content creators
make iterative improvements directly to the BI solution.

If desired, content creators publish their Power BI Desktop file (.pbix) to the Power BI
service. Publication of prototyping solutions to the Power BI service is optional.

The content is published to a non-production workspace. Its primary purpose is to provide a development area that enables review by team members.

An individual report is shared with a colleague to provide read-only permissions to the report (and its underlying data). The sharing operation can be done with a sharing link or direct access sharing. Sharing can be advantageous for a prototyping solution to provide temporary access during the feedback process.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for data refresh.

Power BI administrators oversee and monitor activity in the Power BI service. A development workspace (containing non-production and prototyping solutions) is usually governed to a much lesser extent than a production workspace.

Key points
The following are some key points to emphasize about the prototyping and sharing
scenario.

Interactive prototyping sessions


Interactive prototyping sessions are valuable to get immediate feedback when exploring
user requirements, validating calculations, clarifying visual layout needs, validating user
experience, and confirming report presentation. Use Power BI Desktop during
prototyping sessions that are interactively conducted with subject matter experts.

Power BI service
Publishing prototyping solutions to the Power BI service is optional. It can be useful
when there's a need to share preliminary results for feedback and decision-making
purposes.

 Tip

Prototyping solutions should be clearly separated from other production content so that consumers have proper expectations for a non-production solution. For example, consumers of a prototype report may not expect it to include all the data or be refreshed on a schedule. A prototype report shouldn't be used for business decisions until it's fully validated, finalized, and published to a production workspace.

Workspace
A development workspace is appropriate in this scenario since it involves collaboration with a small team (rather than a personal workspace as described in the personal BI scenario). Once the solution is finalized and fully tested, it can be quickly
promoted to a production workspace (as described in the self-service content
publishing scenario).

Sharing reports and dashboards


The scenario diagram depicts sharing directly to a recipient (rather than workspace roles
or using a Power BI app). Using the sharing feature is appropriate for collaboration
scenarios when colleagues work closely together in an informal way. Sharing is useful in
this situation because it's limited to a small number of colleagues who need to review
and provide feedback on the prototyped solution.

 Tip

Individual item sharing should be done infrequently. Since sharing is configured for individual items in a workspace, it's more tedious to maintain and increases the risk
of error. A valid alternative to sharing (not depicted in the scenario diagram) is to
use workspace roles (described in the team BI scenario). Workspace roles work
best when colleagues need access to all items in a workspace.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data, or view a report that queries a
live connection or DirectQuery dataset (not depicted in the scenario diagram).

7 Note

For team, departmental, and enterprise BI scenarios, a centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In standard mode, the data gateway supports live connection and DirectQuery operations (in addition to scheduled data refresh operations).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and detect risky activities. Auditing and governance
requirements are typically less stringent for prototyping and personal BI scenarios.

Next steps
For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Power BI usage scenarios: Self-service
content publishing
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

When analytical solutions are critical to the organization, it's important to ensure
content in the Power BI service is stable and reliable for consumers. IT teams often solve
this problem by working in multiple environments:

In the development environment, content creators and owners make changes and
improvements to the solution. When these changes are ready for broader review,
the solution is deployed (sometimes known as promoted) to the test environment.
In the test environment, reviewers validate the changes made to the solution. This
review can involve validating the solution functionality and data. When the review
is complete, the solution is deployed to the production environment.
The production environment is where consumers view and interact with the
released solution.

This structured approach ensures that content creators, owners, and reviewers can make
and validate changes without negatively affecting consumers.

Using methodical and disciplined lifecycle management processes reduces errors, minimizes inconsistencies, and improves the user experience for consumers. Content creators and owners can use Power BI deployment pipelines for self-service content publishing. Deployment pipelines simplify the process and improve the level of control when releasing new content.

7 Note

This self-service content publishing scenario is one of the content management and deployment scenarios. For a complete list of the self-service scenarios, see the Power BI usage scenarios article.

For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components to support self-service content publishing. The focus is on
use of a Power BI deployment pipeline for promoting content through development,
test, and production workspaces.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

The Power BI content creator develops a BI solution using Power BI Desktop.

The Power BI Desktop file (.pbix) is saved to a shared library in OneDrive.

When ready, the content creator publishes the Power BI Desktop file to the Power BI
service.

Content is published to a workspace that's dedicated to development.



The development (or test) workspace is set to Premium per user, Premium per capacity,
or Embedded license mode.

Content creators and owners collaborate in the development workspace to ensure all
requirements are met.

A deployment pipeline administrator configures the Power BI deployment pipeline with three stages: development, test, and production. Each stage aligns to a separate workspace in the Power BI service. Deployment settings and access are configured for the deployment pipeline.

When the development content is ready, the deployment pipeline compares the content
between the development and test stages. Some, or all, Power BI items are deployed to a
workspace that's dedicated to testing.

The test (or development) workspace is set to Premium per user, Premium per capacity,
or Embedded license mode.

Once the deployment pipeline has completed its deployment, the content creator
manually performs post-deployment activities for the test workspace. Activities can include
configuring scheduled data refresh or publishing a Power BI app for the test workspace.

Quality assurance, data validations, and user acceptance testing occur by reviewers of the
test workspace.

When the test content is fully validated, the deployment pipeline compares the content
between the test and production stages. Some, or all, Power BI items are deployed to a
workspace that's dedicated to production.

The production workspace is set to Premium per user, Premium per capacity, or
Embedded license mode. For a production workspace, Premium per capacity license
mode is often more appropriate when there's a large number of read-only consumers.

Once the deployment pipeline completes deployment, content creators can manually
perform post-deployment activities. Activities can include configuring scheduled data
refresh or publishing a Power BI app for the production workspace.

Content viewers access the content using the production workspace or a Power BI app.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required.

Power BI administrators oversee and monitor activity in the Power BI service. Content that's
deemed critical enough to have separate development, test, and production workspaces
may be subject to stricter governance requirements than less critical content.

 Tip
We recommend that you also review the advanced data model management usage scenario. It builds upon concepts introduced in this scenario.

Key points
The following are some key points to emphasize about the self-service content
publishing scenario.

Deployment pipeline
A deployment pipeline consists of three stages: development, test, and production. A
single workspace is assigned to each stage in the deployment pipeline. Power BI items
that are supported by deployment pipelines are published (or cloned) from one
workspace to another when a deployment occurs. Once testing and validations are
complete, the deployment pipeline can be reused many times to promote content
quickly. The deployment pipeline interface is easy to implement for content creators
who don't have the skills or desire to use code-based deployments (use of the Power BI
REST APIs are described in the enterprise content publishing scenario).
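
As a hedged illustration, the following Python sketch promotes content from the development stage to the test stage by calling the Deploy All operation of the deployment pipelines REST API. The pipeline ID, the access token, and the deployment options shown are placeholders and assumptions; adjust them to your own pipeline.

    # Sketch: promote all supported items from the development stage to the
    # next stage of a deployment pipeline by using the Power BI REST APIs.
    import requests

    ACCESS_TOKEN = "<Azure AD access token for the Power BI REST API>"
    PIPELINE_ID = "<deployment pipeline ID>"

    response = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            # 0 = development stage; the deployment targets the next stage.
            "sourceStageOrder": 0,
            "options": {
                "allowCreateArtifact": True,
                "allowOverwriteArtifact": True,
            },
        },
    )
    response.raise_for_status()

    # Deployment runs asynchronously; the response identifies the operation,
    # which can be polled until it completes.
    print(response.json().get("id"))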

7 Note

Publishing content using a deployment pipeline is known as a metadata-only deployment. In this case, data isn't overwritten or copied to the target workspace. A data refresh is usually required once the deployment completes (see the post-deployment activities topic below).

Deployment process
It's a best practice to consider the entire workspace content as an analytical package
that can be deployed together as a unit. Therefore, it's important to have clarity on the
purpose and expectations of each workspace. Although a selective deployment of
specific Power BI items is possible, it's more efficient and less risky when a deployment
represents a logical unit of content.

 Tip

Plan for how urgent issues will be handled, in addition to planned deployments. If
an immediate fix is required, still follow the standard practice of propagating all
changes from development through to test and production using the deployment
pipeline.

Permissions model
Spend time planning the permissions model. Full flexibility for applying different
workspace roles (between development, test, and production) is supported. As depicted
in the scenario diagram, it's common to assign the following workspace permissions:

Development workspace: Limit access to a team of content creators and owners who collaborate together.
Test workspace: Limit access to reviewers involved with quality assurance, data
validations, and user acceptance testing activities.
Production workspace: Grant viewer access to content consumers of the Power BI
app (and the workspace, when appropriate). Limit access to those who need to
manage and publish production content, involving the fewest number of users
possible.

7 Note

Most content consumers are unaware of the development and test workspaces.

Access for deployment pipeline


Pipeline user permissions (which determine who can deploy content with a deployment pipeline) are managed separately from the workspace roles. Access to both the workspace and the deployment pipeline is required for the users conducting a deployment. Relevant Premium permissions are also required.

When possible, it's recommended that the existing content creator or owner conduct
the deployments. In some situations, permissions are more restricted for the production
workspace. In that case, it may be appropriate to coordinate the production deployment
with someone else who has permission to deploy to production.

Pipeline users who are assigned to the workspace member (or admin) role are allowed
to compare stages and deploy content. Assigning pipeline users to this role minimizes
permissions issues and allows for a smoother deployment process.

 Tip
Keep in mind that workspace roles are set separately for development, test, and
production. However, pipeline access is set once for the entire pipeline.

Power BI Premium licensing


Power BI deployment pipelines are a Premium feature. There are various ways to obtain
licensing, depending on whether the content is used for development, test, or
production purposes. The scenario diagram depicts use of a Premium P SKU (such as P1, P2, P3, P4, or P5) for the production workspace, and a Power BI Premium Per User (PPU) license for the development and test workspaces. Using PPU
licensing for workspaces with very few users (as depicted in the scenario diagram) is a
cost-effective way to use Premium features, while keeping them separate from the
Premium capacity that's assigned for production workloads.

Deployment settings
Data source rules and parameter rules are available for dynamically managing values
that differ between development, test, and production. Use of deployment settings is an effective way to reduce effort and the risk of errors.

Post-deployment activities
Purposefully, certain properties aren't copied to the target workspace during a
deployment. Several key post-deployment activities include:

Data refresh: Data isn't copied from the source workspace to the target workspace.
Publishing from a deployment pipeline is always a metadata-only deployment.
Therefore, a data refresh is usually required after deploying to a target workspace.
For first-time deployments, the data source credentials or gateway connectivity (as
appropriate) must be configured as well.
Apps: Power BI apps aren't published automatically by deployment pipelines.
Access roles, sharing permissions, and app permissions: Permissions aren't
overwritten during a deployment.
Workspace properties: Properties, such as contacts and the workspace description,
aren't overwritten during a deployment.
Power BI item properties: Certain Power BI item properties, such as sensitivity
labels, may be overwritten during a deployment in certain circumstances.
Unsupported Power BI items: Additional manual steps may need to be taken for
Power BI items that aren't supported by the deployment pipeline.
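
The following Python sketch illustrates how two of the post-deployment activities listed above, updating a dataset parameter and configuring a scheduled refresh, could be automated with the Power BI REST APIs. It's a sketch only: the workspace and dataset IDs, the parameter name (SourceServer), the new value, and the schedule values are hypothetical placeholders.

    # Sketch: automate two common post-deployment activities in the target
    # workspace: update a dataset parameter and enable a refresh schedule.
    # IDs, the parameter name and value, and the token are placeholders.
    import requests

    ACCESS_TOKEN = "<Azure AD access token for the Power BI REST API>"
    WORKSPACE_ID = "<production workspace ID>"
    DATASET_ID = "<dataset ID in the production workspace>"

    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    dataset_url = f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/datasets/{DATASET_ID}"

    # Point the dataset at the production source by updating a parameter value.
    requests.post(
        f"{dataset_url}/Default.UpdateParameters",
        headers=headers,
        json={"updateDetails": [{"name": "SourceServer", "newValue": "prod-sql.contoso.com"}]},
    ).raise_for_status()

    # Enable a daily refresh schedule on weekdays.
    requests.patch(
        f"{dataset_url}/refreshSchedule",
        headers=headers,
        json={
            "value": {
                "enabled": True,
                "days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
                "times": ["06:00"],
                "localTimeZoneId": "UTC",
            }
        },
    ).raise_for_status()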
U Caution

There isn't a rollback process once a deployment has occurred with a deployment
pipeline. Consider carefully what change management processes and approvals are
required in order to deploy to the production workspace.

OneDrive storage
The scenario diagram depicts using OneDrive for storing the source Power BI Desktop
files. The goal is to store the source files in a location that is:

Appropriately secured to ensure only publishers can access the source files. A
shared library (rather than a personal library) is a good choice.
Backed up frequently so the files are safe from loss.
Versioned when changes occur, to allow for a rollback to an earlier version.

 Tip

If a OneDrive location is synchronized to a workspace, configure it only for the development workspace.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a Power BI Desktop file is published to the Power BI service. The
two purposes of a gateway are to refresh imported data, or view a report that queries a
live connection or DirectQuery dataset (not depicted in the scenario diagram).

When working with multiple environments, it's common to configure development, test,
and production connections to use different source systems. In this case, use data
source rules and parameter rules to manage values that differ between environments.

7 Note

A centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In standard mode, the data gateway supports live connection and DirectQuery operations (in addition to scheduled data refresh operations).
System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand deployment activities that occur.

Next steps
In the next article in the series, learn about the advanced data modeling usage scenario.
Power BI usage scenarios: Enterprise
content publishing
Article • 04/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

When content creators collaborate to deliver analytical solutions that are important to
the organization, they must ensure timely and reliable delivery of content to consumers.
Technical teams address this challenge by using a process called DevOps. DevOps allows
teams to automate and scale processes by adopting practices that improve and
accelerate delivery.

7 Note

Data teams that address the same challenges may also practice DataOps. DataOps
builds upon DevOps principles, but DataOps includes additional practices specific
to data management, such as data quality assurance and governance. We refer to
DevOps in this article, but be aware that the underlying principles can also apply to
DataOps.

Content creators and consumers benefit from several advantages when adopting
DevOps practices to publish Power BI content. The following points are a high-level
overview of how this process works.

1. Develop content and commit work to a remote repository: Content creators develop their solution on their own machine. They commit and save their work to a remote repository at regular intervals during development. A remote repository contains the latest version of the solution, and it's accessible to the entire development team.
2. Collaborate and manage content changes by using version control: Other
content creators can make revisions to the solution by creating a branch. A branch is a copy of a remote repository. When these revisions are ready and approved, the
branch is merged with the latest version of the solution. All revisions to the
solution are tracked. This process is known as version control (or source control).
3. Deploy and promote content by using pipelines: In the self-service content
publishing usage scenario, content is promoted (or deployed) through
development, test, and production workspaces by using Power BI deployment
pipelines. Power BI deployment pipelines can promote content to Premium Power
BI workspaces manually using the user interface, or programmatically using the
REST APIs. In contrast, enterprise content publishing (the focus of this usage
scenario) promotes content by using Azure Pipelines. Azure Pipelines are an Azure
DevOps service that automates testing, management, and deployment of content
by using a series of customized, programmatic steps. In the enterprise content
publishing usage scenario, these pipelines can also be referred to as continuous
integration and deployment (or CI/CD) pipelines. These pipelines frequently and
automatically integrate changes and streamline content publishing.

DevOps supports a mature, systematic approach to content management and publication. It enables content creators to collaborate on solutions, and it ensures fast and reliable delivery of content to consumers. When you adhere to DevOps practices, you benefit from streamlined workflows, fewer errors, and improved experiences for content creators and content consumers.

You set up and manage DevOps practices for your Power BI solution by using Azure
DevOps. In enterprise scenarios, you can automate content publication with Azure
DevOps and the Power BI REST APIs in three different ways.

Power BI REST APIs with Power BI deployment pipelines: You can import content
to development workspaces and use deployment pipelines to deploy content
through test and production workspaces. You still control deployment from Azure
DevOps, and use the Power BI REST APIs to manage deployment pipelines instead
of individual content items. Additionally, you use the XMLA endpoint to deploy
data model metadata instead of a Power BI Desktop file (.pbix) with Azure DevOps.
This metadata allows you to track object-level changes by using version control.
While more robust and easier to maintain, this approach does require Premium
licensing and moderate scripting effort to set up content import and deployment
with the Power BI REST APIs. Use this approach when you want to simplify content
lifecycle management with deployment pipelines, and you have a Premium license.
The XMLA endpoint and Power BI deployment pipelines are Premium features.
Power BI REST APIs: You can also import content to development, test and
production workspaces by using Azure DevOps and only the Power BI REST APIs.
This approach doesn't require Premium licensing, but it does involve high scripting
effort and setup, because deployment is managed outside of Power BI. Use this
approach when you want to deploy Power BI content centrally from Azure DevOps,
or when you don't have a Premium license. For a visual comparison between the
first two approaches, see the release pipeline approaches flow diagram.
Power BI automation tools with Power BI deployment pipelines: You can use the
Power BI automation tools Azure DevOps extension to manage deployment
pipelines instead of the Power BI REST APIs. This approach is an alternative to the
first option, which uses the Power BI REST APIs with Power BI deployment
pipelines. The Power BI automation tools extension is an open source tool. It helps
you manage and publish content from Azure DevOps without the need to write
PowerShell scripts. Use this approach when you want to manage deployment
pipelines from Azure DevOps with minimal scripting effort, and you have a
Premium capacity.

This article focuses on the first option, which uses the Power BI REST APIs with Power BI
deployment pipelines. It describes how you can use Azure DevOps to set up DevOps
practices. It also describes how you can use Azure Repos for your remote repositories
and automate content testing, integration, and delivery with Azure Pipelines. Azure
DevOps isn't the only way to set up enterprise content publishing, but simple
integration with Power BI makes it a good choice.

7 Note

This usage scenario is one of the content management and deployment scenarios.
For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support enterprise content publishing. The focus is on
the use of Azure DevOps to manage and publish content programmatically at scale
through development, test, and production workspaces in the Power BI service.

The scenario diagram depicts the following user actions, processes, and features.

Item Description

Content creators develop data models by using Tabular Editor and develop reports by
using Power BI Desktop. It's also possible that content creators develop data models with
Power BI Desktop. Content creators save their work to a local repository during
development.

The data model connects to the data from one or more data sources.

Content creators regularly commit and push their changes to a remote repository during
development by using a Git client such as Visual Studio Code. In this diagram, the remote
repository is Azure Repos.

Other content creators use Azure Repos to track changes with version control. They
collaborate by committing changes into separate branches.

Changes to content in the remote repository trigger Azure Pipelines. A validation pipeline
is the first pipeline that is triggered. The validation pipeline performs automated tests to
validate content before it's published.

Content that passes the validation pipeline triggers a subsequent build pipeline. The build
pipeline prepares content for publishing to the Power BI service. The process up to this
point is typically referred to as continuous integration (CI).

Content is published and deployed by using release pipelines. The release pipelines use
either the Power BI REST APIs or the workspace XMLA endpoint to programmatically
import content to the Power BI service. Deployment by using release pipelines is typically
referred to as continuous deployment (CD).

A release pipeline publishes content to the development workspace. This is the first release
pipeline that initially imports content to the Power BI service. Other release pipelines later
promote content through development, test, and production workspaces.

A release manager controls deployment to test and production workspaces by using an Azure Pipelines release approval. In enterprise content publishing, a release manager typically plans and coordinates the content release to test and production environments. They coordinate and communicate with content creators, stakeholders, and users.

Release pipelines promote content by using Power BI deployment pipelines with the Power
BI REST APIs. The test release pipeline promotes content from the development workspace
to the test workspace. The production release pipeline promotes content from test
workspace to the production workspace.

After deployment, release pipelines perform post-deployment activities. Activities can include setting dataset credentials or updating the Power BI app for test and production workspaces.

Azure Pipelines publishes content to a development workspace. Separate workspaces are set up for test and production environments. Power BI deployment pipelines deploy content from development to test workspaces, and from test to production workspaces once approved.

Reviewers within the test workspace perform user acceptance testing (UAT).

Workspaces are set to Premium per user, Premium per capacity, or Embedded license
mode, to allow using Power BI deployment pipelines and the XMLA read/write endpoint.

Content viewers access the content by using the production workspace or a Power BI app.

To connect to data sources that reside within a private organizational network, an on-
premises data gateway is required. You can set up Azure Pipelines to perform gateway
configuration activities by using the Power BI REST APIs (not shown in this scenario
diagram).

Power BI administrators oversee and monitor activity in the Power BI service. Content that's
deemed critical enough to have separate development, test, and production workspaces
may be subject to stricter governance requirements than less critical content.

DevOps administrators oversee and monitor repository and pipeline activity in Azure
DevOps.

 Tip
We recommend that you also review the self-service content publishing and
advanced data model management usage scenarios. The enterprise content
publishing usage scenario builds upon concepts that these scenarios introduce.

Key points
The following are some key points to emphasize about the enterprise content
publishing scenario.

Version control
Tracking changes during the content lifecycle is important to ensure stable and
consistent delivery of content to consumers. In this usage scenario, content creators and
owners manage content changes in a remote repository by using version control. Version
control is the practice of managing changes to files or code in a central repository. This
practice allows for better collaboration and effective management of version history.
Version control has advantages for content creators, including the ability to roll back or
merge changes.

Content creators typically develop data models in Tabular Editor to support better
version control. Unlike a data model that you develop in Power BI Desktop, a data
model developed in Tabular Editor is saved in human-readable metadata format. This
format enables data model object-level version control. You should use object-level
version control when collaborating with multiple people on the same data model. For
more information, see the advanced data model management usage scenario. It isn't
possible to see changes that you made in a Power BI Desktop file (.pbix), such as the
report definition or data model. For example, you can't track changes to a report page,
such as the visuals used, their positions, and their field mappings or formatting.

Content creators store data model metadata files and .pbix files in a central remote
repository, like Azure Repos. These files are curated by a technical owner. While a
content creator develops a solution, a technical owner is responsible for managing the
solution and reviewing the changes, and merging them into a single solution. Azure
Repos provides sophisticated options for tracking and managing changes. This
approach differs from the approach described in the self-service content publishing
usage scenario, where the creator uses OneDrive storage with version tracking.
Maintaining a well-curated, documented repository is essential because it's the
foundation of all content and collaboration.

Here are some key considerations to help you set up a remote repository for version
control.
Scope: Clearly define the scope of the repository. Ideally, the scope of the
repository is identical to the scope of the downstream workspaces and apps that
you use to deliver content to consumers.
Access: You should set up access to the repository by using a similar permissions
model as you have set up for your deployment pipeline permissions and
workspace roles. Content creators need access to the repository.
Documentation: Add text files to the repository to document its purpose,
ownership, access, and defined processes. For example, the documentation can
describe how changes should be staged and committed.
Tools: To commit and push changes to a remote repository, content creators need
a Git client like Visual Studio or Visual Studio Code. Git is a distributed version
control system that tracks changes in your files. To learn Git basics, see What is
Git?.

7 Note

Consider using Git Large File Storage (LFS) if you plan to commit Power BI Desktop
files (.pbix). Git LFS provides advanced options for managing files where changes
aren't visible (undiffable files), like a .pbix file. For example, you can use file
locking to prevent concurrent changes to a Power BI report during development.
However, Git LFS has its own client and configuration.

Collaboration with Azure DevOps


As a solution increases in scope and complexity, it may become necessary for multiple
content creators and owners to work in collaboration. Content creators and owners
communicate and collaborate in a central, organized hub by using Azure DevOps.

To collaborate and communicate in Azure DevOps, you use supporting services.

Azure Boards: Content owners use boards to track work items. Work items are
each assigned to a single developer on the team, and they describe issues, bugs, or
features in the solution, and the corresponding stakeholders.
Azure Wiki: Content creators share information with their team to understand and
contribute to the solution.
Azure Repos: Content creators track changes in the remote repository and merge
them into a single solution.
Azure Pipelines: Pipeline owners set up programmatic logic to deploy the solution,
either automatically or on-demand.
Collaboration flow diagram
The following diagram depicts a high-level overview of how Azure DevOps enables
collaboration in the enterprise content publishing usage scenario. The focus of the
diagram is on the use of Azure DevOps to create a structured and documented content
publishing process.

The diagram depicts the following user actions, processes, and features.

Item Description

A content creator makes a new branch by cloning an existing production or release branch. The new branch is often referred to as the feature or development branch.

The content creator commits their changes to a local repository during development.

The content creator links their changes to work items that are managed in Azure Boards.
Work items describe specific developments, improvements, or bug fixes scoped to their
branch.

The content creator regularly commits their changes. When ready, the content creator
publishes their branch to the remote repository.

To test their changes, the content creator deploys their solution to a development
workspace (not shown in this diagram). Once tested, the content creator opens a pull
request to merge their changes into the release branch. The release branch is the latest
working version, and it may contain changes from other members of the development
team.

A technical owner is responsible for reviewing the pull request and merging changes.
When they approve the pull request, they merge the feature branch into the release
branch.

A successful merge triggers deployment of the solution to a test workspace (not shown in
this diagram). Users test the solution in user acceptance testing (UAT). If extra
development is required, content creators can make other changes to the development
branch, which must be merged to the test branch after the content owner reviews a new
pull request.

The technical owner opens a pull request to merge the release branch into the production
branch (sometimes known as the main branch) once UAT is complete. The production
branch contains the latest version of the solution that's deployed to the production
workspace for content consumers.

The release manager reviews the changes and approves the pull request. They then merge
the release branch into the production branch.

The release manager performs a final review and approval of the solution. This release
approval prevents the solution from being published before it's ready. In enterprise
content publishing, a release manager typically plans and coordinates the content release
to test and production environments. They coordinate and communicate with content
creators, stakeholders and users.

When the release manager approves the release, Azure Pipelines automatically prepare the
solution for deployment.

The pipelines deploy the solution to the production workspace.

Content creators and content owners document the solution and its processes in an Azure
Wiki, which is available to the entire development team.

To elaborate, content creators achieve collaboration by using a branching strategy. A branching strategy allows individual content creators to work in isolation in their local
repository. When ready, they combine their changes as a single solution in the remote
repository. Content creators should scope their work to branches by linking them to
work items for specific developments, improvements, or bug fixes. Each content creator
creates their own branch of the remote repository for their scope of work. Work done on
their local solution is committed and pushed to a version of the branch in the remote
repository with a commit message. A commit message describes the changes made in
that commit.

To merge the changes, a content creator opens a pull request. A pull request is a
submission for peer review that can lead to the merge of the work done into a single
solution. Merging can result in conflicts, which must be resolved before the branch can
be merged. Pull request reviews are important to ensure creators adhere to
organizational standards and practices for development, quality, and compliance.

Collaboration recommendations

We recommend that you define a structured process for how content creators should
collaborate. Ensure that you determine:

How work is scoped and how branches are created, named, and used.
How authors group changes and describe them with commit messages.
Who's responsible for reviewing and approving pull requests.
How merge conflicts are resolved.
How changes made in different branches are merged together into a single
branch.
How content is tested and who performs testing before content is deployed.
How and when changes are deployed to development, test, and production
workspaces.
How and when deployed changes or versions of the solution should be rolled
back.

) Important

The value provided by DevOps is directly proportional to the adherence to the processes that define its use.

A successful collaboration depends on a well-defined process. It's important to clearly describe and document the end-to-end development workflow. Ensure that
the selected strategies and processes align with existing practices in your team, and
if not, how you'll manage change. Further, ensure that the processes are clear and
communicated to all team members and stakeholders. Make sure that team
members and stakeholders who are new to the processes are trained in how to
adopt them, and that they appreciate the value of successful DevOps adoption.

Power BI REST APIs


You develop programmatic logic to import and deploy content from Azure DevOps by
using the Power BI REST APIs. You import Power BI files (.pbix) to a workspace by using
an import operation. You use a pipeline operation to deploy some content or all content
to test or production workspaces by using Power BI deployment pipelines. The
programmatic logic is defined in the Azure Pipelines.
We recommend that you use a service principal to call Power BI REST APIs in your
pipelines. A service principal is intended for unattended, automated activities, and it
doesn't rely on user credentials. However, some items and activities aren't supported by
the Power BI REST APIs or when using a service principal, like dataflows.

When you use a service principal, be sure to carefully manage permissions. Your goal
should be to follow the principle of least privilege. You should set sufficient permissions
for the service principal without over-provisioning permissions. Use Azure Key Vault or
another service that securely stores the service principal secrets and credentials.
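
For illustration, the following TypeScript sketch shows one way an Azure Pipelines script could acquire a token with a service principal (by using the Microsoft Authentication Library) and import a Power BI Desktop file with the Imports REST API. It's a minimal sketch, not a production-ready implementation: it assumes Node.js 18 or later (for the built-in fetch, FormData, and Blob), and the tenant ID, client ID, workspace ID, and file path are placeholders. The secret should come from a secure store such as Azure Key Vault.

```typescript
// Minimal sketch (not production-ready). Assumes Node.js 18+; all IDs and paths are placeholders.
import { ConfidentialClientApplication } from "@azure/msal-node";
import { readFile } from "node:fs/promises";

const TENANT_ID = "<tenant-id>";
const CLIENT_ID = "<service-principal-app-id>";
const CLIENT_SECRET = process.env.PBI_CLIENT_SECRET!; // retrieve from a secure store, such as Azure Key Vault
const WORKSPACE_ID = "<target-workspace-id>";

async function getAccessToken(): Promise<string> {
  const app = new ConfidentialClientApplication({
    auth: {
      clientId: CLIENT_ID,
      authority: `https://login.microsoftonline.com/${TENANT_ID}`,
      clientSecret: CLIENT_SECRET,
    },
  });
  const result = await app.acquireTokenByClientCredential({
    scopes: ["https://analysis.windows.net/powerbi/api/.default"],
  });
  if (!result?.accessToken) throw new Error("Token acquisition failed.");
  return result.accessToken;
}

// Import a .pbix file into the workspace by using the Imports REST API.
async function importPbix(pbixPath: string, datasetDisplayName: string): Promise<void> {
  const token = await getAccessToken();
  const form = new FormData();
  form.append("file", new Blob([await readFile(pbixPath)]), `${datasetDisplayName}.pbix`);

  const url =
    `https://api.powerbi.com/v1.0/myorg/groups/${WORKSPACE_ID}/imports` +
    `?datasetDisplayName=${encodeURIComponent(datasetDisplayName)}&nameConflict=CreateOrOverwrite`;

  const response = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: form,
  });
  if (!response.ok) {
    throw new Error(`Import failed: ${response.status} ${await response.text()}`);
  }
}

importPbix("./artifacts/SalesModel.pbix", "SalesModel").catch(console.error);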

U Caution

If you have a data model that's saved as a human-readable metadata format, it can't be published by using the Power BI REST APIs. Instead, you must publish it by
using the XMLA endpoint. You can publish metadata files by using third-party tools
like the Tabular Editor command-line interface (CLI). You can also publish
metadata files programmatically by using your own custom .NET development.
Developing a custom solution requires more effort, since you must use the
Microsoft Tabular Object Model (TOM) extension of the Analysis Management
Object (AMO) client libraries.

Azure Pipelines
Azure Pipelines programmatically automate testing, management, and deployment of
content. When a pipeline is run, steps in the pipeline execute automatically. Pipeline
owners can customize its triggers, steps, and functionality to meet deployment needs.
As such, the number and types of pipelines vary depending on the solution
requirements. For example, an Azure Pipeline could run automated tests or modify data
model parameters before a deployment.

There are three types of Azure Pipelines that you can set up to test, manage, and deploy
your Power BI solution:

Validation pipelines.
Build pipelines.
Release pipelines.

7 Note

It's not necessary to have all three of these pipelines in your publishing solution.
Depending on your workflow and needs, you may set up one or more of the
variants of the pipelines described in this article to automate content publication.
This ability to customize the pipelines is an advantage of Azure Pipelines over the
built-in Power BI deployment pipelines. For example, you don't have to have a
validation pipeline; you can use only build and release pipelines.

Validation pipelines

Validation pipelines perform basic quality checks of data models before they're
published to a development workspace. Typically, changes in a branch of the remote
repository trigger the pipeline to validate those changes with automated testing.

Examples of automated testing include scanning the data model for best practice rule
violations by using Best Practice Analyzer (BPA), or by running DAX queries against a
published dataset. The results of these tests are then stored in the remote repository for
documentation and auditing purposes. Data models that fail validation shouldn't be
published. Instead, the pipeline should notify content creators of the issues.
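
As a hedged illustration of the second technique, the following sketch runs a DAX query against a published dataset by using the Datasets - Execute Queries REST API and fails the validation step when the query returns no rows. The dataset ID and the 'Sales' table name are placeholders, and the access token is assumed to be acquired with a service principal that has sufficient permissions on the dataset.

```typescript
// Hypothetical smoke test: fail the step when a published dataset returns no rows.
// The dataset ID and the 'Sales' table are placeholders.
async function validateDataset(token: string, datasetId: string): Promise<void> {
  const response = await fetch(
    `https://api.powerbi.com/v1.0/myorg/datasets/${datasetId}/executeQueries`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({
        queries: [{ query: `EVALUATE ROW("RowCount", COUNTROWS('Sales'))` }],
        serializerSettings: { includeNulls: true },
      }),
    }
  );
  if (!response.ok) throw new Error(`Query failed: ${response.status}`);

  const result = await response.json();
  const row = result.results[0].tables[0].rows[0];
  const rowCount = Number(Object.values(row)[0]); // single column returned by the ROW() expression
  if (!rowCount) throw new Error("Validation failed: the 'Sales' table returned no rows.");
  console.log(`Validation passed: ${rowCount} rows in 'Sales'.`);
}
```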

Build pipelines
Build pipelines prepare data models for publication to the Power BI service. These
pipelines combine serialized model metadata into a single file that's later published by a
release pipeline (described in the release pipelines diagram). A build pipeline may also
make other changes to the metadata, like modifying parameter values. The build
pipelines produce deployment artifacts that consist of data model metadata (for data
models) and Power BI Desktop files (.pbix) that are ready for publication to the Power BI
service.

Release pipelines

Release pipelines publish or deploy content. A publishing solution typically includes several release pipelines, depending on the target environment.

Development release pipeline: This first pipeline is triggered automatically. It publishes content to a development workspace after the build and validation
pipelines succeed.
Test and production release pipelines: These pipelines aren't triggered
automatically. Instead, they're triggered on-demand or when approved. Test and
production release pipelines deploy content to a test or production workspace,
respectively, after release approval. Release approvals ensure that content isn't
automatically deployed to a test or production stage before it's ready. These
approvals are provided by release managers, who are responsible for planning and
coordinating content release to test and production environments.

There are two different approaches to publish content with test and release pipelines.
Either they promote content by using a Power BI deployment pipeline, or they publish
content to the Power BI service from Azure DevOps.

The following diagram depicts the first approach. In this approach, release pipelines
orchestrate content deployment to test and production workspaces by using Power BI
deployment pipelines. Content is promoted through development, test, and production
workspaces in Power BI. While this approach is more robust and simpler to maintain, it
requires Premium licensing.

The diagram depicts the following user actions, processes, and features of the first
approach.

Item Description

In the first approach, the release pipelines publish content by using the XMLA endpoint
and the Power BI REST APIs with Power BI deployment pipelines. Content is published and
then promoted through development, test, and production workspaces. Power BI
deployment pipelines and the XMLA read/write endpoint are Premium features.

Either a successful branch merge or completion of an upstream pipeline triggers the build
pipeline. The build pipeline then prepares content for publishing, and triggers the
development release pipeline.

The development release pipeline publishes content to the development workspace by using the XMLA endpoint (for data model metadata) or the Power BI REST APIs (for Power
BI Desktop files, which can contain data models and reports). The development pipeline
uses the Tabular Editor command-line interface (CLI) to deploy data model metadata by
using the XMLA endpoint.

A release approval or on-demand trigger activates the test release pipeline.


The test release pipeline deploys content by using the Power BI REST API deploy
operations, which run the Power BI deployment pipeline.

The Power BI deployment pipeline promotes content from the development workspace to
the test workspace. After deployment, the release pipeline performs post-deployment
activities by using the Power BI REST APIs (not shown in the diagram).

A release approval or on-demand trigger activates the production release pipeline.

The production release pipeline deploys content by using the Power BI REST API deploy
operations, which run the Power BI deployment pipeline.

The Power BI deployment pipeline promotes content from the test workspace to the
production workspace. After deployment, the release pipeline performs post-deployment
activities by using the Power BI REST APIs (not shown in the diagram).

The following diagram depicts the second approach. This approach doesn't use
deployment pipelines. Instead, it uses release pipelines to publish content to test and
production workspaces from Azure DevOps. Notably, this second approach doesn't
require Premium licensing when you publish only Power BI Desktop files with the Power
BI REST APIs. It does involve more setup effort and complexity, because you must
manage deployment outside of Power BI. Development teams that already use DevOps
for data solutions outside of Power BI may be more familiar with this approach.
Development teams that use this approach can consolidate deployment of data
solutions in Azure DevOps.

The diagram depicts the following user actions, processes, and features in the second
approach.
Item Description

In the second approach, the release pipelines publish content by using the XMLA endpoint
and the Power BI REST APIs only. Content is published to development, test, and
production workspaces.

Either a successful branch merge or completion of an upstream pipeline triggers the build
pipeline. The build pipeline then prepares content for publishing, and triggers the
development release pipeline.

The development release pipeline publishes content to the development workspace by using the XMLA endpoint (for data model metadata) or the Power BI REST APIs (for Power
BI Desktop files, which can contain data models and reports). The development pipeline
uses the Tabular Editor command-line interface (CLI) to deploy data model metadata by
using the XMLA endpoint.

A release approval or on-demand trigger activates the test release pipeline.

The test release pipeline publishes content to the test workspace by using the
XMLA endpoint (for data model metadata) or Power BI REST APIs (for Power BI Desktop
files, which can contain data models and reports). The test pipeline uses the
Tabular Editor command-line interface (CLI) to deploy data model metadata by using the
XMLA endpoint. After deployment, the release pipeline performs post-deployment
activities by using the Power BI REST APIs (not shown in the diagram).

A release approval or on-demand trigger activates the production release pipeline.

The production release pipeline publishes content to the production workspace by using
the XMLA endpoint (for data model metadata) or Power BI REST APIs (for Power BI
Desktop files, which can contain data models and reports). The production pipeline uses
the Tabular Editor command-line interface (CLI) to deploy data model metadata by using
the XMLA endpoint. After deployment, the release pipeline performs post-deployment
activities by using the Power BI REST APIs (not shown in the diagram).

Release pipelines should manage post-deployment activities. These activities could include setting dataset credentials or updating the Power BI app for test and production
workspaces. We recommend that you set up notifications to inform relevant people
about deployment activities.

 Tip

Using a repository for version control allows content creators to create a rollback
process. The rollback process can reverse the last deployment by restoring the
previous version. Consider creating a separate set of Azure Pipelines that you can
trigger to roll back production changes. Think carefully about what processes and
approvals are required to initiate a rollback. Ensure that these processes are
documented.

Power BI deployment pipelines


A Power BI deployment pipeline consists of three stages: development, test, and
production. You assign a single Power BI workspace to each stage in the deployment
pipeline. When a deployment occurs, the deployment pipeline promotes Power BI items
from one workspace to another.

An Azure Pipelines release pipeline uses the Power BI REST APIs to deploy content by
using a Power BI deployment pipeline. Access to both the workspace and the
deployment pipeline is required for the users conducting a deployment. We recommend
that you plan deployment pipeline access so that pipeline users can view deployment
history and compare content.
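
The following sketch illustrates how a release pipeline script might call these deploy operations. It uses the Pipelines - Deploy All REST operation to promote all content from one stage to the next. The pipeline ID is a placeholder, and the stage order values assume 0 = development, 1 = test, and 2 = production; treat it as a sketch rather than a definitive implementation.

```typescript
// Hypothetical promotion step: deploy all content from one deployment pipeline stage to
// the next by calling the Pipelines - Deploy All REST operation.
async function promoteStage(token: string, pipelineId: string, sourceStageOrder: number) {
  const response = await fetch(
    `https://api.powerbi.com/v1.0/myorg/pipelines/${pipelineId}/deployAll`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({
        sourceStageOrder, // deploy from this stage to the next one
        options: {
          allowCreateArtifact: true,    // create items that don't yet exist in the target stage
          allowOverwriteArtifact: true, // overwrite items that already exist
        },
      }),
    }
  );
  if (!response.ok) throw new Error(`Deployment failed: ${response.status}`);
  // The operation runs asynchronously; poll the Pipelines - Get Pipeline Operations API
  // to track its completion before running post-deployment activities.
}

// Example: promote from development (0) to test (1) after release approval.
// await promoteStage(accessToken, "<pipeline-id>", 0);
```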

 Tip

When you separate data workspaces from reporting workspaces, consider using
Azure Pipelines to orchestrate content publishing with multiple Power BI
deployment pipelines. Datasets are deployed first, and then they're refreshed.
Lastly, reports are deployed. This approach helps you simplify deployment.

Premium licensing
Power BI deployment pipelines and the XMLA read/write endpoint are Premium
features. These features are available with Power BI Premium capacity and Power BI
Premium Per User (PPU).

PPU is a cost-effective way to manage enterprise content publishing for development and test workspaces, which typically have few users. This approach has the added
advantage of isolating development and test workloads from production workloads.

7 Note

You can still set up enterprise content publishing without a Premium license, as
described by the second approach in the release pipeline section. In the second
approach, you use Azure Pipelines to manage deployment of Power BI Desktop
files to development, test, and production workspaces. However, you can't deploy
model metadata by using the XMLA endpoint because it's not possible to publish a
metadata format dataset with the Power BI REST APIs. Also, it's not possible to
promote content through environments with deployment pipelines without a
Premium license.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The two purposes of a gateway are
to refresh imported data, and view a report that queries a live connection or DirectQuery
dataset (not depicted in the scenario diagram).

When working with multiple environments, it's common to set up development, test,
and production connections to different source systems. In this case, use data source
rules and parameter rules to manage values that differ between environments. You can
use Azure Pipelines to manage gateways by using the gateway operations of the Power
BI REST APIs.
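
When you manage deployment without Power BI deployment pipelines (the second approach), a release pipeline script could instead set environment-specific values directly on the deployed dataset. The following sketch updates a Power Query parameter and binds the dataset to a gateway by using the Datasets - Update Parameters In Group and Datasets - Bind To Gateway In Group REST operations. The "Environment" parameter name and all IDs are placeholders for illustration.

```typescript
// Hypothetical post-deployment step: set an environment-specific Power Query parameter on
// a deployed dataset and bind the dataset to a gateway. All names and IDs are placeholders.
async function configureDataset(
  token: string,
  workspaceId: string,
  datasetId: string,
  gatewayId: string
): Promise<void> {
  const headers = { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };
  const datasetUrl = `https://api.powerbi.com/v1.0/myorg/groups/${workspaceId}/datasets/${datasetId}`;

  // Datasets - Update Parameters In Group
  await fetch(`${datasetUrl}/Default.UpdateParameters`, {
    method: "POST",
    headers,
    body: JSON.stringify({ updateDetails: [{ name: "Environment", newValue: "Test" }] }),
  });

  // Datasets - Bind To Gateway In Group
  await fetch(`${datasetUrl}/Default.BindToGateway`, {
    method: "POST",
    headers,
    body: JSON.stringify({ gatewayObjectId: gatewayId }),
  });

  // A dataset refresh is usually required before new parameter values take effect.
}
```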

7 Note

A centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In standard mode, the data gateway supports live
connection and DirectQuery operations (in addition to scheduled data refresh
operations).

System oversight
The activity log records events that occur in the Power BI service. Power BI
administrators can use the activity log to audit deployment activities.
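
The following sketch is a hypothetical audit helper that retrieves one UTC day of activity events with the Admin - Get Activity Events REST API and keeps only events whose activity name mentions deployment. It requires an administrator token (or a service principal allowed to use read-only admin APIs), and the exact activity names to filter on should be confirmed against the activity log documentation.

```typescript
// Hypothetical audit helper; the date value and filtering logic are illustrative assumptions.
async function getDeploymentEvents(adminToken: string, day: string /* e.g. "2023-03-01" */) {
  let url: string | undefined =
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents" +
    `?startDateTime='${day}T00:00:00Z'&endDateTime='${day}T23:59:59Z'`;

  const events: Array<Record<string, unknown>> = [];
  while (url) {
    const response = await fetch(url, { headers: { Authorization: `Bearer ${adminToken}` } });
    if (!response.ok) throw new Error(`Activity events request failed: ${response.status}`);
    const page = await response.json();
    events.push(...page.activityEventEntities);
    url = page.continuationUri; // results are paged; keep following the continuation URI
  }
  return events.filter((e) => String(e.Activity ?? "").toLowerCase().includes("deploy"));
}
```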

You can use the Power BI metadata scanning APIs to create a tenant inventory. The API
results are helpful to verify which items have been deployed to each workspace, to
check lineage, and to validate security settings.

There's also an audit log within Azure DevOps, which is outside of the Power BI service.
Azure DevOps administrators can use the audit log to review activities in remote
repositories and pipelines.

Next steps
For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Power BI usage scenarios: Advanced
data model management
Article • 03/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This usage scenario focuses on advanced data model management, which is when a
Power BI content creator relies on a third-party tool to develop, manage, or optimize
data models. Some third-party tools are external tools, which Power BI Desktop supports
directly. You can also manage a published data model (dataset) by communicating
directly with the XMLA endpoint in the Power BI service.

Data models are hosted in either the Power BI service, Azure Analysis Services (AAS), or
SQL Server Analysis Services (SSAS). This usage scenario focuses on using the XMLA
endpoint in the Power BI service.

 Tip

Many people refer to third-party tools as external tools. However, there are
distinctions in how different tools may be used. Connecting to a local data model in
Power BI Desktop is the most literal interpretation of the term external tool. This
advanced data model management usage scenario focuses on connecting to a
remote data model (a dataset hosted in the Power BI service) by using the XMLA
endpoint. More details on the different ways to use third-party tools are described
later in this article.

You can achieve connectivity to a data model by using the XML for Analysis (XMLA)
protocol. The XMLA protocol is an industry standard protocol that's supported by more
than 25 vendors, including Microsoft. All tools, including third-party tools, that are
compliant with the XMLA protocol use Microsoft client libraries to read and/or write
data to a data model. Connectivity is achieved with an XMLA endpoint, which is an API
exposed by a data model that broadens the development and management capabilities
available to dataset creators.
7 Note

This advanced data model management usage scenario is one of the content
management and deployment scenarios. For a complete list of the self-service
usage scenarios, see Power BI usage scenarios.

For brevity, some aspects described in the content collaboration and delivery
scenarios topic aren't covered in this article. For complete coverage, read those
articles first.

Scenario diagram
The focus of this advanced data model management usage scenario is on using Tabular
Editor to manage the data model. You can publish a data model to the Power BI service
by using the XMLA endpoint, which is available with Power BI Premium.

 Tip

We recommend that you review the self-service content publishing usage scenario
if you're not familiar with it. The advanced data model management scenario builds
upon that scenario.

7 Note

Sometimes the terms dataset and data model are used interchangeably. Generally,
from a Power BI service perspective, it's referred to as dataset. From a development
perspective, it's referred to as a data model (or model for short). In this article, both
terms have the same meaning. Similarly, a dataset creator and a data modeler have
the same meaning.

The following diagram depicts a high-level overview of the most common user actions
and tools that can help you develop, manage, or optimize data models.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

Dataset creators develop data models by using Tabular Editor. It's common that the initial
design work (like Power Query work) is done in Power BI Desktop before switching to
Tabular Editor (not depicted in the scenario diagram).

The data model connects to data from one or more data sources.

Data model development is done in Tabular Editor. Editing of Power Query (M) scripts is
supported.

When ready, dataset creators publish the data model from Tabular Editor to the Power BI
service by using the XMLA endpoint of the target workspace.

The data model is published to a workspace dedicated to storing and securing shared
datasets. Access to the workspace by using the XMLA endpoint is only possible when the
workspace license mode is set to Premium per user, Premium per capacity, or Embedded.

Report creators create reports by using a live connection to the shared dataset.

Report creators develop reports in Power BI Desktop. Other than purposefully separating
reports from datasets, content creators follow the typical report creation process.

When ready, report creators publish their Power BI Desktop file (.pbix) to the Power BI
service.

Reports are published to a workspace dedicated to storing and securing reports and
dashboards.

Published reports remain connected to the shared dataset that's stored in a different
workspace. Any changes made to the shared dataset affect all dependent reports.

Third-party tools can use the XMLA endpoint to query the shared dataset. Other XMLA-
compliant tools, such as DAX Studio or PowerShell, can be used to query or update the
shared dataset. Power BI Desktop, Excel, and Report Builder can also connect by using the
XMLA endpoint (not depicted in the scenario diagram).

Other Microsoft and third-party tools can use the XMLA endpoint to manage the dataset
and provide application lifecycle management. To learn more, see XMLA endpoint-based
client tools.

Power BI administrators manage the tenant setting to enable the use of the XMLA
endpoint. The administrator must enable the XMLA endpoint for Premium capacities and
Premium Per User capacities.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required for data refresh. Data refresh is scheduled and managed
in the Power BI service.

Power BI administrators oversee and monitor activity in the Power BI service.

Key points
The following are some key points to emphasize about the advanced data model
management scenario.

Third-party applications and tools


Enterprise BI teams commonly use client tools, such as Tabular Editor (depicted in the
scenario diagram and described in the next topic), to help them manage centralized
datasets. However, any dataset creator that wants to work with advanced modeling
capabilities can take advantage of the methods described in this usage scenario.

There are several ways to use third-party applications:

Connect to a remote data model by using the XMLA endpoint: Some third-party
tools can connect directly to a remote data model in the Power BI service (or
Analysis Services). Once connected to the XMLA endpoint, all Tabular Object
Model (TOM) operations are supported. This approach is the primary focus of this
usage scenario.
Connect to a local data model in Power BI Desktop: Some third-party tools can
connect to a local data model that's open in Power BI Desktop (not depicted in the
scenario diagram). However, there are some limitations, and not all external tool
functionality is officially supported.
Connect to a template file in Power BI Desktop: Some third-party tools distribute
their functionality in a lightweight way by using a Power BI Desktop template file
(.pbit) (not depicted in the scenario diagram).

Tabular Editor
Tabular Editor is depicted in the scenario diagram. It's a third-party tool that's
achieved widespread adoption by the Power BI community. Some advantages of
managing tabular data models with Tabular Editor include:

Setting up data model capabilities not supported in Power BI Desktop: Tabular Editor provides an interface to set up object-level security (OLS), calculation
groups, perspectives, translations, and partitions.
Support for concurrent model development: Microsoft data model development
tools, such as Visual Studio with Analysis Services projects, store the entire data
model definition in a Model.bim file. This single file can make it difficult for a team
of developers to work together on a single data model. Tabular Editor has a feature
called Folder serialization. Folder serialization deconstructs the Model.bim file into
separate object-specific files within an organized folder structure. Different data
modelers can then work on different files with less risk of overwriting each other's
efforts.
Integration with source control: Folder serialization allows a source control system
to easily detect data model changes, making source merges and conflict resolution
easier to do.
Improved data model quality and design: Tabular Editor integrates with Best
Practices Analyzer (BPA) . BPA helps data modelers with a set of customizable
rules that can improve the quality, consistency, and performance of data models.
You can download a set of best practice rules (provided by Microsoft) from
GitHub .
Increased productivity when developing data models: The Tabular Editor interface
makes it well-suited for performing batch edits, debugging, and viewing data
model dependencies. Tabular Editor is different from Power BI Desktop in that it
works in disconnected mode. You can make data model changes in disconnected
mode and commit them as a batch of edits. Working this way allows for faster
development and validation, especially for experienced data modelers. It's also
possible to create C# scripts and save them as macros. These scripts can help you
to improve the efficiency of managing and synchronizing multiple data models.

XMLA endpoint
The XMLA endpoint uses the XMLA protocol to expose all features of a tabular data
model, including some data modeling operations that aren't supported by Power BI
Desktop. You can use the TOM API to make programmatic changes to a data model.

The XMLA endpoint also provides connectivity. You can only connect to a dataset when
the workspace has its license mode set to Premium per user, Premium per
capacity, or Embedded. Once a connection is made, an XMLA-compliant tool can
operate on the data model in two ways:

Write data and metadata: Read/write use of the XMLA endpoint allows for:
Data modeling capabilities that aren't supported by Power BI Desktop, like
object-level security (OLS), calculation groups, perspectives, translations, and
partition management.
More complex deployments. For example, a partial deployment or a metadata-
only deployment that publishes only a single new measure.
Asynchronous dataset refresh. For example, refreshing a single table or
partition.
Read data and metadata: Read-only use of the XMLA endpoint allows for:
Monitoring, debugging, and tracing datasets and queries.
Allowing third-party data reporting tools to visualize data sourced from a
shared dataset. This technique is a great way to extend the benefits and
investments in managed self-service BI.

2 Warning

Once you modify or publish a dataset by using the XMLA endpoint, you can no
longer download it from the Power BI service as a Power BI Desktop file.

XMLA settings per capacity


Each Power BI Premium capacity and Power BI Embedded capacity has a setting to
control whether the XMLA endpoint is read-only, read/write, or off. This setting is also
available for all Premium Per User workspaces in the Power BI tenant. Read/write XMLA
access must be enabled for each capacity that contains datasets that you want to
manage with a tool other than Power BI Desktop.
 Tip

The XMLA endpoint setting (read/write, read-only, or off) applies to all workspaces
and datasets assigned to a particular capacity. You can set up multiple capacities to
decentralize and/or customize how content is managed for each capacity.

XMLA tenant setting


In addition to the XMLA endpoint settings, a Power BI administrator must use the tenant
settings to allow XMLA endpoints and Analyze in Excel with on-premises datasets. When
enabled, you can allow all users, or specific security groups, to use XMLA endpoint
functionality.

7 Note

All standard security and data protection features still apply to specify which users
can view and/or edit content.

Third-party tools
Power BI Desktop can handle the end-to-end needs for most self-service content
creators. However, third-party tools offer other enterprise features and functionality. For
this reason, third-party tools, such as Tabular Editor , have become prevalent in the
Power BI community, especially for advanced content creators, developers, and IT
professionals.

 Tip

This blog post describes how third-party tools allow the Power BI product team
to reevaluate their development priorities, increase the reach of the Power BI
platform, and satisfy more advanced and diverse requests from the user
community.

7 Note

Some third-party tools require a paid license, such as Tabular Editor 3. Other
community tools are free and open source (such as Tabular Editor 2, DAX Studio,
and ALM Toolkit). We recommend that you carefully evaluate the features of each
tool, cost, and support model so you can adequately support your community of
content creators.

Data model management


The primary focus of this usage scenario is on the content creator who uses Tabular
Editor to manage a data model. For infrequent advanced data model management
requirements, like occasional partition management, you might choose to use a tool
such as SQL Server Management Studio (SSMS). It's also possible for a .NET developer
to create and manage Power BI datasets by using the TOM API.

 Tip

When using the XMLA endpoint for data model management, we recommend that
you enable the large dataset storage format setting. When enabled, the large
dataset storage format can improve XMLA write operation performance.

Separation of data model and reports


For this usage scenario to be successful, you should separate reports from the data
model. This approach results in managing separate Power BI Desktop files as described
in the managed self-service BI usage scenario. Even if the same person is responsible for
all development, the separation of datasets and reports is important because Tabular
Editor doesn't have an awareness of report content.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The On-premises data gateway
becomes relevant once a data model is published to the Power BI service. The two
purposes of a gateway are to refresh imported data, or view a report that queries a live
connection or DirectQuery dataset (not depicted in the scenario diagram).

7 Note

A centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In standard mode, the data gateway supports live
connection and DirectQuery operations (in addition to scheduled data refresh
operations).
System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand activities that connect through XMLA endpoints.

Next steps
For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Power BI usage scenarios: Self-service
real-time analytics
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This usage scenario focuses on how a business analyst can produce real-time Power BI
reports. What's meant by real-time is that the data is always current, and report
consumers aren't required to interact with visualizations. Data visualizations should
refresh automatically to always show current data.

Real-time reports allow organizations to monitor and make confident decisions based
on up-to-date data.

7 Note

In this article, the term real-time actually means near real-time. Near real-time
means that there's always a degree of delay (known as latency), due to data
processing and network transmission time.

To develop self-service real-time analytics, the business analyst will first need to create
(or connect to) a DirectQuery model. They can then build a report and set up its
automatic page refresh settings. Once set up, Power BI automatically refreshes report
pages to show current data.

 Tip

You can also achieve real-time analytics in Power BI by using push datasets.
However, this topic is out of scope for this self-service real-time usage scenario
because it targets developers. Push datasets usually involve developing a
programmatic solution.

For a complete understanding of Power BI real-time analytics, work through the Monitor
data in real-time with Power BI learning path.
Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support self-service real-time analytics. The primary
objective is on creating a DirectQuery model and building Power BI reports that use
automatic page refresh.

The above diagram depicts the following user actions, tools, and features:

Item Description

The business analyst uses Power BI Desktop to create a DirectQuery model.

When Power BI Desktop queries the DirectQuery model, Power BI Desktop sends native
queries to the underlying data source in order to retrieve current data.

The business analyst builds a report that will display near real-time updates by enabling
and setting up automatic page refresh.

When ready, the business analyst publishes their Power BI Desktop file (.pbix) to a
workspace in the Power BI service.

Once published, the workspace contains a new report and DirectQuery dataset. When the
workspace is a personal or Pro workspace, the minimum automatic page refresh interval is
30 minutes (even when the report creator sets a lower interval).

When report consumers open a report page that has automatic page refresh enabled, data
visualizations refresh automatically to show current data.

Each visual on an automatic page refresh page queries the dataset to retrieve current data
from the underlying data source.

To connect to data sources that reside within a private organizational network, an On-
premises data gateway is required.

When an automatic page refresh report is stored in a workspace on Premium capacity or a Premium Per User (PPU) capacity, Power BI can automatically refresh on intervals less than
30 minutes—even at one-minute intervals. It's also possible to use the change detection
refresh type so Power BI can avoid unnecessary refreshes.

When the change detection refresh type is set, at each refresh interval, Power BI sends
change detection queries to determine whether data has changed since the last automatic
refresh. When Power BI detects change, it refreshes all visuals on the page.

Capacity administrators can enable or disable the automatic page refresh feature. When
the feature is disabled, automatic page refresh won't work for any reports stored in
workspaces assigned to the capacity. Capacity administrators can also set a minimum
refresh interval and a minimum execution interval. These minimum intervals will override
any report page setting that uses a lower interval.

Power BI administrators oversee and monitor activity in the Power BI service.

Key points
The following are some key points to emphasize about the self-service real-time
analytics scenario.

Supported data sources


The automatic page refresh feature doesn't work for reports connected to import
models, where all tables use import storage mode. The feature only works when the
Power BI report connects to a dataset that:

Includes DirectQuery storage mode tables.


Uses incremental refresh to get the latest data in real-time with DirectQuery. This
capability is described later in this topic.
Is a live connection to a tabular model in Azure Analysis Services (AAS) or SQL
Server Analysis Services (SSAS).
Is a push dataset. For more information, see Pushing data to datasets.
A DirectQuery model is an alternative to an import model. Models developed in
DirectQuery mode don't import data. Instead, they consist only of metadata defining the
model structure. When the model is queried, native queries are used to retrieve data
from the underlying data source.

From a self-service perspective, the business analyst can add DirectQuery storage tables
to their model in Power BI Desktop, providing the data source supports this storage
mode. Typically, relational databases are supported by DirectQuery. For a full listing of
data sources that support DirectQuery, see Data sources supported by DirectQuery.

A business analyst can also enhance an import model by setting up incremental refresh.
By enabling the Get the latest data in real-time with DirectQuery option (only
supported by Premium workspaces), Power BI Desktop adds a DirectQuery partition to
ensure the latest data is retrieved. For more information, see Incremental refresh and
real-time data for datasets.

The business analyst can also create a live connection to an existing tabular model that
includes DirectQuery storage mode tables.

Involve data source owners


Before publishing an automatic page refresh report, it's a good idea to first discuss the
real-time requirements with the data source owners. That's because automatic page
refresh can place a significant workload on the data source.

Consider a single report page that's set to refresh every five minutes and that includes
two visuals. When the report page is open, Power BI will send at least 24 queries per
hour (12 refreshes multiplied by two visuals) to the underlying data source. Now
consider that 10 report consumers open the same report page at the same time. In this
case, Power BI will send 240 queries per hour.

It's important to discuss the real-time requirements, including the number of visuals on
the report page and the desired refresh interval. When the use case is justified, the data
source owner can take proactive steps by scaling up the data source resources. They can
also optimize the data source by adding useful indexes and materialized views. For more
information, see DirectQuery model guidance in Power BI Desktop.

Refresh type
The automatic page refresh feature supports two refresh types.

Fixed interval: Updates all page visuals based on a fixed interval, which can be
from every one second to multiple days.
Change detection: Updates all page visuals providing that source data has
changed since the last automatic refresh. It avoids unnecessary refreshes, which
can help to reduce resource consumption for the Power BI service and the
underlying data source. Power BI only supports this refresh type for Premium
workspaces and for data models hosted by Power BI. Remote data models, which
are hosted in AAS or SSAS, aren't supported.

To set up change detection, you must create a special type of measure called a change
detection measure. For example, a change detection measure might query for the
maximum sales order number. Power BI uses the change detection measure to query
the data source. Each time, Power BI stores the query result so it can compare it with the
next result (according to the refresh interval you set). When the results differ, Power BI
refreshes the page.

A model can only have one change detection measure, and there can only be a
maximum of 10 change detection measures per tenant.

For more information, see Refresh types.

Capacity administration
When a workspace is attached to a Premium capacity, capacity administrators can
enable or disable the automatic page refresh feature for the capacity. When the feature
is disabled, automatic page refresh won't work for any report stored in any of the
attached workspaces.

Capacity administrators can also set a minimum refresh interval (default five minutes)
and a minimum execution interval (default five minutes). The execution interval
determines the frequency of change detection queries. When a report page interval is
less than the capacity minimum interval, Power BI will use the capacity minimum interval.

7 Note

Minimum intervals don't apply to reports open in Power BI Desktop.

When there are performance issues related to automatic page refresh, a capacity
administrator can:

Scale up the capacity to a larger Premium SKU.


Raise the minimum intervals.

For more information, see Page refresh intervals.


Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The gateway supports the
DirectQuery operations (visual queries and change detection queries).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption.

By using the Premium Capacity Metrics app that's available to administrators, you can
visualize how much of the capacity is being used by low-priority queries. Low-priority
queries consist of automatic page refresh queries and model refresh queries. Change
detection queries aren't low priority.

Next steps
For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Power BI usage scenarios: Embed for
your organization
Article • 03/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This usage scenario focuses on how a developer can programmatically embed Power BI
content in a custom application for your organization. (The developer isn't necessarily
responsible for creating the Power BI content.) The Embed for your organization
scenario applies when the application audience comprises users who have permission
and appropriate licenses to access Power BI content in the organization. These users
must have organizational accounts (including guest accounts), which authenticate with
Azure Active Directory (Azure AD).

7 Note

In this scenario, Power BI is software-as-a-service (SaaS). The embedding scenario is sometimes referred to as User owns data.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support embedding for your organization.

The above diagram depicts the following user actions, tools, and features:

Item Description

The Power BI content creator develops a BI solution by using Power BI Desktop.

When ready, the content creator publishes the Power BI Desktop file (.pbix) to the Power BI
service.

To connect to any data sources that reside within a private organizational network, an on-
premises data gateway is required for data refresh.

A Power BI workspace contains Power BI items ready for embedding. For non-personal
workspaces, users of the custom application have permission to view (or create or modify)
Power BI content because they belong to a workspace role or they have direct
permissions.

The custom application prompts the app user to authenticate with Azure AD. When
authentication succeeds, the custom application caches an Azure AD access token.

The custom application uses the Azure AD access token to make Power BI REST API calls
on behalf of the app user. Specifically, the application uses the access token to retrieve
metadata about workspace items. Metadata includes properties required to embed
content in the custom application.

The custom application embeds a specific Power BI item in an iframe HTML element. The
application can support the creation and editing of Power BI reports, providing the user
has permission to do so.

Power BI administrators oversee and monitor activity in the Power BI service.


Key points
The following are some key points to emphasize about programmatically embedding Power BI content in a custom application for your organization.

Use cases
There are several reasons why you might embed Power BI content for your organization.

Internal business intelligence portal: You might want to create an internal business intelligence (BI) portal as a replacement for the Power BI service. That
way, you can create a custom application that integrates content from Power BI
and other BI tools.
Internal app: You might want to develop an intranet app that shows data
visualizations. For example, an intranet site for a manufacturing department could
show real-time visuals that provide up-to-date information about the production
line.
Customized logging: You might want to log custom events to record Power BI
content access and use, beyond what the activity log records.

 Tip

If you're looking to create a BI portal styled for your organization, you might be
able to achieve that by simply adding custom branding to the Power BI service.

No-code embedding
Developing a programmatic solution requires skill, time, and effort. Consider that there
are embedding techniques known as no-code embedding that non-developers can use
to embed content in a simple internal portal or website.

Use the Power BI report web part to embed Power BI reports in SharePoint Online.
Use a secure embed code (or HTML) that's generated by Power BI to embed Power
BI reports in internal web portals.
Embed Power BI reports or dashboards in Power Pages.
Embed reports in a Microsoft Teams channel or chat.

These techniques require that report consumers belong to the organization, be authenticated, and have permission to access the reports. Power BI ensures that all
permissions and data security are enforced when consumers view the reports.
Sometimes, users might be challenged to authenticate by signing in to Power BI.
Embeddable content
When embedding for your organization, you can embed the following Power BI content
types:

Power BI reports
Specific Power BI report visuals
Paginated reports
Q&A experience
Dashboards
Specific dashboard tiles

There's no limitation on where the content resides. The content could reside in a
personal workspace or a regular workspace. What matters is that the app user has
permission to view (or create or edit) the content. For example, it's possible to embed
content from the app user's personal workspace.

Any content the user can see in the Power BI service may be embedded in a custom
application. If the user has permission to create or edit content, it's possible for a
custom app to support that functionality (for Power BI reports only).

Authentication
The authentication flow is interactive authentication with Azure AD. Interactive
authentication means that the app user will be challenged to authenticate. When
authenticated, Azure AD returns an access token. It's the responsibility of the custom
application to cache the access token so that it can be used to make Power BI REST API
calls and to embed content inside an iframe HTML element. Those calls can retrieve
metadata about Power BI content on behalf of the app user, including the properties
required to embed it in the custom application.
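
As a minimal sketch of this flow, the following browser-side TypeScript uses the Microsoft Authentication Library (@azure/msal-browser) to acquire an Azure AD access token for the signed-in user. The client ID, tenant ID, and requested scope are placeholders, and the app registration is assumed to have the corresponding delegated Power BI permissions.

```typescript
// Minimal browser-side sketch with @azure/msal-browser; IDs and scope are placeholders.
import { PublicClientApplication } from "@azure/msal-browser";

const msalApp = new PublicClientApplication({
  auth: {
    clientId: "<app-client-id>",
    authority: "https://login.microsoftonline.com/<tenant-id>",
  },
});

const scopes = ["https://analysis.windows.net/powerbi/api/Report.Read.All"];

export async function getUserAccessToken(): Promise<string> {
  await msalApp.initialize();
  const account = msalApp.getAllAccounts()[0];
  if (account) {
    try {
      // Try to acquire a token silently for an already signed-in user.
      const result = await msalApp.acquireTokenSilent({ scopes, account });
      return result.accessToken;
    } catch {
      // Fall through to the interactive prompt below.
    }
  }
  // The user is challenged to authenticate interactively.
  const result = await msalApp.acquireTokenPopup({ scopes });
  return result.accessToken;
}
```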

Licensing
There's no specific licensing requirement to embed for your organization. What matters
is that the app user has permission and an appropriate Power BI license to view (or
create or edit) the content. It's even possible to embed content from a personal
workspace when the app user only has a Power BI (free) license.

Power BI client APIs


The Power BI client APIs allow a developer to achieve tight integration between the
custom application and the Power BI content. They develop the application by writing
custom logic with JavaScript or TypeScript that runs in the browser.

The application can set up and automate operations, and it can respond to user-
initiated actions. Additionally, you can integrate Power BI capabilities, including
navigation, filters and slicers, menu operations, layout, and bookmarks.
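
For example, the following browser-side sketch embeds a report with the powerbi-client library. The access token is the user's own Azure AD token (acquired as shown earlier), the report ID and embed URL are placeholders that the app would retrieve from the Power BI REST APIs, and the container element and settings are assumptions for illustration.

```typescript
// Browser-side sketch with the powerbi-client library; all values are placeholders.
import * as pbi from "powerbi-client";
import { models } from "powerbi-client";

const powerbiService = new pbi.service.Service(
  pbi.factories.hpmFactory,
  pbi.factories.wpmpFactory,
  pbi.factories.routerFactory
);

const embedConfig: pbi.IEmbedConfiguration = {
  type: "report",
  tokenType: models.TokenType.Aad, // embed for your organization uses the user's own token
  accessToken: "<azure-ad-access-token>",
  embedUrl: "<report-embed-url>", // retrieved from the Reports - Get Report REST API
  id: "<report-id>",
  settings: {
    panes: { filters: { visible: false } }, // example of customizing the embedded experience
  },
};

const container = document.getElementById("reportContainer")!;
const report = powerbiService.embed(container, embedConfig);

// Respond to user-initiated actions by handling events.
report.on("loaded", () => console.log("Report loaded"));
```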

 Tip

The Power BI Embedded Analytics Playground is a website that helps you learn,
explore, and experiment with Power BI embedded analytics. It includes a developer
sandbox for hands-on experiences that use the client APIs with sample Power BI
content or your own content. Code snippets and showcases are available for you to
explore, too.

For more information, see What is the Power BI embedded analytics playground?

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the
private organizational network or a virtual network. The two purposes of a gateway are
to refresh imported data, or view a report that queries a live connection or DirectQuery
dataset.

7 Note

A centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In standard mode, the data gateway supports live
connection and DirectQuery operations (in addition to scheduled data refresh
operations).

System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption. Logged events will describe the
consumption method as Embedding for your organization. There's presently no way to
determine whether content was viewed in a no-code embedding experience in a custom
application.
Next steps
To learn more about Power BI embedded analytics, work through the Embed Power BI
analytics learning path.

You can also work through the Power BI Developer in a Day course. It includes a self-
study kit that guides you through the process of developing an ASP.NET Core MVC app.

For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Power BI usage scenarios: Embed for
your customers
Article • 03/20/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

This usage scenario focuses on how a developer can programmatically embed Power BI
content in a custom application for your customers. (The developer isn't necessarily
responsible for creating the Power BI content.) The Embed for your customers scenario
applies when the application audience comprises users who don't have permission or
appropriate licenses to access Power BI content in your organization. The custom
application requires an embedding identity that has permission and an appropriate
license to access Power BI content. The custom application could be a multitenancy
application.

7 Note

In this scenario, Power BI is platform-as-a-service (PaaS). The embedding scenario is sometimes referred to as App owns data.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components that support embedding for your customers.

The above diagram depicts the following user actions, tools, and features:

Item Description

The Power BI content creator develops a BI solution by using Power BI Desktop.

When ready, the content creator publishes the Power BI Desktop file (.pbix) to the Power BI
service.

To connect to any data sources that reside within a private organizational network, an on-
premises data gateway is required for data refresh.

A Power BI workspace contains Power BI items ready for embedding. An embedding identity, either a service principal or master user account, must belong to either the
workspace Admin or Member role. In a multi-tenancy solution, the separation of tenants is
achieved by creating one workspace for each tenant. This design pattern is known as
workspace separation.

The custom application prompts the app user to authenticate by using any authentication
method (not necessarily Azure AD).

When authentication succeeds, the custom application uses the embedding identity to
acquire and cache an Azure AD access token.

The custom application uses the Azure AD access token to make Power BI REST API calls
on behalf of the embedding identity. Specifically, the application uses the access token to
retrieve metadata about workspace items. Metadata includes properties required to
embed content in the custom application. It also uses the access token to generate and
cache embed tokens, which represent facts about Power BI content and how the
application can access it.

The custom application embeds a specific Power BI item in an iframe HTML element. The
application can support the creation and editing of Power BI reports, providing the
embedding identity has permission to do so.

Power BI administrators oversee and monitor activity in the Power BI service.

Key points
The following are some key points to emphasize about programmatically embedding Power BI content in a custom application for your customers.

Use case
Often, embedding for your customers is done by Independent Software Vendors (ISVs).
ISVs recognize a need to embed analytics in their apps. It allows users to have direct
access to in-context insights, helping them make decisions based on facts instead of
opinions. Instead of developing visualizations, it's usually faster and less expensive to
embed Power BI content.

ISVs can develop a multitenancy application, where each of their customers is a tenant. A
multitenancy application that embeds Power BI analytics will use the Embed for your
customers scenario because the application users include external users. Multitenancy
applications are described in more detail later in this article.

Embeddable content
When embedding for your customers, you can embed the following Power BI content
types:

Power BI reports
Specific Power BI report visuals
Paginated reports
Q&A experience
Dashboards
Specific dashboard tiles

There's no limitation on where the content resides, except the content can't reside in a
personal workspace. What matters is that the embedding identity has permission to
view (or create or edit) the content.

Authentication
The authentication flow is non-interactive authentication with Azure AD (also known as
silent authentication). Non-interactive authentication means that the app user isn't
required to have a Power BI account, and even when they do, it isn't used. So a
dedicated Azure AD identity, known as the embedding identity, authenticates with
Azure AD. An embedding identity can be a service principal or a master user account
(described later).

The authentication flow attempts to acquire an Azure AD token in a way in which the
authentication service can't prompt the user for additional information. Once the app
user authenticates with the app (the app can use any authentication method), the app
uses the embedding identity to acquire an Azure AD token by using a non-interactive
authentication flow.

Once the app acquires an Azure AD token, it caches it and then uses it to generate an
embed token. An embed token represents facts about Power BI content and how to
access them. The app uses the embed token to embed content inside an iframe HTML
element.
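
As a hedged illustration, the following server-side sketch generates an embed token for a single report and its dataset by using the Embed Token - Generate Token REST API. The report and dataset IDs are placeholders, and the Azure AD token is assumed to be acquired with the embedding identity (a service principal or master user account, described next).

```typescript
// Hypothetical server-side helper; IDs are placeholders, and aadToken belongs to the
// embedding identity, not to the app user.
async function generateEmbedToken(
  aadToken: string,
  reportId: string,
  datasetId: string
): Promise<string> {
  const response = await fetch("https://api.powerbi.com/v1.0/myorg/GenerateToken", {
    method: "POST",
    headers: { Authorization: `Bearer ${aadToken}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      reports: [{ id: reportId }],
      datasets: [{ id: datasetId }],
      // identities: [ ... ] // optionally set an effective identity to enforce RLS (described later)
    }),
  });
  if (!response.ok) throw new Error(`GenerateToken failed: ${response.status}`);
  const body = await response.json();
  return body.token; // return this embed token (not the Azure AD token) to the client
}
```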

Service principal
An app can use a service principal to acquire an Azure AD token. An Azure service
principal is a security identity used by apps. It defines the access policy and permissions
for the app in the Azure AD tenant, enabling core features such as authentication of the
app during sign in, and authorization during resource access. A service principal can
authenticate by using an app secret or certificate. A service principal can only use Power
BI REST APIs, when the Allow service principals to use Power BI APIs tenant setting is
enabled, and the service principal belongs to an allowed group.

 Tip

We recommend using a service principal for production apps. It provides the highest security and for this reason it's the approach recommended by Azure AD.
Also, it supports better automation and scale and there's less management
overhead. However, it requires Power BI admin rights to set up and manage.

Master user account

An app can use a master user account to acquire an Azure AD token. A master user account is a
regular Azure AD user. In Power BI, the account must belong to the workspace Admin or
Member role to embed workspace content. It must also have either a Power BI Pro or
Power BI Premium Per User (PPU) license.

7 Note

It's not possible to use a master user account to embed paginated reports.

For more information about embedding identities, see Set up permissions to embed
Power BI content.

Licensing
When embedding Power BI content for your customers, you need to ensure that content
resides in a workspace that has one of the following license modes:

Premium per capacity: This license mode is available with Power BI Premium.
Embedded: This license mode is available with Power BI Embedded .

Each license mode option requires the purchase of a billable product that is a capacity-
based license. A capacity-based license allows you to create reserved capacities.

Capacities represent the computational resources that are required to process workloads, such as report rendering and data refresh. Reserved capacities are isolated
from other customers' workloads, so they offer scale that can deliver dependable and
consistent performance.

7 Note

It's not possible to use the Embed for your customers scenario in production
environments with the Power BI (free), Power BI Pro, or Power BI PPU licenses.

For more information about products and licensing, see Select the appropriate Power BI
embedded analytics product.
Power BI client APIs
The Power BI client APIs allow a developer to achieve tight integration between the
custom application and the Power BI content. They develop the application by writing
custom logic with JavaScript or TypeScript that runs in the browser.

The application can set up and automate operations, and it can respond to user-
initiated actions. Additionally, you can integrate Power BI capabilities, including
navigation, filters and slicers, menu operations, layout, and bookmarks.

 Tip

The Power BI Embedded Analytics Playground is a website that helps you learn,
explore, and experiment with Power BI embedded analytics. It includes a developer
sandbox for hands-on experiences that use the client APIs with sample Power BI
content or your own content. Code snippets and showcases are available for you to
explore, too.

For more information, see What is the Power BI embedded analytics playground?

Enforce data permissions


When app users should only be able to view a subset of data, you need to
develop a solution that restricts access to Power BI dataset data. The reason might be
because some users aren't permitted to view specific data, such as sales results of other
sales regions. Achieving this requirement commonly involves setting up row-level
security (RLS), which involves defining roles and rules that filter model data.

When you use the Embed for your customers scenario, the app must set the effective identity of
the embed token to restrict access to data. This effective identity determines how Power
BI will connect to the model and how it will enforce RLS roles. How you set up the
effective identity depends on the type of Power BI dataset.

For more information about RLS roles for embedded content, see Enforce data
permissions for Power BI embedded analytics.
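
As a minimal sketch, a back-end helper like the following (hypothetical) function could call the Generate Token REST API and pass an effective identity so that Power BI enforces the appropriate RLS roles. The username, role names, and IDs are placeholders.

```typescript
// Hypothetical helper: generate an embed token that's scoped to one effective identity.
// aadToken is an Azure AD token for the app's embedding identity (for example, a service principal).
async function generateEmbedToken(
  aadToken: string,
  reportId: string,
  datasetId: string,
  effectiveUsername: string,
  roles: string[]
): Promise<string> {
  const response = await fetch("https://api.powerbi.com/v1.0/myorg/GenerateToken", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${aadToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      reports: [{ id: reportId }],
      datasets: [{ id: datasetId }],
      // The effective identity determines which RLS roles Power BI enforces.
      identities: [{ username: effectiveUsername, roles, datasets: [datasetId] }],
    }),
  });
  if (!response.ok) {
    throw new Error(`GenerateToken failed with status ${response.status}`);
  }
  const payload = (await response.json()) as { token: string };
  return payload.token;
}
```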

Multitenancy applications
Multiple organizations can use a multitenancy app, where each organization is a tenant.
A multitenancy app that embeds Power BI analytics can use the Embed for your
customers scenario because the app users include external users. When designing a
multitenancy app, you can choose from two different tenancy models.

The recommended approach is to use the workspace separation model. You can achieve
this approach by creating one Power BI workspace for each tenant. Each workspace
contains Power BI artifacts that are specific to that tenant, and the datasets connect to a
separate database for each tenant.

 Tip

For more information about the workspace separation model, see Automate
workspace separation. For more information about scalable multitenancy apps, see
Service principal profiles for multitenancy apps in Power BI Embedded.

Alternatively, you can use the single multi-customer database model. With this model, your solution uses a single workspace that includes a set of Power BI items shared across all tenants. RLS roles, which are defined in the datasets, filter the data to ensure that each organization only views its own data.
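
As an illustration of the workspace separation model, a back-end process might provision one workspace per tenant by calling the Power BI REST API, as sketched below. The naming convention and error handling are assumptions made for the example.

```typescript
// Hedged sketch: create a workspace for a new tenant (workspace separation model).
// The calling identity must be allowed to create workspaces.
async function createTenantWorkspace(aadToken: string, tenantName: string): Promise<string> {
  const response = await fetch(
    "https://api.powerbi.com/v1.0/myorg/groups?workspaceV2=True",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${aadToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: `Tenant - ${tenantName}` }),
    }
  );
  if (!response.ok) {
    throw new Error(`Workspace creation failed with status ${response.status}`);
  }
  const workspace = (await response.json()) as { id: string; name: string };
  return workspace.id; // Store the workspace ID against the tenant for later embedding.
}
```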

No-code embedding
Developing a programmatic solution requires skill, time, and effort. One alternative is an embedding technique known as no-code embedding, which non-developers can use to embed Power BI reports or dashboards in Power Pages.

Gateway setup
Typically, a data gateway is required when accessing data sources that reside within the private organizational network or a virtual network. The two purposes of a gateway are to refresh imported data and to serve reports that query a live connection or DirectQuery dataset.

7 Note

A centralized data gateway in standard mode is strongly recommended over gateways in personal mode. In standard mode, the data gateway supports live connection and DirectQuery operations (in addition to scheduled data refresh operations).
System oversight
The activity log records user activities that occur in the Power BI service. Power BI
administrators can use the activity log data that's collected to perform auditing to help
them understand usage patterns and adoption.
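
For example, an administrative process could retrieve one day of activity events with the Get Activity Events REST API, following continuation URIs until every batch is returned. This sketch assumes the caller already holds a suitable Azure AD token (an identity with Power BI admin rights, or a service principal permitted to use read-only admin APIs).

```typescript
// Read one day of activity log events (day in yyyy-MM-dd format).
async function getActivityEvents(aadToken: string, day: string): Promise<unknown[]> {
  const events: unknown[] = [];
  let url =
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents" +
    `?startDateTime='${day}T00:00:00Z'&endDateTime='${day}T23:59:59Z'`;

  // The API returns events in batches; follow continuationUri until it's no longer returned.
  while (url) {
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${aadToken}` },
    });
    if (!response.ok) {
      throw new Error(`Activity events request failed with status ${response.status}`);
    }
    const page = (await response.json()) as {
      activityEventEntities: unknown[];
      continuationUri?: string;
    };
    events.push(...page.activityEventEntities);
    url = page.continuationUri ?? "";
  }
  return events;
}
```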

Next steps
To learn more about Power BI embedded analytics, work through the Embed Power BI
analytics learning path.

You can also work through the Power BI Developer in a Day course. It includes a self-
study kit that guides you through the process of developing an ASP.NET Core MVC app.

For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Power BI usage scenarios: On-premises
reporting
Article • 02/27/2023

7 Note

This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.

The on-premises reporting scenario is one of several hybrid and custom scenarios for
deploying Power BI solutions without using the Power BI service.

This scenario involves using Power BI Report Server, which is an on-premises portal for
publishing, sharing, and consuming business intelligence content within the
organizational network. It's useful when the organization needs an alternative to the
cloud-based Power BI service for deploying some (or all) BI content. For example, a fully
customer-managed platform may be necessary for regulatory, legal, or intellectual
property reasons.

Scenario diagram
The following diagram depicts a high-level overview of the most common user actions
and Power BI components to support on-premises reporting. The focus is on using
Power BI Report Server, which runs on a Windows server within the organizational
network.

The scenario diagram depicts the following user actions, tools, and features:

Item Description

A Power BI content creator builds a BI solution.

Power BI Desktop for Report Server connects to data from one or more data sources.
Queries and data mashups, which combine multiple sources, are developed in the Power
Query Editor.

Data model development and report creation are done in Power BI Desktop for Report
Server. It generates a specific type of Power BI Desktop file (.pbix) that can be published to
Power BI Report Server.

The report creator can also build paginated reports using Power BI Report Builder. This
tool generates a Report Definition Language file (.rdl) that can be published to Power BI
Report Server.

The report creator can also develop reports using Excel. The Excel workbook file (.xlsx) can
be published to Power BI Report Server.

When ready, the content creator publishes their file to Power BI Report Server.

Content is published to a folder in Power BI Report Server.

Report consumers view reports published to Power BI Report Server.

Report consumers can also view reports using Power BI mobile apps.

Server administrators manage the Windows server infrastructure.



Database administrators manage Power BI Report Server, including the Report Server
databases, and SQL Server Agent.

SQL Server Agent jobs periodically refresh import datasets.

Administrators oversee and monitor activity in Power BI Report Server.

Key points
The following are some key points to emphasize about the on-premises reporting
scenario.

Report creator experience


Content creators use a specific tool named Power BI Desktop for Report Server. This
version of Power BI Desktop is updated three times per year and is compatible with the
Power BI Report Server release cycle.

7 Note

For report creators who create content for both the Power BI service and Power BI
Report Server, the two versions of Power BI Desktop can be installed side by side.

Report consumer experience


The report consumer experience for Power BI Report Server is very different from the
Power BI service. The Power BI Report Server is a web portal for viewing, storing, and
managing content. Content files (.pbix, .rdl, or .xlsx) are published to a folder hierarchy.
For more information, see Manage content in the web portal.

Power BI Report Server


Power BI Report Server is a distinct product from SQL Server Reporting Services (SSRS).
It's licensed and installed separately. Power BI Report Server is considered a superset of
SSRS because it comprises additional capabilities beyond SSRS.

) Important
Although Power BI Report Server and the Power BI service are supported by the
same engineering team at Microsoft, there are substantial functionality differences
between the two products. Power BI Report Server is a basic reporting portal for
on-premises reporting. For this reason there are many feature differences between
it and the Power BI service. The feature set of Power BI Report Server is intentionally
simple, and parity should not be expected. Before installing Power BI Report Server,
verify that critical features you intend to use are supported.

Report server databases


SQL Server hosts the Report Server databases. Most commonly, a SQL Server Database
Engine instance is installed on a Windows server in an on-premises data center. It can
also be installed on a virtual machine in Azure (hosted cloud) or hosted by Azure SQL
Managed Instance (not depicted in the scenario diagram). The database infrastructure is
managed by a database administrator.

Mobile access
Additional configurations must be done to enable remote mobile access to Power BI
Report Server. For more information, see Configure Power BI mobile app access to
Report Server remotely.

Licensing Power BI Report Server


There are two ways to license Power BI Report Server: Power BI Premium and SQL Server
Enterprise Edition with Software Assurance.

With the purchase of Power BI Premium capacity, Power BI Report Server may be
installed on an on-premises server, provided it has the same number of cores as the
capacity node's v-cores. This way, it's possible to adopt a hybrid approach supporting
publication of content to the Power BI service (cloud) and to Power BI Report Server (on-
premises or hosted cloud in Azure).

7 Note

When licensing Power BI Report Server as part of the Premium capacity feature set,
it's only available with the P SKUs. The other capacity-based SKUs (EM and A SKUs)
do not offer this benefit, nor does Power BI Premium Per User (PPU).
Next steps
For other useful scenarios to help you with Power BI implementation decisions, see the
Power BI usage scenarios article.
Microsoft's BI transformation
Article • 02/27/2023

 Tip

This article focuses on Microsoft's experience establishing a Center of Excellence.


When setting up your own Center of Excellence, we recommend that you also
review the information covered in the Power BI adoption roadmap.

This article targets IT professionals and IT managers. You'll learn about our BI strategy
and vision, which enables us to continuously leverage our data as an asset. You'll also
learn how we successfully drive a data culture of business decision making with Power
BI.

Some background first: Today, the explosion of data is impacting consumers and businesses at breakneck speed. Succeeding in this data-intensive environment requires analysts and executives who can distill enormous amounts of data into succinct insights. The revolutions in Microsoft's BI tools have changed the way that Microsoft itself explores its data and gets to the right insights needed to drive impact in the company.

So, how can your organization, too, revolutionize the way it works with data? Let's help
you understand by sharing the story of our BI transformation journey.

Microsoft journey
Several years ago at Microsoft, our organizational culture encouraged individuals to
pursue full ownership of data and insights. It also experienced strong cultural resistance
to doing things in a standardized way. So, the organizational culture led to reporting
and analytic challenges. Specifically, it led to:

Inconsistent data definitions, hierarchies, metrics, and Key Performance Indicators (KPIs). For example, each country or region had its own way of reporting on new revenue. There was no consistency, and much confusion.
Analysts spending 75% of their time collecting and compiling data.
78% of reports being created in an "offline environment".
Over 350 centralized finance tools and systems.
Approximately $30 million annual spend on "shadow applications".
These challenges prompted us to think about how we could do things better. Finance
and other internal teams received executive support to transform the business review
process, which led to building a unified BI platform as our single source of truth. (We'll
discuss more about our BI platform later in this article.) Ultimately, these innovations led
to business reviews being transformed from dense tabular views into simpler, more
insightful visuals focused on key business themes.

How did we achieve this successful outcome? Delivering centralized BI managed by IT and extending it with self-service BI (SSBI) led to success. We describe it in two creative ways: discipline at the core and flexibility at the edge.

Discipline at the core


Discipline at the core means that IT retains control by curating a single master data
source. Delivering standardized corporate BI and defining consistent taxonomies and
hierarchies of KPIs is part of that discipline. Importantly, data permissions are enforced
centrally to ensure our people can only read the data they need.

First, we understood that our BI transformation wasn't a technology problem. To achieve success we learned to first define success, and then translate it into key metrics. It cannot be overstated how important it was for us to achieve consistency of definition across our data.

Our transformation didn't happen all at once. We prioritized the delivery of the
subsidiary scorecard consisting of about 30 KPIs. Then, over several years, we gradually
expanded the number and depth of subject areas, and built out more complex KPI
hierarchies. Today, it allows us to roll up lower-level KPIs at customer level to higher
ones at company level. Our total KPI count now exceeds 2000, and each is a key
measure of success and is aligned to corporate objectives. Now across the entire
company, corporate reports and SSBI solutions present KPIs that are well-defined,
consistent, and secure.

Flexibility at the edge


At the edge of the core, our analysts in the Finance, Sales, and Marketing teams became
more flexible and agile. They now benefit from the ability to analyze data more quickly.
More formally, this scenario is described as managed self-service BI (SSBI). We now
understand that managed SSBI is about mutual benefit for IT and analysts. Importantly,
we experienced optimizations by driving standardization, knowledge, and the reuse of
our data and BI solutions. And, as a company, we derived more value synergistically as
we found the right balance between centralized BI and managed SSBI.
Our solution
Starlight is the name we give to our internal data unification and analytics platform,
which supports finance, sales, marketing, and engineering. Its mission is to deliver a
robust, shared, and scalable data platform. The platform was built entirely by Finance,
and continues in operation today using the latest Microsoft products.

The KPI Lake isn't an Azure Data Lake. Rather, it's a Starlight-powered tabular BI
semantic model hosted in Azure IaaS using Microsoft SQL Server Analysis Services. The
BI semantic model delivers data sourced from over 100 internal sources, and defines
numerous hierarchies and KPIs. Its mission is to enable business performance reporting
and analysis teams across Finance, Marketing, and Sales. It does so to obtain timely,
accurate, and well performing insights through unified BI semantic models from relevant
sources.

When first deployed, it was an exciting time because the tabular BI semantic model
resulted in immediate and measurable benefits. The first version centralized C+E Finance
and Marketing BI platforms. Then, over the past six years, it's been expanded to
consolidate additional business insight solutions. Today, it continues to evolve, powering
our global and commercial business reviews as well as standard reporting and SSBI. Its
adoption has spiked 5X since its release—well beyond our initial expectations.

Here's a summary of key benefits:

It powers our subsidiary scorecard, worldwide business reviews, and finance, marketing, and sales reports and analytics.
It supports self-service analytics, enabling analysts to discover insights hidden in
data.
It drives reporting and analytics for incentive compensation, marketing and
operations analysis, sales performance metrics, senior leadership reviews, and the
annual planning process.
It delivers automated and dynamic reporting and analytics from a single source of
truth.

The KPI Lake is a great success story. It's often presented to our customers to showcase
an example of how to effectively use our latest technologies. Not surprisingly, it's highly
resonant with many of them.

How it works
The Starlight platform manages the flow of data from acquisition, to processing, and
then all the way to publication:
1. Robust and agile data integration takes place on a scheduled basis, consolidating
data from over 100 disparate raw sources. Source data systems include relational
databases, Azure Data Lake Storage, and Azure Synapse databases. Subject areas
include finance, marketing, sales, and engineering.
2. Once staged, the data is conformed and enriched using master data and business
logic. It's then loaded to data warehouse tables. The tabular BI semantic model is
then refreshed.
3. Analysts across the company use Excel and Power BI to deliver insights and
analytics from the tabular BI semantic model. And, it enables business owners to
champion metric definitions for their own business. When necessary, scaling is
achieved using Azure IaaS with load balancing.

Deliver success
Humorously, everybody wants one version of the truth... as long as it's theirs. But for
some organizations it's their reality. They have multiple versions of the truth as a result
of individuals pursuing full ownership of data and insights. For these organizations, this
unmanaged approach isn't likely to be a pathway to business success.

It's why we believe you need a Center of Excellence (COE). A COE is a central team that's
responsible for defining company-wide metrics and definitions, and much more. It's also
a business function that organizes people, processes, and technology components into
a comprehensive set of business competencies and capabilities.

We see much evidence to support that a comprehensive and robust COE is critical to
delivering value and maximizing business success. It can include change initiatives,
standard processes, roles, guidelines, best practices, support, training, and much more.

We invite you to read the articles in this COE series to learn more. Let's help you
discover how your organization can embrace change to deliver success.

Next steps
For more information about this article, check out the following resources:

Establish a Center of Excellence


Power BI adoption roadmap: Center of Excellence
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

In the next article in this series, learn how a COE helped us at Microsoft create a
standardized analytics and data platform to unlock insights from our data.
Professional services
Certified Power BI partners are available to help your organization succeed when setting
up a COE. They can provide you with cost-effective training or an audit of your data. To
engage a Power BI partner, visit the Power BI partner portal.

You can also engage with experienced consulting partners. They can help you assess, evaluate, or implement Power BI.
How Microsoft established a Center of
Excellence
Article • 02/27/2023

 Tip

This article focuses on Microsoft's experience establishing a Center of Excellence.


When setting up your own Center of Excellence, we recommend that you also
review the information covered in the Power BI adoption roadmap.

This article targets IT professionals and IT managers. You'll learn how to set up a BI and
analytics Center of Excellence (COE) in your organization, and how Microsoft has set up
theirs.

For some, there's a misconception that a COE is just a help desk—this thinking, however,
is far from reality.

Generally, a BI and analytics COE is a team of professionals that's responsible for establishing and maintaining a BI platform. It's also responsible for creating a single
source of truth, and defining a set of consistent company-wide metrics to unlock and
accelerate insights. Yet, a COE is a broad term. As such, it can be implemented and
managed in different ways, and its structure and scope can vary from organization to
organization. At its core, it's always about a robust platform delivering the right data
and insight capabilities to the right people at the right time. Ideally, it also promotes
evangelizing, training, and support. At Microsoft, it's described as discipline at the core,
and it's delivered as our BI platform and single source of truth.

In larger organizations, you could find multiple COEs with the core COE extended by
satellite COEs—often at department level. This way, a satellite COE is a group of experts
familiar with taxonomies and definitions, who know how to transform core data into
what makes sense for their department. Departmental analysts are granted permissions
to core data, and they trust it for use in their own reports. They build solutions that rely
upon carefully prepared core dimensions, facts, and business logic. At times, they might
also extend it with smaller, department-specific datasets and business logic. Importantly,
satellite COEs aren't ever disconnected nor do they act in isolation. At Microsoft, satellite
COEs promote flexibility at the edge.

For this extended scenario to succeed, departments must pay to play. In other words,
departments must financially invest in the core COE. This way, there isn't concern that
they're "not getting their fair share" or that their requirements are ever de-prioritized.
To support this scenario, the core COE must scale to meet funded departmental needs.
Once several datasets have been onboarded, economies of scale set in. At Microsoft, it
quickly became evident that working centrally is more economic and brings about faster
results. When each new subject area was onboarded, we experienced even greater
economies of scale that allowed for leveraging and contributing across the entire
platform, reinforcing our underlying data culture.

Consider an example: Our BI platform delivers core dimensions, facts, and business logic
for Finance, Sales, and Marketing. It also defines hundreds of Key Performance
Indicators (KPIs). Now, an analyst in the Power Platform business needs to prepare a
leadership dashboard. Some of the KPIs, like revenue and pipelines, come directly from
the BI platform. Others, however, are based on more granular needs of the business.
One such need is for a KPI on user adoption of a Power BI-specific feature: dataflows. So,
the analyst produces a Power BI composite model to integrate core BI platform data
with departmental data. They then add business logic to define their departmental KPIs.
Finally, they author their leadership dashboard based on the new model, which
leverages the company-wide COE resources amplified with local knowledge and data.

Importantly, a division of responsibility between the core and satellite COEs allows
departmental analysts to focus on breaking new ground, rather than managing a data
platform. At times, there can even be a mutually beneficial relationship between the
satellite COEs and the core COE. For example, a satellite COE may define new metrics
that—having proved beneficial to their department—end up as core metrics beneficial
to the entire company, available from—and supported by—the core COE.

BI platform
In your organization, the COE might be recognized by a different name, like the BI team
or group. The name matters less than what it actually does. If you don't have a
formalized team, we recommend you cultivate a team that brings together your core BI
experts to establish your BI platform.

At Microsoft, the COE is known as the BI Platform. It has many stakeholder groups
representing different divisions within the company like Finance, Sales, and Marketing.
It's organized to run shared capabilities and dedicated deliveries.
Shared capabilities
Shared capabilities are required to establish and operate the BI platform. They support
all stakeholder groups that fund the platform. They comprise the following teams:

Core platform engineering: We designed the BI platform with an engineering mindset. It's really a set of frameworks that support data ingestion, processing to enrich the data, and delivery of that data in BI semantic models for analyst consumption. Engineers are responsible for the technical design and implementation of the core BI platform capabilities. For example, they design and implement the data pipelines.
Infrastructure and hosting: IT engineers are responsible for provisioning and
managing all Azure services.
Support and operations: This team keeps the platform running. Support looks
after user needs like data permissions. Operations keep the platform running,
ensuring that Service Level Agreements (SLAs) are met, and communicating delays
or failures.
Release management: Technical program managers (PMs) release changes.
Changes can range from platform framework updates to change requests made to
BI semantic models. They're the last line of defense to ensure changes don't break
anything.

Dedicated deliveries
There's a dedicated delivery team for each stakeholder group. It typically consists of a
data engineer, an analytics engineer, and a technical PM—all funded by their
stakeholder group.

BI team roles
At Microsoft, our BI platform is operated by scalable teams of professionals. Teams are
aligned to dedicated and shared resources. Today, we have the following roles:

Program managers: PMs are a dedicated resource. They act as the primary contact
between the BI team and stakeholders. It's their job to translate stakeholder
business requirements to a technical specification. And, they manage the
prioritization of stakeholder deliverables.
Database leads: They're a dedicated resource responsible for onboarding new
datasets into the centralized data warehouse. Onboarding a dataset can involve
setting up conformed dimensions, adding business logic and custom attributes,
and standard names and formatting.
Analytics leads: They're a dedicated resource responsible for the design and
development of BI semantic models. They strive to apply a consistent architecture
using standard naming and formatting. Performance optimization is an important
part of their role.
Operations and infrastructure: They're a shared resource responsible for
managing jobs and data pipelines. They're also responsible for managing Azure
subscriptions, Power BI capacities, virtual machines, and data gateways.
Support: They're a shared resource responsible for writing documentation,
organizing training, communicating BI semantic model changes, and answering
user questions.

Governance and compliance


For each stakeholder group, PM leads provide cross-program governance and oversight. The overriding goal is to ensure that investments in IT generate business value and mitigate risk. Steering committee meetings are held on a regular basis to review progress and approve major initiatives.

Grow your own community


Establish and grow a community within your organization by:
Holding regular "Office Hours" events that set aside time with the BI team to allow people to ask questions, make suggestions, share ideas, and even lodge complaints.
Creating a Teams channel to provide support and encourage anyone to ask and respond to posted questions.
Running and promoting informal user groups, and encouraging employees to present or attend.
Running more formal training events on specific products and the BI platform itself. Consider delivering Power BI Dashboard in a Day, which is available as a free course kit and is a great way to introduce employees to Power BI for the first time.

Next steps
For more information about this article, check out the following resources:

BI solution architecture in the COE


Power BI adoption roadmap: Center of Excellence
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

In the next article in this series, learn about BI solution architecture in the COE and the
different technologies employed.

Professional services
Certified Power BI partners are available to help your organization succeed when setting
up a COE. They can provide you with cost-effective training or an audit of your data. To
engage a Power BI partner, visit the Power BI partner portal.

You can also engage with experienced consulting partners. They can help you assess, evaluate, or implement Power BI.
BI solution architecture in the Center of
Excellence
Article • 02/27/2023

This article targets IT professionals and IT managers. You'll learn about BI solution
architecture in the COE and the different technologies employed. Technologies include
Azure, Power BI, and Excel. Together, they can be leveraged to deliver a scalable and
data-driven cloud BI platform.

Designing a robust BI platform is somewhat like building a bridge; a bridge that connects transformed and enriched source data to data consumers. The design of such a complex structure requires an engineering mindset, though it can be one of the most creative and rewarding IT architectures you could design. In a large organization, a BI solution architecture can consist of:

Data sources
Data ingestion
Big data / data preparation
Data warehouse
BI semantic models
Reports

The platform must support specific demands. Specifically, it must scale and perform to
meet the expectations of business services and data consumers. At the same time, it
must be secure from the ground up. And, it must be sufficiently resilient to adapt to
change—because it's a certainty that in time new data and subject areas must be
brought online.

Frameworks
At Microsoft, from the outset we adopted a systems-like approach by investing in
framework development. Technical and business process frameworks increase the reuse
of design and logic and provide a consistent outcome. They also offer flexibility in
architecture leveraging many technologies, and they streamline and reduce engineering
overhead via repeatable processes.

We learned that well-designed frameworks increase visibility into data lineage, impact
analysis, business logic maintenance, managing taxonomy, and streamlining
governance. Also, development became faster and collaboration across large teams
became more responsive and effective.

We'll describe several of our frameworks in this article.

Data models
Data models provide you with control over how data is structured and accessed. To
business services and data consumers, data models are their interface with the BI
platform.

A BI platform can deliver three different types of models:

Enterprise models
BI semantic models
Machine Learning (ML) models

Enterprise models
Enterprise models are built and maintained by IT architects. They're sometimes referred
to as dimensional models or data marts. Typically, data is stored in relational format as
dimension and fact tables. These tables store cleansed and enriched data consolidated
from many systems and they represent an authoritative source for reporting and
analytics.

Enterprise models deliver a consistent and single source of data for reporting and BI.
They're built once and shared as a corporate standard. Governance policies ensure data
is secure, so access to sensitive data sets—such as customer information or financials—
is restricted on a needs basis. They adopt naming conventions ensuring consistency, thereby further establishing the credibility and quality of the data.

In a cloud BI platform, enterprise models can be deployed to a Synapse SQL pool in Azure Synapse. The Synapse SQL pool then becomes the single version of truth the organization can count on for fast and robust insights.
BI semantic models
BI semantic models represent a semantic layer over enterprise models. They're built and
maintained by BI developers and business users. BI developers create core BI semantic
models that source data from enterprise models. Business users can create smaller-
scale, independent models—or, they can extend core BI semantic models with
departmental or external sources. BI semantic models commonly focus on a single
subject area, and are often widely shared.

Business capabilities are enabled not by data alone, but by BI semantic models that
describe concepts, relationships, rules, and standards. This way, they represent intuitive
and easy-to-understand structures that define data relationships and encapsulate
business rules as calculations. They can also enforce fine-grained data permissions,
ensuring the right people have access to the right data. Importantly, they accelerate
query performance, providing extremely responsive interactive analytics—even over
terabytes of data. Like enterprise models, BI semantic models adopt naming
conventions ensuring consistency.

In a cloud BI platform, BI developers can deploy BI semantic models to Azure Analysis Services or Power BI Premium capacities. We recommend deploying to Power BI when
it's used as your reporting and analytics layer. These products support different storage
modes, allowing data model tables to cache their data or to use DirectQuery, which is a
technology that passes queries through to the underlying data source. DirectQuery is an
ideal storage mode when model tables represent large data volumes or there's a need
to deliver near-real time results. The two storage modes can be combined: Composite
models combine tables that use different storage modes in a single model.

For heavily queried models, Azure Load Balancer can be used to evenly distribute the
query load across model replicas. It also allows you to scale your applications and create
highly available BI semantic models.

Machine Learning models


Machine Learning (ML) models are built and maintained by data scientists. They're
mostly developed from raw sources in the data lake.

Trained ML models can reveal patterns within your data. In many circumstances, those
patterns can be used to make predictions that can be used to enrich data. For example,
purchasing behavior can be used to predict customer churn or segment customers.
Prediction results can be added to enterprise models to allow analysis by customer
segment.
In a cloud BI platform, you can use Azure Machine Learning to train, deploy, automate,
manage, and track ML models.

Data warehouse
Sitting at the heart of a BI platform is the data warehouse, which hosts your enterprise
models. It's a source of sanctioned data—as a system of record and as a hub—serving
enterprise models for reporting, BI, and data science.

Many business services, including line-of-business (LOB) applications, can rely upon the
data warehouse as an authoritative and governed source of enterprise knowledge.

At Microsoft, our data warehouse is hosted on Azure Data Lake Storage Gen2 (ADLS
Gen2) and Azure Synapse Analytics.

ADLS Gen2 makes Azure Storage the foundation for building enterprise data lakes
on Azure. It's designed to service multiple petabytes of information while
sustaining hundreds of gigabits of throughput. And, it offers low-cost storage
capacity and transactions. What's more, it supports Hadoop compatible access,
which allows you to manage and access data just as you would with a Hadoop
Distributed File System (HDFS). In fact, Azure HDInsight, Azure Databricks, and
Azure Synapse Analytics can all access data stored in ADLS Gen2. So, in a BI
platform, it's a good choice to store raw source data, semi-processed or staged
data, and production-ready data. We use it to store all our business data.
Azure Synapse Analytics is an analytics service that brings together enterprise data
warehousing and Big Data analytics. It gives you the freedom to query data on
your terms, using either serverless on-demand or provisioned resources—at scale.
Synapse SQL, a component of Azure Synapse Analytics, supports complete T-SQL-
based analytics, so it's ideal to host enterprise models comprising your dimension
and fact tables. Tables can be efficiently loaded from ADLS Gen2 using simple
Polybase T-SQL queries. You then have the power of MPP to run high-performance
analytics.

Business Rules Engine framework


We developed a Business Rules Engine (BRE) framework to catalog any business logic
that can be implemented in the data warehouse layer. A BRE can mean many things, but
in the context of a data warehouse it's useful for creating calculated columns in
relational tables. These calculated columns are usually represented as mathematical
calculations or expressions using conditional statements.

The intention is to split business logic from core BI code. Traditionally, business rules are
hard-coded into SQL stored procedures, which often results in much effort to maintain
them when business needs change. In a BRE, business rules are defined once and used
multiple times when applied to different data warehouse entities. If calculation logic
needs to change, it only needs to be updated in one place and not in numerous stored
procedures. There's a side benefit, too: a BRE framework drives transparency and
visibility into implemented business logic, which can be exposed via a set of reports that
create self-updating documentation.

Data sources
A data warehouse can consolidate data from practically any data source. It's mostly built
over LOB data sources, which are commonly relational databases storing subject-specific
data for sales, marketing, finance, etc. These databases can be cloud-hosted or they can
reside on-premises. Other data sources can be file-based, especially web logs or IOT
data sourced from devices. What's more, data can be sourced from Software-as-a-
Service (SaaS) vendors.

At Microsoft, some of our internal systems output operational data direct to ADLS Gen2
using raw file formats. In addition to our data lake, other source systems comprise
relational LOB applications, Excel workbooks, other file-based sources, and Master Data
Management (MDM) and custom data repositories. MDM repositories allow us to
manage our master data to ensure authoritative, standardized, and validated versions of
data.

Data ingestion
On a periodic basis, and according to the rhythms of the business, data is ingested from
source systems and loaded into the data warehouse. It could be once a day or at more
frequent intervals. Data ingestion is concerned with extracting, transforming, and
loading data. Or, perhaps the other way round: extracting, loading, and then
transforming data. The difference comes down to where the transformation takes place.
Transformations are applied to cleanse, conform, integrate, and standardize data. For
more information, see Extract, transform, and load (ETL).

Ultimately, the goal is to load the right data into your enterprise model as quickly and
efficiently as possible.

At Microsoft, we use Azure Data Factory (ADF). The service is used to schedule and
orchestrate data validations, transformations, and bulk loads from external source
systems into our data lake. It's managed by custom frameworks to process data in
parallel and at scale. In addition, comprehensive logging is undertaken to support
troubleshooting, performance monitoring, and to trigger alert notifications when
specific conditions are met.

Meanwhile, Azure Databricks—an Apache Spark-based analytics platform optimized for the Azure cloud services platform—performs transformations specifically for data
science. It also builds and executes ML models using Python notebooks. Scores from
these ML models are loaded into the data warehouse to integrate predictions with
enterprise applications and reports. Because Azure Databricks accesses the data lake
files directly, it eliminates or minimizes the need to copy or acquire data.

Ingestion framework
We developed an ingestion framework as a set of configuration tables and procedures.
It supports a data-driven approach to acquiring large volumes of data at high speed and
with minimal code. In short, this framework simplifies the process of data acquisition to
load the data warehouse.

The framework depends on configuration tables that store data source and data
destination-related information such as source type, server, database, schema, and
table-related details. This design approach means we don't need to develop specific
ADF pipelines or SQL Server Integration Services (SSIS) packages. Instead, procedures
are written in the language of our choice to create ADF pipelines that are dynamically
generated and executed at run time. So, data acquisition becomes a configuration
exercise that's easily operationalized. Traditionally, it would require extensive
development resources to create hard-coded ADF or SSIS packages.

The ingestion framework was designed to simplify the process of handling upstream
source schema changes, too. It's easy to update configuration data, either manually or automatically when schema changes are detected, to acquire newly added attributes in the source system.
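
To illustrate the idea only (this isn't the actual framework schema), a configuration entry for a metadata-driven ingestion approach might look like the following sketch; every property name here is an assumption.

```typescript
// Illustrative shape of one ingestion configuration entry.
interface IngestionConfigEntry {
  sourceType: "SqlServer" | "AdlsGen2" | "File" | "Api";
  sourceServer: string;
  sourceDatabase?: string;
  sourceSchema?: string;
  sourceTable?: string;
  targetContainer: string;   // landing zone in the data lake
  targetPath: string;        // folder and file naming convention
  loadType: "Full" | "Incremental";
  watermarkColumn?: string;  // used to detect new rows for incremental loads
  enabled: boolean;
}

// At run time, a generic pipeline reads the enabled entries and copies each source
// to its target, instead of relying on hand-built, per-source pipelines.
const exampleEntry: IngestionConfigEntry = {
  sourceType: "SqlServer",
  sourceServer: "sales-db.contoso.com",
  sourceDatabase: "Sales",
  sourceSchema: "dbo",
  sourceTable: "FactOrders",
  targetContainer: "raw",
  targetPath: "sales/factorders/{yyyy}/{MM}/{dd}",
  loadType: "Incremental",
  watermarkColumn: "OrderModifiedDate",
  enabled: true,
};
```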

Orchestration framework
We developed an orchestration framework to operationalize and orchestrate our data
pipelines. It uses a data-driven design that depends on a set of configuration tables.
These tables store metadata describing pipeline dependencies and how to map source
data to target data structures. The investment in developing this adaptive framework
has since paid for itself; there's no longer a requirement to hard-code each data
movement.

Data storage
A data lake can store large volumes of raw data for later use along with staging data
transformations.

At Microsoft, we use ADLS Gen2 as our single source of truth. It stores raw data
alongside staged data and production-ready data. It provides a highly scalable and cost-
effective data lake solution for big data analytics. Combining the power of a high-
performance file system with massive scale, it's optimized for data analytic workloads,
accelerating time to insight.

ADLS Gen2 provides the best of both worlds: it combines Blob storage with a high-performance file system namespace, which we configure with fine-grained access permissions.

Refined data is then stored in a relational database to deliver a high-performance, highly scalable data store for enterprise models, with security, governance, and manageability.
Subject-specific data marts are stored in Azure Synapse Analytics, which are loaded by
Azure Databricks or Polybase T-SQL queries.

Data consumption
At the reporting layer, business services consume enterprise data sourced from the data
warehouse. They also access data directly in the data lake for ad hoc analysis or data
science tasks.

Fine-grained permissions are enforced at all layers: in the data lake, enterprise models,
and BI semantic models. The permissions ensure data consumers can only see the data
they have rights to access.

At Microsoft, we use Power BI reports and dashboards, and Power BI paginated reports.
Some reporting and ad hoc analysis is done in Excel—particularly for financial reporting.

We publish data dictionaries, which provide reference information about our data
models. They're made available to our users so they can discover information about our
BI platform. Dictionaries document model designs, providing descriptions about entities,
formats, structure, data lineage, relationships, and calculations. We use Azure Data
Catalog to make our data sources easily discoverable and understandable.

Typically, data consumption patterns differ based on role:

Data analysts connect directly to core BI semantic models. When core BI semantic
models contain all data and logic they need, they use live connections to create
Power BI reports and dashboards. When they need to extend the models with
departmental data, they create Power BI composite models. If there's a need for
spreadsheet-style reports, they use Excel to produce reports based on core BI
semantic models or departmental BI semantic models.
BI developers and operational report authors connect directly to enterprise
models. They use Power BI Desktop to create live connection analytic reports. They
can also author operational-type BI reports as Power BI paginated reports, writing
native SQL queries to access data from the Azure Synapse Analytics enterprise
models by using T-SQL, or Power BI semantic models by using DAX or MDX.
Data scientists connect directly to data in the data lake. They use Azure Databricks
and Python notebooks to develop ML models, which are often experimental and
require specialty skills for production use.
Next steps
For more information about this article, check out the following resources:

Power BI adoption roadmap: Center of Excellence


Enterprise BI in Azure with Azure Synapse Analytics
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI

Professional services
Certified Power BI partners are available to help your organization succeed when setting
up a COE. They can provide you with cost-effective training or an audit of your data. To
engage a Power BI partner, visit the Power BI partner portal.

You can also engage with experienced consulting partners. They can help you assess, evaluate, or implement Power BI.
White papers for Power BI
Article • 03/09/2022

White papers allow you to explore Power BI topics at a deeper level. Here you can find a
list of available white papers for Power BI.

Planning a Power BI Enterprise Deployment (June 2020): This updated technical white paper outlines considerations and best practices for a well-performing and secure organizational Power BI deployment.

Power BI and Dataflows (November 2018): This white paper describes dataflows in technical detail, and describes the capabilities and initiatives behind dataflow features and functionality.

Power BI Premium Planning and Deployment (March 2019): The content of this white paper has been incorporated into general guidance. See the link for guidance and best practices for planning and deploying Premium capacity for well-defined workloads.

Capacity planning guidance for Power BI Report Server (March 2018): This paper aims to offer guidance on capacity planning for Power BI Report Server by sharing results of numerous load test executions of various workloads against a report server.

Security (March 2019): Provides a detailed explanation of security within Power BI.

Distribute Power BI content to external guest users using Azure Active Directory B2B (March 2019): This paper outlines how to distribute content to users outside the organization using the integration of Azure Active Directory Business-to-business (AAD B2B).

Advanced Analytics with Power BI (February 2017): Describes the advanced analytics capabilities of Power BI, including predictive analytics, custom visualizations, R integration, and data analysis expressions.

Bidirectional filtering (July 2018): Explains bidirectional cross-filtering in Power BI Desktop (the white paper also covers SQL Server Analysis Services 2016; both have the same behavior).

DirectQuery in SQL Server 2016 Analysis Services (January 2017): For SQL Server 2016, DirectQuery was redesigned for dramatically improved speed and performance; however, it is also now more complex to understand and implement.

Power BI and SAP BW (November 2019): This document describes how SAP customers can benefit from connecting Power BI to their existing SAP Business Warehouse (BW) systems. Updated in November 2019.

Securing the Tabular BI Semantic Model (April 2016): This paper introduces the security model for tabular BI semantic models and Power BI. You will learn how to create roles, implement dynamic security, configure impersonation settings, manage roles, and choose a method for connecting to models that works in your network security context.

Power BI and GDPR (April 2018): This link takes you to the list of white papers on the Service Trust Portal, including the Microsoft Power BI GDPR white paper.

Power BI migration overview (September 2020): This link takes you to an article that describes how to migrate from other business intelligence tools to Power BI.

7 Note

If you're interested in viewing or deleting personal data, please review Microsoft's guidance in the Windows Data Subject Requests for the GDPR site. If you're looking for general information about GDPR, see the GDPR section of the Service Trust portal.

More questions? Try asking the Power BI Community


Power BI security white paper
Article • 02/23/2023

Summary: Power BI is an online software service (SaaS, or Software as a Service) offering from Microsoft that lets you easily and quickly create self-service Business Intelligence dashboards, reports, datasets, and visualizations. With Power BI, you can connect to many different data sources, combine and shape data from those connections, then create reports and dashboards that can be shared with others.

Writers: Yitzhak Kesselman, Paddy Osborne, Matt Neely, Tony Bencic, Srinivasan
Turuvekere, Cristian Petculescu, Adi Regev, Naveen Sivaraj, Ben Glastein, Evgeny
Tshiorny, Arthi Ramasubramanian Iyer, Sid Jayadevan, Ronald Chang, Ori Eduar, Anton
Fritz, Idan Sheinberg, Ron Gilad, Sagiv Hadaya, Paul Inbar, Igor Uzhviev, Michael Roth,
Jaime Tarquino, Gennady Pats, Orion Lee, Yury Berezansky, Maya Shenhav, Romit
Chattopadhyay, Yariv Maimon, Bogdan Crivat

Technical Reviewers: Cristian Petculescu, Amir Netz, Sergei Gundorov

Applies to: Power BI SaaS, Power BI Desktop, Power BI Premium, Power BI Embedded
Analytics, Power BI Mobile.

7 Note

You can save or print this white paper by selecting Print from your browser, then
selecting Save as PDF.

Introduction
Power BI is an online software service (SaaS, or Software as a Service) offering from
Microsoft that lets you easily and quickly create self-service Business Intelligence
dashboards, reports, datasets, and visualizations. With Power BI, you can connect to
many different data sources, combine and shape data from those connections, then
create reports and dashboards that can be shared with others.

The world is rapidly changing; organizations are going through an accelerated digital
transformation, and we're seeing a massive increase in remote working, increased
customer demand for online services, and increased use of advanced technologies in
operations and business decision-making. And all of this is powered by the cloud.
As the transition to the cloud has changed from a trickle to a flood, and with the new,
exposed surface area that comes with it, more companies are asking How secure is my
data in the cloud? and What end-to-end protection is available to prevent my sensitive
data from leaking? And for the BI platforms that often handle some of the most strategic
information in the enterprise, these questions are doubly important.

The decades-old foundations of the BI security model - object-level and row-level security - while still important, clearly no longer suffice for providing the kind of security needed in the cloud era. Instead, organizations must look for a cloud-native, multi-tiered, defense-in-depth security solution for their business intelligence data.

Power BI was built to provide industry-leading complete and hermetic protection for
data. The product has earned the highest security classifications available in the industry,
and today many national security agencies, financial institutions, and health care
providers entrust it with their most sensitive information.

It all starts with the foundation. After a rough period in the early 2000s, Microsoft made
massive investments to address its security vulnerabilities, and in the following decades
built a strong security stack that goes as deep as the machine on-chip bios kernel and
extends all the way up to end-user experiences. These deep investments continue, and
today over 3,500 Microsoft engineers are engaged in building and enhancing
Microsoft's security stack and proactively addressing the ever-shifting threat landscape.
With billions of computers, trillions of logins, and countless zettabytes of information
entrusted to Microsoft's protection, the company now possesses the most advanced
security stack in the tech industry and is broadly viewed as the global leader in the fight
against malicious actors.

Power BI builds on this strong foundation. It uses the same security stack that earned
Azure the right to serve and protect the world's most sensitive data, and it integrates
with the most advanced information protection and compliance tools of Microsoft 365.
On top of these, it delivers security through multi-layered security measures, resulting in
end-to-end protection designed to deal with the unique challenges of the cloud era.

To provide an end-to-end solution for protecting sensitive assets, the product team
needed to address challenging customer concerns on multiple simultaneous fronts:

How do we control who can connect, where they connect from, and how they
connect? How can we control the connections?
How is the data stored? How is it encrypted? What controls do I have on my data?
How do I control and protect my sensitive data? How do I ensure this data can't leak
outside the organization?
How do I audit who conducts what operations? How do I react quickly if there's
suspicious activity on the service?
This article provides a comprehensive answer to all these questions. It starts with an
overview of the service architecture and explains how the main flows in the system work.
It then moves on to describe how users authenticate to Power BI, how data connections
are established, and how Power BI stores and moves data through the service. The last
section discusses the security features that allow you, as the service admin, to protect
your most valuable assets.

The Power BI service is governed by the Microsoft Online Services Terms, and the Microsoft Enterprise Privacy Statement. For the location of data processing, refer to the Location of Data Processing terms in the Microsoft Online Services Terms and to the Data Protection Addendum. For compliance information, the Microsoft Trust Center is the primary resource for Power BI. The Power BI team is working hard to
bring its customers the latest innovations and productivity. Learn more about
compliance in the Microsoft compliance offerings.

The Power BI service follows the Security Development Lifecycle (SDL), strict security
practices that support security assurance and compliance requirements. The SDL helps
developers build more secure software by reducing the number and severity of
vulnerabilities in software, while reducing development cost. Learn more at Microsoft
Security Development Lifecycle Practices .

Power BI architecture
The Power BI service is built on Azure, Microsoft's cloud computing platform . Power BI
is currently deployed in many datacenters around the world – there are many active
deployments made available to customers in the regions served by those datacenters,
and an equal number of passive deployments that serve as backups for each active
deployment.
Web front-end cluster (WFE)
The WFE cluster provides the user's browser with the initial HTML page contents on site
load, and pointers to CDN content used to render the site in the browser.

A WFE cluster consists of an ASP.NET website running in the Azure App Service
Environment. When users attempt to connect to the Power BI service, the client's DNS
service may communicate with the Azure Traffic Manager to find the most appropriate
(usually nearest) datacenter with a Power BI deployment. For more information about
this process, see Performance traffic-routing method for Azure Traffic Manager.

Static resources such as *.js, *.css, and image files are mostly stored on an Azure Content
Delivery Network (CDN) and retrieved directly by the browser. Note that Sovereign
Government cluster deployments are an exception to this rule, and for compliance
reasons will omit the CDN and instead use a WFE cluster from a compliant region for
hosting static content.

Power BI back-end cluster (BE)


The back-end cluster is the backbone of all the functionality available in Power BI. It
consists of several service endpoints consumed by Web Front End and API clients as well
as background working services, databases, caches, and various other components.

The back end is available in most Azure regions, and is being deployed in new regions
as they become available. A single Azure region hosts one or more back-end clusters
that allow unlimited horizontal scaling of the Power BI service once the vertical and
horizontal scaling limits of a single cluster are exhausted.

Each back-end cluster is stateful and hosts all the data of all the tenants assigned to that
cluster. A cluster that contains the data of a specific tenant is referred to as the tenant's
home cluster. An authenticated user's home cluster information is provided by Global
Service and used by the Web Front End to route requests to the tenant's home cluster.

Each back-end cluster consists of multiple virtual machines combined into multiple
resizable-scale sets tuned for performing specific tasks, stateful resources such as SQL
databases, storage accounts, service buses, caches, and other necessary cloud
components.

Tenant metadata and data are stored within cluster limits except for data replication to a
secondary back-end cluster in a paired Azure region in the same Azure geography. The
secondary back-end cluster serves as a failover cluster in case of regional outage, and is
passive at any other time.

Back-end functionality is served by micro-services running on different machines within the cluster's virtual network that aren't accessible from the outside, except for two components that can be accessed from the public internet:

Gateway Service
Azure API Management
Power BI Premium infrastructure
Power BI Premium offers a service for subscribers who require premium Power BI
features, such as Dataflows, Paginated Reports, AI, etc. When a customer signs up for a
Power BI Premium subscription, the Premium capacity is created through the Azure
Resource Manager.

Power BI Premium capacities are hosted in back-end clusters that are independent of the regular Power BI back end (see above). This provides better isolation, resource allocation, supportability, security isolation, and scalability of the Premium offering.

The following diagram illustrates the architecture of the Power BI Premium infrastructure:
The connection to the Power BI Premium infrastructure can be done in many ways,
depending on the user scenario. Power BI Premium clients can be a user's browser, a
regular Power BI back end, direct connections via XMLA clients, ARM APIs, etc.

The Power BI Premium infrastructure in an Azure region consists of multiple Power BI Premium clusters (the minimum is one). Most Premium resources are encapsulated
inside a cluster (for instance, compute), and there are some common regional resources
(for example, metadata storage). Premium infrastructure allows two ways of achieving
horizontal scalability in a region: increasing resources inside clusters and/or adding
more clusters on demand as needed (if cluster resources are approaching their limits).

The backbone of each cluster is compute resources managed by Virtual Machine Scale Sets and Azure Service Fabric. Virtual Machine Scale Sets and Service Fabric allow compute nodes to be added quickly and painlessly as usage grows, and orchestrate the deployment, management, and monitoring of Power BI Premium services and applications.

There are many surrounding resources that ensure a secure and reliable infrastructure:
load balancers, virtual networks, network security groups, service bus, storage, etc. Any
secrets, keys, and certificates required for Power BI Premium are managed by Azure Key
Vault exclusively. Any authentication is done via integration with Azure AD exclusively.

Any request that comes to Power BI Premium infrastructure goes to front-end nodes
first – they're the only nodes available for external connections. The rest of the resources
are hidden behind virtual networks. The front-end nodes authenticate the request,
handle it, or forward it to the appropriate resources (for example, back-end nodes).

Back-end nodes provide most of the Power BI Premium capabilities and features.

Power BI Mobile
Power BI Mobile is a collection of apps designed for the three primary mobile platforms:
Android, iOS, and Windows (UWP). Security considerations for the Power BI Mobile apps
fall into two categories:

Device communication
The application and data on the device

For device communication, all Power BI Mobile applications communicate with the
Power BI service, and use the same connection and authentication sequences used by
browsers, which are described in detail earlier in this white paper. The Power BI mobile
applications for iOS and Android bring up a browser session within the application itself,
while the Windows mobile app brings up a broker to establish the communication
channel with Power BI (for the sign-in process).

The following table shows certificate-based authentication (CBA) support for Power BI
Mobile, based on the mobile device platform:

CBA support                                        iOS              Android          Windows
Power BI (sign in to service)                      Supported        Supported        Not supported
SSRS ADFS on-premises (connect to SSRS server)     Not supported    Supported        Not supported
SSRS App Proxy                                     Supported        Supported        Not supported

Power BI Mobile apps actively communicate with the Power BI service. Telemetry is used
to gather mobile app usage statistics and similar data, which is transmitted to services
that are used to monitor usage and activity; no customer data is sent with telemetry.

The Power BI application stores data on the device that facilitates use of the app:

Azure AD and refresh tokens are stored in a secure mechanism on the device,
using industry-standard security measures.
Data and settings (key-value pairs for user configuration) are cached in storage on
the device, and can be encrypted by the OS. In iOS this happens automatically when
the user sets a passcode. In Android this can be configured in the settings. In
Windows it's accomplished by using BitLocker.
For the Android and iOS apps, the data and settings (key-value pairs for user
configuration) are cached in storage on the device in a sandbox and internal
storage that is accessible only to the app. For the Windows app, the data is only
accessible by the user (and system admin).
Geolocation is enabled or disabled explicitly by the user. If enabled, geolocation
data isn't saved on the device and isn't shared with Microsoft.
Notifications are enabled or disabled explicitly by the user. If enabled, Android and
iOS don't support geographic data residency requirements for notifications.

Data encryption can be enhanced by applying file-level encryption via Microsoft Intune,
a software service that provides mobile device and application management. All three
platforms for which Power BI Mobile is available support Intune. With Intune enabled
and configured, data on the mobile device is encrypted, and the Power BI application
itself can't be installed on an SD card. Learn more about Microsoft Intune .

The Windows app also supports Windows Information Protection (WIP).

To implement SSO, some secured storage values related to token-based authentication are available to other Microsoft first-party apps (such as Microsoft Authenticator) and are managed by the Azure Active Directory Authentication Library (ADAL) SDK.

Power BI Mobile cached data is deleted when the app is removed, when the user signs
out of Power BI Mobile, or when the user fails to sign in (such as after a token expiration
event or password change). The data cache includes dashboards and reports previously
accessed from the Power BI Mobile app.
Power BI Mobile doesn't access other application folders or files on the device.

The Power BI apps for iOS and Android let you protect your data by configuring
additional identification, such as providing Face ID, Touch ID, or a passcode for iOS, and
biometric data (Fingerprint ID) for Android. Learn more about additional identification.

Authentication to the Power BI service


User authentication to the Power BI service consists of a series of requests, responses,
and redirects between the user's browser and the Power BI service or the Azure services
used by Power BI. The sequence below describes the process of user authentication in Power BI, which follows Azure Active Directory's auth code grant flow. For more
information about options for an organization's user authentication models (sign-in
models), see Choosing a sign-in model for Microsoft 365 .

Authentication sequence
The user authentication sequence for the Power BI service occurs as described in the
following steps, which are illustrated in the image that follows them.

1. A user initiates a connection to the Power BI service from a browser, either by typing in the Power BI address in the address bar or by selecting Sign in from the Power BI marketing page (https://powerbi.microsoft.com). The connection is established using TLS and HTTPS, and all subsequent communication between the browser and the Power BI service uses HTTPS.

2. The Azure Traffic Manager checks the user's DNS record to determine the most
appropriate (usually nearest) datacenter where Power BI is deployed, and responds
to the DNS with the IP address of the WFE cluster to which the user should be sent.

3. WFE then returns an HTML page to the browser client, which contains an MSAL.js
library reference necessary to initiate the sign-in flow.

4. The browser client loads the HTML page received from the WFE, and redirects the
user to the Microsoft Online Services sign-in page.

5. After the user has been authenticated, the sign-in page redirects the user back to
the Power BI WFE page with an auth code.

6. The browser client loads the HTML page, and uses the auth code to request tokens
(access, ID, refresh) from the Azure AD service.
7. The user's tenant ID is used by the browser client to query the Power BI Global
Service, which maintains a list of tenants and their Power BI back-end cluster
locations. The Power BI Global Service determines which Power BI back-end service
cluster contains the user's tenant, and returns the Power BI back-end cluster URL
back down to the client.

8. The client is now able to communicate with the Power BI back-end cluster URL API,
using the access token in the Authorization header for the HTTP requests. The
Azure AD access token will have an expiry date set according to Azure AD policies,
and to maintain the current session the Power BI Client in the user's browser will
make periodic requests to renew the access token before it expires.

In the rare cases where client-side authentication fails due to an unexpected error, the
code attempts to fall back to using server-side authentication in the WFE. Refer to the
questions and answers section at the end of this document for details about the server-
side authentication flow.
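
To make steps 6 through 8 concrete, the following minimal sketch acquires an Azure AD access token for the Power BI service with the MSAL library for Python and calls a Power BI REST API with the token in the Authorization header. The app registration IDs and the scope shown are placeholders and assumptions; this is an illustration, not part of the service's own code.

```python
# Minimal sketch (not part of the Power BI service itself): sign a user in with MSAL,
# then call a Power BI REST API with the resulting access token as a Bearer token.
# The client ID, tenant ID, and scope are placeholder assumptions for your own app registration.
import msal
import requests

app = msal.PublicClientApplication(
    client_id="<your-app-registration-client-id>",
    authority="https://login.microsoftonline.com/<your-tenant-id>",
)

# Opens a browser and runs the Azure AD authorization code flow, mirroring the
# browser sequence described above.
result = app.acquire_token_interactive(
    scopes=["https://analysis.windows.net/powerbi/api/Report.Read.All"]
)

# The access token goes in the Authorization header; renew it before it expires,
# just as the Power BI client in the browser does.
response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/reports",
    headers={"Authorization": f"Bearer {result['access_token']}"},
)
response.raise_for_status()
print(response.json())
```
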
Data residency
Unless otherwise indicated in the documentation, Power BI stores customer data in an
Azure geography that is assigned when an Azure AD tenant signs up for Power BI
services for the first time. An Azure AD tenant houses the user and application identities,
groups, and other relevant information that pertain to an organization and its security.

The assignment of an Azure geography for tenant data storage is done by mapping the
country or region selected as part of the Azure AD tenant setup to the most suitable
Azure geography where a Power BI deployment exists. Once this determination is made,
all Power BI customer data will be stored in this selected Azure geography (also known
as the home geo), except in cases where organizations utilize multi-geo deployments.

Multiple geographies (multi-geo)


Some organizations have a global presence and may require Power BI services in
multiple Azure geographies. For example, a business may have their headquarters in the
United States but may also do business in other geographical areas, such as Australia. In
such cases the business may require that certain Power BI data remain stored at rest in
the remote region to comply with local regulations. This feature of the Power BI service
is referred to as multi-geo.

The query execution layer, query caches, and artifact data assigned to a multi-geo
workspace are hosted and remain in the remote capacity Azure geography. However,
some artifact metadata, such as report structure, may remain stored at rest in the
tenant's home geo. Additionally, some data transit and processing may still happen in
the tenant's home geo, even for workspaces that are hosted in a multi-geo Premium
capacity.

See Configure Multi-Geo support for Power BI Premium for more information about
creating and managing Power BI deployments that span multiple Azure geographies.

Regions and datacenters


Power BI services are available in specific Azure geographies as described in the
Microsoft Trust Center . For more information about where your data is stored and
how it's used, refer to the Microsoft Trust Center . Commitments concerning the
location of customer data at rest are specified in the Data Processing Terms of the
Microsoft Online Services Terms .

Microsoft also provides datacenters for sovereign entities. For more information about
Power BI service availability for national/regional clouds, see Power BI national/regional
clouds .

Data handling
This section outlines Power BI data handling practices when it comes to storing,
processing, and transferring customer data.

Data at rest
Power BI uses two primary data storage resource types:

Azure Storage
Azure SQL Databases

In most scenarios, Azure Storage is utilized to persist the data of Power BI artifacts, while
Azure SQL Databases are used to persist artifact metadata.

All data persisted by Power BI is encrypted by default using Microsoft-managed keys.


Customer data stored in Azure SQL Databases is fully encrypted using Azure SQL's
Transparent Data Encryption (TDE) technology. Customer data stored in Azure Blob
storage is encrypted using Azure Storage Encryption.

Optionally, organizations can utilize Power BI Premium to use their own keys to encrypt
data at rest that is imported into a dataset. This approach is often described as bring
your own key (BYOK). Utilizing BYOK helps ensure that even in case of a service operator
error, customer data won't be exposed – something that can't easily be achieved using
transparent service-side encryption. See Bring your own encryption keys for Power BI for
more information.

Power BI datasets allow for various data source connection modes that determine
whether the data source data is persisted in the service or not.

Dataset Mode (Kind)    Data Persisted in Power BI
Import                 Yes
DirectQuery            No
Live Connect           No
Composite              If it contains an Import data source
Streaming              If configured to persist


Regardless of the dataset mode utilized, Power BI may temporarily cache any retrieved
data to optimize query and report load performance.

Data in processing
Data is in processing when it's either actively being used by one or more users as part of
an interactive scenario, or when a background process, such as refresh, touches this
data. Power BI loads actively processed data into the memory space of one or more
service workloads. To facilitate the functionality required by the workload, the processed
data in memory isn't encrypted.

Data in transit
Power BI requires all incoming HTTP traffic to be encrypted using TLS 1.2 or above. Any
requests attempting to use the service with TLS 1.1 or lower will be rejected.
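
As an illustration of this requirement from the client side, the hedged sketch below pins a Python HTTP session to TLS 1.2 or later before calling the Power BI REST API; the access token is assumed to come from an earlier Azure AD sign-in.

```python
# Sketch: pin a client session to TLS 1.2+ when calling the Power BI REST API.
# The access token placeholder is assumed to come from an Azure AD sign-in.
import ssl
import requests
from requests.adapters import HTTPAdapter

access_token = "<azure-ad-access-token>"

class MinTls12Adapter(HTTPAdapter):
    """Transport adapter that refuses to negotiate anything older than TLS 1.2."""
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", MinTls12Adapter())
reports = session.get(
    "https://api.powerbi.com/v1.0/myorg/reports",
    headers={"Authorization": f"Bearer {access_token}"},
)
```
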

Authentication to data sources


When connecting to a data source, a user can choose to import a copy of the data into
Power BI or to connect directly to the data source.

In the case of import, a user establishes a connection based on the user's sign in and
accesses the data with the credential. After the dataset is published to the Power BI
service, Power BI always uses this user's credential to import data. Once data is
imported, viewing the data in reports and dashboards doesn't access the underlying
data source. Power BI supports single sign-on authentication for selected data sources.
If the connection is configured to use single sign-on, the dataset owner's credentials are
used to connect to the data source.

If a data source is connected directly using preconfigured credentials, the preconfigured credentials are used to connect to the data source when any user views the data. If a
data source is connected directly using single sign-on, the current user's credentials are
used to connect to the data source when a user views the data. When used with single
sign-on, Row Level Security (RLS) and/or object-level security (OLS) can be implemented
on the data source. This allows users to view only data they have privileges to access.
When the connection is to data sources in the cloud, Azure AD authentication is used
for single sign-on; for on-premises data sources, Kerberos, Security Assertion Markup
Language (SAML), and Azure AD are supported.

If the data source is Azure Analysis Services or on-premises Analysis Services, and RLS
and/or OLS is configured, the Power BI service will apply that row level security, and
users who don't have sufficient credentials to access the underlying data (which could
be a query used in a dashboard, report, or other data artifact) won't see data they don't
have sufficient privileges for.

Premium features

Dataflows architecture
Dataflows provide users the ability to configure back-end data processing operations
that will extract data from polymorphous data sources, execute transformation logic
against the data, and then land it in a target model for use across various reporting
presentation technologies. Any user who has either a member, contributor, or admin
role in a workspace may create a dataflow. Users in the viewer role may view data
processed by the dataflow but may not make changes to its composition. Once a
dataflow has been authored, any member, contributor, or admin of the workspace may
schedule refreshes, as well as view and edit the dataflow by taking ownership of it.

Each configured data source is bound to a client technology for accessing that data
source. The structure of credentials required to access them is formed to match required
implementation details of the data source. Transformation logic is applied by Power
Query services while the data is in flight. For premium dataflows, Power Query services
execute in back-end nodes. Data may be pulled directly from the cloud sources or
through a gateway installed on premises. When pulled directly from a cloud source to
the service or to the gateway, the transport uses protection methodology specific to the
client technology, if applicable. When data is transferred from the gateway to the cloud
service, it is encrypted. See the Data in Transit section above.

When customer specified data sources require credentials for access, the owner/creator
of the dataflow will provide them during authoring. They're stored using standard
product-wide credential storage. See the Authentication to Data Sources section above.
There are various approaches users may configure to optimize data persistence and
access. By default, the data is placed in a Power BI owned and protected storage
account. Storage encryption is enabled on the Blob storage containers to protect the
data while it is at rest. See the Data at Rest section above. Users may, however, configure
their own storage account associated with their own Azure subscription. When doing so,
a Power BI service principal is granted access to that storage account so that it may
write the data there during refresh. In this case, the storage resource owner is
responsible for configuring encryption on the configured ADLS storage account. Data is
always transmitted to Blob storage using encryption.
Since performance when accessing storage accounts may be suboptimal for some data,
users also have the option to use a Power BI-hosted compute engine to increase
performance. In this case, data is redundantly stored in an SQL database that is available
for DirectQuery through access by the back-end Power BI system. Data is always
encrypted on the file system. If the user provides a key for encrypting the data stored in
the SQL database, that key will be used to doubly encrypt it.

When querying using DirectQuery, the encrypted transport protocol HTTPS is used to
access the API. All secondary or indirect use of DirectQuery is controlled by the same
access controls previously described. Since dataflows are always bound to a workspace,
access to the data is always gated by the user's role in that workspace. A user must have
at least read access to be able to query the data via any means.

When Power BI Desktop is used to access data in a dataflow, it must first authenticate
the user using Azure AD to determine if the user has sufficient rights to view the data. If
so, a SAS (shared access signature) key is acquired and used to access storage directly using the encrypted
transport protocol HTTPS.
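
Conceptually (and not as Power BI Desktop's actual implementation), reading a blob over HTTPS with a SAS token looks like the hedged sketch below, which assumes the azure-storage-blob package and uses a placeholder SAS URL.

```python
# Conceptual sketch (not Power BI Desktop's actual code): read a blob over HTTPS
# using a SAS URL. The URL below is a placeholder.
from azure.storage.blob import BlobClient

sas_url = "https://<storage-account>.blob.core.windows.net/<container>/<blob>?<sas-token>"
blob_client = BlobClient.from_blob_url(sas_url)

# The SAS token scopes and time-limits access; transport is HTTPS end to end.
data = blob_client.download_blob().readall()
```
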

The processing of data throughout the pipeline emits Office 365 auditing events. Some
of these events will capture security and privacy-related operations.

Paginated reports
Paginated reports are designed to be printed or shared. They're called paginated
because they're formatted to fit well on a page. They display all the data in a table, even
if the table spans multiple pages. You can control their report page layout exactly.

Paginated reports support rich and powerful expressions written in Microsoft Visual
Basic .NET. Expressions are widely used throughout Power BI Report Builder paginated
reports to retrieve, calculate, display, group, sort, filter, parameterize, and format data.

Expressions are created by the author of the report with access to the broad range of
features of the .NET framework. The processing and execution of paginated reports is
performed inside a sandbox.

Paginated report definitions (.rdl) are stored in Power BI, and to publish and/or render a
paginated report a user needs to authenticate and authorize in the same way as
described in the Authentication to the Power BI Service section above.

The Azure AD token obtained during the authentication is used to communicate directly
from the browser to the Power BI Premium cluster.
In Power BI Premium, the Power BI service runtime provides an appropriately isolated
execution environment for each report render. This includes cases where the reports
being rendered belong to workspaces assigned to the same capacity.

A paginated report can access a wide set of data sources as part of the rendering of the
report. The sandbox doesn't communicate directly with any of the data sources but
instead communicates with the trusted process to request data, and then the trusted
process appends the required credentials to the connection. In this way, the sandbox
never has access to any credential or secret.

In order to support features such as Bing maps, or calls to Azure Functions, the sandbox
does have access to the internet.

Power BI embedded analytics


Independent Software Vendors (ISVs) and solution providers have two main modes of
embedding Power BI artifacts in their web applications and portals: embed for your
organization and embed for your customers. The artifact is embedded into an IFrame in
the application or portal. The IFrame isn't allowed to read or write data from the external web application or portal, and communication with the IFrame is handled by the Power BI Client SDK using POST messages.

In an embed for your organization scenario, Azure AD users access their own Power BI
content through portals customized by their enterprises and ITs. All Power BI policies
and capabilities described in this paper such as Row Level Security (RLS) and object-level
security (OLS) are automatically applied to all users independently of whether they
access Power BI through the Power BI portal or through customized portals.

In an embed for your customers scenario, ISVs typically own Power BI tenants and Power
BI artifacts (dashboards, reports, datasets etc.). It's the responsibility of an ISV back-end
service to authenticate its end users and decide which artifacts and which access level is
appropriate for that end user. ISV policy decisions are encrypted in an embed token
generated by Power BI and passed to the ISV back-end for further distribution to the
end users according to the business logic of the ISV. End users using a browser or other
client applications aren't able to decrypt or modify embed tokens. Client-side SDKs such
as Power BI Client APIs automatically append the encrypted embed token to Power BI
requests as an Authorization: EmbedToken header. Based on this header, Power BI will
enforce all policies (such as access or RLS) precisely as specified by the ISV during token generation.

To enable embedding and automation, and to generate the embed tokens described
above, Power BI exposes a rich set of REST APIs. These Power BI REST APIs support both
user delegated and service principal Azure AD methods of authentication and
authorization.
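
As a hedged sketch of this flow, the code below uses a service principal (an app registration with a client secret) to acquire an Azure AD token and then asks Power BI to generate a view-only embed token for a single report. All IDs and the secret are placeholders, and the service principal must be permitted to use Power BI APIs in the tenant settings.

```python
# Sketch: an ISV back end generates an embed token for one report with a service principal.
# Tenant, client, workspace, and report IDs (and the secret) are placeholders.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"
WORKSPACE_ID = "<workspace-id>"
REPORT_ID = "<report-id>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)

# The ISV's business logic decides the access level before distributing the token.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/reports/{REPORT_ID}/GenerateToken",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json={"accessLevel": "View"},
)
resp.raise_for_status()
embed_token = resp.json()["token"]  # handed to the client SDK; clients can't decrypt or modify it
```
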

Power BI embedded analytics and its REST APIs support all Power BI network isolation
capabilities described in this article: For example, Service Tags and Private Links.

AI features
Power BI currently supports two broad categories of AI features: AI visuals and AI enrichments. The visual-level AI features include capabilities such as Key Influencers, Decomposition Tree, Smart Narrative, Anomaly Detection, R visuals, Python visuals, Clustering, Forecasting, Q&A, and Quick Insights. The AI enrichment capabilities include AutoML, Azure Machine Learning, CognitiveServices, and R/Python transforms.

Most of the features mentioned above are supported in both Shared and Premium
workspaces today. However, AutoML and CognitiveServices are supported only in
Premium workspaces, due to IP restrictions. Today, with the AutoML integration in
Power BI, a user can build and train a custom ML model (for example, Prediction,
Classification, Regression, etc.) and apply it to get predictions while loading data into a
dataflow defined in a Premium workspace. Additionally, Power BI users can apply several
CognitiveServices APIs, such as TextAnalytics and ImageTagging, to transform data
before loading it into a dataflow/dataset defined in a Premium workspace.

The Premium AI enrichment features can best be viewed as a collection of stateless AI functions/transforms that can be used by Power BI users in their data integration
pipelines used by a Power BI dataset or dataflow. Note that these functions can also be
accessed from current dataflow/dataset authoring environments in the Power BI Service
and Power BI Desktop. These AI functions/transforms always run in a Premium
workspace/capacity. These functions are surfaced in Power BI as a data source that
requires an Azure AD token for the Power BI user who is using the AI function. These AI
data sources are special because they don't surface any of their own data and they only
supply these functions/transforms. During execution, these features don't make any
outbound calls to other services to transmit the customer's data. Let us look at the
Premium scenarios individually to understand the communication patterns and relevant
security-related details pertaining to them.

For training and applying an AutoML model, Power BI uses the Azure AutoML SDK and
runs all the training in the customer's Power BI capacity. During training iterations,
Power BI calls an experimentation Azure Machine Learning service to select a suitable
model and hyper-parameters for the current iteration. In this outbound call, only
relevant experiment metadata (for example, accuracy, ml algorithm, algorithm
parameters, etc.) from the previous iteration is sent. The AutoML training produces an
ONNX model and training report data that is then saved in the dataflow. Later, Power BI
users can then apply the trained ML model as a transform to operationalize the ML
model on a scheduled basis. For TextAnalytics and ImageTagging APIs, Power BI doesn't
directly call the CognitiveServices service APIs, but rather uses an internal SDK to run the
APIs in the Power BI Premium capacity. Today these APIs are supported in both Power BI
dataflows and datasets. While authoring a dataset in Power BI Desktop, users can only
access this functionality if they have access to a Premium Power BI workspace. Hence
customers are prompted to supply their Azure AD credentials.

Network isolation
This section outlines advanced security features in Power BI. Some of the features have
specific licensing requirements. See the sections below for details.

Service tags
A service tag represents a group of IP address prefixes from a given Azure service. It
helps minimize the complexity of frequent updates to network security rules. Customers
can use service tags to define network access controls on Network Security Groups or
Azure Firewall. Customers can use service tags in place of specific IP addresses when
creating security rules. By specifying the service tag name (such as PowerBI) in the
appropriate source or destination (for APIs) field of a rule, customers can allow or deny
the traffic for the corresponding service. Microsoft manages the address prefixes
encompassed by the service tag and automatically updates the service tag as addresses
change.
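
For example, the hedged sketch below adds an outbound rule that targets the PowerBI service tag to a network security group, assuming a recent version of the azure-mgmt-network package. All resource names and the subscription ID are placeholders, and the rule values should be adapted to your own network policy.

```python
# Sketch: allow outbound traffic to the PowerBI service tag on an existing NSG.
# Assumes the azure-identity and azure-mgmt-network packages; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = client.security_rules.begin_create_or_update(
    "<resource-group>",
    "<nsg-name>",
    "Allow-PowerBI-Outbound",
    SecurityRule(
        protocol="Tcp",
        direction="Outbound",
        access="Allow",
        priority=200,
        source_address_prefix="VirtualNetwork",
        source_port_range="*",
        destination_address_prefix="PowerBI",  # the service tag name
        destination_port_range="443",
    ),
).result()
print(rule.provisioning_state)
```
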

Private Link integration


Azure networking provides the Azure Private Link feature that enables Power BI to
provide secure access via Azure Networking private endpoints. With Azure Private Link
and private endpoints, data traffic is sent privately using Microsoft's backbone network
infrastructure, and thus the data doesn't traverse the Internet.

Private Link ensures that Power BI users use the Microsoft private network backbone
when going to resources in the Power BI service.

Using Private Link with Power BI provides the following benefits:

Private Link ensures that traffic will flow over the Azure backbone to a private
endpoint for Azure cloud-based resources.
Network traffic isolation from non-Azure-based infrastructure, such as on-premises
access, would require customers to have ExpressRoute or a Virtual Private Network
(VPN) configured.

See Private links for accessing Power BI for additional information.

VNet connectivity (preview - coming soon)


While the Private Link integration feature provides secure inbound connections to Power
BI, the VNet connectivity feature enables secure outbound connectivity from Power BI to
data sources within a VNet.

VNet gateways (Microsoft-managed) will eliminate the overhead of installing and monitoring on-premises data gateways for connecting to data sources associated with a
VNet. They will, however, still follow the familiar process of managing security and data
sources, as with an on-premises data gateway.

The following is an overview of what happens when you interact with a Power BI report
that is connected to a data source within a VNet using VNet gateways:

1. The Power BI cloud service (or one of the other supported cloud services) kicks off
a query and sends the query, data source details, and credentials to the Power
Platform VNet service (PP VNet).

2. The PP VNet service then securely injects a container running a VNet gateway into
the subnet. This container can now connect to data services accessible from within
this subnet.

3. The PP VNet service then sends the query, data source details, and credentials to
the VNet gateway.

4. The VNet gateway gets the query and connects to the data sources with those
credentials.

5. The query is then sent to the data source for execution.

6. After execution, the results are sent to the VNet gateway, and the PP VNet service
securely pushes the data from the container to the Power BI cloud service.

This feature will be available in public preview soon.

Service principals
Power BI supports the use of service principals. Store any service principal credentials
used for encrypting or accessing Power BI in a Key Vault, assign proper access policies to
the vault, and regularly review access permissions.

See Automate Premium workspace and dataset tasks with service principals for
additional details.
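
For instance, a hedged sketch of reading a service principal secret from Azure Key Vault at run time is shown below; the vault URL and secret name are placeholders.

```python
# Sketch: fetch a service principal's client secret from Azure Key Vault at run time
# instead of keeping it in configuration files. Vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

secret_client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Access is governed by the vault's access policies or RBAC assignments.
client_secret = secret_client.get_secret("powerbi-sp-client-secret").value
```
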

Microsoft Purview for Power BI

Microsoft Purview Information Protection


Power BI is deeply integrated with Microsoft Purview Information Protection. Microsoft
Purview Information Protection enables organizations to have a single, integrated
solution for classification, labeling, auditing, and compliance across Azure, Power BI, and
Office.

When information protection is enabled in Power BI:

Sensitive data, both in the Power BI service and in Power BI Desktop, can be
classified and labeled using the same sensitivity labels used in Office and in Azure.
Governance policies can be enforced when Power BI content is exported to Excel,
PowerPoint, PDF, Word, or .pbix files, to help ensure that data is protected even
when it leaves Power BI.
It's easy to classify and protect .pbix files in Power BI Desktop, just like it's done in
Excel, Word, and PowerPoint applications. Files can be easily tagged according to
their level of sensitivity. Even further, they can be encrypted if they contain
business-confidential data, ensuring that only authorized users can edit these files.
Excel workbooks automatically inherit sensitivity labels when they connect to
Power BI (preview), making it possible to maintain end-to-end classification and
apply protection when Power BI datasets are analyzed in Excel.
Sensitivity labels applied to Power BI reports and dashboards are visible in the
Power BI iOS and Android mobile apps.
Sensitivity labels persist when a Power BI report is embedded in Teams, SharePoint,
or a secure website. This helps organizations maintain classification and protection
upon export when embedding Power BI content.
Label inheritance upon the creation of new content in the Power BI service ensures
that labels applied to datasets or datamarts in the Power BI service will be applied
to new content created on top of those datasets and datamarts.
Power BI admin scan APIs can extract a Power BI item's sensitivity label, enabling
Power BI and InfoSec admins to monitor labeling in the Power BI service and
produce executive reports.
Power BI admin APIs enable central teams to programmatically apply sensitivity
labels to content in the Power BI service.
Central teams can create mandatory label policies to enforce applying labels on
new or edited content in Power BI.
Central teams can create default label policies to ensure that a sensitivity label is
applied to all new or changed Power BI content.
Automatic downstream sensitivity labeling in the Power BI service ensures that
when a label on a dataset or datamart is applied or changed, the label will
automatically be applied or changed on all downstream content connected to the
dataset or datamart.

For more information, see Sensitivity labels in Power BI.
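
As an illustration of the admin scan APIs mentioned in the list above, the hedged sketch below triggers a scan, polls for completion, and prints the sensitivity label reported for each item. The token variable is a placeholder for an Azure AD token that is authorized for the Power BI admin APIs, and the exact shape of the scan result may vary.

```python
# Sketch: use the admin scan (scanner) APIs to read item metadata, including sensitivity
# labels where applied. `admin_token` and the workspace IDs are placeholders.
import time
import requests

ADMIN_BASE = "https://api.powerbi.com/v1.0/myorg/admin"
admin_token = "<azure-ad-token-authorized-for-admin-apis>"
headers = {"Authorization": f"Bearer {admin_token}"}

# 1. Trigger a scan for the workspaces of interest.
scan = requests.post(
    f"{ADMIN_BASE}/workspaces/getInfo",
    headers=headers,
    json={"workspaces": ["<workspace-id-1>", "<workspace-id-2>"]},
).json()

# 2. Poll until the scan completes.
while requests.get(f"{ADMIN_BASE}/workspaces/scanStatus/{scan['id']}", headers=headers).json()["status"] != "Succeeded":
    time.sleep(5)

# 3. Read the result; label information (where present) is attached per item.
result = requests.get(f"{ADMIN_BASE}/workspaces/scanResult/{scan['id']}", headers=headers).json()
for workspace in result.get("workspaces", []):
    for report in workspace.get("reports", []):
        print(report.get("name"), report.get("sensitivityLabel"))
```
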

Microsoft Purview Data Loss Prevention (DLP) Policies for Power BI (preview)
Microsoft Purview's DLP policies can help organizations reduce the risk of sensitive
business data leakage from Power BI. DLP policies can help them meet compliance
requirements of government or industry regulations, such as GDPR (the European
Union's General Data Protection Regulation) or CCPA (the California Consumer Privacy
Act) and make sure their data in Power BI is managed.

When DLP policies for Power BI are set up:

All datasets within the workspaces specified in the policy are evaluated by the
policy.
You can detect when sensitive data is uploaded into your Premium capacities. DLP
policies can detect:
Sensitivity labels.
Sensitive info types. Over 260 types are supported. Sensitive info type detection
relies on Microsoft Purview content scanning.
When you encounter a dataset identified as sensitive, you can see a customized
policy tip that helps you understand what you should do.
If you are a dataset owner, you can override a policy and prevent your dataset
from being identified as "sensitive" if you have a valid reason for doing so.
If you are a dataset owner, you can report an issue with a policy if you conclude
that a sensitive info type has been falsely identified.
Automatic risk mitigations, such as alerts to the security admin, can be invoked.

For more information, see Data loss prevention policies for Power BI.
Microsoft Defender for Cloud Apps for Power BI
Microsoft Defender for Cloud Apps is one of the world's leading cloud access security
brokers, named as a leader in Gartner's Magic Quadrant for the cloud access security
broker (CASB) market. Defender for Cloud Apps is used to secure the use of cloud apps.
It enables organizations to monitor and control, in real time, risky Power BI sessions
such as user access from unmanaged devices. Security administrators can define policies
to control user actions, such as downloading reports with sensitive information.

With Defender for Cloud Apps, organizations can gain the following DLP capabilities:

Set real-time controls to govern risky user sessions in Power BI. For example, if a
user connects to Power BI from outside of their country or region, the session can
be monitored by the Defender for Cloud Apps real-time controls, and risky actions,
such as downloading data tagged with a "Highly Confidential" sensitivity label, can
be blocked immediately.
Investigate Power BI user activity with the Defender for Cloud Apps activity log.
The Defender for Cloud Apps activity log includes Power BI activity as captured in
the Office 365 audit log, which contains information about all user and admin
activities, as well as sensitivity label information for relevant activities such as apply,
change, and remove label. Admins can leverage the Defender for Cloud Apps
advanced filters and quick actions for effective issue investigation.
Create custom policies to alert on suspicious user activity in Power BI. The
Defender for Cloud Apps activity policy feature can be leveraged to define your
own custom rules, to help you detect user behavior that deviates from the norm,
and even possibly act upon it automatically, if it seems too dangerous.
Work with the Defender for Cloud Apps built-in anomaly detection. The Defender
for Cloud Apps anomaly detection policies provide out-of-the-box user behavioral
analytics and machine learning so that you're ready from the outset to run
advanced threat detection across your cloud environment. When an anomaly
detection policy identifies a suspicious behavior, it triggers a security alert.
Power BI admin role in the Defender for Cloud Apps portal. Defender for Cloud
Apps provides an app-specific admin role that can be used to grant Power BI
admins only the permissions they need to access Power BI-relevant data in the
portal, such as alerts, users at risk, activity logs, and other Power BI-related
information.

See Using Microsoft Defender for Cloud Apps Controls in Power BI for additional details.
Preview security features
This section lists features that are planned to release through March 2021. Because this topic lists features that may not have released yet, delivery timelines may change and projected functionality may be released later than March 2021, or may not be released at all. For more information about previews, review the Online Services Terms.

Bring Your Own Log Analytics (BYOLA)


Bring Your Own Log Analytics enables integration between Power BI and Azure Log
Analytics. This integration includes Azure Log Analytics' advanced analytic engine,
interactive query language, and built-in machine learning constructs.

Power BI security questions and answers


The following questions are common security questions and answers for Power BI. These
are organized based on when they were added to this white paper, to facilitate your
ability to quickly find new questions and answers when this paper is updated. The
newest questions are added to the end of this list.

How do users connect to, and gain access to data sources while using Power BI?

Power BI manages credentials to data sources for each user, whether they're cloud credentials or credentials for connectivity through a personal gateway. Data sources managed by an on-
premises data gateway can be shared across the enterprise and permissions to
these data sources can be managed by the Gateway Admin. When configuring a
dataset, the user is allowed to select a credential from their personal store or use
an on-premises data gateway to use a shared credential.

In the import case, a user establishes a connection based on the user's sign in and
accesses the data with the credential. After the dataset is published to Power BI
service, Power BI always uses this user's credential to import data. Once data is
imported, viewing the data in reports and dashboard doesn't access the underlying
data source. Power BI supports single sign-on authentication for selected data
sources. If the connection is configured to use single sign-on, the dataset owner's
credential is used to connect with the data source.

For reports that are connected with DirectQuery, if the data source is connected directly using a preconfigured credential, that preconfigured credential is used to connect to the data source when any user views the data. If a data source is connected directly using single sign-on, the current user's credential is used to connect to the data source when the user views the data. When used with single sign-on, Row Level Security (RLS) and/or object-level security (OLS) can be implemented on the data source, which allows users to view only the data they have
privileges to access. When the connection is to data sources in the cloud, Azure AD
authentication is used for single sign-on; for on-premises data sources, Kerberos,
SAML, and Azure AD are supported.

When connecting with Kerberos, the user's UPN is passed to the gateway, and
using Kerberos constrained delegation, the user is impersonated and connected to
the respective data sources. SAML is also supported on the Gateway for SAP HANA
datasource. More information is available in overview of single sign-on for
gateways.

If the data source is Azure Analysis Services or on-premises Analysis Services and
Row Level Security (RLS) and/or object-level security (OLS) is configured, the Power
BI service will apply that row level security, and users who don't have sufficient
credentials to access the underlying data (which could be a query used in a
dashboard, report, or other data artifact) won't see data for which the user doesn't
have sufficient privileges.

Row Level security with Power BI can be used to restrict data access for given users.
Filters restrict data access at the row level, and you can define filters within role.

Object-level security (OLS) can be used to secure sensitive tables or columns.


However, unlike row-level security, object-level security also secures object names
and metadata. This helps prevent malicious users from discovering even the
existence of such objects. Secured tables and columns are obscured in the field list
when using reporting tools like Excel or Power BI, and moreover, users without
permissions can't access secured metadata objects via DAX or any other method.
From the standpoint of users without proper access permissions, secured tables
and columns simply don't exist.

Object-level security, together with row-level security, enables enhanced enterprise-grade security on reports and datasets, ensuring that only users with the requisite
permissions have access to view and interact with sensitive data.

How is data transferred to Power BI?

All data requested and transmitted by Power BI is encrypted in transit using HTTPS
(except when the data source chosen by the customer doesn't support HTTPS) to
connect from the data source to the Power BI service. A secure connection is
established with the data provider, and only once that connection is established
will data traverse the network.
How does Power BI cache report, dashboard, or model data, and is it secure?

When a data source is accessed, the Power BI service follows the process outlined
in the Authentication to Data Sources section earlier in this document.

Do clients cache web page data locally?

When browser clients access Power BI, the Power BI web servers set the Cache-
Control directive to no-store. The no-store directive instructs browsers not to cache
the web page being viewed by the user, and not to store the web page in the
client's cache folder.

What about role-based security, sharing reports or dashboards, and data connections? How does that work in terms of data access, dashboard viewing, report access, or refresh?

For data sources that aren't enabled for Row Level Security (RLS), if a dashboard, report, or
data model is shared with other users through Power BI, the data is then available
for users with whom it's shared to view and interact with. Power BI does not
reauthenticate users against the original source of the data; once data is uploaded
into Power BI, the user who authenticated against the source data is responsible
for managing which other users and groups can view the data.

When data connections are made to an RLS-capable data source, such as an Analysis Services data source, only dashboard data is cached in Power BI. Each time
a report or dataset is viewed or accessed in Power BI that uses data from the RLS-
capable data source, the Power BI service accesses the data source to get data
based on the user's credentials, and if sufficient permissions exist, the data is
loaded into the report or data model for that user. If authentication fails, the user
will see an error.

For more information, see the Authentication to Data Sources section earlier in this
document.

Our users connect to the same data sources all the time, some of which require
credentials that differ from their domain credentials. How can they avoid having to
input these credentials each time they make a data connection?

Power BI offers the Power BI Personal Gateway, which is a feature that lets users
create credentials for multiple different data sources, then automatically use those
credentials when subsequently accessing each of those data sources. For more
information, see Power BI Personal Gateway.
Which ports are used by on-premises data gateway and personal gateway? Are there
any domain names that need to be allowed for connectivity purposes?

The detailed answer to this question is available at the following link: Gateway
ports

When working with the on-premises data gateway, how are recovery keys used and
where are they stored? What about secure credential management?

During gateway installation and configuration, the administrator types in a gateway Recovery Key. That Recovery Key is used to generate a strong AES symmetric key. An RSA asymmetric key is also created at the same time.

Those generated keys (RSA and AES) are stored in a file located on the local
machine. That file is also encrypted. The contents of the file can only be decrypted
by that particular Windows machine, and only by that particular gateway service
account.

When a user enters data source credentials in the Power BI service UI, the
credentials are encrypted with the public key in the browser. The gateway decrypts
the credentials using the RSA private key and re-encrypts them with an AES
symmetric key before the data is stored in the Power BI service. With this process,
the Power BI service never has access to the unencrypted data.
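
The sketch below is a conceptual illustration of that pattern only, not the gateway's actual implementation: a credential encrypted with an RSA public key is decrypted with the private key and then re-encrypted with an AES key, using the Python cryptography package and generated stand-in keys.

```python
# Conceptual sketch only (not the gateway's actual code): RSA-decrypt a credential blob,
# then re-encrypt it with an AES symmetric key, mirroring the pattern described above.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Stand-ins for the gateway's stored RSA and AES keys.
rsa_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aes_key = AESGCM.generate_key(bit_length=256)

# A credential encrypted "in the browser" with the RSA public key.
encrypted_credential = rsa_private_key.public_key().encrypt(b"data-source-password", oaep)

# "Gateway side": decrypt with the RSA private key, then re-encrypt with the AES key
# before storage, so the service never handles the unencrypted credential.
plaintext = rsa_private_key.decrypt(encrypted_credential, oaep)
nonce = os.urandom(12)
stored_blob = AESGCM(aes_key).encrypt(nonce, plaintext, None)
```
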

Which communication protocols are used by the on-premises data gateway, and how
are they secured?

The gateway supports the following two communications protocols:

AMQP 1.0 – TCP + TLS: This protocol requires ports 443, 5671-5672, and 9350-
9354 to be open for outgoing communication. This protocol is preferred, since
it has lower communication overhead.

HTTPS – WebSockets over HTTPS + TLS: This protocol uses port 443 only. The
WebSocket is initiated by a single HTTP CONNECT message. Once the channel
is established, the communication is essentially TCP+TLS. You can force the
gateway to use this protocol by modifying a setting described in the on-
premises gateway article.

What is the role of Azure CDN in Power BI?

As mentioned previously, Power BI uses the Azure Content Delivery Network (CDN)
to efficiently distribute the necessary static content and files to users based on
geographical locale. To go into further detail, the Power BI service uses multiple
CDNs to efficiently distribute necessary static content and files to users through
the public Internet. These static files include product downloads (such as Power BI
Desktop, the on-premises data gateway, or Power BI apps from various
independent service providers), browser configuration files used to initiate and
establish any subsequent connections with the Power BI service, as well as the
initial secure Power BI sign-in page.

Based on information provided during an initial connection to the Power BI service, a user's browser contacts the specified Azure CDN (or for some files, the WFE) to
download the collection of specified common files necessary to enable the
browser's interaction with the Power BI service. The browser page then includes
the Azure AD token, session information, the location of the associated back-end
cluster, and the collection of files downloaded from the Azure CDN and WFE
cluster, for the duration of the Power BI service browser session.

For Power BI visuals, does Microsoft perform any security or privacy assessment of
the custom visual code prior to publishing items to the Gallery?

No. It is the customer's responsibility to review and determine whether custom visual code should be relied upon. All custom visual code is operated in a sandbox
environment, so that any errant code in a custom visual doesn't adversely affect
the rest of the Power BI service.

Are there other Power BI visuals that send information outside the customer network?

Yes. Bing Maps and ESRI visuals transmit data out of the Power BI service for
visuals that use those services.

For template apps, does Microsoft perform any security or privacy assessment of the
template app prior to publishing items to the Gallery?

No. The app publisher is responsible for the content while it's the customer's
responsibility to review and determine whether to trust the template app
publisher.

Are there template apps that can send information outside the customer network?

Yes. It's the customer's responsibility to review the publisher's privacy policy and
determine whether to install the template app on the tenant. The publisher is
responsible for informing the customer about the app's behavior and capabilities.

What about data sovereignty? Can we provision tenants in data centers located in
specific geographies, to ensure data doesn't leave the country or region borders?

Some customers in certain geographies have an option to create a tenant in a national/regional cloud, where data storage and processing is kept separate from
all other datacenters. National/Regional clouds have a slightly different type of
security, since a separate data trustee operates the national/regional cloud Power
BI service on behalf of Microsoft.

Alternatively, customers can also set up a tenant in a specific region. However, such tenants do not have a separate data trustee from Microsoft. Pricing for
national/regional clouds is different from the generally available commercial Power
BI service. For more information about Power BI service availability for
national/regional clouds, see Power BI national/regional clouds .

How does Microsoft treat connections for customers who have Power BI Premium
subscriptions? Are those connections different than those established for the non-
Premium Power BI service?

The connections established for customers with Power BI Premium subscriptions implement an Azure Business-to-Business (B2B) authorization process, using Azure
AD to enable access control and authorization. Power BI handles connections from
Power BI Premium subscribers to Power BI Premium resources just as it would any
other Azure AD user.

How does server-side authentication work in the WFE?

The server-side authentication sequence occurs as described in the following steps, which are illustrated in the image that follows them.

1. A user initiates a connection to the Power BI service from a browser, either by typing in the Power BI address in the address bar or by selecting Sign in from the Power BI marketing page (https://powerbi.microsoft.com). The connection is established using TLS 1.2 and HTTPS, and all subsequent communication between the browser and the Power BI service uses HTTPS.

2. The Azure Traffic Manager checks the user's DNS record to determine the most
appropriate (usually nearest) datacenter where Power BI is deployed, and responds
to the DNS with the IP address of the WFE cluster to which the user should be sent.

3. WFE then redirects the user to the Microsoft Online Services sign-in page.

4. After the user has been authenticated, the sign-in page redirects the user to the
previously determined nearest Power BI service WFE cluster with an auth code.

5. The WFE cluster checks with the Azure AD service to obtain an Azure AD security
token by using the auth code. When Azure AD returns the successful
authentication of the user and returns an Azure AD security token, the WFE cluster
consults the Power BI Global Service, which maintains a list of tenants and their
Power BI back-end cluster locations and determines which Power BI back-end
service cluster contains the user's tenant. The WFE cluster then returns an
application page to the user's browser with the session, access, and routing
information required for its operation.

6. Now, when the client's browser requires customer data, it will send requests to the
back-end cluster address with the Azure AD access token in the Authorization
header. The Power BI back-end cluster reads the Azure AD access token and
validates the signature to ensure that the identity for the request is valid. The
Azure AD access token will have an expiry date set according to Azure AD policies,
and to maintain the current session the Power BI Client in the user's browser will
make periodic requests to renew the access token before it expires.

Additional resources
For more information on Power BI, see the following resources.

Getting Started with Power BI Desktop


Power BI REST API - Overview
Power BI API reference
On-premises data gateway
Power BI National/Regional Clouds
Power BI Premium
Overview of single sign-on (SSO) for gateways in Power BI
Power BI enterprise deployment whitepaper
Article • 04/05/2022

Deploying Power BI in a large enterprise is a complex task that requires a lot of thought and planning. Getting proper guidance can help you understand the choices you will make, gather requirements, and learn best practices that can make your Power BI enterprise deployment a success. To facilitate those steps, and more, Microsoft is providing the Power BI Enterprise Deployment whitepaper.

About the whitepaper


The purpose of the Enterprise Deployment whitepaper is to help make your Power BI
deployment a success: it covers key considerations, the decisions which will be necessary
throughout the process, and potential issues you may encounter. Best practices and
suggestions are offered when possible.

The target audience for this whitepaper is technology professionals. Some knowledge of
Power BI and general business intelligence concepts is assumed.

You can download the complete enterprise deployment whitepaper at the following
link:

Planning a Power BI Enterprise Deployment whitepaper

Next steps
For more information on Power BI, see the following resources:

Whitepapers for Power BI


Power BI security whitepaper
Power BI Premium
Deploying and Managing Power BI Premium Capacities
Article • 02/06/2023

We have retired the Power BI Premium whitepaper in favor of providing up-to-date information in separate articles. Use the following list to find content from the whitepaper.

Basic concepts for designers in the Power BI service, Datasets in the Power BI service, and Dataset modes in the Power BI service – Background information about Power BI service capacities, workspaces, dashboards, reports, workbooks, datasets, and dataflows.

What is Power BI Premium? – An overview of Power BI Premium, covering the basics of reserved capacities, supported workloads, unlimited content sharing, and other features.

Managing Premium capacities, Configure and manage capacities in Power BI Premium, and Configure workloads in a Premium capacity – Detailed information about configuring and managing capacities and workloads.

Use the Premium metrics app – Monitoring with the Power BI Premium Capacity Metrics app, and interpreting the metrics you see in the app.

Power BI Premium FAQ – Answers to questions around purchase and licensing, features, and common scenarios.
Distribute Power BI content to external guest users using Azure Active Directory B2B
Article • 06/20/2023

Summary: This is a technical whitepaper outlining how to distribute content to users outside the organization using the integration of Azure Active Directory Business-to-business (Azure AD B2B).

Writers: Lukasz Pawlowski, Kasper de Jonge

Technical Reviewers: Adam Wilson, Sheng Liu, Qian Liang, Sergei Gundorov, Jacob
Grimm, Adam Saxton, Maya Shenhav, Nimrod Shalit, Elisabeth Olson

7 Note

You can save or print this whitepaper by selecting Print from your browser, then
selecting Save as PDF.

Introduction
Power BI gives organizations a 360-degree view of their business and empowers
everyone in these organizations to make intelligent decisions using data. Many of these
organizations have strong and trusted relationships with external partners, clients, and
contractors. These organizations need to provide secure access to Power BI dashboards and reports to users at these external partners.

Power BI integrates with Azure Active Directory Business-to-business (Azure AD B2B) to allow secure distribution of Power BI content to guest users outside the organization –
while still maintaining control and governing access to internal data.

This white paper covers all the details you need to understand Power BI's integration with Azure Active Directory B2B. We cover its most common use cases, setup, licensing, and row-level security.

Note

Throughout this white paper, we refer to Azure Active Directory as Azure AD and
Azure Active Directory Business to Business as Azure AD B2B.
Scenarios
Contoso is an automotive manufacturer and works with many diverse suppliers who
provide it with all the components, materials, and services necessary to run its
manufacturing operations. Contoso wants to streamline its supply chain logistics and
plans to use Power BI to monitor key performance metrics of its supply chain. Contoso
wants to share with external supply chain partners analytics in a secure and manageable
way.

Contoso can enable the following experiences for external users using Power BI and
Azure AD B2B.

Ad hoc per item sharing


Contoso works with a supplier who builds radiators for Contoso's cars. Often, they need
to optimize the reliability of the radiators using data from all of Contoso's cars. An
analyst at Contoso uses Power BI to share a radiator reliability report with an Engineer at
the supplier. The Engineer receives an email with a link to view the report.

As described above, this ad-hoc sharing is performed by business users on an as-needed basis. The link sent by Power BI to the external user is an Azure AD B2B invite link. When
the external user opens the link, they're asked to join Contoso's Azure AD organization
as a Guest user. After the invite is accepted, the link opens the specific report or
dashboard. The Azure Active Directory admin delegates permission to invite external
users to the organization and chooses what those users can do once they accept the
invite as described in the Governance section of this document. The Contoso analyst can
invite the Guest user only because the Azure AD administrator allowed that action and
the Power BI administrator allowed users to invite guests to view content in Power BI's
tenant settings.
1. The process starts with a Contoso internal user sharing a dashboard or a report
with an external user. If the external user isn't already a guest in Contoso's Azure
AD, they're invited. An email is sent to their email address that includes an invite to
Contoso's Azure AD.
2. The recipient accepts the invite to Contoso's Azure AD and is added as a Guest
user in Contoso's Azure AD.
3. The recipient is then redirected to the Power BI dashboard, report, or app.

The process is considered ad-hoc since business users in Contoso perform the invite
action as needed for their business purposes. Each item shared is a single link the
external user can access to view the content.

Once the external user has been invited to access Contoso resources, a shadow account
may be created for them in Contoso Azure AD, and they don't need to be invited again.
The first time they try to access a Contoso resource like a Power BI dashboard, they go
through a consent process, which redeems the invitation. If they don't complete the
consent, they can't access any of Contoso's content. If they have trouble redeeming
their invitation via the original link provided, an Azure AD administrator can resend a
specific invitation link for them to redeem.

Planned per item sharing


Contoso works with a subcontractor to perform reliability analysis of radiators. The
subcontractor has a team of 10 people who need access to data in Contoso's Power BI
environment. The Contoso Azure AD administrator is involved to invite all the users and
to handle any additions/changes as personnel at the subcontractor change. The Azure
AD administrator creates a security group for all the employees at the subcontractor.
Using the security group, Contoso's employees can easily manage access to reports and
ensure all required subcontractor personnel have access to all the required reports,
dashboards, and Power BI apps. The Azure AD administrator can also avoid being
involved in the invitation process altogether by choosing to delegate invitation rights to
a trusted employee at Contoso or at the subcontractor to ensure timely personnel
management.

Some organizations require more control over when external users are added, need to invite many users in an external organization, or work with many external organizations. In these
cases, planned sharing can be used to manage the scale of sharing, to enforce
organizational policies, and even to delegate rights to trusted individuals to invite and
manage external users. Azure AD B2B supports planned invites to be sent directly from
the Azure portal by an IT administrator, or through PowerShell using the invitation
manager API where a set of users can be invited in one action. Using the planned invites
approach, the organization can control who can invite users and implement approval
processes. Advanced Azure AD capabilities like dynamic groups can make it easy to
maintain security group membership automatically.
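As a rough sketch of what a planned invitation might look like when scripted, the example below uses the AzureAD PowerShell module's New-AzureADMSInvitation cmdlet; the email address, display name, and redirect URL are placeholders rather than values from this scenario.

```powershell
# Sketch only: send one Azure AD B2B invitation and point the guest at the
# Power BI service once the invite is redeemed. All values are placeholders.
Connect-AzureAD
New-AzureADMSInvitation `
    -InvitedUserEmailAddress "engineer@supplier.example" `
    -InvitedUserDisplayName "Supplier Engineer" `
    -InviteRedirectUrl "https://app.powerbi.com" `
    -SendInvitationMessage $true
```

Running a call like this per user (or in a loop) is what the invitation manager API enables at scale; approval workflows and security-group assignment would sit around it.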

1. The process starts with an IT administrator inviting the guest user either manually
or through the API provided by Azure Active Directory
2. The user accepts the invite to the organization.
3. Once the user has accepted the invitation, a user in Power BI can share a report or
dashboard with the external user, or a security group they are in. Just like with
regular sharing in Power BI the external user receives an email with the link to the
item.
4. When the external user accesses the link, their authentication in their directory is
passed to Contoso's Azure AD and used to gain access to the Power BI content.

Ad hoc or planned sharing of Power BI Apps


Contoso has a set of reports and dashboards they need to share with one or more
Suppliers. To ensure all required external users have access to this content, it is
packaged as a Power BI app. The external users are either added directly to the app
access list or through security groups. Someone at Contoso then sends the app URL to
all the external users, for example in an email. When the external users open the link,
they see all the content in a single easy to navigate experience.

Using a Power BI app makes it easy for Contoso to build a BI Portal for its suppliers. A
single access list controls access to all the required content reducing wasted time
checking and setting item level permissions. Azure AD B2B maintains security access
using the Supplier's native identity so users don't need additional sign-in credentials. If
using planned invites with security groups, access management to the app as personnel
rotate into or out of the project is simplified. Membership in security groups can be maintained manually or by using dynamic groups, so that all external users from a supplier are automatically added to the appropriate security group.

1. The process starts by the user being invited to Contoso's Azure AD organization
through the Azure portal or PowerShell.
2. The user can be added to a user group in Azure AD. A static or dynamic user group
can be used, but dynamic groups help reduce manual work.
3. The external users are given access to the Power BI App through the user group.
The app URL should be sent directly to the external user or placed on a site they
have access to. Power BI makes a best effort to send an email with the app link to external users, but when using user groups whose membership can change, Power BI can't send the email to every external user managed through those groups.
4. When the external user accesses the Power BI app URL, they're authenticated by
Contoso's Azure AD, the app is installed for the user, and the user can see all the
contained reports and dashboards within the app.
Apps also have a unique feature that allows app authors to install the application
automatically for the user, so it's available when the user logs in. This feature only
installs automatically for external users who are already part of Contoso's organization
at the time the application is published or updated. Thus, it's best used with the planned
invites approach, and depends on the app being published or updated after the users
are added to Contoso's Azure AD. External users can always install the app using the
app link.

Commenting and subscribing to content across organizations
As Contoso continues to work with its subcontractors or suppliers, the external
Engineers need to work closely with Contoso's analysts. Power BI provides several
collaboration features that help users communicate about content they can consume.
Dashboard commenting (and soon Report commenting) allows users to discuss data
points they see and communicate with report authors to ask questions.

Currently, external guest users can participate in comments by leaving comments and
reading the replies. However, unlike internal users, guest users can't be @mentioned
and don't receive notifications when they receive a comment. Guest users can use the
subscriptions feature within Power BI to subscribe themselves to a report or dashboard.
Learn more in Email subscriptions for reports and dashboards in the Power BI service.

Access content in the Power BI mobile apps


When the guest user opens the link to the report or dashboard on their mobile device,
the content will open in the native Power BI mobile apps on their device, if they're
installed. The guest user will then be able to navigate between content shared with
them in the external tenant, and back to their own content from their home tenant. For
more information about accessing content that has been shared with you from an
external organization via Power BI mobile apps, see View Power BI content shared with
you from an external organization.

Organizational relationships using Power BI and Azure AD B2B
When all the users of Power BI are internal to the organization, there's no need to use
Azure AD B2B. However, once two or more organizations want to collaborate on data
and insights, Power BI's support for Azure AD B2B makes it easy and cost effective to do
so.
Below are typical organizational structures that are well suited for Azure AD B2B-style cross-organization collaboration in Power BI. Azure AD B2B works well in
most cases, but in some situations the Common alternative approaches covered at the
end of this document are worth considering.

Case 1: Direct collaboration between organizations


Contoso's relationship with its radiator supplier is an example of direct collaboration
between organizations. Since there are relatively few users at Contoso and its supplier
who need access to radiator reliability information, using Azure AD B2B based external
sharing is ideal. It's easy to use and simple to administer. This is also a common pattern
in consulting services where a consultant may need to build content for an organization.

Typically, this sharing occurs initially using Ad hoc per item sharing. However, as teams
grow or relationships deepen, the Planned per item sharing approach becomes the
preferred method to reduce management overhead. Additionally, the Ad hoc or planned
sharing of Power BI Apps, Commenting and subscribing to content across organizations,
access to content in mobile apps can come into play as well. Importantly, if both
organizations' users have Power BI Pro licenses in their respective organizations, they
can use those Pro licenses in each other's Power BI environments. This provides
advantageous licensing since the inviting organization may not need to pay for a Power
BI Pro license for the external users. This is discussed in more detail in the Licensing
section later in this document.

Case 2: Parent and its subsidiaries or affiliates


Some organization structures are more complex, including partially or wholly owned
subsidiaries, affiliated companies, or managed service provider relationships. These
organizations have a parent organization such as a holding company, but the underlying
organizations operate semi-autonomously, sometimes under different regional
requirements. This leads to each organization having its own Azure AD environment and
separate Power BI tenants.
In this structure, the parent organization typically needs to distribute standardized
insights to its subsidiaries. Typically, this sharing occurs using the Ad hoc or planned
sharing of Power BI Apps approach as illustrated in the following image, since it allows
distribution of standardized authoritative content to broad audiences. In practice a
combination of all the Scenarios mentioned earlier in this document is used.

The process is as follows:

1. Users from each Subsidiary are invited to Contoso's Azure AD


2. Then the Power BI app is published to give these users access to the required data
3. Finally, the users open the app through a link they've been given to see the reports

Several important challenges are faced by organizations in this structure:

How to distribute links to content in the Parent organization's Power BI
How to allow subsidiary users to access data sources hosted by the parent organization

Distributing links to content in the Parent organization's Power BI


Three approaches are commonly used to distribute links to the content. The first and
most basic is to send the link to the app to the required users or to place it in a
SharePoint Online site from which it can be opened. Users can then bookmark the link in
their browsers for faster access to the data they need.

The second approach is for the parent organization to allow users from the subsidiaries to access its Power BI tenant, controlling what they can access through permissions. This gives access to Power BI Home, where the user from the subsidiary sees a comprehensive list of content shared with them in the Parent organization's tenant. Then the URL to the Parent organization's Power BI environment is given to the users at the subsidiaries.

The final approach uses a Power BI app created within the Power BI tenant for each
subsidiary. The Power BI app includes a dashboard with tiles configured with the
external link option. When the user presses the tile, they're taken to the appropriate
report, dashboard, or app in the parent organization's Power BI. This approach has the
added advantage that the app can be installed automatically for all users in the
subsidiary and is available to them whenever they sign in to their own Power BI
environment. An added advantage of this approach is that it works well with the Power
BI mobile apps that can open the link natively. You can also combine this with the
second approach to enable easier switching between Power BI environments.

Allowing subsidiary users to access data sources hosted by the parent organization
Often analysts at a subsidiary need to create their own analytics using data supplied by
the parent organization. In this case, cloud data sources are commonly used to address the challenge.

The first approach uses Azure Analysis Services to build an enterprise-grade data warehouse that serves the needs of analysts across the parent and its subsidiaries, as shown in the following image. Contoso can host the data and use capabilities like row-level
security to ensure users in each subsidiary can access only their data. Analysts at each
organization can access the data warehouse through Power BI Desktop and publish
resulting analytics to their respective Power BI tenants.
The second approach uses Azure SQL Database to build a relational data warehouse
to provide access to data. This works similarly to the Azure Analysis Services approach,
though some capabilities like row level security may be harder to deploy and maintain
across subsidiaries.

More sophisticated approaches are also possible, however the above are by far the most
common.

Case 3: Shared environment across partners


Contoso may enter into a partnership with a competitor to jointly build a car on a
shared assembly line, but to distribute the vehicle under different brands or in different
regions. This requires extensive collaboration and co-ownership of data, intelligence, and
analytics across organizations. This structure is also common in the consulting services
industry where a team of consultants may do project-based analytics for a client.

In practice, these structures are complex, as shown in the following image, and require staff to maintain. To keep this arrangement cost effective, organizations are allowed to reuse Power BI Pro licenses purchased for their respective Power BI tenants.
To establish a shared Power BI tenant, an Azure Active Directory needs to be created and
at least one Power BI Pro user account needs to be purchased for a user in that active
directory. This user invites the required users to the shared organization. Importantly, in
this scenario, Contoso's users are treated as external users when they operate within the
Shared Organization's Power BI.

The process is as follows:

1. The Shared Organization is established as a new Azure Active Directory, and at least one user account is created in the new organization. That user should have a
Power BI Pro license assigned to them.
2. This user then establishes a Power BI tenant and invites the required users from
Contoso and the Partner organization. The user also establishes any shared data
assets like Azure Analysis Services. Contoso and the Partner's users can access the
shared organization's Power BI as guest users. Typically, all shared assets are stored
and accessed from the shared organization.
3. Depending on how the parties agree to collaborate, it's possible for each
organization to develop their own proprietary data and analytics using shared data
warehouse assets. They can distribute those to their respective internal users using
their internal Power BI tenants.

Case 4: Distribution to hundreds or thousands of external partners
While Contoso created a radiator reliability report for one supplier, it now wants to create a set of standardized reports for hundreds of suppliers. This allows Contoso to
ensure all suppliers have the analytics they need to make improvements or to fix
manufacturing defects.
When an organization needs to distribute standardized data and insights to many
external users/organizations, they can use the Ad hoc or planned sharing of Power BI
Apps scenario to build a BI Portal quickly and without extensive development costs. The
process to build such a portal using a Power BI app is covered in the Case Study:
Building a BI Portal using Power BI + Azure AD B2B – Step-by-Step instructions later in
this document.

A common variant of this case is when an organization is attempting to share insights with consumers, especially when looking to use Azure B2C with Power BI. Power BI doesn't natively support Azure B2C. If you're evaluating options for this case, consider using Alternative Option 2 in the Common alternative approaches section later in this document.

Case Study: Building a BI Portal using Power BI + Azure AD B2B – Step-by-Step instructions
Power BI's integration with Azure AD B2B gives Contoso a seamless, hassle-free way to
provide guest users with secure access to its BI portal. Contoso can set this up with
three steps:

1. Create BI portal in Power BI

The first task for Contoso is to create their BI portal in Power BI. Contoso's BI portal
will consist of a collection of purpose-built dashboards and reports that will be
made available to many internal and guest users. The recommended way for doing
this in Power BI is to build a Power BI app. Learn more about apps in Power BI .

Contoso's BI team creates a workspace in Power BI


Other authors are added to the workspace
Content is created inside the workspace
Now that the content is created in a workspace, Contoso is ready to invite guest
users in partner organizations to consume this content.

2. Invite Guest Users

There are two ways for Contoso to invite guest users to its BI portal in Power BI:

Planned Invites
Ad hoc Invites

Planned Invites

In this approach, Contoso invites the guest users to its Azure AD ahead of time and
then distributes Power BI content to them. Contoso can invite guest users from the
Azure portal or using PowerShell. Here are the steps to invite guest users from the
Azure portal:

Contoso's Azure AD administrator navigates to Azure portal > Azure Active Directory > Users > All users > New guest user

Add an invitation message for the guest users and select Invite
Note

To invite guest users from the Azure portal, you need to be an administrator for the Azure Active Directory of your tenant.

If Contoso wants to invite many guest users, they can do so using PowerShell.
Contoso's Azure AD administrator stores the email addresses of all the guest users
in a CSV file. Here are Azure Active Directory B2B collaboration code and
PowerShell samples and instructions.
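The linked samples show the supported patterns; as a minimal hedged sketch, a CSV-driven loop with the AzureAD module could look like the following (the guests.csv file and its Email column are illustrative assumptions):

```powershell
# Sketch only: invite every address listed in guests.csv (a hypothetical file
# with a single "Email" column) and redirect each guest to the Power BI service.
Connect-AzureAD
Import-Csv -Path ".\guests.csv" | ForEach-Object {
    New-AzureADMSInvitation `
        -InvitedUserEmailAddress $_.Email `
        -InviteRedirectUrl "https://app.powerbi.com" `
        -SendInvitationMessage $true
}
```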

After the invitation, guest users receive an email with the invitation link.

Once the guest users select the link, they can access content in the Contoso Azure
AD tenant.
Note

It is possible to change the layout of the invitation email using the Azure AD
branding feature as described here.

Ad hoc Invites

What if Contoso doesn't know all the guest users it wants to invite ahead of time?
Or, what if the analyst in Contoso who created the BI portal wants to distribute
content to guest users herself? We also support this scenario in Power BI with ad-
hoc invites.

The analyst can just add the external users to the access list of the app when
they're publishing it. The guest users get an invite and once they accept it, they're
automatically redirected to the Power BI content.

Note

Invites are needed only the first time an external user is invited to your
organization.

3. Distribute Content

Now that Contoso's BI team has created the BI portal and invited guest users, they
can distribute their portal to their end users by giving guest users access to the
app and publishing it. Power BI autocompletes names of guest users who have
been previously added to the Contoso tenant. Ad hoc invitations to other guest
users can also be added at this point.

Note

If using security groups to manage access to the app for external users, use the Planned Invites approach and share the app link directly with each external user who must access it. Otherwise, the external user may not be able to install or view content from within the app.

Guest users get an email with a link to the app.

On clicking this link, guest users are asked to authenticate with their own
organization's identity.
Once they're successfully authenticated, they're redirected to Contoso's BI app.

Guest users can later get to Contoso's app by clicking the link in the email or
bookmarking the link. Contoso can also make it easier for guest users by adding
this link to any existing extranet portal that the guest users already use.

4. Next steps
Using a Power BI app and Azure AD B2B, Contoso was able to quickly create a BI
Portal for its suppliers in a no-code way. This greatly simplified distributing
standardized analytics to all the suppliers who needed it.

While the example showed how a single common report could be distributed
among suppliers, Power BI can go much further. To ensure each partner sees only
data relevant to themselves, Row Level Security can be added easily to the report
and data model. The Data security for external partners section later in this
document describes this process in detail.

Often, individual reports and dashboards need to be embedded into an existing portal. This can also be accomplished by reusing many of the techniques shown in the example. However, in those situations it may be easier to embed reports or dashboards directly from a workspace. The process for inviting and assigning security permissions to the required users remains the same.

Under the hood: How is Lucy from Supplier1 able to access Power BI content from Contoso's tenant?
Now that we have seen how Contoso is able to seamlessly distribute Power BI content
to guest users in partner organizations, let's look at how this works under the hood.

When Contoso invites Lucy from Supplier1 to its directory, Azure AD creates a link between Lucy's account and the Contoso Azure AD tenant. This link lets Azure AD know that Lucy can access content in the Contoso tenant.

When Lucy tries to access Contoso's Power BI app, Azure AD verifies that Lucy can
access the Contoso tenant, and then provides Power BI a token that indicates that Lucy
is authenticated to access content in the Contoso tenant. Power BI uses this token to
authorize and ensure that Lucy has access to Contoso's Power BI app.
Power BI's integration with Azure AD B2B works with all business email addresses. If the
user doesn't have an Azure AD identity, they may be prompted to create one. The
following image shows the detailed flow:

It's important to recognize that the Azure AD account will be used or created in the external party's Azure AD. This makes it possible for Lucy to use their own username and password, and their credentials automatically stop working in other tenants when Lucy leaves the company, provided their organization also uses Azure AD.

Licensing
Contoso can choose one of three approaches to license guest users from its suppliers
and partner organizations to have access to Power BI content.

Note
The Azure AD B2B's free tier is enough to use Power BI with Azure AD B2B. Some
advanced Azure AD B2B features like dynamic groups require additional licensing.
For more information, see the Azure AD B2B documentation.

Approach 1: Contoso uses Power BI Premium


With this approach, Contoso purchases Power BI Premium capacity and assigns its BI
portal content to this capacity. This allows guest users from partner organizations to
access Contoso's Power BI app without any Power BI license.

External users are also subject to the consumption only experiences offered to "Free"
users in Power BI when consuming content within Power BI Premium.

Contoso can also take advantage of other Power BI premium capabilities for its apps like
increased refresh rates, capacity, and large model sizes.

Approach 2: Contoso assigns Power BI Pro licenses to guest users
With this approach, Contoso assigns pro licenses to guest users from partner
organizations – this can be done from Contoso's Microsoft 365 admin center. This
allows guest users from partner organizations to access Contoso's Power BI app without
purchasing a license themselves. This can be appropriate for sharing with external users
whose organization hasn't adopted Power BI yet.

Note

Contoso's pro license applies to guest users only when they access content in the
Contoso tenant. Pro licenses enable access to content that is not in a Power BI
Premium capacity.

Approach 3: Guest users bring their own Power BI Pro license
With this approach, Supplier 1 assigns a Power BI Pro license to Lucy. They can then
access Contoso's Power BI app with this license. Since Lucy can use their Pro license
from their own organization when accessing an external Power BI environment, this
approach is sometimes referred to as bring your own license (BYOL). If both
organizations are using Power BI, this offers advantageous licensing for the overall
analytics solution and minimizes overhead of assigning licenses to external users.
Note

The pro license given to Lucy by Supplier 1 applies to any Power BI tenant where
Lucy is a guest user. Pro licenses enable access to content that is not in a Power BI
Premium capacity.

Data security for external partners


Commonly when working with multiple external suppliers, Contoso needs to ensure that
each supplier sees data only about its own products. User-based security and dynamic
row level security make this easy to accomplish with Power BI.

User-based security
One of the most powerful features of Power BI is Row Level Security. This feature allows
Contoso to create a single report and dataset but still apply different security rules for
each user. For an in-depth explanation, see Row-level security (RLS).

Power BI's integration with Azure AD B2B allows Contoso to assign Row Level Security
rules to guest users as soon as they're invited to the Contoso tenant. As we have seen
before, Contoso can add guest users through either planned or ad-hoc invites. If
Contoso wants to enforce row level security, it's strongly recommended to use planned
invites to add the guest users ahead of time and assign them to the security roles
before sharing the content. If Contoso instead uses ad-hoc invites, there might be a
short period of time where the guest users won't be able to see any data.

Note

This delay in accessing data protected by RLS when using ad-hoc invites can lead to
support requests to your IT team because users will see either blank or broken
looking reports/dashboards when opening a sharing link in the email they receive.
Therefore, it is strongly recommended to use planned invites in this scenario.

Let's walk through this with an example.

As mentioned before, Contoso has suppliers around the globe, and they want to make
sure that the users from their supplier organizations get insights from data from just
their territory. But users from Contoso can access all the data. Instead of creating several
different reports, Contoso creates a single report and filters the data based the user
viewing it.

To make sure Contoso can filter data based on who is connecting, two roles are created in Power BI Desktop: one filters the data to the SalesTerritory "Europe" and the other to "North America".
Whenever roles are defined in the report, a user must be assigned to a specific role for
them to get access to any data. The assignment of roles happens inside the Power BI
service (Datasets > Security).
This opens a page where Contoso's BI team can see the two roles they created. Now
Contoso's BI team can assign users to the roles.

In the example Contoso is adding a user in a partner organization with email address
[email protected] to the Europe role:

When this gets resolved by Azure AD, Contoso can see the name show up in the
window ready to be added:
Now when this user opens the app that was shared with them, they only see a report
with data from Europe:

Dynamic row level security


Another interesting topic is to see how dynamic row level security (RLS) work with Azure
AD B2B.

In short, Dynamic row level security works by filtering data in the model based on the
username of the person connecting to Power BI. Instead of adding multiple roles for
groups of users, you define the users in the model. We won't describe the pattern in
detail here. Kasper de Jong offers a detailed write-up on all the flavors of row-level security in Power BI Desktop in the Dynamic security cheat sheet, and in this whitepaper.

Let's look at a small example - Contoso has a simple report on sales by groups:
Now this report needs to be shared with two guest users and an internal user - the
internal user can see everything, but the guest users can only see the groups they have
access to. This means we must filter the data only for the guest users. To filter the data
appropriately, Contoso uses the Dynamic RLS pattern as described in the whitepaper
and blog post. This means, Contoso adds the usernames to the data itself:

Then, Contoso creates the right data model that filters the data appropriately with the
right relationships:
To filter the data automatically based on who is logged in, Contoso needs to create a
role that passes in the user who is connecting. In this case, Contoso creates two roles –
the first is the "securityrole" that filters the Users table with the current username of the
user logged in to Power BI (this works even for Azure AD B2B guest users).

Contoso also creates another "AllRole" for its internal users who can see everything –
this role doesn't have any security predicate.

After uploading the Power BI desktop file to the service, Contoso can assign guest users
to the "SecurityRole" and internal users to the "AllRole"

Now, when the guest users open the report, they only see sales from group A:
In the matrix to the right, you can see that the USERNAME() and USERPRINCIPALNAME() functions both return the guest user's email address.

Now the internal user gets to see all the data:

As you can see, Dynamic RLS works with both internal or guest users.

Note

This scenario also works when using a model in Azure Analysis Services. Usually
your Azure Analysis Service is connected to the same Azure AD as your Power BI -
in that case, Azure Analysis Services also knows the guest users invited through
Azure AD B2B.

Connecting to on-premises data sources


Power BI offers the capability for Contoso to use on-premises data sources like SQL Server Analysis Services or SQL Server directly, thanks to the on-premises data gateway. It's even possible to sign in to those data sources with the same credentials as used with Power BI.

Note

When installing a gateway to connect to your Power BI tenant, you must use a user created within your tenant. External users cannot install a gateway and connect it to your tenant.

For external users, this might be more complicated as the external users are usually not
known to the on-premises AD. Power BI offers a workaround for this by allowing
Contoso administrators to map the external usernames to internal usernames, as described in Manage your data source - Analysis Services. For example,
[email protected] can be mapped to lucy_supplier1_com#[email protected].
This method is fine if Contoso only has a handful of users, or if Contoso can map all the external users to a single internal account. For more complex scenarios where each user needs their own credentials, there's a more advanced approach that uses custom AD attributes to do the mapping, as described in Manage your data source - Analysis Services. This allows the Contoso administrator to define a mapping for every user in its Azure AD (including external B2B users). These attributes can be set through the AD object model using scripts or code, so Contoso can fully automate the mapping on invite or on a scheduled cadence.
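Purely as an illustration of that kind of automation (not the documented configuration), an on-premises script might stamp an external sign-in address onto a custom attribute that a gateway mapping rule then reads; the account name and extensionAttribute1 below are hypothetical choices and depend entirely on how the data source mapping is configured.

```powershell
# Illustrative only: store a guest's external sign-in address in a custom
# attribute of an on-premises AD account so a gateway mapping rule can use it.
# The identity and the attribute (extensionAttribute1) are hypothetical.
Import-Module ActiveDirectory
Set-ADUser -Identity "partner.analyst" `
    -Replace @{ extensionAttribute1 = "guest.user@partner.example" }
```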

Governance
Additional Azure AD settings that affect experiences in Power BI related to Azure AD B2B
When using Azure AD B2B sharing, the Azure Active Directory administrator controls
aspects of the external user's experience. These are controlled on the External
collaboration settings page within the Azure Active Directory settings for your Tenant.

For more information, see Configure external collaboration settings.

Note

By default, the Guest users permissions are limited option is set to Yes, so guest users within Power BI have limited experiences, especially around sharing, where people picker UIs don't work for those users. It is important to work with your Azure AD administrator to set it to No, as shown below, to ensure a good experience.
Control guest invites
Power BI administrators can control external sharing just for Power BI by visiting the
Power BI admin portal. But admins can also control external sharing with various Azure
AD policies. These policies allow admins to:

Turn off invitations by end users


Only admins and users in the Guest Inviter role can invite
Admins, the Guest Inviter role, and members can invite
All users, including guests, can invite

You can read more about these policies in Delegate invitations for Azure Active
Directory B2B collaboration.

All Power BI actions by external users are also audited in our auditing portal.
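As a hedged sketch of how an admin might review that activity (assuming the MicrosoftPowerBIMgmt module, Power BI admin rights, and that guest sign-ins carry the #EXT# marker in the UserId field, which may vary), the activity log can be pulled and filtered like this:

```powershell
# Sketch: pull one day of Power BI activity events and keep entries whose user
# identifier carries the B2B guest marker (#EXT#). Dates are example values.
Connect-PowerBIServiceAccount
$json = Get-PowerBIActivityEvent -StartDateTime '2023-06-01T00:00:00' `
                                 -EndDateTime '2023-06-01T23:59:59'
$events = $json | ConvertFrom-Json
$events | Where-Object { $_.UserId -like '*#EXT#*' } |
    Select-Object CreationTime, UserId, Activity
```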

Conditional Access policies for guest users


Contoso can enforce conditional access policies for guest users who access content
from the Contoso tenant. You can find detailed instructions in Conditional access for
B2B collaboration users.

Common alternative approaches


While Azure AD B2B makes it easy to share data and reports across organizations, there
are several other approaches that are commonly used and may be superior in certain
cases.

Alternative Option 1: Create duplicate identities for partner users
With this option, Contoso manually creates duplicate identities for each partner user in the Contoso tenant, as shown in the following image. Then within Power BI, Contoso can share the appropriate reports, dashboards, or apps with those identities.
Reasons to choose this alternative:

Since the user's identity is controlled by your organization, any related services such as email, SharePoint, and so on are also within the control of your organization. Your IT
Administrators can reset passwords, disable access to accounts, or audit activities
in these services.
Users who use personal accounts for their business are often restricted from accessing certain services, so they may need an organizational account.
Some services only work for your organization's users. For example, using Intune
to manage content on the personal/mobile devices of external users using Azure
B2B may not be possible.

Reasons not to choose this alternative:

Users from partner organizations must remember two sets of credentials– one to
access content from their own organization and the other to access content from
Contoso. This is a hassle for these guest users and many guest users are confused
by this experience.
Contoso must purchase and assign per-user licenses to these users. If a user needs
to receive email or use office applications, they need the appropriate licenses,
including Power BI Pro to edit and share content in Power BI.
Contoso might want to enforce more stringent authorization and governance
policies for external users compared to internal users. To achieve this, Contoso
needs to create an in-house nomenclature for external users and all Contoso users
need to be educated about this nomenclature.
When the user leaves their organization, they continue to have access to Contoso's
resources until the Contoso admin manually deletes their account
Contoso admins have to manage the identity for the guest, including creation,
password resets, etc.

Alternative Option 2: Create a custom Power BI Embedded application using custom authentication
Another option for Contoso is to build its own custom embedded Power BI application
with custom authentication ('App owns data'). While many organizations don't have the
time or resources to create a custom application to distribute Power BI content to their
external partners, for some organizations this is the best approach and deserves serious
consideration.

Often, organizations have existing partner portals that centralize access to all
organizational resources for partners, provide isolation from internal organizational
resources, and provide streamlined experiences for partners to support many partners
and their individual users.

In the example above, users from each supplier sign in to Contoso's Partner Portal that
uses Azure AD as an identity provider. It could use Azure AD B2B, Azure B2C, native
identities, or federate with any number of other identity providers. The user would sign
in and access a partner portal built using an Azure Web App or similar infrastructure.

Within the web app, Power BI reports are embedded from a Power BI Embedded
deployment. The web app would streamline access to the reports and any related
services in a cohesive experience aimed to make it easy for suppliers to interact with
Contoso. This portal environment would be isolated from the Contoso internal Azure AD
and Contoso's internal Power BI environment to ensure suppliers couldn't access those
resources. Typically, data would be stored in a separate Partner data warehouse to
ensure isolation of data as well. This isolation has benefits since it limits the number of
external users with direct access to your organization's data, limiting what data could
potentially be available to the external user, and limiting accidental sharing with external
users.

Using Power BI Embedded, the portal can take advantage of favorable licensing, using an app token or a master user together with Premium capacity purchased in Azure. This simplifies concerns about assigning licenses to end users, and capacity can scale up or down based on expected usage. The portal can offer an overall higher-quality and consistent
experience since partners access a single portal designed with all of a Partner's needs in
mind. Lastly, since Power BI Embedded based solutions are typically designed to be
multi-tenant, it makes it easier to ensure isolation between partner organizations.

Reasons to choose this alternative:

Easier to manage as the number of partner organizations grows. Since partners are
added to a separate directory isolated from Contoso's internal Azure AD directory,
it simplifies IT's governance duties and helps prevent accidental sharing of internal
data to external users.
Typical Partner Portals are highly branded experiences with consistent experiences
across partners and streamlined to meet the needs of typical partners. Contoso can
therefore offer a better overall experience to partners by integrating all required
services into a single portal.
Licensing costs for advanced scenarios like Editing content within the Power BI
Embedded is covered by the Azure purchased Power BI Premium, and doesn't
require assignment of Power BI Pro licenses to those users.
Provides better isolation across partners if architected as a multi-tenant solution.
The Partner Portal often includes other tools for partners beyond Power BI reports,
dashboards, and apps.

Reasons not to choose this alternative:

Significant effort is required to build, operate, and maintain such a portal making it
a significant investment in resources and time.
Time to solution is much longer than using B2B sharing since careful planning and
execution across multiple workstreams is required.
Where there's a smaller number of partners, the effort required for this alternative is likely too high to justify.
Collaboration with ad-hoc sharing is the primary scenario faced by your
organization.
The reports and dashboards are different for each partner. This alternative
introduces management overhead beyond just sharing directly with Partners.

FAQ
Can Contoso send an invitation that is automatically redeemed, so that the user is just
"ready to go"? Or does the user always have to click through to the redemption URL?

The end user must always click through the consent experience before they can access
content.

If you'll be inviting many guest users, we recommend that you delegate this from your
core Azure AD admins by adding a user to the guest inviter role in the resource
organization. This user can invite other users in the partner organization by using the
sign-in UI, PowerShell scripts, or APIs. This reduces the administrative burden on your
Azure AD admins to invite or resend invites to users at the partner organization.
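One way that delegation might be scripted (a sketch assuming the AzureAD module and that the Guest Inviter directory role has already been activated in the tenant; the user principal name is a placeholder):

```powershell
# Sketch: add a trusted user to the Guest Inviter directory role so they can
# send B2B invitations without being a full Azure AD admin.
Connect-AzureAD
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Guest Inviter' }
$user = Get-AzureADUser -ObjectId 'trusted.user@contoso.com'
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId
```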

Can Contoso force multi-factor authentication for guest users if its partners don't
have multi-factor authentication?

Yes. For more information, see Conditional access for B2B collaboration users.

How does B2B collaboration work when the invited partner is using federation to add
their own on-premises authentication?

If the partner has an Azure AD tenant that is federated to the on-premises authentication infrastructure, on-premises single sign-on (SSO) is automatically achieved. If the partner doesn't have an Azure AD tenant, an Azure AD account may be created for new users.

Can I invite guest users with consumer email accounts?

Inviting guest users with consumer email accounts is supported in Power BI. This
includes domains such as hotmail.com, outlook.com, and gmail.com. However, those
users may experience limitations beyond what users with work or school accounts
encounter.
