
Contents

Power BI enterprise documentation


Enterprise
Automatic aggregations overview
Configure automatic aggregations
Power BI site reliability engineering model
Bring your own encryption keys
Using Azure AD B2B
Use customer managed keys
High availability and failover FAQ
Licensing
Get Power BI for your organization
Purchase Power BI Pro
Licensing for your organization
View and manage user licenses
Power BI for US Government
Enroll a US Government organization
Disable self-service sign-up
Sign up with a Microsoft 365 Trial
Add Power BI with a Microsoft 365 partner
Alternate email address for Power BI
Closing your account
Premium
Power BI Premium features
Gen2
What is Power BI Premium Gen2?
Premium Gen2 architecture
Power BI Premium Per User
Purchase Power BI Premium
Purchase Power BI Premium for testing
Plan your transition to Power BI Premium Gen2
Managing Premium Gen2 capacities
Configure and manage capacities
Premium Gen2 capacity load evaluation
Install the Gen2 metrics app
Using the Premium Gen2 metrics app
Backup and restore Power BI Premium datasets
Using autoscale with Premium Gen2
Configure workloads
Monitor capacities in the Admin portal
Configure large datasets
Automation with service principals
Dataset connectivity with the XMLA endpoint
Interactive and background operations
Troubleshoot XMLA endpoint connectivity
Power BI Premium Gen2 FAQ
Gen1
What is Power BI Premium?
Optimizing Premium capacities
Premium capacity scenarios
Monitoring Power BI capacities
Monitor capacities with the app
Monitor Premium workload metrics
Restart a Premium capacity
Power BI Premium FAQ
Governance and compliance
Power BI governance and deployment approaches
Metadata scanning
Information protection
Data protection in Power BI
Sensitivity labels
Sensitivity label overview
Enable sensitivity labeling
Apply sensitivity labels
Default label policy
Mandatory label policy
Sensitivity label downstream inheritance
Sensitivity label inheritance from data sources (preview)
Sensitivity label change enforcement
Custom help link for sensitivity labels
Sensitivity label support for paginated reports
Set or remove sensitivity labels programmatically
Audit schema for sensitivity labels
Protection metrics report
DLP policies for Power BI (preview)
Microsoft Defender for Cloud Apps with Power BI
Security
Power BI security
Row-level security
Row-level security
Privacy levels
Using service tags with Power BI
Use private links for Power BI
Configure mobile apps with Intune
Automation tools
Enable service principal authentication for read-only admin APIs
PowerShell, REST APIs, and .NET SDK
Power BI cmdlets for PowerShell
Automatic aggregations (Preview)
5/23/2022 • 18 minutes to read

Automatic aggregations use state-of-the-art machine learning (ML) to continuously optimize DirectQuery
datasets for maximum report query performance. Automatic aggregations are built on top of existing user-
defined aggregations infrastructure first introduced with composite models for Power BI. Unlike user-defined
aggregations, automatic aggregations don’t require extensive data modeling and query-optimization skills to
configure and maintain. Automatic aggregations are both self-training and self-optimizing. They enable dataset
owners of any skill level to improve query performance, providing faster report visualizations for even the
largest datasets.
With automatic aggregations:
Report visualizations are faster - An optimal percentage of report queries are returned by an automatically
maintained in-memory aggregations cache instead of backend data source systems. Outlier queries that
cannot be returned by the in-memory cache are passed directly to the data source using DirectQuery.
Balanced architecture - When compared to pure DirectQuery mode, most query results are returned by the
Power BI query engine and in-memory aggregations cache. Query processing load on data source systems
at peak reporting times can be significantly reduced, which means increased scalability in the data source
backend.
Easy setup - Dataset owners can enable automatic aggregations training and schedule one or more refreshes
for the dataset. With the first training and refresh, automatic aggregations begin creating an aggregations
framework and optimal aggregations. The system automatically tunes itself over time.
Fine-tuning – With a simple and intuitive user interface in the dataset settings, you can estimate the
performance gains for a different percentage of queries returned from the in-memory aggregations cache
and make adjustments for even greater gains. A single slider control helps you easily fine-tune for your
environment.

IMPORTANT
Automatic aggregations are in Preview. When in preview, functionality and documentation are likely to change.

Requirements
Supported plans
Automatic aggregations are supported for Power BI Premium per capacity, Premium per user, and Power BI
Embedded datasets.
Supported data sources
During preview, automatic aggregations are supported for the following data sources:
Azure SQL Database
Azure Synapse Dedicated SQL pool
Google BigQuery
Snowflake
Supported modes
Automatic aggregations are supported for DirectQuery mode datasets. Composite model datasets with both
import tables and DirectQuery connections are supported; however, automatic aggregations apply to the
DirectQuery connection only.
Permissions
To enable and configure automatic aggregations, you must be the Dataset owner. Workspace admins can take
over a dataset as owner to configure automatic aggregations settings.

Configuring automatic aggregations


Automatic aggregations are configured in dataset Settings. Configuration is simple: enable automatic
aggregations training and schedule one or more refreshes. Before you configure automatic aggregations for
your dataset, be sure to read through this article in its entirety. It provides a good understanding of how
automatic aggregations work and can help you decide whether automatic aggregations are right for your
environment. When you're ready for step-by-step instructions on how to enable automatic aggregations
training, configure a refresh schedule, and fine-tune for your environment, see Configure automatic aggregations.

Benefits
With DirectQuery, each time a dataset user opens a report or interacts with a report visualization, DAX queries
are passed to the query engine and then on to the backend data source as SQL queries. The data source must
then calculate and return results for each query. Compared to import mode datasets stored in-memory,
DirectQuery data source round trips can be both time and process intensive, often causing slow query response
times in report visualizations.
When enabled for a DirectQuery dataset, automatic aggregations can boost report query performance by
avoiding data source query round trips. Pre-aggregated query results are automatically returned by an in-
memory aggregations cache rather than being sent to and returned by the data source. The amount of pre-
aggregated data in the in-memory aggregations cache is a small fraction of the amount of data kept in fact and
detail tables at the data source. The result is not only better report query performance, but also reduced load on
backend data source systems. With automatic aggregations, only a small portion of report and ad-hoc queries
that require aggregations not included in the in-memory cache are passed to the backend data source, just like
with pure DirectQuery mode.

Automatic query and aggregations management


While automatic aggregations eliminate the need to create user-defined aggregations tables and dramatically
simplify implementing a pre-aggregated data solution, a deeper familiarity with the underlying processes and
dependencies is helpful in understanding how automatic aggregations work. Power BI relies on the following to
create and manage automatic aggregations.
Query log
Power BI tracks dataset and user report queries in a query log. For each dataset, Power BI maintains seven days
of query log data. Query log data is rolled forward each day. The query log is secured and not visible to users or
through the XMLA endpoint.
Training operations
As part of the first scheduled dataset refresh operation for your selected frequency (Day or Week), Power BI first
initiates a training operation that evaluates the query log to ensure aggregations in the in-memory
aggregations cache adapt to changing query patterns. In-memory aggregations tables are created, updated, or
dropped, and special queries are sent to the data source to determine aggregations to be included in the cache.
Calculated aggregations data, however, is not loaded into the in-memory cache during training - it's loaded
during the subsequent refresh operation.
For example, if you choose a Day frequency and schedule refreshes at 4:00AM, 9:00AM, 2:00PM, and 7:00PM,
only the 4:00AM refresh each day will include both a training operation and a refresh operation.
The subsequent 9:00AM, 2:00PM, and 7:00PM scheduled refreshes for that day are refresh-only operations that
update the existing aggregations in the cache.

While training operations evaluate past queries from the query log, the results are sufficiently accurate to
ensure future queries are covered. However, there is no guarantee that future queries will be returned by the
in-memory aggregations cache, because those new queries could differ from those derived from the query log.
Queries not returned by the in-memory aggregations cache are passed to the data source by using DirectQuery.
Depending on the frequency and ranking of those new queries, aggregations for them may be included in the
in-memory aggregations cache with the next training operation.
The training operation has a 60-minute time limit. If training is unable to process the entire query log within the
time limit, a notification is logged in the dataset Refresh history and training resumes the next time it is
launched. The training cycle completes and replaces the existing automatic aggregations when the entire query
log is processed.
Refresh operations
As described above, after the training operation completes as part of the first scheduled refresh for your
selected frequency, Power BI performs a refresh operation that queries and loads new and updated aggregations
data into the in-memory aggregations cache and removes any aggregations that no longer rank high enough
(as determined by the training algorithm). All subsequent refreshes for your chosen Day or Week frequency are
refresh-only operations that query the data source to update existing aggregations data in the cache. Using our
example above, the 9:00AM, 2:00PM, and 7:00PM scheduled refreshes for that day are refresh-only operations.

Regularly scheduled refreshes throughout the day (or week) keep aggregations data in the cache up to date
with data at the backend data source. Through dataset Settings, you can schedule up to 48 refreshes per
day to ensure report queries that are returned by the aggregations cache get results based on the most
recently refreshed data from the backend data source.
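
If you manage schedules programmatically, the REST API's Datasets - Update Refresh Schedule operation can set the same refresh times. The following is a minimal sketch, assuming the standard dataset refresh schedule API applies to a dataset with automatic aggregations enabled; the dataset ID and access token are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class UpdateRefreshScheduleSample
{
    static async Task Main()
    {
        // Placeholders: supply your own dataset ID and Azure AD access token.
        string datasetId = "<dataset-id>";
        string accessToken = "<access-token>";

        using HttpClient client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // The four times mirror the 4:00AM/9:00AM/2:00PM/7:00PM example earlier
        // in this article; notifyOption requests an email on refresh failure.
        string body = @"{
            ""value"": {
                ""enabled"": true,
                ""times"": [ ""04:00"", ""09:00"", ""14:00"", ""19:00"" ],
                ""localTimeZoneId"": ""UTC"",
                ""notifyOption"": ""MailOnFailure""
            }
        }";

        var request = new HttpRequestMessage(HttpMethod.Patch,
            $"https://api.powerbi.com/v1.0/myorg/datasets/{datasetId}/refreshSchedule")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine($"Update refresh schedule returned HTTP {(int)response.StatusCode}");
    }
}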
Caution

Training and refresh operations are process and resource intensive for both the Power BI service and the data
source systems. Increasing the percentage of queries that use aggregations means more aggregations must be
queried and calculated from data sources during training and refresh operations, increasing the probability of
excessive use of system resources and potentially causing timeouts. To learn more, see Fine tuning.
Training on demand
As mentioned earlier, a training cycle may not complete within the time limits of a single data refresh cycle. If
you don't want to wait until the next scheduled refresh cycle that includes training, you can trigger automatic
aggregations training on demand by clicking Train and Refresh Now in dataset Settings. Using Train and
Refresh Now triggers both a training operation and a refresh operation. Check the dataset Refresh history
to see if the current operation is finished before running an additional on-demand training and refresh
operation, if necessary.
Refresh history
Each refresh operation is recorded in the dataset Refresh history. Important information about each refresh is
shown, including the amount of memory aggregations in the cache are consuming for the configured query
percentage. To view refresh history, in the dataset Settings page, click Refresh history. If you want to drill
down a little further, click Show details.
By regularly checking refresh history, you can ensure your scheduled refresh operations are completing within
an acceptable period. Make sure refresh operations are successfully completing before the next scheduled
refresh begins.
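
Refresh history can also be checked programmatically through the REST API's Datasets - Get Refresh History operation. The following is a minimal sketch; the dataset ID and access token are placeholders, and JSON parsing is omitted for brevity.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RefreshHistorySample
{
    static async Task Main()
    {
        // Placeholders: supply your own dataset ID and Azure AD access token.
        string datasetId = "<dataset-id>";
        string accessToken = "<access-token>";

        using HttpClient client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Ask for the five most recent refresh operations. Each entry in the
        // response includes status, startTime, and endTime, which you can use
        // to confirm a refresh completed before the next scheduled one begins.
        string url = $"https://api.powerbi.com/v1.0/myorg/datasets/{datasetId}/refreshes?$top=5";
        string json = await client.GetStringAsync(url);
        Console.WriteLine(json);
    }
}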
Training and refresh failures
While Power BI performs training and refresh operations as part of the first scheduled dataset refresh for the
day or week frequency you choose, these operations are implemented as separate transactions. If a training
operation cannot fully process the query log within its time limits, Power BI proceeds to refresh the existing
aggregations (and regular tables in a composite model) using the previous training state. In this case, the
refresh history indicates that the refresh succeeded, and training resumes processing the query log the next
time training launches. Query performance might be less optimized if client report query patterns have
changed and the aggregations haven't yet adjusted, but the achieved performance level should still be far
better than a pure DirectQuery dataset without any aggregations.

If a training operation requires too many cycles to finish processing the query log, consider reducing the
percentage of queries that use the in-memory aggregations cache in dataset Settings. This will reduce the
number of aggregations created in the cache, but allow more time for training and refresh operations to
complete. To learn more, see Fine tuning.
If training succeeds but refresh fails, the entire dataset refresh is marked as Failed because the result is an
unavailable in-memory aggregations cache.
When scheduling refresh, you can specify email notifications in case of refresh failures.

User-defined and automatic aggregations


User-defined aggregations in Power BI can be manually configured based on hidden aggregated tables in the
dataset. Configuring user-defined aggregations is often complex, requiring a greater level of data-modeling and
query-optimization skills. Automatic aggregations, on the other hand, eliminate this complexity as part of an
AI-driven system. Unlike user-defined aggregations, which remain static, automatic aggregations adapt: Power BI
continuously maintains query logs and determines query patterns from those logs by using machine learning
(ML) predictive modeling algorithms. Pre-aggregated data is calculated and stored in-memory based on query
pattern analysis. With automatic aggregations, datasets are both self-training and self-optimizing. As client
report query patterns change, automatic aggregations adjust, prioritizing and caching those aggregations used
most often.
Because automatic aggregations are built on top of the existing user-defined aggregations infrastructure, it's
possible to use both user-defined and automatic aggregations together in the same dataset. Skilled data
modelers can define aggregations for tables using DirectQuery, Import (with or without Incremental refresh), or
Dual storage modes, while at the same time having the benefits of more automatic aggregations for queries
over DirectQuery connections that don’t hit the user-defined aggregation tables. This flexibility enables balanced
architectures that can reduce query loads and avoid bottlenecks.
Aggregations created in the in-memory cache by the automatic aggregations training algorithm are identified
as System aggregations. The training algorithm creates and deletes only those System aggregations as
reporting queries are analyzed and adjustments are made to maintain the optimal aggregations for the dataset.
Both user-defined and automatic aggregations are refreshed with dataset refresh. Only those aggregations
created by automatic aggregations and marked as system-generated aggregations are included in automatic
aggregations processing.
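
To see how the two kinds of aggregations surface in model metadata, here is a minimal Tabular Object Model (TOM) sketch. It assumes the client library versions described later in the Metadata elements section; the workspace URL and dataset name are placeholders. System-generated tables carry the SystemManaged property, while user-defined aggregations are expressed through the AlternateOf property on columns.

using System;
using System.Linq;
using Microsoft.AnalysisServices.Tabular;

class AggregationsInventory
{
    static void Main()
    {
        // Placeholders: supply your own workspace URL and dataset name.
        Server server = new Server();
        server.Connect("<workspace-url>");
        Model model = server.Databases.GetByName("<dataset-name>").Model;

        // System-generated (automatic) aggregations tables are flagged with
        // the SystemManaged property described later in this article.
        foreach (Table table in model.Tables.Where(t => t.SystemManaged))
        {
            Console.WriteLine($"Automatic aggregations table: {table.Name}");
        }

        // User-defined aggregations are declared column by column through the
        // AlternateOf property, which maps an aggregation column back to its
        // base table and column.
        foreach (Table table in model.Tables)
        {
            foreach (Column column in table.Columns.Where(c => c.AlternateOf != null))
            {
                Console.WriteLine($"User-defined aggregation mapping: {table.Name}[{column.Name}]");
            }
        }

        server.Disconnect();
    }
}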

Query caching and automatic aggregations


Power BI also supports Query caching in Premium/Embedded capacities to maintain query results.
Query caching is a different feature from automatic aggregations. With query caching, Power BI Premium uses
its local caching service to implement caching, whereas automatic aggregations are implemented at the dataset
level. With query caching, the service only caches queries for the initial report page load, therefore query
performance isn't improved when users interact with a report. In contrast, automatic aggregations optimize
most report queries by pre-caching aggregated query results, including those queries generated when users
interact with reports. Query caching and automatic aggregations can both be enabled for a dataset, but it's likely
not necessary.

Monitor with Azure Log Analytics


Azure Log Analytics (LA) is a service within Azure Monitor that Power BI can use to save activity logs. With
the Azure Monitor suite, you can collect, analyze, and act on telemetry data from your Azure and on-premises
environments. It offers long-term storage, an ad-hoc query interface, and API access to allow data export and
integration with other systems. To learn more, see Using Azure Log Analytics in Power BI.
If Power BI is configured with an Azure LA account, as described in Configuring Azure Log Analytics for Power BI,
you can analyze the success rate of your automatic aggregations. Among other things, you can determine if
report queries are answered from the in-memory cache.
To use this ability, download the PBIT template from here and connect it to your log analytics account, as
described in this post. In the report, you can view data at three different levels: Summary view, DAX query level
view, and SQL query level view.
The following image shows the summary page for all the queries. As you can see, the marked chart shows the
percentage of total queries that were satisfied by aggregations versus those that had to use the data source.
The next step to dive deeper is to look at the use of aggregations at a DAX query level. Right-click a DAX query
from the list (bottom left) > Drill through > Query history.

This will provide you with a list of all the pertinent queries. Drill through to the next level to show more
aggregation details.
Application Lifecycle Management
From development to test and from test to production, datasets with automatic aggregations enabled have
special requirements for ALM solutions.
Deployment pipelines
When using deployment pipelines, Power BI can copy the datasets with their dataset configuration from the
current stage into the target stage. However, automatic aggregations must be reset in the target stage because
the settings do not get transferred from the current stage to the target stage. You can also deploy content programmatically, using
the deployment pipelines REST APIs. To learn more about this process, see Automate your deployment pipeline
using APIs and DevOps.
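
As a sketch of the programmatic path, the following calls the deployment pipelines Deploy All REST operation. The pipeline ID and access token are placeholders, the request options shown are assumptions to adapt to your pipeline, and automatic aggregations must still be re-enabled in the target stage afterward.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class DeployAllSample
{
    static async Task Main()
    {
        // Placeholders: supply your own pipeline ID and Azure AD access token.
        string pipelineId = "<pipeline-id>";
        string accessToken = "<access-token>";

        using HttpClient client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Deploy everything from the source stage (0 = development) to the
        // next stage. The options shown are assumptions; adjust as needed.
        string body = @"{
            ""sourceStageOrder"": 0,
            ""options"": {
                ""allowCreateArtifact"": true,
                ""allowOverwriteArtifact"": true
            }
        }";

        HttpResponseMessage response = await client.PostAsync(
            $"https://api.powerbi.com/v1.0/myorg/pipelines/{pipelineId}/deployAll",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // Deployment is asynchronous; the response points to an operation you
        // can poll. Afterward, re-enable automatic aggregations training on
        // the target dataset in dataset Settings.
        Console.WriteLine($"DeployAll returned HTTP {(int)response.StatusCode}");
    }
}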
Custom ALM solutions
If you use a custom ALM solution based on XMLA endpoints, keep in mind that your solution might be able to
copy system-generated and user-created aggregations tables as part of the dataset metadata. However, you
must manually enable automatic aggregations after each deployment step at the target stage. Power BI will
retain the configuration if you overwrite an existing dataset.
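
One possible approach, sketched below under the assumption that your tooling uses the Tabular Object Model (TOM), is to strip system-managed tables from the model definition before deploying it, so the target dataset starts clean. The workspace URL and dataset name are placeholders, and the deployment step itself is left to your existing tooling.

using System;
using System.Linq;
using Microsoft.AnalysisServices.Tabular;

class StripAutoAggsTables
{
    static void Main()
    {
        // Placeholders: supply your own source workspace URL and dataset name.
        Server source = new Server();
        source.Connect("<source-workspace-url>");
        Database dataset = source.Databases.GetByName("<dataset-name>");

        // Remove system-managed aggregations tables from the in-memory
        // metadata only; without calling SaveChanges, the source dataset
        // itself is left untouched.
        foreach (Table table in dataset.Model.Tables.Where(t => t.SystemManaged).ToList())
        {
            dataset.Model.Tables.Remove(table);
        }

        // Serialize the trimmed definition to TMSL. Deploying it (for example,
        // wrapped in a createOrReplace command executed against the target
        // XMLA endpoint) is left to your existing tooling.
        string tmsl = JsonSerializer.SerializeDatabase(dataset, new SerializeOptions());
        Console.WriteLine(tmsl);

        source.Disconnect();
    }
}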

NOTE
If you upload or republish a dataset as part of a Power BI Desktop (.pbix) file, system-created aggregation tables are lost
as Power BI replaces the existing dataset with all its metadata and data in the target workspace.

Altering a dataset
When altering a dataset with automatic aggregations enabled via XMLA endpoints, such as adding or removing
tables, Power BI preserves any existing aggregations that can be and removes those that are no longer needed
or relevant. Query performance could be impacted until the next training phase is triggered.

Metadata elements
Datasets with automatic aggregations enabled contain unique system-generated aggregations tables.
Aggregations tables aren't visible to users in reporting tools. They are, however, visible through the XMLA
endpoint by using tools with Analysis Services client libraries version 19.22.5 and higher. When working with
datasets with automatic aggregations enabled, be sure to upgrade your data modeling and administration tools
to the latest version of the client libraries. For SQL Server Management Studio (SSMS), upgrade to SSMS
version 18.9.2 or higher. Earlier versions of SSMS aren't able to enumerate tables or script out these datasets.
Automatic aggregations tables are identified by a SystemManaged table property, which is new to the Tabular
Object Model (TOM) in Analysis Services client libraries version 19.22.5 and higher. Shown in the following code
snippet, the SystemManaged property is set to true for automatic aggregations tables and false for regular
tables.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AnalysisServices.Tabular;

namespace AutoAggs
{
    class Program
    {
        static void Main(string[] args)
        {
            string workspaceUri = "<Specify the URL of the workspace where your dataset resides>";
            string datasetName = "<Specify the name of your dataset>";

            Server sourceWorkspace = new Server();
            sourceWorkspace.Connect(workspaceUri);
            Database dataset = sourceWorkspace.Databases.GetByName(datasetName);

            // Enumerate system-managed tables.
            IEnumerable<Table> aggregationsTables = dataset.Model.Tables.Where(tbl => tbl.SystemManaged == true);

            if (aggregationsTables.Any())
            {
                Console.WriteLine("The following auto aggs tables exist in this dataset:");
                foreach (Table table in aggregationsTables)
                {
                    Console.WriteLine($"\t{table.Name}");
                }
            }
            else
            {
                Console.WriteLine($"This dataset has no auto aggs tables.");
            }

            Console.WriteLine("\n\rPress [Enter] to exit the sample app...");
            Console.ReadLine();
        }
    }
}

Executing this snippet outputs the automatic aggregations tables currently included in the dataset to the console.
Keep in mind that aggregations tables change constantly as training operations determine the optimal
aggregations to include in the in-memory aggregations cache.

IMPORTANT
Power BI fully manages automatic aggregations system-generated table objects. Do not delete or modify these tables
yourself. Doing so can cause degraded performance.

Power BI maintains the dataset configuration outside of the dataset. The presence of a system-managed
aggregations table in a dataset does not necessarily mean the dataset is in fact enabled for automatic
aggregations training. In other words, if you script out a full model definition for a dataset with automatic
aggregations enabled, and create a new copy of the dataset (with a different name/workspace/capacity), the new
resulting dataset is not yet enabled for automatic aggregations training. You still need to enable automatic
aggregations training for the new dataset in dataset Settings.

Considerations and limitations


When using automatic aggregations, keep the following in mind:
The SQL queries generated during the initial training phase can put significant load on the data
warehouse. If training repeatedly finishes incomplete and you can verify on the data warehouse side that the
queries are encountering a timeout, consider temporarily scaling up your data warehouse to meet the
training demand.
Aggregations stored in the in-memory aggregations cache may not be calculated on the most recent data at
the data source. Unlike pure DirectQuery, and more like regular import tables, there is a latency between
updates at the data source and aggregations data stored in the in-memory aggregations cache. While there
will always be some degree of latency, it can be mitigated through an effective refresh schedule.
To further optimize performance, set all dimension tables to Dual mode and leave fact tables in DirectQuery
mode.
Automatic aggregations are not available with Power BI Pro, Azure Analysis Services, or SQL Server Analysis
Services.
Power BI does not support downloading datasets with automatic aggregations enabled. If you uploaded or
published a Power BI Desktop (.pbix) file to Power BI and then enabled automatic aggregations, you can no
longer download the PBIX file. Make sure you keep a copy of the PBIX file locally.
The preview release does not yet support automatic aggregations with external tables in Azure Synapse
Analytics. You can enumerate external tables in Synapse by using the following SQL query:
SELECT SCHEMA_NAME(schema_id) AS schema_name, name AS table_name FROM sys.external_tables;
Automatic aggregations are only available for datasets using enhanced metadata. If you want to enable
automatic aggregations for an older dataset, upgrade the dataset to enhanced metadata first. To learn more,
see Using enhanced dataset metadata.
Do not enable automatic aggregations if the DirectQuery data source is configured for single sign-on and
uses dynamic data views or security controls to limit the data a user is allowed to access. Automatic
aggregations are not aware of these data source-level controls, which makes it impossible to ensure correct
data is provided on a per-user basis. Training will log a warning in the refresh history that it detected a data
source configured for single sign-on and skipped the tables that use this data source. If possible, disable SSO
for these data sources to take full advantage of the optimized query performance automatic aggregations
can provide.
Do not enable automatic aggregations if the dataset contains only hybrid tables to avoid unnecessary
processing overhead. A hybrid table uses both import partitions and a DirectQuery partition. A common
scenario is incremental refresh with real-time data in which a DirectQuery partition fetches transactions from
the data source that occurred after the last data refresh. However, Power BI imports aggregations during
refresh. Therefore, automatic aggregations cannot include transactions that occurred after the last data
refresh. Training will log a warning in the refresh history that it detected and skipped hybrid tables.
Calculated columns are not considered for automatic aggregations. If you use a calculated column in
DirectQuery mode, such as by using the COMBINEVALUES DAX function to create a relationship based on
multiple columns from two DirectQuery tables, the corresponding report queries will not hit the in-memory
aggregations cache.
Automatic aggregations are only available in the Power BI service. Power BI Desktop does not create system-
generated aggregations tables.
If you modify the metadata of a dataset with automatic aggregations enabled, query performance might
degrade until the next training process is triggered. As a best practice, you should drop the automatic
aggregations, make the changes, and then re-train.
Do not modify or delete system-generated aggregations tables unless you have automatic aggregations
disabled and are cleaning up the dataset. The system takes responsibility for managing these objects.

Community
Power BI has a vibrant community where MVPs, BI pros, and peers share expertise in discussion groups, videos,
blogs, and more. When learning about automatic aggregations, be sure to check out these additional resources:
Power BI Community
Search "Power BI automatic aggregations" on Bing

See also
Configure automatic aggregations
User-defined aggregations
DirectQuery in Power BI
Analysis Services client libraries
Configure automatic aggregations (Preview)
5/23/2022 • 6 minutes to read

Configuring automatic aggregations includes enabling training for a supported DirectQuery dataset and
configuring one or more scheduled refreshes. After several iterations of the training and refresh operations have
run, you can return to dataset settings to fine-tune the percentage of report queries that use the in-memory
aggregations cache. Before completing these steps, be sure you fully understand the functionality and
limitations described in Automatic aggregations.

IMPORTANT
Automatic aggregations are in Preview. When in preview, functionality and documentation are likely to change.

Enable
You must have dataset Owner permissions to enable automatic aggregations. Workspace admins can take over
dataset owner permissions.
1. In dataset Settings, expand Scheduled refresh and performance optimization.
2. Click the Automatic aggregations training slider to On. If the enable slider is greyed out, ensure Data
source credentials for the dataset are configured and signed in.

3. In Refresh schedule, specify a refresh frequency and time zone. If the Refresh schedule controls are
disabled, verify the data source configuration, including the gateway connection (if necessary) and data
source credentials.
4. Click Add another time, and then specify one or more refreshes.
You must schedule at least one refresh. The first refresh for the frequency you select will include both a
training operation and a refresh that loads new and updated aggregations into the in-memory cache.
Schedule more refreshes to ensure report queries that hit the aggregations cache get results that are in
sync with the backend data source. To learn more, see Refresh operations.
5. Click Apply.

On-demand train and refresh


The first scheduled refresh operation for your chosen frequency includes a training operation. If that training
operation does not complete within the 60-minute time limit, the subsequent refresh operation will not load or
update aggregations in the cache. The next training operation will not run until the first scheduled refresh of
the next frequency cycle.
In such cases, you may want to manually run one or more on-demand training and refresh operations to fully
complete the training and load or refresh aggregations in the cache. For example, when checking the Refresh
history, if the first scheduled training and refresh operation for the day (frequency) does not complete within the
time limit, and you don't want to wait for the next day's scheduled refresh that includes a training operation to
run, you can run one or more on-demand train and refresh operations to fully process the data query log (train)
and load aggregations to the cache (refresh).
To run an on-demand train and refresh operation, click Train and Refresh Now. Be sure to keep an eye on the
refresh history to ensure the on-demand training operation completes successfully. If not, run additional train
and refresh operations until training completes successfully and aggregations are loaded or refreshed in the
cache.
Using Train and Refresh Now can also be helpful when fine-tuning the percentage of report queries that will use
aggregations from the in-memory cache. By running an on-demand train and refresh now operation, you can
more quickly determine if your new percentage setting allows the training operation to complete within the
time limit.
Keep in mind that training and refresh operations, whether scheduled or on-demand, are process and resource
intensive for both the data source and Power BI. Choose a time when resources are least impacted.

Fine-tuning
Both user-defined and system-generated aggregations tables are part of the dataset, contribute to the dataset
size, and are subject to existing Power BI dataset size constraints. Aggregations processing also consumes
resources and impacts dataset refresh durations. An optimal configuration strikes a balance between providing
pre-aggregated results from the in-memory aggregations cache for the most frequently used report queries,
while accepting slower results for outlier and ad-hoc queries in exchange for faster training and refresh times
and a reduced burden on system resources.
Adjusting the percentage
By default, the aggregations cache setting that determines the percentage of report queries that will use
aggregations from the in-memory cache is 75%. Increasing the percentage means a greater number of report
queries are ranked higher and therefore aggregations for them are included in the in-memory aggregations
cache. While a higher percentage can mean more queries are answered from the in-memory cache, it can also
mean longer training and refresh times. Adjusting to a lower percentage, on the other hand, can mean
shorter training and refresh times and less resource utilization, but report visualization performance could
diminish because fewer report queries would be answered by the in-memory aggregations cache; those
report queries must instead round trip to the data source.
Before the system can determine the optimal aggregations to include in the cache, it must first know the report
query patterns being used most often. Be sure to allow several iterations of the training/refresh operations to be
completed before adjusting the percentage of queries that will use the aggregations cache. This gives the
training algorithm time to analyze report queries over a broader time period and self-adjust accordingly. For
example, if you've scheduled refreshes for daily frequency, you might want to wait a full week. User reporting
patterns on some days of the week may be different than others.
To adjust the percentage
1. In dataset Settings, expand Scheduled refresh and performance optimization.
2. In Query coverage, use the Adjust the percentage of queries that will use the aggregated caches slider
to increase or decrease the percentage to the desired value. As you adjust the percentage, the Query
performance impact lift chart provides estimated query response times.

3. Click Train and Refresh Now or Apply.


Estimating query performance impact
The Query performance impact lift chart provides estimated report query run times as a function of the
percentage of queries that will use cached aggregations. The chart initially shows 0.0 for all metrics until at
least one training/refresh operation is performed. After an initial training/refresh operation, the chart can help
you determine whether adjusting the percentage of queries that use the in-memory aggregations cache can
potentially further improve query response.

Threshold appears as a marker line on the lift chart and indicates the target query response time for your
reports. You can then fine-tune the percentage of queries that will use the aggregations cache to determine a
new query percentage that meets the desired threshold.
Metrics
DirectQuery - An estimated duration in seconds for a report query sent to and returned from the data source
by using DirectQuery. Queries that cannot be answered by the in-memory aggregations cache will typically be
within this estimate.
Current query percentage - An estimated duration in seconds for report queries answered from the in-
memory aggregations cache, based on the percentage setting for the most recent training/refresh operation.
New query percentage - An estimated duration in seconds for report queries answered from the in-memory
aggregations cache for the newly selected percentage. As the percentage slider is changed, this metric reflects
the potential change.

Disable
You must have dataset Owner permissions to disable automatic aggregations. Workspace admins can take over
dataset owner permissions.
1. To disable, click the Automatic aggregations training slider to Off.
When you disable training, you are prompted with an option to delete automatic aggregation tables.

If you choose not to delete existing automatic aggregation tables, the tables will remain in the dataset and
continue to be refreshed. However, because training has been disabled, no new aggregations will be
added to them. Power BI will continue to use the existing tables to get aggregated query results when
possible.
If you choose to delete the tables, the dataset reverts to its original state, without any automatic
aggregations.
2. Click Apply.

See also
Automatic aggregations
User-defined aggregations
DirectQuery in Power BI
Power BI site reliability engineering (SRE) model
5/23/2022 • 17 minutes to read

This document describes the Power BI team's approach to maintaining a reliable, performant, and scalable
service for customers. It covers monitoring service health, mitigating incidents, managing releases, and
acting on necessary improvements. Other important operational aspects, such as security, are outside of the
scope of this document. This document was created to share knowledge with our customers, who often raise
questions regarding site reliability engineering practices. The intention is to offer transparency into how Power
BI minimizes service disruption through safe deployment, continuous monitoring, and rapid incident response.
The techniques described here also provide a blueprint for teams hosting service-based solutions to build
foundational live site processes that are efficient and effective at scale.
Author: Yitzhak Kesselman

Background
Power BI is a native cloud offering and global service, supporting the following customers and capabilities:
Serving 260,000 organizations and 97% of Fortune 500 companies
Deployed in 52 Azure regions around the world
Executing nearly 20 million queries per hour at peak
Ingesting over 90 petabytes of data per month into customer datasets
Employing 149 clusters powered by more than 350,000 cores
Despite absorbing six straight years of triple-digit growth and substantial new capabilities, the Power BI service
exhibits strong service reliability and operational excellence. As the service grew and large enterprises deployed
it at scale to hundreds of thousands of users, the need for exceptional reliability became essential. The reliability
results shown in the following table are the direct consequence of engineering, tools, and culture changes made
by the Power BI team over the past few years, and are highlighted in this article.

Through solutions and disciplined operations, the Power BI team has sustained exponential growth and rapid
update cycles without increasing overall cost or burden on live site management. In the following graph, you
can see the continuous and significant decline in Service Reliability Engineering cost per monthly active user
(MAU).
The efficiencies gained from site reliability engineering (SRE) team efforts offset the cost of funding such a team.
The SRE team size, and its corresponding operational cost, has remained constant despite exponential service
growth over the same period. Without such dedicated focus on efficiency, live site support costs would have
grown substantially with increased service usage.
Further, an increasing percentage of Power BI live site incidents can now be addressed partially or completely
through automation. The following chart shows a 90% decrease in Time to Mitigate (TTM) incidents over the
past two years while usage has more than tripled. The same period saw the introduction of alert automation to
deflect more than 82% of incidents.

These efforts have resulted in greatly improved service reliability for customers, with a success rate
approaching four nines (99.99%).
The remainder of this article describes the approach and best practices put in place that enabled the SRE team to
achieve the previous chart's outcomes. The following sections include details on live site incident types, standard
investigation processes, best practices for operationalizing those processes at scale, and the Objective Key
Results (OKRs) used by the team to measure success.
Why incidents occur and how to live with them
The Power BI team ships weekly feature updates to the service and on-demand targeted fixes to address service
quality issues. The release process includes a comprehensive set of quality gates, including thorough code
reviews, ad-hoc testing, automated component-based and scenario-based tests, feature flighting, and regional
safe deployment. However, even with these safeguards, live site incidents can and do happen.
Live site incidents can be divided into several categories:
Dependent-service issues (such as Azure AD, Azure SQL, Storage, virtual machine scale set, Service Fabric)
Infrastructure outage (such as a hardware failure, data center failure)
Power BI environmental configuration issues (such as insufficient capacity)
Power BI service code regressions
Customer misconfiguration (such as insufficient resources, bad queries/reports)
Reducing incident volume is one way to decrease live site burden and to improve customer satisfaction.
However, doing so isn't always possible given that some of the incident categories are outside the team's direct
control. Furthermore, as the service footprint expands to support rapid growth in usage, the probability of an
incident occurring due to external factors increases. High incident counts can occur even when the Power BI
service has minimal service code regressions and has met or exceeded its Service Level Objective (SLO) for
overall reliability of 99.95%. This reality has led the Power BI team to devote significant resources to reducing
incident costs to a level that is sustainable by both financial and engineering measures.

Live site incident process


When investigating live site incidents, the Power BI team follows a standard operational process that's common
across Microsoft and the industry. The following image summarizes the standard live site incident handling
lifecycle.

In the first phase, which is the service monitoring phase, the SRE team works with engineers, program
managers, and the Senior Leadership Team to define Service Level Indicators (SLIs) and Service Level Objectives
(SLOs) for both major scenarios and minor scenarios. These objectives apply to different metrics of the service,
including scenario/component reliability, scenario/component performance (latency), and resource
consumption. The live site team and product team then craft alerts that monitor Service Level Indicators (SLIs)
against agreed upon targets. When violations are detected, an alert is triggered for investigation.
In the second phase, which is the incident response phase, processes are structured to facilitate the following
results:
Prompt and targeted notification to customers of any relevant impact
Analysis of affected service components and workflows
Targeted mitigation of incident impact
In the final phase, which is the continuous improvement phase, the team focuses on completing relevant
post-mortem analysis and resolving any identified process, monitoring, configuration, or code issues. The
fixes are then prioritized against the team's general engineering backlog based on overall severity and risk of
reoccurrence.

Our practices for service monitoring


The Power BI team emphasizes a consistent, data-driven, and customer-centric approach to its live site
operations. Defining Service Level Indicators (SLIs) and implementing corresponding live site monitoring alerts
is part of the approval criteria for enabling any new Power BI feature in production. Product group engineers
also include steps for investigation and mitigation of alerts when they occur using a template Troubleshooting
Guide (TSG). Those deliverables are then presented to the Site Reliability Engineering (SRE) team.
One way in which the Power BI team enables exponential service growth is by using an SRE team. These
individuals are skilled in service architecture, automation, and incident management practices, and are
embedded within incidents to drive end-to-end resolution. The approach contrasts with the rotational model
where engineering leaders from the product group take on an incident manager role for only a few weeks per
year. The SRE team ensures that a consistent group of individuals are responsible for driving live site
improvements and ensuring that learnings from previous incidents are incorporated into future escalations. The
SRE team also assists with large-scale drills that test Business Continuity and Disaster Recovery (BCDR)
capabilities of the service.
SRE team members use their unique skill set and considerable live site experience, and also partner with feature
teams to enhance SLIs and alerts provided by the product team in numerous ways. Some of the ways they
enhance SLIs include:
Anomaly Alerts: SREs develop monitors that consider typical usage and operational patterns within a given
production environment and alert when significant deviations occur. Example: Dataset refresh latency
increases by 50% relative to similar usage periods.
Customer/Environment-Specific Alerts: SREs develop monitors that detect when specific customers,
provisioned capacities, or deployed clusters deviate from expected behavior. Example: A single capacity
owned by a customer is failing to load datasets for querying.
Fine-Grained Alerts: SREs consider subsets of the population that might experience issues independently
of the broader population. For such cases, specific alerts are crafted to ensure that alerts will in fact fire if
those less common scenarios fail despite lower volume. Example: Refreshes of datasets that use the GitHub
connector are failing.
Perceived Reliability Alerts: SREs also craft alerts that detect cases when customers are unsuccessful due
to any type of error. This can include failures from user errors, indicating a need for improved
documentation or a modified user experience. These alerts can also notify engineers of unexpected system
errors that might otherwise be misclassified as a user error. Example: Dataset refresh fails due to incorrect
credentials.
Another critical role of the SRE team is to automate TSG actions to the extent possible through Azure
Automation. In cases where complete automation is not possible, the SRE team defines actions to enrich an alert
with useful and incident-specific diagnostic information to accelerate subsequent investigation. Such enrichment
is paired with prescriptive guidance in a corresponding TSG so that live site engineers can either take a specific
action to mitigate the incident or quickly escalate to SMEs for more investigation. Alerts with enrichment are
also candidates for complete automation when possible and when incident volume/severity provides a
sufficiently high ROI.
As a direct result of these efforts, more than 82% of incidents are mitigated without any human interaction. The
remaining incidents have enough enrichment data and supporting documentation to be handled without SME
involvement in 99.7% of cases.

Live Site SREs also enforce alert quality in several ways, including the following:
Ensuring that TSGs include impact analysis and escalation policy
Ensuring that alerts execute over the smallest possible time window for faster detection
Ensuring that alerts use reliability thresholds instead of absolute limits so they scale to clusters of different
sizes, as the sketch after this list illustrates
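
To illustrate the last point, here is a toy sketch (not Power BI's actual alerting code) of a reliability-threshold check. The same rule behaves correctly on clusters of any size, whereas a fixed failure-count limit would not.

using System;

class ReliabilityAlert
{
    // Example SLI target (mirroring the 99.95% SLO mentioned earlier).
    const double MinReliability = 0.9995;

    // A ratio-based rule fires correctly regardless of cluster size, whereas
    // a fixed "more than N failures" limit would over-alert on large clusters
    // and under-alert on small ones.
    static bool ShouldAlert(long failed, long total) =>
        total > 0 && 1.0 - (double)failed / total < MinReliability;

    static void Main()
    {
        Console.WriteLine(ShouldAlert(500, 1_000));      // True: 50% reliability
        Console.WriteLine(ShouldAlert(500, 10_000_000)); // False: 99.995% reliability
    }
}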

Our practices for incident response


When an automated live site incident is created for the Power BI service, one of the first priorities is to notify
customers of potential impact. Azure has a target notification time of 15 minutes, which is difficult to achieve
when notifications are manually posted by incident managers after joining a call. Communications in such cases
are at risk of being late or inaccurate due to required manual analysis. Azure Monitoring offers centralized
monitoring and alerting solutions that can detect impact to certain metrics within this time window. However,
Power BI is a SaaS offering with complex scenarios and user interactions that cannot be easily modeled and
tracked using such alerting systems. In response, the Power BI team developed a novel solution called TTN0.
TTN0 (Time To Notify “0”) is a fully automated incident notification service that uses our internal alerting
infrastructure to identify specific scenarios and customers that are impacted by a newly created incident. It is
also integrated with external monitoring agents outside of Azure to detect connectivity issues that might
otherwise go unnoticed. TTN0 sends registered customers an email when it detects a service disruption
or degradation. With TTN0, the Power BI team can send reliable, targeted notifications within 10 minutes of
impact start time (which is 33% faster than the Azure target). Since the solution is fully automated, there is
minimal risk from human error or delays. As of May 2021, more than 8,000 companies have registered for
TTN0 alerts.
As mentioned in the previous section, Power BI’s live site philosophy emphasizes automated resolution of
incidents to improve overall scalability and sustainability of the SRE team. The emphasis on automation enables
mitigation at scale and can potentially avoid costly rollbacks or risky expedited fixes to production systems.
When manual investigation is required, Power BI adopts a tiered approach with initial investigation done by a
dedicated SRE team. SRE team members are experienced in managing live site incidents, facilitating cross-team
communication, and driving mitigation. In cases where the acting SRE team member requires more context on
an impacted scenario/component, they may engage the Subject Matter Expert (SME) of that area for guidance.
Finally, the SME team conducts simulations of system component failures to understand and to mitigate issues
in advance of an active live site incident.
Once the affected component/scenario of the service is determined, the Power BI team has multiple techniques
for quickly mitigating impact. Some of them are the following:
Activate side-by-side deployment infrastructure: Power BI supports running different versioned
workloads in the same cluster, allowing the team to run a new (or previous) version of a specific workload for
certain customers without triggering a full-scale deployment (or rollback). The approach can reduce
mitigation time to 15 minutes and lower overall deployment risk.
Execute Business Continuity/Disaster Recovery (BCDR) process: Allows the team to fail over primary
workloads to this alternate environment in three minutes if a serious issue is found in a new service version.
BCDR can also be used when environmental factors or dependent services prevent the primary
cluster/region from operating normally.
Leverage resiliency of dependent services: Power BI proactively evaluates and invests in resiliency and
redundancy efforts for all dependent services (such as SQL, Redis Cache, Key Vault). Resiliency includes
sufficient component monitoring to detect upstream/downstream regressions as well as local, zonal, and
regional redundancy (where applicable). Investing in these capabilities ensures that tooling exists for
automatic or manual triggering of recovery operations to mitigate impact from an affected dependency.

Our practices for continuous improvement


The Power BI team reviews all customer-impacting incidents during a Weekly Service Review with
representation from all engineering groups that contribute to the Power BI service. The review disseminates key
learnings from the incident to leaders across the organization and provides an opportunity to adapt our
processes to close gaps and address inefficiencies.
Prior to review, the SRE team prepares post-mortem content and identifies preliminary repair items for the live
site team and product development team. Items may include code fixes, augmented telemetry, or updated
alerts/TSGs. Power BI SREs are familiar with many of these areas and often proactively make the adjustments in
real time while responding to an active incident. Doing so helps to ensure that changes are incorporated into the
system in time to detect reoccurrence of a similar issue. In cases where an incident was the result of a customer
escalation, the SRE team adjusts existing automated alerting and SLIs to reflect customer expectations. For the
~0.3% of incidents that require escalation to a Subject Matter Expert (SME) of the impacted
scenario/component, the Power BI SRE team will review ways in which the same incident (or similar incidents)
could be handled without escalation in the future. The detailed analysis by the SRE team helps the product
development team to design a more resilient, scalable, and supportable product.
Beyond review of specific postmortems, the SRE team also generates reports on aggregate incident data to
identify opportunities for service improvement such as future automation of incident mitigation or product
fixes. The reporting combines data from multiple sources, including the customer support team, automated
alerting, and service telemetry. The consolidated view provides visibility into those issues that are most
negatively impacting service and team health, and the SRE team then prioritizes potential improvements based
on overall ROI. For example, if a particular alert is firing too frequently or generating disproportionate impact
on service reliability, the SRE team can partner with the product development team to invest in relevant quality
improvements. Completing these work items drives improvement to service and live site metrics and directly
contributes to organizational objective key results (OKRs). In cases where an SLI has been consistently met for a
long period of time, the SRE team may suggest increases to the service SLO to provide an improved experience
for our customers.

Measuring success through objective key results (OKRs)


The Power BI team has a comprehensive set of Objective Key Results (OKRs) that are used to ensure overall
service health, customer satisfaction, and engineer happiness. OKRs can be divided into two categories:
Service Health OKRs: These OKRs directly or indirectly measure the health of scenarios or components in
the service and often are tracked by monitoring/alerting. Example: A single capacity owned by a customer is
failing to load datasets for querying.
Live Site Health OKRs: These OKRs directly or indirectly measure how efficiently and effectively live site
operations are addressing service incidents and outages described in previous sections. Example: Time To
Notify (TTN) customers of an impacting incident.
The following table shows the major live site health OKRs.

The Power BI team's incident reaction times, as measured by TTN, TTA, and TTM, are significantly better than
its targets. Alert automation directly correlates with the team's ability to sustain exponential service
growth, while continuing to meet or exceed target response times for incident alerting, notification, and
mitigation. Over a two-year period, the Power BI SRE team added automation to deflect more than 82% of
incidents and to enrich an additional six percent with details that empower engineers to quickly take action to
mitigate incidents when they occur. The approach also enables SMEs to focus on features and proactive quality
improvements instead of repeatedly being engaged for reactive incident investigations.
The above OKRs are actively tracked by the Power BI live site team, and the Senior Leadership Team, to ensure
that the team continues to meet or exceed the baseline required to support substantial service growth, to
maintain a sustainable live site workload, and to ensure high customer satisfaction.

Release management and deployment process


Power BI releases weekly feature updates to the service and on-demand targeted fixes to address service quality
issues. The approach is intended to balance speed and safety. Any code change in Power BI passes through
various validation stages before being deployed broadly to external customers, as described in the following
diagram.

Every change to the Power BI code base passes through automated component and end-to-end tests that
validate common scenarios and ensure that interactions yield expected results. In addition, Power BI uses a
Continuous Integration/Continuous Deployment (CI/CD) pipeline on main development branches to detect
other issues that are cost-prohibitive to identify on a per-change basis. The CI/CD process triggers a full cluster
build out and various synthetic tests that must pass before a change can enter the next stage in the release
process. Approved CI/CD builds are deployed to internal test environments for more automated and manual
validation before being included in each weekly feature update. The process means that a change will be
incorporated into a candidate release within 1 to 7 days after it is completed by the developer.
The weekly feature update then passes through various official deployment rings of Power BI’s safe deployment
process. The updated product build is applied first to an internal cluster that hosts content for the Power BI team
followed by the internal cluster that is used by all employees across Microsoft. The changes wait in each of these
environments for one week prior to moving to the final step: production deployment. Here, the deployment
team adopts a gradual rollout process that selectively applies the new build by region to allow for validation in
certain regions prior to broad application.
Scaling this deployment model to handle exponential service growth is accomplished in several ways, as the
following bullets describe:
Comprehensive Dependency Reviews: Power BI is a complex service with many upstream
dependencies and nontrivial hardware capacity requirements. The deployment team ensures the
availability and necessary capacity of all dependent resources and services in a target deployment region.
Usage models project capacity needs based on anticipated customer demands.
Automation: Power BI deployments are essentially zero-touch with little to no interaction required by
the deployment team. Prebuilt rollout specifications exist for multiple deployment scenarios. Deployment
configuration is validated at build-time to avoid unexpected errors during live deployment roll-outs.
Cluster Health Checks: Power BI deployment infrastructure checks internal service health models
before, during, and after an upgrade to identify unexpected behavior and potential regressions. When
possible, deployment tooling attempts auto-mitigation of encountered issues.
Incident Response Process: Deployment issues are handled like other live site incidents using
techniques that are discussed in more detail in the following sections of this article. Engineers analyze
issues with a focus on immediate mitigation and then follow up with relevant manual or automated
process changes to prevent future reoccurrence.
Feature Management/Exposure Control: Power BI applies a comprehensive framework for
selectively exposing new features to customers. Feature exposure is independent of deployment cadences
and allows code for new scenarios to be deployed in a disabled state until it has passed all relevant
quality bars. In addition, new features can be exposed to a subset of the overall Power BI population as an
extra validation step prior to enabling globally. If an issue is detected, the Power BI feature management
service provides the ability to disable an offending feature in seconds without waiting for more time-
consuming deployment rollback operations.
These features have enabled the Power BI team to improve the success rate of deployments by 18 points while
absorbing a 400% year-over-year growth in monthly deployments.

What's next
Another high priority item on the SRE team roadmap is the reduction of system noise from false positive alerts
or ignorable alerts. In addition, the team will inventory transient alerts, drive root cause analyses (RCAs), and determine if there are
underlying systemic issues that need to be addressed.
Finally, a foundational element of Power BI service resiliency is ensuring that the service is compartmentalized
such that incidents only impact a subset of the users. Doing so enables mitigation by redirecting impacted traffic
to a healthy cluster. Supporting this holistically requires significant architectural work and design changes but
should yield even higher SLOs than are attainable today.

Next steps
For more information and resources on the Power BI service, take a look at the following articles.
Governance and deployment approaches
White papers for Power BI
Bring your own encryption keys for Power BI

Power BI encrypts data at-rest and in process. By default, Power BI uses Microsoft-managed keys to encrypt your
data. In Power BI Premium you can also use your own keys for data at-rest that is imported into a dataset (see
Data source and storage considerations for more information). This approach is often described as bring your
own key (BYOK).

Why use BYOK?


BYOK makes it easier to meet compliance requirements that specify key arrangements with the cloud service
provider (in this case Microsoft). With BYOK, you provide and control the encryption keys for your Power BI data
at-rest at the application level. As a result, you can exercise control and revoke your organization's keys, should
you decide to exit the service. If you revoke the keys, the data becomes unreadable to the service within 30 minutes.

Data source and storage considerations


To use BYOK, you must upload data to the Power BI service from a Power BI Desktop (PBIX) file. You cannot use
BYOK in the following scenarios:
Analysis Services Live Connection
Excel workbooks (unless data is first imported into Power BI Desktop)
Push datasets
Streaming datasets
Power BI goals do not currently support bring your own key (BYOK).
BYOK applies only to datasets. Push datasets, Excel files, and CSV files that users can upload to the service are
not encrypted using your own key. To identify which artifacts are stored in your workspaces, use the following
PowerShell command:
PS C:\> Get-PowerBIWorkspace -Scope Organization -Include All

NOTE
This cmdlet requires Power BI management module v1.0.840. You can see which version you have by running
Get-InstalledModule -Name MicrosoftPowerBIMgmt . Install the latest version by running
Install-Module -Name MicrosoftPowerBIMgmt . You can get more information about the Power BI cmdlet and its
parameters in Power BI PowerShell cmdlet module.
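As a quick reference, the version check and update described in this note can be run as follows:

# Check the installed Power BI management module version (v1.0.840 or later is required here)
Get-InstalledModule -Name MicrosoftPowerBIMgmt

# Install or update to the latest version
Install-Module -Name MicrosoftPowerBIMgmt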

Configure Azure Key Vault


In this section you learn how to configure Azure Key Vault, a tool for securely storing and accessing secrets, like
encryption keys. You can use an existing key vault to store encryption keys, or you can create a new one
specifically for use with Power BI.
The instructions in this section assume basic knowledge of Azure Key Vault. For more information, see What is
Azure Key Vault?
Configure your key vault in the following way:
1. Add the Power BI service as a service principal for the key vault, with wrap and unwrap permissions.
2. Create an RSA key with a 4096-bit length (or use an existing key of this type), with wrap and unwrap
permissions.

IMPORTANT
Power BI BYOK supports only RSA keys with a 4096-bit length.

3. (Recommended) Check that the key vault has the soft delete option enabled.
Add the service principal
1. In the Azure portal, in your key vault, under Access policies, select Add Access Policy.
2. Under Key permissions, select Unwrap Key and Wrap Key.

3. Under Select principal, search for and select Microsoft.Azure.AnalysisServices.

NOTE
If you can't find "Microsoft.Azure.AnalysisServices", it's likely that the Azure subscription associated with your
Azure Key Vault never had a Power BI resource associated with it. Try searching for the following string instead:
00000009-0000-0000-c000-000000000000.
4. Select Add, then Save.
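If you prefer scripting over the portal, the same access policy can be granted with the Az PowerShell module. A minimal sketch, where 'contoso-vault2' is a placeholder vault name and the GUID is the Power BI service principal ID from the note above:

# Grant the Power BI service principal wrap and unwrap permissions on the vault
Set-AzKeyVaultAccessPolicy -VaultName 'contoso-vault2' `
    -ServicePrincipalName '00000009-0000-0000-c000-000000000000' `
    -PermissionsToKeys wrapKey, unwrapKey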

NOTE
To revoke Power BI's access to your data in the future, remove the access rights for this service principal from
your Azure Key Vault.
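As a sketch of that revocation, using the same placeholder vault name:

# Remove the Power BI service principal's access policy to revoke its access to the key
Remove-AzKeyVaultAccessPolicy -VaultName 'contoso-vault2' `
    -ServicePrincipalName '00000009-0000-0000-c000-000000000000'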

Create an RSA key


1. In your key vault, under Keys, select Generate/Import.
2. Select a Key Type of RSA and an RSA Key Size of 4096.

3. Select Create.
4. Under Keys, select the key you created.
5. Select the GUID for the Current Version of the key.
6. Check that Wrap Key and Unwrap Key are both selected. Copy the Key Identifier to use when you
enable BYOK in Power BI.
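Key creation can also be scripted. A minimal sketch with the Az module, using placeholder vault and key names:

# Create a 4096-bit, software-protected RSA key in the vault
Add-AzKeyVaultKey -VaultName 'contoso-vault2' -Name 'ContosoKeyVault' `
    -Destination 'Software' -KeyType 'RSA' -Size 4096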

Soft delete option


We recommend that you enable soft-delete on your key vault, to protect from data loss in case of accidental key
or key vault deletion. You must use PowerShell to enable the soft-delete property on the key vault, because
this option isn't available from the Azure portal yet.
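One way to do this with the Az module is sketched below; newer key vaults may already have soft-delete enabled by default, and the vault name is a placeholder:

# Read the vault resource, add the enableSoftDelete property, and write it back
($vault = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName 'contoso-vault2').ResourceId).Properties |
    Add-Member -MemberType NoteProperty -Name enableSoftDelete -Value 'true'
Set-AzResource -ResourceId $vault.ResourceId -Properties $vault.Properties -Force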
With Azure Key Vault properly configured, you're ready to enable BYOK on your tenant.

Configure the Azure Key Vault firewall


This section describes using the trusted Microsoft service firewall bypass, to configure a firewall around your
Azure Key Vault.

NOTE
Enabling firewall rules on your key vault is optional. You can also choose to leave the firewall disabled on your key vault as
per the default setting.

Power BI is a trusted Microsoft service. You can instruct the key vault firewall to allow access to all trusted
Microsoft services, a setting that enables Power BI to access your key vault without specifying end point
connections.
To configure Azure Key Vault to allow access to trusted Microsoft services, follow these steps:
1. Log into the Azure portal.
2. Search for Key Vaults.
3. Select the key vault you want to allow access to Power BI (and all other trusted Microsoft services).
4. Select Networking, and then select Firewalls and virtual networks.
5. From the Allow access from option, select Selected networks.

6. In the firewall section, under Allow trusted Microsoft services to bypass this firewall, select Yes.

7. Select Save.
Enable BYOK on your tenant
You enable BYOK at the tenant level with PowerShell, by first introducing to your Power BI tenant the encryption
keys you created and stored in Azure Key Vault. You then assign these encryption keys per Premium capacity for
encrypting content in the capacity.
Important considerations
Before you enable BYOK, keep the following considerations in mind:
At this time, you cannot disable BYOK after you enable it. Depending on how you specify parameters for
Add-PowerBIEncryptionKey, you can control how you use BYOK for one or more of your capacities.
However, you can't undo the introduction of keys to your tenant. For more information, see Enable BYOK.
You can't move a workspace that uses BYOK from a capacity in Power BI Premium directly to a shared
capacity; the workspace's reports and datasets would become inaccessible, because they're encrypted with
the key. To avoid this situation, you must first move the workspace to a capacity that doesn't have BYOK
enabled.
Enable BYOK
To enable BYOK, you must be a Power BI admin, signed in using the Connect-PowerBIServiceAccount cmdlet. Then
use Add-PowerBIEncryptionKey to enable BYOK, as shown in the following example:

Add-PowerBIEncryptionKey -Name 'Contoso Sales' -KeyVaultKeyUri 'https://contoso-vault2.vault.azure.net/keys/ContosoKeyVault/b2ab4ba1c7b341eea5ecaaa2wb54c4d2'

To add multiple keys, run Add-PowerBIEncryptionKey with different values for -Name and -KeyVaultKeyUri.
The cmdlet accepts two switch parameters that affect encryption for current and future capacities. By default,
neither switch is set:
-Activate: Indicates that this key will be used for all existing capacities in the tenant that aren't already
encrypted.
-Default: Indicates that this key is now the default for the entire tenant. When you create a new capacity,
the capacity inherits this key.

IMPORTANT
If you specify -Default, all of the capacities created on your tenant from this point will be encrypted using the key you
specify (or an updated default key). You cannot undo the default operation, so you lose the ability to create a premium
capacity in your tenant that doesn't use BYOK.

After you enable BYOK on your tenant, set the encryption key for one or more Power BI capacities:
1. Use Get-PowerBICapacity to get the capacity ID that's required for the next step.

Get-PowerBICapacity -Scope Individual

The cmdlet returns output similar to the following output:

Id : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
DisplayName : Test Capacity
Admins : [email protected]
Sku : P1
State : Active
UserAccessRight : Admin
Region : North Central US

2. Use Set-PowerBICapacityEncryptionKey to set the encryption key:

Set-PowerBICapacityEncryptionKey -CapacityId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -KeyName 'Contoso Sales'

You have control over how you use BYOK across your tenant. For example, to encrypt a single capacity, call
Add-PowerBIEncryptionKey without -Activate or -Default. Then call Set-PowerBICapacityEncryptionKey for the
capacity where you want to enable BYOK.
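Putting those steps together, a minimal end-to-end sketch for encrypting a single capacity might look like this (the key name, key vault URI, and capacity ID are the placeholders used in the examples above):

# Sign in as a Power BI admin
Connect-PowerBIServiceAccount

# Introduce the key without -Activate or -Default, so no existing capacity is touched yet
Add-PowerBIEncryptionKey -Name 'Contoso Sales' -KeyVaultKeyUri 'https://contoso-vault2.vault.azure.net/keys/ContosoKeyVault/b2ab4ba1c7b341eea5ecaaa2wb54c4d2'

# Look up the ID of the target capacity
Get-PowerBICapacity -Scope Individual

# Apply the key to just that capacity
Set-PowerBICapacityEncryptionKey -CapacityId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -KeyName 'Contoso Sales'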

Manage BYOK
Power BI provides additional cmdlets to help manage BYOK in your tenant:
Use Get-PowerBICapacity to get the key that a capacity is currently using:

Get-PowerBICapacity -Scope Organization -ShowEncryptionKey

Use Get-PowerBIEncryptionKey to get the key that your tenant is currently using:

Get-PowerBIEncryptionKey

Use Get-PowerBIWorkspaceEncryptionStatus to see whether the datasets in a workspace are encrypted and
whether their encryption status is in sync with the workspace:
Get-PowerBIWorkspaceEncryptionStatus -Name 'Contoso Sales'

Note that encryption is enabled at the capacity level, but you get encryption status at the dataset level for
the specified workspace.
Use Switch-PowerBIEncryptionKey to switch (or rotate) the version of the key being used for encryption.
The cmdlet simply updates the -KeyVaultKeyUri for a key -Name:

Switch-PowerBIEncryptionKey -Name 'Contoso Sales' -KeyVaultKeyUri 'https://contoso-vault2.vault.azure.net/keys/ContosoKeyVault/b2ab4ba1c7b341eea5ecaaa2wb54c4d2'

Note that the current version of the key must be enabled.

Next steps
Power BI PowerShell cmdlet module
Ways to share your work in Power BI
Filter a report using query string parameters in the URL
Embed with report web part in SharePoint Online
Publish to Web from Power BI
Power BI Premium Generation 2
Distribute Power BI content to external guest users
with Azure AD B2B

Power BI enables sharing content with external guest users through Azure Active Directory Business-to-Business
(Azure AD B2B). By using Azure AD B2B, your organization enables and governs sharing with external users in a
central place.
By default, external guests get mostly consumption experiences. You can also choose to grant external users
elevated workspace permissions so that they can edit and manage content. Additionally, by enabling the Allow
external guest users to edit and manage content in the organization setting, you can allow guest users outside
your organization to browse and request access to your organization's content.
This article provides a basic introduction to Azure AD B2B in Power BI. For more information, see Distribute
Power BI content to external guest users using Azure Active Directory B2B.

Enable access
Make sure you enable the Invite external users to your organization feature in the Power BI admin portal before
inviting guest users. Even when this option is enabled, the user must be granted the Guest Inviter role in Azure
Active Directory to invite guest users.
The option to allow external guest users to edit and manage content in the organization lets you give guest
users the ability to see and create content in workspaces, including browsing your organization's Power BI. The
guest user can only be subscribed to content in workspaces that are backed by a Premium capacity.

NOTE
The Share content with external users setting controls whether Power BI allows inviting external users to your
organization. After an external user accepts the invite, they become an Azure AD B2B guest user in your organization.
They appear in people pickers throughout the Power BI experience. If the setting is disabled, existing guest users in your
organization continue to have access to any items they already had access to and continue to be listed in people picker
experiences. Additionally, if guests are added through the planned invite approach, they'll also appear in people pickers.
To prevent guest users from accessing Power BI, use an Azure AD conditional access policy.

Who can you invite?


Most email addresses are supported for guest user invitations, including personal email accounts like
gmail.com, outlook.com, and hotmail.com. Azure AD B2B calls these addresses social identities.
You can't invite users that are associated with a government cloud, like Power BI for US Government.

Invite guest users


Guest users only require invitations the first time you invite them to your organization. To invite users, use
planned or ad hoc invites.
To use ad hoc invites, use the following capabilities:
Report and Dashboard sharing
App access list
Ad hoc invites aren't supported in the workspace access list. Use the planned invites approach to add these
users to your organization. After the external user becomes a guest in your organization, add them to the
workspace access list.
Planned invites
Use a planned invite if you know which users to invite. The Azure portal or PowerShell enables you to send the
invites. You must be assigned the user admin role to invite people.
Follow these steps to send an invite in the Azure portal.
1. In the Azure portal, select the menu button, then select Azure Active Directory.

2. Under Manage, select Users > All users > New guest user.

3. Scroll down and enter an email address and personal message.


4. Select Invite.
To invite more than one guest user, use PowerShell or create a bulk invite in Azure AD. To use PowerShell for the
bulk invite, follow the steps in Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users. To use
the Azure portal for the bulk invite, follow the steps in Tutorial: Bulk invite Azure AD B2B collaboration users.
The guest user must select Get Started in the email invitation they receive. The guest user is then added to the
organization.
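For scripted individual invitations, the AzureAD PowerShell module provides New-AzureADMSInvitation. A minimal sketch, where the guest's email address, display name, and redirect URL are placeholder values:

# Invite one external guest user and send the invitation email
New-AzureADMSInvitation -InvitedUserEmailAddress 'guest@fabrikam.com' `
    -InvitedUserDisplayName 'Fabrikam Guest' `
    -InviteRedirectUrl 'https://app.powerbi.com' `
    -SendInvitationMessage $true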

Ad hoc invites
To invite an external user at any time, add them to your dashboard or report through the share feature or to
your app through the access page. Here is an example of what to do when inviting an external user to use an
app.
The guest user gets an email indicating that you shared the app with them.

The guest user must sign in with their organization email address. They'll receive a prompt to accept the
invitation after signing in. After signing in, the app opens for the guest user. To return to the app, they should
bookmark the link or save the email.

Licensing
The guest user must have the proper licensing in place to view the content that you shared. There are a few
ways to make sure the user has a proper license: use Power BI Premium, assign a Power BI Pro license, get a
Premium Per User (PPU) license, or use the guest's Power BI Pro license.
Guest users who can edit and manage content in the organization need a Power BI Pro or Premium Per User
(PPU) license to contribute content to workspaces or share content with others.
Use Power BI Premium
Assigning the workspace to Power BI Premium capacity lets the guest user use the app without requiring a
Power BI Pro license. Power BI Premium also lets apps take advantage of other capabilities like increased refresh
rates and large model sizes.

Assign a Power BI Pro license to guest user


Assigning a Power BI Pro license from your organization to a guest user lets that guest user view content shared
with them. For more information about assigning licenses, see Assign licenses to users on the Licenses page.
Before assigning Pro licenses to guest users, consult the Product Terms site to ensure you're in compliance with
the terms of your licensing agreement with Microsoft.

Guest user brings their own Power BI Pro license


The guest user may already have a Power BI Pro or Premium Per User (PPU) license that was assigned to them
through their own organization.
Guest users who can edit and manage content
When using the allow external guest users to edit and manage content in the organization feature, the specified
guest users get additional access to your organization's Power BI. Allowed guests can see any content that they
have permissions for, access Home, browse workspaces, install apps, see where they are on the access list, and
contribute content to workspaces. They can create, or be an Admin of, workspaces that use the new workspace
experience. Some limitations apply. The Considerations and Limitations section lists those restrictions.
To help allowed guests sign in to Power BI, provide them with the Tenant URL. To find the tenant URL, follow
these steps.
1. In the Power BI service, in the header menu, select help (?), then select About Power BI.
2. Look for the value next to Tenant URL. Share the tenant URL with your allowed guest users.

Considerations and Limitations


External Azure AD B2B guests can view apps, dashboards, reports, and export data. They can't access
workspaces or publish their own content. To remove these restrictions, you can use the Allow external
guest users to edit and manage content in the organization feature.
To invite guest users, a Power BI Pro or Premium Per User (PPU) license is needed. Pro Trial users can't
invite guest users in Power BI.
Information protection in Power BI doesn't support B2B and multi-tenant scenarios. This means that
although external users may be able to see sensitivity labels in Power BI:
They can't set labels
Mandatory and default label policies will not be enforced for them
While they can view a report that has a label with protection settings, if they export data from that
report to a file, they may not be able to open the file, because the file carries the Azure Active Directory
permissions of the organization that applied the label to the report.
Some experiences are not available to guest users who can edit and manage content in the organization.
To update or publish reports, guest users need to use the Power BI service, including Get Data, to upload
Power BI Desktop files. The following experiences aren't supported:
Direct publishing from Power BI desktop to the Power BI service
Guest users can't use Power BI Desktop to connect to service datasets in the Power BI service
Classic workspaces tied to Microsoft 365 Groups
Guest users can't create or be Admins of these workspaces
Guest users can be members
Sending ad hoc invites isn't supported for workspace access lists
Power BI Publisher for Excel isn't supported for guest users
Guest users can't install a Power BI Gateway and connect it to your organization
Guest users can't install apps that are published to the entire organization
Guest users can't use, create, update, or install organizational content packs
Guest users can't use Analyze in Excel
Guest users can't be @mentioned in commenting
Guest users can't create subscriptions
Guest users who use this capability should have a work or school account
Guest users using social identities will experience more limitations because of sign-in restrictions.
They can use consumption experiences in the Power BI service through a web browser
They can't use the Power BI Mobile apps
They won't be able to sign in where a work or school account is required
This feature isn't currently available with the Power BI SharePoint Online report web part.
There are Azure Active Directory settings that can limit what external guest users can do within your
overall organization. Those settings also apply to your Power BI environment. The following
documentation discusses the settings:
Manage External Collaboration Settings
Allow or block invitations to B2B users from specific organizations
Use Conditional Access to allow or block access
You can share content from a government cloud, like GCC, to an external commercial cloud user. However,
the guest user can't use their own license. The content has to be in capacity assigned to Premium to
enable access. Or, you can assign a Power BI Pro license to the guest account.
Sharing outside your organization isn't supported for national clouds, like the China cloud instance.
Instead, create user accounts in your organization that external users can use to access the content.
If you share directly to a guest user, Power BI will send them an email with the link. To avoid sending an
email, add the guest user to a security group and share to the security group.

Next steps
For more detailed info, including how row-level security works, check out the whitepaper: Distribute Power BI
content to external guest users using Azure AD B2B.
For information about Azure AD B2B, see What is Azure AD B2B collaboration?.
Use customer-managed keys in Power BI

Power BI encrypts data at rest and in process. By default, Power BI uses Microsoft-managed keys to encrypt your
data. Organizations can choose to use their own keys for encryption of user content at rest across Power BI,
from report images to imported datasets in Premium capacities.

Why use customer-managed keys


With Power BI customer-managed keys (CMK), organizations can meet compliance requirements for data
encryption at rest with their cloud service provider (in this case, Microsoft). CMK is only offered to new Power BI
Premium customers and it enables organizations to encrypt their Power BI user content using a key they
provide and manage. Revoking a customer-managed key makes user content within Power BI unreadable for
everyone within an hour, including Microsoft. Compared to the BYOK offering, CMK also covers user content
that the service generates, in addition to customer data that is imported into reports and datasets hosted on
Premium capacities. CMK also enforces stricter caching policies, and it applies a single key to encrypt all the
data.

How to use customer-managed keys


To opt in to Power BI customer-managed keys, contact your organization's Microsoft account manager to
validate that your organization meets the size requirements for enabling CMK.

Next steps
The following links provide information that can be useful for customer-managed keys:
Bring your own encryption keys for Power BI
Configure Multi-Geo support for Power BI Premium
How capacities function
Power BI security white paper
Get a Power BI service subscription for your
organization

Administrators can sign up for the Power BI service through the Purchase services page of the Microsoft 365
admin center. When an administrator signs up for Power BI, they can assign licenses to users who should have
access.
Users in your organization can sign up for Power BI through the Power BI web site. When a user in your
organization signs up for Power BI, they're assigned a Power BI license automatically. If you want to turn off self-
service capabilities, follow the steps in Enable or disable self-service sign-up and purchasing.

Sign up through Microsoft 365


If you're a global admin or billing admin, you can get a Power BI subscription for your organization. For more
information, see Who can purchase and assign licenses?.

NOTE
A Microsoft 365 E5 subscription already includes Power BI Pro licenses. To learn how to manage licenses, see View and
manage user licenses.

Follow these steps to purchase Power BI Pro licenses in the Microsoft 365 admin center:
1. Sign in to the Microsoft 365 admin center.
2. On the navigation menu, select Billing > Purchase services.
3. Search for Power BI or select the Power BI button from the View by category section near the top of
the page.
4. Select an offer, like Power BI Pro.
5. On the Purchase services page, select Buy. If you haven't previously used it, you can start a Power BI
Pro free trial subscription. It includes 25 licenses and expires in one month.

6. Choose Pay monthly or Pay yearly, according to how you want to pay.
7. Under Select license quantity, enter the number of licenses to buy, and then select Buy.
8. Complete the information on the checkout page, and then select Place order.
9. To verify your purchase, go to Billing > Your products and look for Power BI Pro.
To read more about how your organization can control and acquire the Power BI service, see Power BI in your
organization.

More ways to get Power BI for your organization


If you aren't already a Microsoft 365 subscriber, use the steps below to get a Power BI Pro trial for your
organization. Or, you can Sign up for a new Microsoft 365 trial, then add Power BI by following the steps in the
preceding section.
You'll need a work or school account to sign up for a Power BI subscription. We don't support email addresses
provided by consumer email services or telecommunications providers. If you don't have a work or school
account, you can create one during sign-up.
Follow these steps to sign up:
1. Go to Power BI Pro signup.
2. Enter your work or school email address, then select Next. It's okay if you enter an email address that
isn't considered a work or school email address. We'll get a new account set up for you when you create
your business identity.

3. We run a quick check to see if you need to create a new account. Select Set up account to continue with
the sign-up process.

NOTE
If your email address is already in use with another Microsoft service, you can Sign in or Create a new account
instead. If you choose to create a new account, continue to follow these steps to get set up.

4. Complete the form to tell us about yourself. Be sure to choose the correct country or region. The country
you select determines where your data is stored, as explained in Find the default region for your
organization. The country or region doesn't have to match your physical location, but should match the
location for the majority of your users.
5. Select Next. We need to send a verification code to verify your identity. Provide a phone number where
we can send a text or call you. Then select Send Verification Code.
6. Enter the verification code, then continue to Create your business identity.

Enter a short name for your business, and we'll check to make sure it's available. We use this short name
to create your organization name in the datacenter as a subdomain of onmicrosoft.com. You can add your
own business domain later. Don't worry if the short name you want is taken. Most likely someone with a
similar business name chose the same short name - just try a different variation. Select Next.
7. Create your user ID and password to sign in to your account. Select Sign up, and you're all set.
The account you created is now the global admin of a new Power BI Pro trial tenant. You can sign in to the
Microsoft 365 admin center to add more users, set up a custom domain, purchase more services, and manage
your Power BI subscription.

Next steps
View and manage user licenses
Enable or disable self-service sign-up and purchasing
Business subscriptions and billing documentation
Purchase and assign Power BI Pro user licenses

IMPORTANT
This article is for admins. Are you a user ready to upgrade to a Power BI Pro license? Go directly to Get started with Power
BI Pro to set up your account.

Power BI Pro is an individual user license that lets users read and interact with reports and dashboards that
others have published to the Power BI service. Users with this license type can share content and collaborate
with other Power BI Pro users. Only Power BI Pro users can publish or share content with other users or
consume content that's created by others, unless a Power BI Premium capacity hosts that content. For more
information about the available types of licenses and subscriptions, including Premium Per User (PPU) licenses,
see Power BI licensing in your organization.

Purchase Power BI Pro user licenses


This article explains how to buy Power BI Pro user licenses in the Microsoft 365 admin center. After you buy
licenses, you can assign them to users in either the Microsoft 365 admin center or the Azure portal.

NOTE
Self-service purchase, subscription, and license management capabilities for Power Platform products (Power BI, Power
Apps, and Power Automate) are available for commercial cloud customers. For more information, see Self-service purchase
FAQ. To enable or disable self-service purchasing capabilities, see Enable or disable self-service sign-up and purchasing.

Prerequisites
To purchase and assign licenses in the Microsoft 365 admin center, you must be a member of the global
administrator or Billing administrator role in Microsoft 365.
To assign licenses in the Azure portal, you must be an owner of the Azure subscription that Power BI uses for
Azure Active Directory lookups.
Purchase licenses in Microsoft 365

NOTE
If you usually purchase licenses through a volume licensing agreement, such as an Enterprise Agreement, and want to
receive an invoice instead of purchasing with a credit card or bank account, you need to submit the order differently.
Work with your Microsoft Reseller or go through the Volume Licensing Service Center to add or remove licenses. For
more information, see Manage subscription licenses.

Follow these steps to purchase Power BI Pro licenses in the Microsoft 365 admin center:
1. Sign in to the Microsoft 365 admin center.
2. On the navigation menu, select Billing > Purchase services.
3. Search or scroll to find the subscription you want to buy. You'll find Power BI under Other categories
that might interest you near the bottom of the page. Select the link to view the Power BI subscriptions
available to your organization.
4. Select Power BI Pro.
5. On the Purchase services page, select Buy.
6. Choose Pay monthly or Pay for a full year, according to how you want to pay.
7. Under How many users do you want?, enter the number of licenses to buy, then select Check out
now to complete the transaction.
8. To verify your purchase, go to Billing > Products & services and look for Power BI Pro.
9. To add more licenses later, locate Power BI Pro on the Products & services page, and then select
Add/Remove licenses.

Assign licenses in the Microsoft 365 admin center


For information about assigning licenses in the Microsoft 365 admin center, see Assign licenses to users.
For guest users, see Assign licenses to users on the Licenses page. Before assigning Pro licenses to guest users,
contact your Microsoft account representative to make sure you're in compliance with the terms of your
agreement with Microsoft.

Assign licenses in the Azure portal


Follow these steps to assign Power BI Pro licenses to individual user accounts:
1. Sign in to the Azure portal.
2. Search for and select Azure Active Directory.
3. Under Manage on the Azure Active Directory resource menu, select Licenses.
4. Select All products from the Licenses - Overview resource menu, then select Power BI Pro to
display the list of licensed users.
5. From the command bar, select + Assign. On the Assign license page, first choose a user, then select
Assignment options to turn on a Power BI Pro license for the selected user account.

Next steps
Power BI licensing in your organization
Find Power BI users who have signed in
Sign up for Power BI as an individual
More questions? Try asking the Power BI Community
Licensing the Power BI service for users in your
organization

What a user can do in the Power BI service depends on the type of per-user license that they have. The level of
access provided by their license depends on whether the workspace being accessed is in a Premium workspace
or not. All users of the Power BI service must have a license.
There are two ways for users to get a license. Using self-service sign-up capabilities and their work or school
account, users can get their own free, Pro, or Premium Per User license. Or, admins can get a Power BI license
subscription and assign licenses to users.
This article focuses on purchasing services and per-user licensing from an administrator perspective. For more
information about how users can get their own license, see Signing up for Power BI as an individual.

Who can purchase and assign licenses?


You must be assigned an admin role to purchase or assign licenses for your organization. Admin roles are
assigned by using the Azure Active Directory admin center or the Microsoft 365 admin center. The following
table shows which role is required to do tasks related to purchase and licensing. For more information about
administrator roles in Azure Active Directory, see View and assign administrator roles in Azure Active Directory.
To learn more about admin roles in Microsoft 365, including best practices, see About admin roles.

Who can purchase services and licenses?
Billing administrator
Global administrator

Who can manage user licenses?
License administrator
User administrator
Global administrator

These roles manage the organization. For information about Power BI service administrator roles, see
Understanding Power BI service administrator roles.

Get Power BI for your organization


For information about pricing, see Pricing & Product Comparison.
A global administrator or a billing administrator can sign up for the Power BI service and buy licenses for the
users in their organization. If you're not ready to purchase, select the Power BI Pro trial. You'll get 25 licenses to
use for one month. For step-by-step instructions on how to sign up, see Get a Power BI subscription for your
organization.

About self-service sign-up


Individual users can get their own Power BI license by signing up with their work or school account. With a free
license, users can explore Power BI for personal data analysis and visualization using My Workspace, but they
can't share with other users. A Power BI Pro or Power BI Premium Per User license is required to share content. A
Power BI Premium subscription unlocks access to a variety of features, capabilities, and types of content that are
only available through Premium. A Premium Per User license limits access to these features to other users who
also have a Premium Per User license. A capacity-based Premium subscription allows users with free licenses to
access any content, but only users with a Pro or Premium Per User license can create content. Users may
upgrade their license type to Pro, or sign up for Pro directly, if the organization is using the commercial cloud.
Direct purchase of or upgrade to Pro isn't available to educational organizations or organizations deployed to
Azure Government or Azure China 21Vianet clouds.
If you don't want users in your organization to use self-service sign-up, see Enable or disable self-service sign-
up to learn how to turn it off.
Turning off self-service sign-up keeps users from exploring Power BI for data visualization and analysis. If you
block individual sign-up, you may want to get Power BI (free) licenses for your organization and assign them to
all users. Follow these steps to assign a Power BI (free) license to all existing users:
1. Sign in to the Microsoft 365 admin center, using global admin or billing admin credentials.
2. From the left sidebar menu, select Billing > Purchase services.
3. Search or scroll to locate the Power BI (free) offer. Select the Details button below the offer.
4. Enter the number of licenses needed to cover all your users, and then select Buy.

5. Complete the information on the Checkout page, and then select Place order.
6. Select Licenses from the left sidebar, and then select Power BI (free) from the subscriptions.
7. Select Assign licenses and assign the licenses to your users.
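If you prefer to script the assignment, the Microsoft Graph PowerShell module can perform the same steps. A minimal sketch, assuming the Power BI (free) SKU part number POWER_BI_STANDARD and a placeholder user:

# Connect with permissions to read SKUs and update user licenses
Connect-MgGraph -Scopes 'User.ReadWrite.All', 'Organization.Read.All'

# Find the Power BI (free) SKU in the tenant
$sku = Get-MgSubscribedSku -All | Where-Object SkuPartNumber -eq 'POWER_BI_STANDARD'

# Assign the license to a user
Set-MgUserLicense -UserId 'user@contoso.com' -AddLicenses @(@{ SkuId = $sku.SkuId }) -RemoveLicenses @()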
If you want to see which users in your organization might already have a license, see View and manage user
licenses to learn how.

License types and capabilities


There are three kinds of Power BI per-user licenses: Free, Pro, and Premium Per User. Which type of license a
user needs is determined by where content is stored, how they'll interact with that content, and if that content
uses Premium features. Where content can be stored is determined by your organization's subscription license
type.
One type of organizational subscription, Power BI Premium, is a capacity-based license. Using Premium allows
users with a free license to act on content in workspaces that are assigned to Premium capacity. Outside of
Premium capacity, a user with a free license can only use the Power BI service to connect to data and create
reports and dashboards in My Workspace. They can't share content with others or publish content to other
workspaces. To learn more about workspace types, see Types of workspaces.
A Power BI organization with only free and Pro per-user licenses uses a shared and limited capacity to process
content. If content is stored in that shared capacity, users who are assigned a Power BI Pro license can
collaborate only with other Power BI Pro users. They can consume content shared by other users, publish
content to app workspaces, share dashboards, and subscribe to dashboards and reports. When workspaces are
in Premium capacity, Pro users may distribute content to users who don't have a Power BI Pro license.
Content created by a user who is assigned a Premium Per User license can only be shared with other users that
have a Premium Per User license, unless that content is specifically put on a workspace hosted on a Premium
capacity. The table below summarizes the basic capabilities of each license type. For a detailed breakdown of
feature availability per license type, see Features by license type.

Power BI (free)
Capabilities when workspace is in shared capacity: Access content in My Workspace.
Additional capabilities when workspace is in Premium capacity: Consume content shared with them.

Power BI Pro
Capabilities when workspace is in shared capacity: Publish content to other workspaces, share dashboards,
subscribe to dashboards and reports, and share with users who have a Pro license.
Additional capabilities when workspace is in Premium capacity: Distribute content to users who have free
licenses.

Power BI Premium Per User
Capabilities when workspace is in shared capacity: Publish content to other workspaces, share dashboards,
subscribe to dashboards and reports, and share with users who have a Premium Per User license.
Additional capabilities when workspace is in Premium capacity: Distribute content to users who have free and
Pro licenses.

Subscription license types


All user-based, commercial licenses from Microsoft are based on Azure Active Directory identities. To use the
Power BI service, you must sign in with an identity that Azure Active Directory supports for commercial licenses.
You can add Power BI to any Microsoft license that uses Azure Active Directory for identity services. Some
licenses, such as Office 365 E5, include a Power BI Pro license, so no separate sign-up for Power BI is needed.
There are two kinds of Power BI subscription licenses for organizations: standard and premium.
With a standard, self-service Power BI subscription, admins assign per user licenses. There's a per user monthly
fee for Power BI Pro licenses. This license type enables collaboration, publishing, sharing, and ad-hoc analysis.
Content is saved to shared storage capacity that is fully managed by Microsoft.
A Power BI Premium subscription license allocates a capacity to an organization. Suitable for enterprise BI, big
data analytics, and cloud and on-premises reporting, Premium provides advanced administration and
deployment controls. Reserved compute and storage resources are managed by capacity admins in your
organization. There's a monthly cost for this reserved environment. In addition to other Premium advantages,
content stored in Premium capacity can be accessed by and distributed to users who don't have Power BI Pro
licenses. At least one user has to have a Power BI Pro license assigned to use Premium, and content creators and
developers still need a Power BI Pro license.
The two types of subscriptions aren't mutually exclusive. You can have both Power BI Premium and standard. In
this configuration, content stored in Premium capacity can be shared with all users and shared capacity is also
available. For information about capacity limits, see Manage data storage in Power BI workspaces.
NOTE
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the
following areas:
Performance
Per-user licensing. See the Premium Per User article for more information.
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.

To compare product features and pricing, see Power BI pricing.

Guest user access


You may want to distribute content to users who are outside of your organization. It's possible to share content
with external users by inviting them to view content as a guest. Azure Active Directory Business-to-business
(Azure AD B2B) enables sharing with external guest users. The following prerequisites must be met to share with
external users:
The ability to share content with external users must be enabled
The guest user must have the proper licensing in place to view the shared content
For more information about guest user access, see Distribute Power BI content to external guest users with
Azure AD B2B.

Purchase Power BI Pro licenses


As an administrator, you purchase Power BI Pro licenses through Microsoft 365 or through a Microsoft partner.
After you buy the licenses, you assign them to individual users. For more information, see Purchase and assign
Power BI Pro licenses.
Power BI Pro license expiration
There's a grace period after a Power BI Pro license expires. For licenses that are part of a volume license
purchase, the grace period is 90 days. If you bought the license directly, the grace period is 30 days.
Power BI Pro has the same license lifecycle as Microsoft 365. For more information, see What happens to my
data and access when my Microsoft 365 for business subscription ends.

Next steps
Purchase and assign Power BI Pro licenses
Business subscriptions and billing documentation
Find Power BI users that have signed in
More questions? Try asking the Power BI Community
View and manage Power BI user licenses

This article explains how admins can use the Microsoft 365 admin center or the Azure portal to view and
manage user licenses for the Power BI service.

NOTE
It's possible for a user to have both a Power BI (free) and a Power BI Pro license assigned. This can happen when a user
signs up for a free license and then is later assigned a Power BI Pro license. The highest licensing level takes effect in this
case.

View your subscriptions


To see which Power BI subscriptions your organization has, follow these steps.
1. Sign in to the Microsoft 365 admin center.
2. In the navigation menu, select Billing > Your products.
Your active Power BI subscriptions are listed along with any other subscriptions you have. You may see an
unexpected subscription for Power BI (free), as shown here.

This type of subscription is created for you when users take advantage of self-service sign-up. To read more, see
Power BI in your organization.

Manage user licenses in Microsoft 365


To use Microsoft 365 admin center to manage user licenses, see the Business subscriptions and billing
documentation.

Manage user licenses in Azure portal


Follow these steps to view and assign Power BI licenses using the Azure portal.
1. Sign in to the Azure portal.
2. Search for and select Azure Active Directory.
3. Under Manage on the Azure Active Directory resource menu, select Licenses.
4. Select All products from the resource menu, then select a Power BI license type to display the list of
licensed users.
5. To assign a license, from the command bar, select + Assign. On the Assign license page, choose a user,
then select Assignment options to turn on a Power BI license for the selected user account.
6. To remove a license, select the checkbox next to the user's name, then select Remove license.
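These steps can also be scripted. A minimal sketch with the Microsoft Graph PowerShell module that lists users holding a Power BI Pro license, assuming the SKU part number POWER_BI_PRO:

# Connect with read access to users and organization SKUs
Connect-MgGraph -Scopes 'User.Read.All', 'Organization.Read.All'

# Find the Power BI Pro SKU, then list the users who hold it
$proSku = Get-MgSubscribedSku -All | Where-Object SkuPartNumber -eq 'POWER_BI_PRO'
Get-MgUser -All -Property DisplayName, UserPrincipalName, AssignedLicenses |
    Where-Object { $_.AssignedLicenses.SkuId -contains $proSku.SkuId } |
    Select-Object DisplayName, UserPrincipalName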

Next steps
Purchase Power BI Pro
Licensing for your organization
Power BI for US government customers

This article is for US government customers who are deploying Power BI as part of a Microsoft 365 Government
plan. Government plans are designed for the unique needs of organizations that must meet US compliance and
security standards.
The Power BI service that's designed for US government customers differs from the commercial version of the
Power BI service. These feature differences and capabilities are described in the following sections.

NOTE
Before you can get a Power BI US government subscription and assign licenses to users, you have to enroll in a Microsoft
365 Government plan. If your organization already has a Microsoft 365 Government plan, skip ahead to Buy a Power BI
Pro subscription for government customers.

Government cloud instances


If you're a new customer, you have to validate your organization's eligibility before you can sign up for a
Microsoft 365 Government plan. Get started by completing the Microsoft 365 for Government eligibility
validation form.
Microsoft 365 provides different environments for government agencies to meet varying compliance
requirements. To ensure that you're selecting the right plan for your organization, consult the Microsoft 365 US
Government service description for each environment:
Microsoft 365 Government Community Cloud (GCC) is designed for federal, state, and local government.
Microsoft 365 Government Community Cloud High (GCC High) is designed for federal agencies, defense
industry, aerospace industry, and other organizations that hold controlled unclassified information. This
environment is suited for national security organizations and companies that have International Traffic in
Arms Regulations (ITAR) data or Defense Federal Acquisition Regulations Supplement (DFARS)
requirements.
The Microsoft 365 DoD environment is designed exclusively for the US Department of Defense.

NOTE
If you've already deployed Power BI to a commercial environment and want to migrate to the US government cloud, you'll
need to add a new Power BI Pro or Premium Per User (PPU) subscription to your Microsoft 365 Government plan. Next,
replicate the commercial data to the Power BI service for US government, remove commercial license assignments from
user accounts, and then assign a Power BI Pro government license to the user accounts.

Buy a Power BI Pro subscription for government customers


After you've deployed Microsoft 365, you can add a Power BI Pro subscription. To buy the Power BI Pro
government service, follow the guidance in Enroll your US government organization. Buy enough licenses for all
the users who need to use Power BI, and then assign the licenses to individual user accounts.
IMPORTANT
Power BI US Government isn't available as a Free license. If you've purchased Power BI Premium, you don't have to assign
licenses to users to allow them to consume content published to a Premium capacity. For all other access, including
publishing content to the Premium capacity, each user must be assigned a Pro or Premium Per User (PPU) license. If a
user account has been assigned a Free license, the user is authorized to access only the commercial cloud and will
encounter authentication and access issues.
To review the differences between license types, see Power BI service features by license type.

Sign in to Power BI for US government


The URLs for connecting to Power BI differ for government users and commercial users. To sign in to the correct
Power BI version, use one of the following URLs:
Commercial version: https://app.powerbi.com
GCC: https://app.powerbigov.us
GCC High: https://app.high.powerbigov.us
DoD: https://app.mil.powerbigov.us
Your account might be set up in more than one cloud. If your account is set up that way, when you sign in to
Power BI Desktop, you can choose which cloud to connect to.

TIP
In this video, Using Power BI Desktop in government clouds, Technical Specialist Steve Winward shows how you can apply
a registry setting to go directly to the right cloud endpoint for your environment. The registry key settings to bypass the
global discovery endpoint are shared on GitHub.

Allow connections to Power BI


To use the Power BI service, you must allow connections to required endpoints on the internet. These
destinations have to be reachable to enable communication between your own network, Power BI, and other
dependent services.
The following table lists the required endpoints to add to your allowlist to enable connection to the Power BI
service for general site usage. These endpoints are unique to the US government cloud. The Power BI service
requires only Transmission Control Protocol (TCP) port 443 to be opened for the listed endpoints.
The endpoints for getting data, dashboard and report integration, Power BI visuals, and other optional services
aren’t unique to the US government cloud.
To also add those URLs to your allowlist, see Add Power BI URLs to your allowlist.
Authentication, identity, and administration for Power BI depend on connectivity to Microsoft 365 services. You
also have to connect to Microsoft 365 to view audit logs. To identify the endpoints for these services, see
"Microsoft 365 integration" in the following table:
Power BI URLs for general site usage
Back-end APIs
GCC: api.powerbigov.us
GCC High: api.high.powerbigov.us
DoD: api.mil.powerbigov.us

Back-end APIs
GCC: *.analysis.usgovcloudapi.net
GCC High: *.high.analysis.usgovcloudapi.net
DoD: *.mil.analysis.usgovcloudapi.net

Back-end APIs
All: *.pbidedicated.usgovcloudapi.net

Content Delivery Network (CDN)
GCC: gov.content.powerapps.us
GCC High: high.content.powerapps.us
DoD: mil.content.powerapps.us

Microsoft 365 integration
GCC: Worldwide endpoints
GCC High: US Government GCC High endpoints
DoD: US Government DOD endpoints

Portal
GCC: *.powerbigov.us
GCC High: *.high.powerbigov.us
DoD: *.mil.powerbigov.us

Service telemetry
All: dc.services.visualstudio.us

Informational messages (optional)
All: arc.msn.com
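To confirm that these destinations are reachable from your network over TCP port 443, you can probe them from PowerShell. A minimal sketch using two of the GCC endpoints listed above:

# Check TCP 443 reachability for a couple of GCC endpoints
$endpoints = 'api.powerbigov.us', 'app.powerbigov.us'
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}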

Connect government and global Azure cloud services


Azure is distributed across multiple clouds. By default, you can enable firewall rules to open a connection to a
cloud-specific instance, but cross-cloud networking is different. To communicate between services in the public
cloud and services in the Government Community Cloud, you have to configure specific firewall rules. For
example, if you want to access public cloud instances of a SQL database from your government cloud
deployment of Power BI, you need a firewall rule in the SQL database. Configure specific firewall rules for SQL
databases to allow connections to the Azure Government Cloud for the following datacenters:
USGov Iowa
USGov Virginia
USGov Texas
USGov Arizona
US DoD East
US DoD Central
To get the US government cloud IP ranges, download the Azure IP Ranges and Service Tags – US Government
Cloud file. Ranges are listed for both Power BI and Power Query.
For more information about Microsoft Azure Government cloud services, see Azure Government
documentation.
To set up firewalls for SQL databases, see Create and manage IP firewall rules.
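For illustration, creating a single rule with the Az.Sql module might look like the following sketch. The resource group, server name, rule name, and IP range are placeholders; take the real ranges for Power BI and Power Query from the service tags download mentioned above:

# Allow one US government cloud IP range through an Azure SQL server firewall
New-AzSqlServerFirewallRule -ResourceGroupName 'rg-data' `
    -ServerName 'contoso-sql' `
    -FirewallRuleName 'PowerBI-USGov' `
    -StartIpAddress '52.227.0.0' `
    -EndIpAddress '52.227.255.255'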

Power BI feature availability


To accommodate the requirements of government cloud customers, government plans differ from commercial
plans in some respects. Our goal is to make all features available in government clouds within 30 days of
general availability. In a few cases, underlying dependencies prevent us from making a feature available.
The following table lists features that aren't yet available in a particular government environment or that are
available with limited functionality. The table uses the following keys:

Key     Description

✔       The feature is available in the environment, and any exceptions are defined in footnotes.

✖       The feature isn't available in the environment, and we don't have an estimated time frame for delivery.
If a release is planned for an environment, we include the quarter of estimated availability.

Feature                                             GCC            GCC High       DoD

Azure B2B collaboration between government
and commercial cloud (1)
Template apps (2)
Embed in SharePoint Online by using the
Power BI web part
Data Protection (MIP labels)
Dataflows - DirectQuery                             Not planned
Dataflows - SQL Compute engine optimization         Not planned
Power BI tab in Teams
Large models                                        Not planned
Call Quality Data Connector                         CY2022-H2      CY2022-H2      CY2022-H2
Bring your own storage (Azure Data Lake Gen 2)
Autoscale

(1) Although B2B collaboration is available for GCC, external users must be issued a license in that environment.
Commercial cloud licenses aren't valid in GCC. For more information about known limitations with B2B
collaboration for US government, see Compare Azure Government and global Azure.
(2) Because marketplace apps aren't available to US government cloud instances, template apps are limited to
private and organizational apps.

Next steps
Article: Sign up for Power BI for US government
Article: Microsoft Power Apps US Government
Article: Power Automate US Government
Video: Power BI US Government demo
Enroll your US government organization in the
Power BI service
5/23/2022 • 3 minutes to read • Edit Online

This article describes the US government enrollment process for the Power BI service.
The Power BI service has a special version for the US government, which is part of the Microsoft 365
Government plans. The enrollment process for the US government Power BI service described here is different
from the one for the commercial version of the Power BI service.
For more information about the Power BI service for the US government, see Power BI for United States
government customers - Overview.

NOTE
This article is intended for administrators who have authority to sign up their US government organization for Power BI. If
you're not an admin, contact your administrator about getting a subscription to Power BI for US government.

IMPORTANT
Power BI is enhancing the way that customers connect to these US government clouds:
Microsoft 365 Government Community Cloud (GCC)
Microsoft 365 Government Community Cloud High (GCC High)
Microsoft 365 Department of Defense (DoD)
From 20 March 2022, US government customers must complete an explicit onboarding request for these US
government clouds to maintain continuity of data access.

Select the right sign-up process for your US government organization


Microsoft 365 provides different environments for government agencies to meet varying compliance
requirements. The Microsoft 365 Government Community Cloud (GCC) is designed for federal, state, and local
government. If your organization is in GCC, you can use the steps in this article to sign up and purchase services.

IMPORTANT
Don't follow these instructions if you belong to one of the following:
Microsoft 365 Government Community Cloud High (GCC High)
Microsoft 365 Department of Defense (DoD)
To purchase the Power BI service for these US government clouds, use the process described in How do I buy Microsoft
365 Government, and work with your reseller to ensure new services are properly associated with your tenant.

After you sign up for the Power BI service for the US government, work with your account team to start the
allow list process described in this article. That step is needed to fully enable your organization in the
government community cloud.
Sign up for a new Microsoft 365 Government plan
If your organization is new to the government cloud community, follow the steps below to get a Microsoft 365
Government plan.
After this process is complete, follow the steps for existing Microsoft 365 Government customers to add a Power
BI subscription.

NOTE
These steps should be performed by the global administrator.

1. Go to Microsoft 365 Government plans.


2. Select Get started with a free trial.
3. Complete the form to tell us about your organization. Use the drop-down to select your organization
type.

4. Submit the form to start the onboarding process. Your Microsoft representative or partner can help with
any questions.

Add Power BI to a Microsoft 365 Government plan


If your organization already has a Microsoft 365 Government plan, follow these steps to add a Power BI
subscription:
1. Sign in to the Microsoft 365 admin center, using your global admin or billing admin credentials.
2. Select Billing > Purchase services.
3. Search or scroll to locate the Power BI Pro Government offer and choose Try or Buy Now.
4. Complete your order.
5. Assign licenses to user accounts (a scripted sketch follows these steps).
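
License assignment can also be scripted. The following is a minimal sketch using the MSOnline module; the
user name is a placeholder, and the exact AccountSkuId for the Power BI Pro Government offer varies by
tenant, so list your SKUs first.

# List the SKUs your tenant owns, then assign one to a user
Connect-MsolService
Get-MsolAccountSku    # note the AccountSkuId of your Power BI Pro Government offer
Set-MsolUserLicense -UserPrincipalName 'user@contoso.com' `
    -AddLicenses 'contoso:POWERBI_PRO_GOV'   # placeholder SKU ID - use the value from Get-MsolAccountSku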

More signup information


Before you can use the Power BI service for the US government, you have to work with your Microsoft account
team to have your organization added to our allow list. The allow list process is used by the Power BI
engineering team to move customers from the commercial cloud environment into the secure, government
community cloud. This step ensures that features available in the US government cloud work as expected.
To start the allow list process, contact your Microsoft account team for assistance. Only administrators can
request addition to the allow list. The process takes about three weeks. During this time, the Power BI
engineering team makes appropriate changes to ensure your tenant operates properly in the US government
cloud.

Next steps
Overview of Power BI for US government
How do I buy Microsoft 365 Government?
Enable or disable self-service sign-up and
purchasing
5/23/2022 • 2 minutes to read • Edit Online

As an administrator, you determine whether to enable or disable self-service sign-up. You also determine
whether users in your organization can make self-service purchases to get their own license.
Turning off self-service sign-up keeps users from exploring Power BI for data visualization and analysis. If you
block individual sign-up, you may want to get Power BI (free) licenses for your organization and assign them to
all users.

NOTE
If you acquired Power BI through a Microsoft Cloud Solution Provider (CSP), the setting may be disabled to block users
from signing up individually. Your CSP may also be acting as the global admin for your organization, requiring that you
contact them to help you change this setting.

When to use self-service sign-up and purchase


Self-service is a good idea:
In larger and decentralized organizations (work or school), where individuals are often given the flexibility to
purchase SaaS (Software as a service) licenses for their own use.
For one-person or small organizations that need to purchase only one Power BI Pro license, or only a few
licenses.
For individuals interested in trying Power BI and getting proficient with it, before purchasing a subscription for the
entire organization.
For current users with a Power BI free or Pro license who want to create and share content and upgrade
themselves to a Power BI Premium Per User 60-day trial.
You may want to disable self-service when:
Your organization has procurement processes in place to meet compliance, regulatory, security, and
governance needs. You need to ensure that all licenses are approved and managed according to defined
processes.
Your organization has requirements for new Power BI Pro or Premium Per User licensees, such as mandatory
training or user acknowledgment of data protection policies.
Your organization prohibits use of the Power BI service due to data privacy or other concerns and needs to
control the assignment of Power BI free licenses very closely.
Your organization needs to ensure that all Power BI Pro or Premium Per User licenses fall under the enterprise
agreement in order to take advantage of negotiated or discounted licensing rates.
For current users with a Power BI free license, who are being prompted to try or directly purchase a Power BI
Pro license. Your organization may not want these users to upgrade because of security, privacy, or expense.

Use PowerShell, Azure AD, and Microsoft 365 to enable and disable
self-service
You'll use PowerShell commands to change the settings that control self-service sign-up and purchasing.
If you want to disable all self-service sign-ups, change a setting in Azure Active Directory named
AllowAdHocSubscriptions by using the MSOnline (MSOL) PowerShell module. Follow the steps in this article to Set
MsolCompanySettings. This option turns off self-service sign-up for all Microsoft cloud-based apps and
services.
If you want to prevent users from purchasing their own Pro license, change the
AllowSelfServicePurchase setting by using MSCommerce PowerShell commands. This setting lets you
turn off self-service purchase for specific products. Follow the steps in this article to Use
AllowSelfServicePurchase for the MSCommerce PowerShell module. Both changes are sketched in the example
that follows.
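
The sketch below combines both settings. It assumes the MSOnline and MSCommerce modules are installed, and
cmdlet and parameter names follow the module releases documented at the time of writing; the Power BI Pro
product ID shown is the one commonly documented for AllowSelfServicePurchase, but confirm it against the
output of Get-MSCommerceProductPolicies before relying on it.

# Turn off self-service sign-up for all Microsoft cloud-based apps and services
Connect-MsolService
Set-MsolCompanySettings -AllowAdHocSubscriptions $false

# Turn off self-service purchase for a specific product (Power BI Pro here)
Connect-MSCommerce
Get-MSCommerceProductPolicies -PolicyId 'AllowSelfServicePurchase'    # lists products and their IDs
Update-MSCommerceProductPolicy -PolicyId 'AllowSelfServicePurchase' `
    -ProductId 'CFQ7TTC0L3PB' -Enabled $false    # commonly documented ID for Power BI Pro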
Signing up for Power BI with a new Microsoft 365
Trial
5/23/2022 • 2 minutes to read • Edit Online

This article describes an alternative way to sign up for the Power BI service, if you don't already have a work or
school email account.
If you're having problems signing up for Power BI with your email address, first make sure it's an email address
that can be used with Power BI. If that's not successful, sign up for a Microsoft 365 trial and create a work
account. Then, use that new work account to sign up for the Power BI service. You'll be able to use Power BI even
after the Microsoft 365 trial expires.

Sign up for a Microsoft 365 trial of Office


Sign up for a Microsoft 365 trial on the Microsoft 365 web site. If you don't already have an account, Microsoft
will walk you through the steps to create one. Since consumer email accounts (such as Hotmail and Gmail)
won't work with Microsoft 365, you'll create a new account that will. That email account will look something like
[email protected].

If you select Office 365 E5, your trial will include Power BI Pro. The Power BI Pro trial will expire at the same
time as your Office 365 E5 trial, which is currently 30 days. If, instead, you select Office 365 E3, you'll be able to
sign up for Power BI as a free user and upgrade to Premium Per User for a 60-day trial. For more information
about Premium Per User (PPU), see Power BI Premium Per User.
1. Enter your email address. Microsoft will let you know if that email address will work with Microsoft 365
or if you'll need to create a new email address.
If you need a new email address, Microsoft will walk you through the steps. The first step is creating a new
account. Select Set up account.

2. Enter details about the new account.


3. Create your new email address and password. Create a new sign-in name that looks like
[email protected]. This is the sign-in you'll use with your new work or school account
and with Power BI.
4. That's it! You now have an email address that you can use to sign up for Power BI. Head on over to Sign
up for the Power BI service as an individual
You may have to wait while your new tenant gets created.

Important considerations
If you have any issues signing in with the new account, try using a private browser session.
By using this signup method, you are creating a new organizational tenant and you'll become the User
administrator of the tenant. For more information, see What is Power BI administration?. You can add new users
to your tenant, then share with them, as described in the Microsoft 365 admin documentation.

Next steps
What is Power BI administration?
Power BI licensing in your organization
Signing up for Power BI as an individual
More questions? Try asking the Power BI Community
Add Power BI to a Microsoft 365 partner
subscription
5/23/2022 • 2 minutes to read • Edit Online

Microsoft 365 enables companies to resell Microsoft 365 bundled and integrated with their own solutions,
providing customers with a single point of contact for purchasing, billing, and support.
If you're interested in adding Power BI to your Microsoft 365 subscription, we recommend you contact your
partner to do so. If your partner doesn't currently offer Power BI, you can pursue the options described below.

Work with your partner to purchase Power BI


If you want to buy a subscription to Power BI Pro or Power BI Premium, work with your partner to consider what
options you have:
Your partner agrees to add Power BI to their portfolio so that you can purchase from them.
Your partner can transition you to a model where you can buy Power BI directly from Microsoft or
another partner who offers Power BI.

Purchase from Microsoft or another channel


Depending on the relationship with your partner, you might be able to purchase Power BI directly from
Microsoft or another partner. You can verify whether you can add Power BI subscriptions in the Microsoft 365
admin center (requires membership in the global admin or billing admin role).
1. Go to the Microsoft 365 admin center.
2. In the left menu, open Billing , then select Your products :

3. Look for Subscriptions as shown in the image below. If you see Subscriptions , you can acquire the
service from Microsoft directly, or you can contact another partner that offers Power BI.
If you don't see Subscriptions , you can't buy from Microsoft directly or from another partner.
If your partner doesn't offer Power BI and you can't buy directly from Microsoft or another partner, consider
signing up for a free trial.

Sign up for a free trial


You can sign up for a free trial of Power BI Premium Per User. If you don't purchase Power BI at the end of the
trial period, your license returns to the version you had prior to starting the trial. You still have a Pro or free
license that offers many of the features of Power BI. For more information, see Sign up for Power BI as an
individual.
Enable ad-hoc subscriptions
By default, individual sign-ups (also known as ad-hoc subscriptions) are disabled. In this case, you see the
following message when you try to sign up: Your IT department has turned off signup for Microsoft Power BI.

To enable ad-hoc subscriptions, you can contact your partner and request that they turn it on. If you're an
administrator of your tenant, and know how to use Azure Active Directory PowerShell commands, you can
enable ad-hoc subscriptions yourself. For more information, follow the steps in Enable or disable self-service
purchasing.

Next steps
Power BI licensing in your organization
Purchase and assign Power BI Pro licenses
More questions? Try asking the Power BI Community
Use an alternate email address
5/23/2022 • 2 minutes to read • Edit Online

When you sign up for Power BI, you provide an email address. By default, Power BI uses this address to send you
updates about activity in the service. For example, when someone sends you a sharing invitation, it goes to this
address.
In some cases, you might want these emails delivered to an alternate email address rather than the one you
signed up with. This article explains how to specify an alternate address in Microsoft 365 and in PowerShell. The
article also explains how Azure Active Directory (Azure AD) resolves an email address.

NOTE
Specifying an alternate address doesn't affect which email address Power BI uses for e-mail subscriptions, service updates,
newsletters, and other promotional communications. Those communications are always sent to the email address you
used when you signed up for Power BI.

Use Microsoft 365


To specify an alternate address in Microsoft 365, follow these steps.
1. Open the personal info page of your account. If the app prompts you, sign in with the email address and
password you use for Power BI.
2. On the left menu, select Personal info .
3. In the Contact details section, select Edit .
If you cannot edit your details, this means your admin manages your email address. Contact your admin
to update your email address.

4. In the Alternate email field, enter the email address you'd like Microsoft 365 to use for Power BI
updates.

Use PowerShell
To specify an alternate address in PowerShell, use the Set-AzureADUser command.

# Requires the AzureAD PowerShell module; connect with Connect-AzureAD first.
# Placeholder addresses - substitute the user's sign-in (UPN) and the desired alternate address.
Set-AzureADUser -ObjectId 'user@contoso.com' -OtherMails 'alternate@contoso.com'
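
To confirm the change took effect, you can read the attribute back. A minimal sketch, again with a
placeholder account name:

Get-AzureADUser -ObjectId 'user@contoso.com' | Select-Object DisplayName, OtherMails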

Email address resolution in Azure AD


To capture an Azure AD embed token for Power BI, you can use one of three different types of email addresses:
The main email address associated with a user’s Azure AD account
The UserPrincipalName (UPN) email address
The other email address array attribute
Power BI selects which email to use based on the following sequence (sketched in the example after this list):
1. If the mail attribute in the Azure AD user object is present, then Power BI uses that mail attribute for the
email address.
2. If the UserPrincipalName (UPN) is not a *.onmicrosoft.com domain email address (judged by the information
after the "@" symbol), then Power BI uses the UPN for the email address.
3. If the other email address array attribute in the Azure AD user object is present, then Power BI uses the
first email in that list (since there can be a list of emails in this attribute).
4. If none of the above conditions are met, then Power BI uses the UPN address.
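
For illustration only, the same fallback order can be expressed over a user object returned by
Get-AzureADUser. The account name is a placeholder, and Power BI's actual implementation is internal to the
service; this sketch only mirrors the documented sequence.

$user = Get-AzureADUser -ObjectId 'user@contoso.com'
if ($user.Mail) {
    $email = $user.Mail                               # 1. mail attribute wins
}
elseif ($user.UserPrincipalName -notlike '*.onmicrosoft.com') {
    $email = $user.UserPrincipalName                  # 2. non-onmicrosoft.com UPN
}
elseif ($user.OtherMails.Count -gt 0) {
    $email = $user.OtherMails[0]                      # 3. first alternate email in the array
}
else {
    $email = $user.UserPrincipalName                  # 4. fall back to the UPN
}
$email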
More questions? Try the Power BI Community
Close your Power BI account
5/23/2022 • 2 minutes to read • Edit Online

If you don't want to use Power BI any longer, you can close your Power BI account. After you close your account,
you can't sign in to Power BI. Also, as it states in the data retention policy in the Power BI Service Agreement,
Power BI deletes any customer data you uploaded or created.

Individual Power BI users


If you signed up for Power BI as an individual, you can close your account from the Settings screen.
1. In Power BI, select the gear in the upper right, then select Settings .

2. On the General tab, select Close Account .

3. Select a reason for closing the account (1). You can also provide further information (2). Then select
Close account .
4. Confirm that you want to close your account.

You should see a confirmation that Power BI closed your account. You can reopen your account from here
if necessary.

Managed users
If your organization signed you up for Power BI, contact your admin. Ask them to unassign the license from your
account.
More questions? Try asking the Power BI Community
Power BI Premium features
5/23/2022 • 2 minutes to read • Edit Online

This article lists the main Power BI Premium features. Most of the features apply to all the Power BI Premium
licenses: Premium Gen2, Premium (original version), and Premium Per User (PPU). When a feature only works
with a specific license, the required license is indicated in the description field. If no license is listed, the feature
works with any license.

IMPORTANT
If your organization is using the original version of Power BI Premium, you're required to migrate to the modern Premium
Gen2 platform. Microsoft began migrating all Premium capacities to Gen2. If you have a Premium capacity that requires
migrating, you'll receive an email notification 60 days before the migration is scheduled to start. For more
information, see Plan your transition to Power BI Premium Gen2.

Power BI Premium feature list


Feature                                       Description

Advanced AI                                   Use artificial intelligence (AI) with dataflows

Asynchronous refresh (preview)                Perform asynchronous data-refresh operations

Automatic aggregations (preview)              Optimize DirectQuery datasets

Autoscale                                     Automatically add compute capability when your capacity is
                                              overloaded. Available for Premium Gen2 only.

Backup and restore                            Back up and restore data using XMLA endpoints

Bring your own key (BYOK)                     Use your own keys to encrypt data. Available for Premium
                                              Gen2 and Premium (original version).

Dataflows computed entities                   Perform in-storage computations

Dataflows enhanced compute engine             Optimize the use of dataflows

Dataflows incremental refresh                 Use incremental refresh with dataflows

Dataflows linked entities                     Reference other dataflows

Deployment pipelines                          Manage the lifecycle of your Power BI content

DirectQuery with dataflows                    Connect directly to your dataflow without having to import
                                              its data

Hybrid tables (preview)                       Incremental refresh augmented with real-time data

Insights (preview)                            Explore and find insights such as anomalies and trends in
                                              your reports

Model size limit                              Available memory is set to:
                                              Premium Gen2 - the limit of the memory footprint of a single
                                              Power BI item; see the RAM column in the table at the bottom
                                              of Limitations in Premium Gen2.
                                              Premium (original version) - the cumulative memory
                                              consumption of the capacity; see the RAM column in the
                                              table at the bottom of Capacity nodes.
                                              Premium Per User (PPU) - see Considerations and limitations.

Multi-geo                                     Deploy content to data centers in regions other than the
                                              home region of your tenant. Available for Premium Gen2 and
                                              Premium (original version).

On-demand loading capabilities for            Improve report load time by loading datasets to memory on
large models                                  demand

Paginated reports                             Pixel-perfect reports

Power BI Report Server                        On-premises report server. Available for Premium Gen2 and
                                              Premium Per User (PPU).

Refresh rate                                  The ability to refresh more than eight times a day

Query caching                                 Speed up reports by using local caching

Storage                                       Manage data storage

Streaming dataflows (preview)                 Connect to, ingest, mash up, model, and build reports using
                                              near real-time data

Unlimited content sharing                     Share Power BI content with anyone. Available for Premium
                                              Gen2 and Premium (original version).

Virtual network data gateway (preview)        Connect from Microsoft Cloud to Azure using a virtual
                                              network (VNet)

XMLA read/write                               Enable XMLA endpoint

Next steps
What is Power BI Premium Gen2?
What is Power BI Premium?
What is Power BI Premium Gen2?
5/23/2022 • 10 minutes to read • Edit Online

Power BI has released a new version of Power BI Premium, Power BI Premium Generation 2, referred
to as Premium Gen2 for convenience. You can choose to use the original version of Premium, or switch to using
Premium Gen2; you can only use one or the other for a given Premium capacity.

Premium Gen2 provides the following updates or improved experiences:


Ability to license Premium Per User in addition to licensing by capacity.
Enhanced performance on any capacity size, anytime: Analytics operations run up to 16X faster on
Premium Gen2. Operations will always perform at top speed and won't slow down when the load on the
capacity approaches the capacity limits.
Greater scale :
Higher limits on refresh concurrency, alleviating the need to track schedules for datasets being
refreshed on your capacity
Fewer memory restrictions
Complete separation between report interaction and scheduled refreshes
Improved and streamlined metrics with clear and normalized capacity utilization data that depends
only on the complexity of analytics operations the capacity performs, and not on its size, the
level of load on the system while performing analytics, or other factors. With the improved metrics,
utilization analysis, budget planning, chargebacks, and the need to upgrade are clearly visible with built-in
reporting.
Autoscale is an optional feature that allows for automatically adding one v-core at a time for 24-hour
periods when the load on the capacity exceeds its limits, preventing slowdowns caused by overload.
Additional v-cores are charged to your Azure subscription on a pay-as-you-go basis. See using Autoscale
with Power BI Premium for steps on how to configure and use Autoscale .
Reduced management overhead with proactive and configurable admin notifications about capacity
utilization levels and increasing load.

NOTE
To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.

Enabling Premium Gen2


Enable Premium Gen2 to take advantage of its updates. To enable Premium Gen2, take the following steps:
1. In the admin portal, navigate to Capacity settings .
2. Select Power BI Premium .
3. If you have already allocated capacity, select it.
4. A section appears titled Premium Generation 2 , and in that section is a slider to enable Premium
Generation 2.
5. Move the slider to Enabled .
The following short video shows how to enable Premium Gen2.

Optionally, you can also configure and use Autoscale with Power BI Premium to ensure capacity and
performance for your Premium users.

Workspaces and Premium Gen2


Workspaces reside within capacities. Each Power BI user has a personal workspace known as My Workspace .
Additional workspaces, known simply as workspaces, can be created to enable collaboration. By default, workspaces,
including personal workspaces, are created in the shared capacity. When you have Premium capacities, both My
Workspaces and workspaces can be assigned to Premium capacities.
Capacity administrators automatically have their My Workspaces assigned to Premium capacities.

Capacity nodes for Premium Gen2


With Premium Gen2 and Embedded Gen 2, the amount of memory available on each node size is set to the
limit of the memory footprint of a single artifact, and not to the cumulative consumption of memory. For example,
in a Premium Gen2 P1 capacity, each individual dataset is limited to 25 GB, in comparison to the original
Premium, where the total memory footprint of the datasets being handled at the same time was limited to 25
GB.

Refresh in Premium Gen2


Premium Gen2 and Embedded Gen 2 don't impose cumulative memory limits, and therefore concurrent dataset
refreshes don't contribute to resource constraints. There is no limit on the number of refreshes running per v-
core. However, the refresh of individual datasets continues to be governed by existing capacity memory and
CPU limits. You can schedule and run as many refreshes as required at any given time, and the Power BI service
will run those refreshes at the time scheduled as a best effort.
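
For example, refreshes can be queued programmatically. The following is a hedged sketch using the
MicrosoftPowerBIMgmt module and the public Refreshes REST API; the workspace and dataset IDs are
placeholders.

Connect-PowerBIServiceAccount
$workspaceId = '00000000-0000-0000-0000-000000000000'   # placeholder
$datasetId   = '11111111-1111-1111-1111-111111111111'   # placeholder

# Queue an on-demand refresh; in Premium Gen2 there's no per-v-core cap on
# how many refreshes can run concurrently
Invoke-PowerBIRestMethod -Method Post -Url "groups/$workspaceId/datasets/$datasetId/refreshes"

# Inspect recent refresh history for the dataset
Invoke-PowerBIRestMethod -Method Get -Url "groups/$workspaceId/datasets/$datasetId/refreshes" |
    ConvertFrom-Json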

Monitoring in Gen2
The intent of monitoring in Premium Gen2 is to simplify monitoring and management of Premium capacities.
With Premium Gen2, monitoring shifts from a tool used to ensure Premium capacities are running properly
into a tool that alerts admins when attention is needed to correct overuse or when more
resources are required. In other words, rather than constantly having to monitor for issues and adjust, Premium
Gen2 aims to assure that everything is running properly and only alerts users when they must act.
Updates for Premium Gen2 and Embedded Gen2 - Premium Gen2 and Embedded Gen 2 only require
monitoring a single aspect: how much CPU time your capacity requires to serve the load at any moment.
This reduction in the need for monitoring is a departure from the many metrics that the original version of
Power BI Premium required. Organizations that created a cadence of monitoring and reporting on their original
Premium capacities will need to adapt that rhythm for their Premium Gen2 capacities, given the streamlined
metrics and monitoring requirements of Premium Gen2.
In Premium Gen2, if you exceed your CPU time per the SKU size you purchased, your capacity either autoscales
to accommodate the need (if you've optionally enabled autoscale), or throttles your interactive operations, based
on your configuration settings.
In Embedded Gen 2, if you exceed your CPU time per the SKU size you purchased, your capacity throttles your
interactive operations, based on your configuration settings. To autoscale in Embedded Gen 2, see Autoscaling in
Embedded Gen2.
Updates for Premium Gen2
Premium Gen2 and Embedded Gen 2 capacities use the Capacity Utilization App.
You can download and install the metrics app for Premium Gen2 and Embedded Gen2; see Install the Gen2 metrics app.

Paginated reports and Premium Gen2


In Premium Gen2 and Embedded Gen2, there is no memory management for Paginated reports. With
Premium Gen2 and Embedded Gen2, Paginated reports are also supported on the EM1-EM3 and A1-A3 SKUs.
When using Premium Gen2, Paginated reports in Power BI benefit from the architectural and engineering
improvements reflected in Premium Gen2. The following sections describe the benefits of Premium Gen2 for
Paginated reports.
Broader SKU availability - Paginated reports running on Premium Gen2 can run reports across all
available embedded and Premium SKUs. Billing is calculated per CPU hour, across a 24-hour period. This
greatly expands the SKUs that support Paginated reports.
Dynamic scaling - With Premium Gen2, challenges associated with spikes in activity, or need for
resources, can be handled dynamically as need arises.
Improved caching - Prior to Premium Gen2, Paginated reports were required to perform many
operations in the context of memory allocated on the capacity for the workload. Now, using Premium
Gen2, reductions in the required memory for many operations enhance customers' ability to perform
long-running operations without impacting other user sessions.
Enhanced security and code isolation - With Premium Gen2, code isolation can occur at a per-user
level rather than at per-capacity, as was the case in the original Premium offering.
To learn more, see Paginated reports in Power BI Premium. To learn more about enabling the Paginated reports
workload, see Configure workloads.

Subscriptions and licensing


Power BI Premium Gen2 is a tenant-level Microsoft 365 subscription available in two SKU (Stock-Keeping Unit)
families:
P SKUs (P1-P5) for embedding and enterprise features, requiring a monthly or yearly commitment, billed
monthly, and includes a license to install Power BI Report Server on-premises.
EM SKUs (EM1-EM3) for organizational embedding, requiring a yearly commitment, billed monthly. EM1
and EM2 SKUs are available only through volume licensing plans. You can't purchase them directly.
In addition, Premium Per User has the benefits available with Premium Gen2, but on an individual user basis.
Purchasing
Power BI Premium subscriptions are purchased by administrators in the Microsoft 365 admin center. Specifically,
only Global administrators or Billing Administrators can purchase SKUs. When purchased, the tenant receives a
corresponding number of v-cores to assign to capacities, known as v-core pooling. For example, purchasing a
P3 SKU provides the tenant with 32 v-cores. To learn more, see How to purchase Power BI Premium.

Limitations in Premium Gen2


The following known limitations currently apply to Premium Gen2:
Client applications and tools that connect to and work with datasets on Premium Gen2 capacities through
the XMLA endpoint require Analysis Services client libraries. Most client applications and tools install the
most recent client libraries with regular updates, so manually installing the client libraries typically isn't
necessary. Regardless of the client application or tool version, the following minimum client library
versions are required:

Client library          Version

MSOLAP                  15.1.65.22
AMO                     19.12.7.0
ADOMD                   19.12.7.0

In some cases, manually installing the most recent client libraries may be necessary to reduce potential
connection and operations errors. To learn more about verifying existing installed client library versions
and manually installing the most recent versions, see Analysis Services client libraries.
There's a 225-second limit for rendering Power BI visuals. Visuals that take longer to render are
timed out and don't display.
Throttling can occur in Power BI Premium capacities. Concurrency limits are applied per session. An error
message will appear when too many operations are being processed concurrently.
Memory restrictions are different in Premium Gen2 and Embedded Gen 2. In the first generation of
Premium and Embedded, memory was restricted to a limited amount of RAM used by all simultaneously
running artifacts. In Gen2, there is no memory limit for the capacity as a whole. Instead, individual
artifacts (such as datasets, dataflows, paginated reports) are subject to the following RAM limitations:
A single artifact cannot exceed the amount of memory the capacity SKU offers.
The limitation includes all the operations (interactive and background) being processed for the
artifact while in use (for example, while a report is being viewed, interacted with, or refreshed).
Dataset operations like queries are also subject to individual memory limits, just as they are in the
first version of Premium.
To illustrate the restriction, consider a dataset with an in-memory footprint of 1 GB, and a user
initiating an on-demand refresh while interacting with a report based on the same dataset. Two
separate actions determine the amount of memory attributed to the original dataset, which may
be larger than two times the dataset size (a rough calculation follows this list):
The dataset needs to be loaded into memory.
The refresh operation will cause the memory used by the dataset to at least double, since the
original copy of data is still available for active queries while an additional copy is being
processed by the refresh. Once the refresh transaction commits, the memory footprint will
reduce.
Report interactions will execute DAX queries. Each DAX query consumes a certain amount of
temporary memory required to produce the results. Each query may consume a different
amount of memory and will be subject to the query memory limitation described earlier.
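
A rough back-of-envelope calculation for this scenario, where the query overhead figure is purely an
assumption:

$datasetGB       = 1.0                # in-memory footprint of the dataset
$refreshCopyGB   = $datasetGB         # second copy held while the refresh is processed
$queryMemoryGB   = 0.5                # assumed temporary memory for concurrent DAX queries
$estimatedPeakGB = $datasetGB + $refreshCopyGB + $queryMemoryGB
"Estimated peak: $estimatedPeakGB GB - must fit the per-artifact RAM limit of the SKU"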
The following table summarizes all the limitations that are dependent on the capacity size.

Capacity    Total     Backend   Frontend   RAM (GB)   DirectQuery/Live        Max memory per     Model refresh
SKUs        v-cores   v-cores   v-cores    (1,2,3)    connection/sec (1,2)    query (GB) (1,2)   parallelism (2)

EM1/A1      1         0.5       0.5        3          3.75                    1                  5
EM2/A2      2         1         1          5          7.5                     2                  10
EM3/A3      4         2         2          10         15                      2                  20
P1/A4       8         4         4          25         30                      6                  40
P2/A5       16        8         8          50         60                      6                  80
P3/A6       32        16        16         100        120                     10                 160
P4/A7 (4)   64        32        32         200        240                     10                 320
P5/A8 (4)   128       64        64         400        480                     10                 640

(1) The Power BI Premium Utilization and Metrics app doesn't currently expose these metrics.
(2) These limits only apply to the datasets workload per capacity.
(3) The RAM column represents an upper bound for the dataset size. However, an amount of memory must
be reserved for operations such as refreshes and queries on the dataset. The maximum dataset size
permitted on a capacity may be smaller than the numbers in this column.
(4) SKUs greater than 100 GB are not available in all regions. To request using these SKUs in regions where
they're not available, contact your Microsoft account manager.

Next steps
The following articles provide additional information about Power BI Premium.
Power BI Premium Per User
Managing Premium capacities
Azure Power BI Embedded Documentation
More questions? Try asking the Power BI Community
Power BI Premium Gen2 architecture
5/23/2022 • 4 minutes to read • Edit Online

Power BI Premium Generation 2 , referred to as Premium Gen2 for convenience, is an improved and
architecturally redesigned generation of Power BI Premium.
Architectural changes in Premium Gen2, especially around how CPU resources are allocated and used, enable
more versatility in offerings, and more flexibility in licensing models. For example, the new architecture enables
offering Premium on a per-user basis, offered as Premium Per User. The architecture also provides customers
with better performance, and better governance and control over their Power BI expenses.
The most significant update in the architecture of Premium Gen2 is the way capacities' back-end v-cores (CPUs,
often referred to as v-cores) are implemented:
In the original version of Power BI Premium, backend v-cores were reserved physical computing nodes in the
cloud, with differences in the number of v-cores and the amount of onboard memory according to the
customer's licensing SKU. Customer administrators were required to keep track of how busy these nodes were,
using the Premium metrics app. They had to use the app and other tools to determine how much capacity their
users required to meet their computing needs.
In Premium Gen2, backend v-cores are implemented on regional clusters of physical nodes in the cloud, which
are shared by all tenants using Premium capacities in that Power BI region. The regional cluster is further
divided into specialized groups of nodes, where each group handles a different Power BI workload (datasets,
dataflows, or paginated reports). These specialized groups of nodes help avoid resource contention between
fundamentally different workloads running on the same node.
In both Premium Gen1 and Gen2 versions, administrators have the ability to tweak and configure workload
settings for their capacity. This can be used to reduce resource contention between workloads (datasets,
dataflows, paginated reports, and AI), and adjust other settings such as memory limits and timeouts based on
the capacity usage patterns.
The contents of workspaces assigned to a Premium Gen2 capacity are stored on your organization's capacity
storage layer, which is implemented on top of capacity-specific Azure storage blob containers, similar to the
original version of Premium. This approach enables features like BYOK to be used for your data.
When the content needs to be viewed or refreshed, it is read from the storage layer and placed on a Premium
Gen2 backend node for computing. Power BI uses a placement mechanism that assures the optimal node is
chosen within the proper group of computing nodes. The mechanism typically places new content on the node
with the most available memory at the time the content is loaded, so that the view or refresh operation can gain
access to the most resources and can perform optimally.
As your capacity renders and refreshes more content, it uses more computation nodes, each with enough
resources to complete operations fast and successfully. This means your capacity may use multiple
computational nodes and in some cases, content might even move between nodes due to the Power BI service
performing internal load-balancing across nodes or resources. When such load balancing occurs, Power BI
makes sure content movement doesn't impact end-user experiences.
There are several positive results from distributing backend processing of content (datasets, dataflows, and
paginated reports) across shared backend nodes:
The shared nodes are at least as large as an original Premium P3 node, which means there are more v-
cores to perform any operation, which can increase performance by up to 16x compared to an
original Premium P1.
Whatever node your processing lands on, the placement mechanism makes sure memory remains
available for your operation to complete, within the applicable memory constraints of your capacity (see
the limitations section of this article for full details of memory constraints).
Cross-workloads resource contention is prevented by separating the shared nodes into specialized
workload groups. As a result of this separation, there are no controls for paginated report workloads.
The limitations on different capacity SKUs are not based on the physical constraints as they were in the
original version of Premium; rather, they are based on an expected and clear set of rules that the Power BI
Premium service enforces:
Total capacity CPU throughput is at or below the throughput possible with the v-cores your
purchased capacity has.
Memory consumption required for viewing and refresh operations remains within the memory
limits of your purchased capacity.
Because of this new architecture, customer admins do not need to monitor their capacities for signs of
approaching the limits of their resources, and instead are provided with clear indication when such limits
are met. This significantly reduces the effort and overhead required of capacity administrators to
maintain optimal capacity performance.

Next steps
What is Power BI Premium Gen2?
Premium Gen2 capacity load evaluation
Using Autoscale with Power BI Premium
Power BI Premium Gen2 FAQ
Power BI Premium Per User FAQ
Add or change Azure subscription administrators
More questions? Try asking the Power BI Community.
How to purchase Power BI Premium
5/23/2022 • 4 minutes to read • Edit Online

This article describes how to purchase Power BI Premium capacity for your organization. The article covers the
following scenario:
Using P SKUs for typical production scenarios. P SKUs require a monthly or yearly commitment, and are
billed monthly.
For more information about Power BI Premium, see What is Power BI Premium?. For current pricing and
planning information, see the Power BI pricing page. Content creators still need a Power BI Pro license, even if
your organization uses Power BI Premium. Ensure you purchase at least one Power BI Pro license for your
organization. With A SKUs, all users who consume content also require Pro licenses.

NOTE
If a Premium subscription expires, you have 30 days of full access to your capacity. After that, your content reverts to a
shared capacity where it will continue to be accessible. However, you will not be able to view reports that are based on
datasets that are greater than 1 GB or reports that require Premium capacities to render.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.

Purchase P SKUs for typical production scenarios


You can create a new tenant with a Power BI Premium P1 SKU configured, or you can purchase a Power BI
Premium capacity for an existing organization. In both cases, you can then add capacity if you need it.
Create a new tenant with Power BI Premium P1
If you don't have an existing tenant and want to create one, you can purchase Power BI Premium at the same
time. The following link walks you through the process of creating a new tenant and enables you to purchase
Power BI Premium: Power BI Premium P1 offer. When you create your tenant, you will automatically be assigned
to the Microsoft 365 Global Administrator role for that tenant.
After you purchase capacity, learn how to manage capacities and assign workspaces to a capacity.
Purchase a Power BI Premium capacity for an existing organization
If you have an existing organization (tenant), you must be in the Microsoft 365 Global Administrator role or
Billing Administrator role to purchase subscriptions and licenses. For more information, see About Microsoft
365 admin roles.
To purchase Premium capacity, follow these steps.
1. From within the Power BI service, select the Microsoft 365 app picker, and then select Admin .
Alternatively, you can browse to the Microsoft 365 admin center.
2. Select Billing > Purchase services.
3. Under Power BI, look for Power BI Premium offerings. These are listed as P1 through P3, EM3, and P1
(month to month).
4. Select Details under the service you want, select a license quantity, and then select Buy .

5. Follow the steps to complete the purchase.


After you have completed the purchase, the Purchase services page shows that the item is purchased and
active.
After you purchase capacity, learn how to manage capacities and assign workspaces to a capacity.
Purchase additional capacities
Now that you have a capacity, you can add more as your needs grow. You can use any combination of Premium
capacity SKUs (P1 through P3) within your organization. The different SKUs provide different resource
capabilities.
1. In the Microsoft 365 admin center, select Billing > Your products .
2. Select the Power BI Premium service you want to add capacity to.
3. Select Buy licenses .
4. Change the number of instances that you want to have for this item. Then select Submit when finished.

IMPORTANT
Selecting Submit charges the credit card on file.

The Your products page will then indicate the number of instances you have. Within the Power BI admin portal,
under Capacity settings , the available v-cores reflects the new capacity purchased.

Cancel your subscription


You can cancel your subscription from within the Microsoft 365 admin center. To cancel your Premium
subscription, do the following.
1. Browse to the Microsoft 365 admin center.
2. Select Billing > Your products .
3. Select your Power BI Premium product from the list.
4. Under Subscription status , select Cancel subscription .
5. The Cancel subscription page will indicate whether or not you are responsible for an early termination
fee.
6. Read through the information, and if you want to proceed, select Cancel subscription .
When you cancel or your license expires
When you cancel your Premium subscription, or your capacity license expires, you can continue to access your
Premium capacities for a period of 30 days from the date of cancellation or license expiration. After 30 days,
your workspaces will move to a shared capacity and will still be accessible. However, you will not be able to view
reports that are based on datasets that require Premium capacities to render. This includes datasets larger than
1 GB and refreshes of those datasets.

Purchase A SKUs for testing and other scenarios


You can also purchase A SKUs for testing and other scenarios, which provides Premium capacity on an hourly
basis. For more information and steps, see Purchase Power BI Premium for testing.

Purchase Premium Per User (PPU) licenses


You can purchase Power BI Premium for individual users, using the Premium Per User (PPU) license model. For
more information about Premium Per User, see Power BI Premium Per User.

Next steps
Configure and manage capacities in Power BI Premium
Power BI pricing page
Power BI Premium FAQ
Planning a Power BI Enterprise Deployment whitepaper
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the
following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Purchase Power BI Premium for testing
5/23/2022 • 2 minutes to read • Edit Online

This article describes how to purchase Power BI Premium A SKUs for testing scenarios, and for cases where you
don't have the permissions necessary to purchase P SKUs (Microsoft 365 Global Administrator role or Billing
Administrator role). A SKUs require no time commitment, and are billed hourly. You purchase A SKUs in the
Azure portal.
For more information about Power BI Premium, see What is Power BI Premium?. For current pricing and
planning information, see the Power BI pricing page. Content creators still need a Power BI Pro license, even if
your organization uses Power BI Premium. Ensure you purchase at least one Power BI Pro license for your
organization. With A SKUs, all users who consume content also require Pro licenses.

NOTE
If a Premium subscription expires, you have 30 days of full access to your capacity. After that, your content reverts to a
shared capacity. Models that are greater than 1 GB are not supported in shared capacity.

Purchase A SKUs for testing and other scenarios


A SKUs are made available through the Azure Power BI Embedded service. You can use A SKUs in the following
ways:
Enable embedding of Power BI in third party applications. For more information, see Power BI Embedded.
Test Premium functionality before you buy a P SKU.
Create development and test environments alongside a production environment that uses P SKUs.
Purchase Power BI Premium even though you don't have the Microsoft 365 Global Administrator or Billing
Administrator role.

NOTE
If you purchase an A4 or higher SKU, you can take advantage of all Premium features except for unlimited sharing of
content. With A SKUs, all users who consume content require Pro licenses.

Follow these steps to purchase A SKUs in the Azure portal:


1. Sign in to the Azure portal with an account that has at least capacity admin permissions in Power BI.
2. Search for Power BI Embedded and select the service in the search results.
3. Select Create Power BI Embedded .

4. On the Power BI Embedded create screen, specify the following information:


The Subscription in which to create the Power BI Embedded service.
The physical Location in which to create the resource group that contains the service. For better
performance, this location should be close to the location of your Azure Active Directory tenant for
Power BI.
The existing Resource group to use, or create a new one as shown in the example.
The Power BI capacity administrator . The capacity admin must be a member user or a service
principal in your Azure AD tenant.
5. If you want to use all features of Power BI Premium (except unlimited sharing), you need at least an A4
SKU. Select Change size .
6. Select a capacity size of A4, A5, or A6, which correspond to P1, P2, and P3. Prices in the following image
are examples only.

7. Select Review + Create , review the options you chose, then select Create .
8. It can take a few minutes to complete the deployment. When it's ready, select Go to resource .

9. On the management screen, review the options you have for managing the service, including pausing the
service when you're not using it (a scripted sketch of these operations follows).
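
These portal steps can also be scripted. The following is a minimal sketch using the Az.PowerBIEmbedded
module; every name is a placeholder, and pausing the capacity is what stops the hourly charges.

Connect-AzAccount

# Create an A4 capacity (placeholder names throughout)
New-AzPowerBIEmbeddedCapacity -ResourceGroupName 'my-resource-group' `
    -Name 'mytestcapacity' `
    -Location 'westus' `
    -Administrator 'admin@contoso.com' `
    -Sku 'A4'

# Pause the capacity while it's not in use, then resume it for the next test run
Suspend-AzPowerBIEmbeddedCapacity -Name 'mytestcapacity' -ResourceGroupName 'my-resource-group'
Resume-AzPowerBIEmbeddedCapacity -Name 'mytestcapacity' -ResourceGroupName 'my-resource-group'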
After you purchase capacity, learn how to manage capacities and assign workspaces to a capacity.

Next steps
What is Power BI Premium?
How to purchase Power BI Premium
Configure and manage capacities in Power BI Premium
Power BI pricing page
Power BI Premium FAQ
Planning a Power BI Enterprise Deployment whitepaper
More questions? Try asking the Power BI Community
Plan your transition to Power BI Premium Gen2
5/23/2022 • 3 minutes to read • Edit Online

This article provides information about key dates for migrating Power BI Premium capacity to the latest
platform.
Over the last several months, we've been working to make many improvements to Power BI Premium. Changes
include updates to licensing, performance, scaling, management overhead, and improved insight to utilization
metrics. This next generation of Power BI Premium, referred to as Power BI Premium Gen2, has officially moved
from preview to general availability as of October 4, 2021. You can read the announcement about this release in
the Power BI blog.
If your organization is using the original version of Power BI Premium, you're required to migrate to the modern
Gen2 platform. Microsoft began migrating all Premium capacities to Gen2. If you have a Premium capacity that
requires migrating, you’ll receive an email notification 60 days before the migration is scheduled to
star t .

Premium Gen2 prerequisites


Power BI Premium Gen2 and Embedded Gen2 support open-platform connectivity from Microsoft and third-
party client applications and tools by using XMLA endpoints.
The article Dataset connectivity with the XMLA endpoint lists the minimum requirements for Power BI Premium,
Premium Per User (PPU) and Embedded connectivity. In addition to these requirements, for dataset connectivity
in Premium Gen2, you need to have the following:
Microsoft Excel - Version 16.0.13612.10000 or higher
PowerShell cmdlets - Version 21.1.18256 or higher
Server Profiler - Version 18.9 or higher
SQL Server Management Studio (SSMS) - Version 18.9 or higher
Visual Studio with Analysis Services projects (SSDT) - Version 2.9.16 or higher
You also need to use the following client libraries when working with Gen2 capacities:
ADOMD - Version 19.12.7.0 or higher
AMO - Version 19.12.7.0 or higher
MSOLAP - Version 15.1.65.22 or higher
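
As an illustration of the client library requirement, the sketch below opens an XMLA connection with the
ADOMD.NET client from PowerShell. The DLL path and the workspace name are assumptions; adjust both for
your installation and tenant.

# Load the ADOMD.NET client library (version 19.12.7.0 or higher for Gen2);
# the install path below is a typical default and may differ on your machine
Add-Type -Path 'C:\Program Files\Microsoft.NET\ADOMD.NET\150\Microsoft.AnalysisServices.AdomdClient.dll'

$connectionString = 'Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace'   # placeholder workspace
$connection = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection($connectionString)
$connection.Open()     # prompts for Azure AD sign-in
"Connected; session ID: $($connection.SessionID)"
$connection.Close()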

Self-migration to Premium Generation 2


If you want to perform your own migration to the latest platform before March 2022, it's easy to transition. You
simply need to enable Premium Gen2 in the Power BI admin portal. Migrating doesn't interrupt your Power BI
service. The change typically completes within a minute and won't take more than 10 minutes.
Ready for the next generation? Follow these steps:
1. Sign in to the Power BI service as a Power BI capacity admin.
2. From the navigation bar, select Settings > Admin portal > Capacity settings.
3. Select Power BI Premium .
4. If you have already allocated capacity, select it.
5. The section Premium Generation 2 appears.
6. Select the slider to switch the setting to Enabled . This step is demonstrated in the following animation:

Transition from preview to Premium Gen 2 general availability


Customers using Power BI Premium Gen2 in preview don't need to take any action to transition to the general
availability release. However, there are some key dates to consider if you've been using Autoscale to balance
your capacity needs.
To date, organizations that have enabled Autoscale for capacities have gotten the burst processing benefits of
Autoscale for free. Beginning November 4, 2021, we'll begin charging for Autoscale cores. Take one of the
following actions:
You can continue to use Autoscale to enable the automatic use of additional cores during periods of higher-
than-normal demand on your capacities. Review the pricing details for Premium per capacity add-ons so that
you're aware of upcoming charges.
Or, to avoid Autoscale charges, disable the feature. Autoscale is an optional feature and benefit of the
Premium Gen2 platform. You can choose to not use it.

Migration notification
Following the general availability of Gen2, we'll begin to notify affected customers so that you can prepare your
organization for changes. We'll post additional awareness, along with specific migration timelines, to the Microsoft
365 Message Center. Admins will receive 60 days' advance notice of changes. The timeline will vary by cloud.

National cloud supportability


The following table describes Gen2 national cloud supportability. If a certain cloud environment has
unsupported Gen2 features, they're also listed in the table.
Environment                                          Supported   Unsupported features

U.S. Government Community Cloud (GCC)                ✔           Autoscale
U.S. Government Community Cloud High (GCC High)      ✔

Next steps
What is Power BI Premium Gen2?
Using Autoscale with Power BI Premium
Install the Gen2 metrics app
Managing Premium Gen2 capacities
5/23/2022 • 8 minutes to read • Edit Online

Managing Power BI Premium involves creating, managing, and monitoring Premium capacities. This article
provides an overview of capacities; see Configure and manage capacities for step-by-step instructions.

Creating and managing capacities


The Capacity Settings page of the Power BI Admin portal displays the number of v-cores purchased and
Premium capacities available. The page allows Global administrators or Power BI service administrators to
create Premium capacities from available v-cores, or to modify existing Premium capacities.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.

NOTE
You can also get Premium Per User (PPU) licenses for individuals, which provides many of the features and capabilities of a
Premium capacity, and also incorporates all functionality included with a Power BI Pro license. For more information, see
Power BI Premium Per User.

When creating a Premium capacity, administrators are required to define:


Capacity name (unique within the tenant).
Capacity admin(s).
Capacity size.
Region for data residency.
At least one Capacity Admin must be assigned. Users assigned as Capacity Admins can:
Remove assigned workspaces from the capacity
Manage user permissions and assign:
Additional Capacity Admins
Contributors - Users who are allowed to assign workspaces to that capacity (Capacity Admins are
automatically also Contributors)
Manage Autoscale settings for that capacity
Set up email alerts for resource utilization levels
Track capacity resource usage using the dedicated out-of-the-box app
Capacity Admins cannot access workspace content unless explicitly assigned in workspace permissions. They
also don't have access to all Power BI admin areas (unless explicitly assigned) such as usage metrics, audit logs,
or tenant settings. Importantly, Capacity Admins do not have permissions to create new capacities or scale
existing capacities. Admins are assigned on a per capacity basis, ensuring that they can only view and manage
capacities to which they are assigned.
Capacity size is selected from an available list of SKU options, which is constrained by the number of available v-
cores in the pool. It's possible to create multiple capacities from the pool, which could be sourced from one or
more purchased SKUs. For example, a P3 SKU (32 v-cores) could be used to create three capacities: one P2 (16
v-cores), and two P1 (2 x 8 v-cores). The following image shows an example setup for the fictitious Contoso
organization consisting of five Premium capacities (3 x P1, and 2 x P3) with each containing workspaces, and
several workspaces in shared capacity.

A Premium capacity can be assigned to a region other than the home region of the Power BI tenant, known as
multi-geo. Multi-geo provides administrative control over which datacenters within defined geographic regions
your Power BI content resides. The rationale for a multi-geo deployment is typically for corporate or
government compliance, rather than performance and scale. Report and dashboard loading still involves
requests to the home region for metadata. To learn more, see Multi-Geo support for Power BI Premium.
Power BI service administrators and Global Administrators can modify Premium capacities. Specifically, they can:
Change the capacity size to scale-up or scale-down resources.
Add or remove Capacity Admins.
Add or remove users that have assignment permissions.
Change regions.

NOTE
Service and global administrators do not have access to capacity metrics unless explicitly added as capacity admins.

Contributor assignment permissions are required to assign a workspace to a specific Premium capacity. The
permissions can be granted to the entire organization, specific users, or groups.
By default, Premium capacities support workloads associated with running Power BI queries. Premium
capacities also support additional workloads: AI (Cognitive Services), Paginated Reports, and Dataflows.
Deleting a Premium capacity is possible and won't result in the deletion of its workspaces and content. Instead, it
moves any assigned workspaces to shared capacity. If the Premium capacity was created in a different
region, the workspaces are moved to shared capacity in the home region.
Capacities have limited resources, defined by each capacity SKU. Resource consumption by Power BI items
(such as reports and dashboards) across capacities can be tracked using the metrics app.
Assigning workspaces to capacities
Workspaces can be assigned to a Premium capacity in the Power BI Admin portal or, for a workspace, in the
Workspace pane.
Capacity Admins, as well as Global Administrators or Power BI service administrators, can bulk assign
workspaces in the Power BI Admin portal. Bulk assignment can apply to:
Workspaces by users - All workspaces owned by those users, including personal workspaces, are
assigned to the Premium capacity. This will include the reassignment of workspaces when they are
already assigned to a different Premium capacity. In addition, the users are also assigned workspace
assignment permissions.
Specific workspaces
The entire organization's workspaces - All workspaces, including personal workspaces, are assigned
to the Premium capacity. All current and future users are assigned workspace assignment permissions.
This approach is not recommended. A more targeted approach is preferred.
You can enable Premium capabilities in a workspace by setting the proper license mode. To set a license mode,
you must be both a workspace admin and have assignment permissions. To enable Premium capabilities for P
and EM SKUs, set the license mode to Premium per capacity. To enable Premium capabilities for A SKUs, set the
license mode to Embedded. To enable Premium capabilities for Premium Per User (PPU), set the license mode
to Premium Per User. To remove a workspace from Premium, set the workspace license mode to Pro.
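
If you automate workspace-to-capacity assignment, the same operation is available through the Power BI REST API. The following is a minimal sketch using the MicrosoftPowerBIMgmt PowerShell module; both GUIDs are placeholders, not values from this article:

# Sketch: assign a workspace to a Premium capacity over the REST API.
# Requires the MicrosoftPowerBIMgmt module; both GUIDs are placeholders.
Connect-PowerBIServiceAccount

$workspaceId = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"   # placeholder workspace ID
$capacityId  = "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"   # placeholder capacity ID

$body = @{ capacityId = $capacityId } | ConvertTo-Json

# AssignToCapacity moves the workspace onto the capacity; assigning the
# all-zeros GUID instead moves the workspace back to shared capacity.
Invoke-PowerBIRestMethod -Url "groups/$workspaceId/AssignToCapacity" -Method Post -Body $body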

Workspace admins can remove a workspace from a capacity (to shared capacity) without requiring assignment
permission. Removing a workspace from a reserved capacity effectively relocates it to shared capacity. Note
that removing a workspace from a Premium capacity may have negative consequences: for example, shared
content can become unavailable to users with Power BI Free licenses, or scheduled refreshes can be suspended
when they exceed the allowances supported by shared capacity.
In the Power BI service, a workspace assigned to a Premium capacity is easily identified by the diamond icon
that adorns the workspace name.

Planning your capacity size in advance


Different Premium capacity SKUs have different amounts of resources that are made available to support Power
BI items (such as reports, dashboards, and datasets) processed by each capacity. The SKUs differ in the
number of standard v-cores they have. The most influential resources to consider when sizing in advance are:
CPU power – The amount of CPU power each capacity has is a function of its base v-cores and the
number of autoscale cores it has (purchased and allocated in advance during capacity
instantiation). The CPU power exhaustion of a capacity is measured by aggregating the CPU power used
across all the Power BI items it processes. The more operations done against more items, the higher the
CPU spend.
Item size - The size of a Power BI item relates to the amount of data available for processing inside the
item. Size can have multiple dimensions depending on the item. Dataset size, for example, is determined
by the footprint the dataset has in memory while being processed. Different items may have size
measures that are defined differently. The size footprint across the capacity, unlike CPU, is not aggregated
across all active items but is evaluated per item only. This means a capacity can support multiple items
running concurrently, as long as none of those items exceeds the capacity size limit.
Due to the individually enforced nature of a Power BI item's size measure, size usually dictates how big a
capacity should be. For example, if you have a P1 SKU, datasets are supported up to a limit of 25 GB. As long as
your datasets do not exceed this value, the SKU should meet your needs. You can evaluate a typical dataset's size
by measuring its memory footprint in Power BI Desktop. A typical item's usage pattern will dictate its
CPU power spend, which if exhausted can severely degrade report interaction performance for end users.
Therefore, once you have a typical report for evaluation, it is beneficial to use that report in a load test, and
evaluate the results to determine whether a higher SKU size or turning on autoscale is required.
How to decide when to turn on autoscale?
The Power BI Premium Capacity Utilization and Metrics app indicates cases of overload impact in the
overloaded minutes visual on the overview page. You can evaluate the severity of those overloaded
minutes by using the evidence page, where you can track how much impact an overload moment had, which
Power BI items it affected, and how many users were affected. If, based on your evaluation, the impact is too
high, turn on autoscale.
How to decide when to scale up to a higher SKU?
There are two different indicators that suggest you need to scale up your capacity:
Using autoscale beyond a certain degree is not economically viable. If your autoscaling patterns lead you
to consume more than 25% of your capacity size on a regular basis, it may be less costly to upgrade your
capacity to a higher SKU, since your capacity's CPU power requirements are significantly higher than the
capacity's original power. Here, the 25% figure accounts for both how many cores are added and how long
they stay added. For example, a P1 SKU with 8 v-cores that uses autoscale in a way that is equivalent
to two additional cores consistently applied will cost the same as a P2 (see the sketch after this list).
The size of your Power BI items approaches or exceeds capacity limits. If the size of any item
reported in the metrics app approaches or exceeds your capacity limit, operations against that item will
fail. Therefore, if a critical item approaches those limits (80% of the capacity size), it is advisable to
consider upgrading the capacity in advance, to avoid interruption of service should that item exceed the
capacity limit.
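
To make the 25% guideline concrete, here's an illustrative sketch; the numbers and the simplified cost model are assumptions, not pricing guidance:

# Sketch of the 25% rule of thumb: weight autoscaled v-cores by how long
# they stay on, and compare that to the capacity's base v-cores.
$baseVCores = 8                       # e.g., a P1 (8 v-cores)
$hoursInPeriod = 24 * 30              # look at one month of usage

$autoscaleVCoreHours = 2 * $hoursInPeriod    # example: 2 extra v-cores on all month
$baseVCoreHours = $baseVCores * $hoursInPeriod

$ratio = $autoscaleVCoreHours / $baseVCoreHours   # -> 0.25
if ($ratio -ge 0.25) {
    "Autoscale usage is {0:P0} of base capacity; a higher SKU may be cheaper." -f $ratio
}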

Next steps
Using autoscale with Premium Gen2
Install the Gen2 metrics app
Using the Premium Gen2 metrics app
Configure and manage capacities in Power BI
Premium
5/23/2022 • 5 minutes to read

Managing Power BI Premium involves creating, managing, and monitoring Premium capacities. This article
provides step-by-step instructions; for an overview of capacities, see Managing Premium capacities.
Learn how to manage Power BI Premium and Power BI Embedded capacities, which provide reserved resources
for your content.

Capacity is at the heart of the Power BI Premium and Power BI Embedded offerings. It is a set of resources
reserved for exclusive use by your organization. Having a capacity enables you to publish dashboards, reports,
and datasets to users throughout your organization without having to purchase per-user licenses for them. It
also offers dependable, consistent performance for the content hosted in capacity. For more information, see
What is Power BI Premium?.

NOTE
Power BI recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the
management of Premium capacities and reduces management overhead. For more information, see Power BI Premium
Generation 2.

NOTE
You can also get Premium Per User (PPU) licenses for individuals. PPU provides many of the features and capabilities of a
Premium capacity, and also incorporates all functionality included with a Power BI Pro license. For more information, see
Power BI Premium Per User.

Manage capacity
After you have purchased capacity nodes in Microsoft 365, you set up the capacity in the Power BI admin portal.
You manage Power BI Premium capacities in the Capacity settings section of the portal.
You manage a capacity by selecting the name of the capacity. This takes you to the capacity management screen.

If no workspaces have been assigned to the capacity, you will see a message about assigning a workspace to the
capacity.
Setting up a new capacity (Power BI Premium)
The admin portal shows the number of virtual cores (v-cores) that you have used and that you still have
available. The total number of v-cores is based on the Premium SKUs that you have purchased. For example,
purchasing a P3 and a P2 results in 48 available cores – 32 from the P3 and 16 from the P2.
If you have available v-cores, set up your new capacity by following these steps.
1. Select Set up new capacity .
2. Give your capacity a name.
3. Define who the admin is for this capacity.
4. Select your capacity size. Available options are dependent on how many available v-cores you have. You
can't select an option that is larger than what you have available.

5. Select Set up .
Capacity admins, as well as Power BI admins and global administrators, then see the capacity listed in the admin
portal.
Capacity settings
1. In the Premium capacity management screen, under Actions , select the gear icon to review and update
settings.

2. You can see who the service admins are, the SKU/size of the capacity, and what region the capacity is in.
3. You can also rename or delete a capacity.

NOTE
Power BI Embedded capacity settings are managed in the Microsoft Azure portal.

Change capacity size


Power BI admins and global administrators can change the size of a Power BI Premium capacity. Capacity
admins who are not a Power BI admin or global administrator don't have this option.
1. Select the capacity name you want to change the size of.
2. Select Change size.

3. On the Change size screen, upgrade or downgrade your capacity as appropriate.

NOTE
To upgrade to a P4 or a P5 capacity you need to buy a few smaller SKUs that will add up to the size of the
capacity you want.
Administrators are free to create, resize and delete nodes, so long as they have the requisite number of v-
cores.
P SKUs cannot be downgraded to EM SKUs. You can hover over any disabled options to see an
explanation.

IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. See capacity and reliability notifications for more
information.

Manage user permissions


You can assign additional capacity admins, and assign users that have contributor permissions. Users that have
contributor permissions can assign a workspace to a capacity if they are an admin of that workspace. They can
also assign their personal My Workspace to the capacity. Users with contributor permissions do not have access
to the admin portal.

NOTE
For Power BI Embedded, capacity admins are defined in the Microsoft Azure portal.

Expand Contributor permissions , then add users or groups as appropriate.


Assign a workspace to a capacity
There are two ways to assign a workspace to a capacity: in the admin portal, or from the workspace's settings.
Assign from the admin portal
Capacity admins, along with Power BI admins and global administrators, can bulk assign workspaces in the
premium capacity management section of the admin portal. When you manage a capacity, you see a
Workspaces assigned to this capacity section that allows you to assign workspaces.

1. Select Assign workspaces .


2. Select an option for Apply to .
SELECTION | DESCRIPTION
Workspaces by users | When you assign workspaces by user or group, all the workspaces that the user or group is an admin of become part of the Premium capacity, including the user's personal workspace. The users automatically get workspace assignment permissions. This includes workspaces already assigned to a different capacity.
Specific workspaces | Enter the name of a specific workspace to assign to the selected capacity.
The entire organization's workspaces | Assigning the entire organization's workspaces to Premium capacity assigns all workspaces and My Workspaces in your organization to this Premium capacity. In addition, all current and future users will have the permission to reassign individual workspaces to this capacity.

3. Select Apply .
Assign from workspace settings
You can also assign a workspace to a Premium capacity from the settings of that workspace. To move a
workspace into a capacity, you must have admin permissions to that workspace, and also capacity assignment
permissions to that capacity. Note that workspace admins can always remove a workspace from Premium
capacity.
1. Edit a workspace by selecting the ellipsis (...), then selecting Edit this workspace.
2. Under Edit this workspace , expand Advanced .
3. Select the capacity that you want to assign this workspace to.

4. Select Save .
Once saved, the workspace and all its contents are moved into Premium capacity without any experience
interruption for end users.

Power BI Report Server product key


On the Capacity settings tab of the Power BI admin portal, you will have access to your Power BI Report
Server product key. This is only available to Global Admins or users assigned the Power BI service
administrator role, and only if you have purchased a Power BI Premium SKU.

Selecting Power BI Report Server key displays a dialog containing your product key. You can copy it and use
it with the installation.
For more information, see Install Power BI Report Server.

Next steps
Managing Premium capacities
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience with
improvements in the following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Premium Gen2 capacity load evaluation
5/23/2022 • 4 minutes to read

TIP
This article explains how to evaluate your Gen2 capacity load. It covers concepts such as overload and autoscale. You can
also watch the Gen2 features breakdown video, which illustrates some of the Gen2 features described in this article.

To enforce CPU throughput limitations, Power BI evaluates the throughput from your Premium Gen2 capacity on
an ongoing basis.
Power BI evaluates throughput every 30 seconds. It allows operations to complete, collects execution time on
the shared pool physical node's CPUs, and then, for all operations on your capacity, aggregates them into 30-
second CPU intervals and compares the results to what your purchased capacity is able to support.
The following image illustrates how Premium Gen2 evaluates and completes queries.

Let's look at an example: a P1 with four backend v-cores can support 120 seconds (4 x 30 seconds = 120) of v-
core execution time, also known as CPU time.
The aggregation is complex. It uses specialized algorithms for different workloads, and for different types of
operations, as described in the following points:
Slow-running operations, such as dataset and dataflow refreshes, are considered background
operations, since they typically run in the background and users don't actively monitor them or look at
them visually. Background operations are lengthy and require significant CPU power to complete. Power
BI spreads the CPU costs of background operations over 24 hours, so that capacities
don't hit maximum resource usage due to too many refreshes running simultaneously. This allows Power
BI Premium Gen2 subscribers to run as many background operations as allowed by their purchased
capacity SKU, without the limits imposed by the original Premium generation.
Fast operations like queries, report loads, and others are considered interactive operations. The CPU
time required to complete those operations is aggregated, to minimize the number of 30-second
windows that are impacted following each operation's completion.
Premium Gen2 background operation scheduling
Refreshes are run on Premium Gen2 capacities at the time they are scheduled, or close to it, regardless of how
many other background operations were scheduled for the same time. Datasets and dataflows being refreshed
are placed on a physical processing node that has enough memory available to load them, and then begin the
refresh process.
While processing the refresh, datasets may consume more memory to complete the refresh process. The refresh
engine makes sure no artifact exceeds the amount of memory that its base SKU allows it to consume
(for example, 25 GB on a P1 subscription, 50 GB on a P2 subscription, and so on).

How capacity size limits are enforced when viewing reports


Premium Gen2 evaluates utilization by aggregating utilization records every 30 seconds. Each evaluation
consists of two different aggregations:
Interactive utilization
Background utilization
Interactive utilization is evaluated by considering all interactive operations that completed on or near the
current 30-second evaluation cycle.
Background utilization is evaluated by considering all the background operations that completed during the
past 24 hours. Each background operation contributes only 1/2880 of its total CPU cost (2880 is the number of
evaluation cycles in a 24-hour period).
Each capacity consists of an equal number of frontend and backend v-cores. The CPU time measured in
utilization records reflect the backend v-cores' utilization, and that utilization drives the need to autoscale.
Utilization of frontend v-cores is not tracked, and you cannot convert frontend v-cores to backend v-cores.
If you have a P1 subscription with 4 backend v-cores, each evaluation cycle quota equates to 120 seconds (4 x
30 = 120 seconds) of CPU utilization. If the sum of both interactive and background utilization exceeds the total
backend v-core quota in your capacity, and you have not optionally enabled autoscale, the workload for your
Gen2 capacity will exceed your available resources, also called your capacity threshold. The following image
illustrates this condition, called overload, when autoscale is not enabled.

In contrast, if autoscale is enabled and the sum of both interactive and background utilization exceeds
the total backend v-core quota in your capacity, your capacity is automatically scaled up by one v-core
for the next 24 hours.
The following image shows how autoscale works.

Autoscale always considers your current capacity size to evaluate how much you use, so if you already
autoscaled into one v-core, that v-core is spread evenly at 50% for frontend utilization and 50% for backend
utilization. This means your maximum capacity is now at (120 + 0.5 * 30 = 135 seconds) of CPU time in an
evaluation cycle.
Autoscale always ensures that no single interactive operation can account for all of your capacity, and you must
have two or more operations occurring in a single evaluation cycle to initiate autoscale.
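
As an illustration of the arithmetic described above, the following sketch models a single evaluation cycle; it is a simplified model of the rules in this article, not the service's actual implementation:

# Simplified model of one 30-second evaluation cycle on a P1.
$backendVCores = 4
$quota = $backendVCores * 30          # 120 CPU seconds per 30-second cycle

$interactiveCpu = 95                  # example: interactive CPU seconds this cycle
$backgroundCpuLast24h = 43200         # example: background CPU seconds over 24 hours

# Each background operation contributes 1/2880 of its cost per cycle.
$backgroundShare = $backgroundCpuLast24h / 2880   # -> 15 CPU seconds

$total = $interactiveCpu + $backgroundShare       # -> 110
if ($total -gt $quota) { "Over quota: $total of $quota CPU seconds (overload or autoscale)" }
else { "Within quota: $total of $quota CPU seconds" }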

Using Premium Gen2 without autoscale


If a capacity's utilization exceeds 100% of its resources, and it cannot initiate autoscale because autoscale is
turned off or already at its maximum v-core value, the capacity enters a temporary interactive request
delay mode. During the interactive request delay mode, each interactive request (such as a report load, visual
interaction, and others) is delayed before it is sent to the engine for execution.
The capacity stays in interactive request delay mode as long as the previous evaluation shows greater than
100% resource utilization.

Configure autoscale
To configure autoscale on a Power BI Premium Gen2 capacity, follow the instructions in Using Autoscale with
Power BI Premium.

Next steps
What is Power BI Premium Gen2?
Power BI Premium Gen2 architecture
Using Autoscale with Power BI Premium
Power BI Premium Gen2 FAQ
Power BI Premium Per User FAQ (preview)
Add or change Azure subscription administrators
More questions? Try asking the Power BI Community
Install the Gen2 metrics app
5/23/2022 • 4 minutes to read

The Power BI Premium Utilization and Metrics app is designed to provide monitoring capabilities for Power BI
Gen2 Premium capacities. Use this guide to install the app. Once the app is installed, you can learn how to use it.

NOTE
The app is updated regularly with new features and functionalities. If you see there's a pending update in the notifications
center, we recommend that you update the app.

Prerequisites
Before you install the Gen2 metrics app, review these requirements:
You need to be a capacity admin
The app only works with Gen2 capacities

Install the app


Follow the steps below according to the type of installation you need.

NOTE
If you're installing the app in a government cloud environment, use one of the links below. You can also use these links to
upgrade the app. When upgrading, you don't need to delete the old app.
Microsoft 365 Government Community Cloud (GCC)
Microsoft 365 Government Community Cloud High (GCC High)
Microsoft 365 Department of Defense (DoD)
Power BI for China cloud

First time installation


Upgrade the app

To install the Power BI Premium Capacity Utilization and Metrics app for the first time, follow these steps:
1. Select one of these options to get the app from AppSource:
Go to AppSource > Power BI Premium Capacity Utilization and Metrics and select Get it now .
In Power BI service, do the following:
a. Select Apps .
b. Select Get apps .
c. Search for Power BI Premium .
d. Select the Power BI Premium Capacity Utilization and Metrics app.
e. Select Get it now .
2. When prompted, sign in to AppSource using your Microsoft account and complete the registration
screen. The app will take you to the Power BI service to complete the process. Select Install to continue.

3. In the Install this Power BI app window, select Install .

4. Wait a few seconds for the app to install.


Run the app for the first time
To complete the installation, you need to configure the Power BI Premium utilization and metrics app by running
it for the first time.
1. In Power BI service, select Apps .
2. Select the Premium Capacity Utilization And Metrics app.
3. When you see the message You have to connect to your own data to view this report, select Connect.

4. In the Connect to Premium Capacity Utilization And Metrics first window, fill in the fields
according to the table below:

CapacityID
  Required: Yes
  Value: An ID of a capacity you're an admin of.
  Notes: You can find the capacity ID in the URL of the capacity management page. In the Power BI service, go to Settings > Admin portal > Capacity settings, then select a Gen2 capacity. The capacity ID is shown in the URL after /capacities/. For example, 9B77CC50-E537-40E4-99B9-2B356347E584 is the capacity ID in this URL: https://app.powerbi.com/admin-portal/capacities/9B77CC50-E537-40E4-99B9-2B356347E584. Once installed, the app will let you see all the capacities you can access.

UTC_offset
  Required: Yes
  Value: Numerical values ranging from 14 to -12. To signify a half-hour timezone, use .5. For example, for Iran's standard time enter 3.5.
  Notes: Enter your organization's standard time in Coordinated Universal Time (UTC).

Timepoint
  Required: Automatically populated
  Notes: This field is automatically populated and is used for internal purposes. The value in this field will be overwritten when you use the app.

Timepoint2
  Required: Automatically populated
  Notes: This field is automatically populated and is used for internal purposes. The value in this field will be overwritten when you use the app.

Advanced
  Required: Optional
  Value: On or Off.
  Notes: The app automatically refreshes your data at midnight. This option can be disabled by expanding the advanced option and selecting Off.

5. Select Next .
6. In the Connect to Premium Capacity Utilization And Metrics second window, fill in the following
fields:
Authentication method - Select your authentication method. The default authentication method
is OAuth2.
Privacy level setting for this data source - Select Organizational to enable app access to all
the data sources in your organization.

NOTE
ExtensionDataSourceKind and ExtensionDataSourcePath are internal fields related to the app's connector. Do not
change the values of these fields.

7. Select Sign in and continue .


8. Select a capacity from the capacity name dropdown.

9. After configuring the app, it may take a few minutes for the app to get your data. If you run the app and
it's not displaying any data, refresh the app. This behavior happens only when you open the app for the
first time.
Next steps
Use the Gen2 metrics app
Use the Gen2 metrics app
5/23/2022 • 17 minutes to read

The Power BI Premium utilization and metrics app is designed to provide monitoring capabilities for Power BI
Gen2 Premium capacities. Monitoring your capacities is essential for making informed decisions on how to best
use your Premium capacity resources. For example, the app can help identify when to scale up your capacity or
when to turn on autoscale.

NOTE
When turning on autoscale, make sure there are no Azure policies preventing autoscale from working.

The app is updated often with new features and functionalities and provides the most in-depth information into
how your capacities are performing.
To install the Gen2 metrics app, you must be a capacity admin. Once installed, anyone in the organization with
the right permissions can view the app.
The Gen2 metrics app has five pages:
Overview
Evidence
Refresh
Timepoint
Artifact Detail

Overview
This page provides an overview of the capacity performance. It's divided into the three sections listed below.
At the top of each page, the CapacityID field allows you to select the capacity the app shows results for.
Artifacts
The artifacts section is made up of two visuals, one on top of the other, on the left side of the page. The top visual
is a stacked column chart, and below it is a matrix table.

Multi metric column chart


A stacked column chart that provides an hourly view of your capacity's usage. Drill down to a specific day to
identify daily patterns. Selecting each stacked column will filter the main matrix and the other visuals according
to your selection.

The Multi metric column chart displays the four values listed below. It shows the top results for these values per
Power BI item during the past two weeks.

CPU - CPU processing time in seconds.


Duration - Processing time in seconds.
Operations - The number of Power BI operations that took place.
Users - The number of users that performed operations.
Matrix by artifact and operation
A matrix table that displays metrics for each Power BI item on the capacity.
To gain a better understanding of your capacity's performance, you can sort this table according to the
parameters listed below.

Artifacts - A list of Power BI items active during the selected period of time. The item name is a string
with the syntax: item name \ item type \ workspace name. You can expand each entry to show the various
operations (such as queries and refreshes) the item performed.
CPU (s) - CPU processing time in seconds. Sort to view the Power BI items that consumed the most CPU over
the past two weeks.
Duration (s) - Processing time in seconds. Sort to view the Power BI items that needed the longest
processing time during the past two weeks.
Users - The number of users that used the Power BI item.
Artifact Size - The amount of memory a Power BI item needs. Sort to view the Power BI items that have
the largest memory footprint.
Overloaded minutes - Displays a sum of 30-second increments where overloading occurred at least
once. Sort to view the Power BI items that were affected the most by the overload penalty.
Performance delta - Displays the performance effect on Power BI items. The number represents the
percent of change from seven days ago. For example, 20 suggests that there's a 20% improvement today,
compared with the same metric taken a week ago.
To create the performance delta, Power BI calculates an hourly average for all the fast operations that take
under 200 milliseconds to complete. The hourly value is used as a slow moving average over the last
seven days (168 hours). The slow moving average is then compared to the average between the most
recent data point and a data point from seven days ago. The performance delta indicates the difference
between these two averages.
You can use the performance delta value to assess whether the average performance of your Power BI
items improved or worsened over the past week. The higher the value is, the better the performance is
likely to be. A value close to zero indicates that not much has changed, and a negative value suggests that
the average performance of your Power BI items got worse over the past week.
Sorting the matrix by the performance delta column helps identify datasets that have had the biggest
change in their performance. During your investigation, don't forget to consider the CPU (s) and number
of Users. The performance delta value is a good indicator when it comes to Power BI items that have a
high CPU utilization because they're heavily used or run many operations. However, small datasets with
little CPU activity may not reflect a true picture, as they can easily show large positive or negative values.
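
The app's exact calculation is internal, but the following illustrative sketch (with fabricated data) conveys the general idea of comparing a recent average against a seven-day moving average:

# Illustrative only: approximate a performance-delta-style metric.
# $hourlyAvg holds 168 hourly averages of fast-operation durations (ms).
$hourlyAvg = 1..168 | ForEach-Object { Get-Random -Minimum 80 -Maximum 120 }  # fabricated data

$movingAvg = ($hourlyAvg | Measure-Object -Average).Average
$recentAvg = ($hourlyAvg[167] + $hourlyAvg[0]) / 2    # newest point and the week-old point

# Positive => durations dropped versus the weekly baseline (improvement).
$delta = [math]::Round((($movingAvg - $recentAvg) / $movingAvg) * 100, 1)
"Approximate performance delta: $delta%"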
Performance
The performance section is made up of four visuals, one on top of the other, in the middle of the page.

CPU over time


Displays the CPU usage of the selected capacity over time. Filters applied to the page in the Multi metric column
chart affect this chart's display as follows:
No filters applied - Columns display the peak timepoint per hour.
Filters are applied - The visual displays every 30-second timepoint.

NOTE
Peak is calculated as the highest number of seconds from both interactive and background operations.

To access the Timepoint page from this visual, right-click an overloaded timepoint, select Drill through and
then select TimePoint Detail .

The CPU over time chart displays the following elements:


Interactive CPU - Red columns represent the number of CPU seconds used during interactive
operations in a 30 second period.
Interactive operations cover a wide range of resources triggered by Power BI users. These operations are
associated with interactive page loads and are handled by backend cores.
Background - Blue columns represent the number of CPU seconds used during background operations
in a 30 second period.
Background operations cover Power BI backend processes that are not directly triggered by users, such as
data refreshes. These operations are handled by backend cores.
CPU Limit - A yellow dotted line that shows the threshold of the allowed number of CPU seconds for the
selected capacity. Columns that stretch above this line, represent timepoints where the capacity is
overloaded.
Overloaded minutes per hour
Displays a score that represents the severity of the effect overload had on the performance of a Power BI item. If no item
is filtered, this chart shows the maximum value seen from all items at each load evaluation interval (30 seconds)
in the past two weeks.
Artifact size
Displays the memory footprint recorded for Power BI items over time. If no item is filtered, this chart shows the
maximum value seen from all items at each ten-minute time sample in the past two weeks.
Performance profile
Displays the percentage of fast, moderate, and slow operations from the total number of operations performed
on a Power BI item, over the past two weeks. If no item is filtered, this chart shows the performance profile for
datasets on the entire capacity.
Weekly trendlines
The weekly trendlines section is made up of four visuals, one on top of the other, in the right side of the report.
These visuals summarize the capacity's behavior, providing a snapshot of your capacity that highlights trends
for the past four weeks.

CPU
Displays the total CPU power your capacity consumed over the past four weeks. Each data point is the
aggregated sum of CPU used for the past seven days.
Active Artifacts
Displays the number of Power BI items (such as reports, dashboards, and datasets) that used CPU during the
past four weeks.
Active Users
Displays the number of users that used the capacity during the past four weeks.
Cores
Displays the number of cores used by the capacity in the past four weeks. Each data point is the maximum
capacity size reported during that week. If your capacity used autoscaling or scaled up to a bigger size, the visual
will show the increase.

Evidence
This page provides information about overloads in your capacity. You can use it to establish which Power BI
items (such as reports, dashboards, and datasets) cause overload, and which items are affected by this overload.

NOTE
This page only displays data when the capacity is overloaded.

When you detect a Power BI item that causes overload, you can either optimize that item to reduce its impact on
the capacity, or you can scale up the capacity.
Artifacts causing overloading
You can visually identify the different Power BI items that cause overload, by using the timeline. Each day in the
timeline displays items causing overload. Drill down to see an hourly timeline. The value shown is an aggregate
of the CPU power consumed by artifacts when they overloaded the capacity.
Overloaders
Use this visual to identify the Power BI items that generate impactful overload events. This is shown as an
Overloading score when you select the Overloaders pivot. The overloading score for an artifact is derived from
the severity of an overload event, and from how frequently the overload event occurred over the past 14 days.
The score has no physical unit.

Switch to the Overloaded artifacts pivot to identify the items most affected by overload over the past 14 days.
The overloading impact can affect either the item that's causing the overload, or other items that are hosted in
the same capacity.
The Overloaded time (s) value is the amount of processing time that was impacted by an overload penalty. This
value is shown for each affected item, over the past 14 days.
Overloading windows
Use this visual to understand whether overload or autoscale events happen due to a single Power BI item, or
many items. Each Power BI item is given a different color.
Each column represents a 30-second window where CPU usage for the capacity exceeded its allowance. The height
of the column represents the amount of CPU used.
The 30-second CPU allowance is determined by the number of v-cores your capacity has. When autoscale is
turned on, each added autoscale v-core adds 15 seconds to the allowance. When autoscale isn't turned on, or if
autoscale is fully utilized, penalties are applied to interactive operations in the next 30-second window. You can
see a visualization of these penalties in the Artifacts overloaded (seconds) chart.
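
The allowance arithmetic is straightforward; as a quick sketch (P1 values assumed):

# Each base backend v-core grants 30 CPU seconds per window; each
# autoscaled v-core adds 15 (it is split between frontend and backend).
$baseBackendVCores = 4     # e.g., P1
$autoscaledVCores  = 1

$allowance = ($baseBackendVCores * 30) + ($autoscaledVCores * 15)   # -> 135
"Per-window CPU allowance: $allowance seconds"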
To access the Timepoint page from this visual, right-click an overloaded timepoint, select Drill through and
then select TimePoint Detail .

Artifacts overloaded (seconds)


Use this visual to understand whether overloading Power BI items impacts their own performance, or whether
they produce a noisy neighbor problem by impacting the performance of other items. Each item is given a
different color.
The column height represents the duration of operations subject to overload penalties, which occur when
autoscale isn't turned on or used to its maximum.
Number of users overloaded
Use this visual to understand how widespread the impact of overload is. The visual will help you determine
whether a single user is impacted by an overload event, or whether the overload event impacts multiple users.
The column height represents the number of distinct users affected when overload occurs.

Refresh
This page is designed to help you identify aspects of refresh performance, such as refresh CPU
consumption.

NOTE
You can get to a version of this page, dedicated to a specific Power BI item, using the drill through feature in one of the
visuals that displays individual items. The visuals in the drill through version of the page are identical to the ones listed
below. However, they only display information for the item you're drilling into.

At the top of the page there's a multi-selection pivot that filters the refresh page according to the options
listed below. Each of these pivots filters all the visuals in the refresh page.

Artifact Kind - Filter the page by Power BI item type, such as report, dataset, and dashboard.
Status - Filter the page by failed or successful operations.
Metric - Filter the page by one of the following:
CPU - CPU consumption
Duration - Operation processing time
Operations - Number of operations
Operation - Filter according to the type of operation selected.
Refresh by artifact
Displays the breakdown of the metric selected in the pivot at the top, in the past 14 days. These breakdowns can
indicate which refresh optimization is more likely to reduce the capacity footprint or the data source load.
When you select CPU, you can identify whether to reduce the capacity footprint.
When you select Duration, you can identify which data source load to reduce.
Duration
Each column represents the number of seconds it took to complete a single operation per hour, over a 14-day
period.
CPU
Each column represents the number of CPU seconds used to complete a single operation per hour, over a 14-day
period.
Operations
Each column represents the number of operations that took place per hour, over a 14-day
period.
Refresh detail
A matrix table that describes all the metadata for each individual refresh operation that took place. Selecting a
cell in the visual will filter the matrix to show specific events.
Scheduled and manual refresh workflows can trigger multiple internal operations in the backend service. For
example, refreshes sometimes perform automatic retries if a temporary error occurred. These operations might
be recorded in the app using different activity IDs. Each activity ID is represented as a row in the table. When
reviewing the table, take into consideration that several rows may indicate an operation of a single activity.
The table has a Ratio column describing the ratio between CPU time and processing time. A low ratio suggests
data source inefficiencies, where the Power BI service spends more time waiting for the data source and less
time processing the refresh.
Refresh operations
On the right side of the refresh page, there are two visuals designed to help you identify patterns.
Timeline - Displays the number of operations per day, for the past 14 days.
Score card - Displays the total number of performed operations.

Timepoint
This page provides a detailed view of every operation that resulted in CPU activity in a given timepoint. Use this
page to understand which interactive and background operations contributed the most to CPU usage.

IMPORTANT
You can only get to this page by using the drill through feature in an overloaded timepoint in one of these visuals:
CPU over time in the Overview page
Overloading windows in the Evidence page
When the total combined CPU for interactive and background operations exceeds the 30-second timepoint
allowance, the capacity is overloaded, and, depending on whether autoscale is enabled, either autoscale or
throttling is applied.
Autoscale is enabled - If the capacity has autoscale enabled, a new v-core will get added for the next 24
hours and will be shown as an increased value in the CPU Limit line in the CPU over time chart.

NOTE
When autoscale is enabled, if the capacity reaches the maximum number of v-cores allowed by the autoscale
operation, throttling is applied.

Autoscale isn't enabled - If autoscale isn't enabled, throttling gets applied to every interactive
operation in the subsequent timepoint.
Top row visuals
This section describes the operations of the visuals in the top row of the timepoint page.
Top left card - Displays the timepoint used to drill through to this page.
Heartbeat line chart - Shows a 60-minute window of CPU activity. Use this visual to establish the
duration of peaks and troughs.
Vertical red line - The timepoint you currently drilled to view. The visual shows the 30 minutes of
CPU activity leading to the selected timepoint, as well as the 30 minutes of CPU activity after the
selected timepoint.
Blue line - Total CPUs.
Yellow line - The capacity allowance.

NOTE
If the blue line is above the yellow line the capacity is overloaded.

Interactive operations card - Displays the total number of interactive operations that contributed to
the CPU's activity during this timepoint.
Background operations card - Displays the total number of background operations that contributed
to the CPU's activity during this timepoint.
SKU card - Displays the current SKU.
Capacity CPU card - Displays the total number of CPU seconds allowed for this capacity, for a given 30
second timepoint window.
Interactive Operations
A table showing every interactive operation that contributed CPU usage in the timepoint used to drill through to
this page. Once an interactive operation completes, all of the CPU seconds used by it get attributed to the
timepoint window.
Artifact - The name of the Power BI item, its type, and its workspace details.
Operation - The type of interactive operation.
Start - The time the interactive operation began.
End - The time the interactive operation finished.
Status - An indication showing if the operation succeeded or failed.

NOTE
CPU usage for failed operations is counted when determining if the capacity is in overload.

User - The name of the user that triggered the interactive operation.
Duration - The number of seconds the interactive operation took to complete.
Total CPU - The number of CPU seconds used by the interactive operation. This metric contributes to
determining whether the capacity exceeds the total number of CPU seconds allowed for the capacity.
Timepoint CPU - The number of CPU seconds assigned to the interactive operation in the current
timepoint.
Throttling - The number of seconds of throttling applied to this interactive operation because of the
capacity being overloaded in the previous timepoint.
% Of Capacity - Interactive CPU operations as a proportion of the overall capacity allowance.
Background Operations
A table showing every background operation that contributed CPU usage to the timepoint window used to drill
through to this page. Every background operation that completed in the prior 24 hours (defined as 2,880 x 30-
second timepoint windows) contributes a small portion of its total usage to the CPU value. This means that a
background operation that completed the previous day can contribute some CPU activity toward determining
whether the capacity is in overload.
All the columns in the background operations table are similar to the ones in the interactive operations table.
However, the background operations table doesn't have a users column.

Artifact Detail
This page provides useful information about a specific Power BI item.

IMPORTANT
You can only get to this page by using the drill through feature in one of the visuals that displays individual Power BI
items.

NOTE
Some of the visuals in the Artifact Detail page may not display information. A visual will not show anything when it's
designed to display an event that hasn't occurred.

You can tell which Power BI item you're reviewing by looking at the card at the top left side of the report,
highlighted below. The syntax of this card is workspace \ Power BI item type \ Power BI item name.
Overloading
The overloading visual displays time slots where overloading occurred involving the Power BI item you're
drilling into.
The overloading visual has the following columns:
Date - The date the item was in overload.
Overloaded mins - Summed 30 second windows where at least one overload event took place.
Overload time % - The number of overloaded seconds divided by the duration of interactive operations
that took place.
Performance
Displays the percentage of fast, moderate, and slow operations out of the total number of operations performed
by the Power BI item you're drilling into, over the past two weeks (a short classification sketch follows this list).
Fast - The moving average of fast operations as a percentage of all the operations over time. A fast
operation takes less than 100 milliseconds.
Moderate - The moving average of moderate operations as a percentage of all the operations over time.
A moderate operation takes between 100 milliseconds to two seconds.
Slow - The moving average of slow operations as a percentage of all the operations over time. A slow
operation takes over two seconds.
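
As a reference, here's a minimal sketch that classifies an operation duration using those bands:

# Classify an operation duration using the bands described above.
function Get-OperationSpeed([double]$durationSeconds) {
    if ($durationSeconds -lt 0.1)     { 'Fast' }      # under 100 milliseconds
    elseif ($durationSeconds -le 2.0) { 'Moderate' }  # 100 milliseconds to two seconds
    else                              { 'Slow' }      # over two seconds
}

Get-OperationSpeed 0.35   # -> Moderate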
Artifact size
This visual displays the peak amount of memory detected in any three hour window, over a 14 day period, for
the item you're drilling into. You can cross filter this visual from the matrix by artifact and operation visual, to
show a peak memory profile for an individual day.
CPU duration and users
Use these visuals to review CPU consumption, operation duration and number of users for the item you're
drilling into. In these visuals, each column represents a single hour over a 14 day period.

CPU - Each column displays the number of CPU seconds used to complete each operation per hour.
Duration - Each column displays the number of seconds used to complete each operation per hour.
Users - Each column displays the number of active users per hour.

Considerations and limitations


The app displays results for the last 14 or 28 days, depending on the visual.
The app only displays memory measurements and performance breakdown for datasets.
The app only supports monitoring datasets that use import mode. To monitor Power BI service live
connections use Azure Analysis Services.
The Users column in the visuals displays how many distinct users have been using a Power BI item (such
as a report or dashboard). When you expand the measure to display the user breakdown for different types
of operations for this item, the count may become inaccurate.
Email subscriptions will be sent with the app's default filter and slicer states.

Next steps
Install the Gen2 metrics app
Backup and restore datasets with Power BI Premium
5/23/2022 • 5 minutes to read

You can use the Backup and Restore feature with Power BI datasets if you have a Power BI Premium or
Premium Per User (PPU) license, similar to the backup and restore operations available in tabular models for
Azure Analysis Services (Azure AS).
You can use SQL Server Management Studio (SSMS), Analysis Services cmdlets for PowerShell, and other tools
to perform backup and restore operations in Power BI using XMLA endpoints. The following sections describe
backup and restore concepts for Power BI datasets, certain requirements, and other considerations.

The ability to back up and restore Power BI datasets provides a migration path from Azure Analysis Services
workloads to Power BI Premium. It also enables dataset backups for multiple scenarios, including protection
against corruption or loss, data retention requirements, and tenant movement, among others.

Using dataset backup and restore


The Backup and Restore feature uses existing connections between Power BI and Azure, such as the ability to
register an Azure Data Lake Gen2 (ADLS Gen2) storage account at the tenant- or workspace-level to facilitate
dataflow storage and operations. Since Backup and Restore uses the same connection, no other storage account
is required.
You can also perform offline backups, downloading the files from your ADLS Gen2 storage account using the file
system, Azure Storage Explorer, .NET tools, and PowerShell cmdlets, such as the
Get-AzDataLakeGen2ItemContent cmdlet. The following image shows a workspace with three datasets and their
corresponding backup files in Azure Storage Explorer.
To learn how to configure Power BI to use an ADLS Gen2 storage account, see configuring dataflow storage to
use Azure Data Lake Gen 2.
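
As a hedged sketch of the offline download path mentioned above (the storage account, workspace, and file names are placeholders; assumes the Az.Storage module):

# Download a dataset backup file from the ADLS Gen2 account attached to
# the workspace. 'power-bi-backup' is the container Power BI creates.
$ctx = New-AzStorageContext -StorageAccountName "contosobackups" -UseConnectedAccount

Get-AzDataLakeGen2ItemContent -Context $ctx `
    -FileSystem "power-bi-backup" `
    -Path "Contoso Sales/SalesDataset.abf" `
    -Destination "C:\Backups\SalesDataset.abf"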
Multi-geo considerations
Backup and Restore relies on the Azure connections infrastructure in Power BI to register an Azure Data Lake
Gen2 (ADLS Gen2) storage account at the tenant or workspace level. As such, you should provision the storage
account in the region of your Power BI Premium capacity to avoid data transfer costs across regional
boundaries. Check your data residency requirements before configuring your workspaces on a multi-geo
Premium capacity with a storage account.
Who can perform backup and restore
With an ADLS Gen2 storage account associated with a workspace, users who have write or admin permissions
can conduct backups. Such users may be a workspace admin, member, or contributor, or may not be part of the
workspace-level roles but have direct write permission to the dataset.
To restore an existing dataset, users who have write or admin permission to the dataset can conduct a restore
operation. To restore a new dataset, the user must be an admin, member, or contributor of the workspace.
To browse the backup/restore filesystem using Azure Storage Explorer (the Browse... button in SSMS), a user
must be an admin, member, or contributor of the workspace.
Power BI associates workspaces with their backup directories based on the workspace name. With owner
permissions at the storage account level, you can download backup files or copy them from their original
location to the backup directory of a different workspace, and restore them there if you are a workspace
administrator in the target workspace as well.
Storage account owners have unrestricted access to the backup files, so ensure storage account permissions are
set and maintained carefully.
How to perform backup and restore
Backup and Restore requires using XMLA-based tools, such as SQL Server Management Studio (SSMS). There
is no backup or restore facility or option in the Power BI user interface. Because of the XMLA dependency,
Backup and Restore currently requires your datasets to reside on a Premium or PPU capacity.
The storage account settings for Backup and Restore can be applied at either the tenant or the workspace
level.
For Backup and Restore , Power BI creates a new container called power-bi-backup in your storage account,
and creates a backup folder using the same name as your workspace in the power-bi-backup container. If you
configure a storage account at the tenant level, Power BI only creates the power-bi-backup container. Power BI
creates the backup folder at the time you attach the storage account to a workspace. If you configure a storage
account at the workspace level, Power BI creates the power-bi-backup container and creates the backup folder.
During backup and restore, the following actions apply:
Backup files are placed into the backup folder in the power-bi-backup container
For restore, you must place the backup files (.abf files) into the folder before conducting a restore
If you rename a workspace, the backup folder in the power-bi-backup container is automatically renamed to
match. However, if you have an existing folder with the same name as the renamed workspace, the automatic
renaming for the backup folder will fail.
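
To initiate a backup itself, you issue a TMSL backup command through the XMLA endpoint. Here's a hedged sketch using Invoke-ASCmd from the SqlServer PowerShell module; the workspace and dataset names are placeholders, not values from this article:

# Back up a dataset over the XMLA endpoint. The backup file lands in the
# workspace's folder inside the power-bi-backup container.
$tmsl = @"
{
  "backup": {
    "database": "SalesDataset",
    "file": "SalesDataset.abf",
    "allowOverwrite": true
  }
}
"@

Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/Contoso Sales" -Query $tmsl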

Considerations and limitations


When using the Backup and Restore feature with Power BI, keep the following considerations in mind.
Power BI must be able to access your ADLS Gen2 directly. Your ADLS Gen2 cannot be located in a VNET.
If your ADLS Gen2 is already working with Backup and Restore , if you disconnect and later reconfigure it
to work with Backup and Restore again, you must first rename or move the previous backup folder, or the
attempt will result in errors and failure.
Restore only supports restoring the database as a Large Model (Premium) database.
Only the enhanced format model (V3 model) is allowed to be restored.
Password encryption in the backup command is not supported.
The restore command supports a property named IgnoreIncompatibilities, which addresses row-level security
(RLS) incompatibilities between Azure AS and Power BI Premium. Power BI Premium only supports read
permission for roles, whereas Azure AS supports all permissions. If you try to restore a backup file in which
some roles have permissions other than read, you must specify IgnoreIncompatibilities in your restore
command; otherwise, the restore will fail. When IgnoreIncompatibilities is specified, any role whose permission
is not read is dropped. There is currently no UI support for IgnoreIncompatibilities in SSMS, so you need to
specify it in the restore command manually. For example:

{
  "restore": {
    "database": "DB",
    "file": "/Backup.abf",
    "allowOverwrite": true,
    "security": "copyAll",
    "ignoreIncompatibilities": true
  }
}

Next steps
What is Power BI Premium?
SQL Server Management Studio (SSMS)
Analysis Services cmdlets for PowerShell
Dataset connectivity with the XMLA endpoint
Using Autoscale with Power BI Premium
Power BI Premium FAQ
Power BI Premium Per User FAQ
Add or change Azure subscription administrators
Configuring tenant and workspace storage
More questions? Try asking the Power BI Community
Using Autoscale with Power BI Premium
5/23/2022 • 3 minutes to read

Power BI Premium offers scale and performance for Power BI content in your organization. With Power BI
Premium Gen2, many improvements are introduced, including enhanced performance, greater scale, and
improved metrics. In addition, Premium Gen2 enables customers to automatically add compute capacity to
avoid slowdowns under heavy use, using Autoscale.

Autoscale uses an Azure subscription to automatically use more v-cores (virtual CPU cores) when the
computing load on your Power BI Premium subscription would otherwise be slowed by its capacity. This article
describes the steps necessary to get Autoscale working for your Power BI Premium subscription. Autoscale only
works with Power BI Premium Gen2.
To enable Autoscale, the following steps need to be completed:
1. Configure an Azure subscription to use with Autoscale.
2. Enable Autoscale in the Power BI Admin portal
The following sections describe the steps in detail.

NOTE
Autoscale isn’t available for Microsoft 365 Government Community Cloud (GCC), due to the use of the commercial
Azure cloud.
Embedded Gen 2 does not provide an out-of-the-box vertical autoscale feature. To learn about alternative autoscale
options for Embedded Gen2, see Autoscaling in Embedded Gen2

Configure an Azure subscription to use with Autoscale


To select and configure an Azure subscription to work with Autoscale, you need to have contributor rights for
the selected Azure subscription. Any user with Account admin rights for the Azure subscription can add a user as
a contributor. In addition, you must be an admin for the Power BI tenant to enable Autoscale.
To select an Azure subscription to work with Autoscale, take the following steps:
1. Log into the Azure portal and in the search box type and select Subscriptions .

2. From the Subscriptions page, select the subscription you want to work with autoscale.

3. From the Settings selections for your selected subscription, select Resource groups .
4. Select Create to create a resource group to use with Autoscale.

5. Name your resource group and select Review + create. In the following image, the resource group is
called powerBIPremiumAutoscaleCores. You can name your resource group whatever you prefer. Just
remember the name of the subscription and the name of your resource group, since you'll need to select
them again when you configure Autoscale in the Power BI Admin Portal.
6. Azure validates the information. After the validation process completes successfully, select Create . Once
created, you receive a notification in the upper-right corner of the Azure portal.
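If you prefer to script this step, a minimal sketch using the Az PowerShell module follows; the subscription ID, resource group name, and region are placeholders:

# Assumes the Az PowerShell module is installed (Install-Module Az) and you can sign in.
Connect-AzAccount

# Select the subscription you want to use with Autoscale (placeholder ID).
Set-AzContext -Subscription "00000000-0000-0000-0000-000000000000"

# Create the resource group that will hold the Autoscale v-core resource.
New-AzResourceGroup -Name "powerBIPremiumAutoscaleCores" -Location "westus2"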
Enable Autoscale in the Power BI Admin portal
Once you've selected the Azure subscription to use with Autoscale, and created a resource group as described in
the previous section, you're ready to enable Autoscale and associate it with the resource group you created. The
person configuring Autoscale must be at least a contributor for the Azure subscription to successfully complete
these steps. You can learn more about assigning a user to a contributor role for an Azure subscription.

NOTE
After creating the subscription and enabling Autoscale in the admin portal, a
Microsoft.PowerBIDedicated/autoScaleVCores resource is created. Make sure that you don't have any Azure policies
that prevent Power BI Premium from provisioning, updating or deleting the
Microsoft.PowerBIDedicated/autoScaleVCores resource.

The following steps show you how to enable and associate Autoscale with the resource group.
1. Open the Power BI Admin portal and select Capacity settings from the left pane. Information about
your Power BI Premium capacity is displayed.

2. Autoscale only works with Power BI Premium Gen2. Enabling Gen2 is easy: just move the slider to
Enabled in the Premium Generation 2 box.
3. Select the Manage auto-scale button to enable and configure Autoscale. The Auto-scale settings
pane appears. Select Enable auto scale.

4. You can then select the Azure subscription to use with Autoscale. Only subscriptions available to the
current user are displayed, which is why you must be at least a contributor for the subscription. Once
your subscription is selected, select the Resource group you created in the previous section, from the
list of resource groups available to the subscription.
5. Next, assign the maximum number of v-cores to use for Autoscale, and then select Save to save your
settings. Power BI applies your changes, then closes the pane and returns the view to Capacity settings,
where you can see your settings have been applied. In the following image, a maximum of two v-cores
was configured for Autoscale.

Next steps
What is Power BI Premium?
Power BI Premium FAQ
Power BI Premium Per User FAQ
Add or change Azure subscription administrators
Configure workloads in a Premium capacity
5/23/2022 • 13 minutes to read

This article lists the workloads for Power BI Premium, and describes their capacities. Use the Gen2 and Gen1
tabs to review the differences between workloads for these Premium offerings.

IMPORTANT
Premium Gen1, also known as the original version of Premium, is being deprecated. If you're still using Premium Gen1,
you need to migrate your Power BI content to Premium Gen2. For more information, see Plan your transition to Power BI
Premium Gen2.

NOTE
Workloads can be enabled and assigned to a capacity by using the Capacities REST APIs.

Supported workloads
Gen2
Gen1

Query workloads are optimized for and limited by resources determined by your Premium capacity SKU.
Premium capacities also support additional workloads that can use your capacity's resources.
The following list describes which Premium Gen2 SKUs support each workload:
AI - All SKUs are supported apart from the EM1/A1 SKUs
Datasets - All SKUs are supported
Dataflows - All SKUs are supported
Paginated reports - All SKUs are supported
Configure workloads
You can tune the behavior of the workloads, by configuring workload settings for your capacity.
Gen2
Gen1

IMPORTANT
All workloads are always enabled and cannot be disabled. Your capacity resources are managed by Power BI according to
your capacity usage.

To configure workloads in the Power BI admin portal


1. Sign in to Power BI using your admin account credentials.
2. From the page header, select ... > Settings > Admin portal.
3. Go to Capacity settings and from the Power BI Premium tab, select a capacity.
4. Expand Workloads.
5. Set the values for each workload according to your specifications.
6. Select Apply.
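The same settings can also be changed programmatically through the Capacities REST APIs mentioned earlier. The following is a hedged sketch using Invoke-PowerBIRestMethod from the MicrosoftPowerBIMgmt module; the capacity ID and the values are placeholders:

# Assumes the MicrosoftPowerBIMgmt module is installed and you're a capacity admin.
Login-PowerBIServiceAccount

# Placeholder capacity ID (visible in the Admin portal URL or via the Get Capacities API).
$capacityId = "00000000-0000-0000-0000-000000000000"

# Patch one workload's settings; here, the Dataflows workload as an example.
$body = '{ "state": "Enabled", "maxMemoryPercentageSetByUser": 40 }'
Invoke-PowerBIRestMethod -Method Patch -Url "capacities/$capacityId/Workloads/Dataflows" -Body $body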
Monitor workloads
Gen2
Gen1

Use the Power BI Premium utilization and metrics app to monitor your capacity's activity.

IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. For more information, see capacity and reliability notifications.

AI (Preview)
The AI workload lets you use cognitive services and Automated Machine Learning in Power BI. Use the following
settings to control workload behavior.

SETTING NAME | DESCRIPTION
Max Memory (%) ¹ | The maximum percentage of available memory that AI processes can use in a capacity.
Allow usage from Power BI Desktop | This setting is reserved for future use and doesn't appear in all tenants.
Allow building machine learning models | Specifies whether business analysts can train, validate, and invoke machine learning models directly in Power BI. For more information, see Automated Machine Learning in Power BI (Preview).
Enable parallelism for AI requests | Specifies whether AI requests can run in parallel.

¹ Premium Gen2 doesn't require memory settings to be changed. Memory in Premium Gen2 is automatically
managed by the underlying system.
Datasets
Use the settings in the table below to control workload behavior. There's additional usage information below the
table for some of the settings.

NOTE
In Premium Gen1, the datasets workload is enabled by default and cannot be disabled.

SETTING NAME | DESCRIPTION
Max Memory (%) ¹ | The maximum percentage of available memory that datasets can use in a capacity.
XMLA Endpoint | Specifies that connections from client applications honor the security group membership set at the workspace and app levels. For more information, see Connect to datasets with client applications and tools.
Max Intermediate Row Set Count | The maximum number of intermediate rows returned by DirectQuery. The default value is 1000000, and the allowable range is between 100000 and 2147483646. The upper limit may need to be further constrained based on what the datasource supports.
Max Offline Dataset Size (GB) | The maximum size of the offline dataset in memory. This is the compressed size on disk. The default value is 0, which is the highest limit defined by SKU. The allowable range is between 0 and the capacity size limit.
Max Result Row Set Count | The maximum number of rows returned in a DAX query. The default value is -1 (no limit), and the allowable range is between 100000 and 2147483647.
Query Memory Limit (%) | The maximum percentage of available memory in the workload that can be used for executing an MDX or DAX query. The default value is 0, which results in the SKU-specific automatic query memory limit being applied.
Query Timeout (seconds) | The maximum amount of time before a query times out. The default is 3600 seconds (1 hour). A value of 0 specifies that queries won't time out.
Automatic page refresh | On/Off toggle to allow premium workspaces to have reports with automatic page refresh based on fixed intervals.
Minimum refresh interval | If automatic page refresh is on, the minimum interval allowed for page refresh interval. The default value is five minutes, and the minimum allowed is one second.
Change detection measure | On/Off toggle to allow premium workspaces to have reports with automatic page refresh based on change detection.
Minimum execution interval | If change detection measure is on, the minimum execution interval allowed to poll for data changes. The default value is five seconds, and the minimum allowed is one second.

¹ Premium Gen2 doesn't require memory settings to be changed. Memory in Premium Gen2 is automatically
managed by the underlying system.
Max Intermediate Row Set Count
Use this setting to control the impact of resource-intensive or poorly designed reports. When a query to a
DirectQuery dataset results in a very large result from the source database, it can cause a spike in memory
usage and processing overhead. This situation can lead to other users and reports running low on resources.
This setting allows the capacity administrator to adjust how many rows an individual query can fetch from the
data source.
Alternatively, if the capacity can support more than the one million row default, and you have a large dataset,
increase this setting to fetch more rows.
This setting affects only DirectQuery queries, whereas Max Result Row Set Count affects DAX queries.
Max Offline Dataset Size
Use this setting to prevent report creators from publishing a large dataset that could negatively impact the
capacity. Power BI can't determine actual in-memory size until the dataset is loaded into memory. It's possible
that a dataset with a smaller offline size can have a larger memory footprint than a dataset with a larger offline
size.
If you have an existing dataset that is larger than the size you specify for this setting, the dataset will fail to load
when a user tries to access it. The dataset can also fail to load if it's larger than the Max Memory configured for
the datasets workload.
This setting is applicable for models in both small dataset storage format (ABF format) and large dataset storage
format (PremiumFiles), although the offline size of the same model might differ when stored in one format vs
another. For more information, see Large models in Power BI Premium.
To safeguard the performance of the system, an additional SKU-specific hard ceiling for max offline dataset size
is applied, regardless of the configured value. The additional SKU-specific hard ceiling in the below table does
not apply to Power BI datasets stored in large dataset storage format.

SKU | EM1/A1 | EM2/A2 | EM3/A3 | P1/A4 | P2/A5 | P3/A6 | P4/A7 | P5/A8
Hard ceiling for Max Offline Dataset Size | 3 GB | 5 GB | 6 GB | 10 GB | 10 GB | 10 GB | 10 GB | 10 GB

Max Result Row Set Count


Use this setting to control the impact of resource-intensive or poorly designed reports. If this limit is reached in
a DAX query, a report user sees the following error. They should copy the error details and contact an
administrator.

This setting affects only DAX queries, whereas Max Intermediate Row Set Count affects DirectQuery queries.
Query Memory Limit
Use this setting to control the impact of resource-intensive or poorly designed reports. Some queries and
calculations can result in intermediate results that use a lot of memory on the capacity. This situation can cause
other queries to execute very slowly, cause eviction of other datasets from the capacity, and lead to out of
memory errors for other users of the capacity.
This setting applies to all DAX and MDX queries that are executed by Power BI reports, Analyze in Excel reports,
as well as other tools that might connect over the XMLA endpoint.
Data refresh operations may also execute DAX queries as part of refreshing the dashboard tiles and visual
caches after the data in the dataset has been refreshed. Such queries may also potentially fail because of this
setting, and this could lead to the data refresh operation being shown in a failed state, even though the data in
the dataset was successfully updated.
The default setting is 0, which results in the following SKU-specific automatic query memory limit being applied.

SKU | EM1/A1 | EM2/A2 | EM3/A3 | P1/A4 | P2/A5 | P3/A6 | P4/A7 | P5/A8
Automatic Query Memory Limit | 1 GB | 2 GB | 2 GB | 6 GB | 6 GB | 10 GB | 10 GB | 10 GB

To safeguard the performance of the system, a hard ceiling of 10 GB is enforced for all queries executed by
Power BI reports, regardless of the query memory limit configured by the user. This hard ceiling doesn't apply to
queries issued by tools that use the Analysis Services protocol (also known as XMLA). Users should consider
simplifying the query or its calculations if the query is too memory intensive.
Query Timeout
Use this setting to maintain better control of long-running queries, which can cause reports to load slowly for
users.
This setting applies to all DAX and MDX queries that are executed by Power BI reports, Analyze in Excel reports,
as well as other tools that might connect over the XMLA endpoint.
Data refresh operations may also execute DAX queries as part of refreshing the dashboard tiles and visual
caches after the data in the dataset has been refreshed. Such queries may also potentially fail because of this
setting, and this could lead to the data refresh operation being shown in a failed state, even though the data in
the dataset was successfully updated.
This setting applies to a single query and not the length of time it takes to run all of the queries associated with
updating a dataset or report. Consider the following example:
The Quer y Timeout setting is 1200 (20 minutes).
There are five queries to execute, and each runs 15 minutes.
The combined time for all queries is 75 minutes, but the setting limit isn't reached because all of the individual
queries run for less than 20 minutes.
Note that Power BI reports override this default with a much smaller timeout for each query to the capacity. The
timeout for each query is typically about three minutes.
Automatic page refresh
When enabled, automatic page refresh allows users in your Premium capacity to refresh pages in their report at
a defined interval, for DirectQuery sources. As a capacity admin, you can do the following:
Turn automatic page refresh on and off
Define a minimum refresh interval
To find the automatic page refresh setting:
1. In the Power BI Admin portal, select Capacity settings.
2. Select your capacity, and then scroll down and expand the Workloads menu.
3. Scroll down to the Datasets section.

Queries created by automatic page refresh go directly to the data source, so it's important to consider reliability
and load on those sources when allowing automatic page refresh in your organization.

Dataflows
The dataflows workload lets you use dataflows self-service data prep, to ingest, transform, integrate, and enrich
data. Use the following settings to control workload behavior.

SETTING NAME | DESCRIPTION
Max Memory (%) ¹ | The maximum percentage of available memory that dataflows can use in a capacity.
Enhanced Dataflows Compute Engine (Preview) | Enable this option for up to 20x faster calculation of computed entities when working with large scale data volumes. You must restart the capacity to activate the new engine. For more information, see Enhanced dataflows compute engine.
Container Size | The maximum size of the container that dataflows use for each entity in the dataflow. The default value is 700 MB. For more information, see Container size.

¹ Premium Gen2 doesn't require memory settings to be changed. Memory in Premium Gen2 is automatically
managed by the underlying system.
Enhanced dataflows compute engine
To benefit from the new compute engine, split ingestion of data into separate dataflows and put transformation
logic into computed entities in different dataflows. This approach is recommended because the compute engine
works on dataflows that reference an existing dataflow. It doesn't work on ingestion dataflows. Following this
guidance ensures that the new compute engine handles transformation steps, such as joins and merges, for
optimal performance.
Container size
When refreshing a dataflow, the dataflow workload spawns a container for each entity in the dataflow. Each
container can take memory up to the volume specified in the Container Size setting. The default for all SKUs is
700 MB. You might want to change this setting if:
Dataflows take too long to refresh, or dataflow refresh fails on a timeout.
Dataflow entities include computation steps, for example, a join.
It's recommended you use the Power BI Premium Capacity Metrics app to analyze Dataflow workload
performance.
In some cases, increasing container size may not improve performance. For example, if the dataflow is getting
data only from a source without performing significant calculations, changing container size probably won't
help. Increasing container size might help if it will enable the Dataflow workload to allocate more memory for
entity refresh operations. By having more memory allocated, it can reduce the time it takes to refresh heavily
computed entities.
The Container Size value can't exceed the maximum memory for the Dataflows workload. For example, a P1
capacity has 25 GB of memory. If the Dataflow workload Max Memory (%) is set to 20%, Container Size (MB)
can't exceed 5000. In all cases, the Container Size can't exceed the Max Memory, even if you set a higher value.

Paginated reports
The paginated reports workload lets you run paginated reports, based on the standard SQL Server Reporting
Services format, in the Power BI service.
Paginated reports offer the same capabilities that SQL Server Reporting Services (SSRS) reports do today,
including the ability for report authors to add custom code. This allows authors to dynamically change reports,
such as changing text colors based on code expressions.

Gen2
Gen1

The paginated reports workload is enabled automatically and cannot be disabled.

Next steps
Power BI Premium Generation 2
Optimizing Power BI Premium capacities
Self-service data prep in Power BI with Dataflows
What are paginated reports in Power BI Premium?
Automatic page refresh in Power BI Desktop (preview)
Monitor capacities in the Admin portal
5/23/2022 • 4 minutes to read

The Health tab in the Capacity settings area in the Admin portal provides a metrics summary about your
capacity and enabled workloads.

NOTE
This article refers to monitoring Premium (original version) capacities. To monitor Premium Gen2 capacities, install and use
the Power BI Premium Utilization and Metrics app.

If you need more comprehensive metrics, use the Power BI Premium Capacity Metrics app. The app provides
drill-down and filtering capabilities, and the most detailed metrics for near every aspect affecting capacity
performance. To learn more, see Monitor Premium capacities with the app.

IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. See capacity and reliability notifications for more information.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.

NOTE
The Admin portal cannot be used to monitor Premium Per User (PPU) activities or capacity.
System Metrics
On the Health tab, at the highest level, CPU utilization and memory usage provide a quick view of the most
important metrics for the capacity. These metrics are cumulative, including all enabled workloads for the
capacity.

METRIC | DESCRIPTION
CPU Utilization | Average CPU utilization, as a percentage of total available CPU.
Memory Usage | Average memory usage in gigabytes (GB).

Workload metrics
For each workload enabled for the capacity, CPU utilization and memory usage are shown.

METRIC | DESCRIPTION
CPU Utilization | Average CPU utilization, as a percentage of total available CPU.
Memory Usage | Average memory usage in gigabytes (GB).

Detailed workload metrics


Each workload has additional metrics. The types of metrics shown depend on the workload. To see detailed
metrics for a workload, click the expand (down) arrow.

Dataflows
Dataflow Operations

METRIC | DESCRIPTION
Total Count | Total refreshes for each dataflow.
Success Count | Total successful refreshes for each dataflow.
Average Duration (min) | The average duration of refresh for the dataflow, in minutes.
Max Duration (min) | The duration of the longest-running refresh for the dataflow, in minutes.
Average Wait Time (min) | The average lag between the scheduled time and start of a refresh for the dataflow, in minutes.
Max Wait Time (min) | The maximum wait time for the dataflow, in minutes.

Datasets
Refresh

METRIC | DESCRIPTION
Total Count | Total refreshes for each dataset.
Success Count | Total successful refreshes for each dataset.
Failure Count | Total failed refreshes for each dataset.
Success Rate | Number of successful refreshes divided by the total refreshes, a measure of reliability.
Average Duration (min) | The average duration of refresh for the dataset, in minutes.
Max Duration (min) | The duration of the longest-running refresh for the dataset, in minutes.
Average Wait Time (min) | The average lag between the scheduled time and start of a refresh for the dataset, in minutes.
Max Wait Time (min) | The maximum wait time for the dataset, in minutes.

Query

METRIC | DESCRIPTION
Total Count | The total number of queries run for the dataset.
Average Duration (ms) | The average query duration for the dataset, in milliseconds.
Max Duration (ms) | The duration of the longest-running query in the dataset, in milliseconds.
Average Wait Time (ms) | The average query wait time for the dataset, in milliseconds.
Max Wait Time (ms) | The duration of the longest-waiting query in the dataset, in milliseconds.

Eviction

METRIC | DESCRIPTION
Model Count | The total number of dataset evictions for this capacity. When a capacity faces memory pressure, the node evicts one or more datasets from memory. Datasets that are inactive (with no query/refresh operation currently executing) are evicted first. Then the eviction order is based on a measure of 'least recently used' (LRU).

Paginated Reports
Report Execution

METRIC | DESCRIPTION
Execution Count | The number of times the report has been executed and viewed by users.

Report Usage

METRIC | DESCRIPTION
Success Count | The number of times the report has been successfully viewed by a user.
Failure Count | The number of times the report has failed to be viewed by a user.
Row Count | The number of rows of data in the report.
Data Retrieval Duration (ms) | The average amount of time it takes to retrieve data for the report, in milliseconds. Long durations can indicate slow queries or other data source issues.
Processing Duration (ms) | The average amount of time it takes to process the data for a report, in milliseconds.
Rendering Duration (ms) | The average amount of time it takes to render a report in the browser, in milliseconds.

NOTE
Detailed metrics for the AI workload are not yet available.

Next steps
Now that you understand how to monitor Power BI Premium capacities, learn more about optimizing capacities.
Optimizing Power BI Premium capacities
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the
following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Large datasets in Power BI Premium
5/23/2022 • 6 minutes to read

Power BI datasets can store data in a highly compressed in-memory cache for optimized query performance,
enabling fast user interactivity. With Premium capacities, large datasets beyond the default limit can be enabled
with the Large dataset storage format setting. When enabled, dataset size is limited by the Premium capacity
size or the maximum size set by the administrator.
Large datasets can be enabled for all Premium P SKUs, Embedded A SKUs, and with Premium Per User (PPU).
The large dataset size limit in Premium is comparable to Azure Analysis Services, in terms of data model size
limitations.
While required for datasets to grow beyond 10 GB, enabling the Large dataset storage format setting has other
benefits. If you're planning to use XMLA endpoint-based tools for dataset write operations, be sure to enable the
setting, even for datasets that you wouldn't necessarily characterize as a large dataset. When enabled, the large
dataset storage format can improve XMLA write operations performance.
Large datasets in the service do not affect the Power BI Desktop model upload size, which is still limited to 10
GB. Instead, datasets can grow beyond that limit in the service on refresh.

IMPORTANT
Power BI Premium does support large datasets. Enable the Large dataset storage format option to use datasets in
Power BI Premium that are larger than the default limit.

Enable large datasets


Steps here describe enabling large datasets for a new model published to the service. For existing datasets, only
step 3 is necessary.
1. Create a model in Power BI Desktop. If your dataset will become larger and progressively consume more
memory, be sure to configure Incremental refresh.
2. Publish the model as a dataset to the service.
3. In the service > dataset > Settings, expand Large dataset storage format, click the slider to On, and
then click Apply.

4. Invoke a refresh to load historical data based on the incremental refresh policy. The first refresh could
take a while to load the history. Subsequent refreshes should be faster, depending on your incremental
refresh policy.

Set default storage format


All new datasets created in a workspace assigned to Premium capacity can have the large dataset storage
format enabled by default.
1. In the workspace, click Settings > Premium.
2. In Default storage format, select Large dataset storage format, and then click Save.

Enable with PowerShell


You can also enable large dataset storage format by using PowerShell. You must have capacity admin and
workspace admin privileges to run the PowerShell cmdlets.
1. Find the dataset ID (GUID). On the Datasets tab for the workspace, under the dataset settings, you can
see the ID in the URL.

2. From a PowerShell admin prompt, install the MicrosoftPowerBIMgmt module.

Install-Module -Name MicrosoftPowerBIMgmt

3. Run the following cmdlets to sign in and check the dataset storage mode.
Login-PowerBIServiceAccount

(Get-PowerBIDataset -Scope Organization -Id <Dataset ID> -Include actualStorage).ActualStorage

The response should be the following. The storage mode is ABF (Analysis Services backup file), which is
the default.

Id           StorageMode
--           -----------
<Dataset ID> Abf

4. Run the following cmdlets to set the storage mode. It can take a few seconds to convert to Premium Files.

Set-PowerBIDataset -Id <Dataset ID> -TargetStorageMode PremiumFiles

(Get-PowerBIDataset -Scope Organization -Id <Dataset ID> -Include actualStorage).ActualStorage

The response should be the following. The storage mode is now set to Premium Files.

Id           StorageMode
--           -----------
<Dataset ID> PremiumFiles

You can check the status of dataset conversions to and from Premium Files by using the
Get-PowerBIWorkspaceMigrationStatus cmdlet.

Dataset eviction
Power BI uses dynamic memory management to evict inactive datasets from memory. Power BI evicts datasets
so it can load other datasets to address user queries. Dynamic memory management allows the sum of dataset
sizes to be significantly greater than the memory available on the capacity, but a single dataset must fit into
memory. For more info on dynamic memory management, see How capacities function.
You should consider the impact of eviction on large models. Despite relatively fast dataset load times, there
could still be a noticeable delay for users if they have to wait for large evicted datasets to be reloaded. For this
reason, in its current form, the large models feature is recommended primarily for capacities dedicated to
enterprise BI requirements rather than capacities mixed with self-service BI requirements. Capacities dedicated
to enterprise BI requirements are less likely to frequently trigger eviction and need to reload datasets. Capacities
for self-service BI on the other hand can have many small datasets that are more frequently loaded in and out of
memory.

On-demand load
On-demand load is enabled by default for large datasets, and can provide significantly improved report
performance. With on-demand load, you get the following benefits during subsequent queries and refreshes:
Relevant data pages are loaded on-demand (paged in to memory).
Evicted datasets are quickly made available for queries.
On-demand loading surfaces additional Dynamic Management View (DMV) information that can be used to
identify usage patterns and understand the state of your models. For example, you can check the Temperature
and Last Accessed statistics for each column in the dataset, by running the following DMV query from SQL
Server Management Studio (SSMS):

SELECT * FROM SYSTEMRESTRICTSCHEMA ($System.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS,
    [DATABASE_NAME] = '<Dataset Name>')

Checking dataset size


After loading historical data, you can use SSMS through the XMLA endpoint to check the estimated dataset size
in the model properties window.

You can also check the dataset size by running the following DMV queries from SSMS. Sum the
DICTIONARY_SIZE and USED_SIZE columns from the output to see the dataset size in bytes.

SELECT * FROM SYSTEMRESTRICTSCHEMA ($System.DISCOVER_STORAGE_TABLE_COLUMNS,
    [DATABASE_NAME] = '<Dataset Name>') //Sum DICTIONARY_SIZE (bytes)

SELECT * FROM SYSTEMRESTRICTSCHEMA ($System.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS,
    [DATABASE_NAME] = '<Dataset Name>') //Sum USED_SIZE (bytes)
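If you prefer to script this check, one approach (a sketch, not an official procedure) is to run the same DMVs with Invoke-ASCmd and sum the columns in PowerShell; the workspace URL and dataset name are placeholders:

# Sketch: run the size DMVs over the XMLA endpoint and sum the size columns.
$server = "powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace"
$db = "<Dataset Name>"

function Get-DmvColumnSum($query, $column) {
    [xml]$xml = Invoke-ASCmd -Server $server -Database $db -Query $query
    # Rows come back as XML; sum the requested column, skipping empty values.
    $rows = $xml.SelectNodes("//*[local-name()='row']")
    ($rows | Where-Object { $_.$column -match '^\d+$' } |
        ForEach-Object { [long]$_.$column } | Measure-Object -Sum).Sum
}

$dictionary = Get-DmvColumnSum "SELECT * FROM SYSTEMRESTRICTSCHEMA (`$System.DISCOVER_STORAGE_TABLE_COLUMNS, [DATABASE_NAME] = '$db')" "DICTIONARY_SIZE"
$used = Get-DmvColumnSum "SELECT * FROM SYSTEMRESTRICTSCHEMA (`$System.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS, [DATABASE_NAME] = '$db')" "USED_SIZE"

"Estimated dataset size: {0:N0} bytes" -f ($dictionary + $used)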

Default segment size


For datasets using the large dataset storage format, Power BI automatically sets the default segment size to 8
million rows to strike a good balance between memory requirements and query performance for large tables.
This is the same segment size as in Azure Analysis Services. Keeping the segment sizes aligned helps ensure
comparable performance characteristics when migrating a large data model from Azure Analysis Services to
Power BI.

Considerations and limitations


Keep in mind the following restrictions when using large datasets:
New workspaces are required: Large datasets only work with New workspaces.
Download to Power BI Desktop: If a dataset is stored on Premium Files, downloading as a .pbix file will fail.
Supported regions: Large datasets are available in Azure regions that support Azure Premium Files Storage. Review the table in region availability to see a list of all the supported regions.
Setting maximum dataset size: Maximum dataset size can be set by administrators. For more information, see Max Memory in Datasets.
Refreshing large datasets: Datasets that are close to half the size of the capacity size (for example, a 12 GB dataset on a 25 GB capacity size) may exceed the available memory during refreshes. Using the XMLA endpoint, you can configure fine-grained data refreshes, so that the memory needed by the refresh can be minimized to fit within your capacity's size.
Push datasets: Push datasets don't support the large dataset storage format.
You can't enable large datasets using the REST API.

Region availability
Large datasets in Power BI are only available in Azure regions that support Azure Premium Files Storage.
The following list provides regions where large datasets in Power BI are available. Regions not in the following
list are not supported for large models.

NOTE
Once a large dataset is created in a workspace, it must stay in that region. You cannot reassign a workspace with a large
dataset to a Premium capacity in another region.

AZURE REGION | AZURE REGION ABBREVIATION
Australia East | australiaeast
Australia Southeast | australiasoutheast
Canada East | canadaeast
Canada Central | canadacentral
Central India | centralindia
Central US | centralus
East Asia | eastasia
East US | eastus
East US 2 | eastus2
France Central | francecentral
France South | francesouth
Japan East | japaneast
Japan West | japanwest
Korea Central | koreacentral
Korea South | koreasouth
North Central US | northcentralus
North Europe | northeurope
South Central US | southcentralus
Southeast Asia | southeastasia
Switzerland North | switzerlandnorth
Switzerland West | switzerlandwest
UK South | uksouth
UK West | ukwest
West Europe | westeurope
West India | westindia
West US | westus
West US 2 | westus2

Next steps
The following links provide information that can be useful for working with large models:
Azure Premium Files Storage
Configure Multi-Geo support for Power BI Premium
Bring your own encryption keys for Power BI
How capacities function
Incremental refresh for datasets
Power BI Premium Generation 2.
Automate Premium workspace and dataset tasks
with service principals
5/23/2022 • 3 minutes to read

A service principal is an Azure Active Directory app registration you create within your tenant to perform
unattended resource and service level operations. It's a unique type of user identity with an app name,
application ID, tenant ID, and client secret or certificate for a password.
Power BI Premium uses the same service principal functionality as Power BI Embedded. To learn more, see
Embedding Power BI content with service principals.
In Power BI Premium , service principals can also be used with the XMLA endpoint to automate dataset
management tasks such as provisioning workspaces, deploying models, and dataset refresh with:
PowerShell
Azure Automation
Azure Logic Apps
Custom client applications
Only New workspaces support XMLA endpoint connections using service principals. Classic workspaces aren't
supported. A service principal has only those permissions necessary to perform tasks in workspaces to which it's
assigned. Permissions are assigned through workspace Access, much like regular UPN accounts.
To perform write operations, the capacity's Datasets workload must have the XMLA endpoint enabled for
read-write. Datasets published from Power BI Desktop should have the Enhanced metadata format feature
enabled.

Create a service principal


Service principals are created as an app registration in the Azure portal or by using PowerShell. When creating
your service principal, be sure to copy and save separately the app name, Application (client) ID, Directory
(tenant) ID, and client secret. For steps on how to create a service principal, see:
Create service principal - Azure portal
Create service principal - PowerShell
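As a quick illustration of the PowerShell route, here's a sketch using the Az module; the display name is hypothetical, and how the generated client secret is surfaced varies by Az module version:

# Sketch using the Az module; the display name is hypothetical.
Connect-AzAccount
$sp = New-AzADServicePrincipal -DisplayName "PowerBIAutomationApp"

# Save these values for later use in connection strings.
$sp.AppId                      # Application (client) ID
(Get-AzContext).Tenant.Id      # Directory (tenant) ID
# In recent Az versions, the generated secret is in $sp.PasswordCredentials.SecretText.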

Create an Azure AD security group


By default, service principals have access to any tenant settings they're enabled for. Depending on your admin
settings, access can include specific security groups or the entire organization.
To restrict service principal access to specific tenant settings, you can allow access to specific security groups.
Alternatively, you can create a dedicated security group for service principals, and exclude it from the desired
tenant settings. For steps on how to create a security group and add a service principal, see Create a basic group
and add members using Azure Active Directory.

Enable service principals


Before using service principals in Power BI, an admin must first enable service principal access in the Power BI
admin portal.
In the Power BI Admin portal > Tenant settings, expand Allow service principals to use Power BI APIs,
and then click Enabled. To apply permissions to a security group, add the group name to Specific security
groups.

Workspace access
In order for your service principal to have the necessary permissions to perform Premium workspace and
dataset operations, you must add the service principal as a workspace Member or Admin. Using Workspace
access in the Power BI service is described here, but you can also use the Add Group User REST API.
1. In the Power BI service, for a workspace, select More > Workspace access.
2. Search by application name, and add the service principal as an Admin or Member of the workspace.
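The Add Group User REST API mentioned above can be called from PowerShell as well; here's a sketch with placeholder IDs:

# Sketch: add a service principal to a workspace with the Add Group User REST API.
# The workspace (group) ID and the service principal's object ID are placeholders.
Login-PowerBIServiceAccount
$body = '{ "identifier": "<service principal object ID>", "principalType": "App", "groupUserAccessRight": "Admin" }'
Invoke-PowerBIRestMethod -Method Post -Url "groups/<workspace ID>/users" -Body $body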

Connection strings for the XMLA endpoint


Once you've created a service principal, enabled service principals for your tenant, and added the service
principal to Workspace access, you can use it as the user identity in connection strings with the XMLA endpoint.
The difference is that for the User ID and Password parameters, you specify the application ID, tenant ID, and
application secret:

Data Source=powerbi://api.powerbi.com/v1.0/myorg/<workspace name>;Initial Catalog=<dataset name>;User ID=app:<appId>@<tenantId>;Password=<app_secret>;

PowerShell
Using SQLServer module
In the following example, AppId, TenantId, and AppSecret are used to authenticate a dataset refresh operation:

Param (
    [Parameter(Mandatory=$true)] [String] $AppId,
    [Parameter(Mandatory=$true)] [String] $TenantId,
    [Parameter(Mandatory=$true)] [String] $AppSecret
)
$PWord = ConvertTo-SecureString -String $AppSecret -AsPlainText -Force

$Credential = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList $AppId, $PWord

Invoke-ProcessTable -Server "powerbi://api.powerbi.com/v1.0/myorg/myworkspace" -TableName "mytable" `
    -DatabaseName "mydataset" -RefreshType "Full" -ServicePrincipal -ApplicationId $AppId -TenantId $TenantId `
    -Credential $Credential

AMO and ADOMD


When connecting with client applications and web apps, AMO and ADOMD client libraries version 15.1.42.26
(June 2020) and higher installable packages from NuGet support service principals in connection strings using
the following syntax: app:AppID and password, or cert:thumbprint.
In the following example, appID and a password are used to perform a model database refresh operation:
using Microsoft.AnalysisServices.Tabular;

string appId = "xxx";
string authKey = "yyy";

// The service principal identity goes in the connection string as User ID=app:<appId>,
// with the client secret as the Password.
string connString = $"Provider=MSOLAP;Data source=powerbi://api.powerbi.com/v1.0/<tenant>/<workspacename>;Initial catalog=<datasetname>;User ID=app:{appId};Password={authKey};";
Server server = new Server();
server.Connect(connString);

// Request a full refresh of the DimDate table and commit the change to the dataset.
Database db = server.Databases.FindByName("adventureworks");
Table tbl = db.Model.Tables.Find("DimDate");
tbl.RequestRefresh(RefreshType.Full);
db.Model.SaveChanges();

Next steps
Dataset connectivity with the XMLA endpoint
Azure Automation
Azure Logic Apps
Power BI REST APIs
Dataset connectivity with the XMLA endpoint
5/23/2022 • 19 minutes to read

Power BI Premium, Premium Per User, and Power BI Embedded workspaces support open-platform connectivity
from Microsoft and third-party client applications and tools by using an XMLA endpoint.

What's an XMLA endpoint?


Workspaces use the XML for Analysis (XMLA) protocol for communications between client applications and the
engine that manages your Power BI workspaces and datasets. These communications go through what are
commonly referred to as XMLA endpoints. XMLA is the same communication protocol used by the Microsoft
Analysis Services engine, which, under the hood, runs Power BI's semantic modeling, governance, lifecycle, and
data management. Data sent over the XMLA protocol is fully encrypted.
By default, read-only connectivity using the endpoint is enabled for the Datasets workload in a capacity. With
read-only, data visualization applications and tools can query dataset model data, metadata, events, and schema.
Read-write operations using the endpoint can be enabled, providing additional dataset management,
governance, advanced semantic modeling, debugging, and monitoring capabilities. With read-write enabled,
datasets have more parity with Azure Analysis Services and SQL Server Analysis Services enterprise grade
tabular modeling tools and processes.

Terms of Use
Use of the XMLA endpoint is subject to the following:
Single-user application - The application uses a single user account or app identity to access a Power BI
dataset through the XMLA endpoint. Typical examples are developer tools, admin scripts, and automated
processes to perform data modeling and administrative tasks, such as altering the metadata of a dataset,
performing a backup or restore operation, or triggering a data refresh. The user account or app identity that the
client application uses to access a dataset must have a valid Premium Per User (PPU) license unless the dataset
resides on a Premium capacity.
Multi-user application - The application provides multiple users with access to a Power BI dataset. For
example, a middle-tier application integrating a dataset into a business solution and accessing the dataset on
behalf of its business users.
For Premium Per User (PPU) workspaces, the application must require each user to sign in to Power BI. The
application uses each user's access token to access the datasets. The application is not permitted to use a
service account or other app identity to perform tasks on behalf of its users. Each user must use their own
Power BI account for opening reports, accessing datasets, and executing queries.
For Premium workspaces, the application may use a service account or app identity on behalf of the end
users without requiring each user to sign in to Power BI.

Client applications and tools


Here are some of the most common applications and tools used with Azure Analysis Services and SQL Server
Analysis Services, and now supported by Power BI Premium datasets:
Microsoft Excel – Excel PivotTables are one of the most common tools used to summarize, analyze, explore, and
present summary data from Power BI datasets. Read-only is required for query operations. Click-to-Run version
of Office 16.0.13612.10000 or higher is required.
Visual Studio with Analysis Services projects – Also known as SQL Server Data Tools, or simply SSDT, is
an enterprise grade model authoring tool for Analysis Services tabular models. Analysis Services projects
extensions are supported on all Visual Studio 2017 and later editions, including the free Community edition.
Extension version 2.9.14 or higher is required to deploy tabular models to a Premium workspace. When
deploying, the model must be at the 1500 or higher compatibility level. XMLA read-write is required on the
datasets workload. To learn more, see Tools for Analysis Services.
SQL Server Management Studio (SSMS) - Supports DAX, MDX, and XMLA queries. Perform fine-grain
refresh operations and scripting of dataset metadata by using the Tabular Model Scripting Language (TMSL).
Read-only is required for query operations. Read-write is required for scripting metadata. Requires SSMS
version 18.9 or higher. Download here.
SQL Server Profiler – Installed with SSMS, this tool provides tracing and debugging of dataset events. While
officially deprecated for SQL Server, Profiler continues to be included in SSMS and remains supported for
Analysis Services and Power BI. Requires SQL Server Profiler version 18.9 or higher. User must specify the
dataset (initial catalog) when connecting with the XMLA endpoint. To learn more, see SQL Server Profiler for
Analysis Services.
Analysis Services Deployment Wizard – Installed with SSMS, this tool provides deployment of Visual Studio
authored tabular model projects to Analysis Services and Premium workspaces. It can be run interactively or
from the command line for automation. XMLA read-write is required. To learn more, see Analysis Services
Deployment Wizard.
PowerShell cmdlets – Analysis Services cmdlets can be used to automate dataset management tasks like
refresh operations. XMLA read-write is required. Version 21.1.18256 (for Premium Gen2 capacities, see
Premium Gen2 prerequisites) or higher of the SqlServer PowerShell module is required. Azure Analysis Services
cmdlets in the Az.AnalysisServices module are not supported for Power BI datasets. To learn more, see Analysis
Services PowerShell Reference.
Power BI Report Builder - A tool for authoring paginated reports. Create a report definition that specifies
what data to retrieve, where to get it, and how to display it. You can preview your report in Report Builder and
then publish your report to the Power BI service. XMLA read-only is required. To learn more, see Power BI
Report Builder.
Tabular Editor - An open-source tool for creating, maintaining, and managing tabular models using an
intuitive, lightweight editor. A hierarchical view shows all objects in your tabular model. Objects are organized by
display folders with support for multi-select property editing and DAX syntax highlighting. XMLA read-only is
required for query operations. Read-write is required for metadata operations. To learn more, see
tabulareditor.github.io.
DAX Studio – An open-source tool for DAX authoring, diagnosis, performance tuning, and analysis. Features
include object browsing, integrated tracing, query execution breakdowns with detailed statistics, DAX syntax
highlighting and formatting. XMLA read-only is required for query operations. To learn more, see daxstudio.org.
ALM Toolkit - An open-source schema compare tool for Power BI datasets, most often used for application
lifecycle management (ALM) scenarios. Perform deployment across environments and retain incremental
refresh historical data. Diff and merge metadata files, branches, and repos. Reuse common definitions between
datasets. Read-only is required for query operations. Read-write is required for metadata operations. To learn
more, see alm-toolkit.com.
Third party - Includes client data visualization applications and tools that can connect to, query, and consume
datasets in Premium workspaces. Most tools require the latest versions of MSOLAP client libraries, but some
may use ADOMD. Read-only or read-write XMLA endpoint is dependent on the operations.
Client libraries
Client applications and tools don't communicate directly with the XMLA endpoint. Instead, they use client
libraries as an abstraction layer. These are the same client libraries that applications use to connect to Azure
Analysis Services and SQL Server Analysis Services. Microsoft applications like Excel, SQL Server Management
Studio (SSMS), and Analysis Services projects extension for Visual Studio install all three client libraries and
update them along with regular application and extension updates. Developers can also use the client libraries
to build custom applications. In some cases, particularly with third-party applications, if not installed with the
application, it may be necessary to install newer versions of the client libraries. Client libraries are updated
monthly. To learn more, see Client libraries for connecting to Analysis Services.
The minimum required client library versions for Premium Gen2 capacities are listed in the Premium Gen2
prerequisites.

Optimize datasets for write operations by enabling large models


When using the XMLA endpoint for dataset management with write operations, it's recommended you enable
the dataset for large models. This reduces the overhead of write operations, which can make them considerably
faster. For datasets over 1 GB in size (after compression), the difference can be significant. To learn more, see
Large models in Power BI Premium.
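If a dataset isn't yet using the large dataset storage format, it can be switched with the Set-PowerBIDataset cmdlet, as shown in the PowerShell steps earlier in this documentation; a minimal sketch with a placeholder dataset ID:

# Requires the MicrosoftPowerBIMgmt module and appropriate admin rights.
Login-PowerBIServiceAccount
Set-PowerBIDataset -Id "<Dataset ID>" -TargetStorageMode PremiumFiles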

Enable XMLA read-write


By default, Premium capacity or Premium Per User dataset workloads have the XMLA endpoint property setting
enabled for read-only. This means applications can only query a dataset. For applications to perform write
operations, the XMLA Endpoint property must be enabled for read-write.
To enable read-write for a Premium capacity
1. Click Settings > Admin portal.
2. In the Admin portal, select Capacity settings > Power BI Premium > capacity name.
3. Expand Workloads. In the XMLA Endpoint setting, select Read Write. The XMLA Endpoint setting
applies to all workspaces and datasets assigned to the capacity.

To enable read-write for Premium Per User

1. Click Settings > Admin portal.
2. In the Admin portal, select Premium Per User.
3. Expand Dataset workload settings. In the XMLA Endpoint setting, select Read Write.

Connecting to a Premium workspace


Workspaces assigned to a capacity have a connection string in URL format. For example:
powerbi://api.powerbi.com/v1.0/[tenant name]/[workspace name]
Applications connecting to the workspace use the URL as if it were an Analysis Services server name. For example:
powerbi://api.powerbi.com/v1.0/contoso.com/Sales Workspace
Users with UPNs in the same tenant (not B2B) can replace the tenant name with myorg. For example:
powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace
B2B users must specify their organization UPN in tenant name. For example:
powerbi://api.powerbi.com/v1.0/fabrikam.com/Sales Workspace

NOTE
To determine the primary domain name and ID of a Power BI tenant, sign into the Azure portal, select Azure Active
Directory from the main menu, and then note the information on the Azure Active Directory Overview page. For more
information, see Find the Microsoft Azure AD tenant ID and primary domain name.

NOTE
Connecting to a My Workspace by using the XMLA endpoint is currently not supported.

To get the workspace connection URL


In workspace Settings > Premium > Workspace Connection, select Copy.
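Once you have the connection URL, you can verify connectivity from a script as well as from client tools. As a sketch (the workspace, dataset, and table names are hypothetical), a simple DAX query through the SqlServer module's Invoke-ASCmd looks like this:

# Sketch: run a DAX query against a dataset over the XMLA endpoint.
Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace" `
    -Database "Sales" `
    -Query "EVALUATE TOPN(10, 'DimDate')"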

Connection requirements
Initial catalog
With some tools, such as SQL Server Profiler, you must specify an Initial Catalog, which is the dataset (database)
to connect to in your workspace. In the Connect to Server dialog, select Options > Connection Properties
> Connect to database, and enter the dataset name.
Duplicate workspace names
New workspaces (created using the new workspace experience) in Power BI impose validation to disallow
creating or renaming workspaces with duplicate names. Workspaces that haven't been migrated can result in
duplicate names. When connecting to a workspace with the same name as another workspace, you may get the
following error:
Cannot connect to powerbi://api.powerbi.com/v1.0/[tenant name]/[workspace name].
To work around this error, in addition to the workspace name, specify the ObjectIDGuid, which can be copied
from the workspace objectID in the URL. Append the objectID to the connection URL. For example:
powerbi://api.powerbi.com/v1.0/myorg/Contoso Sales - 9d83d204-82a9-4b36-98f2-a40099093830
Duplicate dataset name
When connecting to a dataset with the same name as another dataset in the same workspace, append the
dataset guid to the dataset name. You can get both dataset name and guid when connected to the workspace in
SSMS.
Delay in datasets shown
When connecting to a workspace, changes from new, deleted, and renamed datasets can take up to a few
minutes to appear.
Unsupported datasets
The following datasets aren't accessible by using the XMLA endpoint. These datasets won't appear under the
workspace in SSMS or in other tools:
Datasets based on a live connection to an Azure Analysis Services or SQL Server Analysis Services model.
Datasets based on a live connection to a Power BI dataset in another workspace. To learn more, see Intro to
datasets across workspaces.
Datasets with Push data by using the REST API.
Datasets in My Workspace.
Excel workbook datasets.
Server/workspace alias
Server name aliases, supported in Azure Analysis Services, are not supported for Premium workspaces.

Security
In addition to the XMLA Endpoint property being enabled read-write by the capacity admin, the tenant-level
setting Allow XMLA endpoints and Analyze in Excel with on-premises datasets must be enabled in the
admin portal. If you need to generate Analyze in Excel (AIXL) files that connect to the XMLA endpoint, the tenant-
level setting Allow live connections should also be enabled. These settings are both enabled by default.
Allow XMLA endpoints and Analyze in Excel with on-premises datasets is an integration setting.

The following table describes the implications of these settings for XMLA and Analyze in Excel (AIXL):

SETTING | ALLOW XMLA ENDPOINTS AND ANALYZE IN EXCEL WITH ON-PREMISES DATASETS = DISABLED | ALLOW XMLA ENDPOINTS AND ANALYZE IN EXCEL WITH ON-PREMISES DATASETS = ENABLED
Allow Live Connections toggle = disabled | XMLA disallowed, Analyze in Excel disallowed, AIXL for on-prem datasets disallowed | XMLA allowed, Analyze in Excel disallowed, AIXL for on-prem datasets allowed
Allow Live Connections toggle = enabled | XMLA disallowed, Analyze in Excel allowed, AIXL for on-prem datasets disallowed | XMLA allowed, Analyze in Excel allowed, AIXL for on-prem datasets allowed

Allow live connections is an export and sharing setting.

Access through the XMLA endpoint will honor security group membership set at the workspace/app level.
Workspace contributors and above have write access to the dataset and are therefore equivalent to Analysis
Services database admins. They can deploy new datasets from Visual Studio and execute TMSL scripts in SSMS.
Operations that require Analysis Services server admin permissions (rather than database admin) such as
server-level traces and user impersonation using the EffectiveUserName connection-string property are not
supported in Premium workspaces at this time.
Other users who have Build permission on a dataset are equivalent to Analysis Services database readers. They
can connect to and browse datasets for data consumption and visualization. Row-level security (RLS) rules are
honored and they cannot see internal dataset metadata.
Model roles
With the XMLA endpoint, roles can be defined for a dataset, role membership can be defined for Azure Active
Directory (Azure AD) users, and row-level security (RLS) filters can be defined. Model roles in Power BI are used
only for RLS. Use the Power BI security model to control permissions beyond RLS.
For tabular model projects authored in Visual Studio, roles can be defined by using Role Manager in the model
designer. For datasets in Power BI, roles can be defined by using SSMS to create role objects and define role
properties. In most cases, however, role object definitions can be scripted by using TMSL to create or modify the
Roles object. TMSL scripts can be executed in SSMS or with the Invoke-ASCmd PowerShell cmdlet.
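For illustration, here's a minimal sketch that defines a role with Read permission and an RLS filter by executing a TMSL script with Invoke-ASCmd; the workspace URL, dataset, table, filter expression, and member names are all hypothetical:

# Sketch only: the dataset, table, filter expression, and member below are hypothetical.
$tmsl = @"
{
  "createOrReplace": {
    "object": { "database": "Sales", "role": "WestRegionReaders" },
    "role": {
      "name": "WestRegionReaders",
      "modelPermission": "read",
      "members": [ { "memberName": "user@contoso.com" } ],
      "tablePermissions": [
        { "name": "Sales", "filterExpression": "[Region] = \"West\"" }
      ]
    }
  }
}
"@

# Requires XMLA read-write on the capacity and Contributor or higher access to the workspace.
Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace" -Query $tmsl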
The following limitations apply when working with dataset roles through the XMLA endpoint:
The only permission for a role that can be set for datasets is Read permission. Other permissions are granted
using the Power BI security model.
Service principals, which require workspace Member or Admin permissions, cannot be added to roles.
Build permission for a dataset is required for read access through the XMLA endpoint, regardless of the
existence of dataset roles.
The "Roles=" connection string property can be used to test downgrading role members with Write
permissions to Read permissions. The member account must still be a member of the relevant RLS role. This
is different than using Impersonation with SQL Server Analysis Services or Azure Analysis Services where if
the account is a server admin, the RLS role membership is assumed. For Premium workspaces, since there is
no server admin, the account must belong to a role in order for RLS to be applied.
To learn more, see Roles in tabular models.
Setting data-source credentials
Metadata specified through the XMLA endpoint can create connections to data sources, but cannot set data-
source credentials. Instead, credentials can be set in the dataset settings page in the Power BI Service.
Service principals
A service principal is an Azure Active Directory app registration you create within your tenant to perform
unattended resource and service level operations. It's a unique type of user identity with an app name,
application ID, tenant ID, and client secret or certificate for a password. Power BI Premium uses the same service
principal functionality as Power BI Embedded.
Service principals can also be used with the XMLA endpoint to automate dataset management tasks such as
provisioning workspaces, deploying models, and dataset refresh with:
PowerShell
Azure Automation
Azure Logic Apps
Custom client applications
To learn more, see Automate Premium workspace and dataset tasks with service principals.

Deploy model projects from Visual Studio (SSDT)


Deploying a tabular model project in Visual Studio to a Premium workspace is much the same as deploying to
an Azure or SQL Server Analysis Services server. The only differences are in the Deployment Server property
specified for the project, and how data source credentials are specified so processing operations can import data
from data sources into the new dataset on the workspace.
To deploy a tabular model project authored in Visual Studio, you must first set the workspace connection URL in
the project's Deployment Server property. In Visual Studio, in Solution Explorer, right-click the project >
Properties. In the Server property, paste the workspace connection URL.

When the Deployment Server property has been specified, the project can then be deployed.
When deployed the first time, a dataset is created in the workspace by using metadata from the model.bim.
As part of the deployment operation, after the dataset has been created in the workspace from model metadata,
processing to load data into the dataset from data sources will fail.
Processing fails because unlike when deploying to an Azure or SQL Server Analysis Server instance, where data
source credentials are prompted for as part of the deployment operation, when deploying to a Premium
workspace data source credentials cannot be specified as part of the deployment operation. Instead, after
metadata deployment has succeeded and the dataset has been created, data source credentials are then
specified in the Power BI service in dataset settings. In the workspace, select Datasets > Settings > Data
source credentials > Edit credentials.
When data source credentials are specified, you can then refresh the dataset in the Power BI service, configure
schedule refresh, or process (refresh) from SQL Server Management Studio to load data into the dataset.
The deployment Processing Option property specified in the project in Visual Studio is observed. However, if a
data source has not yet had credentials specified in the Power BI service, processing will fail even if the metadata
deployment succeeds. You can set the property to Do Not Process to prevent an attempt to process as
part of the deployment. You might then want to set the property back to Default, because once the data source
credentials are specified in the data source settings for the new dataset, processing as part of subsequent
deployment operations will succeed.

Connect with SSMS


Using SSMS to connect to a workspace is just like connecting to an Azure or SQL Server Analysis Services
server. The only difference is you specify the workspace URL in server name, and you must use Active
Directory - Universal with MFA authentication.
Connect to a workspace by using SSMS
1. In SQL Server Management Studio, select Connect > Connect to Server.
2. In Server Type, select Analysis Services. In Server name, enter the workspace URL. In
Authentication, select Active Directory - Universal with MFA, and then in User name, enter your
organizational user ID.
When connected, the workspace is shown as an Analysis Services server, and datasets in the workspace are
shown as databases.

To learn more about using SSMS to script metadata, see Create Analysis Services scripts and Tabular Model
Scripting Language (TMSL).

Dataset refresh
The XMLA endpoint enables a wide range of scenarios for fine-grained refresh capabilities using SSMS,
automation with PowerShell, Azure Automation, and Azure Functions using TOM. You can, for example, refresh
certain incremental refresh historical partitions without having to reload all historical data, as shown in the sketch below.
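The following is a minimal sketch that refreshes a single historical partition. The dataset, table, and partition names are hypothetical; partitions created by an incremental refresh policy are named by the service:

# Sketch only: refresh one partition instead of the entire dataset.
$tmsl = @'
{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "SalesDataset", "table": "FactSales", "partition": "2021Q4" }
    ]
  }
}
'@
Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/Sales" -Query $tmsl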
Unlike configuring refresh in the Power BI service, refresh operations through the XMLA endpoint are not
limited to 48 refreshes per day, and the scheduled refresh timeout is not imposed.
Date, time, and status for dataset refresh operations that include a write transaction through the XMLA endpoint
are recorded and shown in dataset Refresh history.
Dynamic Management Views (DMV)
Analysis Services DMVs provide visibility of dataset metadata, lineage, and resource usage. DMVs available for
querying in Power BI through the XMLA endpoint are limited to, at most, those that require database-admin
permissions. Some DMVs for example are not accessible because they require Analysis Services server-admin
permissions.

Power BI Desktop authored datasets


Enhanced metadata
XMLA write operations on datasets authored in Power BI Desktop and published to a Premium workspace
require enhanced metadata. To learn more, see Enhanced dataset metadata.
Caution

At this time, a write operation on a dataset authored in Power BI Desktop will prevent it from being downloaded
back as a PBIX file. Be sure to retain your original PBIX file.
Data-source declaration
When connecting to data sources and querying data, Power BI Desktop uses Power Query M expressions as
inline data source declarations. While supported in Premium workspaces, Power Query M inline data-source
declaration is not supported by Azure Analysis Services or SQL Server Analysis Services. Instead, Analysis
Services data modeling tools like Visual Studio create metadata using structured and/or provider data source
declarations. With the XMLA endpoint, Premium also supports structured and provider data sources, but not as
part of Power Query M inline data source declarations in Power BI Desktop models. To learn more, see
Understanding providers.
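For reference, a structured data source declaration in model metadata looks roughly like the following sketch. The server, database, and naming shown here are placeholders, and the exact shape can vary by compatibility level:

{
  "type": "structured",
  "name": "SQL/sqldb-database-windows-net;AdventureWorksDW",
  "connectionDetails": {
    "protocol": "tsql",
    "address": {
      "server": "sqldb.database.windows.net",
      "database": "AdventureWorksDW"
    },
    "authentication": null,
    "query": null
  },
  "credential": {
    "AuthenticationKind": "ServiceAccount"
  }
}
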
Power BI Desktop in live connect mode
Power BI Desktop can connect to a Power BI Premium dataset using a live connection. When using a live
connection, data doesn't need to be replicated locally, making it easier for users to consume semantic models.
There are two ways users can connect:
By selecting Power BI datasets, and then selecting a dataset to create a report. This is the recommended way
for users to connect live to datasets. This method provides an improved discovery experience showing the
endorsement level of datasets. Users don't need to find and keep track of workspace URLs. To find a dataset,
users simply type in the dataset name or scroll to find the dataset they're looking for.
The other way users can connect is by using Get Data > Analysis Services, specifying a Power BI Premium
workspace name as a URL, selecting Connect live, and then in Navigator, selecting a dataset. In this case, Power BI
Desktop uses the XMLA endpoint to connect live to the dataset as though it were an Analysis Services data
model.

Organizations that have existing reports connected live to Analysis Services data models intending to migrate to
Premium datasets only have to change the server name URL in Transform data > Data source settings.

Audit logs
When applications connect to a workspace, access through XMLA endpoints is logged in the Power BI audit logs
with the following operations:

| Operation friendly name | Operation name |
| --- | --- |
| Connected to Power BI dataset from an external application | ConnectFromExternalApplication |
| Requested Power BI dataset refresh from an external application | RefreshDatasetFromExternalApplication |
| Created Power BI dataset from an external application | CreateDatasetFromExternalApplication |
| Edited Power BI dataset from an external application | EditDatasetFromExternalApplication |
| Deleted Power BI dataset from an external application | DeleteDatasetFromExternalApplication |

To learn more, see Auditing Power BI.

See also
For more information related to this article, check out the following resources:
Power BI usage scenarios: Advanced data model management
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Interactive and background operations
5/23/2022 • 2 minutes to read

Power BI divides operations into two types, interactive and background. This article lists these operations and
explains the difference between them.

Interactive operations
Shorter running operations such as dataset queries are classified as interactive operations. They’re usually
triggered by user interactions with the UI. For example, an interactive operation is triggered when a user opens a
report or clicks on a slicer in a Power BI report. Interactive operations can also be triggered without interacting
with the UI, for example when using SQL Server Management Studio (SSMS) or a custom application to run a
DAX query.

Background operations
Longer running operations such as dataset or dataflow refreshes are classified as background operations. They
can be triggered manually by a user, or automatically without user interaction. Background operations include
scheduled refreshes, interactive refreshes, REST-based refreshes and XMLA-based refresh operations. Users
aren't expected to wait for these operations to finish. Instead, they might come back later to check the status of
the operations.

Operation list
The table below lists the Power BI operations. It provides a short description for each operation and identifies
the operation's type.

| Operation | Description | Workload | Type |
| --- | --- | --- | --- |
| Artificial intelligence (AI) | AI function evaluation | AI | Background |
| Background query | Queries for refreshing tiles and creating report snapshots | Datasets | Background |
| Dataflow DirectQuery | Connect directly to a dataflow without the need to import the data into a dataset | Dataflows | Interactive |
| Dataflow refresh | An on demand or scheduled background dataflow refresh, performed by the service or with REST APIs | Dataflows | Background |
| Dataset on-demand refresh | A background dataset refresh initiated by the user, using the service, REST APIs or public XMLA endpoints | Datasets | Background |
| Dataset scheduled refresh | A scheduled background dataset refresh, performed by the service, REST APIs or public XMLA endpoints | Datasets | Background |
| Interactive query | User queries for loading models, opening, and interacting with reports | Datasets | Interactive |
| PublicApiExport | A Power BI report exported with the Export report to file REST API | Report | Background |
| Render | A Power BI paginated report exported with the Export paginated report to file REST API | Paginated report | Background |
| Render | A Power BI paginated report viewed in Power BI service | Paginated report | Interactive |
| XMLA read | XMLA read operations initiated by the user, for queries and discoveries | Datasets | Interactive |
| XMLA write | A background XMLA write operation that changes the model | Datasets | Background |

Next steps
What is Power BI Premium Gen2?
Power BI Premium Gen2 architecture
Managing Premium Gen2 capacities
Use the gen2 metrics app
Troubleshoot XMLA endpoint connectivity
5/23/2022 • 14 minutes to read

XMLA endpoints in Power BI rely on the native Analysis Services communication protocol for access to Power BI
datasets. Because of this, XMLA endpoint troubleshooting is much the same as troubleshooting a typical
Analysis Services connection. However, some differences around Power BI-specific dependencies apply.

Before you begin


Before troubleshooting an XMLA endpoint scenario, be sure to review the basics covered in Dataset connectivity
with the XMLA endpoint. Most common XMLA endpoint use cases are covered there. Other Power BI
troubleshooting guides, such as Troubleshoot gateways - Power BI and Troubleshooting Analyze in Excel, can
also be helpful.

Enabling the XMLA endpoint


The XMLA endpoint can be enabled on Power BI Premium, Premium Per User, and Power BI Embedded
capacities. On smaller capacities, such as an A1 capacity with only 2.5 GB of memory, you might encounter an
error in Capacity settings when trying to set the XMLA Endpoint to Read/Write and then selecting Apply. The
error states "There was an issue with your workload settings. Try again in a little while."
Here are a couple things to try:
Limit the memory consumption of other services on the capacity, such as Dataflows, to 40% or less, or
disable an unnecessary service completely.
Upgrade the capacity to a larger SKU. For example, upgrading from an A1 to an A3 capacity solves this
configuration issue without having to disable Dataflows.
Keep in mind, you must also enable the tenant-level Export data setting in the Power BI Admin Portal. This
setting is also required for the Analyze in Excel feature.

Establishing a client connection


After enabling the XMLA endpoint, it's a good idea to test connectivity to a workspace on the capacity. To learn
more, see Connecting to a Premium workspace. Also, be sure to read the section Connection requirements for
helpful tips and information about current XMLA connectivity limitations.
Connecting with a service principal
If you've enabled tenant settings to allow service principals to use Power BI APIs, as described in Enable service
principals, you can connect to an XMLA endpoint by using a service principal. Keep in mind the service principal
requires the same level of access permissions at the workspace or dataset level as regular users.
To use a service principal, be sure to specify the application identity information in the connection string as:
User ID=<app:appid@tenantid>
Password=<application secret>

For example:
Data Source=powerbi://api.powerbi.com/v1.0/myorg/Contoso;Initial Catalog=PowerBI_Dataset;User
ID=app:91ab91bb-6b32-4f6d-8bbc-97a0f9f8906b@19373176-316e-4dc7-834c-328902628ad4;Password=6drX...;

If you receive the following error:


"We cannot connect to the dataset due to incomplete account information. For service principals, make sure you
specify the tenant ID together with the app ID using the format app:<appId>@<tenantId>, then try again."
Make sure you specify the tenant ID together with the app ID using the correct format.
It's also valid to specify the app ID without the tenant ID. However, in this case, you must replace the myorg alias
in the data source URL with the actual tenant ID. Power BI can then locate the service principal in the correct
tenant. But, as a best practice, use the myorg alias and specify the tenant ID together with the app ID in the User
ID parameter.
Connecting with Azure Active Directory B2B
With support for Azure Active Directory (Azure AD) business-to-business (B2B) in Power BI, you can provide
external guest users with access to datasets over the XMLA endpoint. Make sure the Share content with external
users setting is enabled in the Power BI Admin portal. To learn more, see Distribute Power BI content to external
guest users with Azure AD B2B.

Deploying a dataset
You can deploy a tabular model project in Visual Studio (SSDT) to a workspace assigned to a Premium capacity,
much the same as to a server resource in Azure Analysis Services. However, when deploying there are some
additional considerations. Be sure to review the section Deploy model projects from Visual Studio (SSDT) in the
Dataset connectivity with the XMLA endpoint article.
Deploying a new model
In the default configuration, Visual Studio attempts to process the model as part of the deployment operation to
load data into the dataset from the data sources. As described in Deploy model projects from Visual Studio
(SSDT), this operation can fail because data source credentials cannot be specified as part of the deployment
operation. Instead, if credentials for your data source aren't already defined for any of your existing datasets, you
must specify the data source credentials in the dataset settings using the Power BI user interface (Datasets >
Settings > Data source credentials > Edit credentials). Having defined the data source credentials, Power
BI can then apply the credentials to this data source automatically for any new dataset, after metadata
deployment has succeeded and the dataset has been created.
If Power BI cannot bind your new dataset to data source credentials, you will receive an error stating "Cannot
process database. Reason: Failed to save modifications to the server." with the error code
"DMTS_DatasourceHasNoCredentialError", as shown below:
To avoid the processing failure, set the Deployment Options > Processing Options to Do Not Process, as
shown in the following image. Visual Studio then deploys only metadata. You can then configure the data source
credentials, and click on Refresh now for the dataset in the Power BI user interface.

New project from an existing dataset


Creating a new tabular project in Visual Studio by importing the metadata from an existing dataset is not
supported. However, you can connect to the dataset by using SQL Server Management Studio, script out the
metadata, and reuse it in other tabular projects.
Migrating a dataset to Power BI
It's recommended you specify the 1500 (or higher) compatibility level for tabular models. This compatibility
level supports the most capabilities and data source types. Later compatibility levels are backwards compatible
with earlier levels.
Supported data providers
At the 1500 compatibility level, Power BI supports the following data source types:
Provider data sources (legacy with a connection string in the model metadata).
Structured data sources (introduced with the 1400 compatibility level).
Inline M declarations of data sources (as Power BI Desktop declares them).
It's recommended you use structured data sources, which Visual Studio creates by default when going through
the Import data flow. However, if you are planning to migrate an existing model to Power BI that uses a provider
data source, make sure the provider data source relies on a supported data provider. In particular, the Microsoft
OLE DB Driver for SQL Server and third-party ODBC drivers require changes. For the OLE DB Driver for SQL Server,
you must switch the data source definition to the .NET Framework Data Provider for SQL Server. For third-party
ODBC drivers that might be unavailable in the Power BI service, you must switch to a structured data source
definition instead.
It's also recommended you replace the outdated Microsoft OLE DB Driver for SQL Server (SQLNCLI11) in your
SQL Server data source definitions with the .NET Framework Data Provider for SQL Server.
The following table provides an example of a .NET Framework Data Provider for SQL Server connection string
replacing a corresponding connection string for the OLE DB Driver for SQL Server.

| OLE DB Driver for SQL Server | .NET Framework Data Provider for SQL Server |
| --- | --- |
| Provider=SQLNCLI11;Data Source=sqldb.database.windows.net;Initial Catalog=AdventureWorksDW;Trusted_Connection=yes; | Data Source=sqldb.database.windows.net;Initial Catalog=AdventureWorksDW2016;Integrated Security=SSPI;Encrypt=true;TrustServerCertificate=false |

Cross-referencing partition sources


Just as there are multiple data source types, there are also multiple partition source types a tabular model can
include to import data into a table. Specifically, a partition can use a query partition source or an M partition
source. These partition source types, in turn, can reference provider data sources or structured data sources.
While tabular models in Azure Analysis Services support cross-referencing these various data source and
partition types, Power BI enforces a stricter relationship: query partition sources must reference provider
data sources, and M partition sources must reference structured data sources. Other combinations are not
supported in Power BI. If you want to migrate a cross-referencing dataset, the following table describes
supported configurations:

| Data source | Partition source | Comments | Supported with XMLA endpoint |
| --- | --- | --- | --- |
| Provider data source | Query partition source | The AS engine uses the cartridge-based connectivity stack to access the data source. | Yes |
| Provider data source | M partition source | The AS engine translates the provider data source into a generic structured data source and then uses the Mashup engine to import the data. | No |
| Structured data source | Query partition source | The AS engine wraps the native query on the partition source into an M expression and then uses the Mashup engine to import the data. | No |
| Structured data source | M partition source | The AS engine uses the Mashup engine to import the data. | Yes |

Data sources and impersonation


Impersonation settings you can define for provider data sources are not relevant for Power BI. Power BI uses a
different mechanism based on dataset settings to manage data source credentials. For this reason, make sure
you select Service Account if you are creating a Provider Data Source.

Fine-grained processing


When triggering a scheduled refresh or on-demand refresh in Power BI, Power BI typically refreshes the entire
dataset. In many cases, it's more efficient to perform refreshes more selectively. You can perform fine-grained
processing tasks in SQL Server Management Studio (SSMS) as shown below, or by using third-party tools or
scripts.
Overrides in Refresh TMSL command
Overrides in the Refresh command (TMSL) allow users to choose a different partition query definition or data source
definition for the refresh operation, as in the sketch below.
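The following rough sketch overrides a partition's source query for one refresh command. All names and the query are hypothetical, and the override applies only to that command execution (the exact override shape is described in the Refresh command (TMSL) reference):

# Sketch only: override a partition's source query for this refresh command.
$tmsl = @'
{
  "refresh": {
    "type": "dataOnly",
    "objects": [
      { "database": "SalesDataset", "table": "FactSales" }
    ],
    "overrides": [
      {
        "partitions": [
          {
            "originalObject": { "database": "SalesDataset", "table": "FactSales", "partition": "FactSales" },
            "source": { "query": "SELECT * FROM dbo.FactSales WHERE OrderDate >= '20220101'" }
          }
        ]
      }
    ]
  }
}
'@
Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/Sales" -Query $tmsl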

Errors on Premium Gen 2 capacity


Connect to Server error in SSMS
When connecting to a Power BI workspace with SQL Server Management Studio (SSMS), the following error
may be displayed:

TITLE: Connect to Server


------------------------------
Cannot connect to powerbi://api.powerbi.com/v1.0/[tenant name]/[workspace name].
------------------------------
ADDITIONAL INFORMATION:
The remote server returned an error: (400) Bad Request.
Technical Details:
RootActivityId:
Date (UTC): 10/6/2021 1:03:25 AM (Microsoft.AnalysisServices.AdomdClient)
------------------------------
The remote server returned an error: (400) Bad Request. (System)

When connecting to a Power BI workspace with SSMS, ensure the following:


The XMLA endpoint setting is enabled for your tenant's capacity. To learn more, see Enable XMLA read-write.
The Allow XMLA endpoints and Analyze in Excel with on-premises datasets setting is enabled in Tenant
settings.
You're using the latest version of SSMS. Download the latest.
Query execution in SSMS
When connected to a workspace in a Premium Gen2 or an Embedded Gen2 capacity, SQL Server Management
Studio may display the following error:

Executing the query ...


Error -1052311437: We had to move the session with ID '<Session ID>' to another Power BI Premium node.
Moving the session temporarily interrupted this trace - tracing will resume automatically as soon as the
session has been fully moved to the new node.

This is an informational message that can be ignored in SSMS 18.8 and higher because the client libraries will
reconnect automatically. Note that client libraries installed with SSMS v18.7.1 or lower do not support session
tracing. Download the latest SSMS.
Refresh operations in SSMS
When using SSMS v18.7.1 or lower to perform a long running (>1 min) refresh operation on a dataset in a
Premium Gen2 or an Embedded Gen2 capacity, SSMS may display an error like the following even though the
refresh operation succeeds:

Executing the query ...


Error -1052311437:
The remote server returned an error: (400) Bad Request.

Technical Details:
RootActivityId: 3716c0f7-3d01-4595-8061-e6b2bd9f3428
Date (UTC): 11/13/2020 7:57:16 PM
Run complete

This is due to a known issue in the client libraries where the status of the refresh request is incorrectly tracked.
This is resolved in SSMS 18.8 and higher. Download the latest SSMS.
Other client applications and tools
Client applications and tools such as Excel, Power BI Desktop, SSMS, or external tools connecting to and working
with datasets in Power BI Premium Gen2 capacities may cause the following error: The remote server
returned an error: (400) Bad Request. The error is especially likely when an underlying DAX query or
XMLA command is long running. To mitigate potential errors, be sure to use the most recent applications and
tools that install recent versions of the Analysis Services client libraries with regular updates. Regardless of
application or tool, the minimum required client library versions to connect to and work with datasets in a
Premium Gen2 capacity through the XMLA endpoint are:

| Client library | Version |
| --- | --- |
| MSOLAP | 15.1.65.22 |
| AMO | 19.12.7.0 |
| ADOMD | 19.12.7.0 |

Editing role memberships in SSMS


When using SQL Server Management Studio (SSMS) v18.8 to edit a role membership on a dataset, SSMS
may display the following error:
Failed to save modifications to the server.
Error returned: ‘Metadata change of current operation cannot be resolved, please check the command or try
again later.’

This is due to a known issue in the app services REST API, which will be resolved in an upcoming release. In the
meantime, to work around this error, in Role Properties, click Script, and then enter and execute the following
TMSL command:

{
  "createOrReplace": {
    "object": {
      "database": "AdventureWorks",
      "role": "Role"
    },
    "role": {
      "name": "Role",
      "modelPermission": "read",
      "members": [
        {
          "memberName": "xxxx",
          "identityProvider": "AzureAD"
        },
        {
          "memberName": "xxxx",
          "identityProvider": "AzureAD"
        }
      ]
    }
  }
}

Publish Error - Live connected dataset


When republishing a live connected dataset utilizing the Analysis Services connector, the following error may be
shown: "There is an existing report/dataset with the same name. Please delete or rename the existing dataset and
retry."

This is due to the dataset being published having a different connection string but having the same name as the
existing dataset. To resolve this issue, either delete or rename the existing dataset. Also be sure to republish any
apps that are dependent on the report. If necessary, downstream users should be informed to update any
bookmarks with the new report address to ensure they access the latest report.

Workspace/server alias
Unlike Azure Analysis Services, server name aliases are not supported for Premium workspaces.

DISCOVER_M_EXPRESSIONS
The DISCOVER_M_EXPRESSIONS dynamic management view (DMV) is currently not supported in Power BI
using the XMLA endpoint. Applications can instead use the Tabular Object Model (TOM) to obtain the M
expressions used by the data model, as in the sketch below.
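As a rough sketch, M expressions can be enumerated with TOM from PowerShell like this. The assembly path, workspace, and dataset names are placeholders, and the AMO/TOM client library is assumed to be installed:

# Sketch only: list shared and per-partition M expressions with TOM.
Add-Type -Path "C:\libs\Microsoft.AnalysisServices.Tabular.dll"  # illustrative path
$server = New-Object Microsoft.AnalysisServices.Tabular.Server
$server.Connect("powerbi://api.powerbi.com/v1.0/myorg/Sales")
$model = $server.Databases.GetByName("SalesDataset").Model
# Shared expressions (Power Query queries defined at the model level).
foreach ($expr in $model.Expressions) { "$($expr.Name): $($expr.Expression)" }
# Per-partition M source expressions.
foreach ($table in $model.Tables) {
  foreach ($p in $table.Partitions | Where-Object { $_.SourceType -eq "M" }) {
    "$($table.Name)/$($p.Name): $($p.Source.Expression)"
  }
}
$server.Disconnect()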

Resource governing command memory limit in Premium Gen 2


Premium Gen2 capacities use resource governing to ensure no single dataset operation can exceed the amount
of available memory resources for the capacity - determined by SKU. For example, a P1 subscription has an
effective memory limit per artifact of 25 GB, for a P2 subscription the limit is 50 GB, and for a P3 subscription
the limit is 100 GB. In addition to dataset (database) size, the effective memory limit also applies to underlying
dataset command operations like Create, Alter, and Refresh.
The effective memory limit for a command is based on the lesser of the capacity's memory limit (determined by
SKU) or the value of the DbpropMsmdRequestMemoryLimit XMLA property.
For example, for a P1 capacity, if:
DbpropMsmdRequestMemoryLimit = 0 (or unspecified), the effective memory limit for the command is
25 GB.
DbpropMsmdRequestMemoryLimit = 5 GB, the effective memory limit for the command is 5 GB.
DbpropMsmdRequestMemoryLimit = 50 GB, the effective memory limit for the command is 25 GB.
Typically, the effective memory limit for a command is calculated from the memory allowed for the dataset by the
capacity (25 GB, 50 GB, or 100 GB) minus the amount of memory the dataset is already consuming when the command
starts executing. For example, a dataset using 12 GB on a P1 capacity allows an effective memory limit for a new
command of 13 GB. However, the effective memory limit can be further constrained by the
DbPropMsmdRequestMemoryLimit XMLA property when optionally specified by an application. Using the
previous example, if 10 GB is specified in the DbPropMsmdRequestMemoryLimit property, then the command’s
effective limit is further reduced to 10 GB.
If the command operation attempts to consume more memory than allowed by the limit, the operation can fail,
and an error is returned. For example, the following error describes an effective memory limit of 25 GB (P1
capacity) has been exceeded because the dataset already consumed 12 GB (12288 MB) when the command
started execution, and an effective limit of 13 GB (13312 MB) was applied for the command operation:
"Resource governing: This operation was canceled because there wasn’t enough memor y to finish
running it. Either increase the memor y of the Premium capacity where this dataset is hosted or
reduce the memor y footprint of your dataset by doing things like limiting the amount of impor ted
data. More details: consumed memor y 13312 MB, memor y limit 13312 MB, database size before
command execution 12288 MB. Learn more: https://go.microsoft.com/fwlink/?linkid=2159753 ."
In some cases, as shown in the following error, "consumed memory" is 0 but the amount shown for "database
size before command execution" is already greater than the effective memory limit. This means the operation
failed to begin execution because the amount of memory already used by the dataset is greater than the
memory limit for the SKU.
"Resource governing: This operation was canceled because there wasn’t enough memor y to finish
running it. Either increase the memor y of the Premium capacity where this dataset is hosted or
reduce the memor y footprint of your dataset by doing things like limiting the amount of impor ted
data. More details: consumed memor y 0 MB, memor y limit 25600 MB, database size before
command execution 26000 MB. Learn more: https://go.microsoft.com/fwlink/?linkid=2159753 ."
To potentially avoid exceeding the effective memory limit:
Upgrade to a larger Premium capacity (SKU) size for the dataset.
Reduce the memory footprint of your dataset by limiting the amount of data loaded with each refresh.
For refresh operations through the XMLA endpoint, reduce the number of partitions being processed in
parallel. Too many partitions being processed in parallel with a single command can exceed the effective
memory limit; see the sketch after this list.
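As a rough sketch, parallelism can be capped by wrapping the refresh in a TMSL sequence command. The workspace, dataset, and table names are hypothetical:

# Sketch only: cap refresh parallelism with a TMSL sequence command.
$tmsl = @'
{
  "sequence": {
    "maxParallelism": 2,
    "operations": [
      {
        "refresh": {
          "type": "full",
          "objects": [ { "database": "SalesDataset", "table": "FactSales" } ]
        }
      }
    ]
  }
}
'@
Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/Sales" -Query $tmsl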

See also
Dataset connectivity with the XMLA endpoint
Automate Premium workspace and dataset tasks with service principals
Troubleshooting Analyze in Excel
Tabular model solution deployment
What is Power BI Premium?
5/23/2022 • 19 minutes to read

You can use Power BI Premium to access features and capabilities only available in Premium, and offer greater
scale and performance for Power BI content in your organization. Power BI Premium enables more users in your
organization to get the most out of Power BI with better performance and responsiveness. For example, with
Power BI Premium, you and your organization's users get the following capabilities:
Greater scale and performance for your Power BI reports
Flexibility to license by capacity
Best-in-class features for data visualization and insight-extraction such as AI-driven analysis, composable and
reusable dataflows, and paginated reports
Unify self-service and enterprise BI with a variety of Premium-only capabilities that support heavier
workloads and require enterprise scale
Built-in license to extend on-premises BI with Power BI Report Server
Support for data residency by region (Multi-Geo) and customer-managed encryption keys for data at rest
(BYOK)
Ability to share Power BI content with anyone (even outside your organization) without purchasing a per-user license

This article introduces key features in Power BI Premium. Where necessary, links to additional articles with more
detailed information are provided. For more information about Power BI Pro and Power BI Premium, see the
Power BI features comparison section of Power BI pricing.

Power BI Premium Generation 2


Microsoft recently released a new version of Power BI Premium, Power BI Premium Generation 2,
referred to as Premium Gen2 for convenience.
For more information about Premium Gen2, see What is Power BI Premium Gen2?
Subscriptions and licensing
Power BI Premium is a tenant-level Microsoft 365 subscription available in two SKU (Stock-Keeping Unit)
families:
P SKUs (P1-P5) for embedding and enterprise features. They require a monthly or yearly commitment, are billed
monthly, and include a license to install Power BI Report Server on-premises.
EM SKUs (EM1-EM3) for organizational embedding, requiring a yearly commitment, billed monthly. EM1
and EM2 SKUs are available only through volume licensing plans. You can't purchase them directly.
Purchasing
Power BI Premium subscriptions are purchased by administrators in the Microsoft 365 admin center. Specifically,
only Global administrators or Billing Administrators can purchase SKUs. When purchased, the tenant receives a
corresponding number of v-cores to assign to capacities, known as v-core pooling. For example, purchasing a
P3 SKU provides the tenant with 32 v-cores. To learn more, see How to purchase Power BI Premium.
Power BI Premium Per User
Power BI Premium Per User allows organizations to license Premium features on a per-user basis. Premium
Per User (PPU) includes all Power BI Pro license capabilities, and adds features such as paginated reports, AI, and
other capabilities that are only available to Premium subscribers. For more information about Premium per user,
including a feature comparison and other information, see the Power BI Premium Per User article.

Reserved capacities
With Power BI Premium, you get reserved capacities. In contrast to a shared capacity where workloads' analytics
processing run on computational resources shared with other customers, a reserved capacity is for exclusive use
by an organization. It's isolated with reserved computational resources, which provide dependable and
consistent performance for hosted content. Note that the following types of Power BI content are processed
in shared capacity rather than your reserved capacity:
Excel workbooks (unless data is first imported into Power BI Desktop)
Push datasets
Streaming datasets
Q&A
Workspaces reside within capacities. Each Power BI user has a personal workspace known as My Workspace.
Additional workspaces, known simply as workspaces, can be created to enable collaboration. By default, workspaces,
including personal workspaces, are created in the shared capacity. When you have Premium capacities, both My
Workspaces and workspaces can be assigned to Premium capacities.
Capacity administrators automatically have their my workspaces assigned to Premium capacities.
Capacity nodes
As described in the Subscriptions and Licensing section, there are two Power BI Premium SKU families: EM and
P. All Power BI Premium SKUs are available as capacity nodes, each representing a set amount of resources
consisting of processor, memory, and storage. In addition to resources, each SKU has operational limits on the
number of DirectQuery and Live Connection connections per second, and the number of parallel model
refreshes. While there is a lot of overlap in features for the two SKU families, only the P Premium SKU gives free
users the ability to consume content hosted in the Premium capacity. EM SKUs are used for embedding content.
Processing is achieved by a set number of v-cores, divided equally between backend and frontend.
Backend v-cores are responsible for core Power BI functionality, including query processing, cache
management, running R services, model refresh, and server-side rendering of reports and images. Backend v-
cores are assigned a fixed amount of memory that is primarily used to host models, also known as active
datasets.
Frontend v-cores are responsible for the web service, dashboard and report document management, access
rights management, scheduling, APIs, uploads and downloads, and generally for everything related to the user
experiences.
Storage is set to 100 TB per capacity node.
The resources and limits of each Premium SKU (and equivalently sized A SKU) are described in the following
table:

| Capacity SKUs | Total v-cores | Backend v-cores | Frontend v-cores | RAM (GB) | DirectQuery/Live Connection (per second) | Max memory per query [GB] | Model refresh parallelism¹ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EM1/A1 | 1 | 0.5 | 0.5 | 3 | 3.75 | 1 | 1 |
| EM2/A2 | 2 | 1 | 1 | 5 | 7.5 | 2 | 2 |
| EM3/A3 | 4 | 2 | 2 | 10 | 15 | 2 | 3 |
| P1/A4 | 8 | 4 | 4 | 25 | 30 | 6 | 6 |
| P2/A5 | 16 | 8 | 8 | 50 | 60 | 6 | 12 |
| P3/A6 | 32 | 16 | 16 | 100 | 120 | 10 | 24 |
| P4/A7² | 64 | 32 | 32 | 200 | 240 | 10 | 48 |
| P5/A8² | 128 | 64 | 64 | 400 | 480 | 10 | 96 |

¹ The model refresh parallelism limits only apply to dataset workloads per capacity.
² SKUs greater than 100 GB aren't available in all regions. To request using these SKUs in regions where they're
not available, contact your Microsoft account manager.

NOTE
Using a single larger SKU (e.g. one P2 SKU) can be preferable to combining smaller SKUs (e.g. two P1 SKUs). For example,
you can use larger models and achieve better parallelism with the P2.

Capacity workloads
Capacity workloads are services made available to users. By default, Premium and Azure capacities support only
a dataset workload associated with running Power BI queries. The dataset workload cannot be disabled.
Additional workloads can be enabled for AI (Cognitive Services), Dataflows, and Paginated reports. These
workloads are supported in Premium subscriptions only.
Each additional workload allows configuring the maximum memory (as a percentage of total capacity memory)
that can be used by the workload. Default values for maximum memory are determined by SKU. You can
maximize your capacity's available resources by enabling only those additional workloads when they're used.
And you can change memory settings only when you have determined default settings aren't meeting your
capacity resource requirements. Workloads can be enabled and configured for a capacity by capacity admins
using Capacity settings in the Admin portal or using the Capacities REST APIs.
To learn more, see Configure workloads in a Premium capacity.
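As a rough sketch, a workload can also be configured programmatically with the Patch Workload REST API. The capacity ID is a placeholder, and the MicrosoftPowerBIMgmt PowerShell module and capacity admin permissions are assumed:

# Sketch only: enable the Dataflows workload and cap it at 40% of capacity memory.
Connect-PowerBIServiceAccount
$body = '{ "state": "Enabled", "maxMemoryPercentageSetByUser": 40 }'
Invoke-PowerBIRestMethod -Method Patch -Body $body `
  -Url "capacities/<capacityId>/Workloads/Dataflows"
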
How capacities function
At all times, the Power BI service makes the best use of capacity resources while not exceeding limits imposed
on the capacity.
Capacity operations are classified as either interactive or background. Interactive operations include rendering
requests and responding to user interactions (filtering, Q&A querying, etc.). Background operations include
dataflow and import model refreshes, and dashboard query caching.
It's important to understand that interactive operations are always prioritized over background operations to
ensure the best possible user experience. If there are insufficient resources, background operations are added to
a waiting queue until resources free up. Background operations, like dataset refreshes, can be interrupted mid-
process by the Power BI service, added to a queue, and retried later on.
Import models must be fully loaded into memory so they can be queried or refreshed. The Power BI service
uses sophisticated algorithms to manage memory usage fairly, but in rare cases, the capacity can get overloaded
if there are insufficient resources to meet customers' real-time demands. While it's possible for a capacity to
store many import models in persistent storage (up to 100 TB per Premium capacity), not all the models
necessarily reside in memory at the same time, otherwise their in-memory dataset size can easily exceed the
capacity memory limit. Besides the memory required to load the datasets, additional memory is needed for
execution of queries and refresh operations.
Import models are therefore loaded and removed from memory according to usage. An import model is loaded
when it is queried (interactive operation), or if it needs to be refreshed (background operation).
The removal of a model from memory is known as eviction. It's an operation Power BI can perform quickly
depending on the size of the models. If the capacity isn't experiencing any memory pressure and the model isn't
idle (that is, it's actively in use), the model can reside in memory without being evicted. When Power BI determines
there is insufficient memory to load a model, the Power BI service will attempt to free up memory by evicting
inactive models, typically defined as models loaded for interactive operations which have not been used in the
last three minutes. If there are no inactive models to evict, the Power BI service attempts to evict models loaded
for background operations. A last resort, after 30 seconds of failed attempts, is to fail the interactive operation.
In this case, the report user is notified of failure with a suggestion to try again shortly. In some cases, models
may be unloaded from memory due to service operations.
It's important to stress that dataset eviction is a normal behavior on the capacity. The capacity strives to balance
memory usage by managing the in-memory lifecycle of models in a way that is transparent to users. A high
eviction rate does not necessarily mean the capacity is insufficiently resourced. It can, however, become a
concern if the performance of queries or refreshes degrades due to the overhead of loading and evicting models
repeatedly within a short span of time.
Refreshes of import models are always memory intensive as models must be loaded into memory. Additional
intermediate memory is also required for processing. A full refresh can use approximately double the amount of
memory required by the model because Power BI maintains an existing snapshot of the model in memory until
the processing operation is completed. This allows the model to be queried even when it's being processed.
Queries can be sent to the existing snapshot of the model until the refresh has completed and the new model
data is available.
Incremental refresh performs partition refresh instead of a full model refresh, and will typically be faster and
require less memory, and can substantially reduce the capacity's resource usage. Refreshes can also be CPU-
intensive for models, especially those with complex Power Query transformations, or calculated tables or
columns that are complex or are based on a large volume of data.
Refreshes, like queries, require the model be loaded into memory. If there is insufficient memory, the Power BI
service will attempt to evict inactive models, and if this isn't possible (as all models are active), the refresh job is
queued. Refreshes are typically CPU-intensive, even more so than queries. For this reason, a limit on the number
of concurrent refreshes, calculated as the ceiling of 1.5 x the number of backend v-cores, is imposed. For example,
a P1 capacity with 4 backend v-cores allows up to ceiling(1.5 x 4) = 6 concurrent refreshes. If there are
too many concurrent refreshes, the scheduled refresh is queued until a refresh slot is available, resulting in the
operation taking longer to complete. On-demand refreshes such as those triggered by a user request or an API
call will retry three times. If there still aren't enough resources, the refresh will then fail.
Regional support
When creating a new capacity, global administrators and Power BI service administrators can specify a region
where workspaces assigned to the capacity will reside. This is known as Multi-Geo. With Multi-Geo,
even if it's different than the region where the Microsoft 365 subscription resides. To learn more, see Multi-Geo
support for Power BI Premium.
Capacity management
Managing Premium capacities involves creating or deleting capacities, assigning admins, assigning workspaces,
configuring workloads, monitoring, and making adjustments to optimize capacity performance.
Global administrators and Power BI service administrators can create Premium capacities from available v-
cores, or modify existing Premium capacities. When a capacity is created, capacity size and geographic region
are specified, and at least one capacity admin is assigned.
When capacities are created, most administrative tasks are completed in the Admin portal.
Capacity admins can assign workspaces to the capacity, manage user permissions, and assign other admins.
Capacity admins can also configure workloads, adjust memory allocations, and if necessary, restart a capacity,
resetting operations if a capacity becomes overloaded.

Capacity admins can also make sure a capacity is running smoothly. They can monitor capacity health right in
the Admin portal or by using the Premium capacity metrics app.
To learn more about creating capacities, assigning admins, and assigning workspaces, see Managing Premium
capacities. To learn more about roles, see Administrator roles related to Power BI.
Monitoring
Monitoring Premium capacities provides administrators with an understanding of how capacities are
performing. Capacities can be monitored by using the Admin portal and the Power BI Premium Capacity Metrics
app.
Monitoring in the portal provides a quick view with high-level metrics indicating the loads placed on, and the
resources utilized by, your capacity, averaged over the past seven days.
The Power BI Premium Capacity Metrics app provides the most in-depth information into how your
capacities are performing. The app provides a high-level dashboard and more detailed reports.

From the app's dashboard, you can click a metric cell to open an in-depth report. Reports provide in-depth
metrics and filtering capability to drill down on the most important information you need to keep your
capacities running smoothly.
To learn more about monitoring capacities, see Monitoring in the Power BI Admin portal and Monitoring with
the Power BI Premium Capacity Metrics app.
Optimizing capacities
Making the best use of your capacities is critical to ensuring users get the performance they expect and that
you're getting the most value for your Premium investment. By monitoring key metrics, administrators can determine how best to
troubleshoot bottlenecks and take necessary action. To learn more, see Optimizing Premium capacities and
Premium capacity scenarios.
Capacities REST APIs
The Power BI REST APIs include a collection of Capacities APIs. With the APIs, admins can programmatically
manage many aspects of Premium capacities, including enabling and disabling workloads, assigning
workspaces to a capacity, and more.
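For example, a minimal sketch that assigns a workspace to a capacity with the AssignToCapacity REST API follows. The IDs are placeholders, and the MicrosoftPowerBIMgmt PowerShell module is assumed:

# Sketch only: assign a workspace to a Premium capacity.
Connect-PowerBIServiceAccount
$body = '{ "capacityId": "<capacityId>" }'
Invoke-PowerBIRestMethod -Method Post -Body $body `
  -Url "groups/<workspaceId>/AssignToCapacity"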

Large datasets
Depending on the SKU, Power BI Premium supports uploading Power BI Desktop (.pbix) model files up to a
maximum of 10 GB in size. When loaded, the model can then be published to a workspace assigned to a
Premium capacity. The dataset can then be refreshed to up to 12 GB in size.
Size considerations
Large datasets can be resource intensive. You should have at least a P1 or an A4 SKU for any datasets larger
than one GB. Although publishing large datasets to workspaces backed by A SKUs up to A3 could work,
refreshing them will not.
The following table shows the recommended SKUs for uploading or publishing a .pbix file to the Power BI
service:

| SKU | Size of .pbix |
| --- | --- |
| P1/A4 | Up to 3 GB |
| P2/A5 | Up to 6 GB |
| P3/A6, P4/A7 and P5/A8 | Up to 10 GB |

NOTE
When using a PPU capacity you can upload or publish .pbix files that are up to 10 GB in size.

Large dataset storage format


If you enable the Large dataset storage format setting for a dataset, the .pbix file size limitations still apply to file
upload or publish. The upload size limit is unaffected by the large dataset storage format. However, when
published to the service, with incremental refresh and large dataset storage format enabled, datasets can grow
much larger than these limits. With large dataset storage format, the dataset size is limited only by the Power BI
Premium capacity size.
Power BI datasets can store data in a highly compressed, in-memory cache for optimized query performance
enabling fast user interactivity over large datasets. Previously, datasets in Power BI Premium have been limited
to 10 GB after compression. With large models, the limitation is removed and dataset sizes are limited only by
the capacity size, or a maximum size set by the administrator. Enabling such large dataset sizes enables Power BI
dataset sizes to align better to Azure Analysis Services model sizes.
Your .pbix files represent data in a highly compressed state. The data will likely expand when loaded in memory,
and from there it may expand several more times during data refresh.
Scheduled refresh of large datasets can take a long time and be resource-intensive. It's important to not
schedule too many overlapping refreshes. It's recommended incremental refresh is configured, because it's
faster, more reliable, and consumes fewer resources.
The initial report load of large datasets can take a long time if it has been a while since the last time the dataset
was used. A loading bar for longer-loading reports displays the load progress.
While the per-query memory and time constraints are much higher in Premium capacity, it's recommended you
use filters and slicers to limit visuals to display only what is necessary.

Incremental refresh
Incremental refresh is an integral part of having and maintaining large datasets in Power BI Premium and
Power BI Pro. Incremental refresh has many benefits. For example, refreshes are faster because only data that
has changed needs to be refreshed. Refreshes are more reliable because it's unnecessary to maintain long-
running connections to volatile data sources. Resource consumption is reduced because less data to refresh
reduces overall consumption of memory and other resources. Incremental refresh policies are defined in Power
BI Desktop, and applied in the service. To learn more, see Incremental refresh for datasets.

Paginated reports
Paginated reports, supported on all EM, A, and P SKUs in Premium Gen2, are based on Report Definition
Language (RDL) technology in SQL Server Reporting Services. While based on RDL technology, paginated reports
are not the same as Power BI Report Server, which is a downloadable reporting platform you can install on-premises,
also included with Power BI Premium. Paginated reports are formatted to fit well on a page that can be printed or
shared. Data is displayed in a table, even if the table spans multiple pages. By using the free Power BI Report
Builder Windows desktop application, users author paginated reports and publish them to the service.
In Power BI Premium, Paginated reports are a workload that must be enabled for a capacity by using the Admin
portal. Capacity admins can enable and then specify the amount of memory as a percentage of the capacity's
overall memory resources. Unlike other types of workloads, Premium runs paginated reports in a contained
space within the capacity. The maximum memory specified for this space is used whether or not the workload is
active. The default is 20%.

Premium features unique to Dataflows


Dataflows are supported for Power BI Pro, Premium Per User (PPU), and Power BI Premium users. Some
features are only available with a Power BI Premium subscription or Premium Per User (PPU) license. This article
describes and details the Premium Per User (PPU) and Premium-only features and their uses.
To learn more, see Premium features unique to Dataflows.
Deployment Pipelines
The deployment pipelines tool enables BI creators to manage the lifecycle of organizational content. It's an
efficient and reusable tool for creators in an enterprise with Premium capacity. Deployment pipelines enable
creators to develop and test Power BI content in the Power BI service, before the content is consumed by users.
The content types include reports, paginated reports, dashboards, and datasets.
To learn more, see Introduction to Deployment Pipelines.

Power BI Report Server


Included with Power BI Premium, Power BI Report Server is an on-premises report server with a web portal. You
can build your BI environment on-premises and distribute reports behind your organization's firewall. Report
Server gives users access to rich, interactive, and enterprise reporting capabilities of SQL Server Reporting
Services. Users can explore visual data and quickly discover patterns to make better, faster decisions. Report
Server provides governance on your own terms. If and when the time comes, Power BI Report Server makes it
easy to migrate to the cloud, where your organization can take full advantage of all Power BI Premium
functionality.
To learn more, see Power BI Report Server.

Unlimited content sharing


With P Premium SKUs, anyone, whether they're inside or outside your organization, can view your Power BI
content, including paginated and interactive reports, without purchasing individual licenses. P SKUs allow free
Power BI users to consume Power BI apps and shared content in the Power BI service. EM Premium SKUs do
not support unlimited content sharing, though they do support embedding in applications.

Premium enables widespread distribution of content by Pro users without requiring Pro or Premium Per User
(PPU) licenses for recipients who view the content. Pro or Premium Per User (PPU) licenses are required for
content creators. Creators connect to data sources, model data, and create reports and dashboards that are
packaged as workspace apps. Users without a Pro or Premium Per User (PPU) license can still access a
workspace that's in Power BI Premium capacity, as long as they only have a Viewer role. A Pro or PPU license is
required for other roles.
To learn more, see Power BI licensing.

Analysis Services in Power BI Premium


Under the hood, the enterprise-proven Microsoft Analysis Services VertiPaq engine powers Power BI
Premium workspaces and datasets. Analysis Services provides programmability and client application and tool
support through client libraries and APIs that support the open-standard XMLA protocol. By default, Power BI
Premium capacity dataset workloads support read-only operations from Microsoft and third-party client
applications and tools through an XMLA endpoint. Capacity admins can also choose to disable or allow
read/write operations through the endpoint.
With read-only access, Microsoft tools like SQL Server Management Studio (SSMS) and SQL Server Profiler, and
third-party apps such as DAX Studio and data visualization applications, can connect to and query Premium
datasets by using XMLA, DAX, MDX, DMVs, and Trace events. With read/write access, enterprise data modeling
tools like Visual Studio with Analysis Services projects extension or the open source Tabular Editor can deploy
tabular models as a dataset to a Premium workspace. And with tools like SSMS, admins can use Tabular Model
Scripting Language (TMSL) to script metadata changes and advanced data refresh scenarios.
Caution

The XMLA endpoint and 3rd party tools enable organizations to create perspectives. Power BI does not honor
perspectives when building reports on top of Live connect models or reports. Instead, Power BI points to the
main model once published to the Power BI service, showing all elements in the data model. If your Azure
Analysis Services model uses perspectives, you should not move or migrate those models to Power BI Premium.
To learn more, see Dataset connectivity with the XMLA endpoint.

Next steps
Managing Premium capacities
Azure Power BI Embedded Documentation
What is Power BI Premium Gen2?
Optimizing Premium capacities
5/23/2022 • 19 minutes to read

When Premium capacity performance issues arise, a common first approach is to optimize or tune your
solutions to restore acceptable response times. The rationale is to avoid purchasing additional Premium
capacity unless justified.
When additional Premium capacity is required, there are two options described in this article:
Scale-up an existing Premium capacity
Add a new Premium capacity
Finally, testing approaches and Premium capacity sizing conclude this article.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.

NOTE
You can also get Premium Per User (PPU) licenses for individuals, which provides many of the features and capabilities of a
Premium capacity, and also incorporates all functionality included with a Power BI Pro license. For more information, see
Power BI Premium Per User.

The best practices recommended in this article help ensure that the CPU utilization of each dataset,
and of other Power BI artifacts, is optimized.

Best practices
When trying to get the best utilization and performance, there are some recommended best practices, including:
Using workspaces instead of personal workspaces.
Separating business critical and Self-Service BI (SSBI) into different capacities.
If sharing content only with Power BI Pro users, there may be no need to store the content in a reserved
capacity.
Use reserved capacities when looking to achieve a specific refresh time, or when specific features are
required. For example, with large datasets or paginated reporting.
Addressing common questions
Optimizing Power BI Premium deployments is a complex subject involving an understanding of workload
requirements, available resources, and their effective use.
This article addresses seven common support questions, describing possible issues and explanations, and
information on how to identify and resolve them.
Why is the capacity slow, and what can I do?
There are many reasons that can contribute to a slow Premium capacity. This question requires further
information to understand what is meant by slow. Are reports slow to load? Or are they failing to load? Are
report visuals slow to load or update when users interact with the report? Are refreshed taking longer to
complete than expected, or previously experienced?
Having gained an understanding of the reason, you can then begin to investigate. Responses to the following six
questions will help you to address more specific issues.
What content is using up my capacity?
You can use the Power BI Premium Capacity Metrics app to filter by capacity, and review performance
metrics for workspace content. It's possible to review the performance metrics and resource usage by hour for
the past seven days for all content stored within a Premium capacity. Monitoring is often the first step to take
when troubleshooting a general concern about Premium capacity performance.
Key metrics to monitor include:
Average CPU and high utilization count.
Average Memory and high utilization count, and memory usage for specific datasets, dataflows, and
paginated reports.
Active datasets loaded in memory.
Average and maximum query durations.
Average query wait times.
Average dataset and dataflow refresh times.
In the Power BI Premium Capacity Metrics app, active memory shows the total amount of memory given to a
model that cannot be evicted because it has been in use within the last three minutes. A high spike in refresh
wait time could be correlated with a large and/or active dataset.
The Top 5 by Average Duration chart highlights the top five datasets, paginated reports, and dataflows
consuming capacity resources. Items in the top five lists are candidates for investigation and possible
optimization.
Why are reports slow?
The following tables show possible issues and ways to identify and handle them.
Insufficient capacity resources

POSSIBLE EXPLANATIONS:
High total active memory (model can't be evicted because it's in use within the last three minutes).
Multiple high spikes in query wait times.
Multiple high spikes in refresh wait times.

HOW TO IDENTIFY:
Monitor memory metrics [1] and eviction counts [2].

HOW TO RESOLVE:
Decrease the model size, or convert to DirectQuery mode. See the Optimizing models section in this article.
Scale up the capacity.
Assign the content to a different capacity.

Inefficient report designs

POSSIBLE EXPLANATIONS:
Report pages contain too many visuals (interactive filtering can trigger at least one query per visual).
Visuals retrieve more data than necessary.

HOW TO IDENTIFY:
Review report designs.
Interview report users to understand how they interact with the reports.
Monitor dataset query metrics [3].

HOW TO RESOLVE:
Redesign reports with fewer visuals per page.

Dataset is slow, especially when reports have previously performed well

POSSIBLE EXPLANATIONS:
Increasingly large volumes of import data.
Complex or inefficient calculation logic, including RLS roles.
Model not fully optimized.
(DQ/LC) Gateway latency.
Slow DQ source query response times.

HOW TO IDENTIFY:
Review model designs.
Monitor gateway performance counters.

HOW TO RESOLVE:
See the Optimizing models section in this article.

High concurrent report usage

POSSIBLE EXPLANATIONS:
High query wait times.
CPU saturation.
DQ/LC connection limits exceeded.

HOW TO IDENTIFY:
Monitor CPU utilization [4], query wait times, and DQ/LC utilization [5] metrics, plus query durations. Fluctuating values can indicate concurrency issues.

HOW TO RESOLVE:
Scale up the capacity, or assign the content to a different capacity.
Redesign reports with fewer visuals per page.
Notes:
[1] Average Memory Usage (GB), and Highest Memory Consumption (GB).
[2] Dataset evictions.
[3] Dataset Queries, Dataset Average Query Duration (ms), Dataset Wait Count, and Dataset Average Wait Time
(ms).
[4] CPU High Utilization Count and CPU Time of Highest Utilization (past seven days).
[5] DQ/LC High Utilization Count and DQ/LC Time of Highest Utilization (past seven days).
Why are reports not loading?
When reports fail to load, it's a sure sign the capacity has insufficient memory and is over-heated. This can occur
when all loaded models are being actively queried and so cannot be evicted, and any refresh operations have
been paused or delayed. The Power BI service will attempt to load the dataset for 30 seconds, and the user is
gracefully notified of the failure with a suggestion to try again shortly.
Currently there is no metric to monitor for report loading failures. You can identify the potential for this issue by
monitoring system memory, specifically highest utilization and time of highest utilization. High dataset evictions
and long dataset refresh average wait time could suggest that this issue is occurring.
If this happens only very occasionally, this may not be considered a priority issue. Report users are informed
that the service is busy and that they should retry after a short time. If this happens too frequently, the issue can
be resolved by scaling up the Premium capacity or by assigning the content to a different capacity.
Capacity Admins (and Power BI service administrators) can monitor the Query Failures metric to determine
when this happens. They can also restart the capacity, resetting all operations in case of system overload.
Why are refreshes not starting on schedule?
Scheduled refresh start times are not guaranteed. Recall that the Power BI service will always prioritize
interactive operations over background operations. Refresh is a background operation that can occur when two
conditions are met:
There is sufficient memory
The number of supported concurrent refreshes for the Premium capacity is not exceeded
When the conditions are not met, the refresh is queued until the conditions are favorable.
For a full refresh, recall that at least double the current dataset memory size is required. If sufficient memory is
not available, then the refresh cannot commence until model eviction frees up memory - this means delays until
one or more datasets become inactive and can be evicted.
Recall that the supported number of maximum concurrent refreshes is set to 1.5 times the backend v-cores,
rounded up.
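As a worked example (assuming the documented P1 specification of four backend v-cores, consistent with the P1 behavior described later in this document): maximum concurrent refreshes = ceiling(1.5 x 4) = 6. Similarly, a full refresh of a dataset occupying 5 GB in memory needs roughly 2 x 5 GB = 10 GB of memory available before it can commence.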
A scheduled refresh will fail when it cannot commence before the next scheduled refresh is due to commence.
An on-demand refresh triggered manually from the UI will attempt to run up to three times before failing.
Capacity Admins (and Power BI service administrators) can monitor the Average Refresh Wait Time
(minutes) metric to determine average lag between the scheduled time and the start of the operation.
While not usually an administrative priority, to influence on-time data refreshes, ensure that sufficient memory
is available. This may involve isolating datasets to capacities with known sufficient resources. It's also possible
that admins could coordinate with dataset owners to help stagger or reduce scheduled data refresh times to
minimize collisions. Note that it's not possible for an administrator to view the refresh queue, or to retrieve
dataset schedules.
Why are refreshes slow?
Refreshes can be slow - or perceived to be slow (as the previous common question addresses).
When the refresh is in fact slow, it can be due to several reasons:
Insufficient CPU (refresh can be very CPU-intensive).
Insufficient memory, resulting in refresh pausing (which requires the refresh to start over when conditions
are favorable to recommence).
Non-capacity reasons, including datasource system responsiveness, network latency, invalid permissions or
gateway throughput.
Data volume - a good reason to configure incremental refresh, as discussed below.
Capacity Admins (and Power BI service administrators) can monitor the Average Refresh Duration
(minutes) metric to determine a benchmark for comparison over time, and the Average Refresh Wait Time
(minutes) metric to determine the average lag between the scheduled time and the start of the operation.
Incremental refresh can significantly reduce data refresh duration, especially for large model tables. There are
four benefits associated with incremental refresh:
Refreshes are faster - Only a subset of a table needs loading, reducing CPU and memory usage, and
parallelism can be higher when refreshing multiple partitions.
Refreshes occur only when required - Incremental refresh policies can be configured to load only when
data has changed.
Refreshes are more reliable - Shorter running connections to volatile datasource systems are less
susceptible to disconnection.
Models remain trim - Incremental refresh policies can be configured to automatically remove history
beyond a sliding window of time.
To learn more, see Incremental refresh for datasets.
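As a minimal sketch of how incremental refresh is wired up, the Power Query expression behind the table filters on the reserved RangeStart and RangeEnd date/time parameters, and the Power BI service substitutes partition boundaries at refresh time. The server, database, table, and column names below are placeholders:

    let
        Source = Sql.Database("myserver.example.com", "SalesDb"),
        FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
        // Keep only rows inside the refresh window; RangeStart and RangeEnd
        // are the reserved parameters that drive the incremental refresh policy.
        FilteredRows = Table.SelectRows(
            FactSales,
            each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
        )
    in
        FilteredRows

Provided the filter folds back to the source as a WHERE clause, each refresh reads only the rows inside the policy's refresh window.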
Why are data refreshes not completing?
When the data refresh commences but fails to complete, it can be due to several reasons:
Insufficient memory, even if there is only one model in the Premium capacity, i.e. the model size is very large.
Non-capacity reasons, including datasource system disconnection, invalid permissions or gateway error.
Capacity Admins (and Power BI service administrators) can monitor the Refresh Failures due to out of
Memory metric.

Optimizing models
Optimal model design is crucial to delivering an efficient and scalable solution. However, it's beyond the scope of
this article to provide a complete discussion. Instead, this section will provide key areas for consideration when
optimizing models.
Optimizing Power BI hosted models
Optimizing models hosted in a Premium capacity can be achieved at the datasource(s) and model layers.
Consider the optimization possibilities for an Import model:
At the datasource layer:
Relational data sources can be optimized to ensure the fastest possible refresh by pre-integrating data,
applying appropriate indexes, defining table partitions that align to incremental refresh periods, and
materializing calculations (in place of calculated model tables and columns) or adding calculation logic to
views.
Non-relational data sources can be pre-integrated with relational stores.
Ensure that gateways have enough resources, preferably on dedicated machines, with sufficient network
bandwidth and in close proximity to the data sources.
At the model layer:
Power Query query designs can minimize or remove complex transformations, especially those that
merge different data sources (data warehouses achieve this during their Extract-Transform-Load stage). Also,
ensure that appropriate datasource privacy levels are set; this can avoid requiring Power BI to load full
results to produce a combined result across queries.
The model structure determines the data to load and has a direct impact on the model size. It can be
designed to avoid loading unnecessary data by removing columns, removing rows (especially historic data)
or by loading summarized data (at the expense of loading detailed data). Dramatic size reduction can be
achieved by removing high cardinality columns (especially text columns) which do not store or compress
very efficiently.
Model query performance can be improved by configuring single direction relationships unless there is a
compelling reason to allow bi-directional filtering. Consider also using the CROSSFILTER function instead of
bi-directional filtering (a short DAX sketch follows this list).
Aggregation tables can achieve fast query responses by loading pre-summarized data, however this will
increase the size of the model and result in longer refresh times. Generally, aggregation tables should be
reserved for very large models or Composite model designs.
Calculated tables and columns increase the model size and result in longer refresh times. Generally, a smaller
storage size and faster refresh time can be achieved when the data is materialized or calculated in the
datasource. If this is not possible, using Power Query custom columns can offer improved storage
compression.
There may be opportunity to tune DAX expressions for measures and RLS rules, perhaps rewriting logic to
avoid expensive formulas.
Incremental refresh can dramatically reduce refresh time and conserve memory and CPU. The incremental
refresh can also be configured to remove historic data keeping model sizes trim.
A model could be redesigned as two models when there are different and conflicting query patterns. For
example, some reports present high-level aggregates over all history, and can tolerate 24 hours' latency.
Other reports are concerned with today's data and need granular access to individual transactions. Rather
than design a single model to satisfy all reports, create two models optimized for each requirement.
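As an illustration of the CROSSFILTER alternative referenced in the list above, the following DAX sketch keeps the model relationship single direction and enables bi-directional filtering only within one measure; the table, column, and measure names are assumptions for illustration:

    Customers With Sales =
    CALCULATE (
        DISTINCTCOUNT ( Customer[CustomerKey] ),
        -- Apply bi-directional filtering for this calculation only,
        -- instead of configuring the relationship itself as bi-directional.
        CROSSFILTER ( Sales[CustomerKey], Customer[CustomerKey], BOTH )
    )

This scopes the more expensive bi-directional filter propagation to the measures that need it, rather than to every query against the model.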
Consider the optimization possibilities for a DirectQuery model. As the model issues query requests to the
underlying datasource, datasource optimization is critical to delivering responsive model queries.
At the datasource layer:
The datasource can be optimized to ensure the fastest possible querying by pre-integrating data (which is
not possible at the model layer), applying appropriate indexes, defining table partitions, materializing
summarized data (with indexed views), and minimizing the amount of calculation. The best experience is
achieved when pass-through queries need only filter and perform inner joins between indexed tables or
views.
Ensure that gateways have enough resources, preferably on dedicated machines, with sufficient network
bandwidth and in close proximity to the datasource.
At the model layer:
Power Query query designs should preferably apply no transformations - otherwise attempt to keep
transformations to an absolute minimum.
Model query performance can be improved by configuring single direction relationships unless there is a
compelling reason to allow bi-directional filtering. Also, model relationships should be configured to assume
referential integrity is enforced (when this is the case) and will result in datasource queries using more
efficient inner joins (instead of outer joins).
Avoid creating Power Query custom columns or model calculated columns - materialize these in the
datasource, when possible.
There may be opportunity to tune DAX expressions for measures and RLS rules, perhaps rewriting logic to
avoid expensive formulas.
Consider the optimization possibilities for a Composite model. Recall that a Composite model enables a mix of
import and DirectQuery tables.

Generally, the optimization guidance for Import and DirectQuery models applies to Composite model tables
that use these storage modes.
Typically, strive to achieve a balanced design by configuring dimension-type tables (representing business
entities) as Dual storage mode and fact-type tables (often large tables, representing operational facts) as
DirectQuery storage mode. Dual storage mode means both Import and DirectQuery storage modes, and this
allows the Power BI service to determine the most efficient storage mode to use when generating a native
query for pass-through (a configuration sketch follows this list).
Ensure that gateways have enough resources, preferably on dedicated machines, with sufficient network
bandwidth and in close proximity to the data sources.
Aggregation tables configured as Import storage mode can deliver dramatic query performance
enhancements when used to summarize DirectQuery storage mode fact-type tables. In this case, aggregation
tables will increase the size of the model and increase refresh time, and often this is an acceptable tradeoff
for faster queries.
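As a minimal sketch of where the Dual setting mentioned above lives, storage mode is a partition-level property in the tabular model definition (TMSL/model.bim form); all names here are placeholders, and in practice the mode is usually set through Power BI Desktop:

    {
      "name": "Customer",
      "mode": "dual",
      "source": {
        "type": "m",
        "expression": "let Source = Sql.Database(\"myserver.example.com\", \"SalesDb\") in Source{[Schema=\"dbo\",Item=\"Customer\"]}[Data]"
      }
    }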
Optimizing externally hosted models
Many optimization possibilities discussed in the Optimizing Power BI hosted models section apply also to
models developed with Azure Analysis Services and SQL Server Analysis Services. Clear exceptions are certain
features which are not currently supported, including Composite models and aggregation tables.
An additional consideration for externally-hosted datasets is the database hosting in relation to the Power BI
service. For Azure Analysis Services, this means creating the Azure resource in the same region as the Power BI
tenant (home region). For SQL Server Analysis Services, for IaaS, this means hosting the VM in the same region,
and for on-premises, it means ensuring an efficient gateway setup.
Note that Azure Analysis Services databases and SQL Server Analysis Services tabular databases require that
their models be loaded fully into memory, and that they remain there at all times, to support querying. Like the
Power BI service, there needs to be sufficient memory for refreshing if the model must remain online during the
refresh. Unlike the Power BI service, there is no concept of models being automatically aged in and out of
memory according to usage. Power BI Premium, therefore, offers a more efficient approach to maximize model
querying with lower memory usage.

Capacity planning
The size of a Premium capacity determines its available memory and processor resources, and the limits imposed
on the capacity. The number of Premium capacities is also a consideration, as creating multiple Premium capacities
can help isolate workloads from each other. Note that storage is 100 TB per capacity node, and this is likely to be
more than sufficient for any workload.
Determining the size and number of Premium capacities can be challenging, especially for the initial capacities
you create. The first step when capacity sizing is to understand the average workload representing expected
day-to-day usage. It's important to understand that not all workloads are equal. For example - at one end of a
spectrum - 100 concurrent users accessing a single report page that contains a single visual is easily achievable.
Yet - at the other end of the spectrum - 100 concurrent users accessing 100 different reports, each with 100
visuals on the report page, is going to make very different demands of capacity resources.
Capacity Admins therefore need to consider many factors specific to their environment, content, and
expected usage. The overriding objective is to maximize capacity utilization while delivering consistent query
times, acceptable wait times, and eviction rates. Factors for consideration can include:
Model size and data characteristics - Import models must be fully loaded into memory to allow
querying or refreshing. LC/DQ datasets can require significant processor time and possibly significant
memory to evaluate complex measures or RLS rules. Memory and processor size, and LC/DQ query
throughput are constrained by the capacity size.
Concurrent active models - The concurrent querying of different import models will deliver best
responsiveness and performance when they remain in memory. There should be sufficient memory to host
all heavily-queried models, with additional memory to allow for their refresh.
Import model refresh - The refresh type (full or incremental), duration, and complexity of Power Query
queries and calculated table/column logic can impact memory and especially processor usage.
Concurrent refreshes are constrained by the capacity size (1.5 x backend v-cores, rounded up).
Concurrent queries - Many concurrent queries can result in unresponsive reports when processor usage or
LC/DQ connections exceed the capacity limit. This is especially the case for report pages that include many
visuals.
Dataflows and paginated reports - The capacity can be configured to support dataflows and paginated
reports, with each requiring a configurable maximum percentage of capacity memory. Memory is
dynamically allocated to dataflows, but is statically allocated to paginated reports.
In addition to these factors, Capacity Admins can consider creating multiple capacities. Multiple capacities allow
for the isolation of workloads and can be configured to ensure priority workloads have guaranteed resources.
For example, two capacities can be created to separate business-critical workloads from self-service BI (SSBI)
workloads. The business-critical capacity can be used to isolate large corporate models providing them with
guaranteed resources, with authoring access granted only to the IT department. The SSBI capacity can be used
to host a growing number of smaller models, with access granted to business analysts. The SSBI capacity may at
times experience query or refresh waits that are tolerable.
Over time, Capacity Admins can balance workspaces across capacities by moving content between workspaces,
or workspaces between capacities, and by scaling capacities up or down. Generally, to host larger models you
scale up, and for higher concurrency you scale out.
Recall that purchasing a license provides the tenant with v-cores. The purchase of a P3 subscription can be used
to create one to four Premium capacities: 1 x P3, 2 x P2, or 4 x P1. Also, before upsizing a P2
capacity to a P3 capacity, consider splitting the v-cores to create two P1 capacities.
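As a worked example (assuming the published sizes of 8, 16, and 32 total v-cores for P1, P2, and P3 respectively): 1 x 32 = 2 x 16 = 4 x 8 = 32 v-cores, so each layout consumes the same purchased v-cores; the choice trades the size of the largest model you can host against workload isolation.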

Testing approaches
Once a capacity size is decided, testing can be performed by creating a controlled environment. A practical and
economic option is to create an Azure (A SKUs) capacity, noting that a P1 capacity is the same size as an A4
capacity, with the P2 and P3 capacities the same size as the A5 and A6 capacities, respectively. Azure capacities
can be created quickly and are billed on an hourly basis. So, once testing is complete, they can be easily deleted
to stop accruing costs.
The test content can be added to the workspaces created on the Azure capacity, and then a single user can run
reports to generate a realistic and representative workload of queries. If there are import models, a refresh for
each model should be performed as well. Monitoring tools can then be used to review all metrics to understand
resource utilization.
It's important that the tests are repeatable. Tests should be run several times and they should deliver
approximately the same result each time. An average of these results can be used to extrapolate and estimate a
workload under true production conditions.
If you already have a capacity and the reports you want to load test, use the PowerShell load generating tool
to quickly generate a load test. The tool enables you to estimate how many instances of each report your
capacity can run in an hour. You can use the tool to evaluate your capacity's ability for individual report
rendering, or for rendering several different reports in parallel. For more information, see the video Microsoft
Power BI: Premium capacity.
To generate a more complex test, consider developing a load testing application that simulates a realistic
workload. For more information, see the webinar Load Testing Power BI Applications with Visual Studio Load
Test.

Acknowledgments
This article was written by Peter Myers, Data Platform MVP and independent BI expert with Bitwise Solutions.

Next steps
Premium capacity scenarios
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience with
improvements in the following:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Premium capacity scenarios

This article describes real-world scenarios where Power BI Premium capacities have been implemented.
Common issues and challenges are described, along with how to identify and resolve them:
Keeping datasets up-to-date
Identifying slow-responding datasets
Identifying causes for sporadically slow-responding datasets
Determining whether there is enough memory
Determining whether there is enough CPU
The steps, along with chart and table examples, are from the Power BI Premium Capacity Metrics app, to which a
Power BI administrator has access.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.

Keeping datasets up to date


In this scenario, an investigation was triggered when users complained that report data sometimes appeared to
be old or "stale".
In the app, the admin interacts with the Refreshes visual, sorting datasets by the Max Wait Time statistics in
descending order. This visual helps them reveal datasets having the longest wait times, grouped by workspace
name.

In the Hourly Average Refresh Wait Times visual, they notice that the refresh wait times peak consistently
around 4PM each day.
There are several possible explanations for these results:
Too many refresh attempts could be occurring at the same time, exceeding the limits defined by the
capacity node. In this case, six concurrent refreshes on a P1 with default memory allocation.
Datasets to be refreshed may be too large to fit into available memory (requiring at least 2x the memory
required for full refresh).
Inefficient Power Query logic may be resulting in a memory usage spike during dataset refresh. On a
busy capacity, this spike can occasionally reach the physical limit, failing the refresh and potentially
affecting other report view operations on the capacity.
Frequently queried datasets that need to stay in memory may affect the ability of other datasets to
refresh because of limited available memory.
To help investigate, the Power BI administrator can look for:
Low available memory at the time of data refreshes when available memory is less than 2x the size of the
dataset to be refreshed.
Datasets that were not being refreshed and were not in memory before refresh, yet started to show interactive
traffic during heavy refresh times. To see which datasets are loaded into memory at any given time, a Power BI
administrator can look at the datasets area of the Datasets tab in the app. The admin can then cross-filter to
a given time by clicking on one of the bars in the Hourly Loaded Dataset Counts visual. A local spike, shown in
the below image, indicates an hour when multiple datasets were loaded into memory, which could delay the
start of scheduled refreshes.
Increased dataset evictions taking place when data refreshes are scheduled to start. Evictions can indicate
that there was high memory pressure caused by serving too many different interactive reports before
refresh. The Hourly Dataset Evictions and Memory Consumption visual can clearly indicate spikes in
evictions.
The following image shows a local spike in loaded datasets, which suggests interactive querying delayed the
start of refreshes. Selecting a time period in the Hourly Loaded Dataset Counts visual will cross-filter the
Dataset Sizes visual.

The Power BI administrator can attempt to resolve the issue by taking steps to ensure that sufficient memory is
available for data refreshes to start by:
Contacting dataset owners and asking them to stagger and space out data refresh schedules.
Reducing dataset query load by removing unnecessary dashboards or dashboard tiles, especially content
that enforces row-level security.
Speeding up data refreshes by optimizing Power Query logic and improving model calculated columns or tables.
Reducing dataset sizes, or configuring larger datasets to perform incremental data refresh.

Identifying slow-responding datasets


In this scenario, an investigation began when users complained that certain reports took too long to open.
Sometimes the reports would stop responding.
In the app, the Power BI administrator can use the Query Durations visual to determine the worst-performing
datasets by sorting datasets by descending Average Duration. This visual also shows dataset query counts, so
you can see how often the datasets are queried.

The administrator can refer to the Query Duration Distribution visual, which shows an overall distribution of
bucketed query performance (<= 30ms, 30-100ms, and so on) for the filtered time period. Generally, queries that take one
second or less are considered responsive by most users. Queries that take longer tend to create a perception of
bad performance.
The Hourly Query Duration Distribution visual allows the Power BI administrator to identify one-hour
periods when the capacity performance could have been perceived as poor. The larger the bar segments that
represent query durations over one second, the larger the risk that users will perceive poor performance.
The visual is interactive, and when a segment of the bar is selected, the corresponding Query Durations table
visual on the report page is cross-filtered to show the datasets it represents. This cross-filtering allows the
Power BI administrator to easily identify which datasets are responding slowly.
The following image shows a visual filtered by Hourly Query Duration Distributions, focusing on the worst-
performing datasets in one-hour buckets.
After the poor-performing dataset in a specific one-hour time span is identified, the Power BI administrator can
investigate whether poor performance is caused by an overloaded capacity or due to a poorly designed dataset
or report. They can refer to the Query Wait Times visual, and sort datasets by descending average query wait
time. If a large percentage of queries is waiting, a high demand for the dataset is likely the cause of too many
query delays. If the average query wait time is substantial (> 100 ms), it may be worth reviewing the dataset
and report to see if optimizations can be made. For example, fewer visuals on given report pages or a DAX
expression optimization.

There are several possible reasons for query wait time buildup in datasets:
A suboptimal model design, measure expressions, or even report design - all circumstances that can
contribute to long-running queries that consume high levels of CPU. This forces new queries to wait until
CPU threads become available, and can create a convoy effect (think traffic jam), commonly seen during peak
business hours. The Query Waits page will be the main resource to determine whether datasets have high
average query wait times.
A high number of concurrent capacity users (hundreds to thousands) consuming the same report or dataset.
Even well-designed datasets can perform badly beyond a concurrency threshold. This performance problem
is indicated by a single dataset showing a dramatically higher value for query counts than other datasets. For
example, you may see 300K queries for one dataset compared to <30K queries for all other datasets. At
some point the query waits for this dataset will start to stagger, which can be seen in the Query Durations
visual.
Many disparate datasets queried concurrently, causing thrashing as datasets frequently cycle in and out of
memory. This situation results in users experiencing slow performance when the dataset is loaded into
memory. To confirm, the Power BI administrator can refer to the Hourly Dataset Evictions and Memory
Consumption visual, which may indicate that a high number of datasets loaded into memory are being
repeatedly evicted.

Identifying causes for sporadically slow-responding datasets


In this scenario, an investigation began when users described that report visuals sometimes were slow to
respond or could become unresponsive. At other times the report visuals were acceptably responsive.
Within the app, the Query Durations section was used to find the culprit dataset in the following way:
In the Query Durations visual, the admin filtered the report dataset by dataset (starting with the top queried
datasets) and examined the cross-filtered bars in the Hourly Query Distributions visual.
When a single one-hour bar showed significant changes in the ratio between all query duration groups vs.
other one-hour bars for that dataset (for example, the ratios between the colors change drastically), it means
this dataset demonstrated a sporadic change in performance.
One-hour bars showing an irregular portion of poorly performing queries indicated a timespan where that
dataset was impacted by a noisy neighbor effect, caused by other datasets' activities.
The image below shows one hour on January 30, where a significant setback in a dataset's performance
occurred, indicated by the size of the "(3,10s]" execution duration bucket. Clicking that one-hour bar reveals all
the datasets loaded into memory during that time, surfacing possible datasets causing the noisy neighbor effect.

Once a problematic timespan is identified (for example, January 30 in the image above), the Power BI
administrator can remove all dataset filters, then filter only by that timespan to determine which datasets were
actively queried during this time. The culprit dataset for the noisy neighbor effect is usually the top queried
dataset or the one with the longest average query duration.
A solution to this problem could be to distribute the culprit datasets over different workspaces on different
Premium capacities, or on shared capacity if the dataset size, consumption requirements, and data refresh
patterns are supported.
The reverse could be true as well. The Power BI administrator could identify times when a dataset's query
performance drastically improves, and then look for what disappeared. If certain information is missing at that
point, it may help to point to the cause of the problem.

Determining whether there is enough memory


To determine whether there is enough memory for the capacity to complete its workloads, the Power BI
administrator can refer to the Consumed Memory Percentages visual in the Datasets tab of the app. All
(total) memory represents the memory consumed by datasets loaded into memory, regardless of whether they
are actively queried or processed. Active memory represents the memory consumed by datasets that are being
actively processed.
In a healthy capacity the visual will look like this, showing a gap between All (total) and Active memory:

In a capacity experiencing memory pressure, the same visual will clearly show active memory and total memory
converging, meaning that it is no longer possible to load additional datasets into memory. In this case, the Power
BI administrator can click Capacity Restart (in Advanced Options of the capacity settings area of the admin
portal). Restarting the capacity results in all datasets being flushed from memory, allowing them to reload
into memory as required (by queries or data refresh).

NOTE
For Premium Gen2 and Embedded Gen2, memory consumption does not need to be tracked. The only limitation in
Premium Gen2 and Embedded Gen2, is on the memory footprint of a single artifact. The footprint cannot exceed the
memory available on the capacity. For more information about Premium Gen2, see Power BI Premium Generation 2.

Determining whether there is enough CPU


In general, a capacity's average CPU utilization should remain below 80%. Exceeding this value means the
capacity is approaching CPU saturation.
Effects of CPU saturation are expressed by operations taking longer than they should, due to the capacity
performing many CPU context switches as it attempts to process all operations. In a Premium capacity with a
high number of concurrent queries, this is indicated by high query wait times. A consequence of high query wait
times is slower responsiveness than usual. The Power BI administrator can easily identify when the CPU is
saturated by viewing the Hourly Query Wait Time Distributions visual. Periodic peaks of query wait time
counts indicate potential CPU saturation.

A similar pattern can sometimes be detected in background operations if they contribute to CPU saturation. A
Power BI administrator can look for a periodic spike in refresh times for a specific dataset, which can indicate
CPU saturation at the time (probably because of other ongoing dataset refreshes and/or interactive queries). In
this instance, referring to the System view in the app may not necessarily reveal that the CPU is at 100%. The
System view displays hourly averages, but the CPU can become saturated for several minutes of heavy
operations, which shows up as spikes in wait times.
There are more nuances to seeing the effect of CPU saturation. While the number of queries that wait is
important, query wait time will always happen to some extent without causing discernible performance
degradation. Some datasets (with lengthier average query times, indicating complexity or size) are more prone to
the effects of CPU saturation than others. To easily identify these datasets, the Power BI administrator can look
for changes in the color composition of the bars in the Hourly Wait Time Distribution visual. After spotting
an outlier bar, they can look for the datasets that had query waits during that time, and also look at the average
query wait time compared to the average query duration. When these two metrics are of the same magnitude and
the query workload for the dataset is non-trivial, it is likely that the dataset is impacted by insufficient CPU.
This effect can be especially apparent when a dataset is consumed in short bursts of high-frequency queries by
multiple users (for example, in a training session), resulting in CPU saturation during each burst. In this case,
significant query wait times can be experienced on this dataset, as well as impacts on other datasets in the
capacity (noisy neighbor effect).
In some cases, Power BI administrators can request that dataset owners create a less volatile query workload by
creating a dashboard (which queries periodically with any dataset refresh for cached tiles) instead of a report.
This can help prevent spikes when the dashboard is loaded. This solution may not always be possible for given
business requirements; however, it can be an effective way to avoid CPU saturation without making changes to
the dataset.

NOTE
For Premium Gen2 and Embedded Gen2, CPU time utilization is tracked on a per-artifact level, and is visible in the
capacity utilization app. Each artifact displays its total CPU time utilization over a given timespan. For more information
about Premium Gen2, see Power BI Premium Generation 2.

Acknowledgments
This article was written by Peter Myers, Data Platform MVP and independent BI expert with Bitwise Solutions.
Next steps
Monitor Premium capacities with the app
Monitor capacities in the Admin portal
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience with
improvements in the following:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Monitoring Power BI capacities

Monitoring Premium capacities provides administrators with an understanding of how the capacities are
performing. Capacities can be monitored by using the Power BI Admin portal or the Power BI Premium
Capacity Metrics (Power BI) app.

Power BI Admin portal


In the Admin portal, for each capacity, the Health tab provides summary metrics for the capacity and each
enabled workload. Metrics show an average over the past seven days.
At the capacity level, metrics are cumulative of all enabled workloads. The following metrics are provided:
CPU UTILIZATION - Provides average CPU utilization as a percentage of total available CPU for the capacity.
MEMORY USAGE - Provides average memory usage (in GB) out of the total available memory for the capacity.
For each enabled workload, CPU utilization and memory usage are provided, as well as a number of workload
specific metrics. For example, for the Dataflow workload, Total Count shows total refreshes for each dataflow,
and Average Duration shows the average duration of refresh for the dataflow.

To learn more about all available metrics for each workload, see Monitor capacities in the Admin portal.
The monitoring capabilities in the Power BI Admin portal are designed to provide a quick summary of key
capacity metrics. For more detailed monitoring, it's recommended you use the Power BI Premium Capacity
Metrics app.

Power BI Premium Capacity Metrics app


The Power BI Premium Capacity Metrics app is a Power BI app available to capacity admins and is installed like
any other Power BI app. It contains a dashboard and report.
When the app opens, the dashboard is loaded to present numerous tiles expressing an aggregated view over all
capacities of which the user is a Capacity Admin. The dashboard layout includes five main sections:
Overview - App version, number of capacities and workspaces
System Summary - Memory and CPU metrics
Dataset Summary - Number of datasets, DQ/LC, refresh, and query metrics
Dataflow Summary - Number of dataflows, and dataset metrics
Paginated Report Summary - Refresh and view metrics
The underlying report, from which the dashboard tiles were pinned, can be accessed by clicking on any
dashboard tile. It provides a more detailed perspective of each of the dashboard sections and supports
interactive filtering.
Filtering can be achieved by setting slicers by date range, capacity, workspace and workload (report, dataset,
dataflow), and by selecting elements within report visuals to cross filter the report page. Cross filtering is a
powerful technique to narrow down to specific time periods, capacities, workspaces, datasets, etc. and can be
very helpful when performing root cause analysis.
For detailed information about dashboard and report metrics in the app, see Monitor Premium capacities with
the app.

Interpreting metrics
Metrics should be monitored to establish a baseline understanding of resource usage and workload activity. If
the capacity becomes slow, it is important to understand which metrics to monitor, and the conclusions you can
make.
Ideally, queries should complete within a second to deliver responsive experiences to report users and enable
higher query throughput. It is usually of lesser concern when background processes - including refreshes - take
longer to complete.
In general, slow reports can be an indication of an over-heating capacity. When reports fail to load, this is an
indication of an over-heated capacity. In either situation, the root cause could be attributable to many factors,
including:
Failed queries certainly indicate memory pressure, and that a model could not be loaded into memory.
The Power BI service will attempt to load a model for 30 seconds before failing.
Excessive query wait times can be due to several reasons:
The need for the Power BI service to first evict model(s) and then load the to-be-queried model (recall
that higher dataset eviction rates alone are not an indication of capacity stress, unless accompanied by
long query wait times that indicate memory thrashing).
Model load times (especially the wait to load a large model into memory).
Long running queries.
Too many LC\DQ connections (exceeding capacity limits).
CPU saturation.
Complex report designs with an excessive number of visuals on a page (recall that each visual is a
query).
Long query durations can indicate that model designs are not optimized, especially when multiple
datasets are active in a capacity, and just one dataset is producing long query durations. This suggests
that the capacity is sufficiently resourced, and that the in-question dataset is sub-optimal or just slow.
Long-running queries can be problematic as they can block access to resources required by other
processes.
Long refresh wait times indicate insufficient memory due to many active models consuming memory,
or that a problematic refresh is blocking other refreshes (exceeding parallel refresh limits).
A more detailed explanation of how to use the metrics is covered in the Optimizing Premium capacities article.

Acknowledgments
This article was written by Peter Myers, Data Platform MVP and independent BI expert with Bitwise Solutions.

Next steps
Optimizing Premium capacities
Configure workloads in a Premium capacity
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience with
improvements in the following:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Monitor Premium capacities with the app

Monitoring your capacities is essential to making informed decisions on how best to utilize your Premium
capacity resources. You can monitor capacities in the Admin portal or with the Power BI Premium Capacity
Metrics app. This article describes using the Premium Capacity Metrics app. The app provides the most in-
depth information into how your capacities are performing. For a higher level overview of average use metrics
over the last seven days, you can use the Admin portal. To learn more about monitoring in the portal, see
Monitor Premium capacities in the Admin portal.
The app is updated regularly with new features and functionality. Make sure you're running the latest version.
When a new version becomes available, you will receive a notification.

IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. See capacity and reliability notifications for more information.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.

To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.

NOTE
The metrics app cannot be used to monitor Premium Per User (PPU) activities or capacity.

Install the app


Go to Connect to Power BI Premium Capacity Metrics to see how to install the app and connect to data.
Alternatively, you can go straight to the app.

Get app refresh history


To check the last time your Premium Capacity Metrics app refreshed:
1. Go to the workspace that was installed with the app.
2. The last refresh performed is shown in the Refreshed column.

Monitor capacities with the app


Now that you've installed the app, you can see metrics for the capacities in your organization. The app provides
a Dashboard with metrics summaries, and detailed metrics Reports.
Dashboard
To see a dashboard that summarizes key metrics for capacities for which you are an admin, in Dashboards,
click Power BI Premium Capacity Metrics. A dashboard appears.
The dashboard includes the following metrics:
Top

METRIC: DESCRIPTION

Version: App version.

Capacities: Number of capacities for which you are admin.

Workspaces: Number of workspaces in your capacities that are reporting metrics.

System Summary

METRIC: DESCRIPTION

CPU Highest Utilization Capacity: Capacity with the maximum number of times CPU exceeded 80% of the thresholds in the past seven days.

CPU Highest Utilization Count: Number of times the CPU of the named capacity exceeded 80% of the thresholds in the past seven days.

Memory Max Utilization Capacity: Capacity with the maximum number of times the max memory limit was hit in the past seven days, split into three-minute buckets.

Memory Max Utilization Count: Number of times the named capacity reached the max memory limit in the past seven days, split into three-minute buckets.

Dataset Summary

METRIC: DESCRIPTION

Datasets: Total number of datasets across all workspaces in your capacities.

Datasets Average Size (MB): Average size of datasets across all workspaces in your capacities.

Datasets Average Loaded Count: Average count of datasets loaded into memory.

Datasets - Average Active Dataset (%): Average active datasets in the past seven days. A dataset is defined as active if the user has interacted with the visuals within the past three minutes.

CPU - Datasets Max (%): Max CPU consumption by dataset workload in the past seven days.

CPU - Datasets Average (%): Average CPU consumption by dataset workload in the past seven days.

Memory - Datasets Average (GB): Average memory consumption by dataset workload in the past seven days.

Memory - Datasets Max (GB): Max memory consumption by dataset workload in the past seven days.

Datasets Evictions: Total number of datasets evicted due to memory pressure.

DirectQuery/Live High Utilization Count: Number of times DirectQuery/Live connections exceeded 80% of the thresholds in the past seven days, split into three-minute buckets.

DirectQuery/Live Max Utilization Count: Most times the DirectQuery/Live connections exceeded 80% in the past seven days, split into one-hour buckets.

DirectQuery/Live Max High Utilization: Maximum number of times DirectQuery/Live connections exceeded 80% of the thresholds in the past seven days, split into three-minute buckets.

DirectQuery/Live Max Occurred Time: Time in UTC that DirectQuery/Live connections exceeded 80% the most times in an hour.

Refreshes Total: Total number of refreshes in the past seven days.

Refresh Reliability (%): Number of successful refreshes divided by the total number of refreshes in the past seven days.

Refreshes Average Duration (Minutes): Average amount of time to complete a refresh.

Refreshes Average Wait Time (Minutes): Average amount of time before starting a refresh.

Queries Total: Total number of queries run in the past seven days.

Queries Total Wait Count: Total number of queries that had to wait before being executed.

Queries Average Duration (MS): Average time taken to complete queries.

Queries Average Wait Time (MS): Average time queries waited on system resources before being executed.

Dataflow Summary

METRIC: DESCRIPTION

Dataflows: Total number of dataflows across all workspaces in your capacities.

Refreshes Total: Total number of refreshes in the past seven days.

Refreshes Average Duration (Minutes): The time taken to complete the refresh.

Refreshes Average Wait Times (Minutes): The lag between the scheduled time and actual start of the refresh.

CPU - Dataflows Max (%): Max CPU consumption by dataflows workload in the past seven days.

CPU - Dataflows Average (%): Average CPU consumption by dataflows workload in the past seven days.

Memory - Dataflows Max (GB): Max memory consumption by dataflows workload in the past seven days.

Memory - Dataflows Average (GB): Average memory consumption by dataflows workload in the past seven days.

Paginated Report Summary

METRIC: DESCRIPTION

Paginated Reports: Total number of paginated reports across all workspaces in your capacities.

Views Total: Total number of times that all reports have been viewed by users.

Rows Total: Total number of rows of data in all reports.

Total Time: Total time it takes for all phases (data retrieval, processing, and rendering) of all reports, in milliseconds.

CPU - Paginated Reports Max (%): Maximum CPU consumption by paginated report workload in the past seven days.

CPU - Paginated Reports Average (%): Average CPU consumption by paginated report workload in the past seven days.

Memory - Paginated Reports Max (GB): Maximum memory consumption by paginated report workload in the past seven days.

Memory - Paginated Reports Average (GB): Average memory consumption by paginated report workload in the past seven days.

AI Summary

METRIC: DESCRIPTION

AI Function Execution: Total number of executions in the past seven days.

AI Function Execution Reliability (%): Number of successful executions divided by the total number of executions in the past seven days.

CPU Max (%): Max CPU consumption by the AI workload in the past seven days.

Memory Max (GB): Max memory consumption by the AI workload in the past seven days.

AI Function Execution Max Wait Time (MS): Maximum amount of time before starting execution.

AI Function Execution Average Wait Time (MS): Average amount of time before starting execution.

AI Function Execution Max Duration (MS): Maximum amount of time to complete execution.

AI Function Execution Average Duration (MS): Average amount of time to complete execution.

Reports
Reports provide more detailed metrics. To see reports for capacities for which you are an admin, in Reports,
click Power BI Premium Capacity Metrics. Or, from the dashboard, click a metric cell to go to the underlying
report.
At the bottom of the report, there are six tabs:
Datasets - Provides detailed metrics on the health of the Power BI datasets in your capacities.
Paginated Reports - Provides detailed metrics on the health of the paginated reports in your capacities.
Dataflows - Provides detailed refresh metrics for dataflows in your capacities.
AI - Provides detailed metrics on the health of the AI functions used in your capacities.
Resource Consumption - Provides detailed resource metrics including memory and CPU high utilization.
IDs and Info - Names, IDs, and owners for capacities, workspaces, and workloads.
Each tab opens a page where you can filter metrics by capacity and date range. If no filters are selected, the
report defaults to show the past week’s metrics for all capacities that are reporting metrics.
Datasets
The Datasets page has different areas, which include Refreshes, Query Durations, Query Waits, and
Datasets. Use the buttons at the top of the page to navigate to different areas.
Refreshes area
REPORT SECTION: METRICS

Refreshes:
  Total Count: Total refreshes for each dataset.
  Reliability: The percentage of refreshes that completed for each dataset.
  Avg Wait Time: The average lag between the scheduled time and start of a refresh for the dataset, in minutes.
  Max Wait Time: The maximum wait time for the dataset, in minutes.
  Avg Duration: The average duration of refresh for the dataset, in minutes.
  Max Duration: The duration of the longest-running refresh for the dataset, in minutes.

Top 5 Datasets by Average Duration (minutes): The five datasets with the longest average refresh duration, in minutes.

Top 5 Datasets by Average Wait Time (minutes): The five datasets with the longest average refresh wait time, in minutes.

Hourly Refresh Count and Memory Consumption (GB): Successes, failures, and memory consumption, split into one-hour buckets, reported in UTC time.

Hourly Average Refresh Wait Times (minutes): The average refresh wait time, split into one-hour buckets, reported in UTC time. Multiple spikes with high refresh wait times are indicative of the capacity running hot.

Query Durations area

REPORT SECTION: METRICS

Query Durations:
  Data in this section is sliced by datasets, workspace, and hourly buckets in the past seven days.
  Total: The total number of queries run for the dataset.
  Average: The average query duration for the dataset, in milliseconds.
  Max: The duration of the longest-running query in the dataset, in milliseconds.

Query Duration Distribution: The query duration histogram is bucketed by query durations (in milliseconds) into the following intervals: <= 30ms, 30-100ms, 100-300ms, 300ms-1sec, 1sec-3sec, 3sec-10sec, 10sec-30sec, and > 30sec. Long query durations and long wait times are indicative of the capacity running hot. It may also mean that a single dataset is causing problems and further investigation is needed.

Top 5 Datasets by Average Duration: The five datasets with the longest average query duration, in milliseconds.

Hourly Query Duration Distributions: Query counts and average duration (in milliseconds) vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.

DirectQuery / Live Connections (> 80% Utilization): The times that a DirectQuery or live connection exceeded 80% CPU utilization, split into one-hour buckets, reported in UTC time.

Query Waits area

REPORT SECTION: METRICS

Query Wait Times:
  Data in this section is sliced by datasets, workspace, and hourly buckets in the past seven days.
  Total: The total number of queries run for the dataset.
  Wait count: The number of queries in the dataset that waited on system resources before starting execution.
  Average: The average query wait time for the dataset, in milliseconds.
  Max: The duration of the longest-waiting query in the dataset, in milliseconds.

Top 5 Datasets by Average Wait Time: The five datasets with the longest average wait time to start executing a query, in milliseconds.

Wait Time Distributions: The query wait time histogram is bucketed by wait times (in milliseconds) into the following intervals: <= 50ms, 50-100ms, 100-200ms, 200-400ms, 400ms-1sec, 1sec-5sec, and > 5sec.

Hourly Query Wait Time Distributions: Query wait counts and average wait time (in milliseconds) vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.

Datasets area

Dataset Sizes:
Max size: The maximum size of the dataset in MB for the period shown.

Dataset Eviction Counts:
Total: The total number of dataset evictions for each capacity. When a capacity faces memory pressure, the node evicts one or more datasets from memory. Datasets that are inactive (with no query or refresh operation currently executing) are evicted first. Then the eviction order is based on a measure of 'least recently used' (LRU).

Hourly Loaded Dataset Counts: Number of datasets loaded into memory vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.

Hourly Dataset Evictions and Memory Consumption: Dataset evictions vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.

Consumed Memory Percentages: Total active datasets in memory as a percentage of total memory. The delta between Active and All defines the datasets that can be evicted. Shown hourly, for the previous seven days.

Paginated Reports

Overall usage:
Total Views: The number of times that the report has been viewed by users.
Row Count: The number of rows of data in the report.
Retrieval (avg): The average amount of time it takes to retrieve data for the report, in milliseconds. Long durations can indicate slow queries or other data source issues.
Processing (avg): The average amount of time it takes to process the data for a report, in milliseconds.
Rendering (avg): The average amount of time it takes to render a report in the browser, in milliseconds.
Total time: The time it takes for all phases of the report, in milliseconds.

Top 5 Reports by Average Data Retrieval Time: The five reports with the longest average data retrieval time, in milliseconds.

Top 5 Reports by Average Report Processing Time: The five reports with the longest average report processing time, in milliseconds.

Hourly Results: Successes, failures, and memory consumption, split into one-hour buckets, reported in UTC time.

Hourly Durations: Data retrieval vs. processing and rendering time, split into one-hour buckets, reported in UTC time.

Dataflows

Refreshes:
Total: Total refreshes for each dataflow.
Reliability: The percentage of refreshes that completed for each dataflow.
Avg Wait Time: The average lag between the scheduled time and the start of a refresh for the dataflow, in minutes.
Max Wait Time: The maximum wait time for the dataflow, in minutes.
Avg Duration: The average duration of a refresh for the dataflow, in minutes.
Max Duration: The duration of the longest-running refresh for the dataflow, in minutes.

Top 5 Dataflows by Average Refresh Duration: The five dataflows with the longest average refresh duration, in minutes.

Top 5 Dataflows by Average Wait Time: The five dataflows with the longest average refresh wait time, in minutes.

Hourly Average Refresh Wait Times: The average refresh wait time, split into one-hour buckets, reported in UTC time. Multiple spikes with high refresh wait times are indicative of the capacity running hot.

Hourly Refresh Count and Memory Consumption: Successes, failures, and memory consumption, split into one-hour buckets, reported in UTC time.
AI

AI Memory Consumption: Memory consumption in GB, split into one-hour buckets, reported in UTC time.

Hourly AI Function Execution and Average Wait Time: AI executions and average wait time, in milliseconds, split into one-hour buckets, reported in UTC time.

Overall Usage:
Total count: Number of AI functions in a workspace or dataflow.
System Reliability: The percentage of executions that completed.
Avg Wait Time: The average lag between the scheduled time and the start of an execution, in milliseconds.
Max Wait Time: The maximum wait time, in milliseconds.
Avg Duration: The average duration of an execution, in milliseconds.
Max Duration: The duration of the longest-running execution, in milliseconds.
Avg Total Size: The average size, in bytes, of the input and output data for the AI function.

Resource Consumption

CPU consumption: Maximum CPU consumption during the hour, by workload, as a percentage of total CPU capacity. Shown hourly, for the previous seven days.

Memory consumption: Maximum memory consumption during the hour, in GB, by workload (solid lines), overlaid with workload limits (dotted lines). Shown hourly, for the previous seven days.

IDs and Info

The IDs and Info tab contains areas for Capacities, Workspaces, Datasets, Paginated Reports, and Dataflows.

Capacities area
SKU and Workload Information: SKU and workload settings for the capacity.
Administrators: Names of administrators for the capacity.

Workspaces area
Workspaces: Names and IDs for all workspaces.

Datasets area
Datasets: Workspace names and IDs for all datasets.

Paginated Reports area
Paginated Reports: Names, workspace names, and IDs for all paginated reports.

Dataflows area
Dataflows: Dataflow names, workspace names, and IDs for all dataflows.

Monitor Power BI Embedded capacity

You can use the Power BI Premium Capacity Metrics app to monitor A SKU capacities in Power BI Embedded. Those capacities show up in the report as long as you're an admin of the capacity. However, refresh of the report fails unless you grant certain permissions to Power BI on your A SKUs:
1. Open your capacity in the Azure portal.
2. Click Access control (IAM), and then add the Power BI Premium app to the Reader role. If you're unable to find the app by name, you can also add it by client identifier:
cb4dc29f-0bf4-402a-8b30-7511498ed654.
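If you prefer to script this role assignment rather than use the portal, you can create it through the Azure Resource Manager role assignments REST API. The following Python sketch illustrates the idea; the subscription, resource group, capacity name, bearer token, and the service principal's object ID (which you resolve in your tenant from the client identifier above) are all placeholders, and you should confirm the api-version against current Azure documentation.

```python
import uuid
import requests

# Placeholder values -- substitute your own.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
CAPACITY = "<embedded-capacity-name>"
PRINCIPAL_OBJECT_ID = "<power-bi-premium-sp-object-id>"  # resolved from the client ID above
ARM_TOKEN = "<azure-resource-manager-token>"

READER_ROLE = "acdd72a7-3385-48ef-bd42-f606fba81ae7"  # built-in Reader role definition ID

# Scope the assignment to the Power BI Embedded capacity resource.
scope = (f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
         f"/providers/Microsoft.PowerBIDedicated/capacities/{CAPACITY}")

resp = requests.put(
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}",
    params={"api-version": "2022-04-01"},
    headers={"Authorization": f"Bearer {ARM_TOKEN}"},
    json={"properties": {
        "roleDefinitionId": f"/subscriptions/{SUBSCRIPTION}/providers"
                            f"/Microsoft.Authorization/roleDefinitions/{READER_ROLE}",
        "principalId": PRINCIPAL_OBJECT_ID,
    }},
)
resp.raise_for_status()
```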

NOTE
You can monitor Power BI Embedded capacity usage in the app or the Azure portal, but not in the Power BI admin portal.

Next steps
Optimizing Power BI Premium capacities
More questions? Ask the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Power BI Premium Metrics app
5/23/2022 • 16 minutes to read

You can use the Power BI Premium Metrics app to manage the health and capacity of your Power BI
Premium subscription. With the app, administrators use the Capacity health center to see and interact
with indicators that monitor the health of their Premium capacity. The Metrics app consists of a landing page,
called the Capacity Health Center, and details about three important metrics:
Active memory
Query waits
Refresh waits

The following sections describe the landing page, and the three metrics report pages, in detail.

IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. See capacity and reliability notifications for more information.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. In particular, it greatly reduces the metrics
administrators must monitor (CPU only) to ensure performance and users' experience. For more information, see Power BI
Premium Generation 2.
NOTE
The metrics app cannot be used to monitor Premium Per User (PPU) activities or capacity.

Premium capacity health center


When you open the Power BI Premium Metrics app, you're presented with the Capacity health center,
which provides an overview of the health of your Power BI Premium capacity.

From the landing page, you can select the Power BI Premium capacity you want to view, in case your
organization has multiple Premium subscriptions. To view a Premium capacity, select the dropdown near the top
of the page called Select a capacity to see its metrics.
The three KPIs show the current health of the selected Premium capacity, based on the settings applied to each
of the three KPIs.
To view specifics about each KPI, select the Explore button at the bottom of each KPI's visual, and its detail page
is displayed. The following sections describe each KPI and the details its page provides.

The active memory metric

The active memory metric is part of the capacity planning category, which is a good health indicator for
evaluating your capacity's resource consumption, so you can adjust the capacity as necessary and plan
capacity scale.
Active memory is the memory used to process datasets that are currently in use, and which therefore will not
be evicted when memory is needed. The active memory metric indicates whether your capacity can handle
additional load, or whether it's already nearing or over capacity. The more active memory is currently
being consumed, the less memory is available to support additional refreshes and queries.
The active memory KPI measures how many times the capacity's active memory has crossed the 70%
threshold in the last seven days. The marker is set to 50 crossings (about 30% of the 168 hourly buckets in
seven days); reaching it indicates that the capacity is
approaching a point when users may begin seeing performance issues with queries.
The gauge visual shown in this section reveals that, in the last seven days from the time the report was last
refreshed, the capacity has crossed the 70% threshold four times, split by hourly buckets. The maximum value of
the gauge, 168, represents the last seven days, in hours.
To learn the details of the active memory KPI, click the Explore button to see a report page that provides specific
visualizations of its detailed metrics, along with a troubleshooting guide shown on the right column of the page.
There are two scenarios explained, which you can show on the report page by selecting Scenario 1 or
Scenario 2 on the page.

The troubleshooting guides, associated with each scenario, provide detailed explanations about what the metrics
mean, so you can better understand the state of the capacity, and what can be done to mitigate any issues.
Those two scenarios are described in the following sections.
Scenario one - current load is too high
To determine whether there's enough memory for the capacity to complete its workloads, consult the first visual
on the page, A: Consumed Memory Percentages, which displays the memory consumed by datasets that are
being actively processed, and thus cannot be evicted.
The alarm threshold, which is the red dotted line, marks incidents of 90% memory consumption.
The warning threshold, which is the yellow dotted line, marks incidents of 70% memory consumption.
The black dotted line indicates the memory usage trendline, based on the current capacity's memory usage over
the course of the graph timeline.
High occurrences of active memory above the alarm threshold (red dotted line) and memory trendline (black
dotted line) indicates memory capacity pressure, possibly preventing additional datasets from being loaded into
memory during that time.
When you see such cases, you should look carefully at the other charts on the page to better determine what
is consuming so much memory so frequently, and why, and then decide how to load balance or optimize, or if
necessary, scale up the capacity.
The second visual on the page, B: Hourly loaded active datasets, displays the maximum number of datasets
that were loaded in memory, in hourly buckets.
The third visual, C: Why datasets are in memory, is a table that lists each dataset by workspace name and dataset
name, gives the dataset's uncompressed size in memory, and explains why it's loaded in memory (for example,
it's being refreshed or queried against, or both).
Diagnosing scenario one
Consistently high active memory utilization may force datasets that are actively being used to be
evicted, or can prevent new datasets from being able to load. The following steps can help you diagnose problems:
1. Have a look at chart A: Consumed memory percentages
a. If Chart A shows the alarm threshold (90%) is crossed many times and/or for consecutive hours, then
your capacity is running low on memory too frequently. In the chart below, we can see the warning
threshold (70%) was crossed four times.

b. The chart titled B: Hourly loaded active datasets shows the maximum number of unique datasets
loaded in memory by hourly buckets. Selecting a bar in the visual cross-filters the Why datasets are
in memory visual.
c. Consult the Why datasets are in memory table to see a list of the datasets that were loaded in
memory. Sort by Dataset Size (MB) to highlight the datasets taking up the most memory. Capacity
operations are classified as either interactive or background. Interactive operations include rendering
requests and responding to user interactions (filtering, Q&A querying, and so on). Total queries and total
refreshes give an idea of whether interactive (query) or background (refresh) operations dominate
on the dataset. It's important to understand that interactive operations are always
prioritized over background operations to ensure the best possible user experience. If there are
insufficient resources, background operations are added to a queue, and are processed once resources
free up. Background operations, such as dataset refreshes and AI functions, can be stopped mid-process
by the Power BI service and added to a queue.

Remedies for scenario one


You can take the following steps to remedy the problems associated with scenario one:
1. Scale up the capacity - scaling up the capacity to the next SKU will make available twice the amount of
memory than the current SKU, thus alleviating any memory pressure the capacity is currently
experiencing.
2. Move datasets to another capacity - if you have another capacity that has more memory available,
you can move the workspaces that contain the larger datasets to that capacity.
Scenario two - future load will exceed limits
To determine whether there's enough memory for the capacity to complete its workloads, refer to the A:
Consumed Memory Percentages visual at the top of the page, which represents the memory consumed by
datasets that are being actively processed and so cannot be evicted. In a capacity experiencing memory
pressure, the same visual will clearly show the memory trendline (the black dashed line) heading upwards,
meaning that the capacity may soon be prevented from loading additional datasets into memory. The trendline
shows the trend of growth based on the seven days of data.
Diagnosing scenario two
To diagnose scenario two, determine whether the trendline is showing an upward trend towards the warning or alarm
thresholds.
1. Consider Chart A:

a. If the chart shows an upward slope, memory consumption has increased over the
past seven days.
b. Assuming the current rate of growth, predict when the trendline will cross the warning threshold (the
yellow dashed line).
c. Keep checking the trendline at least every two days, to see if the trend is continuing.
Remedies for scenario two
You can take the following steps to remedy the problems associated with scenario two:
1. Scale up the capacity - scaling up the capacity to the next SKU will make available twice the amount of
memory than the current SKU, thus alleviating any memory pressure the capacity is currently
experiencing.
2. Move datasets to another capacity - if you have another capacity that has more memory available,
you can move the workspaces that contain the larger datasets to that capacity.

The query waits metric

The Queries category indicates whether users could experience report visuals that are slow to respond, or
that could become unresponsive. Query waits is the time a query waits to start execution from the time it was
triggered. This KPI measures whether 25% or more of the selected capacity's queries are waiting 100
milliseconds or longer to execute. Query waits occur when there's not enough available CPU to execute all
pending queries.

The gauge in this visual shows that in the last seven days from the time the report was last refreshed, 17.32% of
the queries waited more than 100 milliseconds.
To learn the details of the Query waits KPI, click the Explore button to display a report page with visualizations of
relevant metrics, and a troubleshooting guide in the right column of the page. The troubleshooting guide has
two scenarios, each providing detailed explanations of the metric, the state of the capacity, and what you can do
to mitigate the issue.
We discuss each query waits scenario, in turn, in the following sections.
Scenario one - long running queries consume CPU
In scenario one, long running queries are taking up too much CPU.
You can investigate whether poor report performance is caused by an overloaded capacity, or due to a poorly
designed dataset or report. There are several reasons why a query can run for an extended period, which is
defined as taking more than 10 seconds to finish executing. Dataset size and complexity, as well as query
complexity are just a few examples of what can cause a long running query.
On the report page, the following visuals appear:
The top table, titled A: High wait times, lists the datasets with queries that are waiting.
B: Hourly high wait time distributions shows the distribution of high wait times.
The chart titled C: Hourly long query counts displays the count of long-running queries that were
executed, split by hourly buckets.
The last visual, table D: Long running queries, lists the long-running queries and their stats.
There are steps you can take to diagnose and remedy issues with query wait times, described next.
Diagnosing scenario one
First, determine whether long-running queries are occurring while your queries are waiting.

Look at Chart B, which shows the count of queries that waited more than 100 ms. Select one of the
columns that shows a high number of waits.

When you click on a column with high wait times, Chart C is filtered to show the count of long-running queries
that executed during that time, shown in the following image:
In addition, Chart D is also filtered to show the queries that were long running during that selected time
period.

Remedies for scenario one

Here are steps you can take to remedy issues from scenario one:
1. Run PerfAnalyzer to optimize reports and datasets - the performance analyzer for reports will
show the effect of every interaction on a page, including how long each visual takes to refresh and where
the time is spent.
2. Scale up the capacity - scaling up the capacity to the next SKU will make available twice the amount of
CPU, thus alleviating any CPU pressure that may be causing the queries to run longer.
3. Move datasets to another capacity - if you have another capacity that has more CPU available, you
can move the workspaces containing the datasets with waiting queries to the other
capacity.
Scenario two - too many queries
In scenario two, too many queries are executing.
When the number of queries to execute exceeds the limits of the capacity, queries are placed in a queue until
resources are available to execute them. If the size of the queue grows too large, queries in that queue can end
up waiting more than 100 milliseconds.
Diagnosing scenario two
From Table A, select a dataset that has a high percentage of wait time.

Once you've selected a dataset with a high wait time, Chart B is filtered to show the wait time distributions for
queries on that dataset, over the past seven days. Next, select one of the columns from Chart B.

Chart C is then filtered to show the queue length at the time selected from Chart B.
If the length of the queue has crossed the threshold of 20, then it's likely that the queries in the selected dataset
are delayed because too many queries are trying to execute at the same time.

Remedies for scenario two


You can take the following steps to remedy the problems associated with scenario two:
1. Scale up the capacity - scaling up the capacity to the next SKU will make available twice the amount of
memory than the current SKU, thus alleviating any memory pressure the capacity is currently
experiencing.
2. Move datasets to another capacity - if you have another capacity that has more memory available,
you can move the workspaces that contain the larger datasets to that capacity.

The refresh waits metric


The Refresh waits metric provides insight into when users could be experiencing report data that's old or stale.
Refresh waits is the time a given data refresh waited to start execution, from the time it was triggered on
demand or scheduled to run. This KPI measures whether 10% or more of pending refresh requests are waiting
10 minutes or longer. Waiting generally occurs when there's insufficient available memory or CPU.

This gauge shows that in the last seven days from the time the report was last refreshed, 3.18% of the refreshes
waited more than 10 minutes.
To learn the details of the Refresh waits KPI, click the Explore button, which presents a page with metrics and a
troubleshooting guide on the right column of the report page. The guide provides detailed explanations about
the metrics on the page, and helps you understand the state of the capacity, and what you can do to mitigate any
issues.

There are two scenarios explained, which you can show on the report page by selecting Scenario 1 or Scenario 2
on the page. We discuss each scenario in turn, in the following sections.
Scenario one - not enough memory
In scenario one, there isn't enough available memory to load the dataset. There are two situations that result in a
refresh being throttled during low memory conditions:
1. Not enough memory to load the dataset.
2. The refresh was canceled due to a higher priority operation.
The priority for loading datasets is the following:
1. Interactive query
2. On-demand refresh
3. Scheduled refresh
If there isn't enough memory to load a dataset for an interactive query, scheduled refreshes are stopped and
their datasets unloaded until sufficient memory becomes available. If that doesn't free up enough memory, then
on-demand refreshes are stopped and their datasets are unloaded.
Diagnosing scenario one
To diagnose scenario one, first determine whether throttling is due to insufficient memory. The steps to do so
are the following.
1. Select the dataset you're interested in from Table A by clicking on it:
a. When a dataset is selected in Table A, Chart B is filtered to show when waiting occurred.

b. Chart C is then filtered to show any throttling, explained in the next step.
2. Look at the results in the now-filtered Chart C. If the chart shows out-of-memory throttling occurred at
the times the dataset was waiting, then the dataset was waiting due to low memory conditions.

3. Finally, check Chart D, which shows the types of refreshes that were occurring, scheduled versus on-
demand. Any on-demand refreshes occurring at the same time could be the cause of the throttling.

Remedies for scenario one


You can take the following steps to remedy the problems associated with scenario one:
1. Scale up the capacity - scaling up the capacity to the next SKU will make available twice the amount of
memory than the current SKU, thus alleviating any memory and CPU pressure the capacity is currently
experiencing.
2. Move datasets to another capacity - if your wait times are being caused by memory pressure and
you have another capacity that has more memory available, you can move the workspaces that contain
the datasets that are waiting to the other capacity.
3. Spread out scheduled refreshes - spreading out the refreshes will help to avoid too many refreshes
attempting to execute concurrently.
Scenario two - not enough CPU for refresh
In scenario two, there isn't enough available CPU to carry out the refresh.
For capacities, Power BI limits the number of refreshes that can happen concurrently. This number is equal to the
number of back-end cores x 1.5. For example, a P1 capacity, which has four back-end cores, can run 6 refreshes
concurrently. Once the maximum number of concurrent refreshes has been reached, other refreshes will wait
until an executing refresh finishes.
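As a quick sketch of that formula (the back-end core counts for P2 and P3 here are assumptions based on the published Premium SKU specifications; the document itself only confirms P1):

```python
import math

# Assumed back-end core counts for common Premium SKUs.
BACKEND_CORES = {"P1": 4, "P2": 8, "P3": 16}

def max_concurrent_refreshes(sku: str) -> int:
    # Concurrency limit = back-end cores x 1.5, e.g. P1: 4 x 1.5 = 6.
    return math.floor(BACKEND_CORES[sku] * 1.5)

for sku in BACKEND_CORES:
    print(sku, max_concurrent_refreshes(sku))  # P1 6, P2 12, P3 24
```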

Diagnosing scenario two


To diagnose scenario two, first determine whether throttling is due to running into the maximum concurrency
for refreshes. The steps to do so are the following.
1. Select the dataset you're interested in from Table A by clicking on it:

a. When a dataset is selected in Table A, Chart B is filtered to show when waiting occurred.
b. Chart C is then filtered to show any throttling, explained in the next step.
2. Look at the results in the now-filtered Chart C. If the chart shows max concurrency occurred at the times
the dataset was waiting, then the dataset was waiting due to not enough available CPU.

3. Finally, check Chart D, which shows the types of refreshes that were occurring, scheduled versus on-
demand. Any on-demand refreshes occurring at the same time could be the cause of the throttling.

Remedies for scenario two


1. Scale up the capacity - scaling up the capacity to the next SKU will make available twice the amount of
memory than the current SKU, and twice the number of concurrent refreshes than the current SKU, thus
alleviating any memory and CPU pressure the capacity is currently experiencing.
2. Move datasets to another capacity - if your wait times are being caused by maximum concurrency
being reached and you have another capacity that has available concurrency, you can move the
workspaces that contain the datasets that are waiting to the other capacity.
3. Spread out scheduled refreshes - spreading out the refreshes will help to avoid too many refreshes
attempting to execute concurrently.

Next steps
What is Power BI Premium?
Microsoft Power BI Premium whitepaper
Planning a Power BI Enterprise Deployment whitepaper
Extended Pro Trial activation
Power BI Embedded FAQ
More questions? Try asking the Power BI Community
Power BI has introduced Power BI Premium Gen2 as a preview offering, which improves the Power BI Premium
experience in the following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Restart a Power BI Premium capacity
5/23/2022 • 2 minutes to read

As a Power BI administrator, you might need to restart a Premium capacity. This article explains how to restart a
capacity and addresses several questions about restart and performance.

Why does Power BI provide this option?


Power BI gives users the ability to perform complex analyses on huge amounts of data. Unfortunately, users can
cause performance issues by overloading the Power BI service with jobs, writing overly complex queries,
creating circular references, and so on.
Power BI shared capacity offers some protection from such cases by imposing limits on file sizes, refresh
schedules, and other aspects of the service. In a Power BI Premium capacity, by contrast, most of those limits are
raised. As a result, a single report with a bad DAX expression or a very complex model can cause significant
performance issues. When processed, the report can consume all of the resources available on the capacity.
Power BI is constantly improving in how it protects Premium capacity users against such issues. We are also
empowering administrators with the tools to analyze when capacities are overburdened and why. For more
information, see our short training session and longer training session. At the same time, you need the ability to
mitigate significant issues when they occur. The quickest way to mitigate these issues is to restart the capacity.

NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 capacities do not
require restarts, so this feature is not available in Premium Gen2.
Embedded Gen2 capacities also don't require restart. To review the Power BI Embedded Gen2 enhancements, refer to
Power BI Embedded Generation 2.

NOTE
This process and functionality does not apply to Power BI Premium Per User (PPU) capacities or activities.

Is the restart process safe? Will I lose any data?


All the saved data, definitions, reports, and dashboards on your capacity remain fully intact after a restart. When
you restart a capacity, ongoing scheduled and ad-hoc refreshes are temporarily stopped by the refresh engine;
in most cases, they then restart due to refresh retry logic built into Power BI. The service attempts to retry any
impacted refreshes once the capacity becomes available. The state of refreshes may not change in the user
interface during the restart process.
Users interacting with the capacity will lose unsaved work during a restart process. Users should refresh their
browsers after the restart is complete.

How do I restart a capacity?


Follow these steps to restart a capacity.
1. In the Power BI admin portal, on the Capacity Settings tab, navigate to your capacity.
2. Add the CapacityRestart feature flag to your capacity URL:
https://app.powerbi.com/admin-portal/capacities/<YourCapacityId>?capacityRestartButton=true
3. Under Advanced Settings > CAPACITY RESTART, select Restart capacity.

4. In the Capacity restart dialog box, select Yes, restart capacity.

How can I prevent issues from happening in the future?


The best way to prevent issues is to educate users about efficient data modeling. For more information, see our
training session.
We also recommend that you monitor your capacities regularly to look for trends that indicate underlying
issues. We plan regular releases of the monitoring app and other tools so that you can monitor and manage
your capacities more effectively.

Next steps
What is Power BI Premium?
More questions? Try asking the Power BI Community
Governance and deployment approaches
5/23/2022 • 2 minutes to read

Over the last few decades, companies have become increasingly aware of the need to leverage data assets to
profit from market opportunities. Whether by performing competitive analysis or by understanding operational
patterns, many organizations now understand they can benefit from having a data strategy as a way to stay
ahead of their competition.
The Planning a Power BI Enterprise Deployment whitepaper provides a framework for increasing the return on
investment related to Power BI as companies increasingly seek to become more data-savvy.
Business Intelligence practitioners typically define data-savvy companies as those that benefit from the use of
factual information to support decision making. We even describe certain organizations as having a data
culture. Whether at the organizational level, or at a departmental level, a data culture can positively alter a
company’s ability to adapt and thrive. Data insights don't always have to be at enterprise scope to be far-
reaching: small operational insights that alter day-to-day operations can be transformational as well.
For these companies, there is an understanding that facts – and fact analysis – must drive how business
processes are defined. Team members attempt to seek data, identify patterns, and share findings with others.
This approach can be useful regardless of whether the analysis is done over external or internal business factors. It is
first and foremost a perspective, not a process. Read the whitepaper to learn about concepts, options, and
suggestions for governance within the Power BI ecosystem.
Metadata scanning
5/23/2022 • 4 minutes to read

Metadata scanning facilitates governance over your organization's Power BI data by making it possible to
quickly catalog and report on all the metadata of your organization's Power BI artifacts. It accomplishes this
using a set of Admin REST APIs that are collectively known as the scanner APIs. With the scanner APIs, you can
extract both general information such as artifact name, owner, sensitivity label, endorsement status, and last
refresh, as well as more detailed metadata such as dataset table and column names, measures, DAX expressions,
mashup queries, etc.
The following are the scanner APIs. They support both public and sovereign clouds.
GetModifiedWorkspaces
WorkspaceGetInfo
WorkspaceScanStatus
WorkspaceScanResult
Before metadata scanning can be run, a Power BI admin needs to set it up. See Setting up metadata scanning in
an organization.

Run metadata scanning


The following short walkthrough shows how to use the scanner APIs to retrieve metadata from your
organization's artifacts. It assumes that a Power BI admin has set up metadata scanning in your organization.
Step 1: Perform a full scan
Call workspaces/modified without the modifiedSince parameter to get the complete list of workspace IDs in
the tenant. This retrieves all the workspaces in the tenant, including classic workspaces, personal workspaces,
and new workspaces. If you wish to exclude personal workspaces from the scan, use the workspaces/modified
excludePersonalWorkspaces parameter.
Divide the list into chunks of 100 workspaces at most.
For each chunk of 100 workspaces:
Call workspaces/getInfo to trigger a scan call for these 100 workspaces. You will receive the scanId in the
response to use in the next steps. In the location header, you’ll also receive the URI to call for the next step.

NOTE
No more than 16 calls can be made simultaneously. The caller should wait for a scan succeeded/failed response from the
scanStatus API before invoking another call.
If some metadata you expected to receive is not returned, check with your Power BI admin to make sure they have
enabled all relevant admin switches.

Use the URI from the location header you received from calling workspaces/getInfo and poll on
workspaces/scanStatus/{scan_id} until the status returned is "Succeeded". This means the scan result is ready. It
is recommended to use a polling interval of 30-60 seconds. In the location header, you’ll also receive the URI to
call in the next step. Use it only once the status is "Succeeded".
Use the URI from the location header you received from calling workspaces/scanStatus/{scan-id} and read the
data using workspaces/scanResult/{scan_id}. The data contains the list of workspaces, artifact info, and other
metadata based on the parameters passed in the workspaces/getInfo call.
Step 2: Perform an incremental scan
Now that you have all the workspaces and the metadata and lineage of their assets, it's recommended that you
perform only incremental scans that reference the previous scan that you did.
Call workspaces/modified with the modifiedSince parameter set to the start time of the last scan in order to
get the workspaces that have changed and which therefore require another scan. The modifiedSince parameter
should be set for a date within the last 30 days.
Divide this list into chunks of up to 100 workspaces, and get the data for these changed workspaces using the
three API calls, workspaces/getInfo, workspaces/scanStatus/{scan_id}, and workspaces/scanResult/{scan_id}, as
described in Step 1 above.
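To make the flow concrete, here's a minimal Python sketch of the scan loop described above. It's a simplification: the admin access token is a placeholder, error handling is omitted, the location headers (which the walkthrough recommends using) are replaced by directly constructed URIs, and in a real implementation you must keep no more than 16 scans in flight.

```python
import time
import requests

BASE = "https://api.powerbi.com/v1.0/myorg/admin"
HEADERS = {"Authorization": "Bearer <admin-access-token>"}  # placeholder token

# Step 1: full list of workspace IDs (add modifiedSince for an incremental scan).
workspaces = requests.get(f"{BASE}/workspaces/modified", headers=HEADERS).json()
ids = [w["id"] for w in workspaces]

# Scan in chunks of at most 100 workspaces.
for start in range(0, len(ids), 100):
    chunk = ids[start:start + 100]

    # Trigger the scan; the query parameters control how much detail is returned.
    scan = requests.post(
        f"{BASE}/workspaces/getInfo",
        params={"lineage": "true", "datasourceDetails": "true",
                "datasetSchema": "true", "datasetExpressions": "true"},
        headers=HEADERS,
        json={"workspaces": chunk},
    ).json()

    # Poll scanStatus every 30-60 seconds until the scan succeeds.
    while requests.get(f"{BASE}/workspaces/scanStatus/{scan['id']}",
                       headers=HEADERS).json()["status"] != "Succeeded":
        time.sleep(30)

    # Read the scan result: workspaces, artifact info, and other metadata.
    result = requests.get(f"{BASE}/workspaces/scanResult/{scan['id']}",
                          headers=HEADERS).json()
    print(f"scanned {len(result['workspaces'])} workspaces")
```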

Considerations and limitations


Datasets that have not been refreshed or republished will be returned in API responses but without their
detailed low-level information and expressions. For example, you will see dataset name and lineage in the
response, but not the dataset's table and column names.
Datasets containing only DirectQuery tables will return low-level details only if they have been republished
since enhanced metadata scanning has been enabled. This is because DirectQuery datasets don't use the
regular Power BI dataset refresh flow that triggers caching. If, however, a dataset also contains tables that use
import mode, caching takes place upon dataset refresh as described above, and it is not necessary for the
dataset to be republished in order for low-level details to be returned.
Real-time datasets, datasets with Object Level Security, datasets with a live connection to AS-Azure and AS
on-prem, and Excel full fidelity datasets are not supported for detailed metadata. For unsupported datasets,
the response returns the reason for not getting the detailed metadata about the dataset. It is found in a field
named schemaRetrievalError, for example, schemaRetrievalError: Unsupported request for RealTime model.
The API doesn't return sub-artifact metadata for datasets that are larger than 1GB in shared workspaces. For
Premium workspaces there is no size limitation.

Licensing
Metadata scanning requires no special license. It works for all of your tenant metadata, including that of artifacts
that are located in non-Premium workspaces.

Next steps
Power BI REST Admin APIs
Set up metadata scanning
Enable service principal authentication for read-only admin APIs
More questions? Try asking the Power BI Community
Data protection in Power BI
5/23/2022 • 2 minutes to read

Overview
Power BI plays a key role in bringing data insights to everyone in an organization. However, as data becomes
more accessible to inform decisions, risk of accidental oversharing or misuse of business-critical information
increases.
Microsoft has world-class security capabilities to help protect customers from threats. At Microsoft, over 3,500
security researchers, along with sophisticated AI models, reason every day over 6.5+ trillion signals globally to
help protect customers against threats.
Data protection capabilities in Power BI build on Microsoft's strengths in security and enable customers to
empower every user with Power BI and better protect their data no matter how or where it is accessed.

The pillars of Power BI's data protection capabilities and how they help you protect your organization's sensitive
data are listed below:
Microsoft Information Protection sensitivity labels
Classify and label sensitive Power BI data using Microsoft Information Protection sensitivity
labels used in Office and other Microsoft products.
Enforce governance policies even when Power BI content is exported to Excel, PowerPoint,
PDF, and other supported export formats to help ensure data is protected even when it leaves Power
BI.
Microsoft Defender for Cloud Apps
Monitor and protect user activity on sensitive data in real time with alerts, session
monitoring, and risk remediation using Defender for Cloud Apps.
Empower security administrators who use data protection reports and security investigation
capabilities with Defender for Cloud Apps to enhance organizational oversight.
Microsoft 365 data loss prevention
Data loss prevention policies for Power BI enable central security teams to use Microsoft 365
data loss prevention policies to enforce the organization’s DLP policies on Power BI. DLP policies for
Power BI currently support detection of sensitive info types and sensitivity labels on datasets, and can
trigger automatic risk remediation actions such as alerts to security admins in Microsoft 365
compliance portal and policy tips for end users.
Read more about Microsoft Information Protection sensitivity labels, Microsoft Defender for Cloud Apps, and
Microsoft 365 data loss prevention.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!

Next steps
Learn about sensitivity labels in Power BI and how to use them
Set up and use Defender for Cloud Apps controls in Power BI
Learn about data loss prevention
Microsoft Business Applications Summit video session - Power BI and Microsoft Information Protection - The
game changer for secure BI
Sensitivity labels in Power BI
5/23/2022 • 18 minutes to read

This article describes the functionality of Microsoft Information Protection sensitivity labels in Power BI.
For information about enabling sensitivity labels on your tenant, including licensing requirements and
prerequisites, see Enable data sensitivity labels in Power BI.
For information about how to apply sensitivity labels on your Power BI content and files, see How to apply
sensitivity labels in Power BI.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!

Introduction
Microsoft Information Protection sensitivity labels provide a simple way for your users to classify critical content
in Power BI without compromising productivity or the ability to collaborate. They can be applied in both Power
BI Desktop and the Power BI service, making it possible to protect your sensitive data from the moment you first
start developing your content on through to when it's being accessed from Excel via a live connection.
Sensitivity labels are retained when you move your content back and forth between Desktop and the service in
the form of .pbix files.
In the Power BI service, sensitivity labels can be applied to datasets, reports, dashboards, and dataflows. When
labeled data leaves Power BI, either via export to Excel, PowerPoint, PDF, or .pbix files, or via other supported
export scenarios such as Analyze in Excel or live connection PivotTables in Excel, Power BI automatically applies
the label to the exported file and protects it according to the label's file encryption settings. This way your
sensitive data can remain protected, even when it leaves Power BI.
In addition, sensitivity labels can be applied to .pbix files in Power BI Desktop, so that your data and content is
safe when it’s shared outside Power BI (for example, so that only users within your organization can open a
confidential .pbix that has been shared or attached in an email), even before it has been published to the Power
BI service. See Restrict access to content by using sensitivity labels to apply encryption for more detail.
Sensitivity labels on reports, dashboards, datasets, and dataflows are visible from many places in the Power BI
service. Sensitivity labels on reports and dashboards are also visible in the Power BI iOS and Android mobile
apps and in embedded visuals. In Desktop, you can see the sensitivity label in the status bar.
A protection metrics report available in the Power BI admin portal gives Power BI admins full visibility over the
sensitive data in the Power BI tenant. In addition, the Power BI audit logs include sensitivity label information
about activities such as applying, removing, and changing labels, as well as about activities such as viewing
reports, dashboards, etc. This gives Power BI and security admins visibility over sensitive data consumption for
the purposes of monitoring and investigating security alerts.

Important considerations
In the Power BI service, sensitivity labeling does not affect access to content. Access to content in the service is
managed solely by Power BI permissions. While the labels are visible, any associated encryption settings
(configured in the Microsoft 365 compliance center) aren’t applied. They’re applied only to data that leaves the
service via a supported export path, such as export to Excel, PowerPoint, or PDF, and download to .pbix.
In Power BI Desktop, sensitivity labels with encryption settings do affect access to content. If a user doesn’t have
sufficient permissions according to the encryption settings of the sensitivity label on the .pbix file, they will not
be able to open the file. In addition, in Desktop, when you save your work, any sensitivity label you've added and
its associated encryption settings will be applied to the saved .pbix file.
Sensitivity labels and file encryption are not applied in non-supported export paths. The Power BI admin can
block export from non-supported export paths.

NOTE
Users who are granted access to a report are granted access to the entire underlying dataset, unless row-level security
(RLS) limits their access. Report authors can classify and label reports using sensitivity labels. If the sensitivity label has
protection settings, Power BI applies these protection settings when the report data leaves Power BI via a supported
export path such as export to Excel, PowerPoint, or PDF, download to .pbix, and Save (Desktop). Only authorized users
will be able to open protected files.

Supported export paths


Applying sensitivity labels and their associated protection to data that leaves the Power BI service is currently
supported for the following export paths:
Export to Excel, PDF files (Service only), and PowerPoint.
Analyze in Excel from the Power BI service, which triggers download of an Excel file with a live connection to
a Power BI dataset.
PivotTable in Excel with a live connection to a Power BI dataset, for users with Microsoft 365 E3 and above.
Download to .pbix (Service)

NOTE
When using Download the .pbix in the Power BI service, if the downloaded report and its dataset have different labels,
the more restrictive label will be applied to the .pbix file.

How sensitivity labels work in Power BI


When you apply a sensitivity label to Power BI content and files, it's similar to applying a tag on that resource
that has the following benefits:
Customizable - you can create categories for different levels of sensitive content in your organization, such
as Personal, Public, General, Confidential, and Highly Confidential.
Clear text - since the label is in clear text, it's easy for users to understand how to treat the content
according to sensitivity label guidelines.
Persistent - after a sensitivity label has been applied to content, it accompanies that content when it’s
exported to Excel, PowerPoint and PDF files, downloaded to .pbix, or saved (in Desktop) and becomes the
basis for applying and enforcing policies.
Here's a quick example of how sensitivity labels in Power BI work. The image below shows how a sensitivity
label is applied on a report in the Power BI service, then how the data from the report is exported to an Excel file,
and finally how the sensitivity label and its protections persist in the exported file.
The sensitivity labels you apply to content persist and roam with the content as it's used and shared throughout
Power BI. You can use the labeling to generate usage reports and to see activity data for your sensitive content.

Sensitivity labels in Power BI Desktop


Sensitivity labels can also be applied in Power BI Desktop. This makes it possible to protect your data from the
moment you first start developing your content. When you save your work in Desktop, the sensitivity label you
applied, along with any associated encryption settings, is applied to the resulting .pbix file. If the label has
encryption settings, the file is thus protected wherever it goes and however it’s transmitted. Only users with the
necessary RMS permissions will be able to open it.

NOTE
Some limitations may apply. See Considerations and limitations.

If you apply a sensitivity label in Desktop, when you publish your work to the service, or when you upload a
.pbix file of that work to the service, the label travels with the data into the service. In the service, the label will
be applied to both the dataset and the report that you get with the file. If the dataset and report already have
sensitivity labels, you can choose to keep those labels or to overwrite them with the label coming from Desktop.
If you upload a .pbix file that has never been published to the service before, and that has the same name as a
report or dataset that already exists on the service, the upload will succeed only if the uploader has the RMS
permissions necessary to change the label.
The same is also true in the opposite direction - when you download to .pbix in the service and then load the
.pbix into Desktop, the label that was in the service will be applied to the downloaded .pbix file and from there
be loaded into Desktop. If the report and dataset in the service have different labels, the more restrictive of the
two will be applied to the downloaded .pbix file.
When you apply a label in Desktop, it shows up in the status bar.
Learn how to apply sensitivity labels to Power BI content and files.

Sensitivity label inheritance upon creation of new content


When new reports and dashboards are created in the Power BI service, they automatically inherit the sensitivity
label previously applied on the parent dataset or report. For example, a new report created on top of a dataset that
has a "Highly Confidential" sensitivity label will automatically receive the "Highly Confidential" label as well.
The following image shows how a dataset's sensitivity label is automatically applied on a new report that is built
on top of the dataset.

NOTE
If for any reason the sensitivity label can't be applied on the new report or dashboard, Power BI will not block creation of
the new item.

Sensitivity label inheritance from data sources (preview)


Power BI datasets that connect to sensitivity-labeled data in supported data sources can inherit those labels so
that the data remains classified and secure when brought into Power BI. Currently, Azure Synapse Analytics
(formerly SQL Data Warehouse) and Azure SQL Database are supported. See Sensitivity label inheritance from
data sources to learn how inheritance from data sources works and how to enable it for your organization.

Sensitivity label downstream inheritance


When a sensitivity label is applied to a dataset or report in the Power BI service, it is possible to have the label
trickle down and be automatically applied to content that is built from that dataset or report as well. This
capability is called downstream inheritance.
Downstream inheritance is a critical link in Power BI's end-to-end information protection solution. Together with
inheritance from data sources, inheritance upon creation of new content, inheritance upon export to file, and
other capabilities for applying sensitivity labels, downstream inheritance helps ensure that sensitive data
remains protected throughout its journey through Power BI, from data source to point of consumption.
Read more about downstream inheritance

Data loss prevention (DLP) policies (preview)


Power BI leverages Microsoft 365 data loss prevention to enable central security teams to use data loss
prevention policies to enforce their organization's DLP policies in Power BI. See Data loss prevention policies for
Power BI (preview) for detail.

Default label policy


To help ensure comprehensive protection and governance of sensitive data, organizations can create default
label policies for Power BI that automatically apply default sensitivity labels to unlabeled content. Currently,
default label policies are supported in Power BI Desktop only. For more information, see Default label policy.

Mandatory label policy


To help ensure comprehensive protection and governance of sensitive data, organizations can require users to
apply labels to their sensitive Power BI content. Such a policy is called a mandatory label policy. For more
information, see Mandatory label policy.

Admin APIs for setting and removing labels programmatically


To meet compliance requirements, organizations are often required to classify and label all sensitive data in
Power BI. This task can be challenging for tenants that have large volumes of data in Power BI. To make the task
easier and more effective, Power BI has admin REST APIs that admins can use to set and remove sensitivity
labels on large numbers of Power BI artifacts programatically. See the following:
Admin - InformationProtection SetLabelsAsAdmin
Admin - InformationProtection RemoveLabelsAsAdmin
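As an illustration, a bulk label assignment with the SetLabelsAsAdmin API might look like the following Python sketch. The token, artifact IDs, and label GUID are placeholders, and the exact request body shape shown here is an assumption; check it against the Admin - InformationProtection SetLabelsAsAdmin reference before relying on it.

```python
import requests

URL = "https://api.powerbi.com/v1.0/myorg/admin/informationprotection/setLabels"
HEADERS = {"Authorization": "Bearer <admin-access-token>"}  # placeholder

# Assumed body shape: the artifacts to label, keyed by type, plus the label GUID.
body = {
    "artifacts": {
        "datasets": [{"id": "<dataset-guid>"}],
        "reports": [{"id": "<report-guid>"}],
    },
    "labelId": "<sensitivity-label-guid>",
}

resp = requests.post(URL, headers=HEADERS, json=body)
resp.raise_for_status()
print(resp.json())  # per-artifact success/failure details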

Auditing for activity on sensitivity labels


Whenever a sensitivity label on a dataset, report, dashboard, or dataflow is applied, changed, or removed, that
activity is recorded in the audit log for Power BI. You can track these activities in the unified audit log or in the
Power BI activity log. See Audit schema for sensitivity labels in Power BI for detail.
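For example, you could pull a day's worth of label-related events from the activity log with the admin ActivityEvents API, as in the Python sketch below. The token is a placeholder, the activity name used in the filter is an assumption taken from the sensitivity-label audit schema, and the API requires the start and end times to fall within the same UTC day.

```python
import requests

BASE = "https://api.powerbi.com/v1.0/myorg/admin"
HEADERS = {"Authorization": "Bearer <admin-access-token>"}  # placeholder

params = {
    "startDateTime": "'2022-05-23T00:00:00Z'",  # quoted, same UTC day as end
    "endDateTime": "'2022-05-23T23:59:59Z'",
    "$filter": "Activity eq 'SensitivityLabelApplied'",  # assumed activity name
}

events, url = [], f"{BASE}/activityevents"
while url:
    page = requests.get(url, headers=HEADERS, params=params).json()
    events.extend(page.get("activityEventEntities", []))
    url = page.get("continuationUri")  # follow pagination until exhausted
    params = None  # the continuation URI already embeds the query

print(f"{len(events)} label-apply events found")
```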

Sensitivity labels and protection on exported data


When data is exported from Power BI to Excel, PDF files (service only) or PowerPoint files, Power BI automatically
applies a sensitivity label on the exported file and protects it according to the label's file encryption settings. This
way your sensitive data remains protected no matter where it is.
A user who exports a file from Power BI has permissions to access and edit that file according to the sensitivity
label settings; they don’t get owner permissions to the file.
NOTE
When using Download the .pbix in the Power BI service, if the downloaded report and its dataset have different labels,
the more restrictive label will be applied to the .pbix file.

Sensitivity labels and protection aren’t applied when data is exported to .csv files or any other unsupported
export path.
Applying a sensitivity label and protection to an exported file doesn't add content marking to the file. However, if
the label is configured to apply content markings, the markings are automatically applied by the Azure
Information Protection unified labeling client when the file is opened in Office desktop apps. The content
markings aren’t automatically applied when you use built-in labeling for desktop, mobile, or web apps. See
When Office apps apply content marking and encryption for more detail.
Export fails if a label can't be applied when data is exported to a file. To check if export failed because the label
couldn't be applied, click the report or dashboard name at the center of the title bar and see whether it says
"Sensitivity label can't be loaded" in the info dropdown that opens. This can happen as the result of a temporary
system issue, or if the applied label has been unpublished or deleted by the security admin.

Sensitivity label inheritance in Analyze in Excel


When you create a PivotTable in Excel with a live connection to a Power BI dataset (you can do this either from
Power BI through Analyze In Excel or from Excel), the dataset's sensitivity label is inherited and applied to your
Excel file, along with any associated protection. If the label on the dataset later changes to a more restrictive one,
the label applied on the linked Excel file will automatically update upon data refresh.

Sensitivity labels in Excel that were manually set aren’t automatically overwritten by the dataset's sensitivity
label. Rather, a banner notifies you that the dataset has a sensitivity label and recommends that you apply it.
NOTE
If the dataset's sensitivity label is less restrictive than the Excel file's sensitivity label, no label inheritance or update takes
place. An Excel file never inherits a less restrictive sensitivity label.

Sensitivity label persistence in embedded reports and dashboards


You can embed Power BI reports, dashboards, and visuals in business applications such as Microsoft Teams and
SharePoint, or in an organization's website. When you embed a visual, report or dashboard that has a sensitivity
label applied to it, the sensitivity label will be visible in the embedded view, and the label and its protection will
persist when data is exported to Excel.

The following embedding scenarios are supported:


Embed for your organization
Microsoft 365 apps (for example, Teams and SharePoint)
Secure URL embedding (embedding from the Power BI service)

Sensitivity labels in paginated reports


Sensitivity labels can be applied to paginated reports hosted in the Power BI service. After uploading a
paginated report to the service, you apply the label to the report just as you would to a regular Power BI report.
See Sensitivity label support for paginated reports for detail.

Sensitivity labels in deployment pipelines


Sensitivity labels are supported in deployment pipelines. See the deployment pipeline documentation for details
about how sensitivity labels are handled as content is deployed from stage to stage.

Sensitivity labels in the Power BI mobile apps


Sensitivity labels can be viewed on reports and dashboards in the Power BI mobile apps. An icon near the name
of the report or dashboard indicates that it has a sensitivity label, and the type of label and its description can be
found in the report or dashboard's info box.
Label change enforcement
Power BI restricts permission to change or remove Microsoft Information Protection sensitivity labels that have
file encryption settings to authorized users only. See Sensitivity label change enforcement for detail.

Supported clouds
Sensitivity labels are supported for tenants in global (public) clouds, and the following national clouds:
US Government: GCC, GCC High, DoD
China
Sensitivity labels are not currently supported in other national clouds.

Licensing and requirements


See Licensing and requirements.

Sensitivity label creation and management


Sensitivity labels are created and managed in the Microsoft 365 compliance center.
To access sensitivity labels, navigate to Classification > Sensitivity labels. These
sensitivity labels can be used by multiple Microsoft services such as Azure Information Protection, Office apps,
and Office 365 services.

IMPORTANT
If your organization uses Azure Information Protection sensitivity labels, you need to migrate them to the
Microsoft Information Protection unified labeling platform in order for the labels to be used in Power BI.

Custom help link


To help your users understand what your sensitivity labels mean or how they should be used, you can provide a
Learn more URL that appears at the bottom of the sensitivity label menu that you see when you're applying a
sensitivity label.

See Custom help link for sensitivity labels for detail.

Considerations and limitations


General
Power BI admins: if a sensitivity label is or becomes a parent label (that is, it has sublabels), exporting data
from content that has that label applied will fail. See Sublabels (grouping labels).
Data sensitivity labels aren’t supported for template apps. Sensitivity labels set by the template app
creator are removed when the app is extracted and installed, and sensitivity labels added to artifacts in an
installed template app by the app consumer are lost (reset to nothing) when the app is updated.
In the Power BI service, if a dataset has a label that has been deleted from the label admin center, you will
not be able to export or download the data. In Analyze in Excel, a warning will be issued and the data will
be exported to an .odc file with no sensitivity label.
Power BI doesn’t support sensitivity labels of the Do Not Forward, user-defined, and HYOK protection
types. The Do Not Forward and user-defined protection types refer to labels defined in the Microsoft 365
compliance center.
Getting data from encrypted Excel (.xlsx) files isn’t supported. This includes "Get data" and refresh
scenarios.
Information protection in Power BI doesn't support B2B and multi-tenant scenarios.
Power BI service
Sensitivity labels can be applied only on dashboards, reports, datasets, dataflows, and paginated reports.
They aren't currently available for workbooks.
Sensitivity labels on Power BI assets are visible in the workspace list, lineage, favorites, recents, and apps
views; labels aren’t currently visible in the "shared with me" view. Note, however, that a label applied to a
Power BI asset, even if not visible, will always persist on data exported to Excel, PowerPoint, PDF, and PBIX
files.
Power BI Desktop
Power BI Desktop for Power BI Report Server doesn’t support information protection. If you try to open a
protected .pbix file, the file won’t open and you’ll receive an error message. Sensitivity-labeled .pbix files
that aren’t encrypted can be opened as normal.
To open a protected .pbix file, a user must have Full control and/or Export usage rights for the relevant
label. See more detail. In addition, the label must be in the user's label policy. If it isn't, the open action will
fail.
The user who set the label also has Full control and can never be locked out, unless connectivity fails and
authentication can't take place.
"Publish" or "Get data" of a protected .pbix file requires that the label on the .pbix file be in the user's label
policy. If the label isn't in the user's label policy, the Publish or Get data action will fail.
If the label applied to a .pbix file hasn't been published to the user in the Microsoft 365 compliance center,
the user won’t be able to save the file in Desktop.
Power BI supports publishing or importing a .pbix file that has an unprotected sensitivity label to the
service via APIs running under a service principal. Publishing or importing a .pbix file that has a
protected sensitivity label to the service via APIs running under a service principal is not supported and
will fail. To mitigate, users can remove the label and then publish using service principals.
Power BI Desktop users may experience problems saving their work when internet connectivity is lost. With
no internet connection, some actions related to sensitivity labels and rights management might not
complete properly. In such cases, it’s recommended to go back online and try saving again.
In general, when you protect a file with a sensitivity label that applies encryption, it’s good practice to use
additional protection measures as well, such as pagefile encryption, NTFS encryption, BitLocker, and
antimalware.
Temp files aren’t encrypted.
Get data can upload protected files only if they’re local. Protected files from online services such as
SharePoint Online or OneDrive for Business can’t be uploaded. For a protected file, you can either upload
it from your local device, or first remove the file's label in Power BI Desktop and then upload it via one of
the online services.
Export to PDF in Desktop doesn’t support sensitivity labels. In Desktop, if you export a file that has a
sensitivity label to PDF, the PDF won’t receive the label and no protection will be applied.
If you overwrite a labeled dataset or report in the service with an unlabeled .pbix file, the labels in the
service will be retained.

Next steps
This article provided an overview of data protection in Power BI. The following articles provide more details
about data protection in Power BI.
Enable sensitivity labels in Power BI
How to apply sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Protection metrics report
Enable sensitivity labels in Power BI

In order for Microsoft Information Protection sensitivity labels to be used in Power BI, they must be enabled on
the tenant. This article shows Power BI admins how to do this. For an overview about sensitivity labels in Power
BI, see Sensitivity labels in Power BI. For information about applying sensitivity labels in Power BI, see Applying
sensitivity labels.
When sensitivity labels are enabled:
Specified users and security groups in the organization can classify and apply sensitivity labels to their Power
BI content. In the Power BI service, this means their reports, dashboards, datasets, and dataflows. In Power BI
Desktop, it means their .pbix files.
In the service, all members of the organization will be able to see those labels. In Desktop, only members of
the organization who have the labels published to them will be able to see the labels.
Enabling sensitivity labels requires an Azure Information Protection license. See Licensing and requirements for
detail.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!

Licensing and requirements


An Azure Information Protection Premium P1 or Premium P2 license is required to apply or view
Microsoft Information Protection sensitivity labels in Power BI. Azure Information Protection can be
purchased either standalone or through one of the Microsoft licensing suites. See Azure Information
Protection pricing for detail.

NOTE
If your organization uses Azure Information Protection sensitivity labels, they need to be migrated to the
Microsoft Information Protection Unified Labeling platform in order for them to be used in Power BI. Learn
more about migrating sensitivity labels.

To be able to apply labels to Power BI content and files, a user must have a Power BI Pro or Premium Per
User (PPU) license in addition to one of the Azure Information Protection licenses mentioned above.
Office apps have their own licensing requirements for viewing and applying sensitivity labels.
Before enabling sensitivity labels on your tenant, make sure that sensitivity labels have been defined and
published for relevant users and groups. See Create and configure sensitivity labels and their policies for
detail.
Customers in China must enable rights management for the tenant and add the Microsoft Information
Protection Sync Service service principal, as described in steps 1 and 2 under Configure Azure
Information Protection for customers in China.
Using sensitivity labels in Desktop requires the Desktop December 2020 release or later.
NOTE
If you try to open a protected .pbix file with a Desktop version earlier than December 2020, it will fail, and you will
be prompted to upgrade your Desktop version.

Enable sensitivity labels


Sensitivity labels must be enabled on the tenant before they can be used in both the service and in Desktop. This
section describes how to enable them in the tenant settings.
To enable sensitivity labels on the tenant, go to the Power BI Admin portal, open the Tenant settings pane,
and find the Information protection section.

In the Information Protection section, perform the following steps:


1. Open Allow users to apply sensitivity labels for Power BI content.
2. Enable the toggle.
3. Define who can apply and change sensitivity labels in Power BI assets. By default, everyone in your
organization will be able to apply sensitivity labels. However, you can choose to enable setting sensitivity
labels only for specific users or security groups. With either the entire organization or specific security
groups selected, you can exclude specific subsets of users or security groups.
When sensitivity labels are enabled for the entire organization, exceptions are typically security
groups.
When sensitivity labels are enabled only for specific users or security groups, exceptions are typically
specific users.
This approach makes it possible to prevent certain users from applying sensitivity labels in Power BI,
even if they belong to a group that has permissions to do so.
4. Press Apply.
IMPORTANT
Only Power BI Pro users who have create and edit permissions on the asset, and who are part of the relevant security
group that was set in this section, will be able to set and edit the sensitivity labels. Users who are not part of this group
won't be able to set or edit the label.

Troubleshooting
Power BI uses Microsoft Information Protection sensitivity labels. Thus if you encounter an error message when
trying to enable sensitivity labels, it might be due to one of the following:
You do not have an Azure Information Protection license.
Sensitivity labels have not been migrated to the Microsoft Information Protection version supported by
Power BI.
No Microsoft Information Protection sensitivity labels have been defined in the organization.

Considerations and limitations


See Sensitivity labels in Power BI for the list of sensitivity label limitations in Power BI.

Next steps
This article described how to enable sensitivity labels in Power BI. The following articles provide more details
about data protection in Power BI.
Overview of sensitivity labels in Power BI
How to apply sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Protection metrics report
How to apply sensitivity labels in Power BI

Microsoft Information Protection sensitivity labels on your reports, dashboards, datasets, dataflows, and .pbix
files can guard your sensitive content against unauthorized data access and leakage. Labeling your data
correctly with sensitivity labels ensures that only authorized people can access your data. This article shows you
how to apply sensitivity labels in the Power BI service and in Power BI Desktop.
For more information about sensitivity labels in Power BI, see Sensitivity labels in Power BI.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!

Apply sensitivity labels in the Power BI service


In the Power BI service, you can apply sensitivity labels to reports, dashboards, datasets, and dataflows.
To be able to apply sensitivity labels in the Power BI service:
You must have a Power BI Pro or Premium Per User (PPU) license and edit permissions on the content you
wish to label.
Sensitivity labels must be enabled for your organization. Contact your Power BI admin if you aren't sure
about this.
You must belong to a security group that has permissions to apply sensitivity labels, as described in Enable
sensitivity labels in Power BI.
All licensing and other requirements must have been met.
When data protection is enabled on your tenant, sensitivity labels appear in the sensitivity column in the list
view of dashboards, reports, datasets, and dataflows.

To apply or change a sensitivity label on a report or dashboard:


1. Go to Settings.
2. In the settings side pane, go to the Sensitivity label section and choose the appropriate sensitivity label.
3. Save the settings.
The following image illustrates these steps on a report:

NOTE
If the label is greyed out, you may not have the correct usage rights to change the label. If you need to change a
sensitivity label and can't, either ask the person who applied the label in the first place to modify it, or contact the
Microsoft 365/Office security administrator and request the necessary usage rights for the label.

To apply or change a sensitivity label on a dataset or dataflow:


1. Go to Settings.
2. Select the datasets or dataflows tab, whichever is relevant.
3. Expand the sensitivity labels section and choose the appropriate sensitivity label.
4. Apply the settings.
The following two images illustrate these steps on a dataset.
Choose More options (...) and then Settings.
On the settings datasets tab, open the sensitivity label section, choose the desired sensitivity label, and click
Apply.

NOTE
If the label is greyed out, you may not have the correct usage rights to change the label. If you need to change a
sensitivity label and can't, either ask the person who applied the label in the first place to modify it, or contact the
Microsoft 365/Office security administrator and request the necessary usage rights for the label.

Apply sensitivity labels in Power BI Desktop


To use sensitivity labels in Power BI Desktop:
You must have a Power BI Pro or Premium Per User (PPU) license.
Sensitivity labels must be enabled for your organization. Contact your Power BI admin if you aren't sure
about this.
You must belong to a security group that has permissions to apply sensitivity labels, as described in Enable
sensitivity labels in Power BI.
All licensing and other requirements must have been met.
You must be signed in.
Watch a short video on applying sensitivity labels and then try it out yourself.

NOTE
This video might use earlier versions of Power BI Desktop or the Power BI service.

To apply a sensitivity label on the file you're working on, click the sensitivity button in the home tab and choose
the desired label from the menu that appears.

NOTE
If the sensitivity button is greyed out, it may indicate that you don't have an appropriate license or that you do not
belong to a security group that has permissions to apply sensitivity labels, as described in Enable sensitivity labels in
Power BI.
If a particular label you wish to change is greyed out, you may not have the correct usage rights to change that label. If
you need to change a sensitivity label and can't, either ask the person who applied the label in the first place to modify it,
or contact the Microsoft 365/Office security administrator and request the necessary usage rights for the label.

After you've applied the label, it will be visible in the status bar.

Sensitivity labels when uploading or downloading .pbix files to/from the service
When you publish a .pbix file to the Power BI service from Desktop, or when you upload a .pbix file to the
Power BI service directly using Get data , the .pbix file's label gets applied to both the report and the dataset
that are created in the service. If the .pbix file you're publishing or uploading replaces existing assets (i.e. that
have the same name as the .pbix file), a dialog will prompt you to choose whether to keep the labels on those
assets or to have the .pbix file's label overwrite those labels. If the .pbix file is unlabeled, the labels in the
service will be retained.
When using "Download to .pbix" in the Power BI service, if the report and dataset being downloaded both
have labels, and those labels are different, the label that will be applied to the .pbix file is the more restrictive
of the two.

Remove sensitivity labels


Service
To remove a sensitivity label from a report, dashboard, dataset, or dataflow, follow the same procedure used for
applying labels in the Power BI service, but choose (None) when prompted to classify the sensitivity of the data.
Desktop
To remove a sensitivity label from a .pbix file, reselect the currently applied label in the sensitivity drop-down menu.

Considerations and limitations


See Sensitivity labels in Power BI for the list of sensitivity label limitations in Power BI.

Next steps
This article described how to apply sensitivity labels in Power BI. The following articles provide more details
about data protection in Power BI.
Overview of sensitivity labels in Power BI
Enable sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Default label policy for Power BI

To help ensure comprehensive protection and governance of sensitive data, organizations can create default
label policies for Power BI that automatically apply default sensitivity labels to unlabeled content.
This article describes how to enable a default label policy, both in the Microsoft 365 compliance center and by
using the Security & Compliance Center PowerShell setLabelPolicy API.

NOTE
The default label policy settings for Power BI are independent of the default label policy settings for files and email.

What happens when a default label policy is in effect?


In Power BI Desktop, when a user to whom the policy applies opens a new .pbix file or an existing unlabeled
.pbix file, the default label will be applied to the file. If the user is working offline, the label will be applied
when the user signs in.
In the Power BI service, when a user to whom the policy applies creates a new dataset, report, dashboard,
dataflow or scorecard, the default label will be applied to that item.

Enabling a default label policy for Power BI


A Microsoft 365 administrator can enable a default label policy for Power BI by selecting the desired label in the
Apply this label by default to Power BI drop-down menu in the policy settings for Power BI in the Microsoft
365 compliance center. See What label policies can do.

For existing policies, it is also possible to enable default label policies for Power BI using the Security &
Compliance Center PowerShell setLabelPolicy API.
Set-LabelPolicy -Identity "<default label policy name>" -AdvancedSettings @{powerbidefaultlabelid="<LabelId>"}

Where:
<default label policy name> = the name of the policy whose associated sensitivity label you want to be
applied by default to unlabeled content in Power BI.
<LabelId> = the GUID of the sensitivity label to be applied by default.
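For illustration, a filled-in invocation might look like the following sketch. The policy name and label GUID are hypothetical placeholders; you can list your tenant's label GUIDs first with Get-Label.

```powershell
# List available sensitivity labels and their GUIDs.
Get-Label | Format-Table -Property DisplayName, Guid

# Hypothetical policy name and label GUID -- substitute your own values.
Set-LabelPolicy -Identity "Global Policy" -AdvancedSettings @{
    powerbidefaultlabelid = "2bfc6be2-51e7-4366-8e4b-41d0e93d3f9a"
}
```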

IMPORTANT
If a user has more than one label policy, the default label setting is always taken from the policy with the highest priority,
so be sure to configure the default label on that policy.

Requirements for using PowerShell


You need the EXO V2 module to run this command. For more information, see About the Exchange Online
PowerShell V2 module.
A connection to the Microsoft 365 compliance center is also required. For more information, see Connect to
Security & Compliance Center PowerShell using the EXO V2 module.
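For example, a minimal connection sequence might look like this; the admin account name is a placeholder:

```powershell
# Requires the ExchangeOnlineManagement (EXO V2) module.
Install-Module -Name ExchangeOnlineManagement -Scope CurrentUser

# Connect to Security & Compliance Center PowerShell (placeholder UPN).
Connect-IPPSSession -UserPrincipalName admin@contoso.onmicrosoft.com
```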
Documentation
Admin Guide: Custom configurations for the Azure Information Protection unified labeling client
Create and configure sensitivity labels and their policies
Set-LabelPolicy documentation

Considerations and limitations


Default labeling in Power BI covers most common scenarios, but there may be some less common flows that
still allow users to open or create unlabeled .pbix files or Power BI artifacts.
Default label policy settings for Power BI are independent of the default label policy settings for files and
email.
Default labeling in Power BI is not supported for service principals and APIs. Service principals and APIs are
not subject to default label policies.
Default label policies in Power BI are not supported for external guest users (B2B users). When a B2B user
opens or creates an unlabeled .pbix file in Power BI Desktop or Power BI artifact in the Power BI service, no
default label will be applied automatically.

Next steps
Mandatory label policy for Power BI
Sensitivity labels in Power BI
Data protection metrics report
Audit schema for sensitivity labels in Power BI
Mandatory label policy for Power BI

To help ensure comprehensive protection and governance of sensitive data, you can require your organization's
Power BI users to apply sensitivity labels to content they create or edit in Power BI. You do this by enabling, in
their sensitivity label policies, a special setting for mandatory labeling in Power BI. This article describes the user
actions that are affected by a mandatory labeling policy, and explains how to enable a mandatory labeling policy
for Power BI.

NOTE
The mandatory label policy setting for Power BI is independent of the mandatory label policy setting for files and email.
Mandatory labeling in Power BI is not supported for service principals and APIs. Service principals and APIs are not
subject to mandatory label policies.

What happens when a mandatory label policy is in effect?


In the Power BI service:
Users must apply a sensitivity label in order to be able to save new reports, dashboards, or datasets.
Users must apply a sensitivity label in order to be able to save changes to the settings or content of
existing, unlabeled reports and dashboards.
If users try to import data from an unlabeled .pbix file, they will be prompted to select a label before
the import will be allowed to continue. The label they select will be applied to the resulting dataset and
report in the service. It is not applied to the .pbix file itself.
In Power BI Desktop:
Users must apply sensitivity labels to unlabeled .pbix files before they will be allowed to save them or
publish them to the service.

Enabling a mandatory label policy for Power BI


A Microsoft 365 administrator can enable a mandatory label policy for Power BI by selecting the Require users
to apply a label to their Power BI content checkbox in the Microsoft 365 compliance center. See What label
policies can do.
If you already have an existing policy and you want to enable mandatory labeling in Power BI in it, you can use
the Security & Compliance Center PowerShell setLabelPolicy API.

Set-LabelPolicy -Identity "<policy name>" -AdvancedSettings @{powerbimandatory="true"}

Where:
<policy name> = the name of the policy where you want to set labeling in Power BI as mandatory.
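As an illustration, the following sketch enables mandatory labeling on a hypothetical policy, then reads the policy's settings back to confirm the advanced setting took effect:

```powershell
# Hypothetical policy name -- substitute your own.
Set-LabelPolicy -Identity "Global Policy" -AdvancedSettings @{powerbimandatory="true"}

# Verify: inspect the policy's settings for the powerbimandatory entry.
(Get-LabelPolicy -Identity "Global Policy").Settings
```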
Requirements for using PowerShell
You need the EXO V2 module to run this command. For more information, see About the Exchange Online
PowerShell V2 module.
A connection to the Microsoft 365 compliance center is also required. For more information, see Connect to
Security & Compliance Center PowerShell using the EXO V2 module.
Documentation
Admin Guide: Custom configurations for the Azure Information Protection unified labeling client
Create and configure sensitivity labels and their policies
Set-LabelPolicy documentation

Considerations and limitations


Mandatory labeling in Power BI covers most common scenarios, but there may be some less common flows
that still allow a user to create or edit unlabeled content.
The mandatory label policy setting for Power BI is independent of the mandatory label policy setting for files
and email.
Mandatory labeling in Power BI is not supported for service principals and APIs. Service principals and APIs
are not subject to mandatory label policies.
Mandatory labeling in Power BI is not supported for external guest users (B2B users). B2B users are not
subject to mandatory label policies.

Next steps
Default label policy for Power BI
Sensitivity labels in Power BI
Data protection metrics report
Audit schema for sensitivity labels in Power BI
Sensitivity label downstream inheritance

When a sensitivity label is applied to a dataset or report in the Power BI service, it is possible to have the label
trickle down and be applied to content that is built from that dataset or report as well. For datasets, this means
other datasets, reports, and dashboards. For reports, this means dashboards. This capability is called
downstream inheritance.
Downstream inheritance is a critical link in Power BI’s end-to-end information protection solution. Together with
inheritance from data sources, inheritance upon creation of new content, inheritance upon export to file, and
other capabilities for applying sensitivity labels, downstream inheritance helps ensure that sensitive data
remains protected throughout its journey through Power BI, from data source to point of consumption.
Downstream inheritance is illustrated below using lineage view. When a label is applied to the dataset
“Customer profitability”, that label filters down and also gets applied to the dataset’s downstream content – the
reports that are built using that dataset, and, in this case, a dashboard that is built from visuals from one of
those reports.

IMPORTANT
Downstream inheritance never overwrites labels that were applied manually.
Downstream inheritance never overwrites a label with a less restrictive label.

Downstream inheritance modes


Downstream inheritance operates in one of two modes. The Power BI admin decides via a tenant setting which
mode will be operable on the tenant.
Downstream inheritance with user consent (default): In this mode, when users apply sensitivity labels
on datasets or reports, they can choose whether to apply that label downstream as well. They make their
choice using a checkbox that appears along with the sensitivity label selector.
Fully automated downstream inheritance (when enabled by Power BI admin): In this mode, downstream
inheritance happens automatically whenever a label is applied to a dataset or report. There is no checkbox
provided for user consent.
The two downstream inheritance modes are explained in more detail below.
Downstream inheritance with user consent
In user consent mode, when a user applies a sensitivity label to a dataset or report, they can choose whether to
apply the label to its downstream content as well. A checkbox appears along with the label selector:

By default, the checkbox is selected. This means that when the user applies a sensitivity label to a dataset or
report, the label will filter down to its downstream content. For each downstream item, the label will be applied
only if:
The user who applied or changed the label has Power BI edit permissions on the downstream item (that is,
the user is an admin, member, or contributor in the workspace where the downstream item is located).
The user who applied or changed the label is authorized to change the sensitivity label that already exists on
the downstream item.
Clearing the checkbox prevents the label from being inherited downstream.
Fully automated downstream inheritance
In fully automated mode, a label applied to either a dataset or report will automatically be propagated and
applied to the dataset or report’s downstream content, without regard to edit permissions on the downstream
item and the usage rights on the label.

Relaxed label change enforcement


In certain cases, downstream inheritance (like other automated labeling scenarios) can result in a situation
where no user has all the required permissions needed to change a label. For such situations, label change
enforcement relaxations are in place to guarantee access to affected items. See Relaxations to accommodate
automatic labeling scenarios for detail.

Enabling fully automated downstream inheritance


By default, downstream inheritance operates in user consent mode. To switch downstream inheritance in the
tenant to fully automated mode, the Power BI admin must enable the Automatically apply sensitivity labels
to downstream content tenant setting in the admin portal.
Considerations and limitations
Downstream inheritance is limited to 80 items. If the number of downstream items exceeds 80, no
downstream inheritance takes place. Only the item the label was actually applied to will receive the label.
Downstream inheritance never overwrites manually applied labels.
Downstream inheritance never overwrites labels on downstream content with less restrictive labels.
Sensitivity labels inherited from data sources are automatically propagated downstream only when fully
automated downstream inheritance mode is enabled.

Next steps
Sensitivity label overview
Label change enforcement
Sensitivity label inheritance from data sources
(preview)

Power BI datasets that connect to sensitivity-labeled data in supported data sources can inherit those labels, so
that the data remains classified and secure when brought into Power BI.
Currently supported data sources:
Excel
Azure Synapse Analytics (formerly SQL Data Warehouse)
Azure SQL Database
To be operative, sensitivity label inheritance from data sources must be enabled on the tenant.

Requirements
The data in the data source must be labeled with Microsoft Information Protection labels.
For Azure Synapse Analytics and Azure SQL Database, this is accomplished using a two-step Purview
flow:
1. Automatically apply sensitivity labels to your data.
2. Classify your Azure SQL data using Azure Purview labels.
The scope of the labels must be Files and emails and Azure Purview assets. See Extending
sensitivity labels to Azure Purview and Creating new sensitivity labels or modifying existing labels.
Sensitivity labels must be enabled in Power BI.
The Apply sensitivity labels from data sources to their data in Power BI (preview) tenant admin
setting must be enabled.
All conditions for applying a label must be met.

Inheritance behavior
In the Power BI service, when the dataset is connected to the data source, Power BI inherits the label and
applies it automatically to the dataset. Subsequently, inheritance occurs upon dataset refresh. In Power BI
Desktop, when you connect to the data source via Get data , Power BI inherits the label and automatically
applies it to the .pbix file (both the dataset and report). Subsequently inheritance occurs upon refresh.
If the data source has sensitivity labels of different degrees, the most restrictive is chosen for inheritance. In
order to be applied, that label (the most restrictive) must be published for the dataset owner.
Labels from data sources never overwrite manually applied labels.
Less restrictive labels from the data source never overwrite more restrictive labels on the dataset.
In Desktop, if the incoming label is more restrictive than the label that is currently applied in Desktop, a
banner will appear recommending that the user apply the more restrictive label.
Dataset refresh will succeed even if for some reason the label from the data source is not applied.
NOTE
No inheritance takes place if the dataset owner is not authorized to apply sensitivity labels in Power BI, or if the specific
label in question has not been published for the dataset owner.

Considerations and limitations


Inheritance from data sources is not supported for datasets located in classic workspaces. My Workspace and
V2 workspaces are supported.
Inheritance from data sources is supported only for datasets with enhanced metadata. See Using enhanced
dataset metadata for more information.
Inheritance from data sources is supported only for datasets using the Import data connectivity mode. Live
connection and DirectQuery connectivity is not supported.
Inheritance from data sources is not supported in connections via gateways or Azure Virtual Network (VNet).
This means that inheritance from an Excel file located on a local machine won't work, because this requires a
gateway.

Next steps
Enable sensitivity label inheritance from data sources
Sensitivity label overview
Sensitivity label change enforcement

Power BI restricts permission to change or remove Microsoft Information Protection sensitivity labels that have
file encryption settings to authorized users only.
Authorized users are:
The user who applied the sensitivity label.
Users who have been assigned at least one of the following usage rights to the label in the labeling admin
center (Microsoft 365 compliance center):
OWNER
EXPORT
EDIT and EDITRIGHTSDATA
Users who try to change a label and can’t should ask the person who applied the label to perform the
modification, or they can contact their Microsoft 365/Office security administrator and ask to be granted the
necessary usage rights.

Relaxations to accommodate automatic labeling scenarios


Power BI supports several capabilities, such as label inheritance from data sources and downstream inheritance,
which automatically apply sensitivity labels to content. These automated scenarios can result in situations where
no user has been set as the RMS label issuer for a label on an item. This means that there is no user who is
guaranteed to be able to change or remove the label.
In such cases, the usage rights requirements for changing or removing the label are relaxed - a user needs just
one of the following usage rights to be able to change or remove the label:
OWNER
EXPORT
EDIT
If no user has even these usage rights, nobody will be able to change or remove the label from the item, and
access to the item is potentially endangered.
To avoid this situation, the Power BI admin can enable the Allow workspace admins to override
automatically applied sensitivity labels (preview) tenant setting. This makes it possible for workspace
admins to override automatically applied sensitivity labels without regard to label change enforcement rules.
To enable this setting, go to: Admin portal > Tenant settings > Information protection.
Next steps
Sensitivity label overview
Custom help link for sensitivity labels

To help your organization's Power BI users understand what your sensitivity labels mean or how they should be
used, you can provide a Learn more link pointing to your organization’s custom web page that users will see
when they're applying or being prompted to apply sensitivity labels. The image below is an example that shows
how the Learn more link appears when applying a sensitivity label in Power BI Desktop.

Define a custom help link


You can define a custom help link for sensitivity labels in two ways:
Using the Security & Compliance Center PowerShell Set-LabelPolicy command. This creates a Power BI
dedicated help link.

Set-LabelPolicy -Identity "<policy name>" -AdvancedSettings @{powerbicustomurl="https://<your link>"}

If a dedicated custom help link for Power BI isn't set, Power BI uses the custom help link defined for Office
365 apps. This link is defined in the Microsoft 365 compliance center. See What label policies can do.

If a user has more than one label policy, the custom URL is always taken from the policy with the highest
priority, so be sure to configure the custom URL on that policy.
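For instance, a filled-in command might look like the following; the policy name and help page URL are placeholders:

```powershell
# Hypothetical policy name and help page URL -- substitute your own values.
Set-LabelPolicy -Identity "Global Policy" -AdvancedSettings @{
    powerbicustomurl = "https://contoso.com/sensitivity-label-guidance"
}
```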

Next steps
Sensitivity label overview
Sensitivity label support for paginated reports

Sensitivity labels can be applied to paginated reports hosted in the Power BI service. After uploading a
paginated report to the service, you apply the label to the report just as you would to a regular Power BI report.
When you export data from a labeled paginated report to a supported file type (Excel, PDF, PPTX, and Word), the
sensitivity label on the paginated report is applied to the exported file.
Sensitivity labels on paginated reports are included in protection metrics (as part of the Report count), and can
be audited (label-change audits only) and modified by public APIs, just like labels on regular Power BI reports.

Considerations and limitations


Downstream inheritance is not supported. The label of an upstream model will not propagate down to its
downstream paginated reports. Likewise, the label of a paginated report will not propagate down to the
report’s downstream content.
Mandatory labeling will not apply to paginated reports.

Paginated Report visuals


A Paginated Report visual is a special type of visual that you can include in a regular Power BI report. It renders a
selected paginated report inside the regular Power BI report.
When a supported file type is exported from a Paginated Report visual that is included in a Power BI report, and
the original paginated report being rendered in the visual has a sensitivity label, the exported file inherits the
sensitivity label of the original paginated report. If the original paginated report does not have a label, the
exported file inherits the label of the Power BI report, if it has one.

Next steps
Apply sensitivity labels in Power BI
Sensitivity label overview
Set or remove sensitivity labels using Power BI REST
admin APIs

To meet compliance requirements, organizations are often required to classify and label all sensitive data in
Power BI. This task can be challenging for tenants that have large volumes of data in Power BI. To make the task
easier and more effective, the Power BI setLabels and removeLabels admin REST APIs can be used to set and
remove sensitivity labels on large numbers of Power BI artifacts programmatically.
The APIs set or remove labels from artifacts by artifact ID.

Requirements and considerations


The user must have administrator rights (such as Office 365 Global Administrator or Power BI Service
Administrator) to call these APIs.
The admin user (and the delegated user, if provided) must have sufficient usage rights to set or remove
labels.
To set a sensitivity label using the setLabels API, the admin user (or the delegated user, if provided) must have
the label included in their label policy.
The APIs allow a maximum of 25 requests per hour. Each request can update up to 2000 artifacts.
Required scope: Tenant.ReadWrite.All

API documentation
setLabels
removeLabels

Sample
The following sample demonstrates how to set and remove sensitivity labels on Power BI dashboards. Similar
code can be used to set and remove labels on datasets, reports, and dataflows.
const string adminBearerToken = "<adminBearerToken>";
const string ApiUrl = "<api url>";
var persistedDashboardId = Guid.Parse("<dashboard object Id>");
var credentials = new TokenCredentials(adminBearerToken, "Bearer");

var artifacts = new InformationProtectionArtifactsChangeLabel();
artifacts.Dashboards = new List<ArtifactId> { new ArtifactId(id: persistedDashboardId) };

using (PowerBIClient client = new PowerBIClient(credentials))
{
    client.BaseUri = new Uri(ApiUrl);

    // Remove labels
    var removeResponse = client.InformationProtection.RemoveLabelsAsAdmin(artifacts);

    foreach (var updateLabelResult in removeResponse.Dashboards)
    {
        if (updateLabelResult.Status == Status.Succeeded)
        {
            Console.WriteLine($"label has been deleted from artifact {updateLabelResult.Id}");
        }
        else
        {
            Console.WriteLine($"label has not been deleted from artifact {updateLabelResult.Id}");
        }
    }

    // Set labels
    var setLabelRequest = new InformationProtectionChangeLabelDetails();
    setLabelRequest.Artifacts = artifacts;
    setLabelRequest.LabelId = Guid.Parse("<label Id>");

    // assignmentMethod (optional)
    setLabelRequest.AssignmentMethod = AssignmentMethod.Priviledged;

    // delegatedUser (optional)
    var delegatedUser = new DelegatedUser();
    delegatedUser.EmailAddress = "<delegated user email address>";
    setLabelRequest.DelegatedUser = delegatedUser;

    var setResponse = client.InformationProtection.SetLabelsAsAdmin(setLabelRequest);

    foreach (var updateLabelResult in setResponse.Dashboards)
    {
        if (updateLabelResult.Status == Status.Succeeded)
        {
            Console.WriteLine($"label has been set on artifact {updateLabelResult.Id}");
        }
        else
        {
            Console.WriteLine($"label has not been set on artifact {updateLabelResult.Id}");
        }
    }
}
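If you prefer scripting over compiled code, the same setLabels endpoint can be called from PowerShell. The following is a minimal sketch, assuming the MicrosoftPowerBIMgmt module is installed and the caller is a Power BI administrator; the GUID placeholders must be replaced with real IDs, and the endpoint path follows the setLabels API reference.

```powershell
# Sign in with a Power BI administrator account.
Connect-PowerBIServiceAccount

# Placeholder GUIDs -- substitute a real dashboard object ID and label ID.
$body = @{
    artifacts = @{ dashboards = @(@{ id = "<dashboard object Id>" }) }
    labelId   = "<label Id>"
} | ConvertTo-Json -Depth 5

# Relative URLs resolve against https://api.powerbi.com/v1.0/myorg/
Invoke-PowerBIRestMethod -Url "admin/informationprotection/setLabels" -Method Post -Body $body
```

Remember that the APIs allow a maximum of 25 requests per hour, so batch artifact IDs (up to 2000 per request) rather than issuing one call per artifact.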

Next steps
setLabels API
removeLabels API
Sensitivity label overview
Audit schema for sensitivity labels in Power BI

Whenever a sensitivity label on a dataset, report, dashboard, or dataflow is applied, changed, or removed, that
activity is recorded in the audit log for Power BI. You can track these activities in the unified audit log or in the
Power BI activity log. See Track user activities in Power BI for detail.
This article documents the information in the Power BI auditing schema that is specific to sensitivity labels. It
covers the following activity keys:
SensitivityLabelApplied
SensitivityLabelChanged
SensitivityLabelRemoved
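To see how these activity keys surface in practice, you can pull label events from the Power BI activity log. The sketch below assumes the MicrosoftPowerBIMgmt module and a Power BI administrator account; note that each call to Get-PowerBIActivityEvent must stay within a single UTC day.

```powershell
# Sign in with a Power BI administrator account.
Connect-PowerBIServiceAccount

# Retrieve all label-application events for one UTC day and parse the JSON result.
Get-PowerBIActivityEvent -StartDateTime '2022-05-23T00:00:00' `
                         -EndDateTime '2022-05-23T23:59:59' `
                         -ActivityType 'SensitivityLabelApplied' |
    ConvertFrom-Json
```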

SensitivityLabelEventData

| Field | Type | Must appear in the schema | Description |
| --- | --- | --- | --- |
| SensitivityLabelId | Edm.Guid | | The guid of the new label. This field is only present when the activity key is SensitivityLabelApplied or SensitivityLabelChanged. |
| OldSensitivityLabelId | Edm.Guid | | The guid of the label on the artifact before the action. This field is only present when the activity key is SensitivityLabelChanged or SensitivityLabelRemoved. |
| ActionSource | Edm.Enum | Yes | This field indicates whether the label change is the result of an automatic or manual process. |
| ActionSourceDetail | Edm.Enum | Yes | This field gives more detail about what caused the action to take place. |
| LabelEventType | Edm.Enum | Yes | This field indicates whether the action resulted in a more restrictive label, a less restrictive label, or a label of the same degree of sensitivity. |

ArtifactType
This field indicates the type of artifact the label change took place on.

| Value | Field |
| --- | --- |
| 1 | Dashboard |
| 2 | Report |
| 3 | Dataset |
| 7 | Dataflow |

ActionSource
This field indicates whether the label change is the result of an automatic or manual process.

| Value | Meaning | Description |
| --- | --- | --- |
| 2 | Auto | An automatic process performed the action. |
| 3 | Manual | A manual process performed the action. |

ActionSourceDetail
This field gives more detail about what caused the action to take place.

| Value | Meaning | Description |
| --- | --- | --- |
| 0 | None | There are no additional details. |
| 3 | AutoByInheritance | The label change took place as a result of an automatically triggered inheritance process. |
| 4 | AutoByDeploymentPipeline | The label change took place automatically as a result of the deployment pipeline process. |
| 5 | PublicAPI | The label change action was performed by one of the following Power BI public admin REST APIs: setLabels, removeLabels. |

LabelEventType
This field indicates whether the action resulted in a more restrictive label, a less restrictive label, or a label of the same degree of sensitivity.

| Value | Meaning | Description |
| --- | --- | --- |
| 1 | LabelUpgraded | A more restrictive label was applied to the artifact. |
| 2 | LabelDowngraded | A less restrictive label was applied to the artifact. |
| 3 | LabelRemoved | The label was removed from the artifact. |
| 4 | LabelChangedSameOrder | The label was replaced by another label with the same level of sensitivity. |

Next steps
Sensitivity labels in Power BI
Track user activities in Power BI
Data protection metrics report

What is the data protection metrics report?


The data protection metrics report is a dedicated report that Power BI administrators can use to monitor and
track sensitivity label usage and adoption in their tenant.

The report features:


A 100% stacked column chart that shows daily sensitivity label usage in the tenant for the last 7, 30, or 90
days. This chart makes it easy to track the relative usage of the different label types over time.
Doughnut charts that show the current state of sensitivity label usage in the tenant for dashboards, reports,
datasets, and dataflows.
A link to the Defender for Cloud Apps portal where Power BI alerts, users-at-risk, activity logs, and other
information is available. For more information, see Using Microsoft Defender for Cloud Apps controls in
Power BI.
The report refreshes every 24 hours.

Viewing the data protection metrics report


You must have a Power BI administrator role to open and view the report. To view the report, go to Settings >
Admin portal, and choose Protection metrics.
The first time you open the data protection metrics report, it may take a few seconds to load. A report and a
dataset entitled Data protection metrics (automatically generated) will be created in your private
environment under "My workspace". We do not recommend viewing it here - this is not the full-featured report.
Rather, view the report in the Admin portal as described above.
Caution

Do not change the report or dataset in any way, since new versions of the report are rolled out from time to
time and any changes you've made to the original report will be overwritten if you update to the new version.

Report updates
Improved versions of the data protection metrics report are released periodically. When you open the report, if a
new version is available you will be asked if you want to open the new version. If you say "yes", the new version
of the report will load and overwrite the old version. Any changes you might have made to the old report
and/or dataset will be lost. You can choose not to open the new version, but in that case you will not benefit
from the new version's improvements.

Notes and considerations


In order for the data protection metrics report to be successfully generated, information protection must
be enabled on your tenant and sensitivity labels should have been applied.
In order to access Defender for Cloud Apps information, your organization must have the appropriate
Defender for Cloud Apps license.
If you decide to share information from the data protection metrics report with a user who is not a Power
BI administrator, be aware that this report contains sensitive information about your organization.
The data protection metrics report is a special kind of report and does not show up in "Shared with me",
"Recents", and "Favorites" lists.
The data protection metrics report is not available to external users (Azure Active Directory B2B guest
users).

Next steps
Sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Understanding the Power BI service administrator role
Enable sensitivity labels in Power BI
Data loss prevention policies for Power BI (preview)

To help organizations detect and protect their sensitive data, Power BI supports Microsoft Purview data loss
prevention (DLP) policies. When a DLP policy for Power BI detects a sensitive dataset, a policy tip can be attached
to the dataset in the Power BI service that explains the nature of the sensitive content, and an alert can be
registered in the data loss prevention Alerts tab in the Microsoft Purview compliance portal for monitoring and
management by administrators. In addition, email alerts can be sent to administrators and specified users.

Considerations and limitations


DLP policies for Power BI are defined in the Microsoft Purview compliance portal.
DLP policies apply to workspaces. Only workspaces hosted in Premium Gen2 capacities are supported.
DLP dataset evaluation workloads impact capacity. Metering for DLP evaluation workloads is not yet
supported.
Both classic and new experience workspaces are supported, provided that they are hosted in Premium Gen2
capacities.
DLP policy templates are not yet supported for Power BI DLP policies. When creating a DLP policy for Power
BI, choose the "custom policy" option.
Power BI DLP policy rules currently support sensitivity labels and sensitive info types as conditions.
DLP policies for Power BI are not supported for sample datasets, streaming datasets, or datasets that connect
to their data source via DirectQuery or live connection.
DLP policies for Power BI are not supported in sovereign clouds.
Currently, DLP policies for Power BI don't support scanning for sensitive info types in data stored in the
Southeast Asia region. See How to find the default region for your organization to learn how to find your
organization's default data region.

Licensing and permissions


SKU/subscriptions licensing
Before you get started with DLP for Power BI, you should confirm your Microsoft 365 subscription. The admin
account that sets up the DLP rules must be assigned one of the following licenses:
Microsoft 365 E5
Microsoft 365 E5 Compliance
Microsoft 365 E5 Information Protection & Governance
Permissions
Data from DLP for Power BI can be viewed in Activity explorer. There are four roles that grant permission to
activity explorer; the account you use for accessing the data must be a member of any one of them.
Global administrator
Compliance administrator
Security administrator
Compliance data administrator

How do DLP policies for Power BI work


You define a DLP policy in the data loss prevention section of the compliance portal. In the policy, you specify the
sensitivity labels and/or sensitive info types you want to detect. You also specify the actions that will happen
when the policy detects a dataset that contains sensitive data of the kind you specified. DLP policies for Power BI
support two actions:
User notification via policy tips.
Alerts. Alerts can be sent by email to administrators and users. Additionally, administrators can monitor and
manage alerts on the Alerts tab in the compliance center.
When a dataset is evaluated by DLP policies, if it matches the conditions specified in a DLP policy, the actions
specified in the policy occur. A dataset is evaluated against DLP policies whenever one of the following events
occurs:
Publish
Republish
On-demand refresh
Scheduled refresh

NOTE
DLP evaluation of the dataset does not occur if either of the following is true:
The initiator of the event is a service principal.
The dataset owner is either a service principal or a B2B user.

What happens when a dataset is flagged by a Power BI DLP policy


When a DLP policy detects an issue with a dataset:
If "user notification" is enabled in the policy, the dataset will be marked in the Power BI service with a
shield that indicates that a DLP policy has detected an issue with the dataset.

Open the dataset details page to see a policy tip that explains the policy violation and how the detected
type of sensitive information should be handled.

NOTE
If you hide the policy tip, it doesn’t get deleted. It will appear the next time you visit the page.
If alerts are enabled in the policy, an alert will be recorded on the data loss prevention Alerts tab in the
compliance center, and (if configured) an email will be sent to administrators and/or specified users. The
following image shows the Alerts tab in the data loss prevention section of the compliance center.

Configure a DLP policy for Power BI


1. Log into the Microsoft Purview compliance portal.
2. Choose the Data loss prevention solution in the navigation pane, select the Policies tab, and choose
Create policy.

3. Choose the Custom category and then the Custom policy template.
NOTE
No other categories or templates are currently supported.

When done, click Next.


4. Name the policy and provide a meaningful description.

When done, click Next.


5. Enable Power BI as a location for the DLP policy. Disable all other locations. Currently, DLP policies for
Power BI must specify Power BI as the sole location.
By default the policy will apply to all workspaces. Alternatively, you can specify particular workspaces to
include in the policy as well as workspaces to exclude from the policy.

NOTE
DLP actions are supported only for workspaces hosted in Premium Gen2 capacities.

If you select Choose workspaces or Exclude workspaces, a dialog will allow you to create a list of
included (or excluded) workspaces. You must specify workspaces by workspace object ID. Click the info
icon for information about how to find workspace object IDs.
After enabling Power BI as a DLP location for the policy and choosing which workspaces the policy will
apply to, click Next.
6. The Define policy settings page appears. Choose Create or customize advanced DLP rules to
begin defining your policy.

When done, click Next.


7. On the Customize advanced DLP rules page, you can either start creating a new rule or choose an
existing rule to edit. Click Create rule.

8. The Create rule page appears. On the create rule page, provide a name and description for the rule, and
then configure the other sections, which are described following the image below.

Conditions
In the condition section, you define the conditions under which the policy will apply to a dataset. Conditions are
created in groups. Groups make it possible to construct complex conditions.
1. Open the conditions section, choose Add condition and then Content contains.

This opens the first group (named Default – you can change this).
2. Choose Add, and then choose either Sensitive info types or Sensitivity labels.
NOTE
Currently, DLP policies for Power BI don't support scanning for sensitive info types in data stored in the Southeast
Asia region. See How to find the default region for your organization to learn how to find your organization's
default data region.

When you choose either Sensitive info types or Sensitivity labels , you will be able to choose the
particular sensitivity labels or sensitive info types you want to detect from a list that will appear in a
sidebar.

When you select a sensitive info type as a condition, you then need to specify how many instances of that
type must be detected in order for the condition to be considered as met. You can specify from 1 to 500
instances. If you want to detect 500 or more unique instances, enter a range of '500' to 'Any'. You also can
select the degree of confidence in the matching algorithm. Click the info button next to the confidence
level to see the definition of each level.

You can add additional sensitivity labels or sensitive info types to the group. To the right of the group
name, you can specify Any of these or All of these. This determines whether a match on any or all of
the items in the group is required for the condition to hold. If you specified more than one sensitivity
label, you will only be able to choose Any of these, since datasets can’t have more than one label
applied.
The image below shows a group (Default) that contains two sensitivity label conditions. The logic Any of
these means that a match on any one of the sensitivity labels in the group constitutes “true” for that
group.

You can create more than one group, and you can control the logic between the groups with AND or OR
logic.
The image below shows a rule containing two groups, joined by OR logic.

Exceptions
If the dataset has a sensitivity label or sensitive info type that matches any of the defined exceptions, the rule
won’t be applied to the dataset.
Exceptions are configured in the same way as conditions, described above.

Actions
Protection actions are currently unavailable for Power BI DLP policies.
User notifications
The user notifications section is where you configure your policy tip. Turn on the toggle, select the Notify users
in Office 365 service with a policy tip and Policy tips checkboxes, and write your policy tip in the text box.

User overrides
User overrides are currently unavailable for Power BI DLP policies.

Incident reports
Assign a severity level that will be shown in alerts generated from this policy. Enable (default) or disable email
notification to admins, specify users or groups for email notification, and configure the details about when
notification will occur.

Additional options
Monitor and manage policy alerts
Log in to the Microsoft Purview compliance portal and navigate to Data loss prevention > Alerts .

Click on an alert to start drilling down to its details and to see management options.

Next steps
Learn about data loss prevention
Get started with Data loss prevention policies for Power BI
Sensitivity labels in Power BI
Audit schema for sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI

Using Defender for Cloud Apps with Power BI, you can help protect your Power BI reports, data, and services
from unintended leaks or breaches. With Defender for Cloud Apps, you can create conditional access policies for
your organization's data, using real-time session controls in Azure Active Directory (Azure AD), that help to
ensure your Power BI analytics are secure. Once these policies have been set, administrators can monitor user
access and activity, perform real-time risk analysis, and set label-specific controls.

You can configure Defender for Cloud Apps for all sorts of apps and services, not only Power BI. You'll need to
configure Defender for Cloud Apps to work with Power BI to benefit from Defender for Cloud Apps protections
for your Power BI data and analytics. For more information about Defender for Cloud Apps, including an
overview of how it works, the dashboard, and app risk scores, see the Defender for Cloud Apps documentation.

Defender for Cloud Apps licensing


To use Defender for Cloud Apps with Power BI, you must use and configure relevant Microsoft security services,
some of which are set outside Power BI. In order to have Defender for Cloud Apps in your tenant, you must have
one of the following licenses:
Microsoft Defender for Cloud Apps: Provides Defender for Cloud Apps capabilities for all supported apps,
part of the EMS E5 and Microsoft 365 E5 suites.
Office 365 Cloud App Security: Provides Defender for Cloud Apps capabilities only for Office 365, part of the
Office 365 E5 suite.

Configure real-time controls for Power BI with Defender for Cloud Apps
NOTE
An Azure Active Directory Premium P1 license is required in order to benefit from Defender for Cloud Apps real-time
controls.

The sections below describe the steps for configuring real-time controls for Power BI with Defender for Cloud
Apps.
Set session policies in Azure AD (required)
The steps necessary to set session controls are completed in the Azure AD and Defender for Cloud Apps portals.
In the Azure AD portal, you create a conditional access policy for Power BI, and route sessions used in Power BI
through the Defender for Cloud Apps service.
Defender for Cloud Apps operates using a reverse-proxy architecture, and is integrated with Azure AD
conditional access to monitor Power BI user activity in real-time. The following steps are provided here to help
you understand the process, and detailed step-by-step instructions are provided in the linked content in each of
the following steps. You can also read this Defender for Cloud Apps article that describes the process in whole.
1. Create an Azure AD conditional access test policy
2. Sign into each app using a user scoped to the policy
3. Verify the apps are configured to use access and session controls
4. Enable the app for use in your organization
5. Test the deployment
The process for setting session policies is described in detail in the Session policies article.
Set anomaly detection policies to monitor Power BI activities (recommended)
You can define anomaly Power BI detection policies that can be independently scoped, so that they apply to only
the users and groups you want to include and exclude in the policy. Learn more.
Defender for Cloud Apps also has two dedicated, built-in detections for Power BI. See the section later on in this
document for detail.
Use Microsoft Information Protection sensitivity labels (recommended)
Sensitivity labels enable you to classify and help protect sensitive content, so that people in your organization
can collaborate with partners outside your organization, yet still be careful and aware of sensitive content and
data.
You can read the article on sensitivity labels in Power BI, which goes into detail about the process of using
sensitivity labels for Power BI. See below for an example of a Power BI policy based on sensitivity labels.

Custom policies to alert on suspicious user activity in Power BI


Defender for Cloud Apps activity policies enable administrators to define their own custom rules to help detect
user behavior that deviates from the norm, and even possibly act upon it automatically, if it seems too
dangerous. For example:
Massive sensitivity label removal. For example: alert me when sensitivity labels are removed by a
single user from 20 different reports in a time window shorter than 5 minutes.
Encrypting sensitivity label downgrade. For example: alert me when a report that had a 'Highly
confidential' sensitivity label is now classified as 'Public'.
NOTE
The unique identifiers (IDs) of Power BI artifacts and sensitivity labels can be found using Power BI REST APIs. See Get
datasets or Get reports.
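
If you want to script this lookup, a minimal sketch using the MicrosoftPowerBIMgmt PowerShell module is shown below. The cmdlet names are the module's documented ones; the -Scope Organization parameter requires Power BI admin rights, and you should adjust any filtering to your tenant.

#sign in to the Power BI service
Connect-PowerBIServiceAccount

#list the IDs and names of datasets and reports across the tenant
Get-PowerBIDataset -Scope Organization | Select-Object Id, Name
Get-PowerBIReport -Scope Organization | Select-Object Id, Name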

Custom activity policies are configured in the Defender for Cloud Apps portal. Learn more.

Built-in Defender for Cloud Apps detections for Power BI


Defender for Cloud Apps detections enable administrators to monitor specific activities of a monitored app. For
Power BI, there are currently two dedicated, built-in Defender for Cloud Apps detections:
Suspicious share – detects when a user shares a sensitive report with an unfamiliar (external to the
organization) email. A sensitive report is a report whose sensitivity label is set to INTERNAL-ONLY or
higher.
Mass share of reports – detects when a user shares a massive number of reports in a single session.
Settings for these detections are configured in the Defender for Cloud Apps portal. Learn more.

Power BI admin role in Defender for Cloud Apps


A new role is created for Power BI admins when using Defender for Cloud Apps with Power BI. When you log in
as a Power BI admin to the Defender for Cloud Apps portal, you have limited access to data, alerts, users at risk,
activity logs, and other information relevant to Power BI.

Considerations and limitations


Using Defender for Cloud Apps with Power BI is designed to help secure your organization's content and data,
with detections that monitor user sessions and their activities. When using Defender for Cloud Apps with Power
BI, there are a few considerations and limitations you should keep in mind:
Defender for Cloud Apps can only operate on Excel, PowerPoint, and PDF files.
If you want to use sensitivity labels capabilities in your session policies for Power BI, you need to have an
Azure Information Protection Premium P1 or Premium P2 license. Microsoft Azure Information Protection
can be purchased either standalone or through one of the Microsoft licensing suites. See Azure Information
Protection pricing for detail. In addition, sensitivity labels must have been applied on your Power BI assets.
Session control is available for any browser on any major platform on any operating system. We recommend
using Internet Explorer 11, Microsoft Edge (latest), Google Chrome (latest), Mozilla Firefox (latest), or Apple
Safari (latest). Power BI public API calls and other non-browser-based sessions aren't supported as part of
Defender for Cloud Apps session control. See more detail.
If you experience login difficulties, such as having to log in more than once, it could be related to the way
some apps handle authentication. See Slow login in the Defender for Cloud Apps documentation for more
information and remediation steps.
Caution

In the session policy, in the "Action" part, the "protect" capability works only if no label exists on the item. If a
label already exists, the "protect" action won't apply; you can't override an existing label that has already been
applied to an item in Power BI.

Example
The following example shows you how to create a new session policy using Defender for Cloud Apps with
Power BI.
First, create a new session policy. In the Defender for Cloud Apps portal, select Policies on the navigation
pane. Then on the policies page, click Create policy and choose Session policy .

In the window that appears, create the session policy. The numbered steps describe settings for the following
image.
1. In the Policy template drop-down, choose No template.
2. For the Policy name box, provide a relevant name for your session policy.
3. For Session control type , select Control file download (with inspection) (for DLP).
For the Activity source section, choose relevant blocking policies. We recommend blocking unmanaged
and non-compliant devices. Choose to block downloads when the session is in Power BI.
When you scroll down you see more options. The following image shows those options, with additional
examples.
4. Create a filter on Sensitivity label and choose Highly confidential or whatever best fits your
organization.
5. Change the Inspection method to none.
6. Choose the Block option that fits your needs.
7. Make sure you create an alert for such an action.
8. Finally, select the Create button to create the session policy.
Next steps
This article described how Defender for Cloud Apps can provide data and content protections for Power BI. You
might also be interested in the following articles, which describe Data Protection for Power BI and supporting
content for the Azure services that enable it.
Overview of sensitivity labels in Power BI
Enable sensitivity labels in Power BI
How to apply sensitivity labels in Power BI
You might also be interested in the following Azure and security articles:
Protect apps with Microsoft Defender for Cloud Apps Conditional Access App Control
Deploy Conditional Access App Control for featured apps
Session policies
Overview of sensitivity labels
Data protection metrics report
Power BI Security

For a detailed explanation of Power BI security, read the Power BI Security whitepaper.
The Power BI service is built on Azure , which is Microsoft’s cloud computing infrastructure and platform. The
Power BI service architecture is based on two clusters – the Web Front End (WFE ) cluster and the Back-End
cluster. The WFE cluster manages the initial connection and authentication to the Power BI service, and once
authenticated, the Back-End handles all subsequent user interactions. Power BI uses Azure Active Directory
(AAD) to store and manage user identities, and manages the storage of data and metadata using Azure BLOB
and Azure SQL Database, respectively.

Power BI Architecture
Each Power BI deployment consists of two clusters – a Web Front End (WFE ) cluster, and a Back-End cluster.
The WFE cluster manages the initial connection and authentication process for Power BI, using AAD to
authenticate clients and provide tokens for subsequent client connections to the Power BI service. Power BI also
uses the Azure Traffic Manager (ATM) to direct user traffic to the nearest datacenter, determined by the DNS
record of the client attempting to connect, for the authentication process and to download static content and
files. Power BI uses the Azure Content Delivery Network (CDN) to efficiently distribute the necessary static
content and files to users based on geographical locale.

The Back-End cluster is how authenticated clients interact with the Power BI service. The Back-End cluster
manages visualizations, user dashboards, datasets, reports, data storage, data connections, data refresh, and
other aspects of interacting with the Power BI service. The Gateway Role acts as a gateway between user
requests and the Power BI service. Users do not interact directly with any roles other than the Gateway Role .
Azure API Management will eventually handle the Gateway Role .

IMPORTANT
It is imperative to note that only Azure API Management (APIM) and Gateway (GW) roles are accessible through the
public Internet. They provide authentication, authorization, DDoS protection, Throttling, Load Balancing, Routing, and
other capabilities.

Data Storage Security


Power BI uses two primary repositories for storing and managing data: data that is uploaded from users is
typically sent to Azure Blob Storage , and all metadata as well as artifacts for the system itself are stored in
Azure SQL Database .
The dotted line in the Back-End cluster image, above, clarifies the boundary between the only two components
that are accessible by users (left of the dotted line), and roles that are only accessible by the system. When an
authenticated user connects to the Power BI Service, the connection and any request by the client is accepted
and managed by the Gateway Role (eventually to be handled by Azure API Management ), which then
interacts on the user’s behalf with the rest of the Power BI Service. For example, when a client attempts to view a
dashboard, the Gateway Role accepts that request then separately sends a request to the Presentation Role
to retrieve the data needed by the browser to render the dashboard.

User Authentication
Power BI uses Azure Active Directory (AAD) to authenticate users who sign in to the Power BI service, and in
turn, uses the Power BI login credentials whenever a user attempts to access resources that require
authentication. Users sign in to the Power BI service using the email address used to establish their Power BI
account; Power BI uses that login email as the effective username, which is passed to resources whenever a user
attempts to connect to data. The effective username is then mapped to a User Principal Name (UPN) and
resolved to the associated Windows domain account, against which authentication is applied.
For organizations that used work emails for Power BI login (such as user@contoso.com), the effective
username to UPN mapping is straightforward. For organizations that did not use work emails for Power BI login
(such as user@outlook.com), mapping between AAD and on-premises credentials will require
directory synchronization to work properly.
Platform security for Power BI also includes multi-tenant environment security, networking security, and the
ability to add additional AAD-based security measures.

Data and Service Security


For more information, please visit the Microsoft Trust Center.
As described earlier in this article, a user’s Power BI login is used by on-premises Active Directory servers to
map to a UPN for credentials. However, it’s important to note that users are responsible for the data they share:
if a user connects to data sources using their credentials, then shares a report (or dashboard, or dataset) based
on that data, users with whom the dashboard is shared are not authenticated against the original data source,
and will be granted access to the report.
An exception is connections to SQL Server Analysis Services using the On-premises data gateway ;
dashboards are cached in Power BI, but access to underlying reports or datasets initiates authentication for the
user attempting to access the report (or dataset), and access will only be granted if the user has sufficient
credentials to access the data. For more information, see On-premises data gateway deep dive.

Enforcing TLS version usage


Network and IT administrators can enforce the requirement to use current TLS (Transport Layer Security) for any
secured communication on their network. Windows provides support for TLS versions over the Microsoft
Schannel Provider, as described in the TLS Schannel SSP article.
This enforcement can be done by administratively setting registry keys. Enforcement is described in the
Managing SSL Protocols in AD FS article.
Power BI Desktop respects the registry key settings described in those articles, and only creates connections
using the version of TLS allowed by those registry settings, when present.
For more information about setting these registry keys, see the TLS Registry Settings article.
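
As a hedged illustration only, the following PowerShell sketch disables the TLS 1.0 and TLS 1.1 client protocols through the Schannel registry keys described in those articles. Run it elevated and test carefully, because these keys affect every Schannel client on the machine, not just Power BI Desktop.

#disable TLS 1.0 and TLS 1.1 for client connections via the Schannel registry keys
$protocols = 'TLS 1.0', 'TLS 1.1'
foreach ($protocol in $protocols) {
    $key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$protocol\Client"
    #create the key if it doesn't already exist
    New-Item -Path $key -Force | Out-Null
    #Enabled = 0 and DisabledByDefault = 1 turn the protocol off for clients
    New-ItemProperty -Path $key -Name 'Enabled' -Value 0 -PropertyType 'DWord' -Force | Out-Null
    New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 1 -PropertyType 'DWord' -Force | Out-Null
}

A reboot is typically required before Schannel picks up the new settings.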
Row-level security (RLS) with Power BI

Row-level security (RLS) with Power BI can be used to restrict data access for given users. Filters restrict data
access at the row level, and you can define filters within roles. In the Power BI service, members of a workspace
have access to datasets in the workspace. RLS doesn't restrict this data access.
You can configure RLS for data models imported into Power BI with Power BI Desktop. You can also configure
RLS on datasets that are using DirectQuery, such as SQL Server. For Analysis Services or Azure Analysis Services
live connections, you configure row-level security in the model, not in Power BI Desktop. The security option
will not show up for live connection datasets.

Define roles and rules in Power BI Desktop


You can define roles and rules within Power BI Desktop. When you publish to Power BI, it also publishes the role
definitions.
To define security roles, follow these steps.
1. Import data into your Power BI Desktop report, or configure a DirectQuery connection.

NOTE
You can't define roles within Power BI Desktop for Analysis Services live connections. You need to do that within
the Analysis Services model.

2. From the Modeling tab, select Manage Roles .

3. From the Manage roles window, select Create .

4. Under Roles , provide a name for the role.

NOTE
You can't define a role with a comma, for example London,ParisRole .

5. Under Tables , select the table to which you want to apply a DAX rule.
6. In the Table filter DAX expression box, enter the DAX expressions. This expression returns a value of
true or false. For example: [Entity ID] = “Value” .

NOTE
You can use username() within this expression. Be aware that username() has the format of DOMAIN\username
within Power BI Desktop. Within the Power BI service and Power BI Report Server, it's in the format of the user's
User Principal Name (UPN). Alternatively, you can use userprincipalname(), which always returns the user in the
format of their user principal name, such as username@contoso.com.

7. After you've created the DAX expression, select the checkmark above the expression box to validate the
expression.

NOTE
In this expression box, you use commas to separate DAX function arguments even if you're using a locale that
normally uses semicolon separators (e.g. French or German).

8. Select Save .
You can't assign users to a role within Power BI Desktop. You assign them in the Power BI service. You can enable
dynamic security within Power BI Desktop by making use of the username() or userprincipalname() DAX
functions and having the proper relationships configured.
By default, row-level security filtering uses single-directional filters, whether the relationships are set to single
direction or bi-directional. You can manually enable bi-directional cross-filtering with row-level security by
selecting the relationship and checking the Apply security filter in both directions checkbox. Select this
option when you've also implemented dynamic row-level security at the server level, where row-level security is
based on username or login ID.
For more information, see Bidirectional cross-filtering using DirectQuery in Power BI Desktop and the Securing
the Tabular BI Semantic Model technical article.
Validate the roles within Power BI Desktop
After you've created your roles, test the results of the roles within Power BI Desktop.
1. From the Modeling tab, select View as .

The View as roles window appears, where you see the roles you've created.

2. Select a role you created, and then select OK to apply that role.
The report renders the data relevant for that role.
3. You can also select Other user and supply a given user.

It's best to supply the User Principal Name (UPN) as that's what the Power BI service and Power BI Report
Server use.
Within Power BI Desktop, Other user displays different results only if you're using dynamic security
based on your DAX expressions.
4. Select OK .
The report renders based on what that user can see.

NOTE
The View as role feature doesn't work for DirectQuery models with Single Sign-On (SSO) enabled.
Now that you're done validating the roles in Power BI Desktop, go ahead and publish your report to the Power BI
service.

Manage security on your model


To manage security on your data model, open the workspace where you saved your report in the Power BI
service and do the following steps:
1. In the Power BI service, select the More options menu for a dataset. This menu appears when you hover
on a dataset name, whether you select it from the navigation menu or the workspace page.

2. Select Security .

Security will take you to the Row-Level Security page where you add members to a role you created in Power BI
Desktop. Only the owners of the dataset will see Security . If the dataset is in a Group, only administrators of the
group will see the security option.
You can only create or modify roles within Power BI Desktop.

Working with members


Add members
In the Power BI service, you can add a member to the role by typing in the email address or name of the user or
security group. You can't add Groups created in Power BI. You can add members external to your organization.
You can use the following groups to set up row level security.
Distribution Group
Mail-enabled Group
Security Group
Note, however, that Office 365 groups are not supported and cannot be added to any roles.

You can also see how many members are part of the role by the number in parentheses next to the role name,
or next to Members.

Remove members
You can remove members by selecting the X next to their name.

Validating the role within the Power BI service


You can validate that the role you defined is working correctly in the Power BI service by testing the role.
1. Select More options (...) next to the role.
2. Select Test data as role .

You'll see reports that are available for this role. Dashboards aren't shown in this view. In the page header, the
role being applied is shown.
Test other roles, or a combination of roles, by selecting Now viewing as .

You can choose to view data as a specific person or you can select a combination of available roles to validate
they're working.
To return to normal viewing, select Back to Row-Level Security .

NOTE
The Test as role feature doesn't work for DirectQuery models with Single Sign-On (SSO) enabled.
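
You can also check what a role member would see from a script. The sketch below uses the Execute Queries REST API, whose request body accepts an impersonatedUserName for RLS-enabled datasets; the dataset ID, table name, and UPN are placeholders you must replace, and you need sufficient permissions on the dataset for the call to succeed.

#run a DAX query against the dataset while impersonating a user covered by an RLS role
Connect-PowerBIServiceAccount
$datasetId = "<your-dataset-id>"
$body = @{
    queries = @( @{ query = "EVALUATE 'Sales'" } )
    impersonatedUserName = "user@contoso.com"
} | ConvertTo-Json -Depth 4
Invoke-PowerBIRestMethod -Url "datasets/$datasetId/executeQueries" -Method Post -Body $body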

Using the username() or userprincipalname() DAX function


You can take advantage of the DAX functions username() or userprincipalname() within your dataset. You can
use them within expressions in Power BI Desktop. When you publish your model, it will be used within the
Power BI service.
Within Power BI Desktop, username() will return a user in the format of DOMAIN\User and userprincipalname()
will return a user in the format of user@contoso.com.
Within the Power BI service, username() and userprincipalname() will both return the user's User Principal Name
(UPN). This looks similar to an email address.

Using RLS with workspaces in Power BI


If you publish your Power BI Desktop report to a new workspace experience in the Power BI service, the RLS
roles are applied to members who are assigned to the Viewer role in the workspace. Even if Viewers are given
Build permissions to the dataset, RLS still applies. For example, if Viewers with Build permissions use Analyze in
Excel, their view of the data will be protected by RLS. Workspace members assigned Admin , Member , or
Contributor have edit permission for the dataset and, therefore, RLS doesn’t apply to them. If you want RLS to
apply to people in a workspace, you can only assign them the Viewer role. Read more about roles in the new
workspaces.
WARNING
If you have configured a classic workspace so that members have edit permissions, the RLS roles won't be applied to
them. Users can see all of the data. Read more about classic workspaces.

Considerations and limitations


The current limitations for row-level security on cloud models are as follows:
If you previously defined roles and rules in the Power BI service, you must re-create them in Power BI
Desktop.
You can define RLS only on the datasets created with Power BI Desktop. If you want to enable RLS for
datasets created with Excel, you must convert your files into Power BI Desktop (PBIX) files first. Learn more.
Service principals cannot be added to an RLS role. Accordingly, RLS won’t be applied for apps using a service
principal as the final effective identity.
Only Import and DirectQuery connections are supported. Live connections to Analysis Services are handled
in the on-premises model.
The Test as role/View as role feature doesn't work for DirectQuery models with Single Sign-On (SSO)
enabled.

Issue: Republishing when RLS is configured


There's a known issue where you'll get an error message if you try to publish a previously published report from
Power BI Desktop. The scenario is as follows:
1. Anna has a dataset that is published to the Power BI service and has configured RLS.
2. Anna updates the report in Power BI Desktop and republishes.
3. Anna receives an error.
Workaround
Republish the Power BI Desktop file from the Power BI service until this issue is resolved. You can do that by
selecting Get Data > Files .

Issue: Multiple roles and limited relationships


You get an error message if you belong to multiple RLS roles and at least one of the roles relies on a limited
relationship.
Consider the following data model:
In this simplified data model, which combines data from two Power BI Datasets, two relationships exist:
A regular relationship between Sales and Product.
A limited relationship between Sales and Customer. This relationship is limited because Customer is in a
different source group. That's not the only reason a relationship can be limited. For more information, see
limited relationships.
Also, two RLS roles have been defined in this data model:
RLS_Product, which is defined on Product and restricts access to product information.
RLS_Customer, which is defined on Customer and restricts access to customer information.
User A belongs to both RLS_Product and RLS_Customer. When User A accesses the data in the report, both
RLS_Product and RLS_Customer get evaluated. To evaluate RLS_Customer, data needs to be shared across the
limited relationship between Sales and Customer. This sharing might unintentionally disclose potential
information about Products. Therefore, Power BI doesn't allow this sharing to happen and instead generates the
following error:
"The user belongs to multiple roles 'RLS_Product, RLS_Customer' that have security filters, which isn't supported
when one of the roles has filters affecting table 'Sales' with SecurityFilteringBehavior=Both relationships."
Workaround
Adopt one of the following workarounds to avoid this error:
If feasible, don't put any user into multiple RLS roles. In the scenario above, we can create another RLS role,
e.g. RLS_Product_Customer, which combines the RLS filters set in both RLS_Product and RLS_Customer. Next,
we can assign User A to just RLS_Product_Customer, and remove the user from both RLS_Product and
RLS_Customer.
Define RLS roles only on one source group. If it's necessary for a user to belong to multiple RLS roles, make
sure all RLS filters set in the roles are defined on tables from a single source group. In the scenario above, if
we could define RLS_Customer on the source group that contains Sales and Product, we could avoid the
error.
NOTE
We're aware that in many situations Power BI is too restrictive and the information can safely be shared between the
sources involved. While we're working on releasing a solution for this situation, consider adopting one of the workarounds
above.

FAQ
Question: What if I had previously created roles and rules for a dataset in the Power BI service? Will they still
work if I do nothing?
Answer : No, visuals will not render properly. You will have to re-create the roles and rules within Power BI
Desktop and then publish to the Power BI service.
Question: Can I create these roles for Analysis Services data sources?
Answer : You can if you imported the data into Power BI Desktop. If you are using a live connection, you will not
be able to configure RLS within the Power BI service. This is defined within the Analysis Services model on-
premises.
Question: Can I use RLS to limit the columns or measures accessible by my users?
Answer : No, if a user has access to a particular row of data, they can see all the columns of data for that row.
Question: Does RLS let me hide detailed data but give access to data summarized in visuals?
Answer : No, you secure individual rows of data but users can always see either the details or the summarized
data.
Question: My data source already has security roles defined (for example SQL Server roles or SAP BW roles).
What is the relationship between these and RLS?
Answer : The answer depends on whether you're importing data or using DirectQuery. If you're importing data
into your Power BI dataset, the security roles in your data source aren't used. In this case, you should define RLS
to enforce security rules for users who connect in Power BI. If you're using DirectQuery, the security roles in your
data source are used. When a user opens a report, Power BI sends a query to the underlying data source, which
applies security rules to the data based on the user's credentials.

Next steps
Restrict data access with row-level security (RLS) for Power BI Desktop
Row-level security (RLS) guidance in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Power BI Desktop privacy levels

In Power BI Desktop , privacy levels specify an isolation level that defines the degree that one data source will
be isolated from other data sources. Although a restrictive isolation level blocks information from being
exchanged between data sources, it may reduce functionality and impact performance.
The Privacy Levels setting, found in File > Options and settings > Options and then Current File >
Privacy , determines whether Power BI Desktop uses your Privacy Level settings while combining data. This
dialog includes a link to Power BI Desktop documentation about Privacy Levels (this article).

Configure a privacy level


With privacy level settings, you can specify an isolation level that defines the degree that one data source must
be isolated from other data sources.

Private data source
A Private data source contains sensitive or confidential information, and the visibility of the data source may be restricted to authorized users. Data from a Private data source will not be folded to other sources (not even to other Private sources).
Example data sources: Facebook data, a text file containing stock awards, or a workbook containing employee review information.

Organizational data source
An Organizational data source limits the visibility of a data source to a trusted group of people. Data from an Organizational data source will not be folded to Public data sources, but may be folded to other Organizational data sources, as well as to Private data sources.
Example data sources: A Microsoft Word document on an intranet SharePoint site with permissions enabled for a trusted group.

Public data source
A Public data source gives everyone visibility to the data contained in the data source. Only files, internet data sources, or workbook data can be marked Public. Data from a Public data source may be freely folded to other sources.
Example data sources: Free data from the Microsoft Azure Marketplace, data from a Wikipedia page, or a local file containing data copied from a public web page.

Configure privacy level settings


The Privacy settings dialog for each data source is found in File > Options and settings > Data source
settings .
To configure a data source privacy level, select the data source, then select Edit Permissions . The Edit
Permissions dialog appears, from which you can select the appropriate privacy level from the drop-down
menu at the bottom of the dialog, as shown in the following image.
Caution

You should configure a data source containing highly sensitive or confidential data as Private .

Configure Privacy Levels


Privacy Levels is set to Combine data according to your Privacy Level settings for each source by
default, which means that Privacy Levels are enforced.

Combine data according to your Privacy Level settings for each source (on, the default setting): Privacy level settings are used to determine the level of isolation between data sources when combining data.

Ignore the Privacy levels and potentially improve performance (off): Privacy levels are not considered when combining data; however, performance and functionality of the data may increase.

Security Note: Selecting Ignore the Privacy levels and potentially improve performance in the
Privacy Levels dialog could expose sensitive or confidential data to an unauthorized person. Do not turn
this setting to off unless you are confident that the data source does not contain sensitive or confidential
data.

Caution

The Ignore the Privacy levels and potentially improve performance setting does not work in the Power BI
service. As such, Power BI Desktop reports with this setting enabled, which are then published to the Power BI
service, do not reflect this behavior when used in the service. However, the privacy levels are available on the
personal gateway.
Configure Privacy Levels
In Power BI Desktop or in Query Editor, select File > Options and settings > Options and then Current File
> Privacy .
a. When Combine data according to your Privacy Level settings for each source is selected, data will be
combined according to your Privacy Levels setting. Merging data across Privacy isolation zones will result in
some data buffering.
b. When Ignore the Privacy levels and potentially improve performance is selected, the data will be
combined ignoring your Privacy Levels which could reveal sensitive or confidential data to an unauthorized user.
The setting may improve performance and functionality.

Security Note: Selecting Ignore the Privacy levels and potentially improve performance may
improve performance; however, Power BI Desktop cannot ensure the privacy of data merged into the Power
BI Desktop file.
Using service tags with Power BI

You can use Azure service tags with Power BI to enable an Azure SQL Managed Instance (MI) to allow
incoming connections from the Power BI service. In Azure, a service tag is a defined group of IP addresses that
you can configure to be automatically managed, as a group, to minimize the complexity of updates or changes
to network security rules.
The following configurations are necessary to successfully enable the endpoints for use in the Power BI service:
1. Enable a public endpoint in the SQL Managed Instance
2. Create a Network Security Group rule to allow inbound traffic
3. Enter the credentials in Power BI
The following sections look at each of these steps in turn.

Enable a public endpoint


The first part of the process is to enable a Public Endpoint in the SQL Managed Instance. Take the following
steps:
1. Navigate to your SQL Managed Instance in the Azure portal.
2. On the Networking blade, slide the Public endpoint (data) toggle to Enable . The following image shows the
screen in the Azure portal.
3. Set the Minimum TLS version to 1.2
4. Select Save to save your settings.

Create a network security group rule


The next collection of steps requires that you create a Network Security Group (NSG) rule to allow inbound
traffic for the Power BI service. This can't currently be done in the Azure portal; it must be accomplished
using either the Azure Command Line Interface (CLI) or PowerShell.

NOTE
The priority of the rule you set must be higher than the 4096 deny_all_inbound rule, which means the priority value must
be lower than 4096. In the following example, a priority value of 400 is used.

The following CLI script is provided as a reference example. See az network nsg rule for more information. You
may need to change multiple values for the example to work properly in your situation. A PowerShell script is
provided afterward.
#login to azure
az login

#set subscription that contains SQL MI instance


$subname = "mysubscriptionname"
az account set --subscription $subname

#set NSG rule for inbound PowerBI traffic

#update $RG to your resource group name


$rg = 'myresourcegroup'
#update $nsg to your Network Security Group name
$nsg = 'nsgresourcename'
# Name the NSG rule
$rule = 'allow_inbound_PowerBI'
#set the priority - this must be higher priority (lower number) than the deny_all_inbound rule
$priority = 400
#specify the service tag to use
$servicetag = 'PowerBI'
#specify the public endpoint port defined in step 1
$port = 3342
#set the rule to inbound direction
$direction = 'Inbound'
#set the access type to "Allow"
$access = 'Allow'
#Set the protocol as TCP
$protocol = 'tcp'
#Provide a description for the rule
$desc = 'Allow PowerBI Access to SQL MI for Direct Query or Data Refresh.'

#create the NSG rule


az network nsg rule create -g $rg \
--nsg-name $nsg -n $rule --priority $priority \
--source-address-prefixes $servicetag --destination-address-prefixes '*' \
--destination-port-ranges $port --direction $direction --access $access \
--protocol $protocol --description $desc

The following PowerShell script is provided as another reference to create the Network Security Group (NSG)
rule. See Add a network security group rule in PowerShell for more information. You may need to change
multiple values for the example to work properly in your situation.
#login to azure
Login-AzAccount

#get your subscription ID


Get-AzSubscription

####
#Script to create Network Security Group Rule
###

#enter your subscription ID


Set-AzContext -SubscriptionId "yoursubscriptionID"

#Provide the resource group for your Network Security Group


$RGname="yourRG"
#Enter the port for the SQL Managed Instance Public Endpoint
$port=3342
#name the NSG rule
$rulename="allow_inbound_PowerBI"
#provide the name of the Network Security Group to add the rule to
$nsgname="yourNSG"
#set direction to inbound to allow PowerBI to access SQL MI
$direction ="Inbound"
#set the priority of the rule. Priority must be higher (ie. lower number) than the deny_all_inbound (4096)
$priority=400
#set the service tag for the source to "PowerBI"
$serviceTag = "PowerBI"

# Get the NSG resource


$nsg = Get-AzNetworkSecurityGroup -Name $nsgname -ResourceGroupName $RGname

# Add the inbound security rule.


$nsg | Add-AzNetworkSecurityRuleConfig -Name $rulename -Description "Allow app port" -Access Allow `
-Protocol * -Direction $direction -Priority $priority -SourceAddressPrefix $serviceTag -SourcePortRange
* `
-DestinationAddressPrefix * -DestinationPortRange $port

# Update the NSG.


$nsg | Set-AzNetworkSecurityGroup
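
To confirm the rule was created, you can read it back with the same Az module (a quick check, reusing the variables above):

#verify the inbound rule now exists on the NSG
$nsg = Get-AzNetworkSecurityGroup -Name $nsgname -ResourceGroupName $RGname
Get-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name $rulename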

Enter the credentials in the Power BI service


The last part of the process is entering the credentials in the Power BI service. Log into Power BI and navigate to
the workspace containing the dataset(s) that are using SQL Managed Instance. In the following example, that
workspace is called ASAdataset and the dataset is called Contoso SQL MI Demo. Take the following steps to
complete the process:
1. Navigate to Dataset settings .
2. Expand the Data source credentials section, as shown in the following image.
3. Select the edit credentials link. In the dialog that appears, enter valid credentials.
Save your settings and exit. Your SQL Managed Instance is now configured to allow incoming connections from
the Power BI service.

Next steps
What is Power BI Premium?
Enable a Public Endpoint in the SQL Managed Instance
az network nsg rule
Add a network security group rule in PowerShell
Private endpoints for accessing Power BI

You can use the Azure Private Link feature to provide secure access for data traffic in Power BI. Azure networking
provides the Azure Private Link feature. In this configuration, Azure Private Link and Azure Networking private
endpoints are used to send data traffic privately using Microsoft's backbone network infrastructure. The data
travels the Microsoft private network backbone instead of going across the Internet.
Private endpoints make sure that Power BI users go through the Microsoft private network backbone when they
access resources in the Power BI service.
See What is Azure Private Link to learn more about Azure Private Link.

Understanding private endpoints


Private endpoints guarantee that traffic going into your organization’s Power BI artifacts (such as reports, or
workspaces) always follows your organization's configured private link network path. User traffic to your Power
BI artifacts must come from the established private link. You can configure Power BI to deny all requests that
don’t come from the configured network path.
Private endpoints do not guarantee that traffic from Power BI to your external data sources, whether in the cloud
or on premises, is secured. Configure firewall rules and virtual networks to further secure your data sources.
Power BI and private endpoint integration
Azure Private Endpoint for Power BI is a network interface that connects you privately and securely to the Power
BI service, powered by Azure Private Link.
Private Endpoints integration enables Platform as a Service (PaaS) services to be deployed and accessed
privately from customer's virtual and on-premises networks, while the service is still running outside of the
customer’s network. Private Endpoints is a single-directional technology that lets clients initiate connections to a
given service, but it doesn't allow the service to initiate a connection into the customer network. This Private
Endpoint integration pattern provides management isolation, since the service can operate independently of
customer network policy configuration. For multi-tenant services, this Private Endpoint model provides link
identifiers to prevent access to other customers' resources hosted within the same service. When using Private
Endpoints, only a limited set of other PaaS service resources can be accessed from services using the
integration.
The Power BI service implements Private Endpoints, and not Service Endpoints.
Using private endpoints with Power BI provide the following benefits:
1. Private endpoints ensure that traffic will flow over the Azure backbone to a private endpoint for Azure
cloud-based resources.
2. Network traffic isolation from non-Azure based infrastructure, such as on-premises access, would require
customers to have ExpressRoute or a Virtual Private Network (VPN) configured.

Using secure private endpoints to access Power BI


In Power BI, you can configure and use an endpoint that enables your organization to access Power BI privately.
To configure private endpoints, you must be a Power BI administrator and have permissions in Azure to create
and configure resources such as Virtual Machines (VMs) and Virtual Networks (V-Net).
The steps that enable you to securely access Power BI from private endpoints are:
1. Enable private endpoints for Power BI
2. Create a Power BI resource in the Azure portal
3. Create a virtual network
4. Create a virtual machine (VM)
5. Create a private endpoint
6. Connect to a VM using Remote Desktop (RDP)
7. Access Power BI privately from the virtual machine
8. Disable public access for Power BI
The following sections provide additional information for each step.

Enable private endpoints for Power BI


To get started, sign in to the Power BI service as an administrator, then perform the following steps:
1. From the page header, select Settings > Admin portal .
2. Select Tenant settings and scroll to Advanced Networking . Toggle the radio button to turn on Azure
Private Link .

It takes about 15 minutes to configure a private link for your tenant, which includes configuring a separate
FQDN for the tenant in order to communicate privately with Power BI services.
After this process is finished, you can move on to the next step.

Create a Power BI resource in the Azure portal


Next, sign in to the Azure portal and create a Power BI resource, using an Azure template. Replace the
parameters in the ARM template example, shown in the following table, to create a Power BI resource.

<resource-name>: myPowerBIResource
<tenant-object-id>: Find your tenant ID in the Azure portal

Create the ARM template

{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"resources": [
{
"type":"Microsoft.PowerBI/privateLinkServicesForPowerBI",
"apiVersion": "2020-06-01",
"name" : "<resource-name>",
"location": "global",
"properties" :
{
"tenantId": "<tenant-object-id>"
}
}
]
}

In the dialog that appears, select the checkbox to agree to the terms and conditions, and then select Purchase .
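
If you'd rather deploy the template from a script than from the portal, a minimal Az PowerShell sketch follows; it assumes the JSON above has been saved locally as powerbi-privatelink.json (an illustrative file name) and that the target resource group already exists.

#deploy the Power BI private link ARM template to an existing resource group
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroup" `
    -TemplateFile ".\powerbi-privatelink.json"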
Create a virtual network
The next step is to create a virtual network and subnet. Replace the sample parameters in the table below with
your own to create a virtual network and subnet.

<resource-group-name>: myResourceGroup
<virtual-network-name>: myVirtualNetwork
<region-name>: Central US
<IPv4-address-space>: 10.5.0.0/16
<subnet-name>: mySubnet
<subnet-address-range>: 10.5.0.0/24

1. On the upper-left side of the screen, select Create a resource > Networking > Virtual network or
search for Virtual network in the search box.
2. In Create virtual network , enter or select the following information in the Basics tab:

Project details
Subscription: Select your Azure subscription
Resource Group: Select Create new , enter <resource-group-name>, then select OK , or select an existing resource group

Instance details
Name: Enter <virtual-network-name>
Region: Select <region-name>

The following image shows the Basics tab.


3. Next, select the IP Addresses tab or select the Next: IP Addresses button at the bottom of the form. In
the IP Addresses tab, enter the following information:

IPv4 address space: Enter <IPv4-address-space>

4. In Subnet name select the word default, and in Edit subnet , enter the following information:

Subnet name: Enter <subnet-name>
Subnet address range: Enter <subnet-address-range>

5. Then select Save , and then select the Review + create tab, or select the Review + create button.
6. Then, select Create .
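
The same network can also be created from a script. A hedged Az PowerShell sketch using the parameter values from the table above:

#define the subnet, then create the virtual network that contains it
$subnet = New-AzVirtualNetworkSubnetConfig -Name "mySubnet" -AddressPrefix "10.5.0.0/24"
New-AzVirtualNetwork -Name "myVirtualNetwork" -ResourceGroupName "myResourceGroup" `
    -Location "centralus" -AddressPrefix "10.5.0.0/16" -Subnet $subnet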
Once you've completed these steps, you can create a virtual machine (VM), as described in the next section.

Create a virtual machine (VM)


The next step is to create the virtual machine (VM) in the virtual network and subnet you created in the previous section.
1. On the upper-left side of the screen in your Azure portal, select Create a resource > Compute >
Virtual Machine .
2. In Create a virtual machine - Basics , enter or select the following information:

Project details
Subscription: Select your Azure subscription
Resource Group: Select myResourceGroup , which you created in the previous section

Instance details
Name: Enter myVm
Region: Select Central US
Availability options: Leave the default No infrastructure redundancy required
Image: Select Windows 10 Pro
Size: Leave the default Standard DS1 v2

Administrator account
Username: Enter a username of your choosing
Password: Enter a password of your choosing. The password must be at least 12 characters long and meet the defined complexity requirements
Confirm Password: Reenter the password

Inbound port rules
Public inbound ports: Leave the default None

Save money
Already have a Windows license?: Leave the default No

3. Then select Next: Disks


4. In Create a virtual machine - Disks , leave the defaults and select Next: Networking .
5. In Create a virtual machine - Networking , select the following information:

Virtual network: Leave the default myVirtualNetwork
Address space: Leave the default 10.5.0.0/24
Subnet: Leave the default mySubnet (10.5.0.0/24)
Public IP: Leave the default (new) myVm-ip
Public inbound ports: Select Allow selected ports
Select inbound ports: Select RDP

6. Select Review + create . You're taken to the Review + create page where Azure validates your
configuration.
7. When you see the Validation passed message, select Create .

Create a private endpoint


The next step is to create a private endpoint for Power BI.
1. On the upper-left side of the Azure portal screen, select Create a resource > Networking > Private Link
Center (Preview) .
2. In Private Link Center - Overview , on the option to Build a private connection to a service , select
Create private endpoint .
3. In Create a private endpoint (Preview) - Basics enter or select the following information:

Project details
Subscription: Select your Azure subscription
Resource Group: Select myResourceGroup . You created this in the previous section

Instance details
Name: Enter myPrivateEndpoint. If this name is taken, create a unique name
Region: Select Central US

The following image shows the Create a private endpoint - Basics window.

4. Once that information is complete, select Next: Resource and in the Create a private endpoint -
Resource page, enter or select the following information:

Connection method: Select Connect to an Azure resource in my directory
Subscription: Select your subscription
Resource type: Select Microsoft.PowerBI/privateLinkServicesForPowerBI
Resource: myPowerBIResource
Target sub-resource: Tenant

The following image shows the Create a private endpoint - Resource window.

5. Once that information is properly input, select Next: Configuration , and in Create a private
endpoint (Preview) - Configuration , enter or select the following information:

Networking
Virtual network: Select myVirtualNetwork
Subnet: Select mySubnet

Private DNS integration
Integrate with private DNS zone: Select Yes
Private DNS Zone: Select
(New)privatelink.analysis.windows.net
(New)privatelink.pbidedicated.windows.net
(New)privatelink.tip1.powerquery.microsoft.com

The following image shows the Create a private endpoint - Configuration window.
Next select Review + create , which displays the Review + create page where Azure validates your
configuration. When you see the Validation passed message, select Create .

Connect to a VM using Remote Desktop (RDP)


Once you've created your virtual machine, called myVM , connect to it from the Internet using the following
steps:
1. In the portal's search bar, enter myVm.
2. Select the Connect button. Once you select the Connect button, Connect to virtual machine opens.
3. Select Download RDP File . Azure creates a Remote Desktop Protocol (.rdp) file and downloads it to your
computer.
4. Open the .rdp file.
5. If prompted, select Connect .
6. Enter the username and password you specified when creating the VM in the previous step.
7. Select OK .
8. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select
Yes or Continue .

Access Power BI privately from the VM


The next step is to access Power BI privately, from the virtual machine you created in the previous step, using the
following steps:
1. In the Remote Desktop of myVM, open PowerShell.
2. Enter nslookup tenant-object-id-without-hyphens-api.privatelink.analysis.windows.net.
3. You'll receive a response similar to the message shown below:
Server: UnKnown
Address: 168.63.129.16

Non-authoritative answer:
Name: 52d40f65ad6d48c3906f1ccf598612d4-api.privatelink.analysis.windows.net
Address: 10.5.0.4

4. Open the browser and go to app.powerbi.com to access Power BI privately.

Disable public access for Power BI


Lastly, you need to disable public access for Power BI.
Sign in to the Power BI service as an administrator, and navigate to the Admin portal . Select Tenant settings
and scroll to the Advanced networking section. Enable the toggle button in the Block Public Internet
Access section, as shown in the following image. It takes approximately 15 minutes for the system to disable
your organization's access to Power BI from the public Internet.
And that's it - after following these steps, Power BI for your organization is only accessible from private
endpoints, and not accessible from the public Internet.

Considerations and limitations


There are a few considerations to keep in mind while working with private endpoints in Power BI:
Any uses of external images or themes aren't available when using a private link environment.
If Internet access is disabled, and if the dataset or dataflow is connecting to a Power BI dataset or dataflow
as a data source, the connection will fail.
Usage metrics do not work when private endpoints are enabled.
The Power BI Premium Capacity Metrics app doesn’t work when private links are enabled.
Publish to Web is not supported when you enable Azure Private Link in Power BI.
Exporting a report as PDF or PowerPoint is not supported when you enable Azure Private Link in
Power BI.
Email subscriptions are not supported when you enable Block Public Internet Access in Power BI.
Microsoft Information Protection (MIP) doesn't currently support Private Links. This means that in Power
BI Desktop running in an isolated network, the Sensitivity button will be grayed out, label information will
not appear, and decryption of .pbix files will fail.
To enable these capabilities in Power BI Desktop, admins can configure Service Tags for the underlying
services that support MIP, EOP, and AIP. Make sure you understand the implications of using Service Tags
in a Private Links isolated network.
Gateways enabled for Power BI private endpoints will not work properly with non-Power BI scenarios. A
potential workaround is to turn off Private Links, configure the gateway, and then re-enable Private Links.
When private links are enabled for Power BI, an on-premises data gateway (personal mode) will fail to
register.

Next steps
Administering Power BI in your Organization
Understanding the Power BI admin role
Auditing Power BI in your organization
How to find your Azure Active Directory tenant ID
The following video shows how to connect a mobile device to Power BI, using private endpoints:

NOTE
This video might use earlier versions of Power BI Desktop or the Power BI service.

More questions? Try asking the Power BI Community


Configure mobile apps with Microsoft Intune

Microsoft Intune enables organizations to manage devices and applications. The Power BI mobile applications
for iOS and Android integrate with Intune. This integration enables you to manage the application on your
devices, and to control security. Through configuration policies, you can control items like requiring an access
pin, how data is handled by the application, and even encrypting application data when the app is not in use.
The Microsoft Power BI mobile app allows you to get access to your important business information. You can
view and interact with your dashboards and reports for all your organization's managed device and app
business data. For more information about supported Intune apps, see Microsoft Intune protected apps.

General mobile device management configuration


This article assumes that Intune is configured properly and you have devices enrolled with Intune. The article is
not meant as a full configuration guide for Microsoft Intune. For more information on Intune, see What is
Intune?.
Microsoft Intune can co-exist with Mobile Device Management (MDM) within Microsoft 365. If you're using
MDM, the device will show as enrolled with MDM, but is available to manage in Intune.
Before end users can use the Power BI app on their devices, an Intune admin must add the app to Intune and
also assign the app to end users.

NOTE
After you configure Intune, background data refresh is turned off for the Power BI mobile app on your iOS or Android
device. Power BI refreshes the data from the Power BI service on the web when you enter the app.

Step 1: Add the Power BI app to Intune


To add the Power BI app to Intune, use the steps provided in the following topics:
Add iOS store apps to Microsoft Intune
Add Android store apps to Microsoft Intune

Step 2: Assign the app to your end users


After you've added the Power BI app to Microsoft Intune, you can assign the app to users and devices. It's
important to note that you can assign an app to a device whether or not the device is managed by Intune.
To assign the Power BI app to users and devices, use the steps provided in Assign apps to groups with Microsoft
Intune.

Step 3: Create and assign app protection policies


App protection policies (APP) are rules that ensure an organization's data remains safe or contained in a
managed app. A policy can be a rule that is enforced when the user attempts to access or move "corporate" data,
or a set of actions that are prohibited or monitored when the user is inside the app. A managed app is an app
that has app protection policies applied to it, and can be managed by Intune.
Mobile Application Management (MAM) app protection policies allow you to manage and protect your
organization's data within an application. With MAM without enrollment (MAM-WE), a work or school-related
app that contains sensitive data can be managed on almost any device, including personal devices in bring-
your-own-device (BYOD) scenarios. For more information, see App protection policies overview.
To create and assign an app protection policy for the Power BI app, use the steps provided in How to create and
assign app protection policies.
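
The linked articles walk through the Intune portal experience. As an illustration only, a basic policy could also be created through Microsoft Graph. The sketch below is a minimal, assumption-laden example using the Microsoft Graph PowerShell SDK; the policy name and settings are placeholders, not values prescribed by this article.

# A minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed
# (Install-Module Microsoft.Graph). The policy name and settings are placeholders.
Connect-MgGraph -Scopes "DeviceManagementApps.ReadWrite.All"

# A basic iOS app protection policy: require a PIN, block backup and Save As.
$policy = @{
    displayName       = "Contoso Power BI iOS policy"   # hypothetical name
    pinRequired       = $true
    dataBackupBlocked = $true
    saveAsBlocked     = $true
}

# POST the policy to the iosManagedAppProtections collection in Microsoft Graph.
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/deviceApp Management/iosManagedAppProtections".Replace(" ","") `
    -Body ($policy | ConvertTo-Json)

After the policy exists, it still needs to be assigned to the Power BI app and your user groups, which the linked article covers.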

Step 4: Use the application on a device


Managed apps are apps that your company's support team can set up to help protect company data that you can
access in that app. When you access company data in a managed app on your device, you may notice that the
app works a little differently than what you expect. For example, you might not be able to copy and paste
protected company data, or you might not be able to save that data to certain locations.
To understand how your end users can use the Power BI app on their device, review the steps provided in the
following articles:
Use managed apps on your iOS device
Use managed apps on your Android device

Next steps
How to create and assign app protection policies
Power BI apps for mobile devices
More questions? Try asking the Power BI Community
Enable service principal authentication for read-only
admin APIs
5/23/2022 • 2 minutes to read

Service principal is an authentication method that can be used to let an Azure Active Directory (Azure AD)
application access Power BI service content and APIs. When you create an Azure AD app, a service principal
object is created. The service principal object, also known simply as the service principal, allows Azure AD to
authenticate your app. Once authenticated, the app can access Azure AD tenant resources.
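
Although the steps below use the Azure portal, the app and its service principal can also be created from PowerShell. A minimal sketch, assuming the Az PowerShell module (Az.Resources) and an existing Connect-AzAccount session; the display name is a placeholder:

# A minimal sketch, assuming the Az PowerShell module and Connect-AzAccount.
$app = New-AzADApplication -DisplayName "PowerBI-ReadOnly-AdminApi"   # placeholder name
$sp  = New-AzADServicePrincipal -ApplicationId $app.AppId

# This is the App-Id to note for the later steps.
$app.AppId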

Method
To enable service principal authentication for Power BI read-only APIs, follow these steps:
1. Create an Azure AD app. You can skip this step if you already have an Azure AD app you want to use. Take
note of the App-Id for later steps.

NOTE
Make sure the app you use doesn't have any Power BI admin roles set on it in the Azure portal.

2. Create a new Security Group in Azure Active Directory. Read more about how to create a basic group
and add members using Azure Active Directory. You can skip this step if you already have a security
group you would like to use. Make sure to select Security as the Group type.

3. Add your App-Id as a member of the security group you created. To do so:
a. Navigate to Azure portal > Azure Active Directory > Groups, and choose the security group
you created in Step 2.
b. Select Add Members. Note: Make sure the app you use doesn't have any Power BI admin roles
set on it in the Azure portal. To check the assigned roles:
Sign in to the Azure portal as a Global Administrator, an Application Administrator, or a Cloud
Application Administrator.
Select Azure Active Directory, then Enterprise applications.
Select the application you want to grant access to Power BI.
Select Permissions.

IMPORTANT
Make sure there are no Power BI admin-consent-required permissions set on this application. For more
information, see Managing consent to applications and evaluating consent requests.

4. Enable the Power BI service admin settings:
a. Log into the Power BI admin portal. You need to be a Power BI admin to see the tenant settings
page.
b. Under Admin API settings, you'll see Allow service principals to use read-only Power BI
admin APIs. Set the toggle to Enabled, and then select the Specific security groups radio
button and add the security group you created in Step 2 in the text field that appears below it.

5. Start using the read-only admin APIs. See the list of supported APIs below.

IMPORTANT
Once you enable the service principal to be used with Power BI, the application's Azure AD permissions no longer
have any effect. The application's permissions are then managed through the Power BI admin portal.
Supported APIs
Service principal currently supports the following APIs:
GetGroupsAsAdmin with $expand for dashboards, datasets, reports, and dataflows
GetGroupUsersAsAdmin
GetDashboardsAsAdmin with $expand tiles
GetDashboardUsersAsAdmin
GetAppsAsAdmin
GetAppUsersAsAdmin
GetDatasourcesAsAdmin
GetDatasetToDataflowsLinksAsAdmin
GetDataflowDatasourcesAsAdmin
GetDataflowUpstreamDataflowsAsAdmin
GetCapacitiesAsAdmin
GetCapacityUsersAsAdmin
GetActivityLog
GetModifiedWorkspaces
WorkspaceGetInfo
WorkspaceScanStatus
WorkspaceScanResult
GetDashboardsInGroupAsAdmin
GetTilesAsAdmin
ExportDataflowAsAdmin
GetDataflowsAsAdmin
GetDataflowUsersAsAdmin
GetDataflowsInGroupAsAdmin
GetDatasetsAsAdmin
GetDatasetUsersAsAdmin
GetDatasetsInGroupAsAdmin
Get Power BI Encryption Keys
Get Refreshable For Capacity
Get Refreshables
Get Refreshables For Capacity
GetImportsAsAdmin
GetReportsAsAdmin
GetReportUsersAsAdmin
GetReportsInGroupAsAdmin
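
For example, once the setup above is complete, the service principal can sign in and call one of these APIs. A minimal sketch, assuming the MicrosoftPowerBIMgmt PowerShell module is installed; the tenant ID, App-Id, and secret are placeholders for your own values.

$tenantId = "00000000-0000-0000-0000-000000000000"   # your Azure AD tenant ID
$appId    = "11111111-1111-1111-1111-111111111111"   # the App-Id from Step 1
$secret   = Read-Host -Prompt "App secret" -AsSecureString
$credential = New-Object System.Management.Automation.PSCredential($appId, $secret)

# Authenticate as the service principal rather than as a user.
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $credential -TenantId $tenantId

# Call a read-only admin API, for example GetGroupsAsAdmin; $top caps the number
# of workspaces returned. Single quotes keep PowerShell from expanding $top.
Invoke-PowerBIRestMethod -Url 'admin/groups?$top=100' -Method Get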

Considerations and limitations


You can't sign in to the Power BI portal using a service principal.
Power BI admin rights are required to enable service principal in the Admin API settings in the Power BI
admin portal.
PowerShell cmdlets, REST APIs, and .NET Client
library for Power BI administration
5/23/2022 • 2 minutes to read

Power BI enables administrators to script common tasks with PowerShell cmdlets. It also exposes REST APIs and
provides a .NET client library for developing administrative solutions. This topic shows a list of cmdlets and the
corresponding APIs and REST API endpoints. For more information, see:
PowerShell download and documentation
REST API documentation
.NET Client library download

The cmdlets below should be called with -Scope Organization to operate against the tenant for administration.
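For example (a minimal sketch, assuming the MicrosoftPowerBIMgmt module is installed and you sign in with a Power BI administrator account):

# Without -Scope Organization, these cmdlets return only the caller's own items.
Connect-PowerBIServiceAccount

# Full workspace list for the tenant (Groups_GetGroupsAsAdmin in the table below).
Get-PowerBIWorkspace -Scope Organization -All

# Full dataset list for the tenant (Datasets_GetDatasetsAsAdmin in the table below).
Get-PowerBIDataset -Scope Organization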

CMDLET NAME | ALIASES | API | REST API ENDPOINT | DESCRIPTION
Get-PowerBIDatasource | N/A | Datasets_GetDataSourcesAsAdmin | /v1.0/myorg/admin/datasets/{datasetkey}/datasources | Gets the data sources for a given dataset.
Get-PowerBIDataset | N/A | Datasets_GetDatasetsAsAdmin | /v1.0/myorg/admin/datasets | Gets the full list of datasets in a Power BI tenant.
Get-PowerBIWorkspace | Get-PowerBIGroup | Groups_GetGroupsAsAdmin | /v1.0/myorg/admin/groups | Gets the full list of workspaces in a Power BI tenant.
Add-PowerBIWorkspaceUser | Add-PowerBIGroupUser | Groups_AddUserAsAdmin | /v1.0/myorg/admin/groups/{groupId}/users | Adds a user as a member to a given workspace.
Remove-PowerBIWorkspaceUser | Remove-PowerBIGroupUser | Groups_DeleteUserAsAdmin | /v1.0/myorg/admin/groups/{groupId}/users/{user} | Removes a user from the membership list of a given workspace.
Restore-PowerBIWorkspace | Restore-PowerBIGroup | Groups_RestoreDeletedGroupAsAdmin | /v1.0/myorg/admin/groups/{groupId}/restore | Restores a deleted workspace.
Set-PowerBIWorkspace | Set-PowerBIGroup | Groups_UpdateGroupAsAdmin | /v1.0/myorg/admin/groups/{groupId} | Updates the properties of a given workspace.
Get-PowerBIDataset -WorkspaceId | N/A | Groups_GetDatasetsAsAdmin | /v1.0/myorg/admin/groups/{group_id}/datasets | Gets the datasets within a given workspace.
Get-PowerBIReport | N/A | Reports_GetReportsAsAdmin | /v1.0/myorg/admin/reports | Gets the full list of reports in a Power BI tenant.
Get-PowerBIDashboard | N/A | Dashboards_GetDashboardsAsAdmin | /v1.0/myorg/admin/dashboards | Gets the full list of dashboards in a Power BI tenant.
Get-PowerBIDashboard -WorkspaceId | N/A | Groups_GetDashboardsAsAdmin | /v1.0/myorg/admin/groups/{group_id}/dashboards | Gets the dashboards within a given workspace.
Get-PowerBITile | Get-PowerBIDashboardTile | Dashboards_GetTilesAsAdmin | /v1.0/myorg/admin/dashboards/{dashboard_id}/tiles | Gets the tiles of a given dashboard.
Get-PowerBIReport -WorkspaceId | N/A | Groups_GetReportsAsAdmin | /v1.0/myorg/admin/groups/{group_id}/reports | Gets the reports within a given workspace.
Get-PowerBIImport | N/A | Imports_GetImportsAsAdmin | /v1.0/myorg/admin/imports | Gets the full list of imports in a Power BI tenant.
Connect-PowerBIServiceAccount | Login-PowerBI & Login-PowerBIServiceAccount | N/A | N/A | Login to Power BI and start a session.
Disconnect-PowerBIServiceAccount | Logout-PowerBI & Logout-PowerBIServiceAccount | N/A | N/A | Logout of Power BI and close the existing session.
Invoke-PowerBIRestMethod | N/A | N/A | N/A | Send arbitrary REST API calls to Power BI.
Get-PowerBIAccessToken | N/A | N/A | N/A | Obtain the Power BI access token in a session.
Resolve-PowerBIError | N/A | N/A | N/A | Get detailed error information for unsuccessful cmdlet calls.

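For endpoints without a dedicated cmdlet, Invoke-PowerBIRestMethod can call any of the REST endpoints above directly. A minimal sketch, assuming an existing Connect-PowerBIServiceAccount session; the workspace ID is a placeholder:

# Relative URLs resolve against the /v1.0/myorg/ root shown in the table above.
$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder
Invoke-PowerBIRestMethod -Url "admin/groups/$workspaceId/datasets" -Method Get

# If a call fails, Resolve-PowerBIError surfaces the underlying details.
Resolve-PowerBIError -Last
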
You might also like