Automatic aggregations (Preview)
Automatic aggregations use state-of-the-art machine learning (ML) to continuously optimize DirectQuery
datasets for maximum report query performance. Automatic aggregations are built on top of existing user-
defined aggregations infrastructure first introduced with composite models for Power BI. Unlike user-defined
aggregations, automatic aggregations don’t require extensive data modeling and query-optimization skills to
configure and maintain. Automatic aggregations are both self-training and self-optimizing. They enable dataset
owners of any skill level to improve query performance, providing faster report visualizations for even the
largest datasets.
With automatic aggregations:
Report visualizations are faster - An optimal percentage of report queries are returned by an automatically
maintained in-memory aggregations cache instead of backend data source systems. Outlier queries that
cannot be returned by the in-memory cache are passed directly to the data source using DirectQuery.
Balanced architecture - Compared to pure DirectQuery mode, most query results are returned by the
Power BI query engine and in-memory aggregations cache. Query processing load on data source systems
at peak reporting times can be significantly reduced, which means increased scalability in the data source
backend.
Easy setup - Dataset owners can enable automatic aggregations training and schedule one or more refreshes
for the dataset. With the first training and refresh, automatic aggregations begin creating an aggregations
framework and optimal aggregations. The system automatically tunes itself over time.
Fine-tuning - With a simple and intuitive user interface in the dataset settings, you can estimate the
performance gains for a different percentage of queries returned from the in-memory aggregations cache
and make adjustments for even greater gains. A single slider control helps you easily fine-tune for your
environment.
IMPORTANT
Automatic aggregations are in Preview. When in preview, functionality and documentation are likely to change.
Requirements
Supported plans
Automatic aggregations are supported for Power BI Premium per capacity, Premium per user, and Power BI
Embedded datasets.
Supported data sources
During preview, automatic aggregations are supported for the following data sources:
Azure SQL Database
Azure Synapse Dedicated SQL pool
Google BigQuery
Snowflake
Supported modes
Automatic aggregations are supported for DirectQuery mode datasets. Composite model datasets with both
import tables and DirectQuery connections are also supported; however, automatic aggregations apply to the
DirectQuery connection only.
Permissions
To enable and configure automatic aggregations, you must be the Dataset owner. Workspace admins can take
over a dataset as owner to configure automatic aggregations settings.
Benefits
With DirectQuery, each time a dataset user opens a report or interacts with a report visualization, DAX queries
are passed to the query engine and then on to the backend data source as SQL queries. The data source must
then calculate and return results for each query. Compared to import mode datasets stored in-memory,
DirectQuery data source round trips can be both time and process intensive, often causing slow query response
times in report visualizations.
When enabled for a DirectQuery dataset, automatic aggregations can boost report query performance by
avoiding data source query round trips. Pre-aggregated query results are automatically returned by an in-
memory aggregations cache rather than being sent to and returned by the data source. The amount of pre-
aggregated data in the in-memory aggregations cache is a small fraction of the amount of data kept in fact and
detail tables at the data source. The result is not only better report query performance, but also reduced load on
backend data source systems. With automatic aggregations, only a small portion of report and ad-hoc queries
that require aggregations not included in the in-memory cache are passed to the backend data source, just like
with pure DirectQuery mode.
While training operations evaluate past queries from the query log, the results are sufficiently accurate to
ensure most future queries are covered. There is no guarantee, however, that future queries will be returned by
the in-memory aggregations cache, because those new queries could differ from those derived from the query
log. Queries not returned by the in-memory aggregations cache are passed to the data source by using
DirectQuery. Depending on the frequency and ranking of those new queries, aggregations for them may be
included in the in-memory aggregations cache with the next training operation.
The training operation has a 60-minute time limit. If training is unable to process the entire query log within the
time limit, a notification is logged in the dataset Refresh history and training resumes the next time it is
launched. The training cycle completes and replaces the existing automatic aggregations when the entire query
log is processed.
Refresh operations
As described above, after the training operation completes as part of the first scheduled refresh for your
selected frequency, Power BI performs a refresh operation that queries and loads new and updated aggregations
data into the in-memory aggregations cache and removes any aggregations that no longer rank high enough
(as determined by the training algorithm). All subsequent refreshes for your chosen Day or Week frequency are
refresh-only operations that query the data source to update existing aggregations data in the cache. Using our
example above, the 9:00 AM, 2:00 PM, and 7:00 PM scheduled refreshes for that day are refresh-only operations.
Regularly scheduled refreshes throughout the day (or week) ensure aggregations data in the cache are more up
to date with data at the backend data source. Through dataset Settings, you can schedule up to 48 refreshes per
day to ensure report queries that are returned by the aggregations cache are getting results based on the most
recent refreshed data from the backend data source.
CAUTION
Training and refresh operations are process and resource intensive for both the Power BI service and the data
source systems. Increasing the percentage of queries that use aggregations means more aggregations must be
queried and calculated from data sources during training and refresh operations, increasing the probability of
excessive use of system resources and potentially causing timeouts. To learn more, see Fine tuning.
Training on demand
As mentioned earlier, a training cycle may not complete within the time limits of a single data refresh cycle. If
you don’t want to wait until the next scheduled refresh cycle that includes training, you can also trigger
automatic aggregations training on-demand by clicking on Train and Refresh Now in dataset Settings. Using
Train and Refresh Now triggers both a training operation and a refresh operation. Check the dataset Refresh
history to see if the current operation is finished before running an additional on-demand training and refresh
operation, if necessary.
Refresh history
Each refresh operation is recorded in the dataset Refresh history. Important information about each refresh is
shown, including the amount of memory aggregations in the cache are consuming for the configured query
percentage. To view refresh history, in the dataset Settings page, click on Refresh history. If you want to drill
down a little further, click Show details.
By regularly checking refresh history you can ensure your scheduled refresh operations are completing within
an acceptable period. Make sure refresh operations are successfully completing before the next scheduled
refresh begins.
Training and refresh failures
While Power BI performs training and refresh operations as part of the first scheduled dataset refresh for the
day or week frequency you choose, these operations are implemented as separate transactions. If a training
operation cannot fully process the query log within its time limits, Power BI proceeds to refresh the
existing aggregations (and regular tables in a composite model) using the previous training state. In this case,
the refresh history indicates that the refresh succeeded, and training resumes processing the query log
the next time training launches. Query performance might be less optimized if client report query patterns
changed and aggregations haven't yet adjusted, but the achieved performance level should still be far better
than a pure DirectQuery dataset without any aggregations.
If a training operation requires too many cycles to finish processing the query log, consider reducing the
percentage of queries that use the in-memory aggregations cache in dataset Settings. This will reduce the
number of aggregations created in the cache, but allow more time for training and refresh operations to
complete. To learn more, see Fine tuning.
If training succeeds but refresh fails, the entire dataset refresh is marked as Failed because the result is an
unavailable in-memory aggregations cache.
When scheduling refresh, you can specify email notifications in case of refresh failures.
Application Lifecycle Management
From development to test and from test to production, datasets with automatic aggregations enabled have
special requirements for ALM solutions.
Deployment pipelines
When using deployment pipelines, Power BI can copy the datasets with their dataset configuration from the
current stage into the target stage. However, automatic aggregations must be reset in the target stage because
the settings do not get transferred from the current to the target stage. You can also deploy content programmatically, using
the deployment pipelines REST APIs. To learn more about this process, see Automate your deployment pipeline
using APIs and DevOps.
Custom ALM solutions
If you use a custom ALM solution based on XMLA endpoints, keep in mind that your solution might be able to
copy system-generated and user-created aggregations tables as part of the dataset metadata. However, you
must enable automatic aggregations after each deployment step at the target stage manually. Power BI will
retain the configuration if you overwrite an existing dataset.
NOTE
If you upload or republish a dataset as part of a Power BI Desktop (.pbix) file, system-created aggregation tables are lost
as Power BI replaces the existing dataset with all its metadata and data in the target workspace.
Altering a dataset
When altering a dataset with automatic aggregations enabled via XMLA endpoints, such as adding or removing
tables, Power BI preserves any existing aggregations that it can and removes those that are no longer needed
or relevant. Query performance could be impacted until the next training phase is triggered.
Metadata elements
Datasets with automatic aggregations enabled contain unique system-generated aggregations tables.
Aggregations tables aren't visible to users in reporting tools. They are however visible through the XMLA
endpoint by using tools with Analysis Services client libraries version 19.22.5 and higher. When working with
datasets with automatic aggregations enabled, be sure to upgrade your data modeling and administration tools
to the latest version of the client libraries. For SQL Server Management Studio (SSMS), upgrade to SSMS
version 18.9.2 or higher. Earlier versions of SSMS aren't able to enumerate tables or script out these datasets.
Automatic aggregations tables are identified by a SystemManaged table property, which is new to the Tabular
Object Model (TOM) in Analysis Services client libraries version 19.22.5 and higher. Shown in the following code
snippet, the SystemManaged property is set to true for automatic aggregations tables and false for regular
tables.
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AnalysisServices.Tabular;

namespace AutoAggs
{
    class Program
    {
        static void Main(string[] args)
        {
            string workspaceUri = "<Specify the URL of the workspace where your dataset resides>";
            string datasetName = "<Specify the name of your dataset>";

            // Connect to the workspace through the XMLA endpoint.
            // You may be prompted to sign in with Azure AD credentials.
            Server server = new Server();
            server.Connect(workspaceUri);
            Database dataset = server.Databases.GetByName(datasetName);

            // Automatic aggregations tables are flagged with SystemManaged = true.
            List<Table> aggregationsTables = dataset.Model.Tables
                .Where(tbl => tbl.SystemManaged)
                .ToList();

            if (aggregationsTables.Any())
            {
                Console.WriteLine("The following auto aggs tables exist in this dataset:");
                foreach (Table table in aggregationsTables)
                {
                    Console.WriteLine($"\t{table.Name}");
                }
            }
            else
            {
                Console.WriteLine("This dataset has no auto aggs tables.");
            }

            server.Disconnect();
        }
    }
}
Executing this snippet outputs the automatic aggregations tables currently included in the dataset to the console.
Keep in mind, aggregations tables are constantly changing as training operations determine the optimal
aggregations to include in the in-memory aggregations cache.
IMPORTANT
Power BI fully manages automatic aggregations system-generated table objects. Do not delete or modify these tables
yourself. Doing so can cause degraded performance.
Power BI maintains the dataset configuration outside of the dataset. The presence of a system-managed
aggregations table in a dataset does not necessarily mean the dataset is in fact enabled for automatic
aggregations training. In other words, if you script out a full model definition for a dataset with automatic
aggregations enabled, and create a new copy of the dataset (with a different name/workspace/capacity), the new
resulting dataset is not yet enabled for automatic aggregations training. You still need to enable automatic
aggregations training for the new dataset in dataset Settings.
Community
Power BI has a vibrant community where MVPs, BI pros, and peers share expertise in discussion groups, videos,
blogs and more. When learning about automatic aggregations, be sure to check out these additional resources:
Power BI Community
Search "Power BI automatic aggregations" on Bing
See also
Configure automatic aggregations
User-defined aggregations
DirectQuery in Power BI
Analysis Services client libraries
Configure automatic aggregations (Preview)
Configuring automatic aggregations includes enabling training for a supported DirectQuery dataset and
configuring one or more scheduled refreshes. After several iterations of the training and refresh operations have
run, you can return to dataset settings to fine-tune the percentage of report queries that use the in-memory
aggregations cache. Before completing these steps, be sure you fully understand the functionality and
limitations described in Automatic aggregations.
IMPORTANT
Automatic aggregations are in Preview. When in preview, functionality and documentation are likely to change.
Enable
You must have dataset Owner permissions to enable automatic aggregations. Workspace admins can take over
dataset owner permissions.
1. In dataset Settings, expand Scheduled refresh and performance optimization.
2. Click the Automatic aggregations training slider to On. If the enable slider is greyed out, ensure Data
source credentials for the dataset are configured and signed in.
3. In Refresh schedule, specify a refresh frequency and time zone. If the Refresh schedule controls are
disabled, verify the data source configuration including gateway connection (if necessary) and data
source credentials.
4. Click Add another time, and then specify one or more refreshes.
You must schedule at least one refresh. The first refresh for the frequency you select will include both a
training operation and a refresh that loads new and updated aggregations into the in-memory cache.
Schedule more refreshes to ensure report queries that hit the aggregations cache are getting results that
are most in-sync with the backend data source. To learn more, see Refresh operations.
5. Click Apply.
Fine-tuning
Both user-defined and system-generated aggregations tables are part of the dataset, contribute to the dataset
size, and are subject to existing Power BI dataset size constraints. Aggregations processing also consumes
resources and impacts dataset refresh durations. An optimal configuration strikes a balance between providing
pre-aggregated results from the in-memory aggregations cache for the most frequently used report queries,
while accepting slower results for outlier and ad-hoc queries in exchange for faster training and refresh times
and a reduced burden on system resources.
Adjusting the percentage
By default, the aggregations cache setting that determines the percentage of report queries that will use
aggregations from the in-memory cache is 75%. Increasing the percentage means a greater number of report
queries are ranked higher, and therefore aggregations for them are included in the in-memory aggregations
cache. While a higher percentage can mean more queries are answered from the in-memory cache, it can also
mean longer training and refresh times. Adjusting to a lower percentage, on the other hand, can mean
shorter training and refresh times and less resource utilization, but report visualization performance could
diminish because fewer report queries would be answered by the in-memory aggregations cache; those
report queries must instead round trip to the data source.
Before the system can determine the optimal aggregations to include in the cache, it must first know the report
query patterns being used most often. Be sure to allow several iterations of the training/refresh operations to be
completed before adjusting the percentage of queries that will use the aggregations cache. This gives the
training algorithm time to analyze report queries over a broader time period and self-adjust accordingly. For
example, if you've scheduled refreshes for daily frequency, you might want to wait a full week. User reporting
patterns on some days of the week may be different than others.
To adjust the percentage
1. In dataset Settings, expand Scheduled refresh and performance optimization.
2. In Query coverage, use the Adjust the percentage of queries that will use the aggregated caches slider
to increase or decrease the percentage to the desired value. As you adjust the percentage, the Query
performance impact lift chart provides estimated query response times.
Threshold appears as a marker line on the lift chart and indicates the target query response time for your
reports. You can then fine-tune the percentage of queries that will use the aggregations cache to determine a
new query percentage that meets the desired threshold.
Metrics
DirectQuery - An estimated duration in seconds for a report query sent to and returned from the data source
by using DirectQuery. Queries that cannot be answered by the in-memory aggregations cache will typically be
within this estimate.
Current query percentage - An estimated duration in seconds for report queries answered from the in-
memory aggregations cache, based on the percentage setting for the most recent training/refresh operation.
New query percentage - An estimated duration in seconds for report queries answered from the in-memory
aggregations cache for the newly selected percentage. As the percentage slider is changed, this metric reflects
the potential change.
Disable
You must have dataset Owner permissions to disable automatic aggregations. Workspace admins can take over
dataset owner permissions.
1. To disable, click the Automatic aggregations training slider to Off.
When you disable training, you are prompted with an option to delete automatic aggregation tables.
If you choose not to delete existing automatic aggregation tables, the tables will remain in the dataset and
continue to be refreshed. However, because training has been disabled, no new aggregations will be
added to them. Power BI will continue to use the existing tables to get aggregated query results when
possible.
If you choose to delete the tables, the dataset reverts to its original state, without any automatic
aggregations.
2. Click Apply.
See also
Automatic aggregations
User-defined aggregations
DirectQuery in Power BI
Power BI site reliability engineering (SRE) model
This document describes the Power BI team's approach to maintaining a reliable, performant, and scalable
service for customers. It describes monitoring service health, mitigating incidents, release management and
acting on necessary improvements. Other important operational aspects such as security are outside of the
scope of this document. This document was created to share knowledge with our customers, who often raise
questions regarding site reliability engineering practices. The intention is to offer transparency into how Power
BI minimizes service disruption through safe deployment, continuous monitoring, and rapid incident response.
The techniques described here also provide a blueprint for teams hosting service-based solutions to build
foundational live site processes that are efficient and effective at scale.
Author: Yitzhak Kesselman
Background
Power BI is a native cloud offering and global service, supporting the following customers and capabilities:
Serving 260,000 organizations and 97% of Fortune 500 companies
Deployed in 52 Azure regions around the world
Executing nearly 20 million queries per hour at peak
Ingesting over 90 petabytes of data per month into customer datasets
Employing 149 clusters powered by more than 350,000 cores
Despite absorbing six straight years of triple-digit growth and substantial new capabilities, the Power BI service
exhibits strong service reliability and operational excellence. As the service grew and large enterprises deployed
it at scale to hundreds of thousands of users, the need for exceptional reliability became essential. The reliability
results shown in the following table are the direct consequence of engineering, tools, and culture changes made
by the Power BI team over the past few years, and are highlighted in this article.
Through solutions and disciplined operations, the Power BI team has sustained exponential growth and rapid
update cycles without increasing overall cost or burden on live site management. In the following graph, you
can see the continuous and significant decline in Service Reliability Engineering cost per monthly active user
(MAU).
The efficiencies gained from site reliability engineering (SRE) team efforts offset the cost of funding such a team.
The SRE team size, and its corresponding operational cost, has remained constant despite exponential service
growth over the same period. Without such dedicated focus on efficiency, live site support costs would have
grown substantially with increased service usage.
Further, an increasing percentage of Power BI live site incidents can now be addressed partially or completely
through automation. The following chart shows a 90% decrease in Time to Mitigate (TTM) incidents over the
past two years while usage has more than tripled. The same period saw the introduction of alert automation to
deflect more than 82% of incidents.
These efforts have resulted in greatly improved service reliability to customers, approaching four nines
(99.99%) success rate.
The remainder of this article describes the approach and best practices put in place that enabled the SRE team to
achieve the previous chart's outcomes. The following sections include details on live site incident types, standard
investigation processes, best practices for operationalizing those processes at scale, and the Objective Key
Results (OKRs) used by the team to measure success.
Why incidents occur and how to live with them
The Power BI team ships weekly feature updates to the service and on-demand targeted fixes to address service
quality issues. The release process includes a comprehensive set of quality gates, including comprehensive code
reviews, ad-hoc testing, automated component-based and scenario-based tests, feature flighting, and regional
safe deployment. However, even with these safeguards, live site incidents can and do happen.
Live site incidents can be divided into several categories:
Dependent-service issues (such as Azure AD, Azure SQL, Storage, virtual machine scale set, Service Fabric)
Infrastructure outage (such as a hardware failure, data center failure)
Power BI environmental configuration issues (such as insufficient capacity)
Power BI service code regressions
Customer misconfiguration (such as insufficient resources, bad queries/reports)
Reducing incident volume is one way to decrease live site burden and to improve customer satisfaction.
However, doing so isn't always possible given that some of the incident categories are outside the team's direct
control. Furthermore, as the service footprint expands to support rapid growth in usage, the probability of an
incident occurring due to external factors increases. High incident counts can occur even when the
Power BI service has minimal service code regressions and has met or exceeded its Service Level Objective
(SLO) for overall reliability of 99.95%. This reality has led the Power BI team to devote significant resources to
reducing incident costs to a level that is sustainable, by both financial and engineering measures.
In the first phase, which is the service monitoring phase, the SRE team works with engineers, program
managers, and the Senior Leadership Team to define Service Level Indicators (SLIs) and Service Level Objectives
(SLOs) for both major scenarios and minor scenarios. These objectives apply to different metrics of the service,
including scenario/component reliability, scenario/component performance (latency), and resource
consumption. The live site team and product team then craft alerts that monitor Service Level Indicators (SLIs)
against agreed upon targets. When violations are detected, an alert is triggered for investigation.
In the second phase, which is the incident response phase, processes are structured to facilitate the following
results:
Prompt and targeted notification to customers of any relevant impact
Analysis of affected service components and workflows
Targeted mitigation of incident impact
In the final phase, which is the continuous improvement phase, the team focuses on completion of relevant
post-mortem analysis and resolution of any identified process, monitoring, or configuration or code fixes. The
fixes are then prioritized against the team's general engineering backlog based on overall severity and risk of
reoccurrence.
Live Site SREs also enforce alert quality in several ways, including the following:
Ensuring that TSGs include impact analysis and escalation policy
Ensuring that alerts execute for the absolute smallest time window possible for faster detection
Ensuring that alerts use reliability thresholds instead of absolute limits to scale clusters of different size
The time required for the Power BI team to react to incidents as measured by TTN, TTA, and TTM significantly
exceeds targets. Alert automation directly correlates with the team’s ability to sustain exponential service
growth, while continuing to meet or exceed target response times for incident alerting, notification, and
mitigation. Over a two-year period, the Power BI SRE team added automation to deflect more than 82% of
incidents and to enrich an additional six percent with details that empower engineers to quickly take action to
mitigate incidents when they occur. The approach also enables SMEs to focus on features and proactive quality
improvements instead of repeatedly being engaged for reactive incident investigations.
The above OKRs are actively tracked by the Power BI live site team, and the Senior Leadership Team, to ensure
that the team continues to meet or exceed the baseline required to support substantial service growth, to
maintain a sustainable live site workload, and to ensure high customer satisfaction.
Every change to the Power BI code base passes through automated component and end-to-end tests that
validate common scenarios and ensure that interactions yield expected results. In addition, Power BI uses a
Continuous Integration/Continuous Deployment (CI/CD) pipeline on main development branches to detect
other issues that are cost-prohibitive to identify on a per-change basis. The CI/CD process triggers a full cluster
build out and various synthetic tests that must pass before a change can enter the next stage in the release
process. Approved CI/CD builds are deployed to internal test environments for more automated and manual
validation before being included in each weekly feature update. The process means that a change will be
incorporated into a candidate release within 1 to 7 days after it is completed by the developer.
The weekly feature update then passes through various official deployment rings of Power BI’s safe deployment
process. The updated product build is applied first to an internal cluster that hosts content for the Power BI team
followed by the internal cluster that is used by all employees across Microsoft. The changes wait in each of these
environments for one week prior to moving to the final step: production deployment. Here, the deployment
team adopts a gradual rollout process that selectively applies the new build by region to allow for validation in
certain regions prior to broad application.
Scaling this deployment model to handle exponential service growth is accomplished in several ways, as the
following bullets describe:
Comprehensive Dependency Reviews: Power BI is a complex service with many upstream
dependencies and nontrivial hardware capacity requirements. The deployment team ensures the
availability and necessary capacity of all dependent resources and services in a target deployment region.
Usage models project capacity needs based on anticipated customer demands.
Automation: Power BI deployments are essentially zero-touch with little to no interaction required by
the deployment team. Prebuilt rollout specifications exist for multiple deployment scenarios. Deployment
configuration is validated at build-time to avoid unexpected errors during live deployment roll-outs.
Cluster Health Checks: Power BI deployment infrastructure checks internal service health models
before, during, and after an upgrade to identify unexpected behavior and potential regressions. When
possible, deployment tooling attempts auto-mitigation of encountered issues.
Incident Response Process: Deployment issues are handled like other live site incidents using
techniques that are discussed in more detail in the following sections of this article. Engineers analyze
issues with a focus on immediate mitigation and then follow up with relevant manual or automated
process changes to prevent future reoccurrence.
Feature Management/Exposure Control: Power BI applies a comprehensive framework for
selectively exposing new features to customers. Feature exposure is independent of deployment cadences
and allows new scenario code to be deployed in a disabled state until it has passed all relevant
quality bars. In addition, new features can be exposed to a subset of the overall Power BI population as an
extra validation step prior to enabling globally. If an issue is detected, the Power BI feature management
service provides the ability to disable an offending feature in seconds without waiting for more time-
consuming deployment rollback operations.
These features have enabled the Power BI team to improve the success rate of deployments by 18 points while
absorbing a 400% year-over-year growth in monthly deployments.
What's next
Another high priority item on the SRE team roadmap is the reduction of system noise from false positive alerts
or ignorable alerts. In addition, the team will inventory transient alerts, drive RCAs, and determine if there are
underlying systemic issues that need to be addressed.
Finally, a foundational element of Power BI service resiliency is ensuring that the service is compartmentalized
such that incidents only impact a subset of the users. Doing so enables mitigation by redirecting impacted traffic
to a healthy cluster. Supporting this holistically requires significant architectural work and design changes but
should yield even higher SLOs than are attainable today.
Next steps
For more information and resources on the Power BI service, take a look at the following articles.
Governance and deployment approaches
White papers for Power BI
Bring your own encryption keys for Power BI
Power BI encrypts data at rest and in process. By default, Power BI uses Microsoft-managed keys to encrypt your
data. In Power BI Premium you can also use your own keys for data at-rest that is imported into a dataset (see
Data source and storage considerations for more information). This approach is often described as bring your
own key (BYOK).
NOTE
This cmdlet requires Power BI management module v1.0.840. You can see which version you have by running
Get-InstalledModule -Name MicrosoftPowerBIMgmt . Install the latest version by running
Install-Module -Name MicrosoftPowerBIMgmt . You can get more information about the Power BI cmdlet and its
parameters in Power BI PowerShell cmdlet module.
IMPORTANT
Power BI BYOK supports only RSA keys with a 4096-bit length.
3. (Recommended) Check that the key vault has the soft delete option enabled.
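You can verify this with the Az PowerShell module; a minimal sketch, assuming the Az.KeyVault module is installed and a hypothetical vault named contoso-vault:

Get-AzKeyVault -VaultName 'contoso-vault' | Select-Object VaultName, EnableSoftDelete, EnablePurgeProtection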
Add the service principal
1. In the Azure portal, in your key vault, under Access policies, select Add Access Policy.
2. Under Key permissions, select Unwrap Key and Wrap Key.
NOTE
If you can't find "Microsoft.Azure.AnalysisServices", it's likely that the Azure subscription associated with your
Azure Key Vault never had a Power BI resource associated with it. Try searching for the following string instead:
00000009-0000-0000-c000-000000000000.
4. Select Add, then Save.
NOTE
To revoke access of Power BI to your data in the future remove access rights to this service principal from your Azure Key
Vault.
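The same access policy can also be granted with Az PowerShell; a sketch, assuming the hypothetical vault name contoso-vault and the Power BI service principal app ID mentioned in the note above:

Set-AzKeyVaultAccessPolicy -VaultName 'contoso-vault' -ServicePrincipalName '00000009-0000-0000-c000-000000000000' -PermissionsToKeys wrapKey,unwrapKey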
3. Select Create.
4. Under Keys , select the key you created.
5. Select the GUID for the Current Version of the key.
6. Check that Wrap Key and Unwrap Key are both selected. Copy the Key Identifier to use when you
enable BYOK in Power BI.
NOTE
Enabling firewall rules on your key vault is optional. You can also choose to leave the firewall disabled on your key vault as
per the default setting.
Power BI is a trusted Microsoft service. You can instruct the key vault firewall to allow access to all trusted
Microsoft services, a setting that enables Power BI to access your key vault without specifying endpoint
connections.
To configure Azure Key Vault to allow access to trusted Microsoft services, follow these steps:
1. Log into the Azure portal.
2. Search for Key Vaults .
3. Select the key vault you want to allow access to Power BI (and all other trusted Microsoft services).
4. Select Networking and then select Firewalls and virtual networks.
5. From the Allow access from option, select Selected networks.
6. In the firewall section, under Allow trusted Microsoft services to bypass this firewall, select Yes.
7. Select Save.
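Equivalently, a sketch of the same firewall configuration with Az PowerShell, again assuming the hypothetical vault name contoso-vault:

Update-AzKeyVaultNetworkRuleSet -VaultName 'contoso-vault' -Bypass AzureServices -DefaultAction Deny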
Enable BYOK on your tenant
You enable BYOK at the tenant level with PowerShell, by first introducing to your Power BI tenant the encryption
keys you created and stored in Azure Key Vault. You then assign these encryption keys per Premium capacity for
encrypting content in the capacity.
Important considerations
Before you enable BYOK, keep the following considerations in mind:
At this time, you cannot disable BYOK after you enable it. Depending on how you specify parameters for
Add-PowerBIEncryptionKey , you can control how you use BYOK for one or more of your capacities.
However, you can't undo the introduction of keys to your tenant. For more information, see Enable BYOK.
You cannot directly move a workspace that uses BYOK from a capacity in Power BI Premium to a shared
capacity. You must first move the workspace to a capacity that doesn't have BYOK enabled.
If you move a workspace that uses BYOK from a capacity in Power BI Premium, to shared, reports and
datasets will become inaccessible, as they are encrypted with the Key. To avoid this situation, you must
first move the workspace to a capacity that doesn’t have BYOK enabled.
Enable BYOK
To enable BYOK, you must be a Power BI admin, signed in using the Connect-PowerBIServiceAccount cmdlet. Then
use Add-PowerBIEncryptionKey to enable BYOK, as shown in the following example:
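A minimal sketch of the call, assuming a hypothetical key named 'Contoso Sales' and a hypothetical key vault URI; substitute the Key Identifier you copied earlier:

Add-PowerBIEncryptionKey -Name 'Contoso Sales' -KeyVaultKeyUri 'https://contoso-vault2.vault.azure.net/keys/EncryptionKey/<key-version>'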
To add multiple keys, run Add-PowerBIEncryptionKey with different values for -Name and -KeyVaultKeyUri.
The cmdlet accepts two switch parameters that affect encryption for current and future capacities. By default,
neither of the switches are set:
-Activate: Indicates that this key will be used for all existing capacities in the tenant that aren't already
encrypted.
-Default: Indicates that this key is now the default for the entire tenant. When you create a new capacity,
the capacity inherits this key.
IMPORTANT
If you specify -Default , all of the capacities created on your tenant from this point will be encrypted using the key you
specify (or an updated default key). You cannot undo the default operation, so you lose the ability to create a premium
capacity in your tenant that doesn't use BYOK.
After you enable BYOK on your tenant, set the encryption key for one or more Power BI capacities:
1. Use Get-PowerBICapacity to get the capacity ID that's required for the next step.
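A sketch of the call (the Individual scope shown here is an assumption); it returns properties like the following:

Get-PowerBICapacity -Scope Individual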
Id : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
DisplayName : Test Capacity
Admins : [email protected]
Sku : P1
State : Active
UserAccessRight : Admin
Region : North Central US
You have control over how you use BYOK across your tenant. For example, to encrypt a single capacity, call
Add-PowerBIEncryptionKey without -Activate or -Default . Then call Set-PowerBICapacityEncryptionKey for the
capacity where you want to enable BYOK.
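For example, a sketch assuming the capacity ID retrieved in the previous step and the hypothetical key name 'Contoso Sales':

Set-PowerBICapacityEncryptionKey -CapacityId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -KeyName 'Contoso Sales'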
Manage BYOK
Power BI provides additional cmdlets to help manage BYOK in your tenant:
Use Get-PowerBICapacity to get the key that a capacity is currently using:
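For example (a sketch; the -ShowEncryptionKey switch includes key details in the output):

Get-PowerBICapacity -Scope Organization -ShowEncryptionKey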
Use Get-PowerBIEncryptionKey to get the key that your tenant is currently using:
Get-PowerBIEncryptionKey
Use Get-PowerBIWorkspaceEncryptionStatus to see whether the datasets in a workspace are encrypted and
whether their encryption status is in sync with the workspace:
Get-PowerBIWorkspaceEncryptionStatus -Name 'Contoso Sales'
Note that encryption is enabled at the capacity level, but you get encryption status at the dataset level for
the specified workspace.
Use Switch-PowerBIEncryptionKey to switch (or rotate) the version of the key being used for encryption.
The cmdlet updates the -KeyVaultKeyUri for the key with the given -Name:
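A sketch, assuming the hypothetical key name from earlier and a new key-version URI:

Switch-PowerBIEncryptionKey -Name 'Contoso Sales' -KeyVaultKeyUri 'https://contoso-vault2.vault.azure.net/keys/EncryptionKey/<new-key-version>'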
Next steps
Power BI PowerShell cmdlet module
Ways to share your work in Power BI
Filter a report using query string parameters in the URL
Embed with report web part in SharePoint Online
Publish to Web from Power BI
Power BI Premium Generation 2
Distribute Power BI content to external guest users
with Azure AD B2B
Power BI enables sharing content with external guest users through Azure Active Directory Business-to-Business
(Azure AD B2B). By using Azure AD B2B, your organization enables and governs sharing with external users in a
central place.
By default, external guests have mostly consumption experiences. You can also choose to grant external users
elevated permissions in workspaces so that they have "Edit and Manage" privileges. Additionally, by
enabling the Allow external guest users to edit and manage content in the organization feature setting, you can
allow guest users outside your organization to browse and request access to your organization's content.
This article provides a basic introduction to Azure AD B2B in Power BI. For more information, see Distribute
Power BI content to external guest users using Azure Active Directory B2B.
Enable access
Make sure you enable the Invite external users to your organization feature in the Power BI admin portal before
inviting guest users. Even when this option is enabled, the user must be granted the Guest Inviter role in Azure
Active Directory to invite guest users.
The option to allow external guest users to edit and manage content in the organization lets you give guest
users the ability to see and create content in workspaces, including browsing your organization's Power BI. The
guest user can only be subscribed to content in workspaces that are backed by a Premium capacity.
NOTE
The Share content with external users setting controls whether Power BI allows inviting external users to your
organization. After an external user accepts the invite, they become an Azure AD B2B guest user in your organization.
They appear in people pickers throughout the Power BI experience. If the setting is disabled, existing guest users in your
organization continue to have access to any items they already had access to and continue to be listed in people picker
experiences. Additionally, if guests are added through the planned invite approach, they will also appear in people pickers.
To prevent guest users from accessing Power BI, use an Azure AD conditional access policy.
2. Under Manage, select Users > All users > New guest user.
Ad hoc invites
To invite an external user at any time, add them to your dashboard or report through the share feature or to
your app through the access page. Here is an example of what to do when inviting an external user to use an
app.
The guest user gets an email indicating that you shared the app with them.
The guest user must sign in with their organization email address. They'll receive a prompt to accept the
invitation after signing in. After signing in, the app opens for the guest user. To return to the app, they should
bookmark the link or save the email.
Licensing
The guest user must have the proper licensing in place to view the content that you shared. There are a few
ways to make sure the user has a proper license: use Power BI Premium, assign a Power BI Pro license, get a
Premium Per User (PPU) license, or use the guest's Power BI Pro license.
Guest users who can edit and manage content in the organization need a Power BI Pro or Premium Per User
(PPU) license to contribute content to workspaces or share content with others.
Use Power BI Premium
Assigning the workspace to Power BI Premium capacity lets the guest user use the app without requiring a
Power BI Pro license. Power BI Premium also lets apps take advantage of other capabilities like increased refresh
rates and large model sizes.
Next steps
For more detailed info, including how row-level security works, check out the whitepaper: Distribute Power BI
content to external guest users using Azure AD B2B.
For information about Azure AD B2B, see What is Azure AD B2B collaboration?.
Use customer-managed keys in Power BI
Power BI encrypts data at rest and in process. By default, Power BI uses Microsoft-managed keys to encrypt your
data. Organizations can choose to use their own keys for encryption of user content at rest across Power BI,
from report images to imported datasets in Premium capacities.
Next steps
The following links provide information that can be useful for customer-managed keys:
Bring your own encryption keys for Power BI
Configure Multi-Geo support for Power BI Premium
How capacities function
Power BI security white paper
Get a Power BI service subscription for your
organization
Administrators can sign up for the Power BI service through the Purchase services page of the Microsoft 365
admin center. When an administrator signs up for Power BI, they can assign licenses to users who should have
access.
Users in your organization can sign up for Power BI through the Power BI web site. When a user in your
organization signs up for Power BI, they're assigned a Power BI license automatically. If you want to turn off self-
service capabilities, follow the steps in Enable or disable self-service sign-up and purchasing.
NOTE
A Microsoft 365 E5 subscription already includes Power BI Pro licenses. To learn how to manage licenses, see View and
manage user licenses.
Follow these steps to purchase Power BI Pro licenses in the Microsoft 365 admin center:
1. Sign in to the Microsoft 365 admin center.
2. On the navigation menu, select Billing > Purchase services.
3. Search for Power BI or select the Power BI button from the View by category section near the top of
the page.
4. Select an offer, like Power BI Pro.
5. On the Purchase services page, select Buy. If you haven't previously used it, you can start a Power BI
Pro free trial subscription. It includes 25 licenses and expires in one month.
6. Choose Pay monthly or Pay yearly, according to how you want to pay.
7. Under Select license quantity, enter the number of licenses to buy, and then select Buy.
8. Complete the information on the checkout page, and then select Place order.
9. To verify your purchase, go to Billing > Your products and look for Power BI Pro.
To read more about how your organization can control and acquire the Power BI service, see Power BI in your
organization.
3. We run a quick check to see if you need to create a new account. Select Set up account to continue with
the sign-up process.
NOTE
If your email address is already in use with another Microsoft service, you can Sign in or Create a new account
instead. If you choose to create a new account, continue to follow these steps to get set up.
4. Complete the form to tell us about yourself. Be sure to choose the correct country or region. The country
you select determines where your data is stored, as explained in Find the default region for your
organization. The country or region doesn't have to match your physical location, but should match the
location for the majority of your users.
5. Select Next. We need to send a verification code to verify your identity. Provide a phone number where
we can send a text or call you. Then, select Send Verification Code.
6. Enter the verification code, then continue to Create your business identity.
Enter a short name for your business, and we'll check to make sure it's available. We use this short name
to create your organization name in the datacenter as a subdomain of onmicrosoft.com. You can add your
own business domain later. Don't worry if the short name you want is taken. Most likely someone with a
similar business name chose the same short name - just try a different variation. Select Next.
7. Create your user ID and password to sign in to your account. Select Sign up, and you're all set.
The account you created is now the global admin of a new Power BI Pro trial tenant. You can sign in to the
Microsoft 365 admin center to add more users, set up a custom domain, purchase more services, and manage
your Power BI subscription.
Next steps
View and manage user licenses
Enable or disable self-service sign-up and purchasing
Business subscriptions and billing documentation
Purchase and assign Power BI Pro user licenses
IMPORTANT
This article is for admins. Are you a user ready to upgrade to a Power BI Pro license? Go directly to Get started with Power
BI Pro to set up your account.
Power BI Pro is an individual user license that lets users read and interact with reports and dashboards that
others have published to the Power BI service. Users with this license type can share content and collaborate
with other Power BI Pro users. Only Power BI Pro users can publish or share content with other users or
consume content that's created by others, unless a Power BI Premium capacity hosts that content. For more
information about the available types of licenses and subscriptions, including Premium Per User (PPU) licenses,
see Power BI licensing in your organization.
NOTE
Self-service purchase, subscription, and license management capabilities for Power Platform products (Power BI, Power
Apps, and Power Automate) are available for commercial cloud customers. For more information, see Self-service purchase
FAQ. To enable or disable self-service purchasing capabilities, see Enable or disable self-service sign-up and purchasing.
Prerequisites
To purchase and assign licenses in the Microsoft 365 admin center, you must be a member of the global
administrator or Billing administrator role in Microsoft 365.
To assign licenses in the Azure portal, you must be an owner of the Azure subscription that Power BI uses for
Azure Active Directory lookups.
Purchase licenses in Microsoft 365
NOTE
If you usually purchase licenses through a volume licensing agreement, such as an Enterprise Agreement, and want to
receive an invoice instead of purchasing with a credit card or bank account, you need to submit the order differently.
Work with your Microsoft Reseller or go through the Volume Licensing Service Center to add or remove licenses. For
more information, see Manage subscription licenses.
Follow these steps to purchase Power BI Pro licenses in the Microsoft 365 admin center:
1. Sign in to the Microsoft 365 admin center.
2. On the navigation menu, select Billing > Purchase services.
3. Search or scroll to find the subscription you want to buy. You'll find Power BI under Other categories
that might interest you near the bottom of the page. Select the link to view the Power BI subscriptions
available to your organization.
4. Select Power BI Pro.
5. On the Purchase services page, select Buy.
6. Choose Pay monthly or Pay for a full year, according to how you want to pay.
7. Under How many users do you want?, enter the number of licenses to buy, then select Check out
now to complete the transaction.
8. To verify your purchase, go to Billing > Products & services and look for Power BI Pro.
9. To add more licenses later, locate Power BI Pro on the Products & services page, and then select
Add/Remove licenses.
Next steps
Power BI licensing in your organization
Find Power BI users who have signed in
Sign up for Power BI as an individual
More questions? Try asking the Power BI Community
Licensing the Power BI service for users in your
organization
What a user can do in the Power BI service depends on the type of per-user license that they have. The level of
access provided by their license depends on whether the workspace being accessed is in a Premium capacity
or not. All users of the Power BI service must have a license.
There are two ways for users to get a license. Using self-service sign-up capabilities and their work or school
account, users can get their own free, Pro, or Premium Per User license. Or, admins can get a Power BI license
subscription and assign licenses to users.
This article focuses on purchasing services and per-user licensing from an administrator perspective. For more
information about how users can get their own license, see Signing up for Power BI as an individual.
Global administrator
These roles manage the organization. For information about Power BI service administrator roles, see
Understanding Power BI service administrator roles.
5. Complete the information on the Checkout page, and then select Place order.
6. Select Licenses from the left sidebar, and then select Power BI (free) from the subscriptions.
7. Select Assign licenses and assign the licenses to your users.
If you want to see which users in your organization might already have a license, see View and manage user
licenses to learn how.
| License type | Capabilities when workspace is in shared capacity | Additional capabilities when workspace is in Premium capacity |
| --- | --- | --- |
| Power BI (free) | Access to content in My Workspace | Consume content shared with them |
| Power BI Pro | Publish content to other workspaces, share dashboards, subscribe to dashboards and reports, share with users who have a Pro license | Distribute content to users who have free licenses |
| Power BI Premium Per User | Publish content to other workspaces, share dashboards, subscribe to dashboards and reports, share with users who have a Premium Per User license | Distribute content to users who have free and Pro licenses |
Next steps
Purchase and assign Power BI Pro licenses
Business subscriptions and billing documentation
Find Power BI users that have signed in
More questions? Try asking the Power BI Community
View and manage Power BI user licenses
This article explains how admins can use the Microsoft 365 admin center or the Azure portal to view and
manage user licenses for the Power BI service.
NOTE
It's possible for a user to have both a Power BI (free) and a Power BI Pro license assigned. This can happen when a user
signs up for a free license and then is later assigned a Power BI Pro license. The highest licensing level takes effect in this
case.
This type of subscription is created for you when users take advantage of self-service sign-up. To read more, see
Power BI in your organization.
Next steps
Purchase Power BI Pro
Licensing for your organization
Power BI for US government customers
This article is for US government customers who are deploying Power BI as part of a Microsoft 365 Government
plan. Government plans are designed for the unique needs of organizations that must meet US compliance and
security standards.
The Power BI service that's designed for US government customers differs from the commercial version of the
Power BI service. These feature differences and capabilities are described in the following sections.
NOTE
Before you can get a Power BI US government subscription and assign licenses to users, you have to enroll in a Microsoft
365 Government plan. If your organization already has a Microsoft 365 Government plan, skip ahead to Buy a Power BI
Pro subscription for government customers.
NOTE
If you've already deployed Power BI to a commercial environment and want to migrate to the US government cloud, you'll
need to add a new Power BI Pro or Premium Per User (PPU) subscription to your Microsoft 365 Government plan. Next,
replicate the commercial data to the Power BI service for US government, remove commercial license assignments from
user accounts, and then assign a Power BI Pro government license to the user accounts.
TIP
In this video, Using Power BI Desktop in government clouds, Technical Specialist Steve Winward shows how you can apply
a registry setting to go directly to the right cloud endpoint for your environment. The registry key settings to bypass the
global discovery endpoint are shared on GitHub.
Next steps
Article: Sign up for Power BI for US government
Article: Microsoft Power Apps US Government
Article: Power Automate US Government
Video: Power BI US Government demo
Enroll your US government organization in the
Power BI service
This article describes the US government enrollment process for the Power BI service.
The Power BI service has a special version for the US government, which is part of the Microsoft 365
Government plans. The enrollment process for the US government Power BI service described here is different
from the commercial version of the Power BI service.
For more information about the Power BI service for the US government, see Power BI for United States
government customers - Overview.
NOTE
This article is intended for administrators who have authority to sign up their US government organization for Power BI. If
you're not an admin, contact your administrator about getting a subscription to Power BI for US government.
IMPORTANT
Power BI is enhancing the way that customers connect to these US government clouds:
Microsoft 365 Government Community Cloud (GCC)
Microsoft 365 Government Community Cloud High (GCC High)
Microsoft 365 Department of Defense (DoD)
From 20 March 2022, US government customers will need to complete an explicit request for onboarding these US
government clouds, to maintain continuity of data access.
IMPORTANT
Don't follow these instructions if you belong to one of the following:
Microsoft 365 Government Community Cloud High (GCC High)
Microsoft 365 Department of Defense (DoD)
To purchase the Power BI service for these US government clouds, use the process described in How do I buy Microsoft
365 Government, and work with your reseller to ensure new services are properly associated with your tenant.
After you sign up for the Power BI service for the US government, work with your account team to start the
allow list process described in this article. That step is needed to fully enable your organization in the
government community cloud.
Sign up for a new Microsoft 365 Government plan
If your organization is new to the government cloud community, follow the steps below to get a Microsoft 365
Government plan.
After this process is complete, follow the steps for existing Microsoft 365 Government customers to add a Power
BI subscription.
NOTE
These steps should be performed by the global administrator.
4. Submit the form to start the onboarding process. Your Microsoft representative or partner can help with
any questions.
Next steps
Overview of Power BI for US government
How do I buy Microsoft 365 Government?
Enable or disable self-service sign-up and
purchasing
5/23/2022 • 2 minutes to read
As an administrator, you determine whether to enable or disable self-service sign-up. You also determine
whether users in your organization can make self-service purchases to get their own license.
Turning off self-service sign-up keeps users from exploring Power BI for data visualization and analysis. If you
block individual sign-up, you may want to get Power BI (free) licenses for your organization and assign them to
all users.
NOTE
If you acquired Power BI through a Microsoft Cloud Solution Provider (CSP), the setting may be disabled to block users
from signing up individually. Your CSP may also be acting as the global admin for your organization, requiring that you
contact them to help you change this setting.
Use PowerShell, Azure AD, and Microsoft 365 to enable and disable
self-service
You'll use PowerShell commands to change the settings that control self-service sign-up and purchasing.
If you want to disable all self-service sign-ups, change a setting in Azure Active Directory named
AllowAdHocSubscriptions by using the MSOL PowerShell module. Follow the steps in this article to Set
MsolCompanySettings. This option turns off self-service sign-up for all Microsoft cloud-based apps and
services.
If you want to prevent users from purchasing their own Pro license, change the
AllowSelfServicePurchase setting using MSCommerce PowerShell commands. This setting lets you
turn off self-service purchase for specific products. Follow the steps in this article to Use
AllowSelfServicePurchase for the MSCommerce PowerShell module.
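As a sketch, the PowerShell for both settings might look like the following. This assumes the MSOnline and MSCommerce modules are installed; the Power BI Pro product ID shown is an assumption, so confirm it against the output of Get-MSCommerceProductPolicies, and note that parameter names can vary between module versions.

# Turn off self-service sign-up for all Microsoft cloud-based apps and services (MSOnline module).
Connect-MsolService
Set-MsolCompanySettings -AllowAdHocSubscriptions $false

# Turn off self-service purchase for a specific product (MSCommerce module).
Connect-MSCommerce

# List products governed by the AllowSelfServicePurchase policy to find product IDs.
Get-MSCommerceProductPolicies -PolicyId AllowSelfServicePurchase

# CFQ7TTC0L3PB is assumed to be the Power BI Pro product ID; verify it in the list above.
# Newer MSCommerce versions may use -Value 'Disabled' instead of -Enabled $false.
Update-MSCommerceProductPolicy -PolicyId AllowSelfServicePurchase -ProductId CFQ7TTC0L3PB -Enabled $false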
Signing up for Power BI with a new Microsoft 365
Trial
5/23/2022 • 2 minutes to read
This article describes an alternative way to sign up for the Power BI service, if you don't already have a work or
school email account.
If you're having problems signing up for Power BI with your email address, first make sure it's an email address
that can be used with Power BI. If that's not successful, sign up for a Microsoft 365 trial and create a work
account. Then, use that new work account to sign up for the Power BI service. You'll be able to use Power BI even
after the Microsoft 365 trial expires.
If you select Office 365 E5 , your trial will include Power BI Pro. The Power BI Pro trial will expire at the same
time as your Office 365 E5 trial, which is currently 30 days. If, instead, you select Office 365 E3 , you'll be able to
sign up for Power BI as a free user and upgrade to Premium Per User for a 60-day trial. For more information
about Premium Per User (PPU), see Power BI Premium Per User.
1. Enter your email address. Microsoft will let you know if that email address will work with Microsoft 365
or if you'll need to create a new email address.
If you need a new email address, Microsoft will walk you through the steps. The first step is creating a
new account: select Set up account .
Important considerations
If you have any issues signing in with the new account, try using a private browser session.
By using this signup method, you are creating a new organizational tenant and you'll become the User
administrator of the tenant. For more information, see What is Power BI administration?. You can add new users
to your tenant, then share with them, as described in the Microsoft 365 admin documentation.
Next steps
What is Power BI administration?
Power BI licensing in your organization
Signing up for Power BI as an individual
More questions? Try asking the Power BI Community
Add Power BI to a Microsoft 365 partner
subscription
5/23/2022 • 2 minutes to read
Microsoft 365 enables companies to resell Microsoft 365 bundled and integrated with their own solutions,
providing customers with a single point of contact for purchasing, billing, and support.
If you're interested in adding Power BI to your Microsoft 365 subscription, we recommend you contact your
partner to do so. If your partner doesn't currently offer Power BI, you can pursue the options described below.
3. Look for Subscriptions . If you see Subscriptions , you can acquire the service from Microsoft directly,
or you can contact another partner that offers Power BI.
If you don't see Subscriptions , you can't buy from Microsoft directly or from another partner.
If your partner doesn't offer Power BI and you can't buy directly from Microsoft or another partner, consider
signing up for a free trial.
To enable ad-hoc subscriptions, you can contact your partner and request that they turn it on. If you're an
administrator of your tenant, and know how to use Azure Active Directory PowerShell commands, you can
enable ad-hoc subscriptions yourself. For more information, follow the steps in Enable or disable self-service
purchasing.
Next steps
Power BI licensing in your organization
Purchase and assign Power BI Pro licenses
More questions? Try asking the Power BI Community
Use an alternate email address
5/23/2022 • 2 minutes to read
When you sign up for Power BI, you provide an email address. By default, Power BI uses this address to send you
updates about activity in the service. For example, when someone sends you a sharing invitation, it goes to this
address.
In some cases, you might want these emails delivered to an alternate email address rather than the one you
signed up with. This article explains how to specify an alternate address in Microsoft 365 and in PowerShell. The
article also explains how Azure Active Directory (Azure AD) resolves an email address.
NOTE
Specifying an alternate address doesn't affect which email address Power BI uses for e-mail subscriptions, service updates,
newsletters, and other promotional communications. Those communications are always sent to the email address you
used when you signed up for Power BI.
4. In the Alternate email field, enter the email address you'd like Microsoft 365 to use for Power BI
updates.
Use PowerShell
To specify an alternate address in PowerShell, use the Set-AzureADUser command.
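For example, a minimal sketch follows; the user and address are placeholders, and the alternate address is stored in the user's OtherMails property in Azure AD.

# Requires the AzureAD module: Install-Module AzureAD
Connect-AzureAD

# Set the alternate email address for a user; OtherMails accepts a list of addresses.
Set-AzureADUser -ObjectId 'user@contoso.com' -OtherMails @('alternate@contoso.com')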
If you don't want to use Power BI any longer, you can close your Power BI account. After you close your account,
you can't sign in to Power BI. Also, as it states in the data retention policy in the Power BI Service Agreement,
Power BI deletes any customer data you uploaded or created.
3. Select a reason for closing the account. You can also provide further information. Then select
Close account .
4. Confirm that you want to close your account.
You should see a confirmation that Power BI closed your account. You can reopen your account from here
if necessary.
Managed users
If your organization signed you up for Power BI, contact your admin. Ask them to unassign the license from your
account.
More questions? Try asking the Power BI Community
Power BI Premium features
5/23/2022 • 2 minutes to read
This article lists the main Power BI Premium features. Most of the features apply to all the Power BI Premium
licenses: Premium Gen2, Premium (original version), and Premium Per User (PPU). When a feature only works
with a specific license, the required license is indicated in the description field. If no license is listed, the feature
works with any license.
IMPORTANT
If your organization is using the original version of Power BI Premium, you're required to migrate to the modern Premium
Gen2 platform. Microsoft began migrating all Premium capacities to Gen2. If you have a Premium capacity that requires
migrating, you'll receive an email notification 60 days before the migration is scheduled to start. For more
information, see Plan your transition to Power BI Premium Gen2.
Backup and restore - Backup and restore data using XMLA endpoints
Bring your own key (BYOK) - Use your own keys to encrypt data
DirectQuery with dataflows - Connect directly to your dataflow without having to import its data
Insights (preview) - Explore and find insights such as anomalies and trends in your reports
On-demand loading capabilities for large models - Improve report load time by loading datasets to memory on demand
Refresh rate - The ability to refresh more than eight times a day
Streaming dataflows (preview) - Connect to, ingest, mash up, model, and build reports using near real-time data
Virtual network data gateway (preview) - Connect from Microsoft Cloud to Azure using a virtual network (VNet)
Next steps
What is Power BI Premium Gen2?
What is Power BI Premium?
What is Power BI Premium Gen2?
5/23/2022 • 10 minutes to read
Power BI has released a new version of Power BI Premium, Power BI Premium Generation 2 , referred
to as Premium Gen2 for convenience. You can choose to use the original version of Premium or switch to
Premium Gen2; a Premium capacity can use only one or the other.
NOTE
To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.
Optionally, you can also configure and use Autoscale with Power BI Premium to ensure capacity and
performance for your Premium users.
Monitoring in Gen2
The intent of monitoring in Premium Gen2 is to simplify monitoring and management of Premium capacities.
Premium Gen2 customers can shift their monitoring approach from a tool they watch to ensure their Premium
capacities are running properly, to a tool that alerts them when overuse needs correcting or when more
resources are required. In other words, rather than constantly monitoring for issues and adjusting, Premium
Gen2 aims to assure that everything is running properly and alerts users only when they must act.
Updates for Premium Gen2 and Embedded Gen2 - Premium Gen2 and Embedded Gen 2 only require
monitoring a single aspect: how much CPU time your capacity requires to serve the load at any moment.
This reduced need for monitoring is a departure from the many metrics that the original version of Power BI
Premium required. Organizations that built a cadence of monitoring and reporting on their original Premium
capacities will need to adjust that rhythm for Premium Gen2, due to its streamlined metrics and monitoring
requirements.
In Premium Gen2, if you exceed your CPU time per the SKU size you purchased, your capacity either autoscales
to accommodate the need (if you've optionally enabled autoscale), or throttles your interactive operations, based
on your configuration settings.
In Embedded Gen 2, if you exceed your CPU time per the SKU size you purchased, your capacity throttles your
interactive operations, based on your configuration settings. To autoscale in Embedded Gen 2, see Autoscaling in
Embedded Gen2.
Updates for Premium Gen2
Premium Gen2 and Embedded Gen 2 capacities use the Capacity Utilization App.
You can download and install the metrics app for Premium Gen2 and Embedded Gen2 using the following link.
Client library - Version
MSOLAP - 15.1.65.22
AMO - 19.12.7.0
ADOMD - 19.12.7.0
In some cases, manually installing the most recent client libraries may be necessary to reduce potential
connection and operations errors. To learn more about verifying existing installed client library versions
and manually installing the most recent versions, see Analysis Services client libraries.
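One way to check the installed version, sketched below under the assumption that the client libraries live under the usual Program Files location (adjust the search root for your installation):

# Look for the MSOLAP provider DLL and report its file version.
# The search root is an assumption -- adjust it to your installation.
Get-ChildItem 'C:\Program Files\Microsoft Analysis Services' -Recurse -Filter 'msolap*.dll' -ErrorAction SilentlyContinue |
    Select-Object FullName, @{ Name = 'Version'; Expression = { $_.VersionInfo.ProductVersion } }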
There's a 225-second limitation for rendering Power BI visuals. Visuals that take longer to render time
out and don't display.
Throttling can occur in Power BI Premium capacities. Concurrency limits are applied per session. An error
message will appear when too many operations are being processed concurrently.
Memory restrictions are different in Premium Gen2 and Embedded Gen 2. In the first generation of
Premium and Embedded, memory was restricted to a limited amount of RAM used by all artifacts
running simultaneously. In Gen2, there is no memory limit for the capacity as a whole. Instead, individual
artifacts (such as datasets, dataflows, and paginated reports) are subject to the following RAM limitations:
A single artifact cannot exceed the amount of memory the capacity SKU offers.
The limitation includes all the operations (interactive and background) being processed for the
artifact while in use (for example, while a report is being viewed, interacted with, or refreshed).
Dataset operations like queries are also subject to individual memory limits, just as they are in the
first version of Premium.
To illustrate the restriction, consider a dataset with an in-memory footprint of 1 GB, and a user
initiating an on-demand refresh while interacting with a report based on the same dataset. Several
concurrent actions determine the amount of memory attributed to the original dataset, which may
be larger than two times the dataset size:
The dataset needs to be loaded into memory.
The refresh operation will cause the memory used by the dataset to double, at least, since the
original copy of data is still available for active queries, while an additional copy is being
processed by the refresh. Once the refresh transaction commits, the memory footprint will
reduce.
Report interactions will execute DAX queries. Each DAX query consumes a certain amount of
temporary memory required to produce the results. Each query may consume a different
amount of memory and will be subject to the query memory limitation as described.
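To make the arithmetic concrete, a rough sketch of the accounting described above (the numbers are illustrative, not exact service behavior):

$datasetGb     = 1.0          # in-memory footprint of the dataset
$refreshCopyGb = $datasetGb   # the refresh processes a second copy alongside the original
$queryGb       = 0.2          # illustrative temporary memory for concurrent DAX queries

# Approximate peak memory attributed to the dataset during the refresh;
# note it can exceed two times the dataset size.
$peakGb = $datasetGb + $refreshCopyGb + $queryGb
"Approximate peak memory: $peakGb GB"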
The following table summarizes all the limitations that are dependent on the capacity size.
Capacity SKUs | Total v-cores | Backend v-cores | Frontend v-cores | RAM (GB) 1,2,3 | DirectQuery/Live connection (per second) 1,2 | Max memory per query (GB) 1,2 | Model refresh parallelism 2
EM2/A2 | 2 | 1 | 1 | 5 | 7.5 | 2 | 10
EM3/A3 | 4 | 2 | 2 | 10 | 15 | 2 | 20
P1/A4 | 8 | 4 | 4 | 25 | 30 | 6 | 40
P2/A5 | 16 | 8 | 8 | 50 | 60 | 6 | 80
1 The Power BI Premium Utilization and Metrics app doesn't currently expose these metrics.
2 These limits only apply to the datasets workload per capacity.
3 The RAM column represents an upper bound for the dataset size. However, an amount of memory must
be reserved for operations such as refreshes and queries on the dataset. The maximum dataset size
permitted on a capacity may be smaller than the numbers in this column.
4 SKUs greater than 100 GB are not available in all regions. To request using these SKUs in regions where
they're not available, contact your Microsoft account manager.
Next steps
The following articles provide additional information about Power BI Premium.
Power BI Premium Per User
Managing Premium capacities
Azure Power BI Embedded Documentation
More questions? Try asking the Power BI Community
Power BI Premium Gen2 architecture
5/23/2022 • 4 minutes to read
Power BI Premium Generation 2 , referred to as Premium Gen2 for convenience, is an improved and
architecturally redesigned generation of Power BI Premium.
Architectural changes in Premium Gen2, especially around how CPU resources are allocated and used, enable
more versatility in offerings and more flexibility in licensing models. For example, the new architecture makes it
possible to offer Premium on a per-user basis, as Premium Per User. The architecture also provides customers
with better performance, and better governance and control over their Power BI expenses.
The most significant update in the architecture of Premium Gen2 is the way capacities' backend v-cores
(virtual CPU cores) are implemented:
In the original version of Power BI Premium, backend v-cores were reserved physical computing nodes in the
cloud, with differences in the number of v-cores and the amount of onboard memory according to the
customer's licensing SKU. Customer administrators were required to keep track of how busy these nodes were,
using the Premium metrics app. They had to use the app and other tools to determine how much capacity their
users required to meet their computing needs.
In Premium Gen2, backend v-cores are implemented on regional clusters of physical nodes in the cloud, which
are shared by all tenants using Premium capacities in that Power BI region. The regional cluster is further
divided into specialized groups of nodes, where each group handles a different Power BI workload (datasets,
dataflows, or paginated reports). These specialized groups of nodes help avoid resource contention between
fundamentally different workloads running on the same node.
In both Premium Gen1 and Gen2 versions, administrators have the ability to tweak and configure workload
settings for their capacity. This can be used to reduce resource contention between workloads (datasets,
dataflows, paginated reports, and AI), and adjust other settings such as memory limits and timeouts based on
the capacity usage patterns.
The contents of workspaces assigned to a Premium Gen2 capacity are stored on your organization's capacity
storage layer, which is implemented on top of capacity-specific Azure storage blob containers, similar to the
original version of Premium. This approach enables features like BYOK to be used for your data.
When the content needs to be viewed or refreshed, it is read from the storage layer and placed on a Premium
Gen2 backend node for computing. Power BI uses a placement mechanism that assures the optimal node is
chosen within the proper group of computing nodes. The mechanism typically places new content on the node
with the most available memory at the time the content is loaded, so that the view or refresh operation can gain
access to the most resources and can perform optimally.
As your capacity renders and refreshes more content, it uses more computation nodes, each with enough
resources to complete operations quickly and successfully. This means your capacity may use multiple
computational nodes and in some cases, content might even move between nodes due to the Power BI service
performing internal load-balancing across nodes or resources. When such load balancing occurs, Power BI
makes sure content movement doesn't impact end-user experiences.
There are several positive results from distributing backend processing of content (datasets, dataflows, and
paginated reports) across shared backend nodes:
The shared nodes are at least as large as an original Premium P3 node, which means there are more v-
cores available to perform any operation; this can increase performance by up to 16x compared to an
original Premium P1.
Whatever node your processing lands on, the placement mechanism makes sure memory remains
available for your operation to complete, within the applicable memory constraints of your capacity (see
the limitations section of this article for full details on memory constraints).
Cross-workload resource contention is prevented by separating the shared nodes into specialized
workload groups. As a result of this separation, there are no capacity workload settings to manage for
paginated report workloads.
The limitations on different capacity SKUs are not based on the physical constraints as they were in the
original version of Premium; rather, they are based on an expected and clear set of rules that the Power BI
Premium service enforces:
Total capacity CPU throughput is at or below the throughput possible with the v-cores your
purchased capacity has.
Memory consumption required for viewing and refresh operations remains within the memory
limits of your purchased capacity.
Because of this new architecture, customer admins do not need to monitor their capacities for signs of
approaching the limits of their resources, and instead are provided with clear indication when such limits
are met. This significantly reduces the effort and overhead required of capacity administrators to
maintain optimal capacity performance.
Next steps
What is Power BI Premium Gen2?
Premium Gen2 capacity load evaluation
Using Autoscale with Power BI Premium
Power BI Premium Gen2 FAQ
Power BI Premium Per User FAQ
Add or change Azure subscription administrators
More questions? Try asking the Power BI Community.
How to purchase Power BI Premium
5/23/2022 • 4 minutes to read
This article describes how to purchase Power BI Premium capacity for your organization. The article covers the
following scenario:
Using P SKUs for typical production scenarios. P SKUs require a monthly or yearly commitment, and are
billed monthly.
For more information about Power BI Premium, see What is Power BI Premium?. For current pricing and
planning information, see the Power BI pricing page. Content creators still need a Power BI Pro license, even if
your organization uses Power BI Premium. Ensure you purchase at least one Power BI Pro license for your
organization. With A SKUs, all users who consume content also require Pro licenses.
NOTE
If a Premium subscription expires, you have 30 days of full access to your capacity. After that, your content reverts to a
shared capacity where it will continue to be accessible. However, you will not be able to view reports that are based on
datasets that are greater than 1 GB or reports that require Premium capacities to render.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
IMPORTANT
Selecting Submit charges the credit card on file.
The Your products page will then indicate the number of instances you have. Within the Power BI admin portal,
under Capacity settings , the available v-cores reflects the new capacity purchased.
Next steps
Configure and manage capacities in Power BI Premium
Power BI pricing page
Power BI Premium FAQ
Planning a Power BI Enterprise Deployment whitepaper
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the
following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Purchase Power BI Premium for testing
5/23/2022 • 2 minutes to read
This article describes how to purchase Power BI Premium A SKUs for testing scenarios, and for cases where you
don't have the permissions necessary to purchase P SKUs (Microsoft 365 Global Administrator role or Billing
Administrator role). A SKUs require no time commitment, and are billed hourly. You purchase A SKUs in the
Azure portal.
For more information about Power BI Premium, see What is Power BI Premium?. For current pricing and
planning information, see the Power BI pricing page. Content creators still need a Power BI Pro license, even if
your organization uses Power BI Premium. Ensure you purchase at least one Power BI Pro license for your
organization. With A SKUs, all users who consume content also require Pro licenses.
NOTE
If a Premium subscription expires, you have 30 days of full access to your capacity. After that, your content reverts to a
shared capacity. Models that are greater than 1 GB are not supported in shared capacity.
NOTE
If you purchase an A4 or higher SKU, you can take advantage of all Premium features except for unlimited sharing of
content. With A SKUs, all users who consume content require Pro licenses.
7. Select Review + Create , review the options you chose, then select Create .
8. It can take a few minutes to complete the deployment. When it's ready, select Go to resource .
9. On the management screen, review the options you have for managing the service, including pausing the
service when you're not using it.
After you purchase capacity, learn how to manage capacities and assign workspaces to a capacity.
Next steps
What is Power BI Premium? How to purchase Power BI Premium Configure and manage capacities in Power BI
Premium
Power BI pricing page
Power BI Premium FAQ
Planning a Power BI Enterprise Deployment whitepaper
More questions? Try asking the Power BI Community
Plan your transition to Power BI Premium Gen2
5/23/2022 • 3 minutes to read
This article provides information about key dates for migrating Power BI Premium capacity to the latest
platform.
Over the last several months, we've been working to make many improvements to Power BI Premium. Changes
include updates to licensing, performance, scaling, management overhead, and improved insight into utilization
metrics. This next generation of Power BI Premium, referred to as Power BI Premium Gen2, has officially moved
from preview to general availability as of October 4, 2021. You can read the announcement about this release in
the Power BI blog.
If your organization is using the original version of Power BI Premium, you're required to migrate to the modern
Gen2 platform. Microsoft began migrating all Premium capacities to Gen2. If you have a Premium capacity that
requires migrating, you'll receive an email notification 60 days before the migration is scheduled to start.
Migration notification
Following the general availability of Gen2, we'll begin to notify affected customers so that you can prepare your
organization for changes. We'll post additional awareness, along with specific migration timelines, to the
Microsoft 365 Message Center. Admins will receive 60 days advance notice of changes. The timeline will vary by cloud.
Next steps
What is Power BI Premium Gen2?
Using Autoscale with Power BI Premium
Install the Gen2 metrics app
Managing Premium Gen2 capacities
5/23/2022 • 8 minutes to read
Managing Power BI Premium involves creating, managing, and monitoring Premium capacities. This article
provides an overview of capacities; see Configure and manage capacities for step-by-step instructions.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
NOTE
You can also get Premium Per User (PPU) licenses for individuals, which provide many of the features and capabilities of a
Premium capacity, and also incorporate all functionality included with a Power BI Pro license. For more information, see
Power BI Premium Per User.
A Premium capacity can be assigned to a region other than the home region of the Power BI tenant, known as
multi-geo. Multi-geo provides administrative control over which datacenters, within defined geographic regions,
your Power BI content resides in. The rationale for a multi-geo deployment is typically corporate or
government compliance, rather than performance and scale. Report and dashboard loading still involves
requests to the home region for metadata. To learn more, see Multi-Geo support for Power BI Premium.
Power BI service administrators and Global Administrators can modify Premium capacities. Specifically, they can:
Change the capacity size to scale-up or scale-down resources.
Add or remove Capacity Admins.
Add or remove users that have assignment permissions.
Change regions.
NOTE
Service and global administrators do not have access to capacity metrics unless explicitly added as capacity admins.
Contributor assignment permissions are required to assign a workspace to a specific Premium capacity. The
permissions can be granted to the entire organization, specific users, or groups.
By default, Premium capacities support workloads associated with running Power BI queries. Premium
capacities also support additional workloads: AI (Cognitive Services) , Paginated Reports , and Dataflows .
Deleting a Premium capacity is possible and won't delete its workspaces and content. Instead, any assigned
workspaces move to shared capacity. If the Premium capacity was created in a region other than the home
region, the workspaces move to shared capacity in the home region.
Capacities have limited resources, defined by each capacity SKU. Resource consumption by Power BI items
(such as reports and dashboards) across capacities can be tracked using the metrics app.
Assigning workspaces to capacities
Workspaces can be assigned to a Premium capacity in the Power BI Admin portal or, for a workspace, in the
Workspace pane.
Capacity Admins, as well as Global Administrators or Power BI service administrators, can bulk assign
workspaces in the Power BI Admin portal. Bulk assignment can apply to:
Workspaces by users - All workspaces owned by those users, including personal workspaces, are
assigned to the Premium capacity. This will include the reassignment of workspaces when they are
already assigned to a different Premium capacity. In addition, the users are also assigned workspace
assignment permissions.
Specific workspaces
The entire organization's workspaces - All workspaces, including personal workspaces, are assigned
to the Premium capacity. All current and future users are assigned workspace assignment permissions.
This approach is not recommended. A more targeted approach is preferred.
You can enable Premium capabilities in a workspace by setting the proper license mode. To set a license mode,
you must be both a workspace admin, and have assignment permissions. To enable Premium capabilities for P
and EM SKUs, set the license mode to Premium per capacity. To enable Premium capabilities for A SKU’s, set the
license mode to Embedded. To enable Premium capabilities for Premium Per User (PPU), mark the license mode
as Premium Per User. To remove a workspace from Premium, mark the workspace license mode as Pro.
Workspace admins can remove a workspace from a capacity (to shared capacity) without requiring assignment
permission. Removing a workspace from reserved capacity effectively relocates it to shared capacity. Note that
removing a workspace from a Premium capacity can have negative consequences; for example, shared content
can become unavailable to Power BI Free licensed users, and scheduled refreshes can be suspended if they
exceed the allowances supported by shared capacity.
In the Power BI service, a workspace assigned to a Premium capacity is easily identified by the diamond icon
that adorns the workspace name.
Next steps
Using autoscale with Premium Gen2
Install the Gen2 metrics app
Using the Premium Gen2 metrics app
Configure and manage capacities in Power BI
Premium
5/23/2022 • 5 minutes to read
Managing Power BI Premium involves creating, managing, and monitoring Premium capacities. This article
provides step-by-step instructions; for an overview of capacities, see Managing Premium capacities.
Learn how to manage Power BI Premium and Power BI Embedded capacities, which provide reserved resources
for your content.
Capacity is at the heart of the Power BI Premium and Power BI Embedded offerings. It is a set of resources
reserved for exclusive use by your organization. Having a capacity enables you to publish dashboards, reports,
and datasets to users throughout your organization without having to purchase per-user licenses for them. It
also offers dependable, consistent performance for the content hosted in capacity. For more information, see
What is Power BI Premium?.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
NOTE
You can also get Premium Per User (PPU) licenses for individuals, which provide many of the features and capabilities of a
Premium capacity, and also incorporate all functionality included with a Power BI Pro license. For more information, see
Power BI Premium Per User.
Manage capacity
After you have purchased capacity nodes in Microsoft 365, you set up the capacity in the Power BI admin portal.
You manage Power BI Premium capacities in the Capacity settings section of the portal.
You manage a capacity by selecting the name of the capacity. This takes you to the capacity management screen.
If no workspaces have been assigned to the capacity, you will see a message about assigning a workspace to the
capacity.
Setting up a new capacity (Power BI Premium)
The admin portal shows the number of virtual cores (v-cores) that you have used and that you still have
available. The total number of v-cores is based on the Premium SKUs that you have purchased. For example,
purchasing a P3 and a P2 results in 48 available cores – 32 from the P3 and 16 from the P2.
If you have available v-cores, set up your new capacity by following these steps.
1. Select Set up new capacity .
2. Give your capacity a name.
3. Define who the admin is for this capacity.
4. Select your capacity size. Available options are dependent on how many available v-cores you have. You
can't select an option that is larger than what you have available.
5. Select Set up .
Capacity admins, as well as Power BI admins and global administrators, then see the capacity listed in the admin
portal.
Capacity settings
1. In the Premium capacity management screen, under Actions , select the gear icon to review and update
settings.
2. You can see who the service admins are, the SKU/size of the capacity, and what region the capacity is in.
3. You can also rename or delete a capacity.
NOTE
Power BI Embedded capacity settings are managed in the Microsoft Azure portal.
NOTE
To upgrade to a P4 or a P5 capacity you need to buy a few smaller SKUs that will add up to the size of the
capacity you want.
Administrators are free to create, resize and delete nodes, so long as they have the requisite number of v-
cores.
P SKUs cannot be downgraded to EM SKUs. You can hover over any disabled options to see an
explanation.
IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. See capacity and reliability notifications for more
information.
NOTE
For Power BI Embedded, capacity admins are defined in the Microsoft Azure portal.
Workspaces by users - When you assign workspaces by user or group, all the workspaces that the user
or group is an admin of become part of the Premium capacity, including the user's personal workspace.
The users automatically get workspace assignment permissions. This includes workspaces already
assigned to a different capacity.
3. Select Apply .
Assign from workspace settings
You can also assign a workspace to a Premium capacity from the settings of that workspace. To move a
workspace into a capacity, you must have admin permissions to that workspace, and also capacity assignment
permissions to that capacity. Note that workspace admins can always remove a workspace from Premium
capacity.
1. Edit a workspace by selecting the ellipsis (...), then selecting Edit this workspace .
2. Under Edit this workspace , expand Advanced .
3. Select the capacity that you want to assign this workspace to.
4. Select Save .
Once saved, the workspace and all its contents are moved into Premium capacity without any experience
interruption for end users.
Selecting Power BI Report Server key displays a dialog containing your product key. You can copy it and use
it with the installation.
For more information, see Install Power BI Report Server.
Next steps
Managing Premium capacities
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the
following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Premium Gen2 capacity load evaluation
5/23/2022 • 4 minutes to read
TIP
This article explains how to evaluate your Gen2 capacity load. It covers concepts such as overload and autoscale. You can
also watch the Gen2 features breakdown video, which illustrates some of the Gen2 features described in this article.
To enforce CPU throughput limitations, Power BI evaluates the throughput from your Premium Gen2 capacity on
an ongoing basis.
Power BI evaluates throughput every 30 seconds . It allows operations to complete, collects execution time on
the shared pool physical node's CPUs, and then aggregates all operations on your capacity into 30-second
CPU intervals, comparing the results to what your purchased capacity is able to support.
The following image illustrates how Premium Gen2 evaluates and completes queries.
Let's look at an example: a P1 with four backend v-cores can support 120 seconds (4 x 30 seconds = 120) of v-
core execution time, also known as CPU time.
The aggregation is complex. It uses specialized algorithms for different workloads, and for different types of
operations, as described in the following points:
Slow-running operations , such as dataset and dataflow refresh, are considered background
operations since they typically run in the background and users don’t actively monitor them or look at
them visually. Background operations are lengthy and require significant CPU power to complete during
the long process. Power BI spreads CPU costs of background operations over 24 hours, so that capacities
don't hit maximum resource usage due to too many refreshes running simultaneously. This allows Power
BI Premium Gen2 subscribers to run as many background operations as allowed by their purchased
capacity SKU, and doesn’t limit them like the original Premium generation.
Fast operations like queries, report loads, and others are considered interactive operations. The CPU
time required to complete those operations is aggregated, to minimize the number of 30-seconds
windows that are impacted following that operation's completion.
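A simplified sketch of this accounting (illustrative only; the service's actual algorithms are more sophisticated):

# Background operations: cost is smoothed over 24 hours = 2,880 30-second windows.
$refreshCpuSeconds   = 600
$perWindowBackground = $refreshCpuSeconds / 2880      # ~0.21 CPU seconds per window

# Interactive operations: cost lands in the window(s) around completion.
$queryCpuSeconds = 5

# A P1 with 4 backend v-cores allows 4 x 30 = 120 CPU seconds per window.
$allowanceSeconds = 4 * 30
$windowLoad       = $perWindowBackground + $queryCpuSeconds
"Window load: {0:N2}s of {1}s allowed" -f $windowLoad, $allowanceSeconds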
Premium Gen2 background operation scheduling
Refreshes are run on Premium Gen2 capacities at the time they are scheduled, or close to it, regardless of how
many other background operations were scheduled for the same time. Datasets and dataflows being refreshed
are placed on a physical processing node that has enough memory available to load them, and then begin the
refresh process.
While processing the refresh, datasets may consume more memory to complete the refresh process. The refresh
engine makes sure no artifact can exceed the amount of memory that their base SKU allows them to consume
(for example, 25 GB on a P1 subscription, 50 GB on a P2 subscription, and so on).
In contrast, if you've optionally enabled autoscale and the sum of interactive and background utilization
exceeds the total backend v-core quota of your capacity, the capacity automatically scales up (is raised) by one
v-core for the next 24 hours.
The following image shows how autoscale works.
Autoscale always considers your current capacity size to evaluate how much you use, so if you already
autoscaled into one v-core, that v-core is spread evenly at 50% for frontend utilization and 50% for backend
utilization. This means your maximum capacity is now at (120 + 0.5 * 30 = 135 seconds) of CPU time in an
evaluation cycle.
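That arithmetic, sketched out (assuming each autoscale v-core is split evenly between frontend and backend, as described above):

$baseBackendVCores = 4        # a P1 capacity
$autoscaleVCores   = 1        # one autoscale v-core, split 50/50 frontend/backend
$windowSeconds     = 30

# Base allowance plus the backend half of each autoscale v-core:
$allowance = ($baseBackendVCores * $windowSeconds) + (0.5 * $autoscaleVCores * $windowSeconds)
$allowance                    # 135 CPU seconds per evaluation cycle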
Autoscale always ensures that no single interactive operation can account for all of your capacity, and you must
have two or more operations occurring in a single evaluation cycle to initiate autoscale.
Configure autoscale
To configure autoscale on a Power BI Premium Gen2 capacity, follow the instructions in Using Autoscale with
Power BI Premium.
Next steps
What is Power BI Premium Gen2?
Power BI Premium Gen2 architecture
Using Autoscale with Power BI Premium
Power BI Premium Gen2 FAQ
Power BI Premium Per User FAQ (preview)
Add or change Azure subscription administrators
More questions? Try asking the Power BI Community
Install the Gen2 metrics app
5/23/2022 • 4 minutes to read
The Power BI Premium Utilization and Metrics app is designed to provide monitoring capabilities for Power BI
Gen2 Premium capacities. Use this guide to install the app. Once the app is installed, you can learn how to use it.
NOTE
The app is updated regularly with new features and functionalities. If you see there's a pending update in the notifications
center, we recommend that you update the app.
Prerequisites
Before you install the Gen2 metrics app, review these requirements:
You need to be a capacity admin
The app only works with Gen2 capacities
NOTE
If you're installing the app in a government cloud environment, use one of the links below. You can also use these links to
upgrade the app. When upgrading, you don't need to delete the old app.
Microsoft 365 Government Community Cloud (GCC)
Microsoft 365 Government Community Cloud High (GCC High)
Microsoft 365 Department of Defense (DoD)
Power BI for China cloud
To install the Power BI Premium Capacity Utilization and Metrics app for the first time, follow these steps:
1. Select one of these options to get the app from AppSource:
Go to AppSource > Power BI Premium Capacity Utilization and Metrics and select Get it now .
In Power BI service, do the following:
a. Select Apps .
b. Select Get apps .
c. Search for Power BI Premium .
d. Select the Power BI Premium Capacity Utilization and Metrics app.
e. Select Get it now .
2. When prompted, sign in to AppSource using your Microsoft account and complete the registration
screen. The app will take you to the Power BI service to complete the process. Select Install to continue.
4. In the first Connect to Premium Capacity Utilization And Metrics window, fill in the fields
according to the table below:
5. Select Next .
6. In the second Connect to Premium Capacity Utilization And Metrics window, fill in the following
fields:
Authentication method - Select your authentication method. The default authentication method
is OAuth2.
Privacy level setting for this data source - Select Organizational to enable app access to all
the data sources in your organization.
NOTE
ExtensionDataSourceKind and ExtensionDataSourcePath are internal fields related to the app's connector. Do not
change the values of these fields.
9. After you configure the app, it may take a few minutes for the app to get your data. If you run the app
and it doesn't display any data, refresh the app. This behavior happens only when you open the app for
the first time.
Next steps
Use the gen2 metrics app
Use the Gen2 metrics app
5/23/2022 • 17 minutes to read
The Power BI Premium utilization and metrics app is designed to provide monitoring capabilities for Power BI
Gen2 Premium capacities. Monitoring your capacities is essential for making informed decisions on how to best
use your Premium capacity resources. For example, the app can help identify when to scale up your capacity or
when to turn on autoscale.
NOTE
When turning on autoscale, make sure there are no Azure policies preventing autoscale from working.
The app is updated often with new features and functionalities and provides the most in-depth information into
how your capacities are performing.
To install the Gen2 metrics app, you must be a capacity admin. Once installed, anyone in the organization with
the right permissions can view the app.
The Gen2 metrics app has five pages:
Overview
Evidence
Refresh
Timepoint
Artifact Detail
Overview
This page provides an overview of the capacity performance. It's divided into the three sections listed below.
At the top of each page, the CapacityID field allows you to select the capacity the app shows results for.
Artifacts
The artifacts section is made up of two visuals, one on top of the other, on the left side of the page. The top visual
is a stacked column chart, and below it is a matrix table.
The Multi metric column chart displays the values listed below, showing the top results per
Power BI item during the past two weeks.
Artifacts - A list of Power BI items active during the selected period of time. The item name is a string
with the syntax: item name \ item type \ workspace name . You can expand each entry to show the various
operations (such as queries and refreshes) the item performed.
CPU (s) - CPU processing time in seconds. Sort to view the Power BI items that consumed the most CPU
over the past two weeks.
Duration (s) - Processing time in seconds. Sort to view the Power BI items that needed the longest
processing time during the past two weeks.
Users - The number of users that used the Power BI item.
Artifact Size - The amount of memory a Power BI item needs. Sort to view the Power BI items that have
the largest memory footprint.
Overloaded minutes - Displays a sum of 30-second increments where overloading occurred at least
once. Sort to view the Power BI items that were most affected by the overload penalty.
Performance delta - Displays the performance effect on Power BI items. The number represents the
percent of change from seven days ago. For example, 20 suggests that there's a 20% improvement today,
compared with the same metric taken a week ago.
To create the performance delta, Power BI calculates an hourly average for all the fast operations that take
under 200 milliseconds to complete. The hourly value is used as a slow-moving average over the last
seven days (168 hours). The slow-moving average is then compared to the average between the most
recent data point and a data point from seven days ago. The performance delta indicates the difference
between these two averages (a sketch of the computation follows at the end of this section).
You can use the performance delta value to assess whether the average performance of your Power BI
items improved or worsened over the past week. The higher the value is, the better the performance is
likely to be. A value close to zero indicates that not much has changed, and a negative value suggests that
the average performance of your Power BI items got worse over the past week.
Sorting the matrix by the performance delta column helps identify datasets that have had the biggest
change in their performance. During your investigation, don't forget to consider the CPU (s) and number
of Users. The performance delta value is a good indicator when it comes to Power BI items that have a
high CPU utilization because they're heavily used or run many operations. However, small datasets with
little CPU activity may not reflect a true picture, as they can easily show large positive or negative values.
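As promised above, a rough sketch of the performance delta computation (illustrative; the app's exact smoothing may differ):

# $hourlyAvgMs holds 169 hourly averages (oldest first) of fast-operation durations.
$hourlyAvgMs = 1..169 | ForEach-Object { 150 + (Get-Random -Minimum -20 -Maximum 20) }

# Slow-moving average over the last seven days (168 hours).
$slowMovingAvg = ($hourlyAvgMs[-168..-1] | Measure-Object -Average).Average

# Average of the most recent data point and the data point from seven days ago.
$recentAvg = ($hourlyAvgMs[-1] + $hourlyAvgMs[-169]) / 2

# Positive => performance improved versus the weekly baseline; negative => it worsened.
$performanceDelta = 100 * ($slowMovingAvg - $recentAvg) / $slowMovingAvg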
Performance
The performance section is made up of four visuals, one on top of the other, in the middle of the page.
NOTE
Peak is calculated as the highest number of seconds from both interactive and background operations.
To access the Timepoint page from this visual, right-click an overloaded timepoint, select Drill through and
then select TimePoint Detail .
CPU
Displays the total CPU power your capacity consumed over the past four weeks. Each data point is the
aggregated sum of CPU used for the past seven days.
Active Artifacts
Displays the number of Power BI items (such as reports, dashboards, and datasets) that used CPU during the
past four weeks.
Active Users
Displays the number of users that used the capacity during the past four weeks.
Cores
Displays the number of cores used by the capacity in the past four weeks. Each data point is the maximum
capacity size reported during that week. If your capacity used autoscaling or scaled up to a bigger size, the visual
will show the increase.
Evidence
This page provides information about overloads in your capacity. You can use it to establish which Power BI
items (such as reports, dashboards, and datasets) cause overload, and which items are affected by this overload.
NOTE
This page only displays data when the capacity is overloaded.
When you detect a Power BI item that causes overload, you can either optimize that item to reduce its impact on
the capacity, or you can scale up the capacity.
Artifacts causing overloading
You can visually identify the different Power BI items that cause overload, by using the timeline. Each day in the
timeline displays items causing overload. Drill down to see an hourly timeline. The value shown is an aggregate
of the CPU power consumed by artifacts when they overloaded the capacity.
Overloaders
Use this visual to identify the Power BI items that generate impactful overload events. This is shown as an
Overloading score when you select the Overloaders pivot. The overloading score for an artifact is derived from
the severity of an overload event and how frequently the overload event occurred over the past 14 days. The
score itself has no physical unit.
Switch to the Overloaded artifacts pivot to identify the items most affected by overload over the past 14 days.
The overloading impact can affect either the item that's causing the overload, or other items that are hosted in
the same capacity.
The Overloaded time (s) value is the amount of processing time that was impacted by an overload penalty. This
value is shown for each affected item, over the past 14 days.
Overloading windows
Use this visual to understand whether overload or autoscale events happen due to a single Power BI item, or
many items. Each Power BI item is given a different color.
Each column represents a 30 second window where CPU usage for the capacity exceeded allowance. The height
of the column represents the amount of CPU used.
The 30 second CPU allowance is determined by the number of v-cores your capacity has. When autoscale is
turned on, each added autoscale CPU adds 15 seconds to the allowance. When autoscale isn't turned on, or if
autoscale is fully utilized, penalties are applied to interactive operations in the next 30 second window. You can
see a visualization of these penalties in the Artifacts overloaded (seconds) chart.
To access the Timepoint page from this visual, right-click an overloaded timepoint, select Drill through and
then select TimePoint Detail .
Refresh
This page is designed to help you identify aspects of refresh performance, such as refresh CPU
consumption.
NOTE
You can get to a version of this page, dedicated to a specific Power BI item, using the drill through feature in one of the
visuals that displays individual items. The visuals in the drill through version of the page are identical to the ones listed
below. However, they only display information for the item you're drilling into.
At the top of the page there's a multi-selection pivot with the filters listed below. Each of these pivots filters all
the visuals on the Refresh page.
Artifact Kind - Filter the page by Power BI item type, such as report, dataset, and dashboard.
Status - Filter the page by failed or successful operations.
Metric - Filter the page by one of the following:
CPU - CPU consumption
Duration - Operation processing time
Operations - Number of operations
Operation - Filter according to the type of operation selected.
Refresh by artifact
Displays the breakdown of the metric selected in the pivot at the top, in the past 14 days. These breakdowns can
indicate which refresh optimization is more likely to reduce the capacity footprint or the data source load.
When you select CPU, you can identify whether to reduce the capacity footprint.
When you select Duration, you can identify which data source load to reduce.
Duration
Each column represents the number of seconds it took to complete a single operation per hour, over a 14-day
period.
CPU
Each column represents the number of CPU seconds used to complete a single operation per hour, over a 14-day
period.
Operations
Each column represents the number of operations performed per hour, over a 14-day period.
Refresh detail
A matrix table that describes all the metadata for each individual refresh operation that took place. Selecting a
cell in the visual will filter the matrix to show specific events.
Scheduled and manual refresh workflows can trigger multiple internal operations in the backend service. For
example, refreshes sometimes perform automatic retries if a temporary error occurred. These operations might
be recorded in the app using different activity IDs. Each activity ID is represented as a row in the table. When
reviewing the table, take into consideration that several rows may represent operations belonging to a single activity.
The table has a Ratio column describing the ratio between CPU time and processing time. A low ratio suggests
data source inefficiencies, where Power BI service is spending more time waiting for the data source, and less
time processing the refresh.
Refresh operations
On the right side of the refresh page, there are two visuals designed to help you identify patterns.
Timeline - Displays the number of operations per day, for the past 14 days.
Score card - Displays the total number of performed operations.
Timepoint
This page provides a detailed view of every operation that resulted in CPU activity in a given timepoint. Use this
page to understand which interactive and background operations contributed the most to CPU usage.
IMPORTANT
You can only get to this page by using the drill through feature in an overloaded timepoint in one of these visuals:
CPU over time in the Overview page
Overloading windows in the Evidence page
When the total combined CPU for interactive and background operations exceeds the 30-second timepoint
allowance, the capacity is overloaded and, depending on whether autoscale is enabled, throttling is
applied.
Autoscale is enabled - If the capacity has autoscale enabled, a new v-core will get added for the next 24
hours and will be shown as an increased value in the CPU Limit line in the CPU over time chart.
NOTE
When autoscale is enabled, if the capacity reaches the maximum number of v-cores allowed by the autoscale
operation, throttling is applied.
Autoscale isn't enabled - If autoscale isn't enabled, throttling gets applied to every interactive
operation in the subsequent timepoint.
Top row visuals
This section describes the operations of the visuals in the top row of the timepoint page.
Top left card - Displays the timepoint used to drill through to this page.
Heartbeat line chart - Shows a 60-minute window of CPU activity. Use this visual to establish the
duration of peaks and troughs.
Vertical red line - The timepoint you currently drilled to view. The visual shows the 30 minutes of
CPU activity leading to the selected timepoint, as well as the 30 minutes of CPU activity after the
selected timepoint.
Blue line - Total CPUs.
Yellow line - The capacity allowance.
NOTE
If the blue line is above the yellow line the capacity is overloaded.
Interactive operations card - Displays the total number of interactive operations that contributed to
the CPU's activity during this timepoint.
Background operations card - Displays the total number of background operations that contributed
to the CPU's activity during this timepoint.
SKU card - Displays the current SKU.
Capacity CPU card - Displays the total number of CPU seconds allowed for this capacity, for a given 30
second timepoint window.
Interactive Operations
A table showing every interactive operation that contributed CPU usage in the timepoint used to drill through to
this page. Once an interactive operation completes, all of the CPU seconds used by it get attributed to the
timepoint window.
Artifact - The name of the Power BI item, its type, and its workspace details.
Operation - The type of interactive operation.
Start - The time the interactive operation began.
End - The time the interactive operation finished.
Status - An indication showing if the operation succeeded or failed.
NOTE
CPU usage for failed operations is counted when determining if the capacity is in overload.
User - The name of the user that triggered the interactive operation.
Duration - The number of seconds the interactive operation took to complete.
Total CPU - The number of CPU seconds used by the interactive operation. This metric contributes to
determining whether the capacity exceeds the total number of CPU seconds allowed for the capacity.
Timepoint CPU - The number of CPU seconds assigned to the interactive operation in the current
timepoint.
Throttling - The number of seconds of throttling applied to this interactive operation because of the
capacity being overloaded in the previous timepoint.
% Of Capacity - Interactive CPU operations as a proportion of the overall capacity allowance.
Background Operations
A table showing every background operation that contributed CPU usage to the timepoint window used to drill
through to this page. Every background operation that completed in the prior 24 hours (equivalent to 2,880
thirty-second timepoint windows) contributes a small portion of its total usage to the CPU value. This means
that a background operation that completed the previous day can contribute some CPU activity toward
determining whether the capacity is in overload.
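For example, assuming the usage is spread evenly, a background operation that consumed 2,880 CPU seconds
would contribute one CPU second to each of the 2,880 thirty-second timepoint windows in the 24 hours after it
completed.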
The columns in the background operations table are similar to the ones in the interactive operations table.
However, the background operations table doesn't have a User column.
Artifact Detail
This page provides useful information about a specific Power BI item.
IMPORTANT
You can only get to this page by using the drill through feature in one of the visuals that displays individual Power BI
items.
NOTE
Some of the visuals in the Artifact Detail page may not display information. A visual will not show anything when it's
designed to display an event that hasn't occurred.
You can tell which Power BI item you're reviewing by looking at the card at the top left side of the report.
The syntax of this card is workspace \ Power BI item type \ Power BI item name .
Overloading
The overloading visual displays time slots where overloading occurred involving the Power BI item you're
drilling into.
The overloading visual has the following columns:
Date - The date the item was in overload.
Overloaded mins - Summed 30 second windows where at least one overload event took place.
Overload time % - The number of overloaded seconds divided by the duration of interactive operations
that took place.
Performance
Displays the percentage of fast, moderate, and slow operations from the total number of operations performed
by the Power BI item you're drilling into, over the past two weeks.
Fast - The moving average of fast operations as a percentage of all the operations over time. A fast
operation takes less than 100 milliseconds.
Moderate - The moving average of moderate operations as a percentage of all the operations over time.
A moderate operation takes between 100 milliseconds and two seconds.
Slow - The moving average of slow operations as a percentage of all the operations over time. A slow
operation takes over two seconds.
Artifact size
This visual displays the peak amount of memory detected in any three hour window, over a 14 day period, for
the item you're drilling into. You can cross filter this visual from the matrix by artifact and operation visual, to
show a peak memory profile for an individual day.
CPU duration and users
Use these visuals to review CPU consumption, operation duration and number of users for the item you're
drilling into. In these visuals, each column represents a single hour over a 14 day period.
CPU - Each column displays the number of CPU seconds used to complete each operation per hour.
Duration - Each column displays the number of seconds used to complete each operation per hour.
Users - Each column displays the number of active users per hour.
Next steps
Install the Gen2 metrics app
Backup and restore datasets with Power BI Premium
5/23/2022 • 5 minutes to read
You can use the Backup and Restore feature with Power BI datasets if you have a Power BI Premium or
Premium Per User (PPU) license, similar to the backup and restore operations available in tabular models for
Azure Analysis Services (Azure AS).
You can use SQL Server Management Studio (SSMS), Analysis Services cmdlets for PowerShell, and other tools
to perform backup and restore operations in Power BI using XMLA endpoints. The following sections describe
backup and restore concepts for Power BI datasets, certain requirements, and other considerations.
The ability to backup and restore Power BI datasets provides a migration path from Azure Analysis Services
workloads to Power BI Premium. It also enables dataset backups for multiple reasons, including corruption or
loss, data retention requirements, and tenant movement, among others.
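For example, a restore can be scripted as a TMSL restore command, such as the following, and executed
through the XMLA endpoint: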
{
"restore": {
"database": "DB",
"file": "/Backup.abf",
"allowOverwrite": true,
"security": "copyAll",
"ignoreIncompatibilities": true
}
}
Next steps
What is Power BI Premium?
SQL Server Management Studio (SSMS)
Analysis Services cmdlets for PowerShell
Dataset connectivity with the XMLA endpoint
Using Autoscale with Power BI Premium
Power BI Premium FAQ
Power BI Premium Per User FAQ
Add or change Azure subscription administrators
Configuring tenant and workspace storage
More questions? Try asking the Power BI Community
Using Autoscale with Power BI Premium
5/23/2022 • 3 minutes to read
Power BI Premium offers scale and performance for Power BI content in your organization. With Power BI
Premium Gen2, many improvements are introduced, including enhanced performance, greater scale, and improved
metrics. In addition, Premium Gen2 enables customers to automatically add compute capacity to avoid
slowdowns under heavy use, using Autoscale .
Autoscale uses an Azure subscription to automatically use more v-cores (virtual CPU cores) when the
computing load on your Power BI Premium subscription would otherwise be slowed by its capacity. This article
describes the steps necessary to get Autoscale working for your Power BI Premium subscription. Autoscale only
works with Power BI Premium Gen2.
To enable Autoscale, the following steps need to be completed:
1. Configure an Azure subscription to use with Autoscale.
2. Enable Autoscale in the Power BI Admin portal.
The following sections describe the steps in detail.
NOTE
Autoscale isn’t available for Microsoft 365 Government Community Cloud (GCC), due to the use of the commercial
Azure cloud.
Embedded Gen2 does not provide an out-of-the-box vertical autoscale feature. To learn about alternative autoscale
options for Embedded Gen2, see Autoscaling in Embedded Gen2.
Configure an Azure subscription to use with Autoscale
1. Sign in to the Azure portal, then search for and select Subscriptions .
2. From the Subscriptions page, select the subscription you want to work with autoscale.
3. From the Settings selections for your selected subscription, select Resource groups .
4. Select Create to create a resource group to use with Autoscale.
5. Name your resource group and select Review + create . In the following image, the resource group is
called powerBIPremiumAutoscaleCores. You can name your resource group whatever you prefer. Just
remember the name of the subscription, and the name of your resource group, since you'll need to select
it again when you configure Autoscale in the Power BI Admin Portal.
6. Azure validates the information. After the validation process completes successfully, select Create . Once
created, you receive a notification in the upper-right corner of the Azure portal.
Enable Autoscale in the Power BI Admin portal
Once you've selected the Azure subscription to use with Autoscale, and created a resource group as described in
the previous section, you're ready to enable Autoscale and associate it with the resource group you created. The
person configuring Autoscale must be at least a contributor for the Azure subscription to successfully complete
these steps. You can learn more about assigning a user to a contributor role for an Azure subscription.
NOTE
After creating the subscription and enabling Autoscale in the admin portal, a
Microsoft.PowerBIDedicated/autoScaleVCores resource is created. Make sure that you don't have any Azure policies
that prevent Power BI Premium from provisioning, updating or deleting the
Microsoft.PowerBIDedicated/autoScaleVCores resource.
The following steps show you how to enable and associate Autoscale with the resource group.
1. Open the Power BI Admin portal and select Capacity settings from the left pane. Information about
your Power BI Premium capacity is displayed.
2. Autoscale only works with Power BI Premium Gen2. Enabling Gen2 is easy: just move the slider to
Enabled in the Premium Generation 2 box.
3. Select the Manage auto-scale button to enable and configure Autoscale , and the Auto-scale
settings pane appears. Select Enable auto scale .
4. You can then select the Azure subscription to use with Autoscale. Only subscriptions available to the
current user are displayed, which is why you must be at least a contributor for the subscription. Once
your subscription is selected, select the Resource group you created in the previous section, from the
list of resource groups available to the subscription.
5. Next, assign the maximum number of v-cores to use for Autoscale, and then select Save to save your
settings. Power BI applies your changes, then closes the pane and returns the view to Capacity settings ,
where you can see your settings have been applied. In the following image, a maximum of
two v-cores was configured for Autoscale.
Here's a short video that shows how quickly you can configure Autoscale for Power BI Premium Gen2:
Next steps
What is Power BI Premium?
Power BI Premium FAQ
Power BI Premium Per User FAQ
Add or change Azure subscription administrators
Configure workloads in a Premium capacity
5/23/2022 • 13 minutes to read
This article lists the workloads for Power BI Premium, and describes their capacities. Use the Gen2 and Gen1
tabs to review the differences between workloads for these Premium offerings.
IMPORTANT
Premium Gen1, also known as the original version of Premium, is being deprecated. If you're still using Premium Gen1,
you need to migrate your Power BI content to Premium Gen2. For more information, see Plan your transition to Power BI
Premium Gen2.
NOTE
Workloads can be enabled and assigned to a capacity by using the Capacities REST APIs.
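As a sketch of that approach, the Capacities REST API exposes a Patch Workload operation; the capacity ID,
workload name, and property value below are illustrative:

PATCH https://api.powerbi.com/v1.0/myorg/capacities/{capacityId}/Workloads/{workloadName}

{
  "state": "Enabled",
  "maxMemoryPercentageSetByUser": "40"
}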
Supported workloads
Gen2
Gen1
Query workloads are optimized for and limited by resources determined by your Premium capacity SKU.
Premium capacities also support additional workloads that can use your capacity's resources.
The list of workloads below describes which Premium Gen2 SKUs support each workload:
AI - All SKUs are supported apart from the EM1/A1 SKUs
Datasets - All SKUs are supported
Dataflows - All SKUs are supported
Paginated reports - All SKUs are supported
Configure workloads
You can tune the behavior of the workloads by configuring workload settings for your capacity.
Gen2
Gen1
IMPORTANT
All workloads are always enabled and cannot be disabled. Your capacity resources are managed by Power BI according to
your capacity usage.
Use the Power BI Premium utilization and metrics app to monitor your capacity's activity.
IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. For more information, see capacity and reliability notifications.
AI (Preview)
The AI workload lets you use cognitive services and Automated Machine Learning in Power BI. Use the following
settings to control workload behavior.
Allow usage from Power BI Desktop - This setting is reserved for future use and doesn't appear in all tenants.
Allow building machine learning models - Specifies whether business analysts can train, validate, and invoke
machine learning models directly in Power BI. For more information, see Automated Machine Learning in
Power BI (Preview).
Enable parallelism for AI requests - Specifies whether AI requests can run in parallel.
1 Premium Gen2 doesn't require memory settings to be changed. Memory in Premium Gen2 is automatically
managed by the underlying system.
Datasets
Use the settings below to control workload behavior. There's additional usage information for some of the
settings.
NOTE
In Premium Gen1, the datasets workload is enabled by default and cannot be disabled.
Max Memory (%) 1 - The maximum percentage of available memory that datasets can use in a capacity.
XMLA Endpoint - Specifies that connections from client applications honor the security group membership set
at the workspace and app levels. For more information, see Connect to datasets with client applications and
tools.
Max Intermediate Row Set Count - The maximum number of intermediate rows returned by DirectQuery. The
default value is 1000000, and the allowable range is between 100000 and 2147483646. The upper limit may
need to be further constrained based on what the datasource supports.
Max Offline Dataset Size (GB) - The maximum size of the offline dataset in memory. This is the compressed size
on disk. The default value is 0, which is the highest limit defined by SKU. The allowable range is between 0 and
the capacity size limit.
Max Result Row Set Count - The maximum number of rows returned in a DAX query. The default value is -1 (no
limit), and the allowable range is between 100000 and 2147483647.
Query Memory Limit (%) - The maximum percentage of available memory in the workload that can be used for
executing an MDX or DAX query. The default value is 0, which results in the SKU-specific automatic query
memory limit being applied.
Query Timeout (seconds) - The maximum amount of time before a query times out. The default is 3600 seconds
(1 hour). A value of 0 specifies that queries won't time out.
Automatic page refresh - On/Off toggle to allow premium workspaces to have reports with automatic page
refresh based on fixed intervals.
Minimum refresh interval - If automatic page refresh is on, the minimum interval allowed for the page refresh
interval. The default value is five minutes, and the minimum allowed is one second.
Change detection measure - On/Off toggle to allow premium workspaces to have reports with automatic page
refresh based on change detection.
Minimum execution interval - If change detection measure is on, the minimum execution interval allowed to
poll for data changes. The default value is five seconds, and the minimum allowed is one second.
1 Premium Gen2 doesn't require memory settings to be changed. Memory in Premium Gen2 is automatically
managed by the underlying system.
Max Intermediate Row Set Count
Use this setting to control the impact of resource-intensive or poorly designed reports. When a query to a
DirectQuery dataset results in a very large result from the source database, it can cause a spike in memory
usage and processing overhead. This situation can lead to other users and reports running low on resources.
This setting allows the capacity administrator to adjust how many rows an individual query can fetch from the
data source.
Alternatively, if the capacity can support more than the one million row default, and you have a large dataset,
increase this setting to fetch more rows.
This setting affects only DirectQuery queries, whereas Max Result Row Set Count affects DAX queries.
Max Offline Dataset Size
Use this setting to prevent report creators from publishing a large dataset that could negatively impact the
capacity. Power BI can't determine actual in-memory size until the dataset is loaded into memory. It's possible
that a dataset with a smaller offline size can have a larger memory footprint than a dataset with a larger offline
size.
If you have an existing dataset that is larger than the size you specify for this setting, the dataset will fail to load
when a user tries to access it. The dataset can also fail to load if it's larger than the Max Memory configured for
the datasets workload.
This setting is applicable for models in both small dataset storage format (ABF format) and large dataset storage
format (PremiumFiles), although the offline size of the same model might differ when stored in one format vs
another. For more information, see Large models in Power BI Premium.
To safeguard the performance of the system, an additional SKU-specific hard ceiling for max offline dataset size
is applied, regardless of the configured value. The additional SKU-specific hard ceiling below does not apply to
Power BI datasets stored in large dataset storage format.
Hard ceiling for Max Offline Dataset Size by SKU:
EM1/A1 - 3 GB
EM2/A2 - 5 GB
EM3/A3 - 6 GB
P1/A4 - 10 GB
P2/A5 - 10 GB
P3/A6 - 10 GB
P4/A7 - 10 GB
P5/A8 - 10 GB
Max Result Row Set Count
This setting affects only DAX queries, whereas Max Intermediate Row Set Count affects DirectQuery queries.
Query Memory Limit
Use this setting to control the impact of resource-intensive or poorly designed reports. Some queries and
calculations can result in intermediate results that use a lot of memory on the capacity. This situation can cause
other queries to execute very slowly, cause eviction of other datasets from the capacity, and lead to out of
memory errors for other users of the capacity.
This setting applies to all DAX and MDX queries that are executed by Power BI reports, Analyze in Excel reports,
as well as other tools that might connect over the XMLA endpoint.
Data refresh operations may also execute DAX queries as part of refreshing the dashboard tiles and visual
caches after the data in the dataset has been refreshed. Such queries may also potentially fail because of this
setting, and this could lead to the data refresh operation being shown in a failed state, even though the data in
the dataset was successfully updated.
The default setting is 0, which results in the following SKU-specific automatic query memory limit being applied.
EM1/A1 - 1 GB
EM2/A2 - 2 GB
EM3/A3 - 2 GB
P1/A4 - 6 GB
P2/A5 - 6 GB
P3/A6 - 10 GB
P4/A7 - 10 GB
P5/A8 - 10 GB
To safeguard the performance of the system, a hard ceiling of 10 GB is enforced for all queries executed by
Power BI reports, regardless of the query memory limit configured by the user. This hard ceiling doesn't apply to
queries issued by tools that use the Analysis Services protocol (also known as XMLA). Users should consider
simplifying the query or its calculations if the query is too memory intensive.
Query Timeout
Use this setting to maintain better control of long-running queries, which can cause reports to load slowly for
users.
This setting applies to all DAX and MDX queries that are executed by Power BI reports, Analyze in Excel reports,
as well as other tools that might connect over the XMLA endpoint.
Data refresh operations may also execute DAX queries as part of refreshing the dashboard tiles and visual
caches after the data in the dataset has been refreshed. Such queries may also potentially fail because of this
setting, and this could lead to the data refresh operation being shown in a failed state, even though the data in
the dataset was successfully updated.
This setting applies to a single query and not the length of time it takes to run all of the queries associated with
updating a dataset or report. Consider the following example:
The Query Timeout setting is 1200 (20 minutes).
There are five queries to execute, and each runs 15 minutes.
The combined time for all queries is 75 minutes, but the setting limit isn't reached because all of the individual
queries run for less than 20 minutes.
Note that Power BI reports override this default with a much smaller timeout for each query to the capacity. The
timeout for each query is typically about three minutes.
Automatic page refresh
When enabled, automatic page refresh allows users in your Premium capacity to refresh pages in their report at
a defined interval, for DirectQuery sources. As a capacity admin, you can do the following:
Turn automatic page refresh on and off
Define a minimum refresh interval
To find the automatic page refresh setting:
1. In the Power BI Admin portal, select Capacity settings .
2. Select your capacity, and then scroll down and expand the Workloads menu.
3. Scroll down to the Datasets section.
Queries created by automatic page refresh go directly to the data source, so it's important to consider reliability
and load on those sources when allowing automatic page refresh in your organization.
Dataflows
The dataflows workload lets you use dataflows self-service data prep, to ingest, transform, integrate, and enrich
data. Use the following settings to control workload behavior.
Enhanced Dataflows Compute Engine (Preview) - Enable this option for up to 20x faster calculation of computed
entities when working with large-scale data volumes. You must restart the capacity to activate the new
engine. For more information, see Enhanced dataflows compute engine.
Container Size - The maximum size of the container that dataflows use for each entity in the dataflow. The
default value is 700 MB. For more information, see Container size.
1 Premium Gen2 doesn't require memory settings to be changed. Memory in Premium Gen2 is automatically
managed by the underlying system.
Enhanced dataflows compute engine
To benefit from the new compute engine, split ingestion of data into separate dataflows and put transformation
logic into computed entities in different dataflows. This approach is recommended because the compute engine
works on dataflows that reference an existing dataflow. It doesn't work on ingestion dataflows. Following this
guidance ensures that the new compute engine handles transformation steps, such as joins and merges, for
optimal performance.
Container size
When refreshing a dataflow, the dataflow workload spawns a container for each entity in the dataflow. Each
container can take memory up to the volume specified in the Container Size setting. The default for all SKUs is
700 MB. You might want to change this setting if:
Dataflows take too long to refresh, or dataflow refresh fails on a timeout.
Dataflow entities include computation steps, for example, a join.
It's recommended you use the Power BI Premium Capacity Metrics app to analyze Dataflow workload
performance.
In some cases, increasing container size may not improve performance. For example, if the dataflow is getting
data only from a source without performing significant calculations, changing container size probably won't
help. Increasing container size might help if it enables the Dataflow workload to allocate more memory for
entity refresh operations. By having more memory allocated, it can reduce the time it takes to refresh heavily
computed entities.
The Container Size value can't exceed the maximum memory for the Dataflows workload. For example, a P1
capacity has 25 GB of memory. If the Dataflow workload Max Memory (%) is set to 20%, Container Size (MB)
can't exceed 5000. In all cases, the Container Size can't exceed the Max Memory, even if you set a higher value.
Paginated reports
The paginated reports workload lets you run paginated reports, based on the standard SQL Server Reporting
Services format, in the Power BI service.
Paginated reports offer the same capabilities that SQL Server Reporting Services (SSRS) reports do today,
including the ability for report authors to add custom code. This allows authors to dynamically change reports,
such as changing text colors based on code expressions.
Gen2
Gen1
Next steps
Power BI Premium Generation 2
Optimizing Power BI Premium capacities
Self-service data prep in Power BI with Dataflows
What are paginated reports in Power BI Premium?
Automatic page refresh in Power BI Desktop (preview)
Monitor capacities in the Admin portal
5/23/2022 • 4 minutes to read
The Health tab in the Capacity settings area in the Admin portal provides a metrics summary about your
capacity and enabled workloads.
NOTE
This article refers to monitoring Premium (original version) capacities. To monitor Premium Gen2 capacities, install and use
the Power BI Premium Utilization and Metrics app.
If you need more comprehensive metrics, use the Power BI Premium Capacity Metrics app. The app provides
drill-down and filtering capabilities, and the most detailed metrics for nearly every aspect affecting capacity
performance. To learn more, see Monitor Premium capacities with the app.
IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. See capacity and reliability notifications for more information.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
NOTE
The Admin portal cannot be used to monitor Premium Per User (PPU) activities or capacity.
System Metrics
On the Health tab, at the highest level, CPU utilization and memory usage provide a quick view of the most
important metrics for the capacity. These metrics are cumulative, including all enabled workloads for the
capacity.
Workload metrics
For each workload enabled for the capacity, CPU utilization and memory usage are shown.
Dataflows
Dataflow Operations
Average Duration (min) - The average duration of refresh for the dataflow, in minutes.
Max Duration (min) - The duration of the longest-running refresh for the dataflow, in minutes.
Average Wait Time (min) - The average lag between the scheduled time and start of a refresh for the dataflow,
in minutes.
Max Wait Time (min) - The maximum wait time for the dataflow, in minutes.
Datasets
Refresh
Average Duration (min) - The average duration of refresh for the dataset, in minutes.
Max Duration (min) - The duration of the longest-running refresh for the dataset, in minutes.
Average Wait Time (min) - The average lag between the scheduled time and start of a refresh for the dataset,
in minutes.
Max Wait Time (min) - The maximum wait time for the dataset, in minutes.
Query
Total Count - The total number of queries run for the dataset.
Average Duration (ms) - The average query duration for the dataset, in milliseconds.
Max Duration (ms) - The duration of the longest-running query in the dataset, in milliseconds.
Average Wait Time (ms) - The average query wait time for the dataset, in milliseconds.
Max Wait Time (ms) - The duration of the longest-waiting query in the dataset, in milliseconds.
Eviction
Model Count - The total number of dataset evictions for this capacity. When a capacity faces memory pressure,
the node evicts one or more datasets from memory. Datasets that are inactive (with no query/refresh operation
currently executing) are evicted first. Then the eviction order is based on a measure of 'least recently used'
(LRU).
Paginated Reports
Report Execution
Execution Count - The number of times the report has been executed and viewed by users.
Report Usage
Success Count - The number of times the report has been viewed successfully by a user.
Failure Count - The number of times report viewing by a user has failed.
Data Retrieval Duration (ms) - The average amount of time it takes to retrieve data for the report, in
milliseconds. Long durations can indicate slow queries or other data source issues.
Processing Duration (ms) - The average amount of time it takes to process the data for a report, in
milliseconds.
Rendering Duration (ms) - The average amount of time it takes to render a report in the browser, in
milliseconds.
NOTE
Detailed metrics for the AI workload are not yet available.
Next steps
Now that you understand how to monitor Power BI Premium capacities, learn more about optimizing capacities.
Optimizing Power BI Premium capacities
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the
following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Large datasets in Power BI Premium
5/23/2022 • 6 minutes to read
Power BI datasets can store data in a highly compressed in-memory cache for optimized query performance,
enabling fast user interactivity. With Premium capacities, large datasets beyond the default limit can be enabled
with the Large dataset storage format setting. When enabled, dataset size is limited by the Premium capacity
size or the maximum size set by the administrator.
Large datasets can be enabled for all Premium P SKUs, Embedded A SKUs, and with Premium Per User (PPU).
The large dataset size limit in Premium is comparable to Azure Analysis Services, in terms of data model size
limitations.
While required for datasets to grow beyond 10 GB, enabling the Large dataset storage format setting has other
benefits. If you're planning to use XMLA endpoint-based tools for dataset write operations, be sure to enable the
setting, even for datasets that you wouldn't necessarily characterize as a large dataset. When enabled, the large
dataset storage format can improve XMLA write operations performance.
Large datasets in the service do not affect the Power BI Desktop model upload size, which is still limited to 10
GB. Instead, datasets can grow beyond that limit in the service on refresh.
IMPORTANT
Power BI Premium does support large datasets. Enable the Large dataset storage format option to use datasets in
Power BI Premium that are larger than the default limit.
4. Invoke a refresh to load historical data based on the incremental refresh policy. The first refresh could
take a while to load the history. Subsequent refreshes should be faster, depending on your incremental
refresh policy.
3. Run the following cmdlets to sign in and check the dataset storage mode.
Login-PowerBIServiceAccount
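# A sketch of the storage-mode check, assuming the MicrosoftPowerBIMgmt.Data
# module; the dataset name is illustrative, and the StorageMode property is
# assumed to match the response columns shown below.
Get-PowerBIDataset -Scope Organization |
    Where-Object { $_.Name -eq "Sales" } |
    Select-Object Id, StorageMode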
The response should be the following. The storage mode is ABF (Analysis Services backup file), which is
the default.
Id StorageMode
-- -----------
<dataset ID> Abf
4. Run the following cmdlets to set the storage mode. It can take a few seconds to convert to Premium Files.
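A minimal sketch, assuming the Set-PowerBIDataset cmdlet from the MicrosoftPowerBIMgmt.Data module:

# Convert the dataset to Premium Files storage (the dataset ID is illustrative)
Set-PowerBIDataset -Id "ca12f9e0-4e36-47b5-92f2-ab4b0a0b997c" -TargetStorageMode PremiumFiles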
The response should be the following. The storage mode is now set to Premium Files.
Id StorageMode
-- -----------
<dataset ID> PremiumFiles
You can check the status of dataset conversions to and from Premium Files by using the
Get-PowerBIWorkspaceMigrationStatus cmdlet.
Dataset eviction
Power BI uses dynamic memory management to evict inactive datasets from memory. Power BI evicts datasets
so it can load other datasets to address user queries. Dynamic memory management allows the sum of dataset
sizes to be significantly greater than the memory available on the capacity, but a single dataset must fit into
memory. For more info on dynamic memory management, see How capacities function.
You should consider the impact of eviction on large models. Despite relatively fast dataset load times, there
could still be a noticeable delay for users if they have to wait for large evicted datasets to be reloaded. For this
reason, in its current form, the large models feature is recommended primarily for capacities dedicated to
enterprise BI requirements rather than capacities mixed with self-service BI requirements. Capacities dedicated
to enterprise BI requirements are less likely to frequently trigger eviction and need to reload datasets. Capacities
for self-service BI on the other hand can have many small datasets that are more frequently loaded in and out of
memory.
On-demand load
On-demand load is enabled by default for large datasets, and can provide significantly improved report
performance. With on-demand load, you get the following benefits during subsequent queries and refreshes:
Relevant data pages are loaded on-demand (paged in to memory).
Evicted datasets are quickly made available for queries.
On-demand loading surfaces additional Dynamic Management View (DMV) information that can be used to
identify usage patterns and understand the state of your models. For example, you can check the Temperature
and Last Accessed statistics for each column in the dataset, by running the following DMV query from SQL
Server Management Studio (SSMS):
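A minimal sketch of such a query; the temperature and last-accessed statistics are assumed to surface as
columns of the DISCOVER_STORAGE_TABLE_COLUMNS DMV:

-- Per-column dictionary statistics, including temperature and last-accessed values
SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS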
You can also check the dataset size by running the following DMV queries from SSMS. Sum the
DICTIONARY_SIZE and USED_SIZE columns from the output to see the dataset size in bytes.
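A sketch of those queries, run one at a time against the dataset:

-- Sum the DICTIONARY_SIZE column from this DMV
SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS

-- Then sum the USED_SIZE column from this DMV and add the two totals
SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS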
Region availability
Large datasets in Power BI are only available in Azure regions that support Azure Premium Files Storage.
The following list provides regions where large datasets in Power BI are available. Regions not in the following
list are not supported for large models.
NOTE
Once a large dataset is created in a workspace, it must stay in that region. You cannot reassign a workspace with a large
dataset to a Premium capacity in another region.
Azure region (abbreviation):
Central US (centralus)
East US (eastus)
East US 2 (eastus2)
UK South (uksouth)
UK West (ukwest)
West US (westus)
West US 2 (westus2)
Next steps
The following links provide information that can be useful for working with large models:
Azure Premium Files Storage
Configure Multi-Geo support for Power BI Premium
Bring your own encryption keys for Power BI
How capacities function
Incremental refresh for datasets
Power BI Premium Generation 2.
Automate Premium workspace and dataset tasks
with service principals
5/23/2022 • 3 minutes to read
A service principal is an Azure Active Directory app registration you create within your tenant to perform
unattended resource and service level operations. It's a unique type of user identity with an app name,
application ID, tenant ID, and client secret or certificate for a password.
Power BI Premium uses the same service principal functionality as Power BI Embedded. To learn more, see
Embedding Power BI content with service principals.
In Power BI Premium , service principals can also be used with the XMLA endpoint to automate dataset
management tasks such as provisioning workspaces, deploying models, and dataset refresh with:
PowerShell
Azure Automation
Azure Logic Apps
Custom client applications
Only New workspaces support XMLA endpoint connections using service principals. Classic workspaces aren't
supported. A service principal has only those permissions necessary to perform tasks for workspaces to which it
is assigned. Permissions are assigned through workspace Access, much like regular UPN accounts.
To perform write operations, the capacity's Datasets workload must have the XMLA endpoint enabled for
read-write. Datasets published from Power BI Desktop should have the Enhanced metadata format feature
enabled.
Workspace access
In order for your service principal to have the necessary permissions to perform Premium workspace and
dataset operations, you must add the service principal as a workspace Member or Admin. Using Workspace
access in the Power BI service is described below, but you can also use the Add Group User REST API.
1. In the Power BI service, for a workspace, select More > Workspace access .
2. Search by application name, and add the service principal as an Admin or Member of the workspace.
PowerShell
Using SQLServer module
In the following example, AppId, TenantId, and AppSecret are used to authenticate a dataset refresh operation:
Param (
[Parameter(Mandatory=$true)] [String] $AppId,
[Parameter(Mandatory=$true)] [String] $TenantId,
[Parameter(Mandatory=$true)] [String] $AppSecret
)
$PWord = ConvertTo-SecureString -String $AppSecret -AsPlainText -Force
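# A sketch of the refresh call itself, assuming the SqlServer module's Analysis
# Services cmdlets; the workspace, dataset, and table names are illustrative.
$Credential = New-Object System.Management.Automation.PSCredential($AppId, $PWord)

Invoke-ProcessTable -Server "powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace" `
    -Database "MyDataset" -TableName "Sales" -RefreshType "Full" `
    -ServicePrincipal -ApplicationId $AppId -TenantId $TenantId -Credential $Credential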
Next steps
Dataset connectivity with the XMLA endpoint
Azure Automation
Azure Logic Apps
Power BI REST APIs
Dataset connectivity with the XMLA endpoint
5/23/2022 • 19 minutes to read
Power BI Premium, Premium Per User, and Power BI Embedded workspaces support open-platform connectivity
from Microsoft and third-party client applications and tools by using an XMLA endpoint.
Terms of Use
Use of the XMLA endpoint is subject to the following:
Single-user application - The application uses a single user account or app identity to access a Power BI
dataset through the XMLA endpoint. Typical examples are developer tools, admin scripts, and automated
processes to perform data modeling and administrative tasks, such as altering the metadata of a dataset,
performing a backup or restore operation, or triggering a data refresh. The user account or app identity that the
client application uses to access a dataset must have a valid Premium Per User (PPU) license unless the dataset
resides on a Premium capacity.
Multi-user application - The application provides multiple users with access to a Power BI dataset. For
example, a middle-tier application integrating a dataset into a business solution and accessing the dataset on
behalf of its business users.
For Premium Per User (PPU) workspaces, the application must require each user to sign in to Power BI. The
application uses each user's access token to access the datasets. The application is not permitted to use a
service account or other app identity to perform tasks on behalf of its users. Each user must use their own
Power BI account for opening reports, accessing datasets, and executing queries.
For Premium workspaces, the application may use a service account or app identity on behalf of the end
users without requiring each user to sign in to Power BI.
NOTE
To determine the primary domain name and ID of a Power BI tenant, sign into the Azure portal, select Azure Active
Directory from the main menu, and then note the information on the Azure Active Directory Overview page. For more
information, see Find the Microsoft Azure AD tenant ID and primary domain name.
NOTE
Connecting to a My Workspace by using the XMLA endpoint is currently not supported.
Connection requirements
Initial catalog
With some tools, such as SQL Server Profiler, you must specify an Initial Catalog, which is the dataset (database)
to connect to in your workspace. In the Connect to Server dialog, select Options > Connection Properties
> Connect to database , and enter the dataset name.
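For example, a connection string that specifies the Initial Catalog might look like the following; the workspace
and dataset names are illustrative:

Data Source=powerbi://api.powerbi.com/v1.0/myorg/Contoso;Initial Catalog=SalesDataset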
Duplicate workspace names
New workspaces (created using the new workspace experience) in Power BI impose validation to disallow
creating or renaming workspaces with duplicate names. Workspaces that haven't been migrated can result in
duplicate names. When connecting to a workspace with the same name as another workspace, you may get the
following error:
Cannot connect to powerbi://api.powerbi.com/v1.0/[tenant name]/[workspace name] .
To work around this error, in addition to the workspace name, specify the ObjectIDGuid, which can be copied
from the workspace objectID in the URL. Append the objectID to the connection URL. For example:
powerbi://api.powerbi.com/v1.0/myorg/Contoso Sales - 9d83d204-82a9-4b36-98f2-a40099093830 .
Duplicate dataset name
When connecting to a dataset with the same name as another dataset in the same workspace, append the
dataset guid to the dataset name. You can get both dataset name and guid when connected to the workspace in
SSMS.
Delay in datasets shown
When connecting to a workspace, changes from new, deleted, and renamed datasets can take up to a few
minutes to appear.
Unsupported datasets
The following datasets aren't accessible by using the XMLA endpoint. These datasets won't appear under the
workspace in SSMS or in other tools:
Datasets based on a live connection to an Azure Analysis Services or SQL Server Analysis Services model.
Datasets based on a live connection to a Power BI dataset in another workspace. To learn more, see Intro to
datasets across workspaces.
Datasets with Push data by using the REST API.
Datasets in My Workspace.
Excel workbook datasets.
Server/workspace alias
Server name aliases, supported in Azure Analysis Services, are not supported for Premium workspaces.
Security
In addition to the XMLA Endpoint property being enabled read-write by the capacity admin, the tenant-level
setting Allow XML A endpoints and Analyze in Excel with on-premises datasets must be enabled in the
admin portal. If you need to generate Analyze in Excel (AIXL) files that connect to the XMLA endpoint, the tenant-
level setting Allow live connections should also be enabled. These settings are both enabled by default.
Allow XML A endpoints and Analyze in Excel with on-premises datasets is an integration setting.
The following list describes the implications of these settings for XMLA and Analyze in Excel (AIXL):
Allow Live Connections disabled, Allow XMLA endpoints and Analyze in Excel with on-premises datasets
disabled - XMLA disallowed; Analyze in Excel disallowed; AIXL for on-prem datasets disallowed.
Allow Live Connections disabled, Allow XMLA endpoints and Analyze in Excel with on-premises datasets
enabled - XMLA allowed; Analyze in Excel disallowed; AIXL for on-prem datasets allowed.
Allow Live Connections enabled, Allow XMLA endpoints and Analyze in Excel with on-premises datasets
disabled - XMLA disallowed; Analyze in Excel allowed; AIXL for on-prem datasets disallowed.
Allow Live Connections enabled, Allow XMLA endpoints and Analyze in Excel with on-premises datasets
enabled - XMLA allowed; Analyze in Excel allowed; AIXL for on-prem datasets allowed.
Access through the XMLA endpoint will honor security group membership set at the workspace/app level.
Workspace contributors and above have write access to the dataset and are therefore equivalent to Analysis
Services database admins. They can deploy new datasets from Visual Studio and execute TMSL scripts in SSMS.
Operations that require Analysis Services server admin permissions (rather than database admin) such as
server-level traces and user impersonation using the EffectiveUserName connection-string property are not
supported in Premium workspaces at this time.
Other users who have Build permission on a dataset are equivalent to Analysis Services database readers. They
can connect to and browse datasets for data consumption and visualization. Row-level security (RLS) rules are
honored and they cannot see internal dataset metadata.
Model roles
With the XMLA endpoint, roles can be defined for a dataset, role membership can be defined for Azure Active
Directory (Azure AD) users, and row-level security (RLS) filters can be defined. Model roles in Power BI are used
only for RLS. Use the Power BI security model to control permissions beyond RLS.
For tabular model projects authored in Visual Studio, roles can be defined by using Role Manager in the model
designer. For datasets in Power BI, roles can be defined by using SSMS to create role objects and define role
properties. In most cases, however, role object definitions can be scripted by using TMSL to create or modify the
Roles object. TMSL scripts can be executed in SSMS or with the Invoke-ASCmd PowerShell cmdlet.
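As a sketch of such a script, the following TMSL createOrReplace defines a read role with one Azure AD member
and an RLS filter; the database, role, member, table, and filter values are all illustrative:

{
  "createOrReplace": {
    "object": { "database": "SalesDataset", "role": "SalesReader" },
    "role": {
      "name": "SalesReader",
      "modelPermission": "read",
      "members": [
        { "memberName": "user@contoso.com", "identityProvider": "AzureAD" }
      ],
      "tablePermissions": [
        { "name": "Sales", "filterExpression": "[Region] = \"West\"" }
      ]
    }
  }
}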
The following limitations apply when working with dataset roles through the XMLA endpoint:
The only permission for a role that can be set for datasets is Read permission. Other permissions are granted
using the Power BI security model.
Service Principals, which require workspace Member or Admin permissions, cannot be added to roles.
Build permission for a dataset is required for read access through the XMLA endpoint, regardless of the
existence of dataset roles.
The "Roles=" connection string property can be used to test downgrading role members with Write
permissions to Read permissions. The member account must still be a member of the relevant RLS role. This
is different than using Impersonation with SQL Server Analysis Services or Azure Analysis Services where if
the account is a server admin, the RLS role membership is assumed. For Premium workspaces, since there is
no server admin, the account must belong to a role in order for RLS to be applied.
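For example, a connection string that tests access under a role might look like the following; the workspace,
dataset, and role names are illustrative:

Data Source=powerbi://api.powerbi.com/v1.0/myorg/Contoso;Initial Catalog=SalesDataset;Roles=SalesReader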
To learn more, see Roles in tabular models.
Setting data-source credentials
Metadata specified through the XMLA endpoint can create connections to data sources, but cannot set data-
source credentials. Instead, credentials can be set in the dataset settings page in the Power BI Service.
Service principals
A service principal is an Azure Active Directory app registration you create within your tenant to perform
unattended resource and service level operations. It's a unique type of user identity with an app name,
application ID, tenant ID, and client secret or certificate for a password. Power BI Premium uses the same service
principal functionality as Power BI Embedded.
Service principals can also be used with the XMLA endpoint to automate dataset management tasks such as
provisioning workspaces, deploying models, and dataset refresh with:
PowerShell
Azure Automation
Azure Logic Apps
Custom client applications
To learn more, see Automate Premium workspace and dataset tasks with service principals.
Deploy model projects from Visual Studio (SSDT)
When the Deployment Server property has been specified, the project can then be deployed. When deployed
the first time, a dataset is created in the workspace by using metadata from the model.bim.
As part of the deployment operation, after the dataset has been created in the workspace from model metadata,
processing to load data into the dataset from data sources will fail.
Processing fails because, unlike deployment to an Azure Analysis Services or SQL Server Analysis Services
instance, where data source credentials are prompted for as part of the deployment operation, deployment to a
Premium workspace cannot specify data source credentials. Instead, after
metadata deployment has succeeded and the dataset has been created, data source credentials are then
metadata deployment has succeeded and the dataset has been created, data source credentials are then
specified in the Power BI Service in dataset settings. In the workspace, select Datasets > Settings > Data
source credentials > Edit credentials .
When data source credentials are specified, you can then refresh the dataset in the Power BI service, configure
schedule refresh, or process (refresh) from SQL Server Management Studio to load data into the dataset.
The deployment Processing Option property specified in the project in Visual Studio is observed. However, if a
data source has not yet had credentials specified in the Power BI service, processing will fail even if the
metadata deployment succeeds. You can set the property to Do Not Process , preventing an attempt to process
as part of the deployment. You might then want to set the property back to Default : once the data source
credentials are specified in the data source settings for the new dataset, processing as part of subsequent
deployment operations will succeed.
To learn more about using SSMS to script metadata, see Create Analysis Services scripts and Tabular Model
Scripting Language (TMSL).
Dataset refresh
The XMLA endpoint enables a wide range of scenarios for fine-grain refresh capabilities using SSMS,
automation with PowerShell, Azure Automation, and Azure Functions using TOM. You can, for example, refresh
certain incremental refresh historical partitions without having to reload all historical data.
Unlike configuring refresh in the Power BI service, refresh operations through the XMLA endpoint are not
limited to 48 refreshes per day, and the scheduled refresh timeout is not imposed.
Date, time, and status for dataset refresh operations that include a write transaction through the XMLA endpoint
are recorded and shown in dataset Refresh history.
Dynamic Management Views (DMV)
Analysis Services DMVs provide visibility of dataset metadata, lineage, and resource usage. DMVs available for
querying in Power BI through the XMLA endpoint are limited to, at most, those that require database-admin
permissions. Some DMVs for example are not accessible because they require Analysis Services server-admin
permissions.
At this time, a write operation on a dataset authored in Power BI Desktop will prevent it from being downloaded
back as a PBIX file. Be sure to retain your original PBIX file.
Data-source declaration
When connecting to data sources and querying data, Power BI Desktop uses Power Query M expressions as
inline data source declarations. While supported in Premium workspaces, Power Query M inline data-source
declaration is not supported by Azure Analysis Services or SQL Server Analysis Services. Instead, Analysis
Services data modeling tools like Visual Studio create metadata using structured and/or provider data source
declarations. With the XMLA endpoint, Premium also supports structured and provider data sources, but not as
part of Power Query M inline data source declarations in Power BI Desktop models. To learn more, see
Understanding providers.
Power BI Desktop in live connect mode
Power BI Desktop can connect to a Power BI Premium dataset using a live connection. When using a live
connection, data doesn't need to be replicated locally, making it easier for users to consume semantic models.
There are two ways users can connect:
By selecting Power BI datasets , and then selecting a dataset to create a report. This is the recommended way
for users to connect live to datasets. This method provides an improved discovery experience, showing the
endorsement level of datasets. Users don't need to find and keep track of workspace URLs. To find a dataset,
users simply type in the dataset name or scroll to find the dataset they're looking for.
The other way users can connect is by using Get Data > Analysis Services , specifying a Power BI Premium
workspace name as a URL, selecting Connect live , and then, in Navigator, selecting a dataset. In this case, Power BI
Desktop uses the XMLA endpoint to connect live to the dataset as though it were an Analysis Services data
model.
Organizations that have existing reports connected live to Analysis Services data models intending to migrate to
Premium datasets only have to change the server name URL in Transform data > Data source settings .
Audit logs
When applications connect to a workspace, access through XMLA endpoints is logged in the Power BI audit
logs.
See also
For more information related to this article, check out the following resources:
Power BI usage scenarios: Advanced data model management
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Interactive and background operations
5/23/2022 • 2 minutes to read
Power BI divides operations into two types, interactive and background. This article lists these operations and
explains the difference between them.
Interactive operations
Shorter running operations such as dataset queries are classified as interactive operations. They’re usually
triggered by user interactions with the UI. For example, an interactive operation is triggered when a user opens a
report or clicks on a slicer in a Power BI report. Interactive operations can also be triggered without interacting
with the UI, for example when using SQL Server Management Studio (SSMS) or a custom application to run a
DAX query.
Background operations
Longer running operations such as dataset or dataflow refreshes are classified as background operations. They
can be triggered manually by a user, or automatically without user interaction. Background operations include
scheduled refreshes, interactive refreshes, REST-based refreshes and XMLA-based refresh operations. Users
aren't expected to wait for these operations to finish. Instead, they might come back later to check the status of
the operations.
Operation list
The table below lists the Power BI operations. It provides a short description for each operation and identifies
the operation's type.
Next steps
What is Power BI Premium Gen2?
Power BI Premium Gen2 architecture
Managing Premium Gen2 capacities
Use the gen2 metrics app
Troubleshoot XMLA endpoint connectivity
5/23/2022 • 14 minutes to read
XMLA endpoints in Power BI rely on the native Analysis Services communication protocol for access to Power BI
datasets. Because of this, XMLA endpoint troubleshooting is much the same as troubleshooting a typical
Analysis Services connection. However, some differences around Power BI-specific dependencies apply.
For example, a connection string that authenticates with a service principal specifies the User ID in the form
app:<appid>@<tenantid>:
Data Source=powerbi://api.powerbi.com/v1.0/myorg/Contoso;Initial Catalog=PowerBI_Dataset;User
ID=app:91ab91bb-6b32-4f6d-8bbc-97a0f9f8906b@19373176-316e-4dc7-834c-328902628ad4;Password=6drX...;
Deploying a dataset
You can deploy a tabular model project in Visual Studio (SSDT) to a workspace assigned to a Premium capacity,
much the same as to a server resource in Azure Analysis Services. However, when deploying there are some
additional considerations. Be sure to review the section Deploy model projects from Visual Studio (SSDT) in the
Dataset connectivity with the XMLA endpoint article.
Deploying a new model
In the default configuration, Visual Studio attempts to process the model as part of the deployment operation to
load data into the dataset from the data sources. As described in Deploy model projects from Visual Studio
(SSDT), this operation can fail because data source credentials cannot be specified as part of the deployment
operation. Instead, if credentials for your data source aren't already defined for any of your existing datasets, you
must specify the data source credentials in the dataset settings using the Power BI user interface (Datasets >
Settings > Data source credentials > Edit credentials ). Having defined the data source credentials, Power
BI can then apply the credentials to this data source automatically for any new dataset, after metadata
deployment has succeeded and the dataset has been created.
If Power BI cannot bind your new dataset to data source credentials, you will receive an error stating "Cannot
process database. Reason: Failed to save modifications to the server." with the error code
"DMTS_DatasourceHasNoCredentialError", as shown below:
To avoid the processing failure, set the Deployment Options > Processing Options to Do not Process , as
shown in the following image. Visual Studio then deploys only metadata. You can then configure the data source
credentials, and click on Refresh now for the dataset in the Power BI user interface.
The following list describes each combination of data source and partition source, and whether it's supported
with the XMLA endpoint:
Provider data source with a query partition source - The AS engine uses the cartridge-based connectivity stack
to access the data source. Supported with the XMLA endpoint: Yes.
Structured data source with a query partition source - The AS engine wraps the native query on the partition
source into an M expression and then uses the Mashup engine to import the data. Supported with the XMLA
endpoint: No.
Structured data source with an M partition source - The AS engine uses the Mashup engine to import the data.
Supported with the XMLA endpoint: Yes.
This is an informational message that can be ignored in SSMS 18.8 and higher because the client libraries will
reconnect automatically. Note that client libraries installed with SSMS v18.7.1 or lower do not support session
tracing. Download the latest SSMS.
Refresh operations in SSMS
When using SSMS v18.7.1 or lower to perform a long running (>1 min) refresh operation on a dataset in a
Premium Gen2 or an Embedded Gen2 capacity, SSMS may display an error like the following even though the
refresh operation succeeds:
Technical Details:
RootActivityId: 3716c0f7-3d01-4595-8061-e6b2bd9f3428
Date (UTC): 11/13/2020 7:57:16 PM
Run complete
This is due to a known issue in the client libraries where the status of the refresh request is incorrectly tracked.
This is resolved in SSMS 18.8 and higher. Download the latest SSMS.
Other client applications and tools
Client applications and tools such as Excel, Power BI Desktop, SSMS, or external tools connecting to and working
with datasets in Power BI Premium Gen2 capacities may cause the following error: The remote server
returned an error: (400) Bad Request. The error can occur especially when an underlying DAX query or
XMLA command is long running. To mitigate potential errors, be sure to use the most recent applications and
tools that install recent versions of the Analysis Services client libraries with regular updates. Regardless of
application or tool, the minimum required client library versions to connect to and work with datasets in a
Premium Gen2 capacity through the XMLA endpoint are:
MSOLAP - 15.1.65.22
AMO - 19.12.7.0
ADOMD - 19.12.7.0
This is due to a known issue in the app services REST API. This will be resolved in an upcoming release. In the
meantime, to get around this error, in Role Properties , click Script , and then enter and execute the following
TMSL command:
{
"createOrReplace": {
"object": {
"database": "AdventureWorks",
"role": "Role"
},
"role": {
"name": "Role",
"modelPermission": "read",
"members": [
{
"memberName": "xxxx",
"identityProvider": "AzureAD"
},
{
"memberName": “xxxx”
"identityProvider": "AzureAD"
}
]
}
}
}
This is due to the dataset being published having a different connection string but having the same name as the
existing dataset. To resolve this issue, either delete or rename the existing dataset. Also be sure to republish any
apps that are dependent on the report. If necessary, downstream users should be informed to update any
bookmarks with the new report address to ensure they access the latest report.
Workspace/server alias
Unlike Azure Analysis Services, server name aliases are not supported for Premium workspaces.
DISCOVER_M_EXPRESSIONS
The DISCOVER_M_EXPRESSIONS dynamic management view (DMV) is currently not supported in Power BI
using the XMLA Endpoint. Applications can use the Tabular Object Model (TOM) to obtain M expressions used by
the data model.
See also
Dataset connectivity with the XMLA endpoint
Automate Premium workspace and dataset tasks with service principals
Troubleshooting Analyze in Excel
Tabular model solution deployment
What is Power BI Premium?
5/23/2022 • 19 minutes to read
You can use Power BI Premium to access features and capabilities only available in Premium, and offer greater
scale and performance for Power BI content in your organization. Power BI Premium enables more users in your
organization to get the most out of Power BI with better performance and responsiveness. For example, with
Power BI Premium, you and your organization's users get the following capabilities:
Greater scale and performance for your Power BI reports
Flexibility to license by capacity
Best-in-class features for data visualization and insight-extraction such as AI-driven analysis, composable and
reusable dataflows, and paginated reports
Unify self-service and enterprise BI with a variety of Premium-only capabilities that support heavier
workloads and require enterprise scale
Built-in license to extend on-premises BI with Power BI Report Server
Support for data residency by region (Multi-Geo) and customer-managed encryption keys for data at rest
(BYOK)
Ability to share Power BI content with anyone (even outside your organization) without purchasing a per-
user license
This article introduces key features in Power BI Premium. Where necessary, links to additional articles with more
detailed information are provided. For more information about Power BI Pro and Power BI Premium, see the
Power BI features comparison section of Power BI pricing.
Reserved capacities
With Power BI Premium, you get reserved capacities. In contrast to a shared capacity where workloads' analytics
processing run on computational resources shared with other customers, a reserved capacity is for exclusive use
by an organization. It's isolated with reserved computational resources, which provide dependable and
consistent performance for hosted content. Note that the following types of Power BI content are processed in shared capacity rather than in your reserved capacity:
Excel workbooks (unless data is first imported into Power BI Desktop)
Push datasets
Streaming datasets
Q&A
Workspaces reside within capacities. Each Power BI user has a personal workspace known as My Workspace. Additional workspaces, known simply as workspaces, can be created to enable collaboration. By default, workspaces, including personal workspaces, are created in the shared capacity. When you have Premium capacities, both My Workspace and other workspaces can be assigned to Premium capacities. Capacity administrators automatically have their My Workspace assigned to Premium capacities.
Capacity nodes
As described in the Subscriptions and Licensing section, there are two Power BI Premium SKU families: EM and P. All Power BI Premium SKUs are available as capacity nodes, each representing a set amount of resources
consisting of processor, memory, and storage. In addition to resources, each SKU has operational limits on the
number of DirectQuery and Live Connection connections per second, and the number of parallel model
refreshes. While there is a lot of overlap in features for the two SKU families, only the P Premium SKU gives free
users the ability to consume content hosted in the Premium capacity. EM SKUs are used for embedding content.
Processing is achieved by a set number of v-cores, divided equally between backend and frontend.
Backend v-cores are responsible for core Power BI functionality, including query processing, cache
management, running R services, model refresh, and server-side rendering of reports and images. Backend v-
cores are assigned a fixed amount of memory that is primarily used to host models, also known as active
datasets.
Frontend v-cores are responsible for the web service, dashboard and report document management, access
rights management, scheduling, APIs, uploads and downloads, and generally for everything related to the user
experiences.
Storage is set to 100 TB per capacity node .
The resources and limits of each Premium SKU (and equivalently sized A SKU) are described in the following
table:
CAPACITY SKUS | TOTAL V-CORES | BACKEND V-CORES | FRONTEND V-CORES | RAM (GB) | DIRECTQUERY/LIVE CONNECTION (PER SECOND) | MAX MEMORY PER QUERY (GB) | MODEL REFRESH PARALLELISM 1
EM2/A2 | 2 | 1 | 1 | 5 | 7.5 | 2 | 2
EM3/A3 | 4 | 2 | 2 | 10 | 15 | 2 | 3
P1/A4 | 8 | 4 | 4 | 25 | 30 | 6 | 6
P2/A5 | 16 | 8 | 8 | 50 | 60 | 6 | 12
1 The model refresh parallelism limits only apply to dataset workloads per capacity.
2 SKUs greater than 100 GB aren't available in all regions. To request using these SKUs in regions where they're
not available, contact your Microsoft account manager.
NOTE
Using a single larger SKU (e.g. one P2 SKU) can be preferable to combining smaller SKUs (e.g. two P1 SKUs). For example,
you can use larger models and achieve better parallelism with the P2.
Capacity workloads
Capacity workloads are services made available to users. By default, Premium and Azure capacities support only
a dataset workload associated with running Power BI queries. The dataset workload cannot be disabled.
Additional workloads can be enabled for AI (Cognitive Services), Dataflows, and Paginated reports. These
workloads are supported in Premium subscriptions only.
Each additional workload allows configuring the maximum memory (as a percentage of total capacity memory)
that can be used by the workload. Default values for maximum memory are determined by SKU. You can maximize your capacity's available resources by enabling additional workloads only when they're used.
And you can change memory settings only when you have determined default settings aren't meeting your
capacity resource requirements. Workloads can be enabled and configured for a capacity by capacity admins
using Capacity settings in the Admin portal or using the Capacities REST APIs.
To learn more, see Configure workloads in a Premium capacity.
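As a rough illustration of the programmatic route, the following Python sketch uses the Capacities REST APIs to inspect workloads and patch one of them. The access token, capacity ID, and workload name are placeholders; verify the request shapes against the current Capacities REST API reference.

import requests

# Placeholder values - supply a real Azure AD access token and capacity ID.
ACCESS_TOKEN = "eyJ..."  # token authorized for capacity administration
CAPACITY_ID = "00000000-0000-0000-0000-000000000000"

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
base = f"https://api.powerbi.com/v1.0/myorg/capacities/{CAPACITY_ID}/Workloads"

# Inspect the current workload states for the capacity.
print(requests.get(base, headers=headers).json())

# Enable the paginated reports workload with a maximum memory percentage.
# "PaginatedReports" is the assumed workload name here.
resp = requests.patch(
    f"{base}/PaginatedReports",
    headers=headers,
    json={"state": "Enabled", "maxMemoryPercentageSetByUser": 20},
)
resp.raise_for_status()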
How capacities function
At all times, the Power BI service makes the best use of capacity resources while not exceeding limits imposed
on the capacity.
Capacity operations are classified as either interactive or background. Interactive operations include rendering
requests and responding to user interactions (filtering, Q&A querying, etc.). Background operations include
dataflow and import model refreshes, and dashboard query caching.
It's important to understand that interactive operations are always prioritized over background operations to
ensure the best possible user experience. If there are insufficient resources, background operations are added to
a waiting queue until resources free up. Background operations, like dataset refreshes, can be interrupted mid-
process by the Power BI service, added to a queue, and retried later on.
Import models must be fully loaded into memory so they can be queried or refreshed. The Power BI service
uses sophisticated algorithms to manage memory usage fairly, but in rare cases, the capacity can get overloaded
if there are insufficient resources to meet customers' real-time demands. While it's possible for a capacity to
store many import models in persistent storage (up to 100 TB per Premium capacity), not all the models
necessarily reside in memory at the same time, otherwise their in-memory dataset size can easily exceed the
capacity memory limit. Besides the memory required to load the datasets, additional memory is needed for
execution of queries and refresh operations.
Import models are therefore loaded and removed from memory according to usage. An import model is loaded
when it is queried (interactive operation), or if it needs to be refreshed (background operation).
The removal of a model from memory is known as eviction. It's an operation Power BI can perform quickly
depending on the size of the models. If the capacity isn't experiencing any memory pressure and the model isn't idle (that is, it's actively in use), the model can reside in memory without being evicted. When Power BI determines
there is insufficient memory to load a model, the Power BI service will attempt to free up memory by evicting
inactive models, typically defined as models loaded for interactive operations which have not been used in the
last three minutes. If there are no inactive models to evict, the Power BI service attempts to evict models loaded
for background operations. A last resort, after 30 seconds of failed attempts, is to fail the interactive operation.
In this case, the report user is notified of failure with a suggestion to try again shortly. In some cases, models
may be unloaded from memory due to service operations.
It's important to stress that dataset eviction is a normal behavior on the capacity. The capacity strives to balance
memory usage by managing the in-memory lifecycle of models in a way that is transparent to users. A high
eviction rate does not necessarily mean the capacity is insufficiently resourced. It can, however, become a
concern if the performance of queries or refreshes degrades due to the overhead of loading and evicting models
repeatedly within a short span of time.
Refreshes of import models are always memory intensive as models must be loaded into memory. Additional
intermediate memory is also required for processing. A full refresh can use approximately double the amount of
memory required by the model because Power BI maintains an existing snapshot of the model in memory until
the processing operation is completed. This allows the model to be queried even when it's being processed.
Queries can be sent to the existing snapshot of the model until the refresh has completed and the new model
data is available.
Incremental refresh performs partition refresh instead of a full model refresh, and will typically be faster and
require less memory, and can substantially reduce the capacity's resource usage. Refreshes can also be CPU-
intensive for models, especially those with complex Power Query transformations, or calculated tables or
columns that are complex or are based on a large volume of data.
Refreshes, like queries, require the model be loaded into memory. If there is insufficient memory, the Power BI
service will attempt to evict inactive models, and if this isn't possible (as all models are active), the refresh job is
queued. Refreshes are typically CPU-intensive, even more so than queries. For this reason, a limit on the number
of concurrent refreshes, calculated as the ceiling of 1.5 x the number of backend v-cores, is imposed. If there are
too many concurrent refreshes, the scheduled refresh is queued until a refresh slot is available, resulting in the
operation taking longer to complete. On-demand refreshes such as those triggered by a user request or an API
call will retry three times. If there still aren't enough resources, the refresh will then fail.
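As an illustration of these limits, here is a minimal Python sketch of the admission rules described above. The function names and the double-the-model-size memory rule of thumb are illustrative, not part of any Power BI API.

import math

def max_concurrent_refreshes(backend_vcores: int) -> int:
    # Concurrency limit: ceiling of 1.5 x the number of backend v-cores.
    return math.ceil(1.5 * backend_vcores)

def can_start_refresh(model_size_gb: float, free_memory_gb: float,
                      running_refreshes: int, backend_vcores: int) -> bool:
    # A full refresh needs roughly double the model size in memory,
    # and must stay under the capacity's concurrent refresh limit.
    if free_memory_gb < 2 * model_size_gb:
        return False
    return running_refreshes < max_concurrent_refreshes(backend_vcores)

# Example: a P1/A4 capacity has 4 backend v-cores, so up to 6 refreshes.
print(max_concurrent_refreshes(4))  # 6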
Regional support
When creating a new capacity, global administrators and Power BI service administrators can specify a region
where workspaces assigned to the capacity will reside. This is known as Multi-Geo . With Multi-Geo,
organizations can meet data residency requirements by deploying content to datacenters in a specific region,
even if it's different than the region where the Microsoft 365 subscription resides. To learn more, see Multi-Geo
support for Power BI Premium.
Capacity management
Managing Premium capacities involves creating or deleting capacities, assigning admins, assigning workspaces,
configuring workloads, monitoring, and making adjustments to optimize capacity performance.
Global administrators and Power BI service administrators can create Premium capacities from available v-
cores, or modify existing Premium capacities. When a capacity is created, capacity size and geographic region
are specified, and at least one capacity admin is assigned.
When capacities are created, most administrative tasks are completed in the Admin portal.
Capacity admins can assign workspaces to the capacity, manage user permissions, and assign other admins.
Capacity admins can also configure workloads, adjust memory allocations, and if necessary, restart a capacity,
resetting operations if a capacity becomes overloaded.
Capacity admins can also make sure a capacity is running smoothly. They can monitor capacity health right in
the Admin portal or by using the Premium capacity metrics app.
To learn more about creating capacities, assigning admins, and assigning workspaces, see Managing Premium
capacities. To learn more about roles, see Administrator roles related to Power BI.
Monitoring
Monitoring Premium capacities provides administrators with an understanding of how capacities are
performing. Capacities can be monitored by using the Admin portal and the Power BI Premium Capacity Metrics
app.
Monitoring in the portal provides a quick view with high-level metrics indicating loads placed and the resources
utilized by your capacity, averaged, over the past seven days.
The Power BI Premium Capacity Metrics app provides the most in-depth information into how your
capacities are performing. The app provides a high-level dashboard and more detailed reports.
From the app's dashboard, you can click a metric cell to open an in-depth report. Reports provide in-depth
metrics and filtering capability to drill down on the most important information you need to keep your
capacities running smoothly.
To learn more about monitoring capacities, see Monitoring in the Power BI Admin portal and Monitoring with
the Power BI Premium Capacity Metrics app.
Optimizing capacities
Making the best use of your capacities is critical to ensuring that users get the performance they expect and that you're getting the most value for your Premium investment. By monitoring key metrics, administrators can determine how best to
troubleshoot bottlenecks and take necessary action. To learn more, see Optimizing Premium capacities and
Premium capacity scenarios.
Capacities REST APIs
The Power BI REST APIs include a collection of Capacities APIs. With the APIs, admins can programmatically
manage many aspects of your Premium capacities, including enabling and disabling workloads, assigning
workspaces to a capacity, and more.
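For example, the following Python sketch (with placeholder token and IDs) lists capacities and assigns a workspace to one of them; verify the endpoints against the current REST API reference.

import requests

# Placeholder values - supply a real Azure AD access token and IDs.
ACCESS_TOKEN = "eyJ..."
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
CAPACITY_ID = "11111111-1111-1111-1111-111111111111"

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# List the capacities the caller can administer.
caps = requests.get("https://api.powerbi.com/v1.0/myorg/capacities",
                    headers=headers)
caps.raise_for_status()
for cap in caps.json().get("value", []):
    print(cap["id"], cap.get("displayName"), cap.get("sku"))

# Assign a workspace (group) to the chosen Premium capacity.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/AssignToCapacity",
    headers=headers,
    json={"capacityId": CAPACITY_ID},
)
resp.raise_for_status()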
Large datasets
Depending on the SKU, Power BI Premium supports uploading Power BI Desktop (.pbix) model files up to a
maximum of 10 GB in size. When loaded, the model can then be published to a workspace assigned to a
Premium capacity. The dataset can then be refreshed to up to 12 GB in size.
Size considerations
Large datasets can be resource intensive. You should have at least a P1 or an A4 SKU for any datasets larger
than one GB. Although publishing large datasets to workspaces backed by A SKUs up to A3 could work,
refreshing them will not.
The following table shows the recommended SKUs for uploading or publishing a .pbix file to the Power BI
service:
SKU | SIZE OF .PBIX
P1/A4 | Up to 3 GB
P2/A5 | Up to 6 GB
NOTE
When using a PPU capacity you can upload or publish .pbix files that are up to 10 GB in size.
Incremental refresh
Incremental refresh is an integral part of having and maintaining large datasets in Power BI Premium and Power BI Pro. Incremental refresh has many benefits; for example, refreshes are faster because only data that has changed needs to be refreshed. Refreshes are more reliable because it's unnecessary to maintain long-running connections to volatile data sources. Resource consumption is reduced because less data to refresh reduces overall consumption of memory and other resources. Incremental refresh policies are defined in Power BI Desktop, and applied in the service. To learn more, see Incremental refresh for datasets.
Paginated reports
Paginated reports, supported on all EM, A, and P SKUs in Premium Gen2, are based on Report Definition
Language (RDL) technology in SQL Server Reporting Services. While based on RDL technology, it's not the same
as Power BI Report Server, which is a downloadable reporting platform you can install on-premises, also
included with Power BI Premium. Paginated reports are formatted to fit well on a page that can be printed or
shared. Data is displayed in a table, even if the table spans multiple pages. By using the free Power BI Report Builder Windows desktop application, users author paginated reports and publish them to the service.
In Power BI Premium, paginated reports are a workload that must be enabled for a capacity by using the Admin portal. Capacity admins can enable the workload and then specify the amount of memory as a percentage of the capacity's overall memory resources. Unlike other types of workloads, Premium runs paginated reports in a contained
space within the capacity. The maximum memory specified for this space is used whether or not the workload is
active. The default is 20%.
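Because this memory is statically reserved, the cost of the setting is easy to estimate. The following small Python calculation is illustrative only, using the SKU figures from the capacity nodes table earlier in this article:

def paginated_reports_reserved_gb(capacity_ram_gb: float,
                                  max_memory_pct: float = 20.0) -> float:
    # The paginated reports workload reserves this memory whether or not
    # the workload is active.
    return capacity_ram_gb * max_memory_pct / 100.0

# On a P1 (25 GB RAM), the default 20% setting reserves 5 GB.
print(paginated_reports_reserved_gb(25))  # 5.0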
Premium enables widespread distribution of content by Pro users without requiring Pro or Premium Per User
(PPU) licenses for recipients who view the content. Pro or Premium Per User (PPU) licenses are required for
content creators. Creators connect to data sources, model data, and create reports and dashboards that are
packaged as workspace apps. Users without a Pro or Premium Per User (PPU) license can still access a
workspace that's in Power BI Premium capacity, as long as they only have a Viewer role. A Pro or PPU license is
required for other roles.
To learn more, see Power BI licensing.
The XMLA endpoint and third-party tools enable organizations to create perspectives. Power BI does not honor
perspectives when building reports on top of Live connect models or reports. Instead, Power BI points to the
main model once published to the Power BI service, showing all elements in the data model. If your Azure
Analysis Services model uses perspectives, you should not move or migrate those models to Power BI Premium.
To learn more, see Dataset connectivity with the XMLA endpoint.
Next steps
Managing Premium capacities
Azure Power BI Embedded Documentation
What is Power BI Premium Gen2?
Optimizing Premium capacities
5/23/2022 • 19 minutes to read
When Premium capacity performance issues arise, a common first approach is to optimize or tune your
solutions to restore acceptable response times. The rationale is to avoid purchasing additional Premium capacity unless justified.
When additional Premium capacity is required, there are two options described in this article:
Scale-up an existing Premium capacity
Add a new Premium capacity
Finally, testing approaches and Premium capacity sizing conclude this article.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.
NOTE
You can also get Premium Per User (PPU) licenses for individuals, which provides many of the features and capabilities of a
Premium capacity, and also incorporates all functionality included with a Power BI Pro license. For more information, see
Power BI Premium Per User.
The recommendations and best practices in this article help ensure that CPU utilization of each dataset, and of other Power BI artifacts, is optimized.
Best practices
When trying to get the best utilization and performance, there are some recommended best practices, including:
Using workspaces instead of personal workspaces.
Separating business critical and Self-Service BI (SSBI) into different capacities.
If sharing content only with Power BI Pro users, there may be no need to store the content in a reserved
capacity.
Use reserved capacities when looking to achieve a specific refresh time, or when specific features are
required. For example, with large datasets or paginated reporting.
Addressing common questions
Optimizing Power BI Premium deployments is a complex subject involving an understanding of workload
requirements, available resources, and their effective use.
This article addresses seven common support questions, describing possible issues and explanations, and
information on how to identify and resolve them.
Why is the capacity slow, and what can I do?
There are many reasons that can contribute to a slow Premium capacity. This question requires further
information to understand what is meant by slow. Are reports slow to load? Or are they failing to load? Are
report visuals slow to load or update when users interact with the report? Are refreshes taking longer to complete than expected, or than previously experienced?
Having gained an understanding of the reason, you can then begin to investigate. Responses to the following six
questions will help you to address more specific issues.
What content is using up my capacity?
You can use the Power BI Premium Capacity Metrics app to filter by capacity, and review performance
metrics for workspace content. It's possible to review the performance metrics and resource usage by hour for
the past seven days for all content stored within a Premium capacity. Monitoring is often the first step to take
when troubleshooting a general concern about Premium capacity performance.
Key metrics to monitor include:
Average CPU and high utilization count.
Average Memory and high utilization count, and memory usage for specific datasets, dataflows, and
paginated reports.
Active datasets loaded in memory.
Average and maximum query durations.
Average query wait times.
Average dataset and dataflow refresh times.
In the Power BI Premium Capacity Metrics app, active memory shows the total amount of memory given to a
report that cannot be evicted because it has been in use within the last three minutes. A high spike in refresh
wait time could be correlated with a large and/or active dataset.
The Top 5 by Average Duration chart highlights the top five datasets, paginated reports, and dataflows
consuming capacity resources. Content in the top five lists is a candidate for investigation and possible optimization.
Why are reports slow?
The following possible issues are described along with ways to identify and handle them.

Possible issue: High total active memory (the model can't be evicted because it's been in use within the last three minutes).
How to identify: Monitor memory metrics [1] and eviction counts [2].
How to handle: Decrease the model size, or convert to DirectQuery mode. See the Optimizing models section in this article.

Possible issue: Report pages contain too many visuals (interactive filtering can trigger at least one query per visual), or visuals retrieve more data than necessary.
How to identify: Review report designs. Interview report users to understand how they interact with the reports. Monitor dataset query metrics [3].
How to handle: Redesign reports with fewer visuals per page.

Possible issue: Increasingly large volumes of import data, or complex or inefficient calculation logic, including RLS roles.
How to identify: Review model designs. Monitor gateway performance counters.
How to handle: See the Optimizing models section in this article.

Possible issue: Insufficient capacity resources - high query wait times, CPU saturation, or DQ/LC connection limits exceeded.
How to identify: Monitor CPU utilization [4], query wait times, and DQ/LC utilization [5] metrics, plus query durations. If these fluctuate, it can indicate concurrency issues.
How to handle: Scale-up the capacity, or assign the content to a different capacity. Redesign reports with fewer visuals per page.
Notes:
[1] Average Memory Usage (GB), and Highest Memory Consumption (GB).
[2] Dataset evictions.
[3] Dataset Queries, Dataset Average Query Duration (ms), Dataset Wait Count, and Dataset Average Wait Time
(ms).
[4] CPU High Utilization Count and CPU Time of Highest Utilization (past seven days).
[5] DQ/LC High Utilization Count and DQ/LC Time of Highest Utilization (past seven days).
Why are reports not loading?
When reports fail to load, it's a sure sign the capacity has insufficient memory and is over-heated. This can occur
when all loaded models are being actively queried and so cannot be evicted, and any refresh operations have
been paused or delayed. The Power BI service will attempt to load the dataset for 30 seconds, and the user is
gracefully notified of the failure with a suggestion to try again shortly.
Currently there is no metric to monitor for report loading failures. You can identify the potential for this issue by
monitoring system memory, specifically highest utilization and time of highest utilization. High dataset evictions
and long dataset refresh average wait time could suggest that this issue is occurring.
If this happens only very occasionally, this may not be considered a priority issue. Report users are informed
that the service is busy and that they should retry after a short time. If this happens too frequently, the issue can
be resolved by scaling up the Premium capacity or by assigning the content to a different capacity.
Capacity Admins (and Power BI service administrators) can monitor the Query Failures metric to determine when this happens. They can also restart the capacity, resetting all operations in case of system overload.
Why are refreshes not starting on schedule?
Scheduled refresh start times are not guaranteed. Recall that the Power BI service will always prioritize
interactive operations over background operations. Refresh is a background operation that can occur when two
conditions are met:
There is sufficient memory
The number of supported concurrent refreshes for the Premium capacity is not exceeded
When the conditions are not met, the refresh is queued until the conditions are favorable.
For a full refresh, recall that at least double the current dataset memory size is required. If sufficient memory is
not available, then the refresh cannot commence until model eviction frees up memory - this means delays until
one or more datasets becomes inactive and can be evicted.
Recall that the supported number of maximum concurrent refreshes is set to 1.5 times the backend v-cores,
rounded up.
A scheduled refresh will fail when it cannot commence before the next scheduled refresh is due to commence.
An on-demand refresh triggered manually from the UI will attempt to run up to three times before failing.
Capacity Admins (and Power BI service administrators) can monitor the Average Refresh Wait Time
(minutes) metric to determine average lag between the scheduled time and the start of the operation.
While not usually an administrative priority, to influence on-time data refreshes, ensure that sufficient memory
is available. This may involve isolating datasets to capacities with known sufficient resources. It's also possible
that admins could coordinate with dataset owners to help stagger or reduce scheduled data refresh times to
minimize collisions. Note that it's not possible for an administrator to view the refresh queue, or to retrieve
dataset schedules.
Why are refreshes slow?
Refreshes can be slow - or perceived to be slow (as the previous common question addresses).
When the refresh is in fact slow, it can be due to several reasons:
Insufficient CPU (refresh can be very CPU-intensive).
Insufficient memory, resulting in refresh pausing (which requires the refresh to start over when conditions
are favorable to recommence).
Non-capacity reasons, including datasource system responsiveness, network latency, invalid permissions or
gateway throughput.
Data volume - a good reason to configure incremental refresh, as discussed below.
Capacity Admins (and Power BI service administrators) can monitor the Average Refresh Duration (minutes) metric to determine a benchmark for comparison over time, and the Average Refresh Wait Time (minutes) metric to determine the average lag between the scheduled time and the start of the operation.
Incremental refresh can significantly reduce data refresh duration, especially for large model tables. There are
four benefits associated with incremental refresh:
Refreshes are faster - Only a subset of a table needs loading, reducing CPU and memory usage, and
parallelism can be higher when refreshing multiple partitions.
Refreshes occur only when required - Incremental refresh policies can be configured to load only when
data has changed.
Refreshes are more reliable - Shorter running connections to volatile datasource systems are less
susceptible to disconnection.
Models remain trim - Incremental refresh policies can be configured to automatically remove history
beyond a sliding window of time.
To learn more, see Incremental refresh for datasets.
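The sliding-window idea behind these benefits can be sketched as follows. This Python sketch is illustrative only; actual policies are defined with the RangeStart and RangeEnd parameters in Power BI Desktop.

from datetime import date, timedelta

def incremental_window(today: date, refresh_days: int = 10,
                       history_days: int = 5 * 365):
    # Only partitions in [refresh_start, today] are reprocessed;
    # rows older than history_start roll out of the model over time.
    refresh_start = today - timedelta(days=refresh_days)
    history_start = today - timedelta(days=history_days)
    return history_start, refresh_start

hist, refresh = incremental_window(date(2022, 5, 23))
print(f"keep history from {hist}, refresh partitions from {refresh}")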
Why are data refreshes not completing?
When the data refresh commences but fails to complete, it can be due to several reasons:
Insufficient memory, even if there is only one model in the Premium capacity (that is, when the model size is very large).
Non-capacity reasons, including datasource system disconnection, invalid permissions or gateway error.
Capacity Admins (and Power BI service administrators) can monitor the Refresh Failures due to out of Memory metric.
Optimizing models
Optimal model design is crucial to delivering an efficient and scalable solution. However, it's beyond the scope of
this article to provide a complete discussion. Instead, this section will provide key areas for consideration when
optimizing models.
Optimizing Power BI hosted models
Optimizing models hosted in a Premium capacity can be achieved at the datasource(s) and model layers.
Consider the optimization possibilities for an Import model:
At the datasource layer:
Relational data sources can be optimized to ensure the fastest possible refresh by pre-integrating data,
applying appropriate indexes, defining table partitions that align to incremental refresh periods, and
materializing calculations (in place of calculated model tables and columns) or adding calculation logic to
views.
Non-relational data sources can be pre-integrated with relational stores.
Ensure that gateways have enough resources, preferably on dedicated machines, with sufficient network
bandwidth and in close proximity to the data sources.
At the model layer:
Power Query query designs can minimize or remove complex transformations and especially those that
merge different data sources (data warehouses achieve this during their Extract-Transform-Load stage). Also, ensure that appropriate datasource privacy levels are set; this can avoid requiring Power BI to load full results to produce a combined result across queries.
The model structure determines the data to load and has a direct impact on the model size. It can be
designed to avoid loading unnecessary data by removing columns, removing rows (especially historic data)
or by loading summarized data (at the expense of loading detailed data). Dramatic size reduction can be
achieved by removing high cardinality columns (especially text columns) which do not store or compress
very efficiently.
Model query performance can be improved by configuring single direction relationships unless there is a
compelling reason to allow bi-directional filtering. Consider also using the CROSSFILTER function instead of
bi-directional filtering.
Aggregation tables can achieve fast query responses by loading pre-summarized data, however this will
increase the size of the model and result in longer refresh times. Generally, aggregation tables should be
reserved for very large models or Composite model designs.
Calculated tables and columns increase the model size and result in longer refresh times. Generally, a smaller
storage size and faster refresh time can be achieved when the data is materialized or calculated in the
datasource. If this is not possible, using Power Query custom columns can offer improved storage
compression.
There may be opportunity to tune DAX expressions for measures and RLS rules, perhaps rewriting logic to avoid expensive formulas.
Incremental refresh can dramatically reduce refresh time and conserve memory and CPU. Incremental refresh can also be configured to remove historic data, keeping model sizes trim.
A model could be redesigned as two models when there are different and conflicting query patterns. For
example, some reports present high-level aggregates over all history, and can tolerate 24 hours' latency.
Other reports are concerned with today's data and need granular access to individual transactions. Rather
than design a single model to satisfy all reports, create two models optimized for each requirement.
Consider the optimization possibilities for a DirectQuery model. As the model issues query requests to the
underlying datasource, datasource optimization is critical to delivering responsive model queries.
At the datasource layer:
The datasource can be optimized to ensure the fastest possible querying by pre-integrating data (which is
not possible at the model layer), applying appropriate indexes, defining table partitions, materializing
summarized data (with indexed views), and minimizing the amount of calculation. The best experience is
achieved when pass-through queries need only filter and perform inner joins between indexed tables or
views.
Ensure that gateways have enough resources, preferably on dedicated machines, with sufficient network
bandwidth and in close proximity to the datasource.
At the model layer:
Power Query query designs should preferably apply no transformations - otherwise attempt to keep
transformations to an absolute minimum.
Model query performance can be improved by configuring single direction relationships unless there is a
compelling reason to allow bi-directional filtering. Also, model relationships should be configured to assume
referential integrity is enforced (when this is the case) and will result in datasource queries using more
efficient inner joins (instead of outer joins).
Avoid creating Power Query custom columns or model calculated columns - materialize these in the datasource when possible.
There may be opportunity to tune DAX expressions for measures and RLS rules, perhaps rewriting logic to
avoid expensive formulas.
Consider the optimization possibilities for a Composite model. Recall that a Composite model enables a mix of
import and DirectQuery tables.
Generally, the optimization for Import and DirectQuery models apply to Composite model tables that use
these storage modes.
Typically, strive to achieve a balanced design by configuring dimension-type tables (representing business
entities) as Dual storage mode and fact-type tables (often large tables, representing operational facts) as
DirectQuery storage mode. Dual storage mode means both Import and DirectQuery storage modes, and this
allows the Power BI service to determine the most efficient storage mode to use when generating a native
query for pass-through.
Ensure that gateways have enough resources, preferably on dedicated machines, with sufficient network
bandwidth and in close proximity to the data sources.
Aggregations tables configured as Import storage mode can deliver dramatic query performance
enhancements when used to summarize DirectQuery storage mode fact-type tables. In this case, aggregation
tables will increase the size of the model and increase refresh time, and often this is an acceptable tradeoff
for faster queries.
Optimizing externally hosted models
Many optimization possibilities discussed in the Optimizing Power BI hosted models section apply also to
models developed with Azure Analysis Services and SQL Server Analysis Services. Clear exceptions are certain
features which are not currently supported, including Composite models and aggregation tables.
An additional consideration for externally-hosted datasets is the database hosting in relation to the Power BI
service. For Azure Analysis Services, this means creating the Azure resource in the same region as the Power BI
tenant (home region). For SQL Server Analysis Services, for IaaS, this means hosting the VM in the same region,
and for on-premises, it means ensuring an efficient gateway setup.
As an aside, it may be of interest to note that Azure Analysis Services databases and SQL Server Analysis
Services tabular databases require that their models be loaded fully into memory and that they remain there at
all times to support querying. Like the Power BI service, there needs to be sufficient memory for refreshing if
the model must remain online during the refresh. Unlike the Power BI service, there is no concept of models being automatically aged in and out of memory according to usage. Power BI Premium, therefore, offers a more efficient approach to maximize model querying with lower memory usage.
Capacity planning
The size of a Premium capacity determines its available memory and processor resources and limits imposed on
the capacity. The number of Premium capacities is also a consideration, as creating multiple Premium capacities
can help isolate workloads from each other. Note that storage is 100 TB per capacity node, and this is likely to be
more than sufficient for any workload.
Determining the size and number of Premium capacities can be challenging, especially for the initial capacities
you create. The first step when capacity sizing is to understand the average workload representing expected
day-to-day usage. It's important to understand that not all workloads are equal. For example - at one end of a
spectrum - 100 concurrent users accessing a single report page that contains a single visual is easily achievable.
Yet - at the other end of the spectrum - 100 concurrent users accessing 100 different reports, each with 100
visuals on the report page, is going to make very different demands of capacity resources.
Capacity Admins will therefore need to consider many factors specific to your environment, content and
expected usage. The overriding objective is to maximize capacity utilization while delivering consistent query
times, acceptable wait times, and eviction rates. Factors for consideration can include:
Model size and data characteristics - Import models must be fully loaded into memory to allow
querying or refreshing. LC/DQ datasets can require significant processor time and possibly significant
memory to evaluate complex measures or RLS rules. Memory and processor size, and LC/DQ query
throughput are constrained by the capacity size.
Concurrent active models - The concurrent querying of different import models will deliver best
responsiveness and performance when they remain in memory. There should be sufficient memory to host
all heavily-queried models, with additional memory to allow for their refresh.
Import model refresh - The refresh type (full or incremental), duration, and complexity of Power Query queries and calculated table/column logic can impact memory and especially processor usage. Concurrent refreshes are constrained by the capacity size (1.5 x backend v-cores, rounded up).
Concurrent queries - Many concurrent queries can result in unresponsive reports when processor usage or LC/DQ connections exceed the capacity limit. This is especially the case for report pages that include many visuals.
Dataflows and paginated reports - The capacity can be configured to support dataflows and paginated reports, with each requiring a configurable maximum percentage of capacity memory. Memory is dynamically allocated to dataflows, but is statically allocated to paginated reports.
In addition to these factors, Capacity Admins can consider creating multiple capacities. Multiple capacities allow
for the isolation of workloads and can be configured to ensure priority workloads have guaranteed resources.
For example, two capacities can be created to separate business-critical workloads from self-service BI (SSBI)
workloads. The business-critical capacity can be used to isolate large corporate models providing them with
guaranteed resources, with authoring access granted only to the IT department. The SSBI capacity can be used
to host a growing number of smaller models, with access granted to business analysts. The SSBI capacity may at
times experience query or refresh waits that are tolerable.
Over time, Capacity Admins can balance workspaces across capacities by moving content between workspaces,
or workspaces between capacities, and by scaling capacities up or down. Generally, to host larger models you
scale-up and for higher concurrency you scale out.
Recall that purchasing a license provides the tenant with v-cores. The purchase of a P3 subscription can be used to create one P3 capacity, two P2 capacities, or four P1 capacities (1 x P3, 2 x P2, or 4 x P1). Also, before upsizing a P2 capacity to a P3 capacity, consideration can be given to splitting the v-cores to create two P1 capacities.
Testing approaches
Once a capacity size is decided, testing can be performed by creating a controlled environment. A practical and
economic option is to create an Azure (A SKUs) capacity, noting that a P1 capacity is the same size as an A4
capacity, with the P2 and P3 capacities the same size as the A5 and A6 capacities, respectively. Azure capacities
can be created quickly and are billed on an hourly basis. So, once testing is complete, they can be easily deleted
to stop accruing costs.
The test content can be added to the workspaces created on the Azure capacity, and then as a single user can run
reports to generate a realistic and representative workload of queries. If there are import models, a refresh for
each model should be performed also. Monitoring tools can then be used to review all metrics to understand
resource utilization.
It's important that the tests are repeatable. Tests should be run several times and they should deliver
approximately the same result each time. An average of these results can be used to extrapolate and estimate a
workload under true production conditions.
If you already have a capacity and the reports you want to load test for, use the PowerShell load generating tool
to quickly generate a load test. The tool enables you to estimate how many instances of each report your
capacity can run in an hour. You can use the tool to evaluate your capacity's ability for individual report
rendering or for rendering several different reports in parallel. For more information, see the video Microsoft
Power BI: Premium capacity.
To generate a more complex test, consider developing a load testing application that simulates a realistic
workload. For more information, see the webinar Load Testing Power BI Applications with Visual Studio Load
Test.
Acknowledgments
This article was written by Peter Myers, Data Platform MVP and independent BI expert with Bitwise Solutions.
Next steps
Premium capacity scenarios
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Premium capacity scenarios
5/23/2022 • 12 minutes to read
This article describes real-world scenarios where Power BI Premium capacities have been implemented. Common issues and challenges are described, along with how to identify and help resolve them:
Keeping datasets up-to-date
Identifying slow-responding datasets
Identifying causes for sporadically slow-responding datasets
Determining whether there is enough memory
Determining whether there is enough CPU
The steps, along with chart and table examples, are from the Power BI Premium Capacity Metrics app that a Power BI administrator has access to.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.
Keeping datasets up-to-date
In the Hourly Average Refresh Wait Times visual, the Power BI administrator notices that refresh wait times peak consistently around 4 PM each day.
There are several possible explanations for these results:
Too many refresh attempts could be occurring at the same time, exceeding the limits defined by the
capacity node. In this case, six concurrent refreshes on a P1 with default memory allocation.
Datasets to be refreshed may be too large to fit into available memory (requiring at least 2x the memory
required for full refresh).
Inefficient Power Query logic may be resulting in a memory usage spike during dataset refresh. On a
busy capacity, this spike can occasionally reach the physical limit, failing the refresh and potentially
affecting other report view operations on the capacity.
Frequently queried datasets that need to stay in memory may affect the ability of other datasets to
refresh because of limited available memory.
To help investigate, the Power BI administrator can look for:
Low available memory at the time of data refreshes when available memory is less than 2x the size of the
dataset to be refreshed.
Datasets not being refreshed and not in memory before refresh, yet started to show interactive traffic during
heavy refresh times. To see which datasets are loaded into memory at any given time, a Power BI
administrator can look at the datasets area of the Datasets tab in the app. The admin can then cross-filter to
a given time by clicking on one of the bars in the Hourly Loaded Dataset Counts visual. A local spike, shown in the image below, indicates an hour when multiple datasets were loaded into memory, which could delay the start of scheduled refreshes.
Increased dataset evictions taking place when data refreshes are scheduled to start. Evictions can indicate
that there was high memory pressure caused by serving too many different interactive reports before
refresh. The Hourly Dataset Evictions and Memory Consumption visual can clearly indicate spikes in evictions.
The following image shows a local spike in loaded datasets, which suggests interactive querying delayed the
start of refreshes. Selecting a time period in the Hourly Loaded Dataset Counts visual will cross-filter the
Dataset Sizes visual.
The Power BI administrator can attempt to resolve the issue by taking steps to ensure that sufficient memory is
available for data refreshes to start by:
Contacting dataset owners and asking them to stagger and space out data refresh schedules.
Reducing dataset query load by removing unnecessary dashboards or dashboard tiles, especially content
that enforces row-level security.
Speeding up data refreshes by optimizing Power Query logic and improving calculated columns or tables in the model.
Reducing dataset sizes, or configuring larger datasets to perform incremental data refresh.
The administrator can refer to the Query Duration Distribution visual, which shows an overall distribution of bucketed query performance (<= 30ms, 0-100ms) for the filtered time period. Generally, queries that take one second or less are considered responsive by most users. Queries that take longer tend to create a perception of bad performance.
The Hourly Query Duration Distribution visual allows the Power BI administrator to identify one-hour
periods when the capacity performance could have been perceived as poor. The larger the bar segments that
represent query durations over one second, the larger the risk that users will perceive poor performance.
The visual is interactive, and when a segment of the bar is selected, the corresponding Query Durations table
visual on the report page is cross-filtered to show the datasets it represents. This cross-filtering allows the
Power BI administrator to easily identify which datasets are responding slowly.
The following image shows a visual filtered by Hourly Query Duration Distributions, focusing on the worst-
performing datasets in one-hour buckets.
After the poor-performing dataset in a specific one-hour time span is identified, the Power BI administrator can
investigate whether poor performance is caused by an overloaded capacity or due to a poorly designed dataset
or report. They can refer to the Query Wait Times visual, and sort datasets by descending average query wait
time. If a large percentage of queries is waiting, a high demand for the dataset is likely the cause of too many
query delays. If the average query wait time is substantial (> 100 ms), it may be worth reviewing the dataset
and report to see if optimizations can be made. For example, fewer visuals on given report pages or a DAX
expression optimization.
There are several possible reasons for query wait time buildup in datasets:
A suboptimal model design, measure expressions, or even report design - all circumstances that can
contribute to long running queries that consume high levels of CPU. This forces new queries to wait until
CPU threads become available and can create a convoy effect (think traffic jam), commonly seen during peak
business hours. The Query Waits page will be the main resource to determine whether datasets have high
average query wait times.
A high number of concurrent capacity users (hundreds to thousands) consuming the same report or dataset.
Even well-designed datasets can perform badly beyond a concurrency threshold. This performance problem
is indicated by a single dataset showing a dramatically higher value for query counts than other datasets. For
example, you may see 300K queries for one dataset compared to <30K queries for all other datasets. At
some point the query waits for this dataset will start to stagger, which can be seen in the Query Durations visual.
Many disparate datasets queried concurrently, causing thrashing as datasets frequently cycle in and out of
memory. This situation results in users experiencing slow performance when the dataset is loaded into
memory. To confirm, the Power BI administrator can refer to the Hourly Dataset Evictions and Memory Consumption visual, which may indicate that a high number of datasets loaded into memory are being repeatedly evicted.
Once a problematic timespan is identified (for example, during Jan. 30th in the image above) the Power BI
administrator can remove all dataset filters then filter only by that timespan to determine which datasets were
actively queried during this time. The culprit dataset for the noisy neighbor effect is usually the top queried
dataset or the one with the longest average query duration.
A solution to this problem could be to distribute the culprit datasets over different workspaces on different
Premium capacities, or on shared capacity if the dataset size, consumption requirements, and data refresh
patterns are supported.
The reverse could be true as well. The Power BI administrator could identify times when dataset query performance drastically improves and then look for what disappeared. If certain information is missing at that point, that may help to point to the cause of the problem.
In a capacity experiencing memory pressure, the same visual will clearly show active memory and total memory converging, meaning that it's impossible to load additional datasets into memory. In this case, the Power BI administrator can click Capacity Restart (in Advanced Options of the capacity settings area of the admin portal). Restarting the capacity results in all datasets being flushed from memory, allowing them to reload into memory as required (by queries or data refresh).
NOTE
For Premium Gen2 and Embedded Gen2, memory consumption does not need to be tracked. The only limitation in
Premium Gen2 and Embedded Gen2, is on the memory footprint of a single artifact. The footprint cannot exceed the
memory available on the capacity. For more information about Premium Gen2, see Power BI Premium Generation 2.
A similar pattern can sometimes be detected in background operations if they contribute to CPU saturation. A
Power BI administrator can look for a periodic spike in refresh times for a specific dataset, which can indicate
CPU saturation at the time (probably because of other ongoing dataset refreshes and/or interactive queries). In
this instance, referring to the System view in the app may not necessarily reveal that the CPU is at 100%. The
System view displays hourly averages, but the CPU can become saturated for several minutes of heavy
operations, which shows up as spikes in wait times.
There are more nuances to seeing the effect of CPU saturation. While the number of queries that wait is important, some query wait time will always happen without causing discernible performance degradation. Some datasets (with lengthier average query time, indicating complexity or size) are more prone to
the effects of CPU saturation than others. To easily identify these datasets, the Power BI administrator can look
for changes in the color composition of the bars in the Hourly Wait Time Distribution visual. After spotting
an outlier bar, they can look for the datasets that had query waits during that time and also look at the average
query wait time compared to average query duration. When these two metrics are of the same magnitude and
the query workload for the dataset is non-trivial, it is likely that the dataset is impacted by insufficient CPU.
This effect can be especially apparent when a dataset is consumed in short bursts of high frequency queries by
multiple users (for example, in a training session), resulting in CPU saturation during each burst. In this case,
significant query wait times on this dataset can be experienced as well as impacting on other datasets in the
capacity (noisy neighbor effect).
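The wait-versus-duration comparison described above can be expressed as a simple post-processing heuristic over metrics exported from the app. The following Python sketch is illustrative; the field names and thresholds are assumptions, not part of the app.

def cpu_saturation_suspects(rows, min_queries=1000):
    # Flag datasets whose average query wait time is of the same order of
    # magnitude as their average query duration, per the heuristic above.
    suspects = []
    for r in rows:  # each r: {"dataset", "queries", "avg_wait_ms", "avg_duration_ms"}
        if r["queries"] >= min_queries and r["avg_duration_ms"] > 0:
            ratio = r["avg_wait_ms"] / r["avg_duration_ms"]
            if 0.5 <= ratio <= 2.0:
                suspects.append(r["dataset"])
    return suspects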
In some cases, Power BI administrators can request that dataset owners create a less volatile query workload by creating a dashboard (which queries periodically with any dataset refresh for cached tiles) instead of a report. This can help prevent spikes when the dashboard is loaded. This solution may not always be possible for given business requirements; however, it can be an effective way to avoid CPU saturation without making changes to the dataset.
NOTE
For Premium Gen2 and Embedded Gen2, CPU time utilization is tracked on a per-artifact level, and is visible in the capacity utilization app. Each artifact displays its total CPU time utilization over a given timespan. For more information
about Premium Gen2, see Power BI Premium Generation 2.
Acknowledgments
This article was written by Peter Myers, Data Platform MVP and independent BI expert with Bitwise Solutions.
Next steps
Monitor Premium capacities with the app
Monitor capacities in the Admin portal
More questions? Try asking the Power BI Community
Power BI has released Power BI Premium Gen2, which improves the Power BI Premium experience in the following areas:
Performance
Per-user licensing
Greater scale
Improved metrics
Autoscaling
Reduced management overhead
For more information about Power BI Premium Gen2, see Power BI Premium Generation 2.
Monitoring Power BI capacities
5/23/2022 • 4 minutes to read
Monitoring Premium capacities provides administrators with an understanding of how the capacities are
performing. Capacities can be monitored by using the Power BI Admin portal or the Power BI Premium
Capacity Metrics (Power BI) app.
To learn more about all available metrics for each workload, see Monitor capacities in the Admin portal.
The monitoring capabilities in the Power BI Admin portal are designed to provide a quick summary of key
capacity metrics. For more detailed monitoring, it's recommended you use the Power BI Premium Capacity
Metrics app.
Interpreting metrics
Metrics should be monitored to establish a baseline understanding of resource usage and workload activity. If
the capacity becomes slow, it is important to understand which metrics to monitor, and the conclusions you can
make.
Ideally, queries should complete within a second to deliver responsive experiences to report users and enable
higher query throughput. It is usually of lesser concern when background processes - including refreshes - take
longer times to complete.
In general, slow reports can be an indication of an over-heating capacity. When reports fail to load, this is an
indication of an over-heated capacity. In either situation, the root cause could be attributable to many factors,
including:
Failed queries certainly indicate memory pressure, and that a model could not be loaded into memory.
The Power BI service will attempt to load a model for 30 seconds before failing.
Excessive query wait times can be due to several reasons:
The need for the Power BI service to first evict model(s) and then load the to-be-queried model (recall
that higher dataset eviction rates alone are not an indication of capacity stress, unless accompanied by
long query wait times that indicate memory thrashing).
Model load times (especially the wait to load a large model into memory).
Long running queries.
Too many LC/DQ connections (exceeding capacity limits).
CPU saturation.
Complex report designs with an excessive number of visuals on a page (recall that each visual is a
query).
Long query durations can indicate that model designs are not optimized, especially when multiple
datasets are active in a capacity, and just one dataset is producing long query durations. This suggests
that the capacity is sufficiently resourced, and that the in-question dataset is sub-optimal or just slow.
Long running queries can be problematic as they can block access to resources required by other
processes.
Long refresh wait times indicate insufficient memory due to many active models consuming memory,
or that a problematic refresh is blocking other refreshes (exceeding parallel refresh limits).
A more detailed explanation of how to use the metrics is covered in the Optimizing Premium capacities article.
Acknowledgments
This article was written by Peter Myers, Data Platform MVP and independent BI expert with Bitwise Solutions.
Next steps
Optimizing Premium capacities
Configure workloads in a Premium capacity
More questions? Try asking the Power BI Community
Monitor Premium capacities with the app
Monitoring your capacities is essential to making informed decisions on how best to utilize your Premium
capacity resources. You can monitor capacities in the Admin portal or with the Power BI Premium Capacity
Metrics app. This article describes using the Premium Capacity Metrics app, which provides the most in-depth
information about how your capacities are performing. For a higher-level overview of average use metrics over
the last seven days, you can use the Admin portal. To learn more about monitoring in the portal, see
Monitor Premium capacities in the Admin portal.
The app is updated regularly with new features and functionality. Make sure you're running the latest version.
When a new version becomes available, you'll receive a notification.
IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. See capacity and reliability notifications for more information.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2. Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. For more information, see Power BI Premium
Generation 2.
To review the Power BI Embedded Gen2 enhancements, refer to Power BI Embedded Generation 2.
NOTE
The metrics app cannot be used to monitor Premium Per User (PPU) activities or capacity.
System Summary
CPU Highest Utilization Capacity: The capacity with the maximum number of times CPU exceeded 80% of the thresholds in the past seven days.
CPU Highest Utilization Count: The number of times the CPU of the named capacity exceeded 80% of the thresholds in the past seven days.
Memory Max Utilization Capacity: The capacity with the maximum number of times the max memory limit was hit in the past seven days, split into three-minute buckets.
Memory Max Utilization Count: The number of times the named capacity reached the max memory limit in the past seven days, split into three-minute buckets.
Dataset Summary
Datasets Average Size (MB): Average size of datasets across all workspaces in your capacities.
Datasets Average Loaded Count: Average count of datasets loaded into memory.
Datasets - Average Active Dataset (%): Average percentage of active datasets in the past seven days. A dataset is defined as active if a user has interacted with its visuals within the past three minutes.
CPU - Datasets Max (%): Max CPU consumption by the dataset workload in the past seven days.
CPU - Datasets Average (%): Average CPU consumption by the dataset workload in the past seven days.
Memory - Datasets Average (GB): Average memory consumption by the dataset workload in the past seven days.
Memory - Datasets Max (GB): Max memory consumption by the dataset workload in the past seven days.
DirectQuery/Live Max Utilization Count: The most times that DirectQuery/Live connections exceeded 80% in the past seven days, split into one-hour buckets.
DirectQuery/Live Max Occurred Time: The time, in UTC, that DirectQuery/Live connections exceeded 80% the most times in an hour.
Refresh Reliability (%): Number of successful refreshes divided by the total number of refreshes in the past seven days.
Refreshes Average Wait Time (Minutes): Average amount of time before a refresh starts.
Queries Total: Total number of queries run in the past seven days.
Queries Total Wait Count: Total number of queries that had to wait before being executed.
Queries Average Wait Time (MS): Average time queries waited on system resources before being executed.
Dataflow Summary
Refreshes Average Duration (Minutes): The time taken to complete a refresh.
Refreshes Average Wait Times (Minutes): The lag between the scheduled time and the actual start of a refresh.
CPU - Dataflows Max (%): Max CPU consumption by the dataflows workload in the past seven days.
CPU - Dataflows Average (%): Average CPU consumption by the dataflows workload in the past seven days.
Memory - Dataflows Max (GB): Max memory consumption by the dataflows workload in the past seven days.
Memory - Dataflows Average (GB): Average memory consumption by the dataflows workload in the past seven days.
Paginated Report Summary
Views Total: Total number of times that all reports have been viewed by users.
Total Time: Total time taken by all phases (data retrieval, processing, and rendering) of all reports, in milliseconds.
CPU - Paginated Reports Max (%): Maximum CPU consumption by the paginated report workload in the past seven days.
CPU - Paginated Reports Average (%): Average CPU consumption by the paginated report workload in the past seven days.
Memory - Paginated Reports Max (GB): Maximum memory consumption by the paginated report workload in the past seven days.
Memory - Paginated Reports Average (GB): Average memory consumption by the paginated report workload in the past seven days.
AI Summary
AI Function Execution Reliability (%): Number of successful executions divided by the total number of executions in the past seven days.
CPU Max (%): Max CPU consumption by the AI workload in the past seven days.
Memory Max (GB): Max memory consumption by the AI workload in the past seven days.
AI Function Execution Max Wait Time (MS): Maximum amount of time before an execution starts.
AI Function Execution Average Wait Time (MS): Average amount of time before an execution starts.
AI Function Execution Max Duration (MS): Maximum amount of time to complete an execution.
AI Function Execution Average Duration (MS): Average amount of time to complete an execution.
Reports
Reports provide more detailed metrics. To see reports for capacities for which you are an admin, in Reports,
click Power BI Premium Capacity Metrics. Or, from the dashboard, click a metric cell to go to the underlying
report.
At the bottom of the report, there are six tabs:
Datasets - Provides detailed metrics on the health of the Power BI datasets in your capacities.
Paginated Reports - Provides detailed metrics on the health of the paginated reports in your capacities.
Dataflows - Provides detailed refresh metrics for dataflows in your capacities.
AI - Provides detailed metrics on the health of the AI functions used in your capacities.
Resource Consumption - Provides detailed resource metrics including memory and CPU high utilization.
IDs and Info - Names, IDs, and owners for capacities, workspaces, and workloads.
Each tab opens a page where you can filter metrics by capacity and date range. If no filters are selected, the
report defaults to show the past week’s metrics for all capacities that are reporting metrics.
Datasets
The Datasets page has different areas, which include Refreshes, Query Durations, Query Waits, and
Datasets. Use the buttons at the top of the page to navigate to different areas.
Refreshes area
Top 5 Datasets by Average Duration (minutes): The five datasets with the longest average refresh duration, in minutes.
Top 5 Datasets by Average Wait Time (minutes): The five datasets with the longest average refresh wait time, in minutes.
Hourly Refresh Count and Memory Consumption (GB): Successes, failures, and memory consumption, split into one-hour buckets, reported in UTC time.
Hourly Average Refresh Wait Times (minutes): The average refresh wait time, split into one-hour buckets, reported in UTC time. Multiple spikes with high refresh wait times are indicative of the capacity running hot.
Query durations area
Top 5 Datasets by Average Duration: The five datasets with the longest average query duration, in milliseconds.
Hourly Query Duration Distributions: Query counts and average duration (in milliseconds) vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.
DirectQuery / Live Connections (> 80% Utilization): The times that a DirectQuery or live connection exceeded 80% CPU utilization, split into one-hour buckets, reported in UTC time.
Query waits area
Query Wait Times: Data in this section is sliced by dataset, workspace, and hourly buckets in the past seven days.
Total: The total number of queries run for the dataset.
Wait count: The number of queries in the dataset that waited on system resources before starting execution.
Average: The average query wait time for the dataset, in milliseconds.
Max: The duration of the longest-waiting query in the dataset, in milliseconds.
Top 5 Datasets by Average Wait Time: The five datasets with the longest average wait time to start executing a query, in milliseconds.
Hourly Query Wait Time Distributions: Query wait counts and average wait time (in milliseconds) vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.
Datasets area
Dataset Sizes: Max size - the maximum size of the dataset in MB for the period shown.
Dataset Eviction Counts: Total - the total number of dataset evictions for each capacity. When a capacity faces memory pressure, the node evicts one or more datasets from memory. Datasets that are inactive (with no query/refresh operation currently executing) are evicted first, and then the eviction order is based on a measure of 'least recently used' (LRU).
Hourly Loaded Dataset Counts: The number of datasets loaded into memory vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.
Hourly Dataset Evictions and Memory Consumption: Dataset evictions vs. memory consumption in GB, split into one-hour buckets, reported in UTC time.
Paginated Reports
Overall usage: Includes the following metrics.
Total Views: The number of times that the report has been viewed by users.
Row Count: The number of rows of data in the report.
Retrieval (avg): The average amount of time it takes to retrieve data for the report, in milliseconds. Long durations can indicate slow queries or other data source issues.
Processing (avg): The average amount of time it takes to process the data for a report, in milliseconds.
Rendering (avg): The average amount of time it takes to render a report in the browser, in milliseconds.
Total time: The time it takes for all phases of the report, in milliseconds.
Top 5 Reports by Average Data Retrieval Time: The five reports with the longest average data retrieval time, in milliseconds.
Top 5 Reports by Average Report Processing Time: The five reports with the longest average report processing time, in milliseconds.
Hourly Results: Successes, failures, and memory consumption, split into one-hour buckets, reported in UTC time.
Hourly Durations: Data retrieval vs. processing and rendering time, split into one-hour buckets, reported in UTC time.
Dataflows
Top 5 Dataflows by Average Refresh Duration: The five dataflows with the longest average refresh duration, in minutes.
Top 5 Dataflows by Average Wait Time: The five dataflows with the longest average refresh wait time, in minutes.
Hourly Average Refresh Wait Times: The average refresh wait time, split into one-hour buckets, reported in UTC time. Multiple spikes with high refresh wait times are indicative of the capacity running hot.
Hourly Refresh Count and Memory Consumption: Successes, failures, and memory consumption, split into one-hour buckets, reported in UTC time.
AI
Hourly AI Function Execution and Average Wait Time: AI executions and average wait time, in milliseconds, split into one-hour buckets, reported in UTC time.
Resource Consumption
SKU and Workload Information: SKU and workload settings for the capacity.
Workspaces area
Paginated Reports: Names, workspace names, and IDs for all paginated reports.
Dataflows area
Dataflows: Dataflow names, workspace names, and IDs for all dataflows.
NOTE
You can monitor Power BI Embedded capacity usage in the app or the Azure portal, but not in the Power BI admin portal.
Next steps
Optimizing Power BI Premium capacities
More questions? Ask the Power BI Community
Power BI Premium Metrics app
You can use the Power BI Premium Metrics app to manage the health and capacity of your Power BI
Premium subscription. With the app, administrators use the Capacity Health Center to see and interact with
indicators that monitor the health of their Premium capacity. The Metrics app consists of the landing page,
called the Capacity Health Center, and detail pages for three important metrics:
Active memory
Query waits
Refresh waits
The following sections describe the landing page, and the three metrics report pages, in detail.
IMPORTANT
If your Power BI Premium capacity is experiencing high resource usage, resulting in performance or reliability issues, you
can receive notification emails to identify and resolve the issue. This can be a streamlined way to troubleshoot overloaded
capacities. See capacity and reliability notifications for more information.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 simplifies the
management of Premium capacities, and reduces management overhead. In particular, it greatly reduces the metrics
administrators must monitor (CPU only) to ensure performance and users’ experience. For more information, see Power BI
Premium Generation 2.
NOTE
The metrics app cannot be used to monitor Premium Per User (PPU) activities or capacity.
From the landing page, you can select the Power BI Premium capacity you want to view, in case your
organization has multiple Premium subscriptions. To view a Premium capacity, select the dropdown near the top
of the page called Select a capacity to see its metrics.
The three KPIs show the current health of the selected Premium capacity, based on the settings applied to each
of the three KPIs.
To view specifics about each KPI, select the Explore button at the bottom of each KPI's visual, and its detail page
is displayed. The following sections describe each KPI and the details its page provides.
The troubleshooting guides, associated with each scenario, provide detailed explanations about what the metrics
mean, so you can better understand the state of the capacity, and what can be done to mitigate any issues.
Those two scenarios are described in the following sections.
Scenario one - current load is too high
To determine whether there's enough memory for the capacity to complete its workloads, consult the first visual
on the page, A: Consumed Memory Percentages, which displays the memory consumed by datasets that are
being actively processed and thus cannot be evicted.
The alarm threshold, which is the red dotted line, marks incidents of 90% memory consumption.
The warning threshold, which is the yellow dotted line, marks incidents of 70% memory consumption.
The black dotted line indicates the memory usage trendline, based on the current capacity's memory usage over
the course of the graph timeline.
High occurrences of active memory above the alarm threshold (red dotted line) and memory trendline (black
dotted line) indicates memory capacity pressure, possibly preventing additional datasets from being loaded into
memory during that time.
When you see such cases, look carefully at the other charts on the page to determine what is consuming so
much memory and why, and whether to load balance, optimize, or, if necessary, scale up the capacity.
The second visual on the page, B: Hourly loaded active datasets, displays the maximum number of datasets
that were loaded in memory, in hourly buckets.
The third visual, C: Why datasets are in memory, is a table that lists each dataset by workspace name and
dataset name, shows the dataset's uncompressed size in memory, and explains why it's loaded in memory (such
as being refreshed or queried against, or both).
Diagnosing scenario one
Consistently high active memory utilization may force datasets that are actively being used to be
evicted, or can prevent new datasets from being able to load. The following steps can help you diagnose problems:
1. Have a look at chart A: Consumed memory percentages
a. If Chart A shows the alarm threshold (90%) is crossed many times and/or for consecutive hours, then
your capacity is running low on memory too frequently. In the chart below, we can see the warning
threshold (70%) was crossed four times.
b. The chart titled B: Hourly loaded active datasets shows the maximum number of unique datasets
loaded in memory by hourly buckets. Selecting a bar in the visual will cross filter the reasons datasets are
in memory visual.
c. Consult the Why datasets are in memory table to see a list of the datasets that were loaded in
memory. Sort by Dataset Size (MB) to highlight the datasets taking up the most memory. Capacity
operations are classified as either interactive or background. Interactive operations include rendering
requests and responding to user interactions (filtering, Q&A querying, and so on). Total queries and total
refreshes provide an idea of whether there are interactive (queries) heavy or background (refreshes)
operations done on the dataset. It's important to understand that interactive operations are always
prioritized over background operations to ensure the best possible user experience. If there are
insufficient resources, background operations are added to a queue, and are processed once resources
free up. Background operations, such as dataset refreshes and AI functions, can be stopped mid-process
by the Power BI service and added to a queue.
Diagnosing scenario two
1. Look again at chart A: Consumed memory percentages:
a. If the chart shows an upward slope, memory consumption has increased over the past seven days.
b. Assuming the current growth continues, predict when the trend line will cross the warning threshold (the
yellow dashed line).
c. Keep checking the trend line at least every two days to see whether the trend is continuing.
Remedies for scenario two
You can take the following steps to remedy the problems associated with scenario two:
1. Scale up the capacity - scaling up to the next SKU makes available twice the memory of the current
SKU, thus alleviating any memory pressure the capacity is currently experiencing.
2. Move datasets to another capacity - if you have another capacity that has more memory available,
you can move the workspaces that contain the larger datasets to that capacity.
The gauge in this visual shows that in the last seven days from the time the report was last refreshed, 17.32% of
the queries waited more than 100 milliseconds.
To learn the details of the Query waits KPI, click the Explore button to display a report page with visualizations
of the relevant metrics, and a troubleshooting guide in the right column of the page. The troubleshooting guide
has two scenarios, each providing detailed explanations of the metric, the state of the capacity, and what you
can do to mitigate the issue.
We discuss each query waits scenario, in turn, in the following sections.
Scenario one - long running queries consume CPU
In scenario one, long running queries are taking up too much CPU.
You can investigate whether poor report performance is caused by an overloaded capacity, or due to a poorly
designed dataset or report. There are several reasons why a query can run for an extended period, which is
defined as taking more than 10 seconds to finish executing. Dataset size and complexity, as well as query
complexity are just a few examples of what can cause a long running query.
On the report page, the following visuals appear:
The top table, titled A: High wait times, lists the datasets with queries that are waiting.
B: Hourly high wait time distributions shows the distribution of high wait times.
The chart titled C: Hourly long query counts displays the count of long running queries that were
executed, split into hourly buckets.
The last visual, table D: Long running queries, lists the long running queries and their stats.
There are steps you can take to diagnose and remedy issues with query wait times, described next.
Diagnosing scenario one
First, you can determine if long running queries are occurring when your queries are waiting.
Look at Chart B, which shows the count of queries that are waiting more than 100 ms. Select one of the
columns that shows a high number of waits.
When you click on a column with high wait times, Chart C is filtered to show the count of long-running queries
that executed during that time, shown in the following image:
In addition, Chart D is also filtered to show the queries that were long running during that selected time
period.
Diagnosing scenario two
Once you've selected a dataset with a high wait time, Chart B is filtered to show the wait time distributions for
queries on that dataset, over the past seven days. Next, select one of the columns from Chart B.
Chart C is then filtered to show the queue length at the time selected from Chart B.
If the length of the queue has crossed the threshold of 20, then it's likely that the queries in the selected dataset
are delayed because too many queries are trying to execute at the same time.
This gauge shows that in the last seven days from the last report refresh, 3.18% of the refreshes waited
more than 10 minutes.
To learn the details of the Refresh waits KPI, click the Explore button, which presents a page with metrics and a
troubleshooting guide on the right column of the report page. The guide provides detailed explanations about
the metrics on the page, and helps you understand the state of the capacity, and what you can do to mitigate any
issues.
There are two scenarios explained, which you can show on the report page by selecting Scenario 1 or Scenario 2
on the page. We discuss each scenario in turn, in the following sections.
Scenario one - not enough memory
In scenario one, there isn't enough available memory to load the dataset. There are two situations that result in a
refresh being throttled during low memory conditions:
1. Not enough memory to load the dataset.
2. The refresh was canceled due to a higher priority operation.
The priority for loading datasets is the following:
1. Interactive query
2. On-demand refresh
3. Scheduled refresh
If there isn't enough memory to load a dataset for an interactive query, scheduled refreshes are stopped and
their datasets unloaded until sufficient memory becomes available. If that doesn't free up enough memory, then
on-demand refreshes are stopped and their datasets are unloaded.
Diagnosing scenario one
To diagnose scenario one, first determine whether throttling is due to insufficient memory. The steps to do so
are the following.
1. Select the dataset you're interested in from Table A by clicking on it:
a. When a dataset is selected in Table A, Chart B is filtered to show when waiting occurred.
b. Chart C is then filtered to show any throttling, explained in the next step.
2. Look at the results in the now-filtered Chart C. If the chart shows out of memory throttling occurred at
the times the dataset was waiting, then the dataset was waiting due to low memory conditions.
3. Finally, check Chart D, which shows the types of refreshes that were occurring, scheduled versus on-
demand. Any on-demand refreshes occurring at the same time could be the cause of the throttling.
Diagnosing scenario two
To determine whether refreshes are waiting because there isn't enough available CPU, follow the same steps:
1. Select the dataset you're interested in from Table A by clicking on it:
a. When a dataset is selected in Table A, Chart B is filtered to show when waiting occurred.
b. Chart C is then filtered to show any throttling, explained in the next step.
2. Look at the results in the now-filtered Chart C. If the chart shows max concurrency occurred at the times
the dataset was waiting, then the dataset was waiting due to not enough available CPU.
3. Finally, check Chart D, which shows the types of refreshes that were occurring, scheduled versus on-
demand. Any on-demand refreshes occurring at the same time could be the cause of the throttling.
Next steps
What is Power BI Premium?
Microsoft Power BI Premium whitepaper
Planning a Power BI Enterprise Deployment whitepaper
Extended Pro Trial activation
Power BI Embedded FAQ
More questions? Try asking the Power BI Community
Restart a Power BI Premium capacity
As a Power BI administrator, you might need to restart a Premium capacity. This article explains how to restart a
capacity and addresses several questions about restart and performance.
NOTE
Power BI Premium recently released a new version of Premium, called Premium Gen2 . Premium Gen2 capacities do not
require restarts, so this feature is not available in Premium Gen2.
Embedded Gen2 capacities also don't require restart. To review the Power BI Embedded Gen2 enhancements, refer to
Power BI Embedded Generation 2.
NOTE
This process and functionality do not apply to Power BI Premium Per User (PPU) capacities or activities.
Next steps
What is Power BI Premium?
More questions? Try asking the Power BI Community
Governance and deployment approaches
Over the last few decades, companies have become increasingly aware of the need to leverage data assets to
profit from market opportunities. Either by performing competitive analysis or by understanding operational
patterns, many organizations now understand they can benefit from having a data strategy as a way to stay
ahead of their competition.
The Planning a Power BI Enterprise Deployment whitepaper provides a framework for increasing the return on
investment related to Power BI as companies increasingly seek to become more data-savvy.
Business Intelligence practitioners typically define data-savvy companies as those that benefit from the use of
factual information to support decision making. We even describe certain organizations as having a data
culture. Whether at the organizational level, or at a departmental level, a data culture can positively alter a
company’s ability to adapt and thrive. Data insights don't always have to be at enterprise scope to be far-
reaching: small operational insights that alter day-to-day operations can be transformational as well.
For these companies, there is an understanding that facts, and fact analysis, must drive how business
processes are defined. Team members attempt to seek data, identify patterns, and share findings with others.
This approach can be useful regardless of whether the analysis is done over external or internal business factors.
It is first and foremost a perspective, not a process. Read the whitepaper to learn about concepts, options, and
suggestions for governance within the Power BI ecosystem.
Metadata scanning
Metadata scanning facilitates governance over your organization's Power BI data by making it possible to
quickly catalog and report on all the metadata of your organization's Power BI artifacts. It accomplishes this
using a set of Admin REST APIs that are collectively known as the scanner APIs. With the scanner APIs, you can
extract both general information such as artifact name, owner, sensitivity label, endorsement status, and last
refresh, as well as more detailed metadata such as dataset table and column names, measures, DAX expressions,
mashup queries, etc.
The following are the scanner APIs. They support both public and sovereign clouds.
GetModifiedWorkspaces
WorkspaceGetInfo
WorkspaceScanStatus
WorkspaceScanResult
Before metadata scanning can be run, a Power BI admin needs to set it up. See Setting up metadata scanning in
an organization.
NOTE
No more than 16 calls can be made simultaneously. The caller should wait for a scan succeeded/failed response from the
scanStatus API before invoking another call.
If some metadata you expected to receive is not returned, check with your Power BI admin to make sure they have
enabled all relevant admin switches.
Step 1: Perform a full scan
Use the URI from the location header you received from calling workspaces/getInfo and poll on
workspaces/scanStatus/{scan_id} until the status returned is "Succeeded". This means the scan result is ready. It
is recommended to use a polling interval of 30-60 seconds. In the location header, you'll also receive the URI to
call in the next step. Use it only once the status is "Succeeded".
Use the URI from the location header you received from calling workspaces/scanStatus/{scan_id} and read the
data using workspaces/scanResult/{scan_id}. The data contains the list of workspaces, artifact info, and other
metadata based on the parameters passed in the workspaces/getInfo call.
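To make the flow concrete, the following is a minimal PowerShell sketch of a full scan. It assumes you already
hold a valid admin access token in $token and a batch of up to 100 workspace IDs in $workspaceIds; both
variable names are placeholders, and error handling is omitted.

# Minimal full-scan sketch (assumes $token and $workspaceIds are already defined).
$headers = @{ Authorization = "Bearer $token" }
$base = "https://api.powerbi.com/v1.0/myorg/admin"

# Trigger a scan for up to 100 workspaces per getInfo call.
$body = @{ workspaces = $workspaceIds } | ConvertTo-Json
$scan = Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Uri "$base/workspaces/getInfo?lineage=True&datasourceDetails=True" -Body $body

# Poll scanStatus every 30-60 seconds until the scan succeeds.
do {
    Start-Sleep -Seconds 30
    $status = Invoke-RestMethod -Method Get -Headers $headers -Uri "$base/workspaces/scanStatus/$($scan.id)"
} while ($status.status -ne "Succeeded")

# Read the scan result: workspaces, artifact info, and other metadata.
$result = Invoke-RestMethod -Method Get -Headers $headers -Uri "$base/workspaces/scanResult/$($scan.id)"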
Step 2: Perform an incremental scan
Now that you have all the workspaces and the metadata and lineage of their assets, it's recommended that you
perform only incremental scans that reference the previous scan that you did.
Call workspaces/modified with the modifiedSince parameter set to the start time of the last scan in order to
get the workspaces that have changed and which therefore require another scan. The modifiedSince parameter
should be set for a date within the last 30 days.
Divide this list into chunks of up to 100 workspaces, and get the data for these changed workspaces using the
three API calls, workspaces/getInfo, workspaces/scanStatus/{scan_id}, and workspaces/scanResult/{scan_id}, as
described in Step 1 above.
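As a sketch, the incremental loop might look like the following, reusing $headers and $base from the previous
example; $lastScanTime is a placeholder for the stored start time of your previous scan.

# Incremental-scan sketch (assumes $headers and $base from above; $lastScanTime is ISO 8601, within 30 days).
$changed = Invoke-RestMethod -Method Get -Headers $headers -Uri "$base/workspaces/modified?modifiedSince=$lastScanTime"

# Process the changed workspaces in chunks of up to 100 IDs.
for ($i = 0; $i -lt $changed.Count; $i += 100) {
    $chunk = $changed[$i..([Math]::Min($i + 99, $changed.Count - 1))] | ForEach-Object { $_.id }
    # For each chunk, call workspaces/getInfo, poll workspaces/scanStatus/{scan_id},
    # and read workspaces/scanResult/{scan_id}, as in the Step 1 sketch above.
}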
Licensing
Metadata scanning requires no special license. It works for all of your tenant metadata, including that of artifacts
that are located in non-Premium workspaces.
Next steps
Power BI REST Admin APIs
Set up metadata scanning
Enable service principal authentication for read-only admin APIs
More questions? Try asking the Power BI Community
Data protection in Power BI
Overview
Power BI plays a key role in bringing data insights to everyone in an organization. However, as data becomes
more accessible to inform decisions, risk of accidental oversharing or misuse of business-critical information
increases.
Microsoft has world-class security capabilities to help protect customers from threats. At Microsoft, over 3,500
security researchers, along with sophisticated AI models, reason every day over more than 6.5 trillion signals
globally to help protect customers against threats.
Data protection capabilities in Power BI build on Microsoft's strengths in security and enable customers to
empower every user with Power BI and better protect their data no matter how or where it is accessed.
The pillars of Power BI's data protection capabilities and how they help you protect your organization's sensitive
data are listed below:
Microsoft Information Protection sensitivity labels
Classify and label sensitive Power BI data using Microsoft Information Protection sensitivity
labels used in Office and other Microsoft products.
Enforce governance policies even when Power BI content is exported to Excel, PowerPoint,
PDF, and other supported export formats to help ensure data is protected even when it leaves Power
BI.
Microsoft Defender for Cloud Apps
Monitor and protect user activity on sensitive data in real time with alerts, session
monitoring, and risk remediation using Defender for Cloud Apps.
Empower security administrators who use data protection reports and security investigation
capabilities with Defender for Cloud Apps to enhance organizational oversight.
Microsoft 365 data loss prevention
Data loss prevention policies for Power BI enable central security teams to use Microsoft 365
data loss prevention policies to enforce the organization’s DLP policies on Power BI. DLP policies for
Power BI currently support detection of sensitive info types and sensitivity labels on datasets, and can
trigger automatic risk remediation actions such as alerts to security admins in Microsoft 365
compliance portal and policy tips for end users.
Read more about Microsoft Information Protection sensitivity labels, Microsoft Defender for Cloud Apps, and
Microsoft 365 data loss prevention.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!
Next steps
Learn about sensitivity labels in Power BI and how to use them
Set up and use Defender for Cloud Apps controls in Power BI
Learn about data loss prevention
Microsoft Business Applications Summit video session - Power BI and Microsoft Information Protection - The
game changer for secure BI
Sensitivity labels in Power BI
This article describes the functionality of Microsoft Information Protection sensitivity labels in Power BI.
For information about enabling sensitivity labels on your tenant, including licensing requirements and
prerequisites, see Enable data sensitivity labels in Power BI.
For information about how to apply sensitivity labels on your Power BI content and files, see How to apply
sensitivity labels in Power BI.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!
Introduction
Microsoft Information Protection sensitivity labels provide a simple way for your users to classify critical content
in Power BI without compromising productivity or the ability to collaborate. They can be applied in both Power
BI Desktop and the Power BI service, making it possible to protect your sensitive data from the moment you first
start developing your content on through to when it's being accessed from Excel via a live connection.
Sensitivity labels are retained when you move your content back and forth between Desktop and the service in
the form of .pbix files.
In the Power BI service, sensitivity labels can be applied to datasets, reports, dashboards, and dataflows. When
labeled data leaves Power BI, either via export to Excel, PowerPoint, PDF, or .pbix files, or via other supported
export scenarios such as Analyze in Excel or live connection PivotTables in Excel, Power BI automatically applies
the label to the exported file and protects it according to the label's file encryption settings. This way your
sensitive data can remain protected, even when it leaves Power BI.
In addition, sensitivity labels can be applied to .pbix files in Power BI Desktop, so that your data and content is
safe when it’s shared outside Power BI (for example, so that only users within your organization can open a
confidential .pbix that has been shared or attached in an email), even before it has been published to the Power
BI service. See Restrict access to content by using sensitivity labels to apply encryption for more detail.
Sensitivity labels on reports, dashboards, datasets, and dataflows are visible from many places in the Power BI
service. Sensitivity labels on reports and dashboards are also visible in the Power BI iOS and Android mobile
apps and in embedded visuals. In Desktop, you can see the sensitivity label in the status bar.
A protection metrics report available in the Power BI admin portal gives Power BI admins full visibility over the
sensitive data in the Power BI tenant. In addition, the Power BI audit logs include sensitivity label information
about activities such as applying, removing, and changing labels, as well as about activities such as viewing
reports, dashboards, etc. This gives Power BI and security admins visibility over sensitive data consumption for
the purposes of monitoring and investigating security alerts.
Important considerations
In the Power BI service, sensitivity labeling does not affect access to content. Access to content in the service is
managed solely by Power BI permissions. While the labels are visible, any associated encryption settings
(configured in the Microsoft 365 compliance center) aren’t applied. They’re applied only to data that leaves the
service via a supported export path, such as export to Excel, PowerPoint, or PDF, and download to .pbix.
In Power BI Desktop, sensitivity labels with encryption settings do affect access to content. If a user doesn’t have
sufficient permissions according to the encryption settings of the sensitivity label on the .pbix file, they will not
be able to open the file. In addition, in Desktop, when you save your work, any sensitivity label you've added and
its associated encryption settings will be applied to the saved .pbix file.
Sensitivity labels and file encryption are not applied in non-supported export paths. The Power BI admin can
block export from non-supported export paths.
NOTE
Users who are granted access to a report are granted access to the entire underlying dataset, unless row-level security
(RLS) limits their access. Report authors can classify and label reports using sensitivity labels. If the sensitivity label has
protection settings, Power BI applies these protection settings when the report data leaves Power BI via a supported
export path such as export to Excel, PowerPoint, or PDF, download to .pbix, and Save (Desktop). Only authorized users
will be able to open protected files.
NOTE
When using Download the .pbix in the Power BI service, if the downloaded report and its dataset have different labels,
the more restrictive label will be applied to the .pbix file.
NOTE
Some limitations may apply. See Considerations and limitations.
If you apply a sensitivity label in Desktop, when you publish your work to the service, or when you upload a
.pbix file of that work to the service, the label travels with the data into the service. In the service, the label will
be applied to both the dataset and the report that you get with the file. If the dataset and report already have
sensitivity labels, you can choose to keep those labels or to overwrite them with the label coming from Desktop.
If you upload a .pbix file that has never been published to the service before, and that has the same name as a
report or dataset that already exists on the service, the upload will succeed only if the uploader has the RMS
permissions necessary to change the label.
The same is also true in the opposite direction - when you download to .pbix in the service and then load the
.pbix into Desktop, the label that was in the service will be applied to the downloaded .pbix file and from there
be loaded into Desktop. If the report and dataset in the service have different labels, the more restrictive of the
two will be applied to the downloaded .pbix file.
When you apply a label in Desktop, it shows up in the status bar.
Learn how to apply sensitivity labels to Power BI content and files.
NOTE
If for any reason the sensitivity label can't be applied on the new report or dashboard, Power BI will not block creation of
the new item.
Sensitivity labels and protection aren't applied when data is exported to .csv files or any other unsupported
export path.
Applying a sensitivity label and protection to an exported file doesn't add content marking to the file. However, if
the label is configured to apply content markings, the markings are automatically applied by the Azure
Information Protection unified labeling client when the file is opened in Office desktop apps. The content
markings aren’t automatically applied when you use built-in labeling for desktop, mobile, or web apps. See
When Office apps apply content marking and encryption for more detail.
Export fails if a label can't be applied when data is exported to a file. To check if export failed because the label
couldn't be applied, click the report or dashboard name at the center of the title bar and see whether it says
"Sensitivity label can't be loaded" in the info dropdown that opens. This can happen as the result of a temporary
system issue, or if the applied label has been unpublished or deleted by the security admin.
Sensitivity labels in Excel that were manually set aren’t automatically overwritten by the dataset's sensitivity
label. Rather, a banner notifies you that the dataset has a sensitivity label and recommends that you apply it.
NOTE
If the dataset's sensitivity label is less restrictive than the Excel file's sensitivity label, no label inheritance or update takes
place. An Excel file never inherits a less restrictive sensitivity label.
Supported clouds
Sensitivity labels are supported for tenants in global (public) clouds, and the following national clouds:
US Government: GCC, GCC High, DoD
China
Sensitivity labels are not currently supported in other national clouds.
IMPORTANT
If your organization uses Azure Information Protection sensitivity labels, you need to migrate them to the
Microsoft Information Protection Unified Labeling platform in order for the labels to be used in Power BI.
Next steps
This article provided an overview of data protection in Power BI. The following articles provide more details
about data protection in Power BI.
Enable sensitivity labels in Power BI
How to apply sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Protection metrics report
Enable sensitivity labels in Power BI
In order for Microsoft Information Protection sensitivity labels to be used in Power BI, they must be enabled on
the tenant. This article shows Power BI admins how to do this. For an overview about sensitivity labels in Power
BI, see Sensitivity labels in Power BI. For information about applying sensitivity labels in Power BI, see Applying
sensitivity labels.
When sensitivity labels are enabled:
Specified users and security groups in the organization can classify and apply sensitivity labels to their Power
BI content. In the Power BI service, this means their reports, dashboards, datasets, and dataflows. In Power BI
Desktop, it means their .pbix files.
In the service, all members of the organization will be able to see those labels. In Desktop, only members of
the organization who have the labels published to them will be able to see the labels.
Enabling sensitivity labels requires an Azure Information Protection license. See Licensing and requirements for
detail.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!
NOTE
If your organization uses Azure Information Protection sensitivity labels, they need to be migrated to the
Microsoft Information Protection Unified Labeling platform in order for them to be used in Power BI. Learn
more about migrating sensitivity labels.
To be able to apply labels to Power BI content and files, a user must have a Power BI Pro or Premium Per
User (PPU) license in addition to one of the Azure Information Protection licenses mentioned above.
Office apps have their own licensing requirements for viewing and applying sensitivity labels.
Before enabling sensitivity labels on your tenant, make sure that sensitivity labels have been defined and
published for relevant users and groups. See Create and configure sensitivity labels and their policies for
detail.
Customers in China must enable rights management for the tenant and add the Microsoft Information
Protection Sync Service service principal, as described in steps 1 and 2 under Configure Azure
Information Protection for customers in China.
Using sensitivity labels in Desktop requires the Desktop December 2020 release and later.
NOTE
If you try to open a protected .pbix file with a Desktop version earlier than December 2020, it will fail, and you will
be prompted to upgrade your Desktop version.
Troubleshooting
Power BI uses Microsoft Information Protection sensitivity labels. Thus if you encounter an error message when
trying to enable sensitivity labels, it might be due to one of the following:
You do not have an Azure Information Protection license.
Sensitivity labels have not been migrated to the Microsoft Information Protection version supported by
Power BI.
No Microsoft Information Protection sensitivity labels have been defined in the organization.
Next steps
This article described how to enable sensitivity labels in Power BI. The following articles provide more details
about data protection in Power BI.
Overview of sensitivity labels in Power BI
How to apply sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Protection metrics report
How to apply sensitivity labels in Power BI
Microsoft Information Protection sensitivity labels on your reports, dashboards, datasets, dataflows, and .pbix
files can guard your sensitive content against unauthorized data access and leakage. Labeling your data
correctly with sensitivity labels ensures that only authorized people can access your data. This article shows you
how to apply sensitivity labels in the Power BI service and in Power BI Desktop.
For more information about sensitivity labels in Power BI, see Sensitivity labels in Power BI.
Give us your feedback
The product team would love to get your feedback about Power BI's information protection capabilities and its
integration with Microsoft Information Protection sensitivity labels. Help us meet your information protection
needs! Thanks!
NOTE
If the label is greyed out, you may not have the correct usage rights to change the label. If you need to change a
sensitivity label and can't, either ask the person who applied the label in the first place to modify it, or contact the
Microsoft 365/Office security administrator and request the necessary usage rights for the label.
NOTE
This video might use earlier versions of Power BI Desktop or the Power BI service.
To apply a sensitivity label to the file you're working on, click the Sensitivity button on the Home tab and
choose the desired label from the menu that appears.
NOTE
If the sensitivity button is greyed out, it may indicate that you don't have an appropriate license or that you do not
belong to a security group that has permissions to apply sensitivity labels, as described in Enable sensitivity labels in
Power BI.
If a particular label you wish to change is greyed out, you may not have the correct usage rights to change that label. If
you need to change a sensitivity label and can't, either ask the person who applied the label in the first place to modify it,
or contact the Microsoft 365/Office security administrator and request the necessary usage rights for the label.
After you've applied the label, it will be visible in the status bar.
Sensitivity labels when uploading or downloading .pbix files to/from the service
When you publish a .pbix file to the Power BI service from Desktop, or when you upload a .pbix file to the
Power BI service directly using Get data , the .pbix file's label gets applied to both the report and the dataset
that are created in the service. If the .pbix file you're publishing or uploading replaces existing assets (that is,
assets that have the same name as the .pbix file), a dialog will prompt you to choose whether to keep the labels
on those assets or to have the .pbix file's label overwrite them. If the .pbix file is unlabeled, the labels in the
service will be retained.
When using "Download to .pbix" in the Power BI service, if the report and dataset being downloaded both
have labels, and those labels are different, the label that will be applied to the .pbix file is the more restrictive
of the two.
Next steps
This article described how to apply sensitivity labels in Power BI. The following articles provide more details
about data protection in Power BI.
Overview of sensitivity labels in Power BI
Enable sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Default label policy for Power BI
To help ensure comprehensive protection and governance of sensitive data, organizations can create default
label policies for Power BI that automatically apply default sensitivity labels to unlabeled content.
This article describes how to enable a default label policy, both in the Microsoft 365 compliance center and by
using the Security & Compliance Center PowerShell setLabelPolicy API.
NOTE
The default label policy settings for Power BI are independent of the default label policy settings for files and email.
For existing policies, it is also possible to enable default label policies for Power BI using the Security &
Compliance Center PowerShell setLabelPolicy API.
Set-LabelPolicy -Identity "<default label policy name>" -AdvancedSettings @{powerbidefaultlabelid="<LabelId>"}
Where:
<default label policy name> = the name of the policy whose associated sensitivity label you want to be
applied by default to unlabeled content in Power BI.
<LabelId> = the ID (GUID) of the sensitivity label to be applied by default.
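For illustration, a call with placeholder values (a hypothetical policy name and label GUID) might look like this:

# Hypothetical example; substitute your own policy name and label GUID.
Set-LabelPolicy -Identity "Global Policy" -AdvancedSettings @{powerbidefaultlabelid="3e8f1c70-0d38-4746-b0a2-b57d1c4a2a1c"}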
IMPORTANT
If a user has more than one label policy, the default label setting is always taken from the policy with the highest priority,
so be sure to configure the default label on that policy.
Next steps
Mandatory label policy for Power BI
Sensitivity labels in Power BI
Data protection metrics report
Audit schema for sensitivity labels in Power BI
Mandatory label policy for Power BI
To help ensure comprehensive protection and governance of sensitive data, you can require your organization's
Power BI users to apply sensitivity labels to content they create or edit in Power BI. You do this by enabling, in
their sensitivity label policies, a special setting for mandatory labeling in Power BI. This article describes the user
actions that are affected by a mandatory labeling policy, and explains how to enable a mandatory labeling policy
for Power BI.
NOTE
The mandatory label policy setting for Power BI is independent of the mandatory label policy setting for files and email.
Mandatory labeling in Power BI is not supported for service principals and APIs. Service principals and APIs are not
subject to mandatory label policies.
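The Set-LabelPolicy command that the Where: block below describes appears to have been lost from this page.
Based on the pattern shown for the default label policy above, it takes roughly the following form; verify the
powerbimandatory advanced setting name against the Set-LabelPolicy documentation linked below:

# Sketch: enable mandatory labeling in Power BI on an existing label policy.
Set-LabelPolicy -Identity "<policy name>" -AdvancedSettings @{powerbimandatory="true"}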
Where:
policy name = the name of the policy where you want to set labeling in Power BI as mandatory.
Requirements for using PowerShell
You need the EXO V2 module to run this command. For more information, see About the Exchange Online
PowerShell V2 module
A connection to the Microsoft 365 compliance center is also required. For more information, see Connect to
Security & Compliance Center PowerShell using the EXO V2 module
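As a minimal sketch, establishing that connection typically looks like the following; the user principal name is a
placeholder.

# Connect to Security & Compliance Center PowerShell with the EXO V2 module (placeholder UPN).
Import-Module ExchangeOnlineManagement
Connect-IPPSSession -UserPrincipalName admin@contoso.onmicrosoft.com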
Documentation
Admin Guide: Custom configurations for the Azure Information Protection unified labeling client
Create and configure sensitivity labels and their policies
Set-LabelPolicy documentation
Next steps
Default label policy for Power BI
Sensitivity labels in Power BI
Data protection metrics report
Audit schema for sensitivity labels in Power BI
Sensitivity label downstream inheritance
When a sensitivity label is applied to a dataset or report in the Power BI service, it is possible to have the label
trickle down and be applied to content that is built from that dataset or report as well. For datasets, this means
other datasets, reports, and dashboards. For reports, this means dashboards. This capability is called
downstream inheritance.
Downstream inheritance is a critical link in Power BI’s end-to-end information protection solution. Together with
inheritance from data sources, inheritance upon creation of new content, inheritance upon export to file, and
other capabilities for applying sensitivity labels, downstream inheritance helps ensure that sensitive data
remains protected throughout its journey through Power BI, from data source to point of consumption.
Downstream inheritance is illustrated below using lineage view. When a label is applied to the dataset
“Customer profitability”, that label filters down and also gets applied to the dataset’s downstream content – the
reports that are built using that dataset, and, in this case, a dashboard that is built from visuals from one of
those reports.
IMPORTANT
Downstream inheritance never overwrites labels that were applied manually.
Downstream inheritance never overwrites a label with a less restrictive label.
By default, the checkbox is selected. This means that when the user applies a sensitivity label to a dataset or
report, the label will filter down to its downstream content. For each downstream item, the label will be applied
only if:
The user who applied or changed the label has Power BI edit permissions on the downstream item (that is,
the user is an admin, member, or contributor in the workspace where the downstream item is located).
The user who applied or changed the label is authorized to change the sensitivity label that already exists on
the downstream item.
Clearing the checkbox prevents the label from being inherited downstream.
Fully automated downstream inheritance
In fully automated mode, a label applied to either a dataset or report will automatically be propagated and
applied to the dataset or report’s downstream content, without regard to edit permissions on the downstream
item and the usage rights on the label.
Next steps
Sensitivity label overview
Label change enforcement
Sensitivity label inheritance from data sources (preview)
Power BI datasets that connect to sensitivity-labeled data in supported data sources can inherit those labels, so
that the data remains classified and secure when brought into Power BI.
Currently supported data sources:
Excel
Azure Synapse Analytics (formerly SQL Data Warehouse)
Azure SQL Database
To be operative, sensitivity label inheritance from data sources must be enabled on the tenant.
Requirements
The data in the data source must be labeled with Microsoft Information Protection labels.
For Azure Synapse Analytics and Azure SQL Database, this is accomplished using a two-step Purview
flow:
1. Automatically apply sensitivity labels to your data.
2. Classify your Azure SQL data using Azure Purview labels.
The scope of the labels must be Files and emails and Azure Purview assets. See Extending
sensitivity labels to Azure Purview and Creating new sensitivity labels or modifying existing labels.
Sensitivity labels must be enabled in Power BI.
The Apply sensitivity labels from data sources to their data in Power BI (preview) tenant admin
setting must be enabled.
All conditions for applying a label must be met.
Inheritance behavior
In the Power BI service, when the dataset is connected to the data source, Power BI inherits the label and
applies it automatically to the dataset. Subsequently, inheritance occurs upon dataset refresh. In Power BI
Desktop, when you connect to the data source via Get data , Power BI inherits the label and automatically
applies it to the .pbix file (both the dataset and report). Subsequently inheritance occurs upon refresh.
If the data source has sensitivity labels of different degrees, the most restrictive is chosen for inheritance. In
order to be applied, that label (the most restrictive) must be published for the dataset owner.
Labels from data sources never overwrite manually applied labels.
Less restrictive labels from the data source never overwrite more restrictive labels on the dataset.
In Desktop, if the incoming label is more restrictive than the label currently applied in Desktop, a
banner appears recommending that the user apply the more restrictive label.
Dataset refresh will succeed even if the label from the data source is not applied for some reason.
NOTE
No inheritance takes place if the dataset owner is not authorized to apply sensitivity labels in Power BI, or if the specific
label in question has not been published for the dataset owner.
Next steps
Enable sensitivity label inheritance from data sources
Sensitivity label overview
Sensitivity label change enforcement
Power BI restricts permission to change or remove Microsoft Information Protection sensitivity labels that have
file encryption settings to authorized users only.
Authorized users are:
The user who applied the sensitivity label.
Users who have been assigned at least one of the following usage rights to the label in the labeling admin
center (Microsoft 365 compliance center):
OWNER
EXPORT
EDIT and EDITRIGHTSDATA
Users who try to change a label but can't should ask the person who applied the label to make the
change, or they can contact their Microsoft 365/Office security administrator and ask to be granted the
necessary usage rights.
To help your organization's Power BI users understand what your sensitivity labels mean or how they should be
used, you can provide a Learn more link pointing to your organization’s custom web page that users will see
when they're applying or being prompted to apply sensitivity labels. The image below is an example that shows
how the Learn more link appears when applying a sensitivity label in Power BI Desktop.
If a dedicated custom help link for Power BI isn't set, Power BI uses the custom help link defined for Office
365 apps. This link is defined in the Microsoft 365 compliance center. See What label policies can do.
If a user has more than one label policy, the custom URL is always taken from the policy with the highest
priority, so be sure to configure the custom URL on that policy.
Next steps
Sensitivity label overview
Sensitivity label support for paginated reports
Sensitivity labels can be applied to paginated reports hosted in the Power BI service. After uploading a
paginated report to the service, you apply the label to the report just as you would to a regular Power BI report.
When you export data from a labeled paginated report to a supported file type (Excel, PDF, PPTX, and Word), the
sensitivity label on the paginated report is applied to the exported file.
Sensitivity labels on paginated reports are included in protection metrics (as part of the Report count), and can
be audited (label-change audits only) and modified by public APIs, just like labels on regular Power BI reports.
Next steps
Apply sensitivity labels in Power BI
Sensitivity label overview
Set or remove sensitivity labels using Power BI REST admin APIs
To meet compliance requirements, organizations are often required to classify and label all sensitive data in
Power BI. This task can be challenging for tenants that have large volumes of data in Power BI. To make the task
easier and more effective, the Power BI setLabels and removeLabels admin REST APIs can be used to set and
remove sensitivity labels on large numbers of Power BI artifacts programmatically.
The APIs set or remove labels from artifacts by artifact ID.
API documentation
setLabels
removeLabels
Sample
The following sample demonstrates how to set and remove sensitivity labels on Power BI dashboards. Similar
code can be used to set and remove labels on datasets, reports, and dataflows.
const string adminBearerToken = "<adminBearerToken>";
const string ApiUrl = "<api url>";
var persistedDashboardId = Guid.Parse("<dashboard object Id>");
var credentials = new TokenCredentials(adminBearerToken, "Bearer");
// Client and request types below come from the Microsoft.PowerBI.Api .NET client library; verify the exact type names against your SDK version.
using (var client = new PowerBIClient(credentials, new Uri(ApiUrl)))
{
    // Identify the target artifact(s) by object ID
    var artifacts = new InformationProtectionArtifactsChangeLabel();
    artifacts.Dashboards = new List<ArtifactId> { new ArtifactId(id: persistedDashboardId) };
    // Delete labels
    client.InformationProtection.RemoveLabelsAsAdmin(artifacts);
    // Set labels
    var setLabelRequest = new InformationProtectionChangeLabelDetails();
    setLabelRequest.Artifacts = artifacts;
    setLabelRequest.LabelId = Guid.Parse("<sensitivity label Id>");
    // assignmentMethod (optional); Priviledged is the spelling defined by the API enum
    setLabelRequest.AssignmentMethod = AssignmentMethod.Priviledged;
    // delegatedUser (optional)
    var delegatedUser = new DelegatedUser();
    delegatedUser.EmailAddress = "<delegated user email address>";
    setLabelRequest.DelegatedUser = delegatedUser;
    client.InformationProtection.SetLabelsAsAdmin(setLabelRequest);
}
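Note that the request objects carry lists of artifact IDs, so a single setLabels or removeLabels call can cover many dashboards, reports, datasets, and dataflows at once.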
Next steps
setLabels API
removeLabels API
Sensitivity label overview
Audit schema for sensitivity labels in Power BI
Whenever a sensitivity label on a dataset, report, dashboard, or dataflow is applied, changed, or removed, that
activity is recorded in the audit log for Power BI. You can track these activities in the unified audit log or in the
Power BI activity log. See Track user activities in Power BI for detail.
This article documents the information in the Power BI auditing schema that is specific to sensitivity labels. It
covers the following activity keys:
SensitivityLabelApplied
SensitivityLabelChanged
SensitivityLabelRemoved
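These events can also be retrieved programmatically. As a minimal sketch using the MicrosoftPowerBIMgmt admin cmdlets (the date values and activity type are placeholders to adapt):

# Sketch: pull one day of label-applied events from the Power BI activity log.
# Requires the MicrosoftPowerBIMgmt module and a Power BI administrator sign-in.
Connect-PowerBIServiceAccount
Get-PowerBIActivityEvent -StartDateTime '2022-05-23T00:00:00' -EndDateTime '2022-05-23T23:59:59' `
    -ActivityType 'SensitivityLabelApplied'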
SensitivityLabelEventData
Field | Type | Must appear in the schema | Description
ArtifactType
This field indicates the type of artifact the label change took place on.
Value | Field
1 | Dashboard
2 | Report
3 | Dataset
7 | Dataflow
ActionSource
This field indicates whether the label change is the result of an automatic or manual process.
Value | Meaning | Description
ActionSourceDetail
This field gives more detail about what caused the action to take place.
Value | Meaning | Description
LabelEventType
This field indicates whether the action resulted in a more restrictive label, less restrictive label, or a label of the
same degree of sensitivity.
Value | Meaning | Description
Next steps
Sensitivity labels in Power BI
Track user activities in Power BI
Data protection metrics report
Do not change the report or dataset in any way, since new versions of the report are rolled out from time to
time and any changes you've made to the original report will be overwritten if you update to the new version.
Report updates
Improved versions of the data protection metrics report are released periodically. If a new version is available
when you open the report, you'll be asked whether you want to open it. If you accept, the new version of the
report loads and overwrites the old version, and any changes you might have made to the old report and/or
dataset will be lost. You can choose not to open the new version, but in that case you won't benefit from the
new version's improvements.
Next steps
Sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Understanding the Power BI service administrator role
Enable sensitivity labels in Power BI
Data loss prevention policies for Power BI (preview)
To help organizations detect and protect their sensitive data, Power BI supports Microsoft Purview data loss
prevention (DLP) policies. When a DLP policy for Power BI detects a sensitive dataset, a policy tip can be attached
to the dataset in the Power BI service that explains the nature of the sensitive content, and an alert can be
registered on the data loss prevention Alerts tab in the Microsoft Purview compliance portal for monitoring and
management by administrators. In addition, email alerts can be sent to administrators and specified users.
NOTE
DLP evaluation of the dataset does not occur if either of the following is true:
The initiator of the event is a service principal.
The dataset owner is either a service principal or a B2B user.
Open the dataset details page to see a policy tip that explains the policy violation and how the detected
type of sensitive information should be handled.
NOTE
If you hide the policy tip, it doesn’t get deleted. It will appear the next time you visit the page.
If alerts are enabled in the policy, an alert will be recorded on the data loss prevention Alerts tab in the
compliance center, and (if configured) an email will be sent to administrators and/or specified users. The
following image shows the Alerts tab in the data loss prevention section of the compliance center.
3. Choose the Custom category and then the Custom policy template.
NOTE
No other categories or templates are currently supported.
NOTE
DLP actions are supported only for workspaces hosted in Premium Gen2 capacities.
If you select Choose workspaces or Exclude workspaces , a dialog will allow you to create a list of
included (or excluded) workspaces. You must specify workspaces by workspace object ID. Click the info
icon for information about how to find workspace object IDs.
After enabling Power BI as a DLP location for the policy and choosing which workspaces the policy will
apply to, click Next .
6. The Define policy settings page appears. Choose Create or customize advanced DLP rules to
begin defining your policy.
8. The Create rule page appears. On the create rule page, provide a name and description for the rule, and
then configure the other sections, which are described following the image below.
Conditions
In the condition section, you define the conditions under which the policy will apply to a dataset. Conditions are
created in groups. Groups make it possible to construct complex conditions.
1. Open the conditions section, choose Add condition and then Content contains .
This opens the first group (named Default – you can change this).
2. Choose Add , and then choose either Sensitive info types or Sensitivity labels .
NOTE
Currently, DLP policies for Power BI don't support scanning for sensitive info types in data stored in the Southeast
Asia region. See How to find the default region for your organization to learn how to find your organization's
default data region.
When you choose either Sensitive info types or Sensitivity labels , you will be able to choose the
particular sensitivity labels or sensitive info types you want to detect from a list that will appear in a
sidebar.
When you select a sensitive info type as a condition, you then need to specify how many instances of that
type must be detected in order for the condition to be considered as met. You can specify from 1 to 500
instances. If you want to detect 500 or more unique instances, enter a range of '500' to 'Any'. You also can
select the degree of confidence in the matching algorithm. Click the info button next to the confidence
level to see the definition of each level.
You can add additional sensitivity labels or sensitive info types to the group. To the right of the group
name, you can specify Any of these or All of these . This determines whether a match on any of the items
or on all of the items in the group is required for the condition to hold. If you specified more than one sensitivity
label, you will only be able to choose Any of these , since datasets can’t have more than one label
applied.
The image below shows a group (Default) that contains two sensitivity label conditions. The logic Any of
these means that a match on any one of the sensitivity labels in the group constitutes “true” for that
group.
You can create more than one group, and you can control the logic between the groups with AND or OR
logic.
The image below shows a rule containing two groups, joined by OR logic.
Exceptions
If the dataset has a sensitivity label or sensitive info type that matches any of the defined exceptions, the rule
won’t be applied to the dataset.
Exceptions are configured in the same way as conditions, described above.
Actions
Protection actions are currently unavailable for Power BI DLP policies.
User notifications
The user notifications section is where you configure your policy tip. Turn on the toggle, select the Notify users
in Office 365 service with a policy tip and Policy tips checkboxes, and write your policy tip in the text box.
User overrides
User overrides are currently unavailable for Power BI DLP policies.
Incident reports
Assign a severity level that will be shown in alerts generated from this policy. Enable (default) or disable email
notification to admins, specify users or groups for email notification, and configure the details about when
notification will occur.
Additional options
Monitor and manage policy alerts
Log into the Microsoft Purview compliance portal and navigate to Data loss prevention > Alerts .
Click on an alert to start drilling down to its details and to see management options.
Next steps
Learn about data loss prevention
Get started with Data loss prevention policies for Power BI
Sensitivity labels in Power BI
Audit schema for sensitivity labels in Power BI
Using Microsoft Defender for Cloud Apps controls in Power BI
Using Defender for Cloud Apps with Power BI, you can help protect your Power BI reports, data, and services
from unintended leaks or breaches. With Defender for Cloud Apps, you can create conditional access policies for
your organization's data, using real-time session controls in Azure Active Directory (Azure AD), that help to
ensure your Power BI analytics are secure. Once these policies have been set, administrators can monitor user
access and activity, perform real-time risk analysis, and set label-specific controls.
You can configure Defender for Cloud Apps for all sorts of apps and services, not only Power BI. You'll need to
configure Defender for Cloud Apps to work with Power BI to benefit from Defender for Cloud Apps protections
for your Power BI data and analytics. For more information about Defender for Cloud Apps, including an
overview of how it works, the dashboard, and app risk scores, see the Defender for Cloud Apps documentation.
The sections below describe the steps for configuring real-time controls for Power BI with Defender for Cloud
Apps.
Set session policies in Azure AD (required)
The steps necessary to set session controls are completed in the Azure AD and Defender for Cloud Apps portals.
In the Azure AD portal, you create a conditional access policy for Power BI, and route sessions used in Power BI
through the Defender for Cloud Apps service.
Defender for Cloud Apps operates using a reverse-proxy architecture, and is integrated with Azure AD
conditional access to monitor Power BI user activity in real time. The following steps outline the process;
detailed step-by-step instructions are provided in the linked content for each step. You can also read this
Defender for Cloud Apps article that describes the process in full.
1. Create an Azure AD conditional access test policy
2. Sign into each app using a user scoped to the policy
3. Verify the apps are configured to use access and session controls
4. Enable the app for use in your organization
5. Test the deployment
The process for setting session policies is described in detail in the Session policies article.
Set anomaly detection policies to monitor Power BI activities (recommended)
You can define Power BI anomaly detection policies that can be independently scoped, so that they apply only to
the users and groups you want to include or exclude in the policy. Learn more.
Defender for Cloud Apps also has two dedicated, built-in detections for Power BI. See the section later in this
article for details.
Use Microsoft Information Protection sensitivity labels (recommended)
Sensitivity labels enable you to classify and help protect sensitive content, so that people in your organization
can collaborate with partners outside your organization, yet still be careful and aware of sensitive content and
data.
You can read the article on sensitivity labels in Power BI, which goes into detail about the process of using
sensitivity labels for Power BI. See below for an example of a Power BI policy based on sensitivity labels.
Custom activity policies are configured in the Defender for Cloud Apps portal. Learn more.
In the session policy, in the "Action" part, the "protect" capability works only if no label exists on the item. If a
label already exists, the "protect" action won't apply; you can't override an existing label that has already been
applied to an item in Power BI.
Example
The following example shows you how to create a new session policy using Defender for Cloud Apps with
Power BI.
First, create a new session policy. In the Defender for Cloud Apps portal, select Policies on the navigation
pane. Then on the policies page, click Create policy and choose Session policy .
In the window that appears, create the session policy. The numbered steps describe settings for the following
image.
1. In the Policy template drop-down, choose No template.
2. For the Policy name box, provide a relevant name for your session policy.
3. For Session control type , select Control file download (with inspection) (for DLP).
For the Activity source section, choose relevant blocking policies. We recommend blocking unmanaged
and non-compliant devices. Choose to block downloads when the session is in Power BI.
When you scroll down you see more options. The following image shows those options, with additional
examples.
4. Create a filter on Sensitivity label and choose Highly confidential or whatever best fits your
organization.
5. Change the Inspection method to none.
6. Choose the Block option that fits your needs.
7. Make sure you create an alert for such an action.
8. Finally, select the Create button to create the session policy.
Next steps
This article described how Defender for Cloud Apps can provide data and content protections for Power BI. You
might also be interested in the following articles, which describe Data Protection for Power BI and supporting
content for the Azure services that enable it.
Overview of sensitivity labels in Power BI
Enable sensitivity labels in Power BI
How to apply sensitivity labels in Power BI
You might also be interested in the following Azure and security articles:
Protect apps with Microsoft Defender for Cloud Apps Conditional Access App Control
Deploy Conditional Access App Control for featured apps
Session policies
Overview of sensitivity labels
Data protection metrics report
Power BI Security
For a detailed explanation of Power BI security, read the Power BI Security whitepaper.
The Power BI service is built on Azure , which is Microsoft’s cloud computing infrastructure and platform. The
Power BI service architecture is based on two clusters – the Web Front End (WFE ) cluster and the Back-End
cluster. The WFE cluster manages the initial connection and authentication to the Power BI service, and once
authenticated, the Back-End handles all subsequent user interactions. Power BI uses Azure Active Directory
(AAD) to store and manage user identities, and manages the storage of data and metadata using Azure Blob
storage and Azure SQL Database, respectively.
Power BI Architecture
Each Power BI deployment consists of two clusters – a Web Front End (WFE ) cluster, and a Back-End cluster.
The WFE cluster manages the initial connection and authentication process for Power BI, using AAD to
authenticate clients and provide tokens for subsequent client connections to the Power BI service. Power BI also
uses the Azure Traffic Manager (ATM) to direct user traffic to the nearest datacenter, determined by the DNS
record of the client attempting to connect, for the authentication process and to download static content and
files. Power BI uses the Azure Content Delivery Network (CDN) to efficiently distribute the necessary static
content and files to users based on geographical locale.
The Back-End cluster is how authenticated clients interact with the Power BI service. The Back-End cluster
manages visualizations, user dashboards, datasets, reports, data storage, data connections, data refresh, and
other aspects of interacting with the Power BI service. The Gateway Role acts as a gateway between user
requests and the Power BI service. Users do not interact directly with any roles other than the Gateway Role .
Azure API Management will eventually handle the Gateway Role .
IMPORTANT
Only the Azure API Management (APIM) and Gateway (GW) roles are accessible through the public Internet.
They provide authentication, authorization, DDoS protection, throttling, load balancing, routing, and other
capabilities.
User Authentication
Power BI uses Azure Active Directory (AAD) to authenticate users who sign in to the Power BI service, and in
turn, uses the Power BI login credentials whenever a user attempts to access resources that require
authentication. Users sign in to the Power BI service using the email address used to establish their Power BI
account; Power BI uses that login email as the effective username, which is passed to resources whenever a user
attempts to connect to data. The effective username is then mapped to a User Principal Name (UPN) and
resolved to the associated Windows domain account, against which authentication is applied.
For organizations that used work emails for Power BI login (such as davidj@contoso.com), the effective
username to UPN mapping is straightforward. For organizations that did not use work emails for Power BI login
(such as davidj@contoso.onmicrosoft.com), mapping between AAD and on-premises credentials will require
directory synchronization to work properly.
Platform security for Power BI also includes multi-tenant environment security, networking security, and the
ability to add additional AAD-based security measures.
Row-level security (RLS) with Power BI can be used to restrict data access for given users. Filters restrict data
access at the row level, and you can define filters within roles. In the Power BI service, members of a workspace
have access to datasets in the workspace. RLS doesn't restrict this data access.
You can configure RLS for data models imported into Power BI with Power BI Desktop. You can also configure
RLS on datasets that are using DirectQuery, such as SQL Server. For Analysis Services or Azure Analysis Services
live connections, you configure row-level security in the model, not in Power BI Desktop. The security option
will not show up for live connection datasets.
NOTE
You can't define roles within Power BI Desktop for Analysis Services live connections. You need to do that within
the Analysis Services model.
NOTE
You can't define a role with a comma, for example London,ParisRole .
5. Under Tables , select the table to which you want to apply a DAX rule.
6. In the Table filter DAX expression box, enter the DAX expressions. This expression returns a value of
true or false. For example: [Entity ID] = “Value” .
NOTE
You can use username() within this expression. Be aware that username() has the format of DOMAIN\username
within Power BI Desktop. Within the Power BI service and Power BI Report Server, it's in the format of the user's
User Principal Name (UPN). Alternatively, you can use userprincipalname(), which always returns the user in the
format of their user principal name, such as username@contoso.com.
7. After you've created the DAX expression, select the checkmark above the expression box to validate the
expression.
NOTE
In this expression box, you use commas to separate DAX function arguments even if you're using a locale that
normally uses semicolon separators (e.g. French or German).
8. Select Save .
You can't assign users to a role within Power BI Desktop. You assign them in the Power BI service. You can enable
dynamic security within Power BI Desktop by making use of the username() or userprincipalname() DAX
functions and having the proper relationships configured.
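For example (assuming, hypothetically, a table that stores each user's email in a UserEmail column), a table filter DAX expression of [UserEmail] = userprincipalname() restricts each user to the rows carrying their own user principal name.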
By default, row-level security filtering uses single-directional filters, whether the relationships are set to single
direction or bi-directional. You can manually enable bi-directional cross-filtering with row-level security by
selecting the relationship and checking the Apply security filter in both directions checkbox. Select this
option when you've also implemented dynamic row-level security at the server level, where row-level security is
based on username or login ID.
For more information, see Bidirectional cross-filtering using DirectQuery in Power BI Desktop and the Securing
the Tabular BI Semantic Model technical article.
Validate the roles within Power BI Desktop
After you've created your roles, test the results of the roles within Power BI Desktop.
1. From the Modeling tab, select View as .
The View as roles window appears, where you see the roles you've created.
2. Select a role you created, and then select OK to apply that role.
The report renders the data relevant for that role.
3. You can also select Other user and supply a given user.
It's best to supply the User Principal Name (UPN) as that's what the Power BI service and Power BI Report
Server use.
Within Power BI Desktop, Other user displays different results only if you're using dynamic security
based on your DAX expressions.
4. Select OK .
The report renders based on what that user can see.
NOTE
The View as role feature doesn't work for DirectQuery models with Single Sign-On (SSO) enabled.
Now that you're done validating the roles in Power BI Desktop, go ahead and publish your report to the Power BI
service.
2. Select Security .
Security takes you to the Row-Level Security page, where you add members to a role you created in Power BI
Desktop. Only the owners of the dataset will see Security . If the dataset is in a Group, only administrators of the
group will see the security option.
You can only create or modify roles within Power BI Desktop.
You can also see how many members are part of the role by the number in parentheses next to the role name,
or next to Members.
Remove members
You can remove members by selecting the X next to their name.
You'll see reports that are available for this role. Dashboards aren't shown in this view. In the page header, the
role being applied is shown.
Test other roles, or a combination of roles, by selecting Now viewing as .
You can choose to view data as a specific person or you can select a combination of available roles to validate
they're working.
To return to normal viewing, select Back to Row-Level Security .
NOTE
The Test as role feature doesn't work for DirectQuery models with Single Sign-On (SSO) enabled.
FAQ
Question: What if I had previously created roles and rules for a dataset in the Power BI service? Will they still
work if I do nothing?
Answer : No, visuals will not render properly. You will have to re-create the roles and rules within Power BI
Desktop and then publish to the Power BI service.
Question: Can I create these roles for Analysis Services data sources?
Answer : You can if you imported the data into Power BI Desktop. If you are using a live connection, you will not
be able to configure RLS within the Power BI service. This is defined within the Analysis Services model on-
premises.
Question: Can I use RLS to limit the columns or measures accessible by my users?
Answer : No, if a user has access to a particular row of data, they can see all the columns of data for that row.
Question: Does RLS let me hide detailed data but give access to data summarized in visuals?
Answer : No, you secure individual rows of data but users can always see either the details or the summarized
data.
Question: My data source already has security roles defined (for example SQL Server roles or SAP BW roles).
What is the relationship between these and RLS?
Answer : The answer depends on whether you're importing data or using DirectQuery. If you're importing data
into your Power BI dataset, the security roles in your data source aren't used. In this case, you should define RLS
to enforce security rules for users who connect in Power BI. If you're using DirectQuery, the security roles in your
data source are used. When a user opens a report, Power BI sends a query to the underlying data source, which
applies security rules to the data based on the user's credentials.
Next steps
Restrict data access with row-level security (RLS) for Power BI Desktop
Row-level security (RLS) guidance in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Power BI Desktop privacy levels
In Power BI Desktop , privacy levels specify an isolation level that defines the degree that one data source will
be isolated from other data sources. Although a restrictive isolation level blocks information from being
exchanged between data sources, it may reduce functionality and impact performance.
The Privacy Levels setting, found in File > Options and settings > Options and then Current File >
Privacy , determines whether Power BI Desktop uses your privacy level settings while combining data. This
dialog includes a link to the Power BI Desktop documentation about privacy levels (this article).
Private data source – A Private data source contains sensitive or confidential information, and the visibility of the data source may be restricted to authorized users. Data from a Private data source will not be folded to other sources (not even to other Private sources). Examples: Facebook data, a text file containing stock awards, or a workbook containing employee review information.
Organizational data source – An Organizational data source limits the visibility of a data source to a trusted group of people. Data from an Organizational data source will not be folded to Public data sources, but may be folded to other Organizational data sources, as well as to Private data sources. Example: a Microsoft Word document on an intranet SharePoint site with permissions enabled for a trusted group.
Public data source – A Public data source gives everyone visibility to the data contained in the data source. Only files, internet data sources, or workbook data can be marked Public. Data from a Public data source may be freely folded to other sources. Examples: free data from the Microsoft Azure Marketplace, data from a Wikipedia page, or a local file containing data copied from a public web page.
You should configure a data source containing highly sensitive or confidential data as Private .
Combine data according to your Privacy Level settings for each source (on, the default setting) – Privacy level settings are used to determine the level of isolation between data sources when combining data.
Ignore the Privacy levels and potentially improve performance (off) – Privacy levels are not considered when combining data; however, performance and functionality of the data may increase.
Security Note: Selecting Ignore the Privacy levels and potentially improve performance in the
Privacy Levels dialog could expose sensitive or confidential data to an unauthorized person. Do not turn
this setting off unless you are confident that the data source does not contain sensitive or confidential
data.
Caution
The Ignore the Privacy levels and potentially improve performance setting does not work in the Power BI
service. As such, Power BI Desktop reports with this setting enabled, which are then published to the Power BI
service, do not reflect this behavior when used in the service. However, the privacy levels are available on the
personal gateway.
Configure Privacy Levels
In Power BI Desktop or in Query Editor, select File > Options and settings > Options and then Current File
> Privacy .
a. When Combine data according to your Privacy Level settings for each source is selected, data will be
combined according to your Privacy Levels setting. Merging data across Privacy isolation zones will result in
some data buffering.
b. When Ignore the Privacy levels and potentially improve performance is selected, the data will be
combined ignoring your Privacy Levels which could reveal sensitive or confidential data to an unauthorized user.
The setting may improve performance and functionality.
Security Note: Selecting Ignore the Privacy levels and potentially improve performance may
improve performance; however, Power BI Desktop cannot ensure the privacy of data merged into the Power
BI Desktop file.
Using service tags with Power BI
You can use Azure service tags with Power BI to enable an Azure SQL Managed Instance (MI) to allow
incoming connections from the Power BI service. In Azure, a service tag is a defined group of IP addresses that
you can configure to be automatically managed, as a group, to minimize the complexity of updates or changes
to network security rules.
The following configurations are necessary to successfully enable the endpoints for use in the Power BI service:
1. Enable a public endpoint in the SQL Managed Instance
2. Create a Network Security Group rule to allow inbound traffic
3. Enter the credentials in Power BI
The following sections look at each of these steps in turn.
NOTE
The priority of the rule you set must be higher than the 4096 deny_all_inbound rule, which means the priority value must
be lower than 4096. In the following example, a priority value of 400 is used.
The following CLI script is provided as a reference example. See az network nsg rule for more information. You
may need to change multiple values for the example to work properly in your situation. A PowerShell script is
provided afterward.
#login to azure
az login
The following PowerShell script is provided as another reference to create the Network Security Group (NSG)
rule. See Add a network security group rule in PowerShell for more information. You may need to change
multiple values for the example to work properly in your situation.
#login to azure
Login-AzAccount
####
#Script to create Network Security Group Rule
###
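# The script body is sketched below. Resource names are placeholders, and the rule
# assumes the 'PowerBI' service tag and the managed instance public endpoint port
# 3342; confirm both for your environment.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName 'myResourceGroup' -Name 'myNsg'

# Allow inbound Power BI traffic; priority 400 is below the 4096 deny_all_inbound rule
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name 'AllowPowerBIInbound' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 400 `
    -SourceAddressPrefix 'PowerBI' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '3342'

# Persist the updated rule collection on the NSG
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg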
Next steps
What is Power BI Premium?
Enable a Public Endpoint in the SQL Managed Instance
az network nsg rule
Add a network security group rule in PowerShell
Private endpoints for accessing Power BI
You can use the Azure Private Link feature to provide secure access for data traffic in Power BI. In this
configuration, Azure Private Link and Azure Networking private endpoints send data traffic privately over
Microsoft's backbone network infrastructure instead of across the Internet.
Private endpoints make sure that Power BI users go through the Microsoft private network backbone when they
access resources in the Power BI service.
See What is Azure Private Link to learn more about Azure Private Link.
It takes about 15 minutes to configure a private link for your tenant, which includes configuring a separate
FQDN for the tenant in order to communicate privately with Power BI services.
After this process is finished, you can move on to the next step.
Parameter | Value
<resource-name> | myPowerBIResource
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"resources": [
{
"type":"Microsoft.PowerBI/privateLinkServicesForPowerBI",
"apiVersion": "2020-06-01",
"name" : "<resource-name>",
"location": "global",
"properties" :
{
"tenantId": "<tenant-object-id>"
}
}
]
}
In the dialog that appears, select the checkbox to agree to the terms and conditions, and then select Purchase .
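If you prefer scripting to the portal, the same template can be deployed with PowerShell. This is a sketch only; the resource group and file names are placeholders.

# Sketch: deploy the Private Link resource template saved locally as a JSON file
New-AzResourceGroupDeployment `
    -ResourceGroupName 'myResourceGroup' `
    -TemplateFile '.\powerbi-privatelink.json'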
Create a virtual network
The next step is to create a virtual network and subnet. Replace the sample parameters in the table below with
your own to create a virtual network and subnet.
Parameter | Value
<resource-group-name> | myResourceGroup
<virtual-network-name> | myVirtualNetwork
<region-name> | Central US
<IPv4-address-space> | 10.5.0.0/16
<subnet-name> | mySubnet
<subnet-address-range> | 10.5.0.0/24
1. On the upper-left side of the screen, select Create a resource > Networking > Virtual network or
search for Virtual network in the search box.
2. In Create virtual network enter or select the following information in the Basics tab:
Settings | Value
Project details
Instance details
Settings | Value
4. In Subnet name select the word default, and in Edit subnet , enter the following information:
Settings | Value
5. Then select Save , and then select the Review + create tab, or select the Review + create button.
6. Then, select Create .
Once you've completed these steps, you can create a virtual machine (VM), as described in the next section.
Settings | Value
Project details
Instance details
ADMINISTRATOR ACCOUNT
SAVE MONEY
Settings | Value
6. Select Review + create . You're taken to the Review + create page where Azure validates your
configuration.
7. When you see the Validation passed message, select Create .
Settings | Value
Project details
Instance details
The following image shows the Create a private endpoint - Basics window.
4. Once that information is complete, select Next: Resource and in the Create a private endpoint -
Resource page, enter or select the following information:
Settings | Value
Resource | myPowerBIResource
The following image shows the Create a private endpoint - Resource window.
5. Once that information is properly input, select Next: Configuration , and in the Create a private
endpoint (Preview) - Configuration page, enter or select the following information:
Settings | Value
NETWORKING
The following image shows the Create a private endpoint - Configuration window.
Next select Review + create , which displays the Review + create page where Azure validates your
configuration. When you see the Validation passed message, select Create .
Resolving the tenant's FQDN from a virtual machine inside the virtual network should now return the private IP
address of the endpoint, as in this example nslookup output:
Non-authoritative answer:
Name: 52d40f65ad6d48c3906f1ccf598612d4-api.privatelink.analysis.windows.net
Address: 10.5.0.4
Next steps
Administering Power BI in your Organization
Understanding the Power BI admin role
Auditing Power BI in your organization
How to find your Azure Active Directory tenant ID
The following video shows how to connect a mobile device to Power BI, using private endpoints:
NOTE
This video might use earlier versions of Power BI Desktop or the Power BI service.
Microsoft Intune enables organizations to manage devices and applications. The Power BI mobile applications
for iOS and Android integrate with Intune. This integration enables you to manage the application on your
devices, and to control security. Through configuration policies, you can control items like requiring an access
pin, how data is handled by the application, and even encrypting application data when the app is not in use.
The Microsoft Power BI mobile app gives you access to your important business information. You can view and
interact with your dashboards and reports on your organization's managed devices and apps. For more
information about supported Intune apps, see Microsoft Intune protected apps.
NOTE
After you configure Intune, background data refresh is turned off for the Power BI mobile app on your iOS or Android
device. Power BI refreshes the data from the Power BI service on the web when you enter the app.
Next steps
How to create and assign app protection policies
Power BI apps for mobile devices
More questions? Try asking the Power BI Community
Enable service principal authentication for read-only admin APIs
Service principal is an authentication method that can be used to let an Azure Active Directory (Azure AD)
application access Power BI service content and APIs. When you create an Azure AD app, a service principal
object is created. The service principal object, also known simply as the service principal, allows Azure AD to
authenticate your app. Once authenticated, the app can access Azure AD tenant resources.
Method
To enable service principal authentication for Power BI read-only APIs, follow these steps:
1. Create an Azure AD app. You can skip this step if you already have an Azure AD app you want to use. Take
note of the App-Id for later steps.
NOTE
Make sure the app you use doesn't have any Power BI admin roles set on it in Azure portal.
2. Create a new Security Group in Azure Active Directory. Read more about how to create a basic group
and add members using Azure Active Directory. You can skip this step if you already have a security
group you would like to use. Make sure to select Security as the Group type.
3. Add your App-Id as a member of the security group you created. To do so:
a. Navigate to Azure portal > Azure Active Directory > Groups , and choose the security group
you created in Step 2.
b. Select Add Members . Note: Make sure the app you use doesn't have any Power BI admin roles
set on it in Azure portal. To check the assigned roles:
Sign into the Azure portal as a Global Administrator, an Application Administrator, or a Cloud
Application Administrator.
Select Azure Active Directory , then Enterprise applications .
Select the application you want to grant access to Power BI.
Select Permissions .
IMPORTANT
Make sure there are no Power BI admin-consent-required permissions set on this application. For more
information, see Managing consent to applications and evaluating consent requests.
5. Start using the read-only admin APIs. See the list of supported APIs below.
IMPORTANT
Once you enable the service principal to be used with Power BI, the application's Azure AD permissions no longer
have any effect. The application's permissions are then managed through the Power BI admin portal.
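As a minimal sketch of the end-to-end flow, the MicrosoftPowerBIMgmt PowerShell module can sign in as the service principal and call a read-only admin API; the app ID, tenant ID, and secret below are placeholders.

# Sketch: authenticate as the service principal and call a read-only admin API.
$secret = ConvertTo-SecureString '<app-secret>' -AsPlainText -Force
$cred   = New-Object System.Management.Automation.PSCredential('<app-id>', $secret)
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $cred -Tenant '<tenant-id>'

# Example read-only call: list workspaces with their datasets expanded
Invoke-PowerBIRestMethod -Url 'admin/groups?$top=100&$expand=datasets' -Method Get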
Supported APIs
Service principal currently supports the following APIs:
GetGroupsAsAdmin with $expand for dashboards, datasets, reports, and dataflows
GetGroupUsersAsAdmin
GetDashboardsAsAdmin with $expand tiles
GetDashboardUsersAsAdmin
GetAppsAsAdmin
GetAppUsersAsAdmin
GetDatasourcesAsAdmin
GetDatasetToDataflowsLinksAsAdmin
GetDataflowDatasourcesAsAdmin
GetDataflowUpstreamDataflowsAsAdmin
GetCapacitiesAsAdmin
GetCapacityUsersAsAdmin
GetActivityLog
GetModifiedWorkspaces
WorkspaceGetInfo
WorkspaceScanStatus
WorkspaceScanResult
GetDashboardsInGroupAsAdmin
GetTilesAsAdmin
ExportDataflowAsAdmin
GetDataflowsAsAdmin
GetDataflowUsersAsAdmin
GetDataflowsInGroupAsAdmin
GetDatasetsAsAdmin
GetDatasetUsersAsAdmin
GetDatasetsInGroupAsAdmin
Get Power BI Encryption Keys
Get Refreshable For Capacity
Get Refreshables
Get Refreshables For Capacity
GetImportsAsAdmin
GetReportsAsAdmin
GetReportUsersAsAdmin
GetReportsInGroupAsAdmin
Power BI enables administrators to script common tasks with PowerShell cmdlets. It also exposes REST APIs and
provides a .NET client library for developing administrative solutions. This topic lists the cmdlets and the
corresponding APIs and REST API endpoints. For more information, see:
PowerShell download and documentation
REST API documentation
.NET Client library download
The cmdlets below should be called with -Scope Organization to operate against the tenant for administration.
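For example, a tenant-wide listing (a sketch; requires the MicrosoftPowerBIMgmt module and Power BI administrator permissions):

# Sketch: enumerate every workspace in the tenant as an administrator
Connect-PowerBIServiceAccount
Get-PowerBIWorkspace -Scope Organization -All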