Power Query Documentation
Power Query is a data transformation and data preparation engine. Power Query comes with a graphical interface
for getting data from sources and a Power Query Editor for applying transformations. Because the engine is
available in many products and services, the destination where the data will be stored depends on where Power
Query was used. Using Power Query, you can perform the extract, transform, and load (ETL) processing of data.
Diagram with symbolized data sources on the right, passing through Power Query for transformation, and then
going to various destinations, such as Azure Data Lake Storage, Common Data Service, Microsoft Excel, or Power BI.
| Existing challenge | How does Power Query help? |
| --- | --- |
| Finding and connecting to data is too difficult | Power Query enables connectivity to a wide range of data sources, including data of all sizes and shapes. |
| Experiences for data connectivity are too fragmented | Consistency of experience, and parity of query capabilities over all data sources. |
| Data often needs to be reshaped before consumption | Highly interactive and intuitive experience for rapidly and iteratively building queries over any data source, of any size. |
| Any shaping is one-off and not repeatable | When using Power Query to access and transform data, you define a repeatable process (query) that can be easily refreshed in the future to get up-to-date data. In the event that you need to modify the process or query to account for underlying data or schema changes, you can use the same interactive and intuitive experience you used when you initially defined the query. |
| Volume (data sizes), velocity (rate of change), and variety (breadth of data sources and data shapes) | Power Query offers the ability to work against a subset of the entire dataset to define the required data transformations, allowing you to easily filter down and transform your data to a manageable size. Power Query queries can be refreshed manually or by taking advantage of scheduled refresh capabilities in specific products (such as Power BI) or even programmatically (by using the Excel object model). Because Power Query provides connectivity to hundreds of data sources and over 350 different types of data transformations for each of these sources, you can work with data from any source and in any shape. |
NOTE
Although two Power Query experiences exist, they both provide almost the same user experience in every scenario.
Transformations
The transformation engine in Power Query includes many prebuilt transformation functions that can be used
through the graphical interface of the Power Query Editor. These transformations can be as simple as removing a
column or filtering rows, or as common as using the first row as a table header. There are also advanced
transformation options such as merge, append, group by, pivot, and unpivot.
All these transformations are made possible by choosing the transformation option in the menu, and then applying
the options required for that transformation. The following illustration shows a few of the transformations available
in Power Query Editor.
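To give a sense of how these selections are recorded, the following is a minimal sketch in the Power Query M language. The inline table, column names, and steps are invented for illustration and aren't part of the original example; they show a removed column, a row filter, and a group-by.

let
    // Hypothetical sample data standing in for a real source.
    Source = #table(
        {"Country", "Units", "Notes"},
        {{"USA", 10, "a"}, {"USA", 5, "b"}, {"Canada", 0, "c"}}
    ),
    // Remove a column.
    #"Removed Columns" = Table.RemoveColumns(Source, {"Notes"}),
    // Filter rows, keeping only rows where Units is greater than 0.
    #"Filtered Rows" = Table.SelectRows(#"Removed Columns", each [Units] > 0),
    // Group by Country and sum Units.
    #"Grouped Rows" = Table.Group(#"Filtered Rows", {"Country"},
        {{"Total Units", each List.Sum([Units]), type number}})
in
    #"Grouped Rows"

Each step name matches the entry that would appear under Applied Steps in the Query Settings pane.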
Dataflows
Power Query can be used in many products, such as Power BI and Excel. However, using Power Query within a
product limits its usage to only that specific product. Dataflows are a product-agnostic service version of the Power
Query experience that runs in the cloud. Using dataflows, you can get data and transform data in the same way, but
instead of sending the output to Power BI or Excel, you can store the output in other storage options such as
Common Data Service or Azure Data Lake Storage. This way, you can use the output of dataflows in other products
and services.
More information: What are dataflows?
let
    Source = Exchange.Contents("[email protected]"),
    Mail1 = Source{[Name="Mail"]}[Data],
    #"Expanded Sender" = Table.ExpandRecordColumn(Mail1, "Sender", {"Name"}, {"Name"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded Sender", each ([HasAttachments] = true)),
    #"Filtered Rows1" = Table.SelectRows(#"Filtered Rows", each ([Subject] = "sample files for email PQ test") and ([Folder Path] = "\Inbox\")),
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows1", {"Attachments"}),
    #"Expanded Attachments" = Table.ExpandTableColumn(#"Removed Other Columns", "Attachments", {"Name", "AttachmentContent"}, {"Name", "AttachmentContent"}),
    #"Filtered Hidden Files1" = Table.SelectRows(#"Expanded Attachments", each [Attributes]?[Hidden]? <> true),
    #"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File from Mail", each #"Transform File from Mail"([AttachmentContent])),
    #"Removed Other Columns1" = Table.SelectColumns(#"Invoke Custom Function1", {"Transform File from Mail"}),
    #"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File from Mail", Table.ColumnNames(#"Transform File from Mail"(#"Sample File"))),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded Table Column1", {{"Column1", type text}, {"Column2", type text}, {"Column3", type text}, {"Column4", type text}, {"Column5", type text}, {"Column6", type text}, {"Column7", type text}, {"Column8", type text}, {"Column9", type text}, {"Column10", type text}})
in
    #"Changed Type"
More information: Power Query M formula language
[Table not recoverable: it lists which products include the M engine, Power Query Desktop, Power Query Online, and dataflows.]
Power Query Desktop : The Power Query experience found in desktop applications.
Power Query Online : The Power Query experience found in web browser applications.
See also
Data sources in Power Query
Getting data
Power Query quickstart
Shape and combine data using Power Query
What are dataflows
Getting data
Power Query can connect to many different data sources so you can work with the data you need. This article
walks you through the steps for bringing in data to Power Query.
Connecting to a data source with Power Query follows a standard set of stages before landing the data at a
destination. This article describes each of these stages.
NOTE
In some cases, a connector might have all of these stages, and in other cases a connector might have just a few of them. For
more information about the experience of a specific connector, go to the documentation available for the specific connector.
1. Connection settings
Most connectors initially require at least one parameter to initialize a connection to the data source. For example,
the SQL Server connector requires at least the host name to establish a connection to the SQL Server database.
In comparison, when trying to connect to an Excel file, Power Query requires that you use the file path to find the
file you want to connect to.
The connector parameters are commonly used to establish a connection to a data source, and they—in conjunction
with the connector used—define what's called a data source path.
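As a hedged illustration, those parameters surface as arguments to the connector's M function, and together with the connector they make up the data source path. The server and database names below are hypothetical placeholders, not values from this article.

let
    // The SQL Server connector's parameters become arguments of Sql.Database.
    Source = Sql.Database("myserver.contoso.com", "AdventureWorks")
in
    Source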
NOTE
Some connectors don't require you to enter any parameters at all. These are called singleton connectors and will only have
one data source path available per environment. Some examples are Adobe Analytics, MailChimp, and Google Analytics.
2. Authentication
Every single connection that's made in Power Query has to be authenticated. The authentication methods vary
from connector to connector, and some connectors might offer multiple methods of authentication.
The currently available methods of authentication for Power Query are:
Anonymous : Commonly used when connecting to a data source that doesn't require user authentication, such
as a webpage or a file available over public HTTP.
Basic : A username and password sent in base64 encoding are accepted for authentication.
API Key : A single API key is accepted for authentication.
Organizational account or Microsoft account : This method is also known as OAuth 2.0 .
Windows : Can be implicit or explicit.
Database : This is only available in some database connectors.
For example, the available authentication methods for the SQL Server database connector are Windows, Database,
and Microsoft account.
3. Data preview
The goal of the data preview stage is to provide you with a user-friendly way to preview and select your data.
Depending on the connector that you're using, you can preview data by using either:
Navigator window
Table preview dialog box
Navigator window (navigation table )
The Navigator window consists of two main sections:
The object selection pane is displayed on the left side of the window. The user can interact with and select
these objects.
NOTE
For Power Query in Excel, select the Select multiple items option from the upper-left corner of the navigation
window to select more than one object at a time in the object selection pane.
NOTE
The list of objects in Power Query Desktop is limited to 10,000 items. This limit does not exist in Power Query Online.
For a workaround in Power Query Desktop, see Object limitation workaround.
The data preview pane on the right side of the window shows a preview of the data from the object you
selected.
When you attempt to connect to a data source using a new connector for the first time, you might be asked to
select the authentication method to use when accessing the data. After you've selected the authentication method,
you won't be asked to select an authentication method for the connector using the specified connection
parameters. However, if you need to change the authentication method later, you can do so.
If you're using a connector from an online app, such as the Power BI service or Power Apps, you'll see an
authentication method dialog box for the OData Feed connector that looks something like the following image.
As you can see, a different selection of authentication methods is presented from an online app. Also, some
connectors might ask you to enter the name of an on-premises data gateway to be able to connect to your data.
The level you select for the authentication method you chose for this connector determines what part of a URL will
have the authentication method applied to it. If you select the top-level web address, the authentication method
you select for this connector will be used for that URL address or any subaddress within that address.
However, you might not want to set the top-level address to a specific authentication method because different
subaddresses can require different authentication methods. One example might be if you were accessing two
separate folders of a single SharePoint site and wanted to use different Microsoft accounts to access each one.
After you've set the authentication method for a connector's specific address, you won't need to select the
authentication method for that connector using that URL address or any subaddress again. For example, let's say
you select the https://contoso.com/ address as the level you want the Web connector URL settings to apply to.
Whenever you use a Web connector to access any webpage that begins with this address, you won't be required to
select the authentication method again.
2. In the Data source settings dialog box, select Global permissions , choose the website where you want
to change the permission setting, and then select Edit Permissions .
3. In the Edit Permissions dialog box, under Credentials , select Edit .
4. Change the credentials to the type required by the website, select Save , and then select OK .
You can also delete the credentials for a particular website in step 3 by selecting Clear Permissions for a selected
website, or by selecting Clear All Permissions for all of the listed websites.
To edit the authentication method in online services, such as for dataflows in the Power BI service
and Microsoft Power Platform
1. Select the connector, and then select Edit connection .
Using Power Query in Power BI Desktop
With Power Query in Power BI, you can connect to many different data sources, transform the data into the shape
you want, and quickly be ready to create reports and insights. When using Power BI Desktop, Power Query
functionality is provided in Power Query Editor.
Let's get acquainted with Power Query Editor.
If you're not signed up for Power BI, you can sign up for a free trial before you begin. Also, you can download
Power BI Desktop for free.
With no data connections, Power Query Editor appears as a blank pane, ready for data.
As soon as a query is loaded, the Power Query Editor view becomes more interesting. If you connect to the
following Web data source, Power Query Editor loads information about the data, which you can then begin to
shape.
https://www.bankrate.com/finance/retirement/best-places-retire-how-state-ranks.aspx
The following image shows how Power Query Editor appears after a data connection is established.
| No. | Description |
| --- | --- |
| 1 | On the ribbon, many buttons are now active so you can interact with the data in the query. |
| 2 | In the Queries pane, queries are listed and available for selection, viewing, and shaping. |
| 3 | In the center pane, data from the selected query is displayed and available for shaping. |
| 4 | In the Query Settings pane, the properties and applied steps for the selected query are listed. |
The following sections describe each of these four areas—the ribbon, the Queries pane, the data view, and the
Query Settings pane.
To connect to data and begin the query building process, select the Get Data button. A menu appears, providing
the most common data sources.
You use the Transform tab to access common data transformation tasks, such as adding or removing columns,
changing data types, splitting columns, and other data-driven operations. The following image shows the
Transform tab.
Use the Add Column tab to perform additional tasks associated with adding a column, formatting column data,
and adding custom columns. The following image shows the Add Column tab.
Use the View tab to turn on or off the display of certain panes or windows, and to display the advanced editor. The
following image shows the View tab.
NOTE
Many of the tasks available from the ribbon are also available by right-clicking to select a column, or other data, in the center
pane.
When you select a shortcut menu item (or a ribbon button), Power Query applies the step to the data and saves it
as part of the query itself. The steps are recorded in the Query Settings pane in sequential order, as described in
the next section.
As it applies the changes in your query, Power BI Desktop displays the status of the operation.
After you have your query where you want it, or if you just want to make sure your work is saved, Power BI
Desktop can save your work in a .pbix file.
To save your work as a .pbix file in Power BI Desktop, select File > Save (or File > Save As ), as shown in the
following image.
Next step
In this quickstart, you learned how to use Power Query Editor in Power BI Desktop and how to connect to data
sources. To learn more, continue with the tutorial on shaping and transforming data with Power Query.
Power Query tutorial
Tutorial: Shape and combine data using Power Query
With Power Query, you can connect to many different types of data sources, shape the data to meet your needs,
and then create visual reports using Power BI Desktop that you can share with others. Shaping data means
transforming the data—such as renaming columns or tables, changing text to numbers, removing rows, or setting
the first row as headers. Combining data means connecting to two or more data sources, shaping them as needed,
and then consolidating them into one useful query.
In this tutorial, you'll learn to:
Shape data with Power Query Editor.
Connect to a data source.
Connect to another data source.
Combine those data sources, and create a data model to use in reports.
This tutorial demonstrates how to shape a query by using Power Query Editor—a technology that's incorporated
into Power BI Desktop—and learn some common data tasks.
If you're not signed up for Power BI, you can sign up for a free trial before you begin. Also, you can download
Power BI Desktop for free.
TIP
In Power Query Editor in Power BI Desktop, you can use shortcut menus in addition to the ribbon. Most of what you can
select on the Transform tab of the ribbon is also available by right-clicking to select an item (such as a column), and
choosing a command from the shortcut menu that appears.
Shape data
When you shape data in Power Query Editor, you're providing step-by-step instructions (which Power Query Editor
carries out for you) to adjust the data as Power Query Editor loads and presents it. The original data source isn't
affected; only this particular view of the data is adjusted, or shaped.
The steps you specify (such as rename a table, transform a data type, or delete columns) are recorded by Power
Query Editor. Each time this query connects to the data source, those steps are carried out so that the data will
always be shaped the way you specify. This process occurs whenever you use the Power Query Editor feature of
Power BI Desktop, or for anyone who uses your shared query, such as in the Power BI service. Those steps are
captured sequentially in the Query Settings pane, under Applied Steps.
The following image shows the Query Settings pane for a query that has been shaped. You'll go through each of
these steps in the next few paragraphs.
NOTE
The sequence of applied steps in Power Query Editor is important and can affect how the data is shaped. It's important to
consider how one step might affect subsequent steps. For example, if you remove a step, steps that occur later in the
sequence might not behave as originally intended.
Using the retirement data from the Using Power Query in Power BI Desktop quickstart article, which you found by
connecting to a Web data source, you can shape that data to fit your needs.
For starters, you can add a custom column to calculate rank based on all data being equal factors, and compare this
column to the existing column named Rank . On the Add Column tab, select the Custom Column button, as
shown in the following image.
In the Custom Column dialog box, for New column name , enter New Rank . Copy the following formula, and
paste it into the Custom column formula box:
([Cost of living] + [Weather] + [Health care quality] + [Crime] + [Tax] + [Culture] + [Senior] + [#"Well-being"]) / 8
Make sure the status message reads "No syntax errors have been detected," and then select OK .
To keep column data consistent, you can transform the new column values to whole numbers. Just right-click the
column heading, and then select Change Type > Whole Number to change them.
TIP
If you need to choose more than one column, first select a column, select Shift as you select additional adjacent columns,
and then right-click a column heading to change all the selected columns. You can also use Ctrl to select noncontiguous
columns.
You can also transform column data types by using the Transform tab on the ribbon. The following image shows
the Data Type button on the Transform tab.
Note that in Query Settings, Applied Steps reflects any shaping steps that have been applied to the data. If you
want to remove any step from the shaping process, you select the X on the left side of the step. In the following
image, the Applied Steps section lists what has happened so far, which includes connecting to the website
(Source ), selecting the table (Navigation ), and, while loading the table, Power Query Editor automatically
changing text-based number columns from Text to Whole Number (Changed Type ). The last two steps show
your previous actions, Added Custom and Changed Type1 .
Before you can work with this query, you need to make a few changes to get its data where you want it:
1. Adjust the rankings by removing a column : You've decided Cost of living is a non-factor in your results.
You remove this column, but find that the data remains unchanged. You can fix this by following the rest of the
steps in this section.
2. Fix a few errors : Because you removed a column, you need to readjust your calculations in the New Rank
column. This readjustment involves changing a formula.
3. Sort the data based on the New Rank and Rank columns.
4. Replace data : Replace a specific value and insert an Applied step .
5. Change the table name : Table 0 isn't a useful descriptor. Changing it is simple.
1. Adjust the rankings by removing a column
To remove the Cost of living column, select the column, select the Home tab on the ribbon, and then select
Remove Columns , as shown in the following image.
Notice that the New Rank values haven't changed; this is because of the ordering of the steps. Because Power
Query Editor records steps sequentially—yet independently of each other—you can move each step up or down in
the Applied Steps sequence. Just right-click any step, and a menu appears with commands you can use to
Rename , Delete , Delete Until End (remove the current step, and all subsequent steps too), Move Up , or Move
Down . Go ahead and move up the last step, Removed Columns , to just before the Added Custom step.
2. Fix a few errors
If you select the word "Error" directly, Power Query creates an entry in Applied Steps and displays information
about the error. You don't want to go this route, so select Cancel .
To fix the errors, select the New Rank column, select the View tab, and then select the Formula Bar check box.
This displays the formula for the data in the column.
Now you can remove the Cost of living parameter and decrement the divisor by changing the formula to the
following:
Table.AddColumn(#"Removed Columns", "New Rank", each ([Weather] + [Health care quality] + [Crime] + [Tax] +
[Culture] + [Senior] + [#"Well-being"]) / 7)
Select the green check mark to the left of the formula bar, or press the Enter key, to replace the revised values. The
Added Custom step should now be completed with no errors.
NOTE
You can also select the Remove errors command (from the ribbon or the shortcut menu), which removes any rows that
have errors. In this case, the command would have removed all the rows from your data, and you don't want to do that—
you probably want to keep your data in the table.
3. Sort the data
Select the green check mark to the left of the formula bar, or press the Enter key, to order the rows in accordance
with both the New Rank and Rank columns.
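The resulting Sorted Rows step should produce a formula roughly like the following sketch; the preceding step name, #"Changed Type1", is assumed from the Applied Steps listed earlier.

= Table.Sort(#"Changed Type1", {{"New Rank", Order.Ascending}, {"Rank", Order.Ascending}})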
4. Replace data
In addition, you can select a step anywhere in the Applied steps list and continue shaping the data at that point in
the sequence. Power Query Editor will automatically insert a new step directly after the currently selected step.
Let's give that a try.
First, select the step that occurred just before you added the custom column—the Removed Columns step. Here
you'll replace the value of the Weather ranking in Arizona. Right-click the cell that contains Arizona's Weather
ranking, and then select Replace Values from the menu that appears. Note which step in the Applied Steps list
is currently selected—the one just before the Added Custom step.
Because you're inserting a step, Power Query Editor warns you about the danger of doing so; subsequent steps
might cause the query to break. You need to be careful and thoughtful here. To see how Power Query Editor
handles this, go ahead and select Insert.
Change the value to 51 , and the data for Arizona is replaced. When you create a new step, Power Query Editor
names it based on the action—in this case, Replaced Value . When you have more than one step with the same
name in your query, Power Query Editor adds a number (in sequence) to each subsequent step to differentiate
between them.
Now select the last step, Sorted Rows, and notice that the data about Arizona's new ranking has indeed changed.
This change is because you inserted the Replaced Value step in the right place, before Added Custom .
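Behind the scenes, a Replaced Value step is a call to Table.ReplaceValue. The following is only a sketch: the original Weather ranking (1) is an assumed placeholder, and the preceding step name comes from the Applied Steps described earlier.

= Table.ReplaceValue(#"Removed Columns", 1, 51, Replacer.ReplaceValue, {"Weather"})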
5. Change the table name
Lastly, you'll want to change the name of that table to something descriptive. When you create reports, it's useful
to have descriptive table names, especially when you connect to multiple data sources that are all listed on the
Fields pane of the Repor t view.
Changing the table name is easy. On the Quer y Settings pane, under Proper ties , enter RetirementStats in the
Name box, and then select Enter .
You've shaped that data to the extent you need to. Next, you'll connect to another data source and combine data.
Combine data
The data about various states is interesting, and will be useful for building additional analysis efforts and queries.
But there's one problem: most data out there uses a two-letter abbreviation for state codes, not the full name of the
state. You need some way to associate state names with their abbreviations.
You're in luck. There's a public data source that does just that, but it needs a fair amount of shaping before you can
connect it to your retirement table. Here's the web resource for state abbreviations:
https://en.wikipedia.org/wiki/List_of_U.S._state_abbreviations
On the Home tab in Power Query Editor, select New source > Web , enter the address, and then select Connect .
Select the Codes and abbreviations for U.S. states, federal district, territories, and other regions table.
It includes the data you want, but it's going to take quite a bit of shaping to pare the data from that table down to
what you want.
TIP
Is there a faster or easier way to accomplish the steps below? Yes: you can create a relationship between the two tables, and
shape the data based on that relationship. The following steps are still good to learn, just keep in mind that relationships can
help you quickly use data from multiple tables.
NOTE
If Power BI accidentally imports the table headers as a row in your data table, you can select Use first row as
headers from the Home tab to fix your table.
2. Remove the bottom 26 rows: they're all territories, which you don't need to include. On the Home tab,
select Reduce rows > Remove rows > Remove bottom rows .
3. Because the RetirementStats table doesn't have information for Washington DC, you need to filter it from
your list. Select the drop-down arrow beside the Region Status column heading, and then clear the
Federal district check box.
4. Remove a few unneeded columns. You only need the mapping of the state to its official two-letter
abbreviation, so you can remove the following columns: Column1 , Column3 , Column4 , and then from
Column6 through Column11 . First select Column1 , then select Ctrl as you select the other columns to
remove (this lets you select multiple, noncontiguous columns). On the Home tab, select Remove columns
> Remove columns .
IMPORTANT
The sequence of applied steps in Power Query Editor is important, and can affect how the data is shaped. It’s also
important to consider how one step may impact another subsequent step. If you remove a step from the Applied
Steps, subsequent steps may not behave as originally intended because of the impact of the query’s sequence of
steps.
NOTE
When you resize the Power Query Editor window to make the width smaller, some ribbon items are condensed to
make the best use of visible space. When you increase the width of the Power Query Editor window, the ribbon items
expand to make the most use of the increased ribbon area.
5. Rename the columns and the table itself. As usual, there are a few ways to rename a column. First select the
column, and then either select Rename from the Transform tab, or right-click and select Rename from the
menu that appears. The following image shows both options; you only need to choose one.
Rename the columns to State Name and State Code. Rename the table by entering StateCodes in the
Name box on the Query Settings pane.
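For reference, each of these commands maps to an M function. The following sketch uses a placeholder step name (previousStep) and assumed column names where the article doesn't state them, so treat it as illustrative rather than the exact generated code.

// Remove the bottom 26 rows (the territories).
Table.RemoveLastN(previousStep, 26)
// Clear the Federal district check box under the Region Status column.
Table.SelectRows(previousStep, each [Region Status] <> "Federal district")
// Remove the unneeded columns.
Table.RemoveColumns(previousStep, {"Column1", "Column3", "Column4"})
// Rename the two remaining columns (original names assumed).
Table.RenameColumns(previousStep, {{"Column2", "State Name"}, {"Column5", "State Code"}})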
Now that you've shaped the StateCodes table the way you want, you'll combine these two tables—or queries—into
one.
There are two primary ways of combining queries: merging and appending.
When you have one or more columns that you'd like to add to another query, you merge the queries.
When you have additional rows of data that you'd like to add to an existing query, you append the query.
In this case, you'll want to merge queries. To get started, from the left pane of Power Query Editor, select the query
into which you want the other query to merge, which in this case is RetirementStats. Then on the Home tab, select
Combine > Merge queries .
You might be prompted to set privacy levels, to ensure that data is combined without including or transferring data
you didn't want transferred.
Next, the Merge dialog box appears. You're prompted to select which query you'd like merged into the selected
query, and then you're prompted to select the matching columns to use for the merge. Select the State column in
the RetirementStats query, and then select the StateCodes query (easy in this case, because there's only one other
query—when you connect to many data sources, there will be many queries to choose from). When you select the
correct matching columns—State from RetirementStats and State Name from StateCodes—the Merge dialog
box looks like the following image, and the OK button is enabled.
A column named NewColumn is created at the end of the query. This column contains the contents of the table
(query) that was merged with the existing query. All columns from the merged query are condensed into
NewColumn , but you can expand the table and include whichever columns you want.
To expand the merged table and select the columns to include, select the expand icon. The Expand panel
appears.
In this case, you only want the State Code column, so select only that column and then select OK . Clear the Use
original column name as prefix check box, because you don't need or want that. (If you do leave that check box
selected, the merged column will be named NewColumn.State Code : the original column name followed by a
dot, followed by the name of the column that's being brought into the query.)
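In M, the merge and the expand correspond to two function calls. This sketch assumes the default left outer join and uses illustrative step names.

#"Merged Queries" = Table.NestedJoin(RetirementStats, {"State"}, StateCodes, {"State Name"}, "NewColumn", JoinKind.LeftOuter),
#"Expanded NewColumn" = Table.ExpandTableColumn(#"Merged Queries", "NewColumn", {"State Code"}, {"State Code"})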
NOTE
You can experiment with different ways of bringing in the NewColumn table. If you don't like the results, you can just delete
that step from the Applied steps list in the Query settings pane. Your query will return to the state it was in before you
applied that Expand step, so you can try as many times as you like until the expanded query looks the way you want it.
You now have a single query (table) that combines two data sources, each of which has been shaped to meet your
needs. This query can serve as a basis for lots of additional, interesting data connections, such as housing cost
statistics, demographics, or job opportunities in any state.
To apply changes and close Power Query Editor, go to the Home tab and select Close & Apply . The transformed
dataset appears in Power BI Desktop, ready to be used for creating reports.
Next step
There are all sorts of things you can do with Power Query. If you're ready to create your own custom connector, go
to the following article.
Creating your first connector: Hello World
Using the Applied Steps list
Any transformations to your data will show in the Applied Steps list. For instance, if you change the first column
name, it will display in the Applied Steps list as Renamed Columns .
Selecting any step will show you the results of that particular step, so you can see exactly how your data changes as
you add steps to the query.
The Query Settings menu will open to the right with the Applied Steps list.
Rename step
To rename a step, right-click the step and select Rename .
Enter the name you want, and then either select Enter or click away from the step.
Delete step
To delete a step, right-click the step and select Delete .
Alternatively, select the x next to the step.
To insert a new intermediate step, right-click a step and select Insert step after. Then select Insert in the new
window.
To set a transformation for the new step, select the new step in the list and make the change to the data. It will
automatically link the transformation to the selected step.
Move step
To move a step up one position in the list, right-click the step and select Move up .
To move a step down one position in the list, right-click the step and select Move down .
Alternatively, or to move more than a single position, drag and drop the step to the desired location.
Extract the previous steps into query
You can also separate a series of transformations into a different query. This allows the query to be referenced for
other sources, which can be helpful if you're trying to apply the same transformation to multiple datasets. To extract
all the previous steps into a new query, right-click the first step you do not want to include in the query and select
Extract Previous .
Name the new query and select OK . To access the new query, navigate to the Queries pane on the left side of the
screen.
Using the data profiling tools
The data profiling tools provide new and intuitive ways to clean, transform, and understand data in Power Query
Editor. They include:
Column quality
Column distribution
Column profile
To enable the data profiling tools, go to the View tab on the ribbon. Enable the options you want in the Data
preview group, as shown in the following image.
After you enable the options, you'll see something like the following image in Power Query Editor.
NOTE
By default, Power Query will perform this data profiling over the first 1,000 rows of your data. To have it operate over the
entire dataset, check the lower-left corner of your editor window to change how column profiling is performed.
Column quality
The column quality feature labels values in rows in three categories:
Valid , shown in green
Error , shown in red
Empty , shown in dark grey
These indicators are displayed directly underneath the name of the column as part of a small bar chart, as shown
in the following image.
The number of records in each column quality category is also displayed as a percentage.
By hovering over any of the columns, you are presented with the numerical distribution of the quality of values
throughout the column. Additionally, selecting the ellipsis button (...) opens some quick action buttons for
operations on the values.
Column distribution
This feature provides a set of visuals underneath the names of the columns that showcase the frequency and
distribution of the values in each of the columns. The data in these visualizations is sorted in descending order
from the value with the highest frequency.
By hovering over the distribution data in any of the columns, you get information about the overall data in the
column (with distinct count and unique values). You can also select the ellipsis button and choose from a menu of
available operations.
Column profile
This feature provides a more in-depth look at the data in a column. Apart from the column distribution chart, it
contains a column statistics chart. This information is displayed underneath the data preview section, as shown in
the following image.
Filter by value
You can interact with the value distribution chart on the right side and select any of the bars by hovering over the
parts of the chart.
Copy data
In the upper-right corner of both the column statistics and value distribution sections, you can select the ellipsis
button (...) to display a Copy shortcut menu. Select it to copy the data displayed in either section to the clipboard.
Group by value
When you select the ellipsis button (...) in the upper-right corner of the value distribution chart, in addition to Copy
you can select Group by . This feature groups the values in your chart by a set of available options.
The image below shows a column of product names that have been grouped by text length. After the values have
been grouped in the chart, you can interact with individual values in the chart as described in Filter by value.
Using the Queries pane
In Power Query, you'll be creating many different queries. Whether it be from getting data from many tables or
from duplicating the original query, the number of queries will increase.
You'll be using the Queries pane to navigate through the queries.
NOTE
Some actions in the Power Query Online editor may be different than actions in the Power Query Desktop editor. These
differences will be noted in this article.
To be more comprehensive, we'll be touching on all of the context menu actions that are relevant for either.
Rename a query
To directly change the name of the query, double-select on the name of the query. This action will allow you to
immediately change the name.
Other options to rename the query are:
Go to the context menu and select Rename .
Go to Query Settings and enter a different name in the Name input field.
Delete a query
To delete a query, open the context pane on the query and select Delete . There will be an additional pop-up
confirming the deletion. To complete the deletion, select the Delete button.
Duplicating a query
Duplicating a query will create a copy of the query you're selecting.
To duplicate your query, open the context pane on the query and select Duplicate . A new duplicate query will pop
up on the side of the query pane.
Referencing a query
Referencing a query will create a new query. The new query uses the steps of a previous query without having to
duplicate the query. Additionally, any changes on the original query will transfer down to the referenced query.
To reference your query, open the context pane on the query and select Reference . A new referenced query will
pop up on the side of the query pane.
NOTE
To learn more about how to copy and paste queries in Power Query, see Sharing a query.
For the sake of being more comprehensive, we'll once again describe all of the context menu actions that are
relevant for either.
New query
You can import data into the Power Query editor as an option from the context menu.
This option functions the same as the Get Data feature.
NOTE
To learn about how to get data into Power Query, see Getting data
Merge queries
When you select the Merge queries option from the context menu, the Merge queries input screen opens.
This option functions the same as the Merge queries feature located on the ribbon and in other areas of the
editor.
NOTE
To learn more about how to use the Merge queries feature, see Merge queries overview.
New parameter
When you select the New parameter option from the context menu, the New parameter input screen opens.
This option functions the same as the New parameter feature located on the ribbon.
NOTE
To learn more about Parameters in Power Query, see Using parameters.
New group
You can make folders and move the queries into and out of the folders for organizational purposes. These folders
are called groups.
To move the query into a group, open the context menu on the specific query.
In the menu, select Move to group .
Then, select the group you want to put the query in.
The move will look like the following image. Using the same steps as above, you can also move the query out of
the group by selecting Queries (root) or another group.
In desktop versions of Power Query, you can also drag and drop the queries into the folders.
Using Schema view (Preview)
Schema view is designed to optimize your flow when working on schema level operations by putting your query's
column information front and center. Schema view provides contextual interactions to shape your data structure,
and lower latency operations as it only requires the column metadata to be computed and not the complete data
results.
This article walks you through schema view and the capabilities it offers.
Overview
When working on data sets with many columns, simple tasks can become incredibly cumbersome because even
finding the right column by horizontally scrolling and parsing through all the data is inefficient. Schema view
displays your column information in a list that's easy to parse and interact with, making it easier than ever to work
on your schema.
In addition to an optimized column management experience, another key benefit of schema view is that transforms
tend to yield results faster. These results are faster because this view only requires the columns information to be
computed instead of a preview of the data. So even working with long running queries with a few columns will
benefit from using schema view.
You can turn on schema view by selecting Schema view in the View tab. When you're ready to work on your data
again, you can select Data view to go back.
Reordering columns
One common task when working on your schema is reordering columns. In Schema View this can easily be done
by dragging columns in the list and dropping in the right location until you achieve the desired column order.
Applying transforms
For more advanced changes to your schema, you can find the most used column-level transforms right at your
fingertips directly in the list and in the Schema tools tab. Plus, you can also use transforms available in other tabs
on the ribbon.
Share a query
You can use Power Query to extract and transform data from external data sources. These extraction and
transformation steps are represented as queries. Queries created with Power Query are expressed using the M
language and executed through the M Engine.
You can easily share and reuse your queries across projects, and also across Power Query product integrations.
This article covers the general mechanisms to share a query in Power Query.
Copy / Paste
In the queries pane, right-click the query you want to copy. From the dropdown menu, select the Copy option. The
query and its definition will be added to your clipboard.
NOTE
The copy feature is currently not available in Power Query Online instances.
To paste the query from your clipboard, go to the queries pane and right-click on any empty space in it. From the
menu, select Paste .
When pasting this query on an instance that already has the same query name, the pasted query will have a suffix
added with the format (#) , where the pound sign is replaced with a number to distinguish the pasted queries.
You can also paste queries between multiple instances and product integrations. For example, you can copy the
query from Power BI Desktop, as shown in the previous images, and paste it in Power Query for Excel as shown in
the following image.
WARNING
Copying and pasting queries between product integrations doesn't guarantee that all functions and functionality found in the
pasted query will work on the destination. Some functionality might only be available in the origin product integration.
NOTE
To create a blank query, go to the Get Data window and select Blank query from the options.
If you find yourself in a situation where you need to apply the same set of transformations to different queries or
values, creating a Power Query custom function that can be reused as many times as you need could be beneficial.
A Power Query custom function is a mapping from a set of input values to a single output value, and is created
from native M functions and operators.
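As a minimal sketch (not taken from this article), a custom function written directly in M could look like the following; the names and logic are invented for illustration.

// Maps two input values (a price and a tax rate) to a single output value.
(price as number, taxRate as number) as number =>
    Number.Round(price * (1 + taxRate), 2)

Invoking this function with 100 and 0.07 returns 107.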
While you can manually create your own Power Query custom function using code as shown in Understanding
Power Query M functions, the Power Query user interface offers you features to speed up, simplify, and enhance
the process of creating and managing a custom function. This article focuses on this experience provided only
through the Power Query user interface and how to get the most out of it.
This option will effectively create a new query with a navigation step directly to that file as a Binary, and the name
of this new query will be the file path of the selected file. Rename this query to be Sample File .
Create a new parameter with the name File Parameter . Use the Sample File query as the Current Value , as
shown in the following image.
NOTE
We recommend that you read the article on Parameters to better understand how to create and manage parameters in
Power Query.
Custom functions can be created using any parameter type. There's no requirement for any custom function to have a
binary as a parameter.
It's possible to create a custom function without a parameter. This is commonly seen in scenarios where an input can be
inferred from the environment where the function is being invoked. For example, a function that takes the environment's
current date and time, and creates a specific text string from those values.
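A sketch of that last case, assuming you simply want the current local date and time formatted as text:

// A custom function with no parameters; it reads the environment's current date and time.
() as text =>
    DateTime.ToText(DateTime.LocalNow(), [Format = "yyyy-MM-dd HH:mm"])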
Right-click File Parameter from the Queries pane. Select the Reference option.
Rename the newly created query from File Parameter (2) to Transform Sample file .
Right-click this new Transform Sample file query and select the Create Function option.
This operation will effectively create a new function that will be linked with the Transform Sample file query. Any
changes that you make to the Transform Sample file query will be automatically replicated to your custom
function. During the creation of this new function, use Transform file as the Function name .
After creating the function, you'll notice that a new group will be created for you with the name of your function.
This new group will contain:
All parameters that were referenced in your Transform Sample file query.
Your Transform Sample file query, commonly known as the sample query.
Your newly created function, in this case Transform file .
Applying transformations to a sample query
With your new function created, select the query with the name Transform Sample file . This query is now linked
with the Transform file function, so any changes made to this query will be reflected in the function. This is what
is known as the concept of a sample query linked to a function.
The first transformation that needs to happen to this query is one that will interpret the binary. You can right-click
the binary from the preview pane and select the CSV option to interpret the binary as a CSV file.
The format of all the CSV files in the folder is the same. They all have a header that spans the top four rows.
The column headers are located in row five and the data starts from row six downwards, as shown in the next
image.
The next set of transformation steps that need to be applied to the Transform Sample file are:
1. Remove the top four rows —This action will get rid of the rows that are considered part of the header
section of the file.
NOTE
To learn more about how to remove rows or filter a table by row position, see Filter by row position.
2. Promote headers —The headers for your final table are now in the first row of the table. You can promote
them as shown in the next image.
Power Query by default will automatically add a new Changed Type step after promoting your column headers
that will automatically detect the data types for each column. Your Transform Sample file query will look like the
next image.
NOTE
To learn more about how to promote and demote headers, see Promote or demote column headers.
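Put together, the Transform Sample file query would read roughly like the following sketch. The CSV options and the column names in the Changed Type step are assumptions for illustration, not the exact code generated by the article's steps.

let
    // Interpret the binary file parameter as a CSV document.
    Source = Csv.Document(#"File Parameter"),
    // Remove the top four rows that belong to the file's header section.
    #"Removed Top Rows" = Table.Skip(Source, 4),
    // Promote the first remaining row to column headers.
    #"Promoted Headers" = Table.PromoteHeaders(#"Removed Top Rows", [PromoteAllScalars = true]),
    // Automatically added step that detects data types (column names assumed).
    #"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",
        {{"Date", type date}, {"Country", type text}, {"Units", Int64.Type}})
in
    #"Changed Type"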
Caution
Your Transform file function relies on the steps performed in the Transform Sample file query. However, if you
try to manually modify the code for the Transform file function, you'll be greeted with a warning that reads
The definition of the function 'Transform file' is updated whenever query 'Transform Sample file' is updated.
However, updates will stop if you directly modify function 'Transform file'.
After you select OK , a new column with the name Output Table will be created. This column has Table values in
its cells, as shown in the next image. For simplicity, remove all columns from this table except Name and Output
Table .
NOTE
To learn more about how to choose or remove columns from a table, see Choose or remove columns.
Your function was applied to every single row from the table using the values from the Content column as the
argument for your function. Now that the data has been transformed into the shape that you're looking for, you can
expand the Output Table column, as shown in the image below, without using any prefix for the expanded
columns.
You can verify that you have data from all files in the folder by checking the values in the Name or Date column.
For this case, you can check the values from the Date column, as each file only contains data for a single month
from a given year. If you see more than one, it means that you've successfully combined data from multiple files
into a single table.
NOTE
What you've read so far is fundamentally the same process that happens during the Combine files experience, but done
manually.
We recommend that you also read the article on Combine files overview and Combine CSV files to further understand how
the combine files experience works in Power Query and the role that custom functions play.
With this new parameter, select the Transform Sample file query and filter the Country field using the value
from the Market parameter.
NOTE
To learn more about how to filter columns by values, see Filter values.
Applying this new step to your query will automatically update the Transform file function, which will now
require two parameters based on the two parameters that your Transform Sample file uses.
But the CSV files query has a warning sign next to it. Now that your function has been updated, it requires two
parameters. So the step where you invoke the function results in error values, since only one of the arguments was
passed to the Transform file function during the Invoked Custom Function step.
To fix the errors, double-click Invoked Custom Function in the Applied Steps to open the Invoke Custom
Function window. In the Market parameter, manually enter the value Panama .
You can now check your query to validate that only rows where Country is equal to Panama show up in the final
result set of the CSV Files query.
Create a custom function from a reusable piece of logic
If you have multiple queries or values that require the same set of transformations, you could create a custom
function that acts as a reusable piece of logic. Later, this custom function can be invoked against the queries or
values of your choice. This custom function could save you time and help you in managing your set of
transformations in a central location, which you can modify at any moment.
For example, imagine a query that has several codes as a text string and you want to create a function that will
decode those values.
You start by having a parameter that has a value that serves as an example. For this case, it will be the value
PTY-CM1090-LAX.
From that parameter, you create a new query where you apply the transformations that you need. For this case,
you want to split the code PTY-CM1090-LAX into multiple components:
Origin = PTY
Destination = LAX
Airline = CM
FlightID = 1090
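A hedged sketch of that transformation query, with step names invented for illustration (the article's actual query might split the code differently):

let
    code = "PTY-CM1090-LAX",                // in practice this comes from the example parameter
    parts = Text.Split(code, "-"),          // {"PTY", "CM1090", "LAX"}
    Origin = parts{0},
    Destination = parts{2},
    Airline = Text.Start(parts{1}, 2),      // first two characters of the middle segment
    FlightID = Text.Middle(parts{1}, 2)     // remaining characters of the middle segment
in
    [Origin = Origin, Destination = Destination, Airline = Airline, FlightID = FlightID]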
NOTE
To learn more about the Power Query M formula language, see Power Query M formula language
You can then transform that query into a function by right-clicking the query and selecting Create Function.
Finally, you can invoke your custom function against any of your queries or values, as shown in the next
image.
After a few more transformations, you can see that you've reached your desired output and leveraged the logic for
such a transformation from a custom function.
Promote or demote column headers
When creating a new query from unstructured data sources such as text files, Power Query analyzes the contents
of the file. If Power Query identifies a different pattern for the first row, it will try to promote the first row of data to
be the column headings for your table. However, Power Query might not identify the pattern correctly 100 percent
of the time, so this article explains how you can manually promote or demote column headers from rows.
Table with the columns (Column1, Column2, Column3, and Column4) all set to the Text data type, with four rows
containing a header at the top, a column header in row 5, and seven data rows at the bottom.
Before you can promote the headers, you need to remove the first four rows of the table. To make that happen,
select the table menu in the upper-left corner of the preview window, and then select Remove top rows .
In the Remove top rows window, enter 4 in the Number of rows box.
NOTE
To learn more about Remove top rows and other table operations, go to Filter by row position.
The result of that operation will leave the headers as the first row of your table.
Locations of the promote headers operation
From here, you have a number of places where you can select the promote headers operation:
On the Home tab, in the Transform group.
After you do the promote headers operation, your table will look like the following image.
Table with Date, Country, Total Units, and Total Revenue column headers, and seven rows of data. The Date column
header has a Date data type, the Country column header has a Text data type, the Total Units column header has a
Whole number data type, and the Total Revenue column header has a Decimal number data type.
NOTE
Table column names must be unique. If the row you want to promote to a header row contains multiple instances of the
same text string, Power Query will disambiguate the column headings by adding a numeric suffix preceded by a dot to every
text string that isn't unique.
As a last step, select each column and type a new name for it. The end result will resemble the following image.
Final table after renaming column headers to Date, Country, Total Units, and Total Revenue, with Renamed columns
emphasized in the Query settings pane and the M code shown in the formula bar.
See also
Filter by row position
Filter a table by row position
Power Query has multiple options to filter a table based on the positions of its rows, either by keeping or
removing those rows. This article covers all the available methods.
Keep rows
The keep rows set of functions will select a set of rows from the table and remove any other rows that don't meet
the criteria.
There are two places where you can find the Keep rows buttons:
On the Home tab, in the Reduce Rows group.
Keep top rows
This report always contains seven rows of data, and below the data it has a section for comments with an
unknown number of rows. In this example, you only want to keep the first seven rows of data. To do that, select
Keep top rows from the table menu. In the Keep top rows dialog box, enter 7 in the Number of rows box.
The result of that change will give you the output table you're looking for. After you set the data types for your
columns, your table will look like the following image.
Keep bottom rows
Imagine the following table that comes out of a system with a fixed layout.
Initial sample table with Column1, Column2, and Column3 as the column headers, all set to the Text data type, and
the bottom seven rows containing data, and above that a column headers row and an unknown number of
comments.
This report always contains seven rows of data at the end of the report page. Above the data, the report has a
section for comments with an unknown number of rows. In this example, you only want to keep those last seven
rows of data and the header row.
To do that, select Keep bottom rows from the table menu. In the Keep bottom rows dialog box, enter 8 in the
Number of rows box.
The result of that operation will give you eight rows, but now your header row is part of the table.
You need to promote the column headers from the first row of your table. To do this, select Use first row as
headers from the table menu. After you define data types for your columns, you'll create a table that looks like the
following image.
Final sample table for Keep bottom rows after promoting the first row to column headers and retaining seven
rows of data, and then setting the Units to the Number data type.
More information: Promote or demote column headers
Keep a range of rows
Imagine the following table that comes out of a system with a fixed layout.
Initial sample table with the columns (Column1, Column2, and Column3) all set to the Text data type, and
containing the column headers and seven rows of data in the middle of the table.
This report always contains five rows for the header, one row of column headers below the header, seven rows of
data below the column headers, and then an unknown number of rows for its comments section. In this example,
you want to get the eight rows after the header section of the report, and only those eight rows.
To do that, select Keep range of rows from the table menu. In the Keep range of rows dialog box, enter 6 in
the First row box and 8 in the Number of rows box.
Similar to the previous example for keeping bottom rows, the result of this operation gives you eight rows with
your column headers as part of the table. Any rows above the First row that you defined (row 6) are removed.
You can perform the same operation as described in Keep bottom rows to promote the column headers from the
first row of your table. After you set data types for your columns, your table will look like the following image.
Final sample table for Keep range of rows after promoting first row to column headers, setting the Units column to
the Number data type, and keeping seven rows of data.
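A minimal M sketch of this operation, assuming the previous step is named Source, looks like the following. Note that Table.Range counts the starting row from zero, so First row 6 in the dialog box corresponds to an offset of 5.

// Keep eight rows, starting at the sixth row of the table (offset 5)
= Table.Range(Source, 5, 8)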
Remove rows
This set of functions will select a set of rows from the table, remove them, and keep the rest of the rows in the
table.
There are two places where you can find the Remove rows buttons:
On the Home tab, in the Reduce Rows group.
On the table menu.
Remove top rows
Imagine the following table that comes out of a system with a fixed layout.
Initial sample table for Remove top rows with the columns (Column1, Column2, and Column3) all set to the Text
data type, a header at the top and a column header row and seven data rows at the bottom.
This report always contains a fixed header from row 1 to row 5 of the table. In this example, you want to remove
these first five rows and keep the rest of the data.
To do that, select Remove top rows from the table menu. In the Remove top rows dialog box, enter 5 in the
Number of rows box.
In the same way as the previous examples for "Keep bottom rows" and "Keep a range of rows," the result of this
operation gives you eight rows with your column headers as part of the table.
You can perform the same operation as described in previous examples to promote the column headers from the
first row of your table. After you set data types for your columns, your table will look like the following image.
Final sample table for Remove top rows after promoting first row to column headers and setting the Units column
to the Number data type, and retaining seven rows of data.
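Assuming the previous step is named Source, the generated step is typically similar to this sketch:

// Remove the first five rows of the table
= Table.Skip(Source, 5)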
Remove bottom rows
Imagine the following table that comes out of a system with a fixed layout.
Initial sample table for Remove bottom rows, with the header columns all set to the Text data type, seven rows of
data, then a footer of fixed length at the bottom.
This report always contains a fixed section or footer that occupies the last five rows of the table. In this example,
you want to remove those last five rows and keep the rest of the data.
To do that, select Remove bottom rows from the table menu. In the Remove bottom rows dialog box, enter 5 in
the Number of rows box.
The result of that change will give you the output table that you're looking for. After you set data types for your
columns, your table will look like the following image.
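Assuming the previous step is named Source, the equivalent M is roughly:

// Remove the last five rows of the table
= Table.RemoveLastN(Source, 5)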
Filter by values in a column
10/30/2020 • 4 minutes to read • Edit Online
In Power Query, you can include or exclude rows according to a specific value in a column. You can choose from
three methods to filter the values in your column:
Sort and filter menu
Cell shortcut menu
Type-specific filter
After you apply a filter to a column, a small filter icon appears in the column heading, as shown in the following
illustration.
NOTE
In this article, we'll focus on aspects related to filtering data. To learn more about the sort options and how to sort columns
in Power Query, go to Sort columns.
Remove empty
The Remove empty command applies two filter rules to your column. The first rule gets rid of any null values. The
second rule gets rid of any blank values. For example, imagine a table with just one text column with five rows,
where you have one null value and one blank cell.
NOTE
A null value is a specific value in the Power Query language that represents no value.
You then select Remove empty from the sort and filter menu, as shown in the following image.
You can also select this option from the Home tab in the Reduce Rows group in the Remove Rows drop-down
options, as shown in the next image.
The result of the Remove empty operation gives you the same table without the empty values.
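As a sketch, the two filter rules applied by Remove empty correspond to a row selection like the following, where Source and Column1 are placeholder names for the previous step and the filtered column:

// Keep only rows where the value is neither null nor an empty text string
= Table.SelectRows(Source, each [Column1] <> null and [Column1] <> "")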
Clear filter
When a filter is applied to a column, the Clear filter command appears on the sort and filter menu.
Auto filter
The list in the sort and filter menu is called the auto filter list, which shows the unique values in your column. You
can manually select or deselect which values to include in the list. Any selected values will be taken into
consideration by the filter; any values that aren't selected will be ignored.
This auto filter section also has a search bar to help you find any values from your list.
NOTE
When you load the auto filter list, only the top 1,000 distinct values in the column are loaded. If there are more than 1,000
distinct values in the column that you're filtering, a message will appear indicating that the list of values in the filter list
might be incomplete, and the Load more link appears. Select the Load more link to load another 1,000 distinct values.
If exactly 1,000 distinct values are found again, the list is displayed with a message stating that the list might still be
incomplete.
If fewer than 1,000 distinct values are found, the full list of values is shown.
Type-specific filters
Depending on the data type of your column, you'll see different commands in the sort and filter menu. The
following images show examples for date, text, and numeric columns.
Filter rows
When selecting any of the type-specific filters, you'll use the Filter rows dialog box to specify filter rules for the
column. This dialog box is shown in the following image.
The Filter rows dialog box has two modes: Basic and Advanced .
Basic
With basic mode, you can implement up to two filter rules based on type-specific filters. In the preceding image,
notice that the name of the selected column is displayed after the label Keep rows where , to let you know which
column these filter rules are being implemented on.
For example, imagine that in the following table, you want to filter the Account Code by all values that start with
either PA or PTY .
To do that, you can go to the Filter rows dialog box for the Account Code column and specify the set of filter
rules you want.
In this example, first select the Basic button. Then under Keep rows where "Account Code", select begins
with, and then enter PA. Then select the or button. Under the or button, select begins with, and then enter PTY.
Then select OK.
The result of that operation will give you the set of rows that you're looking for.
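Assuming the previous step is named Source, the filter defined in basic mode corresponds to a step roughly like this:

// Keep rows where Account Code begins with "PA" or "PTY"
= Table.SelectRows(Source, each Text.StartsWith([Account Code], "PA") or Text.StartsWith([Account Code], "PTY"))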
Advanced
With advanced mode, you can implement as many type-specific filters as necessary from all the columns in the
table.
For example, imagine that instead of applying the previous filter in basic mode, you wanted to implement a filter to
Account Code to show all values that end with 4 . Also, you want to show values over $100 in the Sales column.
In this example, first select the Advanced button. In the first row, select Account Code under Column name,
ends with under Operator, and 4 under Value. In the second row, select and, and then select Sales under
Column name, is greater than under Operator, and 100 under Value. Then select OK.
The result of that operation will give you just one row that meets both criteria.
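The advanced filter corresponds to a step roughly like the following sketch, again assuming the previous step is named Source:

// Keep rows where Account Code ends with "4" and Sales is greater than 100
= Table.SelectRows(Source, each Text.EndsWith([Account Code], "4") and [Sales] > 100)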
NOTE
You can add as many clauses as you'd like by selecting Add clause . All clauses act at the same level, so you might want to
consider creating multiple filter steps if you need to implement filters that rely on other filters.
Choose or remove columns
10/30/2020 • 2 minutes to read • Edit Online
Choose columns and Remove columns are operations that help you define what columns your table needs to
keep and which ones it needs to remove. This article will showcase how to use the Choose columns and Remove
columns commands by using the following sample table for both operations.
The goal is to create a table that looks like the following image.
Choose columns
On the Home tab, in the Manage columns group, select Choose columns .
The Choose columns dialog box appears, containing all the available columns in your table. You can select all the
fields that you want to keep and remove specific fields by clearing their associated check box. For this example, you
want to remove the GUID and Report created by columns, so you clear the check boxes for those fields.
After selecting OK , you'll create a table that only contains the Date , Product , SalesPerson , and Units columns.
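Assuming the previous step is named Source, the Choose columns selection typically produces a step similar to:

// Keep only the listed columns and drop the rest
= Table.SelectColumns(Source, {"Date", "Product", "SalesPerson", "Units"})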
Remove columns
When you select Remove columns from the Home tab, you have two options:
Remove columns : Removes the selected columns.
Remove other columns : Removes all columns from the table except the selected ones.
After selecting Remove other columns , you'll create a table that only contains the Date , Product , SalesPerson ,
and Units columns.
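As a sketch, and assuming the previous step is named Source, Remove columns corresponds to a step like the following; Remove other columns instead generates a Table.SelectColumns step like the one shown in the previous section.

// Drop only the selected columns and keep everything else
= Table.RemoveColumns(Source, {"GUID", "Report created by"})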
Grouping or summarizing rows
10/30/2020 • 5 minutes to read • Edit Online
In Power Query, you can group values in various rows into a single value by grouping the rows according to the
values in one or more columns. You can choose from two types of grouping operations:
Aggregate a column by using an aggregate function.
Perform a row operation.
For this tutorial, you'll be using the sample table shown in the following image.
Table with columns showing Year (2020), Country (USA, Panama, or Canada), Product (Shirt or Shorts), Sales
channel (Online or Reseller), and Units (various values from 55 to 7500)
After that operation is complete, notice how the Products column has [Table] values inside each cell. Each [Table]
value contains all the rows that were grouped by the Country and Sales Channel columns from your original
table. You can select the white space inside the cell to see a preview of the contents of the table at the bottom of the
dialog box.
NOTE
The details preview pane might not show all the rows that were used for the group-by operation. You can select the [Table]
value to see all rows pertaining to the corresponding group-by operation.
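As a hedged sketch, grouping by Country and Sales Channel with an All rows style aggregation named Products corresponds to a step roughly like the following, where Source is a placeholder for the previous step (the real generated step may carry a more detailed table type):

// Group by Country and Sales Channel; keep each group's rows as a nested table
= Table.Group(Source, {"Country", "Sales Channel"}, {{"Products", each _, type table}})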
Next, you need to extract the row that has the highest value in the Units column of the tables inside the new
Products column, and call that new column Top performer product .
Extract the top performer product information
With the new Products column with [Table] values, you create a new custom column by going to the Add Column
tab on the ribbon and selecting Custom column from the General group.
Name your new column Top performer product . Enter the formula Table.Max([Products], "Units") under
Custom column formula .
The result of that formula creates a new column with [Record] values. These record values are essentially a table
with just one row. These records contain the row with the maximum value for the Units column of each [Table]
value in the Products column.
With this new Top performer product column that contains [Record] values, you can select the expand icon,
select the Product and Units fields, and then select OK .
After removing your Products column and setting the data type for both newly expanded columns, your result will
resemble the following image.
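Put together, the custom column and the expansion correspond roughly to the following sketch, where Grouped is a placeholder name for the grouped step and the expanded column names are illustrative:

let
    // For each group, pick the row with the highest value in Units
    AddedTop = Table.AddColumn(Grouped, "Top performer product", each Table.Max([Products], "Units")),
    // Expand only the Product and Units fields from the record
    Expanded = Table.ExpandRecordColumn(AddedTop, "Top performer product", {"Product", "Units"}, {"Top product", "Top units"})
in
    Expanded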
Fuzzy grouping
To demonstrate how to do "fuzzy grouping," consider the sample table shown in the following image.
The goal of fuzzy grouping is to do a group-by operation that uses an approximate match algorithm for text strings.
Power Query uses the Jaccard similarity algorithm to measure the similarity between pairs of instances. Then it
applies agglomerative hierarchical clustering to group instances together. The following image shows the output
that you expect, where the table will be grouped by the Person column.
To do the fuzzy grouping, you perform the same steps previously described in this article. The only difference is that
this time, in the Group by dialog box, you select the Use fuzzy grouping check box.
For each group of rows, Power Query will pick the most frequent instance as the "canonical" instance. If multiple
instances occur with the same frequency, Power Query will pick the first one. After you select OK in the Group by
dialog box, you'll get the result that you were expecting.
However, you have more control over the fuzzy grouping operation by expanding Fuzzy group options .
The following options are available for fuzzy grouping:
Similarity threshold (optional) : This option indicates how similar two values must be to be grouped
together. The minimum setting of 0 will cause all values to be grouped together. The maximum setting of 1 will
only allow values that match exactly to be grouped together. The default is 0.8.
Ignore case : When comparing text strings, case will be ignored. This option is enabled by default.
Group by combining text parts : The algorithm will try to combine text parts (such as combining Micro and
soft into Microsoft ) to group values.
Transformation table (optional) : You can select a transformation table that will map values (such as mapping
MSFT to Microsoft ) to group them together.
For this example, a transformation table will be used to demonstrate how values can be mapped. The
transformation table has two columns:
From : The text string to look for in your table.
To : The text string to use to replace the text string in the From column.
The following image shows the transformation table used in this example.
Return to the Group by dialog box, expand Fuzzy group options , and then select the Transformation table
drop-down menu.
After selecting your transformation table, select OK . The result of that operation will give you the result shown in
the following image.
In this example, the Ignore case option was enabled, so the values in the From column of the Transformation
table will be used to look for the text string without considering the case of the string. This transformation
operation occurs first, and then the fuzzy grouping operation is performed.
NOTE
When grouping by multiple columns, the transformation table will perform the replace operation in all columns if replacing
the value increases the similarity score.
See also
Add a custom column
Remove duplicates
Unpivot columns
10/30/2020 • 7 minutes to read • Edit Online
In Power Query, you can transform columns into attribute-value pairs, where columns become rows.
Diagram showing a table on the left with a blank column and rows, and the Attributes values A1, A2, and A3 as
column headers. The A1 column contains the values V1, V4, and V7, the A2 column contains the values V2, V5, and
V8, and the A3 column contains the values V3, V6, and V9. With the columns unpivoted, a table on the right of the
diagram contains a blank column and rows, an Attributes column with nine rows with A1, A2, and A3 repeated
three times, and a Values column with values V1 through V9.
For example, given a table like the following, where country rows and date columns create a matrix of values, it's
difficult to analyze the data in a scalable way.
Table containing a Country column set in the Text data type, and 6/1/2020, 7/1/2020, and 8/1/2020 columns set as
the Whole number data type. The Country column contains USA in row 1, Canada in row 2, and Panama in row 3.
Instead, you can transform the table into a table with unpivoted columns, as shown in the following image. In the
transformed table, it's easier to use the date as an attribute to filter on.
Table containing a Country column set as the Text data type, an Attribute column set as the Text data type, and a
Value column set as the Whole number data type. The Country column contains USA in the first three rows, Canada
in the next three rows, and Panama in the last three rows. The Attribute column contains 6/1/2020 in the first, fourth,
and seventh rows, 7/1/2020 in the second, fifth, and eighth rows, and 8/1/2020 in the third, sixth, and ninth rows.
The key in this transformation is that you have a set of dates in the table that should all be part of a single column.
The respective value for each date and country should be in a different column, effectively creating an attribute-
value pair.
Power Query will always create the attribute-value pair by using two columns:
Attribute : The name of the column headings that were unpivoted.
Value : The values that were underneath each of the unpivoted column headings.
There are multiple places in the user interface where you can find Unpivot columns . You can right-click the
columns that you want to unpivot, or you can select the command from the Transform tab in the ribbon.
There are three ways that you can unpivot columns from a table:
Unpivot columns
Unpivot other columns
Unpivot only selected columns
Unpivot columns
For the scenario described above, you first need to select the columns you want to unpivot. You can hold down Ctrl as
you select as many columns as you need. For this scenario, you want to select all the columns except the one
named Country. After selecting the columns, right-click any of the selected columns, and then select Unpivot
columns .
The result of that operation will yield the result shown in the following image.
Table containing a Country column set as the Text data type, an Attribute column set as the Text data type, and a
Value column set as the Whole number data type. The Country column contains USA in the first three rows, Canada
in the next three rows, and Panama in the last three rows. The Attribute column contains 6/1/2020 in the first, fourth,
and seventh rows, 7/1/2020 in the second, fifth, and eighth rows, and 8/1/2020 in the third, sixth, and ninth rows.
In addition, the Unpivot columns entry is emphasized in the Query settings pane and the M language code is
shown in the formula bar.
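The step shown in the formula bar is typically similar to one of the following sketches, assuming the previous step is named Source:

// Unpivot columns / Unpivot other columns: keep Country and turn every other column into attribute-value pairs
= Table.UnpivotOtherColumns(Source, {"Country"}, "Attribute", "Value")

// Unpivot only selected columns: unpivot exactly the listed columns
= Table.Unpivot(Source, {"6/1/2020", "7/1/2020", "8/1/2020"}, "Attribute", "Value")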
Special considerations
After creating your query from the steps above, imagine that your initial table gets updated to look like the
following screenshot.
Table with the same original Country, 6/1/2020, 7/1/2020, and 8/1/2020 columns, with the addition of a 9/1/2020
column. The Country column still contains the USA, Canada, and Panama values, but also has UK added to the
fourth row and Mexico added to the fifth row.
Notice that you've added a new column for the date 9/1/2020 (September 1, 2020), and two new rows for the
countries UK and Mexico.
If you refresh your query, you'll notice that the operation will be done on the updated column, but won't affect the
column that wasn't originally selected (Country, in this example). This means that any new column that's added to
the source table will be unpivoted as well.
The following image shows what your query will look like after the refresh with the new updated source table.
Table with Country, Attribute, and Value columns. The first four rows of the Country column contains USA, the
second four rows contains Canada, the third four rows contains Panama, the fourth four rows contains UK, and the
fifth four rows contains Mexico. The Attribute column contains 6/1/2020, 7/1/2020, 8/1/2020, and 9/1/2020 in the
first four rows, which are repeated for each country.
Unpivot other columns
You can also select the columns that you don't want to unpivot, and then unpivot all the other columns in the table by
selecting Unpivot other columns. The result of that operation will yield exactly the same result as the one you got from Unpivot columns.
Table containing a Country column set as the Text data type, an Attribute column set as the Text data type, and a
Value column set as the Whole number data type. The Country column contains USA in the first three rows, Canada
in the next three rows, and Panama in the last three rows. The Attribute column contains 6/1/2020 in the first, fourth,
and seventh rows, 7/1/2020 in the second, fifth, and eighth rows, and 8/1/2020 in the third, sixth, and ninth rows.
NOTE
This transformation is crucial for queries that have an unknown number of columns. The operation will unpivot all columns
from your table except the ones that you've selected. This is an ideal solution if the data source of your scenario got new date
columns in a refresh, because those will get picked up and unpivoted.
Special considerations
Similar to the Unpivot columns operation, if your query is refreshed and more data is picked up from the data
source, all the columns will be unpivoted except the ones that were previously selected.
To illustrate this, say that you have a new table like the one in the following image.
Table with Country, 6/1/2020, 7/1/2020, 8/1/2020, and 9/1/2020 columns, with all columns set to the Text data
type. The Country column contains, from top to bottom, USA, Canada, Panama, UK, and Mexico.
You can select the Country column, and then select Unpivot other columns , which will yield the following result.
Table with Country, Attribute, and Value columns. The Country and Attribute columns are set to the Text data type.
The Value column is set to the Whole value data type. The first four rows of the Country column contain USA, the
second four rows contains Canada, the third four rows contains Panama, the fourth four rows contains UK, and the
fifth four rows contains Mexico. The Attribute column contains 6/1/2020, 7/1/2020, 8/1/2020, and 9/1/2020 in the
first four rows, which are repeated for each country.
Unpivot only selected columns
With Unpivot only selected columns, only the columns that you select are unpivoted; in this case, the three date
columns. Notice how this operation will yield the same output as the previous examples.
Table containing a Country column set as the Text data type, an Attribute column set as the Text data type, and a
Value column set as the Whole number data type. The Country column contains USA in the first three rows, Canada
in the next three rows, and Panama in the last three rows. The Attribute column contains 6/1/2020 in the first, fourth,
and seventh rows, 7/1/2020 in the second, fifth, and eighth rows, and 8/1/2020 in the third, sixth, and ninth rows.
Special considerations
After doing a refresh, if your source table changes to have a new 9/1/2020 column and new rows for UK and
Mexico, the output of the query will be different from the previous examples. Say that your source table, after a
refresh, changes to the table in the following image.
The output of your query will look like the following image.
It looks like this because the unpivot operation was applied only on the 6/1/2020 , 7/1/2020 , and 8/1/2020
columns, so the column with the header 9/1/2020 remains unchanged.
Pivot columns
10/30/2020 • 4 minutes to read • Edit Online
In Power Query, you can create a table that contains an aggregate value for each unique value in a column. Power
Query groups each unique value, does an aggregate calculation for each value, and pivots the column into a new
table.
Diagram showing a table on the left with a blank column and rows. An Attributes column contains nine rows with
A1, A2, and A3 repeated three times. A Values column contains, from top to bottom, values V1 through V9. With the
columns pivoted, a table on the right contains a blank column and rows, the Attributes values A1, A2, and A3 as
column headers, with the A1 column containing the values V1, V4, and V7, the A2 column containing the values V2,
V5, and V8, and the A3 column containing the values V3, V6, and V9.
Imagine a table like the one in the following image.
Table containing a Country column set as the Text data type, a Date column set as the Date data type, and a Value
column set as the Whole number data type. The Country column contains USA in the first three rows, Canada in the
next three rows, and Panama in the last three rows. The Date column contains 6/1/2020 in the first, fourth, and
seventh rows, 7/1/2020 in the second, fifth, and eighth rows, and 8/1/2020 in the third, sixth, and ninth rows.
This table contains values by country and date in a simple table. In this example, you want to transform this table
into the one where the date column is pivoted, as shown in the following image.
Table containing a Country column set in the Text data type, and 6/1/2020, 7/1/2020, and 8/1/2020 columns set as
the Whole number data type. The Country column contains Canada in row 1, Panama in row 2, and USA in row 3.
NOTE
During the pivot columns operation, Power Query will sort the table based on the values found on the first column—at the
left side of the table—in ascending order.
To pivot a column
1. Select the column that you want to pivot.
2. On the Transform tab in the Any column group, select Pivot column .
3. In the Pivot column dialog box, in the Value column list, select Value .
By default, Power Query will try to do a sum as the aggregation, but you can select the Advanced option to
see other available aggregations.
The available options are:
Don't aggregate
Count (all)
Count (not blank)
Minimum
Maximum
Median
Sum
Average
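As a sketch, a pivot of the Date column that sums the Value column corresponds to a step roughly like the following, assuming the previous step is named Source and that the Date values are already text (the pivoted values become column names, so a date column may first need converting to text):

// Create one column per distinct Date value and sum Value for each Country
= Table.Pivot(Source, List.Distinct(Source[Date]), "Date", "Value", List.Sum)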
Table with Country column containing USA in the first three rows, Canada in the next three rows, and Panama in
the last three rows. The Position column contains First Place in the first, fourth, and seventh rows, Second Place in
the second, fifth, and eighth rows, and Third Place in the third, sixth, and ninth rows.
Let's say you want to pivot the Position column in this table so you can have its values as new columns. For the
values of these new columns, you'll use the values from the Product column. Select the Position column, and then
select Pivot column to pivot that column.
In the Pivot column dialog box, select the Product column as the value column. Then select the Advanced
option button, and select Don't aggregate .
The result of this operation will yield the result shown in the following image.
Table containing Country, First Place, Second Place, and Third Place columns, with the Country column containing
Canada in row 1, Panama in row 2, and USA in row 3.
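Assuming the previous step is named Source, the Don't aggregate variant simply omits the aggregation function, roughly as follows:

// Pivot Position into new columns and fill them with the matching Product values
= Table.Pivot(Source, List.Distinct(Source[Position]), "Position", "Product")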
Errors when using the Don't aggregate option
The way the Don't aggregate option works is that it grabs a single value for the pivot operation to be placed as
the value for the intersection of the column and row pair. For example, let's say you have a table like the one in the
following image.
Table with a Country, Date, and Value columns. The Country column contains USA in the first three rows, Canada in
the next three rows, and Panama in the last three rows. The Date column contains a date of 6/1/2020 in all rows.
The value column contains various whole numbers between 20 and 785.
You want to pivot that table by using the Date column, and you want to use the values from the Value column.
Because this pivot would make your table have just the Country values on rows and the Dates as columns, you'd
get an error for every single cell value because there are multiple rows for every combination of Country and
Date. The outcome of the Pivot column operation will yield the results shown in the following image.
Power Query Editor pane showing a table with Country and 6/1/2020 columns. The Country column contains
Canada in the first row, Panama in the second row, and USA in the third row. All of the rows under the 6/1/2020
column contain Errors. Under the table is another pane that shows the expression error with the "There are too
many elements in the enumeration to complete the operation" message.
Notice the error message "Expression.Error: There were too many elements in the enumeration to complete the
operation." This error occurs because the Don't aggregate operation only expects a single value for the country
and date combination.
Transpose a table
10/30/2020 • 2 minutes to read • Edit Online
The transpose table operation in Power Query rotates your table 90 degrees, turning your rows into columns and
your columns into rows.
Imagine a table like the one in the following image, with three rows and four columns.
Table with four columns named Column1 through Column4, with all columns set to the Text data type. Column1
contains Events in row 1, Participants in row 2, and Funds in row 3. Column2 contains Event 1 in row 1, 150 in row
2, and 4000 in row 3. Column3 contains Event 2 in row 1, 450 in row 2, and 10000 in row 3. Column4 contains
Event 3 in row 1, 1250 in row 2, and 15000 in row 3.
The goal of this example is to transpose that table so you end up with four rows and three columns.
Table with three columns named Events with a Text data type, Participants with a Whole number data type, and
Funds with a whole number data type. The Events column contains, from top to bottom, Event 1, Event 2, and Event
3. The Participants column contains, from top to bottom, 150, 450, and 1250. The Funds column contains, from top
to bottom, 4000, 10000, and 15000.
On the Transform tab in the ribbon, select Transpose .
The result of that operation will look like the following image.
Table with three columns named Column1, Column2, and Column3, with all columns set to the Any data type.
Column1 contains, from top to bottom, Events, Event 1, Event 2, and Event 3. Column2 contains, from top to
bottom, Participants, 150, 450, and 1250. Column3 contains, from top to bottom, Funds, 4000, 10000, and 15000.
NOTE
Only the contents of the table will be transposed during the transpose operation; the column headers of the initial table will
be lost. The new columns will have the name Column followed by a sequential number.
The headers you need in this example are in the first row of the table. To promote the first row to headers, select the
table icon in the upper-left corner of the data preview, and then select Use first row as headers .
The result of that operation will give you the output that you're looking for.
Final table with three columns named Events with a Text data type, Participants with a Whole number data type, and
Funds with a whole number data type. The Events column contains, from top to bottom, Event 1, Event 2, and Event
3. The Participants column contains, from top to bottom, 150, 450, and 1250. The Funds column contains, from top
to bottom, 4000, 10000, and 15000.
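The two steps together correspond roughly to this sketch, assuming the previous step is named Source:

let
    // Rotate the table: rows become columns and columns become rows
    Transposed = Table.Transpose(Source),
    // Use the first row of the transposed table as the new column headers
    Promoted = Table.PromoteHeaders(Transposed, [PromoteAllScalars = true])
in
    Promoted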
NOTE
To learn more about the promote headers operation, also known as Use first row as headers , go to Promote or demote
column headers.
Reverse rows
10/30/2020 • 2 minutes to read • Edit Online
With Power Query, it's possible to reverse the order of rows in a table.
Imagine a table with two columns, ID and Countr y , as shown in the following image.
Initial table with ID and Country columns. The ID rows contain, from top to bottom, values of 1 through 7. The
Country rows contain, from top to bottom, USA, Canada, Mexico, China, Spain, Panama, and Colombia.
On the Transform tab, select Reverse rows .
Output table with the rows reversed. The ID rows now contain, from top to bottom, values of 7 down to 1. The
Country rows contain, from top to bottom, Colombia, Panama, Spain, China, Mexico, Canada, and USA.
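Assuming the previous step is named Source, the generated step is typically:

// Reverse the order of the rows in the table
= Table.ReverseRows(Source)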
Data types in Power Query
10/30/2020 • 9 minutes to read • Edit Online
Data types in Power Query are used to classify values to have a more structured dataset. Data types are defined at
the field level—values inside a field are set to conform to the data type of the field.
The data type of a column is displayed on the left side of the column heading with an icon that symbolizes the data
type.
NOTE
Power Query provides a set of contextual transformations and options based on the data type of the column. For example,
when you select a column with a data type of Date, you get transformations and options that apply to that specific data
type. These transformations and options occur throughout the Power Query interface, such as on the Transform and Add
column tabs and the smart filter options.
The most common data types used in Power Query are listed in the following table. Although beyond the scope of
this article, you can find the complete list of data types in the Power Query M formula language Types article.
On the Transform tab, in the Any column group, on the Data type drop-down menu.
When you try setting the data type of the Date column to be Date , you get error values.
These errors occur because the locale being used is trying to interpret the date in the English (United States)
format, which is month/day/year. Because there's no month 22 in the calendar, it causes an error.
Instead of trying to just select the Date data type, you can right-click the column heading, select Change type , and
then select Using locale .
In the Change column type with locale dialog box, you select the data type that you want to set, but you also
select which locale to use, which in this case needs to be English (United Kingdom) .
Using this locale, Power Query will be able to interpret values correctly and convert those values to the right data
type.
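Assuming the previous step is named Source, changing the type with the English (United Kingdom) locale corresponds to a step roughly like this:

// Convert the Date column to the Date type, interpreting the text with the en-GB culture
= Table.TransformColumnTypes(Source, {{"Date", type date}}, "en-GB")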
By using these columns, you can verify that your date value has been converted correctly.
The data types available in Power Query include:
Decimal number
Currency
Whole number
Percentage
Date/Time
Date
Time
Date/Time/Timezone
Duration
Text
True/False
Dealing with errors in Power Query
Step-level error
A step-level error prevents the query from loading and displays the error components in a yellow pane.
Error reason : The first section before the colon. In the example above, the error reason is Expression.Error .
Error message : The section directly after the reason. In the example above, the error message is The column
'Column' of the table wasn't found .
Error detail : The section directly after the Details: string. In the example above, the error detail is Column .
Common step-level errors
In all cases, we recommend that you take a close look at the error reason, error message, and error detail to
understand what's causing the error. You can select the Go to error button, if available, to view the first step where
the error occurred.
Possible solutions : You can change the file path of the text file to a path that both users have access to. As user B,
you can change the file path to be a local copy of the same text file. If the Edit settings button is available in the
error pane, you can select it and change the file path.
The column of the table wasn't found
This error is commonly triggered when a step makes a direct reference to a column name that doesn't exist in the
query.
Example : You have a query from a text file where one of the column names was Column . In your query, you have
a step that renames that column to Date . But there was a change in the original text file, and it no longer has a
column heading with the name Column because it was manually changed to Date . Power Query is unable to find
a column heading named Column , so it can't rename any columns. It displays the error shown in the following
image.
Possible solutions : There are multiple solutions for this case, but they all depend on what you'd like to do. For
this example, because the correct Date column header already comes from your text file, you can just remove the
step that renames the column. This will allow your query to run without this error.
Other common step-level errors
When combining or merging data between multiple data sources, you might get a Formula.Firewall error such
as the one shown in the following image.
This error can be caused by a number of reasons, such as the data privacy levels between data sources or the way
that these data sources are being combined or merged. For more information about how to diagnose this issue, go
to Data privacy firewall.
Cell-level error
A cell-level error won't prevent the query from loading, but displays error values as Error in the cell. Selecting the
white space in the cell displays the error pane underneath the data preview.
NOTE
The data profiling tools can help you more easily identify cell-level errors with the column quality feature. More information:
Data profiling tools
Remove errors
To remove rows with errors in Power Query, first select the column that contains errors. On the Home tab, in the
Reduce rows group, select Remove rows . From the drop-down menu, select Remove errors .
The result of that operation will give you the table that you're looking for.
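As a sketch, and assuming the previous step is named Source and the selected column is named Column1 (an illustrative name), the step looks roughly like:

// Remove any row that has an error value in Column1
= Table.RemoveRowsWithErrors(Source, {"Column1"})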
Replace errors
If instead of removing rows with errors, you want to replace the errors with a fixed value, you can do so as well. To
replace rows that have errors, first select the column that contains errors. On the Transform tab, in the Any
column group, select Replace values . From the drop-down menu, select Replace errors .
In the Replace errors dialog box, enter the value 10 because you want to replace all errors with the value 10.
The result of that operation will give you the table that you're looking for.
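Assuming the previous step is named Source and the column with errors is named Column1 (illustrative), the replacement corresponds to:

// Replace error values in Column1 with the value 10
= Table.ReplaceErrorValues(Source, {{"Column1", 10}})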
Keep errors
Power Query can serve as a good auditing tool to identify any rows with errors even if you don't fix the errors. This
is where Keep errors can be helpful. To keep rows that have errors, first select the column that contains errors. On
the Home tab, in the Reduce rows group, select Keep rows . From the drop-down menu, select Keep errors .
The result of that operation will give you the table that you're looking for.
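Assuming the same placeholder names, keeping only the error rows corresponds to:

// Keep only rows that have an error value in Column1
= Table.SelectRowsWithErrors(Source, {"Column1"})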
Possible solutions : After identifying the row with the error, you can either modify the data source to reflect the
correct value rather than NA , or you can apply a Replace error operation to provide a value for any NA values
that cause an error.
Operation errors
When trying to apply an operation that isn't supported, such as multiplying a text value by a numeric value, an
error occurs.
Example : You want to create a custom column for your query by creating a text string that contains the phrase
"Total Sales: " concatenated with the value from the Sales column. An error occurs because the concatenation
operation only supports text columns and not numeric ones.
Possible solutions : Before creating this custom column, change the data type of the Sales column to be text.
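Alternatively, as a sketch, you can convert the value to text inside the custom column formula itself; the column and step names here are illustrative:

// Concatenation (&) only works on text, so convert Sales to text first
= Table.AddColumn(Source, "Sales label", each "Total Sales: " & Text.From([Sales]), type text)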
Working with duplicate values
10/30/2020 • 2 minutes to read • Edit Online
You can work with duplicate sets of values through transformations that can remove duplicates from your data or
filter your data to show duplicates only, so you can focus on them.
For this article, the examples use the following table with id, Category, and Total columns.
Remove duplicates
One of the operations that you can perform is to remove duplicate values from your table.
1. Select the columns that contain duplicate values.
2. Go to the Home tab.
3. In the Reduce rows group, select Remove rows .
4. From the drop-down menu, select Remove duplicates .
WARNING
There's no guarantee that the first instance in a set of duplicates will be chosen when duplicates are removed.
NOTE
This operation can also be performed with a subset of columns.
You want to remove those duplicates and only keep unique values. To remove duplicates from the Category
column, select it, and then select Remove duplicates .
The result of that operation will give you the table that you're looking for.
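Assuming the previous step is named Source, removing duplicates based on the Category column corresponds to:

// Keep only the first occurrence of each distinct Category value
= Table.Distinct(Source, {"Category"})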
Keep duplicates
Another operation you can perform with duplicates is to keep only the duplicates found in your table.
1. Select the columns that contain duplicate values.
2. Go to the Home tab.
3. In the Reduce rows group, select Keep rows .
4. From the drop-down menu, select Keep duplicates .
You have four rows that are duplicates. Your goal in this example is to keep only the rows that are duplicated in
your table. Select all the columns in your table, and then select Keep duplicates .
The result of that operation will give you the table that you're looking for.
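The editor generates its own logic for Keep duplicates, but as a rough, self-contained sketch of the idea (assuming the previous step is named Source), you can group by all columns and keep only the groups that occur more than once:

let
    // Group by every column and count how many times each combination appears
    Grouped = Table.Group(Source, Table.ColumnNames(Source), {{"Rows", each _, type table}, {"Count", Table.RowCount, Int64.Type}}),
    // Keep only combinations that appear more than once
    DuplicateGroups = Table.SelectRows(Grouped, each [Count] > 1),
    // Reassemble the duplicated rows into a single table
    Result = Table.Combine(DuplicateGroups[Rows])
in
    Result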
See also
Data profiling tools
Fill values in a column
10/30/2020 • 2 minutes to read • Edit Online
You can use fill up and fill down to replace null values with the last non-empty value in a column. For example,
imagine the following table where you'd like to fill down in the Date column and fill up in the Comments column.
Fill down
The fill down operation takes a column and traverses through the values in it to fill any null values in the next rows
until it finds a new value. This process continues on a row-by-row basis until there are no more values in that
column.
In the following example, you want to fill down on the Date column. To do that, you can right-click to select the
Date column, and then select Fill > Down .
The result of that operation will look like the following image.
Fill up
In the same way as the fill down operation, fill up works on a column. But by contrast, fill up finds the last value of
the column and fills any null values in the previous rows until it finds a new value. Then the same process occurs
for that value. This process continues until there are no more values in that column.
In the following example, you want to fill the Comments column from the bottom up. You'll notice that your
Comments column doesn't have null values. Instead it has what appears to be empty cells. Before you can do the
fill up operation, you need to transform those empty cells into null values: select the column, go to the Transform
tab, and then select Replace values .
In the Replace values dialog box, leave Value to find blank. For Replace with , enter null .
The result of that operation will look like the following image.
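Taken together, the fill operations in this article correspond roughly to the following sketch, assuming the previous step is named Source:

let
    // Fill the Date column downward, copying each value into the null rows below it
    FilledDown = Table.FillDown(Source, {"Date"}),
    // Turn empty text cells in Comments into nulls so that fill up can act on them
    BlanksToNull = Table.ReplaceValue(FilledDown, "", null, Replacer.ReplaceValue, {"Comments"}),
    // Fill the Comments column upward
    FilledUp = Table.FillUp(BlanksToNull, {"Comments"})
in
    FilledUp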
Cleaning up your table
1. Filter the Units column to show only rows that aren't equal to null .
2. Remove the Sales Person: values from the Sales Person column so you only get the names of the
salespeople.
Now you should have exactly the table you were looking for.
See also
Replace values
Sort columns
10/30/2020 • 2 minutes to read • Edit Online
You can sort a table in Power Query by one column or multiple columns. For example, take the following table with
the columns named Competition , Competitor , and Position .
Table with Competition, Competitor, and Position columns. The Competition column contains 1 - Opening in rows 1
and 6, 2 - Main in rows 3 and 5, and 3-Final in rows 2 and 4. The Position row contains a value of either 1 or 2 for
each of the Competition values.
For this example, the goal is to sort this table by the Competition and Position fields in ascending order.
Table with Competition, Competitor, and Position columns. The Competition column contains 1 - Opening in rows 1
and 2, 2 - Main in rows 3 and 4, and 3-Final in rows 5 and 6. The Position row contains, from top to bottom, a value
of 1, 2, 1, 2, 1, and 2.
From the column heading drop-down menu. Next to the name of the column there's a drop-down menu
indicator . When you select the icon, you'll see the option to sort the column.
In this example, first you need to sort the Competition column. You'll perform the operation by using the buttons
in the Sort group on the Home tab. This action creates a new step in the Applied steps section named Sorted
rows .
A visual indicator, displayed as an arrow pointing up, gets added to the Competition drop-down menu icon to
show that the column is being sorted in ascending order.
Now you'll sort the Position field in ascending order as well, but this time you'll use the Position column heading
drop-down menu.
Notice that this action doesn't create a new Sorted rows step, but modifies it to perform both sort operations in
one step. When you sort multiple columns, the order that the columns are sorted in is based on the order the
columns were selected in. A visual indicator, displayed as a number to the left of the drop-down menu indicator,
shows the place each column occupies in the sort order.
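Assuming the previous step is named Source, the combined Sorted rows step looks roughly like this:

// Sort by Competition first, then by Position, both in ascending order
= Table.Sort(Source, {{"Competition", Order.Ascending}, {"Position", Order.Ascending}})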
Replace values
With Power Query, you can replace one value with another value wherever that value is found in a column. The
Replace values command can be found:
On the cell shortcut menu. Right-click the cell to replace the selected value in the column with another value.
The value of -1 in the Sales Goal column is an error in the source and needs to be replaced with the standard
sales goal defined by the business for these instances, which is 250,000. To do that, right-click the -1 value, and
then select Replace values . This action will bring up the Replace values dialog box with Value to find set to -1 .
Now all you need to do is enter 250000 in the Replace with box.
The result of that operation will give you the table that you're looking for.
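Assuming the previous step is named Source, the replacement corresponds to a step roughly like:

// Replace every -1 in the Sales Goal column with 250000
= Table.ReplaceValue(Source, -1, 250000, Replacer.ReplaceValue, {"Sales Goal"})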
Parse text as JSON or XML
10/30/2020 • 2 minutes to read • Edit Online
In Power Query, you can parse the contents of a column with text strings by identifying the contents as either a
JSON or XML text string.
You can perform this parse operation by selecting the Parse button found inside the following places in the Power
Query Editor:
Transform tab —This button will transform the existing column by parsing its contents.
Add column tab —This button will add a new column to the table parsing the contents of the selected
column.
For this article, you'll be using the following sample table that contains the following columns that you need to
parse:
SalesPerson —Contains unparsed JSON text strings with information about the FirstName and LastName
of the sales person, as in the following example.
{
"id" : 249319,
"FirstName": "Lesa",
"LastName": "Byrd"
}
Country —Contains unparsed XML text strings with information about the Country and the Division that
the account has been assigned to, as in the following example.
<root>
<id>1</id>
<Country>USA</Country>
<Division>BI-3316</Division>
</root>
The sample table looks as follows.
The goal is to parse the above mentioned columns and expand the contents of those columns to get this output.
As JSON
Select the SalesPerson column. Then select JSON from the Parse dropdown menu inside the Transform tab.
These steps will transform the SalesPerson column from having text strings to having Record values, as shown in
the next image. You can select anywhere in the whitespace inside the cell of the Record value to get a detailed
preview of the record contents at the bottom of the screen.
Select the expand icon next to the SalesPerson column header. From the expand columns menu, select only the
FirstName and LastName fields, as shown in the following image.
The result of that operation will give you the following table.
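As a sketch, and assuming the previous step is named Source, the parse and expand steps correspond roughly to:

let
    // Parse each JSON text string in SalesPerson into a record
    Parsed = Table.TransformColumns(Source, {{"SalesPerson", Json.Document}}),
    // Expand only the FirstName and LastName fields from the record
    Expanded = Table.ExpandRecordColumn(Parsed, "SalesPerson", {"FirstName", "LastName"}, {"FirstName", "LastName"})
in
    Expanded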
As XML
Select the Country column. Then select the XML button from the Parse dropdown menu inside the Transform
tab. These steps will transform the Country column from having text strings to having Table values as shown in
the next image. You can select anywhere in the whitespace inside the cell of the Table value to get a detailed
preview of the contents of the table at the bottom of the screen.
Select the expand icon next to the Country column header. From the expand columns menu, select only the
Country and Division fields, as shown in the following image.
You can define all the new columns as text columns. The result of that operation will give you the output table that
you're looking for.
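The XML parse step is similar; as a sketch (assuming the previous step is named Source), the column is transformed with Xml.Tables and then expanded through the expand icon, with the exact nested shape depending on the XML:

// Parse each XML text string in Country into a nested table
= Table.TransformColumns(Source, {{"Country", Xml.Tables}})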
Add a column from examples
10/30/2020 • 3 minutes to read • Edit Online
When you add columns from examples, you can quickly and easily create new columns that meet your needs. This
is useful for the following situations:
You know the data you want in your new column, but you're not sure which transformation, or collection of
transformations, will get you there.
You already know which transformations you need, but you're not sure what to select in the UI to make them
happen.
You know all about the transformations you need by using a custom column expression in the M language, but
one or more of those transformations aren't available in the UI.
The Column from examples command is located on the Add column tab, in the General group.
The preview pane displays a new, editable column where you can enter your examples. For the first example, the
value from the selected column is 19500. So in your new column, enter the text 15000 to 20000 , which is the bin
where that value falls.
When Power Query finds a matching transformation, it fills the transformation results into the remaining rows
using light-colored text. You can also see the M formula text for the transformation above the table preview.
After you select OK , you'll see your new column as part of your query. You'll also see a new step added to your
query.
Your last step is to remove the First Name , Last Name , and Monthly Income columns. Your final table now
contains the Range and Full Name columns with all the data you produced in the previous steps.
Tips and considerations
When providing examples, Power Query offers a helpful list of available fields, values, and suggested
transformations for the selected columns. You can view this list by selecting any cell of the new column.
It's important to note that the Column from examples experience works only on the top 100 rows of your data
preview. You can apply steps before the Column from examples step to create your own data sample. After the
Column from examples column has been created, you can delete those prior steps; the newly created column
won't be affected.
NOTE
All Text transformations take into account the potential need to trim, clean, or apply a case transformation to the column
value.
Date transformations
Day
Day of Week
Day of Week Name
Day of Year
Month
Month Name
Quarter of Year
Week of Month
Week of Year
Year
Age
Start of Year
End of Year
Start of Month
End of Month
Start of Quarter
Days in Month
End of Quarter
Start of Week
End of Week
Day of Month
Start of Day
End of Day
Time transformations
Hour
Minute
Second
To Local Time
NOTE
All Date and Time transformations take into account the potential need to convert the column value to Date, Time, or
DateTime.
Number transformations
Absolute Value
Arccosine
Arcsine
Arctangent
Convert to Number
Cosine
Cube
Divide
Exponent
Factorial
Integer Divide
Is Even
Is Odd
Ln
Base-10 Logarithm
Modulo
Multiply
Round Down
Round Up
Sign
Sine
Square Root
Square
Subtract
Sum
Tangent
Bucketing/Ranges
Add an index column
10/30/2020 • 2 minutes to read • Edit Online
The Index column command adds a new column to the table with explicit position values, and is usually created to
support other transformation patterns.
By default, the starting index will start from the value 0 and have an increment of 1 per row.
You can also configure the behavior of this step by selecting the Custom option and configuring two parameters:
Starting index : Specifies the initial index value.
Increment : Specifies how much to increment each index value.
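For example, assuming the previous step is named Source, an index that starts at 1 and increases by 1 corresponds to:

// Add an Index column starting at 1 with an increment of 1
= Table.AddIndexColumn(Source, "Index", 1, 1)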
For the example in this article, you start with the following table that has only one column, but notice the data
pattern in the column.
Let's say that your goal is to transform that table into the one shown in the following image, with the columns
Date , Account , and Sale .
Step 2. Add a modulo column from the index column
Select the Index column, go to the Add column tab, and then select Standard > Modulo .
In the Modulo dialog box, enter the number from which to find the remainder for each value in the column. In this
case, your pattern repeats itself every three rows, so you'll enter 3 .
The result of that operation will give you a new column named Modulo .
Step 3. Add an integer-divide column from the index column
Select the Index column, go to the Add column tab, and then select Standard > Divide (Integer) .
In the Integer-divide dialog box, enter a number by which to divide each value in the column. In this case, your
pattern repeats itself every three rows, so enter the value 3 .
Remove the Index column, because you no longer need it. Your table now looks like the following image.
Step 4. Pivot a column
Your table now has three columns where:
Column1 contains the values that should be in the final table.
Modulo provides the column position of the value (similar to the y coordinates of an xy chart).
Integer-division provides the row position of the value (similar to the x coordinates of an xy chart).
To achieve the table you want, you need to pivot the Modulo column by using the values from Column1 where
these values don't get aggregated. On the Transform tab, select the Modulo column, and then select Pivot
column from the Any column group. In the Pivot column dialog box, select the Advanced option button. Make
sure Value column is set to Column1 and Aggregate values function is set to Don't aggregate .
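Putting the whole pattern together, a rough end-to-end sketch in M looks like the following, where Source is a placeholder for the previous step and the pivoted columns would still need to be renamed to Date, Account, and Sale afterward:

let
    // Step 1: add a zero-based index
    AddedIndex = Table.AddIndexColumn(Source, "Index", 0, 1),
    // Step 2: column position within each repeating block of three rows
    AddedModulo = Table.AddColumn(AddedIndex, "Modulo", each Number.Mod([Index], 3), type number),
    // Step 3: row position of each repeating block
    AddedDivision = Table.AddColumn(AddedModulo, "Integer-division", each Number.IntegerDivide([Index], 3), type number),
    RemovedIndex = Table.RemoveColumns(AddedDivision, {"Index"}),
    // Step 4: pivot Modulo (converted to text) using Column1 values, without aggregation
    Pivoted = Table.Pivot(Table.TransformColumnTypes(RemovedIndex, {{"Modulo", type text}}), {"0", "1", "2"}, "Modulo", "Column1")
in
    Pivoted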
Add a custom column
If you need more flexibility for adding new columns than the ones provided out of the box in Power Query, you can
create your own custom column by using the Power Query M formula language.
Imagine that you have a table with the following set of columns.
Using the Units , Unit Price , and Discount columns, you'd like to create two new columns:
Total Sale before Discount : Calculated by multiplying the Units column times the Unit Price column.
Total Sale after Discount : Calculated by multiplying the Total Sale before Discount column by the net
percentage value (one minus the discount value).
The goal is to create a table with new columns that looks like the following image.
The Custom column dialog box appears. This dialog box is where you define the formula to create your column.
The Custom column dialog box contains:
An Available columns list on the right.
The initial name of your custom column in the New column name box. You can rename this column.
Power Query M formula in the Custom column formula box.
To add a new custom column, select a column from the Available columns list on the right side of the dialog box.
Then select the Insert column button below the list to add it to the custom column formula. You can also add a
column by selecting it in the list. Alternatively, you can write your own formula by using the Power Query M
formula language in the Custom column formula box.
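For the two columns in this example, the custom column formulas correspond roughly to the following sketch, assuming the previous step is named Source:

let
    // Units multiplied by Unit Price
    BeforeDiscount = Table.AddColumn(Source, "Total Sale before Discount", each [Units] * [Unit Price], type number),
    // Apply the net percentage (one minus the discount)
    AfterDiscount = Table.AddColumn(BeforeDiscount, "Total Sale after Discount", each [Total Sale before Discount] * (1 - [Discount]), type number)
in
    AfterDiscount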
NOTE
If there's a syntax error when creating your custom column, you'll see a yellow warning icon, along with an error message
and reason.
To modify your custom column, select the Added custom step in the Applied steps list.
The Custom column dialog box appears with the custom column formula you created.
Next steps
You can create a custom column in other ways, such as creating a column based on examples you provide to
Power Query Editor. More information: Add a column from an example
For Power Query M reference information, go to Power Query M function reference.
Add a conditional column
10/30/2020 • 2 minutes to read • Edit Online
With Power Query, you can create new columns whose values will be based on one or more conditions applied to
other columns in your table.
The Conditional column command is located on the Add column tab, in the General group.
In this table, you have a field that gives you the CustomerGroup . You also have different prices applicable to that
customer in the Tier 1 Price , Tier 2 Price , and Tier 3 Price fields. In this example, your goal is to create a new
column with the name Final Price based on the value found in the CustomerGroup field. If the value in the
CustomerGroup field is equal to 1, you'll want to use the value from the Tier 1 Price field; otherwise, you'll use
the value from the Tier 3 Price field.
To add this conditional column, select Conditional column . In the Add conditional column dialog box, you can
define three sections numbered in the following image.
1. New column name : You can define the name of your new column. In this example, you'll use the name Final
Price .
2. Conditional clauses : Here you define your conditional clauses. You can add more clauses by selecting Add
clause . Each conditional clause will be tested on the order shown in the dialog box, from top to bottom. Each
clause has four parts:
Column name : In the drop-down list, select the column to use for the conditional test. For this example,
select CustomerGroup .
Operator : Select the type of test or operator for the conditional test. In this example, the value from the
CustomerGroup column has to be equal to 1, so select equals .
Value : You can enter a value or select a column to be used for the conditional test. For this example, enter
1.
Output : If the test is positive, the value entered here or the column selected will be the output. For this
example, if the CustomerGroup value is equal to 1, your Output value should be the value from the
Tier 1 Price column.
3. Final Else clause : If none of the clauses above yield a positive test, the output of this operation will be the one
defined here, as a manually entered value or a value from a column. In this case, the output will be the value
from the Tier 3 Price column.
The result of that operation will give you a new Final Price column.
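Assuming the previous step is named Source, the conditional column corresponds to a step roughly like:

// Use Tier 1 Price when CustomerGroup is 1; otherwise fall back to Tier 3 Price
= Table.AddColumn(Source, "Final Price", each if [CustomerGroup] = 1 then [Tier 1 Price] else [Tier 3 Price])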
NOTE
New conditional columns won't have a data type defined. You can add a new step to define a data type for this newly created
column by following the steps described in Data types in Power Query.
The result of that operation will give you the result that you're looking for.
Append queries
10/30/2020 • 2 minutes to read • Edit Online
The append operation creates a single table by adding the contents of one or more tables to another, and
aggregates the column headers from the tables to create the schema for the new table.
NOTE
When tables that don't have the same column headers are appended, all column headers from all tables are appended to the
resulting table. If one of the appended tables doesn't have a column header from other tables, the resulting table shows null
values in the respective column, as shown in the previous image in columns C and D.
You can find the Append queries command on the Home tab in the Combine group. On the drop-down menu,
you'll see two options:
Append queries displays the Append dialog box to add additional tables to the current query.
Append queries as new displays the Append dialog box to create a new query by appending multiple tables.
The append operation requires at least two tables. The Append dialog box has two modes:
Two tables : Combine two table queries together. This mode is the default mode.
Three or more tables : Allow an arbitrary number of table queries to be combined.
NOTE
The tables will be appended in the order in which they're selected, starting with the Primary table for the Two tables
mode and from the primary table in the Tables to append list for the Three or more tables mode.
To append these tables, first select the Online Sales table. On the Home tab, select Append queries , which
creates a new step in the Online Sales query. The Online Sales table will be the primary table. The table to
append to the primary table will be Store Sales .
Power Query performs the append operation based on the names of the column headers found on both tables, and
not based on their relative position in the headers sections of their respective tables. The final table will have all
columns from all tables appended.
In the event that one table doesn't have columns found in another table, null values will appear in the
corresponding column, as shown in the Referer column of the final query.
Append three or more tables
In this example, you want to append not only the Online Sales and Store Sales tables, but also a new table
named Wholesale Sales .
The new approach for this example is to select Append queries as new , and then in the Append dialog box,
select the Three or more tables option button. In the Available table(s) list, select each table you want to
append, and then select Add . After all the tables you want appear in the Tables to append list, select OK .
After selecting OK , a new query will be created with all your tables appended.
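In M, the append operation corresponds to Table.Combine. A minimal sketch, assuming the three queries are named as in this example:

    Appended = Table.Combine({#"Online Sales", #"Store Sales", #"Wholesale Sales"})
    // Columns are matched by name; any column missing from one of the tables is filled with null.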
Combine files overview
With Power Query, you can combine multiple files that have the same schema into a single logical table.
This feature is useful when you want to combine all the files you have in the same folder. For example, if you have
a folder that contains monthly files with all the purchase orders for your company, you can combine these files to
consolidate the orders into a single view.
Files can come from a variety of sources, such as (but not limited to):
Local folders
SharePoint sites
Azure Blob storage
Azure Data Lake Storage (Gen1 and Gen2)
When working with these sources, you'll notice that they share the same table schema, commonly referred to as
the file system view . The following screenshot shows an example of the file system view.
In the file system view, the Content column contains the binary representation of each file.
NOTE
You can filter the list of files in the file system view by using any of the available fields. It's good practice to filter this view to
show only the files you need to combine, for example by filtering fields such as Extension or Folder Path . More
information: Folder
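For example, the following is a hedged sketch of filtering the file system view before combining files, assuming a local folder source (the folder path is hypothetical):

    let
        Source = Folder.Files("C:\Data\Monthly Orders"),
        // Keep only CSV files so that every remaining binary shares the same schema.
        #"Filtered Rows" = Table.SelectRows(Source, each Text.Lower([Extension]) = ".csv")
    in
        #"Filtered Rows"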
Selecting any of the [Binary] values in the Content column automatically creates a series of navigation steps to
that specific file. Power Query will try to interpret the binary by using one of the available connectors, such as
Text/CSV, Excel, JSON, or XML.
Combining files takes place in the following stages:
Table preview
Combine files dialog box
Combined files output
Table preview
When you connect to a data source by using any of the previously mentioned connectors, a table preview opens. If
you're certain that you want to combine all the files in the folder, select Combine in the lower-right corner of the
screen.
Alternatively, you can select Transform data to access the Power Query Editor and create a subset of the list of
files (for example, by using filters on the folder path column to only include files from a specific subfolder). Then
combine files by selecting the column that contains the binaries in the Content column and then selecting either:
The Combine files command in the Combine group on the Home tab.
The Combine files icon in the column header of the column that contains [Binary] values.
NOTE
You can modify the steps inside the example query to change the function applied to each binary in your query. The example
query is linked to the function, so any changes made to the example query will be reflected in the function query.
If any of the changes affect column names or column data types, be sure to check the last step of your output query. Adding
a Change column type step can introduce a step-level error that prevents you from visualizing your table. More
information: Dealing with errors
See also
Combine CSV files
Combine CSV files
In Power Query, you can combine multiple files from a given data source. This article describes how the experience
works when the files that you want to combine are CSV files. More information: Combine files overview
TIP
You can follow along with this example by downloading the sample files used in this article from this download link. You can
place those files in the data source of your choice, such as a local folder, SharePoint folder, Azure Blob storage, Azure Data
Lake Storage, or other data source that provides the file system view.
For simplicity, the example in this article uses the Folder connector. More information: Folder
The number of rows varies from file to file, but all files have a header section in the first four rows. They have
column headers in the fifth row, and the data for the table begins in the sixth row and continues through all
subsequent rows.
The goal is to combine all 12 files into a single table. This combined table contains the header row at the top of the
table, and includes the source name, date, country, units, and revenue data for the entire year in separate columns
after the header row.
Table preview
When connecting to the folder that hosts the files that you want to combine—in this example, the name of that
folder is CSV Files —you're shown the table preview dialog box, which displays your folder path in the upper-left
corner. The data preview shows the file system view.
NOTE
In a different situation, you might select Transform data to further filter and transform your data before combining the
files. Selecting Combine is only recommended when you're certain that the folder contains only the files that you want to
combine.
NOTE
Power Query automatically detects what connector to use based on the first file found in the list. To learn more about the
CSV connector, see Text/CSV.
For this example, leave all the default settings (Example file set to First file , and the default values for File
origin , Delimiter , and Data type detection ).
Now select Transform data in the lower-right corner to go to the output query.
Output query
After selecting Transform data in the Combine files dialog box, you'll be taken back to the Power Query Editor
in the query that you initially created from the connection to the local folder. The output query now contains the
source file name in the left-most column, along with the data from each of the source files in the remaining
columns.
However, the data isn't in the correct shape. You need to remove the top four rows from each file before combining
them. To make this change in each file before you combine them, select the Transform Sample file query in the
Queries pane on the left side of your screen.
Modify the Transform Sample file query
In this Transform Sample file query, the values in the Date column indicate that the data is for the month of
April, which has the year-month-day (YYYY-MM-DD) format. April 2019.csv is the first file that's displayed in the
table preview.
You now need to apply a new set of transformations to clean the data. Each transformation will be automatically
converted to a function inside the Helper queries group that will be applied to every file in the folder before
combining the data from each file.
The transformations that need to be added to the Transform Sample file query are:
1. Remove top rows : To perform this operation, select the table icon menu in the upper-left corner of the
table, and then select Remove top rows .
In the Remove top rows dialog box, enter 4 , and then select OK .
After selecting OK , your table will no longer have the top four rows.
2. Use first row as headers : Select the table icon again, and then select Use first row as headers .
The result of that operation will promote the first row of the table to the new column headers.
After this operation is completed, Power Query by default will try to automatically detect the data types of the
columns and add a new Changed column type step.
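In the Transform Sample file query, these two transformations roughly correspond to the following M steps (a sketch; the step that holds the sample CSV is assumed to be named Source):

    #"Removed Top Rows" = Table.Skip(Source, 4),
    #"Promoted Headers" = Table.PromoteHeaders(#"Removed Top Rows", [PromoteAllScalars = true])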
Revising the output query
When you go back to the CSV Files query, you'll notice that the last step is giving you an error that reads "The
column 'Column1' of the table wasn't found." The reason behind this error is that the previous state of the query
was doing an operation against a column named Column1 . But because of the changes made to the Transform
Sample file query, this column no longer exists. More information: Dealing with errors in Power Query
You can remove this last step of the query from the Applied steps pane by selecting the X delete icon on the left
side of the name of the step. After deleting this step, your query will show the correct results.
However, notice that none of the columns derived from the files (Date, Country, Units, Revenue) have a specific
data type assigned to them. Assign the correct data type to each column by using the following table.
Column name    Data type
Date           Date
Country        Text
Revenue        Currency
After defining the data types for each column, you'll be ready to load the table.
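A sketch of the corresponding M step; the name of the preceding step is a placeholder, and the whole number type assigned to Units is an assumption, because that column isn't listed in the table above:

    #"Changed Type" = Table.TransformColumnTypes(
        PreviousStep,   // placeholder for the name of the preceding step in your query
        {
            {"Date", type date},
            {"Country", type text},
            {"Units", Int64.Type},     // assumed whole number type
            {"Revenue", Currency.Type}
        }
    )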
NOTE
To learn how to define or change column data types, see Data types.
Verification
To validate that all files have been combined, you can select the filter icon on the Source.Name column heading,
which will display all the names of the files that have been combined. If you get the warning "List may be
incomplete," select Load more at the bottom of the menu to display more available values in the column.
After you select Load more , all available file names will be displayed.
Merge queries overview
A merge queries operation joins two existing tables together based on matching values from one or multiple
columns. You can choose to use different types of joins, depending on the output you want.
Merging queries
You can find the Merge queries command on the Home tab, in the Combine group. From the drop-down
menu, you'll see two options:
Merge queries : Displays the Merge dialog box, with the selected query as the left table of the merge
operation.
Merge queries as new : Displays the Merge dialog box without any preselected tables for the merge
operation.
NOTE
Although this example shows the same column header for both tables, this isn't a requirement for the merge operation.
Column headers don't need to match between tables. However, it's important to note that the columns must be of the
same data type; otherwise, the merge operation might not yield correct results.
You can also select multiple columns to perform the join by selecting Ctrl as you select the columns. When you do
so, the order in which the columns were selected is displayed in small numbers next to the column headings,
starting with 1.
For this example, you have the Sales and Countries tables. Each of the tables has CountryID and StateID
columns, which you need to pair for the join between both columns.
First select the CountryID column in the Sales table, select Ctrl , and then select the StateID column. (This will
show the small numbers in the column headings.) Next, perform the same selections in the Countries table. The
following image shows the result of selecting those columns.
Merge dialog box with the Left table for merge set to Sales, with the CountryID and StateID columns selected,
and the Right table for merge set to Countries, with the CountryID and StateID columns selected. The Join kind is
set to Left outer.
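A hedged sketch of the M step that this multi-column merge might generate, assuming the two queries are named Sales and Countries and using the default Left outer join kind:

    #"Merged Queries" = Table.NestedJoin(
        Sales, {"CountryID", "StateID"},
        Countries, {"CountryID", "StateID"},
        "Countries",
        JoinKind.LeftOuter
    )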
Expand or aggregate the new merged table column
After selecting OK in the Merge dialog box, the base table of your query will have all the columns from your left
table. Also, a new column will be added with the same name as your right table. This column holds the values
corresponding to the right table on a row-by-row basis.
From here, you can choose to expand or aggregate the fields from this new table column, which will be the fields
from your right table.
Table showing the merged Countries column on the right, with all rows containing a Table. The expand icon on the
right of the Countries column header has been selected, and the expand menu is open. The expand menu has the
Select all, CountryID, StateID, Country, and State selections selected. The Use original column name as prefix is also
selected.
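Expanding the merged column corresponds to Table.ExpandTableColumn. A sketch that matches the selections described above, with the original column name used as a prefix:

    #"Expanded Countries" = Table.ExpandTableColumn(
        #"Merged Queries",
        "Countries",
        {"CountryID", "StateID", "Country", "State"},
        {"Countries.CountryID", "Countries.StateID", "Countries.Country", "Countries.State"}
    )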
NOTE
Currently, the Power Query Online experience only provides the expand operation in its interface. The option to aggregate
will be added later this year.
Join kinds
A join kind specifies how a merge operation will be performed. The following table describes the available join
kinds in Power Query.
Join kind      Description
Left outer     Keeps all rows from the left table, and brings in any matching rows from the right table.
Right outer    Keeps all rows from the right table, and brings in any matching rows from the left table.
Full outer     Brings in all rows from both the left and right tables.
Inner          Brings in only matching rows from both the left and right tables.
Left anti      Brings in only rows from the left table that don't have any matching rows from the right table.
Right anti     Brings in only rows from the right table that don't have any matching rows from the left table.
Fuzzy matching
You use fuzzy merge to apply fuzzy matching algorithms when comparing columns, to try to find matches across
the tables you're merging. You can enable this feature by selecting the Use fuzzy matching to perform the
merge check box in the Merge dialog box. Expand Fuzzy matching options to view all available configurations.
NOTE
Fuzzy matching is only supported for merge operations over text columns.
Left outer join
One of the join kinds available in the Merge dialog box in Power Query is a left outer join, which keeps all the rows
from the left table and brings in any matching rows from the right table. More information: Merge operations
overview
Figure shows a table on the left with Date, CountryID, and Units columns. The emphasized CountryID column
contains values of 1 in rows 1 and 2, 3 in row 3, and 4 in row 4. A table on the right contains ID and Country
columns. The emphasized ID column contains values of 1 in row 1 (denoting USA), 2 in row 2 (denoting Canada),
and 3 in row 3 (denoting Panama). A table below the first two tables contains Date, CountryID, Units, and Country
columns. The table has four rows, with the top two rows containing the data for CountryID 1, one row for
CountryID 3, and one row for CountryID 4. Since the right table didn't contain an ID of 4, the value of the fourth
row in the Country column contains null.
This article uses sample data to show how to do a merge operation with the left outer join. The sample source
tables for this example are:
Sales : This table includes the fields Date , CountryID , and Units . CountryID is a whole number value that
represents the unique identifier from the Countries table.
Countries : This table is a reference table with the fields id and Country . The id field represents the unique
identifier for each record.
Countries table with id set to 1 in row 1, 2 in row 2, and 3 in row 3, and Country set to USA in row 1,
Canada in row 2, and Panama in row 3.
In this example, you'll merge both tables, with the Sales table as the left table and the Countries table as the right
one. The join will be made between the following columns.
CountryID id
The goal is to create a table like the following, where the name of the country appears as a new Country column
in the Sales table as long as the CountryID exists in the Countries table. If there are no matches between the left
and right tables, a null value is the result of the merge for that row. In the following image, this is shown to be the
case for CountryID 4, which was brought in from the Sales table.
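A minimal M sketch of this left outer join and the expansion of the Country field, assuming the queries are named Sales and Countries. The join kinds covered in the following articles differ only in the JoinKind value (JoinKind.RightOuter, JoinKind.FullOuter, JoinKind.Inner, JoinKind.LeftAnti, or JoinKind.RightAnti):

    #"Merged Queries" = Table.NestedJoin(Sales, {"CountryID"}, Countries, {"id"}, "Countries", JoinKind.LeftOuter),
    // Expand only the Country field, without the original column name as a prefix.
    #"Expanded Countries" = Table.ExpandTableColumn(#"Merged Queries", "Countries", {"Country"})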
Right outer join
One of the join kinds available in the Merge dialog box in Power Query is a right outer join, which keeps all the
rows from the right table and brings in any matching rows from the left table. More information: Merge operations
overview
Figure shows a table on the left with Date, CountryID, and Units columns. The emphasized CountryID column
contains values of 1 in rows 1 and 2, 3 in row 3, and 4 in row 4. A table on the right contains ID and Country
columns, with only one row. The emphasized ID column contains a value of 3 in row 1 (denoting Panama). A table
below the first two tables contains Date, CountryID, Units, and Country columns. The table has one row, with the
CountryID of 3 and the Country of Panama.
This article uses sample data to show how to do a merge operation with the right outer join. The sample source
tables for this example are:
Sales : This table includes the fields Date , CountryID , and Units . The CountryID is a whole number value
that represents the unique identifier from the Countries table.
Countries : This table is a reference table with the fields id and Country . The id field represents the unique
identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the Countries table as the right
one. The join will be made between the following columns.
CountryID id
The goal is to create a table like the following, where the name of the country appears as a new Country column
in the Sales table. Because of how the right outer join works, all rows from the right table will be brought in, but
only matching rows from the left table will be kept.
After performing this operation, you'll create a table that looks like the following image.
Full outer join
One of the join kinds available in the Merge dialog box in Power Query is a full outer join, which brings in all the
rows from both the left and right tables. More information: Merge operations overview
Figure shows a table on the left with Date, CountryID, and Units columns. The emphasized CountryID column
contains values of 1 in rows 1 and 2, 3 in row 3, and 2 in row 4. A table on the right contains ID and Country
columns. The emphasized ID column contains values of 1 in row 1 (denoting USA), 2 in row 2 (denoting Canada), 3
in row 3 (denoting Panama), and 4 (denoting Spain) in row 4. A table below the first two tables contains Date,
CountryID, Units, and Country columns. All rows have been rearranged in numerical order according to the
CountryID value. The country associated with the CountryID number is shown in the Country column. Because the
country ID for Spain wasn't contained in the left table, a new row is added, and the date, country ID, and units
values for this row are set to null.
This article uses sample data to show how to do a merge operation with the full outer join. The sample source
tables for this example are:
Sales : This table includes the fields Date , CountryID , and Units . CountryID is a whole number value that
represents the unique identifier from the Countries table.
Countries : This is a reference table with the fields id and Country . The id field represents the unique
identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the Countries table as the right
one. The join will be made between the following columns.
CountryID id
The goal is to create a table like the following, where the name of the country appears as a new Country column
in the Sales table. Because of how the full outer join works, all rows from both the left and right tables will be
brought in, regardless of whether they only appear in one of the tables.
Full outer join final table with Date, a CountryID, and Units derived from the Sales table, and a Country column
derived from the Countries table. A fifth row was added to contain data from Spain, but that row contains null in
the Date, CountryID, and Units columns since those values did not exist for Spain in the Sales table.
To perform a full outer join
1. Select the Sales query, and then select Merge queries .
2. In the Merge dialog box, under Right table for merge , select Countries .
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Full outer .
6. Select OK .
TIP
Take a closer look at the message at the bottom of the dialog box that reads "The selection matches 4 of 4 rows from the
first table, and 3 of 4 rows from the second table." This message is crucial for understanding the result that you get from this
operation.
In the Countries table, you have the Country Spain with an id of 4, but there are no records for CountryID 4 in the
Sales table. That's why only three of four rows from the right table found a match. All rows from the right table
that didn't have matching rows from the left table will be grouped and shown in a new row in the output table with
no values for the fields from the left table.
From the newly created Countries column after the merge operation, expand the Country field. Don't select the
Use original column name as prefix check box.
After performing this operation, you'll create a table that looks like the following image.
Full outer join final table containing Date, a CountryID, and Units derived from the Sales table, and a Country
column derived from the Countries table. A fifth row was added to contain data from Spain, but that row contains
null in the Date, CountryID, and Units columns since those values didn't exist for Spain in the Sales table.
Inner join
One of the join kinds available in the Merge dialog box in Power Query is an inner join, which brings in only
matching rows from both the left and right tables. More information: Merge operations overview
Figure shows a table on the left with Date, CountryID, and Units columns. The emphasized CountryID column
contains values of 1 in rows 1 and 2, 3 in row 3, and 2 in row 4. A table on the right contains ID and Country
columns. The emphasized ID column contains values of 3 in row 1 (denoting Panama) and 4 in row 2 (denoting
Spain). A table below the first two tables contains Date, CountryID, Units, and Country columns, but only one row
of data for Panama.
This article uses sample data to show how to do a merge operation with the inner join. The sample source tables
for this example are:
Sales : This table includes the fields Date , CountryID , and Units . CountryID is a whole number value that
represents the unique identifier from the Countries table.
Countries : This is a reference table with the fields id and Country . The id field represents the unique
identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the Countries table as the right
one. The join will be made between the following columns.
CountryID id
The goal is to create a table like the following, where the name of the country appears as a new Country column
in the Sales table. Because of how the inner join works, only matching rows from both the left and right tables will
be brought in.
In the Sales table, you have CountryID values of 1 and 2, but neither of these values is found in the Countries table.
That's why the match found only one of four rows in the left (first) table.
In the Countries table, you have the Country Spain with the id 4, but there are no records for a CountryID of 4
in the Sales table. That's why only one of two rows from the right (second) table found a match.
From the newly created Countries column, expand the Country field. Don't select the Use original column
name as prefix check box.
After performing this operation, you'll create a table that looks like the following image.
Left anti join
One of the join kinds available in the Merge dialog box in Power Query is a left anti join, which brings in only rows
from the left table that don't have any matching rows from the right table. More information: Merge operations
overview
Figure shows a table on the left with Date, CountryID, and Units columns. The emphasized CountryID column
contains values of 1 in rows 1 and 2, 3 in row 3, and 2 in row 4. A table on the right contains ID and Country
columns. The emphasized ID column contains values of 3 in row 1 (denoting Panama) and 4 in row 2 (denoting
Spain). A table below the first two tables contains Date, CountryID, Units, and Country columns. The table has three
rows, with two rows containing the data for CountryID 1, and one row for CountryID 2. Since none of the
remaining CountryIDs match any of the countries in the right table, the rows in the Country column in the merged
table all contain null.
This article uses sample data to show how to do a merge operation with the left anti join. The sample source tables
for this example are:
Sales : This table includes the fields Date , CountryID , and Units . CountryID is a whole number value that
represents the unique identifier from the Countries table.
Countries : This table is a reference table with the fields id and Country . The id field represents the unique
identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the Countries table as the right
one. The join will be made between the following columns.
CountryID id
The goal is to create a table like the following, where only the rows from the left table that don't match any from
the right table are kept.
Left anti join final table with Date, CountryID, Units, and Country column headers, and three rows of data of which
the values for the Country column are all null.
To do a left anti join
1. Select the Sales query, and then select Merge queries .
2. In the Merge dialog box, under Right table for merge , select Countries .
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Left anti .
6. Select OK .
TIP
Take a closer look at the message at the bottom of the dialog box that reads "The selection excludes 1 of 4 rows from the
first table." This message is crucial to understanding the result that you get from this operation.
In the Sales table, you have CountryID values of 1 and 2, but neither of them is found in the Countries table. That's
why the match found only one of four rows in the left (first) table.
In the Countries table, you have the Country Spain with an id of 4, but there are no records for CountryID 4 in
the Sales table. That's why only one of two rows from the right (second) table found a match.
From the newly created Countries column, expand the Country field. Don't select the Use original column
name as prefix check box.
After doing this operation, you'll create a table that looks like the following image. The newly expanded Country
field doesn't have any values. That's because the left anti join doesn't bring any values from the right table—it only
keeps rows from the left table.
Final table with Date, CountryID, Units, and Country column headers, and three rows of data of which the values for
the Country column are all null.
Right anti join
One of the join kinds available in the Merge dialog box in Power Query is a right anti join, which brings in only
rows from the right table that don't have any matching rows from the left table. More information: Merge
operations overview
Figure shows a table on the left with Date, CountryID, and Units columns. The emphasized CountryID column
contains values of 1 in rows 1 and 2, 3 in row 3, and 2 in row 4. A table on the right contains ID and Country
columns. The emphasized ID column contains values of 3 in row 1 (denoting Panama) and 4 in row 2 (denoting
Spain). A table below the first two tables contains Date, CountryID, Units, and Country columns. The table has one
row, with the Date, CountryID and Units set to null, and the Country set to Spain.
This article uses sample data to show how to do a merge operation with the right anti join. The sample source
tables for this example are:
Sales : This table includes the fields Date , CountryID , and Units . CountryID is a whole number value that
represents the unique identifier from the Countries table.
Countries : This is a reference table with the fields id and Country . The id field represents the unique
identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the Countries table as the right
one. The join will be made between the following columns.
Field from the Sales table    Field from the Countries table
CountryID                     id
The goal is to create a table like the following, where only the rows from the right table that don't match any from
the left table are kept. As a common use case, you can find all the rows that are available in the right table but
aren't found in the left table.
Right anti join final table with the Date, CountryID, Units, and Country header columns, containing one row with
null in all columns except Country, which contains Spain.
To do a right anti join
1. Select the Sales query, and then select Merge queries .
2. In the Merge dialog box, under Right table for merge , select Countries .
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Right anti .
6. Select OK .
TIP
Take a closer look at the message at the bottom of the dialog box that reads "The selection excludes 1 of 2 rows from the
second table." This message is crucial to understanding the result that you get from this operation.
In the Countries table, you have the Country Spain with an id of 4, but there are no records for CountryID 4 in
the Sales table. That's why only one of two rows from the right (second) table found a match. Because of how the
right anti join works, you'll never see any rows from the left (first) table in the output of this operation.
From the newly created Countries column, expand the Country field. Don't select the Use original column
name as prefix check box.
After performing this operation, you'll create a table that looks like the following image. The newly expanded
Country field doesn't have any values. That's because the right anti join doesn't bring any values from the left
table—it only keeps rows from the right table.
Final table with the Date, CountryID, Units, and Country header columns, containing one row with null in all
columns except Country, which contains Spain.
Fuzzy merge
Fuzzy merge is a smart data preparation feature you can use to apply fuzzy matching algorithms when comparing
columns, to try to find matches across the tables that are being merged.
You can enable fuzzy matching at the bottom of the Merge dialog box by selecting the Use fuzzy matching to
perform the merge option button. More information: Merge operations overview
NOTE
Fuzzy matching is only supported on merge operations over text columns. Power Query uses the Jaccard similarity algorithm
to measure the similarity between pairs of instances.
Sample scenario
A common use case for fuzzy matching is with freeform text fields, such as in a survey. For this article, the sample
table was taken directly from an online survey sent to a group with only one question: What is your favorite fruit?
The results of that survey are shown in the following image.
Sample survey output table containing the column distribution graph showing nine distinct answers with all
answers unique, and the answers to the survey with all the typos, plural or singular, and case problems.
The nine records reflect the survey submissions. The problem with the survey submissions is that some have typos,
some are plural, some are singular, some are uppercase, and some are lowercase.
To help standardize these values, in this example you have a Fruits reference table.
Fruits reference table containing column distribution graph showing four distinct fruits with all fruits unique, and
the list of fruits: apple, pineapple, watermelon, and banana.
NOTE
For simplicity, this Fruits reference table only includes the name of the fruits that will be needed for this scenario. Your
reference table can have as many rows as you need.
The goal is to create a table like the following, where you've standardized all these values so you can do more
analysis.
Sample survey output table with the Question column containing the column distribution graph showing nine
distinct answers with all answers unique, and the answers to the survey with all the typos, plural or singular, and
case problems, and also contains the Fruit column containing the column distribution graph showing four distinct
answers with one unique answer and lists all of the fruits properly spelled, singular, and proper case.
Fuzzy merge
To do the fuzzy merge, you start by doing a merge. In this case, you'll use a left outer join, where the left table is the
one from the survey and the right table is the Fruits reference table. At the bottom of the dialog box, select the Use
fuzzy matching to perform the merge check box.
After you select OK , you can see a new column in your table because of this merge operation. If you expand it,
you'll notice that there's one row that doesn't have any values in it. That's exactly what the dialog box message in
the previous image stated when it said "The selection matches 8 of 9 rows from the first table."
Fruit column added to the Survey table, with all rows in the Question column expanded, except for row 9, which
could not expand and the Fruit column contains null.
Transformation table
For the example in this article, you can use a transformation table to map the value that has a missing pair. That
value is apls , which needs to be mapped to Apple . Your transformation table has two columns:
From contains the values to find.
To contains the values that will be used to replace the values found by using the From column.
For this article, the transformation table will look as follows:
From     To
apls     Apple
You can go back to the Merge dialog box, and in Fuzzy matching options under Number of matches
(optional) , enter 1 . Under Transformation table (optional) , select Transform Table from the drop-down
menu.
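A hedged sketch of what this fuzzy merge might look like in M. The query and column names (a Survey query with a Question column, and a Fruit column in the Fruits query), as well as the option fields shown, are assumptions based on this example:

    #"Transform Table" = #table(
        type table [From = text, To = text],
        {{"apls", "Apple"}}
    ),
    #"Fuzzy Merged" = Table.FuzzyNestedJoin(
        Survey, {"Question"},
        Fruits, {"Fruit"},
        "Fruits",
        JoinKind.LeftOuter,
        [NumberOfMatches = 1, TransformationTable = #"Transform Table"]
    )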
After you select OK , you'll create a table that looks like the following image, with all values mapped correctly. Note
how the example started with nine distinct values, but after the fuzzy merge, there are only four distinct values.
Fuzzy merge survey output table with the Question column containing the column distribution graph showing nine
distinct answers with all answers unique, and the answers to the survey with all the typos, plural or singular, and
case problems. Also contains the Fruit column with the column distribution graph showing four distinct answers
with one unique answer and lists all of the fruits properly spelled, singular, and proper case.
Cross join
A cross join is a type of join that returns the Cartesian product of rows from the tables in the join. In other words, it
combines each row from the first table with each row from the second table.
This article demonstrates, with a practical example, how to do a cross join in Power Query. The example uses two
sample tables:
Product : A table that contains the generic product names in your inventory.
Colors : A table with all the product variations, as colors, that you can have in your inventory.
The goal is to perform a cross-join operation with these two tables to create a list of all unique products that you
can have in your inventory, as shown in the following table. This operation is necessary because the Product table
only contains the generic product name, and doesn't give the level of detail you need to see what product variations
(such as color) there are.
In the Custom column dialog box, enter whatever name you like in the New column name box, and enter
Colors in the Custom column formula box.
IMPORTANT
If your query name has spaces in it, such as Product Colors , the text that you need to enter in the Custom column
formula section has to follow the syntax #"Query name" . For Product Colors , you need to enter #"Product Colors" .
You can check the name of your queries in the Query settings pane on the right side of your screen or in the Queries
pane on the left side.
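A hedged sketch of the two steps in M, assuming the queries are named Product and Colors, and that the Colors query has a column named Color:

    #"Added Custom" = Table.AddColumn(Product, "Colors", each Colors),
    // Expand the nested table to produce one row per product-and-color combination.
    #"Expanded Colors" = Table.ExpandTableColumn(#"Added Custom", "Colors", {"Color"})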
After you select OK in the Custom column dialog box, a new column is added to the table. In the new column
heading, select Expand to expand the contents of this newly created column, and then select OK .
After you select OK , you'll reach your goal of creating a table with all possible combinations of Product and
Colors .
Split columns by delimiter
In Power Query, you can split a column through different methods. In this case, the column(s) selected can be split
by a delimiter.
You can find the option to split a column by a delimiter on the Transform tab, under the Split column drop-down
menu, in the Text column group.
The result of that operation will give you a table with the two columns that you're expecting.
NOTE
Power Query will split the column into as many columns as needed. The name of the new columns will contain the same
name as the original column. A suffix that includes a dot and a number that represents the split sections of the original
column will be appended to the name of the new columns.
The Accounts column has values in pairs separated by a comma. These pairs are separated by a semicolon. The
goal of this example is to split this column into new rows by using the semicolon as the delimiter.
To do that split, select the Accounts column. Select the option to split the column by a delimiter. In Split Column
by Delimiter , apply the following configuration:
Select or enter delimiter : Semicolon
Split at : Each occurrence of the delimiter
Split into : Rows
The result of that operation will give you a table with the same number of columns, but many more rows because
the values inside the cells are now in their own cells.
Final Split
Your table still requires one last split column operation. You need to split the Accounts column by the first comma
that it finds. This split will create a column for the account name and another one for the account number.
To do that split, select the Accounts column and then select Split Column > By Delimiter . Inside the Split
column window, apply the following configuration:
Select or enter delimiter : Comma
Split at : Each occurrence of the delimiter
The result of that operation will give you a table with the three columns that you're expecting. You then rename the
columns as follows:
Previous name    New name
Accounts.1       Account Name
Accounts.2       Account Number
Your final table looks like the one in the following image.
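Put together, the split operations in this article might look like the following M sketch, assuming the step that precedes them is named Source and contains the Accounts column. Naming the split columns directly in Table.SplitColumn takes the place of the separate rename step:

    #"Split into Rows" = Table.ExpandListColumn(
        Table.TransformColumns(
            Source,
            {{"Accounts", Splitter.SplitTextByDelimiter(";", QuoteStyle.Csv)}}
        ),
        "Accounts"
    ),
    #"Split by Comma" = Table.SplitColumn(
        #"Split into Rows",
        "Accounts",
        Splitter.SplitTextByDelimiter(",", QuoteStyle.Csv),
        {"Account Name", "Account Number"}
    )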
Split columns by number of characters
In Power Query, you can split a column through different methods. In this case, the column(s) selected can be split
by the number of characters.
You can find the option to split a column by the number of characters on the Transform tab, under the Split column
drop-down menu, in the Text column group.
NOTE
Power Query will split the column into only two columns. The name of the new columns will contain the same name as the
original column. A suffix containing a dot and a number that represents the split section of the column will be appended to
the names of the new columns.
Now continue to do the same operation over the new Column1.2 column, but with the following configuration:
Number of characters : 8
Split : Once, as far left as possible
The result of that operation will yield a table with three columns. Notice the new names of the two columns on the
far right. Column1.2.1 and Column1.2.2 were automatically created by the split column operation.
You can now change the name of the columns and also define the data types of each column as follows:
Original column name    New column name    Data type
Your final table will look like the one in the following image.
The Account column can hold multiple values in the same cell. Each value has the same length in characters, with a
total of six characters. In this example, you want to split these values so you can have each account value in its own
row.
To do that, select the Account column and then select the option to split the column by the number of characters.
In Split column by Number of Characters , apply the following configuration:
Number of characters : 6
Split : Repeatedly
Split into : Rows
The result of that operation will give you a table with the same number of columns, but many more rows because
the fragments inside the original cell values in the Account column are now split into multiple rows.
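A sketch of this split-into-rows operation in M, assuming the preceding step is named Source:

    #"Split into Rows" = Table.ExpandListColumn(
        Table.TransformColumns(
            Source,
            // Break each six-character fragment of the Account value into its own list item.
            {{"Account", Splitter.SplitTextByRepeatedLengths(6)}}
        ),
        "Account"
    )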
Split columns by positions
In Power Query, you can split a column through different methods. In this case, the column(s) selected can be split
by positions.
You can find the option to split a column by positions on the Transform tab, under the Split column drop-down menu,
in the Text column group.
The result of that operation will give you a table with three columns.
NOTE
Power Query will split the column into only two columns. The name of the new columns will contain the same name as the
original column. A suffix created by a dot and a number that represents the split section of the column will be appended to
the name of the new columns.
You can now change the name of the columns, and also define the data types of each column as follows:
Original column name    New column name    Data type
Your final table will look like the one in the following image.
NOTE
This operation will first create a column from position 0 to position 6. There will be another column if there are
values with a length of 8 or more characters in the current data preview contents.
The result of that operation will give you a table with the same number of columns, but many more rows because
the values inside the cells are now in their own cells.
Split columns by lowercase to uppercase
In Power Query, you can split a column through different methods. If your data contains CamelCased text or a
similar pattern, the column(s) selected can be split by every instance of the last lowercase letter to the next
uppercase letter.
You can find this option on the Transform tab, under the Split column drop-down menu, in the Text column group.
Split columns by uppercase to lowercase
In Power Query, you can split a column through different methods. In this case, the column(s) selected can be split
by every instance of the last uppercase letter to the next lowercase letter.
You can find this option on the Transform tab, under the Split column drop-down menu, in the Text column group.
Split columns by digit to non-digit
In Power Query, you can split a column through different methods. In this case, the column(s) selected can be split
by every instance of a digit followed by a non-digit.
You can find this option on the Transform tab, under the Split column drop-down menu, in the Text column group.
Split columns by non-digit to digit
In Power Query, you can split a column through different methods. In this case, the column(s) selected can be split
by every instance of a non-digit followed by a digit.
You can find this option on the Transform tab, under the Split column drop-down menu, in the Text column group.
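Each of these options maps to Splitter.SplitTextByCharacterTransition in M. A hedged sketch for the lowercase-to-uppercase case, with the other transitions shown as comments (the column and step names are assumptions):

    #"Split by Transition" = Table.SplitColumn(
        Source,
        "Column1",
        Splitter.SplitTextByCharacterTransition({"a".."z"}, {"A".."Z"}),
        {"Column1.1", "Column1.2"}
    )
    // Uppercase to lowercase: Splitter.SplitTextByCharacterTransition({"A".."Z"}, {"a".."z"})
    // Digit to non-digit: Splitter.SplitTextByCharacterTransition({"0".."9"}, (c) => not List.Contains({"0".."9"}, c))
    // Non-digit to digit: Splitter.SplitTextByCharacterTransition((c) => not List.Contains({"0".."9"}, c), {"0".."9"})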
Dataflows are a self-service, cloud-based, data preparation technology. Dataflows enable customers to ingest,
transform, and load data into Common Data Service environments, Power BI workspaces, or your organization’s
Azure Data Lake Storage Gen2 account. Dataflows are authored by using the Power Query experience, a unified Data
Connectivity and Preparation experience already featured in many Microsoft products, including Excel and Power
BI. Customers can trigger dataflows to run either on demand or automatically on a schedule; data is always kept up
to date.
The diagram above shows an overall view of how a dataflow is defined. A dataflow gets data from different data
sources (more than 80 data sources are already supported). Then, based on the transformations configured with the
Power Query authoring experience, the dataflow transforms the data by using the dataflow engine. Finally, the data is
loaded to the output destination, which can be a Power Platform environment, a Power BI workspace, or the
organization’s Azure Data Lake Storage Gen2 account.
Dataflows run in the cloud
Dataflows are cloud-based. When a dataflow is authored and saved, its definition is stored in the cloud. A dataflow
also runs in the cloud. However, if a data source is on-premises, an on-premises data gateway can be used to
extract the data to the cloud. When a dataflow run is triggered, the data transformation and computation happens
in the cloud, and the destination is always in the cloud.
Benefits of dataflows
The scenarios you have read above are good examples of how a dataflow can be beneficial in real-world use-cases.
The following list highlights some of the benefits of using dataflows:
A dataflow decouples the data transformation layer from the modeling and visualization layer in a Power BI
solution.
The data transformation code can reside in a central location, a dataflow, rather than spread about in
multiple artifacts.
A dataflow creator only needs Power Query skills. In a multi-creator environment, the dataflow creator can
be part of a team that together builds the entire BI solution or an operational application.
A dataflow is product-agnostic. It's not a component of Power BI only. You can get its data in other tools and
services.
Dataflows leverage Power Query, a powerful, graphical, self-service data transformation experience.
Dataflows run entirely in the cloud. No additional infrastructure is required.
There are multiple options to start working with dataflows, using licenses for Power Apps, Power BI, and
Dynamics 365 Customer Insights.
Although dataflows are capable of advanced transformations, they're designed for self-service scenarios and
require no IT or developer background.
Centralize data preparation and reuse of datasets across multiple Power BI solutions
If multiple Power BI solutions are using the same transformed version of a table, the process to create the table will
be repeated multiple times. This increases the load on the source system, consumes more resources, and creates
duplicate data with multiple points of failure. Instead, a single dataflow can be created to compute the data for all
solutions. Power BI can then reuse the result of the transformation in all solutions. The dataflow, if used in such a
way, can be part of a robust Power BI implementation architecture that avoids the Power Query code duplicates
and reduces the maintenance costs of the data integration layer.
Next steps
The following articles provide further study materials for dataflows.
Create and use dataflows in the Power Platform
Creating and using dataflows in Power BI
Understanding the differences between standard and
analytical dataflows
You can categorize dataflows in many ways. One of those ways is the difference between standard and analytical
dataflows. Understanding this concept helps you create the dataflow for the right requirement. Dataflows create
entities, and entities are of two types: standard and analytical. Based on the type of entity produced by the
dataflow, we call the dataflow either a standard dataflow or an analytical dataflow.
Standard dataflow
A dataflow, by the standard definition, is used to extract, transform, and load data to a destination. A standard
dataflow's destination must be Common Data Service, and the entities it produces are database entities.
Standard dataflows can be created through the Power Apps portal.
One benefit of this type of dataflow is that any application that can connect to Common Data Service can work
with the data, such as Power BI, Power Apps, Power Automate, Power Virtual Agent, Dynamics 365, and other
applications.
Analytical dataflow
An analytical dataflow stores its entities in storage optimized for analytics—Azure Data Lake Storage Gen2. Power
Platform environments and Power BI workspaces provide customers with a managed analytical storage location
that's bundled with those product licenses. In addition, customers can link their organization's Azure Data Lake
Storage Gen2 account as a destination for dataflows.
Analytical dataflows are capable of additional analytical features, such as integration with Power BI's AI features
or the use of computed entities, which are discussed later.
You can create analytical dataflows in Power BI. By default, they'll load data to Power BI's managed storage. But you
can also configure Power BI to store the data in the organization's Azure Data Lake Storage Gen2 account.
You can also create analytical dataflows in the Power Apps and Dynamics 365 Customer Insights portals. When you're
creating a dataflow in the Power Apps portal, you can choose between analytical storage managed by Common Data
Service and your organization's Azure Data Lake Storage Gen2 account.
AI Integration
Sometimes, depending on the requirement, you might need to apply some AI and machine learning functions on
the data through the dataflow. These functionalities are available in Power BI dataflows and require a Premium
workspace.
The following articles discuss how to use AI functions in a dataflow:
Azure Machine Learning integration in Power BI
Cognitive Services in Power BI
Automated Machine Learning in Power BI
Note that the features listed above are Power BI specific and aren't available when you create a dataflow in the
Power Apps or Dynamics 365 Customer Insights portals.
Computed entities
One of the reasons to use a computed entity is the ability to process large amounts of data. Computed entities help
in those scenarios. If you have an entity in a dataflow, and another entity in the same dataflow uses the first
entity's output, the second entity becomes a computed entity.
The computed entity helps with the performance of the data transformations. Instead of re-doing the
transformations needed in the first entity multiple times, the transformation will be done only once in the
computed entity. Then the result will be used multiple times in other entities.
To learn more about computed entities, see Using computed entities on Power BI Premium.
Computed entities are available only in an analytical dataflow.
The following summarizes the differences between standard and analytical dataflows.
Operation: Storage options
Standard: Common Data Service
Analytical: Azure Data Lake Storage Gen2, either the storage internal to the Power BI dataflows or an external
Azure Data Lake Storage Gen2 account attached to the Power BI or Power Platform dataflows
Operation: AI functions
Standard: No
Analytical: Yes
Operation: Can be used in other applications
Standard: Yes, through Common Data Service
Analytical: Power BI dataflows: only in Power BI. Power Platform dataflows or Power BI external dataflows: yes,
through Azure Data Lake Storage Gen2
Using dataflows with Microsoft Power Platform makes data preparation easier, and lets you reuse your data
preparation work in subsequent reports, apps, and models.
In the world of ever-expanding data, data preparation can be difficult and expensive, consuming as much as 60%-
80% of the time and cost for a typical analytics project. Such projects can require wrangling fragmented and
incomplete data, complex system integration, data with structural inconsistency, and a high skillset barrier.
To make data preparation easier and to help you get more value out of your data, Power Query and Power
Platform dataflows were created.
With dataflows, Microsoft brings Power Query’s self-service data preparation capabilities into the Power BI and
Power Apps online services, and expands existing capabilities in the following ways:
Self-service data prep for big data with Dataflows —Dataflows can be used to easily ingest, cleanse,
transform, integrate, enrich, and schematize data from a large and ever growing array of transactional and
observational sources, encompassing all data preparation logic. Previously, extract, transform, load (ETL)
logic could only be included within datasets in Power BI, copied over and over between datasets, and bound
to dataset management settings.
With dataflows, ETL logic is elevated to a first-class artifact within Power Platform services and includes
dedicated authoring and management experiences. Business analysts, BI professionals, and data scientists
can use dataflows to handle the most complex data preparation challenges and build on each other’s work,
thanks to a revolutionary model-driven calculation engine, which takes care of all the transformation and
dependency logic—cutting time, cost, and expertise to a fraction of what’s traditionally been required for
those tasks. You can create dataflows using the well-known, self-service data preparation experience of
Power Query. Dataflows are created and easily managed in app workspaces or environments, in the Power
BI or Power Apps portal respectively, enjoying all the capabilities these services have to offer, such as
permission management, scheduled refreshes, and more.
Load data to Common Data Service or Azure Data Lake Storage Gen2 —Depending on your use
case, you can store data prepared by Power Platform dataflows in the Common Data Service or your
organization's Azure Data Lake Storage Gen2 account:
Common Data Service lets you securely store and manage data that's used by business
applications within a set of entities. An entity is a set of records used to store data, similar to how a
table stores data within a database. Common Data Service includes a base set of standard entities
that cover typical scenarios, but you can also create custom entities specific to your organization and
populate them with data using dataflows. App makers can then use Power Apps and Flow to build
rich applications using this data.
Azure Data Lake Storage Gen2 lets you collaborate with people in your organization using Power
BI, Azure Data, and AI services, or using custom-built Line of Business Applications that read data
from the lake. Dataflows that load data to an Azure Data Lake Storage Gen2 account store data in
Common Data Model folders. Common Data Model folders contain schematized data and metadata
in a standardized format, to facilitate data exchange and to enable full interoperability across services
that produce or consume data stored in an organization’s Azure Data Lake Storage account as the
shared storage layer.
Advanced Analytics and AI with Azure —Power Platform dataflows store data in Common Data Service
or Azure Data Lake Storage Gen2—which means that data ingested through dataflows is now available to
data engineers and data scientists to leverage the full power of Azure Data Services, such as Azure Machine
Learning, Azure Databricks, and Azure SQL Data Warehouse for advanced analytics and AI. This enables
business analysts, data engineers, and data scientists to collaborate on the same data within their
organization.
Support for the Common Data Model —The Common Data Model is a set of standardized data
schemas and a metadata system to allow consistency of data and its meaning across applications and
business processes. Dataflows support the Common Data Model by offering easy mapping from any data in
any shape into the standard Common Data Model entities, such as Account, Contact, and so on. Dataflows
also land the data, both standard and custom entities, in schematized Common Data Model form. Business
analysts can take advantage of the standard schema and its semantic consistency, or customize their entities
based on their unique needs. The Common Data Model continues to evolve as part of the recently
announced Open Data Initiative.
The following shows the availability of dataflow capabilities in Power Apps and Power BI.
Dataflow capability: Dataflows Data Connector in Power BI Desktop
Power Apps: For dataflows with Azure Data Lake Storage Gen2 as the destination
Power BI: Yes
Dataflow capability: Dataflow linked entities
Power Apps: For dataflows with Azure Data Lake Storage Gen2 as the destination
Power BI: Yes
Dataflow capability: Computed entities (in-storage transformations using M)
Power Apps: For dataflows with Azure Data Lake Storage Gen2 as the destination
Power BI: Power BI Premium only
Dataflow capability: Dataflow incremental refresh
Power Apps: For dataflows with Azure Data Lake Storage Gen2 as the destination; requires Power Apps Plan 2
Power BI: Power BI Premium only
For more information about specific products, see the following articles:
Dataflows in Power Apps:
Self-service data prep in Power Apps
Creating and using dataflows in Power Apps
Connect Azure Data Lake Storage Gen2 for dataflow storage
Add data to an entity in Common Data Service
Visit the Power Apps dataflow community and share what you’re doing, ask questions, or submit new ideas
Visit the Power Apps dataflow community forum and share what you’re doing, ask questions, or submit new
ideas
Dataflows in Power BI:
Self-service data prep in Power BI
Create and use dataflows in Power BI
Dataflows whitepaper
Detailed video of a dataflows walkthrough
Visit the Power BI dataflows community and share what you’re doing, ask questions or submit new ideas
Next steps
The following articles go into more detail about common usage scenarios for dataflows.
Using incremental refresh with dataflows
Creating computed entities in dataflows
Connect to data sources for dataflows
Link entities between dataflows
For more information about the Common Data Model and the Common Data Model Folder standard, read the
following articles:
Common Data Model - overview
Common Data Model folders
Common Data Model folder model file definition
Using incremental refresh with dataflows
With dataflows, you can bring large amounts of data into Power BI or your organization's provided storage. In
some cases, however, it's not practical to update a full copy of source data in each refresh. A good alternative is
incremental refresh, which provides the following benefits for dataflows:
Refresh occurs faster—only data that's changed needs to be refreshed. For example, refresh only the last five
days of a 10-year dataflow.
Refresh is more reliable—for example, it's not necessary to maintain long-running connections to volatile
source systems.
Resource consumption is reduced—less data to refresh reduces overall consumption of memory and other
resources.
Incremental refresh is available in dataflows created in Power BI, and in dataflows created in the Power Apps portal.
This article shows screens from Power BI, but these instructions apply to dataflows created in Power BI or in the
Power Apps maker portal.
Using incremental refresh in dataflows created in Power BI requires that the workspace where the dataflow resides
be in Premium capacity. Incremental refresh in the Power Apps portal requires Power Apps Plan 2.
In either Power BI or Power Apps, using incremental refresh requires the source data ingested into the dataflow to
have a datetime field on which incremental refresh can filter.
Configuring incremental refresh for dataflows
A dataflow can contain many entities. Incremental refresh is set up at the entity level, allowing one dataflow to hold
both fully refreshed entities and incrementally refreshed entities.
To set up an incrementally refreshed entity, start by configuring your entity as you would any other entity.
Once the dataflow is created and saved, select the incremental refresh icon in the entity view, as shown in the
following image.
When you select the icon, the Incremental refresh settings window appears. When you toggle incremental
refresh to the On position, you can configure your incremental refresh.
The following list explains the settings in the Incremental refresh settings window.
Incremental refresh on/off toggle—this slider toggles the incremental refresh policy on or off for the entity.
Filter field drop-down—selects the query field on which the entity should be filtered for increments. This
field only contains datetime fields. You can't use incremental refresh if your entity doesn't contain a
datetime field.
Store rows from the past—the following example helps explain the next few settings.
This example defines a refresh policy to store five years of data in total, and incrementally refresh 10 days of
data. If the entity is refreshed daily, the following actions are carried out for each refresh operation:
Add a new day of data.
Refresh 10 days up to the current date.
Remove calendar years that are older than five years before the current date. For example, if the current
date is January 1, 2019, the year 2013 is removed.
The first dataflow refresh might take a while to import all five years, but subsequent refreshes are likely to
complete in a small fraction of the initial refresh time.
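Conceptually, such a refresh policy behaves like a datetime filter over the entity's query. The following M sketch is only an illustration of that idea, not code you author or that the service literally generates; RangeStart and RangeEnd are hypothetical values standing in for the window the policy computes, and the source, table, and column names are placeholders.

let
    // Hypothetical window boundaries computed by the refresh policy
    RangeEnd = DateTime.LocalNow(),
    RangeStart = Date.AddDays(RangeEnd, -10),
    // Placeholder source and entity
    Source = Sql.Database("ServerName", "DatabaseName"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Only rows whose datetime field falls inside the incremental window are refreshed
    IncrementalRows = Table.SelectRows(Orders, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)
in
    IncrementalRows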
Detect data changes—incremental refresh of 10 days is much more efficient than a full refresh of five
years, but you might be able to do even better. When you select the Detect data changes checkbox, you
can select a date/time column to identify and refresh only the days where the data has changed. This
assumes such a column exists in the source system, typically for auditing purposes. The maximum
value of this column is evaluated for each of the periods in the incremental range. If that data hasn't
changed since the last refresh, there's no need to refresh the period. In the example, this could further
reduce the days incrementally refreshed from 10 to perhaps two.
TIP
The current design requires that the column used to detect data changes be persisted and cached into memory. You might
want to consider one of the following techniques to reduce cardinality and memory consumption:
Persist only the maximum value of this column at time of refresh, perhaps using a Power Query function.
Reduce the precision to a level that is acceptable given your refresh-frequency requirements.
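As a hedged illustration of those two techniques (the table name and the ModifiedOn audit column are assumptions, not part of the feature itself):

let
    Source = Sql.Database("ServerName", "DatabaseName"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Technique 1: keep only the maximum audit value observed at refresh time
    LastChange = List.Max(Orders[ModifiedOn]),
    // Technique 2: reduce the precision of the audit column to whole days before it is cached
    ReducedPrecision = Table.TransformColumns(Orders, {{"ModifiedOn", DateTime.Date, type date}})
in
    ReducedPrecision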
Only refresh complete periods—imagine your refresh is scheduled to run at 4:00 AM every morning. If
data appears in the source system during those first four hours of that day, you may not want to account for
it. Some business metrics, such as barrels per day in the oil and gas industry, aren't practical or sensible to
account for based on partial days.
Another example where only refreshing complete periods is appropriate is refreshing data from a financial
system. Imagine a financial system where data for the previous month is approved on the 12th calendar day
of the month. You could set the incremental range to one month and schedule the refresh to run on the
12th day of the month. With this option checked, it would refresh January data (the most recent complete
monthly period) on February 12.
NOTE
Dataflow incremental refresh determines dates according to the following logic: if a refresh is scheduled, incremental refresh
for dataflows uses the time zone defined in the refresh policy. If no schedule for refreshing exists, incremental refresh uses
the time from the machine running the refresh.
Next Steps
This article described incremental refresh for dataflows. Here are some more articles that might be useful.
Self-service data prep in Power BI
Creating computed entities in dataflows
Connect to data sources for dataflows
Link entities between dataflows
Create and use dataflows in Power BI
Using dataflows with on-premises data sources
Developer resources for Power BI dataflows
For more information about Power Query and scheduled refresh, you can read these articles:
Query overview in Power BI Desktop
Configuring scheduled refresh
For more information about the Common Data Model, you can read its overview article:
Common Data Model - overview
Creating computed entities in dataflows
You can perform in-storage computations when using dataflows with a Power BI Premium subscription. This lets
you do calculations on your existing dataflows, and return results that enable you to focus on report creation and
analytics.
To perform in-storage computations, you first must create the dataflow and bring data into that Power BI dataflow
storage. Once you have a dataflow that contains data, you can create computed entities, which are entities that do
in-storage computations.
There are two ways you can connect dataflow data to Power BI:
Using self-service authoring of a dataflow
Using an external dataflow
The following sections describe how to create computed entities on your dataflow data.
Any transformation you do on this newly created entity will be run on the data that already resides in Power BI
dataflow storage. That means that the query won't run against the external data source from which the data was
imported (for example, the SQL database from which the data was pulled). Rather, the query is done on the data
that resides in the dataflow storage.
Example use cases
What kind of transformations can be done with computed entities? Any transformation that you usually specify
using the transformation user interface in Power BI, or the M editor, is supported when performing in-storage
computation.
Consider the following example. You have an Account entity that contains the raw data for all the customers from
your Dynamics 365 subscription. You also have ServiceCalls raw data from the Service Center, with data from the
support calls that were performed for the different accounts on each day of the year.
Imagine you want to enrich the Account entity with data from the ServiceCalls entity.
First you would need to aggregate the data from the ServiceCalls to calculate the number of support calls that
were done for each account in the last year.
Next, you would want to merge the Account entity with the ServiceCallsAggregated entity to calculate the
enriched Account table.
And then you can see the results, shown as EnrichedAccount in the following image.
And that's it—the transformation is done on the data in the dataflow that resides in your Power BI Premium
subscription, not on the source data.
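A rough M sketch of the logic behind these two computed entities follows. The entity names come from the example above, while the AccountId and CallsLastYear column names are assumptions.

let
    // Aggregate ServiceCalls down to one row per account with a call count
    ServiceCallsAggregated = Table.Group(ServiceCalls, {"AccountId"}, {{"CallsLastYear", each Table.RowCount(_), Int64.Type}}),
    // Enrich Account with the aggregated call counts
    Merged = Table.NestedJoin(Account, {"AccountId"}, ServiceCallsAggregated, {"AccountId"}, "ServiceCallsAggregated", JoinKind.LeftOuter),
    EnrichedAccount = Table.ExpandTableColumn(Merged, "ServiceCallsAggregated", {"CallsLastYear"})
in
    EnrichedAccount

Because both referenced entities already reside in dataflow storage, these steps run against the stored data rather than the original Dynamics 365 or Service Center sources.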
Next Steps
This article described computed entities and dataflows. Here are some more articles that might be useful.
Self-service data prep in Power BI
Using incremental refresh with dataflows
Connect to data sources for dataflows
Link entities between dataflows
The following links provide additional information about dataflows in Power BI, and other resources:
Create and use dataflows in Power BI
Using dataflows with on-premises data sources
Developer resources for Power BI dataflows
Configure workspace dataflow settings (Preview)
Add a CDM folder to Power BI as a dataflow (Preview)
Connect Azure Data Lake Storage Gen2 for dataflow storage (Preview)
For more information about Power Query and scheduled refresh, you can read these articles:
Query overview in Power BI Desktop
Configuring scheduled refresh
For more information about the Common Data Model, you can read its overview article:
Common Data Model - overview
Connect to data sources for dataflows
With Power Platform dataflows, you can connect to many different data sources to create new dataflows, or add
new entities to an existing dataflow.
This article lists the many available data sources for creating or adding to dataflows, and describes how to create
those dataflows using these data sources.
For an overview of how to create and use dataflows, see Creating and using dataflows in Power BI.
Data sources for dataflows are organized into the following categories, which appear across the top of the Get
data dialog:
All categories
File
Database
Power Platform
Azure
Online Services
Other
The All categories category contains all data sources, from all categories.
The File category includes the following available data connections for dataflows:
Access
Excel
JSON
Text/CSV
XML
The Database category includes the following available data connections for dataflows:
IBM DB2 Database
MySQL Database
Oracle Database
PostgreSQL Database
SQL Server Database
Sybase Database
Teradata Database
Vertica
The Power Platform category includes the following available data connections for dataflows:
Power BI dataflows
Power Platform dataflows
The Azure category includes the following available data connections for dataflows:
Azure Blobs
Azure Data Explorer
Azure SQL Data Warehouse
Azure SQL Database
Azure Tables
The Online Services category includes the following available data connections for dataflows:
Amazon Redshift
Common Data Service for Apps
Microsoft Exchange Online
Salesforce Objects
Salesforce Reports
SharePoint Online List
Smartsheet
The Other category includes the following available data connections for dataflows:
Active Directory
OData
SharePoint List
Web API
Web page
Blank table
Blank Query
A connection window for the selected data connection is displayed. If credentials are required, you're prompted to
provide them. The following image shows a Server URL being entered to connect to a Common Data Service
server.
Once the Server URL or resource connection information is provided, select Sign in to enter the credentials to use
for the data access, and then select Next.
Power Query Online initiates and establishes the connection to the data source. It then presents the available
tables from that data source in the Navigator window, as shown in the following image.
You can select tables and data to load by selecting the checkbox next to each in the left pane. To load the data,
select OK from the bottom of the Navigator pane. A Power Query Online dialog appears. In this dialog, you can
edit queries and perform any other transformations you want to apply to the selected data.
That's all there is to it. Other data sources have similar flows, and use Power Query Online to edit and transform
the data you bring into your dataflow.
Next Steps
This article showed which data sources you can connect to for dataflows. The following articles go into more detail
about common usage scenarios for dataflows.
Self-service data prep in Power BI
Using incremental refresh with dataflows
Creating computed entities in dataflows
Link entities between dataflows
Additional information about dataflows and related information can be found in the following articles:
Create and use dataflows in Power BI
Using dataflows with on-premises data sources
Developer resources for Power BI dataflows
Dataflows and Azure Data Lake integration (Preview)
For more information about Power Query and scheduled refresh, you can read these articles:
Query overview in Power BI Desktop
Configuring scheduled refresh
For more information about the Common Data Model, you can read its overview article:
Common Data Model - overview
Link entities between dataflows
With dataflows in Power Platform, you can have a single organizational data storage source where business
analysts can prep and manage their data once, and then reuse it between different analytics apps in the
organization.
When you link entities between dataflows, you can reuse entities that have already been ingested, cleansed, and
transformed by other dataflows owned by others without the need to maintain that data. The linked entities
simply point to the entities in other dataflows, and do not copy or duplicate the data.
Linked entities are read-only. If you want to create transformations for a linked entity, you must create a new
computed entity with a reference to the linked entity.
NOTE
Entities differ based on whether they’re standard entities or computed entities. Standard entities (often simply referred to as
entities) query an external data source, such as a SQL database. Computed entities require Premium capacity on Power BI
and run their transformations on data that’s already in Power BI storage.
If your dataflow isn’t in a Premium capacity workspace, you can still reference a single query or combine two or more
queries as long as the transformations aren’t defined as in-storage transformations. Such references are considered
standard entities. To do so, turn off the Enable load option for the referenced queries to prevent the data from being
materialized and from being ingested into storage. From there, you can reference those Enable load = false queries, and
set Enable load to On only for the resulting queries that you want to materialize.
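For example, a non-loaded base query can be referenced by the query you actually materialize. The following is a minimal sketch with illustrative query and column names; the Enable load setting itself is toggled in the queries pane, not written in M.

// SalesByRegion (Enable load = On) references RawSales (Enable load = Off)
let
    Source = RawSales,
    SalesByRegion = Table.Group(Source, {"Region"}, {{"TotalAmount", each List.Sum([Amount]), type number}})
in
    SalesByRegion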
You can also select Add linked entities from the Add entities menu item in the Power BI service.
To link entities, you must sign in with your Power BI credentials.
A Navigator window opens and lets you choose a set of entities you can connect to. The entities displayed are
entities for which you have permissions, across all workspaces and environments in your organization.
Once your linked entities are selected, they appear in the list of entities for your dataflow in the authoring tool,
with a special icon identifying them as linked entities.
You can also view the source dataflow from the dataflow settings of your linked entity.
Next Steps
The following articles may be useful as you create or work with dataflows.
Self-service data prep in Power BI
Using incremental refresh with dataflows
Creating computed entities in dataflows
Connect to data sources for dataflows
The articles below provide more information about dataflows and Power BI:
Create and use dataflows in Power BI
Using computed entities on Power BI Premium
Using dataflows with on-premises data sources
Developer resources for Power BI dataflows
For more information about Power Query and scheduled refresh, you can read these articles:
Query overview in Power BI Desktop
Configuring scheduled refresh
For more information about the Common Data Model, you can read its overview article:
Common Data Model - overview
What is the storage structure for analytical dataflows?
Analytical dataflows store both data and metadata in Azure Data Lake Storage Gen2. Dataflows use a standard
structure, called Common Data Model folders, to store and describe data created in the lake. In this article,
you'll learn more about the storage standard that dataflows use behind the scenes.
The model.json metadata file is the file that you can use to migrate (or import) your dataflow into another
workspace or environment.
To learn exactly what the model.json metadata file includes, see The metadata file (model.json) for the Common
Data Model.
Data files
In addition to the metadata file, there are other subfolders inside the dataflow folder. A dataflow stores the data of
each entity inside a subfolder with the entity's name on it. An entity’s data might be split into multiple data
partitions stored in CSV format.
To see how to connect the external Azure Data Lake storage account to dataflows in your environment, see Connect
Azure Data Lake Storage Gen2 for dataflow storage.
Next Steps
Use the Common Data Model to optimize Azure Data Lake Storage Gen2
The metadata file (model.json) for the Common Data Model
Add a CDM folder to Power BI as a dataflow (Preview)
Connect Azure Data Lake Storage Gen2 for dataflow storage
Dataflows and Azure Data Lake Integration (Preview)
Configure workspace dataflow settings (Preview)
Dataflow storage options
Standard dataflows always load data into Common Data Service tables in an environment. Analytical dataflows
always load data into Azure Data Lake Storage Gen2 accounts. For both dataflow types, there's no need to provision
or manage the storage. Dataflow storage, by default, is provided and managed by products the dataflow is created
in.
Analytical dataflows allow an additional storage option: your organization's Azure Data Lake Storage Gen2 account.
This option enables access to the data created by a dataflow directly through Azure Data Lake Storage Gen2
interfaces. Providing your own storage account for analytical dataflows enables other Azure or line-of-business
applications to leverage the data by connecting to the lake directly.
Linking a Power Platform environment to your organization's Azure Data Lake Storage Gen2
To configure dataflows created in the Power Apps portal to store data in the organization's Azure Data Lake Storage
Gen2, follow the steps in Connect Azure Data Lake Storage Gen2 for dataflow storage in the Power Apps portal.
Known limitations
Once a dataflow is created, its storage location can't be changed.
Linked and computed entities features are only available when both dataflows are in the same storage
account.
To learn more about the enhanced compute engine, see The enhanced compute engine.
Next Steps
The articles below provide further information that can be helpful.
Connect Azure Data Lake Storage Gen2 for dataflow storage (Power BI dataflows)
Connect Azure Data Lake Storage Gen2 for dataflow storage (Power Platform dataflows)
Creating computed entities in dataflows
The enhanced compute engine
Standard vs Analytical dataflows
Using the output of Power Platform dataflows from
other Power Query experiences
You can use the output of Power Platform dataflows from the Power Query experience in other products. For
example, using Power BI Desktop, or even in another dataflow, you can get data from the output of a dataflow.
In this article, you'll learn how to do so.
When you get data from a dataflow, the data will be imported into the Power BI dataset. The dataset then needs to
be refreshed, and options are available to either perform a one-time refresh or an automatic refresh on a schedule
specified by you. Scheduled refresh for the dataset can be configured in the Power BI portal.
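In Power BI Desktop, getting data from a dataflow ultimately produces an M query over the dataflows connector. A minimal sketch of what such a query can look like follows; the workspace, dataflow, and entity names are placeholders, and the exact navigation record fields generated by the connector may differ from this assumption.

let
    Source = PowerBI.Dataflows(null),
    Workspace = Source{[workspaceName = "Sales Analytics"]}[Data],
    Dataflow = Workspace{[dataflowName = "Sales Staging"]}[Data],
    Orders = Dataflow{[entity = "Orders"]}[Data]
in
    Orders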
DirectQuery from dataflows
Power BI Dataflows also support a DirectQuery connection. If the size of the data is so large that you don't want to
import all of it into the Power BI dataset, you can create a DirectQuery connection. DirectQuery won't copy the data
into the Power BI dataset. The tables in the Power BI dataset that get their data from a DirectQuery sourced
dataflow don't need a scheduled refresh, because their data will be fetched live from the dataflow.
To use DirectQuery for the dataflows, you need to enable the compute engine on your premium capacity, and then
refresh the dataflow before it can be consumed in DirectQuery mode. For more information, see Power BI
Dataflows Direct Query Support.
When getting data from the output of another dataflow, a linked entity will be created. Linked entities provide a way
to make data created in an upstream dataflow available in a downstream dataflow, without copying the data to the
downstream dataflow. Because linked entities are just pointers to entities created in other dataflows, they're kept up
to date by the refresh logic of the upstream dataflow. If both dataflows reside in the same workspace or
environment, those dataflows will refresh together, to keep data in both dataflows always up to date. To learn more
about the refresh process of linked entities, see Link entities between dataflows.
Next Steps
The following articles provide more detail about related topics.
Creating and using dataflows in Power BI
Link entities between dataflows in Power BI
Connect to data created by Power BI dataflows in Power BI Desktop (Beta)
Create and use dataflows in the Power Platform
Link entities between dataflows (Power Platform)
How to migrate queries from Power Query in the
desktop (Power BI and Excel) to dataflows
If you already have queries in Power Query, either in Power BI Desktop or in Excel, you might want to migrate the
queries into dataflows. The migration process is simple and straightforward. In this article, you'll learn the steps to
do so.
2. In Excel, this option is under Data > Get Data > Launch Power Query Editor.
The gateway isn't needed for data sources residing in the cloud, such as an Azure SQL database.
Configure connection
In the next step, configure the connection to the data source by using the Configure connection option, and enter
credentials or anything else needed to connect to the data source at this stage.
Verification
If you've done all the steps successfully, you should see a preview of the data in the Power Query Editor.
If a scenario like this happens, you have two options: you can set up the gateway for that data source, or you can
update the query in the dataflow's Power Query editor using a set of steps that are supported without the need
for the gateway.
Dataflows can be created in different portals, such as the Power BI and Power Apps portals, and can be of two types,
analytical and standard dataflows. In addition, some dataflow features are only available as Premium features.
Considering the wide range of products that can use dataflows, and feature availability in each product or dataflow
type, it's important to know what licensing options you need to use dataflows.
Premium features
Some of the dataflow features are limited to premium licenses. If you want to use the enhanced compute engine to
speed up the performance of your dataflow queries over computed entities, or to have the DirectQuery connection
option to the dataflow, you need Power BI P1 or A3 or higher capacities.
AI capabilities in Power BI, linked entities, and computed entities are all Premium features that aren't available with
a Power BI Pro account.
Features
The following table contains a list of features and the license needed for them to be available.
Store data in customer-provided Azure Data Lake Storage Gen2 (analytical dataflow; bring your own Azure Data Lake Storage Gen2)
  Power BI: Power BI Pro, Power BI Premium
  Power Apps: Per app plan, Per user plan
Dataflow incremental refresh
  Power BI: Power BI Premium
  Power Apps: Analytical dataflows only; requires Power Apps Plan 2
Next step
If you want to read more details about the concepts discussed in this article, follow any of the links below.
Pricing
Power BI Pricing
Power Platform Pricing
Azure Data Lake Storage Gen 2 Pricing
Features
Computed entity
Linked entity
AI capabilities in Power BI dataflow
Standard vs analytical dataflow
The enhanced compute engine
How do Power Platform dataflows and Azure Data
Factory wrangling dataflows relate to each other?
Power Platform dataflows and Azure Data Factory dataflows are often considered to be doing the same thing:
extraction of the data from source systems, transforming the data, and loading the transformed data into a
destination. However, there are differences in these two types of dataflows, and you can have a solution
implemented that works with a combination of these technologies. This article describes this relationship in more
detail.
What's in common?
Both Power Platform dataflows and Azure Data Factory wrangling dataflows are useful for getting data from one or
more sources, applying transformations on the data using Power Query, and loading the transformed data into
destinations (Extract, Transform, Load—in short, ETL).
Both are empowered by Power Query data transformation.
Both are cloud-based technologies.
Destinations
  Power Platform dataflows: Common Data Service or Azure Data Lake Storage Gen2
  Azure Data Factory wrangling dataflows: Many destinations, listed here
Power Query transformation
  Power Platform dataflows: All Power Query functions are supported
  Azure Data Factory wrangling dataflows: A limited list of functions is supported; here is the list
Sources
  Power Platform dataflows: Many sources are supported
  Azure Data Factory wrangling dataflows: Only a few sources; here is the list
Scalability
  Power Platform dataflows: Depends on the Premium capacity and the use of the enhanced compute engine
  Azure Data Factory wrangling dataflows: Highly scalable; the query folds into Spark code for cloud-scale execution
Depending on the storage for the output of the Power Platform dataflows, you can use that output in other Azure
services.
There are benefits to using computed entities in the dataflow. This article explains computed entity use cases and
how they work behind the scenes.
A computed entity not only gives you a single place that holds the source logic for the transformation, it also
speeds up the transformation because the transformation is done only once instead of multiple times. The load on
the data source is also reduced.
The computed entity can have further transformations. For example, you can use Group By to aggregate the data
at the customer level.
This means that the Orders Aggregated entity will be getting data from the Order entity (not from the data
source again). Since some of the transformations that need to be done have already been done in the Orders
entity, performance is better and data transformation is faster.
The concept of the computed entity is to have a table persisted in storage, and other tables sourced from it, so that
you can reduce the read time from the data source and share some of the common transformations. This can be
achieved by getting data from other dataflows through the dataflow connector, or referencing another query in the
same dataflow.
If the dataflow you're developing is getting bigger and more complex, here are some things you can do to improve
on your original design.
Avoid scheduling refresh for linked entities inside the same workspace
If you're regularly being locked out of your dataflows that contain linked entities, it may be a result of a
corresponding, dependent dataflow inside the same workspace that's locked during dataflow refresh. Such locking
provides transactional accuracy and ensures both dataflows successfully refresh, but it can block you from editing.
If you set up a separate schedule for the linked dataflow, dataflows can refresh unnecessarily and block you from
editing the dataflow. There are two recommendations to avoid this issue:
Avoid setting a refresh schedule for a linked dataflow in the same workspace as the source dataflow. If you want to
configure a refresh schedule separately, and want to avoid the locking behavior, separate the dataflow in a separate
workspace.
Best practices for reusing dataflows across
environments and workspaces
This article discusses a collection of best practices for reusing dataflows effectively and efficiently. Read this article
to avoid design pitfalls and potential performance issues, while developing dataflows for reuse.
Designing a data warehouse is one of the most common tasks you can do with a dataflow. This article highlights
some of the best practices for creating a data warehouse using a dataflow.
Staging dataflows
One of the key points in any data integration system is to reduce the number of reads from the source operational
system. In the traditional data warehouse architecture, this reduction is done by creating a new database called a
staging database. The purpose of the staging database is to load data "as is" from the data source into the staging
database on a scheduled basis.
The rest of the data integration will then use the staging database as the source for further transformation and
converting it to the data warehouse model structure.
We recommended that you follow the same approach using dataflows. Create a set of dataflows that are
responsible for just loading data "as is" from the source system (only for the tables that are needed). The result is
then stored in the storage structure of the dataflow (either Azure Data Lake Storage Gen2 or Common Data
Service). This change ensures that the read operation from the source system is minimal.
Next, you can create other dataflows that source their data from staging dataflows. Benefits of this approach
include:
Reducing the number of read operations from the source system, and reducing the load on the source system as
a result.
Reducing the load on data gateways if an on-premises data source is used.
Having an intermediate copy of the data for reconciliation purposes, in case the source system data changes.
Making the transformation dataflows source-independent.
Transformation dataflows
When you have your transformation dataflows separate from the staging dataflows, the transformation will be
independent from the source. This separation helps if the source system is migrated to a new system. All
you need to do in that case is change the staging dataflows. The transformation dataflows should work without
any problem, because they're sourced only from the staging dataflows.
This separation also helps in case the source system connection is slow. The transformation dataflow doesn't need
to wait for a long time to get records coming through the slow connection of the source system. The staging
dataflow has already done that part and the data is ready for the transformation layer.
Layered Architecture
A layered architecture is an architecture in which you perform actions in separate layers. The staging and
transformation dataflows can be two layers of a multi-layered dataflow architecture. Keeping actions in separate
layers ensures the minimum maintenance required. When you want to change something, you just need to change it in
the layer in which it's located. The other layers should all continue to work fine.
The following image shows a multi-layered architecture for dataflows in which their entities are then used in Power
BI datasets.
In the diagram above, the computed entity gets the data directly from the source. However, in the architecture of
staging and transformation dataflows, it's likely the computed entities are sourced from the staging dataflows.
This article reveals some of the most common errors and issues you might get when you want to create a dataflow,
and how to fix them.
Reason:
Creating dataflows in "My Workspace" isn't supported.
Resolution:
Create your dataflows in organizational workspaces. To learn how to create an organizational workspace, see Create
the new workspaces in Power BI.
You might have created a dataflow, but then have difficulty in getting data from it (either using Power Query in
Power BI Desktop or from other dataflows). This article explains some of the most common issues that happen
when you get data from a dataflow.
Once a dataflow is refreshed, the data in entities will be visible in the Navigator window of other tools and services.
When you create a dataflow, sometimes you get an issue connecting to the data source. This issue can be because
of the gateway, the credentials, or many other reasons. In this article, you'll see the most common errors and issues
in this category and their resolution.
Reason:
When your entity in the dataflow gets data from an on-premises data source, a gateway is needed for the
connection. The gateway isn't selected.
Resolution:
Select the gateway using the Select gateway button. Sometimes, however, you might not have a gateway set up
yet. For information about how to install and set up a gateway, see Install an on-premises data gateway.
Reason:
Disabled modules are related to functions that require an on-premises gateway connection to work. Even if the
function is getting data from a web page, because of some security compliance requirements, it needs to go
through a gateway connection.
Resolution:
First, install and set up an on-premises gateway. Then add a web data source for the web URL you're connecting to.
After adding the web data source, you can select the gateway in the dataflow from Project Options .
You might be asked to set up credentials. After the successful setup, you should see the queries working fine.
Best practices when working with Power Query
This article contains some tips and tricks to make the most out of your data wrangling experience in Power Query.
Filter early
It's always recommended to filter your data in the early stages of your query or as early as possible. Some
connectors will take advantage of your filters through query folding, as described in Power Query query folding. It's
also a best practice to filter out any data that isn't relevant for your case. This will let you better focus on your task
at hand by only showing data that's relevant in the data preview section.
You can use the auto filter menu that displays a distinct list of the values found in your column to select the values
that you want to keep or filter out. You can also use the search bar to help you find the values in your column.
You can also take advantage of the type-specific filters, such as In the previous for a date, datetime, or even
datetimezone column.
These type-specific filters can help you create a dynamic filter that will always retrieve data that's in the previous x
number of seconds, minutes, hours, days, weeks, months, quarters, or years as showcased in the following image.
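For example, a dynamic "previous 7 days" filter corresponds to an M step along these lines (the query and column names are assumptions):

let
    Source = Sales,
    // Keep only rows whose OrderDate falls in the previous 7 days, relative to refresh time
    LastSevenDays = Table.SelectRows(Source, each Date.IsInPreviousNDays([OrderDate], 7))
in
    LastSevenDays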
NOTE
To learn more about filtering your data based on values from a column, see Filter by values.
It's crucial that you always work with the correct data types for your columns. When working with structured data
sources such as databases, the data type information will be brought from the table schema found in the database.
But for unstructured data sources such as TXT and CSV files, it's important that you set the correct data types for
the columns coming from that data source. By default, Power Query offers an automatic data type detection for
unstructured data sources. You can read more about this feature and how it can help you in Data types.
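For example, after importing a text or CSV file you would typically confirm or set the column types explicitly. The following is a small sketch with a placeholder file path and column names.

let
    Source = Csv.Document(File.Contents("C:\data\sales.csv"), [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Set explicit data types instead of relying only on automatic detection
    Typed = Table.TransformColumnTypes(PromotedHeaders, {{"OrderDate", type date}, {"Amount", type number}, {"Region", type text}})
in
    Typed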
NOTE
To learn more about the importance of data types and how to work with them, see Data types.
NOTE
To learn more about the data profiling tools, see Data profiling tools.
NOTE
To learn more about all the available features and components found inside the applied steps pane, see Using the Applied
steps list.
You could split this query into two at the Merge with Prices table step. That way it's easier to understand the
steps that were applied to the sales query before the merge. To do this operation, you right-click the Merge with
Prices table step and select the Extract Previous option.
You'll then be prompted with a dialog to give your new query a name. This will effectively split your query into two
queries. One query will have all the steps before the merge. The other query will have an initial step that will
reference your new query and the rest of the steps that you had in your original query from the Merge with
Prices table step downward.
You could also leverage the use of query referencing as you see fit. But it's a good idea to keep your queries at a
level that doesn't seem daunting at first glance with so many steps.
NOTE
To learn more about query referencing, see Understanding the queries pane.
Create groups
A great way to keep your work organized is by leveraging the use of groups in the queries pane.
The sole purpose of groups is to help you keep your work organized by serving as folders for your queries. You can
create groups within groups should you ever need to. Moving queries across groups is as easy as drag and drop.
Try to give your groups a meaningful name that makes sense to you and your case.
NOTE
To learn more about all the available features and components found inside the queries pane, see Understanding the queries
pane.
Future-proofing queries
Making sure that you create a query that won't have any issues during a future refresh is a top priority. There are
several features in Power Query to make your query resilient to changes and able to refresh even when some
components of your data source change.
It's a best practice to define the scope of your query as to what it should do and what it should account for in terms
of structure, layout, column names, data types, and any other component that you consider relevant to the scope.
Some examples of transformations that can help you make your query resilient to changes are listed here, with a combined M sketch after the list:
If your query has a dynamic number of rows with data, but a fixed number of rows that serve as the footer
that should be removed, you can use the Remove bottom rows feature.
NOTE
To learn more about filtering your data by row position, see Filter a table by row position.
If your query has a dynamic number of columns, but you only need to select specific columns from your
dataset, you can use the Choose columns feature.
NOTE
To learn more about choosing or removing columns, see Choose or remove columns.
If your query has a dynamic number of columns and you need to unpivot only a subset of your columns,
you can use the unpivot only selected columns feature.
NOTE
To learn more about the options to unpivot your columns, see Unpivot columns.
If your query has a step that changes the data type of a column, but some cells yield errors as the values
don't conform to the desired data type, you could remove the rows that yielded error values.
NOTE
To learn more about dealing with errors, see Dealing with errors.
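The following combined M sketch strings these resilience techniques together against an illustrative Sales query. The column names, the two-row footer, and the chaining of all four techniques into one query are assumptions made only for brevity.

let
    Source = Sales,
    // Remove a fixed two-row footer, regardless of how many data rows arrive
    NoFooter = Table.RemoveLastN(Source, 2),
    // Keep only the columns the query needs, even if the source adds or reorders columns
    Selected = Table.SelectColumns(NoFooter, {"OrderDate", "Region", "Amount"}, MissingField.UseNull),
    // Unpivot everything except the known key columns, so newly added columns are handled automatically
    Unpivoted = Table.UnpivotOtherColumns(Selected, {"OrderDate", "Region"}, "Attribute", "Value"),
    // Set the value type, then drop any rows whose values didn't convert cleanly
    Typed = Table.TransformColumnTypes(Unpivoted, {{"Value", type number}}),
    NoErrors = Table.RemoveRowsWithErrors(Typed, {"Value"})
in
    NoErrors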
Use parameters
Creating queries that are dynamic and flexible is a best practice. Parameters in Power Query help you make your
queries more dynamic and flexible. A parameter serves as a way to easily store and manage a value that can be
reused in many different ways. But it's more commonly used in two scenarios:
Step argument—You can use a parameter as the argument of multiple transformations driven from the
user interface.
Custom function argument—You can create a new function from a query, and reference parameters as
the arguments of your custom function.
The main benefits of creating and using parameters are:
Centralized view of all your parameters through the Manage Parameters window.
Reusability of the parameter in multiple steps or queries.
Makes the creation of custom functions straightforward and easy.
You can even use parameters in some of the arguments of the data connectors. For example, you could create a
parameter for your server name when connecting to your SQL Server database. Then you could use that parameter
inside the SQL Server database dialog.
If you change your server location, all you need to do is update the parameter for your server name and your
queries will be updated.
NOTE
To learn more about creating and using parameters, see Using parameters.
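A minimal sketch of that pattern, assuming a text parameter named ServerName and a placeholder database and table:

// ServerName is a Power Query parameter of type Text, for example "sql01.contoso.com"
let
    Source = Sql.Database(ServerName, "SalesDb"),
    Customers = Source{[Schema = "dbo", Item = "Customers"]}[Data]
in
    Customers

Changing the parameter value is then enough to repoint every query that uses it.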
You start by having a parameter that has a value that serves as an example.
From that parameter, you create a new query where you apply the transformations that you need. For this case, you
want to split the code PTY-CM1090-LAX into multiple components:
Origin = PTY
Destination = LAX
Airline = CM
FlightID = 1090
You can then transform that query into a function by right-clicking the query and selecting Create
Function. Finally, you can invoke your custom function in any of your queries or values, as shown in the
following image.
After a few more transformations, you can see that you've reached your desired output and leveraged the logic for
such a transformation from a custom function.
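A sketch of what such a custom function can look like follows. The code format follows the PTY-CM1090-LAX example above, but the exact steps that the Create Function option generates for you may differ.

// Splits a code like "PTY-CM1090-LAX" into Origin, Destination, Airline, and FlightID
(code as text) as record =>
let
    Parts = Text.Split(code, "-"),      // {"PTY", "CM1090", "LAX"}
    Origin = Parts{0},
    Destination = Parts{2},
    Airline = Text.Start(Parts{1}, 2),  // "CM"
    FlightID = Text.Middle(Parts{1}, 2) // "1090"
in
    [Origin = Origin, Destination = Destination, Airline = Airline, FlightID = FlightID]

You can then invoke the function against a column of codes, for example with Table.AddColumn or the Invoke Custom Function option.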
NOTE
To learn more about how to create and use custom functions in Power Query, see Custom Functions.
Power Query query folding
This article targets data modelers developing models in Power Pivot or Power BI Desktop. It describes what Power
Query query folding is, and why it is important in your data model designs. It also describes the data sources and
transformations that can achieve query folding, and how to determine that your Power Query queries can be
folded—whether fully or partially.
Query folding is the ability for a Power Query query to generate a single query statement to retrieve and
transform source data. The Power Query mashup engine strives to achieve query folding whenever possible for
reasons of efficiency.
Query folding is an important topic for data modeling for several reasons:
Import model tables: Data refresh will take place efficiently for Import model tables (Power Pivot or Power BI
Desktop), in terms of resource utilization and refresh duration.
DirectQuery and Dual storage mode tables: Each DirectQuery and Dual storage mode table (Power BI
only) must be based on a Power Query query that can be folded.
Incremental refresh: Incremental data refresh (Power BI only) will be efficient, in terms of resource utilization
and refresh duration. In fact, the Power BI Incremental Refresh configuration window will notify you of a
warning should it determine that query folding for the table cannot be achieved. If it cannot be achieved, the
objective of incremental refresh is defeated. The mashup engine would then be required to retrieve all source
rows, and then apply filters to determine incremental changes.
Query folding may occur for an entire Power Query query, or for a subset of its steps. When query folding cannot
be achieved—either partially or fully—the Power Query mashup engine must compensate by processing data
transformations itself. This process can involve retrieving source query results, which for large datasets is very
resource intensive and slow.
We recommend that you strive to achieve efficiency in your model designs by ensuring query folding occurs
whenever possible.
Date.Year([OrderDate])
Date.ToText([OrderDate], "yyyy")
To view the folded query, you select the View Native Query option. You are then presented with the native
query that Power Query will use to source data.
If the View Native Query option is not enabled (greyed out), this is evidence that not all query steps can be
folded. However, it could mean that a subset of steps can still be folded. Working backwards from the last step, you
can check each step to see if the View Native Query option is enabled. If so, you have learned where, in the
sequence of steps, query folding could no longer be achieved.
Next steps
For more information about Query Folding and related topics, check out the following resources:
Best practice guidance for query folding
Use composite models in Power BI Desktop
Incremental refresh in Power BI Premium
Using Table.View to Implement Query Folding
Behind the scenes of the Data Privacy Firewall
If you’ve used Power Query for any length of time, you’ve likely experienced it. There you are, querying away, when
you suddenly get an error that no amount of online searching, query tweaking, or keyboard bashing can remedy.
An error like:
Formula.Firewall: Query 'Query1' (step 'Source') references other queries or steps, so it may not directly
access a data source. Please rebuild this data combination.
Or maybe:
Formula.Firewall: Query 'Query1' (step 'Source') is accessing data sources that have privacy levels which
cannot be used together. Please rebuild this data combination.
These Formula.Firewall errors are the result of Power Query’s Data Privacy Firewall (aka the Firewall), which at
times may seem like it exists solely to frustrate data analysts the world over. Believe it or not, however, the Firewall
serves an important purpose. In this article, we’ll delve under the hood to better understand how it works. Armed
with greater understanding, you'll hopefully be able to better diagnose and fix Firewall errors in the future.
What is it?
The purpose of the Data Privacy Firewall is simple: it exists to prevent Power Query from unintentionally leaking
data between sources.
Why is this needed? I mean, you could certainly author some M that would pass a SQL value to an OData feed. But
this would be intentional data leakage. The mashup author would (or at least should) know they were doing this.
Why then the need for protection against unintentional data leakage?
The answer? Folding.
Folding?
Folding is a term that refers to converting expressions in M (such as filters, renames, joins, and so on) into
operations against a raw data source (such as SQL, OData, and so on). A huge part of Power Query’s power comes
from the fact that PQ can convert the operations a user performs via its user interface into complex SQL or other
backend data source languages, without the user having to know said languages. Users get the performance
benefit of native data source operations, with the ease of use of a UI where all data sources can be transformed
using a common set of commands.
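As a concrete illustration (the server, database, table, and column names are assumptions), a filter step written in M can fold into the source system's own query language instead of being evaluated locally:

let
    Source = Sql.Database("ServerName", "AdventureWorks"),
    Orders = Source{[Schema = "Sales", Item = "SalesOrderHeader"]}[Data],
    // This step can fold: the engine may translate it into a native WHERE clause,
    // roughly: SELECT ... FROM Sales.SalesOrderHeader WHERE OrderDate >= '2019-01-01'
    Recent = Table.SelectRows(Orders, each [OrderDate] >= #datetime(2019, 1, 1, 0, 0, 0))
in
    Recent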
As part of folding, PQ sometimes may determine that the most efficient way to execute a given mashup is to take
data from one source and pass it to another. For example, if you’re joining a small CSV file to a huge SQL table, you
probably don’t want PQ to read the CSV file, read the entire SQL table, and then join them together on your local
computer. You probably want PQ to inline the CSV data into a SQL statement and ask the SQL database to perform
the join.
This is how unintentional data leakage can happen.
Imagine if you were joining SQL data that included employee Social Security Numbers with the results of an
external OData feed, and you suddenly discovered that the Social Security Numbers from SQL were being sent to
the OData service. Bad news, right?
This is the kind of scenario the Firewall is intended to prevent.
How does it work?
The Firewall exists to prevent data from one source from being unintentionally sent to another source. Simple
enough.
So how does it accomplish this mission?
It does this by dividing your M queries into something called partitions, and then enforcing the following rule:
A partition may either access compatible data sources, or reference other partitions, but not both.
Simple…yet confusing. What’s a partition? What makes two data sources “compatible”? And why should the
Firewall care if a partition wants to access a data source and reference a partition?
Let’s break this down and look at the above rule one piece at a time.
What’s a partition?
At its most basic level, a partition is just a collection of one or more query steps. The most granular partition
possible (at least in the current implementation) is a single step. The largest partitions can sometimes encompass
multiple queries. (More on this later.)
If you’re not familiar with steps, you can view them on the right of the Power Query Editor window after selecting a
query, in the Applied Steps pane. Steps keep track of everything you’ve done to transform your data into its final
shape.
Partitions that reference other partitions
When a query is evaluated with the Firewall on, the Firewall divides the query and all its dependencies into
partitions (that is, groups of steps). Any time one partition references something in another partition, the Firewall
replaces the reference with a call to a special function called Value.Firewall. In other words, the Firewall doesn't
allow partitions to access each other randomly. All references are modified to go through the Firewall. Think of the
Firewall as a gatekeeper. A partition that references another partition must get the Firewall’s permission to do so,
and the Firewall controls whether or not the referenced data will be allowed into the partition.
This all may seem pretty abstract, so let’s look at an example.
Assume you have a query called Employees, which pulls some data from a SQL database. Assume you also have
another query (EmployeesReference), which simply references Employees.
These queries will end up divided into two partitions: one for the Employees query, and one for the
EmployeesReference query (which will reference the Employees partition). When evaluated with the Firewall on,
these queries will be rewritten like so:
shared Employees = let
Source = Sql.Database(…),
EmployeesTable = …
in
EmployeesTable;
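The rewritten EmployeesReference query then looks roughly like the following sketch, reconstructed from the description in the next paragraph (the exact generated form may differ):

shared EmployeesReference = let
    Source = Value.Firewall("Section1/Employees")
in
    Source;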
Notice that the simple reference to the Employees query has been replaced by a call to Value.Firewall, which is
provided the full name of the Employees query.
When EmployeesReference is evaluated, the call to Value.Firewall("Section1/Employees") is intercepted by the
Firewall, which now has a chance to control whether (and how) the requested data flows into the
EmployeesReference partition. It can do any number of things: deny the request, buffer the requested data (which
prevents any further folding to its original data source from occurring), and so on.
This is how the Firewall maintains control over the data flowing between partitions.
Partitions that directly access data sources
Let’s say you define a query Query1 with one step (note that this single-step query will correspond to one Firewall
partition), and that this single step accesses two data sources: a SQL database table and a CSV file. How does the
Firewall deal with this, since there’s no partition reference, and thus no call to Value.Firewall for it to intercept?
Let's review the rule stated earlier:
A partition may either access compatible data sources, or reference other partitions, but not both.
In order for your single-partition-but-two-data-sources query to be allowed to run, its two data sources must be
“compatible”. In other words, it needs to be okay for data to be shared between them. In terms of the Power Query
UI, this means the Privacy Levels of the SQL and CSV data sources need to both be Public, or both be
Organizational. If they are both marked Private, or one is marked Public and one is marked Organizational, or they
are marked using some other combination of Privacy Levels, then it's not safe for them to both be evaluated in the
same partition. Doing so would mean unsafe data leakage could occur (due to folding), and the Firewall would
have no way to prevent it.
What happens if you try to access incompatible data sources in the same partition?
Formula.Firewall: Query 'Query1' (step 'Source') is accessing data sources that have privacy levels which
cannot be used together. Please rebuild this data combination.
Hopefully you now better understand one of the error messages listed at the beginning of this article.
Note that this compatibility requirement only applies within a given partition. If a partition is referencing other
partitions, the data sources from the referenced partitions don't have to be compatible with one another. This is
because the Firewall can buffer the data, which will prevent any further folding against the original data source. The
data will be loaded into memory and treated as if it came from nowhere.
Why not do both?
Let’s say you define a query with one step (which will again correspond to one partition) that accesses two other
queries (that is, two other partitions). What if you wanted, in the same step, to also directly access a SQL database?
Why can’t a partition reference other partitions and directly access compatible data sources?
As you saw earlier, when one partition references another partition, the Firewall acts as the gatekeeper for all the
data flowing into the partition. To do so, it must be able to control what data is allowed in. If there are data sources
being accessed within the partition, as well as data flowing in from other partitions, it loses its ability to be the
gatekeeper, since the data flowing in could be leaked to one of the internally accessed data sources without it
knowing about it. Thus the Firewall prevents a partition that accesses other partitions from being allowed to
directly access any data sources.
So what happens if a partition tries to reference other partitions and also directly access data sources?
Formula.Firewall: Query 'Query1' (step 'Source') references other queries or steps, so it may not directly
access a data source. Please rebuild this data combination.
Now you hopefully better understand the other error message listed at the beginning of this article.
Partitions in-depth
As you can probably guess from the above information, how queries are partitioned ends up being incredibly
important. If you have some steps that are referencing other queries, and other steps that access data sources, you
now hopefully recognize that drawing the partition boundaries in certain places will cause Firewall errors, while
drawing them in other places will allow your query to run just fine.
So how exactly do queries get partitioned?
This section is probably the most important for understanding why you’re seeing Firewall errors, as well as
understanding how to resolve them (where possible).
Here’s a high-level summary of the partitioning logic.
Initial Partitioning
Creates a partition for each step in each query
Static Phase
This phase doesn’t depend on evaluation results. Instead, it relies on how the queries are structured.
Parameter Trimming
Trims parameter-esque partitions, that is, any one that:
Doesn’t reference any other partitions
Doesn’t contain any function invocations
Isn’t cyclic (that is, it doesn’t refer to itself)
Note that “removing” a partition effectively includes it in whatever other partitions reference it.
Trimming parameter partitions allows parameter references used within data source function calls
(for example, Web.Contents(myUrl) ) to work, instead of throwing “partition can’t reference data
sources and other steps” errors.
Grouping (Static)
Partitions are merged, while maintaining separation between:
Partitions in different queries
Partitions that reference other partitions vs. those that don’t
Dynamic Phase
This phase depends on evaluation results, including information about data sources accessed by various
partitions.
Trimming
Trims partitions that meet all the following requirements:
Doesn’t access any data sources
Doesn’t reference any partitions that access data sources
Isn’t cyclic
Grouping (Dynamic)
Now that unnecessary partitions have been trimmed, try to create Source partitions that are as
large as possible.
Merge each partition with its input partitions if each of its inputs:
Is part of the same query
Doesn’t reference any other partitions
Is only referenced by the current partition
Isn’t the result (that is, final step) of a query
Isn’t cyclic
// Fragment (only partially captured): the final step of one query in this example
in
  #"Changed Type";

// Fragment (only partially captured): navigation steps from the Employees query
Source = Sql.Databases(DbServer),
AdventureWorks = Source{[Name="AdventureWorks"]}[Data],
HumanResources_Employee = AdventureWorks{[Schema="HumanResources",Item="Employee"]}[Data],
...
in
  #"Expanded Contacts";
Now we perform the static grouping. This maintains separation between partitions in separate queries (note for
instance that the last two steps of Employees don’t get grouped with the steps of Contacts), as well as between
partitions that reference other partitions (such as the last two steps of Employees) and those that don’t (such as the
first three steps of Employees).
Now we enter the dynamic phase. In this phase, the above static partitions are evaluated. Partitions that don’t
access any data sources are trimmed. Partitions are then grouped to create source partitions that are as large as
possible. However, in this sample scenario, all the remaining partitions access data sources, and there isn’t any
further grouping that can be done. The partitions in our sample thus won’t change during this phase.
Let’s Pretend
For the sake of illustration, though, let’s look at what would happen if the Contacts query, instead of coming from a
text file, were hard-coded in M (perhaps via the Enter Data dialog).
In this case, the Contacts query would not access any data sources. Thus, it would get trimmed during the first part
of the dynamic phase.
With the Contacts partition removed, the last two steps of Employees would no longer reference any partitions
except the one containing the first three steps of Employees. Thus, the two partitions would be grouped.
The resulting partition would look like this.
That’s a wrap
While there's much more that could be said on this topic, this introductory article is already long enough. Hopefully
it’s given you a better understanding of the Firewall, and will help you to understand and fix Firewall errors when
you encounter them in the future.
Query Diagnostics
3/16/2020 • 9 minutes to read • Edit Online
Query Diagnostics is a powerful new feature that will allow you to determine what Power Query is doing during
authoring time. While we will be expanding on this feature in the future, including allowing you to use it during full
refreshes, at this time it allows you to understand what sort of queries you are emitting, what slowdowns you
might run into during authoring refresh, and what kind of background events are happening.
To use Query Diagnostics, go to the 'Tools' tab in the Power Query Editor ribbon.
By default, Query Diagnostics may require administrative rights to run (depending on IT policy). If you find yourself
unable to run Query Diagnostics, open the Power BI options page, go to the Diagnostics tab, and select 'Enable in Query
Editor (does not require running as admin)'. This prevents you from tracing diagnostics during a full refresh into
Power BI (as opposed to the Power Query editor), but still lets you use the feature when previewing, authoring, and so on.
Whenever you start diagnostics, Power Query will begin tracing any evaluations that you cause. The evaluation that
most users think of is when you press refresh, or when you retrieve data for the first time, but there are many
actions that can cause evaluations depending on the connector. For example, with the SQL connector, when you
retrieve a list of values to filter, that would kick off an evaluation as well—but it doesn’t associate with a user query,
and that’s represented in the diagnostics. Other system-generated queries might come from the Navigator or the
Get Data experience.
When you press 'Diagnose Step', Power Query runs a special evaluation of just the step you're looking at and
shows you the diagnostics for that step, without showing you the diagnostics for other steps in the query. This can
make it much easier to get a narrow view into a problem.
If you're recording all traces from 'Start Diagnostics', it's important that you press 'Stop Diagnostics'. This
allows the engine to collect the recorded traces and parse them into the proper output. Without this step you'll
lose your traces.
We currently present two views whenever you get diagnostics. The summarized view is aimed at giving you an
immediate insight into where time is being spent in your query. The detailed view is much deeper, line by line, and
will generally only be needed for serious diagnosing by power users.
Some capabilities, like the “Data Source Query” column, are currently available only on certain connectors. We will
be working to extend the breadth of this coverage in the future.
NOTE
Power Query may perform evaluations that you may not have directly triggered. Some of these evaluations are performed in
order to retrieve metadata so we can best optimize our queries or to provide a better user experience (such as retrieving the
list of distinct values within a column that are displayed in the Filter Rows experience), and others might be related to how a
connector handles parallel evaluations. At the same time, if you see in your query diagnostics repeated queries that you don't
believe make sense, feel free to reach out through normal support channels--your feedback is how we improve our product.
Diagnostics Schema
Id
When analyzing the results of a recording, it’s important to filter the recording session by Id, so that columns such
as Exclusive Duration % make sense.
Id is a composite identifier. It’s composed of two numbers—one before the dot, and one after. The first number will
be the same for all evaluations that resulted from a single user action. In other words, if I press refresh twice there’ll
be two different numbers leading the dot, one for each user activity taken. This will be sequential for a given
diagnostics recording.
The second number represents an evaluation by the engine. This will be sequential for the lifetime of the process
where the evaluation is queued. If you run multiple diagnostics recording sessions, you will see this number
continue to grow across the different sessions. To summarize, if I start recording, trigger one evaluation, and stop
recording, I’ll have some number of Ids in my diagnostics, but since I only took one action, they’ll all be 1.1, 1.2, 1.3,
and so on.
The combination of the activityId and the evaluationId, separated by the dot, provides a unique identifier for an
evaluation for a single recording session.
Query
The name of the Query in the left-hand pane of the Power Query editor.
Step
The name of the Step in the right-hand pane of the Power Query editor. Things like filter dropdowns will generally
associate with the step you’re filtering on, even if you’re not refreshing the step.
Category
The category of the operation.
Data Source Kind
This tells you what sort of data source you’re accessing, such as SQL or Oracle.
Operation
The actual operation being performed. This can include evaluator work, opening connections, sending queries to
the data source, and many more.
Start Time
The time that the operation started.
End Time
The time that the operation ended.
Exclusive Duration (%)
The Exclusive Duration column of an event is the amount of time the event was active. This contrasts with the
"duration" value that results from subtracting the values in an event's Start Time column and End Time column.
This "duration" value represents the total time that elapsed between when an event began and when it ended, which
may include times the event was in a suspended or inactive state while another event was consuming resources.
Exclusive duration % will add up to approximately 100% within a given evaluation, as represented by the “Id”
column. For example, if you filter on rows with Id 1.x, the Exclusive Duration percentages would sum to
approximately 100%. This will not be the case if you sum the Exclusive Duration % values of all rows in a given
diagnostic table.
Exclusive Duration
The absolute time, rather than %, of exclusive duration. The total duration (i.e. exclusive duration + time when the
event was inactive) of an evaluation can be calculated in one of two ways:
1. Find the operation called “Evaluation”. The difference between its End Time and Start Time is the total
duration of the event.
2. Subtract the minimum start time of all operations within an event from the maximum end time. Note that in
cases when the information collected for an event does not account for the total duration, an operation called
“Trace Gaps” will be generated to account for this time gap.
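As a sketch of the second approach, assuming the detailed diagnostics have been loaded into a query named Detailed with the Id, Start Time, and End Time columns described in this schema, the total duration per evaluation could be computed like this:

let
    TotalDurations = Table.Group(
        Detailed,
        {"Id"},
        {{"Total Duration", each List.Max([End Time]) - List.Min([Start Time]), type duration}})
in
    TotalDurations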
Resource
The resource you’re accessing for data. The exact format of this resource will depend on the data source.
Data Source Query
Power Query does something called ‘Folding’, which is the act of running as many parts of the query against the
back-end data source as possible. In Direct Query mode (over Power Query), where enabled, only transforms that
fold will run. In import mode, transforms that can’t fold will instead be run locally.
The Data Source Query column allows you to see the query or HTTP request/response sent against the back-end
data source. As you author your Query in the Editor, many Data Source Queries will be emitted. Some of these are
the actual final Data Source Query to render the preview, but others may be for Data Profiling, Filter dropdowns,
information on joins, retrieving metadata for schemas, and any number of other small queries.
In general, you shouldn’t be concerned by the number of Data Source Queries emitted unless there are specific
reasons to be concerned, and should focus instead on making sure the proper content is being retrieved. This
column might also help determine if the Power Query evaluation was fully folded.
Additional Info
There is a lot of information retrieved by our connectors. Much of it is ragged and doesn’t fit well into a standard
columnar hierarchy. This is put in a record in the Additional Info column. Information logged from custom
connectors will also appear here.
Row Count
The number of rows returned by a Data Source Query. Not enabled on all connectors.
Content Length
Content length returned by HTTP Requests, as commonly defined. This isn’t enabled in all connectors, and it won’t
be accurate for connectors that retrieve requests in chunks.
Is User Query
A Boolean value that indicates whether the query was authored by the user and is present in the left-hand pane, or
whether it was generated by some other user action. Other user actions can include things such as filter selection,
or using the Navigator in the Get Data experience.
Path
Path represents the relative route of the operation when viewed as part of an interval tree for all operations within
a single evaluation. At the top (root) of the tree there’s a single operation called “Evaluation” with path “0”. The start
time of this evaluation corresponds to the start of this evaluation as a whole. The end time of this evaluation shows
when the whole evaluation finished. This top-level operation has an exclusive duration of 0, as its only purpose is to
serve as the root of the tree. Further operations branch from the root. For example, an operation may have “0/1/5”
as a path. This would be understood as:
0: tree root
1: current operation’s parent
5: index of current operation
Operation “0/1/5” might have a child node, in which case the path will have the form “0/1/5/8”, with 8
representing the index of the child.
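As an illustration, here's a sketch that derives each operation's parent path from the Path column, assuming the detailed diagnostics are loaded as a query named Detailed; the new column name is hypothetical.

let
    WithParentPath = Table.AddColumn(
        Detailed,
        "Parent Path",
        each Text.BeforeDelimiter([Path], "/", {0, RelativePosition.FromEnd}),
        type text)
in
    WithParentPath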
Group ID
Two or more operations won't be combined into a group if doing so leads to detail loss. The grouping is designed to
approximate “commands” executed during the evaluation. In the detailed view, multiple operations share a
Group Id, corresponding to the groups that are aggregated in the Summary view.
As with most columns, the group id is only relevant within a specific evaluation, as filtered by the Id column.
Additional Reading
How to record diagnostics in various use cases
More about reading and visualizing your recorded traces
How to understand what query operations are folding using Query Diagnostics
Recording Query Diagnostics in Power BI
3/16/2020 • 6 minutes to read • Edit Online
When authoring in Power Query, the basic workflow is that you connect to a data source, apply some
transformations, potentially refresh your data in the Power Query editor, and then load it to the Power BI model.
Once it's in the Power BI model, you may refresh it from time to time in Power BI Desktop (if you're using Desktop
to view analytics), aside from any refreshes you do in the service.
While you may get a similar result at the end of an authoring workflow, refreshing in the editor, or refreshing in
Power BI proper, very different evaluations are run by the software for the different user experiences provided. It's
important to know what to expect when doing query diagnostics in these different workflows so you aren't
surprised by the very different diagnostic data.
To start Query Diagnostics, go to the 'Tools' tab in the Power Query Editor ribbon. You're presented here with a few
different options.
There are two primary options here, 'Diagnose Step' and 'Start Diagnostics' (paired with 'Stop Diagnostics'). The
former will give you information on a query up to a selected step, and is most useful for understanding what
operations are being performed locally or remotely in a query. The latter gives you more insight into a variety of
other cases, discussed below.
Connector Specifics
It's important to mention that there is no way to cover all the different permutations of what you'll see in Query
Diagnostics. There are lots of things that can change exactly what you see in results:
Connector
Transforms applied
System that you're running on
Network configuration
Advanced configuration choices
ODBC configuration
For the broadest coverage, this documentation will focus on Query Diagnostics of the Northwind Customers
table, both on SQL and OData. The OData notes use the public endpoint found at the OData.org website, while
you'll need to provide a SQL server for yourself. Many data sources will differ significantly from these, and will
have connector specific documentation added over time.
Once you connect and choose authentication, select the 'Customers' table from the OData service.
This will present you with the Customers table in the Power Query interface. Let's say that we want to know how
many Sales Representatives there are in different countries. First, right-click 'Sales Representative' under the
'Contact Title' column, mouse over 'Text Filters', and select 'Equals'.
Now, select 'Group By' from the Ribbon and do a grouping by 'Country', with your aggregate being a 'Count'.
This should present you with the same data you see below.
Finally, navigate back to the 'Tools' tab of the Ribbon and click 'Stop Diagnostics'. This will stop the tracing and
build your diagnostics file for you, and the summary and detailed tables will appear on the left-hand side.
If you trace an entire authoring session, you will generally expect to see something like a source query evaluation,
then evaluations related to the relevant navigator, then at least one query emitted for each step you apply (with
potentially more depending on the exact UX actions taken). In some connectors, parallel evaluations will happen
for performance reasons that will yield very similar sets of data.
Refresh Preview
When you have finished transforming your data, you have a sequence of steps in a query. When you press 'Refresh
Preview' or 'Refresh All' in the Power Query editor, you won't see just one step in your query diagnostics. The
reason for this is that refreshing in the Power Query Editor explicitly refreshes the query ending with the last step
applied, and then steps back through the applied steps, refreshing the query up to each earlier step, all the way back
to the source.
This means that if you have five steps in your query, including Source and Navigator, you will expect to see five
different evaluations in your diagnostics. The first one, chronologically, will often (but not always) take the longest.
This is due to two different reasons:
It may potentially cache input data that the queries run after it (representing earlier steps in the User Query) can
access faster locally.
It may have transforms applied to it that significantly truncate how much data has to be returned.
Note that 'Refresh All' refreshes all queries, so you'll need to filter to the ones you care about, as you might
expect.
Full Refresh
Query Diagnostics can be used to diagnose the so-called 'final query' that is emitted during the Refresh in Power
BI, rather than just the Power Query editor experience. To do this, you first need to load the data to the model once.
If you're planning to do this, be aware that pressing 'Close and Apply' will close the editor window (interrupting
tracing), so you either need to do it on the second refresh, or select the dropdown icon under 'Close and Apply' and
press 'Apply' instead.
Either way, make sure to press 'Start Diagnostics' in the Diagnostics section of the 'Tools' tab in the editor. Once
you've done this, refresh your model, or even just the table you care about.
Once it's done loading the data to the model, press 'Stop Diagnostics'.
You can expect to see some combination of metadata and data queries. Metadata calls grab what information they can
about the data source. Data retrieval is about accessing the data source, emitting the final built-up Data Source
Query with folded down operations, and then performing whatever evaluations are missing on top, locally.
It's important to note that just because you see a resource (database, web endpoint, etc.) or a data source query in
your diagnostics, it doesn't mean that it's necessarily performing network activity. Power Query may retrieve this
information from its cache. In future updates, we will indicate whether or not information is being retrieved from
the cache for easier diagnosis.
Diagnose Step
'Diagnose Step' is more useful for getting insight into what evaluations are happening up to a single step, which
can help you identify what performance is like up to that step, as well as what parts of your query are being
performed locally or remotely.
If you use 'Diagnose Step' on the query we built above, you'll find that it only returns 10 or so rows, and if you
look at the last row with a Data Source Query, you can get a pretty good idea of what our final emitted query to the
data source will be. In this case, we can see that Sales Representative was filtered remotely, but the grouping (by
process of elimination) happened locally.
If you instead start and stop diagnostics and refresh the same query, you get 40 rows because, as mentioned
above, Power Query is getting information on every step, not just the final step. This makes it harder when you're
just trying to get insight into one particular part of your query.
Additional Reading
An introduction to the feature
More about reading and visualizing your recorded traces
How to understand what query operations are folding using Query Diagnostics
Visualizing and Interpreting Query Diagnostics in
Power BI
3/16/2020 • 4 minutes to read • Edit Online
Introduction
Once you've recorded the diagnostics you want to use, the next step is being able to understand what they say.
It's helpful to have a good understanding of what exactly each column in the query diagnostics schema means,
which we're not going to repeat in this short tutorial. There's a full writeup of that here.
In general, when building visualizations, it's better to use the full detailed table, because regardless of how many
rows it has, what you're probably looking at is some kind of depiction of how the time spent in different resources
adds up, or what the native query emitted was.
As mentioned in our article on recording the diagnostics, I'm working with the OData and SQL traces for the same
table (or very nearly so)--the Customers table from Northwind. In particular, I'm going to focus on a common ask
from our customers, as well as one of the easiest sets of traces to interpret: a full refresh of the data model.
If we perform all the same operations and build similar visualizations, but with the SQL traces instead of the
ODATA ones, we can see how the two data sources compare!
If we click the Data Source table, as with the OData diagnostics, we can see that the first evaluation (2.3 in this
image) emits metadata queries, with the second evaluation actually retrieving the data we care about. Because we're
retrieving very little data in this case, the data pulled back takes very little time (less than a tenth of a second for the
entire second evaluation to happen, with less than a twentieth of a second for data retrieval itself), but that won't
be true in all cases.
As above, we can click the 'Data Source' category on the legend to see the emitted queries.
Digging into the data
Looking at paths
When you're looking at this, if it seems like time spent is strange--for example, on the OData query you might see
that there's a Data Source Query with the following value:
https://services.odata.org/V4/Northwind/Northwind.svc/Customers?$filter=ContactTitle%20eq%20%27Sales%20Representative%27&$select=CustomerID%2CCountry HTTP/1.1
Content-Type: application/json;odata.metadata=minimal;q=1.0,application/json;odata=minimalmetadata;q=0.9,application/atomsvc+xml;q=0.8,application/atom+xml;q=0.8,application/xml;q=0.7,text/plain;q=0.7
<Content placeholder>

Response:
Content-Type: application/json;odata.metadata=minimal;q=1.0,application/json;odata=minimalmetadata;q=0.9,application/atomsvc+xml;q=0.8,application/atom+xml;q=0.8,application/xml;q=0.7,text/plain;q=0.7
Content-Length: 435
<Content placeholder>
This Data Source Query is associated with an operation that only takes up, say, 1% of the Exclusive Duration.
Meanwhile, there's a very similar one:
Request:
GET https://services.odata.org/V4/Northwind/Northwind.svc/Customers?$filter=ContactTitle eq 'Sales Representative'&$select=CustomerID%2CCountry HTTP/1.1

Response:
https://services.odata.org/V4/Northwind/Northwind.svc/Customers?$filter=ContactTitle eq 'Sales Representative'&$select=CustomerID%2CCountry
HTTP/1.1 200 OK
This Data Source Query is associated with an operation that takes up nearly 75% of the Exclusive Duration. If you
turn on the Path, you discover the latter is actually a child of the former. This means that the first query basically
added very little time on its own, with the actual data retrieval being tracked by the 'inner' query.
These are extreme values, but they're within the bounds of what might be seen.
Understanding folding with Query Diagnostics
3/16/2020 • 2 minutes to read • Edit Online
One of the most common reasons to use Query Diagnostics is to have a better understanding of what operations
were 'pushed down' by Power Query to be performed by the back-end data source, which is also known as
'folding'. If we want to see what folded, we can look at what is the 'most specific' query, or queries, that get sent to
the back-end data source. We can look at this for both ODATA and SQL.
The operation that was described in the article on Recording Diagnostics does essentially four things:
Connects to the data source
Grabs the customer table
Filters the Contact Title column to 'Sales Representative'
Groups by 'Country'
Since the ODATA connector doesn't currently support folding COUNT() to the endpoint, and since this endpoint is
somewhat limited in its operations as well, we don't expect that final step to fold. On the other hand, filtering is
relatively trivial. This is exactly what we see if we look at the most specific query emitted above:
Request:
GET https://services.odata.org/V4/Northwind/Northwind.svc/Customers?$filter=ContactTitle eq 'Sales
Representative'&$select=CustomerID%2CCountry HTTP/1.1
Response:
https://services.odata.org/V4/Northwind/Northwind.svc/Customers?$filter=ContactTitle eq 'Sales
Representative'&$select=CustomerID%2CCountry
HTTP/1.1 200 OK
We can see we're filtering the table for ContactTitle equaling 'Sales Representative', and we're only returning two
columns--CustomerID and Country. Country, of course, is needed for the grouping operation, which, since it isn't
being performed by the OData endpoint, must be performed locally. We can conclude what folds and doesn't fold
here.
Similarly, if we look at the specific and final query emitted in the SQL diagnostics, we see something slightly
different:
select count(1) as [Count]
from
(
select [_].[Country]
from [dbo].[Customers] as [_]
where [_].[ContactTitle] = 'Sales Representative' and [_].[ContactTitle] is not null
) as [rows]
group by [Country]
Here, we can see that Power Query creates a subselection where ContactTitle is filtered to 'Sales Representative',
then groups by Country on this subselection. All of our operations folded.
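For reference, applied steps along the lines of the following sketch would fold into SQL of that shape. The server and database names are placeholders and the step names are assumptions; only the filter and group operations come from the walkthrough above.

let
    Source = Sql.Database("myserver.contoso.com", "Northwind"),
    Customers = Source{[Schema = "dbo", Item = "Customers"]}[Data],
    #"Filtered Rows" = Table.SelectRows(Customers, each [ContactTitle] = "Sales Representative"),
    #"Grouped Rows" = Table.Group(#"Filtered Rows", {"Country"}, {{"Count", each Table.RowCount(_), Int64.Type}})
in
    #"Grouped Rows"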
Using Query Diagnostics, we can examine what kind of operations folded--in the future, we hope to make this
capability easier to use.
Using parameters
10/30/2020 • 5 minutes to read • Edit Online
A parameter serves as a way to easily store and manage a value that can be reused.
Parameters give you the flexibility to dynamically change the output of your queries depending on their value, and
can be used for:
Changing the argument values for particular transforms and data source functions
Inputs in custom functions
You can easily manage your parameters inside the Manage Parameters window. You can get to the Manage
Parameters window by selecting the Manage Parameters option from the Manage Parameters dropdown menu in the
Home tab.
Creating a parameter
Power Query provides two easy ways to create parameters:
From an existing query —You can easily right-click a query whose output is a non-structured value such
as, but not limited to, a date, text, or number, and select Convert to Parameter.
NOTE
You can also convert a parameter to a query by right-clicking the parameter and then selecting Convert To Query,
as shown in the following image.
Using the Manage Parameters window —You can select the New Parameter option from the
dropdown menu of Manage Parameters in the Home tab, or you can launch the Manage Parameters
window and select the New button at the top to create a parameter. Fill in this form, and then select
OK to create a new parameter.
After creating the parameter, you can always go back to the Manage Parameters window to modify any of your
parameters at any moment.
Parameter properties
A parameter stores a value that can be used for transformations in Power Query. Apart from the name of the
parameter and the value that it stores, it also has other properties that provide metadata to it. The properties of a
parameter are as follows.
Name —Provide a name for this parameter that lets you easily recognize and differentiate it from other
parameters you might create.
Description —The description is displayed next to the parameter name when parameter information is
displayed, helping users who are specifying the parameter value to understand its purpose and its
semantics.
Required —The checkbox indicates whether subsequent users must provide a value for the
parameter.
Type —We recommend that you always set up the data type of your parameter. You can learn more about
the importance of data types from the Data types article.
Suggested Values —Provides the user with suggestions to select a value for the Current Value from the
available options:
Any value —The current value can be any manually entered value.
List of values —Provides you with a simple table-like experience so you can define a list of
suggested values that you can later select from for the Current Value . When this option is selected,
a new option called Default Value will be made available. From here you can select what should be
the default value for this parameter, which will be the default value shown to the user when
referencing the parameter. This value isn't the same as the Current Value , which is the value that's
stored inside the parameter and can be passed as an argument in transformations. Using the List of
values will enable a drop-down menu to be displayed in the Default Value and Current Value
fields, where you can pick one of the values from the suggested list of values.
NOTE
You can still manually type any value that you want to pass to the parameter. The list of suggested values
only serves as simple suggestions.
Query —Uses a list query (a query whose output is a list) to provide the list of suggested values that
you can later select for the Current Value .
NOTE
This feature is currently not available in Power Query Online.
For example purposes, you can see the following Orders query with the fields OrderID , Units , and Margin .
You can create a new parameter with the name Minimum Margin with a Decimal Number type and a Current
Value of 0.2, as shown in the next image.
You can go to the Orders query, and in the Margin field select the Greater Than filter option.
In the Filter Rows window, you'll see a button with a data type for the field selected. You can select the
Parameter option from the dropdown menu for this button. From the field selection right next to the data type
button, you can select the parameter that you want to pass to this argument. In this case, it's the Minimum
Margin parameter.
After you select OK , you can see that your table has been filtered using the Current Value for your parameter.
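Behind the scenes, this roughly corresponds to the following M. The meta record is approximately what Manage Parameters generates for the parameter, and the name of the preceding step in the Orders query (#"Changed Type") is an assumption.

// Parameter query "Minimum Margin"
0.2 meta [IsParameterQuery = true, Type = "Number", IsParameterQueryRequired = true]

// Filter step added to the Orders query
#"Filtered Rows" = Table.SelectRows(#"Changed Type", each [Margin] > #"Minimum Margin")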
If you modify the Current Value of your Minimum Margin parameter to be 0.3, you can immediately see how
your orders query gets updated and shows you only the rows where the Margin is above 30%.
TIP
Multiple transformations in Power Query offer this experience where you can select your parameter from a dropdown. So we
recommend that you always look for it and take advantage of what parameters can offer you.
You can name this new function however you want. For demonstration purposes, the name of this new function
will be MyFunction . After you select OK , a new group will be created in the Queries pane using the name of your
new function. In this group, you'll find the parameters being used for the function, the query that was used to
create the function, and the function itself.
You can test this new function by entering a value, such as 0.4, in the field underneath the Minimum Margin
label. Then select the Invoke button. This will create a new query with the name Invoked Function , effectively
passing the value 0.4 to be used as the argument for the function and giving you only the rows where the margin
is above 40%.
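A minimal sketch of what these queries amount to in M is shown below. The function body is simplified (the real one is generated from the Orders query), and the names follow the example above.

// Hypothetical simplified body of the MyFunction query
(#"Minimum Margin" as number) as table =>
    Table.SelectRows(Orders, each [Margin] > #"Minimum Margin")

// What the Invoke button effectively creates as the Invoked Function query
MyFunction(0.4)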
You can learn more about how to create custom functions from the article Creating a custom function.
Error handling
10/30/2020 • 4 minutes to read • Edit Online
Similar to how Excel and the DAX language have an IFERROR function, Power Query has its own syntax to test and
catch errors.
As mentioned in the article on dealing with errors in Power Query, errors can appear either at the step or cell level.
This article will focus on how you can catch and manage errors based on our own specific logic.
NOTE
To demonstrate this concept, this article will use an Excel Workbook as its data source. The concepts showcased here apply to
all values in Power Query and not only the ones coming from an Excel Workbook.
This table from an Excel Workbook has Excel errors such as #NULL! , #REF! , and #DIV/0! in the Standard Rate
column. When you import this table into the Power Query Editor, the following image shows how it will look.
Notice how the errors from the Excel workbook are shown with the [Error] value in each of the cells.
In this case, the goal is to create a new Final Rate column that will use the values from the Standard Rate
column. If there are any errors, then it will use the value from the corresponding Special Rate column.
Add custom column with try and otherwise syntax
To create a new custom column, go to the Add column menu and select Custom column . In the Custom
column window, enter the formula try [Standard Rate] otherwise [Special Rate] . Name this new column Final
Rate .
The formula above will try to evaluate the Standard Rate column and will output its value if no errors are found. If
errors are found in the Standard Rate column, then the output will be the value defined after the otherwise
statement, which in this case is the Special Rate column.
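The custom column step that the editor generates looks roughly like the following sketch; the name of the preceding step (#"Changed Type") and the ascribed column type are assumptions.

#"Added Custom" = Table.AddColumn(
    #"Changed Type",
    "Final Rate",
    each try [Standard Rate] otherwise [Special Rate],
    type number)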
After adding the correct data types to all of the columns in the table, the following image shows how the final table
looks.
NOTE
The sole purpose of excluding the #REF! error is for demonstration purposes. With the concepts showcased in this article,
you can target any error reasons, messages, or details of your choice.
When you select any of the whitespace next to the error value, you get the details pane at the bottom of the screen.
The details pane contains both the error reason, DataFormat.Error , and the error message,
Invalid cell value '#REF!' :
You can only select one cell at a time, so you can effectively only see the error components of one error value at a
time. This is where you'll create a new custom column and use the try expression.
Add custom column with try syntax
To create a new custom column, go to the Add column menu and select Custom column . In the Custom
column window, enter the formula try [Standard Rate] . Name this new column All Errors .
The try expression converts values and errors into a record value that indicates whether the try expression
handled an error or not, as well as the proper value or the error record.
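For example, evaluating try against a value that converts cleanly and against one that raises an error produces records shaped like the comments below (the exact error reason and message depend on the error).

let
    Success = try Number.FromText("5"),
    // Success = [HasError = false, Value = 5]
    Failure = try Number.FromText("abc")
    // Failure = [HasError = true, Error = [Reason = "...", Message = "...", Detail = ...]]
in
    {Success, Failure}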
You can expand this newly created column with record values and look at the available fields to be expanded by
selecting the icon next to the column header.
More resources
Understanding and working with errors in Power Query
Add a Custom column in Power Query
Add a Conditional column in Power Query
Import data from a database using native database
query
10/30/2020 • 4 minutes to read • Edit Online
Power Query gives you the flexibility to import data from a wide variety of databases that it supports. It can run
native database queries, which can save you the time it takes to build queries using the Power Query interface. This
feature is especially useful for using complex queries that already exist—and that you might not want to or know
how to rebuild using the Power Query interface.
NOTE
One intent of native database queries is to be non-side effecting. However, Power Query does not guarantee that the query
will not affect the database. If you run a native database query written by another user, you will be prompted to ensure that
you're aware of the queries that will be evaluated with your credentials. For more information, see Native database query
security.
Power Query enables you to specify your native database query in a text box under Advanced options when
connecting to a database. In the example below, you'll import data from a SQL Server database using a native
database query entered in the SQL statement text box. The procedure is similar in all other databases with native
database query that Power Query supports.
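Behind the scenes, specifying a native query this way produces a call along the lines of the following sketch, using the Query option of Sql.Database. The server, database, and query text are placeholders.

let
    Source = Sql.Database(
        "myserver.contoso.com",          // placeholder server
        "AdventureWorks",                // placeholder database
        [Query = "SELECT TOP (10) * FROM Sales.SalesOrderHeader"])
in
    Source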
1. Connect to a SQL Server database using Power Query. Select the SQL Server database option in the
connector selection.
2. In the SQL Server database popup window:
a. Specify the Server and Database where you want to import data from using native database query.
b. Under Advanced options , select the SQL statement field and paste or enter your native database
query, then select OK .
3. If this is the first time you're connecting to this server, you'll see a prompt to select the authentication mode
to connect to the database. Select an appropriate authentication mode, and continue.
NOTE
If you don't have access to the data source (both Server and Database), you'll see a prompt to request access to the
server and database (if access-request information is specified in Power BI for the data source).
4. If the connection is established, the result data is returned in the Power Query Editor.
Shape the data as you prefer, then select Apply & Close to save the changes and import the data.
If you see this message, select Edit Permission. This selection will open the Native Database Query dialog box.
You'll be given an opportunity to either run the native database query, or cancel the query.
By default, if you run a native database query outside of the connector dialogs, you'll be prompted each time you
run a different query text to ensure that the query text that will be executed is approved by you.
NOTE
Native database queries that you insert in your get data operation won't ask you whether you want to run the query or not.
They'll just run.
You can turn off the native database query security messages if the native database query is run in either Power BI
Desktop or Excel. To turn off the security messages:
1. If you're using Power BI Desktop, under the File tab, select Options and settings > Options .
If you're using Excel, under the Data tab, select Get Data > Query Options.
2. Under Global settings, select Security .
3. Clear Require user approval for new native database queries .
4. Select OK .
You can also revoke the approval of any native database queries that you've previously approved for a given data
source in either Power BI Desktop or Excel. To revoke the approval:
1. If you're using Power BI Desktop, under the File tab, select Options and settings > Data source
settings .
If you're using Excel, under the Data tab, select Get Data > Data Source Settings .
2. In the Data source settings dialog box, select Global permissions . Then select the data source
containing the native database queries whose approval you want to revoke.
3. Select Edit permissions .
4. In the Edit permissions dialog box, under Native Database Queries , select Revoke Approvals .
Create Power Platform dataflows from queries in
Microsoft Excel (Preview)
10/30/2020 • 2 minutes to read • Edit Online
NOTE
The preview feature for creating Power Query templates from queries is only available to Office Insiders. For more
information on the Office Insider program, see Office Insider.
Overview
Working with large datasets or long-running queries can be cumbersome: every manual data refresh in Excel takes
resources from your computer, and you have to wait until the computation is done to get the latest data. Moving
these data operations into a Power Platform dataflow is an effective way to free up your computer's resources and
to have the latest data easily available for you to consume in Excel.
It only takes two quick steps to do this:
1. Exporting queries in Excel to a Power Query template
2. Creating a Power Platform dataflow from the Power Query template
When exporting, the template requires basic information such as a name and a description before it can be saved
locally on your computer.
Creating a Power Platform dataflow from the Power Query template
1. Sign in to Power Apps.
2. In the left navigation pane, select Data > Dataflows .
3. From the toolbar, select New dataflow > Import template.
4. Select the Power Query template you created earlier. The dataflow name will prepopulate with the template
name provided. Once you're done with the dataflow creation screen, select Next to see your queries from
Excel in the query editor.
5. From this point, go through the normal dataflow creation and configuration process so you can further
transform your data, set refresh schedules on the dataflow, and perform any other dataflow operation possible. For
more information on how to configure and create Power Platform dataflows, see Create and use dataflows.
See also
Create and use dataflows in Power Apps
Optimize Power Query when expanding table
columns
10/30/2020 • 3 minutes to read • Edit Online
The simplicity and ease of use that allows Power BI users to quickly gather data and generate interesting and
powerful reports to make intelligent business decisions also allows users to easily generate poorly performing
queries. This often occurs when there are two tables that are related in the way a foreign key relates SQL tables or
SharePoint lists. (For the record, this issue isn't specific to SQL or SharePoint, and occurs in many backend data
extraction scenarios, especially where schema is fluid and customizable.) There's also nothing inherently wrong
with storing data in separate tables that share a common key—in fact this is a fundamental tenet of database
design and normalization. But it does imply a better way to expand the relationship.
Consider the following example of a SharePoint customer list.
When you expand the record, you see the fields joined from the secondary table.
When expanding related rows from one table to another, the default behavior of Power BI is to generate a call to
Table.ExpandTableColumn . You can see this in the generated formula field. Unfortunately, this method generates an
individual call to the second table for every row in the first table.
This increases the number of HTTP calls by one for each row in the primary list. This may not seem like a lot in the
above example of five or six rows, but in production systems where SharePoint lists reach hundreds of thousands
of rows, this can cause a significant experience degradation.
When queries reach this bottleneck, the best mitigation is to avoid the call-per-row behavior by using a classic
table join. This ensures that there will be only one call to retrieve the second table, and the rest of the expansion
can occur in memory using the common key between the two tables. The performance difference can be massive
in some cases.
First, start with the original table, noting the column you want to expand, and ensuring you have the ID of the item
so that you can match it. Typically the foreign key is named similarly to the display name of the column, with Id
appended. In this example, it's LocationId.
Second, load the secondary table, making sure to include the Id , which is the foreign key. Right-click on the Queries
panel to create a new query.
Finally, join the two tables using the respective column names that match. You can typically find this field by first
expanding the column, then looking for the matching columns in the preview.
In this example, you can see that LocationId in the primary list matches Id in the secondary list. The UI renames
this to Location.Id to make the column name unique. Now let's use this information to merge the tables.
By right-clicking on the query panel and selecting New Query > Combine > Merge Queries as New, you see a
friendly UI to help you combine these two queries.
Select each table from the drop-down to see a preview of the query.
Once you've selected both tables, select the column that joins the tables logically (in this example, it's LocationId
from the primary table and Id from the secondary table). The dialog will instruct you how many of the rows match
using that foreign key. You'll likely want to use the default join kind (left outer) for this kind of data.
Select OK and you'll see a new query, which is the result of the join. Expanding the record now doesn't imply
additional calls to the backend.
Refreshing this data will result in only two calls to SharePoint—one for the primary list, and one for the secondary
list. The join will be performed in memory, significantly reducing the number of calls to SharePoint.
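In M terms, the merged query ends up looking roughly like the sketch below. The query and column names are placeholders mirroring the example above (PrimaryList, SecondaryList, and the secondary columns being expanded are assumptions).

let
    Merged = Table.NestedJoin(
        PrimaryList, {"LocationId"},
        SecondaryList, {"Id"},
        "Location",
        JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(
        Merged,
        "Location",
        {"Id", "Title"},
        {"Location.Id", "Location.Title"})
in
    Expanded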
This approach can be used for any two tables in Power Query that have a matching foreign key.
NOTE
SharePoint user lists and taxonomy are also accessible as tables, and can be joined in exactly the way described above,
provided the user has adequate privileges to access these lists.
Connectors in Power Query
10/30/2020 • 3 minutes to read • Edit Online
The following table contains a list of all the connectors currently available for Power Query. For those connectors
that have a reference page in this document, a link is provided under the connector icon and name.
A checkmark indicates the connector is currently supported in the listed service; an X indicates that the connector
is not currently supported in the listed service.
A|B|C|D|E|F|G|H|I|J|K|L|M|N|O|P|Q|R|S|T|U|V|W|X|Y|Z
CONNECTOR | EXCEL | POWER BI (DATASETS) | POWER BI (DATAFLOWS) | POWER APPS (DATAFLOWS) | CUSTOMER INSIGHTS (DATAFLOWS) | ANALYSIS SERVICES
Access
Database
By Microsoft
Active
Directory
By Microsoft
Adobe
Analytics
By Microsoft
Amazon
Redshift
By Microsoft
appFigures
(Beta)
By Microsoft
Asana
By Asana
AtScale
cubes
(Beta)
By Microsoft
Azure
Analysis
Services
database
By Microsoft
Azure Blob
Storage
By Microsoft
Azure
CosmosDB
(Beta)
By Microsoft
Azure Cost
Management
By Microsoft
Azure Data
Explorer
(Beta)
By Microsoft
Azure Data
Lake
Storage
Gen1
By Microsoft
Azure Data
Lake
Storage
Gen2
(Beta)
By Microsoft
Azure
DevOps
(Beta)
By Microsoft
Azure
DevOps
Server
(Beta)
By Microsoft
Azure
HDInsight
(HDFS)
By Microsoft
Azure
HDInsight
Spark
By Microsoft
Azure SQL
Data
Warehouse
By Microsoft
Azure SQL
database
By Microsoft
Azure Table
Storage
By Microsoft
Azure Time
Series
Insights
(Beta)
By Microsoft
BI
Connector
By Guidanz
BI360
By Solver
Global
Cognite
Data
Fusion
(Beta)
By Cognite
Common
Data
Service
By Microsoft
Data.World
-
Get Dataset
(Beta)
By Microsoft
Data
Virtuality
(Beta)
By Data
Virtuality
Denodo
By Denodo
Dremio
By Dremio
Dynamics
365
(online)
By Microsoft
Dynamics
365
Business
Central
By Microsoft
Dynamics
365
Business
Central
(on-
premises)
By Microsoft
Dynamics
365
Customer
Insights
(Beta)
By Microsoft
Dynamics
NAV
By Microsoft
Emigo Data
Source
By Sagra
Entersoft
Business
Suite
(Beta)
By Entersoft
Essbase
By Microsoft
Exasol
By Exasol
Excel
By Microsoft
Facebook
By Microsoft
FactSet
Analytics
(Beta)
By FactSet
FHIR
By Microsoft
Folder
By Microsoft
Github
(Beta)
By Microsoft
Google
Analytics
By Microsoft
Google
BigQuery
By Microsoft
Hadoop File
(HDFS)
By Microsoft
HDInsight
Interactive
Query
By Microsoft
Hive LLAP
(Beta)
By Microsoft
IBM DB2
database
By Microsoft
IBM
Informix
database
(Beta)
By Microsoft
IBM Netezza
By Microsoft
Impala
By Microsoft
Indexima
(Beta)
By Indexima
Industrial
App Store
By Intelligent
Plant
Information
Grid (Beta)
By Luminis
InterSystems
IRIS (Beta)
By
Intersystems
Intune Data
Warehouse
(Beta)
By Microsoft
Jamf Pro
(Beta)
By Jamf
Jethro
(Beta)
By JethroData
JSON
By Microsoft
Kyligence
By Kyligence
Linkar PICK
Style/MultiValue
Databases
(Beta)
By Kosday
Solutions
LinkedIn
Sales
Navigator
(Beta)
By Microsoft
Marketo
(Beta)
By Microsoft
MarkLogic
(Beta)
By MarkLogic
Microsoft
Azure
Consumption Insights
(Beta)
By Microsoft
Microsoft
Exchange
By Microsoft
Microsoft
Exchange
Online
By Microsoft
Microsoft
Graph
Security
(Beta)
By Microsoft
MicroStrategy
for Power BI
By
MicroStrategy
Mixpanel
(Beta)
By Microsoft
MySQL
database
By Microsoft
OData Feed
By Microsoft
ODBC
By Microsoft
OLE DB
By Microsoft
Oracle
database
By Microsoft
Parquet
By Microsoft
Palantir
Foundry
By Palantir
Paxata
By Paxata
PDF
By Microsoft
Planview
Enterprise
One - CTM
(Beta)
By Planview
Planview
Enterprise
One - PRM
(Beta)
By Planview
PostgreSQL
database
By Microsoft
Power BI
dataflows
(Beta)
By Microsoft
Power BI
datasets
By Microsoft
Power
Platform
dataflows
By Microsoft
Product
Insights
(Beta)
By Microsoft
Projectplace
for Power BI
(Beta)
By Planview
Python
Script
By Microsoft
QubolePresto Beta
By Qubole
Quickbooks
Online
(Beta)
By Microsoft
Quick Base
By Quick Base
R Script
By Microsoft
Roamler
(Beta)
By Roamler
Salesforce
Objects
By Microsoft
Salesforce
Reports
By Microsoft
SAP
Business
Warehouse
Application
Server
By Microsoft
SAP
Business
Warehouse
Message
Server
By Microsoft
SAP HANA
database
By Microsoft
SharePoint
Folder
By Microsoft
SharePoint
list
By Microsoft
SharePoint
Online
List
By Microsoft
Shortcuts
Business
Insights
(Beta)
By Shortcuts
SiteImprove
By
SiteImprove
Smartsheet
By Microsoft
Snowflake
By Microsoft
Solver
By BI360
Spark
By Microsoft
SparkPost
(Beta)
By Microsoft
SurveyMonkey (Beta)
By SurveyMonkey
SweetIQ
(Beta)
By Microsoft
Sybase
Database
By Microsoft
TeamDesk
(Beta)
By ForeSoft
Tenforce
(Smart)List
By Tenforce
Teradata
database
By Microsoft
Text/CSV
By Microsoft
TIBCO(R)
Data
Virtualization
(Beta)
By TIBCO
Twilio (Beta)
By Microsoft
Vena (Beta)
By Vena
Vertica
By Microsoft
Vessel
Insights
(Beta)
By Kongsberg
Web
By Microsoft
Webtrends
Analytics
(Beta)
By Microsoft
Witivio
(Beta)
By Witivio
Workforce
Dimensions
(Beta)
By Kronos
Workplace
Analytics
(Beta)
By Microsoft
XML
By Microsoft
Zendesk
(Beta)
By Microsoft
Zoho
Creator
(Beta)
By Zoho
Zucchetti
HR
Infinity
(Beta)
By Zucchetti
Next steps
Power BI data sources (datasets)
Connect to data sources for Power BI dataflows
Available data sources (Dynamics 365 Customer Insights)
Data sources supported in Azure Analysis Services
Adobe Analytics
10/30/2020 • 3 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop
Authentication Types Supported: Organizational account
Function Reference Documentation: AdobeAnalytics.Cubes
Prerequisites
Before you can sign in to Adobe Analytics, you must have an Adobe Analytics account (username/password).
Capabilities Supported
Import
4. In the Adobe Analytics window that appears, provide your credentials to sign in to your Adobe Analytics
account. You can either supply a username (which is usually an email address), or select Continue with
Google or Continue with Facebook .
If you entered an email address, select Continue .
5. Enter your Adobe Analytics password and select Continue .
6. Once you've successfully signed in, select Connect .
Once the connection is established, you can preview and select multiple dimensions and measures within the
Navigator dialog box to create a single tabular output.
You can also provide any optional input parameters required for the selected items. For more information about
these parameters, see Optional input parameters.
You can Load the selected table, which brings the entire table into Power BI Desktop, or you can select Transform
Data to edit the query, which opens Power Query Editor. You can then filter and refine the set of data you want to
use, and then load that refined set of data into Power BI Desktop.
Top—filter the data based on the top items for the dimension. You can enter a value in the Top text box, or
select the ellipsis next to the text box to select some default values. By default, all items are selected.
Dimension—filter the data based on the selected dimension. By default, all dimensions are selected. Custom
Adobe dimension filters are not currently supported in the Power Query user interface, but can be defined
by hand as M parameters in the query. For more information, see Using Query Parameters in Power BI
Desktop.
Next steps
You may also find the following Adobe Analytics information useful:
Adobe Analytics 1.4 APIs
Adobe Analytics Reporting API
Metrics
Elements
Segments
GetReportSuites
Adobe Analytics support
Access database
10/30/2020 • 2 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Anonymous, Windows, Basic, Organizational Account
Function Reference Documentation: Access.Database
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
If you're connecting to an Access database from Power Query Online, the system that contains the on-premises
data gateway must have the 64-bit version of the Access Database Engine 2010 OLEDB provider installed.
Also, if you are loading an Access database to Power BI Desktop, the versions of the Access Database Engine 2010
OLEDB provider and Power BI Desktop on that machine must match (that is, either 32-bit or 64-bit). For more
information, see Import Access database to Power BI Desktop.
Capabilities Supported
Import
NOTE
You must select an on-premises data gateway for this connector, whether the Access database is on your local
network or on a web site.
d. Select the type of credentials for the connection to the Access database in Authentication kind .
e. Enter your credentials.
f. Select Next to continue.
4. In Navigator , select the data you require, then either load or transform the data.
Troubleshooting
Connect to local file from Power Query Online
When you attempt to connect to a local Access database using Power Query Online, you must select an on-
premises data gateway, even if your Access database is online.
On-premises data gateway error
A 64-bit version of the Access Database Engine 2010 OLEDB provider must be installed on your on-premises data
gateway machine to be able to load Access database files. If you already have a 64-bit version of Microsoft Office
installed on the same machine as the gateway, the Access Database Engine 2010 OLEDB provider is already
installed. If not, you can download the driver from the following location:
https://www.microsoft.com/download/details.aspx?id=13255
Import Access database to Power BI Desktop
In some cases, you may get a The 'Microsoft.ACE.OLEDB.12.0' provider is not registered error when attempting to
import an Access database file to Power BI Desktop. This error may be caused by using mismatched bit versions of
Power BI Desktop and the Access Database Engine 2010 OLEDB provider. For more information about how you can
fix this mismatch, see Troubleshoot importing Access and Excel .xls files in Power BI Desktop.
Azure SQL database
10/30/2020 • 2 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Windows (Power BI Desktop, Excel, online service with gateway), Database (Power
BI Desktop, Excel), Microsoft Account (all), Basic (online service)
Function Reference Documentation: Sql.Database, Sql.Databases
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
By default, Power BI installs an OLE DB driver for Azure SQL database. However, for optimal performance, we
recommend that you install the SQL Server Native Client before using the Azure SQL database
connector. SQL Server Native Client 11.0 and SQL Server Native Client 10.0 are both supported in the latest
version.
Capabilities Supported
Import
DirectQuery (Power BI only)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
SQL Server failover support
NOTE
If the connection is not encrypted, you'll be prompted with the following dialog.
Select OK to connect to the database by using an unencrypted connection, or follow these
instructions to set up encrypted connections to Azure SQL database.
3. If you're connecting from an online service:
a. In the Azure SQL database dialog that appears, provide the name of the server and database.
b. If this is the first time you're connecting to this database, select the authentication kind and input your
credentials.
c. If required, select the name of your on-premises data gateway.
d. If the connection is not encrypted, clear the Use Encrypted Connection check box.
e. Select Next to continue.
4. In Navigator , select the data you require, then either load or transform the data.
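Equivalently, the connection can be written directly in M with Sql.Database. This is a sketch rather than generated output; the server, database, schema, and table names are illustrative, and the options shown are the ones the advanced options above typically map to (a native SQL statement can be passed with the Query option):
let
    Source = Sql.Database(
        "myserver.database.windows.net",    // Azure SQL server name (illustrative)
        "AdventureWorksLT",                 // database name (illustrative)
        [
            CommandTimeout = #duration(0, 0, 10, 0),    // command timeout in minutes
            CreateNavigationProperties = false,         // relationship columns
            MultiSubnetFailover = true                  // SQL Server failover support
        ]),
    // Pick a table from the navigation table
    Customers = Source{[Schema = "SalesLT", Item = "Customer"]}[Data]
in
    Customers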
Azure SQL Data Warehouse
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Windows (Power BI Desktop, Excel, online service with gateway), Database (Power
BI Desktop, Excel), Microsoft Account (all), Basic (online service)
Function Reference Documentation: Sql.Database, Sql.Databases
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
By default, Power BI installs an OLE DB driver for Azure SQL Data Warehouse. However, for optimal performance,
we recommend that you install the SQL Server Native Client before using the Azure SQL Data
Warehouse connector. SQL Server Native Client 11.0 and SQL Server Native Client 10.0 are both supported in the
latest version.
Capabilities Supported
Import
DirectQuery (Power BI only)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
SQL Server failover support
NOTE
If the connection is not encrypted, you'll be prompted with the following dialog.
Select OK to connect to the database by using an unencrypted connection, or follow these
instructions to set up encrypted connections to Azure SQL Data Warehouse.
3. If you're connecting from an online service:
a. In the Azure SQL Data Warehouse dialog that appears, provide the name of the server and
database.
b. If this is the first time you're connecting to this database, select the authentication kind and input your
credentials.
c. If required, select the name of your on-premises data gateway.
d. If the connection is not encrypted, clear the Use Encrypted Connection check box.
e. Select Next to continue.
4. In Navigator , select the data you require, then either load or transform the data.
Common Data Service
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dynamics 365 Customer Insights
Authentication types: Organizational account
Prerequisites
You must have a Common Data Service environment with maker permissions to access the portal, and read
permissions to access data within entities.
Capabilities supported
Server URL
Advanced
Reorder columns
Add display column
3. Enter the server URL address of the data you want to load.
When the table is loaded in the Navigator dialog box, by default the columns in the table are reordered in
alphabetical order by the column names. If you don't want the columns reordered, in the advanced settings
enter false in Reorder columns .
Also when the table is loaded, by default if the table contains any picklist fields, a new column with the name
of the picklist field with _display appended at the end of the name is added to the table. If you don't want
the picklist field display column added, in the advanced settings enter false in Add display column .
When you've finished filling in the information, select OK .
4. If this is the first time you're connecting to this site, select Sign in and input your credentials. Then select
Connect .
5. In Navigator , select the data you require, then either load or transform the data.
2. Enter the server URL address of the data you want to load.
3. If necessary, enter an on-premises data gateway if you're going to be using on-premises data (for example, if
you're going to combine data from Common Data Service and an on-premises SQL Server database).
4. Sign in to your organizational account.
5. When you've successfully signed in, select Next .
6. In the navigation page, select the data you require, and then select Transform Data .
NOTE
Both the Common Data Service connector and the OData APIs are meant to serve analytical scenarios where data volumes
are relatively small. The recommended approach for bulk data extraction is “Export to Data Lake”. The TDS endpoint is a
better option than the Common Data Service connector and OData endpoint, but is currently in Preview.
Analyze data in Azure Data Lake Storage Gen2 by
using Power BI
In this article you'll learn how to use Power BI Desktop to analyze and visualize data that is stored in a storage
account that has a hierarchical namespace (Azure Data Lake Storage Gen2).
Prerequisites
Before you begin this tutorial, you must have the following prerequisites:
An Azure subscription. See Get Azure free trial.
A storage account that has a hierarchical namespace. Follow these instructions to create one. This article
assumes that you've created a storage account named myadlsg2 .
You are granted one of the following roles for the storage account: Blob Data Reader , Blob Data
Contributor , or Blob Data Owner .
A sample data file named Drivers.txt located in your storage account. You can download this sample from
Azure Data Lake Git Repository, and then upload that file to your storage account.
Power BI Desktop . You can download this from the Microsoft Download Center.
You can also select whether you want to use the file system view or the Common Data Model folder view.
Select OK to continue.
5. If this is the first time you're using this URL address, you'll be asked to select the authentication method.
If you select the Organizational account method, select Sign in to sign into your storage account. You'll be
redirected to your organization's sign in page. Follow the prompts to sign into the account. After you've
successfully signed in, select Connect .
If you select the Account key method, enter your account key and then select Connect .
6. The next dialog box shows all files under the URL you provided in step 4 above, including the file that you
uploaded to your storage account. Verify the information, and then select Load .
7. After the data has been successfully loaded into Power BI, you'll see the following fields in the Fields tab.
However, to visualize and analyze the data, you might prefer the data to be available using the following
fields.
In the next steps, you'll update the query to convert the imported data to the desired format.
8. From the Home tab on the ribbon, select Edit Queries .
9. In the Query Editor, under the Content column, select Binary. The file will automatically be detected as
CSV and you should see an output as shown below. Your data is now available in a format that you can use
to create visualizations.
10. From the Home tab on the ribbon, select Close & Apply .
11. Once the query is updated, the Fields tab will show the new fields available for visualization.
12. Now you can create a pie chart to represent the drivers in each city for a given country. To do so, make the
following selections.
From the Visualizations tab, select the symbol for a pie chart.
In this example, the columns you're going to use are Column 4 (name of the city) and Column 7 (name of
the country). Drag these columns from the Fields tab to the Visualizations tab as shown below.
The pie chart should now resemble the one shown below.
13. By selecting a specific country from the page level filters, you can now see the number of drivers in each city
of the selected country. For example, under the Visualizations tab, under Page level filters , select Brazil .
14. The pie chart is automatically updated to display the drivers in the cities of Brazil.
15. From the File menu, select Save to save the visualization as a Power BI Desktop file.
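For reference, the steps above correspond roughly to the following M query. Treat it as a sketch: the storage account URL, file system, and file name follow this tutorial's assumptions (myadlsg2 and Drivers.txt), and the navigation column names may vary:
let
    // Connect to the Data Lake Storage Gen2 endpoint of the storage account (URL is illustrative)
    Source = AzureStorage.DataLake("https://myadlsg2.dfs.core.windows.net/myfilesystem"),
    // Locate the uploaded sample file and take its binary content
    DriversBinary = Table.SelectRows(Source, each Text.EndsWith([Name], "Drivers.txt")){0}[Content],
    // Interpret the binary as CSV, as the Query Editor does when you select Binary
    Drivers = Csv.Document(DriversBinary, [Delimiter = ",", Encoding = 65001])
in
    Drivers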
Excel
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Gateway for on-premises or .xls files), Dataflows in PowerBI.com
(Gateway for on-premises or .xls files), Dataflows in PowerApps.com (Gateway for on-premises or .xls files), Excel
Authentication Types Supported: No authentication
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
In order to connect to a legacy workbook (such as .xls or .xlsb), the Access Database Engine 2010 OLEDB provider is
required. To install this provider, go to the download page and install the relevant (32 bit or 64 bit) version. If you
don't have it installed, when connecting to legacy workbooks you'll see the following error:
The 32-bit (or 64-bit) version of the Access Database Engine 2010 OLEDB provider may be required to read this
type of file. To download the client software, visit the following site: https://go.microsoft.com/fwlink/?
LinkID=285987.
Capabilities Supported
Import
Troubleshooting
Connecting to an online Excel workbook
If you want to connect to an Excel document hosted in SharePoint, you can do so via the 'Web' connector in Power
BI Desktop, Excel, and Dataflows, as well as the 'Excel' connector in Dataflows. To get the link to the file (an M sketch of the resulting query follows these steps):
1. Open the document in Excel Desktop.
2. Open the File menu, select the Info tab, and then select Copy Path .
3. Copy the address into the File Path or URL field, and remove the ?web=1 from the end of the address.
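The resulting query looks roughly like the following sketch; the SharePoint URL and sheet name are illustrative:
let
    // Fetch the workbook over HTTPS using the copied path (URL is illustrative)
    Source = Excel.Workbook(Web.Contents("https://contoso.sharepoint.com/sites/Sales/Documents/Book1.xlsx"), null, true),
    // Pick a sheet from the navigation table and promote the first row to headers
    Sheet1 = Source{[Item = "Sheet1", Kind = "Sheet"]}[Data],
    PromotedHeaders = Table.PromoteHeaders(Sheet1, [PromoteAllScalars = true])
in
    PromotedHeaders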
Legacy ACE connector
Error resolution
Workbooks built in a legacy format (such as .xls and .xlsb) are accessed through the Access Database Engine
OLEDB provider, and Power Query will display values as returned by this provider. This action may cause a lack of
fidelity in certain cases compared to what you would see in an equivalent xlsx (OpenXML based) file.
Incorrect column typing returning nulls
When ACE loads a sheet, it looks at the first eight rows to try to guess the data types to use. If the first eight rows
aren't representative of the data that follows (for example, numeric values only in the first eight rows but text in the
following rows), ACE will apply an incorrect type to that column and return nulls for any data that doesn't match
that type.
Missing or incomplete Excel data
Sometimes Power Query fails to extract all the data from an Excel Worksheet. This is often caused by the
Worksheet having incorrect dimensions (for example, having dimensions of A1:C200 when the actual data
occupies more than three columns or 200 rows). You can fix such incorrect dimensions by opening and re-saving
the document. If the problematic Excel document is automatically generated by a tool, you'll need to ensure the
tool is fixed to output the dimensions correctly before the output can be imported into Power Query.
To view the dimensions of a Worksheet:
1. Rename the xlsx file with a .zip extension.
2. Navigate into xl\worksheets.
3. Copy the xml file for the problematic sheet (for example, Sheet1.xml) out of the zip file to another location.
4. Inspect the first few lines of the file. If the file is small enough, simply open it in a text editor. If the file is too
large to be opened in a text editor, run the following from a Command Prompt: more Sheet1.xml .
5. Look for a <dimension .../> tag (for example, <dimension ref="A1:C200" /> ).
If your file has a dimension attribute that points to a single cell (such as <dimension ref="A1" /> ), Power Query
uses this to find the starting row and column of the data on the sheet.
However, if your file has a dimension attribute that points to multiple cells (such as <dimension ref="A1:AJ45000"/>
), Power Query uses this range to find the starting row and column as well as the ending row and column . If
this range does not contain all the data on the sheet, some of the data won't be loaded.
As mentioned above, incorrect dimensions can be fixed by re-saving the file in Excel or changing the tool that
generated it to output either a proper ending point or just the starting cell.
Sluggish or slow performance when loading Excel data
Slow loading of Excel data can also be caused by incorrect dimensions. However, in this case, the slowness is
caused by the dimensions being much larger than they need to be, rather than being too small. Overly large
dimensions will cause Power Query to read a much larger amount of data from the Workbook than is actually
needed.
To fix this issue, you can refer to Locate and reset the last cell on a worksheet for detailed instructions.
Facebook
Summary
Release State: Deprecated
Products: Power BI Desktop, Power BI Service, Excel
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Troubleshooting
This connector has been deprecated.
FHIR
Summary
Fast Healthcare Interoperability Resources (FHIR®) is a new standard for healthcare data interoperability.
Healthcare data is represented as resources such as Patient , Observation , Encounter , and so on, and a REST API
is used for querying healthcare data served by a FHIR server. The Power Query connector for FHIR can be used to
import and shape data from a FHIR server.
If you don't have a FHIR server, you can provision the Azure API for FHIR.
Release State: General Availability
Products: Power BI Desktop, Power BI Service
Authentication Types Supported: Anonymous, Azure Active Directory
Capabilities Supported
Import
2. Select More....
3. Search for "FHIR".
Select the FHIR connector and select Connect .
4. Enter the URL for your FHIR server.
You can optionally enter an initial query for the FHIR server, if you know exactly what data you're looking for.
Select OK to proceed.
5. Decide on your authentication scheme.
The connector supports "Anonymous" for FHIR servers with no access controls (for example, public test
servers like http://test.fhir.org/r4) or Azure Active Directory authentication. See FHIR connector
authentication for details.
6. Select the resources you're interested in.
9. Create dashboards with data, for example, make a plot of the patient locations based on postal code.
Next Steps
In this article, you've learned how to use the Power Query connector for FHIR to access FHIR data from Power BI.
Next explore the authentication features of the Power Query connector for FHIR.
FHIR connector authentication
FHIR® and the FHIR Flame icon are the registered trademarks of HL7 and are used with the permission of
HL7. Use of the FHIR trademark does not constitute endorsement of this product by HL7.
FHIR Connector Authentication
This article explains authenticated access to FHIR servers using the Power Query connector for FHIR. The connector
supports anonymous access to publicly accessible FHIR servers and authenticated access to FHIR servers using
Azure Active Directory authentication. The Azure API for FHIR is secured with Azure Active Directory.
Anonymous Access
There are a number of publicly accessible FHIR servers. To enable testing with these public servers, the Power
Query connector for FHIR supports the "Anonymous" authentication scheme. For example, to access the public
https://vonk.fire.ly server:
1. Enter the URL of the public Vonk server.
After that, follow the steps to query and shape your data.
Next Steps
In this article, you've learned how to use the Power Query connector for FHIR authentication features. Next, explore
query folding.
FHIR Power Query folding
FHIR Query Folding
Power Query folding is the mechanism used by a Power Query connector to turn data transformations into queries
that are sent to the data source. This allows Power Query to off-load as much of the data selection as possible to
the data source rather than retrieving large amounts of unneeded data only to discard it in the client. The Power
Query connector for FHIR includes query folding capabilities, but due to the nature of FHIR search, special
attention must be given to the Power Query expressions to ensure that query folding is performed when possible.
This article explains the basics of FHIR Power Query folding and provides guidelines and examples.
let
Source = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null),
Patient1 = Source{[Name="Patient"]}[Data],
#"Filtered Rows" = Table.SelectRows(Patient1, each [birthDate] < #date(1980, 1, 1))
in
#"Filtered Rows"
Instead of retrieving all Patient resources from the FHIR server and filtering them in the client (Power BI), it's more
efficient to send a query with a search parameter to the FHIR server:
GET https://myfhirserver.azurehealthcareapis.com/Patient?birthdate=lt1980-01-01
With such a query, the client would only receive the patients of interest and would not need to discard data in the
client.
In the example of a birth date, the query folding is straightforward, but in general it is challenging in FHIR because
the search parameter names don't always correspond to the data field names and frequently multiple data fields
will contribute to a single search parameter.
For example, let's consider the Observation resource and the category field. The Observation.category field is a
CodeableConcept in FHIR, which has a coding field, which in turn has system and code fields (among other fields).
Suppose you're interested in vital signs only. You would be looking for Observations where
Observation.category.coding.code = "vital-signs", but the FHIR search would look something like
https://myfhirserver.azurehealthcareapis.com/Observation?category=vital-signs.
To be able to achieve query folding in the more complicated cases, the Power Query connector for FHIR matches
Power Query expressions with a list of expression patterns and translates them into appropriate search
parameters. The expression patterns are generated from the FHIR specification.
This matching with expression patterns works best when any selection expressions (filtering) are done as early as
possible in data transformation steps before any other shaping of the data.
NOTE
To give the Power Query engine the best chance of performing query folding, you should do all data selection expressions
before any shaping of the data.
If, for example, you first expand the category codings into columns and then filter on the expanded columns, the
Power Query engine no longer recognizes that as a selection pattern that maps to the category search parameter.
However, if you restructure the query so that the selection is done before any shaping, for example along the lines of
the category pattern described in the FHIR query folding patterns article (a sketch):
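let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation" ]}[Data],
// Fold: "category=vital-signs"
FilteredObservations = Table.SelectRows(Observations, each Table.MatchesAnyRows([category], each Table.MatchesAnyRows([coding], each [code] = "vital-signs")))
in
FilteredObservations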
The search query /Observation?category=vital-signs will be sent to the FHIR server, which will reduce the amount
of data that the client will receive from the server.
While the first and the second Power Query expressions will result in the same data set, the latter will, in general,
result in better query performance. It's important to note that the second, more efficient, version of the query can't
be obtained purely through data shaping with the graphical user interface (GUI). It's necessary to write the query in
the "Advanced Editor".
The initial data exploration can be done with the GUI query editor, but it's recommended that the query be
refactored with query folding in mind. Specifically, selective queries (filtering) should be performed as early as
possible.
Summary
Query folding provides more efficient Power Query expressions. A properly crafted Power Query will enable query
folding and thus off-load much of the data filtering burden to the data source.
Next steps
In this article, you've learned how to use query folding in the Power Query connector for FHIR. Next, explore the
list of FHIR Power Query folding patterns.
FHIR Power Query folding patterns
FHIR Query Folding Patterns
This article describes Power Query patterns that will allow effective query folding in FHIR. It assumes that you are
familiar with using the Power Query connector for FHIR and understand the basic motivation and principles
for Power Query folding in FHIR.
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "birthdate=lt1980-01-01"
FilteredPatients = Table.SelectRows(Patients, each [birthDate] < #date(1980, 1, 1))
in
FilteredPatients
Filtering Patients by birth date range using and , only the 1970s:
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "birthdate=ge1970-01-01&birthdate=lt1980-01-01"
FilteredPatients = Table.SelectRows(Patients, each [birthDate] < #date(1980, 1, 1) and [birthDate] >=
#date(1970, 1, 1))
in
FilteredPatients
Filtering Patients by birthdate using or , not the 1970s:
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "birthdate=ge1980-01-01,lt1970-01-01"
FilteredPatients = Table.SelectRows(Patients, each [birthDate] >= #date(1980, 1, 1) or [birthDate] <
#date(1970, 1, 1))
in
FilteredPatients
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "active=true"
FilteredPatients = Table.SelectRows(Patients, each [active])
in
FilteredPatients
Alternative search for patients where active is not true (could include missing values):
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "active:not=true"
FilteredPatients = Table.SelectRows(Patients, each [active] <> true)
in
FilteredPatients
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "gender=male"
FilteredPatients = Table.SelectRows(Patients, each [gender] = "male")
in
FilteredPatients
Filtering to keep only patients that are not male (includes other):
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "gender:not=male"
FilteredPatients = Table.SelectRows(Patients, each [gender] <> "male")
in
FilteredPatients
// Fold: "status=final"
FilteredObservations = Table.SelectRows(Observations, each [status] = "final")
in
FilteredObservations
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "_lastUpdated=2010-12-31T11:56:02.000+00:00"
FilteredPatients = Table.SelectRows(Patients, each [meta][lastUpdated] = #datetimezone(2010, 12, 31, 11,
56, 2, 0, 0))
in
FilteredPatients
let
Encounters = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Encounter" ]}
[Data],
// Fold: "class=s|c"
FilteredEncounters = Table.SelectRows(Encounters, each [class][system] = "s" and [class][code] = "c")
in
FilteredEncounters
let
Encounters = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Encounter" ]}
[Data],
// Fold: "class=c"
FilteredEncounters = Table.SelectRows(Encounters, each [class][code] = "c")
in
FilteredEncounters
let
Encounters = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Encounter" ]}
[Data],
// Fold: "class=s|"
FilteredEncounters = Table.SelectRows(Encounters, each [class][system] = "s")
in
FilteredEncounters
// Fold: "subject=Patient/1234"
FilteredObservations = Table.SelectRows(Observations, each [subject][reference] = "Patient/1234")
in
FilteredObservations
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "subject=1234,Patient/1234,https://myfhirservice/Patient/1234"
FilteredObservations = Table.SelectRows(Observations, each [subject][reference] = "1234" or [subject]
[reference] = "Patient/1234" or [subject][reference] = "https://myfhirservice/Patient/1234")
in
FilteredObservations
let
ChargeItems = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "ChargeItem" ]}
[Data],
// Fold: "quantity=1"
FilteredChargeItems = Table.SelectRows(ChargeItems, each [quantity][value] = 1)
in
FilteredChargeItems
let
ChargeItems = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "ChargeItem" ]}
[Data],
// Fold: "quantity=gt1.001"
FilteredChargeItems = Table.SelectRows(ChargeItems, each [quantity][value] > 1.001)
in
FilteredChargeItems
let
ChargeItems = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "ChargeItem" ]}
[Data],
// Fold: "quantity=lt1.001|s|c"
FilteredChargeItems = Table.SelectRows(ChargeItems, each [quantity][value] < 1.001 and [quantity][system]
= "s" and [quantity][code] = "c")
in
FilteredChargeItems
// Fold: "period=sa2010-01-01T00:00:00.000+00:00"
FiltertedConsents = Table.SelectRows(Consents, each [provision][period][start] > #datetimezone(2010, 1, 1,
0, 0, 0, 0, 0))
in
FiltertedConsents
let
Consents = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Consent" ]}[Data],
// Fold: "period=eb2010-01-01T00:00:00.000+00:00"
FilteredConsents = Table.SelectRows(Consents, each [provision][period][end] < #datetimezone(2010, 1, 1, 0, 0, 0, 0, 0))
in
FilteredConsents
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "code:text=t"
FilteredObservations = Table.SelectRows(Observations, each [code][text] = "t")
in
FilteredObservations
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "code:text=t"
FilteredObservations = Table.SelectRows(Observations, each Text.StartsWith([code][text], "t"))
in
FilteredObservations
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "_profile=http://myprofile"
FilteredPatients = Table.SelectRows(Patients, each List.MatchesAny([meta][profile], each _ =
"http://myprofile"))
in
FilteredPatients
// Fold: "category=food"
FilteredAllergyIntolerances = Table.SelectRows(AllergyIntolerances, each List.MatchesAny([category], each
_ = "food"))
in
FilteredAllergyIntolerances
let
AllergyIntolerances = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"AllergyIntolerance" ]}[Data],
// Fold: "category:missing=true"
FilteredAllergyIntolerances = Table.SelectRows(AllergyIntolerances, each List.MatchesAll([category], each
_ = null))
in
FilteredAllergyIntolerances
let
AllergyIntolerances = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"AllergyIntolerance" ]}[Data],
// Fold: "category:missing=true"
FilteredAllergyIntolerances = Table.SelectRows(AllergyIntolerances, each [category] = null)
in
FilteredAllergyIntolerances
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "family:exact=Johnson"
FilteredPatients = Table.SelectRows(Patients, each Table.MatchesAnyRows([name], each [family] =
"Johnson"))
in
FilteredPatients
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "family=John"
FilteredPatients = Table.SelectRows(Patients, each Table.MatchesAnyRows([name], each
Text.StartsWith([family], "John")))
in
FilteredPatients
// Fold: "family=John,Paul"
FilteredPatients = Table.SelectRows(Patients, each Table.MatchesAnyRows([name], each
Text.StartsWith([family], "John") or Text.StartsWith([family], "Paul")))
in
FilteredPatients
Filtering Patients where the family name starts with John and the given name starts with Paul:
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "family=John&given=Paul"
FilteredPatients = Table.SelectRows(
Patients,
each
Table.MatchesAnyRows([name], each Text.StartsWith([family], "John")) and
Table.MatchesAnyRows([name], each List.MatchesAny([given], each Text.StartsWith(_, "Paul"))))
in
FilteredPatients
let
Goals = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Goal" ]}[Data],
// Fold: "target-date=gt2020-03-01"
FilteredGoals = Table.SelectRows(Goals, each Table.MatchesAnyRows([target], each [due][date] >
#date(2020,3,1)))
in
FilteredGoals
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "identifier=s|v"
FilteredPatients = Table.SelectRows(Patients, each Table.MatchesAnyRows([identifier], each [system] = "s"
and _[value] = "v"))
in
FilteredPatients
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "code=s|c"
FilteredObservations = Table.SelectRows(Observations, each Table.MatchesAnyRows([code][coding], each
[system] = "s" and [code] = "c"))
in
FilteredObservations
// Fold: "code:text=t&code=s|c"
FilteredObservations = Table.SelectRows(Observations, each Table.MatchesAnyRows([code][coding], each
[system] = "s" and [code] = "c") and [code][text] = "t")
in
FilteredObservations
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Patient" ]}[Data],
// Fold: "family=John&given=Paul"
FilteredPatients =
Table.SelectRows(
Patients,
each
Table.MatchesAnyRows([name], each Text.StartsWith([family], "John")) and
Table.MatchesAnyRows([name], each List.MatchesAny([given], each Text.StartsWith(_, "Paul"))))
in
FilteredPatients
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "category=vital-signs"
FilteredObservations = Table.SelectRows(Observations, each Table.MatchesAnyRows([category], each
Table.MatchesAnyRows([coding], each [code] = "vital-signs")))
in
FilteredObservations
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "category=s|c"
FilteredObservations = Table.SelectRows(Observations, each Table.MatchesAnyRows([category], each
Table.MatchesAnyRows([coding], each [system] = "s" and [code] = "c")))
in
FilteredObservations
// Fold: "category=s1|c1,s2|c2"
FilteredObservations =
Table.SelectRows(
Observations,
each
Table.MatchesAnyRows(
[category],
each
Table.MatchesAnyRows(
[coding],
each
([system] = "s1" and [code] = "c1") or
([system] = "s2" and [code] = "c2"))))
in
FilteredObservations
let
AuditEvents = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "AuditEvent" ]}
[Data],
// Fold: "policy=http://mypolicy"
FilteredAuditEvents = Table.SelectRows(AuditEvents, each Table.MatchesAnyRows([agent], each
List.MatchesAny([policy], each _ = "http://mypolicy")))
in
FilteredAuditEvents
Filtering Observations on code and value quantity, body height greater than 150:
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "code-value-quantity=http://loinc.org|8302-2$gt150"
FilteredObservations = Table.SelectRows(Observations, each Table.MatchesAnyRows([code][coding], each
[system] = "http://loinc.org" and [code] = "8302-2") and [value][Quantity][value] > 150)
in
FilteredObservations
Filtering on Observation component code and value quantity, systolic blood pressure greater than 140:
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "component-code-value-quantity=http://loinc.org|8480-6$gt140"
FilteredObservations = Table.SelectRows(Observations, each Table.MatchesAnyRows([component], each
Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and [code] = "8480-6") and [value]
[Quantity][value] > 140))
in
FilteredObservations
Filtering on multiple component code value quantities (AND), diastolic blood pressure greater than 90 and systolic
blood pressure greater than 140:
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "component-code-value-quantity=http://loinc.org|8462-4$gt90&component-code-value-
quantity=http://loinc.org|8480-6$gt140"
FilteredObservations =
Table.SelectRows(
Observations,
each
Table.MatchesAnyRows(
[component],
each
Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and [code] =
"8462-4") and [value][Quantity][value] > 90) and
Table.MatchesAnyRows([component], each Table.MatchesAnyRows([code][coding], each
[system] = "http://loinc.org" and [code] = "8480-6") and [value][Quantity][value] > 140))
in
FilteredObservations
Filtering on multiple component code value quantities (OR), diastolic blood pressure greater than 90 or systolic
blood pressure greater than 140:
let
Observations = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"
]}[Data],
// Fold: "component-code-value-quantity=http://loinc.org|8462-4$gt90,http://loinc.org|8480-6$gt140"
FilteredObservations =
Table.SelectRows(
Observations,
each
Table.MatchesAnyRows(
[component],
each
(Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and [code] =
"8462-4") and [value][Quantity][value] > 90) or
Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and [code] =
"8480-6") and [value][Quantity][value] > 140 ))
in
FilteredObservations
// Fold: "combo-code-value-quantity=http://loinc.org|8302-2$gt150"
FilteredObservations =
Table.SelectRows(
Observations,
each
(Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and [code] = "8302-
2") and [value][Quantity][value] > 150) or
(Table.MatchesAnyRows([component], each Table.MatchesAnyRows([code][coding], each [system] =
"http://loinc.org" and [code] = "8302-2") and [value][Quantity][value] > 150)))
in
FilteredObservations
Summary
Query folding turns Power Query filtering expressions into FHIR search parameters. The Power Query connector
for FHIR recognizes certain patterns and attempts to identify matching search parameters. Recognizing those
patterns will help you write more efficient Power Query expressions.
Next steps
In this article, we reviewed some classes of filtering expressions that will fold to FHIR search parameters. Next read
about establishing relationships between FHIR resources.
FHIR Power Query relationships
FHIR Relationships
This article describes how to establish relationships between tables that have been imported using the Power
Query connector for FHIR.
Introduction
FHIR resources are related to each other, for example, an Observation that references a subject ( Patient ):
{
"resourceType": "Observation",
"id": "1234",
"subject": {
"reference": "Patient/456"
}
}
Some of the resource reference fields in FHIR can refer to multiple different types of resources (for example,
Practitioner or Organization ). To facilitate an easier way to resolve references, the Power Query connector for
FHIR adds a synthetic field to all imported resources called <referenceId> , which contains a concatenation of the
resource type and the resource ID.
To establish a relationship between two tables, you can connect a specific reference field on a resource to the
corresponding <referenceId> field on the resource you would like it linked to. In simple cases, Power BI will even
detect this for you automatically.
5. Establish the relationship. In this simple example, Power BI will likely have detected the relationship
automatically:
If not, you can add it manually:
Next steps
In this article, you've learned how to establish relationships between tables imported with the Power Query
connector for FHIR. Next, explore query folding with the Power Query connector for FHIR.
FHIR Power Query folding
Folder
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Function Reference Documentation: Folder.Contents, Folder.Files
Capabilities supported
Folder path
Combine
Combine and load
Combine and transform
Connect to a folder
To connect to a folder:
1. In the Get Data dialog box, select Folder .
2. Enter the path to the folder you want to load, or select Browse to browse to the folder you want to load,
and then select OK .
When you select the folder you want to use, the file information about all of the files in that folder is
displayed. In addition, file information about any files in any subfolders is also displayed.
3. Select Combine & Transform Data to combine the data in the files of the selected folder and load the
data into the Power Query Editor for editing. Or select Combine & Load to load the data from all of the
files in the folder directly into your app.
NOTE
The Combine & Transform Data and Combine & Load buttons are the easiest ways to combine data found in the files
of the folder you specify. You could also use the Load button (in Power BI Desktop only) or the Transform Data buttons
to combine the files as well, but that requires more manual steps.
Troubleshooting
Combining files
All of the files in the folder you select will be included in the data to be combined. If you have data files located in a
subfolder of the folder you select, all of these files will also be included. To ensure that combining the file data
works properly, make sure that all of the files in the folder and the subfolders have the same schema.
For more information about combining files, see Combine files in Power Query.
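As a sketch of what combining the files amounts to in M (the folder path and the choice of .csv files are illustrative; note that Folder.Files also returns files from subfolders, as described above):
let
    // List every file in the folder and its subfolders (path is illustrative)
    Source = Folder.Files("C:\SalesData"),
    // Keep only .csv files so that all combined files share the same schema (the Extension column includes the leading dot)
    CsvFiles = Table.SelectRows(Source, each [Extension] = ".csv"),
    // Parse each file's binary content as CSV
    Parsed = Table.AddColumn(CsvFiles, "Data", each Csv.Document([Content], [Delimiter = ",", Encoding = 65001])),
    // Append the per-file tables into a single table
    Combined = Table.Combine(Parsed[Data])
in
    Combined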
Google Analytics
Summary
Release State: General Availability
Products: Power BI Desktop
Authentication Types Supported: Google Account
Function Reference Documentation: GoogleAnalytics.Accounts
Prerequisites
Before you can sign in to Google Analytics, you must have a Google Analytics account (username/password).
Capabilities Supported
Import
4. In the Sign in with Google window that appears, provide your credentials to sign in to your Google
Analytics account. You can either supply an email address or phone number. Then select Next .
5. Enter your Google Analytics password and select Next .
6. When asked if you want Power BI Desktop to access your Google account, select Allow .
7. Once you have successfully signed in, select Connect .
Once the connection is established, you’ll see a list of the accounts you have access to. Drill through the account,
properties, and views to see a selection of values, categorized in display folders.
You can Load the selected table, which brings the entire table into Power BI Desktop, or you can select Transform
Data to edit the query, which opens Power Query Editor. You can then filter and refine the set of data you want to
use, and then load that refined set of data into Power BI Desktop.
Limitations and issues
You should be aware of the following limitations and issues associated with accessing Google Analytics data.
Google Analytics quota limits for Power BI
The standard limitations and quotas for Google Analytics API requests are documented in Limits and Quotas on API
Requests. However, Power BI Desktop and the Power BI service allow you to use the following enhanced
number of queries.
Power BI Desktop:
Queries per day—250,000
Queries per 100 seconds—2,000
Power BI Service:
Queries per day—1,500,000
Queries per 100 seconds—4,000
Troubleshooting
Validating Unexpected Data
When date ranges are very large, Google Analytics will return only a subset of values. You can use the process
described in this section to understand what dates are being retrieved, and manually edit them. If you need more
data, you can append multiple queries with different date ranges. If you're not sure you're getting back the data you
expect to see, you can also use Data Profiling to get a quick look at what's being returned.
To make sure that the data you're seeing is the same as you would get from Google Analytics, you can execute the
query yourself in Google's interactive tool. To understand what data Power Query is retrieving, you can use Query
Diagnostics to understand what query parameters are being sent to Google Analytics.
If you follow the instructions for Query Diagnostics and run Diagnose Step on any Added Items , you can see the
generated results in the Diagnostics Data Source Quer y column. We recommend running this with as few
additional operations as possible on top of your initial connection to Google Analytics, to make sure you're not
losing data in a Power Query transform rather than what's being retrieved from Google Analytics.
Depending on your query, the row containing the emitted API call to Google Analytics may not be in the same
place. But for a simple Google Analytics only query, you'll generally see it as the last row that has content in that
column.
In the Data Source Quer y column, you'll find a record with the following pattern:
Request:
GET https://www.googleapis.com/analytics/v3/data/ga?ids=ga:<GA
Id>&metrics=ga:users&dimensions=ga:source&start-date=2009-03-12&end-date=2020-08-11&start-index=1&max-
results=1000&quotaUser=<User>%40gmail.com HTTP/1.1
<Content placeholder>
Response:
HTTP/1.1 200 OK
Content-Length: -1
<Content placeholder>
From this record, you can see you have your Analytics view (profile) ID, your list of metrics (in this case, just
ga:users ), your list of dimensions (in this case, just referral source), the start-date and end-date, the start-index,
max-results (set to 1000 for the editor by default), and the quotaUser.
You can copy these values into the Google Analytics Query Explorer to validate that the same data you're seeing
returned by your query is also being returned by the API.
If your error is around a date range, you can easily fix it. Go into the Advanced Editor. You'll have an M query that
looks something like this (at a minimum—there may be other transforms on top of it).
let
Source = GoogleAnalytics.Accounts(),
#"<ID>" = Source{[Id="<ID>"]}[Data],
#"UA-<ID>-1" = #"<ID>"{[Id="UA-<ID>-1"]}[Data],
#"<View ID>" = #"UA-<ID>-1"{[Id="<View ID>"]}[Data],
#"Added Items" = Cube.Transform(#"<View ID>",
{
{Cube.AddAndExpandDimensionColumn, "ga:source", {"ga:source"}, {"Source"}},
{Cube.AddMeasureColumn, "Users", "ga:users"}
})
in
#"Added Items"
You can do one of two things. If you have a Date column, you can filter on the Date. This is the easier option. If you
don't care about breaking it up by date, you can Group afterwards.
If you don't have a Date column, you can manually manipulate the query in the Advanced Editor to add one and
filter on it. For example:
let
Source = GoogleAnalytics.Accounts(),
#"<ID>" = Source{[Id="<ID>"]}[Data],
#"UA-<ID>-1" = #"<ID>"{[Id="UA-<ID>-1"]}[Data],
#"<View ID>" = #"UA-<ID>-1"{[Id="<View ID>"]}[Data],
#"Added Items" = Cube.Transform(#"<View ID>",
{
{Cube.AddAndExpandDimensionColumn, "ga:date", {"ga:date"}, {"Date"}},
{Cube.AddAndExpandDimensionColumn, "ga:source", {"ga:source"}, {"Source"}},
{Cube.AddMeasureColumn, "Organic Searches", "ga:organicSearches"}
}),
#"Filtered Rows" = Table.SelectRows(#"Added Items", each [Date] >= #date(2019, 9, 1) and [Date] <=
#date(2019, 9, 30))
in
#"Filtered Rows"
Next steps
Google Analytics Dimensions & Metrics Explorer
Google Analytics Core Reporting API
JSON
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Anonymous, Basic (Web only), Organizational Account, Web API (Web only),
Windows
Function Reference Documentation: Json.Document
Capabilities supported
Import
To load a local JSON file into an online service, such as Power BI service or Power Apps, you'll need to enter the
local path to the JSON file, select an on-premises data gateway, and, if authentication is required, enter your
credentials.
Loading the JSON file will automatically launch the Power Query Editor for you to transform the data if you want,
or you can simply close and apply.
JSON data may not always be imported into Power Query as a table. However, you can always use the available
Power Query ribbon transforms to convert it to a table.
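For example, here is a minimal sketch of loading a local JSON file whose top level is a single object (the file path is illustrative); if the top level is an array, you would convert the list with Table.FromList instead:
let
    // Json.Document returns a record for a JSON object or a list for a JSON array
    Source = Json.Document(File.Contents("C:\sample.json")),
    // For a top-level object, Record.ToTable produces a two-column Name/Value table
    AsTable = Record.ToTable(Source)
in
    AsTable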
Troubleshooting
If you see the following message, it may be because the file is invalid, for example, it's not really a JSON file, or is
malformed. Or you may be trying to load a JSON Lines file.
If you are trying to load a JSON Lines file, the following sample M code converts all JSON Lines input to a single
flattened table automatically:
let
// Read the file into a list of lines
Source = Table.FromColumns({Lines.FromBinary(File.Contents("C:\json-lines-example.json"), null, null)}),
// Transform each line using Json.Document
#"Transformed Column" = Table.TransformColumns(Source, {"Column1", Json.Document})
in
#"Transformed Column"
You'll then need to use an Expand operation to combine the lines together.
MySQL database
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Windows (Power BI Desktop, Excel, online service with gateway), Database (Power
BI Desktop, Excel), Basic (online service with gateway)
Function Reference Documentation: MySQL.Database
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
By default, Power BI installs an OLE DB driver for MySQL database. However, for optimal performance, we
recommend that you install the SQL Server Native Client before using the MySQL database connector.
SQL Server Native Client 11.0 and SQL Server Native Client 10.0 are both supported in the latest version.
Capabilities Supported
Import
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
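In M, the connector corresponds to the MySQL.Database function listed above. A minimal sketch (the server, database, and option value are illustrative; the option name follows the pattern used by the other database connectors, so verify it against the function reference):
let
    Source = MySQL.Database(
        "localhost:3306",    // server, optionally with port (illustrative)
        "inventory",         // database name (illustrative)
        [CommandTimeout = #duration(0, 0, 10, 0)])    // command timeout in minutes
in
    Source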
OData Feed
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Anonymous, Windows, Basic, Web API, Organizational Account
Function Reference Documentation: OData.Feed, ODataOmitValues.Nulls
Capabilities supported
Basic
Advanced
URL parts
Open type columns
Select related tables
If the URL address you enter is invalid, a warning icon will appear next to the URL textbox.
b. If this is the first time you're connecting using the OData Feed, select the authentication type, input
your credentials (if necessary), and select the level to apply the authentication settings to. Then select
Connect .
3. If you're connecting from an online service (such as Power BI service or Power Apps):
a. In the OData dialog that appears, enter a URL in the text box.
b. If this is the first time you're connecting using the OData Feed, select the authentication kind and
enter your credentials (if necessary). Then select Next .
4. From the Navigator dialog, you can select a table, then either transform the data in the Power Query Editor
by selecting Transform Data , or load the data by selecting Load .
If you have multiple tables that have a direct relationship to one or more of the already selected tables, you
can select the Select Related Tables button. When you do, all tables that have a direct relationship to one
or more of the already selected tables will be imported as well.
ODBC
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (On-Premises Gateway), Dataflows in PowerBI.com (On-Premises
Gateway), Dataflows in PowerApps.com (On-Premises Gateway), Excel, Flow
Authentication Types Supported: Database (Username/Password), Windows, Default or Custom
M Function Reference: Odbc.DataSource, Odbc.Query
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Capabilities Supported
Import
Advanced options
Connection string (non-credential properties)
SQL statement
Supported row reduction clauses
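In M, the connector maps to Odbc.DataSource (for navigating tables) and Odbc.Query (for sending a raw SQL statement). A minimal sketch with an illustrative DSN and query:
let
    // Any connection string accepted by your ODBC driver works; a DSN keeps the driver details out of the query
    Source = Odbc.Query("dsn=MyDatasource", "select * from inventory.orders")
in
    Source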
Oracle database
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise Gateway),
Dataflows in PowerApps.com (Enterprise Gateway), Excel, Dynamics 365 Customer Insights, Analysis Services
Authentication Types Supported: Windows (desktop/online), Database (desktop), Basic (online)
Function Reference Documentation: Oracle.Database
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
Supported Oracle versions:
Oracle Server 9 and later
Oracle Data Access Client (ODAC) software 11.2 and later
Before you can connect to an Oracle database using Power Query, you need to install the Oracle client software
v8.1.7 or greater on your computer. To install the 32-bit Oracle client software, go to 32-bit Oracle Data Access
Components (ODAC) with Oracle Developer Tools for Visual Studio (12.1.0.2.4). To install the 64-bit Oracle client, go
to 64-bit ODAC 12c Release 4 (12.1.0.2.4) Xcopy for Windows x64.
NOTE
Choose a version of Oracle Data Access Client (ODAC) which is compatible with your Oracle Server. For instance, ODAC 12.x
does not always support Oracle Server version 9. Choose the Windows installer of the Oracle Client. During the setup of the
Oracle client, make sure you enable Configure ODP.NET and/or Oracle Providers for ASP.NET at machine-wide level by
selecting the corresponding checkbox during the setup wizard. Some versions of the Oracle client wizard select the checkbox
by default, others do not. Make sure that checkbox is selected so that Power Query can connect to your Oracle database.
To connect to an Oracle database with the on-premises data gateway, the correct Oracle client software must be
installed on the computer running the gateway. The Oracle client software you use depends on the Oracle server
version, but will always match the 64-bit gateway. For more information, see Manage your data source - Oracle.
Capabilities Supported
Import
DirectQuery
Advanced options
Command timeout in minutes
SQL statement
Include relationship columns
Navigate using full hierarchy
NOTE
If you are using a local database, or autonomous database connections, you may need to place the server name in
quotation marks to avoid connection errors.
3. If you're connecting from Power BI Desktop, select either the Import or DirectQuery data connectivity
mode. The rest of these example steps use the Import data connectivity mode. To learn more about
DirectQuery, see Use DirectQuery in Power BI Desktop.
4. Optionally, you can provide a command timeout and a native query (SQL statement). You can also select
whether you want to include relationship columns and navigate using full hierarchy. Once you're done, select
OK .
5. If this is the first time you're connecting to this Oracle database, you'll be asked to enter your credentials.
Select the authentication type you want to use, and then enter your credentials. For more information about
authentication, see Authentication with a data source.
6. In Navigator , select the data you require, then either load or transform the data.
NOTE
You must select an on-premises data gateway for this connector, whether the Oracle database is on your local
network or on a web site.
4. If this is the first time you're connecting to this Oracle database, select the type of credentials for the
connection in Authentication kind . Choose Basic if you plan to use an account that's created within Oracle
instead of Windows authentication.
5. Enter your credentials.
6. Select Next to continue.
7. In Navigator , select the data you require, then either load or transform the data.
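For reference, the same connection can be expressed directly in M with Oracle.Database. This is a sketch only; the net_service_name and query text are illustrative, and the Query option carries the native SQL statement from the advanced options above:
let
    // "MyOracleTns" is a net_service_name defined in tnsnames.ora (illustrative)
    Source = Oracle.Database(
        "MyOracleTns",
        [
            Query = "select * from hr.employees",       // native SQL statement (optional)
            CommandTimeout = #duration(0, 0, 10, 0)     // command timeout in minutes
        ])
in
    Source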
Troubleshooting
You might come across any of several errors from Oracle when the naming syntax is either incorrect or not
configured properly:
ORA-12154: TNS: could not resolve the connect identifier specified.
ORA-12514: TNS: listener does not currently know of service requested in connect descriptor.
ORA-12541: TNS: no listener.
ORA-12170: TNS: connect timeout occurred.
ORA-12504: TNS: listener was not given the SERVICE_NAME in CONNECT_DATA.
These errors might occur if the Oracle client either isn't installed or isn't configured properly. If it's installed, verify
the tnsnames.ora file is properly configured and you're using the proper net_service_name. You also need to make
sure the net_service_name is the same between the machine that uses Power BI Desktop and the machine that runs
the gateway. For more information, see Prerequisites.
You might also come across a compatibility issue between the Oracle server version and the Oracle Data Access
Client version. Typically, you want these versions to match, as some combinations are incompatible. For instance,
ODAC 12.x does not support Oracle Server version 9.
If you downloaded Power BI Desktop from the Microsoft Store, you might be unable to connect to Oracle databases
because of an Oracle driver issue. If you come across this issue, the error message returned is: Object reference not
set. To address the issue, do one of these steps:
Download Power BI Desktop from the Download Center instead of Microsoft Store.
If you want to use the version from Microsoft Store: on your local computer, copy oraons.dll from
12.X.X\client_X to 12.X.X\client_X\bin, where X represents version and directory numbers.
If you see the error message, Object reference not set, in the Power BI Gateway when you connect to an Oracle
database, follow the instructions in Manage your data source - Oracle.
If you're using Power BI Report Server, consult the guidance in the Oracle Connection Type article.
Next steps
Optimize Power Query when expanding table columns
PostgreSQL
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Database (Username/Password)
Function Reference Documentation: PostgreSQL.Database
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
As of the December 2019 release, NpgSQL 4.0.10 shipped with Power BI Desktop and no additional installation
is required. GAC Installation overrides the version provided with Power BI Desktop, which will be the default.
Refreshing is supported both through the cloud in the Power BI service and on premises through the gateway.
In the Power BI service, NpgSQL 4.0.10 will be used, while on-premises refresh will use the local installation of
NpgSQL, if available, and otherwise use NpgSQL 4.0.10.
For Power BI Desktop versions released before December 2019, you must install the NpgSQL provider on your
local machine. To install the NpgSQL provider, go to the releases page and download the relevant release. The
provider architecture (32-bit or 64-bit) needs to match the architecture of the product where you intend to use the
connector. When installing, make sure that you select NpgSQL GAC Installation to ensure NpgSQL itself is added to
your machine.
We recommend NpgSQL 4.0.10. NpgSQL 4.1 and up will not work due to .NET version
incompatibilities.
Capabilities Supported
Import
DirectQuery (Power BI only, learn more)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
2. In the PostgreSQL dialog that appears, provide the name of the server and database. Optionally, you may
provide a command timeout and a native query (SQL statement), as well as select whether or not you want to
include relationship columns and navigate using full hierarchy. Once you're done, select Connect .
3. If the PostgreSQL database requires database user credentials, input those credentials in the dialog when
prompted.
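For reference, the server name, database, and advanced options in this dialog correspond to parameters of the
PostgreSQL.Database function. The following is a sketch only—the server and database names are placeholders:
let
    // Placeholder server and database names; the options record mirrors the advanced options in the dialog
    Source = PostgreSQL.Database("myserver.postgres.database.azure.com", "salesdb",
        [
            CommandTimeout = #duration(0, 0, 10, 0),       // command timeout of 10 minutes
            CreateNavigationProperties = false             // equivalent to clearing "Include relationship columns"
        ])
in
    Source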
Native Query Folding
To enable Native Query Folding, set the EnableFolding flag to true for Value.NativeQuery() in the advanced
editor.
Sample: Value.NativeQuery(target as any, query, null, [EnableFolding=true])
Operations that are capable of folding will be applied on top of your native query according to normal Import or
Direct Query logic. Native query folding isn't applicable when optional parameters are present in Value.NativeQuery().
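As a sketch of what this can look like in the Advanced Editor (the server, database, query text, and the
order_total column are placeholders, not part of the official documentation):
let
    Source = PostgreSQL.Database("myserver", "salesdb"),
    // EnableFolding = true lets later steps fold on top of the native query
    NativeQuery = Value.NativeQuery(Source, "select * from public.orders", null, [EnableFolding = true]),
    // This filter can fold into the SQL that is sent to the server
    Filtered = Table.SelectRows(NativeQuery, each [order_total] > 100)
in
    Filtered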
Troubleshooting
Your native query may throw the following error:
We cannot fold on top of this native query. Please modify the native query or remove the 'EnableFolding'
option.
A basic troubleshooting step is to check whether the query in Value.NativeQuery() throws the same error with a limit 1
clause around it:
select * from (query) _ limit 1
QuickBooks Online
10/30/2020 • 2 minutes to read • Edit Online
Summary
Power BI QuickBooks Online connector enables connecting to your QuickBooks Online account and viewing,
analyzing, and reporting on your company QuickBooks data in Power BI.
Release state: Beta
Products: Power BI Desktop, Power BI Service (Enterprise Gateway)
Authentication Types Supported: QuickBooks Online account
WARNING
QuickBooks Online has deprecated support for Internet Explorer 11, which Power Query Desktop uses for authentication to
online services. At this time, users might be unable to authenticate, but stored credentials should continue to work until
their existing authentication tokens expire.
Prerequisites
To use the QuickBooks Online connector, you must have a QuickBooks Online account username and password.
The QuickBooks Online connector uses the QuickBooks ODBC driver. The QuickBooks ODBC driver is shipped with
Power BI Desktop and no additional installation is required.
Capabilities Supported
Import
4. In the following dialog, enter your QuickBooks credentials. You may also be required to provide a two-factor
authentication (2FA) code.
5. In the following dialog, select a company and then select Next .
6. Once you've successfully signed in, select Connect .
7. In the Navigator dialog box, select the QuickBooks tables you want to load. You can then either load or
transform the data.
Known issues
Beginning on August 1, 2020, Intuit will no longer support Microsoft Internet Explorer 11 (IE 11) for QuickBooks
Online. When you use OAuth2 for authorizing QuickBooks Online, after August 1, 2020, only the following
browsers will be supported:
Microsoft Edge
Mozilla Firefox
Google Chrome
Safari 11 or newer (Mac only)
For more information, see Alert: Support for IE11 deprecating on July 31, 2020 for Authorization screens.
Next steps
QuickBooks Power BI integration
Salesforce Objects
10/30/2020 • 3 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Salesforce account
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
WARNING
By default, Salesforce does not support Internet Explorer 11, which is used as part of the authentication experience to online
services in Power Query Desktop. Please opt-in for extended support for accessing Lightning Experience Using Microsoft
Internet Explorer 11. You may also want to review Salesforce documentation on configuring Internet Explorer. At this time,
users might be unable to authenticate, but stored credentials should continue to work until their existing authentication
tokens expire.
Prerequisites
To use the Salesforce Objects connector, you must have a Salesforce account username and password.
In addition, Salesforce API access should be enabled. To verify access settings, go to your personal Salesforce page,
open your profile settings, and search for the API Enabled setting and make sure its checkbox is selected. Note that
Salesforce trial accounts do not have API access.
Capabilities Supported
Production
Custom
Custom domains
CNAME record redirects
Relationship columns
You can also select Custom and enter a custom URL to log in. This custom URL might be a custom domain
you've created within Salesforce, such as https://contoso.salesforce.com. You can also use the custom URL
selection if you are using your own CNAME record that redirects to Salesforce.
In addition, you can select Include relationship columns . This selection alters the query by including
columns that might have foreign-key relationships to other tables. If this box is unchecked, you won’t see
those columns.
Once you've selected the URL, select OK to continue.
3. Select Sign in to sign in to your Salesforce account.
Once you've successfully signed in, select Connect .
4. If this is the first time you've signed in using a specific app, you'll be asked to verify your authenticity by
entering a code sent to your email address. You'll then be asked whether you want the app you are using to
access the data. For example, you'll be asked if you want to allow Power BI Desktop to access your Salesforce
data. Select Allow .
5. In the Navigator dialog box, select the Salesforce Objects you want to load.
You can then either load or transform the data.
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Salesforce account
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
WARNING
By default, Salesforce does not support Internet Explorer 11, which is used as part of the authentication experience to online
services in Power Query Desktop. Please opt-in for extended support for accessing Lightning Experience Using Microsoft
Internet Explorer 11. You may also want to review Salesforce documentation on configuring Internet Explorer. At this time,
users might be unable to authenticate, but stored credentials should continue to work until their existing authentication
tokens expire.
Prerequisites
To use the Salesforce Reports connector, you must have a Salesforce account username and password.
In addition, Salesforce API access should be enabled. To verify access settings, go to your personal Salesforce page,
open your profile settings, and search for the API Enabled setting and make sure its checkbox is selected. Note that
Salesforce trial accounts do not have API access.
Capabilities Supported
Production
Custom
Custom domains
CNAME record redirects
You can also select Custom and enter a custom URL to log in. This custom URL might be a custom domain
you've created within Salesforce, such as https://contoso.salesforce.com. You can also use the custom URL
selection if you are using your own CNAME record that redirects to Salesforce.
Once you've selected the URL, select OK to continue.
3. Select Sign in to sign in to your Salesforce account.
Once you've successfully signed in, select Connect .
4. If this is the first time you've signed in using a specific app, you'll be asked to verify your authenticity by
entering a code sent to your email address. You'll then be asked whether you want the app you are using to
access the data. For example, you'll be asked if you want to allow Power BI Desktop to access your Salesforce
data. Select Allow .
5. In the Navigator dialog box, select the Salesforce Reports you want to load.
You can then either load or transform the data.
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Basic, Database, Windows
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
You'll need an SAP account to sign in to the website and download the drivers. If you're unsure, contact the SAP
administrator in your organization.
To use SAP HANA in Power BI Desktop or Excel, you must have the SAP HANA ODBC driver installed on the local
client computer for the SAP HANA data connection to work properly. You can download the SAP HANA Client tools
from SAP Development Tools, which contains the necessary ODBC driver. Or you can get it from the SAP Software
Download Center. In the Software portal, search for the SAP HANA CLIENT for Windows computers. Since the SAP
Software Download Center changes its structure frequently, more specific guidance for navigating that site isn't
available. For instructions about installing the SAP HANA ODBC driver, see Installing SAP HANA ODBC Driver on
Windows 64 Bits.
To use SAP HANA in Excel, you must have either the 32-bit or 64-bit SAP HANA ODBC driver (depending on
whether you're using the 32-bit or 64-bit version of Excel) installed on the local client computer.
This feature is only available in Excel for Windows if you have Office 2019 or a Microsoft 365 subscription. If you're
a Microsoft 365 subscriber, make sure you have the latest version of Office.
Capabilities Supported
Import
Direct Query
Advanced
SQL Statement
Also, you may need to validate the server certificate. For more information about using validate server
certificate selections, see Using SAP HANA encryption. In Power BI Desktop and Excel, the validate server
certificate selection is enabled by default. If you've already set up these selections in ODBC Data Source
Administrator, clear the Validate server certificate check box. To learn more about using ODBC Data
Source Administrator to set up these selections, see Configure SSL for ODBC client access to SAP HANA.
For more information about authentication, see Authentication with a data source.
Once you've filled in all required information, select Connect .
4. From the Navigator dialog box, you can either transform the data in the Power Query editor by selecting
Transform Data , or load the data by selecting Load .
3. Select the name of the on-premises data gateway to use for accessing the database.
NOTE
You must use an on-premises data gateway with this connector, whether your data is local or online.
4. Choose the authentication kind you want to use to access your data. You'll also need to enter a username
and password.
NOTE
Currently, Power Query Online does not support Windows authentication. Windows authentication support is
planned to become available in a few months.
Next steps
Enable encryption for SAP HANA
The following articles contain more information that you may find useful when connecting to an SAP HANA
database.
Manage your data source - SAP HANA
Use Kerberos for single sign-on (SSO) to SAP HANA
Enable encryption for SAP HANA
10/30/2020 • 4 minutes to read • Edit Online
We recommend that you encrypt connections to an SAP HANA server from Power Query Desktop and Power
Query Online. You can enable HANA encryption using both OpenSSL and SAP's proprietary CommonCryptoLib
(formerly known as sapcrypto) library. SAP recommends using CommonCryptoLib, but basic encryption features
are available using either library.
This article provides an overview of enabling encryption using OpenSSL, and references some specific areas of the
SAP documentation. We update content and links periodically, but for comprehensive instructions and support,
always refer to the official SAP documentation. If you want to set up encryption using CommonCryptoLib instead
of OpenSSL, see How to Configure TLS/SSL in SAP HANA 2.0. For steps on how to migrate from OpenSSL to
CommonCryptoLib, see SAP Note 2093286 (s-user required).
NOTE
The setup steps for encryption detailed in this article overlap with the setup and configuration steps for SAML SSO. Whether
you choose OpenSSL or CommonCryptoLib as your HANA server's encryption provider, make sure that your choice is
consistent across SAML and encryption configurations.
There are four phases to enabling encryption for SAP HANA using OpenSSL. We cover these phases next. For more
information, see Securing the Communication between SAP HANA Studio and SAP HANA Server through SSL.
Use OpenSSL
Ensure your HANA server is configured to use OpenSSL as its cryptographic provider. Replace the missing path
information below with the server ID (sid) of your HANA server.
openssl req -newkey rsa:2048 -days 365 -sha256 -keyout Server_Key.pem -out Server_Req.pem -nodes
This command creates a certificate signing request and private key. Once signed, the certificate is valid for a year
(see the -days parameter). When prompted for the common name (CN), enter the fully qualified domain name
(FQDN) of the computer the HANA server is installed on.
openssl x509 -req -days 365 -in Server_Req.pem -sha256 -extfile /etc/ssl/openssl.cnf -extensions
usr_cert -CA CA_Cert.pem -CAkey CA_Key.pem -CAcreateserial -out Server_Cert.pem
If you don't already have a CA you can use, you can create a root CA yourself by following the steps outlined
in Securing the Communication between SAP HANA Studio and SAP HANA Server through SSL.
2. Create the HANA server certificate chain by combining the server certificate, key, and the CA's certificate
(the key.pem name is the convention for SAP HANA):
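The combining command isn't reproduced here; a typical form, assuming the file names created in the previous
steps, looks like the following:
cat Server_Cert.pem Server_Key.pem CA_Cert.pem > key.pem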
3. Create a copy of CA_Cert.pem named trust.pem (the trust.pem name is the convention for SAP HANA):
cp CA_Cert.pem trust.pem
You must first convert trust.pem into a .crt file before you can import the certificate into the Trusted Root
Certification Authorities folder, for example by executing the following OpenSSL command:
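The conversion command isn't reproduced here; a typical form, assuming the trust.pem file from the previous step,
looks like the following:
openssl x509 -outform der -in trust.pem -out trust.crt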
Power BI service
2. Verify that you can successfully establish an encrypted connection to the server with the Validate server
certificate option enabled, by loading data in Power BI Desktop or refreshing a published report in Power
BI service.
You'll note that only the SSL crypto provider information is required. However, your implementation might
require that you also use the key store and trust store. For more information about these stores and how to create
them, see Client-Side TLS/SSL Connection Properties (ODBC).
Additional information
Server-Side TLS/SSL Configuration Properties for External Communication (JDBC/ODBC)
Next steps
Configure SSL for ODBC client access to SAP HANA
Configure SSL for ODBC client access to SAP HANA
10/30/2020 • 2 minutes to read • Edit Online
If you're connecting to an SAP HANA database from Power Query Online, you may need to set up various property
values to connect. These properties could be the SSL crypto provider, an SSL key store, and an SSL trust store. You
may also require that the connection be encrypted. In this case, you can use the ODBC Data Source Administrator
application supplied with Windows to set up these properties.
In Power BI Desktop and Excel, you can set up these properties when you first sign in using the Power Query SAP
HANA database connector. The Validate server certificate selection in the authentication dialog box is enabled
by default. You can then enter values in the SSL crypto provider, SSL key store, and SSL trust store
properties in this dialog box. However, all of the validate server certificate selections in the authentication dialog
box in Power BI Desktop and Excel are optional. They're optional in case you want to use ODBC Data Source
Administrator to set them up at the driver level.
NOTE
You must have the proper SAP HANA ODBC driver (32-bit or 64-bit) installed before you can set these properties in ODBC
Data Source Administrator.
If you're going to use ODBC Data Source Administrator to set up the SSL crypto provider, SSL key store, and SSL
trust store in Power BI or Excel, clear the Validate server certificate check box when presented with the
authentication dialog box.
To use ODBC Data Source Administrator to set up the validate server certificate selections:
1. From the Windows Start menu, select Windows Administrative Tools > ODBC Data Sources . If you're
using a 32-bit version of Power BI Desktop or Excel, open ODBC Data Sources (32-bit), otherwise open
ODBC Data Sources (64-bit).
2. In the User DSN tab, select Add .
3. In the Create New Data Source dialog box, select the HDBODBC driver, and then select Finish .
4. In the ODBC Configuration for SAP HANA dialog box, enter a Data source name . Then enter your
server and database information, and select Validate the TLS/SSL certificate.
5. Select the Advanced button.
6. In the Advanced ODBC Connection Property Setup dialog box, select the Add button.
7. In the Add/Modify Connection Property dialog box, enter sslCryptoProvider in the Property text box.
8. In the Value text box, enter the name of the crypto provider you'll be using: either sapcrypto,
commoncrypto, openssl, or mscrypto.
9. Select OK .
10. You can also add the optional sslKeyStore and sslTrustStore properties and values if necessary. If the
connection must be encrypted, add ENCRYPT as the property and TRUE as the value.
11. In the Advanced ODBC Connection Property Setup dialog box, select OK.
12. To test the connection you’ve set up, select Test connection in the ODBC Configuration for SAP HANA
dialog box.
13. When the test connection has completed successfully, select OK .
For more information about the SAP HANA connection properties, see Server-Side TLS/SSL Configuration
Properties for External Communication (JDBC/ODBC).
NOTE
If you select Validate server certificate in the SAP HANA authentication dialog box in Power BI Desktop or Excel, any
values you enter in SSL crypto provider, SSL key store, and SSL trust store in the authentication dialog box will
override any selections you've set up using ODBC Data Source Administrator.
Next steps
SAP HANA database connector troubleshooting
Troubleshooting
10/30/2020 • 2 minutes to read • Edit Online
The following section describes some issues that may occur while using the Power Query SAP HANA connector,
along with some possible solutions.
If you’re on a 64-bit machine, but Excel or Power BI Desktop is 32-bit (like the screenshots below), you can check
for the driver in the WOW6432 node instead:
HKEY_LOCAL_MACHINE\Software\WOW6432Node\ODBC\ODBCINST.INI\ODBC Drivers
Note that the driver needs to match the bit version of your Excel or Power BI Desktop. If you’re using:
32-bit Excel/Power BI Desktop, you'll need the 32-bit ODBC driver (HDBODBC32).
64-bit Excel/Power BI Desktop, you'll need the 64-bit ODBC driver (HDBODBC).
The driver is usually installed by running hdbsetup.exe.
Finally, the driver should also show up as "ODBC DataSources 32-bit" or "ODBC DataSources 64-bit".
Unfortunately, this is an SAP issue so you'll need to wait for a fix from SAP.
SharePoint Folder
10/30/2020 • 3 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Anonymous, Microsoft Account, Windows
Function Reference Documentation: SharePoint.Contents, SharePoint.Files
Capabilities supported
Folder path
Combine
Combine and load
Combine and transform
b. If this is the first time you've visited this site address, select the appropriate authentication method.
Enter your credentials and choose which level to apply these settings to. Then select Connect.
For more information about authentication methods, see Authentication with a data source.
4. If you're connecting from Power Query Online:
a. Paste the address into the Site URL text box in the SharePoint folder dialog box. In this case, the
site URL is https://contoso.sharepoint.com/marketing/data .
b. If the SharePoint folder is on-premises, enter the name of an on-premises data gateway.
c. Select the authentication kind, and enter any credentials that are required.
d. Select Next .
5. When you select the SharePoint folder you want to use, the file information about all of the files in that
SharePoint folder is displayed. In addition, file information about any files in any subfolders is also
displayed.
6. Select Combine & Transform Data to combine the data in the files of the selected SharePoint folder and
load the data into the Power Query Editor for editing. Or select Combine & Load to load the data from all
of the files in the SharePoint folder directly into your app.
NOTE
The Combine & Transform Data and Combine & Load buttons are the easiest ways to combine data found in the files
of the SharePoint folder you specify. You could also use the Load button (in Power BI Desktop only) or the Transform Data
buttons to combine the files as well, but that requires more manual steps.
Troubleshooting
Combining files
All of the files in the SharePoint folder you select will be included in the data to be combined. If you have data files
located in a subfolder of the SharePoint folder you select, all of these files will also be included. To ensure that
combining the file data works properly, make sure that all of the files in the folder and the subfolders have the
same schema.
In some cases, you might have multiple folders on your SharePoint site containing different types of data. In this
case, you'll need to remove the unnecessary files from the file list. To remove these files:
1. In the list of files from the SharePoint folder you chose, select Transform Data .
2. In the Power Query editor, scroll down to find the files you want to keep.
3. In the example shown in the screenshot above, the required files are the last rows in the table. Select
Remove Rows , enter the value of the last row before the files to keep (in this case 903), and select OK .
4. Once you've removed all the unnecessary files, select Combine Files from the Home ribbon to combine
the data from all of the remaining files.
For more information about combining files, see Combine files in Power Query.
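If you prefer to filter the file list in M rather than removing rows by position, a sketch along the following lines
keeps only the files under one folder before you combine them. The site URL and folder path are placeholders:
let
    // Placeholder site URL; SharePoint.Files lists every file in the site, including files in subfolders
    Files = SharePoint.Files("https://contoso.sharepoint.com/marketing/data", [ApiVersion = 15]),
    // Keep only files whose folder path matches the folder you want to combine
    Filtered = Table.SelectRows(Files, each Text.Contains([Folder Path], "/Shared Documents/Sales/"))
in
    Filtered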
Filename special characters
If a filename contains certain special characters, it may lead to authentication errors due to the filename being
truncated in the URL. If you're getting unusual authentication errors, make sure that none of the filenames you're
using contain any of the following special characters:
# % $
If these characters are present in the filename, the file owner must rename the file so that it does NOT contain any
of these characters.
SharePoint List
10/30/2020 • 3 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Anonymous, Windows, Microsoft Account
Function Reference Documentation: SharePoint.Contents, SharePoint.Files, SharePoint.Tables
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Capabilities supported
Site URL
If the URL address you enter is invalid, a warning icon will appear next to the Site URL textbox.
4. You may or may not see a SharePoint access screen like the following image. If you don't see it, skip to step
8. If you do see it, select the type of credentials you will use to access your SharePoint site on the left side of
the page (in this example, a Microsoft account).
5. Select the level you want to apply these sign-in settings to.
The level you select for the authentication method determines what part of a URL will have the
authentication method applied to it. If you select the top-level web address, the authentication method you
select here will be used for that URL address or any sub-address within that address. However, you might
not want to set the top URL address to a specific authentication method because different sub-addresses
could require different authentication methods. For example, if you were accessing two separate folders of a
single SharePoint site and wanted to use different Microsoft Accounts to access each one.
Once you have set the authentication method for a specific web site address, you won't need to select the
authentication method for that URL address or any sub-address again. For example, if you select the
https://contoso.sharepoint.com/ address in this dialog, any SharePoint site that begins with this address will
not require that you select the authentication method again.
NOTE
If you need to change the authentication method because you accidentally entered the incorrect information or are
receiving an "unable to connect" message, see Change the authentication method.
6. Select Sign In and enter the user name and password you use to sign in to Microsoft Office 365.
7. When you finish signing in, select Connect .
8. From the Navigator dialog, you can select a location, then either transform the data in the Power Query
editor by selecting Transform Data , or load the data by selecting Load .
Troubleshooting
Use root SharePoint address
Make sure you supply the root address of the SharePoint site, without any subfolders or documents. For example,
use a link similar to the following: https://contoso.sharepoint.com/teams/ObjectModel/
Change the authentication method
In some cases, you may need to change the authentication method you use to access a particular SharePoint site. If
this is necessary, see Change the authentication method.
Inconsistent behavior around boolean data
When using the SharePoint List connector, Boolean values are represented inconsistently as TRUE/FALSE or 1/0 in
Power BI Desktop and Power BI service environments. This may result in incorrect data, incorrect filters, and empty
visuals.
This issue only happens when the Data Type is not explicitly set for a column in the Query View of Power BI
Desktop. You can tell that the data type isn't set by seeing the "ABC 123" image on the column and "Any" data type
in the ribbon as shown below.
The user can force the interpretation to be consistent by explicitly setting the data type for the column through the
Power Query Editor. For example, the following image shows the column with an explicit Boolean type.
Next steps
Optimize Power Query when expanding table columns
SharePoint Online List
10/30/2020 • 2 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Anonymous, Windows, Microsoft Account
Function Reference Documentation: SharePoint.Contents, SharePoint.Files, SharePoint.Tables
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Capabilities supported
Site URL
If the URL address you enter is invalid, a warning icon will appear next to the Site URL textbox.
4. You may or may not see a SharePoint access screen like the following image. If you don't see it, skip to step
8. If you do see it, select the type of credentials you will use to access your SharePoint site on the left side of
the page (in this example, a Microsoft account).
5. Select the level you want to apply these sign-in settings to.
The level you select for the authentication method determines what part of a URL will have the
authentication method applied to it. If you select the top-level web address, the authentication method you
select here will be used for that URL address or any sub-address within that address. However, you might
not want to set the top URL address to a specific authentication method because different sub-addresses
could require different authentication methods. For example, if you were accessing two separate folders of a
single SharePoint site and wanted to use different Microsoft Accounts to access each one.
Once you have set the authentication method for a specific web site address, you won't need to select the
authentication method for that URL address or any sub-address again. For example, if you select the
https://contoso.sharepoint.com/ address in this dialog, any SharePoint site that begins with this address will
not require that you select the authentication method again.
NOTE
If you need to change the authentication method because you accidentally entered the incorrect information or are
receiving an "unable to connect" message, see Change the authentication method.
6. Select Sign In and enter the user name and password you use to sign in to Microsoft Office 365.
7. When you finish signing in, select Connect .
8. From the Navigator dialog, you can select a location, then either transform the data in the Power Query
editor by selecting Transform Data , or load the data by selecting Load .
Troubleshooting
Use root SharePoint address
Make sure you supply the root address of the SharePoint site, without any subfolders or documents. For example,
use a link similar to the following: https://contoso.sharepoint.com/teams/ObjectModel/
Change the authentication method
In some cases, you may need to change the authentication method you use to access a particular SharePoint site. If
this is necessary, see Change the authentication method.
SQL Server
10/30/2020 • 2 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel, Flow
Authentication Types Supported: Database (Username/Password), Windows
M Function Reference
NOTE
Some capabilities may be present in one product but not others due to deployment schedules and host-specific capabilities.
Prerequisites
By default, Power BI installs an OLE DB driver for SQL Server. However, for optimal performance, we recommend
that you install the SQL Server Native Client before using the SQL Server connector. SQL Server Native Client 11.0
and SQL Server Native Client 10.0 are both supported in the latest version.
Capabilities Supported
Import
DirectQuery (Power BI only, learn more)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
SQL Server failover support
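For reference, most of these advanced options surface as fields of the options record of the Sql.Database function.
The following is a sketch only—the server, database, and query text are placeholders, and the relationship columns
selection corresponds to the CreateNavigationProperties option when you browse tables instead of supplying a query:
let
    // Placeholder server and database; Query is the native SQL statement, CommandTimeout is the timeout as a duration
    Source = Sql.Database("myserver.contoso.com", "AdventureWorks",
        [
            Query = "select top 100 * from Sales.SalesOrderHeader",
            CommandTimeout = #duration(0, 0, 10, 0)
        ])
in
    Source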
NOTE
If the connection is not encrypted, you'll be prompted with the following dialog.
Select OK to connect to the database by using an unencrypted connection, or follow the instructions to set up
encrypted connections to SQL Server.
Next steps
Optimize Power Query when expanding table columns
Text/CSV
10/30/2020 • 5 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Function Reference Documentation: File.Contents, Lines.FromBinary, Csv.Document
Capabilities supported
Import
Power Query will treat CSVs as structured files with a comma as a delimiter—a special case of a text file. If you
choose a text file, Power Query will automatically attempt to determine whether it has delimiter-separated values, and
what that delimiter is. If it can infer a delimiter, it will automatically treat it as a structured data source.
Unstructured Text
If your text file doesn't have structure, you'll get a single column with a new row per line encoded in the source
text. As a sample for unstructured text, you can consider a notepad file with the following contents:
Hello world.
This is sample data.
When you load it, you're presented with a navigation screen that loads each of these lines into their own row.
The only thing you can configure in this dialog is the File Origin dropdown. This dropdown
lets you select which character set was used to generate the file. Currently, the character set isn't inferred, and UTF-8
will only be inferred if the file starts with a UTF-8 BOM.
CSV
You can find a sample CSV file here.
In addition to file origin, CSV also supports specifying the delimiter and how data type detection will be handled.
Delimiters available include colon, comma, equals sign, semicolon, space, tab, a custom delimiter (which can be
any string), and a fixed width (splitting up text by some standard number of characters).
The final dropdown lets you select how you want to handle data type detection. Detection can be based on the
first 200 rows or on the entire data set, or you can skip automatic data type detection and let all
columns default to 'Text'. Be aware that detection based on the entire data set may cause the initial load of the data
in the editor to be slower.
Since inference can be incorrect, it's worth double checking settings before loading.
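For reference, these dialog choices surface as parameters of Csv.Document in the generated M. The following is a
sketch only—the file path and the Price column are placeholders:
let
    // Placeholder file path; Delimiter and Encoding (65001 = UTF-8) mirror the dialog selections
    Source = Csv.Document(File.Contents("C:\Samples\products.csv"),
        [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.Csv]),
    Promoted = Table.PromoteHeaders(Source),
    // Setting types explicitly replaces automatic data type detection; "Price" is a hypothetical column
    Typed = Table.TransformColumnTypes(Promoted, {{"Price", type number}})
in
    Typed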
Structured Text
When Power Query can detect structure to your text file, it will treat the text file as a delimiter separated value file,
and give you the same options available when opening a CSV—which is essentially just a file with an extension
indicating the delimiter type.
For example, if you save the following as a text file, it will be read as having a tab delimiter rather than as
unstructured text.
For example, if you edit the 'structured' sample provided above, you can add a line break.
If Line breaks is set to Ignore quoted line breaks, the file will load as if there were no line break (with an extra space).
If Line breaks is set to Apply all line breaks , it will load an extra row, with the content after the line breaks
being the only content in that row (exact output may depend on structure of the file contents).
The Open file as dropdown will let you edit what you want to load the file as—important for troubleshooting. For
structured files that aren't technically CSVs (such as a tab separated value file saved as a text file), you should still
have Open file as set to CSV. This setting also determines which dropdowns are available in the rest of the dialog.
Text/CSV by Example (preview)
Text/CSV By Example in Power Query is now available as a public preview feature in Power BI Desktop. To start
using Text/CSV By Example:
1. In Power BI Desktop, under the File tab, select Options and settings > Options .
2. In the Options page, under Global , select Preview features .
3. Under Preview features, select Import text using examples. Then select OK.
Now when you use the Text/CSV connector, you'll see a new option to Extract Table Using Examples on the
bottom-left corner of the file preview dialog.
When you select that new button, you’ll be taken into the Extract Table Using Examples page. On this page, you
specify sample output values for the data you’d like to extract from your Text/CSV file. After you enter the first cell
of the column, other cells in the column are filled out. For the data to be extracted correctly, you may need to enter
more than one cell in the column. If some cells in the column are incorrect, you can fix the first incorrect cell and
the data will be extracted again. Check the data in the first few cells to ensure that the data has been extracted
successfully.
NOTE
We recommend that you enter the examples in column order. Once the column has successfully been filled out, create a new
column and begin entering examples in the new column.
Once you’re done constructing that table, you can either select to load or transform the data. Notice how the
resulting queries contain a detailed breakdown of all the steps that were inferred for the data extraction. These
steps are just regular query steps that you can customize as needed.
Troubleshooting
Loading Files from the Web
If you're requesting text/csv files from the web and also promoting headers, and you’re retrieving enough files that
you need to be concerned with potential throttling, you should consider wrapping your Web.Contents call with
Binary.Buffer() . In this case, buffering the file before promoting headers will cause the file to only be requested
once.
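A sketch of this pattern follows; the URL is a placeholder. The buffered binary is downloaded once, and the header
promotion then happens locally:
let
    // Placeholder URL; Binary.Buffer ensures the file is requested only once
    Raw = Binary.Buffer(Web.Contents("https://example.com/data/sales.csv")),
    Csv = Csv.Document(Raw, [Delimiter = ","]),
    Promoted = Table.PromoteHeaders(Csv)
in
    Promoted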
Unstructured text being interpreted as structured
In rare cases, a document that has a similar number of commas across paragraphs might be interpreted as a CSV. If
this issue happens, edit the Source step in the Query Editor, and select Text instead of CSV in the Open File As
dropdown.
XML
10/30/2020 • 2 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Function Reference Documentation: Xml.Tables, Xml.Document
Capabilities supported
Import
You'll be presented with the table that the connector loads, which you can then Load or Transform.
Load from web
If you want to load an XML file from the web, instead of selecting the XML connector you can select the Web
connector. Paste in the address of the desired file and you'll be prompted with an authentication selection, since
you're accessing a website instead of a static file. If there's no authentication, you can just select Anonymous . As in
the local case, you'll then be presented with the table that the connector loads by default, which you can Load or
Transform.
Troubleshooting
Data Structure
Because many XML documents have ragged or nested data, you may have to do extra data shaping to
get the data into a form that's convenient for analytics. This holds true whether you use the UI-accessible
Xml.Tables function or the Xml.Document function. Depending on your needs, you may find you have to
do more or less data shaping.
Text versus nodes
If your document contains a mixture of text and non-text sibling nodes, you may encounter issues.
For example, if you have a node like this:
<abc>
Hello <i>world</i>
</abc>
Xml.Tables will return the "world" portion but ignore "Hello". Only the element(s) are returned, not the text.
However, Xml.Document will return "Hello <i>world</i>". The entire inner node is turned to text, and structure isn't
preserved.
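A quick way to compare the two functions on the snippet above is to evaluate both against the same text in a blank
query. This is a sketch only; the exact shape of each result depends on the document:
let
    SampleXml = "<abc>Hello <i>world</i></abc>",
    // Xml.Tables keeps only element nodes, so the leading "Hello" text isn't returned
    AsTables = Xml.Tables(SampleXml),
    // Xml.Document flattens the mixed content of <abc> into a single text value
    AsDocument = Xml.Document(SampleXml)
in
    [Tables = AsTables, Document = AsDocument]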
Web
10/30/2020 • 8 minutes to read • Edit Online
Summary
Release State: General Availability
Products: Power BI Desktop, Power BI Service (Enterprise Gateway), Dataflows in PowerBI.com (Enterprise
Gateway), Dataflows in PowerApps.com (Enterprise Gateway), Excel
Authentication Types Supported: Anonymous, Windows, Basic, Web API, Organizational Account
Function Reference Documentation: Web.Page, Web.BrowserContents
Prerequisites
Internet Explorer 10
Capabilities supported
Basic
Advanced
URL parts
Command timeout
HTTP request header parameters
If the URL address you enter is invalid, a warning icon will appear next to the URL textbox.
3. Select the authentication method to use for this web site. In this example, select Anonymous . Then select
the level you want to apply these settings to—in this case, https://en.wikipedia.org/ . Then select
Connect .
The level you select for the authentication method determines what part of a URL will have the
authentication method applied to it. If you select the top-level web address, the authentication method you
select here will be used for that URL address or any subaddress within that address. However, you might
not want to set the top URL address to a specific authentication method because different subaddresses
could require different authentication methods. For example, if you were accessing two separate folders of
a single SharePoint site and wanted to use different Microsoft Accounts to access each one.
Once you've set the authentication method for a specific web site address, you won't need to select the
authentication method for that URL address or any subaddress again. For example, if you select the
https://en.wikipedia.org/ address in this dialog, any web page that begins with this address won't require
that you select the authentication method again.
NOTE
If you need to change the authentication method later, see Changing the authentication method.
4. From the Navigator dialog, you can select a table, then either transform the data in the Power Query
editor by selecting Transform Data , or load the data by selecting Load .
The right side of the Navigator dialog displays the contents of the table you select to transform or load. If
you're uncertain which table contains the data you're interested in, you can select the Web View tab. The
web view lets you see the entire contents of the web page, and highlights each of the tables that have been
detected on that site. You can select the check box above the highlighted table to obtain the data from that
table.
On the lower left side of the Navigator dialog, you can also select the Add table using examples button.
This selection presents an interactive window where you can preview the content of the web page and enter
sample values of the data you want to extract. For more information on using this feature, see Get webpage
data by providing examples.
Then select OK .
3. Select Anonymous as the authentication type, and then select Connect .
4. Power Query Editor will now open with the data contained in the JSON file. Select the View tab in the
Power Query Editor, then select Formula Bar to turn on the formula bar in the editor.
As you can see, the Web Connector returns the web contents from the URL you supplied, and then
automatically wraps the web contents in the appropriate document type specified by the URL (
Json.Document in this example).
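As an illustration (the URL here is a placeholder), pointing the Web connector at a JSON file produces a formula
similar to the following:
let
    // Placeholder URL; the connector wraps the Web.Contents call in Json.Document for JSON content
    Source = Json.Document(Web.Contents("https://example.com/data/sample.json"))
in
    Source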
Troubleshooting
Using a gateway with the Web connector
If you're using the Web connector through an on-premises data gateway, you must have Internet Explorer 10
installed on the gateway machine. This installation will ensure that the Web.Page call through the gateway will
work correctly.
Changing the authentication method
In some cases, you may need to change the authentication method you use to access a particular site. If this
change is necessary, see Change the authentication method.
Limitations on Web connector authentication for HTML content
NOTE
The limitations described in this section only apply to HTML web pages. Opening other kinds of files from the web using this
connector isn't affected by these limitations.
The legacy Power Query Web connector automatically creates a Web.Page query that supports authentication. The
only limitation occurs if you select Windows authentication in the authentication method dialog box. In this case,
the Use my current credentials selection works correctly, but Use alternate credentials wouldn't
authenticate.
The new version of the Web connector (currently available in Power BI Desktop) automatically creates a
Web.BrowserContents query. Such queries currently only support anonymous authentication. In other words, the
new Web connector can't be used to connect to a source that requires non-anonymous authentication. This
limitation applies to the Web.BrowserContents function, regardless of the host environment.
Currently, Power BI Desktop automatically uses the Web.BrowserContents function. The Web.Page function is still
used automatically by Excel and Power Query Online. Power Query Online does support Web.BrowserContents
using an on-premises data gateway, but you currently would have to enter such a formula manually. When Web
By Example becomes available in Power Query Online in mid-October 2020, this feature will use
Web.BrowserContents .
The Web.Page function requires that you have Internet Explorer 10 installed on your computer. When refreshing a
Web.Page query via an on-premises data gateway, the computer containing the gateway must have Internet
Explorer 10 installed. If you use only the Web.BrowserContents function, you don't need to have Internet Explorer
10 installed on your computer or the computer containing the on-premises data gateway.
In cases where you need to use Web.Page instead of Web.BrowserContents because of authentication issues, you
can still manually use Web.Page .
In Power BI Desktop, you can use the older Web.Page function by clearing the New web table inference preview
feature:
1. Under the File tab, select Options and settings > Options .
2. In the Global section, select Preview features .
3. Clear the New web table inference preview feature, and then select OK .
4. Restart Power BI Desktop.
NOTE
Currently, you can't turn off the use of Web.BrowserContents in Power BI Desktop optimized for Power BI Report
Server.
You can also get a copy of a Web.Page query from Excel. To copy the code from Excel:
1. Select From Web from the Data tab.
2. Enter the address in the From Web dialog box, and then select OK .
3. In Navigator , choose the data you want to load, and then select Transform Data .
4. In the Home tab of Power Query, select Advanced Editor .
5. In the Advanced Editor , copy the M formula.
6. In the app that uses Web.BrowserContents, select the Blank Query connector.
7. If you are copying to Power BI Desktop:
a. In the Home tab, select Advanced Editor .
b. Paste the copied Web.Page query in the editor, and then select Done .
8. If you are copying to Power Query Online:
a. In the Blank Query, paste the copied Web.Page query in the blank query.
b. Select an on-premises data gateway to use.
c. Select Next .
You can also manually enter the following code into a blank query. Ensure that you enter the address of the web
page you want to load.
let
Source = Web.Page(Web.Contents("<your address here>")),
Navigation = Source{0}[Data]
in
Navigation
Get webpage data by providing examples
10/30/2020 • 2 minutes to read • Edit Online
NOTE
This feature will become available in Power Query Online in mid-October. In this upcoming release, the Power Query
Online experience won’t contain an inline preview of the website in the By example dialog.
Getting data from a web page lets users easily extract data from web pages. Often however, data on Web pages
aren't in tidy tables that are easy to extract. Getting data from such pages can be challenging, even if the data is
structured and consistent.
There's a solution. With the Get Data from Web by example feature, you can essentially show Power Query data you
want to extract by providing one or more examples within the connector dialog. Power Query gathers other data on
the page that matches your examples. With this solution, you can extract all sorts of data from web pages, including
data found in tables and other non-table data.
NOTE
Prices listed in the images are for example purposes only.
Add table using examples presents an interactive window where you can preview the content of the Web page.
Enter sample values of the data you want to extract.
In this example, you'll extract the Name and Price for each of the games on the page. You can do that by specifying
a couple of examples from the page for each column. As you enter examples, Power Query extracts data that fits the
pattern of example entries using smart data extraction algorithms.
NOTE
Value suggestions only include values less than or equal to 128 characters in length.
Once you're happy with the data extracted from the Web page, select OK to go to Power Query Editor. You can
apply more transformations or shape the data, such as combining this data with other data sources.
Next steps
Add a column from examples
Shape and combine data
Getting data
Power Query Online Limits
1/17/2020 • 2 minutes to read • Edit Online
Summary
Power Query Online is integrated into a variety of Microsoft products. Since these products target different
scenarios, they may set different limits for Power Query Online usage.
Limits are enforced at the beginning of query evaluations. Once an evaluation is underway, only timeout limits are
imposed.
Limit Types
Hourly Evaluation Count: The maximum number of evaluation requests a user can issue during any 60-minute
period
Daily Evaluation Time: The net time a user can spend evaluating queries during any 24-hour period
Concurrent Evaluations: The maximum number of evaluations a user can have running at any given time
Authoring Limits
Authoring limits are the same across all products. During authoring, query evaluations return previews that may be
subsets of the data. Data is not persisted.
Hourly Evaluation Count: 1000
Daily Evaluation Time: Currently unrestricted
Per Query Timeout: 10 minutes
Refresh Limits
During refresh (either scheduled or on-demand), query evaluations return complete results. Data is typically
persisted in storage.
Preserving Sort
In Power Query, you'll often want to sort your data before you perform some other operation. For example, if you
wanted to sort a sales table by the Store ID and the sale amount, and then you wanted to perform a group, you
might expect sort order to be preserved. However, due to how operation application works, sort order is not
preserved through aggregations or joins.
If you sorted your table, applied an aggregation, and then applied a distinct on top of the sorted data,
you might be surprised to find out that you had lost both the first and the second sort. In other words, if you had
two rows with sales for a single store, and you had them sorted in descending order so that the first row had a
greater dollar value than the second, you might find that the second row was preserved when you ran a distinct on
the Store ID.
There are ways to make this work via a smart combination of aggregations, but these aren't exposed by the user
experience. There are too many possible transformations here to give an example for every outcome, but here is
how you might address the problem above.
let
    Source = Sql.Database("Server", "AdventureWorks"),
    Sales_SalesPerson = Source{[Schema="Sales",Item="SalesPerson"]}[Data],
    #"Grouped Rows" = Table.Group(Sales_SalesPerson, {"TerritoryID"}, {{"Rows", each _}}),
    Custom1 = Table.TransformColumns(#"Grouped Rows", {{"Rows", each Table.FirstN(Table.Sort(_, {{"SalesYTD", Order.Descending}}), 1)}})
in
    Custom1
The data you want is the entire record with the highest SalesYTD in each TerritoryID. If you only wanted the max,
this would be a simple aggregation—but you want the entire input record. To get this, you need to group by
TerritoryID and then sort inside each group, keeping the first record.
If you run into this error, it's most likely a networking issue. Generally, the first people to check with are the owners
of the data source you're attempting to connect to. If they don’t think they’re the ones closing the connection, then
it’s possible something along the way is (for example, a proxy server, intermediate routers/gateways, and so on).
Whether this reproduces with any data size or only with larger data sizes, it's likely that there's a network timeout
somewhere on the route. If it's only with larger data, customers should consult with the data source owner to see if
their APIs support paging, so that they can split their requests into smaller chunks. Failing that, alternative ways to
extract data from the API (following data source best practices) should be followed.
Installing the Power Query SDK
2/20/2020 • 3 minutes to read • Edit Online
Quickstart
NOTE
The steps to enable extensions changed in the June 2017 version of Power BI Desktop.
1. Install the Power Query SDK from the Visual Studio Marketplace.
2. Create a new data connector project.
3. Define your connector logic.
4. Build the project to produce an extension file.
5. Copy the extension file into [Documents]/Power BI Desktop/Custom Connectors.
6. Check the option (Not Recommended) Allow any extension to load without validation or warning in
Power BI Desktop (under File | Options and settings | Options | Security | Data Extensions).
7. Restart Power BI Desktop.
Step by step
Creating a new extension in Visual Studio
Installing the Power Query SDK for Visual Studio will create a new Data Connector project template in Visual
Studio.
This creates a new project containing the following files:
Connector definition file (.pq)
A query test file (.query.pq)
A string resource file (resources.resx)
PNG files of various sizes used to create icons
Your connector definition file will start with an empty Data Source description. See the Data Source Kind section
later in this document for details.
Testing in Visual Studio
The Power Query SDK provides basic query execution capabilities, allowing you to test your extension without
having to switch over to Power BI Desktop. See Query File for more details.
Build and deploy from Visual Studio
Building your project will produce your .pqx file.
Data Connector projects don't support custom post build steps to copy the extension file to
your [Documents]\Microsoft Power BI Desktop\Custom Connectors directory. If this is something you want to do,
you may want to use a third party Visual Studio extension, such as Auto Deploy.
Extension files
Power Query extensions are bundled in a ZIP file and given a .mez file extension. At runtime, Power BI Desktop will
load extensions from the [Documents]\Microsoft Power BI Desktop\Custom Connectors directory.
NOTE
In an upcoming change the default extension will be changed from .mez to .pqx.
To get you up to speed with Power Query, this page lists some of the most common questions.
What software do I need to get started with the Power Query SDK?
You need to install the Power Query SDK in addition to Visual Studio. To be able to test your connectors, you should
also have Power BI Desktop installed.
What can you do with a Connector?
Data Connectors allow you to create new data sources or customize and extend an existing source. Common use
cases include:
Creating a business analyst-friendly view for a REST API.
Providing branding for a source that Power Query supports with an existing connector (such as an OData
service or ODBC driver).
Implementing OAuth v2 authentication flow for a SaaS offering.
Exposing a limited or filtered view over your data source to improve usability.
Enabling DirectQuery for a data source using an ODBC driver.
Data Connectors are currently only supported in Power BI Desktop.
Creating your first connector: Hello World
12/10/2019 • 2 minutes to read • Edit Online
HelloWorld = [
Authentication = [
Implicit = []
],
Label = Extension.LoadString("DataSourceLabel")
];
HelloWorld.Publish = [
Beta = true,
ButtonText = { Extension.LoadString("FormulaTitle"), Extension.LoadString("FormulaHelp") },
SourceImage = HelloWorld.Icons,
SourceTypeImage = HelloWorld.Icons
];
HelloWorld.Icons = [
Icon16 = { Extension.Contents("HelloWorld16.png"), Extension.Contents("HelloWorld20.png"),
Extension.Contents("HelloWorld24.png"), Extension.Contents("HelloWorld32.png") },
Icon32 = { Extension.Contents("HelloWorld32.png"), Extension.Contents("HelloWorld40.png"),
Extension.Contents("HelloWorld48.png"), Extension.Contents("HelloWorld64.png") }
];
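The listing above omits the shared function itself. A minimal sketch that's consistent with the behavior described
below—a function that returns the text "Hello world"—might look like this; the sample in the Data Connectors repo
may differ:
[DataSource.Kind="HelloWorld", Publish="HelloWorld.Publish"]
shared HelloWorld.Contents = (optional message as text) =>
    let
        // Default to "Hello world" when no message is supplied
        result = if message <> null then message else "Hello world"
    in
        result;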
Once you've built the file and copied it to the correct directory, following the instructions in the Installing the
Power Query SDK tutorial, open Power BI Desktop. You can search for "hello" to find your connector in the Get Data dialog.
This step will bring up an authentication dialog. Since there are no authentication options and the function takes no
parameters, there are no further steps in these dialogs.
Press Connect and the dialog will tell you that it's a "Preview connector", since Beta is set to true in the query.
Since there's no authentication, the authentication screen will present a tab for Anonymous authentication with no
fields. Press Connect again to finish.
Finally, the query editor will come up showing what you expect—a function that returns the text "Hello world".
For the fully implemented sample, see the Hello World Sample in the Data Connectors sample repo.
TripPin Tutorial
10/30/2020 • 2 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant to
be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding new
capabilities to your connector.
This tutorial uses a public OData service (TripPin) as a reference source. Although this lesson requires the use of the
M engine's OData functions, subsequent lessons will use Web.Contents, making it applicable to (most) REST APIs.
Prerequisites
The following applications will be used throughout this tutorial:
Power BI Desktop, May 2017 release or later
Power Query SDK for Visual Studio
Fiddler—Optional, but recommended for viewing and debugging requests to your REST service
It's strongly suggested that you review:
Installing the PowerQuery SDK
Starting to Develop Custom Connectors
Creating your first connector: Hello World
Handling Data Access
Handling Authentication
Parts
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Create a new Data Connector project using the Visual Studio SDK
Author a base function to pull data from a source
Test your connector in Visual Studio
Register your connector in Power BI Desktop
Open the TripPin.pq file and paste in the following connector definition.
section TripPin;
[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Feed = Value.ReplaceType(TripPinImpl, type function (url as Uri.Type) as any);
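The declaration above references a TripPinImpl function, along with TripPin (the Data Source Kind record) and TripPin.Publish records that aren't reproduced here. A minimal sketch of what they might look like, assuming anonymous authentication and OData.Feed as the underlying implementation (the label and button text values are placeholders):

TripPinImpl = (url as text) =>
    let
        // let OData.Feed do the heavy lifting for this first version of the connector
        source = OData.Feed(url)
    in
        source;

// Data Source Kind description
TripPin = [
    Authentication = [
        Anonymous = []
    ],
    Label = "TripPin Sample"
];

// Data Source UI publishing description
TripPin.Publish = [
    Beta = true,
    Category = "Other",
    ButtonText = { "TripPin Sample", "TripPin Sample" }
];

With the definition in place, you can exercise the connector from your TripPin.query.pq test file with a call such as the following: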
TripPin.Feed("https://services.odata.org/v4/TripPinService/")
You can try out a few different OData URLs in the test file to see how different results are returned. For
example:
https://services.odata.org/v4/TripPinService/Me
https://services.odata.org/v4/TripPinService/GetPersonWithMostFriends()
https://services.odata.org/v4/TripPinService/People
The TripPin.query.pq file can contain single statements, let statements, or full section documents.
let
Source = TripPin.Feed("https://services.odata.org/v4/TripPinService/"),
People = Source{[Name="People"]}[Data],
SelectColumns = Table.SelectColumns(People, {"UserName", "FirstName", "LastName"})
in
SelectColumns
Open Fiddler to capture HTTP traffic, and run the query. You should see a few different requests to
services.odata.org, generated by the mashup container process. You can see that accessing the root URL of the
service results in a 302 status and a redirect to the longer version of the URL. Following redirects is another
behavior you get “for free” from the base library functions.
If you look at the request URLs, you can see the query folding that happened with the SelectColumns
statement.
https://services.odata.org/v4/TripPinService/People?$select=UserName%2CFirstName%2CLastName
If you add more transformations to your query, you can see how they impact the generated URL.
This behavior is important to note. Even though you did not implement explicit folding logic, your connector
inherits these capabilities from the OData.Feed function. M statements are composable—filter contexts will flow
from one function to another, whenever possible. This is similar in concept to the way data source functions used
within your connector inherit their authentication context and credentials. In later lessons, you'll replace the use of
OData.Feed, which has native folding capabilities, with Web.Contents, which does not. To get the same level of
capabilities, you'll need to use the Table.View interface and implement your own explicit folding logic.
Double click on the function name and the function invocation dialog will appear. Enter the root URL of the service
(https://services.odata.org/v4/TripPinService/), and select OK .
Since this is the first time you are accessing this data source, you'll receive a prompt for credentials. Check that the
shortest URL is selected, and then select Connect .
Notice that instead of getting a simple table of data, the navigator appears. This is because the OData.Feed
function returns a table with special metadata on top of it that the Power Query experience knows to display as a
navigation table. This walkthrough will cover how you can create and customize your own navigation table in a
future lesson.
Select the Me table, and then select Edit . Notice that the columns already have types assigned (well, most of
them). This is another feature of the underlying OData.Feed function. If you watch the requests in Fiddler, you'll
see that you've fetched the service's $metadata document. The engine's OData implementation does this
automatically to determine the service's schema, data types, and relationships.
Conclusion
This lesson walked you through the creation of a simple connector based on the OData.Feed library function. As
you saw, very little logic is needed to enable a fully functional connector over the OData base function. Other
extensibility enabled functions, such as ODBC.DataSource, provide similar capabilities.
In the next lesson, you'll replace the use of OData.Feed with a less capable function—Web.Contents. Each lesson
will implement more connector features, including paging, metadata/schema detection, and query folding to the
OData query syntax, until your custom connector supports the same range of capabilities as OData.Feed.
Next steps
TripPin Part 2 - Data Connector for a REST Service
TripPin Part 2 - Data Connector for a REST Service
5/18/2020 • 7 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Create a base function that calls out to a REST API using Web.Contents
Learn how to set request headers and process a JSON response
Use Power BI Desktop to wrangle the response into a user friendly format
This lesson converts the OData based connector for the TripPin service (created in the previous lesson) to a
connector that resembles something you'd create for any RESTful API. OData is a RESTful API, but one with a fixed
set of conventions. The advantage of OData is that it provides a schema, data retrieval protocol, and standard
query language. Taking away the use of OData.Feed will require us to build these capabilities into the connector
ourselves.
TripPin.Feed("https://services.odata.org/v4/TripPinService/Me")
Open Fiddler and then select the Start button in Visual Studio.
In Fiddler, you'll see three requests to the server:
When the query finishes evaluating, the M Query Output window should show the Record value for the Me
singleton.
If you compare the fields in the output window with the fields returned in the raw JSON response, you'll notice a
mismatch. The query result has additional fields ( Friends , Trips , GetFriendsTrips ) that don't appear anywhere
in the JSON response. The OData.Feed function automatically appended these fields to the record based on the
schema returned by $metadata. This is a good example of how a connector might augment and/or reformat the
response from the service to provide a better user experience.
DefaultRequestHeaders = [
#"Accept" = "application/json;odata.metadata=minimal", // column name and values only
#"OData-MaxVersion" = "4.0" // we only support v4
];
You'll change your implementation of your TripPin.Feed function so that rather than using OData.Feed , it uses
Web.Contents to make a web request, and parses the result as a JSON document.
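A sketch of what the reworked implementation might look like, assuming the DefaultRequestHeaders record shown above:

TripPinImpl = (url as text) =>
    let
        // issue the request with the OData headers and parse the body as JSON
        source = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        json = Json.Document(source)
    in
        json;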
You can now test this out in Visual Studio using the query file. The result of the /Me record now resembles the raw
JSON that you saw in the Fiddler request.
If you watch Fiddler when running the new function, you'll also notice that the evaluation now makes a single web
request, rather than three. Congratulations—you've achieved a 300% performance increase! Of course, you've
now lost all the type and schema information, but there's no need to focus on that part just yet.
Update your query to access some of the TripPin Entities/Tables, such as:
https://services.odata.org/v4/TripPinService/Airlines
https://services.odata.org/v4/TripPinService/Airports
https://services.odata.org/v4/TripPinService/Me/Trips
You'll notice that the paths that used to return nicely formatted tables now return a top level "value" field with an
embedded [List]. You'll need to do some transformations on the result to make it usable for Power BI scenarios.
let
Source = TripPin.Feed("https://services.odata.org/v4/TripPinService/Airlines"),
value = Source[value],
toTable = Table.FromList(value, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
expand = Table.ExpandRecordColumn(toTable, "Column1", {"AirlineCode", "Name"}, {"AirlineCode", "Name"})
in
expand
let
Source = TripPin.Feed("https://services.odata.org/v4/TripPinService/Airports"),
value = Source[value],
#"Converted to Table" = Table.FromList(value, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"Name", "IcaoCode",
"IataCode", "Location"}, {"Name", "IcaoCode", "IataCode", "Location"}),
#"Expanded Location" = Table.ExpandRecordColumn(#"Expanded Column1", "Location", {"Address", "Loc",
"City"}, {"Address", "Loc", "City"}),
#"Expanded City" = Table.ExpandRecordColumn(#"Expanded Location", "City", {"Name", "CountryRegion",
"Region"}, {"Name.1", "CountryRegion", "Region"}),
#"Renamed Columns" = Table.RenameColumns(#"Expanded City",{{"Name.1", "City"}}),
#"Expanded Loc" = Table.ExpandRecordColumn(#"Renamed Columns", "Loc", {"coordinates"}, {"coordinates"}),
#"Added Custom" = Table.AddColumn(#"Expanded Loc", "Latitude", each [coordinates]{1}),
#"Added Custom1" = Table.AddColumn(#"Added Custom", "Longitude", each [coordinates]{0}),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom1",{"coordinates"}),
#"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"Name", type text}, {"IcaoCode", type
text}, {"IataCode", type text}, {"Address", type text}, {"City", type text}, {"CountryRegion", type text},
{"Region", type text}, {"Latitude", type number}, {"Longitude", type number}})
in
#"Changed Type"
You can repeat this process for additional paths under the service. Once you're ready, move onto the next step of
creating a (mock) navigation table.
let
source = #table({"Name", "Data"}, {
{ "Airlines", Airlines },
{ "Airports", Airports }
})
in
source
If you have not set your Privacy Levels setting to "Always ignore Privacy level settings" (also known as "Fast
Combine") you'll see a privacy prompt.
Privacy prompts appear when you're combining data from multiple sources and have not yet specified a privacy
level for the source(s). Select the Continue button and set the privacy level of the top source to Public .
Select Save and your table will appear. While this isn't a navigation table yet, it provides the basic functionality
you need to turn it into one in a subsequent lesson.
Data combination checks do not occur when accessing multiple data sources from within an extension. Since all
data source calls made from within the extension inherit the same authorization context, it is assumed they are
"safe" to combine. Your extension will always be treated as a single data source when it comes to data
combination rules. Users would still receive the regular privacy prompts when combining your source with other
M sources.
If you run Fiddler and click the Refresh Preview button in the Query Editor, you'll notice separate web requests
for each item in your navigation table. This indicates that an eager evaluation is occurring, which isn't ideal when
building navigation tables with a lot of elements. Subsequent lessons will show how to build a proper navigation
table that supports lazy evaluation.
Conclusion
This lesson showed you how to build a simple connector for a REST service. In this case, you turned an existing
OData extension into a standard REST extension (using Web.Contents), but the same concepts apply if you were
creating a new extension from scratch.
In the next lesson, you'll take the queries created in this lesson using Power BI Desktop and turn them into a true
navigation table within the extension.
Next steps
TripPin Part 3 - Navigation Tables
TripPin Part 3 - Navigation Tables
5/18/2020 • 4 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Create a navigation table for a fixed set of queries
Test the navigation table in Power BI Desktop
This lesson adds a navigation table to the TripPin connector created in the previous lesson. When your connector
used the OData.Feed function (Part 1), you received the navigation table “for free”, as derived from the OData
service’s $metadata document. When you moved to the Web.Contents function (Part 2), you lost the built-in
navigation table. In this lesson, you'll take a set of fixed queries you created in Power BI Desktop and add the
appropriate metadata for Power Query to pop up the Navigator dialog for your data source function.
See the Navigation Table documentation for more information about using navigation tables.
Next you'll import the mock navigation table query you wrote that creates a fixed table linking to these data set
queries. Call it TripPinNavTable :
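A sketch of that function, assuming the Airlines and Airports queries were brought into the extension as helper functions named GetAirlinesTable and GetAirportsTable (those names are placeholders):

TripPinNavTable = (url as text) as table =>
    let
        source = #table({"Name", "Data"}, {
            { "Airlines", GetAirlinesTable(url) },
            { "Airports", GetAirportsTable(url) }
        })
    in
        source;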
Finally you'll declare a new shared function, TripPin.Contents , that will be used as your main data source
function. You'll also remove the Publish value from TripPin.Feed so that it no longer shows up in the Get Data
dialog.
[DataSource.Kind="TripPin"]
shared TripPin.Feed = Value.ReplaceType(TripPinImpl, type function (url as Uri.Type) as any);
[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Contents = Value.ReplaceType(TripPinNavTable, type function (url as Uri.Type) as any);
NOTE
Your extension can mark multiple functions as shared , with or without associating them with a DataSource.Kind .
However, when you associate a function with a specific DataSource.Kind , each function must have the same set of
required parameters, with the same name and type. This is because the data source function parameters are combined to
make a 'key' used for looking up cached credentials.
You can test your TripPin.Contents function using your TripPin.query.pq file. Running the following test query will
give you a credential prompt, and a simple table output.
TripPin.Contents("https://services.odata.org/v4/TripPinService/")
Table.ToNavigationTable = (
table as table,
keyColumns as list,
nameColumn as text,
dataColumn as text,
itemKindColumn as text,
itemNameColumn as text,
isLeafColumn as text
) as table =>
let
tableType = Value.Type(table),
newTableType = Type.AddTableKey(tableType, keyColumns, true) meta
[
NavigationTable.NameColumn = nameColumn,
NavigationTable.DataColumn = dataColumn,
NavigationTable.ItemKindColumn = itemKindColumn,
Preview.DelayColumn = itemNameColumn,
NavigationTable.IsLeafColumn = isLeafColumn
],
navigationTable = Value.ReplaceType(table, newTableType)
in
navigationTable;
After copying this into your extension file, you'll update your TripPinNavTable function to add the navigation table
fields.
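A sketch of the updated function, assuming the same GetAirlinesTable and GetAirportsTable helpers as before:

TripPinNavTable = (url as text) as table =>
    let
        source = #table({"Name", "Data", "ItemKind", "ItemName", "IsLeaf"}, {
            { "Airlines", GetAirlinesTable(url), "Table", "Table", true },
            { "Airports", GetAirportsTable(url), "Table", "Table", true }
        }),
        navTable = Table.ToNavigationTable(source, {"Name"}, "Name", "Data", "ItemKind", "ItemName", "IsLeaf")
    in
        navTable;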
Running your test query again will give you a similar result as last time—with a few more columns added.
NOTE
You will not see the Navigator window appear in Visual Studio. The M Query Output window always displays the
underlying table.
If you copy your extension over to your Power BI Desktop custom connector and invoke the new function from the
Get Data dialog, you'll see your navigator appear.
If you right click on the root of the navigation tree and select Edit , you'll see the same table as you did within
Visual Studio.
Conclusion
In this tutorial, you added a Navigation Table to your extension. Navigation Tables are a key feature that make
connectors easier to use. In this example your navigation table only has a single level, but the Power Query UI
supports displaying navigation tables that have multiple dimensions (even when they are ragged).
Next steps
TripPin Part 4 - Data Source Paths
TripPin Part 4 - Data Source Paths
5/18/2020 • 6 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Simplify the connection logic for your connector
Improve the navigation table experience
This lesson simplifies the connector built in the previous lesson by removing its required function parameters, and
improving the user experience by moving to a dynamically generated navigation table.
For an in-depth explanation of how credentials are identified, see the Data Source Paths section of Handling
Authentication.
[DataSource.Kind="TripPin"]
shared TripPin.Feed = Value.ReplaceType(TripPinImpl, type function (url as Uri.Type) as any);
[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Contents = Value.ReplaceType(TripPinNavTable, type function (url as Uri.Type) as any);
The first time you run a query that uses one of the functions, you'll receive a credential prompt with drop downs
that lets you select a path and an authentication type.
If you run the same query again, with the same parameters, the M engine is able to locate the cached credentials,
and no credential prompt is shown. If you modify the url argument to your function so that the base path no
longer matches, a new credential prompt is displayed for the new path.
You can see any cached credentials on the Credentials table in the M Query Output window.
Depending on the type of change, modifying the parameters of your function will likely result in a credential error.
BaseUrl = "https://services.odata.org/v4/TripPinService/";
[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Contents = () => TripPinNavTable(BaseUrl) as table;
You'll keep the TripPin.Feed function, but no longer make it shared, no longer associate it with a Data Source
Kind, and simplify its declaration. From this point on, you'll only use it internally within this section document.
If you update the TripPin.Contents() call in your TripPin.query.pq file and run it in Visual Studio, you'll see a
new credential prompt. Note that there is now a single Data Source Path value—TripPin.
Improving the Navigation Table
In the first tutorial you used the built-in OData functions to connect to the TripPin service. This gave you a really
nice looking navigation table, based on the TripPin service document, with no additional code on your side. The
OData.Feed function automatically did the hard work for you. Since you're "roughing it" by using Web.Contents
rather than OData.Feed, you'll need to recreate this navigation table yourself.
RootEntities = {
"Airlines",
"Airports",
"People"
};
You then update your TripPinNavTable function to build the table a column at a time. The [Data] column for each
entity is retrieved by calling TripPin.Feed with the full URL to the entity.
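A sketch of what the updated function might look like, built from the RootEntities list above (the column handling mirrors the earlier sketches and is an assumption rather than the only possible approach):

TripPinNavTable = (url as text) as table =>
    let
        // convert the list of entity names into a single-column table
        entitiesAsTable = Table.FromList(RootEntities, Splitter.SplitByNothing()),
        rename = Table.RenameColumns(entitiesAsTable, {{"Column1", "Name"}}),
        // retrieve each entity by combining the base url with the entity name
        withData = Table.AddColumn(rename, "Data", each TripPin.Feed(Uri.Combine(url, [Name]))),
        // add the fixed navigation table fields
        withItemKind = Table.AddColumn(withData, "ItemKind", each "Table", type text),
        withItemName = Table.AddColumn(withItemKind, "ItemName", each "Table", type text),
        withIsLeaf = Table.AddColumn(withItemName, "IsLeaf", each true, type logical),
        navTable = Table.ToNavigationTable(withIsLeaf, {"Name"}, "Name", "Data", "ItemKind", "ItemName", "IsLeaf")
    in
        navTable;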
When dynamically building URL paths, make sure you're clear where your forward slashes (/) are! Note that
Uri.Combine uses the following rules when combining paths:
When the relativeUri parameter starts with a /, it will replace the entire path of the baseUri parameter
If the relativeUri parameter does not start with a / and baseUri ends with a /, the path is appended
If the relativeUri parameter does not start with a / and baseUri does not end with a /, the last segment of
the path is replaced
The following image shows examples of this:
NOTE
A disadvantage of using a generic approach to process your entities is that you lose the nice formatting and type
information for your entities. A later section in this tutorial shows how to enforce schema on REST API calls.
Conclusion
In this tutorial, you cleaned up and simplified your connector by fixing your Data Source Path value, and moving to
a more flexible format for your navigation table. After completing these steps (or using the sample code in this
directory), the TripPin.Contents function returns a navigation table in Power BI Desktop.
Next steps
TripPin Part 5 - Paging
TripPin Part 5 - Paging
5/18/2020 • 7 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Add paging support to the connector
Many REST APIs will return data in "pages", requiring clients to make multiple requests to stitch the results
together. Although there are some common conventions for pagination (such as RFC 5988), it generally varies
from API to API. Thankfully, TripPin is an OData service, and the OData standard defines a way of doing pagination
using odata.nextLink values returned in the body of the response.
To keep previous iterations of the connector simple, the TripPin.Feed function was not page aware. It simply parsed
whatever JSON was returned from the request and formatted it as a table. Those familiar with the OData protocol
might have noticed that a number of incorrect assumptions were made on the format of the response (such as
assuming there is a value field containing an array of records).
In this lesson you'll improve your response handling logic by making it page aware. Future tutorials will make the
page handling logic more robust and able to handle multiple response formats (including errors from the service).
NOTE
You do not need to implement your own paging logic with connectors based on OData.Feed, as it handles it all for you
automatically.
Paging Checklist
When implementing paging support, you'll need to know the following things about your API:
How do you request the next page of data?
Does the paging mechanism involve calculating values, or do you extract the URL for the next page from the
response?
How do you know when to stop paging?
Are there parameters related to paging that you should be aware of? (such as "page size")
The answers to these questions will impact the way you implement your paging logic. While there is some amount
of code reuse across paging implementations (such as the use of Table.GenerateByPage), most connectors will end
up requiring custom logic.
NOTE
This lesson contains paging logic for an OData service, which follows a specific format. Check the documentation for your
API to determine the changes you'll need to make in your connector to support its paging format.
{
"odata.context": "...",
"odata.count": 37,
"value": [
{ },
{ },
{ }
],
"odata.nextLink": "...?$skiptoken=342r89"
}
Some OData services allow clients to supply a max page size preference, but it is up to the service whether or not
to honor it. Power Query should be able to handle responses of any size, so you don't need to worry about
specifying a page size preference—you can support whatever the service throws at you.
More information about Server-Driven Paging can be found in the OData specification.
Testing TripPin
Before fixing your paging implementation, confirm the current behavior of the extension from the previous
tutorial. The following test query will retrieve the People table and add an index column to show your current row
count.
let
source = TripPin.Contents(),
data = source{[Name="People"]}[Data],
withRowCount = Table.AddIndexColumn(data, "Index")
in
withRowCount
Turn on Fiddler, and run the query in Visual Studio. You'll notice that the query returns a table with 8 rows (index 0
to 7).
If you look at the body of the response from Fiddler, you'll see that it does in fact contain an @odata.nextLink field,
indicating that there are more pages of data available.
{
"@odata.context": "https://services.odata.org/V4/TripPinService/$metadata#People",
"@odata.nextLink": "https://services.odata.org/v4/TripPinService/People?%24skiptoken=8",
"value": [
{ },
{ },
{ }
]
}
NOTE
As stated earlier in this tutorial, paging logic will vary between data sources. The implementation here tries to break up the
logic into functions that should be reusable for sources that use next links returned in the response.
Table.GenerateByPage
The Table.GenerateByPage function can be used to efficiently combine multiple 'pages' of data into a single table.
It does this by repeatedly calling the function passed in as the getNextPage parameter, until it receives a null .
The function parameter must take a single argument, and return a nullable table .
getNextPage = (lastPage) as nullable table => ...
Each call to getNextPage receives the output from the previous call.
// The getNextPage function takes a single argument and is expected to return a nullable table
Table.GenerateByPage = (getNextPage as function) as table =>
let
listOfPages = List.Generate(
() => getNextPage(null), // get the first page of data
(lastPage) => lastPage <> null, // stop when the function returns null
(lastPage) => getNextPage(lastPage) // pass the previous page to the next function call
),
// concatenate the pages together
tableOfPages = Table.FromList(listOfPages, Splitter.SplitByNothing(), {"Column1"}),
firstRow = tableOfPages{0}?
in
// if we didn't get back any pages of data, return an empty table
// otherwise set the table type based on the columns of the first page
if (firstRow = null) then
Table.FromRows({})
else
Value.ReplaceType(
Table.ExpandTableColumn(tableOfPages, "Column1", Table.ColumnNames(firstRow[Column1])),
Value.Type(firstRow[Column1])
);
Implementing GetAllPagesByNextLink
The body of your GetAllPagesByNextLink function implements the getNextPage function argument for
Table.GenerateByPage . It will call the GetPage function, and retrieve the URL for the next page of data from the
NextLink field of the meta record from the previous call.
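A sketch of what that function might look like, assuming each page returned by GetPage carries a NextLink value in its meta record (as described below):

GetAllPagesByNextLink = (url as text) as table =>
    Table.GenerateByPage((previous) =>
        let
            // if previous is null, this is the first page of data
            nextLink = if (previous = null) then url else Value.Metadata(previous)[NextLink]?,
            // if NextLink was null on the previous page, there are no more pages
            page = if (nextLink <> null) then GetPage(nextLink) else null
        in
            page
    );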
Implementing GetPage
Your GetPage function will use Web.Contents to retrieve a single page of data from the TripPin service, and
convert the response into a table. It passes the response from Web.Contents to the GetNextLink function to
extract the URL of the next page, and sets it on the meta record of the returned table (page of data).
This implementation is a slightly modified version of the TripPin.Feed call from the previous tutorials.
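A sketch of the function under those assumptions, reusing the DefaultRequestHeaders record and the GetNextLink helper:

GetPage = (url as text) as table =>
    let
        response = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        body = Json.Document(response),
        nextLink = GetNextLink(body),
        // assume the body contains a 'value' field holding an array of records
        data = Table.FromRecords(body[value])
    in
        // attach the next link so GetAllPagesByNextLink can find it
        data meta [NextLink = nextLink];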
Implementing GetNextLink
Your GetNextLink function simply checks the body of the response for an @odata.nextLink field, and returns its
value.
// In this implementation, 'response' will be the parsed body of the response after the call to Json.Document.
// Look for the '@odata.nextLink' field and simply return null if it doesn't exist.
GetNextLink = (response) as nullable text => Record.FieldOrDefault(response, "@odata.nextLink");
If you re-run the same test query from earlier in the tutorial, you should now see the page reader in action. You
should also see that you have 20 rows in the response rather than 8.
If you look at the requests in Fiddler, you should now see separate requests for each page of data.
NOTE
You'll notice duplicate requests for the first page of data from the service, which is not ideal. The extra request is a result of
the M engine's schema checking behavior. Ignore this issue for now and resolve it in the next tutorial, where you'll apply an
explicit schema.
Conclusion
This lesson showed you how to implement pagination support for a REST API. While the logic will likely vary
between APIs, the pattern established here should be reusable with minor modifications.
In the next lesson, you'll look at how to apply an explicit schema to your data, going beyond the simple text and
number data types you get from Json.Document .
Next steps
TripPin Part 6 - Schema
TripPin Part 6 - Schema
5/18/2020 • 11 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Define a fixed schema for a REST API
Dynamically set data types for columns
Enforce a table structure to avoid transformation errors due to missing columns
Hide columns from the result set
One of the big advantages of an OData service over a standard REST API is its $metadata definition. The
$metadata document describes the data found on this service, including the schema for all of its Entities (Tables)
and Fields (Columns). The OData.Feed function uses this schema definition to automatically set data type
information—so instead of getting all text and number fields (like you would from Json.Document ), end users will
get dates, whole numbers, times, and so on, providing a better overall user experience.
Many REST APIs don't have a way to programmatically determine their schema. In these cases, you'll need to
include schema definitions within your connector. In this lesson you'll define a simple, hardcoded schema for each
of your tables, and enforce the schema on the data you read from the service.
NOTE
The approach described here should work for many REST services. Future lessons will build upon this approach by
recursively enforcing schemas on structured columns (record, list, table), and provide sample implementations that can
programmatically generate a schema table from CSDL or JSON Schema documents.
Overall, enforcing a schema on the data returned by your connector has multiple benefits, such as:
Setting the correct data types
Removing columns that don't need to be shown to end users (such as internal IDs or state information)
Ensuring that each page of data has the same shape by adding any columns that might be missing from a
response (a common way for REST APIs to indicate a field should be null)
let
source = TripPin.Contents(),
data = source{[Name="Airlines"]}[Data]
in
data
The "@odata.*" columns are part of OData protocol, and not something you'd want or need to show to the end
users of your connector. AirlineCode and Name are the two columns you'll want to keep. If you look at the
schema of the table (using the handy Table.Schema function), you can see that all of the columns in the table have
a data type of Any.Type .
let
source = TripPin.Contents(),
data = source{[Name="Airlines"]}[Data]
in
Table.Schema(data)
Table.Schema returns a lot of metadata about the columns in a table, including names, positions, type information,
and many advanced properties, such as Precision, Scale, and MaxLength. Future lessons will provide design
patterns for setting these advanced properties, but for now you need only concern yourself with the ascribed type
( TypeName ), primitive type ( Kind ), and whether the column value might be null ( IsNullable ).
COLUMN    DETAILS
Name      The name of the column. This must match the name in the results returned by the service.
Type      The M data type you're going to set. This can be a primitive type (text, number, datetime, and so on), or an ascribed type (Int64.Type, Currency.Type, and so on).
The hardcoded schema table for the Airlines table will set its AirlineCode and Name columns to text , and
looks like this:
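A sketch of that schema table, using the #table syntax:

Airlines = #table({"Name", "Type"}, {
    {"AirlineCode", type text},
    {"Name", type text}
});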
The Airports table has four fields you'll want to keep (including one of type record ):
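A sketch of the corresponding schema table; the field list is an assumption based on the Airports query built earlier:

Airports = #table({"Name", "Type"}, {
    {"Name", type text},
    {"IcaoCode", type text},
    {"IataCode", type text},
    {"Location", type record}
});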
Finally, the People table has seven fields, including lists ( Emails , AddressInfo ), a nullable column ( Gender ), and
a column with an ascribed type ( Concurrency ).
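A sketch of the People schema table under those assumptions:

People = #table({"Name", "Type"}, {
    {"UserName", type text},
    {"FirstName", type text},
    {"LastName", type text},
    {"Emails", type list},
    {"AddressInfo", type list},
    {"Gender", type nullable text},
    {"Concurrency", Int64.Type}
});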
NOTE
The last step to set the table type will remove the need for the Power Query UI to infer type information when viewing the
results in the query editor. This removes the double request issue you saw at the end of the previous tutorial.
The following helper code can be copy and pasted into your extension:
EnforceSchema.Strict = 1; // Add any missing columns, remove extra columns, set table type
EnforceSchema.IgnoreExtraColumns = 2; // Add missing columns, do not remove extra columns
EnforceSchema.IgnoreMissingColumns = 3; // Do not add or remove columns
SchemaTransformTable = (table as table, schema as table, optional enforceSchema as number) as table =>
let
// Default to EnforceSchema.Strict
_enforceSchema = if (enforceSchema <> null) then enforceSchema else EnforceSchema.Strict,
You'll also update all of the calls to these functions to make sure that you pass the schema through correctly.
Enforcing the schema
The actual schema enforcement will be done in your GetPage function.
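A sketch of the updated GetPage, assuming the schema table is passed down through GetAllPagesByNextLink as an extra parameter:

GetPage = (url as text, optional schema as table) as table =>
    let
        response = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        body = Json.Document(response),
        nextLink = GetNextLink(body),
        data = Table.FromRecords(body[value]),
        // enforce the schema on each page of data
        withSchema = if (schema <> null) then SchemaTransformTable(data, schema) else data
    in
        withSchema meta [NextLink = nextLink];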
You'll then update your TripPinNavTable function to call GetEntity , rather than making all of the calls inline. The
main advantage to this is that it will let you continue modifying your entity building code, without having to touch
your nav table logic.
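A sketch of a GetEntity function along those lines, assuming a GetSchemaForEntity helper that looks up the schema for a given entity name, and a GetAllPagesByNextLink that has been extended to accept the schema:

GetEntity = (url as text, entity as text) as table =>
    let
        fullUrl = Uri.Combine(url, entity),
        schemaTable = GetSchemaForEntity(entity),
        result = GetAllPagesByNextLink(fullUrl, schemaTable)
    in
        result;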
let
source = TripPin.Contents(),
data = source{[Name="Airlines"]}[Data]
in
Table.Schema(data)
You now see that your Airlines table only has the two columns you defined in its schema:
If you run the same code against the People table...
let
source = TripPin.Contents(),
data = source{[Name="People"]}[Data]
in
Table.Schema(data)
You'll see that the ascribed type you used ( Int64.Type ) was also set correctly.
An important thing to note is that this implementation of SchemaTransformTable doesn't modify the types of list
and record columns, but the Emails and AddressInfo columns are still typed as list . This is because
Json.Document will correctly map JSON arrays to M lists, and JSON objects to M records. If you were to expand
the list or record column in Power Query, you'd see that all of the expanded columns will be of type any. Future
tutorials will improve the implementation to recursively set type information for nested complex types.
Conclusion
This tutorial provided a sample implementation for enforcing a schema on JSON data returned from a REST
service. While this sample uses a simple hardcoded schema table format, the approach could be expanded upon
by dynamically building a schema table definition from another source, such as a JSON schema file, or metadata
service/endpoint exposed by the data source.
In addition to modifying column types (and values), your code is also setting the correct type information on the
table itself. Setting this type information benefits performance when running inside of Power Query, as the user
experience always attempts to infer type information to display the right UI cues to the end user, and the
inference calls can end up triggering additional calls to the underlying data APIs.
If you view the People table using the TripPin connector from the previous lesson, you'll see that all of the columns
have a 'type any' icon (even the columns that contain lists):
Running the same query with the TripPin connector from this lesson, you'll now see that the type information is
displayed correctly.
Next steps
TripPin Part 7 - Advanced Schema with M Types
TripPin Part 7 - Advanced Schema with M Types
5/18/2020 • 7 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Enforce a table schema using M Types
Set types for nested records and lists
Refactor code for reuse and unit testing
In the previous lesson you defined your table schemas using a simple "Schema Table" system. This schema table
approach works for many REST APIs/Data Connectors, but services that return complete or deeply nested data
sets might benefit from the approach in this tutorial, which leverages the M type system.
This lesson will guide you through the following steps:
1. Adding unit tests
2. Defining custom M types
3. Enforcing a schema using types
4. Refactoring common code into separate files
shared TripPin.UnitTest =
[
// Put any common variables here if you only want them to be evaluated once
RootTable = TripPin.Contents(),
Airlines = RootTable{[Name="Airlines"]}[Data],
Airports = RootTable{[Name="Airports"]}[Data],
People = RootTable{[Name="People"]}[Data],
// Fact(<description>, <expected value>, <actual value>); an illustrative subset of the full fact list
facts = {
    Fact("Check that we have three entries in our nav table", 3, Table.RowCount(RootTable)),
    Fact("We have People data?", true, not Table.IsEmpty(People))
},
report = Facts.Summarize(facts)
][report];
Clicking run on the project will evaluate all of the Facts, and give you a report output that looks like this:
Using some principles from test-driven development, you'll now add a test that currently fails, but will soon be
reimplemented and fixed (by the end of this tutorial). Specifically, you'll add a test that checks one of the nested
records (Emails) you get back in the People entity.
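A sketch of such a test, assuming the Fact helper from the unit test module; the expectation is that the Emails list items end up typed as text rather than any:

Fact("Emails is properly typed", type text, Type.ListItem(Value.Type(People{0}[Emails])))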
If you run the code again, you should now see that you have a failing test.
Now you just need to implement the functionality to make this work.
A type value is a value that classifies other values. A value that is classified by a type is said to conform to
that type. The M type system consists of the following kinds of types:
Primitive types, which classify primitive values ( binary , date , datetime , datetimezone , duration , list ,
logical , null , number , record , text , time , type ) and also include a number of abstract types (
function , table , any , and none )
Record types, which classify record values based on field names and value types
List types, which classify lists using a single item base type
Function types, which classify function values based on the types of their parameters and return values
Table types, which classify table values based on column names, column types, and keys
Nullable types, which classify the value null in addition to all the values classified by a base type
Type types, which classify values that are types
Using the raw JSON output you get (and/or looking up the definitions in the service's $metadata), you can define
the following record types to represent OData complex types:
LocationType = type [
Address = text,
City = CityType,
Loc = LocType
];
CityType = type [
CountryRegion = text,
Name = text,
Region = text
];
LocType = type [
#"type" = text,
coordinates = {number},
crs = CrsType
];
CrsType = type [
#"type" = text,
properties = record
];
Note how the LocationType references the CityType and LocType to represent its structured columns.
For the top level entities (that you want represented as Tables), you define table types:
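A sketch of what those table types might look like, reusing the complex types defined above (the exact field lists are assumptions based on the entities used so far):

AirlinesType = type table [
    AirlineCode = text,
    Name = text
];

AirportsType = type table [
    Name = text,
    IcaoCode = text,
    IataCode = text,
    Location = LocationType
];

PeopleType = type table [
    UserName = text,
    FirstName = text,
    LastName = text,
    Emails = {text},
    AddressInfo = {LocationType},
    Gender = (type nullable text),
    Concurrency = Int64.Type
];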
You then update your SchemaTable variable (which you use as a "lookup table" for entity to type mappings) to use
these new type definitions:
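A sketch of the updated lookup table, mapping each entity name to its table type:

SchemaTable = #table({"Entity", "Type"}, {
    {"Airlines", AirlinesType},
    {"Airports", AirportsType},
    {"People", PeopleType}
});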
The full code listing for the Table.ChangeType function can be found in the Table.ChangeType.pqm file.
NOTE
For flexibility, the function can be used on tables, as well as lists of records (which is how tables would be represented in a
JSON document).
You then need to update the connector code to change the schema parameter from a table to a type , and add a
call to Table.ChangeType in GetEntity .
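A sketch of the updated GetEntity, assuming GetSchemaForEntity now returns a type value:

GetEntity = (url as text, entity as text) as table =>
    let
        fullUrl = Uri.Combine(url, entity),
        schema = GetSchemaForEntity(entity),
        result = GetAllPagesByNextLink(fullUrl, schema),
        // ascribe the full table type, including nested record and list types
        appliedSchema = Table.ChangeType(result, schema)
    in
        appliedSchema;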
GetPage is updated to use the list of fields from the schema (to know the names of what to expand when you get
the results), but leaves the actual schema enforcement to GetEntity .
At this point, your extension almost has as much "common" code as TripPin connector code. In the future these
common functions will either be part of the built-in standard function library, or you'll be able to reference them
from another extension. For now, you refactor your code in the following way:
1. Move the reusable functions to separate files (.pqm).
2. Set the Build Action property on the file to Compile to make sure it gets included in your extension file
during the build.
3. Define a function to load the code using Expression.Evaluate.
4. Load each of the common functions you want to use.
The code to do this is included in the snippet below:
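The assignments in the snippet rely on an Extension.LoadFunction helper that reads a .pqm file bundled with the extension and evaluates its contents. A sketch of how such a helper might be defined:

// read a .pqm file packaged with the extension and evaluate it as M code
Extension.LoadFunction = (name as text) =>
    let
        binary = Extension.Contents(name),
        asText = Text.FromBinary(binary)
    in
        Expression.Evaluate(asText, #shared);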
Table.ChangeType = Extension.LoadFunction("Table.ChangeType.pqm");
Table.GenerateByPage = Extension.LoadFunction("Table.GenerateByPage.pqm");
Table.ToNavigationTable = Extension.LoadFunction("Table.ToNavigationTable.pqm");
Conclusion
This tutorial made a number of improvements to the way you enforce a schema on the data you get from a REST
API. The connector is currently hard coding its schema information, which has a performance benefit at runtime,
but is unable to adapt to changes in the service's metadata over time. Future tutorials will move to a purely
dynamic approach that will infer the schema from the service's $metadata document.
In addition to the schema changes, this tutorial added Unit Tests for your code, and refactored the common helper
functions into separate files to improve overall readability.
Next steps
TripPin Part 8 - Adding Diagnostics
TripPin Part 8 - Adding Diagnostics
5/18/2020 • 8 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Learn about the Diagnostics.Trace function
Use the Diagnostics helper functions to add trace information to help debug your connector
Enabling diagnostics
Power Query users can enable trace logging by selecting the checkbox under Options | Diagnostics .
Once enabled, any subsequent queries will cause the M engine to emit trace information to log files located in a
fixed user directory.
When running M queries from within the Power Query SDK, tracing is enabled at the project level. On the project
properties page, there are three settings related to tracing:
Clear Log —when this is set to true , the log will be reset/cleared when you run your queries. We recommend
you keep this set to true .
Show Engine Traces —this setting controls the output of built-in traces from the M engine. These traces are
generally only useful to members of the Power Query team, so you'll typically want to keep this set to false .
Show User Traces —this setting controls trace information output by your connector. You'll want to set this to
true .
Once enabled, you'll start seeing log entries in the M Query Output window, under the Log tab.
Diagnostics.Trace
The Diagnostics.Trace function is used to write messages into the M engine's trace log.
Diagnostics.Trace = (traceLevel as number, message as text, value as any, optional delayed as nullable logical) as
any => ...
IMPORTANT
M is a functional language with lazy evaluation. When using Diagnostics.Trace , keep in mind that the function will only
be called if the expression it's a part of is actually evaluated. Examples of this can be found later in this tutorial.
The traceLevel parameter can be one of the following values (in descending order):
TraceLevel.Critical
TraceLevel.Error
TraceLevel.Warning
TraceLevel.Information
TraceLevel.Verbose
When tracing is enabled, the user can select the maximum level of messages they would like to see. All trace
messages of this level and under will be output to the log. For example, if the user selects the "Warning" level, trace
messages of TraceLevel.Warning , TraceLevel.Error , and TraceLevel.Critical would appear in the logs.
The message parameter is the actual text that will be output to the trace file. Note that the text will not contain the
value parameter unless you explicitly include it in the text.
The value parameter is what the function will return. When the delayed parameter is set to true , value will be
a zero parameter function that returns the actual value you're evaluating. When delayed is set to false , value
will be the actual value. An example of how this works can be found below.
Using Diagnostics.Trace in the TripPin connector
For a practical example of using Diagnostics.Trace and the impact of the delayed parameter, update the TripPin
connector's GetSchemaForEntity function to wrap the error exception:
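A sketch of what the wrapped lookup might look like, assuming the SchemaTable lookup from the previous lesson; the trace writes an error-level message and raises the error lazily through the delayed value parameter:

GetSchemaForEntity = (entity as text) as type =>
    try
        SchemaTable{[Entity=entity]}[Type]
    otherwise
        let
            message = Text.Format("Couldn't find entity: '#{0}'", {entity})
        in
            // log the message, and raise the error only when the value is evaluated
            Diagnostics.Trace(TraceLevel.Error, message, () => error message, true);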
You can force an error during evaluation (for test purposes!) by passing an invalid entity name to the GetEntity
function. Here you change the withData line in the TripPinNavTable function, replacing [Name] with
"DoesNotExist" .
Enable tracing for your project, and run your test queries. On the Errors tab you should see the text of the error
you raised:
Also, on the Log tab, you should see the same message. Note that if you use different values for the message and
value parameters, these would be different.
Also note that the Action field of the log message contains the name (Data Source Kind) of your extension (in this
case, Engine/Extension/TripPin ). This makes it easier to find the messages related to your extension when there
are multiple queries involved and/or system (mashup engine) tracing is enabled.
Delayed evaluation
As an example of how the delayed parameter works, you'll make some modifications and run the queries again.
First, set the delayed value to false , but leave the value parameter as-is:
When you run the query, you'll receive an error that "We cannot convert a value of type Function to type Type",
and not the actual error you raised. This is because the call is now returning a function value, rather than the
value itself.
Next, remove the function from the value parameter:
When you run the query, you'll receive the correct error, but if you check the Log tab, there will be no messages.
This is because the error ends up being raised/evaluated during the call to Diagnostics.Trace , so the message is
never actually output.
Now that you understand the impact of the delayed parameter, be sure to reset your connector back to a
working state before proceeding.
// Diagnostics module contains multiple functions. We can take the ones we need.
Diagnostics = Extension.LoadFunction("Diagnostics.pqm");
Diagnostics.LogValue = Diagnostics[LogValue];
Diagnostics.LogFailure = Diagnostics[LogFailure];
Diagnostics.LogValue
The Diagnostics.LogValue function is a lot like Diagnostics.Trace , and can be used to output the value of what
you're evaluating.
The prefix parameter is prepended to the log message. You'd use this to figure out which call output the
message. The value parameter is what the function will return, and will also be written to the trace as a text
representation of the M value. For example, if value is equal to a table with columns A and B, the log will
contain the equivalent #table representation: #table({"A", "B"}, {{"row1 A", "row1 B"}, {"row2 A", "row2 B"}})
NOTE
Serializing M values to text can be an expensive operation. Be aware of the potential size of the values you are outputting to
the trace.
NOTE
Most Power Query environments will truncate trace messages to a maximum length.
As an example, you'll update the TripPin.Feed function to trace the url and schema arguments passed into the
function.
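A sketch of the updated function, assuming the two-parameter TripPin.Feed from the earlier lessons:

TripPin.Feed = (url as text, optional schema as type) as table =>
    let
        // log the url and schema; the _url and _schema values must be used below,
        // or the Diagnostics.LogValue calls will never be evaluated
        _url = Diagnostics.LogValue("Accessing url", url),
        _schema = Diagnostics.LogValue("Schema type", schema),
        result = GetAllPagesByNextLink(_url, _schema)
    in
        result;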
Note that you have to use the new _url and _schema values in the call to GetAllPagesByNextLink . If you used the
original function parameters, the Diagnostics.LogValue calls would never actually be evaluated, resulting in no
messages written to the trace. Functional programming is fun!
When you run your queries, you should now see new messages in the log.
Accessing url:
Schema type:
Note that you see the serialized version of the schema parameter type , rather than what you'd get when you do a
simple Text.FromValue on a type value (which results in "type").
Diagnostics.LogFailure
The Diagnostics.LogFailure function can be used to wrap function calls, and will only write to the trace if the
function call fails (that is, returns an error ).
Internally, Diagnostics.LogFailure adds a try operator to the function call. If the call fails, the text value is
written to the trace before returning the original error . If the function call succeeds, the result is returned
without writing anything to the trace. Since M errors don't contain a full stack trace (that is, you typically only see
the message of the error), this can be useful when you want to pinpoint where the error was actually raised.
As a (poor) example, modify the withData line of the TripPinNavTable function to force an error once again:
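For example, wrapping the GetEntity call (the "DoesNotExist" entity forces the failure):

withData = Table.AddColumn(rename, "Data", each Diagnostics.LogFailure("Error in GetEntity", () => GetEntity(url, "DoesNotExist")), type table),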
In the trace, you can find the resulting error message containing your text , and the original error information.
Be sure to reset your function to a working state before proceeding with the next tutorial.
Conclusion
This brief (but important!) lesson showed you how to make use of the diagnostic helper functions to log to the
Power Query trace files. When used properly, these functions are extremely useful in debugging issues within your
connector.
NOTE
As a connector developer, it is your responsibility to ensure that you do not log sensitive or personally identifiable
information (PII) as part of your diagnostic logging. You must also be careful to not output too much trace information, as it
can have a negative performance impact.
Next steps
TripPin Part 9 - TestConnection
TripPin Part 9 - TestConnection
5/18/2020 • 4 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Add a TestConnection handler
Configure the On-Premises Data Gateway (Personal mode)
Test scheduled refresh through the Power BI service
Custom Connector support was added to the April 2018 release of the Personal On-Premises Gateway. This new
(preview) functionality allows for Scheduled Refresh of reports that make use of your custom connector.
This tutorial will cover the process of enabling your connector for refresh, and provide a quick walkthrough of the
steps to configure the gateway. Specifically you will:
1. Add a TestConnection handler to your connector
2. Install the On-Premises Data Gateway in Personal mode
3. Enable Custom Connector support in the Gateway
4. Publish a workbook that uses your connector to PowerBI.com
5. Configure scheduled refresh to test your connector
See Handling Gateway Support for more information on the TestConnection handler.
Background
There are three prerequisites for configuring a data source for scheduled refresh using PowerBI.com:
The data source is suppor ted: This means that the target gateway environment is aware of all of the
functions contained within the query you want to refresh.
Credentials are provided: To present the right credential entry dialog, Power BI needs to know the supported
authentication mechanism for a given data source.
The credentials are valid: After the user provides credentials, they are validated by calling the data source's
TestConnection handler.
The first two items are handled by registering your connector with the gateway. When the user attempts to
configure scheduled refresh in PowerBI.com, the query information is sent to your personal gateway to determine
if any data sources that aren't recognized by the Power BI service (that is, custom ones that you created) are
available there. The third item is handled by invoking the TestConnection handler defined for your data source.
Future versions of the Power Query SDK will provide a way to validate the TestConnection handler from Visual
Studio. Currently, the only mechanism that uses TestConnection is the On-premises Data Gateway.
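A TestConnection handler is added to the Data Source Kind record and returns the name of your data source function, plus any required arguments. A sketch for TripPin, which takes no required parameters (the Label value is a placeholder):

TripPin = [
    // needed for gateway support: identifies the function used to validate credentials
    TestConnection = (dataSourcePath) => { "TripPin.Contents" },
    Authentication = [
        Anonymous = []
    ],
    Label = "TripPin"
];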
Download and install the On-Premises Data Gateway. When you run the installer, select the Personal Mode.
After installation is complete, launch the gateway and sign into Power BI. The sign-in process will automatically
register your gateway with the Power BI services. Once signed in, perform the following steps:
1. Select the Connectors tab.
2. Select the switch to enable support for Custom data connectors .
3. Select the directory you want to load custom connectors from. This will usually be the same directory that
you'd use for Power BI Desktop, but the value is configurable.
4. The page should now list all extension files in your target directory.
See the online documentation for more information about the gateway.
Select the Edit credentials link to bring up the authentication dialog, and then select sign-in.
NOTE
If you receive an error similar to the one below ("Failed to update data source credentials"), you most likely have an issue
with your TestConnection handler.
After a successful call to TestConnection, the credentials will be accepted. You can now schedule refresh, or select
the dataset ellipse and then select Refresh Now . You can select the Refresh histor y link to view the status of the
refresh (which generally takes a few minutes to get kicked off).
Conclusion
Congratulations! You now have a production ready custom connector that supports automated refresh through
the Power BI service.
Next steps
TripPin Part 10 - Query Folding
TripPin Part 10—Basic Query Folding
10/30/2020 • 16 minutes to read • Edit Online
This multi-part tutorial covers the creation of a new data source extension for Power Query. The tutorial is meant
to be done sequentially—each lesson builds on the connector created in previous lessons, incrementally adding
new capabilities to your connector.
In this lesson, you will:
Learn the basics of query folding
Learn about the Table.View function
Replicate OData query folding handlers for:
$top
$skip
$count
$select
$orderby
One of the powerful features of the M language is its ability to push transformation work to underlying data
source(s). This capability is referred to as Query Folding (other tools/technologies also refer to similar
functionality as Predicate Pushdown, or Query Delegation). When creating a custom connector that uses an M function with built-
in query folding capabilities, such as OData.Feed or Odbc.DataSource , your connector will automatically inherit this
capability for free.
This tutorial will replicate the built-in query folding behavior for OData by implementing function handlers for the
Table.View function. This part of the tutorial will implement some of the easier handlers to implement (that is,
ones that don't require expression parsing and state tracking).
To understand more about the query capabilities that an OData service might offer, see OData v4 URL
Conventions.
NOTE
As stated above, the OData.Feed function will automatically provide query folding capabilities. Since the TripPin series is
treating the OData service as a regular REST API, using Web.Contents rather than OData.Feed , you'll need to implement
the query folding handlers yourself. For real world usage, we recommend that you use OData.Feed whenever possible.
See the Table.View documentation for more information about query folding in M.
Using Table.View
The Table.View function allows a custom connector to override default transformation handlers for your data
source. An implementation of Table.View will provide a function for one or more of the supported handlers. If a
handler is unimplemented, or returns an error during evaluation, the M engine will fall back to its default handler.
When a custom connector uses a function that doesn't support implicit query folding, such as Web.Contents ,
default transformation handlers will always be performed locally. If the REST API you are connecting to supports
query parameters as part of the query, Table.View will allow you to add optimizations that allow transformation
work to be pushed to the service.
The Table.View function has the following signature:
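For reference, the signature looks like this:

Table.View = (table as nullable table, handlers as record) as table => ...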
Your implementation will wrap your main data source function. There are two required handlers for Table.View :
GetType —returns the expected table type of the query result
GetRows —returns the actual table result of your data source function
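A minimal pass-through implementation might look like the following sketch (TripPin.SuperSimpleView is the name referenced later in this lesson; GetEntity comes from the earlier lessons):

TripPin.SuperSimpleView = (url as text, entity as text) as table =>
    Table.View(null, [
        // return the expected type of the table produced by GetRows
        GetType = () => Value.Type(GetRows()),
        // return the actual data
        GetRows = () => GetEntity(url, entity)
    ]);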
If you re-run the unit tests, you'll see that the behavior of your function hasn't changed. In this case your Table.View
implementation is simply passing through the call to GetEntity . Since you haven't implemented any
transformation handlers (yet), the original url parameter remains untouched.
//
// Helper functions
//
// Retrieves the cached schema. If this is the first call
// to CalculateSchema, the table type is calculated based on
// the entity name that was passed into the function.
CalculateSchema = (state) as type =>
if (state[Schema]? = null) then
GetSchemaForEntity(entity)
else
state[Schema],
If you look at the call to Table.View, you'll see an additional wrapper function around the handlers record—
Diagnostics.WrapHandlers. This helper function is found in the Diagnostics module (that was introduced in a
previous tutorial), and provides you with a useful way to automatically trace any errors raised by individual
handlers.
The GetType and GetRows functions have been updated to make use of two new helper functions—
CalculateSchema and CalculateUrl . Right now the implementations of those functions are fairly straightforward—
you'll notice they contain parts of what was previously done by the GetEntity function.
Finally, you'll notice that you're defining an internal function ( View ) that accepts a state parameter. As you
implement more handlers, they will recursively call the internal View function, updating and passing along state
as they go.
Update the TripPinNavTable function once again, replacing the call to TripPin.SuperSimpleView with a call to the
new TripPin.View function, and re-run the unit tests. You won't see any new functionality yet, but you now have a
solid baseline for testing.
NOTE
The Error on Folding Failure setting is an "all or nothing" approach. If you want to test queries that aren't designed to
fold as part of your unit tests, you'll need to add some conditional logic to enable/disable tests accordingly.
The remaining sections of this tutorial will each add a new Table.View handler. You'll be taking a Test Driven
Development (TDD) approach, where you first add failing unit tests, and then implement the M code to resolve
them.
Each handler section below will describe the functionality provided by the handler, the OData equivalent query
syntax, the unit tests, and the implementation. Using the scaffolding code described above, each handler
implementation requires two changes:
Adding the handler to Table.View that will update the state record.
Modifying CalculateUrl to retrieve the values from the state and add to the url and/or query string
parameters.
Handling Table.FirstN with OnTake
The OnTake handler receives a count parameter, which is the maximum number of rows to take. In OData terms,
you can translate this to the $top query parameter.
You'll use the following unit tests:
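For example, a sketch of the two tests (using the Fact helper and Airlines query from the unit testing lesson; the expected rows assume the standard TripPin sample data):

// Query folding tests -- Table.FirstN
Fact("Fold $top 1 on Airlines",
    #table( type table [AirlineCode = text, Name = text] , {{"AA", "American Airlines"}} ),
    Table.FirstN(Airlines, 1)
),
Fact("Fold $top 0 on Airlines",
    #table( type table [AirlineCode = text, Name = text] , {} ),
    Table.FirstN(Airlines, 0)
),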
These tests both use Table.FirstN to filter the result set down to the first X rows. If you run these tests
with Error on Folding Failure set to False (the default), the tests should succeed, but if you run Fiddler (or
check the trace logs), you'll see that the request you send doesn't contain any OData query parameters.
If you set Error on Folding Failure to True , they will fail with the "Please try a simpler expression." error. To fix
this, you'll define your first Table.View handler for OnTake .
The OnTake handler looks like this:
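A minimal sketch, following the state record pattern described above:

OnTake = (count as number) =>
    // Record the requested row count in the state and recurse through the internal View function
    @View(state & [ Top = count ]),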
The CalculateUrl function is updated to extract the Top value from the state record, and set the right
parameter in the query string.
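Inside CalculateUrl, the Top handling might look like the following sketch (qsWithTop then feeds the query string construction shown below):

// Check for a Top value stored by the OnTake handler
qsWithTop =
    if (state[Top]? <> null) then
        [ #"$top" = Number.ToText(state[Top]) ]
    else
        [],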
encodedQueryString = Uri.BuildQueryString(qsWithTop),
finalUrl = urlWithEntity & "?" & encodedQueryString
in
finalUrl
Rerunning the unit tests, you can see that the URL you are accessing now contains the $top parameter. (Note that
due to URL encoding, $top appears as %24top , but the OData service is smart enough to convert it
automatically).
// OnSkip
Fact("Fold $skip 14 on Airlines",
#table( type table [AirlineCode = text, Name = text] , {{"EK", "Emirates"}} ),
Table.Skip(Airlines, 14)
),
Fact("Fold $skip 0 and $top 1",
#table( type table [AirlineCode = text, Name = text] , {{"AA", "American Airlines"}} ),
Table.FirstN(Table.Skip(Airlines, 0), 1)
),
Implementation:
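The handler itself mirrors OnTake, storing the skip count in the state (a sketch):

OnSkip = (count as number) =>
    // Record the number of rows to skip and recurse through the internal View function
    @View(state & [ Skip = count ]),

CalculateUrl then adds the $skip parameter when a Skip value is present: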
qsWithSkip =
if (state[Skip]? <> null) then
qsWithTop & [ #"$skip" = Number.ToText(state[Skip]) ]
else
qsWithTop,
// OnSelectColumns
Fact("Fold $select single column",
#table( type table [AirlineCode = text] , {{"AA"}} ),
Table.FirstN(Table.SelectColumns(Airlines, {"AirlineCode"}), 1)
),
Fact("Fold $select multiple column",
#table( type table [UserName = text, FirstName = text, LastName = text],{{"russellwhyte", "Russell",
"Whyte"}}),
Table.FirstN(Table.SelectColumns(People, {"UserName", "FirstName", "LastName"}), 1)
),
Fact("Fold $select with ignore column",
#table( type table [AirlineCode = text] , {{"AA"}} ),
Table.FirstN(Table.SelectColumns(Airlines, {"AirlineCode", "DoesNotExist"}, MissingField.Ignore), 1)
),
The first two tests select different numbers of columns with Table.SelectColumns , and include a Table.FirstN call
to simplify the test case.
NOTE
If the test were to simply return the column names (using Table.ColumnNames ) and not any data, the request to the
OData service will never actually be sent. This is because the call to GetType will return the schema, which contains all of
the information the M engine needs to calculate the result.
The third test uses the MissingField.Ignore option, which tells the M engine to ignore any selected columns that
don't exist in the result set. The OnSelectColumns handler does not need to worry about this option—the M engine
will handle it automatically (that is, missing columns won't be included in the columns list).
NOTE
The other option for Table.SelectColumns , MissingField.UseNull , requires a connector to implement the
OnAddColumn handler. This will be done in a subsequent lesson.
CalculateUrl is updated to retrieve the list of columns from the state, and combine them (with a separator) for
the $select parameter.
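A sketch of the handler and the corresponding CalculateUrl change:

OnSelectColumns = (columns as list) =>
    // Record the list of selected columns and recurse through the internal View function
    @View(state & [ SelectColumns = columns ]),

// Inside CalculateUrl
qsWithSelect =
    if (state[SelectColumns]? <> null) then
        qsWithSkip & [ #"$select" = Text.Combine(state[SelectColumns], ",") ]
    else
        qsWithSkip,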
// OnSort
Fact("Fold $orderby single column",
#table( type table [AirlineCode = text, Name = text], {{"TK", "Turkish Airlines"}}),
Table.FirstN(Table.Sort(Airlines, {{"AirlineCode", Order.Descending}}), 1)
),
Fact("Fold $orderby multiple column",
#table( type table [UserName = text], {{"javieralfred"}}),
Table.SelectColumns(Table.FirstN(Table.Sort(People, {{"LastName", Order.Ascending}, {"UserName",
Order.Descending}}), 1), {"UserName"})
)
Implementation:
// OnSort - receives a list of records containing two fields:
// [Name] - the name of the column to sort on
// [Order] - equal to Order.Ascending or Order.Descending
// If there are multiple records, the sort order must be maintained.
//
// OData allows you to sort on columns that do not appear in the result
// set, so we do not have to validate that the sorted columns are in our
// existing schema.
OnSort = (order as list) =>
let
// This will convert the list of records to a list of text,
// where each entry is "<columnName> <asc|desc>"
sorting = List.Transform(order, (o) =>
let
column = o[Name],
order = o[Order],
orderText = if (order = Order.Ascending) then "asc" else "desc"
in
column & " " & orderText
),
orderBy = Text.Combine(sorting, ", ")
in
@View(state & [ OrderBy = orderBy ]),
Updates to CalculateUrl :
qsWithOrderBy =
if (state[OrderBy]? <> null) then
qsWithSelect & [ #"$orderby" = state[OrderBy] ]
else
qsWithSelect,
Since the /$count path segment returns a single value (in text/plain format) rather than a JSON result set, you'll
also have to add a new internal function ( TripPin.Scalar ) for making the request and handling the result.
The implementation will then use this function (if no other query parameters are found in the state ):
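A sketch of both pieces follows (the state field names checked here, and the exact shape of TripPin.Scalar, are assumptions based on the scaffolding described earlier):

// Returns a single scalar value from the service as text (for example, the /$count result)
TripPin.Scalar = (url as text) as text =>
    let
        // Ask for a plain text response rather than JSON
        response = Web.Contents(url, [ Headers = [ #"Accept" = "text/plain" ] ]),
        value = Text.FromBinary(response)
    in
        value;

// Handler for Table.RowCount
GetRowCount = () as number =>
    // Only fold the count when no other query options are present in the state;
    // raising an unimplemented error (...) makes the engine fall back to its default handler.
    if (Record.FieldCount(Record.RemoveFields(state, {"Url", "Entity", "Schema"}, MissingField.Ignore)) > 0) then
        ...
    else
        let
            newState = state & [ RowCountOnly = true ],
            finalUrl = CalculateUrl(newState),
            value = TripPin.Scalar(finalUrl)
        in
            Number.FromText(value),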
The CalculateUrl function is updated to append /$count to the URL if the RowCountOnly field is set in the state .
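For example, a sketch of the relevant lines (urlWithEntity is the combined URL shown earlier):

// If only a row count was requested, append /$count to the path (after the entity name)
urlWithRowCount =
    if (state[RowCountOnly]? = true) then
        urlWithEntity & "/$count"
    else
        urlWithEntity,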
// Returns true if there is a folding error, or the original record (for logging purposes) if not.
Test.IsFoldingError = (tryResult as record) =>
    if (tryResult[HasError]? = true and
        tryResult[Error][Message] = "We couldn't fold the expression to the data source. Please try a simpler expression.") then
        true
    else
        tryResult;
Then add a test that uses both Table.RowCount and Table.FirstN to force the error.
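For example, a sketch using the Fact helper and the Test.IsFoldingError function defined above:

Fact("Fold $count + $top *error*", true,
    Test.IsFoldingError(try Table.RowCount(Table.FirstN(Airlines, 3)))
),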
An important note here is that this test will now return an error if Error on Folding Failure is set to false ,
because the Table.RowCount operation will fall back to the local (default) handler. Running the tests with Error on
Folding Failure set to true will cause Table.RowCount to fail, and allows the test to succeed.
Conclusion
Implementing Table.View for your connector adds a significant amount of complexity to your code. Since the M
engine can process all transformations locally, adding Table.View handlers does not enable new scenarios for
your users, but will result in more efficient processing (and potentially, happier users). One of the main advantages
of the Table.View handlers being optional is that it allows you to incrementally add new functionality without
impacting backwards compatibility for your connector.
For most connectors, an important (and basic) handler to implement is OnTake (which translates to $top in
OData), as it limits the amount of rows returned. The Power Query experience will always perform an OnTake of
1000 rows when displaying previews in the navigator and query editor, so your users might see significant
performance improvements when working with larger data sets.
GitHub Connector Sample
10/30/2020 • 7 minutes to read • Edit Online
The GitHub M extension shows how to add support for an OAuth 2.0 protocol authentication flow. You can learn
more about the specifics of GitHub's authentication flow on the GitHub Developer site.
Before you get started creating an M extension, you need to register a new app on GitHub, and replace the
client_id and client_secret files with the appropriate values for your app.
Note about compatibility issues in Visual Studio: The Power Query SDK uses an Internet Explorer based
control to popup OAuth dialogs. GitHub has deprecated its support for the version of IE used by this control, which
will prevent you from completing the permission grant for your app if run from within Visual Studio. An alternative
is to load the extension with Power BI Desktop and complete the first OAuth flow there. After your application has
been granted access to your account, subsequent logins will work fine from Visual Studio.
NOTE
To allow Power BI to obtain and use the access_token, you must specify the redirect url as
https://oauth.powerbi.com/views/oauthredirect.html.
When you specify this URL and GitHub successfully authenticates and grants permissions, GitHub will redirect to
PowerBI's oauthredirect endpoint so that Power BI can retrieve the access_token and refresh_token.
NOTE
A registered OAuth application is assigned a unique Client ID and Client Secret. The Client Secret should not be shared. You
get the Client ID and Client Secret from the GitHub application page. Update the files in your Data Connector project with
the Client ID ( client_id file) and Client Secret ( client_secret file).
//
// Data Source definition
//
GithubSample = [
Authentication = [
OAuth = [
StartLogin = StartLogin,
FinishLogin = FinishLogin
]
],
Label = Extension.LoadString("DataSourceLabel")
];
Step 2 - Provide details so the M engine can start the OAuth flow
The GitHub OAuth flow starts when you direct users to the https://github.com/login/oauth/authorize page. For
the user to login, you need to specify a number of query parameters:
The following code snippet describes how to implement a StartLogin function to start the login flow. A
StartLogin function takes a resourceUrl , state , and display value. In the function, create an AuthorizeUrl that
concatenates the GitHub authorize URL with the following parameters:
client_id : You get the client ID after you register your extension with GitHub from the GitHub application
page.
scope : Set scope to " user, repo ". This sets the authorization scope (that is, what your app wants to access) for
the user.
state : An internal value that the M engine passes in.
redirect_uri : Set to https://oauth.powerbi.com/views/oauthredirect.html.
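A sketch of such a StartLogin implementation (the window dimensions are arbitrary, and client_id and redirect_uri are assumed to be defined elsewhere in the extension, as described above):

StartLogin = (resourceUrl, state, display) =>
    let
        // Build the GitHub authorization URL with the required query parameters
        AuthorizeUrl = "https://github.com/login/oauth/authorize?" & Uri.BuildQueryString([
            client_id = client_id,
            scope = "user, repo",
            state = state,
            redirect_uri = redirect_uri
        ])
    in
        [
            LoginUri = AuthorizeUrl,
            CallbackUri = redirect_uri,
            WindowHeight = 1024,
            WindowWidth = 720,
            Context = null
        ];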
If this is the first time the user is logging in with your app (identified by its client_id value), they'll see a page that
asks them to grant access to your app. Subsequent login attempts will simply ask for their credentials.
Step 3 - Convert the code received from GitHub into an access_token
If the user completes the authentication flow, GitHub redirects back to the Power BI redirect URL with a temporary
code in a code parameter, as well as the state you provided in the previous step in a state parameter. Your
FinishLogin function will extract the code from the callbackUri parameter, and then exchange it for an access
token (using the TokenMethod function).
To get a GitHub access token, you pass the temporary code from the GitHub Authorize Response. In the
TokenMethod function, you formulate a POST request to GitHub's access_token endpoint (
https://github.com/login/oauth/access_token ). The following parameters are required for the GitHub endpoint:
Here are the details of the parameters used for the Web.Contents call:
options: A record to control the behavior of this function. (Not used in this case.)
This code snippet describes how to implement a TokenMethod function to exchange an auth code for an access
token.
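A sketch of such a TokenMethod implementation (again assuming client_id, client_secret, and redirect_uri are defined elsewhere in the extension):

TokenMethod = (code) =>
    let
        // POST the temporary code (plus the app's credentials) to the access_token endpoint
        Response = Web.Contents("https://github.com/login/oauth/access_token", [
            Content = Text.ToBinary(Uri.BuildQueryString([
                client_id = client_id,
                client_secret = client_secret,
                code = code,
                redirect_uri = redirect_uri
            ])),
            Headers = [
                #"Content-type" = "application/x-www-form-urlencoded",
                #"Accept" = "application/json"
            ]
        ]),
        // Parse the JSON response into an M record (contains access_token, scope, token_type)
        Parts = Json.Document(Response)
    in
        Parts;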
The JSON response from the service will contain an access_token field. The TokenMethod function converts the
JSON response into an M record using Json.Document, and returns it to the engine.
Sample response:
{
"access_token":"e72e16c7e42f292c6912e7710c838347ae178b4a",
"scope":"user,repo",
"token_type":"bearer"
}
[DataSource.Kind="GithubSample", Publish="GithubSample.UI"]
shared GithubSample.Contents = Value.ReplaceType(Github.Contents, type function (url as Uri.Type) as any);
[DataSource.Kind="GithubSample"]
shared GithubSample.PagedTable = Value.ReplaceType(Github.PagedTable, type function (url as Uri.Type) as
nullable table);
The GithubSample.Contents function is also published to the UI (allowing it to appear in the Get Data dialog). The
Value.ReplaceType function is used to set the function parameter to the Uri.Type ascribed type.
By associating these functions with the GithubSample data source kind, they'll automatically use the credentials
that the user provided. Any M library functions that have been enabled for extensibility (such as Web.Contents)
will automatically inherit these credentials as well.
For more details on how credential and authentication works, see Handling Authentication.
Sample URL
This connector is able to retrieve formatted data from any of the GitHub v3 REST API endpoints. For example, the
query to pull all commits to the Data Connectors repo would look like this:
GithubSample.Contents("https://api.github.com/repos/microsoft/dataconnectors/commits")
List of Samples
10/30/2020 • 2 minutes to read • Edit Online
We maintain a list of samples on the DataConnectors repo on GitHub. Each of the links below links to a folder in the
sample repository. Generally these folders include a readme, one or more .pq / .query.pq files, a project file for
Visual Studio, and in some cases icons. To open these files in Visual Studio, make sure you've set up the SDK
properly, and run the .mproj file from the cloned or downloaded folder.
Functionality

Hello World: This simple sample shows the basic structure of a connector. (GitHub Link)
Hello World with Docs: Similar to the Hello World sample, this sample shows how to add documentation to a shared function. (GitHub Link)
Unit Testing: This sample shows how you can add simple unit testing to your <extension>.query.pq file. (GitHub Link)

OAuth

ODBC

Hive LLAP: This connector sample uses the Hive ODBC driver, and is based on the connector template. (GitHub Link)
Direct Query for SQL: This sample creates an ODBC-based custom connector that enables Direct Query for SQL Server. (GitHub Link)

TripPin
Authentication Kinds
An extension can support one or more kinds of Authentication. Each authentication kind is a different type of credential. The
authentication UI displayed to end users in Power Query is driven by the type of credential(s) that an extension supports.
The list of supported authentication types is defined as part of an extension's Data Source Kind definition. Each Authentication value
is a record with specific fields. The following table lists the expected fields for each kind. All fields are required unless marked
otherwise.
The sample below shows the Authentication record for a connector that supports OAuth, Key, Windows, Basic (Username and
Password), and anonymous credentials.
Example:
Authentication = [
OAuth = [
StartLogin = StartLogin,
FinishLogin = FinishLogin,
Refresh = Refresh,
Logout = Logout
],
Key = [],
UsernamePassword = [],
Windows = [],
Implicit = []
]
Key: The API key value. Note that the key value is also available in the Password field. By default, the mashup
engine will insert this in an Authorization header as if this value were a basic auth password (with no username). If
this is not the behavior you want, you must specify the ManualCredentials = true option in the options record.
(Used by the Key authentication kind.)
The following code sample accesses the current credential for an API key and uses it to populate a custom header ( x-APIKey ).
Example:
#"x-APIKey" = apiKey,
Accept = "application/vnd.api+json",
#"Content-Type" = "application/json"
],
request = Web.Contents(_url, [ Headers = headers, ManualCredentials = true ])
in
request
NOTE
Power Query extensions are evaluated in applications running on client machines. Data Connectors should not use confidential secrets in their
OAuth flows, as users may inspect the extension or network traffic to learn the secret. See the Proof Key for Code Exchange by OAuth Public
Clients RFC (also known as PKCE) for further details on providing flows that don't rely on shared secrets.
There are two sets of OAuth function signatures; the original signature that contains a minimal number of parameters, and an
advanced signature that accepts additional parameters. Most OAuth flows can be implemented using the original signatures. You
can also mix and match signature types in your implementation. The function calls are matched based on the number of parameters
(and their types). The parameter names are not taken into consideration.
See the Github sample for more details.
Original OAuth Signatures
StartLogin = (dataSourcePath, state, display) => ...;
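The remaining handlers in the original signature set follow the same minimal shape (the parameter names shown here are indicative):

FinishLogin = (context, callbackUri, state) => ...;

Refresh = (dataSourcePath, refreshToken) => ...;

Logout = (token) => ...;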
NOTE
If your data source requires scopes other than user_impersonation , or is incompatible with the use of user_impersonation , then you
should use the OAuth authentication kind.
Most connectors will need to provide values for the AuthorizationUri and Resource fields. Both fields can be text values, or a
single argument function that returns a text value .
AuthorizationUri = "https://login.microsoftonline.com/common/oauth2/authorize"
Resource = "77256ee0-fe79-11ea-adc1-0242ac120002" // Azure AD resource value for your service - Guid or URL
Connectors that use a Uri based identifier do not need to provide a Resource value. By default, the value will be equal to the root
path of the connector's Uri parameter. If the data source's Azure AD resource is different than the domain value (for example, it uses
a GUID), then a Resource value needs to be provided.
Aad authentication kind samples
In this case, the data source supports global cloud Azure AD using the common tenant (no Azure B2B support).
Authentication = [
Aad = [
AuthorizationUri = "https://login.microsoftonline.com/common/oauth2/authorize",
Resource = "77256ee0-fe79-11ea-adc1-0242ac120002" // Azure AD resource value for your service - Guid or URL
]
]
In this case, the data source supports tenant discovery based on OpenID Connect (OIDC) or similar protocol. This allows the
connector to determine the correct Azure AD endpoint to use based on one or more parameters in the data source path. This
dynamic discovery approach allows the connector to support Azure B2B.
// Implement this function to retrieve or calculate the service URL based on the data source path parameters
GetServiceRootFromDataSourcePath = (dataSourcePath) as text => ...;
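// Assumed helper (implementation not shown): sends an unauthenticated request to the service root
// and derives the Azure AD authorization endpoint from the WWW-Authenticate response header.
GetAuthorizationUrlFromWwwAuthenticate = (url as text) as text => ...;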
Authentication = [
Aad = [
AuthorizationUri = (dataSourcePath) =>
GetAuthorizationUrlFromWwwAuthenticate(
GetServiceRootFromDataSourcePath(dataSourcePath)
),
Resource = "https://myAadResourceValue.com", // Azure AD resource value for your service - Guid or URL
]
]
The function has a single required parameter ( message ) of type text , which will be used to calculate the data source path. The
optional parameter ( count ) would be ignored. The path would be displayed in the credential prompt.
Credential prompt:
NOTE
We currently recommend you do not include a Label for your data source if your function has required parameters, as users won't be able to
distinguish between the different credentials they've entered. We are hoping to improve this in the future (that is, allowing data connectors to
display their own custom data source paths).
As Uri.Type is an ascribed type rather than a primitive type in the M language, you'll need to use the Value.ReplaceType function to
indicate that your text parameter should be treated as a Uri.
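For example, a sketch using a hypothetical MyConnector.Feed function (the implementation function still receives the value as text):

MyConnectorImpl = (url as text) as table => ...;

MyConnector.Feed = Value.ReplaceType(
    MyConnectorImpl,
    type function (url as Uri.Type) as table
);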
[DataSource.Kind="HelloWorld", Publish="HelloWorld.Publish"]
shared HelloWorld.Contents = (optional message as text) =>
let
message = if (message <> null) then message else "Hello world"
in
message;
HelloWorld = [
Authentication = [
Implicit = []
],
Label = Extension.LoadString("DataSourceLabel")
];
Properties
The following table lists the fields for your Data Source definition record.
FIELD TYPE DETAILS
Publish to UI
Similar to the Data Source Kind definition record, the Publish record provides the Power Query UI
the information it needs to expose this extension in the Get Data dialog.
Example:
HelloWorld.Publish = [
Beta = true,
ButtonText = { Extension.LoadString("FormulaTitle"), Extension.LoadString("FormulaHelp") },
SourceImage = HelloWorld.Icons,
SourceTypeImage = HelloWorld.Icons
];
HelloWorld.Icons = [
Icon16 = { Extension.Contents("HelloWorld16.png"), Extension.Contents("HelloWorld20.png"),
Extension.Contents("HelloWorld24.png"), Extension.Contents("HelloWorld32.png") },
Icon32 = { Extension.Contents("HelloWorld32.png"), Extension.Contents("HelloWorld40.png"),
Extension.Contents("HelloWorld48.png"), Extension.Contents("HelloWorld64.png") }
];
Properties
The following table lists the fields for your Publish record.
Enabling Direct Query for an ODBC based connector
1/17/2020 • 21 minutes to read • Edit Online
Overview
Using M's built-in Odbc.DataSource function is the recommended way to create custom connectors for data sources
that have an existing ODBC driver and/or support a SQL query syntax. Wrapping the Odbc.DataSource function will
allow your connector to inherit default query folding behavior based on the capabilities reported by your driver.
This will enable the M engine to generate SQL statements based on filters and other transformations defined by the
user within the Power Query experience, without having to provide this logic within the connector itself.
ODBC extensions can optionally enable Direct Query mode, allowing Power BI to dynamically generate queries at
runtime without pre-caching the user's data model.
NOTE
Enabling Direct Query support raises the difficulty and complexity level of your connector. When Direct Query is enabled,
Power BI will prevent the M engine from compensating for operations that cannot be fully pushed to the underlying data
source.
This section builds on the concepts presented in the M Extensibility Reference, and assumes familiarity with the
creation of a basic Data Connector.
Refer to the SqlODBC sample for most of the code examples in the sections below. Additional samples can be found
in the ODBC samples directory.
The following table describes the options record fields that are only available through extensibility. Fields that aren't
simple literal values are described in subsequent sections.
Overriding AstVisitor
The AstVisitor field is set through the Odbc.DataSource options record. It's used to modify SQL statements
generated for specific query scenarios.
NOTE
Drivers that support LIMIT and OFFSET clauses (rather than TOP ) will want to provide a LimitClause override for
AstVisitor.
Constant
Providing an override for this value has been deprecated and may be removed from future implementations.
LimitClause
This field is a function that receives two Int64.Type arguments (skip, take), and returns a record with two text fields
(Text, Location).
The skip parameter is the number of rows to skip (that is, the argument to OFFSET). If an offset is not specified, the
skip value will be null. If your driver supports LIMIT , but does not support OFFSET , the LimitClause function should
return an unimplemented error (...) when skip is greater than 0.
The take parameter is the number of rows to take (that is, the argument to LIMIT).
The Text field of the result contains the SQL text to add to the generated query.
The Location field specifies where to insert the clause. The following table describes supported values.
AfterSelect - LIMIT goes after the SELECT statement, and after any modifiers (such as DISTINCT).
Example: SELECT DISTINCT LIMIT 5 a, b, c FROM table WHERE a > 10
AfterSelectBeforeModifiers - LIMIT goes after the SELECT statement, but before any modifiers (such as DISTINCT).
Example: SELECT LIMIT 5 DISTINCT a, b, c FROM table WHERE a > 10
The following code snippet provides a LimitClause implementation for a driver that expects a LIMIT clause, with an
optional OFFSET, in the following format: [OFFSET <offset> ROWS] LIMIT <row_count>
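A minimal sketch of such an implementation (assuming the driver accepts the clause at the end of the generated
query, that is, the AfterQuerySpecification location) might look like this:
LimitClause = (skip, take) =>
    let
        offset = if (skip > 0) then Text.Format("OFFSET #{0} ROWS", {skip}) else "",
        limit = if (take <> null) then Text.Format("LIMIT #{0}", {take}) else ""
    in
        [
            Text = Text.Format("#{0} #{1}", {offset, limit}),
            Location = "AfterQuerySpecification"
        ]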
The following code snippet provides a LimitClause implementation for a driver that supports LIMIT, but not OFFSET.
Format: LIMIT <row_count> .
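A sketch for this case (again assuming the AfterQuerySpecification location) could raise an error whenever an
offset is requested:
LimitClause = (skip, take) =>
    if (skip > 0) then
        error "Skip/Offset is not supported by this driver"
    else
        [
            Text = Text.Format("LIMIT #{0}", {take}),
            Location = "AfterQuerySpecification"
        ]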
Overriding SqlCapabilities
FIELD          DETAILS
SupportsTop    A logical value that indicates the driver supports the TOP clause to limit the number of
               returned rows. Default: false
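For example, a driver that does support TOP might report it like this (a minimal sketch; other SqlCapabilities
fields are omitted):
SqlCapabilities = [
    SupportsTop = true
]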
Overriding SQLColumns
SQLColumns is a function handler that receives the results of an ODBC call to SQLColumns. The source parameter
contains a table with the data type information. This override is typically used to fix up data type mismatches
between calls to SQLGetTypeInfo and SQLColumns .
For details of the format of the source table parameter, see:
https://docs.microsoft.com/sql/odbc/reference/syntax/sqlcolumns-function
Overriding SQLGetFunctions
This field is used to override SQLFunctions values returned by an ODBC driver. It contains a record whose field
names are equal to the FunctionId constants defined for the ODBC SQLGetFunctions function. Numeric constants
for each of these fields can be found in the ODBC specification.
The following code snippet provides an example explicitly telling the M engine to use CAST rather than CONVERT.
SQLGetFunctions = [
SQL_CONVERT_FUNCTIONS = 0x2 /* SQL_FN_CVT_CAST */
]
Overriding SQLGetInfo
This field is used to override SQLGetInfo values returned by an ODBC driver. It contains a record whose fields are
names equal to the InfoType constants defined for the ODBC SQLGetInfo function. Numeric constants for each of
these fields can be found in the ODBC specification. The full list of InfoTypes that are checked can be found in the
Mashup Engine trace files.
The following table contains commonly overridden SQLGetInfo properties:
FIELD    DETAILS
The following helper function can be used to create bitmask values from a list of integer values:
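One way to build such a helper (a sketch; the helper shipped with the samples may differ) is to OR the values
together with List.Accumulate:
Flags = (flags as list) as number =>
    // Combine a list of integer flag values into a single bitmask.
    List.Accumulate(flags, 0, (state, current) => Number.BitwiseOr(state, current));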
Overriding SQLGetTypeInfo
SQLGetTypeInfo can be specified in two ways:
A fixed table value that contains the same type information as an ODBC call to SQLGetTypeInfo .
A function that accepts a table argument, and returns a table. The argument will contain the original results of
the ODBC call to SQLGetTypeInfo . Your function implementation can modify/add to this table.
The first approach is used to completely override the values returned by the ODBC driver. The second approach is
used if you want to add to or modify these values.
For details of the format of the types table parameter and expected return value, see the SQLGetTypeInfo function
reference.
SQLGetTypeInfo using a static table
The following code snippet provides a static implementation for SQLGetTypeInfo.
SQLGetTypeInfo = #table(
{ "TYPE_NAME", "DATA_TYPE", "COLUMN_SIZE", "LITERAL_PREF", "LITERAL_SUFFIX", "CREATE_PARAS",
"NULLABLE", "CASE_SENSITIVE", "SEARCHABLE", "UNSIGNED_ATTRIBUTE", "FIXED_PREC_SCALE", "AUTO_UNIQUE_VALUE",
"LOCAL_TYPE_NAME", "MINIMUM_SCALE", "MAXIMUM_SCALE", "SQL_DATA_TYPE", "SQL_DATETIME_SUB", "NUM_PREC_RADIX",
"INTERNAL_PRECISION", "USER_DATA_TYPE" }, {
Once you have simple queries working, you can then try Direct Query scenarios (for example, building reports in
the Report Views). The queries generated in Direct Query mode will be significantly more complex (that is, use of
sub-selects, COALESCE statements, and aggregations).
Concatenation of strings in Direct Query mode
The M engine does basic type size limit validation as part of its query folding logic. If you are receiving a folding
error when trying to concatenate two strings that potentially overflow the maximum size of the underlying
database type:
1. Ensure that your database can support up-conversion to CLOB types when string concat overflow occurs.
2. Set the TolerateConcatOverflow option for Odbc.DataSource to true .
The DAX CONCATENATE function is currently not supported by Power Query/ODBC extensions. Extension
authors should ensure string concatenation works through the query editor by adding calculated columns (
[stringCol1] & [stringCol2] ). When the capability to fold the CONCATENATE operation is added in the future,
it should work seamlessly with existing extensions.
Handling Resource Path
10/30/2020 • 2 minutes to read • Edit Online
The M engine identifies a data source using a combination of its Kind and Path. When a data source is encountered
during a query evaluation, the M engine will try to find matching credentials. If no credentials are found, the engine
returns a special error that results in a credential prompt in Power Query.
The Kind value comes from Data Source Kind definition.
The Path value is derived from the required parameters of your data source function(s). Optional parameters aren't
factored into the data source path identifier. As a result, all data source functions associated with a data source kind
must have the same parameters. There's special handling for functions that have a single parameter of type
Uri.Type . See below for further details.
You can see an example of how credentials are stored in the Data source settings dialog in Power BI Desktop. In
this dialog, the Kind is represented by an icon, and the Path value is displayed as text.
NOTE
If you change your data source function's required parameters during development, previously stored
credentials will no longer work (because the path values no longer match). You should delete any stored
credentials any time you change your data source function parameters. If incompatible credentials are found,
you may receive an error at runtime.
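For example, consider a data source function shaped like the following (the HelloWorld names and body are
hypothetical, used only to illustrate the parameter handling):
[DataSource.Kind="HelloWorld"]
shared HelloWorld.Contents = (message as text, optional count as number) as table =>
    Table.Repeat(#table({"Message"}, {{message}}), if count = null then 1 else count);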
The function has a single required parameter (message) of type text, which will be used to calculate the data
source path. The optional parameter (count) will be ignored. The path would be displayed as follows:
Credential prompt:
When a Label value is defined, the data source path value won't be shown:
NOTE
We currently recommend that you do not include a Label for your data source if your function has
required parameters, as users won't be able to distinguish between the different credentials they've entered.
We are hoping to improve this in the future (that is, allowing data connectors to display their own custom data
source paths).
As Uri.Type is an ascribed type rather than a primitive type in the M language, you'll need to use the
Value.ReplaceType function to indicate that your text parameter should be treated as a Uri.
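A sketch of that pattern (the connector names here are placeholders):
MyConnectorImpl = (url as text) =>
    let
        source = Web.Contents(url)
    in
        source;

[DataSource.Kind="MyConnector"]
shared MyConnector.Contents = Value.ReplaceType(MyConnectorImpl, type function (url as Uri.Type) as any);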
Handling Paging
REST APIs typically have some mechanism to transmit large volumes of records broken up into pages of results.
Power Query has the flexibility to support many different paging mechanisms. However, since each paging
mechanism is different, some amount of modification of the paging examples is likely to be necessary to fit your
situation.
Typical Patterns
The heavy lifting of compiling all page results into a single table is performed by the Table.GenerateByPage() helper
function, which can generally be used with no modification. The code snippets presented in the
Table.GenerateByPage() helper function section describe how to implement some common paging patterns.
Regardless of pattern, you'll need to understand:
1. How do you request the next page of data?
2. Does the paging mechanism involve calculating values, or do you extract the URL for the next page from the
response?
3. How do you know when to stop paging?
4. Are there parameters related to paging (such as "page size") that you should be aware of?
Handling Transformations
1/17/2020 • 3 minutes to read • Edit Online
For situations where the data source response isn't presented in a format that Power BI can consume directly, Power
Query can be used to perform a series of transformations.
Static Transformations
In most cases, the data is presented in a consistent way by the data source: column names, data types, and
hierarchical structure are consistent for a given endpoint. In this situation it's appropriate to always apply the same
set of transformations to get the data in a format acceptable to Power BI.
An example of static transformation can be found in the TripPin Part 2 - Data Connector for a REST Service tutorial
when the data source is treated as a standard REST service:
let
Source = TripPin.Feed("https://services.odata.org/v4/TripPinService/Airlines"),
value = Source[value],
toTable = Table.FromList(value, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
expand = Table.ExpandRecordColumn(toTable, "Column1", {"AirlineCode", "Name"}, {"AirlineCode", "Name"})
in
expand
It's important to note that a sequence of static transformations of this specificity are only applicable to a single
endpoint. In the example above, this sequence of transformations will only work if "AirlineCode" and "Name" exist
in the REST endpoint response, since they are hard-coded into the M code. Thus, this sequence of transformations
may not work if you try to hit the /Event endpoint.
This high level of specificity may be necessary for pushing data to a navigation table, but for more general data
access functions it's recommended that you only perform transformations that are appropriate for all endpoints.
NOTE
Be sure to test transformations under a variety of data circumstances. If the user doesn't have any data at the /airlines
endpoint, do your transformations result in an empty table with the correct schema? Or is an error encountered during
evaluation? See TripPin Part 7: Advanced Schema with M Types for a discussion on unit testing.
Dynamic Transformations
More complex logic is sometimes needed to convert API responses into stable and consistent forms appropriate for
Power BI data models.
Inconsistent API Responses
Basic M control flow (if statements, HTTP status codes, try...catch blocks, and so on) are typically sufficient to handle
situations where there are a handful of ways in which the API responds.
Determining Schema On-The-Fly
Some APIs are designed such that multiple pieces of information must be combined to get the correct tabular
format. Consider Smartsheet's /sheets endpoint response, which contains an array of column names and an array
of data rows. The Smartsheet Connector is able to parse this response in the following way:
raw = Web.Contents(...),
columns = raw[columns],
columnTitles = List.Transform(columns, each [title]),
columnTitlesWithRowNumber = List.InsertRange(columnTitles, 0, {"RowNumber"}),
1. First deal with column header information. You can pull the title record of each column into a List, prepending
with a RowNumber column that you know will always be represented as this first column.
2. Next you can define a function that allows you to parse a row into a List of cell values. You can again prepend
rowNumber information.
3. Apply your RowAsList() function to each of the rows returned in the API response.
4. Convert the List to a table, specifying the column headers.
Handling Schema
1/17/2020 • 7 minutes to read • Edit Online
Depending on your data source, information about data types and column names may or may not be provided
explicitly. OData REST APIs typically handle this using the $metadata definition, and the Power Query OData.Feed
method automatically handles parsing this information and applying it to the data returned from an OData source.
Many REST APIs don't have a way to programmatically determine their schema. In these cases you'll need to
include a schema definition in your connector.
Consider the following code that returns a simple table from the TripPin OData sample service:
let
url = "https://services.odata.org/TripPinWebApiService/Airlines",
source = Json.Document(Web.Contents(url))[value],
asTable = Table.FromRecords(source)
in
asTable
NOTE
TripPin is an OData source, so realistically it would make more sense to simply use the OData.Feed function's automatic
schema handling. In this example you'll be treating the source as a typical REST API and using Web.Contents to
demonstrate the technique of hardcoding a schema by hand.
You can use the handy Table.Schema function to check the data type of the columns:
let
url = "https://services.odata.org/TripPinWebApiService/Airlines",
source = Json.Document(Web.Contents(url))[value],
asTable = Table.FromRecords(source)
in
Table.Schema(asTable)
Both AirlineCode and Name are of any type. Table.Schema returns a lot of metadata about the columns in a table,
including names, positions, type information, and many advanced properties such as Precision, Scale, and
MaxLength. For now you should only concern yourself with the ascribed type ( TypeName ), primitive type ( Kind ),
and whether the column value might be null ( IsNullable ).
Defining a Simple Schema Table
Your schema table will be composed of two columns:
C O L UM N DETA IL S
Name The name of the column. This must match the name in the
results returned by the service.
Type The M data type you're going to set. This can be a primitive
type (text, number, datetime, and so on), or an ascribed type
(Int64.Type, Currency.Type, and so on).
The hardcoded schema table for the Airlines table will set its AirlineCode and Name columns to text and looks
like this:
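A sketch of that table (following the same #table pattern used for the People table below):
Airlines = #table({"Name", "Type"}, {
    {"AirlineCode", type text},
    {"Name", type text}
})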
As you look to some of the other endpoints, consider the following schema tables:
The Airports table has four fields you'll want to keep (including one of type record ):
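For example (the exact field names here are assumptions based on the TripPin service):
Airports = #table({"Name", "Type"}, {
    {"IcaoCode", type text},
    {"Name", type text},
    {"IataCode", type text},
    {"Location", type record}
})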
The People table has seven fields, including list s ( Emails , AddressInfo ), a nullable column ( Gender ), and a
column with an ascribed type ( Concurrency ):
People = #table({"Name", "Type"}, {
{"UserName", type text},
{"FirstName", type text},
{"LastName", type text},
{"Emails", type list},
{"AddressInfo", type list},
{"Gender", type nullable text},
{"Concurrency", Int64.Type}
})
You can put all of these tables into a single master schema table SchemaTable :
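For example (a sketch; the entity names must match the entries in your navigation table):
SchemaTable = #table({"Entity", "Type"}, {
    {"Airlines", Airlines},
    {"Airports", Airports},
    {"People", People}
})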
NOTE
The last step to set the table type will remove the need for the Power Query UI to infer type information when viewing the
results in the query editor, which can sometimes result in a double-call to the API.
Sophisticated Approach
The hardcoded implementation discussed above does a good job of making sure that schemas remain consistent
for simple JSON responses, but it's limited to parsing the first level of the response. Deeply nested data sets would
benefit from the following approach, which takes advantage of M Types.
Here is a quick refresh about types in the M language from the Language Specification:
A type value is a value that classifies other values. A value that is classified by a type is said to conform to
that type. The M type system consists of the following kinds of types:
Primitive types, which classify primitive values ( binary , date , datetime , datetimezone , duration , list ,
logical , null , number , record , text , time , type ), and also include a number of abstract types
( function , table , any , and none ).
Record types, which classify record values based on field names and value types.
List types, which classify lists using a single item base type.
Function types, which classify function values based on the types of their parameters and return values.
Table types, which classify table values based on column names, column types, and keys.
Nullable types, which classify the value null in addition to all the values classified by a base type.
Type types, which classify values that are types.
Using the raw JSON output you get (and/or by looking up the definitions in the service's $metadata), you can
define the following record types to represent OData complex types:
LocationType = type [
Address = text,
City = CityType,
Loc = LocType
];
CityType = type [
CountryRegion = text,
Name = text,
Region = text
];
LocType = type [
#"type" = text,
coordinates = {number},
crs = CrsType
];
CrsType = type [
#"type" = text,
properties = record
];
Notice how LocationType references the CityType and LocType to represent its structured columns.
For the top-level entities that you'll want represented as Tables, you can define table types:
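For example (the field lists here are illustrative; note how AirportsType reuses the LocationType record type
defined above):
AirlinesType = type table [
    AirlineCode = text,
    Name = text
];

AirportsType = type table [
    Name = text,
    IataCode = text,
    Location = LocationType
];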
You can then update your SchemaTable variable (which you can use as a lookup table for entity-to-type mappings)
to use these new type definitions:
SchemaTable = #table({"Entity", "Type"}, {
{"Airlines", AirlinesType},
{"Airports", AirportsType},
{"People", PeopleType}
});
You can rely on a common function ( Table.ChangeType ) to enforce a schema on your data, much like you used
SchemaTransformTable in the earlier exercise. Unlike SchemaTransformTable , Table.ChangeType takes an actual M
table type as an argument, and will apply your schema recursively for all nested types. Its signature is:
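The signature (matching the helper implementation shown later in this document) is:
Table.ChangeType = (table, tableType as type) as nullable table => ...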
NOTE
For flexibility, the function can be used on tables as well as lists of records (which is how tables are represented in a JSON
document).
You'll then need to update the connector code to change the schema parameter from a table to a type , and add a
call to Table.ChangeType . Again, the details for doing so are very implementation-specific and thus not worth going
into in detail here. This extended TripPin connector example demonstrates an end-to-end solution implementing
this more sophisticated approach to handling schema.
Status Code Handling with Web.Contents
1/17/2020 • 2 minutes to read • Edit Online
The Web.Contents function has some built in functionality for dealing with certain HTTP status codes. The default
behavior can be overridden in your extension using the ManualStatusHandling field in the options record.
Automatic retry
Web.Contents will automatically retry requests that fail with one of the following status codes:
CODE    STATUS
Requests will be retried up to 3 times before failing. The engine uses an exponential back-off algorithm to
determine how long to wait until the next retry, unless the response contains a Retry-after header. When the
header is found, the engine will wait the specified number of seconds before the next retry. The minimum
supported wait time is 0.5 seconds, and the maximum value is 120 seconds.
NOTE
The Retry-after value must be in the delta-seconds format. The HTTP-date format is currently not supported.
Authentication exceptions
The following status codes will result in a credentials exception, causing an authentication prompt asking the user
to provide credentials (or re-login in the case of an expired OAuth token).
CODE    STATUS
401     Unauthorized
403     Forbidden
NOTE
Extensions are able to use the ManualStatusHandling option with status codes 401 and 403, which is not something that
can be done in Web.Contents calls made outside of an extension context (that is, directly from Power Query).
Redirection
The following status codes will result in an automatic redirect to the URI specified in the Location header. A missing
Location header will result in an error.
CODE    STATUS
302     Found
NOTE
Only status code 307 will keep a POST request method. All other redirect status codes will result in a switch to GET .
Wait-Retry Pattern
1/17/2020 • 2 minutes to read • Edit Online
In some situations a data source's behavior does not match that expected by Power Query's default HTTP code
handling. The examples below show how to work around this situation.
In this scenario you'll be working with a REST API that occasionally returns a 500 status code, indicating an internal
server error. In these instances, you could wait a few seconds and retry, potentially a few times before you give up.
ManualStatusHandling
If Web.Contents gets a 500 status code response, it throws a DataSource.Error by default. You can override this
behavior by providing a list of codes as an optional argument to Web.Contents :
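For example (mirroring the call used in the full example at the end of this section):
response = Web.Contents(url, [ManualStatusHandling = {500}])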
By specifying the status codes in this way, Power Query will continue to process the web response as normal.
However, normal response processing is often not appropriate in these cases. You'll need to understand that an
abnormal response code has been received and perform special logic to handle it. To determine the response code
that was returned from the web service, you can access it from the meta Record that accompanies the response:
responseCode = Value.Metadata(response)[Response.Status]
Based on whether responseCode is 200 or 500, you can either process the result as normal, or follow your wait-
retry logic that you'll flesh out in the next section.
NOTE
We recommend that you use Binary.Buffer to force Power Query to cache the Web.Contents results if you'll be
implementing complex logic such as the Wait-Retry pattern shown here. This prevents Power Query's multi-threaded
execution from making multiple calls with potentially inconsistent results.
Value.WaitFor
Value.WaitFor() is a standard helper function that can usually be used with no modification. It works by building a
List of retry attempts.
producer Argument
This contains the task to be (possibly) retried. It's represented as a function so that the iteration number can be used
in the producer logic. The expected behavior is that producer will return null if a retry is determined to be
necessary. If anything other than null is returned by producer , that value is in turn returned by Value.WaitFor .
delay Argument
This contains the logic to execute between retries. It's represented as a function so that the iteration number can be
used in the delay logic. The expected behavior is that delay returns a Duration.
count Argument (optional)
A maximum number of retries can be set by providing a number to the count argument.
Putting It All Together
The following example shows how ManualStatusHandling and Value.WaitFor can be used to implement a delayed
retry in the event of a 500 response. Wait time between retries here is shown as doubling with each try, with a
maximum of 5 retries.
let
    waitForResult = Value.WaitFor(
        (iteration) =>
            let
                result = Web.Contents(url, [ManualStatusHandling = {500}]),
                buffered = Binary.Buffer(result),
                status = Value.Metadata(result)[Response.Status],
                actualResult = if status = 500 then null else buffered
            in
                actualResult,
        (iteration) => #duration(0, 0, 0, Number.Power(2, iteration)),
        5)
in
    waitForResult
Handling Unit Testing
1/17/2020 • 2 minutes to read • Edit Online
For both simple and complex connectors, adding unit tests is a best practice and highly recommended.
Unit testing is accomplished in the context of Visual Studio's Power Query SDK. Each test is defined as a Fact that
has a name, an expected value, and an actual value. In most cases, the "actual value" will be an M expression that
tests part of your expression.
Consider a very simple extension that exports three functions:
section Unittesting;
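// For illustration only: the three exported functions might look like this (names and bodies are hypothetical).
shared UnitTesting.ReturnsABC = () => "ABC";
shared UnitTesting.Returns123 = () => "123";
shared UnitTesting.ReturnTableWithFiveRows = () => Table.Repeat(#table({"a"}, {{1}}), 5);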
This unit test code is made up of a number of Facts, and a bunch of common code for the unit test framework (
ValueToText , Fact , Facts , Facts.Summarize ). The following code provides an example set of Facts (see
UnitTesting.query.pq for the common code):
section UnitTestingTests;
shared MyExtension.UnitTest =
[
// Put any common variables here if you only want them to be evaluated once
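    // A sketch of how the Facts might be declared (test names and expressions are illustrative):
    facts = {
        Fact("Check that this function returns 'ABC'",  // name of the test
            "ABC",                                      // expected value
            UnitTesting.ReturnsABC()                    // expression to evaluate
        ),
        Fact("Check that this function returns '123'",
            "123",
            UnitTesting.Returns123()
        )
    },
    report = Facts.Summarize(facts)
][report];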
Running the sample in Visual Studio will evaluate all of the Facts and give you a visual summary of the pass rates:
Implementing unit testing early in the connector development process enables you to follow the principles of test-
driven development. Imagine that you need to write a function called Uri.GetHost that returns only the host data
from a URI. You might start by writing a test case to verify that the function appropriately performs the expected
function:
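A first Fact might look something like this (a sketch; the expected value assumes the function strips everything
except the host):
Fact("Returns only the host portion of a URL",
    "bing.com",
    Uri.GetHost("https://bing.com/subpath/query?param=1")
)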
Additional tests can be written to ensure that the function appropriately handles edge cases.
An early version of the function might pass some but not all tests:
The final version of the function should pass all unit tests. This also makes it easy to ensure that future updates to
the function do not accidentally remove any of its basic functionality.
Helper Functions
1/17/2020 • 10 minutes to read • Edit Online
This topic contains a number of helper functions commonly used in M extensions. These functions may eventually
be moved to the official M library, but for now can be copied into your extension file code. You shouldn't mark any
of these functions as shared within your extension code.
Navigation Tables
Table.ToNavigationTable
This function adds the table type metadata needed for your extension to return a table value that Power Query can
recognize as a Navigation Tree. See Navigation Tables for more information.
Table.ToNavigationTable = (
table as table,
keyColumns as list,
nameColumn as text,
dataColumn as text,
itemKindColumn as text,
itemNameColumn as text,
isLeafColumn as text
) as table =>
let
tableType = Value.Type(table),
newTableType = Type.AddTableKey(tableType, keyColumns, true) meta
[
NavigationTable.NameColumn = nameColumn,
NavigationTable.DataColumn = dataColumn,
NavigationTable.ItemKindColumn = itemKindColumn,
Preview.DelayColumn = itemNameColumn,
NavigationTable.IsLeafColumn = isLeafColumn
],
navigationTable = Value.ReplaceType(table, newTableType)
in
navigationTable;
PARAMETER          DETAILS
keyColumns List of column names that act as the primary key for your
navigation table.
nameColumn The name of the column that should be used as the display
name in the navigator.
dataColumn The name of the column that contains the Table or Function
to display.
itemKindColumn The name of the column to use to determine the type of icon
to display. Valid values for the column are Table and
Function .
Example usage:
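(A sketch; the item values and MyFunction.Contents are placeholders.)
objects = #table(
    {"Name", "Key", "Data", "ItemKind", "ItemName", "IsLeaf"}, {
    {"Item1", "item1", #table({"Column1"}, {{"Item1"}}), "Table", "Table", true},
    {"MyFunction", "myfunction", MyFunction.Contents, "Function", "Function", true}
}),
NavTable = Table.ToNavigationTable(objects, {"Key"}, "Name", "Data", "ItemKind", "ItemName", "IsLeaf")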
URI Manipulation
Uri.FromParts
This function constructs a full URL based on individual fields in the record. It acts as the reverse of Uri.Parts.
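A sketch of the helper (assuming the record passed in has the same fields produced by Uri.Parts):
Uri.FromParts = (parts) =>
    let
        port = if (parts[Scheme] = "https" and parts[Port] = 443) or (parts[Scheme] = "http" and parts[Port] = 80)
            then ""
            else ":" & Text.From(parts[Port]),
        div1 = if Record.FieldCount(parts[Query]) > 0 then "?" else "",
        div2 = if Text.Length(parts[Fragment]) > 0 then "#" else "",
        uri = Text.Combine({parts[Scheme], "://", parts[Host], port, parts[Path], div1,
            Uri.BuildQueryString(parts[Query]), div2, parts[Fragment]})
    in
        uri;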
Uri.GetHost
This function returns the scheme, host, and default port (for HTTP/HTTPS) for a given URL. For example,
https://bing.com/subpath/query?param=1&param2=hello would become https://bing.com:443 .
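A sketch consistent with that description:
Uri.GetHost = (url as text) as text =>
    let
        parts = Uri.Parts(url)
    in
        parts[Scheme] & "://" & parts[Host] & ":" & Text.From(parts[Port]);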
ValidateUrlScheme
This function checks if the user entered an HTTPS URL and raises an error if they don't. This is required for user
entered URLs for certified connectors.
ValidateUrlScheme = (url as text) as text => if (Uri.Parts(url)[Scheme] <> "https") then error "Url scheme
must be HTTPS" else url;
To apply it, just wrap your url parameter in your data access function.
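For example (MyConnector.Feed is a placeholder for your own data access function):
MyConnector.Feed = (url as text) =>
    let
        validated = ValidateUrlScheme(url),
        source = Web.Contents(validated)
    in
        source;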
Retrieving Data
Value.WaitFor
This function is useful when making an asynchronous HTTP request and you need to poll the server until the
request is complete.
Value.WaitFor = (producer as function, interval as function, optional count as number) as any =>
let
list = List.Generate(
() => {0, null},
(state) => state{0} <> null and (count = null or state{0} < count),
(state) => if state{1} <> null then {null, state{1}} else {1 + state{0}, Function.InvokeAfter(()
=> producer(state{0}), interval(state{0}))},
(state) => state{1})
in
List.Last(list);
Table.GenerateByPage
This function is used when an API returns data in an incremental/paged format, which is common for many REST
APIs. The getNextPage argument is a function that takes in a single parameter, which will be the result of the
previous call to getNextPage , and should return a nullable table .
getNextPage is called repeatedly until it returns null . The function will collate all pages into a single table. When
the result of the first call to getNextPage is null, an empty table is returned.
// The getNextPage function takes a single argument and is expected to return a nullable table
Table.GenerateByPage = (getNextPage as function) as table =>
let
listOfPages = List.Generate(
() => getNextPage(null), // get the first page of data
(lastPage) => lastPage <> null, // stop when the function returns null
(lastPage) => getNextPage(lastPage) // pass the previous page to the next function call
),
// concatenate the pages together
tableOfPages = Table.FromList(listOfPages, Splitter.SplitByNothing(), {"Column1"}),
firstRow = tableOfPages{0}?
in
// if we didn't get back any pages of data, return an empty table
// otherwise set the table type based on the columns of the first page
if (firstRow = null) then
Table.FromRows({})
else
Value.ReplaceType(
Table.ExpandTableColumn(tableOfPages, "Column1", Table.ColumnNames(firstRow[Column1])),
Value.Type(firstRow[Column1])
);
Additional notes:
The getNextPage function will need to retrieve the next page URL (or page number, or whatever other values
are used to implement the paging logic). This is generally done by adding meta values to the page before
returning it.
The columns and table type of the combined table (that is, all pages together) are derived from the first page of
data. The getNextPage function should normalize each page of data.
The first call to getNextPage receives a null parameter.
getNextPage must return null when there are no pages left.
An example of using this function can be found in the Github sample, and the TripPin paging sample.
SchemaTransformTable
EnforceSchema.Strict = 1; // Add any missing columns, remove extra columns, set table type
EnforceSchema.IgnoreExtraColumns = 2; // Add missing columns, do not remove extra columns
EnforceSchema.IgnoreMissingColumns = 3; // Do not add or remove columns
SchemaTransformTable = (table as table, schema as table, optional enforceSchema as number) as table =>
let
// Default to EnforceSchema.Strict
_enforceSchema = if (enforceSchema <> null) then enforceSchema else EnforceSchema.Strict,
Table.ChangeType
let
// table should be an actual Table.Type, or a List.Type of Records
Table.ChangeType = (table, tableType as type) as nullable table =>
// we only operate on table types
if (not Type.Is(tableType, type table)) then error "type argument should be a table type" else
        // if we have a null value, just return it
if (table = null) then table else
let
columnsForType = Type.RecordFields(Type.TableRow(tableType)),
columnsAsTable = Record.ToTable(columnsForType),
schema = Table.ExpandRecordColumn(columnsAsTable, "Value", {"Type"}, {"Type"}),
previousMeta = Value.Metadata(tableType),
// If given a generic record type (no predefined fields), the original record is returned
Record.ChangeType = (record as record, recordType as type) =>
let
// record field format is [ fieldName = [ Type = type, Optional = logical], ... ]
fields = try Type.RecordFields(recordType) otherwise error "Record.ChangeType: failed to get
record fields. Is this a record type?",
fieldNames = Record.FieldNames(fields),
fieldTable = Record.ToTable(fields),
optionalFields = Table.SelectRows(fieldTable, each [Value][Optional])[Name],
requiredFields = List.Difference(fieldNames, optionalFields),
// make sure all required fields exist
withRequired = Record.SelectFields(record, requiredFields, MissingField.UseNull),
// append optional fields
withOptional = withRequired & Record.SelectFields(record, optionalFields, MissingField.Ignore),
// set types
transforms = GetTransformsForType(recordType),
withTypes = Record.TransformFields(withOptional, transforms, MissingField.Ignore),
// order the same as the record type
reorder = Record.ReorderFields(withTypes, fieldNames, MissingField.Ignore)
in
if (List.IsEmpty(fieldNames)) then record else reorder,
Handling Errors
Errors in Power Query generally halt query evaluation and display a message to the user.
let
Source = "foo",
Output = error "error message"
in
Output
let
Source = "foo",
Output = error Error.Record("error reason", "error message", "error detail")
in
Output
try "foo"
If an error is found, the following record is returned from the try expression:
try "foo"+1
The Error record contains Reason , Message , and Detail fields.
Depending on the error, the Detail field may contain additional information.
The otherwise clause can be used with a try expression to perform some action if an error occurs:
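For example (a minimal sketch):
try "foo" + 1 otherwise "There was an error"
Here the addition raises an error, so the expression evaluates to the otherwise value instead.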
Handling Documentation
Power Query will automatically generate an invocation UI for you based on the arguments for your function. By
default, this UI will contain the name of your function, and an input for each of your parameters.
Similarly, evaluating the name of your function, without specifying parameters, will display information about it.
You might notice that built-in functions typically provide a better user experience, with descriptions, tooltips, and
even sample values. You can take advantage of this same mechanism by defining specific meta values on your
function type. This topic describes the meta fields that are used by Power Query, and how you can make use of
them in your extensions.
Function Types
You can provide documentation for your function by defining custom type values. The process looks like this:
1. Define a type for each parameter.
2. Define a type for your function.
3. Add various Documentation.* fields to your types metadata record.
4. Call Value.ReplaceType to ascribe the type to your shared function.
You can find more information about types and metadata values in the M Language Specification.
Using this approach allows you to supply descriptions and display names for your function, as well as individual
parameters. You can also supply sample values for parameters, as well as defining a preset list of values (turning
the default text box control into a drop down).
The Power Query experience retrieves documentation from meta values on the type of your function, using a
combination of calls to Value.Type, Type.FunctionParameters, and Value.Metadata.
Function Documentation
The following table lists the Documentation fields that can be set in the metadata for your function. All fields are
optional.
Parameter Documentation
The following table lists the Documentation fields that can be set in the metadata for your function parameters. All
fields are optional.
Basic Example
The following code snippet (and resulting dialogs) are from the HelloWorldWithDocs sample.
[DataSource.Kind="HelloWorldWithDocs", Publish="HelloWorldWithDocs.Publish"]
shared HelloWorldWithDocs.Contents = Value.ReplaceType(HelloWorldImpl, HelloWorldType);
Function info
Multi-Line Example
[DataSource.Kind="HelloWorld", Publish="HelloWorld.Publish"]
shared HelloWorld.Contents =
let
HelloWorldType = type function (
message1 as (type text meta [
Documentation.FieldCaption = "Message 1",
Documentation.FieldDescription = "Text to display for message 1",
Documentation.SampleValues = {"Hello world"},
Formatting.IsMultiLine = true,
Formatting.IsCode = true
]),
message2 as (type text meta [
Documentation.FieldCaption = "Message 2",
Documentation.FieldDescription = "Text to display for message 2",
Documentation.SampleValues = {"Hola mundo"},
Formatting.IsMultiLine = true,
Formatting.IsCode = false
])) as text,
HelloWorldFunction = (message1 as text, message2 as text) as text => message1 & message2
in
Value.ReplaceType(HelloWorldFunction, HelloWorldType);
This code (with associated publish information, etc.) results in the following dialog in Power BI. New lines will be
represented in text with '#(lf)', or 'line feed'.
Handling Navigation
1/17/2020 • 3 minutes to read • Edit Online
Navigation Tables (or nav tables) are a core part of providing a user-friendly experience for your connector. The
Power Query experience displays them to the user after they've entered any required parameters for your data
source function, and have authenticated with the data source.
Behind the scenes, a nav table is just a regular M Table value with specific metadata fields defined on its Type.
When your data source function returns a table with these fields defined, Power Query will display the navigator
dialog. You can actually see the underlying data as a Table value by right-clicking on the root node and selecting
Edit .
Table.ToNavigationTable
You can use the Table.ToNavigationTable function to add the table type metadata needed to create a nav table.
NOTE
You currently need to copy and paste this function into your M extension. In the future it will likely be moved into the M
standard library.
keyColumns List of column names that act as the primary key for your
navigation table.
nameColumn The name of the column that should be used as the display
name in the navigator.
dataColumn The name of the column that contains the Table or Function
to display.
itemKindColumn The name of the column to use to determine the type of icon
to display. See below for the list of valid values for the column.
FIELD    PARAMETER
NavigationTable.NameColumn nameColumn
NavigationTable.DataColumn dataColumn
NavigationTable.ItemKindColumn itemKindColumn
NavigationTable.IsLeafColumn isLeafColumn
Preview.DelayColumn itemNameColumn
This code will result in the following Navigator display in Power BI Desktop:
This code would result in the following Navigator display in Power BI Desktop:
Test Connection
Custom Connector support is available in both Personal and Standard modes of the on-premises data gateway.
Both gateway modes support Import. Direct Query is only supported in Standard mode. OAuth for custom
connectors via gateways is currently supported only for gateway admins but not other data source users.
The method for implementing TestConnection functionality is likely to change while the Power BI Custom Data
Connector functionality is in preview.
To support scheduled refresh through the on-premises data gateway, your connector must implement a
TestConnection handler. The function is called when the user is configuring credentials for your source, and used to
ensure they are valid. The TestConnection handler is set in the Data Source Kind record, and has the following
signature:
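A sketch of the expected shape (the function name here is a placeholder; concrete implementations follow below):
TestConnection = (dataSourcePath) as list => { "MyConnector.Contents", dataSourcePath }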
Where dataSourcePath is the Data Source Path value for your function, and the return value is a list composed of:
The name of the function to call (this function must be marked as #shared , and is usually your primary data
source function).
One or more arguments to pass to your function.
If the invocation of the function results in an error, TestConnection is considered to have failed, and the credential
won't be persisted.
NOTE
As stated above, the function name provided by TestConnection must be a shared member.
TripPin = [
TestConnection = (dataSourcePath) => { "TripPin.Contents" },
Authentication = [
Anonymous = []
],
Label = "TripPin"
];
GithubSample = [
TestConnection = (dataSourcePath) => {"GithubSample.Contents", dataSourcePath},
Authentication = [
OAuth = [
StartLogin = StartLogin,
FinishLogin = FinishLogin,
Label = Extension.LoadString("AuthenticationLabel")
]
]
];
DirectSQL = [
TestConnection = (dataSourcePath) =>
let
json = Json.Document(dataSourcePath),
server = json[server],
database = json[database]
in
{ "DirectSQL.Database", server, database },
Authentication = [
Windows = [],
UsernamePassword = []
],
Label = "Direct Query for SQL"
];
Handling Power Query Connector Signing
1/17/2020 • 3 minutes to read • Edit Online
In Power BI, the loading of custom connectors is limited by your choice of security setting. As a general rule, when
the security for loading custom connectors is set to 'Recommended', the custom connectors won't load at all, and
you have to lower it to make them load.
The exception to this is trusted, 'signed connectors'. Signed connectors are a special format of custom connector, a
.pqx instead of .mez file, which have been signed with a certificate. The signer can provide the user or the user's IT
department with a thumbprint of the signature, which can be put into the registry to securely indicate trusting a
given connector.
The following steps enable you to use a certificate (with explanation on how to generate one if you don't have one
available) and sign a custom connector with the 'MakePQX' tool.
NOTE
If you need help creating a self-signed certificate to test these instructions, see the Microsoft documentation on New-
SelfSignedCertificate in PowerShell.
NOTE
If you need help exporting your certificate as a pfx, see How to create a PKCS#12 (PFX) file on a Windows server.
1. Download MakePQX.
2. Extract the MakePQX folder in the included zip to your desired target.
3. To run it, call MakePQX in the command-line. It requires the other libraries in the folder, so you can't copy
just the one executable. Running without any parameters will return the help information.
Usage: MakePQX [options] [command]
Options:
OPTIONS    DESCRIPTION
Commands:
verify Verify the signature status on a .pqx file. Return value will be
non-zero if the signature is invalid.
There are three commands in MakePQX. Use MakePQX [command] --help for more information about a
command.
Pack
The Pack command takes a .mez file and packs it into a .pqx file, which is able to be signed. The .pqx file is also able
to support a number of capabilities that will be added in the future.
Usage: MakePQX pack [options]
Options:
OPTION    DESCRIPTION
-t | --target Output file name. Defaults to the same name as the input file.
Example
C:\Users\cpope\Downloads\MakePQX>MakePQX.exe pack -mz
"C:\Users\cpope\OneDrive\Documents\Power BI Desktop\Custom Connectors\HelloWorld.mez" -t
"C:\Users\cpope\OneDrive\Documents\Power BI Desktop\Custom Connectors\HelloWorldSigned.pqx"
Sign
The Sign command signs your .pqx file with a certificate, giving it a thumbprint that can be checked for trust by
Power BI clients with the higher security setting. This takes a pqx file and returns the same pqx file, signed.
Usage: MakePQX sign [arguments] [options]
Arguments:
Options:
OPTION    DESCRIPTION
Example
C:\Users\cpope\Downloads\MakePQX>MakePQX sign "C:\Users\cpope\OneDrive\Documents\Power
BI Desktop\Custom Connectors\HelloWorldSigned.pqx" --certificate ColinPopellTestCertificate.pfx --
password password
Verify
The Verify command verifies that your module has been properly signed, as well as showing the Certificate status.
Usage: MakePQX verify [arguments] [options]
Arguments:
Options:
OPTION    DESCRIPTION
Example
C:\Users\cpope\Downloads\MakePQX>MakePQX verify "C:\Users\cpope\OneDrive\Documents\Power
BI Desktop\Custom Connectors\HelloWorldSigned.pqx"
{
"SignatureStatus": "Success",
"CertificateStatus": [
{
"Issuer": "CN=Colin Popell",
"Thumbprint": "16AF59E4BE5384CD860E230ED4AED474C2A3BC69",
"Subject": "CN=Colin Popell",
"NotBefore": "2019-02-14T22:47:42-08:00",
"NotAfter": "2020-02-14T23:07:42-08:00",
"Valid": false,
"Parent": null,
"Status": "UntrustedRoot"
}
]
}
Power Query Connector Certification
NOTE
This article describes the requirements and process to submit a Power Query custom connector for certification. Read the
entire article closely before starting the certification process.
Introduction
Certifying a Power Query custom connector makes the connector available publicly, out-of-box, within Power BI
Desktop. Certification is governed by Microsoft's Connector Certification Program, where Microsoft works with
partner developers to extend the data connectivity capabilities of Power BI.
Certified connectors are:
Maintained by the partner developer
Supported by the partner developer
Certified by Microsoft
Distributed by Microsoft
We work with partners to try to make sure that they have support in maintenance, but customer issues with the
connector itself will be directed to the partner developer.
Certification Overview
Prerequisites
To ensure the best experience for our customers, we only consider connectors that meet a set of prerequisites for
certification:
The connector must be for a public product.
The developer must provide an estimate for usage. We suggest that developers of connectors for very
boutique products use our connector self-signing capabilities to provide them directly to the customer.
The connector must be already made available to customers directly to fulfill a user need or business
scenario.
The connector must be working successfully at an anticipated level of usage by customers.
There must be a thread in the Power BI Ideas forum driven by customers to indicate demand to make the
connector publicly available in Power BI Desktop.
These prerequisites exist to ensure that connectors undergoing certification have significant customer and business
need to be used and supported post-certification.
Process and Timelines
Certified connectors are released with monthly Power BI Desktop releases, so the deadlines for each release work
back from each Power BI Desktop release date. The expected duration of the certification process from registration
to release varies depending on the quality and complexity of the connector submission, and is outlined in the
following steps:
Registration : notification of intent to certify your custom connector. This must occur by the 15th of the
month, two months before the targeted Power BI desktop release.
For example, for the April Power BI Desktop release, the deadline would be February 15th.
Submission : submission of connector files for Microsoft review. This must occur by the 1st of the month
before the targeted Power BI desktop release.
For example, for the April Power BI Desktop release, the deadline would be March 1st.
Technical Review : finalization of the connector files, passing Microsoft review and certification. This must
occur by the 15th of the month before the targeted Power BI Desktop release.
For example, for the April Power BI Desktop release, the deadline would be March 15th.
Due to the complexity of the technical reviews and potential delays, rearchitecture, and testing issues, we highly
recommend submitting early with a long lead time for the initial release and certification. If you feel like your
connector is important to deliver to a few customers with minimal overhead, we recommend self-signing and
providing it that way.
Certification Requirements
We have a certain set of requirements for certification. We recognize that not every developer can meet these
requirements, and we're hoping to introduce a feature set that will handle developer needs in short order.
Submission Files (Artifacts)
Please ensure the connector files that you submit include all of the following:
Connector (.mez) file
The .mez file should follow style standards.
Name the .mez file: ProductName.mez
Power BI Desktop (.pbix) file for testing
We require a sample Power BI report (.pbix) to test your connector with.
The report should include at least one query to test each item in your navigation table.
If there's no set schema (for example, databases), the report needs to include a query for each "type" of
table that the connector may handle.
Test account to your data source
We will use the test account to test and troubleshoot your connector.
Provide a test account that is persistent, so we can use the same account to certify any future updates.
Testing instructions
Provide any documentation on how to use the connector and test its functionality.
Links to external dependencies (for example, ODBC drivers)
Features and Style
The connector must follow a set of feature and style rules to meet a usability standard consistent with other
certified connectors.
The connector MUST:
Use Section document format.
Have version adornment on section.
Provide function documentation metadata.
Have TestConnection handler.
Follow naming conventions (for example, DataSourceKind.FunctionName ).
The FunctionName should make sense for the domain (for example "Contents", "Tables", "Document",
"Databases", and so on).
The connector SHOULD:
Have icons.
Provide a navigation table.
Place strings in a resources.resx file.
Security
There are specific security considerations that your connector must handle.
If Extension.CurrentCredentials() is used:
Is the usage required? If so, where do the credentials get sent to?
Are the requests guaranteed to be made through HTTPS?
You can use the HTTPS enforcement helper function.
If the credentials are sent using Web.Contents() via GET:
Can it be turned into a POST?
If GET is required, the connector MUST use the CredentialQueryString record in the
Web.Contents() options record to pass in sensitive credentials.
If Diagnostics.* functions are used:
Validate what is being traced; data must not contain PII or large amounts of unnecessary data.
If you implemented significant tracing in development, you should implement a variable or feature flag
that determines if tracing should be on. This must be turned off prior to submitting for certification.
If Expression.Evaluate() is used:
Validate where the expression is coming from and what it is (that is, can dynamically construct calls to
Extension.CurrentCredentials() and so on).
The Expression should not be user provided nor take user input.
The Expression should not be dynamic (that is, retrieved from a web call).
NOTE
Template apps do not support connectors that require a gateway.