Analysis Services Performance Guide for SQL Server 2012 and SQL Server 2014
Summary: This white paper describes how business intelligence developers can apply query and
processing performance-tuning techniques to their OLAP solutions running on Microsoft
SQL Server Analysis Services.
This paper is based on the performance guide for 2008 R2 and has been reviewed and updated to
validate performance on SQL Server 2012 and SQL Server 2014.
Contributors and Technical Reviewers: Akshai Mirchandani, Siva Harinath, Lisa Liu
Applies to: SQL Server 2012 (including 2012 SP1), SQL Server 2014
Copyright
The information contained in this document represents the current view of Microsoft Corporation
on the issues discussed as of the date of publication. Because Microsoft must respond to
changing market conditions, it should not be interpreted to be a commitment on the part of
Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the
date of publication.
This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES,
EXPRESS, IMPLIED, OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.
Complying with all applicable copyright laws is the responsibility of the user. Without limiting the
rights under copyright, no part of this document may be reproduced, stored in, or introduced into
a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written
permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual
property rights covering subject matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual property.
Microsoft, Excel, SQL Server, Visual Basic, Visual Studio, and Windows are trademarks of the
Microsoft group of companies.
Contents
1 Introduction
1.1 Overview of Performance Goals
2 Design Patterns for Scalable Cubes
2.1 Building Optimal Dimensions
2.1.1 Using the KeyColumns, ValueColumn, and NameColumn Properties Effectively
2.1.2 Hiding Attribute Hierarchies
2.1.3 Setting or Disabling Ordering of Attributes
2.1.4 Setting Default Attribute Members
2.1.5 Removing the All Level
2.1.6 Identifying Attribute Relationships
2.1.7 Using Hierarchies Effectively
2.1.8 Turning Off the Attribute Hierarchy
2.1.9 Reference Dimensions
2.1.10 Fast-Changing Attributes
2.1.11 Large Dimensions
2.2 Partitioning a Cube
2.2.1 Partition Slicing
2.2.2 Partition Sizing
2.2.3 Partition Strategy
2.3 Relational Data Source Design
2.3.1 Use a Star Schema for Best Performance
2.3.2 Consider Moving Calculations to the Relational Engine
2.3.3 Use Views
2.4 Calculation Scripts
2.4.1 Learn MDX Basics
2.4.2 Use Attributes Instead of Sets
2.4.3 Use SCOPE Instead of IIF When Addressing Cube Space
2.4.4 Avoid Mimicking Engine Features with Expressions
2.4.5 Comparing Objects and Values
2.4.6 Evaluating Set Membership
3 Tuning Query Performance
3.1 Query Processor Architecture
3.1.1 Session Management
3.1.2 Query Processing
3.1.3 Data Retrieval
3.2 Query Processor Internals
3.2.1 Subspace Computation
3.2.2 Expensive vs. Inexpensive Query Plans
3.2.3 Expression Sparsity
3.2.4 Default Values
3.2.5 Varying Attributes
3.3 Optimizing MDX
3.3.1 Creating a Query Speed Baseline
3.3.2 Isolating the Problem
3.3.3 Cell-by-Cell Mode vs. Subspace Mode
3.3.4 Avoid Assigning Non Null Values to Otherwise Empty Cells
3.3.5 Sparse/Dense Considerations with “expr1 * expr2” Expressions
3.3.6 IIf Function in SQL Server 2008 Analysis Services
3.3.7 Cache Partial Expressions and Cell Properties
3.3.8 Eliminate Varying Attributes in Set Expressions
3.3.9 Eliminate Cost of Computing Formatted Values
3.3.10 NON_EMPTY_BEHAVIOR
3.4 Aggregations
3.4.1 Detecting Aggregation Hits
3.4.2 How to Interpret Aggregations
3.4.3 Aggregation Tradeoffs
3.4.4 Building Aggregations
3.5 Cache Warming
3.5.1 Cache Warming Guidelines
3.5.2 Implementing a Cache Warming Strategy
3.6 Scale-Out
4 Tuning Processing Performance
4.1 Baselining Processing
4.1.1 Performance Monitor Trace
4.1.2 Profiler Trace
4.1.3 Determining Where You Spend Processing Time
4.2 Tuning Dimension Processing
4.2.1 Dimension Processing Architecture
4.2.2 Dimension Processing Commands
4.3 Tuning Cube Dimension Processing
4.3.1 Reduce Attribute Overhead
4.3.2 Tuning the Relational Dimension Processing Queries
4.4 Tuning Partition Processing
4.4.1 Partition Processing Architecture
4.4.2 Partition Processing Commands
4.4.3 Partition Processing Performance Best Practices
4.4.4 Optimizing Data Inserts, Updates, and Deletes
4.4.5 Picking Efficient Data Types in Fact Tables
4.4.6 Tuning the Relational Partition Processing Query
4.4.7 Splitting Processing Index and Process Data
4.4.8 Increasing Concurrency by Adding More Partitions
4.4.9 Adjusting Maximum Number of Connections
4.4.10 Tuning the Process Index Phase
4.4.11 Partitioning the Relational Source
5 Special Considerations
5.1 Distinct Count
5.1.1 Partition Design
5.1.2 Processing of Distinct Count
5.1.3 Distinct Count Partition Aggregation Considerations
5.1.4 Optimize the Disk Subsystem for Random I/O
5.2 Large Many-to-Many Dimensions
5.3 Parent-Child Dimensions
5.4 Near Real Time and ROLAP
5.4.1 MOLAP Switching
5.4.2 ROLAP + MOLAP
5.4.3 Comparing MOLAP Switching and ROLAP + MOLAP
5.4.4 ROLAP
5.5 NUMA
5.5.1 NUMA Optimizations in SSAS
5.5.2 General NUMA Tips
5.5.3 Specific NUMA Configurations for SSAS
5.5.4 Addenda: NUMA with Tabular Models
6 Conclusion
Send feedback
7 Resources
1 Introduction
This guide contains a collection of tips and design strategies to help you build and tune Analysis Services
cubes for the best possible performance. This guide is primarily aimed at business intelligence (BI)
developers who are building a new cube from scratch or optimizing an existing cube for better
performance.
The goal of this guide is to provide you with the necessary background to understand design tradeoffs,
and to suggest techniques and design patterns that can help you achieve the best possible performance
of even large cubes.
The guide was previously published in 2008, and has been updated to cover changes in SQL Server 2012
and SQL Server 2014.
Design Patterns for Scalable Cubes – No amount of query tuning and optimization can match the
benefits of a well-designed data model. This section contains guidance to help you get the design right
the first time. In general, good cube design follows Kimball modeling techniques, and if you avoid some
typical design mistakes, you are in very good shape.
Tuning Query Performance - Query performance directly affects the quality of the end-user experience.
As such, it is the primary benchmark used to evaluate the success of an online analytical processing
(OLAP) implementation. Analysis Services provides a variety of mechanisms to accelerate query
performance, including aggregations, caching, and indexed data retrieval. This section also provides
guidance on writing efficient Multidimensional Expressions (MDX) calculation scripts.
Tuning Processing Performance - Processing is the operation that refreshes data in an Analysis Services
database. The faster the processing performance, the sooner users can access refreshed data. Analysis
Services provides a variety of mechanisms that you can use to influence processing performance,
including parallelized processing designs, relational tuning, and an economical processing strategy (for
example, incremental versus full refresh versus proactive caching).
Special Considerations – Some features of Analysis Services such as distinct count measures and many-
to-many dimensions require careful attention to cube design. At the end of the paper you will find a
section that describes the special techniques you should apply when using these features.
2 Design Patterns for Scalable Cubes
Cubes present a unique challenge to the BI developer: they are databases that are expected to respond
quickly to most queries. Depending on the data model you implement, the end user might have
considerable freedom to create ad hoc queries. Achieving a balance between user freedom and scalable
design will determine the success of a cube.
Each industry has specific design patterns that lend themselves to value-added reporting. A detailed
treatment of optimal, industry specific data model is outside the scope of this document, but there are
many common design patterns that you can apply across all industries. This section explains these
patterns and how you can leverage them for increased scalability in your cube design.
2.1 Building Optimal Dimensions
Dimensions are composed of attributes, which are related to each other through hierarchies. Efficient
use of attributes is a key design skill to master, and studying and implementing the attribute
relationships available in the business model can help improve cube performance.
In this section, you will find guidance on how to optimize dimensions and properly use attributes and
hierarchies.
2.1.1 Using the KeyColumns, ValueColumn, and NameColumn Properties Effectively
The KeyColumns property specifies one or more source fields that uniquely identify each
instance of the attribute.
The NameColumn property specifies the source field that will be displayed to end users. If you
do not specify a value for the NameColumn property, it is automatically set to the value of the
KeyColumns property.
ValueColumn allows you to store additional information about the attribute, which is typically
used for calculations. Unlike member properties, this property of an attribute is strongly typed,
providing increased performance when it is used in calculations. The contents of this property
can be accessed through the MemberValue MDX function.
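For example, if the ValueColumn of the Product attribute were bound to a numeric list price column (an illustrative assumption, not taken from the original design), a calculation could use the strongly typed value directly through MemberValue. A minimal sketch:

// Assumes the ValueColumn of [Product].[Product] holds the list price (illustrative)
CREATE MEMBER CurrentCube.[Measures].[Average List Price] AS
    Avg([Product].[Product].[Product].Members,
        [Product].[Product].CurrentMember.MemberValue);

Because MemberValue returns the typed value, the calculation avoids the string conversion that an untyped member property lookup would require.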
We recommend that you use both ValueColumn and NameColumn in your dimension design, because
smart use of these properties can eliminate the need for extraneous attributes. Reducing the total
number of attributes used in your design also makes it more efficient.
Additionally, use these practices to reduce processing time, reduce the size of the dimension, and
minimize the likelihood of user errors:
Assign a numeric source field, if available, to the KeyColumns property, rather than a string
property.
Use a single column key instead of a composite, multi-column key. This is especially true for
attributes that have a large number of members, that is, greater than one million members.
2.1.2 Hiding Attribute Hierarchies
In addition to user hierarchies, Analysis Services by default creates a flat hierarchy for every attribute in
a dimension. These automatically generated hierarchies are called attribute hierarchies. Hiding attribute
hierarchies is often a good idea, because a lot of hierarchies in a single dimension will typically confuse
users and make client queries less efficient. Consider setting AttributeHierarchyVisible = false for most
attribute hierarchies and expose user hierarchies instead.
To avoid end-user reports referring to the surrogate key directly, we recommend that you hide it. The
best way to hide a surrogate key from users is to set the AttributeHierarchyVisible = false in the
dimension design process, and then remove the attribute from any user hierarchies. This prevents end-
user tools from referencing the surrogate key, leaving you free to change the key value if requirements
change.
2.1.3 Setting or Disabling Ordering of Attributes
There are a few cases in which you don't care about the ordering of an attribute, and the surrogate key
is one such case. For hidden attributes such as these, which you use only for implementation purposes,
you can set AttributeHierarchyOrdered = false to save time during processing of the dimension.
2.1.4 Setting Default Attribute Members
Any query that does not explicitly reference a hierarchy will use the current member of that hierarchy.
The default behavior of Analysis Services is to assign the All member of a dimension as the default
member, which is normally the desired behavior.
However, for some attributes, such as the current day in a date dimension, it sometimes makes sense to
explicitly assign a default member. For example, you might set a default date in the Adventure Works
cube like this.
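One way to do this is with an ALTER CUBE statement in the cube's MDX script; the attribute and member key below are assumed for illustration:

// Illustrative: the attribute name and date key format depend on your dimension design
ALTER CUBE CurrentCube
UPDATE DIMENSION [Date].[Date],
DEFAULT_MEMBER = [Date].[Date].&[20110701];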
Some client tools might not handle default members correctly, though. For example, Microsoft Excel
2010 will not provide a visual indication that a default member is currently selected, but the default
member will nonetheless influence the query result. This is often confusing to users who expect the All
level to be the current member, given that no other members are referenced in the query.
Also, when a default member is set in a dimension with multiple hierarchies, the results can be hard for
users to interpret.
In general, you should explicitly set default members only on dimensions with single hierarchies or in
hierarchies that do not have an All level.
2.1.5 Removing the All Level
It can also be expensive to ask for the All level of a dimension if there is no good aggregate to respond
to the query. For example, if you have a cube partitioned by currency, asking for the All level of currency
will cause a scan of all partitions, which could be expensive and lead to a useless result.
In order to prevent users from querying meaningless All levels, you can disable the All member in a
hierarchy. You do this by setting the IsAggregateable = false on the attribute at the top of the hierarchy.
Note that if you disable the All level, you should also set a default member as described in the previous
section. If you don’t, Analysis Services will choose one for you.
2.1.6 Identifying Attribute Relationships
Often, there are relationships between attributes that can be used by the Analysis Services engine to
optimize performance, and these relationships might not necessarily be manifested in the original
dimension table.
When you begin designing attribute hierarchies, by default, all attributes are related to the key, and the
attribute relationship diagram resembles a “bush” in which relationships all stem from the key attribute
and end at each of the other attributes.
You can optimize performance by changing the bush to more of a tree: that is, by defining hierarchical
relationships supported by the data.
In the example shown in figures 1 and 2, a model name identifies the product line and subcategory, and
the subcategory identifies a category. No subcategory is found in more than one category. You can
redefine the relationships in the attribute relationship editor to make these relationships clearer.
Attribute relationships help performance in three significant ways:
Cross-products between levels in the hierarchy do not need to go through the key attribute. This
saves CPU time during queries.
Aggregations built on attributes can be reused for queries on related attributes. This saves
resources during processing and queries.
Auto-Exist can more efficiently eliminate attribute combinations that do not exist in the data.
Consider the cross-product between Subcategory and Category, given the two designs shown in Figures
1 and 2. In the design in Figure 1, no attribute relationships have been explicitly defined, and therefore
the engine must first find which products belong to each subcategory, and then determine which
categories each of these products belongs to. For large dimensions, this can take a long time. However,
if the attribute relationship is defined as depicted in Figure 2, the Analysis Services engine can use
indexes, which are built at processing time, to determine the correct category for each subcategory,
making queries much faster.
Attribute relationships can be either flexible or rigid:
In a flexible attribute relationship, members can move around during dimension updates. For
example, the relationship between customer and city should perhaps be flexible, as customers
might move.
In a rigid attribute relationship, the member relationships are guaranteed to be fixed. For
example, the relationship between month and year is fixed because a particular month isn’t
going to change its year when the dimension is reprocessed.
The choice of whether a relationship is flexible or rigid is not merely one of semantics; it affects
processing. When a change in a flexible relationship is detected during processing, all indexes for
partitions referencing the affected dimension must be invalidated (including the indexes for attributes
that are not affected). This is an expensive operation and may cause Process Update operations to take
a very long time. Indexes that have been invalidated by changes in flexible relationships must be rebuilt
after a Process Update operation with a Process Index on the affected partitions; this adds even more
time to cube processing. For more information on how Process Update works, see this blog article
( http://blogs.msdn.com/b/karang/archive/2012/05/03/processupdate_2d00_insight.aspx).
Flexible relationships are the default setting. Carefully consider the advantages of rigid relationships and
change the default where the design allows it.
2.1.7 Using Hierarchies Effectively
Analysis Services enables you to build two types of user hierarchies, natural and unnatural, and they have different design and performance characteristics.
In a natural hierarchy, all attributes participating as levels in the hierarchy have direct or indirect
attribute relationships extending from the bottom of the hierarchy to the top of the hierarchy.
In an unnatural hierarchy, the hierarchy consists of at least two consecutive levels that have no
attribute relationships. Typically these hierarchies are used to create drill-down paths of
commonly viewed attributes that do not follow any natural hierarchy. For example, users may
want to view a hierarchy of Gender and Education.
From a performance perspective, natural hierarchies behave very differently than unnatural hierarchies
do. In natural hierarchies, the hierarchy tree is materialized on disk in hierarchy stores. In addition, all
attributes participating in natural hierarchies are automatically considered to be aggregation candidates.
Unnatural hierarchies are not materialized on disk, and the attributes participating in unnatural
hierarchies are not automatically considered as aggregation candidates. Rather, they simply provide
users with easy-to-use drill-down paths for commonly viewed attributes that do not have natural
relationships. By assembling these attributes into hierarchies, you can also use a variety of MDX
navigation functions to easily perform calculations like percent of parent.
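For example, a percent-of-parent calculation over a user hierarchy might look like the following sketch (the hierarchy and measure names follow the Adventure Works sample and are assumptions here):

CREATE MEMBER CurrentCube.[Measures].[Ratio To Parent] AS
    // Return nothing at the top (All) level, where there is no parent
    IIF([Product].[Product Categories].CurrentMember.Level.Ordinal = 0,
        NULL,
        [Measures].[Internet Sales Amount] /
        ([Measures].[Internet Sales Amount],
         [Product].[Product Categories].CurrentMember.Parent)),
    FORMAT_STRING = "Percent";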
To take advantage of natural hierarchies, define cascading attribute relationships for all attributes that
participate in the hierarchy.
2.1.8 Turning Off the Attribute Hierarchy
If you only want to access an attribute as a member property, and not expose the attribute directly, you
can disable the attribute’s hierarchy by setting the AttributeHierarchyEnabled property to False. From a
processing perspective, disabling the attribute hierarchy can improve performance and decrease cube
size because the attribute will no longer be indexed or aggregated. This can be especially useful for high-
cardinality attributes that have a one-to-one relationship with the primary key (such as phone numbers
and addresses), and attributes that typically do not require slice-and-dice analysis. By disabling the
hierarchies for these attributes and accessing them via member properties, you can save processing
time and reduce cube size.
Deciding whether to disable the attribute’s hierarchy requires that you consider the following impacts of
using member properties.
Member properties cannot be placed on a query axis in an MDX query in the same manner as
attribute hierarchies and user hierarchies. To query a member property, you must query the
attribute that contains that member property.
For example, if you require the work phone number for a customer, you must query the
properties of the customer and then request the phone number property (see the example
query below). As a convenience, most front-end tools easily display member properties in their
user interfaces.
In general, filtering measures using member properties is slower than filtering using attribute
hierarchies, because member properties are not indexed and do not participate in aggregations.
The actual impact to query performance depends on how you use the attribute.
For example, if your users want to slice and dice data by both account number and account
description, from a querying perspective you may be better off having the attribute hierarchies
in place and removing the bitmap indexes if processing performance is an issue.
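The example query below illustrates the first consideration: to retrieve a member property, you query the owning attribute and ask for the property. The attribute and property names follow the Adventure Works sample and are assumptions here:

SELECT
    { [Measures].[Internet Sales Amount] } ON COLUMNS,
    [Customer].[Customer].[Customer].Members
        DIMENSION PROPERTIES [Customer].[Customer].[Phone] ON ROWS
FROM [Adventure Works];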
2.1.9 Reference Dimensions
By default, a reference dimension is non-materialized. This means that queries have to perform the join
between the reference and the outer dimension table at query time. Also, filters defined on attributes in
the outer dimension table are not driven into the measure group when the bitmaps there are scanned.
When this happens, too much data is read from disk to answer user queries. Leaving a dimension as
non-materialized prioritizes modeling flexibility over query performance. Consider carefully whether you
can afford this tradeoff. Cubes are typically intended to support fast ad-hoc queries, and subjecting the
end user to bad query performance is rarely a good idea. Users are not in a position to understand that
you are using reference dimensions, and users can’t do anything to avoid them.
Analysis Services does provide an option that lets you materialize the reference dimension. When you
enable this option, memory and disk structures are created that make the dimension behave just like a
denormalized star schema. This means that you will retain all the performance benefits of a regular,
non-reference dimension.
Be careful when using materialized reference dimensions though– if you run a process update on the
intermediate dimension, any changes in the relationships between the outer dimension and the
reference will not be reflected in the cube. Instead, the original relationship between the outer
dimension and the measure group is retained, which is most likely not the desired result. In a way, you
can consider the reference table to have a rigid relationship to the attributes in the outer dimension. The only
way to reflect changes in the reference table is to fully process the dimension.
2.1.10 Fast-Changing Attributes
Type 2 Fast-Changing Attributes - If you try to track every change in a fast-changing attribute, the
dimension containing the attribute might grow very large. Type 2 attributes are typically added to a
dimension with a Process Add command, but at some point, running Process Add on a large dimension
and running all the consistency checks will take a long time.
In general, having a huge dimension is unwieldy because users will have trouble querying it and the
server will have trouble keeping it in memory. A good example is the customer age – the values would
change every year and cause the customer dimension to grow dramatically.
Type 1 Fast-Changing Attributes – Even if you do not track every change to the attribute, you may still
run into issues with fast-changing attributes. To reflect a change in the data source to the cube, you
have to run Process Update on the changed dimension. As the cube and dimension grows larger,
running Process Update becomes expensive.
For example, you might be tracking the status of a server in a hosting environment, using values like
“Running”, “Shut down”, and “Overloaded”. A status attribute like this might change several times per
day or even per hour. Running frequent Process Updates on such a dimension to reflect changes can be
an expensive operation, and it may not be feasible in a production environment because Analysis
Services locks some files while processing. 1
In the following sections, we will look at some modeling options you can use to address these problems.
Consider the customer dimension with an age attribute, as discussed above. Modeling the Age attribute
directly in the Customer dimension produces a design like the following diagram. Notice that every time
Thomas has a birthday, a new row is added to the dimension table.
1 For an example, see http://geekswithblogs.net/darrengosbell/archive/2007/04/24/SSAS-Processing-ForceCommitTimeout-and-quotthe-operation-has-been-cancelledquot.aspx.
Figure 4. Age included in customer dimension
An alternative design approach splits the customer dimension into two dimensions, like this:
There are, however, some restrictions on where this approach can be applied. Creating a separate
dimension works best when the changing attribute takes on a small, distinct set of values. This design
also adds complexity; when you add more dimensions to the model, you create more work for the ETL
developers when the fact table is loaded. Also, consider the storage impact on the fact table: With the
second design, the fact table becomes wider, and more bytes have to be stored per row.
Consider, for example, the cube that tracks server hosting: You might want to track the status of all
servers, which changes frequently. Assume for the example that the server dimension is used by a fact
table that captures performance counters and that you have designed the data model like this:
The problem with this model is the Status column. If the Fact Counter table is large and status changes a
lot, Process Update will take a very long time to run. To optimize, consider the following design instead.
Figure 7: Status column in its own dimension
If you implement DimServer as the intermediate reference table to DimServerStatus, Analysis Services
no longer has to keep track of the metadata in the FactCounter when you run Process Update on
DimServerStatus.
However, this also means that the join to DimServerStatus will happen at run time, increasing CPU cost
and query times. Plus you cannot index attributes in DimServer because the intermediate dimension is
not materialized.
In summary, you have to carefully balance the tradeoff between processing time and query speeds.
2.1.11 Large Dimensions
Very large dimensions run into some practical limits in Analysis Services.
First, all indexes on the fact tables must be considered potentially invalid when an attribute
changes.
Second, string values in dimension attributes are stored on a disk structure called the string
store. This structure has a size limitation of 4 GB. Thus if a dimension contains attributes where
the total size of the string values (this includes translations) exceeds 4 GB, you will get an error
during processing.
Consider for a moment a dimension with tens or even hundreds of millions of members. Such a
dimension can be built and added to a cube, no matter whether you are running on SQL Server 2005,
2008, or 2008 R2. But what does such a dimension mean to an ad-hoc user? How will the user navigate
it? Are there hierarchies that can group the members of this dimension into reasonable sizes for
rendering in a client? While it may make sense for some reporting purposes to search for individual
members in such a dimension, it may not be the right problem to solve with a cube.
When you build cubes, ask yourself: is this a cube problem? For example, think of this typical
telecommunications data set, which models detailed records of phone calls.
In this particular example, there are 300 million customers in the data model. There is no good way to
group these customers and allow ad-hoc access to the cube at reasonable speeds. Even if you manage to
optimize the space used to fit in the 4-GB string store, how would users browse a customer dimension
like this?
If you find yourself in a situation where a dimension becomes too large and unwieldy, consider building
the cube on top of an aggregate. For the call records data set, imagine a transformation like the
following:
When you substitute an aggregated fact table, the problem dimension with 300 million rows turns into a
much smaller dimension with 100,000 rows. You might consider aggregating the facts to save storage
too – alternatively, you can add a demographics key directly to the original fact table, process on top of
this data source, and rely on MOLAP compression to reduce data sizes.
Beginning in SQL Server 2012 Analysis Services, you can reconfigure string storage to accommodate very
large strings in dimension attributes or partitions, and exceed the previous 4 GB file size limit for string
stores. If your dimensions or partitions include string stores of this size, you can work around the file size
constraint by changing the StringStoresCompatibilityLevel property. However, note these limitations:
String storage configuration is optional, which means that even new databases that you create
in SQL Server 2012 will continue to use the default string store architecture, which is subject to
the 4 GB maximum file size.
Using the larger string storage architecture has a small but noticeable impact on performance.
You should use it only if your string storage files are close to or at the maximum 4 GB limit.
Changing the string storage settings of an object requires that you reprocess the object itself
and any dependent object. Processing is required to complete the procedure.
To configure storage for larger strings, set the StringStoresCompatibilityLevel property on a dimension
or partition. Valid values for this property include the following:
Value | Description
1050 | Specifies the default string storage architecture, which is subject to a 4 GB maximum file size per store.
1100 | Specifies larger string storage, which supports up to 4 billion unique strings per store.
2.2 Partitioning a Cube
This section specifically addresses how you can use partitions to improve query performance. The
advantages of partitioning for query performance are two-fold: you can eliminate partitions that aren’t
needed in a query and optimize aggregation design. However, in your partitioning strategy you must
often make a tradeoff between query and processing performance.
Partition elimination - Partitions that do not contain data in the requested subcube are not queried at
all, thus avoiding the cost of reading the index, or of scanning a table if the server is in ROLAP mode.
While reading a partition index and finding no available rows is a cheap operation, these reads begin to
put a strain on the thread pool as the number of concurrent users grows. Moreover, when Analysis
Services encounters queries that do not have indexes to support them, all potentially matching
partitions must be scanned for data.
Aggregation design - Each partition can have its own unique aggregation design, or it can use a shared
aggregation design. Partitions queried more often or differently might have their own designs.
2.2.1 Partition Slicing
Figure 10 shows an example of a Profiler trace on a query that requests the Reseller Sales Amount from
Adventure Works for the year 2003, grouped by Business Type. The Reseller Sales measure group of the
Adventure Works cube contains four partitions: one for each year. Because the query slices on 2003, the
storage engine can go directly to the 2003 Reseller Sales partition and ignore other partitions.
Analysis Services can establish a partition slice in two ways:
Auto slice – when Analysis Services reads the data during processing, it keeps track of the
minimum and maximum attribute DataIDs read. These values are used to set the slice when the
indexes are built on the partition.
Manual slice – the cube developer explicitly defines the slice by setting the partition's Slice
property. A manual slice is the only way to define the slice for ROLAP partitions and for
partitions that use proactive caching.
2.2.1.2 Auto Slice
During processing of MOLAP partitions, Analysis Services internally identifies the range of data that is
contained in each partition by using the minimum and maximum DataIDs of each attribute. The data
range for each attribute is then combined to create the slice definition for the partition.
The minimum and maximum DataIDs can specify either a single member or a range of members. For
example, partitioning by Year results in the same minimum and maximum DataID slice for the Year
attribute, and queries to a specific moment in time only result in partition queries to that year’s
partition.
It is important to remember that the partition slice is maintained as a range of DataIDs that you have no
explicit control over. DataIDs are assigned during dimension processing as new members are
encountered. Because Analysis Services just looks at the minimum and maximum value of the DataID,
you can end up reading partitions that don’t contain relevant data.
For example: if you have a partition, P2003_4, that contains both 2003 and 2004 data, you are not
guaranteed that the minimum and maximum DataIDs in the slice are adjacent values (even though the
years are adjacent). In our example, let us say the DataID for 2003 is 42 and the DataID for 2004 is 45.
You can't specify which DataID gets assigned to which member, so the DataID for 2005 might be 44.
When a user requests data for 2005, Analysis Services looks at the slice for P2003_4, sees that it covers
the interval 42 to 45, and therefore concludes that this partition has to be scanned to check whether it
contains values for DataID 44 (because 44 is between 42 and 45), even though the partition holds no
2005 data.
Because of this behavior, auto slice typically works best if the data contained in the partition maps to a
single attribute value. When that is the case, the maximum and minimum DataID contained in the slice
will be equal and the slice will work efficiently.
Note that the auto slice is not defined and indexes are not built for partitions with fewer rows than
IndexBuildThreshold (which has a default value of 4096).
However, as shown in the previous section, there are cases where auto slice will not give you the
desired partition elimination behavior. In these cases you can benefit from defining the slice yourself for
MOLAP partitions. For example, if you partition by year with some partitions containing a range of years,
defining the slice explicitly avoids the problem of overlapping DataIDs. This can only be done with
knowledge of the data – which is where you can add some optimization as a BI developer.
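For example, for a partition that holds both 2003 and 2004 data, the partition's Slice property could be set to an MDX set such as the following sketch (member keys from the Adventure Works Date dimension are assumed):

{ [Date].[Calendar Year].&[2003], [Date].[Calendar Year].&[2004] }

With the slice stated explicitly, partition elimination no longer depends on the DataID range recorded during processing.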
It is generally not a best practice to create partitions before you are ready to fill them with data. But for
real-time cubes, it is sometimes a good idea to create partitions in advance to avoid locking issues.
When you take this approach, it is also a good idea to set a manual slice on MOLAP partitions to make
sure the storage engine does not spend time scanning empty partitions.
2.2.2 Partition Sizing
The following graph shows four different query runs, using the same Customer cube but with different
partition sizes. Notice that performance is comparable between cubes with different partition sizes, and
that throughput is affected more by the design of the security features.
Of course, as you add more partitions, the metadata overhead of managing the cube grows
exponentially. This affects ProcessUpdate and ProcessAdd operations on dimensions, which have to
traverse the metadata dependencies to update the cube when dimensions change.
2.2.3 Partition Strategy
As a rule of thumb, follow these guidelines while balancing the requirements discussed above. Common
partitioning strategies include the following:
Partition by date.
Use a partition matrix.
Use a hash to partition.
Typically, a date partitioning scheme looks somewhat like this. This design works very well for small to
medium-sized cubes. It is reasonably simple to implement and the number of partitions is kept low. To
move the partition to cheaper storage, you simply change the data location and reprocess the partition.
Figure: Example of a date partitioning scheme with monthly partitions. The newest partition (February 2011) is processed with Process Add or Process Full; recent months (back through November 2010) reside on fast storage, and older months (back to January 2009) reside on cheap storage.
1. If the granularity of the partitioning is small enough (for example, hourly), the number of
partitions can quickly become unmanageable.
2. Assuming data is added only to the latest partition, partition processing is limited to one TCP/IP
connection reading from the data source. If you have a lot of data, this can be a scalability limit.
If you have a lot of date-based partitions, it is often a good idea to merge the older ones into large
partitions. You can do this either by using the Analysis Services merge functionality or by dropping the
old partitions, creating a new, larger partition, and then reprocessing it. Reprocessing will typically take
longer than merging, but we have found that compression of the partition can often increase if you
reprocess.
A modified date partitioning scheme might look like this. This design addresses the metadata overhead
of having too many partitions.
Figure: Modified date partitioning scheme. The newest data is held in daily partitions (2011-02-01, processed with Process Add or Process Full, back through 2011-01-01), recent months are held in monthly partitions (December 2010 through January 2010) on fast storage, and older data is held in yearly partitions (Year 2009, Year 2008) on cheap storage.
However, this design too is bottlenecked by the maximum speed of the Process Add or Process Full job
for the latest partition. If your data source is SQL Server, the speed of a single database connection can
be hundreds of thousands of rows every second – which works well for most scenarios. But if the cube
requires even faster processing speeds, consider matrix partitioning.
For example, consider a retailer that operates in US, Europe, and Asia. You might decide to partition like
this.
Figure 14: Example of matrix partitioning
As the retailer grows, they might need to split the regional partitions into smaller partitions, to further
increase parallelism during load and to limit the worst-case scans that a user can perform. Therefore, in
cubes that are expected to grow dramatically, choose a partition key that will grow with the business
and give you options for extending the matrix partitioning strategy.
Industry | Example partition key | Example growth scenario
Data hosting | Host ID or rack location | Adding a new server
Telecommunications | Switch ID, country code, or area code | Expanding into new geographical regions or adding new services
Computerized manufacturing | Production line ID or machine ID | Adding production lines or (for machines) sensors
Investment banking | Stock exchange or financial instrument | Adding new financial instruments, products, or markets
Retail banking | Credit card number or customer key | Increasing customer transactions
Online gaming | Game key or player key | Adding new games or players
If you implement a matrix partitioning scheme, you must pay special attention to user queries. Queries
that touch several partitions for every subcube request, such as a query that asks for a high-level
aggregate of the partition business key, result in a high thread usage in the storage engine. Because of
this, we recommend that you partition the business key so that single queries touch no more than the
number of cores available on the target server. For example, if you partition by Store Key and you have
1,000 stores, queries touching the aggregation of all stores will have to touch 1,000 partitions. To avoid
this, you might try grouping the stores into a number of buckets – that is, rather than having individual
partitions for each store, assign stores to each partition. For example, if you run on a 16-core server, you
can group the stores into buckets of around 62 stores for each partition (1,000 stores divided into 16
buckets).
Partition on the hash value of a key that has a high enough cardinality and where there is little
skew.
If every query will touch many partitions, pay special attention to the
CoordinatorQueryBalancingFactor and the CoordinatorQueryMaxThread settings, which are
described in the SQL Server 2008 R2 Analysis Services Operations Guide
(http://msdn.microsoft.com/en-us/library/hh226085.aspx).
2.3 Relational Data Source Design
In this section, we will describe some of the options that you should consider when designing a
relational data source. A full treatment of relational data warehousing is out of scope for this document,
but we will provide references where appropriate.
2.3.1 Use a Star Schema for Best Performance
The Unified Dimensional Model (UDM) used by Analysis Services is itself a dimensional model, with
some additional features (such as reference dimensions) that support snowflakes and many-to-many
dimensions. Thus, no matter which model you use for end-user reporting, performance boils down to
one simple fact: joins are expensive!
This is also partially true for the Analysis Services engine. For example: If a snowflake is implemented as
a non-materialized reference dimension, users will wait longer for queries, because the join is done at
run time inside the Analysis Services engine.
The largest impact of snowflakes occurs during processing of the partition data. For example, if you
implement a fact table as a join of two big tables (for example, separating order lines and order headers
instead of storing them as pre-joined values), processing of facts will take longer, because the relational
engine has to compute the join.
It is possible to build an Analysis Services cube on top of a highly normalized model, but be
prepared to pay the price of joins when accessing the relational model. In most cases, that price
is paid at processing time.
In MOLAP data models, materialized reference dimensions help you store the result of the
joined tables on disk and give you high speed queries even on normalized data.
If you are running ROLAP partitions, queries will pay the price of the join at query time, and your
user response times or your hardware budget will suffer if you are unable to resist
normalization.
2.3.2 Consider Moving Calculations to the Relational Engine
Some calculations can be moved to the relational engine and processed as simple aggregates with much
better performance. For example, instead of writing an MDX expression like the following, which sums a
member property at query time:
Sum(Customer.City.Members,
cint(Customer.City.Currentmember.properties("Population"))), …
Instead, you could define a separate measure group on the City table, with a SUM measure on the
Population column.
As a second example, consider these two solutions. Which do you think will provide superior
performance?
Compute the product of Revenue * Products Sold at the leaves in the cube and
aggregate with calculations.
Compute this result in the source database.
2.3.3 Use Views
Consider a relational source that has chosen to normalize two tables you need to join to obtain a fact
table – for example, a data model that splits a sales fact into order lines and orders. If you implement
the fact table using query binding, your UDM will contain the following.
Figure: Cube whose fact table is bound by a query that joins the Orders and LineItems tables in the relational source.
In this model, the UDM now has a dependency on the structure of the LineItems and Orders tables –
along with the join between them. If you instead implement a Sales view in the database, you can model
like this.
Figure: Sales cube bound to a Sales view in the relational source (CREATE VIEW Sales AS SELECT ... FROM LineItems JOIN Orders), which encapsulates the join over the Orders and LineItems tables.
This revised, view-centric model gives the relational database the freedom to optimize the joined results
of LineItems and Order (for example by storing it denormalized), without any impact on the cube. It
would be transparent for the cube developer if the DBA of the relational database implemented a
change like the following one.
Figure: After denormalization in the relational source, the view simply selects from a physical Sales table (CREATE VIEW Sales AS SELECT ... FROM Sales) and the cube definition is unchanged.
If the relational data modelers insist on normalization, give them a chance to change their minds and
denormalize without breaking the cube model. Views provide encapsulation, and it is good practice to
use them:
Views make debugging easier. You can issue SQL queries directly on views to compare the
relational data with the cube.
Views are a good way to implement business logic that you can mimic with query binding in the
UDM.
While the UDM syntax is similar to the SQL view syntax, you cannot issue SQL statements
against the UDM.
Views can also be used to pre-aggregate large fact tables using a GROUP BY statement. The relational
database modeler might even choose to materialize views that use a lot of hardware resources.
2.4 Calculation Scripts
This section describes some best practices you can apply to the cube to avoid common performance
mistakes. Consider these basic rules the bare minimum that you should understand and apply when
building the cube script.
2.4.1 Learn MDX Basics
The following resources are good starting points for learning MDX:
Pasumansky, Mosha: Blog
o http://sqlblog.com/blogs/mosha/
Piasevoli, Tomislav: Blog
o http://tomislav.piasevoli.com
Webb, Christopher: Blog
o http://cwebbbi.wordpress.com/category/mdx/
Spofford, George, Sivakumar Harinath, Christopher Webb, Dylan Hai Huang, and Francesco
Civardi: MDX Solutions: With Microsoft SQL Server Analysis Services 2005 and Hyperion Essbase,
ISBN: 978-0471748083
o http://www.amazon.com/MDX-Solutions-Microsoft-Analysis-Services/dp/0471748080
2.4.2 Use Attributes Instead of Sets
When you need to refer to a fixed subset of dimension members, such as the current day, use an
attribute rather than a named set. For example, add a Day Type attribute to the date dimension and run
Process Update on the dimension when the day changes. Users can then refer to the current day by
addressing the Day Type attribute instead of the set.
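A calculation or query can then reference the flag member directly. The attribute and member names below are illustrative only:

// Assumes a [Day Type] attribute with a [Current Day] member in the Date dimension
([Measures].[Internet Sales Amount], [Date].[Day Type].&[Current Day])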
2.4.3 Use SCOPE Instead of IIF When Addressing Cube Space
When a calculation applies only to a certain part of cube space, it is more efficient to restrict it with a
SCOPE assignment than to test for the subspace with IIF inside the expression. For example, a six-month
rolling average that should exist only at the month level can be written with IIF (using
[Measures].[Internet Sales Amount] from the Adventure Works sample):

CREATE MEMBER CurrentCube.[Measures].[SixMonthRollingAverage] AS
IIF ([Date].[Calendar].CurrentMember.Level
IS [Date].[Calendar].[Month]
, SUM ([Date].[Calendar].CurrentMember.LAG(5)
:[Date].[Calendar].CurrentMember
, [Measures].[Internet Sales Amount]) / 6
, NULL);

The same calculation performs better when the month-level restriction is expressed with SCOPE:

CREATE MEMBER CurrentCube.[Measures].[SixMonthRollingAverage] AS NULL;
SCOPE ([Measures].[SixMonthRollingAverage], [Date].[Calendar].[Month].Members);
THIS = SUM ([Date].[Calendar].CurrentMember.LAG(5)
:[Date].[Calendar].CurrentMember
, [Measures].[Internet Sales Amount]) / 6;
END SCOPE;
2.4.4 Avoid Mimicking Engine Features with Expressions
Several native Analysis Services features can be emulated with MDX script expressions, including the following:
Unary operators
Measure expressions
Semiadditive measures
If you must emulate these features by using MDX script (for example, some features are only
available in the Enterprise SKU), be aware that doing so can hurt performance.
For example, using distributive unary operators (that is, those whose member order does not matter,
such as +, -, and ~) is generally twice as fast as trying to mimic their capabilities with assignments.
There are rare exceptions.
You might be able to improve performance of nondistributive unary operators (those involving
*, /, or numeric values) by using MDX.
You might know some special characteristic of your data that allows you to take a shortcut that
improves performance.
However, such optimizations require expert-level tuning, and in general, you can rely on the Analysis
Services engine features to do the best job.
Measure expressions also provide a unique challenge, because they disable the use of aggregates (data
has to be rolled up from the leaf level). One way to work around this is to use a hidden measure that
contains pre-aggregated values in the relational source. You can then target the hidden measure to the
aggregate values with a SCOPE statement in the calculation script.
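A minimal sketch of that pattern, assuming a hidden measure named [Sales Amount Preaggregated] that is loaded with pre-aggregated values from the relational source:

SCOPE ([Measures].[Sales Amount], [Product].[Product Categories].[Category].Members);
    // Serve category-level totals from the hidden, pre-aggregated measure
    THIS = [Measures].[Sales Amount Preaggregated];
END SCOPE;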
2.4.5 Comparing Objects and Values
When you need to determine whether the current member or tuple matches a specific object, use the
IS operator. For example, the following kind of expression not only performs badly, but can return
incorrect results: it forces unnecessary cell evaluation and compares values instead of members.
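A sketch of such an expression (member names from Adventure Works assumed). The = operator compares the cell values behind the two member expressions in the current context rather than the members themselves:

IIF([Customer].[Customer Geography].CurrentMember
    = [Customer].[Customer Geography].[Country].&[Australia],
    1, 0)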
The following example is even worse, because it performs extra steps to deduce whether
CurrentMember is a particular member by using Intersect and Counting.
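A sketch of that Intersect-and-count pattern, together with the preferred IS comparison (member names assumed):

// Deduces membership by building a set and counting it
IIF(Intersect({[Customer].[Customer Geography].[Country].&[Australia]},
              {[Customer].[Customer Geography].CurrentMember}).Count > 0,
    1, 0)

// Preferred: compare the members directly
IIF([Customer].[Customer Geography].CurrentMember
    IS [Customer].[Customer Geography].[Country].&[Australia],
    1, 0)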
2.4.6 Evaluating Set Membership
Determining whether a member or tuple is in a set is best accomplished by using INTERSECT. The RANK
function is less performant because it does the additional operation of determining where in the set that
object lies. If you don’t need this additional operation, don’t use RANK.
For example, the following statement might do more work than you need.
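For illustration, a Rank-based membership test might look like this sketch; it assumes a named set, here called [Core Customers], and Adventure Works measure names:

IIF(Rank([Customer].[Customer].CurrentMember, [Core Customers]) > 0,
    [Measures].[Internet Sales Amount],
    NULL)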
The following example demonstrates how to use INTERSECT instead, to determine whether the
specified information is in the set.
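A sketch under the same assumption of a [Core Customers] named set:

IIF(Intersect({[Customer].[Customer].CurrentMember}, [Core Customers]).Count > 0,
    [Measures].[Internet Sales Amount],
    NULL)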
3 Tuning Query Performance
Good dimension design, partitioning, and aggregations should be the first stops for optimization, before
digging into individual queries. The problem with jumping into queries too quickly is that much time can
be expended pursuing dead ends.
Also it is important to first understand the nature of the problem before applying specific techniques. To
gain this understanding, you should have a mental model of how the query engine works. We will
therefore start with a brief introduction to the Analysis Services query processor.
Figure: Analysis Services query architecture. The client application sends MDX through the XMLA listener; the Session Manager and Security Manager handle session management and security; the query processor, backed by the query processor cache, performs query processing; and the storage engine, backed by the storage engine (SE) cache, performs data retrieval.
1. Users authenticated by the Windows operating system and who have access to at least one
database can connect to Analysis Services.
2. After a user connects to Analysis Services, the Security Manager determines user permissions
based on the combination of Analysis Services roles that apply to the user.
3. Depending on the client application architecture and the security privileges of the connection,
the client creates a session when the application starts, and then it reuses the session for all of
the user’s requests.
4. The session provides the context under which client queries are executed by the query
processor.
5. A session exists until it is closed by the client application or the server.
3.1.2 Query Processing
The query processor executes MDX queries 2 and generates a cellset or rowset in return. This section
provides an overview of how the query processor executes queries.
To retrieve the data requested by a query, the query processor builds an execution plan to generate the
requested results from the cube data and calculations. There are two major types of query execution
plan: cell-by-cell (naïve) evaluation and block mode (subspace) computation. Which one is
chosen by the engine can have a significant impact on performance. For more information, see Subspace
Computation.
To communicate with the storage engine, the query processor uses the execution plan to translate the
data request into one or more subcube requests that the storage engine can understand. A subcube is a
logical unit of querying, caching, and data retrieval—it is a subset of cube data defined by the crossjoin
of one or more members from a single level of each attribute hierarchy. An MDX query can be resolved
into multiple subcube requests, depending on the attribute granularities involved and calculation
complexity.
For example, a query involving every member of the Country attribute hierarchy (assuming it’s not a
parent-child hierarchy) would be split into two subcube requests: one for the All member and another
for the countries.
As the query processor evaluates cells, it uses the query processor cache to store calculation results. The
primary benefits of the cache are to optimize the evaluation of calculations and to support the reuse of
calculation results across users (with the same security roles). To optimize cache reuse, the query
processor manages three cache layers that determine the level of cache reusability: global, session, and
query.
Query Context—contains the result of calculations created by using the WITH keyword within a
query. The query context is created on demand and terminates when the query is over.
Therefore, the cache of the query context is not shared across queries in a session.
Session Context —contains the result of calculations created by using the CREATE statement
within a given session. The cache of the session context is reused from request to request in the
same session, but it is not shared across sessions.
2
For more information about optimizing MDX, see Optimizing MDX.
Global Context —contains the result of calculations that are shared among users. The cache of
the global context can be shared across sessions if the sessions share the same security roles.
The contexts are tiered in terms of their level of reuse. The following diagram shows the different types
of context, in the order of increasing scope for re-use.
During execution, every MDX query must reference all three contexts to identify all of the potential
calculations and security conditions that can impact the evaluation of the query. For example, to resolve
a query that contains a query calculated member, the query processor requires all three contexts: the query context for the query calculated member, the session context for any session-scoped calculations, and the global context for calculations defined in the cube's MDX script.
These contexts are created only if they aren’t already built. After they are built, they are reused where
possible.
Even though a query references all three contexts, it will typically use the cache of a single context. This
means that on a per-query basis, the query processor must select which cache to use. The query
processor always attempts to use the most broadly applicable cache, depending on whether or not it
detects the presence of calculations at a narrower context.
If the query processor encounters calculations created at query time, it always uses the query context,
even if a query also references calculations from the global context. There is an exception to this –
queries with query calculated members of the form Aggregate(<set>) do share the session cache.
However, if there are no query calculations, but there are session calculations, the query processor uses
the session cache.
The query processor selects which cache to use, based on the presence of any calculation in the scope.
This behavior is especially relevant to users with client tools that generate their own MDX. If the front-
end tool creates any session calculations or query calculations, the global cache is not used, even if the
query does not specifically use those session or query calculations.
There are other calculation scenarios that affect how the query processor caches calculations. When you
call a stored procedure from an MDX calculation, the engine always uses the query cache. This is
because stored procedures are nondeterministic, which simply means that there is no guarantee what
the stored procedure will return. As a result, when a nondeterministic calculation is encountered during
the query, nothing is cached globally or in the session cache. Instead, the remaining calculations are
stored in the query cache.
The following scenarios determine how the query processor caches calculation results:
• The use of MDX functions that are locale-dependent (such as CAPTION or .Properties) prevents
the use of the global cache, because different sessions may be connected with different locales
and cached results for one locale may not be correct for another locale.
• The use of any of the following features or functions disables the global cache: cell security;
functions such as UserName, StrToSet, StrToMember, and StrToTuple; and LookupCube functions in the
MDX script or in the dimension or cell security definition. In other words, just one expression that
uses any of these functions or features disables global caching for the entire cube.
• If visual totals are enabled for the session (by setting the default MDX Visual Mode property in
the Analysis Services connection string to 1), the query processor uses the query cache for all
queries issued in that session.
• If you enable visual totals for a query by using the MDX VisualTotals function, the query
processor uses the query cache.
• Queries that use the subselect syntax (SELECT FROM SELECT) or queries that are based on a
session subcube (CREATE SUBCUBE) cause the query cache or the session cache, respectively, to be
used.
• Arbitrary shapes can only use the query cache if they are used in a subselect, in the WHERE
clause, or in a calculated member. An arbitrary shape is any set that cannot be expressed as a
crossjoin of members from the same level of an attribute hierarchy; see the example after this list.
• An arbitrary shape on the query axis does not limit the use of any cache.
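As an illustration (the members shown are hypothetical), each of the following sets is an arbitrary shape, because neither can be written as a crossjoin of members taken from a single level of each attribute hierarchy:

// Country/category pairs that do not form a full crossjoin:
{ ([Customer].[Country].[United States], [Product].[Category].[Bikes]),
  ([Customer].[Country].[Canada],        [Product].[Category].[Accessories]) }

// Members taken from two different levels of the same hierarchy:
{ [Customer].[Customer Geography].[Country].[United States],
  [Customer].[Customer Geography].[State-Province].[Washington] }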
Based on this behavior, when your query workload can benefit from reusing data across users, it is a
good practice to define calculations in the global scope. An example of this scenario is a structured
reporting workload where you have few security roles.
In contrast, if you have a workload that requires individual data sets for each user, such as in an HR cube
where you have many security roles or you are using dynamic security, the opportunity to reuse
calculation results across users is lessened or eliminated. As a result, the performance benefits
associated with reusing the query processor cache are not as high.
Retrieving data from a partition requires I/O activity. This I/O can either be served from the file system
cache or from disk. Additional details of the I/O subsystem of Analysis Services can be found in the SQL
Server 2008 R2 Analysis Services Operations Guide ( http://msdn.microsoft.com/en-
us/library/hh226085.aspx).
3.1.3.1 Storage Engine Cache
The storage engine cache is also known as the data cache registry because it is composed of dimension
and measure group caches that are structurally the same. When a request is made from the Analysis
Services formula engine to the storage engine, the request takes the form of a subcube describing the
structure of the data request, together with a data cache structure that will contain the results of
the request. Using the data cache registry indexes, the storage engine attempts to find a corresponding
subcube that can serve the request.
Analysis Services allocates memory via memory holders that contain statistical information about the
amount of memory being used. Memory holders are in the form of nonshrinkable and shrinkable
memory; each combination of a subcube and data cache forms a single shrinkable memory holder.
When Analysis Services is under heavy memory pressure, cleaner threads remove shrinkable memory.
Therefore, ensure your system has enough memory; if it does not, your data cache registry will be
cleared out (resulting in slower query performance) when it is placed under memory pressure.
If you suspect more data is being retrieved than is required, you can use SQL Server Profiler to diagnose
how a query decomposes into subcube query events and partition scans. For subcube scans, check the
verbose subcube event and whether more members than required are retrieved from the storage
engine. For small cubes, this likely isn’t a problem. For larger cubes with multiple partitions, it can
greatly reduce query performance.
The following example demonstrates how a single query subcube event results in partition scans.
There are two potential solutions to the problem of overly aggressive partition scanning:
If a calculation expression contains an arbitrary shape, the query processor might not be able to
determine that the data is limited to a single partition, and thus will request data from all
partitions. Try to eliminate the arbitrary shape.
Other times, the query processor is simply overly aggressive in asking for data. For small cubes,
this doesn’t matter, but for very large cubes, it does. If you observe this behavior, potential
solutions include the following:
o Contact Microsoft Customer Service and Support for further advice.
o Sometimes Analysis Services requests additional data from the source to prepopulate the
cache; it may help to turn it off so that Analysis Services does not request too much data. To
do this, edit the connection string and set Disable Prefetch = 1.
Consider a trivial calculation of a rolling sum, summing the sales for the previous and current years.
Now, consider a query that requests the rolling sum for 2005 for all products.
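In schematic MDX (RollingSum, Sales, and the Year and Product references are placeholder names used only for this illustration), the calculation and the query look roughly like this:

CREATE MEMBER CurrentCube.[Measures].[RollingSum] AS
    ([Measures].[Sales], [Date].[Year].CurrentMember.PrevMember)
  + ([Measures].[Sales], [Date].[Year].CurrentMember);

SELECT [Date].[Year].[2005] ON COLUMNS,
       [Product].[Product].[Product].members ON ROWS
FROM [Sales]
WHERE [Measures].[RollingSum]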
The 10 cells for [2005, All Products] are each evaluated in turn. For each, the previous year is located,
and then the sales value is obtained and then added to the sales for the current year. There are two
significant performance issues with this approach.
1) If the data is sparse (that is, thinly populated), cells are calculated even though they are bound
to return a null value. In the previous example, calculating the cells for anything but Product 3
and Product 6 is a waste of effort. The impact of this can be extreme—in a sparsely populated
cube, the difference can be several orders of magnitude in the numbers of cells evaluated.
2) Even if the data is totally dense, meaning that every cell has a value and there is no wasted
effort visiting empty cells, there is much repeated effort. The same work (for example, getting
the previous Year member, setting up the new context for the previous Year cell, checking for
recursion) is redone for each Product. It would be much more efficient to move this work out of
the inner loop of evaluating each cell.
Now consider the same example performed using subspace computation. In subspace computation, the
engine works its way down an execution tree determining what spaces need to be filled. Given the
query, the space that needs to be computed is (RollingSum, 2005, Product.*), where * means every
member of the attribute hierarchy.
Given the calculation, this means that the spaces (Sales, 2004, Product.*) and (Sales, 2005, Product.*) need to be computed first.
If Sales were itself covered by calculations, the spaces necessary to calculate Sales would be determined
and the tree would be expanded. In this case, Sales is a base measure, so the storage engine data is used
to fill the two spaces at the leaves, and then, working up the tree, the operator is applied to fill the
space at the root. Hence the one row (Product3, 2004, 3) and the two rows { (Product3, 2005, 20),
(Product6, 2005, 5)} are retrieved, and the + operator is applied to them, yielding the following result.
Figure 23: Execution plan
The + operator operates on spaces, not just on scalar values. The operator is responsible for combining
the two given spaces to produce a space that contains each product that appears in either space with
the summed value. This is the query execution plan. Note that it operates only on data that could
contribute to the result. There is no notion of the theoretical space over which the calculation must be
performed.
A query execution plan is not one or the other but can contain both subspace and cell-by-cell nodes.
Some functions are not supported in subspace mode, causing the engine to fall back to cell-by-cell
mode. But even when evaluating an expression in cell-by-cell mode, the engine can return to subspace
mode.
Cube data is used in query plans in several scenarios. Some query plans result in the mapping of one
member to another because of MDX functions such as PrevMember and Parent. The mappings are built
from cube data and materialized during the construction of the query plans. The IIf, CASE, and IF
functions can generate expensive query plans as well, should it be necessary to read cube data in order
to partition cube space for evaluation of one of the branches. For more information, see IIf Function in
SQL Server 2008 Analysis Services.
But how can you tell whether an expression is dense or sparse? Consider a simple noncalculated
measure – is it dense or sparse? In OLAP, base fact measures are considered sparse by the Analysis
Services engine. This means that the typical measure does not have values for every attribute member.
For example, a customer does not purchase most products on most days from most stores. In fact, it is
quite the opposite. A typical customer purchases a small percentage of all products from a small
number of stores on a few days.
The following table lists some simple rules for assessing the sparsity or denseness of some popular
expressions.
Expression                                            Sparse/dense
Regular measure                                       Sparse
Constant value                                        Dense (excluding constant null and true/false values)
Scalar expression (for example, Count, .Properties)   Dense
<exp1> + <exp2>, <exp1> - <exp2>                      Sparse if both exp1 and exp2 are sparse; otherwise dense
<exp1> * <exp2>                                       Sparse if either exp1 or exp2 is sparse; otherwise dense
<exp1> / <exp2>                                       Sparse if <exp1> is sparse; otherwise dense
Sum(<set>, <exp>), Aggregate(<set>, <exp>)            Inherited from <exp>
IIf(<cond>, <exp1>, <exp2>)                           Determined by the sparsity of the default branch (refer to the IIf discussion later in this section)
For more information about sparsity and density, see Gross margin - dense vs. sparse block evaluation
mode in MDX (http://sqlblog.com/blogs/mosha/archive/2008/11/01/gross-margin-dense-vs-sparse-
block-evaluation-mode-in-mdx.aspx).
Another important use of the default value is in the condition in the IIF function. The engine must know
which branch is evaluated more often to drive the execution plan.
The following table lists the default values of some popular expressions.
Expression                                  Default value   Comment
Regular measure                             Null            None.
IsEmpty(<regular measure>)                  True            The majority of the theoretical space is occupied by null values; therefore, IsEmpty returns True most often.
<regular measure A> = <regular measure B>   True            Values for both measures are principally null, so this evaluates to True most of the time.
<member A> IS <member B>                    False           This is different from comparing values – the engine assumes that different members are compared most of the time.
When this expression is evaluated over a subspace involving other attributes, any attributes that the
expression doesn’t require can be eliminated, and then the expression can be resolved and projected
back over the original subspace. We call the attributes that an expression depends on its varying attributes.
For example, consider the following query.
SELECT measures.zip on 0,
[Product].[Category].members ON 1
This expression depends on the Customer attribute and not the Category attribute; therefore, Customer
is a varying attribute and Category is not. In this case the expression is evaluated only once for the
customer and not as many times as there are product categories.
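A complete, hypothetical form of this query might look like the following; the definition of measures.zip and the Postal Code property are assumptions, used only to show an expression that depends on Customer and not on Category.

WITH MEMBER measures.zip AS
    // Depends only on the current Customer member, not on the Product Category
    [Customer].[Customer Geography].CurrentMember.Properties("Postal Code")
SELECT measures.zip ON 0,
    [Product].[Category].members ON 1
FROM [Adventure Works]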
3.3.1 Creating a Query Speed Baseline
Before beginning optimization, you need reproducible cold-cache baseline measurements.
To do this, you should be aware of the following three caches: the formula engine (query processor) cache, the storage engine cache, and the operating system file system cache.
Both the Analysis Services caches and the file system cache need to be cleared before you start taking
measurements. The Analysis Services caches can be cleared with an XMLA ClearCache command such as the following:
<ClearCache
xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Object>
<DatabaseID><database name></DatabaseID>
</Object>
</ClearCache>
The Analysis Services Stored Procedures Project contains code for a utility that enables you to clear
the file system cache using a stored procedure that you can run directly on Analysis Services.
Note that neither FSUTIL nor RAMMap should be used against production servers – both cause disruption
to service. Also note that neither RAMMap nor the Analysis Services Stored Procedures Project is
supported by Microsoft.
References
For additional information, see this article by Greg Galloway that discusses the usage of RAMMap:
http://www.artisconsulting.com/blogs/greggalloway/Lists/Posts/Post.aspx?ID=19
Execute the query you want to optimize, and then use SQL Server Profiler with the Standard (default)
trace plus additional storage engine events (such as Query Subcube Verbose and Get Data From Aggregation) enabled.
Save the profiler trace, because it contains important information that you can use to diagnose slow
query times.
The text for the query subcube verbose event deserves some explanation. It contains information for
each attribute in every dimension:
0: Indicates that the attribute is not included in query (the All member is hit).
* : Indicates that every member of the attribute was requested.
+ : Indicates that two or more members of the attribute were requested.
- : Indicates that a slice below granularity is requested.
<integer value> : Indicates that a single member of the attribute was hit. The integer
represents the member’s data ID (an internal identifier generated by the engine).
For more information about the textdata field in the Query Subcube Verbose event, see the following
resources:
Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis
Services (http://sqlcat.com/sqlcat/b/whitepapers/archive/2007/12/16/identifying-and-
resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx)
Configuring the Analysis Services Query Log (http://msdn.microsoft.com/en-
us/library/cc917676.aspx): Refer to the section, The Dataset Column in the Query Log Table.
SQL Server Management Studio displays a total query time; however, this time includes both retrieving
the cellset and displaying it. For large results, the time to render the cellset on the client can actually
rival the time it took the server to generate the cellset.
Therefore, instead of using SQL Server Management Studio to measure query time, capture the Query
End event (using SQL Server Profiler or other tools) to measure how long the query takes from the
server’s perspective and get the Analysis Services engine duration.
However, if there are chains of expressions or a complex query, it can be time-consuming to locate the
problem. There are a couple ways to scope the problem space:
Reduce the query to the simplest expression possible that continues to reproduce the
performance issue.
If possible, remove expressions such as MDX scripts, unary operators, measure expressions,
custom member formulas, semi-additive measures, and custom rollup properties.
With some client applications, the query generated by the client itself, not the cube, can be the
problem; for example, problems can arise when client applications generate their own MDX.
If you can confirm that the issue is in the cube itself, comment out calculated members in the cube or
query until you have narrowed down the offending calculation. Using a binary chop method is useful to
quickly reduce the query to the simplest form that reproduces the issue. Experienced tuners will be able
to quickly narrow in on typical calculation issues.
After you have removed calculations to the simplest form in which the performance issue reproduces,
the next step is to determine whether the problem lies in the query processor (the formula engine) or
the storage engine.
To determine the amount of time the engine spends scanning data, use the SQL Server Profiler trace
created earlier. We recommend that you limit the events you capture to noncached storage engine
retrievals, by selecting only the query subcube verbose event and filtering on event subclass = 22.
The result will be similar to the following:
If the majority of time is spent in the storage engine (one sign of this is long-running query subcube
events), the problem is likely with the storage engine. In this case, consider optimizing dimension design,
designing aggregations, or using partitions to improve query performance. In addition, you may want to
consider optimizing the disk subsystem.
If the majority of time is not spent in the storage engine but in the query processor, focus on optimizing
the MDX script or the query itself. Note that the problem can involve both the formula engine and the storage engine.
A fragmented query space can be diagnosed when you see many query subcube events generated by a
single query. Each request might not take long, but the sum of them can have a significant impact on
performance. When you find this behavior, consider warming the cache to make sure that the necessary
subcubes and calculations are already cached. You should also consider rewriting the query to remove
arbitrary shapes, because arbitrary subcubes cannot be cached. For more information, see Cache
Warming later in this white paper.
If the cube and MDX query are already fully optimized, you might consider tuning the cube for thread
usage and memory usage. Additional server-level tuning techniques are described in the SQL Server
2008 R2 Analysis Services Operations Guide.
References
The SQL Server 2008 R2 Analysis Services Operations Guide
(http://sqlcat.com/sqlcat/b/whitepapers/archive/2011/06/01/sql-server-2008r2-analysis-
services-operations-guide.aspx)
Predeployment I/O Best Practices
(http://sqlcat.com/sqlcat/b/whitepapers/archive/2007/11/21/predeployment-i-o-best-
practices.aspx): The concepts in this document provide an overview of disk I/O and its impact on
query performance; focus on the random I/O context.
Scalable Shared Databases Part 5
(http://sqlcat.com/sqlcat/b/whitepapers/archive/2011/06/01/sql-server-2008r2-analysis-
services-operations-guide.aspx): Review to better understand query performance in the context
of random I/O vs. sequential I/O.
The most common reasons for the engine to fall back to cell-by-cell mode include referencing named sets
inside aggregation functions in calculated members and using late-binding functions. The following
queries illustrate the named-set case and the step you can take to get subspace mode instead:
with
set y as [Product].[Category].[Category].members
member measures.Naive as
sum(
y,
[Measures].[Internet Sales Amount]
)
select
{measures.Naive,[Measures].[Internet Sales Amount]} on
0 ,
[Customer].[Customer Geography].[Country].members on 1
from [Adventure Works]
cell properties value
In contrast, this very similar query operates in subspace mode because the
set expression is written inline instead of being referenced through a set alias.
with
member measures.SubspaceMode as
sum(
[Product].[Category].[Category].members,
[Measures].[Internet Sales Amount]
)
select
{measures.SubspaceMode,[Measures].[Internet Sales Amount]} on 0,
[Customer].[Customer Geography].[Country].members on 1
from [Adventure Works]
cell properties value
Note: This particular limitation has been fixed with the latest service pack of SQL Server 2008 R2
Analysis Services.
Late-binding functions in queries: Late-binding functions are functions that depend on query context
and cannot be statically evaluated. These typically include LinkMember, StrToSet, StrToMember, and
StrToValue. A query is late-bound if an argument can be evaluated only in context.
For example, the following query replaces empty values with a dash; because the replacement happens in
the cell value, the NON EMPTY keyword does not eliminate those cells.
WITH MEMBER measures.x AS
    // measure, date, and customer references are illustrative reconstructions
    IIF(ISEMPTY([Measures].[Internet Sales Amount])
        , "-"
        , [Measures].[Internet Sales Amount])
SELECT DESCENDANTS([Date].[Calendar].[Calendar Year].[CY 2004]) ON 0,
    NON EMPTY [Customer].[Customer Geography].[Customer].members ON 1
FROM [Adventure Works]
WHERE measures.x
The NON EMPTY keyword also operates on cell values but not on formatted values. Therefore, in rare
cases you can use the format string to replace null values with the same character while still eliminating
empty rows and columns, in roughly half the execution time, as shown in this example:
WITH MEMBER measures.x AS
    // measure and format string are illustrative; the fourth format section replaces nulls
    [Measures].[Internet Sales Amount], FORMAT_STRING = '#,#;(#,#);0;-'
SELECT DESCENDANTS([Date].[Calendar].[Calendar Year].[CY 2004]) ON 0,
    NON EMPTY [Customer].[Customer Geography].[Customer].members ON 1
FROM [Adventure Works]
WHERE measures.x
The reason this workaround can only be used in rare cases is that the queries are not really equivalent –
the second query eliminates completely empty rows. More importantly, neither Excel nor SQL Server
Reporting Services supports the fourth argument of the FORMAT_STRING expression.
References
For more information about using the FORMAT_STRING calculation property, see
FORMAT_STRING Contents (MDX) (http://msdn.microsoft.com/en-us/library/ms146084.aspx) in
SQL Server Books Online.
For more information about how Excel uses the FORMAT_STRING property, see Create or delete
a custom number format (http://office.microsoft.com/en-us/excel-help/create-or-delete-a-
custom-number-format-HP010342372.aspx).
non-null values compared to the total number of cells. For more information, see Expression Sparsity
earlier in this section.
Consider the following two queries on Adventure Works, which perform a currency conversion
calculation, applying the exchange rate at leaves of the Date dimension. The only difference between
the two queries is the order of the expressions in the product of the cell calculation.
Sparse First: the cell calculation lists the sparse expression, [Internet Sales Amount], first in the product.
Dense First: the cell calculation lists the dense expression, the exchange rate, first in the product.
The same results are returned, but using the sparser [Internet Sales Amount] first results in about a 10%
savings in speed. This savings could be substantially greater in other calculations. The amount of
improvement depends on the relative sparsity between the two expressions, so performance benefits
may vary.
In other words:
Where the condition evaluates to TRUE, the value from the THEN branch is used.
Otherwise the ELSE branch expression is used.
Note that we say “used” rather than “evaluated”. In fact, one or both branches may be evaluated even if
their values are not used. It might be cheaper for the engine to evaluate the expression over the entire
space and use it when needed – that’s what we call an eager plan – than it would be to chop up the
space into a potentially enormous number of fragments and evaluate only where needed – what we call
a strict plan.
Note: One of the most common errors in MDX scripting is using IIF when the condition depends
on cell coordinates instead of values. If the condition depends on cell coordinates, use scopes
and assignments as described in section 2. When this is done, the condition is not evaluated
over the space and the engine does not evaluate one or both branches over the entire space.
Admittedly, in some cases, using assignments forces some unwieldy scoping and repetition of
assignments, but it is always worthwhile comparing the two approaches.
Most IIF condition query plans are inexpensive, but complex nested conditions with more IIF
functions can fall back to cell-by-cell computation.
The engine examines the condition’s default value. If the condition’s default value is true, the
THEN branch is the default branch – the branch that is evaluated over most of the subspace.
Knowing a few simple rules on how the condition is evaluated can help you to determine the default
branch:
In sparse expressions, most cells are empty. The default value of the ISEMPTY function on a
sparse expression is true.
The default value of the IS operator is false.
An example might help illustrate how to use these rules. One of the most common uses of the IIF
function is to check whether the denominator is zero and return null when it is, following the pattern
IIF(<denominator> = 0, null, <numerator> / <denominator>).
The following table shows how each branch of an IIF function is evaluated.
In Analysis Services, you can overrule the default behavior by using query hints.
IIF(<condition>
    , <then branch> [HINT [EAGER | STRICT]]
    , <else branch> [HINT [EAGER | STRICT]])
You might want to change the default behavior in the following common scenarios:
The engine determines the query plan for the condition is expensive and evaluates each branch
in strict mode.
The condition is evaluated in cell-by-cell mode, and each branch is evaluated in eager mode.
The branch expression is dense but easily evaluated.
For example, consider the following simple expression, which takes the inverse of a measure.
WITH MEMBER measures.x AS
    IIF(
        [Measures].[Internet Sales Amount] = 0   // the measure reference is an assumption
        , null
        , 1 / [Measures].[Internet Sales Amount])
SELECT {[Measures].x} ON 0,
    [Customer].[Customer Geography].[Country].members *
    [Product].[Product Categories].[Category].members ON 1
FROM [Adventure Works]
Therefore, this expression is evaluated in strict mode. Using strict mode forces the engine to materialize
the space over which it is evaluated.
You can diagnose this behavior in SQL Server Profiler by looking at the Query Subcube Verbose events:
Figure 26: Default IIf query trace
Pay attention to the subcube definition for the Product and Customer dimensions (dimensions 7 and 8
respectively) with the ‘+’ indicator on the Country and Category attributes. This means that more than
one but not all members are included. The query processor has determined which tuples meet the
condition and partitioned the space, and it is evaluating the fraction over that space.
To prevent the query plan from partitioning the space, the query can be modified as follows. Note the
HINT EAGER query hint on the ELSE branch.
WITH MEMBER measures.x AS
    IIF(
        [Measures].[Internet Sales Amount] = 0   // the measure reference is an assumption
        , null
        , 1 / [Measures].[Internet Sales Amount] HINT EAGER)
SELECT {[Measures].x} ON 0,
    [Customer].[Customer Geography].[Country].members *
    [Product].[Product Categories].[Category].members ON 1
FROM [Adventure Works]
Figure 27: IIf trace with MDX query hints
Now in the Profiler trace, in the Query Subcube Verbose event, the same attributes are marked with ‘*’,
meaning that the expression is evaluated over the entire space instead of a partitioned space.
The repeated partial expressions can be extracted and replaced with a hidden calculated member as
follows.
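A hypothetical sketch of the pattern (the measure names and the year-to-date subexpression are illustrative): the repeated subexpression is moved into a hidden calculated member and referenced from the visible calculation.

// Instead of repeating the YTD subexpression inside the calculation ...
CREATE MEMBER CurrentCube.[Measures].[YTD Margin] AS
    (SUM (YTD ([Date].[Calendar].CurrentMember), [Measures].[Sales Amount])
     - SUM (YTD ([Date].[Calendar].CurrentMember), [Measures].[Total Cost]))
    / SUM (YTD ([Date].[Calendar].CurrentMember), [Measures].[Sales Amount]);

// ... extract it once into a hidden calculated member and reference that member.
CREATE MEMBER CurrentCube.[Measures].[YTD Sales Internal] AS
    SUM (YTD ([Date].[Calendar].CurrentMember), [Measures].[Sales Amount]), VISIBLE = 0;
CREATE MEMBER CurrentCube.[Measures].[YTD Margin] AS
    ([Measures].[YTD Sales Internal]
     - SUM (YTD ([Date].[Calendar].CurrentMember), [Measures].[Total Cost]))
    / [Measures].[YTD Sales Internal];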
When rewritten, only the value cell property is cached.
If you have complex cell properties to support such things as bubble-up exception coloring, consider
creating a separate calculated measure. For example, the following expression includes color in the
definition, which creates extra work every time the expression is used.
The following expression is more efficient because it creates a calculated measure to handle the color
effect.
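A hypothetical sketch of the two variants (the measure names, the 10 percent threshold, and the color values are illustrative):

// Color logic embedded in the member definition, re-evaluated wherever the member is used:
CREATE MEMBER CurrentCube.[Measures].[Margin] AS
    [Measures].[Gross Profit] / [Measures].[Sales Amount],
    FORE_COLOR = IIF ([Measures].[Gross Profit] / [Measures].[Sales Amount] < 0.1,
                      255 /* red */, 0 /* black */);

// More efficient: a separate, hidden calculated measure handles the color.
CREATE MEMBER CurrentCube.[Measures].[Margin Color] AS
    IIF ([Measures].[Margin] < 0.1, 255 /* red */, 0 /* black */), VISIBLE = 0;
CREATE MEMBER CurrentCube.[Measures].[Margin] AS
    [Measures].[Gross Profit] / [Measures].[Sales Amount],
    FORE_COLOR = [Measures].[Margin Color];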
For example, in this calculation, the average of sales is computed, but only for sales exceeding $100. The
query also runs very slowly – in a recent test on a fast server, it took about 55 seconds.
However, the average of sales for all customers everywhere should not depend on the current city. In
other words, City should not be a varying attribute.
You can eliminate City as a varying attribute and make the expression more efficient by using the All
member as follows:
With the modification, this query ran much faster, in under a second.
To see the difference, you can compare the SQL Server Profiler traces of the two queries:
EventClass > EventSubClass                        AvgSalesWithOverwrite       AvgSales
                                                  Events   Duration           Events   Duration
Query Cube End                                    1        515                1        161526
Serial Results End                                1        499                1        161526
Query Dimension                                                               586
Get Data From Cache > Get Data from Flat Cache                                586
Query Subcube > Non-Cache Data                    5        64                 5        218
Figure 28: Effect of removing varying attributes on query duration
Look at the duration of Query Subcube > Non-Cache Data. From the fact that this value is relatively
small, you can deduce that most of the query calculation is done by the formula engine.
Now look at the AvgSales calculation, and notice that most of the query durations correspond to the
values for the Serial Results event. The latter event reports the status of serializing axes and cells.
From this analysis you can see that using [All Customers] ensures that the expression is evaluated
only once, rather than once for each City, improving performance.
If the result is noticeably faster without the formatting, apply the formatting directly in the script as
follows (the measure reference in the SCOPE statement is a placeholder):
SCOPE ([Measures].[<measure>]);
    FORMAT_STRING(this) = "currency";
END SCOPE;
Execute the query (with formatting applied) to determine the extent of any performance benefit.
3.3.10 NON_EMPTY_BEHAVIOR
In some situations, it is expensive to compute the result of an expression, even if you know it will be null
beforehand based on the value of some indicator tuple. In earlier versions of SQL Server Analysis
Services, the NON_EMPTY_BEHAVIOR property was sometimes helpful for these kinds of calculations.
When this property evaluates to null, the expression is guaranteed to be null and (most of the time) vice
versa.
In past releases, changing this property often resulted in substantial performance improvements.
However, starting with SQL Server 2008, the property can often be ignored, because the engine
automatically deals with nonempty cells in many cases, and manually setting the property can result in
degraded performance.
To determine whether you should use this property or not, we recommend that you eliminate it from
the MDX script and do some performance testing, and add it back only if using it leads to improvement.
For assignments in the MDX script, the property is used this way:
this = <e1>;
Non_Empty_Behavior(this) = <e2>;
For calculated members in the MDX script, it is declared as a member property:
CREATE MEMBER CurrentCube.[Measures].x AS <e1>, NON_EMPTY_BEHAVIOR = <e2>;
In SQL Server 2005 Analysis Services, there were complex rules on how the property could be defined,
when the engine used it or ignored it, and how the engine would use it.
In SQL Server 2008 Analysis Services, the behavior of this property changed, and remains as described
here for SQL Server 2012 and SQL Server 2014:
When NON_EMPTY_BEHAVIOR is null, the expression must also be null. If this is not true,
incorrect query results can be returned.
The reverse is not necessarily true. That is, the NON_EMPTY_BEHAVIOR expression can return
non null when the original expression is null.
The engine more often than not ignores this property and deduces the nonempty behavior of
the expression on its own.
The NON_EMPTY_BEHAVIOR property is used if <e2> is sparse and <e1> is dense, or if <e1> is evaluated in
the naïve (cell-by-cell) mode.
If these conditions are not met and both <e1> and <e2> are sparse (that is, if <e2> is much sparser than
<e1>), you may be able to achieve improved performance by forcing the behavior as follows.
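A hedged sketch of what forcing the behavior can look like, using the document's <e1>/<e2> placeholders; the emptiness test is made explicit in the assignment instead of relying on the property:

this = IIF (ISEMPTY (<e2>), NULL, <e1>);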
The NON_EMPTY_BEHAVIOR property can be expressed as a simple tuple expression including simple
member navigation functions such as .PREVMEMBER or .PARENT or an enumerated set. An enumerated
set is equivalent to NON_EMPTY_BEHAVIOR of the resultant sum.
References
3.4 Aggregations
An aggregation is a data structure that stores precalculated data that Analysis Services uses to enhance
query performance. You can define the aggregation design for each partition independently. Each
partition can be thought of as being an aggregation at the lowest granularity of the measure group.
Aggregations that are defined for a partition are processed out of the leaf level partition data by
aggregating it to a higher granularity.
When a query requests data at higher levels, the aggregation structure can deliver the data more quickly
because the data is already aggregated in fewer rows. As you design aggregations, you must consider
the querying benefits that aggregations provide compared with the time it takes to create and refresh
the aggregations. In fact, adding unnecessary aggregations can worsen query performance because the
rare hits move the aggregation into the file cache at the cost of moving something else out.
While aggregations are physically designed per measure group partition, the optimization techniques for
maximizing aggregation design apply whether you have one or many partitions. In this section, unless
otherwise stated, aggregations are discussed in the fundamental context of a cube with a single
measure group and single partition. For more information about how you can improve query
performance using multiple partitions, see Partition Strategy.
Within SQL Server Profiler, there are several events that describe how a query is fulfilled. The event that
specifically pertains to aggregation hits is the Get Data From Aggregation event.
Figure 29: Scenario 1: SQL Server Profiler trace for cube with an aggregation hit
This figure displays a SQL Server Profiler trace of the query’s resolution against a cube with aggregations.
In the SQL Server Profiler trace, the operations that the storage engine performs to produce the result
set are revealed.
The storage engine gets data from Aggregation C 0000, 0001, 0000 as indicated by the Get Data From
Aggregation event. In addition to the aggregation name, Aggregation C, Figure 29 displays a vector, 0000,
0001, 0000, that describes the content of the aggregation. More information on what this vector
actually means is described in the next section, How to Interpret Aggregations. The aggregation data is
loaded into the storage engine measure group cache from where the query processor retrieves it and
returns the result set to the client.
What if no aggregation can satisfy the query request? In that case, the Get Data From Aggregation
event will be missing, as you can see from the following example, which shows the same cube with no
aggregations.
Figure 30: Scenario 2: SQL Server Profiler trace for cube with no aggregation hit
After the query is submitted, rather than retrieving data from an aggregation, the storage engine goes to
the detail data in the partition. From this point, the process is the same. The data is loaded into the
storage engine measure group cache.
For example, for a product dimension whose attributes are ordered Product, Subcategory, and Category, the vector 001 describes an aggregation at the Category level (All, All, Category), and 000 describes an aggregation at the All level of every attribute.
To identify each aggregation, Analysis Services combines the dimension vectors into one long vector
path, also called a subcube, with each dimension vector separated by commas.
The order of the dimensions in the vector is determined by the order of the dimensions in the measure
group. To find the order of dimensions in the measure group, use one of the following two techniques:
1. With the cube opened in SQL Server Business Intelligence Development Studio, review the order
of dimensions in a measure group on the Cube Structure tab. The order of dimensions in the
cube is displayed in the Dimensions pane.
2. As an alternative, review the order of dimensions listed in the cube’s XMLA definition.
The order of attributes in the vector for each dimension is determined by the order of attributes in the
dimension. You can identify the order of attributes in each dimension by reviewing the dimension XML
file. For example, the subcube definition (0000, 0001, 0001) describes an aggregation for the following:
Order Date – All, All, All, Year
Understanding how to read these vectors is helpful when you review aggregation hits in SQL Server
Profiler. In SQL Server Profiler, you can view how the vector maps to specific dimension attributes by
enabling the Query Subcube Verbose event. In some cases (such as when attributes are disabled), it
may be easier to view the Aggregation Design tab and use the Advanced view of the aggregations.
To help Analysis Services successfully apply the aggregation design algorithm, you can use the
optimization techniques discussed in this section to influence and enhance the aggregation design.
Note that attributes that are exposed only in attribute hierarchies are not automatically considered for
aggregation by the Aggregation Design Wizard. Therefore, queries involving these attributes are satisfied
by summarizing data from the primary key. Without the benefit of aggregations, query performance
against these attribute hierarchies can be slow. To enhance performance, it is possible to flag an
attribute as an aggregation candidate by using the Aggregation Usage property. For more information
about this technique, see Suggesting Aggregation Candidates. However, before you modify the
Aggregation Usage property, you should consider whether you can take advantage of user hierarchies.
Consider the following example. In a cube with multiple monthly partitions, new data may flow into the
single partition corresponding to the latest month. Generally that is also the partition most frequently
queried. A common aggregation strategy in this case is to perform usage-based optimization to the most
recent partition, leaving older, less frequently queried partitions as they are.
If you automate partition creation, it is easy to simply set the AggregationDesignID for the new partition
at creation time and specify the slice for the partition. After that the partition is ready to be processed.
At a later stage, you may choose to update the aggregation design for a partition when its usage pattern
changes – again, you can just update the AggregationDesignID, but you will also need to invoke
ProcessIndexes so that the new aggregation design takes effect for the processed partition.
Whenever you use multiple partitions for a given measure group, ensure that you update the data
statistics for each partition. More specifically, it is important to ensure that the partition data and
member counts (such as EstimatedRows and EstimatedCount properties) accurately reflect the specific
data in the partition and not the data across the entire measure group.
automatically considered for aggregation and then determine whether you need to suggest additional
aggregation candidates.
An aggregation candidate is an attribute that Analysis Services considers for potential aggregation. To
determine whether or not a specific attribute is an aggregation candidate, the storage engine relies on
the value of the Aggregation Usage property. The Aggregation Usage property is set per cube
attribute, so it applies globally across all measure groups and partitions in the cube. For each attribute in
a cube, the Aggregation Usage property can have one of four potential values: Full, None, Unrestricted,
and Default.
Full— Each aggregation for the cube must include either this attribute, or a related attribute
that is lower in the attribute chain. For example, suppose you have a product dimension with
the following chain of related attributes: [Product], [Product Subcategory], and [Product
Category]. If you specify the Aggregation Usage for [Product Category] to be Full, Analysis
Services may create an aggregation that includes [Product Subcategory] as opposed to [Product
Category], given that [Product Subcategory] is related to [Product Category] and can be used to derive
[Product Category] totals.
None—No aggregation for the cube can include this attribute.
Unrestricted—No restrictions are placed on the aggregation designer; however, the attribute
must still be evaluated to determine whether it is a valuable aggregation candidate.
Default—The designer applies a default rule based on the type of attribute and dimension. This
defines the default value of the Aggregation Usage property.
The default rule is highly conservative about which attributes are considered for aggregation. The
default rule is broken down into four constraints.
Aggregation Usage Guidelines
Given these behaviors of the Aggregation Usage property, apply the following guidelines when
designing or using aggregations:
After the aggregations are designed, you can add them to the existing design or completely replace the
design. Be careful adding them to the existing design – the two designs may contain aggregations that
serve almost identical purposes, but which when combined are redundant. Always inspect the new
aggregations compared to the old and ensure there are no near-duplicates.
Note that aggregation designs have a costly metadata impact. Do not overdesign, but try to keep the
number of aggregation designs per measure group to a minimum.
When you are satisfied with your aggregations, you can copy the aggregation design to other partitions,
using either SQL Server Management Studio or the design tools in SQL Server Data Tools.
References
As cubes become larger, it becomes more important to design aggregations and to do so correctly. As a
general rule of thumb, MOLAP performance is approximately between 10 and 40 million rows per
second per core, plus the I/O for aggregating data.
Larger cubes have more constraints such as small processing windows and/or not enough disk space.
Therefore it may be difficult to create all of your desired aggregations. You will need to weigh the
tradeoffs carefully when designing aggregations.
Using the CREATE CACHE statement. This statement returns no cellsets and has the advantage
of executing faster because it bypasses the query processor.
When possible, Analysis Services returns results from the Analysis Services data cache without using
aggregations, because it is the fastest way to get data. With smaller cubes, the server might have
enough memory to keep a large portion of the data in the cache. In such cases, aggregations are not
needed and existing aggregations may never be used. Cache warming can be used to ensure that users
will always have excellent performance.
With larger cubes, however, the server might not have sufficient memory to keep query data in the
cache. Additionally, cached results can be pushed out by other query results. Hence, cache warming
might help only some of the queries.
Therefore, in larger cubes, it is important to create well-designed aggregations to provide solid query
performance. Note that too many aggregations can thrash the cache, as different data result sets and
aggregations are requested and swapped from the cache.
If you find many subcube requests to the same grain, the query processor might be making
many requests for slightly different data, resulting in the storage engine making many small but
time-consuming I/O requests. It would be better to retrieve the data en masse and then return
results from the cache.
To pre-execute queries, you can create an application (or use something like ascmd) that
executes a set of generalized queries to simulate typical user activity. This expedites the process
of populating the cache. Execute these queries right after you start Analysis Services or after
processing, to preload the cache prior to user queries.
To determine how to generalize your queries, you can sometimes refer to the Analysis Services
query log to determine the dimension attributes typically queried. However, be careful when
you generalize based on this information, because you might include attributes or subcubes that
are not beneficial and unnecessarily take up cache.
When testing the effectiveness of different cache-warming queries, be sure to empty the query
results cache between each test to ensure the validity of your testing.
Because cached results can be pushed out by other query results, you should consider
scheduling refresh of the cache results.
Limit cache warming to what can fit in memory, leaving enough for other queries to be cached.
3.5.2.2 How to warm the cache
The Analysis Services formula engine can only be warmed by MDX queries. To warm the storage
engine caches, you can use the WITH CACHE or CREATE CACHE statements, as described in this
article:
How to warm up the Analysis Services data cache using Create Cache statement?
(http://sqlcat.com/sqlcat/b/technicalnotes/archive/2007/09/11/how-to-warm-up-the-analysis-
services-data-cache-using-create-cache-statement.aspx)
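As a hedged sketch (the cube, measure, and attribute names are illustrative), a CREATE CACHE statement that warms the storage engine cache for a particular subcube looks roughly like this:

CREATE CACHE FOR [Adventure Works] AS
    ( { [Measures].[Internet Sales Amount] }
    * [Product].[Category].[Category].members
    * [Date].[Calendar].[Calendar Year].members )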
3.6 Scale-Out
If you have many concurrent users querying your Analysis Services cubes, a potential query performance
solution is to scale out your Analysis Services query servers. There are different forms of scale-out,
which are discussed in the Analysis Services 2008 R2 Operations Guide
(http://sqlcat.com/whitepapers/archive/2011/06/01/sql-server-2008r2-analysis-services-operations-
guide.aspx), but the basic principle is that there are multiple servers to address user queries. You
can do this by pointing multiple query servers at the same database, or by replicating the database.
Your server is under memory pressure due to concurrency. Scaling out allows you to distribute
the query load to multiple servers, thus alleviating memory bottlenecks on a single server.
Memory pressure can be caused by many issues. For example:
o Users execute many different unique queries, filling up and thrashing available cache.
o Complex or large queries require large subcubes and a large memory space.
o Too many concurrent users access the same server.
You have many long-running queries against your Analysis Services cube, which can block other
queries or block processing commits. In this case, scaling out the long-running queries to
separate servers can help alleviate contention problems.
References
4 Tuning Processing Performance
In the following sections we will provide guidance on tuning processing. Processing is the operation that
loads data from one or more data sources into one or more Analysis Services objects. Although OLAP
systems are not generally judged by how fast they process data, processing performance affects how
quickly new data is available for querying. Every application has different data refresh requirements,
ranging from monthly updates to near real-time data refreshes; however, in all cases, the faster the
processing performance, the sooner users can query refreshed data.
Analysis Services provides several processing commands, allowing granular control over the data loading
and refresh frequency of cubes.
To manage processing operations, Analysis Services uses centrally controlled jobs. A processing job is a
generic unit of work generated by a processing request.
From an architectural perspective, a job can be broken down into parent jobs and child jobs. For a given
object, you can have multiple levels of nested jobs depending on where the object is located in the OLAP
database hierarchy. The number and type of parent and child jobs depend on these factors:
The object that you are processing, such as a dimension, cube, measure group, or partition.
The processing operation that you are requesting, such as ProcessFull, ProcessUpdate, or
ProcessIndexes.
For example, when you issue a ProcessFull operation for a measure group, a parent job is created for
the measure group with child jobs created for each partition. For each partition, a series of child jobs are
spawned to carry out the ProcessFull operation of the fact data and aggregations. In addition, Analysis
Services implements dependencies between jobs. For example, cube jobs are dependent on dimension
jobs.
The most significant opportunities to tune performance involve the processing jobs for the core
processing objects: dimensions and partitions. Each of these has its own section in this guide.
References
Additional background information on processing can be found in this technical article: Analysis Services
2005 Processing Architecture (http://msdn.microsoft.com/en-us/library/ms345142(SQL.90).aspx).
MSOLAP: Processing
o Rows read/sec
MSOLAP: Proc Aggregations
o Temp File Bytes Writes/sec
o Rows created/Sec
o Current Partitions
MSOLAP: Threads
o Processing pool idle threads
o Processing pool job queue length
o Processing pool busy threads
MSSQL: Memory Manager
o Total Server Memory
o Target Server Memory
Process
o Virtual Bytes – msmdsrv.exe
o Working Set – msmdsrv.exe
o Private Bytes – msmdsrv.exe
o % Processor Time – msmdsrv.exe and sqlservr.exe
MSOLAP: Memory
o Quota Blocked
Logical Disk:
o Avg. Disk sec/Transfer – All Instances
Processor:
o % Processor Time – Total
System:
o Context Switches / sec
Configure the trace to save data to a file. Measuring every 15 seconds will be sufficient for tuning
processing. As you tune processing, you should re-measure these counters after each change to see
whether you are getting closer to your performance goal. Also note the total time used by processing.
The following sections explain how to use and interpret the individual counters.
In the following section we will assume that you use SQL Server as the relational foundation for Analysis
Services. (For users of other databases, the knowledge here will most likely transfer cleanly to your
platform.)
In your SQL Server Profiler trace of the relational database, you should capture at least these columns:
TextData
Reads
DatabaseName
SPID
Duration
You can use the Tuning template and just add the Reads column and the Showplan XML Statistics Profile event.
We suggest that rather than running the tool in real-time, you configure the trace to save for later
analysis. You can save to a file, or log to a table. With the latter option the data is available in tabular
format, and you can more easily correlate the traces from different tools.
The performance data gathered by these traces will be used in the following section to help you tune
processing.
To make tuning and future monitoring easier, you might want to split the dimension processing and
partition processing into two different commands. That way you can tune each step individually.
For partition processing, you should distinguish between ProcessData and ProcessIndex—the tuning
techniques for each are very different. If you follow our recommended best practice of doing
ProcessData followed by ProcessIndex instead of ProcessFull, the time spent in each should be easy to
read.
If you use ProcessFull instead of splitting into ProcessData and ProcessIndex, you can get an idea of
when each phase ends by observing the following performance counters:
During ProcessData the counter MSOLAP:Processing – Rows read/Sec is greater than zero.
During ProcessIndex the counter MSOLAP:Proc Aggregations – Rows created/Sec is greater than
zero.
ProcessData can be further split into the time spent by the SQL Server process and the time spent by the
Analysis Services process. You can use the Process counters collected to see where most of the CPU time
is spent. The following diagram provides an overview of the many different operations included in a full
process of a cube.
Figure 31: Operations included in a full cube process. Processing the dimensions reads from the relational source and builds the MOLAP dimension structures; processing each partition splits into Process Partition Data (a relational read that builds the partition segments) and Process Partition Index (which builds the bitmap indexes and aggregations).
To provide a mental model of the workload, we will first introduce the dimension processing
architecture.
To create these dimension stores, the storage engine uses the series of jobs displayed in the following
diagram.
Figure 32: Dimension processing jobs
Build Attribute Stores - For each attribute in a dimension, a job is instantiated to extract and persist the
attribute members into an attribute store. The attribute store consists of the key store, name store, and
relationship store. The data structures created during dimension processing are saved to disk with the
following extensions:
Because the relationship stores contain information about dependent attributes, the processing jobs
must be ordered to provide the correct workflow. The storage engine analyzes the dependencies
between attributes, and then creates an execution tree with the correct ordering. The execution tree is
then used to determine the best parallel execution of the dimension processing.
The following figure displays an example execution tree for a Time dimension. Note that the dimension
has been configured using cascading attribute relationships, which is a best practice for all dimension
designs.
Figure 33: Execution tree example
In this example, the All attribute proceeds first, given that it has no dependencies on another attribute,
followed by the Fiscal Year and Calendar Year attributes, which can be processed in parallel. The other
attributes proceed according to the dependencies in the execution tree, with the key attribute always
being processed last, because it always has at least one attribute relationship, except when it is the only
attribute in the dimension.
While you cannot control the number of members for a given attribute, you can improve processing
performance by using cascading attribute relationships. This is especially critical for the key attribute,
because it has the most members and all other jobs (hierarchy, decoding, bitmap indexes) are waiting
for it to complete.
In general, using attribute relationships lowers the memory requirement during processing. When an
attribute is processed, all dependent attributes must be kept in memory. If you have no attribute
relationships, all attributes must be kept in memory while the key attribute is processed. This may cause
out-of-memory conditions.
Build Decoding Stores - Decoding stores are used extensively by the storage engine. During querying,
they are used to retrieve data from the dimension. During processing, they are used to build the
dimension’s bitmap indexes.
Build Hierarchy Stores - A hierarchy store is a persistent representation of the tree structure. For each
natural hierarchy in the dimension, a job is instantiated to create the hierarchy stores.
Build Bitmap Indexes - To efficiently locate attribute data in the relationship store at querying time, the
storage engine creates bitmap indexes at processing time. For attributes with a very large number of
members, the bitmap indexes can take some time to process. In most scenarios, the bitmap indexes
provide significant querying benefits; however, when you have high-cardinality attributes, the querying
benefit that the bitmap index provides may not outweigh the processing cost of creating the bitmap
index.
From a performance perspective, the following dimension processing commands are the most
important:
ProcessData
ProcessFull
ProcessUpdate
ProcessAdd
The ProcessFull and ProcessData commands discard all storage contents of the dimension and rebuild
them. Behind the scenes, ProcessFull executes all dimension processing jobs and performs an implicit
ProcessClear on all dependent partitions. This means that whenever you perform a ProcessFull
operation on a dimension, you need to perform a ProcessFull operation on the dependent partitions to bring
the cube back online. ProcessFull also builds indexes on the dimension data itself (note that indexes on
the partitions are built separately). If you do ProcessData on a dimension, you should do ProcessIndexes
subsequently so that dimension queries are able to use these indexes.
Unlike ProcessFull, ProcessUpdate does not discard the dimension storage contents. Instead, it applies
updates intelligently in order to preserve dependent partitions. More specifically, ProcessUpdate sends
SQL queries to read the entire dimension table and then applies changes to the dimension stores.
ProcessAdd optimizes ProcessUpdate in scenarios where you only need to insert new members. ProcessAdd does not delete or update existing members. The performance benefit of ProcessAdd is that you can use a different source table, or a data source view named query, that restricts the rows of the source dimension table so that only the new rows are returned. This eliminates the need to read all of the source data. In addition, ProcessAdd retains all indexes and aggregations (flexible and rigid).
ProcessUpdate and ProcessAdd have some special behaviors that you should be aware of. These
behaviors are discussed in the following sections.
4.2.2.1 ProcessUpdate
A ProcessUpdate can handle inserts, updates, and deletions, depending on the type of attribute
relationships (rigid versus flexible) in the dimension. Note that ProcessUpdate drops invalid
aggregations and indexes, requiring you to take action to rebuild the aggregations in order to maintain
query performance. However, flexible aggregations are dropped only if a change is detected.
When ProcessUpdate runs, it must walk through the partitions that depend on the dimension. For each partition, all indexes and aggregations must be checked to see whether they require updating. On a cube with many partitions, indexes, and aggregations, this can take a very long time. Because this dependency
walk is expensive, ProcessUpdate is often the most expensive of all processing operations on a well-
tuned system, dwarfing even large partition processing commands.
4.2.2.2 ProcessAdd
ProcessAdd is the preferred way of managing Type 2 changing dimensions. Because Analysis Services
knows that existing indexes do not need to be checked for invalidation, ProcessAdd typically runs much
faster than ProcessUpdate.
In the default configuration of Analysis Services, ProcessAdd often triggers a processing error when run,
and reports duplicate key values. This error is caused by the “addition” of non-key properties that
already exist in the dimension. For example, consider the addition of a new customer to a dimension. If
the customer lives in a country that is already present in the dimension, this country cannot be added (it
is already there) and Analysis Services throws an error. The solution in this case is to set the
<KeyDuplicate> to IgnoreError on the dimension processing command.
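For illustration, a dimension ProcessAdd command that ignores the duplicate key condition might look like the following XMLA sketch (the database and dimension IDs are placeholders, not objects from this guide):
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
    <DimensionID>Customer</DimensionID>
  </Object>
  <Type>ProcessAdd</Type>
  <!-- Ignore the duplicate key errors raised for non-key attribute members that already exist -->
  <ErrorConfiguration>
    <KeyDuplicate>IgnoreError</KeyDuplicate>
  </ErrorConfiguration>
</Process>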
Notes
You cannot run a ProcessAdd on an empty dimension. The dimension must first be fully
processed.
Formerly, ProcessAdd was available only as an XMLA command; however, in SQL Server 2012,
you can configure incremental updates using the Process Add option in SQL Server Management
Studio.
References
For detailed information about automating ProcessAdd, see Greg Galloway’s blog entry:
http://www.artisconsulting.com/blogs/greggalloway/Lists/Posts/Post.aspx?ID=4
For information about how to avoid setting duplicate keys, see this forum thread:
http://social.msdn.microsoft.com/Forums/en-US/sqlanalysisservices/thread/8e7f1304-56a1-
467e-9cc6-68428bd92aa6?prof=required
4.3.1 Reduce Attribute Overhead
When it comes to dimension processing, you must pay a price for having many attributes. If the processing time for the dimension is prohibitive, you will most likely have to change the attribute design in order to improve performance.
Every attribute that you include in a dimension impacts the cube size, the dimension size, the
aggregation design, and processing performance. Whenever you identify an attribute that will not be
used by end users, delete the attribute entirely from your dimension. After you have removed
extraneous attributes, you can apply a series of techniques to optimize the processing of remaining
attributes.
For example, the primary key of the customer dimension uniquely identifies each customer by account
number; however, users also want to slice and dice data by the customer’s social security number. Each
customer account number has a one-to-one relationship with a customer social security number. In this case, you can consider disabling the creation of bitmap indexes for the social security number attribute.
You can also consider removing bitmap indexes from attributes that are always queried together with
other attributes that already have bitmap indexes that are highly selective. If the other attributes have
sufficient selectivity, adding another bitmap index to filter the segments will not yield a great benefit.
For example, assume you are creating a Sales fact table, and user queries always use the Date and Store
dimensions. Sometimes a filter is also applied by using Store Clerk. However, because you have already
filtered on Stores, adding a bitmap on Store Clerk might yield only trivial benefit. In this case, you might
consider disabling bitmap indexes on the Store Clerk attribute.
You can disable the creation of bitmap indexes for an attribute by setting the
AttributeHierarchyOptimizedState property to Not Optimized.
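In the dimension's ASSL definition, this setting corresponds to the AttributeHierarchyOptimizedState element of the attribute (written without a space). A minimal sketch, with placeholder IDs and most attribute properties omitted, looks like this:
<Attribute>
  <ID>Store Clerk</ID>
  <Name>Store Clerk</Name>
  <!-- ...KeyColumns, NameColumn, and other attribute properties omitted... -->
  <!-- Skip building bitmap indexes for this attribute hierarchy -->
  <AttributeHierarchyOptimizedState>NotOptimized</AttributeHierarchyOptimizedState>
</Attribute>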
If you do not use cascading attribute relationships, the SQL Server OPENROWSET function, which
provides a method for accessing data from multiple sources, is used to merge the data streams. In this
situation, the processing for the attribute is extremely slow, because it must access multiple
OPENROWSET derived tables.
If you have the option, consider performing ETL to bring all data needed for the dimension into the same
SQL Server database. This allows you to utilize the relational engine to tune the query.
Tables that have the characteristics of dimensions can often be heavily indexed with little insert/update
performance overhead to the system. You can use this to your advantage during processing and make
liberal use of relational indexes.
To quickly tune the relational queries used for dimension processing, capture a Profiler trace of the
dimension processing and use the Database Engine Tuning Advisor to generate recommendations based
on the trace. For small dimension tables, chances are that you can get away with adding every suggested
index. For larger tables, target the indexes towards the longest-running queries. For detailed tuning
advice on large dimension tables, see the SQL Server 2008 R2 Analysis Services Operations Guide.
However, you should be careful when using the ByTable processing option (the ProcessingGroup property of the dimension) – if Analysis Services runs out of memory during processing, it will have a large impact on both query and processing performance. Experiment with this setting carefully before putting it into production.
Note also that ByTable processing will cause duplicate key (KeyDuplicate) errors because SELECT
DISTINCT is not executed for each attribute, and the same members will be encountered repeatedly
during processing. Therefore, you will need to specify a custom error configuration and disable the
KeyDuplicate errors.
In this section, we discuss techniques for efficient data refresh of partitions.3
Process Fact Data - Fact data is processed by three concurrent threads: one sends SQL statements to extract the data from the data source, one looks up the dimension keys and populates the processing buffer, and one writes the processing buffer to disk when it fills up.
Build Aggregations and Bitmap Indexes - Aggregations are built in memory during processing. Although
too few aggregations may have little impact on query performance, excessive aggregations can increase
processing time without much added value on query performance.
If aggregations do not fit in memory, chunks are written to temp files and merged at the end of the
process. Bitmap indexes are also built during this phase and written to disk on a segment-by-segment
basis.
ProcessFull
Discards the storage contents of the partition and then rebuilds them. Behind the scenes,
ProcessFull executes jobs for ProcessData and ProcessIndexes.
3
For detailed guidance on server tuning, hardware optimization and relational indexing, see the SQL
Server 2008 R2 Operations Guide.
ProcessData
Discards the storage contents of the object and rebuilds only the fact data.
ProcessIndexes
Preserves the data created during ProcessData and creates new aggregations and bitmap
indexes based on it. ProcessIndexes requires a partition to have built its data already.
ProcessAdd
Internally creates a temporary partition, processes it with the target fact data, and then merges
it with the existing partition. Previously, this option was available in SQL Server Management
Studio as ProcessIncremental.
ProcessClear
Removes all data from the partition. Previously, this option was available in Business Intelligence
Development Studio as UnProcess.
ProcessClearIndexes
Removes all indexes and aggregations from the partition. This brings the partition to the same
state as if ProcessClear followed by ProcessData had just been run. Note that
ProcessClearIndexes is the name of the XMLA command. This command is not available in SQL
Server Data Tools or SQL Server Management Studio.
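Because ProcessClearIndexes is available only as an XMLA command, you would issue it directly; a sketch with placeholder object IDs looks like the following:
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
    <CubeID>Sales</CubeID>
    <MeasureGroupID>Internet Sales</MeasureGroupID>
    <PartitionID>Internet Sales 2014</PartitionID>
  </Object>
  <!-- Drops aggregations and bitmap indexes but keeps the partition fact data -->
  <Type>ProcessClearIndexes</Type>
</Process>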
References
Top 10 Best Practices for Building a Large Scale Relational Data Warehouse
(http://sqlcat.com/sqlcat/b/top10lists/archive/2008/02/06/top-10-best-practices-for-building-a-large-scale-relational-data-warehouse.aspx)
Analysis Services Processing Best Practices
(http://sqlcat.com/sqlcat/b/whitepapers/archive/2007/11/15/analysis-services-processing-best-practices.aspx)
4.4.4.1 Inserts
If you have a browsable, processed cube and you need to add new data to an existing measure group
partition, you can apply one of the following techniques:
ProcessFull—Perform a ProcessFull operation for the existing partition. During the ProcessFull
operation, the cube remains available for browsing with the existing data while a separate set of
data files are created to contain the new data. When the processing is complete, the new
partition data is available for browsing. Note that ProcessFull is technically not necessary, given
that you are only doing inserts. To optimize processing for insert operations, you can use
ProcessAdd.
ProcessAdd—Use this operation to append data to the existing partition files. If you frequently
perform ProcessAdd, we recommend that you periodically perform ProcessFull in order to
rebuild and recompress the partition data files. The reason is that, internally, ProcessAdd creates a temporary partition and merges it into the existing partition, which can result in data fragmentation over time. A sketch of a ProcessAdd command appears at the end of this section.
If your measure group contains multiple partitions, a more effective approach is to create a new
partition that contains the new data and then perform ProcessFull on that partition. This technique
allows you to add new data without affecting the existing partitions. When the new partition has
completed processing, it is available for querying.
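As an illustration, the following XMLA sketch appends rows to an existing partition with ProcessAdd and uses an out-of-line query binding so that only the new source rows are read. All object IDs and the query text are placeholders; adapt them to your own database:
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
    <CubeID>Sales</CubeID>
    <MeasureGroupID>Internet Sales</MeasureGroupID>
    <PartitionID>Internet Sales Current</PartitionID>
  </Object>
  <Type>ProcessAdd</Type>
  <Bindings>
    <Binding>
      <DatabaseID>MyOlapDatabase</DatabaseID>
      <CubeID>Sales</CubeID>
      <MeasureGroupID>Internet Sales</MeasureGroupID>
      <PartitionID>Internet Sales Current</PartitionID>
      <!-- Restrict the relational query to the rows added since the last refresh -->
      <Source xsi:type="QueryBinding">
        <DataSourceID>SalesDW</DataSourceID>
        <QueryDefinition>SELECT * FROM dbo.FactSales_NewRows</QueryDefinition>
      </Source>
    </Binding>
  </Bindings>
</Process>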
4.4.4.2 Updates
When you need to perform data updates, you can perform a ProcessFull. Ideally you will target the
updates to a specific partition, so that you only have to process a single partition.
Rather than directly updating fact data, a better practice is to use a journaling mechanism to implement
data changes. In journaling, you turn an update into an insertion that corrects the existing data. With
this approach, you can simply continue to add new data to the partition by using a ProcessAdd. By using
journaling, you also have an audit trail of the changes that have been made to the fact table.
4.4.4.3 Deletes
For deletions, consider using multiple partitions so that expired data can be removed by dropping a partition.
Consider the following example. You currently have 13 months of data in a measure group, 1 month per
partition. You want to remove the oldest month of data from the cube. If you have partitioned the data
correctly, you can simply delete the partition without affecting any of the other partitions.
If there are any old dimension members that appeared only in the expired month, you can remove these
using a ProcessUpdate operation on the dimension, but only if it contains flexible relationships.
If you need to delete members from the key/granularity attribute of a dimension, you must set the
dimension’s UnknownMember property to Hidden. This is because the server does not know if there is
a fact record assigned to the deleted member. After this property has been set appropriately, the
member will be hidden at query time.
Another option is to remove the data from the underlying table and perform a ProcessFull operation.
However, this may take longer than ProcessUpdate.
As your dimension grows larger, you may want to perform a ProcessFull operation on the dimension to
completely remove deleted keys. However, if you do this, all related partitions must also be
reprocessed. This may require a large batch window and is not viable for all scenarios.
Some data types are, by the nature of their design, faster to process than others. For fastest performance, restrict fact tables to the faster data types; for example, prefer integer surrogate keys and narrow numeric measure columns over wide character columns.
In the following subsection, we assume that your relational source is SQL Server. If you are using
another relational source, some of the advice still applies – consult your database specialist for platform
specific guidance.
Analysis Services uses the partition information to generate its queries. Unless you have done any query binding in the UDM, the SELECT statement issued to the relational source is very simple. It consists of:
A SELECT of the columns required to process. This will be the dimension columns and the
measures.
Optionally, a WHERE criterion if you use partitions. You can control this WHERE criterion by
changing the query binding of the partition.
If the query behind a partition joins dimension tables, you can eliminate the joins by denormalizing the joined columns to the fact table. If you are using a star schema design, you should already have done this.4
If you got rid of all joins, your query plan should look something like the following figure.
Click on the table scan (it may also be a range scan or index seek in your case) and bring up the
Properties pane.
In this example, both partition 4 and partition 5 are accessed, and the partition count is 2. In general,
you want the value for Actual Partition Count to be 1. If this is not the case (as in the example), you
should consider repartitioning the relational source data so that each cube partition touches at most
one relational partition.
Splitting partition processing into separate ProcessData and ProcessIndex commands has two advantages. First, it allows you to restart failed processing from the last valid state. For example, if processing fails during ProcessIndex, you can restart that phase instead of reverting to running ProcessData again.
4
For background on relational star schemas and how to design and denormalize for optimal
performance, refer to: Ralph Kimball, The Data Warehouse Toolkit.
Second, ProcessData and ProcessIndex have different performance characteristics. Typically, you want
to have more parallel commands executing during ProcessData than you want during ProcessIndex. By
splitting them into two different commands, you can override parallelism on the individual commands.
Of course, if you don’t want to micromanage partition processing, you might opt for running a
ProcessFull on the measure group. This works well on small cubes where performance is not a concern.
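For example, the following XMLA sketch (object IDs and MaxParallel values are placeholders) first runs ProcessData for the partitions with higher parallelism and then runs ProcessIndexes with lower parallelism, as two separately submitted batches:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Phase 1: read the relational data and build the fact data -->
  <Parallel MaxParallel="8">
    <Process>
      <Object>
        <DatabaseID>MyOlapDatabase</DatabaseID>
        <CubeID>Sales</CubeID>
        <MeasureGroupID>Internet Sales</MeasureGroupID>
        <PartitionID>Internet Sales 2014</PartitionID>
      </Object>
      <Type>ProcessData</Type>
    </Process>
    <!-- ...additional ProcessData commands for the other partitions... -->
  </Parallel>
</Batch>
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Phase 2: build aggregations and bitmap indexes -->
  <Parallel MaxParallel="2">
    <Process>
      <Object>
        <DatabaseID>MyOlapDatabase</DatabaseID>
        <CubeID>Sales</CubeID>
        <MeasureGroupID>Internet Sales</MeasureGroupID>
        <PartitionID>Internet Sales 2014</PartitionID>
      </Object>
      <Type>ProcessIndexes</Type>
    </Process>
    <!-- ...additional ProcessIndexes commands for the other partitions... -->
  </Parallel>
</Batch>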
As you continue tuning, keep comparing the baselines to measure improvement, and watch out for
bottlenecks to appear again as you push more data through the system.
Using multiple partitions can enhance processing performance. Partitions allow you to work on many,
smaller parts of the fact table in parallel. Because a single connection to SQL Server can only transfer a limited number of rows per second, adding more partitions (and hence more connections) can increase throughput. How many partitions you can process in parallel depends on your CPU and machine architecture. As a rule of thumb, keep increasing parallelism until you no longer see an increase in MSOLAP:Processing – Rows read/sec. You can measure the number of partitions being processed concurrently by looking at the performance counter MSOLAP:Proc Aggregations – Current Partitions.
Being able to process multiple partitions in parallel is useful in a variety of scenarios; however, there are
a few guidelines that you must follow.
Whenever you process a measure group that has no processed partitions, Analysis Services must
initialize the cube structure for that measure group. To do this, it takes an exclusive lock that
prevents parallel processing of partitions. You should eliminate this lock before you start the full
parallel process on the system. To remove the initialization lock, ensure that you have at least
one processed partition per measure group before you begin the parallel operation.
If you do not have a processed partition, you can perform a ProcessStructure on the cube to
build its initial structure and then proceed to process measure group partitions in parallel. You
will not encounter this limitation if you process partitions in the same client session and use the
MaxParallel XMLA element to control the level of parallelism.
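For example, you might send a command like the following (object IDs are placeholders) before kicking off the parallel partition batches:
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
    <CubeID>Sales</CubeID>
  </Object>
  <!-- Builds the cube structure (processing its dimensions if needed) without reading any fact data -->
  <Type>ProcessStructure</Type>
</Process>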
When you process many partitions in parallel, also make sure the data source allows enough concurrent connections:
1. In SQL Server Data Tools, open the data source used by the measure group.
2. In the Properties pane, review the Maximum number of connections box.
3. Set this number to at least the number of partitions you want to process in parallel.
During the ProcessIndex phase, the number you want to optimize is the performance counter MSOLAP:Proc Aggregations – Rows created/sec. The higher the counter, the shorter the ProcessIndex time. You can use this counter to check whether your tuning efforts improve the speed.
Additional counters to examine are the temporary file counters (for example, MSOLAP:Proc Aggregations – Temp file bytes written/sec). When an aggregation doesn’t fit in memory, the aggregation data is spilled to temporary disk files, and building disk-based aggregations is much more expensive. Therefore, if you notice these counters increasing, consider making more memory available for the index-building phase, or consider dropping some of the larger aggregations to avoid the spill.
Keep increasing the number of partitions processed in parallel until you no longer see an increase in processing speed.
4.4.11 Partitioning the Relational Source
The best partition strategy to implement in the relational source depends on the capabilities of the
database product, but some general guidance applies.
It is often a good idea to reflect the cube partition strategy in the relational design. Partitions in the relational source serve as “coarse indexes,” and matching relational partitions with the cube partitions allows you to get the best possible table scan speeds by touching only the records you need.
Another way to achieve that effect is to use a SQL Server clustered index (or the equivalent in
your preferred database engine) to support fast scan queries during partition processing.
If you have used a matrix partition schema as described earlier, you may even want to combine the partitioning and clustered index strategies, using partitioning to support one of the partitioned dimensions and a clustered index to support the other.
5 Special Considerations
There are certain features of Analysis Services that provide a lot of business intelligence value, but that
require special attention to succeed. This section describes these scenarios and the tuning you can apply
when you encounter them.
5.1 Distinct Count
Distinct count measures are architecturally very different from other Analysis Services measures
because they are not additive in nature. This means that more data must be kept on disk and in general,
most distinct count queries have a heavy impact on the storage engine.
When a distinct count query spans multiple partitions, the scan jobs for those partitions must coordinate with one another so that the same value is not counted more than once. If each partition contains a nonoverlapping range of values, this coordination between jobs is avoided and query performance can improve by orders of magnitude, depending on hardware. You can also perform additional optimizations to help improve distinct count performance:
The key to improving distinct count query performance is to have a partitioning strategy that
involves a time period together with your distinct count value. Start by partitioning by time and, within each time period, by x distinct count value partitions of equal size with nonoverlapping ranges, where x is the number of cores. Refine x by testing with different partitioning schemes.
To distribute your distinct count values across partitions with nonoverlapping ranges, consider building a hash of the distinct count value. A modulo function is simple and straightforward (for example, convert the character key to an integer value and take the modulo); however, it requires extra processing and storage, because you might have to maintain an IDENTITY table. A hash function such as the SQL HashBytes function avoids those issues but may introduce hash key collisions, where the same hash value is produced for different source values.
The distinct count measure must be directly contained in the query. If you partition your cube by the hash of the distinct count value, it is important that your queries be written against the hash of the distinct count value, not against the distinct count value itself. Even if the distinct count value and its hash have the same distribution of data, and even if you partition data by the latter, the partition header files contain only the ranges of the hash values. If a query filters on the original value instead, the Analysis Services storage engine must query all of the partitions to perform the distinct count.
The distinct count values need to be continuous.
References
For more information, see Analysis Services Distinct Count Optimization Using Solid State Devices
(http://sqlcat.com/sqlcat/b/technicalnotes/archive/2010/09/20/analysis-services-distinct-count-
optimization-using-solid-state-devices.aspx).
When you design the partitions for a distinct count measure group, you should partition on the value of the distinct count measure column instead of on a dimension.
To do this, group the distinct count measure column into separate, nonoverlapping intervals. Each
interval should contain approximately the same amount of rows from the source. These intervals then
form the source of your Analysis Services partitions.
Because the parallelism of the ProcessData phase is limited by the amount of partitions you have, for
optimal processing performance, you should split the distinct count measure into as many equal-sized
nonoverlapping intervals as you have CPU cores on the Analysis Services computer.
It is possible to use noninteger columns for distinct count measure groups. However, for performance reasons, and to avoid hitting the 4-GB string store limit, you should avoid this.
You should also investigate the possibility of optimizing the relational database for the particular SQL
queries that are generated during processing of distinct count partitions. The processing query will send
an ORDER BY clause in the SQL, and there may be techniques that you can follow to build indexes in the
relational database that will produce better performance for this query pattern.
References
The Analysis Services Distinct Count Optimization white paper describes how you can use hash functions to transform noninteger columns into integers for distinct count. It also provides examples of the nonoverlapping interval-partitioning strategy.
Aggregations built on a distinct count measure group must include the distinct count column. In other words, if you think of an aggregation as a GROUP BY on the aggregation granularities, a distinct count aggregation is a GROUP BY on the aggregation granularities and the distinct count column. Having the distinct count column in the aggregation data allows the aggregation to be used when querying a higher granularity—but unfortunately it also makes the aggregations much larger.
To get the most value out of aggregations for distinct count partitions, design aggregations at a
commonly viewed higher level attribute related to the distinct count attribute. For example, a report
about customers is typically viewed at the Customer Group level; hence, build aggregations at that level.
A common approach is to run the typical queries against your distinct count partitions and use usage-based optimization to build the appropriate aggregations.
5.1.4 Optimize the Disk Subsystem for Random I/O
Distinct count queries have a heavy impact on the Analysis Services storage engine, which for large cubes means a heavy impact on the disk subsystem as well. For each such query, Analysis Services potentially generates multiple jobs, each scanning a portion of the data on disk to perform its share of the distinct count calculation. This activity results in heavy random I/O on the disk subsystem, which can significantly reduce the query performance of your distinct counts and of all of your Analysis Services queries.
The disk optimization techniques described in the SQL Server 2008 R2 Analysis Services Operations
Guide are especially important for distinct count measure groups.
One way to think about the performance of many-to-many dimensions is to regard a many-to-many dimension as a generalization of the distinct count measure. Using a many-to-many dimension lets you apply distinct count logic to other
Analysis Services measures such as SUM, COUNT, MAX, MIN, and so on. However, to calculate these
values, the Analysis Services storage engine must parse through the lowest level of granularity of data.
This is because when a query includes a many-to-many dimension, the query calculation is performed at
query-time between the measure group and intermediate measure group at the attribute level. The
result is a processor- and memory-intensive operation.
When many-to-many dimensions are used, you might experience the following performance and
accuracy issues:
The join between the measure group and intermediate measure group is a hash join strategy;
hence it is very memory-intensive to perform this operation.
Because queries involving many-to-many dimensions result in a join between the measure
group and an intermediate measure group, best performance is achieved by reducing the size of
your intermediate measure group. A general rule is less than 1 million rows.
Many-to-many relationships cannot be aggregated. Therefore, queries involving many-to-many
dimensions cannot use aggregations or aggregate caches—only a direct hit will work. There are
various MDX calculation issues with VisualTotals, subselects, and CREATE SUBCUBE 5.
There may be perceived double counting issues because it is difficult to identify which members
of the dimension are involved with the many-to-many relationship.
To help improve the performance of many-to-many dimensions, you can make use of the Many-to-Many Matrix Compression tool in BIDS Helper (http://bidshelper.codeplex.com/wikipage?title=Many-to-Many%20Matrix%20Compression), which removes repeated many-to-many relationships, thus reducing the size of your intermediate measure group.
The following figure shows how a MatrixKey can be created to eliminate repeated combinations. The
MatrixKey is based on combinations of common dimension members.
5
You can find more information in the Analysis Services Many-to-Many Dimensions: Query
Performance Optimization Techniques white paper.
In a parent-child hierarchy, Analysis Services does not materialize the intermediate levels; aggregations can be created only for the key attribute and the top (All) attribute, so queries at intermediate levels are resolved at query time. Therefore, a common best practice is to refrain from using parent-child hierarchies that contain a large number of members. You might ask, how big is large? Unfortunately, there isn’t a single answer or specific number, because query performance at intermediate levels of the parent-child hierarchy degrades linearly with the number of members. If you are in a design scenario with a large parent-child hierarchy, consider altering the source schema to reorganize part or all of the hierarchy into a regular hierarchy with a fixed number of levels.
For example, say you have a parent-child hierarchy such as the one shown here.
The data from this parent-child hierarchy is represented in relational format as in the following table.
SK Parent_SK
1 NULL
2 1
3 2
4 2
5 1
Converting this table to a regular hierarchy results in a relational table in which each level of the hierarchy is stored in its own column.
After the data has been reorganized into the user hierarchy, you can use the Hide Member If property
of each level to hide the redundant or missing members.
To convert your parent-child hierarchy into a regular hierarchy, refer to the Analysis Services Parent-
Child Dimension Naturalizer tool in CodePlex:
http://pcdimnaturalize.codeplex.com/wikipage?title=Home&version=12&ProjectName=pcdimnaturalize
Another optimization that you might consider is to limit the total number of parent-child hierarchies in
your cube.
Delivering near real-time data through Analysis Services introduces several challenges:
Typically, the data must reside in memory for low-latency access.
Often, you do not have time to maintain indexes on the data.
You will typically run into locking and/or concurrency issues that must be dealt with.
Due to the locking logic invoked by Analysis Services, long-running queries in Analysis Services
can prevent processing from committing and block other queries.
To provide near real-time results and avoid Analysis Services query locking, start with the relational
source:
Place the real-time portion of the data into its own separate table but keep historical data within
your partitioned table. This can minimize the impact of blocking queries within your relational
database.
After you have optimized the relational source, go on to apply these techniques, discussed in this
section:
MOLAP switching
ROLAP + MOLAP
ROLAP partitioning
The first of these techniques, MOLAP switching, is well suited for something like a time-zone scenario in which you have active partitions throughout the day. For example, say you have active partitions for different regions such as
New York, London, Mumbai, and Tokyo. In this scenario, you would create partitions by both time and
the specific region. This provides you with the following benefits:
You can fully process (as often as needed) the active region / time partition (for example,
Tokyo / Day 1) without interfering with other partitions (for example, New York / Day 1).
You can “roll with the daylight” and process New York, London, Mumbai, and Tokyo with
minimal overlap.
However, long-running queries for a region can block the processing for that region. For example, a
processing commit of current New York data might be blocked by an existing long running query for
New York data.
To alleviate this problem, use cube flipping: create two copies of the same cube so that while one cube processes data, the other cube is available for querying.
To flip between the cubes, you can use the Analysis Services Load Balancing Toolkit
(http://sqlcat.com/sqlcat/b/toolbox/archive/2010/02/08/aslb.aspx) or create your own custom plug-in
to your UI that can detect which cube it should query against. It will be important for the plug-in to hold
session state so that user queries use the query cache. Session state should automatically refresh when
the connection string is changed. Excel, for example, can do this.
If you use the ROLAP + MOLAP technique (a small, frequently changing ROLAP partition combined with historical MOLAP partitions), keep the following guidelines in mind:
Maintain a coherent ROLAP cache. For example, if you query the relational data, the results are placed into the storage engine cache. By default, the next query uses that storage engine cache entry, but the cache entry may not reflect any new changes to the underlying relational database. It is even possible for aggregate values stored in the data cache to no longer add up correctly to their parent.
Use Real Time OLAP = true within the connection string.
Assume that the MOLAP partitions are write-once / read-sometimes. If you need to make
changes to the MOLAP partitions, ensure the changes do not have an impact on users querying
the system.
For the ROLAP partition, ensure that the underlying SQL data source can handle concurrent queries and load. A potential solution is to use Read Committed Snapshot Isolation (RCSI) (http://msdn.microsoft.com/en-us/library/ms188277.aspx).
5.4.4 ROLAP
In general, MOLAP is the preferred storage choice for Analysis Services, because MOLAP typically provides faster access to the data, especially if your disk subsystem is optimized for random I/O. MOLAP also handles attributes more efficiently and is easier to manage.
However, ROLAP against SQL Server can be a solid choice for very large cubes with excellent
performance, and provides the additional benefit of reducing processing time of large cubes, or
eliminating processing entirely. This might be a requirement if you need to implement near real-time
cubes.
The following figure shows the query performance of a ROLAP cube after usage-based optimization has
been applied. Performance is comparable to MOLAP if the system is expertly tuned.
If you use ROLAP storage, apply the following guidelines to the relational source:
Simplify the data structure of your underlying SQL data source to minimize page reads. For example, remove unused columns, try to use int columns, and so forth.
Use a star schema without snowflaking, because joins can be expensive.
Avoid scenarios such as many-to-many dimensions, parent-child dimensions, distinct count, and
ROLAP dimensions.
To speed up ROLAP queries with aggregations, you have two options:
Create cube-based aggregations by using the Analysis Services aggregation design tools.
Create your own transparent aggregations directly against the SQL Server database.
Both approaches rely on the creation of indexed views within SQL Server but offer different advantages
and disadvantages. In general:
Transparent aggregations have greater value in an environment where multiple cubes are
referencing the same fact table.
Transparent aggregations and cube-based aggregations could be used together to get the most
efficient design:
o Start with a set of transparent aggregations that will work for the most commonly run
queries.
o Add cube-based aggregations using usage-based optimization for important queries that
are taking a long time to run.
To design the most effective strategy, you might consider a combination of these two approaches, which
have their respective advantages and disadvantages:
Cube-based aggregations
Advantages Efficient query processing: Analysis Services can use cube-based aggregations
even if the query and aggregation granularities do not exactly match.
For example, a query on [Month] can use an aggregation on [Day], which requires
only the summarization of up to 31 numbers.
Aggregation design efficiency: Analysis Services includes the Aggregation Design
Wizard and the Usage-Based Optimization Wizard to create aggregation designs
based on storage and percentage constraints or queries submitted by client
applications.
Disadvantages Processing overhead: Analysis Services drops and re-creates indexed views
associated with cube-based aggregations during cube partition processing.
Dropping and re-creating the indexes can take an excessive amount of time in a
large-scale data warehouse.
Transparent aggregations
Advantages Less overhead during cube processing: Analysis Services is unaware of the
aggregations and does not drop the indexed views during partition processing.
There is no need to drop indexed views because the relational engine maintains
the indexes continuously, such as during INSERT, UPDATE, and DELETE operations
against the base tables.
Disadvantages No sophisticated aggregation algorithms: Indexed views must match query
granularity. The query optimizer doesn’t consider dimension hierarchies or
aggregation granularities in the query execution plan.
For example, an SQL query with GROUP BY on [Month] can’t use an index on
[Day].
Maintenance overhead: Database administrators must maintain aggregations by
using SQL Server Management Studio or other tools. It is difficult to keep track of
the relationships between indexed views and ROLAP cubes.
If you rely on indexed views for ROLAP aggregations, keep the following restrictions in mind:
1) You may have to design using table binding (not query binding) to an actual table instead of a
partition.
The goal of this design is to ensure partition elimination.
Advice on ROLAP aggregations is specific to SQL Server as a data source. For other data sources,
carefully evaluate the behavior of ROLAP queries when accessing a partitioned table.
It is not possible to create an indexed view on a view containing a subselect statement. This will prevent Analysis Services from creating indexed view aggregations.
2) Relational partition elimination will generally not work.
Normally, data warehousing best practice is to use partitioned fact tables. However, if you need
to use ROLAP aggregations, you must use separate tables in the relational database for each
cube partition.
Partitions require named queries, and those tend to generate bad SQL plans. This may vary
depending on the relational engine you use.
3) You cannot use some features.
You cannot use named queries or views in the DSV.
Any feature that will cause Analysis Services to generate a subquery cannot be used for ROLAP
aggregations. For example, you cannot use a Count of Rows measure, because a subquery is
always generated when this type of measure is used.
4) There are limitations on measure groups.
You cannot aggregate any measures that use MAX or MIN.
You cannot use measures that are based on nullable fields in the relational data source.
References
For more information about how to optimize your ROLAP design, see the SQLCAT white paper Analysis Services ROLAP for SQL Server Data Warehouses.
5.5 NUMA
Non Uniform Memory Access (NUMA) is a computer memory design, applied at the chip level by the
manufacturer. NUMA architectures address the problem of multiple processors needing to use the same
memory by providing separate memory for each processor.
In general, the point of NUMA architectures is to alleviate performance bottlenecks associated with lots
of logical processors actively accessing a shared memory system. A shared physical memory bus can
suffer from contention, which stalls memory access operations. NUMA solves this problem in part by
providing more independence for each group of logical processors. By contrast, multi-processor systems without NUMA suffer more from memory bottlenecks as the number of processors grows.
In NUMA architectures, applications that are NUMA-aware implement optimizations to take advantage
of the locality of memory. Without those optimizations, the applications can behave inconsistently due
to slowdowns when accessing memory that is not local to the NUMA nodes on which the threads are
executing.
The SQL Server database engine has been NUMA-aware since SQL Server 2005. To get the best performance in a NUMA environment, be sure to apply the latest hotfixes.
Note: Microsoft Windows 7, Windows Server 2008 R2, and Windows Server 2012 all support NUMA architectures that span more than 64 logical processors, through processor groups. Significant improvements were made in Windows Server 2012 to support NUMA integration with Hyper-V (http://technet.microsoft.com/en-us/library/hh831410.aspx).
5.5.2 General NUMA Tips
In general, we recommend that SSAS developers or administrators who are considering tuning for
NUMA architecture review their overall optimization strategies before using the advanced techniques
described in this section.
For MOLAP solutions, memory bus bottlenecks occur only when there are a large number of active logical processors. Typically, if you have fewer than 8 cores, you will not observe major bottlenecks, and you can conclude that NUMA is not the source of performance problems.
To estimate the overhead associated with referencing memory on a separate NUMA node, you
can use the tool CoreInfo, from SysInternals. The metric Approximate Cross-NUMA Node
Access Cost from this tool indicates how much performance might be affected when memory is
accessed across NUMA nodes. However, the SQL CAT team found that the calculations from this
metric are relative and can change on subsequent executions on the same machine. We
recommend that you conduct multiple runs and compare performance on different models and
at different times.
NUMA characteristics have the greatest effect on serial execution of individual queries.
Therefore concurrency of queries is somewhat orthogonal to the performance of a NUMA
system. However, if you have not performed these optimizations, be sure to do so before
working with NUMA settings:
o Examine the degree of concurrency on your server. If lots of user queries are being
performed concurrently, it is highly recommended that you experiment with the multi-
user configuration settings (http://support.microsoft.com/kb/2135031). This
configuration might slow down individual queries, but it should give you a better
balance of responsiveness among the different concurrent queries; that is, no single
long-running query should prevent other shorter queries from executing.
o To manage query blocking issues, use techniques described previously, in the 2008 R2
Operations Guide (http://msdn.microsoft.com/en-us/library/hh226085.aspx) .
Hyper-V provides settings that make it relatively easy to affinitize SSAS models to a specific
NUMA node within virtual machines.
If you remove all of the CPUs in a NUMA node, it becomes more difficult to use the RAM in that
node. Memory can still be allocated from that physical node’s memory, but Windows will avoid
using it and will only give pages from those nodes when the affinitized nodes are out of physical
pages.
There is physical memory local to each NUMA node, and there are on-chip caches (L1/L2/L3)
associated with each physical CPU. You can physically adjust the amount of memory chips
associated with a NUMA node if you want, but you can’t adjust the amount of caches on each
CPU.
5.5.3.1 Eliminate cross NUMA-node file access by using separate
thread pools
A new thread pool, IOProcess, was added to support separation of reads from other activities. The new
IOProcess thread pool handles read jobs. By separating out the segment scan operations into a separate
thread pool, it is possible to make the scans localized to a NUMA node and therefore improve the
memory access performance of the read operations. The processing threads are not specifically NUMA
aware.
On machines with 4 or more NUMA nodes, the IOProcess thread pool is actually a collection of thread
pools, with each NUMA node having its own pool of IOProcess threads. These can be allocated to
different cores or different processors as described below.
To ensure that file IO operations are consistently assigned to the same IOProcess thread pool, an
algorithm was implemented that spreads partition reads across all IO thread pools. Currently the
algorithm bases the distribution on the ordinal position of the partition in the partitions collection for a
measure group. This means that whenever a partition is scanned, it will use threads only from a single
NUMA node. Even if other cores are available on other NUMA nodes, those cores will not be used. In a
typical environment, multiple partitions will be queried and therefore the load will be evenly distributed
across all the thread pools and NUMA nodes.
(This design is subject to change in future, but it is important to understand its effect in existing
implementations.)
Dimension read jobs are always assigned to the IOProcess thread pool for NUMA node 0. Therefore,
NUMA node 0 will typically be assigned a larger percentage of work. However, since most dimension
operations operate on cached data, this additional load should have no noticeable impact. The Process
thread pool will continue to handle ROLAP and processing related jobs, including writing new files.
Note, however, that processing of aggregations and indexes will use the IO thread pool to scan the
partition data. Testing has shown that use of this new thread pool improves the performance of that
stage of processing.
If you wish to control the use of IOProcess thread pools in a NUMA-aware environment, there are three basic approaches: use a single IOProcess thread pool, create one IOProcess thread pool per NUMA node, or create one IOProcess thread pool per core.6 These approaches map to the values of the PerNumaNode configuration setting described in the next section.
6
With this option, separate thread pools are instantiated per core. The threads in each
thread pool are affinitized to all cores in that NUMA node but one core serves as the ideal
processor. For more information, see SetThreadIdealProcessor
(http://msdn.microsoft.com/library/windows/desktop/ms686253(v=vs.85).aspx) in the
Windows API documentation.
Figure 42. Comparing methods 1 and 2 for NUMA affinitization
In general, performance is optimized when each IO thread runs on its own core and when active threads are not blocked on shared data structures. One of the most
important bottlenecks for high end NUMA systems is the queue of segment scan jobs that need to be
executed concurrently. The act of inserting and removing these jobs was found to be a severe
bottleneck in the Process thread pool. By separating out the segment scan jobs into their own thread
pool and by then increasing the number of IOProcess thread pools to be based on either the number of
NUMA nodes or the number of cores, we can reduce one of the important bottlenecks in the system
because each thread pool has its own queue of jobs.
However, one small issue that you should keep in mind is that the IOProcess threads do not register
themselves as cancellable objects, because the expectation is that these threads will be performing only
short duration read operations on files. As a result, if you cancel a query that requested an IO operation,
any jobs on the IOProcess thread might continue to run for a short period of time after the query was
cancelled, but new IO jobs would not be created for the canceled query.
5.5.3.2 Modify the PerNumaNode setting
As with other SSAS features, the default behavior of the IOProcess thread pool is intended to cover the
most common scenarios. By default, there will be only one IOProcess thread pool if there are fewer than
four NUMA nodes available. When there are four or more NUMA nodes, the default is to create one
IOProcess thread pool per NUMA node. The threads in each thread pool will be affinitized to the cores in
the corresponding NUMA nodes.
To give you more control over the NUMA thread pools, modify the Analysis Services configuration file
(msmdsrv.ini), and change the value of the PerNumaNode setting (ThreadPool\IOProcess). The default
value of this setting is -1, which indicates that the server should use the default 4 NUMA node threshold.
Change this value to 0 to disable the per NUMA node thread pool behavior. In effect, this setting
reverts the server to use of a single IOProcess thread pool.
Change this value to 1 to create one IOProcess thread pool per NUMA node. This setting
overrides the default behavior, in which servers with fewer than 4 NUMA nodes have only one
IOProcess thread pool.
Change the value to 2 to instruct the engine to create one IOProcess thread pool per core.
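For example, a minimal msmdsrv.ini excerpt (all other settings omitted) that forces one IOProcess thread pool per NUMA node would contain:
<ConfigurationSettings>
  <ThreadPool>
    <IOProcess>
      <!-- -1 = default threshold, 0 = single pool, 1 = one pool per NUMA node, 2 = one pool per core -->
      <PerNumaNode>1</PerNumaNode>
    </IOProcess>
  </ThreadPool>
</ConfigurationSettings>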
To support multiple processor groups, the following changes were made in SSAS 2012:
The AS engine was modified to understand processor groups. By default, threads in the thread
pools might be able to use all available logical processors automatically – including those from
different processor groups.
A new configuration property, GroupAffinity (in the section, <ThreadPool> <Process>), was
added for each thread pool in the server. This property lets the SSAS administrator control
which CPUs are used for each thread pool.
For diagnostic purposes, the msmdsrv.log file in SSAS 2012 and 2014 contains the following
entries at service start that reflect the size of each of the five thread pools:
o Query
o ParsingShort
o ParsingLong
o Processing
o IOProcessing
The msmdsrv.log file outputs the affinity information for each NUMA node.
Although the GroupAffinity setting was added to support affinitizing thread pools, this property can also
be used to control the CPUs that are used for specific operations. That is, by defining a GroupAffinity
mask, administrators can allocate threads for IO, processing, query, or parsing to specific CPUs. This
optimization can improve resource usage, and better enable resource sharing across multiple processes
on the same server.
To use the GroupAffinity setting, you must define a bitmask that specifies affinity for each processor in a
processor group as follows:
The GroupAffinity property can have as many comma-separated hex values as there are defined
CPU groups on a server.
If the mask contains fewer bits than the number of CPUs for the processor group, then non-
specified bits are set to zero.
For example, the following entry in the msmdsrv.ini file would affinitize threads to the first 16 logical processors in each of the first two processor groups on the server:
<GroupAffinity>0xFFFF,0xFFFF</GroupAffinity>
In contrast, the following entry would affinitize threads to CPUs 4-7 in the first processor group, and the
first 32 CPUs in the second processor group:
<GroupAffinity>0x00F0,0xFFFFFFFF</GroupAffinity>
If no GroupAffinity value is specified for a thread pool (default) then that thread pool is allowed to
spread work across available processor groups and CPUs.
Note: Although VertiPaq can use more than 64 CPUs, setting the GroupAffinity property is currently not
supported for the VertiPaq thread pool, even though an entry exists in the msmdsrv.ini file.
For additional examples of how to use this property, see this topic in Books Online
(http://msdn.microsoft.com/library/ms175657.aspx).
Note: Support for processor groups was added around the same time as enhancements specific to
NUMA, but the two are not directly related or dependent on each other.
To get a better balance of throughput across concurrent queries, the following server configuration values are commonly recommended:
Set CoordinatorQueryBalancingFactor to 1
Set CoordinatorQueryBoostPriorityLevel to 0
If you are unsure of the appropriate values for these properties, we recommend that you run the Best
Practice Analyzer, which includes a rule that checks the current values for both properties.
If you get the message "Server not configured for optimal concurrent query throughput", you can edit
the msmdsrv.ini configuration file and change CoordinatorQueryBalancingFactor and
CoordinatorQueryBoostPriorityLevel to use the values recommended by the BPA tool.
These settings are described in more detail in the 2008 Operations Guide.
Another server property that can help on NUMA and large-memory systems is RandomFileAccessMode, which causes Analysis Services to open its data files in random access mode and changes how Windows caches them. To modify this setting, edit the msmdsrv.ini file, set the RandomFileAccessMode property to a value of 1, and optionally restart the service. Changes to this server property do not require a service restart to take effect; however, if the server is not restarted, Analysis Services will not release or change the way it accesses files that are already open, so the setting will affect only newly opened or created files.
Note: You should make this change only if the computer has sufficient memory. Using Random mode
will cause some pages to stay in memory longer, which might cause new bottlenecks.
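To put the last few settings together, an msmdsrv.ini excerpt might look like the following sketch (other settings are omitted, and you should confirm both the values and the exact position of each element against the Best Practices Analyzer output and your server's configuration file):
<ConfigurationSettings>
  <!-- Favor balanced scheduling across concurrent queries -->
  <CoordinatorQueryBalancingFactor>1</CoordinatorQueryBalancingFactor>
  <CoordinatorQueryBoostPriorityLevel>0</CoordinatorQueryBoostPriorityLevel>
  <!-- Open data files in random access mode (only on servers with sufficient memory) -->
  <RandomFileAccessMode>1</RandomFileAccessMode>
</ConfigurationSettings>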
References
Forcing NUMA Node Affinity for tabular models databases (John Sirmon, SQLCAT)
http://blogs.msdn.com/b/sqlcat/archive/2013/11/05/forcing-numa-node-affinity-for-analysis-
services-tabular-databases.aspx
Analysis Services thread pool changes in SQL Server 2012 (Wayne Robertson, CSS SQL Escalation Services)
http://blogs.msdn.com/b/psssql/archive/2012/01/31/analysis-services-thread-pool-changes-in-sql-server-2012.aspx
The impact of NUMA on SQL Server workloads (Linchi Shea)
http://sqlblog.com/blogs/linchi_shea/archive/2012/01/30/performance-impact-the-cost-of-
numa-remote-memory-access.aspx
For general information about NUMA and Hyper-V, we recommend these articles:
Hyper-V in Windows Server 2012 and NUMA
http://blogs.technet.com/b/windowsserver/archive/2012/04/05/windows-server-8-beta-hyper-
v-amp-scale-up-virtual-machines-part-1.aspx
http://blogs.technet.com/b/windowsserver/archive/2012/04/06/windows-server-8-beta-hyper-
v-amp-scale-up-virtual-machines-part-2.aspx
For more information about NUMA issues and working with tabular models, see the 2014 white paper
by Alberto Ferrari (http://go.microsoft.com/fwlink/?LinkId=398938) .
6 Conclusion
This document provides recommendations for diagnosing and resolving processing and query
performance issues in SQL Server 2012 and SQL Server 2014 Analysis Services. Based on the workload,
your performance gains might be different. We recommend that you do performance testing with an
appropriate number of users to determine the appropriate tuning steps.
Additionally, in your testing, you might consider using solid state drives (SSDs) and assess their performance benefits for your specific workloads. Some customers have reported benefits such as improved read access times with SSDs.
7 Resources
For more information, see the white papers, blog entries, and Books Online topics referenced throughout this guide.