Performance Optimization
Getting Started
  In This Section
  Performance Factors
    Message size
    Schema complexity
    Map complexity
    Pipeline components
    Tracking data
    Message-persistence frequency
    Transport adapters
    Business activity monitoring (BAM) performance factors
      BAM disk usage configuration
      BAM EventStream APIs
      BAM performance counters
    Host configuration
      Separation of sending, receiving, processing, and tracking functionality
    Orchestrations
      Serializing, deserializing, or loading large messages into memory
      Use of promoted properties to access message tags or attributes from an orchestration
      Performance impact of orchestration persistence points
      Effects of orchestration dehydration
      Performance impact of orchestration complexity
      Performance implications of using logical ports bound to physical ports
      Use of appropriate .NET classes in your orchestrations
      Use of the Call Orchestration shape versus the Start Orchestration shape
      Using XmlReader with XLANGMessage versus using XmlReader with XmlDocument
      Considerations when using maps in orchestrations
      Impact of correlated messages on throttling
        Example
        Recommendation
    Performance implications of delivery notification
    Flat file processing performance considerations
    Business Rules Engine (BRE) performance factors
      Fact types
      Data table vs. data connection
      Fact retrievers
      Rule priority
      Update calls
      Usage of logical OR operators
      Caching settings
      SideEffects property
      Instances and selectivity
    Performance metrics
      BizTalk Server performance metrics
      SQL Server performance metrics
    See Also
Performance Tools
  BizUnit 3.0 and BizUnit Designer
  Performance Analysis of Logs (PAL)
  SQLIO
  Microsoft BizTalk LoadGen 2007
  Visual Studio Team Suite Testing Tools
  Windows Server 2003 Performance Advisor
  BizTalk Server Orchestration Profiler
Investigating Bottlenecks
  What is the source of the problem?
  Using an iterative approach to testing
  Testing consistency
  Expectations: throughput vs. latency
  Scaling
Automating Testing
  In This Section
What's in it?
Guidance for optimizing performance, based upon the hands-on experience of IT professionals who have
worked extensively with BizTalk Server. This guide includes:
Getting Started: The Getting Started section provides an overview of the BizTalk Server functional
components that can affect performance. This section also describes the phases of a BizTalk
Server performance assessment.
Finding and Eliminating Bottlenecks: The Finding and Eliminating Bottlenecks section describes
various types of performance bottlenecks as they relate to BizTalk Server solutions and provides
information about how to resolve them.
Automating Testing: The Automating Testing section provides detailed steps that you should
follow when engaging in end-to-end testing to assess the performance of a BizTalk Server solution.
This topic describes the importance of testing BizTalk solutions, how to implement an automated
build process, how to use BizUnit to implement automated testing and how to use LoadGen and
BizUnit together to perform load testing.
Optimizing Performance: The Optimizing Performance section provides guidance for optimizing
performance of specific components in a BizTalk Server environment.
Acknowledgements
We in the BizTalk Server User Education team gratefully acknowledge the outstanding contributions of
the following individuals for providing both technical feedback as well as a good deal of content for the
BizTalk Server Performance Optimization Guide:
Authors
Ewan Fairweather, Microsoft
Rob Steel, Microsoft
Contributors
Paolo Salvatori, Microsoft
Ben Pearce, Microsoft
Reviewers
Stephan Pepersack, Microsoft
Justin Langford, Coeo
Kevin B. Smith, Barclays Capital
Christian Bolton, Coeo
Brian Gregor, Microsoft
Robert Hogg, Blackmarble
John Plummer, Microsoft
Niklas Engfelt, Microsoft
Everett Yang, Microsoft
Clint Huffman, Microsoft
Shane Creamer, Microsoft
Young Jun Hong, Microsoft
Guy Lau, Microsoft
Getting Started
This section describes performance factors that typically come into play in a BizTalk Server
environment, tools that can be used to measure the performance of a BizTalk Server solution, and the
phases of a BizTalk Server performance assessment.
In This Section
Performance Factors
Performance Tools
Phases of a Performance Assessment
Performance Factors
This topic describes the performance factors for the most commonly used components of a BizTalk
Server solution. Some components are used by most BizTalk solutions while others are optional. An
understanding of these factors will assist with the decision making process to maximize the
performance of a production BizTalk Server environment.
Message size
While BizTalk Server imposes no restriction on message size, practical limits and dependencies might
require you to minimize the size of your messages because large messages require more processing
resources. As message size increases, overall throughput (messages processed per second)
decreases. When designing your scenario and planning for capacity, consider the average message
size, message type, and number of messages BizTalk Server processes. Do not use unnecessarily long
attribute and tag names; if possible, keep the length under 50 characters. For example, do not use a
200-character tag name for a message size of only 1 byte.
If the in-memory size of a received message exceeds the number of bytes specified for the Large
message fragment size that is configurable on the Group Properties page for the BizTalk Group in
the BizTalk Server Administration console, then the message is split into fragments of the specified size
and the fragments are written into the MessageBox under the context of a Microsoft Distributed
Transaction Coordinator (MSDTC) transaction as follows:
1. If the incoming message is being published under the context of an existing MSDTC transaction,
then this transaction is used when writing the message fragments to the MessageBox. For
example, if the incoming message is being published by a transactional adapter configured to
require transactions then the existing transaction would be used when writing the message
fragments to the MessageBox.
2. If the incoming message is not being published under the context of an existing MSDTC
transaction, then a new MSDTC transaction is created to write the message fragments to the
MessageBox database. In this scenario, the following considerations apply:
Increase the value for Large message fragment size to reduce the frequency with which large
messages are fragmented and reduce the incidence of creating the associated MSDTC
transactions. This should be done because excessive use of MSDTC transactions is expensive
from a performance standpoint. Note that increasing this value may also increase the amount
of available memory that is used.
If it takes longer than the maximum allowable MSDTC transaction timeout of 60 minutes to
write a message to the MessageBox, then the transaction times out, an error occurs, and the
attempt to write the message fails and is rolled back. The Large message fragment size value
should be increased enough to avoid this problem when processing very large messages.
Depending on available memory, this value should be increased up to a maximum value of
1000000 bytes.
Each message fragment in a message creates one or more SQL Server database locks against
the MessageBox database. When the number of locks exceeds several hundred thousand, it is
possible that SQL Server will generate “out of lock” errors. If this problem occurs, increase the
value for Large message fragment size to reduce the number of fragments (which decreases
the number of SQL Server database locks made against the MessageBox database) or
consider housing your MessageBox database on a 64-bit version of SQL Server. The number
of available locks is significantly higher on the 64-bit version of SQL Server than on a 32-bit
version of SQL Server. The following formula can be used to estimate the maximum number of
messages per interchange when the MessageBox database is housed on a 32-bit version of
SQL Server:
200,000 / (Number of CPUs * BatchSize * MessagingThreadPoolSize)
For more information on how BizTalk Server processes large messages, see “How BizTalk Server
Processes Large Messages” in the BizTalk Server 2006 R2 help at http://go.microsoft.com/fwlink/?
LinkID=102620.
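As a concrete illustration of the formula above, the following sketch computes the estimate for a hypothetical host; the CPU count, BatchSize, and MessagingThreadPoolSize values shown are placeholders, so substitute the actual values of your own host.

```csharp
// Illustrative only: estimate the maximum number of messages per interchange
// when the MessageBox database is housed on a 32-bit version of SQL Server.
class FragmentEstimate
{
    static void Main()
    {
        int numberOfCpus = 4;               // hypothetical CPU count
        int batchSize = 20;                 // hypothetical adapter batch size
        int messagingThreadPoolSize = 10;   // hypothetical host thread pool size

        int maxMessagesPerInterchange =
            200000 / (numberOfCpus * batchSize * messagingThreadPoolSize);

        System.Console.WriteLine(maxMessagesPerInterchange); // prints 250
    }
}
```

If the interchanges you process approach this estimate, consider increasing Large message fragment size or moving the MessageBox to 64-bit SQL Server, as described above.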
Schema complexity
The throughput for message parsing (especially flat-file parsing) is affected by the complexity of the
schemas. As schema complexity increases, overall performance decreases. When designing schemas,
reduce the length of node names and move promoted properties to the top of the schema to reduce
retrieval time and improve performance.
Map complexity
Depending on the complexity of the maps, map transformation can be resource intensive. As map
complexity increases, overall performance decreases. To improve overall performance, minimize the
number of links and functoids used in your maps, especially functoids that call external resources such
as the DB Lookup functoid.
Pipeline components
Because pipeline components have a significant impact on performance (for example, a pass-through
pipeline component performs up to 30 percent better than an XML assembler/disassembler pipeline
component), make sure that any custom pipeline components perform optimally before implementing
them in your deployment. Minimize the number of pipeline components in your custom pipelines if you
want to maximize the overall performance of your BizTalk application.
You can also improve overall performance by reducing the message-persistence frequency in your
pipeline component and by coding your component to minimize redundancy. Every custom assembly,
and in particular any artifact that could potentially disrupt performance, such as a custom tracking
component, should be tested separately under heavy load conditions to observe its behavior when the
system is working at full capacity and to find any possible bottlenecks.
If you need to read the inbound message inside a pipeline component, avoid loading the entire
document into memory using an XmlDocument object. The amount of space required by an instance of
the XmlDocument class to load and create an in-memory representation of an XML document is up to 10
times the actual message size. To read a message, you should instead use an XmlTextReader object
along with an instance of the following classes:
VirtualStream (Microsoft.BizTalk.Streaming.dll) - The code for this class is located in two
locations under the Pipelines SDK as follows:
SDK\Samples\Pipelines\ArbitraryXPathPropertyHandler and
SDK\Samples\Pipelines\SchemaResolverComponent\SchemaResolverFlatFileDasm.
ReadOnlySeekableStream (Microsoft.BizTalk.Streaming.dll).
SeekAbleReadOnlyStream - The source code for this class is located in two locations under the
Pipelines SDK as follows: SDK\Samples\Pipelines\ArbitraryXPathPropertyHandler and
SDK\Samples\Pipelines\SchemaResolverComponent\SchemaResolverFlatFileDasm.
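As a hedged sketch of this streaming approach (the pipeline-component plumbing is omitted; ReadOnlySeekableStream is assumed from Microsoft.BizTalk.Streaming.dll and IBaseMessage from Microsoft.BizTalk.Message.Interop):

```csharp
using System.IO;
using System.Xml;
using Microsoft.BizTalk.Message.Interop;
using Microsoft.BizTalk.Streaming;

public static class StreamingReadSketch
{
    // Read an inbound message without materializing it as an XmlDocument.
    public static void ReadMessageStreaming(IBaseMessage msg)
    {
        Stream originalStream = msg.BodyPart.GetOriginalDataStream();

        // Wrap the (possibly forward-only) transport stream so that this and
        // downstream components can still seek back to the beginning.
        var seekableStream = new ReadOnlySeekableStream(originalStream);

        XmlReader reader = new XmlTextReader(seekableStream);
        while (reader.Read())
        {
            // Inspect nodes here one at a time; memory use stays flat
            // regardless of message size.
        }

        // Rewind and hand the stream back so later components can read it too.
        seekableStream.Position = 0;
        msg.BodyPart.Data = seekableStream;
    }
}
```

The same wrapping technique applies to VirtualStream and SeekAbleReadOnlyStream from the Pipelines SDK samples listed above; which wrapper is appropriate depends on whether the original stream is already seekable.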
If XLANG/s is generating an XML message, we recommend not using the XMLTransmit pipeline on the
send side, because doing so incurs processing overhead. Instead, use the PassThru pipeline on the
send side in all cases except where additional pipeline processing is required.
Use the PassThruReceive and the PassThruTransmit standard pipelines whenever possible. They do
not contain any pipeline component and do not perform any processing of the message. For this
reason, they ensure maximum performance in receiving or sending messages. You can use a
PassThruReceive pipeline on a receive location if you need to publish a binary document to the BizTalk
MessageBox and a PassThruTransmit pipeline on a send port if you need to send out a binary
message. You can also use the PassThruTransmit pipeline on a physical send port bound to an
orchestration if the message has been formatted and is ready to be transmitted. You will need to use a
different approach if you need to accomplish one of the following actions:
Promoting properties to the context of the inbound XML or flat file message
Applying a map inside the receive location
Applying a map in an orchestration subscribing to the message
Applying a map on a send port subscribing to the message
To accomplish one of these actions, you must probe and discover the document type inside the receive
pipeline and assign its value (namespace#root-name) to the MessageType context property. This
operation is typically accomplished by a disassembler component such as the XML disassembler
component (XmlDasmComp) or the flat file disassembler component (FFDasmComp). In this case, you
need to use a standard pipeline (for example, the XmlReceive pipeline) or a custom pipeline containing
a standard or a custom disassembler component.
Inside a receive pipeline, you should promote items to the message context only if you need them for
message routing (Orchestrations, Send Ports) or demotion of message context properties (Send Ports).
If you need to flow metadata with a message, and you do not use it for routing or demotion purposes,
use the IBaseMessageContext.Write method instead of the IBaseMessageContext.Promote method.
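A minimal sketch of the distinction follows; the property name and namespace are hypothetical, and Write and Promote share the same three-argument signature:

```csharp
using Microsoft.BizTalk.Message.Interop;

public static class ContextSketch
{
    // Attach metadata to the message context inside a pipeline component.
    public static void StampContext(IBaseMessage msg)
    {
        IBaseMessageContext ctx = msg.Context;
        const string ns = "http://Contoso.Schemas.Properties"; // hypothetical namespace

        // Written (not promoted): travels with the message but is NOT evaluated
        // against subscriptions, so it adds no routing overhead.
        ctx.Write("BatchId", ns, "batch-42");

        // Promoted: evaluated against subscriptions; use only when the value is
        // actually needed for routing or for property demotion on a send port.
        ctx.Promote("MessageKind", ns, "Order");
    }
}
```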
Acquire resources as late as possible and release them as early as possible. For example, if you need
to access data on a database, open the connection as late as possible and close it as soon as possible.
Use the C# using statement to implicitly release disposable objects or the finally block of a try-catch-
finally statement to explicitly dispose your objects. Instrument your source code to make your
components simple to debug.
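A short sketch of the acquire-late/release-early pattern with the using statement (the connection string and query are hypothetical):

```csharp
using System.Data.SqlClient;

public static class LateAcquireSketch
{
    // Open the connection only when needed and dispose it deterministically.
    public static int ReadOrderCount(string customerId)
    {
        const string connStr =
            @"Server=.\SQL1;Database=Orders;Integrated Security=SSPI"; // hypothetical

        // 'using' closes the connection and command even if the query throws.
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", customerId);
            conn.Open();                      // acquired as late as possible
            return (int)cmd.ExecuteScalar();
        }                                     // released as early as possible
    }
}
```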
If you need to extract information from a message using an XPath expression, avoid loading the entire
document into memory using an XmlDocument object just to use the SelectNodes or SelectSingleNode
methods. Instead, you should:
Load the message using an instance of the SeekAbleReadOnlyStream class.
Create an instance of the XPathReader .NET class that you can find in the
Microsoft.BizTalk.XPathReader.dll assembly.
Use the methods (ReadUntilMatch, Match, etc.) of the XPathReader instance to find matches.
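The steps above can be sketched as follows; the XPath expression and method name are illustrative, and XPathReader and XPathCollection are assumed from the Microsoft.BizTalk.XPathReader.dll assembly:

```csharp
using System.IO;
using System.Xml;
using Microsoft.BizTalk.XPath;

public static class XPathSketch
{
    // Extract a single value from a message stream without an XmlDocument.
    public static string ExtractOrderId(Stream messageStream)
    {
        var reader = new XmlTextReader(messageStream);

        // Register the expression(s) to watch for while streaming.
        var paths = new XPathCollection();
        int queryIndex = paths.Add(
            "/*[local-name()='Order']/*[local-name()='OrderId']");

        var xpathReader = new XPathReader(reader, paths);
        while (xpathReader.ReadUntilMatch())
        {
            if (xpathReader.Match(queryIndex))
            {
                return xpathReader.ReadString(); // value of the matched node
            }
        }
        return null; // no match found
    }
}
```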
Tracking data
The amount of message tracking can have a significant impact on performance. As the number of items
tracked and the amount of tracking data increases, overall performance decreases. We recommend
putting the Tracking database on a separate server. It is important that you ensure the SQL agent jobs
are running. In addition, run the tracking service (TDDS) to move tracking data from the MessageBox
database to the Tracking database (BizTalkDTADb) and monitor the Tracking database for disk growth.
Archive and clean up the Tracking database regularly by backing up and deleting old files.
Message-persistence frequency
BizTalk Server commits all write operations to the MessageBox database, which stores messages and
state information for particular message instances. This persistence allows BizTalk Server to ensure
data integrity, security, and reliability. As message-persistence frequency increases, overall
performance decreases. Whenever possible, reduce the frequency at which data needs to be persisted
to the message box. For example, group multiple message-persistence points into one scope.
Transport adapters
While scenarios define the particular transport adapters that are required, transport adapters have
different settings that you can modify to improve performance. For example, orchestration and adapter
functionality should be separated into different BizTalk Server hosts to minimize resource contention.
For more information, refer to the article BizTalk Server 2006: Comparative Adapter Study at
http://go.microsoft.com/fwlink/?LinkId=116888 and Configuration Parameters that Affect Adapter
Performance at http://go.microsoft.com/fwlink/?LinkId=108785.
Business activity monitoring (BAM) performance factors
BAM disk usage configuration
BAM incurs significant overhead when a BizTalk system is under load because of the significant amount
of data that is persisted to the BAM database. Therefore, judicious use of disk I/O techniques for the
BAM database is critically important.
Note
There are scenarios in which you may want to mix EventStream types. For example, for
pipeline processing, you may want to capture the particular data in BAM regardless of
whether the pipeline is rolling back its transaction. In particular, you may want to capture data
about how many messages failed or how many retries occurred during pipeline processing.
To capture data in this situation, you should use a buffered event stream (BES).
If your application runs on a computer on which BizTalk Server is installed, use MES and OES.
(These APIs are available only from BizTalk applications.)
Note
OES is the equivalent of MES but for BizTalk orchestrations.
If you want BAM event persistence to be in sync with the pipeline transaction, you should use a
messaging event stream (MES).
All the asynchronous EventStreams (BES, MES, and OES) persist data first to the BizTalk MessageBox
database. Periodically the data is processed and persisted to the BAM Primary Import database by the
Tracking Data Decode Service (TDDS).
For more information about the BAM EventStream APIs, see “EventStream Classes” in the BizTalk
Server 2006 R2 documentation at http://go.microsoft.com/fwlink/?LinkId=115092.
Host configuration
Host configuration has a significant impact on performance. BizTalk Server supports the isolation of
individual components (including orchestrations, adapters, pipelines, and tracking) into separate hosts,
which are logical entities with specific functionality. Careful consideration should be given to the host
design of a BizTalk Server solution. How a project decides to allocate processing to hosts and host
instances can be a determining factor in that solution's performance capability. Below are some key
heuristics to consider when creating a host design.
Orchestrations
The following performance factors should be considered when implementing orchestrations in a BizTalk
Server solution:
Serializing, deserializing, or loading large messages into memory
Avoid loading entire documents into orchestrations using an XmlDocument object. If you need to read a
message inside an orchestration, avoid loading the entire document into memory using an
XmlDocument object; use XmlReader instead when possible (as described below in the section "Use of
appropriate .NET classes in your orchestrations"). For example, the following sketch reads a property
through a streaming XmlReader obtained from an XLANGMessage parameter named msg:
XmlReader old = (XmlReader)msg[0].RetrieveAs(typeof(XmlReader));
string ret = null;
try
{
    // read the property value into ret using the streaming reader
}
finally
{
    old.Dispose();
}
return ret;
Another method would be to create a .NET class based on the schema. This takes less memory than
loading the document into an XmlDocument object, as well as providing easy access to the schema
elements for .NET developers. To generate a class based on a BizTalk schema, you can use the
xsd.exe tool provided with Visual Studio 2005. For example, running xsd.exe <schema.xsd> /classes
against a simple schema containing fields named ItemA, ItemB, and ItemC will produce the following class:
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:2.0.50727.1433
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

using System.Xml.Serialization;

/// <remarks/>
[System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.42")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlTypeAttribute(AnonymousType=true,
Namespace="http://Schemas.MySchema")]
[System.Xml.Serialization.XmlRootAttribute(Namespace="http://Schemas.MySchema",
IsNullable=false)]
public partial class Root {

    private string itemAField;

    private string itemBField;

    private string itemCField;

    /// <remarks/>
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
    public string ItemA {
        get {
            return this.itemAField;
        }
        set {
            this.itemAField = value;
        }
    }

    /// <remarks/>
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
    public string ItemB {
        get {
            return this.itemBField;
        }
        set {
            this.itemBField = value;
        }
    }

    /// <remarks/>
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
    public string ItemC {
        get {
            return this.itemCField;
        }
        set {
            this.itemCField = value;
        }
    }
}
This class can then be referenced in your .NET assembly in order to access the message elements,
and the returned object can be directly assigned to a message. The following is an example use of the
class generated above:
public static Root SetValues(Microsoft.XLANGs.BaseTypes.XLANGMessage msg)
{
    Root rootObj = (Root)msg[0].RetrieveAs(typeof(Root));
    rootObj.ItemA = "value a";
    rootObj.ItemB = "value b";
    rootObj.ItemC = "value c";
    return rootObj;
}
This technique allows you to use an object-oriented approach when processing messages. This
technique should be used primarily with relatively small messages because even though this technique
uses considerably less memory than when loading the message into an XmlDocument object, the
entire message is still loaded into memory. When processing larger messages, use the XmlReader
class to read messages and the XmlWriter class to write messages. When using XmlReader and
XmlWriter, the message is contained in a VirtualStream object, and if the message size exceeds the
value specified for Large message threshold (bytes) exposed on the BizTalk Group Properties
configuration page, the message is written to the file system. This decreases overall performance but
avoids out-of-memory exceptions.
Example
For example, consider a scenario in which an orchestration receives an activation message that
initializes a correlation set, which in turn receives a convoy of 10 messages that follow the correlation
set. In addition, assume that the mix of activation and correlation messages causes a backlog of
messages in the spool table, which triggers a throttling condition that limits the number of messages
that can be received. Unfortunately, the correlation messages are throttled along with the activation
messages, which slows down the completion of orchestrations, causing further backlog and additional
throttling. Allowed to continue, this can cause the throttling mechanism to reduce system throughput to
nearly zero.
By splitting the single receive location into two receive locations -- one for activations and one for
correlations -- and configuring the locations in separate hosts, the database size throttling threshold for
the activations can be kept lower than that for correlations, which will result in reduced overall backlog,
and keep messages flowing.
So, you might be asking "Why can’t I just raise the database size threshold for my single receive
location to fix the problem?" The answer is, you can, but it won’t always result in the desired behavior.
Throttling is there primarily to protect the system from becoming overloaded. If you raise the thresholds
high enough, or turn them off altogether, you will eliminate this protection.
Recommendation
The best practice for scenarios such as the one described above, which are sensitive to throttling of
correlation messages, is to separate the receive locations into separate hosts that can be throttled
independently.
When separate hosts are configured for receive locations, set the database size throttling threshold for
the host used by the receive locations to a lower value than the database size throttling threshold for
hosts used by orchestrations or correlations.
If you know that your load will never be higher than the maximum sustainable throughput (MST) for the
system, or that throughput peaks are recoverable between peak events, then raising the throttling
thresholds will also work, but may not sustain as high a throughput as using separate hosts for
activations and correlations.
Performance implications of delivery notification
Be aware of the performance implications of using delivery notification. Delivery notification incurs
overhead that negatively impacts the overall throughput of a solution so only use it when absolutely
necessary. BizTalk creates an internal subscription for each delivery notification and uses an internal
correlation set to return a message to the corresponding orchestration instance. More subscriptions
entail more work for the master MessageBox, which ultimately reduces the maximum sustainable
throughput of the system.
Business Rules Engine (BRE) performance factors
Fact types
The rule engine takes less time to access .NET facts compared to the time it takes to access XML and
database facts. If you have a choice among .NET, XML, or database facts in a policy, you should
consider using .NET facts for improved performance.
Fact retrievers
A fact retriever implements standard methods which are typically used to supply long-term and slowly
changing facts to the rule engine before a policy is executed. The engine caches these facts and uses
them over multiple execution cycles. Instead of submitting a static or fairly static fact each time that you
invoke the rule engine, you should create a fact retriever that submits the fact for the first time, and then
updates the fact in memory only when necessary.
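A hedged sketch of such a fact retriever follows; the cache and lookup details are hypothetical, while the IFactRetriever interface and its UpdateFacts signature come from Microsoft.RuleEngine.dll:

```csharp
using Microsoft.RuleEngine;

// Supply a long-term, slowly changing fact once, then reuse it across
// execution cycles instead of re-asserting it on every policy invocation.
public class ExchangeRateFactRetriever : IFactRetriever
{
    public object UpdateFacts(RuleSetInfo ruleSetInfo, RuleEngine engine,
                              object factsHandleIn)
    {
        // On the first call the handle is null: assert the fact and keep a
        // handle to it; the engine passes the handle back on later calls.
        if (factsHandleIn == null)
        {
            object rates = LoadExchangeRates(); // hypothetical slow lookup
            engine.Assert(rates);
            return rates;
        }

        // On later calls the fact is already in engine memory; retract and
        // re-assert here only when the underlying data has actually changed.
        return factsHandleIn;
    }

    private static object LoadExchangeRates()
    {
        return new System.Collections.Hashtable(); // placeholder fact object
    }
}
```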
Rule priority
The priority setting for a rule can range on either side of 0, with larger numbers having higher priority.
Actions are executed in order from the highest priority to lowest priority. When the policy implements
forward-chaining behavior by using Assert/Update calls, the chaining can be optimized by using the
priority setting. For example, assume that Rule2 has a dependency on a value set by Rule1. Giving
Rule1 a higher priority means that Rule2 will execute only after Rule1 fires and updates the value.
Conversely, if Rule2 were given a higher priority, it could fire once, and then fire again after Rule1 fires
and updates the fact that Rule2 uses in a condition. While this may produce a correct result, giving
Rule1 a higher priority in this scenario provides better performance.
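The effect of priorities on forward chaining can be sketched with a toy rule evaluator (illustrative Python, not the BizTalk Rules Engine API; the rule names, facts, and discount logic are hypothetical):

```python
def run_policy(rules, facts):
    """Execute rule actions in descending priority order (highest first)."""
    fired = []
    for name, priority, condition, action in sorted(
            rules, key=lambda r: r[1], reverse=True):
        if condition(facts):
            action(facts)
            fired.append(name)
    return fired


facts = {"amount": 10, "discount": 0.0}
rules = [
    # Rule1 (priority 10): derive the discount from the amount.
    ("Rule1", 10, lambda f: f["amount"] > 5,
     lambda f: f.update(discount=0.1)),
    # Rule2 (priority 0): depends on the discount that Rule1 sets.
    ("Rule2", 0, lambda f: f["discount"] > 0,
     lambda f: f.update(total=f["amount"] * (1 - f["discount"]))),
]

# Because Rule1 has the higher priority, Rule2 sees the updated discount
# on the first pass and does not need to fire twice.
print(run_policy(rules, facts))   # ['Rule1', 'Rule2']
```

If Rule2 had the higher priority, a real forward-chaining engine would have to reevaluate it after Rule1's update, doing the same work twice.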
Update calls
The Update function causes all the rules that use the updated facts to be reevaluated. Update function
calls can be expensive, especially if a large set of rules is reevaluated when updating facts. There are
situations where this behavior can be avoided. For example, consider the following rules:
Rule1:
IF PurchaseOrder.Amount > 5 THEN StatusObj.Flag = true; Update(StatusObj)
Rule2:
IF PurchaseOrder.Amount <= 5 THEN StatusObj.Flag = false; Update(StatusObj)
All remaining rules of the policy use StatusObj.Flag in their conditions. Therefore, when Update is
called on the StatusObj object, all rules will be reevaluated. Whatever the value of the Amount field,
all rules except Rule1 and Rule2 are evaluated twice: once before the Update call and once after the
Update call.
To mitigate the associated overhead, you could set the value of the flag field to false prior to invoking
the policy and then use only Rule1 in the policy to set the flag. In this case, Update would be called
only if the value of the Amount field is greater than 5, and the Update function is not called if the value
of Amount is less than or equal to 5. Therefore, all the rules except Rule1 or Rule2 are evaluated twice
only if the value of the Amount field is greater than 5.
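The savings can be illustrated with a toy evaluation counter (a hypothetical Python model of the policy above, not the actual rule engine; the downstream rule count of 10 is an assumption):

```python
def count_evaluations(amount, downstream_rules=10, preset_flag=True):
    """Count condition evaluations for the Rule1/Rule2 example.

    preset_flag=False models the original policy: both Rule1 and Rule2
    call Update(StatusObj), so every Amount value forces a full
    re-evaluation of the downstream rules.
    preset_flag=True models the mitigation: the flag defaults to false
    before the policy runs, and only Rule1 calls Update.
    """
    evaluations = downstream_rules   # first pass over downstream rules
    evaluations += 2                 # Rule1 and Rule2 conditions
    update_called = (amount > 5) or not preset_flag
    if update_called:
        evaluations += downstream_rules  # second pass after Update
    return evaluations


# Original policy: Update fires for every amount.
print(count_evaluations(3, preset_flag=False))   # 22
# Mitigated policy: Update fires only when amount > 5.
print(count_evaluations(3, preset_flag=True))    # 12
print(count_evaluations(10, preset_flag=True))   # 22
```

With the mitigation, the low-amount path skips the second evaluation pass entirely.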
Caching settings
The Rule Engine uses two caches. The first one is used by the update service and the second one is
used by each BizTalk process. The first time a policy is used, the BizTalk process requests the policy
information from the update service. The update service retrieves the policy information from the rule
engine database, caches it and returns the information to the BizTalk process. The BizTalk process
creates a policy object based on that information and stores the policy object in a cache when the
associated rule engine instance completes execution of the policy. When the same policy is invoked
again, the BizTalk process reuses the policy object from the cache if one is available. Similarly, when a
BizTalk process requests information about a policy, the update service first looks for the policy
information in its own cache. The update service also checks the database every 60 seconds for
updates to the policies; if there are any, it retrieves and caches the updated information.
There are three tuning parameters for the rule engine related to these caches, CacheEntries,
CacheTimeout, and PollingInterval. You can specify the values for these parameters either in the
registry or in a configuration file. The value of the CacheEntries parameter is the maximum number of
entries in the cache and is set to 32 by default. You may want to increase the value of the CacheEntries
parameter to improve performance in certain scenarios. For example, if you are using 40 policies
repeatedly, you could increase the value of the CacheEntries parameter to 40, allowing the update
service to maintain cache details for up to 40 policies in memory.
The value of CacheTimeout is the time in seconds that an entry is maintained in the update service
cache. In other words, the CacheTimeout value refers to how long a cache entry for a policy is
maintained in the cache without being referenced. The default value of the CacheTimeout parameter is
3600 seconds, or 1 hour, meaning that if a cache entry is not referenced within an hour, it is deleted. In
some cases, it may be beneficial to increase the value of the CacheTimeout parameter to improve
performance. For example, if a policy is invoked every 2 hours, policy execution performance would be
improved by increasing the CacheTimeout parameter to a value higher than 2 hours.
The PollingInterval parameter of the rule engine defines the time in seconds for the update service to
check the rule engine database for updates. The default value for the PollingInterval parameter is 60
seconds. If you know that the policies do not get updated at all or are updated rarely, you could change
this parameter to a higher value to improve performance.
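As a sketch, the three parameters might be set in the registry as follows (a .reg fragment; the hive path shown here — a BusinessRules 3.0 key — is an assumption that varies by BizTalk Server version, so verify the correct location for your installation before applying):

```
Windows Registry Editor Version 5.00

; Rule engine cache tuning (example values):
; CacheEntries = 40 policies, CacheTimeout = 7200 s, PollingInterval = 120 s
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BusinessRules\3.0]
"CacheEntries"=dword:00000028
"CacheTimeout"=dword:00001c20
"PollingInterval"=dword:00000078
```

The dword values are hexadecimal: 0x28 = 40, 0x1c20 = 7200, 0x78 = 120.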
SideEffects property
The ClassMemberBinding, DatabaseColumnBinding, and XmlDocumentFieldBinding classes
have a property named SideEffects. This property determines whether the value of the bound field,
member, or column is cached. The default value of the SideEffects property in the
DatabaseColumnBinding and XmlDocumentFieldBinding classes is false. The default value of the
SideEffects property in the ClassMemberBinding class is true. Therefore, when a field of an XML
document or a column of a database table is accessed for the second time or later within the policy, its
value is retrieved from the cache. However, when a member of a .NET object is accessed for the
second time or later, the value is retrieved from the .NET object, and not from the cache. Setting the
SideEffects property of a .NET ClassMemberBinding to false will improve performance because the
value of the field is retrieved from the cache from the second access onwards. You can do this only
programmatically; the Business Rule Composer tool does not expose the SideEffects property.
Performance metrics
This section describes performance metrics that should be evaluated when measuring the performance
of a BizTalk solution. This section provides pointers to existing documentation as a reference when
more information is needed for a specific performance object or counter that might be mentioned later
in this document.
Performance Tools
This topic provides information on tools you can use to evaluate the performance of a BizTalk Server
solution. The tools described in this topic have different purposes; some are designed to evaluate end-
to-end performance while others focus on evaluating performance of a particular aspect of a BizTalk
Server solution.
Important
As of the writing of this guide, the test cases generated by BizUnit Designer 1.x are only
compatible with BizUnit 2.x.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about the
suitability of this program. Use of this program is entirely at your own risk.
SQLIO
The SQLIO tool was developed by Microsoft to evaluate the I/O capacity of a given configuration. As
the name of the tool implies, SQLIO is a valuable tool for measuring the impact of file system I/O on
SQL Server performance. SQLIO can be downloaded from http://go.microsoft.com/fwlink/?LinkId=115176.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about the
suitability of this program. Use of this program is entirely at your own risk.
Message load
It’s important to consider from the outset how you will replicate the message load that will actually
flow through the production system. For instance, if in production 20% of the messages will be under
20 KB in size, 50% under 100 KB, and the remaining 30% up to 1 MB, it is important that this
distribution is replicated in the lab.
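One way to replicate such a distribution in the lab is to sample message sizes from the stated profile (a minimal Python sketch; the bucket boundaries come from the example percentages above, and the 1 KB lower bound is an assumption):

```python
import random

# Hypothetical production profile from the text: 20% of messages < 20 KB,
# 50% < 100 KB, 30% up to 1 MB. Sizes are in bytes.
PROFILE = [
    (0.20, 1 * 1024,   20 * 1024),    # small messages
    (0.50, 20 * 1024,  100 * 1024),   # medium messages
    (0.30, 100 * 1024, 1024 * 1024),  # large messages
]


def sample_message_size(rng=random):
    """Draw one message size (in bytes) matching the production profile."""
    r = rng.random()
    cumulative = 0.0
    for weight, low, high in PROFILE:
        cumulative += weight
        if r < cumulative:
            return rng.randint(low, high)
    return rng.randint(*PROFILE[-1][1:])  # guard against float rounding


sizes = [sample_message_size() for _ in range(10_000)]
small = sum(s < 20 * 1024 for s in sizes) / len(sizes)
print(f"fraction under 20 KB: {small:.2f}")   # approximately 0.20
```

A load generator fed from such a sampler reproduces the size mix, not just the message count.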
Note
In the examples above, X is a placeholder for the unit that is the focus of the performance
assessment. X could represent orchestrations, messages, or other performance metrics that
are relevant to the BizTalk solution.
Note
For more information on applying this optimization, see Optimizing Filegroups for the
Databases.
Ability to identify performance problems using SQL Server 2005 Performance Dashboard
Reports.
Note
SQL Server 2005 Performance Dashboard Reports is available for download at
http://go.microsoft.com/fwlink/?LinkId=118673.
For each specialist technology involved in the performance assessment, a list of requirements
should be defined, as is done above for SQL Server. This ensures the specialist obtained has clear
expectations of what will be required of them. Another technology that frequently requires
specialized knowledge during the performance assessment is IBM WebSphere MQ. The list below
illustrates the requirements specification for an IBM WebSphere MQ product specialist:
Experience in the monitoring, maintenance, and customization of MQSeries.
Experience with installation and migration of new versions of MQSeries.
Ability to analyze and tune MQSeries performance.
Perform MQSeries problem analysis.
Knowledge of the processes and procedures related to MQSeries security, administration,
recovery and automation.
Documentation lead - Continuously updating the lab documentation throughout the performance
assessment is vitally important. The overall success of a lab engagement should not be judged
until the optimizations have been successfully applied in the production environment and the
system has attained the desired level of performance. To make this possible, it is essential that a
detailed record of the following information is kept:
High-level results summary of the lab
Unresolved issues
Resolved issues
Timeline for the lab
Lab progress
Implemented optimizations by category
Implemented optimizations in chronological order (to ensure that they can be applied in the
same order in the production system)
High-level architectural diagram
Detail of the scenarios to be tested
Third party technologies involved
Lab hardware diagram
Contact list
Detailed hardware inventory
Appendix with full detailed results
Developer - Whether a developer is required depends very much on the scope of the engagement. If
the code base has been locked down and the lab exists only to test infrastructure and platform
optimizations, then the services of a developer should not be required. This type of scenario
can occur when performance testing is performed just before the “go-live” date of the production
system. By this time, the code should have been locked down and full regression testing should be
complete or in progress. Any changes to the code introduced in the lab could introduce a regression
and, therefore, risk to the production system. If code changes are permitted, then typically
a developer will be required. The list below represents the skills typically required by a developer
that is engaged in a BizTalk Server performance assessment:
Ability to identify and fix performance issues with orchestrations
Ability to identify and fix performance issues with pipelines
Familiarity with .NET including:
Enterprise library
Visual Studio F1 profiler expertise
Visual Studio 2005 Team System
Test lead - Ensuring that an accurate, complete, and thorough set of results is obtained is critical. The
test lead is responsible for ensuring the required information is captured, analyzed appropriately, and
distributed appropriately after every test run. It is also important to consider how to capture the data;
typically, test data is suitable for presentation in an Excel spreadsheet. The following list
illustrates the data that should be captured for each test that is run during the lab:
Test run number
Date
Total messages processed
Messages processed per second
Time started
Time stopped
Test duration in minutes
Suspended messages / Total messages processed – This can be captured either from the
BizTalk Administration console or by measuring the BizTalk Server performance counters using
Performance Monitor.
# of messages that have failed processing
# of messages that have been successfully processed
Test client responses
Client process duration average, measured in seconds - Typically this value is measured when
implementing a synchronous messaging solution with BizTalk Server; in this case, it is important
to know the average client duration, as it typically represents how long an end user waits for a
response from the solution.
Client process duration maximum value, measured in seconds
Client process duration minimum value, measured in seconds
BizTalk Server request/response latency, measured in seconds
Orchestrations completed per second
% of messages processed on time
Value of TestResultsLocation variable used by Visual Studio Team System testing tools.
Value of TestResultsLocation variable used by BizUnit.
Any comments or recommendations
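Several of the derived metrics above can be computed directly from the raw capture (a minimal Python sketch; the field names and sample values are hypothetical):

```python
from datetime import datetime

# Hypothetical test-run record; the fields mirror the capture list above.
run = {
    "test_run_number": 7,
    "time_started": datetime(2008, 5, 1, 9, 0, 0),
    "time_stopped": datetime(2008, 5, 1, 10, 30, 0),
    "total_messages_processed": 540_000,
    "suspended_messages": 54,
}

# Derived metrics: duration, throughput, and suspended-message ratio.
duration_minutes = (run["time_stopped"] - run["time_started"]).total_seconds() / 60
messages_per_second = run["total_messages_processed"] / (duration_minutes * 60)
suspended_ratio = run["suspended_messages"] / run["total_messages_processed"]

print(f"duration: {duration_minutes:.0f} min")          # 90 min
print(f"throughput: {messages_per_second:.0f} msg/s")   # 100 msg/s
print(f"suspended ratio: {suspended_ratio:.4%}")
```

Capturing raw times and counts, then deriving the rates, keeps spreadsheet formulas consistent across test runs.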
As well as collecting the results, the test lead should monitor each test run for trends. Performance
improvements should be communicated to the rest of the team, indicating how much performance
improved and which optimization was applied to achieve the improvement. At the end of each day, the
test lead should provide a summary of the tests that have been performed in the lab. This keeps the
stakeholders informed of the continued progress of the lab. The table below illustrates how this
information might be put together in a sample update e-mail:
Test results summary
Status | Throughput | Average Latency | % < 2 seconds | # of Test Runs | # of BizTalk Server Computers | # of Messages | Average Message Size | Duration
Note
For more information about implementing an automated build process, see Automating the
Build Process.
MSBuild was introduced with the .NET Framework 2.0 to enable developers to automate tasks such
as those described above. Several BizTalk Server-specific MSBuild tasks are included with the SDC
Tasks library, which is available for download from http://go.microsoft.com/fwlink/?LinkId=119288.
Test data to be used during the performance assessment – The test data that is used has a
considerable influence on the overall effectiveness and success of a performance assessment.
Consider the scenario where a BizTalk Server application utilizes messaging, orchestration and the
Rules Engine. The Rules Engine is called from within a pipeline component on the receive side to
determine which orchestration will be used to process the message; and it is also called from within
the orchestration at various points to determine flow. The Rules Engine implements caching so that
rules policy execution is optimized. Therefore, if a single message is used as test data during the
performance assessment, test results may be skewed (due to caching) and you may obtain results
that can’t be replicated in production.
Ideally the test data used during the performance assessment should be actual production data or a
subset of production data. The test data should also simulate the load and pattern of traffic that will
be flowing through the production system. Consider the following factors when defining
requirements for test data:
Number of messages that will be flowing through the system in a given period - This is
normally expressed as messages per second or messages per hour.
Types of messages – Are the messages flat file, XML, or binary?
Distribution of messages – What percentage will be flat file, what percentage of each XML
message type will be used?
Peak load processing requirements - In many scenarios a large interchange may be
processed during non-peak hours. For example a large batch of payments may be posted to a
bank’s systems at midnight. If this is the case, ensure you are able to replicate this during
testing.
Endpoints used to receive/send messages - In many environments separate receive
locations are configured to receive different types of messages. For example, flat file messages
may be received using the File adapter, while XML messages may be received using the MQSeries
adapter. Although the messages may all be processed by the same orchestration(s), they will have
different entry points into the system. Each of these receive locations can be hosted in
a separate host; in fact, doing so will often improve the overall performance of the system.
The table below provides an example of the information that should be captured when determining
test data specifications:
Putting the information into a table accomplishes several things. First, it makes it easier for
stakeholders to agree on assumptions made about the test data. Second, it provides you with
information that can be used to decide on potential optimizations for the performance assessment.
For example, in the table above you can see that all the receive locations used to process all
different data types are hosted within the ReceiveHost BizTalk host. This means that each instance
of this host will be responsible for processing different types and sizes of data (e.g. XML and binary
non-compressible PDF data). Given that each host instance is a single instance of the BizTalk
Server process (BTSNTSVC.EXE), this could become a processing bottleneck. Therefore in this
scenario one immediately obvious optimization for the environment would be to test the
performance improvement of separating each receive location into its own individual host. Having
access to the test data information in a summary tabular format makes it easier to gauge simple
optimizations such as this.
Planning for automated load tests and load generation - After the test data profile for the
performance assessment is established, it is important to consider how to perform load testing
within the environment. The BizTalk Server product group has developed the BizTalk LoadGen
2007 tool (LoadGen) to quickly, easily and reliably define load tests that simulate production level
message volumes. LoadGen is multi-threaded, configuration driven and supports multiple
transports. Because LoadGen was designed by and is used by the BizTalk Server product group it
should be the tool of choice for simulating load during the performance assessment. For more
information about automating load testing using LoadGen, see Automating Performance and
Stability Testing.
Note
For more information about how to create a timeline using Visio 2007, see the article “Create
project timelines in Visio 2007” at http://go.microsoft.com/fwlink/?LinkId=119395.
The table below summarizes the deliverables and other details associated with the milestones in the
figure above.
BizTalk Host design and descriptions of each host and their instances.
Description of each orchestration.
Description of each pipeline.
Description of custom components such as .NET assemblies and COM+ components.
Detailed architecture diagram
The diagram below illustrates an architecture diagram that could be used for a performance
assessment.
The following details should be considered when creating the message flow diagram(s):
Describe the lifecycle of each type of message from the time it arrives at a receive location until all
resulting messages are sent and all related processing is completed.
Describe how processing changes for error conditions.
Include details about correlation, delivery notifications, and acknowledgements.
Include details about dependence on external systems.
Include performance requirement information regarding latency and throughput.
In This Section
Best Practices for Avoiding Bottlenecks
Investigating Bottlenecks
System Level Bottlenecks
Bottlenecks in the BizTalk Server Tier
Bottlenecks in the Database Tier
See Also
Finding and Eliminating Bottlenecks
Investigating Bottlenecks
This topic describes a recommended process for investigating bottlenecks.
Testing consistency
Measure performance characteristics after changing settings to validate the effect of each change.
Hardware - Use consistent hardware because varying the hardware can cause inconsistent
behavior and produce misleading results. For example, you would not use a laptop to test
performance of a BizTalk solution.
Test Run Duration - Measure performance for a fixed minimum period to ensure that the results
are sustainable. Running tests for longer periods also ensures the system has gone through the
initial warm/ramp up period where all caches are populated, database tables have reached
expected counts, and throttling is given sufficient time to regulate throughput once predefined
thresholds are hit. This approach will help discover optimal sustainable throughput.
Test Parameters – Do not vary test parameters from test run to test run. For example, varying map
complexity and/or document sizes can produce different throughput and latency results.
Clean State - After a test is complete, clean up all state before running the next test. For example,
historical data can build up in the database, impacting runtime throughput. Recycling the service
instances helps release cached resources like memory, database connections, and threads. In
your test environment, you may want to run the BizTalk Message Box Clean script as described in
the article “BizTalk Server 2006: Managing a Successful Performance Lab” (available at
http://go.microsoft.com/fwlink/?LinkID=118365). This script is intended to return your BizTalk Server
test environment to a fresh state with regard to the MessageBox between runs. The script deletes
all running instances and all information about those instances, including state, messages, and
subscriptions, but leaves all activation subscriptions so you do not have to re-enlist your
orchestrations or send ports. Note that this tool is not supported on production systems.
Performance Testing and Tuning - The goal of this test category is to maximize performance and
throughput of your application and find the Maximum Sustainable Throughput (MST) of your system
(see “Planning for Sustained Performance” at http://go.microsoft.com/fwlink/?LinkID=104146 and
“What Is Sustainable Performance?” at http://go.microsoft.com/fwlink/?LinkID=106928).
The MST is the highest load of message traffic that a system can handle indefinitely in a production
environment. All BizTalk applications should be tested for performance and throughput before going
into production. At a minimum, you should run a representative set of test cases that represent the
most common usage scenarios. We recommend that you test against expected loads and peak
loads in a separate environment that matches characteristics of the production environment. This
environment should have all of the corporate standard services installed and running, such as
monitoring agents, and antivirus software.
We also recommend that you test new BizTalk applications on the same hardware in production
alongside the other BizTalk applications that are running. These other BizTalk applications put
additional load on the BizTalk Server, SQL Server, network I/O, and disk I/O. In addition, one
BizTalk application could cause another to throttle (when the spool depth gets too large, for
example). All BizTalk applications should be performance / stress tested before going into
production. In addition, you should determine how long it takes the system to recover from peak
loads. If the system does not fully recover from a peak load before the next peak load occurs, the
system will fall progressively further and further behind and will never be able to fully recover.
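The recovery-time reasoning can be made concrete with a little arithmetic (a hedged Python sketch; all rates and durations here are hypothetical examples):

```python
def recovery_minutes(peak_rate, peak_minutes, mst_rate, steady_rate):
    """Minutes needed to drain the backlog built up during a peak.

    All rates are in messages per minute. Assumes the system processes
    at its maximum sustainable throughput (MST) while draining, and that
    steady-state traffic continues to arrive during recovery.
    """
    if peak_rate <= mst_rate:
        return 0.0  # no backlog accumulates
    backlog = (peak_rate - mst_rate) * peak_minutes
    drain_rate = mst_rate - steady_rate
    if drain_rate <= 0:
        return float("inf")  # the system can never catch up
    return backlog / drain_rate


# A 30-minute peak at 500 msg/min against an MST of 400 msg/min builds a
# 3,000-message backlog; with 300 msg/min of steady load still arriving,
# draining it takes 30 minutes. If peaks arrive more often than that,
# the backlog grows without bound.
print(recovery_minutes(500, 30, 400, 300))  # 30.0
```

Comparing the computed recovery time to the interval between peaks tells you whether the system can ever catch up.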
4. Right-click this key and select Export to export the registry key to a file.
Collect the BizTalk configuration files - On each BizTalk Server node, collect the
BTSNTSVC.exe.config file located in the BizTalk Server installation folder (e.g. C:\Program
Files\Microsoft BizTalk Server 2006).
Collect the .NET configuration files - On each BizTalk Server node, collect the machine.config and
web.config .NET Framework 2.0 configuration files. You can find the configuration files in the
%windir%\Microsoft.NET\Framework\v2.0.50727\CONFIG folder.
Use the MsgBoxViewer tool to collect information about the Messagebox database - Follow these
steps to collect information about the MessageBox database using the MsgBoxViewer tool:
1. Download the MsgBoxViewer tool from http://go.microsoft.com/fwlink/?LinkID=117289.
2. Log on to the BizTalk Server computer with a user account that is part of the BizTalk Server
Administrator security group.
3. Copy the MsgBoxViewer.exe to the BizTalk Server computer.
4. Launch the tool.
5. Click the Optional Info to Collect tab, and then click Select All Info.
6. Click Start to Collect.
7. When the Status label shows the End Collection message, switch to the folder containing the
MsgBoxViewer.exe executable and copy the resulting report (.htm) and log files.
Use the BizTalk Server 2006 Documenter tool to save BizTalk configuration information – Follow
these steps to use the BizTalk Server 2006 Documenter tool to save BizTalk Server configuration
information:
1. Download the BizTalk Server 2006 Documenter tool from http://go.microsoft.com/fwlink/?LinkID=117291.
2. Using a user account that is part of the BizTalk Server Administrator security group, log on to a
BizTalk node.
3. Install the tool.
4. Launch the BizTalk Server 2006 Documenter tool.
5. Choose the Document All Configuration option.
6. Start collecting data.
7. Save the documents produced by the tool.
Run the Windows Server 2003 Performance Advisor tool - Follow these steps to run the Windows
Server 2003 Performance Advisor tool:
1. Download the Windows Server 2003 Performance Advisor tool from http://go.microsoft.com/fwlink/?LinkID=117292.
2. Using a domain or local administrator account, log on to each BizTalk Server and SQL Server
computer.
3. Install the Windows Server 2003 Performance Advisor tool.
4. Run the Windows Server 2003 Performance Advisor tool.
5. Press the green button under the File menu to start the system overview.
6. When the tool finishes collecting data, switch to the folder C:\PerfLogs\Report\System
Overview\Current and save the data. Note that .blg files can be quite large.
7. You can further analyze performance data collected by the Performance Advisor tool using the
Performance Analysis of Logs (PAL) tool available at http://go.microsoft.com/fwlink/?LinkID=98098.
Collect and store the source code for all components used in the solution - Store the source code
for all components (for example Orchestration, Custom Pipeline Component, and Helper components
code) on a separate file share.
Initial troubleshooting
There are certain components of a BizTalk Server solution which, if not enabled, will cause performance
problems regardless of the overall size or design of the BizTalk solution. The following preliminary
troubleshooting tasks should be completed to rule out some of the “usual suspects” before engaging in
an exhaustive bottleneck analysis of a BizTalk solution.
Verify the Tracking host instance is running - The Tracking host instance is responsible for
moving both BAM and HAT data from the TrackingData table of the MessageBox database to the
BizTalkDTADb and/or BAMPrimaryImport database tables. If the tracking host instance is not
running, then tracking data will accumulate in the MessageBox database and negatively impact
performance of the BizTalk Server solution.
Verify the Enterprise Single Sign-On (ENTSSO) service is running on all BizTalk Server
computers - BizTalk host instances maintain a dependency on a locally running instance of the
ENTSSO service. If the ENTSSO service is not running on the BizTalk Server, then host instances
on the server will not be able to run either.
Verify the SQL Server Agent service is running on all SQL Server computers - The SQL
Server Agent service must be running in order for the BizTalk SQL Server Agent jobs to execute.
These jobs perform important functions to keep your servers operational and healthy.
Verify the BizTalk SQL Server Agent jobs are enabled and running without exceptions - Even
if the SQL Server Agent service is running, it is imperative that all of the default BizTalk SQL Server
Agent jobs are enabled and running successfully.
Check the BizTalk Server and SQL Server event logs - A cursory examination of the BizTalk
Server or SQL Server event logs may reveal a problem that could otherwise take a significant
amount of time to diagnose and resolve.
Run the BizTalk Server 2006 Best Practices Analyzer - The BizTalk Server 2006 Best Practices
Analyzer examines a BizTalk Server 2006 deployment and generates a list of issues pertaining to
best practices standards. The tool performs configuration-level verification by gathering data from
different information sources, such as Windows Management Instrumentation (WMI) classes, SQL
Server databases, and registry entries. The data is then used to evaluate the deployment
configuration. The tool only reads and reports; it does not modify any system settings, and it is not
a self-tuning tool. You can download the BizTalk Server 2006 Best Practices Analyzer at
http://go.microsoft.com/fwlink/?LinkID=86622.
Disk memory cache and available physical memory
Because memory is cached to disk as physical memory becomes limited, make sure that you have a
sufficient amount of memory available. When memory is scarce, more pages are written to disk,
resulting in increased disk activity. Also, make sure to set the paging file to an appropriate size.
Additional disk memory cache will help offset peaks in disk I/O requests. However, it should be noted
that a large disk memory cache seldom solves the problem of not having enough spindles, and having
enough spindles can negate the need for a large disk memory cache. For information on configuring
the Windows paging file for optimal performance, see the section “Configure the Windows PAGEFILE
for optimal performance” in the topic Optimizing Operating System Performance.
Storage controller type
If you have battery-backed cache, enable write
caching to improve disk write performance on
the transaction log file volumes and on the
database volumes. Write caching provides a
response time of 2 ms for a write I/O request,
as opposed to a response time of 10 to 20 ms
without write caching enabled. Enabling write
caching greatly improves the responsiveness
for client write requests. Read caching does not
improve performance in a BizTalk Server
scenario because it is only useful for sequential
disk reads, which occur only in transaction log
files.
Transaction log files are read from only when
they are being played back, such as after a
database restore or when a server is not
properly shut down. Larger caches allow for
more data to be buffered, meaning that longer
periods of saturation can be accommodated. If
your controller allows you to configure the
cache page size, you should set it to 4 KB. A
larger size, such as 8 KB, results in wasted
cache because a 4 KB I/O request takes up the
entire cache page of 8 KB, thereby cutting your
usable cache in half.
Spindles
Spindles are more important than capacity, and
BizTalk Server performance is improved if the
spindles support a high number of random I/O
requests. Plan for no more than 80 percent
total utilization to ensure sufficient I/O is
available, even in cases of a spindle failure.
RAID
The RAID solution you use should be based on the cost and performance trade-offs that are
appropriate for your environment. Therefore, more than one type of RAID solution may be
recommended for a particular data storage requirement. General recommendations are as follows:
Use RAID 1+0 (striped sets in a mirrored set) for the BizTalk Server databases.
Use RAID 1 (mirrored set without parity) for the transaction log file volumes.
In general, RAID 5 (striped set with distributed parity) is not recommended, as it does not
provide optimal reliability, availability, and performance compared to other RAID configurations.
Due to the possibility of permanent data loss, a RAID 0 (striped set without parity)
configuration should never be used in a BizTalk Server production environment.
Bus type: Higher throughput provides better performance. In general, SCSI buses provide better throughput and scalability than IDE or ATA buses. You can use the following equation to determine the theoretical throughput limit for your bus type:
(Bus speed (in bits) / 8 bits per byte) × Operating speed (in MHz) = Throughput (in MB/s)
You can also improve disk I/O performance by
placing multiple drives on separate I/O buses.
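The throughput equation above can be expressed as a small helper. The example figures in the comment (a 16-bit bus at an effective 160 MHz) are illustrative assumptions, not vendor specifications:

```python
def bus_throughput_mb_per_s(bus_width_bits: int, speed_mhz: float) -> float:
    """Theoretical limit: (bus width in bits / 8 bits per byte) * operating speed in MHz."""
    return (bus_width_bits / 8) * speed_mhz

# Illustrative example (assumed figures): a 16-bit bus clocked at an
# effective 160 MHz yields a theoretical ceiling of 320 MB/s.
print(bus_throughput_mb_per_s(16, 160))  # 320.0
```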
Performance counters for measuring disk I/O bottlenecks
Note
When analyzing disk performance bottlenecks, you should generally use physical disk counters; however, if you use software RAID, use logical disk counters instead. The Logical Disk and Physical Disk counter objects expose the same set of counters; logical disk data is tracked by the volume manager (or managers), and physical disk data is tracked by the partition manager.
The following performance counters should be used to determine if your system is experiencing a disk
I/O related bottleneck:
PhysicalDisk\Avg. Disk Queue Length
Threshold: Should not be higher than the number of spindles plus two.
Significance: This counter indicates the average number of read and write requests that
were queued for the selected disk during the sample interval. This counter is useful for
gathering concurrency data, including data bursts and peak loads. These values represent the
number of requests in flight below the driver taking the statistics. This means the requests are
not necessarily queued but could actually be in service or completed and on the way back up
the path. Possible in-flight locations include the following:
SCSIport or Storport queue
OEM driver queue
Disk controller queue
Hard disk queue
Actively receiving from a hard disk
PhysicalDisk\Avg. Disk Read Queue Length
Threshold: Should be less than two.
Significance: This counter indicates the average number of read requests that were queued for the
selected disk during the sample interval.
PhysicalDisk\Avg. Disk Write Queue Length
Threshold: Should be less than two.
Significance: This counter indicates the average number of write requests that were queued for
the selected disk during the sample interval.
PhysicalDisk\Avg. Disk sec/Read
Threshold: No specific value.
Significance: This counter indicates the average time, in seconds, of a data read operation from
the disk.
PhysicalDisk\Avg. Disk sec/Transfer
Threshold: Should not be more than 18 milliseconds.
Significance: This counter indicates the time, in seconds, of the average disk transfer. This may
indicate a large amount of disk fragmentation, slow disks, or disk failures. Multiply the values of
the Physical Disk\Avg. Disk sec/Transfer and Memory\Pages/sec counters. If the product of
these counters exceeds 0.1, paging is taking more than 10 percent of disk access time, so you
need more physical memory available.
PhysicalDisk\Disk Writes/sec
Threshold: Depends on manufacturer’s specifications.
Significance: This counter indicates the rate of write operations on the disk.
Processor\% DPC Time, % Interrupt Time and % Privileged Time - If Interrupt Time and
Deferred Procedure Call (DPC) time are a large portion of Privileged Time, the kernel is spending a
significant amount of time processing I/O requests. In some cases, performance can be improved by configuring interrupt and DPC affinity to a small number of CPUs on a multiprocessor system,
which improves cache locality. In other cases, it works best to distribute the interrupts and DPCs
among many CPUs, so as to keep the interrupt and DPC activity from becoming a bottleneck. For
information on using the Interrupt Filter Configuration tool to bind network adapter interrupts to
specific processors on multiprocessor computers, see the section “Use the Interrupt Filter
Configuration tool to bind network adapter interrupts to specific processors on multiprocessor
computers” in Optimizing Operating System Performance.
Processor\DPCs Queued / sec - Measures how DPCs are consuming CPU time and kernel
resources.
Processor\Interrupts / sec - Another measurement of how interrupts are consuming CPU time and
kernel resources. Modern disk controllers often combine or coalesce interrupts so that a single
interrupt results in the processing of multiple I/O completions. Of course, there is a trade-off
between delaying interrupts (and thus completions) and economizing CPU processing time.
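The disk counter thresholds described above can be folded into a simple triage helper. This is a sketch: the function name and sample values are invented for illustration, while the thresholds themselves come directly from the counter descriptions (queue length no higher than spindles plus two, Avg. Disk sec/Transfer at most 18 ms, and the sec/Transfer × Pages/sec product below 0.1):

```python
def disk_triage(avg_queue_len, spindles, sec_per_transfer, pages_per_sec):
    """Return a list of warnings based on the disk counter thresholds above."""
    warnings = []
    if avg_queue_len > spindles + 2:
        warnings.append("queue length exceeds spindles + 2: possible disk bottleneck")
    if sec_per_transfer > 0.018:  # 18 milliseconds
        warnings.append("slow transfers: fragmentation, slow disks, or disk failures")
    if sec_per_transfer * pages_per_sec > 0.1:
        warnings.append("paging takes >10% of disk access time: add physical memory")
    return warnings

# Healthy sample (assumed values): no warnings expected.
print(disk_triage(avg_queue_len=3, spindles=4, sec_per_transfer=0.005, pages_per_sec=10))
```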
Caution
Some applications, such as incremental backup utilities, rely on the NTFS update
information and will not function correctly without it.
Reserve appropriate space for the master file table - Add the NtfsMftZoneReservation entry to
the registry depending on the number of files that are typically stored on your NTFS volumes. When
you add this entry to the registry, the system reserves space on the volume for the master file table.
Reserving space in this manner allows the master file table to grow optimally. If your NTFS volumes
generally store relatively few files, set the value of this registry entry to the default value of 1.
Typically you can use a value of 2 or 3 if your NTFS volumes store a moderate number of files,
and use a value of 4 (the maximum) if your NTFS volumes tend to contain a relatively large number
of files. However, make sure to test any settings greater than 2, because these greater values
cause the system to reserve a much larger portion of the disk for the master file table. For more
information about adding the NtfsMftZoneReservation to the registry, see the section “Increase
space available for the master file table” in Optimizing Operating System Performance.
Use the most efficient disk systems available - In addition to the physical disk that is used,
consider the type of disk controller and cabling that will be used. An efficient disk subsystem should
also provide drivers that support interrupt moderation or interrupt avoidance to mitigate processor
interrupt activity caused by disk I/O.
Ensure that you are using the appropriate RAID configuration - Use RAID 10 (striping and
mirroring) for optimal performance and fault tolerance. The tradeoff is that using RAID 10 is
expensive. Avoid using RAID 5 when you have extensive write operations. For more information about implementing RAID in a BizTalk Server environment, see the section "Disk Infrastructure" in the "BizTalk Server Database Optimization" white paper at http://go.microsoft.com/fwlink/?LinkID=101578.
Consider using database partitions - If you have a database bottleneck, consider using database
partitions and mapping disks to specific tables and transaction logs. The primary purpose of
partitions is to overcome disk bottlenecks for large tables. If you have a table with a large number of rows and you determine that it is the source of a bottleneck, consider using partitions. For SQL Server, you can use filegroups to improve I/O performance. You can associate tables with filegroups, and then associate the filegroups with a specific hard disk. For more information about optimizing filegroups for the BizTalk Server databases, see Optimizing Filegroups for the Databases.
Consider adding physical memory, if you have excessive page faults - A high value for the
Memory: Pages/sec performance counter could indicate excessive paging, which will increase disk I/O. If this occurs, consider adding physical memory to reduce disk I/O and increase performance.
Consider using a disk with a higher RPM rating or using a Storage Area Network (SAN)
device - Disks with higher RPM ratings offer improved performance compared to disks with lower
RPM ratings. SAN devices typically offer top tier performance but at a price premium.
Follow the recommendations in Optimizing Database Performance. This topic provides several
recommendations for optimizing database performance both before and after configuring BizTalk
Server.
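The NtfsMftZoneReservation guidance above can be sketched as a small helper that picks a reservation level. The registry value range (1 through 4) comes from the text; the file-count cutoffs below are illustrative assumptions, not documented boundaries, and the registry path shown in the comment is the standard NTFS filesystem key:

```python
# Registry location (HKLM hive): SYSTEM\CurrentControlSet\Control\FileSystem,
# value name NtfsMftZoneReservation.

def recommend_mft_zone(file_count: int) -> int:
    """Map an approximate per-volume file count to a reservation level (1-4).
    The cutoffs are assumptions for illustration; per the guidance above,
    test any value greater than 2, since higher values reserve a much
    larger portion of the disk for the master file table."""
    if file_count < 100_000:
        return 1   # default: relatively few files
    if file_count < 1_000_000:
        return 2   # moderate number of files
    if file_count < 10_000_000:
        return 3
    return 4       # maximum: relatively large number of files
```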
CPU bottlenecks
Each application that runs on a server gets a time slice of the CPU. The CPU might be able to efficiently
handle all of the processes running on the computer, or it might be overloaded. By examining processor
activity and the activity of individual processes including thread creation, thread switching and context
switching, you can gain insight into processor workload and performance.
Note
Although you can change the process priority level of an application by using Task Manager or
from a command prompt, you should generally avoid doing so.
Memory bottlenecks
When evaluating memory-related bottlenecks, consider unnecessary allocations, inefficient clean up,
and inappropriate caching and state management mechanisms. To resolve memory-related
bottlenecks, optimize your code to eliminate these issues and then tune the amount of memory
allocated to your application(s). If you determine during tuning that memory contention and excessive
paging are occurring, you may need to add additional physical memory to the server. Low memory
leads to increased paging of an application's virtual address space to and from disk. If paging becomes
excessive, disk I/O will increase and negatively impact overall system performance.
Note
On a 32-bit system, a BizTalk process can use a maximum of 2 GB of memory; with BizTalk Server 2006, the limit increases to 3 GB if the /3GB switch is used. For more information on memory usage, see "Memory Limits for Windows Releases" at http://go.microsoft.com/fwlink/?LinkId=118349.
Processing bottlenecks
If the Host Queue – Length performance counter is climbing, the orchestrations are not completing fast enough. This could be due to memory contention or CPU saturation. For more information, see the Perfmon counter table in this topic.
If the orchestration servers are the bottleneck, use Perfmon to identify the source.
If the server is CPU bound, consider the following:
If the workflow is complex, consider splitting the orchestration into multiple smaller orchestrations.
Note
Splitting an orchestration into multiple workflows can cause additional latency and add
complexity. Multiple workflows can also cause an increase in the number of messages
published to and consumed from the BizTalkMsgBoxDb, putting additional pressure on the
database.
If you use complex maps, consider whether they can be moved to the Receive/Send ports. Be sure
to verify which ports have additional bandwidth.
Consider scaling up the hardware or scaling out by configuring an additional processing server.
Transmitting bottlenecks
If the server hosting the send adapters is saturated on resources (for example, disk, memory, or CPU),
consider scaling up the server or scaling out to additional send host servers. The sending tier can become the bottleneck if the destination (external to BizTalk) is unable to receive data fast enough, which causes messages to build up in the MessageBox database (Application SendHostQ).
If all the endpoints are within the scope of the topology, isolate the cause at the destination. For
example, determine if the HTTP/SOAP location is optimally configured to receive load. If not, consider
scaling out. Also determine if the destination is growing due to excessive output messages delivered by
BizTalk. If yes, you might need a maintenance plan to archive and purge the destination messages.
Large numbers of files in a destination folder can severely impact the ability of the BizTalk service to
commit data to the disk drive.
Tracking bottlenecks
The Tracking host instance is responsible for moving the Business Activity Monitoring (BAM) and Health
and Activity Tracking (HAT) data from the MessageBox database (TrackingData table) to the BizTalk
Tracking and/or BAM Primary Import database tables. If multiple MessageBox databases are
configured, the tracking host instance uses four threads per MessageBox database.
It is possible that the Tracking host instance is CPU bound. If it is, consider scaling up the server or scaling out by configuring an additional server with Host Tracking enabled. The multiple host instances
will automatically balance load for the multiple MessageBox databases configured. For more
information about scaling, see the topic “Scaling Your Solutions” in the BizTalk Server 2006 R2
documentation at http://go.microsoft.com/fwlink/?LinkID=107575.
If the TrackingData table in the MessageBox database grows large, it is usually because the data maintenance jobs on the BizTalk Tracking and/or BAM Primary Import databases are not running as configured, causing those databases to grow. Once these databases grow too large, the ability of the Tracking host to insert data into the TrackingData table degrades, and tracked data backs up in the MessageBox database tables. Growth of the TrackingData table in turn causes throttling to start.
You should only enable the minimum tracking required for your application, as this will reduce the
amount of data logged and lower the risk of tracking bottlenecks. For information on disabling tracking
settings for individual items such as orchestrations and send/receive ports, see
http://go.microsoft.com/fwlink/?LinkId=118343.
Other
Configure the deployment topology such that different functionality runs in dedicated isolated host
instances. This way each host instance gets its own set of resources (for example, on a 32-bit system,
2GB virtual memory address space, handles, threads). If the server has sufficient CPU headroom
and memory to host multiple host instances, they can be configured to run on the same physical
computer. If not, consider scaling out by moving the functionality to dedicated servers. Running the
same functionality on multiple servers also serves to provide a highly available configuration.
CPU contention
If the processor is saturated, you can fragment the application by separating the receiving from the
sending and orchestration. To do this, create separate hosts, map the hosts to specific functionality
(receive/send/orchestrations/tracking) and add dedicated servers to these separate hosts.
Orchestration functionality is often CPU-intensive. If you configure the system so the orchestrations
execute on a separate dedicated server, this can help improve overall system throughput.
If multiple orchestrations are deployed, you can enlist them to different dedicated orchestration hosts.
Mapping different physical servers to the dedicated orchestration hosts ensures that the different
orchestrations are isolated and do not contend for shared resources either in the same physical
address space or on the same server.
Stop unused host instances. Unused host instances can compete for CPU and memory resources by
regularly checking the MessageBox for messages to process. Additionally, stop unused receive
locations, send ports, and orchestrations.
Spool table growth
Downstream bottlenecks and/or resource contention can cause the spool to start growing excessively
and reduce overall performance. For more information, see Spool Table Growth in How to Identify
Bottlenecks in the MessageBox Database.
Memory starvation
High throughput scenarios place increased demand on system memory. Because a 32-bit process is limited in the amount of memory it can consume, it is recommended to separate the receive/process/send functionality into separate host instances so that each host receives its own 2GB address space. In addition, if multiple host instances are running on the same physical server, you can upgrade to 4 or 8 GB of memory to avoid swapping data from real memory to disk. Long-running
orchestrations can hold onto allocated memory longer. This can cause memory bloat and throttling to
start. Large messages can also cause high memory consumption.
You can ease the memory bloat problem that occurs when large messages are processed by lowering
the Internal Message Queue Size and In-process Messages per CPU values for the specific host.
Note
If latency is a concern, changes to Internal Message Queue Size and In-process Messages
per CPU should be made with caution as this may increase end-to-end latency of the system.
Disk contention
If the disks are saturated (for example, with a large number of FILE/MSMQ transports), consider
upgrading to multiple spindles and striping the disks with RAID 10. In addition, whenever using the FILE
transport, it is important to ensure that the receive and send folders do not grow larger than 50,000
files.
The receive folder can grow large if BizTalk Server throttles incoming data into the system. It is
important to move data from the send folder so that growth in this folder does not impact the ability of
BizTalk Server to write additional data. For non-transactional MSMQ queues, it is recommended to
remotely create the receive queues so that disk contention is reduced on the BizTalk Server.
The remote non-transactional queue configuration also provides high availability as the remote server
hosting the queue can be clustered.
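The 50,000-file guidance for FILE transport folders above can be monitored with a short script. The limit constant reflects the figure stated above; the path handling is a generic sketch, and the folders you would point it at are deployment-specific:

```python
import os

FILE_LIMIT = 50_000  # guidance above: keep receive/send folders below ~50,000 files

def folder_over_limit(path: str, limit: int = FILE_LIMIT) -> bool:
    """Return True if the folder holds more files than the recommended limit."""
    with os.scandir(path) as entries:
        count = sum(1 for entry in entries if entry.is_file())
    return count > limit
```

In practice you might run such a check on a schedule against the FILE receive and send folders and alert before the limit is approached.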
Downstream bottlenecks
If the downstream system is unable to receive data fast enough from BizTalk, this output data will back
up in the BizTalk databases. This results in bloat, causes throttling to start, shrinks the receive pipe, and
impacts the overall throughput of the BizTalk system. A direct indication of this is Spool table growth.
For more information about bottlenecks and the Spool table, see How to Identify Bottlenecks in the
MessageBox Database.
Throttling impact
Throttling eventually starts in order to protect the system from reaching an unrecoverable state. You can therefore use the throttling state to verify whether the system is functioning normally and to discover the source of the problem. After you identify the cause of the bottleneck from the throttling state, analyze the other
performance counters to determine the source of the problem. For example, high contention on the
MessageBox database could be due to high CPU use caused by excessive paging to disk, which is
caused by low memory conditions. High contention on the MessageBox database could also be caused
by high lock contention due to saturated disk drives. While occasional throttling is not a significant
threat to performance, persistent throttling can indicate a more significant underlying problem. For more
information about those conditions where BizTalk Server will use throttling, see BizTalk Engine
Throttling Conditions at http://go.microsoft.com/fwlink/?LinkId=117360.
For more information about how BizTalk Server throttling can help manage the use of available
resources and minimize resource contention, see Optimizing Resource Usage Through Host Throttling
at http://go.microsoft.com/fwlink/?LinkId=117358.
BizTalk application counters
Object Instance Counter Description
Where do I start?
Monitoring the Message Delivery Throttling State and the Message Publishing Throttling State for
each host instance is a good place to start. If the value of these counters is not zero, this indicates
throttling in the BizTalk system, and you can further analyze the cause of the bottleneck. For descriptions of the other performance counters, see Bottlenecks in the Database Tier.
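A first-pass check can simply flag any host instance whose throttling-state counters are nonzero, as described above. The dictionary structure and host names below are assumptions for illustration; the counter values themselves would come from Performance Monitor:

```python
def throttled_hosts(samples):
    """Given {host: {counter_name: value}}, return the hosts whose
    Message Delivery or Message Publishing Throttling State is nonzero
    (a nonzero value indicates throttling is active)."""
    return sorted(
        host for host, counters in samples.items()
        if any(counters.get(name, 0) != 0 for name in (
            "Message Delivery Throttling State",
            "Message Publishing Throttling State"))
    )

# Hypothetical sample: one host is throttling, one is healthy.
samples = {
    "ReceiveHost": {"Message Delivery Throttling State": 0,
                    "Message Publishing Throttling State": 0},
    "OrchHost":    {"Message Delivery Throttling State": 4,
                    "Message Publishing Throttling State": 0},
}
print(throttled_hosts(samples))  # ['OrchHost']
```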
Backlog buildup
For a 1-1 deployment scenario where 1 message received results in 1 message processed and
transmitted, if the outgoing rate does not equal the incoming rate, there is a backlog in the system. In
this situation, you can monitor the Spool Size. When determining the cause of bottlenecks in outgoing
rate, run a single use-case scenario at a time. Isolate orchestrations, receive locations, and send
locations to separate hosts.
Add the BizTalk:Message Box:Host Counters to your Performance Monitor log to monitor a host. The
Host Queue - Length: counter tracks the total number of messages in a particular host queue. If one or
more of these counters continues to grow over time, give particular attention to the artifacts executed by
those hosts.
If the Spool is growing linearly, determine which Application Queue is responsible for the Spool growth.
If none of the Application Queues are growing and the Spool continues to grow, it could mean that the
purge jobs are unable to keep up. This occurs if the agent is not running or there is other system
resource contention on the SQL Server.
If one of the Application Queues is growing, diagnose the cause of this growth. Monitor the system
resources on the system that is unable to drain the specific Application Queue (for example,
Orchestration Host-Q is growing due to CPU starvation on the server). In addition, verify the values of
the throttling counter for the specific host instance.
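The decision logic above can be sketched as a small diagnostic. The growth test (the last sample exceeding the first by a margin) is a deliberate simplification of "growing linearly", and the function and queue names are invented for illustration:

```python
def is_growing(series, margin=0.1):
    """Crude growth check: last sample exceeds the first by more than margin."""
    return len(series) >= 2 and series[-1] > series[0] * (1 + margin)

def diagnose_spool(spool, app_queues):
    """Apply the logic above: Spool plus Application Queue growth points at the
    host draining that queue; Spool growth alone points at the purge jobs."""
    if not is_growing(spool):
        return "no spool growth"
    growing = [name for name, series in app_queues.items() if is_growing(series)]
    if growing:
        return "investigate hosts draining: " + ", ".join(growing)
    return "purge jobs may be falling behind (check SQL Agent and SQL Server load)"
```

For example, a Spool series that doubles while an orchestration host queue also climbs would point at that host, whereas the same Spool growth with flat queues would point at the purge jobs.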
If the BizTalk:Message Agent Message Delivery Throttling State and Message Publishing Throttling
State performance counters are not zero, check the value to confirm the reason for throttling (for
example, memory threshold exceeded, in-flight message count too high, and so on).
F1 profiler
You can use performance counters to detect the location of the bottleneck at a high level. However,
once narrowed down, you might need to examine the code more closely to help ease the problem. The
F1 Profiler that ships with Visual Studio can be a very helpful tool to help diagnose where the code is
spending most of its cycles.
Symbols help create a more meaningful stack (especially for unmanaged code). For example, the F1 Profiler can pinpoint the number of invocations and the amount of time an API call takes to return. Drilling further down the stack, it may be possible to detect the underlying cause of the high latency, such as a blocking call to a database query or a call waiting on an event.
L2/L3 cache
From a hardware perspective, you can gain significant benefits from the onboard CPU cache. A larger CPU cache increases the cache hit rate, reducing how often the system must fetch data and code from main memory.
64-Bit performance bottlenecks
Performance on 64-bit systems may appear lower than what can be achieved on 32-bit systems. This is
possible for a few reasons, the most important one being memory.
Measuring performance on a 32-bit system with 2 GB of memory and comparing the results to a similar
64-bit system with 2 GB of memory is not comparing the same thing. The 64-bit system will appear to
be disk-I/O bound (low % Disk Idle time & high Disk Queue Length) and CPU bound (max CPU and
high context switching). However, this is not because performing file I/O on a 64-bit system is more
expensive.
The 64-bit system is more memory intensive (64-bit addressing) which results in the operating system
consuming most of the 2 GB available memory. When this happens, most other operations cause
paging to disk which stresses the file subsystem. Therefore, the system spends CPU cycles paging
in/out of memory both data and code and is impacted by the high disk latency cost. This manifests itself
as both higher disk contention and higher CPU consumption.
The way to alleviate this problem is to scale up the server by upgrading the memory. Scaling up to 8 GB is ideal; however, adding more memory will not improve throughput unless the source of the problem is CPU starvation due to low memory conditions.
In This Section
How to Identify Bottlenecks in the MessageBox Database
How to Identify Bottlenecks in the Tracking Database
How to Identify Bottlenecks in the BAM Database
How to Avoid Disk Contention
Note
By default, the purge job is unable to delete data from the BizTalk Tracking database tables
until this data has been archived. If you do not need to archive the tracking data, you can
modify the job to purge the BizTalk Tracking database without archiving by following the steps
at http://go.microsoft.com/fwlink/?LinkId=118004.
See Also
Bottlenecks in the Database Tier
Note
For information about identifying if a server has a disk bottleneck, and using the Microsoft
Server Performance Advisor (SPA) tool to determine if disk bottleneck exists, see "How To:
Identify a Disk Performance Bottleneck Using the Microsoft Server Performance Advisor (SPA)
Tool" at http://go.microsoft.com/fwlink/?LinkId=115283.
To avoid disk contention, do the following:
Steps Reference
See Also
Bottlenecks in the Database Tier
Automating Testing
BizTalk Server solutions are often deployed in "mission critical" scenarios, in which the BizTalk solution is a core component of the business. In these scenarios, the continual performance and stability of the BizTalk solution is a key business requirement, because if the BizTalk solution fails, the associated downtime costs are significant. In such scenarios, it is of paramount importance that the BizTalk solution be thoroughly tested before it is placed into production. The costs associated with properly testing the solution are small compared to the downtime costs that result from not testing, or insufficiently testing, the solution. Testing is therefore arguably the most important phase of any BizTalk Server solution deployment.
This section describes the proper methodology for testing a BizTalk solution before running the solution
in a production environment.
In This Section
Why Is It Important to Test?
Automating the Build Process
Using BizUnit for Automated Testing
Why Is It Important to Test?
This topic provides an overview of the mindset that leads to insufficient testing, describes the risks
associated with failing to test BizTalk solutions, and contrasts the pitfalls of manual testing with the
benefits of automated testing.
Testing as “overhead”
Unfortunately, with ever increasing demands for return on investment (ROI), the testing phase of a
project is often seen as one of the first aspects of a project plan that can be scaled back.
One argument against testing BizTalk solutions is that "We don't need to test our solution because Microsoft already thoroughly tests BizTalk Server." While it is true that Microsoft thoroughly tests BizTalk Server, each deployment represents one of an almost countless number of permutations of business requirements for throughput, high availability, adapter usage, latency, tracking requirements, orchestration usage, and custom code. Because BizTalk Server is extremely flexible and can accommodate so many different usage scenarios, it is simply not possible to anticipate and test all of them. Furthermore, the default settings applied in a BizTalk Server environment should be tuned to suit each usage scenario. The only way to determine the optimal settings for a particular usage scenario is to test the solution, measure various parameters, tune the environment, and retest. Consider the following diagram, which depicts a sample physical architecture for a BizTalk Server solution:
When we look at the logical diagram of a message flowing through BizTalk Server, we see other variables that necessitate per-project testing, including custom send and receive pipeline components and custom classes that can be called from BizTalk orchestrations. Given that the type, complexity, and use of custom components and BizTalk components vary from project to project, it becomes even more evident why testing must be performed for each specific usage scenario.
Testing methodology and timelines
To ensure testing is performed effectively and efficiently, the test harness should be fully automated so
it is easily reproducible and minimizes the chance for human error. Additionally, adequate time should
be allotted for testing when planning the project. A minimalist approach to testing would comprise
manual steps similar to the following:
1. Manually load one or more messages into a receive endpoint, such as a file drop, or by using a SOAP client to call a Web service.
2. Manually validate the correct content and structure of a message. Because multiple schemas are
often present in a project, the messages may be a mixture of flat file and XML and may also contain
cross field dependencies.
Note
An example of this would be any project involving SWIFT messages. These are flat-file messages with cross-field dependencies (that is, one field's value depends on another), so simple XSD validation is not sufficient; the SWIFT Accelerator therefore uses the BizTalk Rules Engine to validate the messages. For more information on the SWIFT Accelerator, see http://go.microsoft.com/fwlink/?LinkID=79657.
3. Manually check the event logs on the BizTalk Server computers for errors.
4. Manually check the BAM databases (if used) to validate that activity information is recorded
correctly.
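Step 2 above, validating cross-field dependencies that plain XSD cannot express, is a natural candidate for automation with simple rule functions. The flat-file format, field names, and the rule itself below are invented for illustration; a real SWIFT solution would use the BizTalk Rules Engine as noted:

```python
def parse_flat(message: str) -> dict:
    """Parse 'tag: value' lines of a hypothetical flat-file message into a dict."""
    fields = {}
    for line in message.strip().splitlines():
        tag, _, value = line.partition(":")
        fields[tag.strip()] = value.strip()
    return fields

# Hypothetical cross-field rule: if the currency is JPY, the amount must
# carry no decimal places. (Illustrative only; not a SWIFT specification.)
def rule_jpy_whole_amount(fields: dict) -> bool:
    if fields.get("currency") == "JPY":
        return "." not in fields.get("amount", "")
    return True

msg = "currency: JPY\namount: 120000"
print(rule_jpy_whole_amount(parse_flat(msg)))  # True
```

A library of such rule functions can then be run against every output message in an automated test pass instead of inspecting messages by eye.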
Using a manual process as described above for testing is subjective and error prone. Consider having to examine a hundred-line SWIFT message with cross-field dependencies for 10 different test cases. Most project developers would not be able to perform such a task reliably and accurately, and even those who were able would not be inclined to. A subjective, manual, error-prone testing process adds risk to a project and increases the chance of failure.
The risk of failure is often amplified by project planning timelines that do not incorporate sufficient time
for testing. All too often, a project testing strategy hinges on a single manual test pass which is
scheduled a week or so before the go-live date. Such limited testing places the project at risk. Given
such a limited timeline for testing, if any problems are detected, then the project may be delayed
because no time has been allotted to fix problems. Furthermore, if a problem is discovered and fixed,
there may be insufficient time left to perform subsequent test passes before the system goes live.
The reality of "single final build, single test pass, take the project live" testing is that it often results in the project being delayed, going over budget, or, even worse, failing completely. For mission-critical systems, this sort of testing methodology is a disaster waiting to happen.
Note
You should also treat any deployment, un-deployment, configuration and installation scripts or
processes as part of the software project for testing purposes. Both the operations of the
project, as well as the deployment and configuration of it, should be tested.
This process is typically completed by early morning, around 6 A.M., which enables the first members of the team to start work on the new day's build. If one or more of the BVTs from the previous night failed, it is the team's responsibility to fix them as soon as possible.
Following a daily build process has many advantages for the project. First, it provides a regular heartbeat (made up of the daily build plus the automated BVTs). Second, using BVTs forces early integration with other systems; integration is a tricky task, and doing it early in and of itself reduces project risk. Due to the time required to complete them, stress and performance tests are typically performed outside of the daily build process and are instead scheduled against milestone builds in the project.
The daily build process can be, and has been, used very effectively on BizTalk solutions. However, you need to ensure that tasks typically left to the end of a project are done iteratively from the start. For example, deployment in BizTalk Server is non-trivial, so it is important that automated deployment scripts be developed up front. If you do not do this, you will end up manually deploying and un-deploying the solution many times throughout the project, which will cost you more time in the end. There are tools available to drive the daily build process; Visual Studio Team System and Team Foundation Server are the primary choice for many people, and MSBuild scripts may be used to drive the steps in the build process. Another alternative is the open source CruiseControl.NET tool, which is available at http://go.microsoft.com/fwlink/?LinkId=116093.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about the
suitability of this program. Use of this program is entirely at your own risk.
For more information about automating testing using Visual Studio Team System, see the topic "Testing
Tools Tasks" in the Visual Studio Team System online documentation at http://go.microsoft.com/fwlink/?
LinkId=120226. For more information about automating the build process using Visual Studio Team
System, see "Managing Builds with Team Foundation Build" in the Visual Studio Team System
documentation at http://go.microsoft.com/fwlink/?LinkId=116130.
Functional testing
In the context of BizTalk applications, functional tests exercise a specific end-to-end scenario. When
performing this type of testing, it is useful to imagine BizTalk Server as a black box that you test
functionally from an external perspective. For example, a test may involve feeding an input message
(with known values) to a receive location endpoint (e.g. a URL, an FTP location, or whatever your
choice of transport is). The test would then verify that the correct number of messages, with the correct
output, are produced on the send side. This may sound relatively straightforward, but when you
consider that some scenarios require inputs to arrive in a certain order and at a certain time, and you
compound this with other solution requirements, such as recording tracking data in BAM, it becomes
clear this is a classic case where a tool and framework can be used to orchestrate the testing.
It is critical that functional testing is designed to cover all the possible paths through your solution. This
should include not only those scenarios you expect in production, but also the failure paths and
exception-handling paths you have implemented but hope never to use; one phrase commonly used to
describe this is testing for the “bad day scenario.” You should ensure all orchestrations, all permissible
message types, and all code branches are exercised by your functional test suite. The following
sections describe developing positive and negative functional test cases to cover all code paths.
For more information about functional testing and the other testing categories that should be
implemented before placing a BizTalk Server solution into production, see the topic “Checklist: Testing
Operational Readiness” in the BizTalk Server Operations Guide at http://go.microsoft.com/fwlink/?LinkId=116064.
Positive tests
It is important when performing positive tests to ensure all combinations of messages, pipelines,
orchestrations, and endpoints are passed through the solution so that all the message flows
are exercised. Testing all code paths will likely require you to process different messages
with different content.
When testing, use the transport type that will be used in production. Unfortunately, all too often
functional testing is performed using only the file adapter when some other transport will be used in
production. Adopting this approach sets you and the overall project up for failure later on.
Validate the payload of all messages that are sent out from the system. If the messages are XML,
you should validate their schema and key fields in the message using XPath expressions.
Validate that any tracking data stored in BAM (if used) is correct, and that any other data left in
external data repositories is accounted for.
Test the execution of all Business Rule Engine (BRE) policies and rules if your solution uses BRE.
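As a sketch of send-side payload validation, BizUnit’s XmlValidationStep can combine schema validation with XPath checks on key fields. The schema path, namespace, and expected value below come from the example later in this section, but the XPathValidation element name and the query syntax are assumptions based on the BizUnit framework and may differ in your version:

```xml
<ValidationStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.XmlValidationStep">
  <XmlSchemaPath>..\..\..\TestData\PurchaseOrder.xsd</XmlSchemaPath>
  <XmlSchemaNameSpace>http://SendMail.PurchaseOrder</XmlSchemaNameSpace>
  <XPathList>
    <!-- Illustrative: assert that the PONumber element carries the expected value -->
    <XPathValidation query="/*[local-name()='PurchaseOrder']/*[local-name()='PONumber']">PONumber_0</XPathValidation>
  </XPathList>
</ValidationStep>
```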
Negative tests
Ensure you test the handling of invalid messages through your system. You should verify that your
chosen strategy (rejecting them before they enter BizTalk Server, or suspending them) has
worked correctly.
When testing the handling of invalid messages, ensure you test any receive-side error-handling
orchestrations that have been implemented to handle suspended messages.
Ensure your failure scenarios cover all exception blocks in your orchestrations. Failing to test this
adequately is a common problem.
If you are using long-running transactions with compensation behavior, test these scenarios very
carefully. This is another area where inadequate testing will incur serious consequences in a
production environment.
Ensure failures are logged correctly in the appropriate error logs.
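The last point can be automated with a BizUnit event-log check. The step name below is from the BizUnit framework, and the Machine value reuses the server name from the example later in this topic, but the Source and match pattern are hypothetical and should be replaced with the entries your error handling actually writes:

```xml
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.EventLogCheckStep">
  <Machine>BIZTALKADMIN01</Machine>
  <EventLog>Application</EventLog>
  <Source>BizTalk Server 2006</Source>
  <Type>Error</Type>
  <!-- Hypothetical pattern: match the failure text your exception handling logs -->
  <ValidationRegex>suspended</ValidationRegex>
</TestStep>
```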
Automation is key
To do all of this efficiently and effectively, invest the time up front to automate testing. Manual
testing is time consuming, error prone, and expensive. Every time you perform a manual test pass, you
add another batch of tasks that have to be handled by project resources (i.e. people in the team). By
automating up front, you get a return on the initial investment required to develop the tests every time
they are run. Good functional tests should execute quickly and efficiently and be repeatable; each test
should also be autonomous (i.e. independent of any other test), which enables you to run multiple tests
sequentially as a test suite. The functional tests should always produce the same result; poorly written
functional tests or poorly written code will produce different test results between test runs, leading to
confusion and wasted time investigating the root cause of the failure.
It is important to minimize the development effort required to write each functional test. Usually, the
more expensive something is to produce (in terms of development time), the fewer test cases you are
likely to end up with, which means a lower level of test coverage over your code. By utilizing a test
framework, you can develop test cases more quickly and easily and, hence, achieve fuller code
coverage. Most good test frameworks use a declarative approach to defining tests (i.e. the
configuration for a test is stored in a configuration file, typically an XML file). Utilizing a good
test framework enables you to develop a full functional test suite in an agile and reliable manner and
avoids having to “reinvent the wheel” over and over.
In This Section
Stages of a BizUnit Test Case
Defining Tests Using an XML Configuration File
Automating Performance and Stability Testing
Using BizUnit with the Business Process Management Scenario
Setup Stage
The setup stage prepares the platform for the test. For example, before a particular test can be run,
a file may need to be copied to a file drop in preparation for the actual execution of the test. You could
also use this stage to clean up any file locations or database tables that will be used in the test. As with
every stage in BizUnit, there is no limit to the number of test steps that can be added, which provides
the flexibility required to handle complex scenarios.
Execution Stage
The execution stage is where the test is actually run. This is where the function of the system you are
validating is actually tested.
Cleanup Stage
The cleanup stage is the container for test steps that return the platform to the consistent state it
was in before you ran the test. This stage is always executed, even if an error occurs in the execution
stage. The platform should be returned to its starting point to prevent one test case from interfering
with another, so that each test case runs autonomously as part of the test suite. Ensuring a
complete cleanup of the system at this stage is one of the guiding principles for effective testing with
BizUnit.
The diagram below illustrates the format of a sample test case, which contains test steps in the three
stages: setup, execution, and cleanup. It is important to always follow this structure when defining test
cases with BizUnit.
Stages of a BizUnit test
Each test case begins and ends with the TestCase XML tag; the testName parameter is passed to it
as indicated below:
<TestCase testName="Test_01_FILECopyWithXmlValidation">
Then we enter the TestSetup phase, in which we ensure that the environment is in a consistent state to
run the test. In this example, we delete any XML messages that are contained in our TestData directory.
This is done using the FileDeleteMultipleStep:
<TestSetup>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileDeleteMultipleStep">
<Directory>..\..\..\TestData\</Directory>
<SearchPattern>*.xml</SearchPattern>
</TestStep>
</TestSetup>
We then enter the most critical section of the test, the test execution stage. This stage can
contain multiple test steps. In this example we use the FileCreateStep to copy a document (InDoc1.xml,
which can be seen in the <SourcePath> tag) to a file drop used by our receive location. It is
important to note that BizUnit supports unique identifiers for file names in this step, as can be
seen with the %Guid% reference in the CreationPath tag.
After this has completed, we use the FileValidateStep to validate that the outbound message
has been created. Notice this step allows you to specify a timeout value (in milliseconds),
the directory, and the search pattern. In addition, the DeleteFile tag enables you to specify
whether you want the file removed after it has been validated. Finally, note the
validation includes an XPath query, which validates the PONumber node within the XML message (it
checks that the value is PONumber_0). Examining and validating the payload of any outbound
messages is another guiding principle that you should follow when using BizUnit.
<TestExecution>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileCreateStep">
<SourcePath>..\..\..\TestData\InDoc1.xml</SourcePath>
<CreationPath>..\..\..\Rec_03\TransactionId_%Guid%.xml</CreationPath>
</TestStep>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileValidateStep">
<Timeout>3000</Timeout>
<Directory>..\..\..\Rec_03\</Directory>
<SearchPattern>TransactionId_*.xml</SearchPattern>
<DeleteFile>true</DeleteFile>
<ValidationStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.XmlValidationStep">
<XmlSchemaPath>..\..\..\TestData\PurchaseOrder.xsd</XmlSchemaPath>
<XmlSchemaNameSpace>http://SendMail.PurchaseOrder</XmlSchemaNameSpace>
<XPathList>
</XPathList>
</ValidationStep>
</TestStep>
</TestExecution>
The final stage of the test case is the cleanup. As can be seen below, the FileDeleteMultipleStep is
used to clean up the directories used by the test.
<TestCleanup>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileDeleteMultipleStep">
<Directory>..\..\..\TestData\</Directory>
<SearchPattern>*.xml</SearchPattern>
</TestStep>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileDeleteMultipleStep">
<Directory>..\..\..\Rec_03\</Directory>
<SearchPattern>*.xml</SearchPattern>
</TestStep>
</TestCleanup>
Hopefully this example illustrates that defining tests in BizUnit is relatively straightforward and that by
using this test framework, you will be able to rapidly develop test cases to provide functional testing of
your application. For reference, the complete test case is listed below:
<TestCase testName="Test_01_FILECopyWithXmlValidation">
<TestSetup>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileDeleteMultipleStep">
<Directory>..\..\..\TestData\</Directory>
<SearchPattern>*.xml</SearchPattern>
</TestStep>
</TestSetup>
<TestExecution>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileCreateStep">
<SourcePath>..\..\..\TestData\InDoc1.xml</SourcePath>
<CreationPath>..\..\..\Rec_03\TransactionId_%Guid%.xml</CreationPath>
</TestStep>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileValidateStep">
<Timeout>3000</Timeout>
<Directory>..\..\..\Rec_03\</Directory>
<SearchPattern>TransactionId_*.xml</SearchPattern>
<DeleteFile>true</DeleteFile>
<ValidationStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.XmlValidationStep">
<XmlSchemaPath>..\..\..\TestData\PurchaseOrder.xsd</XmlSchemaPath>
<XmlSchemaNameSpace>http://SendMail.PurchaseOrder</XmlSchemaNameSpace>
<XPathList>
</XPathList>
</ValidationStep>
</TestStep>
</TestExecution>
<!-- Test cleanup: test cases should always leave the system in the state they found it -->
<TestCleanup>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileDeleteMultipleStep">
<Directory>..\..\..\TestData\</Directory>
<SearchPattern>*.xml</SearchPattern>
</TestStep>
<TestStep assemblyPath=""
typeName="Microsoft.Services.BizTalkApplicationFramework.BizUnit.FileDeleteMultipleStep">
<Directory>..\..\..\Rec_03\</Directory>
<SearchPattern>*.xml</SearchPattern>
</TestStep>
</TestCleanup>
</TestCase>
Overview of LoadGen
The BizTalk LoadGen 2007 tool (LoadGen) is a load-testing tool that was developed by the Stress and
Performance Testing team in the BizTalk Server 2006 product group. LoadGen was designed to let you
quickly, easily, and reliably define load tests that simulate production-level message volumes. LoadGen
is multi-threaded, configuration driven, and supports multiple transports. The BizTalk product group
uses LoadGen on a daily basis, so you can have a high degree of confidence that the tool is durable, fit
for purpose, and able to simulate a wide variety of BizTalk scenarios.
LoadGen employs a modular design that consists of three layers: presentation, framework, and
component. The presentation layer consists of a command-line driver, which is responsible for driving
the framework. The framework layer reads a configuration file and then executes the components
specified therein. The component layer consists of three types of components: load generators,
message creators, and throttle controllers. Each of these components is extensible, so you can
create your own and plug them into LoadGen to address the needs of your scenario. Because LoadGen
was developed by the BizTalk Server product group, you should find that the out-of-the-box components
fulfill most load-testing requirements. Each of these components is described in greater detail
below:
Load generators are responsible for transmitting messages via a particular transport. Load
generators are provided for the following transports:
File
HTTP
MQSeries
MSMQLarge
MSMQ
SOAP
Web Services Enhancements (WSE)
Windows SharePoint Services (WSS)
Windows Communication Foundation (WCF)
Message creators are optional components that can be used when you need to generate
messages that contain unique data. Message creators use one of two modes of creation:
synchronous and asynchronous. If the synchronous message creation mode is specified,
LoadGen uses only a single thread to create messages, which ensures that each message contains a
unique payload. While the synchronous mode guarantees unique data within each message, it
also limits scalability. LoadGen also provides asynchronous message creators that use
multiple execution threads; this enables LoadGen to meet the target message rate (because it can
simply create more threads). In asynchronous mode the message creator may be configured to
randomly modify data for each individual message, but because it uses multiple threads it does not
guarantee that all messages generated during the test will contain a unique payload.
Throttle controllers ensure that messages are transmitted at a steady rate by governing the load
generators while the test is running. LoadGen also exposes custom throttling, which enables you to
control the flow of messages based on criteria including:
Number of files in a folder
Number of rows in a database table
Depth of an MSMQ or MQSeries message queue
The Microsoft BizTalk LoadGen 2007 tool is available for download at http://go.microsoft.com/fwlink/?LinkId=59841.
The sample LoadGen configuration file below defines a simple file-based load test: it sends 5,000 files
over the FILE transport and uses a custom throttle controller that monitors the number of files waiting
in the receive folder.
<LoadGenFramework>
<CommonSection>
<LoadGenVersion>2</LoadGenVersion>
<OptimizeLimitFileSize>204800</OptimizeLimitFileSize>
<NumThreadsPerSection>5</NumThreadsPerSection>
<SleepInterval>200</SleepInterval>
<LotSizePerInterval>25</LotSizePerInterval>
<RetryInterval>10000</RetryInterval>
<StopMode Mode="Files">
<NumFiles>5000</NumFiles>
</StopMode>
<Transport Name="FILE">
<Assembly>FileTransport.dll/FileTransport.FileTransport</Assembly>
</Transport>
<ThrottleController Mode="Custom">
<Monitor Name="File">
<Assembly>FileMonitor.dll/DropLocationFileMonitor.DropLocationFileMonitor</Assembly>
<ThresholdRange>1000-2000</ThresholdRange>
<SleepInterval>1000</SleepInterval>
<Parameters>C:\Scenarios\FileToFile\Receive</Parameters>
</Monitor>
<ThrottleCondition>File</ThrottleCondition>
</ThrottleController>
</CommonSection>
<Section Name="FileSection">
<SrcFilePath>C:\LoadGen\ConfigFiles\ConsoleConfigFiles\FileToFileLG.xml</SrcFilePath>
<DstLocation>
<Parameters>
<DstFilePath>C:\Scenarios\FileToFile\Receive</DstFilePath>
</Parameters>
</DstLocation>
</Section>
</LoadGenFramework>
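In the CommonSection above, NumThreadsPerSection controls the sender threads, SleepInterval the pause between lots, and LotSizePerInterval the number of messages per lot, with StopMode ending the run after 5,000 files. As a rough sketch (the exact rate semantics should be verified against the LoadGen documentation), scaling LotSizePerInterval while keeping the other parameters fixed models different fractions of the expected production volume:

```xml
<!-- Sketch: approximate load scaling relative to the sample above -->
<!-- 100% of expected volume, as in the sample -->
<LotSizePerInterval>25</LotSizePerInterval>
<!-- ~75% of expected volume -->
<LotSizePerInterval>19</LotSizePerInterval>
<!-- ~125% of expected volume -->
<LotSizePerInterval>31</LotSizePerInterval>
```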
<TestSetup>
</TestSetup>
<TestExecution>
<TestStep assemblyPath="" typeName="BizUnit.LoadGenSteps.LoadGenExecuteStep,
BizUnit.LoadGenSteps, Version=3.0.0.0, Culture=neutral, PublicKeyToken=7eb7d82981ae5162">
<LoadGenTestConfig>..\..\..\PerfGuideFiletoFile.xml</LoadGenTestConfig>
</TestStep>
</TestExecution>
<!-- Test cleanup: test cases should always leave the system in the state they found it -->
<TestCleanup>
</TestCleanup>
</TestCase>
The remainder of this topic describes the configuration file for a BizUnit test case that automates
performance testing with LoadGen.
Note
This configuration file can be used as a template to quickly integrate BizUnit and LoadGen as
part of your performance testing. Before running this test case you will need to customize the
configuration file for your environment. Sections of the configuration file that must be
customized are indicated accordingly.
To begin with, specify a value for the testName parameter that is appropriate for the BizTalk solution:
<TestCase testName="Performance-Guide-Sample-Loadgen-Test">
Then add context variables to the TestSetup stage. These context variables will be referenced
throughout the duration of the test case. To use this configuration file, modify the values specified for
TestCaseResultsDir (C:\Dev Work\Perf Guide Demos\PerfResults\) and Machine (BIZTALKADMIN01)
to match your environment.
<TestSetup>
<ContextItem contextKey="TestRunName">
<ItemTest takeFromCtx="BizUnitTestCaseName"></ItemTest>
<ItemTest>_%DateTime%</ItemTest>
</ContextItem>
</TestStep>
<ContextItem contextKey="TestCaseResultsDir">
<ItemTest takeFromCtx="TestRunName"></ItemTest>
</ContextItem>
</TestStep>
<ItemTest takeFromCtx="TestCaseResultsDir"></ItemTest>
<ItemTest>\PerfCounters.blg</ItemTest>
</ContextItem>
</TestStep>
<!-- Context property: destination for app event log on test computer -->
<ItemTest takeFromCtx="TestCaseResultsDir"></ItemTest>
<ItemTest> BIZTALKADMIN01_ApplicationLog.evt</ItemTest>
</ContextItem>
</TestStep>
<Machine>BIZTALKADMIN01</Machine>
<EventLog>Application</EventLog>
</TestStep>
<!-- Create the directory to save all the test results -->
</TestStep>
</TestSetup>
After completing the TestSetup stage, we enter the TestExecution stage. The first thing we do in this
stage is stop the BizTalk host instances. A separate BizUnit.HostConductorStep section must be
added for each distinct host instance. If you are using this configuration file in your environment, you
will also need to enter the appropriate values for HostInstanceName, Server, Logon, and Password:
<TestExecution>
<Action>stop</Action>
<HostInstanceName>BizTalkServerApplication</HostInstanceName>
<Server>BizTalkAdmin01</Server>
<Logon>ServerName\Administrator</Logon>
<PassWord>Pass@word1</PassWord>
<GrantLogOnAsService>true</GrantLogOnAsService>
</TestStep>
After stopping all of the host instances, we clean up the BizTalk MessageBox database using the
bts_CleanupMsgBox stored procedure. In order to use this step you must modify the value for
ConnectionString to match your environment:
<!-- Step 2: Clean Up MessageBox -->
<DelayBeforeExecution>1</DelayBeforeExecution>
<SQLQuery>
<RawSQLQuery>[dbo].[bts_CleanupMsgbox]</RawSQLQuery>
</SQLQuery>
</TestStep>
Step 3 of the TestExecution stage starts Performance Monitor counters that are specified in a
template file. A sample template file is listed underneath the sample BizUnit.PerfmonCountersStep
below. To use the template file you must modify the value specified for CountersListFilePath to match
your environment. Modify the sample performance monitor counter template file to include any perfmon
counters that you would like to monitor or remove any that are not relevant to your scenario:
<!-- Step 3: Start Perfmon counters -->
<PerfmonAction>Start</PerfmonAction>
<CounterSetName>PerfGuidePerfmonCounters</CounterSetName>
<SampleInterval>5</SampleInterval>
<PerfmonLogFilePath takeFromCtx="PerfMonFilePath"></PerfmonLogFilePath>
</TestStep>
\Process(*)\*
\Memory\*
\PhysicalDisk(*)\*
\System\Context Switches/sec
\BizTalk:Message Agent(*)\*
\BizTalk:Messaging(*)\*
\BizTalk:Messaging Latency(*)\*
\BizTalk:TDDS(*)\*
\XLANG/s Orchestrations(*)\*
Now we start the BizTalk Server host instances. A separate BizUnit.HostConductorStep section must
be added for each distinct host instance (distinct includes multiple instances of a host across servers). If
you are using this configuration file in your environment, you will also need to enter the appropriate
values for HostInstanceName, Server, Logon, and Password:
<!-- Step 4: Start BizTalk Hosts -->
<Action>start</Action>
<HostInstanceName>BizTalkServerApplication</HostInstanceName>
<Server>BizTalkAdmin01</Server>
<Logon>ServerName\Administrator</Logon>
<PassWord>Pass@word1</PassWord>
<GrantLogOnAsService>true</GrantLogOnAsService>
</TestStep>
Step 5 “primes” the system by sending a couple of messages to BizTalk Server using
BizUnit.LoadGenExecuteStep; change the value of the LoadGenTestConfig parameter to match
your environment. Step 6 writes the LoadGen configuration file to memory so that it can be written to
the test results database when the test is complete:
<!-- Step 5: Send Priming messages -->
<TestStep assemblyPath="" typeName="BizUnit.LoadGenSteps.LoadGenExecuteStep,
BizUnit.LoadGenSteps, Version=3.0.0.0, Culture=neutral, PublicKeyToken=7eb7d82981ae5162">
<LoadGenTestConfig>C:\Program
Files\LoadGen\ConfigFiles\ConsoleConfigFiles\PerfGuideFiletoFile.xml</LoadGenTestConfig>
</TestStep>
<FilePath>C:\Program
Files\LoadGen\ConfigFiles\ConsoleConfigFiles\PerfGuideFiletoFile.xml</FilePath>
<ContextPropertyName>LoadGenFileContent</ContextPropertyName>
</TestStep>
Now we write the test start time to a test results database. Modify the ConnectionString and
RawSQLQuery parameters to match your environment:
<!-- Step 7: Update test results DB with test start time -->
<DelayBeforeExecution>1</DelayBeforeExecution>
<SQLQuery>
<SQLQueryParams>
<SQLQueryParam takeFromCtx="TestRunName"></SQLQueryParam>
<SQLQueryParam takeFromCtx="LoadGenFileContent"></SQLQueryParam>
</SQLQueryParams>
</SQLQuery>
</TestStep>
Step 8 is where the actual performance test is initiated using BizUnit.LoadGenExecuteStep. This step
specifies the same LoadGen configuration file that was used in step 5, but you can specify any valid
LoadGen configuration file here. BizUnit.DelayStep is used in step 9 to impose a 5-second delay to
allow time for messages to start flowing through the system. Host queue length is monitored using
BizUnit.PerfMonCounterMonitorStep; when this counter reaches a value of 1, as specified in step
10, the test is concluded. Change the values for the InstanceName and Server parameters to match
the name of the host instance and server that you would like to monitor in your environment.
<!-- Step 8: LoadGen: Load actual perf test -->
<TestStep assemblyPath="" typeName="BizUnit.LoadGenSteps.LoadGenExecuteStep,
BizUnit.LoadGenSteps, Version=3.0.0.0, Culture=neutral, PublicKeyToken=7eb7d82981ae5162">
<LoadGenTestConfig>C:\Program
Files\LoadGen\ConfigFiles\ConsoleConfigFiles\PerfGuideFiletoFile.xml</LoadGenTestConfig>
</TestStep>
<!-- Step 9: Delay for 5 secs to allow msgs to start flowing -->
<Delay>5000</Delay>
</TestStep>
<!-- Step 10: Wait for Orch Host Queue depth to reach one -->
<InstanceName>BizTalkServerApplication:biztalkmsgboxdb:BizTalkAdmin01</InstanceName>
<Server>BizTalkAdmin01</Server>
<CounterTargetValue>1</CounterTargetValue>
</TestStep>
At the conclusion of the test we use BizUnit.DBExecuteNonQueryStep to update the test results
database. Completion of this step signifies the end of the test execution stage, as indicated by the
closing </TestExecution> tag. Again, you must modify the ConnectionString and RawSQLQuery
parameters to match your environment.
<!-- Step 11: Update test results DB with test stop time -->
<DelayBeforeExecution>1</DelayBeforeExecution>
<SQLQuery>
<SQLQueryParams>
<SQLQueryParam takeFromCtx="TestRunName"></SQLQueryParam>
</SQLQueryParams>
</SQLQuery>
</TestStep>
</TestExecution>
Upon concluding the execution stage we enter the test cleanup stage. This stage uses
BizUnit.PerfmonCountersStep to stop the Performance Monitor counters that were started earlier (in step 3).
<TestCleanup>
<PerfmonAction>Stop</PerfmonAction>
<CounterSetName>PerfGuidePerfmonCounters</CounterSetName>
</TestStep>
</TestCleanup>
</TestCase>
This example illustrated how BizUnit can be combined with LoadGen to automate performance testing.
The load test described by the BizUnit configuration file above can be executed from the Visual Studio
testing tools in the same manner as the functional testing examples described in Walkthrough: Using
BizUnit to Test the BPM Scenario. Adopting this approach enables you to centrally manage, administer,
and collect data for your performance testing.
By using BizUnit and LoadGen in an automated approach, it is very easy to schedule multiple test runs
during off hours, which will provide ample test results for analysis during normal working hours.
When automating performance testing, consider using LoadGen scripts that model different loads
through the system; for example, you may wish to simulate varying degrees (75%, 100%, and 125%) of
the expected production message volume. When performing load testing, it is especially important to
test the overload or “bad day” scenario. Before placing the system into production, you should know
what the maximum sustainable throughput (MST) is for each test case in the BizTalk Server
environment. For more information about maximum sustainable performance, see “What is Sustainable
Performance” in the BizTalk Server 2006 documentation at http://go.microsoft.com/fwlink/?LinkId=116896.
For reference, the complete BizUnit configuration file for this performance test case is listed below:
<TestCase testName="Performance-Guide-Sample-Loadgen-Test">
<TestSetup>
<ContextItem contextKey="TestRunName">
<ItemTest takeFromCtx="BizUnitTestCaseName"></ItemTest>
<ItemTest>_%DateTime%</ItemTest>
</ContextItem>
</TestStep>
<ContextItem contextKey="TestCaseResultsDir">
<ItemTest takeFromCtx="TestRunName"></ItemTest>
</ContextItem>
</TestStep>
<ContextItem contextKey="PerfMonFilePath">
<ItemTest takeFromCtx="TestCaseResultsDir"></ItemTest>
<ItemTest>\PerfCounters.blg</ItemTest>
</ContextItem>
</TestStep>
<!-- Context property: destination for app event log on BTSSVR-001 -->
<ContextItem contextKey="DestPath-BTSSVR-001-AppEventLog">
<ItemTest takeFromCtx="TestCaseResultsDir"></ItemTest>
<ItemTest>BTSSVR-001_ApplicationLog.evt</ItemTest>
</ContextItem>
</TestStep>
<Machine>BIZTALKADMIN01</Machine>
<EventLog>Application</EventLog>
</TestStep>
<!-- Create the directory to save all the test results -->
</TestStep>
</TestSetup>
<TestExecution>
<Action>stop</Action>
<HostInstanceName>BizTalkServerApplication</HostInstanceName>
<Server>BizTalkAdmin01</Server>
<Logon>ServerName\Administrator</Logon>
<PassWord>Pass@word1</PassWord>
<GrantLogOnAsService>true</GrantLogOnAsService>
</TestStep>
<DelayBeforeExecution>1</DelayBeforeExecution>
<SQLQuery>
<RawSQLQuery>[dbo].[bts_CleanupMsgbox]</RawSQLQuery>
</SQLQuery>
</TestStep>
<PerfmonAction>Start</PerfmonAction>
<CounterSetName>PerfGuidePerfmonCounters</CounterSetName>
<SampleInterval>5</SampleInterval>
<PerfmonLogFilePath takeFromCtx="PerfMonFilePath"></PerfmonLogFilePath>
</TestStep>
<Action>start</Action>
<HostInstanceName>BizTalkServerApplication</HostInstanceName>
<Server>BizTalkAdmin01</Server>
<Logon>ServerName\Administrator</Logon>
<PassWord>Pass@word1</PassWord>
<GrantLogOnAsService>true</GrantLogOnAsService>
</TestStep>
<LoadGenTestConfig>C:\Program
Files\LoadGen\ConfigFiles\ConsoleConfigFiles\PerfGuideFiletoFile.xml</LoadGenTestConfig>
</TestStep>
<FilePath>C:\Program
Files\LoadGen\ConfigFiles\ConsoleConfigFiles\PerfGuideFiletoFile.xml</FilePath>
<ContextPropertyName>LoadGenFileContent</ContextPropertyName>
</TestStep>
<!-- Step 7: Update test results DB with test start time -->
<DelayBeforeExecution>1</DelayBeforeExecution>
<SQLQuery>
<SQLQueryParams>
<SQLQueryParam takeFromCtx="TestRunName"></SQLQueryParam>
<SQLQueryParam takeFromCtx="LoadGenFileContent"></SQLQueryParam>
</SQLQueryParams>
</SQLQuery>
</TestStep>
</TestStep>
<!-- Step 9: Delay for 5 secs to allow msgs to start flowing -->
<Delay>5000</Delay>
</TestStep>
<!-- Step 10: Wait for Orch Host Queue depth to reach one -->
<InstanceName>BizTalkServerApplication:biztalkmsgboxdb:BizTalkAdmin01</InstanceName>
<Server>BizTalkAdmin01</Server>
<CounterTargetValue>1</CounterTargetValue>
</TestStep>
<!-- Step 11: Update test results DB with test stop time -->
<DelayBeforeExecution>1</DelayBeforeExecution>
<SQLQuery>
<SQLQueryParams>
<SQLQueryParam takeFromCtx="TestRunName"></SQLQueryParam>
</SQLQueryParams>
</SQLQuery>
</TestStep>
</TestExecution>
<TestCleanup>
<CounterSetName>PerfGuidePerfmonCounters</CounterSetName>
</TestStep>
</TestCleanup>
</TestCase>
In This Section
Overview of the BPM Scenario
BPM Scenario Architecture
BPM Message Flow and Entry Points
Walkthrough: Using BizUnit to Test the BPM Scenario
Note
As well as reading through this overview of the BPM scenario, you should also read the
documentation referenced by the link below. It provides detailed information about the
solution, including the design patterns used, as well as full and complete instructions for
setting up the solution on your machine. You will need to follow these instructions carefully if
you want to use BizUnit and the other tools described here to test the solution. For more
information about the BPM solution, see “Business Process Management Solution” at
http://go.microsoft.com/fwlink/?LinkId=116293.
Note
Each of the design patterns discussed above are covered in greater depth in the BizTalk Server
documentation. For more information about these design patterns, see “Patterns in the
Business Process Management Solution” in the BizTalk Server 2006 documentation at
http://go.microsoft.com/fwlink/?LinkId=116298.
The sequence diagram below illustrates the high-level steps involved in processing an order in the
Southridge system. As you can see from the diagram, the system enables the CSR to initiate orders,
interrupt orders (e.g. cancel them), and update orders that are being processed. Each of these
functional requirements was driven by the business requirements described in Overview of the BPM
Scenario.
BPM scenario workflow sequence diagram
The message then enters pre-processing by translators, implemented as BizTalk Server maps. The
translators create an acknowledgement for the service interface, generate an entry in the history or
tracking database, and make an entry in the service system. The fourth and final translator creates the
message required by the Process Manager (the Order Manager in the previous diagram); the Process
Manager then controls the execution of the required number of stages and writes the history of the
process to the History database.
The solution also supports single and batched requests via different service interfaces; it detects
duplicates and supports sending interrupts to in-flight orchestrations.
This system has multiple endpoints, supports different message types, and communicates over many
different transports. There are multiple points of failure, so full and thorough testing of an
application such as this significantly reduces risk to the project. Building out the test harness over
time should be viewed as an investment in test assets that pays a return every time you run the
test suite.
using System.Text;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using BizUnit;
namespace BPM_Tests
{
    [TestClass]
    public class BPMTests
    {
        [TestMethod]
        public void CSRWebForm_BizUnit_Data_Validation()
        {
            // BizUnit is driven entirely by the XML test case file (the path shown is illustrative).
            BizUnit.BizUnit bizUnit = new BizUnit.BizUnit(@"TestCases\CSRWebForm_BizUnit_Data_Validation.xml");
            bizUnit.RunTest();
        }
        [TestMethod]
        public void CancelOrder_CSRWebForm_BizUnit_Data_Validation()
        {
            BizUnit.BizUnit bizUnit = new BizUnit.BizUnit(@"TestCases\CancelOrder_CSRWebForm_Bizunit_Data_Validation.xml");
            bizUnit.RunTest();
        }
    }
}
As can be seen in the code listing above, it is necessary to include a "using" statement for the BizUnit
namespace. This code defines a single class, BPMTests, which is decorated with the [TestClass]
attribute. The class includes two methods, both of which simply instantiate a new BizUnit object with a
parameter specifying the location of the test case configuration file.
This is literally all the code required to define a BizUnit test case; everything else is done through the
test case XML configuration file. This allows for separation between code and test configuration, which
means that tests can be changed or modified without the need for recompiling.
1. The service interface - In this case, the CSR Web form at
http://localhost/CSRWebApp/CSRMainForm.aspx is used to drive the application.
The test should validate that the CSR form is accessible, that the correct data can be entered, and
that the correct response is returned. The screenshot below depicts this Web page:
The CSR Web form interface
2. The History database - As highlighted in the diagram below, the History database should be
updated throughout the process to reflect the start, completion, and any cancellation of an order.
Therefore, this test should validate that the History database is updated. The screenshot below
shows the row stored in the orderlog table in the SouthRidgeVideoHistory database for the
information entered in the CSR Web form in step 1:
Combining a Visual Studio Web test and BizUnit to functionally test the BPM Scenario
Now that we have defined what should be tested, we will establish how to test it. To test the functional
flow through the system, we combine Visual Studio Web Testing (to test the CSR Web form) and
BizUnit (to validate that the BAM and History database information is updated). To do this, complete the
following steps:
1. Complete steps 1 through 6 in the section “To start the Business Process Management Solution” of
the topic “How to Run the Business Process Management Solution” at
http://go.microsoft.com/fwlink/?LinkId=117752.
2. Follow these steps to add a new Web test to the BPM Tests project:
a. In Solution Explorer, right-click the BPM Tests project, click Add, and then click New Test to
display the Add New Test dialog box.
b. In the Add New Test dialog box, click Web Test, enter CSRWebFormCreateOrder.webtest
for Test Name, and then click OK. This displays a CSRWebFormCreateOrder.webtest form in
the Visual Studio IDE and also opens a Web Test Recorder interface in Internet Explorer.
Enter the address of the CSR Web form into the address bar of the Web Test Recorder
interface in Internet Explorer: http://localhost/CSRWebApp/CSRMainForm.aspx
c. After the Web page is displayed, enter the following values into the Web form, and then click
the Submit Order button.
Field Value
Customer ID 999
Order ID 999
Sequence Number 1
Service Type Code New Standard Service
3. After you click Submit Order, the form should display a Last Message Sent acknowledgement as
in the screenshot below.
In the Web Test Recorder section, click Stop and switch back to the Visual Studio IDE. The Web
test configuration will display in Visual Studio.
In the Web test configuration window, click to expand each of the nodes under
CSRWebFormCreateOrder to display the Form Post Parameters for the Web test. Note that the
form post parameters correspond to the values entered in the Web Test Recorder interface in
Internet Explorer.
4. Launch Internet Explorer, open the CSR Web form at
http://localhost/CSRWebApp/CSRMainForm.aspx, enter the same values into the form that you
entered when submitting the order except select the value Cancel Standard Service for Service
Type Code, and then click Submit Order. This cancels the order created by the original Web test and
allows you to run the Web test using the same parameters in the future. Later in this topic we
illustrate how this cleanup logic is incorporated into the testing harness so that cleanup runs
automatically.
The preceding steps define how to create a Web test to test the service interface. The following
steps define how to validate that the correct information is inserted into the History and BAM
databases. Follow these steps to confirm that the correct information is inserted into the History and
BAM databases.
5. Create the file CSRWebForm_BizUnit_Data_Validation.xml in the C:\BizUnit Walkthrough\BPM E2E
Scenario Tests\BPM Tests\TestCases directory of the BPM Tests project. This file is referenced by
the CSRWebForm_BizUnit_Data_Validation() test method defined in the BPMTests class in the file
BPMTests.cs. The contents of this file are listed below:
<TestCase testName="CSRWebForm_BizUnit_Data_Validation">
<TestSetup>
<TestStep assemblyPath="" typeName="BizUnit.DelayStep">
<Delay>20000</Delay>
<!--
Set <Delay> to a value large enough to allow enough time for the BPM
scenario to update the History database,
the Tracking database, and the BAM database or else the test steps that
validate the updated values
in these databases will fail. For most environments a value of 20000 (20
seconds) will be sufficient.
-->
</TestStep>
</TestSetup>
<TestExecution>
<TestStep assemblyPath="" typeName="BizUnit.DBQueryStep">
<DelayBeforeCheck>0</DelayBeforeCheck>
<NumberOfRowsExpected>1</NumberOfRowsExpected>
<!--
This SQL Query validates that the correct information has been tracked in
the History Database which is used by the BPM scenario
-->
<SQLQuery>
</SQLQuery>
<Rows>
<Columns>
<!--
Here we validate the values for some of the key columns returned. This
validates that the correct row of data related to the test case is returned.
Column values can also be supplied dynamically by passing context between
test steps. For more information please see the BizUnit documentation.
-->
<OrderNumber>999</OrderNumber>
<CustNumber>999</CustNumber>
<OrderType>NS</OrderType>
<OrderDetails>New Order</OrderDetails>
</Columns>
</Rows>
</TestStep>
<TestStep assemblyPath="" typeName="BizUnit.DBQueryStep">
<DelayBeforeCheck>0</DelayBeforeCheck>
<NumberOfRowsExpected>1</NumberOfRowsExpected>
<!--
Here we validate that tracking data has been correctly recorded in the
ServiceOrderRequest BAM activity
-->
<SQLQuery>
</SQLQuery>
<Rows>
<Columns>
<!--
-->
<OrderID>999</OrderID>
<CustomerID>999</CustomerID>
<ServiceType>NS</ServiceType>
</Columns>
</Rows>
</TestStep>
<TestStep assemblyPath="" typeName="BizUnit.DBQueryStep">
<DelayBeforeCheck>0</DelayBeforeCheck>
<NumberOfRowsExpected>1</NumberOfRowsExpected>
<!--
Here we validate that tracking data has been correctly recorded in the
Customer Order Request BAM activity
-->
<SQLQuery>
</SQLQuery>
<Rows>
<Columns>
<!--
-->
<OrderID>999</OrderID>
<CustomerID>999</CustomerID>
</Columns>
</Rows>
</TestStep>
<TestStep assemblyPath="" typeName="BizUnit.DBQueryStep">
<DelayBeforeCheck>0</DelayBeforeCheck>
<NumberOfRowsExpected>1</NumberOfRowsExpected>
<!--
Here we validate that data has been correctly recorded in the OrderManager
BAM activity
-->
<SQLQuery>
</SQLQuery>
<Rows>
<Columns>
<!--
-->
<OrderID>999</OrderID>
<CustomerID>999</CustomerID>
<SequenceNum>1</SequenceNum>
</Columns>
</Rows>
</TestStep>
</TestExecution>
<TestCleanup>
<!-- Test cleanup: test cases should always leave the system in the state they
found it -->
</TestCleanup>
</TestCase>
6. Verify that the three BAM activities that are part of the BPM scenario have been deployed to the
BizTalk group as part of the BPM scenario. To do this, start a command prompt, change directories
to the \Program Files\Microsoft BizTalk Server 2006\Tracking directory and then run the following
command:
BM get-activities
This command returns all of the BAM activities that have been deployed to the BizTalk Server group
which should include the following BAM activities:
Customer Order Request
OrderManager
ServiceOrderRequest
Note that the CSRWebForm_BizUnit_Data_Validation.xml file uses the
"BizUnit.DBQueryStep" test step to validate that the correct information is captured in the History
database and that the three BAM activities contain the correct monitoring data.
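For reference, a complete DBQueryStep entry combines the elements shown in the listing above with a connection string and a SQL query. The following is an illustrative sketch only: the connection string, table name, and query text are assumptions rather than the scenario's actual values, and the element names follow common BizUnit conventions.

```xml
<TestStep assemblyPath="" typeName="BizUnit.DBQueryStep">
  <DelayBeforeCheck>0</DelayBeforeCheck>
  <!-- Connection string and query below are illustrative assumptions -->
  <ConnectionString>server=(local);database=SouthRidgeVideoHistory;integrated security=sspi;</ConnectionString>
  <SQLQuery>
    <RawSQLQuery>SELECT * FROM orderlog WHERE OrderNumber = '999' AND CustNumber = '999'</RawSQLQuery>
  </SQLQuery>
  <NumberOfRowsExpected>1</NumberOfRowsExpected>
  <Rows>
    <Columns>
      <OrderNumber>999</OrderNumber>
      <CustNumber>999</CustNumber>
    </Columns>
  </Rows>
</TestStep>
```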
7. You are now ready to run the test. To do this, in Visual Studio, click the Test menu, select
Windows, and click Test View to display the Test View window. Then right-click the
CSRWebForm_BizUnit_Data_Validation test and select Run Selection.
The Test Results window should display detailed information for each test stage. Upon completion
of the test, a result of Passed should be displayed. If you right-click the test result and click View
Test Results Details, a console output is displayed, which provides detailed information about
each BizUnit step that was completed. The test results details should be examined to ensure that
each of the BizUnit.DBQueryStep test steps has completed. The detailed output that BizUnit
generates can be used to quickly debug test case failures.
So far we have illustrated how to use a Visual Studio Web Test to generate input for the CSR Web form
and to run a BizUnit test, which uses tracking data to verify data is submitted to the system correctly.
However, to accommodate full functional testing, we must also implement test cleanup
functionality. As mentioned earlier in the topic, it is important that each test can run independently
of other tests.
The next section examines how to integrate Visual Studio Web testing and BizUnit to provide cleanup
and verification functionality. To do this, we will expand on the previous test and utilize the Cancel
Standard Service option provided by the CSR Web form.
Test Cleanup: Utilizing Visual Studio Web Testing and BizUnit
In this section we will create a Web test that will be used to cancel the order created previously. We will
then use BizUnit to validate that the OrderHistory is updated successfully. To do this, follow these steps:
1. In Solution Explorer, right-click the BPM Tests project, click Add, and then click New Test to
display the Add New Test dialog box.
2. In the Add New Test dialog box, click Web Test, enter CSRWebFormCancelOrder.webtest for
Test Name, and then click OK. This displays a CSRWebFormCancelOrder.webtest form in the
Visual Studio IDE and also opens a Web Test Recorder interface in Internet Explorer.
Enter the address of the CSR Web form into the address bar of the Web Test Recorder interface in
Internet Explorer (http://localhost/CSRWebApp/CSRMainForm.aspx).
3. Enter the following values into the Web form, and then click the Submit Order button.
Field Value
Customer ID 999
Order ID 999
Sequence Number 1
Service Type Code Cancel Standard Service
4. In the Web Test Recorder section, click Stop and switch back to the Visual Studio IDE. The Web
test configuration will be displayed in Visual Studio.
In the Web test configuration window, click to expand each node under
CSRWebFormCancelOrder to display the Form Post Parameters for the Web test. Note that
the form post parameter for OrderTypeCodeDropDownList is “XS,” which is the internal code
used by Southridge to represent cancellation of a standard cable service order.
5. In the C:\BizUnit Walkthrough\BPM E2E Scenario Tests\BPM Tests\TestCases directory of the BPM
Tests project, create the file CancelOrder_CSRWebForm_Bizunit_Data_Validation.xml. This file is
referenced by the CancelOrder_CSRWebForm_BizUnit_Data_Validation() test method defined in
the BPMTests class in the file BPMTests.cs. This test configuration file is used to check that an “XS”
type (Cancel Standard Service) request is recorded in the History database and the Service Order
BAM activity. In this instance, it is only necessary to check the Service Order BAM Activity. The
contents of this file are listed below:
<TestCase testName="CancelOrder_CSRWebForm_BizUnit_Data_Validation">
<TestSetup>
<TestStep assemblyPath="" typeName="BizUnit.DelayStep">
<Delay>20000</Delay>
<!--
Set <Delay> to a value large enough to allow enough time for the BPM
scenario to update the History database,
the Tracking database, and the BAM database or else the test steps that
validate the updated values
in these databases will fail. For most environments a value of 20000 (20
seconds) will be sufficient.
-->
</TestStep>
</TestSetup>
<TestExecution>
<TestStep assemblyPath="" typeName="BizUnit.DBQueryStep">
<DelayBeforeCheck>0</DelayBeforeCheck>
<NumberOfRowsExpected>1</NumberOfRowsExpected>
<!--
This SQL Query validates that the correct information has been tracked in
the History Database which is used by the BPM scenario
-->
<SQLQuery>
</SQLQuery>
<Rows>
<Columns>
<!--
Here we validate the values for some of the key columns returned. This
validates that the correct row of data related to the test case is returned.
Column values can also be supplied dynamically by passing context between
test steps. For more information please see the BizUnit documentation.
-->
<OrderNumber>999</OrderNumber>
<CustNumber>999</CustNumber>
<OrderType>XS</OrderType>
<OrderDetails>New Order</OrderDetails>
</Columns>
</Rows>
</TestStep>
<TestStep assemblyPath="" typeName="BizUnit.DBQueryStep">
<DelayBeforeCheck>0</DelayBeforeCheck>
<NumberOfRowsExpected>1</NumberOfRowsExpected>
<!--
Here we validate that tracking data has been correctly recorded in the
ServiceOrderRequest BAM activity
-->
<SQLQuery>
</SQLQuery>
<Rows>
<Columns>
<!--
-->
<OrderID>999</OrderID>
<CustomerID>999</CustomerID>
<ServiceType>XS</ServiceType>
</Columns>
</Rows>
</TestStep>
</TestExecution>
<!-- Test cleanup: test cases should always leave the system in the state they
found it -->
<TestCleanup>
</TestCleanup>
</TestCase>
bizUnit1.RunTest();
bizUnit2.RunTest();
Conclusion
Functional testing plays an important role in the performance tuning and optimization process. During
a performance lab, changes often need to be made that can affect the functionality of the
system. An example would be improving the latency of an orchestration by replacing a send port
with a .NET assembly that makes in-line sends. When making changes such as this, it is important that
you do not impact the functionality of the system. By automating functional testing of your solution, you
reduce the risk associated with any change to the solution because you can fully test your
system for both positive and negative (failure) scenarios. Automated functional testing becomes critical
to the success of a project because manual functional testing becomes burdensome as the complexity
and scope of the project increase.
BizUnit provides over 70 predefined test steps, and the BizUnit framework is extensible, so you can add
your own steps relatively easily. It is important to use the right tool for the job: Visual Studio Web testing
is an excellent tool for functional testing of Web services, and BizUnit's test steps make it very easy to
validate tracking data, start and stop BizTalk hosts, monitor performance counters, and much
more.
For more information and best practices about unit testing with the Visual Studio 2005 testing tools, see
the following documentation:
Working with Ordered Tests - http://go.microsoft.com/fwlink/?LinkId=117837
Working with Unit Tests - http://go.microsoft.com/fwlink/?LinkId=62412
Unit Test Walkthrough - http://go.microsoft.com/fwlink/?LinkId=117843
Optimizing Performance
A default installation of the Windows operating system, SQL Server, and BizTalk Server leaves
significant room for performance tuning in a production BizTalk Server environment.
This section provides specific performance optimizations that should be applied when deploying a
production BizTalk Server solution.
In This Section
Optimizing Operating System Performance
Optimizing Network Performance
Optimizing Database Performance
Optimizing BizTalk Server Performance
Windows PowerShell Scripts
Important
Some of the recommendations in this topic require modifications to the Windows Server
registry. Incorrect use of Registry Editor may cause problems requiring you to reinstall your
operating system. Use Registry Editor at your own risk. For more information about how to
back up, restore, and modify the registry, see the Microsoft Knowledge Base article
"Description of the Microsoft Windows registry" at http://go.microsoft.com/fwlink/?LinkId=62729.
Install the latest BIOS, storage area network (SAN) drivers, network
adapter firmware and network adapter drivers
Hardware manufacturers regularly release BIOS, firmware, and driver updates that can improve
performance and availability for the associated hardware. Visit the hardware manufacturer’s Web site to
download and apply updates for the following hardware components on each computer in the BizTalk
Server environment:
1. BIOS updates
2. SAN drivers (if using a SAN)
3. NIC firmware
4. NIC drivers
Note
On most computers, hyper-threading is configured via the computer BIOS. Steps will vary by
computer type and manufacturer.
Note
A DPC is defined as a queued call to a kernel-mode function that will usually be executed at a
later time. An ISR is defined as a routine whose purpose is to service a device when it
generates an interrupt. For more information about deferred procedure calls and interrupt
service routines, see the Windows Driver Kit documentation at http://go.microsoft.com/fwlink/?
LinkId=84418.
Interrupt Filter Configuration Tool
On Windows Server 2003 based multiprocessor computers, the default behavior of the interrupt
controller is to assign device interrupts to any available processor. When network connections and file
server sessions for a given network adapter are filtered/bound/partitioned to run on a specific set of
processors, rather than any available processor, the performance and scalability of the associated
network processing is improved. Large BizTalk Server solutions often employ multiprocessor
SQL Server computers with multiple network adapters, for which interrupt filtering may be particularly
beneficial.
Interrupt filtering using Intfiltr should always be evaluated in a test environment before employing in a
production environment. The hardware, operating system and application configuration of the test
environment should approximate the production environment as closely as possible. This will allow you
to test various permutations of interrupt filtering and determine the extent that interrupt filtering will
increase performance.
It is recommended that you disable hyper-threading before configuring Intfiltr on a computer with CPUs
that support hyper-threading. This will ensure that interrupts are assigned to physical processors
rather than logical processors. Assigning interrupt affinity to logical processors that refer to the same
physical processor will not increase performance and could even degrade system performance.
The Interrupt Filter Configuration tool is available with the Server 2003 Resource Kit Tools at
http://go.microsoft.com/fwlink/?LinkId=106079.
Note
Either the FORMAT command line tool or the Disk Management tool is required to specify an
allocation unit size larger than 4096 bytes (4 KB) when formatting volumes. Windows Explorer
will only format up to this threshold. The CHKDSK command can be used to confirm the current
allocation unit size of a volume; however, it must scan the entire volume before the desired
information is displayed (shown as Bytes in each allocation unit).
Monitor drive space utilization
The less data a disk has on it, the faster it will operate. This is because on a well-defragmented drive,
data is written as close to the outer edge of the disk as possible, where the disk spins fastest and
yields the best performance.
Disk seek time is normally considerably longer than read or write activity. As noted above, data is
initially written to the outside edge of a disk. As demand for disk storage increases and free space
shrinks, data is written closer to the center of the disk. Seek time increases as the head moves away
from the edge, and once the data is found it takes longer to read, hindering disk I/O
performance.
This means that monitoring disk space utilization is important not just for capacity reasons but for
performance also.
As a rule of thumb, aim to keep disk free space between 20% and 25% of total disk
space. If free disk space drops below this threshold, disk I/O performance will be negatively
impacted.
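The rule of thumb above is easy to automate in a monitoring script. A minimal sketch follows; the 20% floor is taken from the guidance above, while the function name and parameters are illustrative.

```python
def disk_free_space_ok(total_bytes: int, free_bytes: int, floor: float = 0.20) -> bool:
    """Return True while free space stays at or above the recommended
    20% floor of the 20-25% free-space target band."""
    if total_bytes <= 0:
        raise ValueError("total_bytes must be positive")
    return free_bytes / total_bytes >= floor

# A 500 GB volume with 125 GB free (25%) is fine; 90 GB free (18%) is not.
print(disk_free_space_ok(500, 125))  # True
print(disk_free_space_ok(500, 90))   # False
```

In practice the totals would come from an API such as Python's shutil.disk_usage rather than hard-coded numbers.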
Note
Test the server for correct operation after you disable each service to make sure you did not
disable a service you want to continue to use.
If the server is a member of a Windows Server 2003 domain, which BizTalk Servers typically
are, you must have the TCP/IP NetBIOS Helper service running on your system to correctly apply
Group Policy to the computer.
When you disable the DHCP client, the DHCP client stops DNS dynamic update protocol
registration and requires manual DNS records to be added for this client in the DNS server.
Important
If you do remove the paging file from the boot partition, Windows cannot create a crash
dump file (MEMORY.DMP) in which to write debugging information in the event that a
kernel mode STOP error occurs. If you do require a crash dump file, then you will have no
option but to leave a paging file of at least the size of physical memory + 1 MB on the boot
partition.
5. Manually set the size of the paging file – Manually setting the size of the paging file typically
provides better performance than allowing the server to size it automatically or having no paging file
at all. Best-practice tuning is to set the initial (minimum) and maximum size settings for the paging
file to the same value. This ensures that no processing resources are lost to the dynamic resizing of
the paging file, which can be intensive. This is especially true given that this resizing activity
typically occurs when the memory resources on the system are already becoming constrained.
Setting the same minimum and maximum page file size values also ensures the paging area on a
disk is one single, contiguous area, improving disk seek time. Windows Server 2003 automatically
recommends a total paging file size equal to 1.5 times the amount of installed RAM. On servers
with adequate disk space, the paging file on all disks combined should be configured up to twice
the physical memory for optimal performance.
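The sizing guidance above reduces to simple arithmetic. The sketch below encodes the 1.5x recommendation and the 2x ceiling stated in the text; the function name is illustrative.

```python
def paging_file_mb(ram_mb: int, factor: float = 1.5, ceiling: float = 2.0) -> int:
    """Fixed paging-file size in MB: set initial and maximum to the same
    value, 1.5x RAM by default, never more than 2x RAM across all disks."""
    return int(min(ram_mb * factor, ram_mb * ceiling))

# For a server with 8 GB (8192 MB) of RAM:
initial = maximum = paging_file_mb(8192)
print(initial)  # 12288 MB, i.e. 12 GB
```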
The AdditionalDelayedWorkerThreads value increases the number of delayed worker threads created
for the specified work queue. Delayed worker threads process work items that are not considered
time-critical and that can have their memory stack paged out while waiting for work items. An
insufficient number of threads will reduce the rate at which work items are serviced; a value that is
too high will consume system resources unnecessarily.
AdditionalDelayedWorkerThreads
Key: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Executive
Value: AdditionalDelayedWorkerThreads
Data Type: REG_DWORD
Range: 0x0 (default) to 0x10 (16)
Recommended value: 0x10 (16)
Value exists by default? Yes
The AdditionalCriticalWorkerThreads value increases the number of critical worker threads created
for a specified work queue. Critical worker threads process time-critical work items and have their stack
present in physical memory at all times. An insufficient number of threads will reduce the rate at which
time-critical work items are serviced; a value that is too high will consume system resources
unnecessarily.
AdditionalCriticalWorkerThreads
Key: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Executive
Value: AdditionalCriticalWorkerThreads
Data Type: REG_DWORD
Range: 0x0 (default) to 0x10 (16)
Recommended value: 0x10 (16)
Value exists by default? Yes
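Both worker-thread values above can be applied together with a .reg file such as the following sketch. The values match the recommendations in the tables (0x10 = 16); as with all registry changes in this guide, test before applying in production.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Executive]
"AdditionalDelayedWorkerThreads"=dword:00000010
"AdditionalCriticalWorkerThreads"=dword:00000010
```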
Important
Only disable this feature in an intranet scenario when you are sure you will not suffer from
actual denial of service attacks.
For more information about disabling Windows Server denial of service checking, see Microsoft
Knowledge Base article 899599, "A BizTalk Server Host instance fails, and a 'General Network' error is
written to the Application log when the BizTalk Server-based server processes a high volume of
documents" at http://support.microsoft.com/kb/899599. Follow the instructions in this article to create
the SynAttackProtect registry entry on computers running SQL Server that host BizTalk Server
databases and on any computers running BizTalk Server on Windows Server 2003 SP1 or later.
Registry settings that govern the level of denial of service attack protection - In certain scenarios
you may want to maintain denial of service protection but reduce how aggressively the denial of service
functionality is applied. It is possible to tune the default behavior of the denial of service protection
feature by following these steps:
1. Ensure the SynAttackProtect registry entry is set to a REG_DWORD value of 1 as described at
http://go.microsoft.com/fwlink/?LinkId=111477.
2. Configure the TcpMaxHalfOpen registry entry as described at http://go.microsoft.com/fwlink/?
LinkId=111478.
3. Configure the TcpMaxHalfOpenRetried registry entry as described at
http://go.microsoft.com/fwlink/?LinkId=111479.
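The first of the three entries above can be captured in a .reg fragment like the following sketch. Values for TcpMaxHalfOpen and TcpMaxHalfOpenRetried are deliberately omitted because appropriate values depend on your environment and should be taken from the linked articles.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"SynAttackProtect"=dword:00000001
; Add TcpMaxHalfOpen and TcpMaxHalfOpenRetried here using values from the
; referenced articles for your environment.
```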
NtfsMftZoneReservation
Key: HKLM\SYSTEM\CurrentControlSet\Control\FileSystem
Value: NtfsMftZoneReservation
Data Type: REG_DWORD
Range: 1–4
Default value: 1
Recommended value: 1 if volumes typically store relatively few files;
2 or 3 if volumes typically store a moderate number of files;
4 if volumes typically store a large number of files.
Value exists by default? No, needs to be added.
Note
These values do not exist in the registry by default.
The MaxWorkItems value specifies the maximum number of receive buffers, or work items, the Server
service is permitted to allocate at one time. If this limit is reached, then the transport must initiate flow
control, which can significantly reduce performance.
MaxWorkItems
Key: HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Value: MaxWorkItems
Data Type: REG_DWORD
Range: 0 – 65535
Default value: Configured dynamically
Recommended value: 8192
Note
The MaxWorkItems value must be at least four times as large
as the MaxMpxCt value.
Value exists by default? No, needs to be added.
The MaxMpxCt value enforces the maximum number of simultaneous outstanding requests from a
particular client to a server. During negotiation of a Server Message Block between the client and the
server, this value is passed to the client's redirector where the limit on outstanding requests is enforced.
A higher value can increase server performance but requires more use of server work items
(MaxWorkItems).
MaxMpxCt
Key: HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Value: MaxMpxCt
Data Type: REG_DWORD
Range: 0 – 65535
Default value: 50
Recommended value: 2048
Note
The MaxWorkItems value must be at least four times as large
as the MaxMpxCt value.
Value exists by default? No, needs to be added.
The MaxCmds value specifies the maximum number of network control blocks the redirector can
reserve. The value of this entry coincides with the number of execution threads that can be outstanding
simultaneously. Increasing this value will improve network throughput, especially if you are running
applications that perform more than 15 operations simultaneously. This value is set on the SMB client
computer.
MaxCmds
Key: HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
Value: MaxCmds
Data Type: REG_DWORD
Range: 0 – 65535
Default value: 50
Recommended value: 2048
Value exists by default? No, needs to be added.
Note
Start with the default or recommended values for these registry keys, and increase the value in
small increments as needed. The more outstanding connections that exist, the more memory
resources will be used by the server. If you set the values too high, the server could run out of
resources such as paged pool memory.
For more information about setting these values, see Microsoft Knowledge Base article "How to
troubleshoot Event ID 2021 and Event ID 2022" at http://support.microsoft.com/kb/317249, and
Microsoft Knowledge Base article "'The network BIOS command limit has been reached' error message
in Windows Server 2003, in Windows XP, and in Windows 2000 Server" at
http://support.microsoft.com/kb/810886.
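The three SMB-related values above can be applied with a .reg sketch such as the following. Note that the recommended MaxWorkItems of 8192 (0x2000) is exactly four times the recommended MaxMpxCt of 2048 (0x800), satisfying the constraint called out in the notes.

```reg
Windows Registry Editor Version 5.00

; Server side (SMB server)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"MaxWorkItems"=dword:00002000
"MaxMpxCt"=dword:00000800

; Client side (SMB redirector)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"MaxCmds"=dword:00000800
```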
Caution
Before disabling short name generation, ensure there are no DOS or 16-bit applications
running on the server that require 8.3 file names.
NTFSDisable8dot3NameCreation
Key: HKLM\SYSTEM\CurrentControlSet\Control\FileSystem
Value: NTFSDisable8dot3NameCreation
Data Type: REG_DWORD
Range: 0–1
Default value: 0
Recommended value: 1
Value exists by default? Yes
In Windows Server 2003, this value can be set by using the following command:
fsutil behavior set disable8dot3 1
Size
Key: HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
Value: Size
Data Type: REG_DWORD
Recommended value: 3
Value exists by default? Yes
LargeSystemCache
Key: HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management
Value: LargeSystemCache
Data Type: REG_DWORD
Recommended value: 0
Value exists by default? Yes
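The Size and LargeSystemCache settings above can likewise be applied together; a .reg sketch using the recommended values:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"Size"=dword:00000003

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"LargeSystemCache"=dword:00000000
```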
DontVerifyRandomDrivers
Key: HKLM\SYSTEM\CurrentControlSet\Control\FileSystem
Value: DontVerifyRandomDrivers
Data Type: REG_DWORD
Range: 0–1
Default value: 0
Recommended value: 1
Value exists by default? No, needs to be added.
NTFSDisableLastAccessUpdate
Key: HKLM\SYSTEM\CurrentControlSet\Control\FileSystem
Value: NTFSDisableLastAccessUpdate
Data Type: REG_DWORD
Range: 0–1
Default value: 0
Recommended value: 1
Value exists by default? No, needs to be added.
In Windows Server 2003, this value can be set by using the following command:
fsutil behavior set disablelastaccess 1
For more information about disabling NTFS last access update, see the Windows Server 2003
Deployment Guide at http://go.microsoft.com/fwlink/?LinkID=111188
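The FileSystem values recommended in this section can be consolidated into a single .reg sketch. NtfsMftZoneReservation is shown at its default of 1; raise it per the table above if your volumes hold many files.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsMftZoneReservation"=dword:00000001
"NTFSDisable8dot3NameCreation"=dword:00000001
"DontVerifyRandomDrivers"=dword:00000001
"NTFSDisableLastAccessUpdate"=dword:00000001
```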
Note
If the script does not run, or opens in Notepad instead of running, ensure that the
PowerShell execution policy permits running PowerShell scripts. To determine the
current PowerShell execution policy, run the Get-ExecutionPolicy command. To change
the current PowerShell execution policy, run the Set-ExecutionPolicy command.
The operating system optimizations PowerShell script generates a log file named “OSSettings.log”
in the directory from which the script was executed. This log file details which values were changed
and lists the original value as well as the new value. For simplicity's sake, and so that all logs are
accessible from the same place, it is recommended that you place this script on a network file
share and run it from that share on all computers in the BizTalk Server
environment.
Optimizing Network Performance
In a BizTalk Server environment where the BizTalk Server computer(s) are separate from the SQL
Server computer(s), every message processed by BizTalk Server requires communication
over the network. This communication includes considerable traffic between the BizTalk Server
computers and the BizTalk Message Box database(s), the BizTalk Management database(s), the BAM
databases, and other databases. In high-load scenarios, this communication can result in considerable
network traffic and can become a bottleneck, especially when network settings have not been
optimized, not enough network interface cards are installed, or insufficient network bandwidth is
available.
This topic provides some general recommendations for improving network performance and describes
several registry entries that can be modified to optimize the network stack to mitigate the occurrence of
bottlenecks on the network.
Important
Some of the recommendations in this topic require modifications to the Windows registry.
Incorrect use of Registry Editor may cause problems requiring you to reinstall your operating
system. Use Registry Editor at your own risk. For more information about how to back up,
restore, and modify the registry, see the Microsoft Knowledge Base article "Description of the
Microsoft Windows registry" at http://go.microsoft.com/fwlink/?LinkId=62729.
Adjusting network settings to optimal values has been shown to effectively address network bottlenecks
and improve overall network performance in BizTalk Server solutions. This should be done on all
computers involved in the solution, including BizTalk Server computers, SQL Server computers, and
any other server computers.
Note
The most common indicator that network I/O is a bottleneck is the "SQL Server:Wait
Statistics\Network IO waits" performance counter. When the Avg Wait Time value for this counter is
greater than zero on one or more of your SQL Server computers, network I/O is a bottleneck.
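On systems where the Get-Counter cmdlet is available (Windows PowerShell 2.0 and later), this counter can be sampled from the command line. The exact counter path below is an assumption; verify the object, counter, and instance names in Performance Monitor, as they vary by SQL Server version and instance name:

```powershell
# Sample the SQL Server "Network IO waits" wait statistic every 5 seconds.
# NOTE: the counter path is an assumption -- confirm it in Perfmon first.
$path = '\SQLServer:Wait Statistics(Average wait time (ms))\Network IO waits'
Get-Counter -Counter $path -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples | ForEach-Object {
            # A sustained value above zero suggests a network I/O bottleneck.
            '{0}: {1}' -f $_.Path, $_.CookedValue
        }
    }
```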
Note
Steps to tune network adapter drivers vary by manufacturer.
Follow these steps to access settings for network adapters in Windows Server 2003:
1. Click Start, point to Settings, click Control Panel, and then double-click Network Connections.
2. Right-click Local Area Connection (or the name of your network connection), and then click
Properties.
3. On the General tab, click Configure.
4. Click the Advanced tab to access properties that can be configured for the network adapter.
The following properties should be configured for each network adapter in the BizTalk Server
environment:
Note
You apply these settings for each physical network adapter, including the individual network
adapters within a teamed set of network adapters that are configured for aggregation, load
balancing, or fault tolerance. With some teaming software, you might need to apply these
settings to the team also. Note that some network adapters are self-tuning and may not offer
the option to configure parameters manually.
Power Option – Configure the network adapter driver to prevent power management functionality
from turning off the network adapter to save power. This functionality may be useful for client
computers but should seldom, if ever, be used on a BizTalk Server or SQL Server computer.
Fixed Speed/Duplex (do not use AUTO) – It is very important that the network speed, duplex, and
flow control parameters are set to correspond to the settings on the switch to which they are
connected. This mitigates the periodic "auto-synchronization" that can temporarily take
connections offline.
Max Coalesce Buffers – Map registers are system resources used to convert physical addresses
to virtual addresses for network adapters that support bus mastering. Coalesce buffers are
available to the network driver if the driver runs out of map registers. Set this value as high as
possible for maximum performance. On servers with limited physical memory, this may have a
negative impact because coalesce buffers consume system memory. On most systems, however,
the maximum setting can be applied without significantly reducing available memory.
Max Transmit/Send Descriptors and Send Buffers – This setting specifies how many transmit
control buffers the driver allocates for use by the network interface, which directly determines the
number of outstanding packets the driver can have in its "send" queue. Set this value as high as
possible for maximum performance. On servers with limited physical memory, this may have a
negative impact because send buffers consume system memory. On most systems, however, the
maximum setting can be applied without significantly reducing available memory.
Max Receive Buffers – This setting specifies the amount of memory buffer used by the network
interface driver when copying data to the protocol memory. It is normally set to a relatively low
value by default. Set this value as high as possible for maximum performance. On servers with
limited physical memory, this may have a negative impact because receive buffers consume
system memory. On most systems, however, the maximum setting can be applied without
significantly reducing available memory.
All offload options ON – In almost all cases performance is improved by enabling network
interface offload features. Some network adapters provide separate parameters to enable or
disable offloading for send and receive traffic. Offloading tasks from the CPU to the network
adapter can lower CPU usage on the server, which improves overall system performance. The
Microsoft TCP/IP transport can offload one or more of the following tasks to a network adapter that
has the appropriate capabilities:
Checksum tasks – The TCP/IP transport can offload the calculation and validation of IP and
TCP checksums for sends and receives to the network adapter. Enable this option if the
network adapter driver provides this capability.
IP security tasks – The TCP/IP transport can offload the calculation and validation of encrypted
checksums for authentication headers (AH) and encapsulating security payloads (ESP) to the
network adapter. The TCP/IP transport can also offload the encryption and decryption of ESP
payloads to the network adapter. Enable these options if the network adapter driver provides
this capability.
Segmentation of large TCP packets – The TCP/IP transport supports large send offload
(LSO). With LSO, the TCP/IP transport can offload the segmentation of large TCP packets.
Stack Offload – The entire network stack can be offloaded to a network adapter that has the
appropriate capabilities. Enable this option if the network adapter driver provides this capability.
Wake On LAN disabled (unless being used) – Configure the network adapter driver to disable
Wake-on-LAN functionality. This functionality may be useful for client computers but should
seldom, if ever, be used on a BizTalk Server or SQL Server computer.
For more information about tuning network adapters for performance, see the Network Device
Settings section of the "BizTalk Server Database Optimization" whitepaper at
http://go.microsoft.com/fwlink/?LinkID=101578.
Registry settings that can be modified to improve network performance
This section provides a description of recommended values for several registry entries that impact
network performance. These registry entries can be applied manually or can be applied via the
operating system optimization PowerShell script included in Windows PowerShell Scripts.
The DisablePagingExecutive value governs whether Windows will page the NT Executive to disk.
Setting this entry to a value of 1 prevents pageable drivers and system code in the Windows NT
Executive from being paged out to disk. Although this decreases response time on systems with
extremely large amounts of physical memory (RAM), it is critical that enough RAM is installed;
otherwise, the server could be rendered unstable. For more information about the
DisablePagingExecutive registry entry, see http://go.microsoft.com/fwlink/?LinkId=113707.
DisablePagingExecutive
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
Value: DisablePagingExecutive
Data Type: REG_DWORD
Range: 0 to 1
Default value: 0
Recommended value: 1
Value exists by default? Yes
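As a minimal sketch, a recommendation like this one can be applied manually with the registry provider cmdlets (the provided PowerShell script performs the equivalent work with logging). Run elevated; a reboot is required for the change to take effect:

```powershell
# Sketch: apply the DisablePagingExecutive recommendation manually.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'

# Record the original value so the change can be reverted if needed.
$old = (Get-ItemProperty -Path $key).DisablePagingExecutive
"Original DisablePagingExecutive: $old"

Set-ItemProperty -Path $key -Name DisablePagingExecutive -Value 1 -Type DWord
```

The same Get-ItemProperty/Set-ItemProperty pattern applies to the other registry entries in this topic; entries that do not exist by default are created with New-ItemProperty instead.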
The IRPStackSize value specifies the number of stack locations in I/O request packets (IRPs) that are
used by Windows 2000 Server, by Windows Server 2003, and by Windows XP. You may have to
increase this number for certain transports, for media access control (MAC) drivers, or for file system
drivers. Each stack uses 36 bytes of memory for each receive buffer. For more information about the
IRPStackSize registry entry, see Microsoft Knowledge Base Article 285089, “Description of the
IRPStackSize parameter in Windows 2000, in Windows XP, and in Windows Server 2003” at
http://support.microsoft.com/kb/285089.
IRPStackSize
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Value: IRPStackSize
Data Type: REG_DWORD
Range: 11 to 50
Default value: 15
Recommended value: 32
Value exists by default? No, needs to be added.
The SizReqBuf value specifies the size, in bytes, of the raw receive buffers (work items) that the Server
service uses. Small work items use less memory, but large work items can improve performance. The
value that works best in a particular environment depends on the configuration of that environment. For
an optimal value, you might try increasing the value as high as 4410 (hexadecimal); this has been
shown to work well in a fairly standard Ethernet environment. However, setting a value over
4000 hexadecimal has been seen to cause other issues on some servers. Therefore, the default
starting point for the SizReqBuf entry should be 4000 hexadecimal (16384 decimal). By default, the
value for this entry is 4356 bytes on servers with less than 512 MB of memory; on servers with more
than 512 MB of memory, the value is increased to 16384 bytes (16 KB). A larger receive buffer
can improve performance on directory queries and similar commands, but at the price of more memory
per work item.
Increasing the SizReqBuf value can increase performance significantly in a high-latency environment.
However, note that increasing the SizReqBuf value also increases the non-paged pool memory used
by the Server service. If you increase the SizReqBuf value, monitor non-paged pool memory to make
sure that the change does not adversely impact the performance of the server.
SizReqBuf
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Value: SizReqBuf
Data Type: REG_DWORD
Range: 1 to 65535
Default value: 16,384 (bytes) on servers with 512 MB or more of physical memory; 4,356 (bytes) on servers with less than 512 MB of physical memory.
Recommended value: 17,424 (bytes) on servers with 512 MB or more of physical memory; 4,356 (bytes) on servers with less than 512 MB of physical memory.
Value exists by default? No, needs to be added.
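Since the discussion above mixes hexadecimal and decimal forms of the same values, the conversions are easy to verify; PowerShell accepts hexadecimal literals directly:

```powershell
# Hexadecimal-to-decimal conversions for the SizReqBuf values discussed above.
0x4000             # 16384 -- default on servers with 512 MB or more of memory
0x4410             # 17424 -- recommended value for such servers
'{0:X}' -f 16384   # 4000  -- decimal back to hexadecimal
```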
Review the following information to configure TCP/IP registry settings for optimal performance:
"Avoiding TCP/IP Port Exhaustion" at http://go.microsoft.com/fwlink/?LinkId=101610.
The "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" section of the "BizTalk
Server Database Optimization" whitepaper at http://go.microsoft.com/fwlink/?LinkID=101578.
Optimizing Network Performance.
The DefaultTTL value specifies the default time-to-live (TTL) value set in the header of outgoing IP
packets. The TTL determines the maximum amount of time that an IP packet may live in the network
without reaching its destination. It is effectively a limit on the number of links on which an IP packet is
allowed to travel before being discarded.
DefaultTTL
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: DefaultTTL
Data Type: REG_DWORD
Range: 1 to 255 (seconds)
Default value: 128
Recommended value: 64
Value exists by default? No, needs to be added.
The EnablePMTUDiscovery value governs whether TCP will attempt to discover the Maximum
Transmission Unit (MTU), or largest packet size for the entire path to a remote host. By discovering the
Path MTU (PMTU) and limiting TCP segments to this size, TCP can eliminate packet fragmentation at
routers along the path that connect networks with different MTUs. Fragmentation adversely affects TCP
throughput and causes network congestion. Setting this parameter to 0 (or off) causes an MTU of 576
bytes to be used for all connections to destinations other than the local subnet.
Important
This entry should not be set to a value of 1 if the server is directly exposed to potential
attackers.
EnablePMTUDiscovery
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: EnablePMTUDiscovery
Data Type: REG_DWORD
Range: 0 to 1
Default value: 1
Recommended value: 1
Value exists by default? No, needs to be added.
The EnablePMTUBHDetect value governs whether TCP tries to detect black hole routers during the
Path MTU (maximum transmission unit) discovery process. Enabling black hole detection increases the
maximum number of times TCP retransmits a given segment. If the value of this entry is 1, TCP
recognizes when it has transmitted the same segment several times without receiving an
acknowledgement. It reduces the maximum segment size (MSS) to 536 bytes, and it sets the Don't
Fragment bit. If, as a result, receipt of the segment is acknowledged, TCP continues this practice in all
subsequent transmissions on the connection.
EnablePMTUBHDetect
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: EnablePMTUBHDetect
Data Type: REG_DWORD
Range: 0 to 1
Default value: 0
Recommended value: 1
Value exists by default? No, needs to be added.
The TcpMaxDupAcks value determines the number of duplicate ACKs that must be received for the
same sequence number of sent data before fast retransmit is triggered to resend the segment that has
been dropped in transit. If you set the value of this entry to 1, then the system retransmits a segment
when it receives an ACK for a segment with a sequence number that is less than the number of the
segment currently being sent.
When data arrives with a sequence number that is greater than expected, the receiver assumes that
data with the expected number was dropped, and it immediately sends an ACK with the ACK number
set to the expected sequence number. The receiver sends ACKs set to the same missing number each
time it receives a TCP segment that has a sequence number greater than expected. The sender
recognizes the duplicate ACKs and sends the missing segment. The recommended value of 2 is also
the default value but Windows Server 2003 does not add this entry to the registry, so it should be
added.
TcpMaxDupAcks
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: TcpMaxDupAcks
Data Type: REG_DWORD
Range: 1 to 3
Default value: 2
Recommended value: 2
Value exists by default? No, needs to be added.
The Tcp1323Opts value governs whether TCP uses the timestamping and window scaling features
described in RFC 1323, TCP Extensions for High Performance. Window scaling permits TCP to
negotiate a scaling factor for the TCP receive window size, allowing for a very large TCP receive
window of up to 1 GB. The TCP receive window is the amount of data that the sending host can send at
one time on a connection. Timestamps help TCP measure round trip time (RTT) accurately in order to
adjust retransmission timeouts. The Timestamps option provides two timestamp fields of 4 bytes each
in the TCP header, one to record the time the initial transmission is sent and one to record the time on
the remote host. This entry is a 2-bit bitmask. The lower bit determines whether scaling is enabled; the
higher bit determines whether timestamps are enabled.
Tcp1323Opts
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: Tcp1323Opts
Data Type: REG_DWORD
Range: 0 to 3
Default value: 3
Recommended value: 1 (also consider setting to a value of 3 if high packet loss / retransmits are occurring).
Value exists by default? No, needs to be added.
The SackOpts value governs whether the Selective Acknowledgment (SACK) feature of
Windows Server 2003 TCP/IP is enabled. SACK is an optimizing feature based upon RFC 2018, TCP
Selective Acknowledgement Options. SACK permits receipt acknowledgement of individual blocks of
data in a continuous sequence, rather than just the last sequence number. When SACK is enabled, the
recipient can tell the sender that one or more data blocks are missing from the middle of a sequence,
and the sender can retransmit only the missing data.
SackOpts
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: SackOpts
Data Type: REG_DWORD
Range: 0 to 1
Default value: 1
Recommended value: 1
Value exists by default? No, needs to be added.
The MaxFreeTcbs value determines the number of TCP control blocks (TCBs) the system creates to
support active connections. Because each connection requires a control block, this value determines
how many active connections TCP can support simultaneously. If all control blocks are used and more
connection requests arrive, TCP can prematurely release connections in the TIME_WAIT state in order
to free a control block for a new connection.
Note
If the value of this entry is increased, the MaxHashTableSize value should also be
increased.
MaxFreeTcbs
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: MaxFreeTcbs
Data Type: REG_DWORD
Range: 0 to 4294967295
Default value: Varies with the system and the amount of physical memory on the computer. For more information, see "Appendix A: TCP/IP Configuration Parameters" at http://go.microsoft.com/fwlink/?LinkId=113716.
Recommended value: 65535
Value exists by default? No, needs to be added.
The MaxHashTableSize value controls how fast the system can find a TCB and should be increased if
the MaxFreeTcbs value is increased from its default value.
Note
This value should be set to a power of 2 (for example, 512, 1024, 2048, and so on.) If this value
is not a power of 2, the system configures the hash table to the next power of 2 value (for
example, a setting of 513 is rounded up to 1024).
MaxHashTableSize
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: MaxHashTableSize
Data Type: REG_DWORD
Range: 64 to 65536
Default value: 512
Recommended value: 65536
Value exists by default? No, needs to be added.
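The power-of-2 rounding described in the note above can be sketched in a few lines (an illustration of the documented behavior, not how Windows computes it internally):

```powershell
# Round a MaxHashTableSize setting up to the next power of 2,
# mirroring the rounding behavior described in the note above.
function Get-NextPowerOfTwo([uint32]$n) {
    $p = 1
    while ($p -lt $n) { $p = $p * 2 }
    return $p
}
Get-NextPowerOfTwo 513    # 1024
Get-NextPowerOfTwo 65536  # 65536 (already a power of 2)
```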
The MaxUserPort value controls the maximum port number used when an application requests any
available user port from the system. Normally, short-lived ports are allocated in the range from 1024
through 5000. Setting this parameter to a value outside of the valid range causes the nearest valid
value to be used (5000 or 65534).
MaxUserPort
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: MaxUserPort
Data Type: REG_DWORD
Range: 5000 to 65534
Default value: 5000
Recommended value: 65534
Value exists by default? No, needs to be added.
The TcpTimedWaitDelay value determines the length of time that a connection stays in the
TIME_WAIT state when being closed. While a connection is in the TIME_WAIT state, the socket pair
cannot be reused. This is also known as the 2MSL state because the value should be twice the
maximum segment lifetime on the network. For more information, see Internet RFC 793 at
http://go.microsoft.com/fwlink/?LinkId=113719.
TcpTimedWaitDelay
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: TcpTimedWaitDelay
Data Type: REG_DWORD
Range: 30 to 300 (seconds)
Default value: 240
Recommended value: 30
Value exists by default? No, needs to be added.
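Taken together, MaxUserPort and TcpTimedWaitDelay bound the sustainable rate of outbound connections to a given remote endpoint, since each closed connection holds an ephemeral port for the TIME_WAIT interval. A rough back-of-the-envelope check (illustrative arithmetic only):

```powershell
# Approximate outbound connection-rate ceiling per remote endpoint:
# (ephemeral ports available) / (seconds each port is held in TIME_WAIT)
$defaultRate = (5000 - 1024) / 240    # defaults: ~16.6 connections/sec
$tunedRate   = (65534 - 1024) / 30    # recommended values: ~2150 connections/sec
"Default: $([math]::Round($defaultRate, 1)) conn/sec"
"Tuned:   $([math]::Round($tunedRate, 1)) conn/sec"
```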
The GlobalMaxTcpWindowSize value specifies the maximum size of the TCP receive window. The
receive window specifies the number of bytes that a sender can transmit without receiving an
acknowledgment. In general, larger receive windows improve performance over high-latency, high-
bandwidth networks. For greatest efficiency, the receive window should be an even multiple of the TCP
Maximum Segment Size.
The TCP/IP stack of Windows Server 2003 was designed to tune itself in most environments. Instead of
using a fixed size for the receive window, TCP negotiates for and adjusts to an even increment of the
maximum segment size. Matching the receive window to even increments of the maximum segment
size increases the percentage of full-sized TCP segments used during bulk data transmission.
Note
Setting this entry to a value greater than 64 KB can only be achieved when connecting to other
systems that support window scaling as described in Internet RFC 1323. For more information
about RFC 1323, see http://go.microsoft.com/fwlink/?LinkId=84406.
GlobalMaxTcpWindowSize
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: GlobalMaxTcpWindowSize
Data Type: REG_DWORD
Range: 0 to 1073741823
Default value: This value does not exist by default
Recommended value: 65535
Value exists by default? No, needs to be added.
The NumTcpTablePartitions value controls the number of TCB table partitions. The TCB table can be
partitioned to improve scalability on multi-processor systems by reducing contention on the TCB table.
This value should not be modified without a careful performance study. A suggested maximum value is
(number of CPUs) * 2 (not counting hyper-threaded CPUs).
NumTcpTablePartitions
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: NumTcpTablePartitions
Data Type: REG_DWORD
Range: 1 to 65535
Default value: 4
Recommended value: Number of physical CPUs or physical CPU cores * 2 (not counting hyper-threaded CPUs)
Value exists by default? No, needs to be added.
The TcpAckFrequency value specifies the number of ACKs that will be outstanding before the delayed
ACK timer is ignored. The TcpAckFrequency value can only be set after installing Hotfix 815230,
described in Microsoft Knowledge Base article 815230, “Changing the TcpAckFrequency value to 1
does not have any effect” at http://support.microsoft.com/kb/815230.
Caution
Do not change the value of this entry before carefully studying the effect of different values in a
test environment.
TcpAckFrequency
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: TcpAckFrequency
Data Type: REG_DWORD
Range: 0 to 255
Default value: 2
Recommended value: 5 for 100 MB networks; 13 for 1 GB networks
Value exists by default? No, needs to be added.
The SynAttackProtect value specifies whether the SYN flooding attack protection feature of TCP/IP is
enabled. The SYN flooding attack protection feature of TCP detects symptoms of denial-of-service
(DoS) attacks (also known as SYN flooding), and it responds by reducing the time that the server
spends on connection requests that it cannot acknowledge. For more information about the
SynAttackProtect registry entry, see the "Disable Windows Server 2003 Service Pack 1 and Service
Pack 2 denial of service checking" section of Optimizing Operating System Performance.
SynAttackProtect
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: SynAttackProtect
Data Type: REG_DWORD
Range: 0 to 1
Default value: 1 for Windows Server 2003 SP1 and later, 0 otherwise
Recommended value: 0 (only set this on systems with Web exposure if other hardware or software is providing denial-of-service (DoS) attack protection)
Value exists by default? No, needs to be added.
The MTU value specifies the size of the maximum transmission unit (MTU) that TCP/IP uses for the
network interface. The value of this entry takes precedence over the MTU that the network adapter
detects dynamically. For more information about the MTU value, see “Appendix A: TCP/IP Configuration
Parameters” at http://go.microsoft.com/fwlink/?LinkId=113716.
MTU
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<interfaceGUID>
Value: MTU
Data Type: REG_DWORD
Range: 88 to the dynamically determined MTU (in bytes)
Default value: 4294967295
Recommended value: Determine the optimal MTU value as described in the "Find the Optimal MTU" section below, under "Applying registry settings with the network optimization Windows PowerShell script".
Value exists by default? No, needs to be added.
The ForwardBufferMemory value specifies the size of the buffer that IP allocates for storing packet
data in the router packet queue. Because packet queue data buffers are 256 bytes long, the value of
this entry must be a multiple of 256.
ForwardBufferMemory
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: ForwardBufferMemory
Data Type: REG_DWORD
Range: 0 to 4294967295 (bytes, in 256-byte increments)
Default value: 74240
Recommended value: Set to 100 * the optimal MTU value, as described in the "Find the Optimal MTU" section below, under "Applying registry settings with the network optimization Windows PowerShell script".
Value exists by default? No, needs to be added.
The MaxForwardBufferMemory value limits the total amount of memory that IP can allocate to store
packet data in the router packet queue.
MaxForwardBufferMemory
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: MaxForwardBufferMemory
Data Type: REG_DWORD
Range: Dynamically determined MTU to 4294967295 (in bytes)
Default value: 2097152 (bytes)
Recommended value: Set to 100 * the optimal MTU value, as described in the "Find the Optimal MTU" section below, under "Applying registry settings with the network optimization Windows PowerShell script". This value must be greater than or equal to the value specified for ForwardBufferMemory.
Value exists by default? No, needs to be added.
The NumForwardPackets value determines the number of IP packet headers that are allocated for the
router packet queue. When all headers are in use, the system attempts to allocate more, up to the
value configured for MaxNumForwardPackets. This value should be at least as large as the
ForwardBufferMemory value divided by the maximum IP data size of the networks that are connected
to the router. It should be no larger than the ForwardBufferMemory value divided by 256 because at
least 256 bytes of forward buffer memory is used for each packet. The optimal number of forward
packets for a given ForwardBufferMemory size depends on the type of traffic that is carried on the
network and is somewhere between these two values. This parameter is ignored and no headers are
allocated if routing is not enabled.
NumForwardPackets
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: NumForwardPackets
Data Type: REG_DWORD
Range: 1 to 4294967295
Default value: 50
Recommended value: Set to 1/256 of the value specified for ForwardBufferMemory.
Value exists by default? No, needs to be added.
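The relationship between the optimal MTU, ForwardBufferMemory, and NumForwardPackets can be checked with a few lines of arithmetic. The MTU of 1400 is an example value, and rounding up to a 256-byte multiple is an assumption made to satisfy the 256-byte-increment requirement stated above:

```powershell
# Derive the router packet queue settings from an example optimal MTU of 1400.
$mtu = 1400
# ForwardBufferMemory: 100 * MTU, rounded up to a multiple of 256 bytes.
$fbm = [math]::Ceiling(($mtu * 100) / 256) * 256
# NumForwardPackets: 1/256 of ForwardBufferMemory.
$nfp = $fbm / 256
"ForwardBufferMemory = $fbm"   # 140032
"NumForwardPackets   = $nfp"   # 547
```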
The MaxNumForwardPackets value limits the total number of IP packet headers that can be allocated
for the router packet queue. This value must be greater than or equal to the value of the
NumForwardPackets entry.
MaxNumForwardPackets
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: MaxNumForwardPackets
Data Type: REG_DWORD
Range: 1 to 4294967295
Default value: 4294967295
Recommended value: Set to 1/256 of the value specified for ForwardBufferMemory.
Value exists by default? No, needs to be added.
The TcpWindowSize value specifies the maximum size of the TCP receive window. The receive
window specifies the number of bytes that a sender can transmit without receiving an acknowledgment.
In general, larger receive windows improve performance over high-latency, high-bandwidth networks.
For greatest efficiency, the receive window should be an even multiple of the TCP Maximum Segment
Size.
The TCP/IP stack of Windows Server 2003 was designed to tune itself in most environments. Instead of
using a fixed size for the receive window, TCP negotiates for and adjusts to an even increment of the
maximum segment size (MSS). For more information about MSS, see http://go.microsoft.com/fwlink/?
LinkId=113755. Matching the receive window to even increments of the MSS increases the percentage
of full-sized TCP segments used during bulk data transmission.
Note
Setting this entry to a value greater than 64 KB can only be achieved when connecting to other
systems that support window scaling as described in Internet RFC 1323. For more information
about RFC 1323, see http://go.microsoft.com/fwlink/?LinkId=84406
TcpWindowSize
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: TcpWindowSize
Data Type: REG_DWORD
Range: 0 to 4294967295 (bytes, in 256-byte increments)
Default value: The smallest of the following values: 65535; the value of the GlobalMaxTcpWindowSize registry entry (for more information, see "Appendix A: TCP/IP Configuration Parameters" at http://go.microsoft.com/fwlink/?LinkId=113716); or 16384 rounded up to an even multiple of the TCP Maximum Segment Size (MSS).
Recommended value: Value closest to 64420 that is a multiple of the MSS value.
Value exists by default? No, needs to be added.
The EnableDynamicBacklog value is a global switch that enables AFD.SYS functionality to withstand
large numbers of SYN_RCVD connections efficiently. For more information, see "Internet Server
Unavailable Because of Malicious SYN Attacks," at http://support.microsoft.com/kb/142641.
EnableDynamicBacklog
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters
Value: EnableDynamicBacklog
Data Type: REG_DWORD
Range: 0 to 1
Default value: 0
Recommended value: 1
Value exists by default? No, needs to be added.
The MinimumDynamicBacklog value specifies the minimum number of free connections allowed on a
listening endpoint. If the number of free connections drops below this value, a thread is queued to
create additional free connections.
MinimumDynamicBacklog
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters
Value: MinimumDynamicBacklog
Data Type: REG_DWORD
Range: 0 to 4294967295
Default value: This value does not exist by default
Recommended value: 200
Value exists by default? No, needs to be added.
The MaximumDynamicBacklog value specifies the maximum number of quasi-free connections (free
connections plus those in a half-connected SYN_RECEIVED state) allowed on a listening endpoint; no
additional free connections are created once this limit is reached.
MaximumDynamicBacklog
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters
Value: MaximumDynamicBacklog
Data Type: REG_DWORD
Range: 0 to 4294967295
Default value: This value does not exist by default
Recommended value: 20000
Value exists by default? No, needs to be added.
The DynamicBacklogGrowthDelta value controls the number of free connections to create when
additional connections are necessary. Be careful with this value, as a very large value could lead to
explosive free connection allocations.
DynamicBacklogGrowthDelta
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters
Value: DynamicBacklogGrowthDelta
Data Type: REG_DWORD
Range: 0 to 4294967295
Default value: This value does not exist by default
Recommended value: 100
Value exists by default? No, needs to be added.
Applying registry settings with the network optimization Windows PowerShell script
1. Find the Optimal MTU
a. From a command prompt, ping the default gateway with the "do not fragment" flag set and a
specified packet size (for example, ping -f -l <packet size> <default gateway IP address>).
Note
If you don't know the default gateway IP address, run "IPCONFIG" from a command
prompt to display the address.
If this command returns the message "Packet needs to be fragmented but DF set," then the
specified MTU value is too high and a lower value must be used.
b. Determine the highest MTU value that will work on all computers in the BizTalk Server
environment; this value will be passed as an argument to the network optimization PowerShell
script.
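As a sketch, the "Packet needs to be fragmented but DF set" check described above is typically performed with ping using the Don't Fragment flag, lowering the payload size until fragmentation is no longer required. The gateway address and payload sizes below are examples only:

```powershell
# Probe the path MTU toward the default gateway (example address).
# -f sets the Don't Fragment flag; -l sets the ICMP payload size in bytes.
# The optimal MTU is the largest payload that succeeds, plus 28 bytes of
# ICMP/IP header overhead (1472 + 28 = 1500 on standard Ethernet).
ping -f -l 1472 192.168.1.1   # succeeds on a standard 1500-byte Ethernet MTU
ping -f -l 1473 192.168.1.1   # fails: "Packet needs to be fragmented but DF set."
```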
2. Install Windows PowerShell – Windows PowerShell can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=77521. PowerShell must be installed in order to run the
network optimization script.
3. Run the network optimization script
a. Copy the script from the "Optimizing network performance registry settings" section of Windows
PowerShell Scripts into Notepad and save it as Set-NetworkRegSettings.ps1.
b. Launch PowerShell and change directories to the folder that contains the saved script.
c. Run the script with the following command:
.\Set-NetworkRegSettings.ps1 1400
Note
In this example an “optimal MTU” size of 1400 is passed as an argument to the script
but the “optimal MTU” value may be different in your environment.
Note
If the script does not run, or opens in Notepad instead of running, ensure the
PowerShell execution policy permits running PowerShell scripts. To determine the
current PowerShell execution policy run the Get-ExecutionPolicy PowerShell
command. To change the current PowerShell execution policy run the Set-
ExecutionPolicy PowerShell command.
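As a minimal sketch of the check described in the note, run the two cmdlets from an elevated PowerShell prompt. RemoteSigned is one common choice that permits local scripts; choose the policy appropriate for your environment:

```powershell
# Check the current execution policy, then allow local scripts to run.
Get-ExecutionPolicy              # e.g. Restricted
Set-ExecutionPolicy RemoteSigned # permits locally created, unsigned scripts
```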
The network optimization PowerShell script generates a log file named "COMPUTERNAME-
NetworkRegSettings.log" in the directory from which the script was run. This log file details which
values were changed and lists the original value as well as the new value. For simplicity's sake, and
so that all logs are accessible from the same place, it is recommended that this script be placed on a
networked file share and run from that file share on all computers in the BizTalk Server
environment.
In This Section
Pre-Configuration Database Optimization
Post-Configuration Database Optimization
Optimizing Filegroups for the Databases
Note
Running BizTalk Server databases on SQL Server 2005 typically provides superior
performance to running BizTalk Server databases on SQL Server 2000.
Table (header only; the rows were not preserved in this copy): Version and Edition of SQL Server | 64-bit support | Multi-Instance Support | Clustering support | Analysis Services
Before you set the amount of memory for SQL Server, determine the appropriate memory setting by
subtracting the memory required for Windows Server from the total physical memory. This is the
maximum amount of memory you can assign to SQL Server.
Note
If the computer(s) running SQL Server that host the BizTalk Server databases also host the
Enterprise Single Sign-On Master Secret Server then you may need to adjust this value to
ensure that there is sufficient memory available to run the Enterprise Single Sign-On Service. It
is not an uncommon practice to run a clustered instance of the Enterprise Single Sign-On
service on a SQL Server cluster to provide high availability for the Master Secret Server. For
more information about clustering the Enterprise Single Sign-On Master Secret Server, see the
topic “How to Cluster the Master Secret Server” in the BizTalk Server 2006 R2 documentation
at http://go.microsoft.com/fwlink/?LinkID=106874.
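As an illustration of the calculation above: on a computer with 16 GB of physical memory where roughly 4 GB is reserved for Windows Server (both figures are hypothetical and must be derived from your own hardware), the remaining memory could be assigned to SQL Server as follows:

```sql
-- Illustrative values only: 16384 MB total - 4096 MB for the operating
-- system leaves 12288 MB as the maximum for SQL Server.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```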
Split the tempdb database into multiple data files of
equal size on each SQL Server instance used by
BizTalk Server
Ensuring that the data files used for the tempdb are of equal size is critical because the proportional fill
algorithm used by SQL Server is based on the size of the data files. If data files are created with
unequal sizes, the proportional fill algorithm will use the largest file more for GAM allocations rather
than spreading the allocations between all the files, thereby defeating the purpose of creating multiple
data files. The number of data files for the tempdb database should be configured to be at least equal to
the number of processors assigned for SQL Server.
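As a sketch of this guidance, on a four-processor SQL Server instance the tempdb database could be given four equally sized data files. The file names, drive letter, and sizes below are illustrative only:

```sql
-- Resize the existing primary data file, then add three more files of
-- equal size so the proportional fill algorithm spreads allocations evenly.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1024MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1024MB);
```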
Overview
Every BizTalk Server solution will eventually encounter file I/O contention as throughput is increased.
The I/O subsystem, or storage engine, is a key component of any relational database. A successful
database implementation typically requires careful planning at the early stages of a project. This
planning should include consideration of the following issues:
What type of disk hardware to use, such as RAID (redundant array of independent disks) devices.
For more information about using a RAID hardware solution, see “About Hardware-based solutions”
in the SQL Server 2005 Books online at http://go.microsoft.com/fwlink/?LinkID=113944.
How to apportion data on the disks using files and filegroups. For more information about using files
and filegroups in SQL Server 2005, see “Using Files and Filegroups” in the SQL Server 2005 Books
online at http://go.microsoft.com/fwlink/?LinkID=69369 and “Understanding Files and Filegroups” in
the SQL Server Books online at http://go.microsoft.com/fwlink/?LinkID=96447.
Implementing the optimal index design for improving performance when accessing data. For more
information about designing indexes, see “Designing Indexes” in the SQL Server 2005 books online
at http://go.microsoft.com/fwlink/?LinkID=96457.
How to set SQL Server configuration parameters for optimal performance. For more information
about setting optimal configuration parameters for SQL Server, see “Optimizing Server
Performance” in the SQL Server Books online at http://go.microsoft.com/fwlink/?LinkID=71418.
One of the primary design goals of BizTalk Server is to ensure that a message is never lost. In order to
mitigate the possibility of message loss, messages are frequently written to the MessageBox database
as the message is processed. When messages are processed by an Orchestration, the message is
written to the MessageBox database at every persistence point in the orchestration. These persistence
points cause the MessageBox to write the message and related state to physical disk. At higher
throughputs, this persistence can result in considerable disk contention and can potentially become a
bottleneck.
Making optimal use of the files and filegroups feature in SQL Server has been shown to effectively
address File IO bottlenecks and improve overall performance in BizTalk Server solutions. This
optimization should only be done by an experienced SQL Server database administrator and only after
all BizTalk Server databases have been properly backed up. This optimization should be performed on
all SQL Server computers in the BizTalk Server environment.
SQL Server files and filegroups can be utilized to improve database performance because this
functionality allows a database to be created across multiple disks, multiple disk controllers, or RAID
(redundant array of independent disks) systems. For example, if your computer has four disks, you can
create a database that is made up of three data files and one log file, with one file on each disk. As data
is accessed, four read/write heads can concurrently access the data in parallel. This speeds up
database operations significantly. For more information about implementing hardware solutions for SQL
Server disks, see “Database Performance” in the SQL Server Books online at
http://go.microsoft.com/fwlink/?LinkID=71419.
Additionally, files and filegroups enable data placement, because tables can be created in specific
filegroups. This improves performance, because all file I/O for a given table can be directed at a specific
disk. For example, a heavily used table can be placed on a file in a filegroup, located on one disk, and
the other less heavily accessed tables in the database can be located on different files in another
filegroup, located on a second disk.
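To illustrate the data-placement technique described above (the database, filegroup, file, and table names here are all hypothetical):

```sql
-- Place a heavily used table on its own filegroup so its I/O is directed
-- at a specific disk.
ALTER DATABASE [SampleDb] ADD FILEGROUP [HotFG];
ALTER DATABASE [SampleDb] ADD FILE
    (NAME = N'HotData1', FILENAME = N'E:\HotData1.ndf', SIZE = 512MB)
    TO FILEGROUP [HotFG];

CREATE TABLE dbo.HeavilyUsedTable
(
    RecordID int IDENTITY(1,1) PRIMARY KEY,
    Payload varchar(256) NOT NULL
) ON [HotFG];
```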
File IO bottlenecks are discussed in considerable detail in the topic Bottlenecks in the Database Tier.
The most common indicator that File I/O (Disk I/O) is a bottleneck is the value of the “Physical
Disk:Average Disk Queue Length” counter. When the value of the “Physical Disk:Average Disk Queue
Length” counter is greater than about 3 for any given disk on any of the SQL Servers, then file I/O is
likely a bottleneck.
If applying file or filegroup optimization doesn't resolve a file I/O bottleneck problem, then it may be
necessary to increase the throughput of the disk subsystem by adding additional physical or SAN
drives.
This topic describes how to manually apply file and filegroup optimizations but these optimizations can
also be scripted. A sample SQL script is provided at the end of this topic. It is important to note that this
script would need to be modified to accommodate the file, filegroup, and disk configuration used by the
SQL Server database(s) for any given BizTalk Server solution.
Note
This topic describes how to create multiple files and filegroups for the BizTalk MessageBox
database. For an exhaustive list of recommended files and filegroups for all of the BizTalk
Server databases, see Appendix B of the excellent "BizTalk Server Database Optimization"
whitepaper available at http://go.microsoft.com/fwlink/?LinkID=101578.
Note
Log files are never part of a filegroup. Log space is managed separately from data space.
2. Enter the name of the SQL Server instance that houses the BizTalk Server MessageBox databases
in the Server name edit box of the Connect to Server dialog box, and click the Connect button to
display the SQL Server Management Studio dialog box. In the Object Explorer pane of SQL
Server Management Studio, expand Databases to view the databases for this instance of SQL
Server.
3. Right-click the database for which to add the files, and then click Properties to display the
Database Properties dialog box for the database.
4. In the Database Properties dialog box, select the Filegroups page. Click the Add button to create
additional filegroups for the BizTalkMsgBoxDb database. In the example below, three additional
filegroups are added.
5. In the Database Properties dialog box, select the Files page.
Click the Add button to create additional files to add to the filegroups, and then click OK. The
MessageBox database is now distributed across multiple disks, which will provide a significant
performance advantage over a single disk configuration.
In the example below, a file is created for each of the filegroups that were created earlier and each
file is placed on a separate disk.
Sample SQL script for adding filegroups and files to
the BizTalk MessageBox database
The sample SQL script below performs the same tasks that were completed manually in the previous
section. This sample script assumes the existence of distinct logical drives G through J. The script
creates filegroups and files for each filegroup and places the log file on the J drive.
Note
Because SQL Server writes to its log files sequentially, there is no performance advantage
realized by creating multiple log files for a SQL Server database.
-- Filegroup changes are made using the master database
USE [master]
GO
-- Script-wide declarations
DECLARE @CommandBuffer nvarchar(1024)
-- NOTE: most of the original listing was lost from this copy. The statements
-- below reconstruct the pattern it used: build an ALTER DATABASE command in
-- @CommandBuffer, then execute it once per filegroup and file. Filegroup
-- names, file paths, and sizes are illustrative and must be adapted to your
-- own disk configuration.
SET @CommandBuffer = N'ALTER DATABASE [BizTalkMsgBoxDb] ADD FILEGROUP [BTS_MsgBox_FG1]'
EXECUTE (@CommandBuffer)
SET @CommandBuffer = N'ALTER DATABASE [BizTalkMsgBoxDb] ADD FILE (NAME = N''BTS_MsgBox_F1'', FILENAME = N''G:\BTS_MsgBox_F1.ndf'', SIZE = 2048MB) TO FILEGROUP [BTS_MsgBox_FG1]'
EXECUTE (@CommandBuffer)
GO
The sample SQL script below could be used to set a particular filegroup as the default filegroup
(reconstructed from a partial listing; the filegroup name is illustrative):
USE [BizTalkMsgBoxDb]
GO
DECLARE @isdefault bit
SELECT @isdefault = CONVERT(bit, is_default) FROM sys.filegroups WHERE name = N'BTS_MsgBox_FG1'
IF (@isdefault = 0)
    ALTER DATABASE [BizTalkMsgBoxDb] MODIFY FILEGROUP [BTS_MsgBox_FG1] DEFAULT
GO
The advantage to scripting is that scripts can perform multiple tasks quickly, can be reproduced
precisely, and reduce the possibility of human error. The disadvantage of scripting is that the execution
of an incorrectly written script can potentially cause serious problems that could require the BizTalk
Server databases to be re-configured from scratch. Therefore, it is of utmost importance that SQL
scripts such as the sample script listed in this topic are thoroughly tested before being executed in a
production environment.
Note
While there are benefits to creating additional host instances, there are also potential
drawbacks if too many host instances are created. Each host instance is a Windows service
(BTSNTSvc.exe), which generates additional load against the MessageBox database and
consumes computer resources (such as CPU, memory, threads).
For more information about modifying BizTalk Server Host properties, see "How to Modify Host
Properties" in BizTalk Server 2006 R2 Help at http://go.microsoft.com/fwlink/?LinkId=101588.
<configuration>
  <system.net>
    <connectionManagement>
      <!-- The maxconnection value shown is illustrative; determine the
           right value through stress testing as described below. -->
      <add address="*" maxconnection="25" />
    </connectionManagement>
  </system.net>
</configuration>
Note
Do not increase the value for the maxconnection parameter to such a large value that the Web
server being called is overwhelmed with HTTP connections. Perform stress testing by sending
requests to each destination Web server to determine a value for maxconnection that will
provide good performance for SOAP and HTTP sends without overwhelming the target Web
servers.
For information about tuning IIS and ASP.NET settings for Web services, see the "ASP.NET settings
that can impact HTTP or SOAP Adapter performance" section of "Configuration Parameters that Affect
Adapter Performance" in BizTalk Server 2006 R2 Help at http://go.microsoft.com/fwlink/?LinkID=79435.
Manually define worker and I/O thread values for Web applications
that host orchestrations published as a Web service
When the autoConfig value in the machine.config file of an ASP.NET 2.0 application is set to true,
ASP.NET 2.0 manages the number of worker threads and I/O threads that are allocated to any
associated IIS worker processes:
<processModel autoConfig="true" />
The number of worker and I/O threads for an ASP.NET Web application that hosts an orchestration
published as a Web service should be modified under the following conditions:
CPU utilization is not a bottleneck on the hosting Web server.
The value of the ASP.NET Apps v2.0.50727 \ Request Wait Time or ASP.NET Apps
v2.0.50727\Request Execution Time performance counters is unusually high.
The following error is generated in the Application log of the computer that hosts the Web
application:
Event Type: Warning
Event Source: W3SVC
Event Category: None
Date: 6/4/2008
Time: 1:03:47 PM
User: N/A
Computer: <ComputerName>
To manually modify the number of worker and I/O threads for an ASP.NET Web application, open the
associated machine.config file, and then enter new values for the maxWorkerThreads and
maxIoThreads parameters:
<!-- <processModel autoConfig="true" /> -->
<processModel maxWorkerThreads="100" maxIoThreads="100" />
Note
These values are for guidance only; ensure you test changes to these parameters.
For more information about tuning parameters in the machine.config file for an ASP.NET 2.0 Web
application, see the Microsoft Knowledge Base article 821268 “Contention, poor performance, and
deadlocks when you make Web service requests from ASP.NET applications” at
http://support.microsoft.com/kb/821268.
Warning
Incorrect use of Registry Editor may cause problems requiring you to reinstall your operating
system. Use Registry Editor at your own risk. For more information about how to back up,
restore, and modify the registry, see the Microsoft Knowledge Base article "Description of the
Microsoft Windows registry" at http://go.microsoft.com/fwlink/?LinkId=62729.
Note
Worker threads are used to handle queued work items and I/O threads are dedicated callback
threads associated with an I/O completion port to handle a completed asynchronous I/O
request.
To modify the number of threads available in the .NET thread pool associated with each instance of a
BizTalk host, follow these steps:
1. Stop the BizTalk host instance.
2. Click Start, click Run, type regedit.exe, and then click OK to start Registry Editor.
Navigate to
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$hostname, where
hostname is the name of the host associated with the host instance.
Note
If you have upgraded your BizTalk Server 2006 installation from BizTalk Server 2004, this
registry key may be represented as
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc{GUID}, where
GUID is a GUID unique to each instance of a BizTalk Server host.
3. Locate the CLR Hosting key. If this key does not exist, then create the key by following these
steps:
a. On the Edit menu, click New, and then click Key.
b. Type CLR Hosting, and then press ENTER.
4. Under the CLR Hosting key, create the following DWORD entries with the indicated values.
DWORD entry         Default value    Recommended value
MaxIOThreads        20               100
MaxWorkerThreads    25               100
Important
Increasing this value beyond 100 can have an adverse effect on the performance of the SQL
Server computer hosting the BizTalk Server MessageBox database. When this problem occurs,
SQL Server may encounter a deadlock condition. It is recommended that this parameter not be
increased beyond a value of 100.
MinIOThreads        1                25
MinWorkerThreads    1                25
Note
These recommended values will be sufficient for most scenarios but may need to be
increased depending on how many adapter handlers or orchestrations are running in each
host instance.
Note
These values are implicitly multiplied by the number of processors on the server. For
example, setting the MaxWorkerThreads entry to a value of 100 would effectively set a
value of 400 on a 4 CPU server.
5. Close Registry Editor.
6. Restart the BizTalk host instance.
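The registry steps above can also be scripted. The sketch below assumes a host named BizTalkServerApplication (substitute your own host name) and applies the recommended values from the table; run it from an elevated prompt with the host instance stopped:

```powershell
# Create the CLR Hosting key under the host's service entry and set the
# thread-pool DWORD values. The host name is an assumption for this example.
$hostName = "BizTalkServerApplication"
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\BTSSvc`$$hostName\CLR Hosting"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
$values = @{ MaxIOThreads = 100; MaxWorkerThreads = 100; MinIOThreads = 25; MinWorkerThreads = 25 }
foreach ($entry in $values.GetEnumerator())
{
    New-ItemProperty -Path $path -Name $entry.Key -Value $entry.Value `
        -PropertyType DWord -Force | Out-Null
}
```

Restart the host instance afterward, as in step 6, for the new values to take effect.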
Decrease the purging period for the DTA Purge and Archive job from
7 days to 2 days in high throughput scenarios
By default, the purging interval for tracking data in BizTalk Server is set to 7 days. In a high throughput
scenario, this can result in an excessive build up of data in the Tracking database, which will eventually
impact the performance of the MessageBox and in turn negatively impact message processing
throughput.
In high throughput scenarios, reduce the hard and soft purging interval from the default of 7 days to 2
days. For more information about configuring the purging interval, see “How to Configure the DTA
Purge and Archive Job” in the BizTalk Server 2006 R2 Help at http://go.microsoft.com/fwlink/?
LinkID=104908.
Note
These optimizations will improve latency but may do so at some cost to overall throughput.
Note
Changes to this value impact all instances of the associated service type; therefore, take care
to evaluate the impact on all host instances before changing this value.
Note
This value is only used if the MessageBox has no remaining unprocessed messages. If there is
a constant backlog of unprocessed messages in the MessageBox, BizTalk Server will attempt
to process the messages without waiting on the polling delay. After all messages are
processed, BizTalk Server will begin polling using the value specified for MaxReceiveInterval.
Note
In a BizTalk Server environment with a high ratio of host instances to MessageBox database
instances, decreasing the value for MaxReceiveInterval may cause excessive CPU utilization
on the SQL Server computer that houses the MessageBox database instance. For example, if
the MaxReceiveInterval is decreased to a low value (< 100) in a BizTalk Server environment
with a single MessageBox and > 50 host instances, CPU utilization on the SQL Server may
climb above 50%. This phenomenon can occur because the overhead associated with
continually polling host queues is significant. If you reduce MaxReceiveInterval to a value less
than 100, you should also evaluate the impact that this has on your SQL Server computer’s
CPU utilization.
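For reference, MaxReceiveInterval is stored per service class in the adm_ServiceClass table of the BizTalk management database (BizTalkMgmtDb). The sketch below, with an assumed service-class name, shows how the setting might be inspected and lowered; verify the schema in your environment and back up the database before changing anything in production:

```sql
-- View the current polling interval for each service class.
USE BizTalkMgmtDb;
SELECT Name, MaxReceiveInterval FROM dbo.adm_ServiceClass;

-- Example only: lower the in-process messaging polling interval to 100 ms.
UPDATE dbo.adm_ServiceClass SET MaxReceiveInterval = 100
WHERE Name = 'Messaging InProcess';
```

A change takes effect only after the associated host instances are restarted.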
Adjust the value for the MQSeries receive adapter polling threads
Increase the value specified for Threads in the MQSeries Transport Properties when configuring
MQSeries adapter receive locations. Increasing this property from the default value of 2 to a value of 5
will significantly improve the rate at which messages can be received using the MQSeries adapter.
# Fragment of the "Optimizing operating system performance registry settings"
# script (Set-OSRegSettings.ps1). Much of the original listing was lost from
# this copy; elided sections are marked with "..." comments, and the
# surviving lines have been re-indented into their evident structure.
param($RAMMb, $NumCPU, $VolType)

# Validate the arguments before touching the registry
if (($RAMMb -eq $null) -or ($NumCPU -eq $null) -or ($VolType -eq $null) -or ($VolType -gt 4))
{
    "`r"
    "Valid VolType values are: 1(few files), 2 or 3(moderate files), 4(many files)"
    "`r"
    exit
}

$ErrorActionPreference = "SilentlyContinue"
$LogFileName = "OSSettings.log"
$LogTime = ([System.DateTime]::Now).ToString("dd-MM-yyyy hh:mm:ss")

# Helper that records a key's original value, then sets the new value
function SetProperty($path, $key, $value)
{
    $error.clear()
    $oldValue = (Get-ItemProperty -path $path).$key
    if ($error.count -eq 0)
    {
        $data = "$path\$key=$oldValue"
    }
    # ... log write and Set-ItemProperty call elided in this copy ...
    # if error count is greater than 0 an error occurred and the regkey was not set
    if ($error.count -gt 0)
    {
        Write-Output $Error[$error.count-1].exception
    }
}

# Values derived from the script arguments (the surrounding if/else
# conditions were elided in this copy)
$IoPageLockLimit = "0xFFFFFFFF"   # else branch: convert a computed value to hexadecimal
$WorkerThreads = $NumCPU * 16
# ... remainder of the original listing elided ...
# Fragment of the "Optimizing network performance registry settings" script
# (Set-NetworkRegSettings.ps1). Portions of the original listing were lost
# from this copy; elided sections are marked with "..." comments.
$LogFileName = "NetworkRegSettings.log"
$LogTime = ([System.DateTime]::Now).ToString("dd-MM-yyyy hh:mm:ss")
# ... SetProperty helper elided; as in Set-OSRegSettings.ps1 it records the
# original value ($data = "$path\$key=$oldValue") before setting the new one ...

# Maximize system pages. The system creates the largest number of page table
# entries possible within physical memory.
# The system monitors and adjusts this value dynamically when the configuration changes.

# HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters
# ============================================================================
$path = "HKLM:\System\CurrentControlSet\Services\LanmanServer\Parameters"
SetProperty $path "IRPStackSize" 0x20 # IRPStackSize -> +10 (Use DWORD 0x20 (32) if not present)

# HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
# =====================================================================
$path = "HKLM:\System\CurrentControlSet\Services\Tcpip\Parameters"
SetProperty $path "Tcp1323Opts" 1 # Tcp1323Opts -> 1 (experiment with a value of 3 for possible improved performance, especially if you are experiencing high packet loss/retransmits)
SetProperty $path "SackOpts" 1 # SackOpts -> 1 (VERY important for large TCP window sizes, such as specified below)
# SetProperty $path "TcpAckFrequency" 5 # 5 for 100Mb, 13 for 1Gb (requires Windows Server 2K3 Hotfix 815230) - can also be set at the interface level if mixed speeds; only set for connections primarily processing data
SetProperty $path "SynAttackProtect" 0 # SynAttackProtect -> 0 (Only set this on systems with web exposure if other H/W or S/W is providing DOS attack protection)

#------------------------
# Interfaces\<adapter ID>\MTU -> 1450-1500, test for maximum value that will
# pass on each interface using PING -f -l <MTU Size> <Interface Gateway
# Address>, pick the value that works across all interfaces
# ... the per-interface loop is elided in this copy; for each adapter it
# resolved the address in use ($ip = $iface.DhcpIpAddress when DHCP is
# enabled, otherwise $ip = $iface.IPAddress) and wrote the MTU value under
# the interface key ($path = $Interface.PSPath) ...

$path = "HKLM:\System\CurrentControlSet\Services\Tcpip\Parameters"
$ForwardBufferMemory = 100 * $MTUSize
$NumForwardPackets = $ForwardBufferMemory / 256
SetProperty $path "TcpWindowSize" 0xFBA4 # TcpWindowSize -> 0xFBA4 (64420) (make this a multiple of the TCP MSS (Max Segment Size))

# HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters
# ===================================================================
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\AFD\Parameters"
# ... AFD settings elided in this copy ...

# S/W Configuration
# =====================================================================