Developing for Performance and Scalability
Student Guide
Table of Contents
Introduction
Module 1: It's All about Scalability
  Lesson 1.1: Optimizing Web, App, and Database Tier Interactions
  Knowledge Check
Module 2: Implementing Effective Caching Strategies
  Lesson 2.1: Page Caching
  Knowledge Check
  Lesson 2.2: Enabling Page Caching
  Lesson 2.3: Invalidating Page Cache
  Exercise: Enable Page Caching
  Exercise: Explore the Cache Settings
  Exercise: Set Up a Page Cache Partition
  Exercise: Invalidate the Page Cache for the Site
  Knowledge Check
  Lesson 2.4: Managing Static Content
Module 3: Optimizing Performance During Development
  Lesson 3.1: Improving Client-Side Performance
  Knowledge Check
  Lesson 3.2: Improving Server-Side Performance
  Knowledge Check
  Lesson 3.3: Implementing Optimal Data Management Strategies
  Exercise: Select a Storage Strategy
Module 4: Analyzing Performance Issues
  Lesson 4.1: Diagnosing Performance with the Pipeline Profiler
  Exercise: Diagnose Performance Using the Pipeline Profiler
  Exercise: Compare Script and Pipeline Performance with the Profiler
  Lesson 4.2: Diagnosing Performance with Business Manager Analytics
Audience: Developers
Duration: 4 hours
System Requirements: Laptop or desktop with UX Studio installed and access to a sandbox.
Course Objectives
Welcome to Developing for Performance and Scalability. After completing this course, you’ll be able to:
Discuss factors that impact performance on the platform and develop strategies to address them.
Use caching techniques and tools to improve performance, efficiency, and stability on the
platform.
Optimize client-side and server-side processing using coding best practices.
Optimize transactional jobs and integrations.
Describe how Demandware uses quotas to enforce appropriate data usage policies.
Review Quota Status reports.
Analyze and troubleshoot performance issues using Business Manager Analytics, Pipeline Profiler,
and Control Center.
Module Objectives
It's All about Scalability
Discuss how performance, efficiency, and stability affect a site as it scales.
Describe the impact of designing transactions to be processed on the web, app, and database tiers.

Implementing Effective Caching Strategies
Describe the caching techniques that are available on the Demandware platform.
Set up page and partial page caching.
Determine appropriate cache settings for site pages.
Review cache settings using the Business Manager Toolkit.

Optimizing Performance During Development
Optimize client-side and server-side processing using coding best practices.
Manage data efficiently and monitor data storage usage.

Analyzing Performance Issues
Analyze performance using Pipeline Profiler.
Analyze performance using Business Manager Analytics.

Optimizing Transactional Jobs and Integrations
Create and schedule jobs that limit resource consumption on the platform.
Diagnose issues with job performance.
Use web services efficiently.

Adhering to Quotas
Describe platform quotas and the benefits of using them.
Describe the Demandware data usage policy.
Identify quota violations using the Business Manager Quota Status and quota log files.
Create email notifications for quota violations.
Describe the types and levels of quotas and explain the purpose of each.
Describe the process of overriding platform quotas.

Summary of Diagnostic Tools
Summarize the diagnostic tools and identify under which circumstances they should be used.
Identify external tools for analyzing performance.
This course includes scripting examples that you’ll run on your storefront site. You can find these files
in the Developing for Performance and Scalability Exercises.zip file provided by your
instructor. You’ll run the scripts and use the Pipeline Profiler to analyze the performance of the scripts.
Note: If you are taking the eLearning version of the course, the zip file is on the Course Materials tab.
Learning Objectives
After completing this module, you will be able to:
Discuss how performance, efficiency, and stability affect a site as it scales.
Describe the impact of designing transactions to be processed on the web, app, and database tiers.
Introduction
Site performance is critical to sales on your site. If customers experience slowdowns when they search
for an item or attempt to check out on your site, there's a good chance they'll leave without
completing a purchase. Also, when customers do complete purchases on slow sites, their basket totals
tend to be lower. The speed of your site affects customer brand perception, as well. Customers are
accustomed to immediate web responses; your site must meet or exceed their expectations for site
performance.
Developing eCommerce sites in a Software-as-a-Service (SaaS)-based environment requires thoughtful
design that takes resource usage into consideration as sites scale. The Demandware platform’s
extensive customization capabilities make it possible to accidentally exceed the limits of Demandware
controls. For example, misuse of an API or a business object can lead to overages that reduce
performance and stability, and increase cost.
Web site efficiency is influenced by many factors, including integration with external systems, the
processing time spent on the web adaptor, application server, and database tiers, and the complexity
of the HTML pages the browser renders to create the customer experience.
The platform scales to meet traffic and order peaks most effectively when the web tier handles the
majority of transaction requests. As you design your site, it’s important to minimize the number of
transactions that pass through the web tier to the application tier and then from the application tier to
the database. To do so you’ll use efficient programming and caching techniques.
Too many requests on the database layer cause latency, leading to weak performance and preventing your site from scaling effectively.
In a scalable site, most of the requests are handled by the web server.
A recommended ratio for transactions is 100:10:1, where there are 100 transactions on the web tier for every 10 transactions on the app tier for every single transaction on the database. While 100:10:1 is an ideal rather than a strict requirement, the closer your site comes to this ratio, the more effectively it scales.
Knowledge Check
Which of these statements about site performance and scalability are true? Select all that apply. The
Knowledge Check answer key is on the following page.
a. It’s important to minimize the transactions that pass from the database to the data grid.
b. Web site efficiency is influenced by integrations with external systems.
c. Scalability is a result of platform stability and optimal performance.
d. The platform scales best when the web tier handles most requests.
e. An optimal ratio of database-to-app-server transactions and from app-server-to-web-server
transactions is 100:10:1.
Learning Objectives
After completing this module, you will be able to:
Describe the caching techniques that are available on the Demandware platform.
Set up page caching and partial page caching.
Determine appropriate cache settings for site pages.
Review cache settings using the Business Manager Toolkit.
Introduction
The Demandware platform supports various caching mechanisms to improve site performance and
minimize customer wait times. The platform manages some aspects of caching, but as a developer, you
can fine-tune caching mechanisms through cache settings and coding best practices. For dynamic data,
you’ll optimize page caching. For static content, you’ll optimize use of the Content Delivery Network
(CDN).
Use caching:
For the pages that most customers will access
For the most heavily visited pages
On production instances to ensure optimal performance
On development instances to help Quality engineers validate caching behavior
For "personalized" data—data customized for a particular user group
When should you use page caching? Select all that apply. The Knowledge Check answer key is on the
following page.
When should you use page caching? Select all that apply.
Answer: c, e.
Important: Set the varyby="price_promotion" parameter for a page only if the page is
personalized; otherwise, the performance will degrade unnecessarily.
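For example, this ISML directive, placed at the top of a template, caches the page for 24 hours and keeps separate cache entries per price and promotion context (the caching time shown is illustrative):

<iscache type="relative" hour="24" varyby="price_promotion"/>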
To enable page caching, in Business Manager, select Administration > Sites >
Manage Sites. Select the Cache tab and select the Enable page caching
option.
Note: If you disable page caching in Business Manager, the Demandware platform ignores the
<iscache> tags in the ISML templates.
Invalidating the page cache clears the cache and resets the caching “clock” for relative caching. The
page cache is invalidated at the following times:
When the defined caching time is exceeded
When merchants invalidate the page cache explicitly using Business Manager—the entire page
cache or just the cache corresponding to specific pipelines
When there is code replication
After the cache is invalidated, subsequent requests are slow until the cache is filled again. It’s best to
invalidate the cache only when necessary and during periods of low traffic.
In this exercise, you’ll enable page caching in your sandbox. Normally, you’d disable caching in your
sandbox so that you can see the results of your updates immediately. However, for the exercises in this
course, you’ll enable page caching.
1. Select Administration > Sites > Manage Sites.
2. For Instance Type, select Sandbox / Development.
3. Select the Cache tab and select the Enable page caching option.
In this exercise, you'll use the Storefront Toolkit to verify cache settings.
1. On your storefront, open the Storefront Toolkit and enable Cache Information.
2. To review the cache settings for the whole page, move your mouse over the green cache icon in
the upper left of the window.
You can use the scrollbar on the Cache Information (Page) window to view the cache settings of
each remote include on the page. Move your mouse over any of the green cache icons to review
the cache settings for a particular page portion.
In this exercise, you’ll remove the Homepage page cache partition that Demandware provides. You’ll
create the Homepage partition and invalidate it.
1. In the Cache tab, under Page Partitions, select the Homepage row and click Remove Partition.
2. Click Yes to confirm the delete.
3. Now you’ll create the Homepage page cache partition by clicking Add Partition in the Cache tab:
a. In the ID field, enter a unique identifier for the partition, in this case, Homepage.
b. In the Name field, enter Homepage.
4. Click Add Pipeline.
5. In the field that displays, enter the name of a pipeline node to be included in the page cache
partition, in this case, Default-Store.
6. Click Add Pipeline for each of the other pipeline nodes that belong in the Homepage partition,
enter the following pipelines, and click Save.
Home-Show
Home-IncludeHeader
Home-IncludeHeaderMenu
7. On the storefront site, reload the page and move your mouse over the green cache icon in the
header. Note the Caching Status time.
8. In Business Manager, select the Homepage partition and invalidate it.
9. On the storefront site, reload the page and move your mouse over the green cache icon in the
header again.
Note that the Caching Status time has been reset.
10. Move your mouse over other green cache icons to see the Caching Status for other portions of the
page.
Note that the Caching Status has only changed for the partition you invalidated and not for the
other portions of the page.
In this exercise, you’ll invalidate the cache and view the updated Caching Status using the Storefront
Toolkit.
1. With Cache Information enabled in the Storefront Toolkit, move your mouse over the green cache
icons to note the Caching Status time for each portion of the page.
2. In Business Manager, invalidate the cache for the entire page:
a. Select Administration > Sites > Manage Sites and select SiteGenesis.
b. Select the Cache tab.
c. Invalidate the entire page cache.
3. On the storefront, move your mouse over the green cache icons again and note that the Caching Status times have been reset.
Just Thinking
Which cache settings would you use for the selected portions of the following
storefront page? For example, what cache time would you use for each? Does the page
portion include personalized content?
Don't cache the portions of a page that are specific to an individual customer, like the mini-cart, cart, and profile pages.
Best Practices
Caching Tips
Be sure to cache the result pages of redirect templates, especially heavily loaded pages like those
created with the RedirectURL-Start pipeline.
If possible, use HTTP GET rather than POST for form submissions with cached results. If you use
the POST form method, you bypass the page cache and make a server-side call.
The following diagram represents a storefront page with partial pages A-F described in the table.
Which ISML code segments can you use to cache the refinement panel (represented as C in the diagram)? Select all that apply.
a. Use <iscomponent pipeline="MyRefine-Start"/> where the MyRefine-Start
pipeline generates an ISML template that contains <iscache type="relative"
hour="24"/>.
b. Use <isinclude template="search/components/myrefinebar"/> where the
included template, myrefinebar, contains <iscache type="relative" minute="30"
varyby="price_promotion"/>.
c. Use <isinclude template="search/components/myrefinebar"/> where the
included template, myrefinebar, contains <iscache type="relative" hour="24"/>.
d. Use <isinclude url="${URLUtils.url('MyRefine-Start')}"/> where the MyRefine-
Start pipeline generates an ISML template that contains <iscache type="relative"
hour="4"/>.
Best Practice
Improving the cache hit ratio is not always the best strategy for a pipeline. Sometimes reducing the processing time of a pipeline is the better approach. Optimize expensive pieces of code: sluggish pipelines, as well as heavily used ones. You'll learn code optimization tips later in this course.
Demandware leverages a Content Delivery Network (CDN) that provides static caching for content
assets using Akamai servers. Demandware caches images, icons, CSS style sheets, and other
non-dynamic assets. You can set the length of time for static content to be cached by the web server.
This value is called the Time to Live (TTL). By default, the TTL for content assets is 24 hours (86,400
seconds). At the end of the TTL, the cache is emptied and content is retrieved from the server.
Best Practice
To take advantage of CDN caching, do not use system objects to deliver static resources; doing so bypasses the optimized static caching that Demandware provides. For example, to display a different background image based on the category a customer is browsing, use JavaScript dynamic processing rather than serving the image through a system object.
Learning Objectives
After completing this module, you will be able to:
Optimize client-side and server-side processing using coding best practices.
Manage data efficiently and monitor data storage usage.
Introduction
eCommerce applications built on the Demandware Commerce platform will run fast and perform
reliably if you use the platform within its capabilities. By using coding best practices for your
customizations, you can improve page load times and ensure that your site stays within quota limits.
This module provides tips so that you can select the optimal approach for your site customizations,
including client-side and server-side customizations.
To improve client-side performance, you’ll need to minimize the size and the number of elements on
pages. Additionally, you’ll want to reduce the amount of JavaScript and avoid complex style sheets and
HTML markup. The key to improving client-side performance is reducing the number of HTTP requests
to the server. The less data transmitted and the fewer requests performed, the better the performance.
You can reduce the number of HTTP requests using these methods:
Separate HTML from JavaScript.
Compress and concatenate CSS and JavaScript.
Use “CSS sprites” to combine visual elements into a single image.
To reduce the number of HTTP requests, use unobtrusive JavaScript, the strict separation of
the HTML and the JavaScript layer. Separating the HTML from the JavaScript leads to a more
robust site. Because browsers stop, load, and interpret JavaScript where the code occurs on
the page, issues with inline JavaScript blocks can cause the browser to stall when loading
images or rendering a page.
Separating out the HTML and JavaScript also makes the Document Object Model (DOM)
smaller and improves page load times. Avoiding inline JavaScript also makes it easier to
understand both the HTML and JavaScript, as the markup is plain HTML without any embedded
chunks of functionality.
To reduce the number of HTTP requests, you should minify your CSS and JavaScript files.
BuildSuite has tools for minifying code. Implement CSS in a layered way—start with an overall
CSS, then develop more granular CSS on a per-page basis. You can concatenate your CSS and
JavaScript using BuildSuite to:
Reduce the number of requests
Speed up the download
Speed up caching of resources
It’s best to concatenate these files during deployment. If you concatenate the files during
development, you lose the modularity of your code making the code more difficult to update
and debug. These types of tools merge JavaScript files, insert generated variable and function
names, and strip comments, line breaks, and other unnecessary characters to reduce the size
of the files. For details on BuildSuite, see https://xchange.demandware.com/docs/DOC-5728.
If your site is rich in images, you’ll need to optimize image processing. The first step is to
compress the images. You can also use “CSS sprites” to further optimize image processing. A
CSS sprite is an aggregation of a group of images, stacked vertically or horizontally. Instead of
retrieving multiple image files, the client accesses the single aggregated image, the sprite. A
client request for an image specifies the coordinates of one of the images included within the
sprite. By using CSS sprites, you can load a single image rather than loading images individually.
Consequently, you reduce the number of HTTP requests and speed up page loads.
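A minimal CSS sketch of a sprite (the file name, class names, and coordinates are illustrative): both icon classes share one downloaded image and simply shift its visible region.

.icon {
    background-image: url('images/icon-sprite.png'); /* one aggregated image, one HTTP request */
    background-repeat: no-repeat;
    display: inline-block;
    width: 16px;
    height: 16px;
}
.icon-cart   { background-position: 0 0; }      /* first tile in the sprite */
.icon-search { background-position: -16px 0; }  /* second tile, shifted left */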
Knowledge Check
1. Which of the following are methods of reducing the number of HTTP requests to the server?
a. Use aggregated images to optimize image processing.
b. Separate the templates from the pipelines.
c. Concatenate CSS and JavaScript files.
d. Aggregate server requests.
2. Match the technique on the left with its benefit to performance on the right.
Minimize Document Object Model (DOM)    Separates HTML and JavaScript layers.
Answers: 1. a, c
The Demandware platform provides the resources necessary to meet client requirements and handle
customer load automatically. The platform optimizes your site’s performance. However, scaling
hardware resources can’t always make up for inefficient coding. By following the best practices in this
lesson, you’ll ensure that your site and the underlying systems—web servers, application servers and
database servers—will scale effectively.
Server-side optimizations can greatly improve your site's performance. You can minimize the number of transactions that are passed through to the application layer and to the database by following these guidelines, each described in more detail below:
Avoid iterating over products and categories.
Minimize post-processing of search results.
Optimize the display of search results.
Avoid URL diversity by keeping positional parameters out of URLs.
Ensure efficiency of scripts within templates.
For long-running operations like imports and exports, use jobs.
Do not trigger live calls to external systems on frequently visited pages.
Avoid calling the following methods on frequently visited pages:
Category.getProducts()
Category.getOnlineProducts()
Category.getCategoryAssignments()
Category.getOnlineCategoryAssignments()
ProductMgr.queryAllSiteProducts()
These methods execute queries against the database and typically return large collections of
objects. Often developers use these methods to traverse category trees and iterate over
objects to decide whether to use the objects. Iterating over these objects causes database
churn, where the system must rotate objects in memory frequently. And if you add these
collections onto the pipeline dictionary as opposed to using them in a pipelet script, the
performance degrades further. The system keeps these objects loaded into memory until it
processes the request completely. If you include them in a pipelet script, the system can make
them available for garbage collection after the script executes.
Instead, use search-index-based methods such as:
ProductSearchModel.orderableProductsOnly(flag)
ProductSearchRefinements.getNextLevelCategoryRefinementValues(Category)
ProductSearchHit.getRepresentedVariationValues(ProductVariationAttribute)
ProductSearchHit.getMinPrice(), ProductSearchHit.getMaxPrice()
You can ensure that storefront pipelines are efficient by selecting optimal API methods rather than
developing custom code to process query results. For instance, instead of manually counting the
number of products in a category using a loop, use
ProductSearchRefinementValue.getHitCount().
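A minimal sketch, assuming searchModel is a ProductSearchModel whose search() has already run:

importPackage(dw.catalog);

var refinements : ProductSearchRefinements = searchModel.getRefinements();
var defIter : Iterator = refinements.getRefinementDefinitions().iterator();
while (defIter.hasNext()) {
    var definition : ProductSearchRefinementDefinition = defIter.next();
    var valueIter : Iterator = refinements.getAllRefinementValues(definition).iterator();
    while (valueIter.hasNext()) {
        var value : ProductSearchRefinementValue = valueIter.next();
        // The index has already counted the hits; no product loop is required.
        var count : Number = value.getHitCount();
    }
}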
The following inefficient code example implements a global navigation scheme that renders category
links for each category with orderable products. The script calls
dw.catalog.CatalogMgr.getSiteCatalog() and iterates over the site's categories and the
products within them. This code sample instantiates all products in every category until it finds an
orderable product—a costly algorithm.
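A minimal sketch of this anti-pattern, written as a pipelet script (the helper and pipeline dictionary names are illustrative):

importPackage(dw.catalog);
importPackage(dw.system);

// Returns true as soon as one orderable product is found, but every product
// checked along the way is instantiated from the database.
function hasOrderableProduct(category : Category) : Boolean {
    var iter : Iterator = category.getOnlineProducts().iterator(); // expensive database query
    while (iter.hasNext()) {
        var product : Product = iter.next();
        if (product.getAvailabilityModel().isOrderable()) {
            return true;
        }
    }
    return false;
}

function execute(pdict : PipelineDictionary) : Number {
    var navCategories : Array = [];
    var categories : Iterator = CatalogMgr.getSiteCatalog().getRoot().getOnlineSubCategories().iterator();
    while (categories.hasNext()) {
        var category : Category = categories.next();
        if (hasOrderableProduct(category)) { // loads products for every single category
            navCategories.push(category);
        }
    }
    pdict.NavCategories = navCategories; // illustrative dictionary key
    return PIPELET_NEXT;
}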
Best Practice
Search results can be large sets. If a site has 10 thousand t-shirts and you search for yellow variants, the
custom algorithm causes a huge processing hit. Custom search result processing is the most prevalent
cause of performance issues.
Rather than post-processing product or content search results using custom code, include all search
criteria in a single query for efficient execution.
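A minimal sketch of this single-query approach, assuming the search index is built (the category ID and refinement attribute are illustrative):

importPackage(dw.catalog);

var psm : ProductSearchModel = new ProductSearchModel();
psm.setCategoryID("mens-clothing");                   // illustrative category ID
psm.addRefinementValues("refinementColor", "Yellow"); // illustrative attribute and value
psm.setOrderableProductsOnly(true);                   // let the index filter out non-orderable products
psm.search();                                         // one indexed query applies all criteria

var hits : Iterator = psm.getProductSearchHits();
while (hits.hasNext()) {
    var hit : ProductSearchHit = hits.next();
    // Render from the search hit; no post-processing loop over ProductMgr results.
}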
A developer in your group created the following script and has found that the performance is poor:
1. What are some possible reasons for the poor performance? Select all that apply.
a. The script checks for master products using product.master.
b. The script iterates over products to determine if a product is a variant.
c. The script returns a custom color by iterating over custom objects.
d. The script post-processes search results returned using the ProductMgr class.
e. The script uses the Search API’s availability model to determine if the product variation exists.
2. To improve the performance of the script, which of the following API methods might you use
instead of the methods in the script? Select all that apply.
a. ProductSearchModel.orderableProductsOnly()
b. GetProductList pipelet
c. ProductMgr.getRepresentedVariationValues()
d. ProductSearchHit.getRepresentedVariationValues()
e. Search pipelet
1. What are some possible reasons for the poor performance? Select all that apply.
a. The script checks for master products using product.master.
b. The script iterates over products to determine if a product is a variant.
c. The script returns a custom color by iterating over custom objects.
d. The script post-processes search results returned using the ProductMgr class.
e. The script uses the Search API’s availability model to determine if the product variation exists.
Answers: b, d
2. To improve the performance of the script, which of the following API methods might you use
instead of the methods in the script? Select all that apply.
a. ProductSearchModel.orderableProductsOnly()
b. GetProductList pipelet
c. ProductMgr.getRepresentedVariationValues()
d. ProductSearchHit.getRepresentedVariationValues()
e. Search pipelet
Answers: a, d, e
View the parameters for the Paging pipelet by accessing the Demandware
online help: https://info.demandware.com and selecting
Demandware API > Demandware Pipelets > Common. Select the Paging
pipelet. Alternatively, you can review the Search-Show pipeline in
SiteGenesis.
Because the URLs are generated with different parameters, the system can't reuse the cached pages. Caching actually impedes performance in this case. Use the same URL to ensure proper reuse.
To view the URLs and their parameters called for your site, within Business
Manager, select Site > Analytics > Traffic Reports. Select the Top Pages
report. The Analytics reports are available only on production instances.
Instead of using pipeline scripts for long-running operations, use jobs.
The maximum timeout for jobs is 60 minutes. You’ll learn about optimizing jobs later in the course.
Another way to prevent thread exhaustion is to avoid using live calls to external systems on
frequently visited pages.
For example, do not use live calls on the home page, product detail pages, category pages, and search
results pages. Where live calls are needed, specify a low timeout value, for example, one second. A
Demandware application server thread waiting for a response from an external system cannot serve
other requests. Many threads waiting for responses can make the entire cluster unresponsive for all
pages.
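Where a live call is unavoidable, a sketch of a fail-fast call using dw.net.HTTPClient (the URL and timeout value are illustrative):

importPackage(dw.net);

var httpClient : HTTPClient = new HTTPClient();
httpClient.setTimeout(1000); // one second, in milliseconds: fail fast instead of holding a thread
httpClient.open("GET", "https://api.example.com/stock/SKU-1001");
httpClient.send();

if (httpClient.statusCode == 200) {
    var responseText : String = httpClient.getText();
    // use the live data
} else {
    // timeout or error: fall back to default content rather than blocking the page
}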
Following are storage strategies you can use with the platform. Review these strategies and answer the
questions that follow.
External Systems
Decide whether the data should be stored in the Demandware database or in an external
system, such as an Enterprise Resource Planning (ERP) system or a Product Information
Management (PIM) system.
You can use asynchronous import and export jobs to synch up with the external systems.
You can also develop synchronous integrations using web services. Some implementations
store legacy order data and email subscription data in custom objects, but in this case it’s
more efficient to keep the data in external systems and use web services for real-time
access to the data.
One way to limit the use of custom objects is to use session storage. When you calculate the
cart contents or a wish list, you can create a session basket stored as JSON. When the customer
accesses the mini-cart, the system checks if the JSON object exists and, if so, parses it. Using
the session storage in this way minimizes database access. When the customer views a new
page, the system does not have to access the basket in the database to make sure the basket is
up-to-date.
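A minimal sketch of this approach (the attribute and dictionary names are illustrative):

// After calculating the cart, store a lightweight summary in the session.
var basket : Basket = pdict.Basket; // assumes the basket is on the pipeline dictionary
session.custom.minicartJSON = JSON.stringify({
    itemCount : basket.getProductLineItems().size(),
    total     : basket.getTotalGrossPrice().getValue()
});

// When rendering the mini-cart, parse the summary instead of re-reading the basket.
var summary = session.custom.minicartJSON ? JSON.parse(session.custom.minicartJSON) : null;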
Cookies
Cookies let you store data across sessions. Use cookies for information that is required for
every server-side request. An effective use of cookies is to save geolocation information. Sites
can use geolocation to present region-specific content or experiences based on the customer’s
location. To do this, store the geolocation data in a cookie on the browser. If you use cookies in
this way, you won’t need to make subsequent calls to the geolocation system for each request.
Keep in mind that the cookie header size is limited in the web adapter, so use cookies
sparingly. If a cookie grows with each request (for example, if you’re appending data to a
cookie based on a customer’s actions), the cookie header will eventually exceed the limit and
the web adapter will no longer accept the request. Since cookies are sent with every request,
problems with cookies slow down client-side performance. It is preferable to use
sessionStorage or localStorage rather than cookies where possible.
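A minimal sketch of the geolocation cookie, assuming the code runs where the implicit request and response objects are available (the cookie name, value, and lifetime are illustrative):

importPackage(dw.web);

var geoCookie : Cookie = new Cookie("geoRegion", "US-East"); // illustrative name and value
geoCookie.setMaxAge(60 * 60 * 24 * 30); // 30 days, in seconds
response.addHttpCookie(geoCookie);

// On later requests, read the cookie instead of calling the geolocation service again.
var cookies : Cookies = request.getHttpCookies();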
HTML5
Like cookies, you can use HTML5 to store data locally within a user’s browser. HTML5 has the
advantage that the system utilizes the stored data without having to send the data with every
web request. HTML5 lets you access large data sets quickly (typically up to 5 MB).
Keep in mind that local storage has a separate database per protocol in the browser (HTTP
versus HTTPS).
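A minimal client-side sketch (the key name and lookup helper are illustrative assumptions):

// Check localStorage before calling the server; nothing stored here is sent with requests.
var region = localStorage.getItem('geoRegion');
if (region === null) {
    region = lookupRegionViaAjax(); // assumed helper: a single call to the server
    localStorage.setItem('geoRegion', region);
}
// Remember: HTTP and HTTPS pages each get a separate localStorage database.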
1. Suppose you need to store data across sessions. Which storage strategy or strategies would you use? Select all that apply.
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
2. Suppose you need to store data across sessions but you don’t want to send the data with each
request. Which storage strategy or strategies would you use? Select all that apply.
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
3. Which storage strategy or strategies let you save state information within a session? Select all that
apply.
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
4. Which storage strategy is most efficient for storing legacy order data?
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
1. Suppose you need to store data across sessions. Which storage strategy or strategies would you
use? Select all that apply.
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
2. Suppose you need to store data across sessions but you don’t want to send the data with each
request. Which storage strategy or strategies would you use? Select all that apply.
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
3. Which storage strategy or strategies let you save state information within a session? Select all that
apply.
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
4. Which storage strategy is most efficient for storing legacy order data?
Sessions
External Systems
Custom and System Objects
Cookies
HTML5
Object Creation
Delay creating baskets, wish lists, and custom objects until the customer actually needs them. Create a
basket only when a customer adds an item for the first time. If you create the basket automatically for
a customer’s session or request, the customer has to wait for the database to create the object. If you
avoid creating these objects until they’re needed, you create fewer objects, so your system cleanup
jobs will be faster.
Note that when you use a transactional pipelet, you’re accessing the database, so limit the number of
transactional pipelets you use in your pipelines.
Input/Output
Do not use the dw.io classes unless you absolutely have to.
With the dw.io classes, you risk making concurrent writes by different application servers to the
same file.
For example, to generate log files, use the dw.system.Logger.
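A minimal sketch using a custom log category (the file prefix, category name, and variables are illustrative):

importPackage(dw.system);

var logger : Log = Logger.getLogger("export", "feedjob"); // illustrative file prefix and category
logger.info("Processed {0} records in {1} ms", recordCount, duration); // illustrative variables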
Object Churn
The Demandware platform uses an Object Relational Mapper (ORM)—an object cache—to avoid
duplicate fetches from the database, for example, when fetching product information. This cache is
transparent to the custom application, but improves performance significantly. The system fetches the
items initially, then looks them up easily using their primary keys.
Learning Objectives
After completing this module, you will be able to:
Analyze performance using Pipeline Profiler.
Analyze performance using Business Manager Analytics.
The Pipeline Profiler in Business Manager lets you analyze execution times of scripts, pipelines, and
templates. To ensure that you develop efficient code, it’s important to use the Pipeline Profiler during
development. You can activate the Profiler on production instances, as well, but using it during
development and testing can head off issues.
To diagnose your pipelines and scripts, you:
1. Enable the Profiler.
2. Run the pipelines you want to test.
3. View the Pipeline Profiler page.
To ensure that you obtain meaningful averages, run the pipelines and scripts a number of times before
viewing the data. Performance times vary depending on factors like caching and site traffic.
The Profiler captures all requests that hit the application server and detects high hit counts and high
average runtimes. These statistics help you identify the root causes of slow pages or pipelines, so you
know which code to optimize.
Note: The Profiler analyzes code executed by the application server only. The Profiler doesn’t monitor
cached pages—the web server manages cached pages, bypassing the application server.
When you use the Pipeline Profiler, monitor the pipelines that your site calls most frequently.
Performance issues in these pipelines have more of an effect on site performance than pipelines called
infrequently. Optimizing frequently used pipelines is often an effective method for improving site
performance.
However, the Total Time for the Product-Detail pipeline is 6,990 ms, longer than for the
Product-HitTile pipeline, which is 2,251 ms. Even though the Product-HitTile pipeline is
called frequently, its Total Time is far less than the Product-Detail pipeline.
Optimizing long-running pipelines is critical to improving site performance. In this example, the priority
is to optimize the Product-Detail pipeline, a long-running pipeline with an average runtime of
2,230 ms. To investigate the Product-Detail pipeline further, click the Product link corresponding
to the Detail start node.
Processing times factor in includes and script code. The Pipelet Node IDs and the Interaction Node IDs
are represented as paths from the start node through branches to the nodes.
However, you can’t drill into included templates to analyze their performance.
The Location in the Script Data report shows the template or script that calls the method.
Important: The Profiler analyzes down to the function-level. The Profiler cannot detect issues within a
function—if you have an expensive loop within a function, the Profiler can’t detect it.
In this exercise, you’ll analyze pipeline performance using the Pipeline Profiler.
1. In Business Manager, select Administration > Operations > Pipeline Profiler.
2. Click the start button to activate the profiler.
The stop button displays while the profiler captures performance data.
3. On your sandbox storefront, trigger pipelines by navigating through categories, for example, Mens >
Clothing > Dress Shirts.
You can trigger other pipelines by performing other operations on your storefront.
4. Return to the Administration > Operations > Pipeline Profiler page and click the stop button to
stop profiling.
5. Under Browse Captured Pipeline Data, click your storefront site link, Sites-SiteGenesis-Site if
you’re using SiteGenesis.
6. Click the Average Time header link twice to list the average pipeline runtimes in descending order.
In this exercise, you’ll use the Pipeline Profiler to compare the two scripts shown in the previous lesson.
The scripts and templates you need for the exercise are included in the Developing for
Performance and Scalability Exercises.zip file provided by your instructor.
Note: If you are taking the eLearning version of the course, the zip file is on the Course Materials tab.
1. In Business Manager, select Administration > Operations > Pipeline Profiler.
2. In the Pipeline Profiler, click the start button to activate the profiler.
The stop button displays while the profiler captures performance data.
3. Run your DisplayCats1 pipeline a few times now that the profiler is capturing data.
4. Under Browse Captured Pipeline Data, click your storefront site link, for example, Sites-
SiteGenesis-Site.
In the Profiler-Pipeline Performance page, you’ll see the profiling results of the DisplayCats1
pipeline. The results include the number of hits, the total, average, minimum, and maximum time,
for example:
The DisplayCats2 pipeline is much more efficient than the DisplayCats1 pipeline. The
DisplayCats1 pipeline instantiates and post-processes all products in the catalog. The
DisplayCats2 pipeline accesses the pre-built search index with orderable data, which is much
faster than accessing the database for every product.
Business Manager Analytics lets you run reports against the data the Demandware platform collects.
Use the Analytics reports to diagnose performance issues in production, but not on sandboxes or
development instances.
The Demandware operations team must enable Demandware Analytics on your instance for you to
access the reports. If enabled, Demandware runs the Analytics reports once a day. The reports are
based on data from the web server log files of the Demandware web tier.
Depending on the report type, you can generate daily, weekly, monthly, quarterly, or yearly reports.
You can export reports to XLS files for further analysis. Be sure to check the Business Manager
Analytics reports frequently, especially after code deployments.
Use the Analytics Traffic Reports to review site traffic statistics.
To view the analytics traffic reports, in Business Manager, select Site >
Analytics > Traffic Reports. Select one of the report types. Analytics are
supported only on production instances.
Select the Top Pages report on the Traffic Reports page to list the top 100,000 URLs used in a day.
For the top pages:
Check that each of the top pages is being cached.
Look for ways to improve the scripts used on the page.
Look for ways to optimize the images and the JavaScript on the page.
You can review pipeline performance data on production instances using the Analytics Technical
Reports. In addition to drilling into pipeline data, the Technical Reports let you analyze pipeline
includes and cache hit ratios, as well.
To view the analytics technical reports, in Business Manager, select Site >
Analytics > Technical Reports. Select one of the report types. Analytics are
supported only on production and load testing instances.
Select one of the pipeline reports. Each type of report also has a version for includes; for example, there is a Pipeline Runtime report for pipelines and a Pipeline Runtime (Includes) report for remote includes in the pipeline.
Retrieved from Cache: The green bar shows the percentage of requests that the web server retrieves
from the page cache. The goal is to maximize this value to keep all transactions on the web tier.
Not Found in Cache: The red bar shows requests that were not retrieved from cache. If a template
does not have the <iscache> tag, the content is not cached. If a template does have the <iscache>
tag but the cache has expired, the page needs to be regenerated.
Cache Store: The blue bar shows the percentage of requests that were cached but never retrieved
again within the caching time. Caching these requests is wasteful since they're never accessed again.
For the total percentage of cached pipelines and includes, combine the green bar (retrieved from
cache) and the blue bar (cache store).
To reduce the percentage of the requests stored in cache and not retrieved:
Analyze pipelines and includes with high Cache Store percentages to determine whether it
makes sense to cache them. Sometimes instead of trying to increase the cache hit ratio, you
should tune the performance of the pipeline or include code and avoid caching the page or
partial page.
If it makes sense to cache these pipeline and include responses, you might need to lengthen the caching time.
Use methods that increase the likelihood that cached pages and partial pages are reused. For
example, ensure that URLs are reused by avoiding varied parameters that cause URL
mismatches. Another method is to reuse snippets and templates for similar behaviors. For
example, if you’re displaying the same data using a gallery or a listing, create and cache the
snippet and template once, then change the styling using JavaScript and style sheets.
On the other hand, pipelines related to the cart and user accounts are not cached since they’re
customer-specific.
The Object Churn Trends report shows data for the last 24 hours for all sites on an instance, rather
than just a single site. Like the rest of the Business Manager Analytics, the Object Churn Trends report
is generated on production instances only.
To analyze database performance, in Business Manager, select Site > Analytics > Object
Churn Trends and choose the object types you’re investigating: sessions, baskets, orders,
products, wish lists, or customers.
Review the documentation (https://info.demandware.com) and match the reports on the left with their
contents on the right. (Answers on following page)
Visit Duration Report Lists the unique URLs that lead visitors to the site
and generate the most traffic.
Request Runtime Report Tracks the response time of your store—the time
between receiving the request and delivering the
last byte of data.
In this exercise, you’ll use the Report Browser to review caching behavior by pipeline.
1. In Business Manager, select your site and select Site > Analytics > Traffic Reports.
2. Answer the following questions about Traffic Reports.
Note: Although Analytics reports are not available generally on sandbox instances, the sandboxes
do have sample Analytics reports for the year 2012.
What was the maximum number of requests during the month of August in 2012?
_____________________________________
On which day was the longest visit during the time span 8/12/12 – 8/18/12?
_____________________________________
On February 13, 2012, of the average request runtimes, when was the longest and how many
seconds was it?
_____________________________________
Which date in December, 2012 had the most visits to the site?
_____________________________________
In this exercise, you’ll use the Report Browser to review pipeline data.
1. In Business Manager, select your site and select Site > Analytics > Technical Reports.
2. Answer the following questions about Technical Reports.
Note: Although Analytics reports are not available generally on sandbox instances, the sandboxes
do have sample Analytics reports for the entire year of 2012.
What percentage of the main request pipelines were retrieved from cache?
_____________________________________
What percentage of main request pipelines were stored in cache but never retrieved?
_____________________________________
Were there errors in any of the pipelines and if so, what were the error codes?
_____________________________________
In this exercise, you’ll use the Report Browser to review caching behavior by pipeline.
1. In Business Manager, select your site and select Site > Analytics > Technical Reports.
2. Select a date or range of dates to filter the technical reports.
Note: Although Analytics reports are not available generally on sandbox instances, the sandboxes
do have sample Analytics reports for the entire year of 2012. Select a date in 2012 to view sample
reports.
To troubleshoot your site, use Business Manager Analytics to rank the top five slowest or most heavily
used pipelines on production. These are the pipelines you should investigate for potential
optimizations. Once you determine the top five pipelines, you can use the Pipeline Profiler to analyze
them further.
Follow these tips to look for slow or heavily used pipelines:
Use Traffic Reports > Top Pages to view the top 100 URLs with their parameters. Use these to rank
the areas of the site with slow performance.
Use Technical Reports > Pipeline Performance to analyze pipelines with high average processing
times and weak cache hit ratios.
Use Technical Reports > Pipeline Runtime Distribution to see another view of request times
because average runtimes do not always reflect optimal runtimes; good caching can make a
pipeline appear faster than it really is.
Use Technical Reports > Pipeline Runtime (Includes) to determine which pipelines are calling
which includes.
Use Object Churn Trends to investigate database performance issues.
Learning Objectives
After completing this module, you will be able to:
Create and schedule jobs that limit resource consumption on the platform.
Diagnose issues with job performance.
Use web services efficiently.
Introduction
The number and type of integrations affect the performance of your site. The data models your team
develops are a key factor in optimizing storefront processing. You can make your storefront faster by
developing a good data structure.
You’ll need to determine the optimal way to handle each type of data being transferred. You can use:
Storefront pipelines and scripts
Asynchronous integration using jobs
Synchronous integration using web services
The following table compares these methods for transferring data, as well as the timeouts for each
method. The timeouts represent the amount of time the system waits for a response before closing
the browser connection.
For asynchronous integrations you’ll set up jobs using the Integration Framework. Asynchronous
integrations let you import and export large amounts of data. These jobs can be resource-intensive, so
this lesson provides tips to optimize job performance.
Review the following tips for optimizing job performance.
Schedule jobs strategically.
Large import or export jobs use a great deal of system resources, so schedule your jobs strategically. Schedule jobs for non-peak hours and disperse job start times to balance the job load. Ideally, try to stagger job start times so that only one job at a time runs on an instance group.
As product imports are time and resource intensive, be strategic about their use, especially if you have multiple sites. Try to leverage synergies between sites. Not every site needs to import its own catalog, price book, and inventory. When you import products, consolidate data during transformation, so that import jobs touch each changed product only once.
In general, limit the frequency of imports, exports, and search index builds. Each of these operations clears the cache, which slows processing until the cache is filled again.

Set up jobs and index builds on the staging instance.
Do not perform frequent catalog imports and search index builds on the production server. To reduce resource consumption on the production server, use staging and then replicate to production: import catalogs to staging, build the search index there, then replicate to production. Replicate catalogs, prices, and search indexes only once a day. If you need to update the availability of products regularly, you can perform incremental search index builds on production throughout the day.
Some third-party integrations require imports of data directly to the production instance. As long as you limit the frequency of these jobs, they won't hinder performance.
As with catalog imports, run sanity and plausibility checks on staging rather than production. Sanity and plausibility checks ensure that your data is correct before you replicate it to production.
Use delta processing.
To reduce the number of objects being imported or exported, use delta feeds. Delta feeds include only those objects and attributes that have changed. For example, if a client typically sells 1,000 items a day, it should not be necessary to import 100,000 inventory records every 30 minutes.
You can set up a delta feed in Integration Framework by selecting the MERGE import mode. The MERGE mode is more efficient than the UPDATE mode, and it creates objects if they do not exist. For best performance, avoid the REPLACE import mode: it is less efficient than MERGE and UPDATE because it deletes and then recreates attributes and relationships.
To prepare for delta processing, use standard Demandware import formats as defined in the Demandware object schema files. The schema governs the XML import files you'll create for each system object type. The Demandware online help (https://info.demandware.com) provides an Import/Export Object cheat sheet with links to the Demandware object schema files on which you'll base your XML import files. A sketch of a delta feed follows.
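A minimal sketch of an inventory delta feed, based on the standard Demandware inventory import schema (the namespace version, list ID, and SKU are illustrative; verify the element structure against the schema files for your release):

<?xml version="1.0" encoding="UTF-8"?>
<inventory xmlns="http://www.demandware.com/xml/impex/inventory/2007-05-31">
    <inventory-list>
        <header list-id="site-inventory">
            <default-instock>false</default-instock>
        </header>
        <records>
            <!-- only the records that changed since the last feed -->
            <record product-id="SKU-1001">
                <allocation>42</allocation>
            </record>
        </records>
    </inventory-list>
</inventory>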
Optimize the memory footprint for large data sets.
Standard Demandware imports can process arbitrary feed sizes. When processing these large data sets, pay attention to the memory footprint. Design your loop logic so memory consumption doesn't increase as the result set increases. Be sure to keep only currently processed objects in memory and refrain from performing operations like sorting on collections in memory. To ensure that objects can be freed from memory, don't retain references to objects in memory.
Avoid reading an entire file into memory by reading feeds one record at a time. Also, stream data to a file at regular intervals to avoid building large structures in memory.
If you need to create multiple feeds, query the objects once and write records to all of the feeds as you iterate over the results. This method saves time because you only need to create the objects in memory once.

Group operations in explicit pipeline transactions.
Demandware commits changes to the database for each business object. To improve performance, commit related changes in a single transaction by enclosing the import pipelets in an explicit pipeline transaction. You must ensure that the transaction size limit of 1,000 is not exceeded, so use this approach only as an exception.

Adhere to the suggested job load factor.
Keep application server utilization by jobs to a minimum. Calculate the job load factor: the total number of seconds per day of job execution time on an instance (staging or production) divided by 86,400 (the number of seconds in a day). Try to keep the job load factor below 0.2.

Develop a recovery strategy.
Develop a recovery strategy for your jobs. Jobs that end abnormally can cause server restarts or application server failures. Design each job so that it recovers automatically, for example, by recognizing which files have not yet been imported during the subsequent execution.

Clean up after imports and exports.
For imports, implement proper cleanup and delete files that are older than two weeks. Compress import files after they are processed. Be sure to delete site exports when you no longer need them. To minimize the size of site exports, exclude product images.
All commerce sites have external integration points, and these external integrations heavily impact site
performance. Front-end integrations typically implement features like analytics, ratings, and reviews
using JavaScript, Ajax, or JSON.
You learned coding tips to optimize client-side front-end integrations in the Optimizing Performance
during Development module. This lesson focuses on optimizing back-end integrations. On the back-end,
developers typically implement synchronous integrations using web services. Examples of back-end
integrations include payment processors, tax calculation software, and inventory management systems.
To optimize the performance of synchronous back-end integrations:
Specify proper timeouts for back-end services.
Manage unresponsive services.
Optimize REST-based caching.
Following is a sample OCAPI configuration (specified in JSON), which includes cache settings for each
specified resource. The cache_time attribute is specified in seconds.
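A minimal sketch of such a configuration (the API version, client ID, resource paths, and cache times are illustrative):

{
    "_v": "13.4",
    "clients": [
        {
            "client_id": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
            "resources": [
                {
                    "resource_id": "/categories/*",
                    "methods": ["get"],
                    "read_attributes": "(**)",
                    "cache_time": 900
                },
                {
                    "resource_id": "/products/*",
                    "methods": ["get"],
                    "read_attributes": "(**)",
                    "cache_time": 300
                }
            ]
        }
    ]
}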
Learning Objectives
After completing this module, you will be able to:
Describe platform quotas and the benefits of using them.
Describe the Demandware data usage policy.
Identify quota violations using the Business Manager Quota Status and quota log files.
Create email notifications for quota violations.
Describe the types and levels of quotas and explain the purpose of each.
Describe the process of overriding platform quotas.
The Demandware Quota Framework measures usage on all instances of your realm against a set of
quotas. Quotas define programmatic boundaries and best practices for using the Demandware API and
business objects. Quotas ensure that you stay within the platform limits for memory usage, resource
consumption, API calls, and the number of business objects. Demandware provides explicit usage
controls—for example, the frequency of script and pipelet API calls and the number of objects
per-instance. These limits prevent issues such as out-of-memory exceptions, thread exhaustion,
database churn, and overuse of garbage collection. In preventing these issues, quotas ensure that your
custom implementations can safely operate and scale on the Demandware platform.
If an implementation exceeds quota boundaries, the system throws an exception. The system handles
quota exceptions as follows:
If the WARN threshold (usually 60% of the limit) is exceeded, the system logs a quota warning.
Quotas with limit 0 have no WARN threshold; the system throws an exception immediately when they are exceeded.
Note: Quotas with limit 0 should not be used with custom application code.
Quota Enforcement
Quotas are either "enforced" or "not enforced."
If an enforced quota is exceeded, the system throws an exception preventing the current operation
from completing. The Quota Framework writes these types of exceptions directly to the quota logs.
You’ll learn how to access the quota logs later in this module.
Examples of quota violations can include:
A collection that creates too many objects in memory
Too many persistent objects of a particular object type created in an instance
When an enforced quota is exceeded in a storefront request, the general error page appears. If an
enforced quota is exceeded during a standard import, the system reports a data warning or data error.
When a quota is not enforced, the platform provides a warning, but does not take any other action.
Important: Demandware does not enforce all quotas initially. Over time Demandware will enforce
these quotas. This policy gives developers a chance to refactor code to prevent quota violations.
Exceeding enforced quotas can lead to additional fees.
It’s important to keep custom implementations within platform usage limits. Demandware’s standard
contract includes an addendum that outlines this Data Usage Policy (see
https://xchange.demandware.com/docs/DOC-1481).
This document details specific data usage components, including data processing (DP), data transfer
(DT), and data storage (DS).
The parameters within the policy include:
A baseline monthly allocation for all customers, independent of gross merchandise volume (GMV)
An additional monthly allocation based on GMV
A per-unit fee for resource utilization in excess of the total monthly entitlement
Although most sites stay within their monthly resource allocations, sometimes sites utilize resources
beyond their allocations. For example, some clients gain significant business value from the additional
resources consumed during a particular period of time. In this case, the client pays an additional fee
for the benefit of extra resources.
You can view the status of quotas to identify issues within your code and correct the issues before they
affect site stability. Use the Business Manager Quota Status page, as well as the quota log files, to view
the status of quotas.
Quota Status
You use the Business Manager Quota Status page to view API and object usage levels, as well as the
number of quota violations on a particular instance. You can also see which quotas the system
enforces as well as the maximum values and the limits for quotas.
The Quota Status page displays quota usage and violations for the last 24 hours or in the time period
since the server was started. Some quotas specify warning thresholds to indicate that your site is
nearing a quota limit. Quotas display with a green icon if there are no quota issues, an orange icon for
warnings, and a red icon for violations.
The Quota Status shows:
Object quotas
Object relation quotas
API quotas
Object Quotas
Object quotas limit the number of objects of a particular type per instance. The platform updates
object quotas within 20 minutes of the quota being exceeded.
In this example, the number of catalogs allowed is 200. The instance uses three catalogs.
API Quotas
API quotas set limits on API usage, for example, the number of object instances, object sizes, runtimes,
and the number of executions per page.
In the example below, the platform limits the number of custom objects created to 10 per page.
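As a hypothetical sketch, assuming an invented custom object type "AuditEntry", the following .ds snippet would violate such a 10-per-page creation quota on its 11th call within a single request:

importPackage( dw.object );
importPackage( dw.system );

function execute( pdict : PipelineDictionary ) : Number
{
    // Each iteration creates one custom object within this page request
    // (assumes the surrounding pipeline provides the required transaction).
    // With a per-page creation quota of 10, the 11th call below would
    // trigger a quota violation.
    for (var i = 0; i < 25; i++)
    {
        CustomObjectMgr.createCustomObject("AuditEntry", "entry-" + i);
    }
    return PIPELET_NEXT;
}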
Email Alerts
You can set up email alerts to notify yourself or others about quota violations. Once a day, the system
sends email alerts listing all quotas above the warning threshold or above the limit. This example
shows an email notifying recipients that an API quota limit was exceeded.
When a quota’s warn threshold is exceeded, or when a specific quota limit is exceeded, messages are
written to the quota log files, along with the code segment where the exception occurred. This
happens for all quotas, enforced and not enforced. Quota log files are created using the filename
prefix "quota", for example, quota-blade2-5-appserver-20140723.log.
To access quota logs within Business Manager, select Site Development >
Development Setup. Under WebDAV Access, click the link under Log Files.
Quota Entry
This example shows an entry in a quota log file:
[2014-07-23 13:00:22.577 GMT] [RequestHandlerServlet|14320460|Sites-
SiteGenesis-Site|Product-
Show|PipelineCall|pRLNN0PXCp7rqIClq7IcPUvuEEjtkOWUAMBdUWhniPRFlV4Xvim3HwhDw9
6bRZ7e5BUxhnkC2ExrVYZVBFLJtw==] Quota api.jsStringLength (enforced, warn
600000, limit 1000000): warn threshold exceeded 1 time(s), max actual was
600010, current location: request/site Sites-SiteGenesis-Site/top pipeline
Product-Show/interaction node/template refapp.default_.CustomDisplay
Quota Log File Entry Component: Thread
Data (from preceding example): [RequestHandlerServlet|14320460|Sites-SiteGenesis-Site|Product-Show|PipelineCall|pRLNN0PXCp7rqIClq7IcPUvuEEjtkOWUAMBdUWhniPRFlV4Xvim3HwhDw96bRZ7e5BUxhnkC2ExrVYZVBFLJtw==]
The details of the message are separated by colons and include the following:
Quota type
Whether the quota is enforced or not enforced
The warn threshold (if one exists)
The limit
The number of times since the last log entry for this quota that the WARN threshold or limit
was exceeded
The highest observed reading (max actual)
The current location, which shows where the warn threshold or limit was exceeded
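Applied to the preceding sample entry, these fields break down as follows:
Quota type: api.jsStringLength
Enforcement: enforced
Warn threshold: 600000
Limit: 1000000
Times exceeded since the last log entry: 1 (warn threshold)
Max actual: 600010
Current location: request/site Sites-SiteGenesis-Site, top pipeline Product-Show, interaction node, template refapp.default_.CustomDisplay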
The log system collects information about quota violations and aggregates quota log messages over a
period of time. After the time period ends, the system writes to the quota log file the next time a
WARN threshold or limit is exceeded. Demandware writes the code location of the most recent event
to the log file, but does not expose the code locations of the aggregated events.
Quota overrides are a transitional solution to handle legacy quota exceptions. These overrides soften
quotas and quota status actions by not enforcing violations. Quota overrides downgrade quotas from
error to warn.
For quota issues on primary instances (production, staging, and development), you must request a
temporary quota override via a support ticket. Demandware overrides the quota if the impact justifies
a SEV-1 issue. Quota overrides are typically reserved for use with realms created before the
Demandware Quota Framework was introduced (November 2011). Demandware never removes quota
overrides unilaterally.
In this exercise, you’ll set up an email alert that notifies you when a quota violation occurs.
1. In Business Manager, select Administration > Operations > Quota Status.
2. Under Quota Alert Settings, enter your email address in the Email to Addresses field.
3. Select the Enabled field and click Apply.
The system sends an email alert once a day if there are quota violations.
Because the Quota Status monitors object, object relation, and API violations, you can use the report
to diagnose performance issues.
In this exercise, you’ll diagnose performance issues using the Quota Status report. You’ll compare the
quota results of two different scripts, getProdByColor1.ds and getProdByColor2.ds.
The first script, getProdByColor1.ds, causes an API quota violation.
The script generates a product feed for product search engines. The feed contains only online products,
represented as variants of each orderable color. So, instead of adding one entry for the master, the
feed should have one variant entry for each different color. This script provides external product
search engines with relevant color matches.
The script attempts to generate the feed by obtaining all objects using the
dw.catalog.ProductMgr.queryAllSiteProducts() method on the entire storefront, then
querying for online variations. The Demandware platform prevents the queryAllSiteProducts()
method from being used on the storefront, so this script fails with an API quota violation.
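As a rough, hypothetical reconstruction (the exercise contains the actual script), the failing approach looks something like this:

importPackage( dw.catalog );
importPackage( dw.system );
importPackage( dw.util );

function execute( pdict : PipelineDictionary ) : Number
{
    // queryAllSiteProducts() walks the entire catalog. The platform blocks
    // this method in storefront requests, so the call below fails with an
    // API quota violation.
    var products : SeekableIterator = ProductMgr.queryAllSiteProducts();
    while (products.hasNext())
    {
        var product : Product = products.next();
        if (product.isOnline() && product.isVariant())
        {
            // ... emit one feed entry per orderable color ...
        }
    }
    products.close();
    return PIPELET_NEXT;
}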
The second script, getProdByColor2.ds, is more efficient: it uses productSearchHits from the Search
API to access the online products.
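Again as a hypothetical sketch, assuming a root category ID of "root" and omitting the feed-writing details, the efficient version drives the feed from the search index:

importPackage( dw.catalog );
importPackage( dw.system );
importPackage( dw.util );

function execute( pdict : PipelineDictionary ) : Number
{
    // Ask the search index for orderable products instead of iterating
    // over every product in the catalog.
    var psm : ProductSearchModel = new ProductSearchModel();
    psm.setCategoryID("root");                // assumed root category ID
    psm.setRecursiveCategorySearch(true);
    psm.setOrderableProductsOnly(true);
    psm.search();

    var hits : Iterator = psm.getProductSearchHits();
    while (hits.hasNext())
    {
        var hit : ProductSearchHit = hits.next();
        // Each hit exposes the orderable variants it represents,
        // e.g., one variant per color.
        var represented : Collection = hit.getRepresentedProducts();
        // ... emit one feed entry per represented color variant ...
    }
    return PIPELET_NEXT;
}

Because the search index already knows which products are online and orderable, this version touches far fewer objects per request and stays within the API quotas.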
In this exercise, you’ll run the getProdByColor1.ds script.
Now, you’ll run the GetProductByColor2 pipeline, which calls the more efficient
getProdByColor2.ds script:
Knowledge Check
Learning Objectives
After completing this module, you will be able to:
Summarize the diagnostic tools and identify the circumstances under which to use them.
Identify external tools for analyzing performance.
Lesson 7.1: Diagnostic Tools for Analyzing Performance and Data Usage
This table shows the diagnostic tools available to help you troubleshoot performance issues on the
Demandware platform:
Tool: Pipeline Profiler
Purpose: Profiles pipelines and scripts to detect slow pipelines, inefficient coding practices, and bottlenecks on production instances (but run the Profiler on production for only about 15 minutes).
Location: Business Manager: Administration > Operations > Pipeline Profiler
Instance types: All instance types
The Demandware Control Center provides a complete view of overall resource consumption for a site
on the platform. Use the Control Center to verify storefront and job processing times. Ideally, resource
consumption should not increase disproportionately as traffic increases.
A good strategy is to review the site's traffic patterns using the Analytics Traffic Reports, then
compare the traffic results to overall resource consumption to validate the application's ability to
scale. Even if CPU consumption scales linearly with traffic, there is room for improvement; if
consumption grows faster than the traffic, you need to take immediate action.
To view resource consumption details for your instances, you must have a
Demandware Control Center account. Log in to the Demandware Control Center
(https://controlcenter.podX.demandware.net). In the menu, click Instance
Management. Expand to select an organization and instance.
To view resource consumption statistics for an instance, click Statistics. Scroll to the bottom to see
usage statistics, for example, the following report compares processing seconds from last month
versus this month:
The graphs show the cumulative and daily processing seconds for the current and last month. They
distinguish between storefront and job processing.
Click the Download link for each graph to save it as a CSV file.
In addition to the utilities and reports Demandware provides, there are effective external tools to help
you analyze performance. These free tools measure the performance of your pages and suggest ways
to improve performance.
WebPagetest
You can use WebPagetest to test the performance of pages on your site (www.webpagetest.org).
WebPagetest lets you select a browser and a location from which to run the test. The tool provides a
waterfall view that can help pinpoint issues:
PageSpeed
PageSpeed is available on the Google Developers site
(https://developers.google.com/speed/pagespeed). PageSpeed provides suggestions for improving
your site's performance for either desktop or mobile usage:
Congratulations
Now that you’ve learned how to develop customizations and integrations using practices that ensure
optimal performance and scalability, you can:
Review your scripts to ensure that you are using the coding best practices you’ve learned in this
course. Use the script data in the Pipeline Profiler to compare and optimize pipeline and script
implementations. If possible, perform a code review and replace deprecated API calls with
up-to-date API calls.
Analyze your storefront site using the Storefront Toolkit Cache Information and the Pipeline
Profiler to check that the site uses effective caching strategies.
Use the Pipeline Profiler, Business Manager Analytics, and the Demandware Control Center to
analyze and troubleshoot performance issues.
Be sure your team allocates time to analyze your site’s performance continually and to implement
ongoing optimizations.
If you have now completed the following courses, you can sign up for the Customization, Integration,
and Performance certification exam:
Exploring SiteGenesis
Working with the Demandware APIs
Integrating with Demandware
Developing for Performance and Scalability