
Lead System Architect 8.3
8.3.1
Student Guide
© 2019
Pegasystems Inc., Cambridge, MA
All rights reserved.

Trademarks

For Pegasystems Inc. trademarks and registered trademarks, all rights reserved. All other trademarks or service marks are property of their
respective holders.

For information about the third-party software that is delivered with the product, refer to the third-party license file on your installation media
that is specific to your release.

Notices

This publication describes and/or represents products and services of Pegasystems Inc. It may contain trade secrets and proprietary information
that are protected by various federal, state, and international laws, and distributed under licenses restricting their use, copying, modification,
distribution, or transmittal in any form without prior written authorization of Pegasystems Inc.

This publication is current as of the date of publication only. Changes to the publication may be made from time to time at the discretion of
Pegasystems Inc. This publication remains the property of Pegasystems Inc. and must be returned to it upon request. This publication does not
imply any commitment to offer or deliver the products or services described herein.

This publication may include references to Pegasystems Inc. product features that have not been licensed by you or your company. If you have
questions about whether a particular capability is included in your installation, please consult your Pegasystems Inc. services consultant.

Although Pegasystems Inc. strives for accuracy in its publications, any publication may contain inaccuracies or typographical errors, as well as
technical inaccuracies. Pegasystems Inc. shall not be liable for technical or editorial errors or omissions contained herein. Pegasystems Inc. may
make improvements and/or changes to the publication at any time without notice.

Any references in this publication to non-Pegasystems websites are provided for convenience only and do not serve as an endorsement of these
websites. The materials at these websites are not part of the material for Pegasystems products, and use of those websites is at your own risk.

Information concerning non-Pegasystems products was obtained from the suppliers of those products, their publications, or other publicly
available sources. Address questions about non-Pegasystems products to the suppliers of those products.

This publication may contain examples used in daily business operations that include the names of people, companies, products, and other third-
party publications. Such examples are fictitious and any similarity to the names or other data used by an actual business enterprise or individual is
coincidental.

This document is the property of:

Pegasystems Inc.
One Rogers Street
Cambridge, MA 02142-1209
USA
Phone: 617-374-9600
Fax: (617) 374-9620
www.pega.com

DOCUMENT: Lead System Architect 8.3 Student Guide


SOFTWARE VERSION: Pega 8
UPDATED: January 2020
CONTENTS

COURSE OVERVIEW
Setting up the Pega Platform
Leveraging Pega applications
Designing the case structure
Starting with App Studio
Designing for specialization
Promoting reuse
Designing the data model
Extending an industry framework data model
Assigning work
Defining the authorization scheme
Mitigating security risks
Defining a reporting strategy
How to define a reporting strategy
Query design
User experience design and performance
Conducting usability testing
Designing background processing
Designing Pega for the enterprise
Defining a release pipeline
Assessing and monitoring quality
Conducting load testing
Estimating hardware requirements
Handling flow changes for cases in flight
Extending an application
Leveraging AI and robotic automation

COURSE SUMMARY


COURSE OVERVIEW
Completing the Course Exercises
The Lead System Architect course includes a series of exercises based on a business use case. These exercises
provide an opportunity to practice the skills taught in the topics. Each exercise can be completed independently
of the other exercises.
The exercises in this course provide practical, hands-on experience to help you immediately apply your new
skills. The exercises help reinforce the learning objectives.
To help you complete the exercises, two levels of documentation are provided:
l Your Assignment — specifies the high-level tasks you perform to solve the business problem provided in the
scenario.
l Detailed Steps — shows the series of steps needed to complete the exercise.
While individual exercises are available online, you may find it useful to download and print the complete
exercise guide. The guide is available in the Related Content panel on the right side of this lesson.

Exercise environment options


Pega Academy provides two options for completing the exercises in this course:
l Cloud-based exercise environment (when available).
l VM-based exercise environment (recommended).
The exercise environments are independent of each other. This means the work you complete in one
environment does not exist in the other environment.

Cloud-based exercise environment


When available, the cloud-based exercise system for this course can be accessed through the Open Exercise
System link on each exercise page.
Note: The cloud-based environment is not available offline and does not persist once you complete the course.

VM-based exercise environment

The VM environment is available offline and persists after you complete the course.
Front Stage Event Booking business scenario
Front Stage Event Booking assists customers with booking large-scale, high-profile corporate and musical events,
hosting between 5,000 and 18,000 guests per event. Front Stage has been in business for 30 years, and uses a
range of technology. Some technology is old, such as the reservation system that runs on a mainframe. Some
technology is new, such as the most recent mobile application that helps sales executives track leads.
Front Stage relies on the Information Technology (IT) department to maintain legacy applications, as well as to
support their highly mobile sales organization. In the past, IT created applications that were difficult to use and
did not meet the needs of the end users of these applications. In some cases, the new application slowed the
business instead of making the users more productive.
Front Stage is aware of several smaller event booking companies that are using newer technology to gain a
competitive edge. These smaller companies have started to cut into the corporate event booking segment, and
Front Stage is seeing a dip in sales in this segment as a result. Front Stage's CEO, Joe Schofield, recognizes that if
Front Stage does not invest in technology to transform the way it operates, Front Stage will be out of
business in two years.

Your mission: Architect a Pega solution for Front Stage


During this course, you create an event booking solution for Front Stage. Using the business scenario document,
you apply what you learn in each lesson to design the best technical solution to meet Front Stage's vision of
digital transformation.
Note: Each exercise includes a unique scenario and assignment that identify the specific business problem to
address. Before you attempt each exercise, review the scenario and assignment to understand the goal of the
exercise.
Setting up the Pega Platform
Introduction to setting up Pega Platform
Your application can be deployed on-premise or in a cloud environment. To set up Pega Platform optimally for
the application, you need to understand the profile and operational requirements of that environment. In this
lesson, you look at deployment options, high availability, and hardware sizing as well as planned and unplanned
outages.
After this lesson, you should be able to:
l Compare deployment options
l Architect an environment with high availability
l Request hardware sizing estimates for an environment
l Take a node out of service with minimal disruption
Deployment options
Because of Pega Platform's standards-based open architecture, you have maximum flexibility in deploying and
evolving your applications. Pega Platform can run on-premise in different operating system environments with
any of the popular application servers and databases.
In addition, Pega Platform can be made available as a cloud application for development, testing, and production.
You can mix approaches with development and test environments on cloud, and then move production-ready
applications to on-premise.

On-premise
On-premise refers to systems and software that are installed and operate on customer sites, instead of in a
cloud environment.

Pega Platform requires two pieces of supporting software in your environment:


l A database to store the rules and work objects used and generated
l An application server that supports the Java EE specification – Provides a run-time environment and other
services, such as database connections, Java Message Service (JMS) support, and connector and service
interfaces to other external systems
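As an illustration, the database requirement is commonly satisfied by defining a JNDI data source in the
application server. The following sketch shows what this might look like on Apache Tomcat; jdbc/PegaRULES is
the JNDI name conventionally used by Pega, while the driver, URL, and credentials shown are placeholders for
your environment:

<!-- conf/context.xml (Tomcat): JDBC data source that Pega resolves via JNDI.
     The driver, URL, and credentials are placeholder values. -->
<Resource name="jdbc/PegaRULES"
          type="javax.sql.DataSource"
          auth="Container"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://dbhost:5432/pega"
          username="pegauser"
          password="secret"
          maxTotal="100"/>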

Cloud choice
Running on the cloud in any form is an attractive option for many organizations. Pega Platform provides flexible
support across different cloud platforms and topology managers.
Your platform choice depends on your needs and your environment.

The three basic models for deploying Pega Platform on the cloud include:
l Pega Cloud – Pegasystems' managed cloud platform service offering, architected for Pegasystems'
applications. Pega Cloud offers the fastest time to value.

For more information about Pega Cloud, see the PDN article Pega Cloud.
l Customer Managed Cloud – Customer-managed cloud environments are run within private clouds or run on
Infrastructure-as-a-Service (IaaS) offerings delivered by providers such as Amazon Web Services, Microsoft
Azure, or Google Cloud Platform.

l Partner Managed Cloud – Partner-managed cloud environments are owned and


controlled by business partners. A partner-managed cloud delivers the Pega
Platform as a custom hosting solution or purpose-built application service provider.

Pivotal Cloud Foundry


To simplify IT operations and automate system management tasks, deploy Pega Platform on the Pivotal Cloud
Foundry (PCF) Platform-as-a-Service (PaaS) infrastructure.

PCF is a topology manager. Using a topology manager like PCF still requires one of the cloud providers described
above. For more information, see the PDN article Deploying Pega Platform Service Broker on Pivotal Cloud
Foundry by using the Ops Manager.
To have greater control over the deployment, use BOSH to deploy the Pega Service Broker. For more information,
see the PDN article Deploying the Pega Platform Service Broker on Cloud Foundry by using BOSH.
Docker container
Pega can run as a Docker container. Docker is a Container-as-a-Service (CaaS) infrastructure. Docker is a cost-
effective and portable way to deploy a Pega application because you do not need any software except the Docker
container and a Docker host system.

Developers use Docker to eliminate certain problems when collaborating on code with co-workers. Operators use
Docker to run and manage apps side-by-side in isolated containers. Enterprises use Docker to build agile
software delivery pipelines.
Containers provide a way to package software in a format that can run isolated on a shared operating system.
For more information on Docker support, see the PDN article Pega Platform Docker Support.
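As a hedged illustration only, a container deployment might look like the following; the image name, port, and
environment variable names are placeholders that depend on how the image for your release is built (see the
referenced PDN article for specifics):

# Hypothetical invocation; image name and variable names are placeholders.
docker run -d -p 8080:8080 \
  -e JDBC_URL="jdbc:postgresql://dbhost:5432/pega" \
  -e DB_USERNAME="pegauser" \
  -e DB_PASSWORD="secret" \
  pega-platform:8.3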
KNOWLEDGE CHECK

What deployment options are supported for Pega Platform?


On-premise and cloud are supported for Pega Platform. The three options for cloud are Pega Cloud, Customer
Managed Cloud, and Partner Managed Cloud.
High availability
Application outages can be costly to organizations. The organization loses business when the application is not
available, and it may also be subject to penalties and fines. An unplanned application outage can also damage
the organization's reputation.
Availability is a percentage of time your application is functioning. High availability (HA) has no standard
definition because up-time requirements vary. In general, HA refers to systems that are durable and likely to
operate continuously without failure for a long time. You can correlate the business value of HA to the cost of the
system being unavailable. An industry standard way of referring to availability is in terms of nines:

Availability (%)          Downtime/year
99.9 (three nines)        8.76 hours
99.99 (four nines)        52.56 minutes
99.999 (five nines)       5.26 minutes
99.9999999 (nine nines)   31.5569 milliseconds
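These figures follow directly from the definition of availability. As a quick check, assuming a 365-day year
(31,536,000 seconds):

Downtime per year = (1 - availability) x seconds per year
Three nines: (1 - 0.999)   x 31,536,000 s = 31,536 s = 8.76 hours
Five nines:  (1 - 0.99999) x 31,536,000 s = 315.36 s = 5.26 minutes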

High availability architecture


To reduce the risk of unplanned downtime, design the application to withstand failures. Designing a
highly available application means building redundancy and failover into your application so that these risks are
minimized. For example, one implementation can have several load balancers, physical and virtual machines,
shared storage repositories, and databases.
Clustering
The concept of clustering involves taking two or more Pega Platform servers and organizing them to work
together to provide higher availability, reliability, and scalability than can be obtained by using a single Pega
Platform server. The app servers can be on-premise or in a cloud and must have a means of dynamically
allocating servers to support increased demand.
Pega Platform servers are designed to support redundancy among various components, such as connectors,
services, listeners, and search. The exact configuration varies based on the specifics of the applications in the
production environment.

Load Balancing
Load balancing is a methodology to distribute the workload across multiple nodes in a multinode clustered
environment. Load balancers monitor node health and direct requests to healthy nodes in the cluster. Because the
requestor information (session, PRThread, and clipboard pages) is stored in memory, Pega requires all requests
from the same browser session to go to the same JVM. In network parlance, this is known as sticky sessions
or session affinity.
Session affinity is configured with the load balancer. It ensures that all requests from a user are handled by the
same Pega Platform server. Load balancing Pega nodes in a multinode clustered environment can be achieved
by using hardware routers that support "sticky" HTTP sessions. Cisco Systems Inc. and F5 Networks Inc. are
examples of vendors that offer such hardware. Software, virtual, and cloud-based load balancer solutions, such
as Amazon EC2's elastic scaling, are also available.
The load balancers must support session affinity and cookie persistence. Production load balancers offer a range
of options for configuring session affinity. The Pega Platform supports cookie-based affinity. You can configure
cookies for high availability session affinity using the following variables:
l session/ha/quiesce/customSessionInvalidationMethod
l session/ha/quiesce/cookieToInvalidate
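As a sketch, these settings are typically supplied as prconfig.xml entries (or equivalent Dynamic System
Settings). The cookie name below is a placeholder; it must match the affinity cookie that your load balancer
actually issues:

<!-- prconfig.xml sketch; the value shown is a placeholder. The companion
     setting, session/ha/quiesce/customSessionInvalidationMethod, can name a
     custom invalidation method if the default behavior is not sufficient. -->
<env name="session/ha/quiesce/cookieToInvalidate" value="JSESSIONID" />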

SSO Authentication
Single sign-on (SSO) authentication, though not required, provides a seamless experience for the end user.
Without SSO, the user must reauthenticate when the user session is moved to another node.

Shared Storage
Users' session data is persisted in shared storage in the event of failover or server quiesce. The shared storage
allows stateful application data to be moved between nodes. Pega supports a shared storage system that can be
a shared disk drive, a Network File System (NFS), or a database. All three options require read/write access for
Pega to write data. By default, Pega uses database persistence in an HA configuration. If an organization decides
on a different shared storage system, it must ensure that the shared storage integrates with Pega Platform. It is
essential to configure shared storage to support quiesce and crash recovery.

Split schema
The database tier must have a failover solution that meets production service-level requirements. The Pega
Platform database has a split schema. With split schema, the Pega Database Repository is defined in two
separate schemas: Rules and Data. Rules includes the rule base and system objects, and Data includes data and
work objects. Both can be configured during installation and upgrade.
When users save a change to a process flow, they are saving a record to the Rules schema. The Data schema
stores run-time information such as process state, case data, assignments, and audit history. The split schema
design is used to support solutions that need to be highly available.
Split schema can also be used to separate customer transactional data and rules from an operational
perspective. For example, data typically changes more often than rules, so it can be backed up more frequently.
Rule upgrades and rollbacks can be managed independently from data.
With split schema, rolling restarts can be performed to support upgrades, reducing server downtime. In this
scenario, the rules schema is copied and upgraded once. Each node in the cluster is then quiesced, redirected to
the updated rules schema, and restarted one at a time.
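For illustration, a split-schema installation is reflected in settings similar to the following prconfig.xml sketch;
the JNDI name and the PEGARULES/PEGADATA schema names are assumptions, and in practice these values are
established during installation or upgrade:

<!-- prconfig.xml sketch of a split-schema configuration; all values are
     illustrative. Both logical databases can point at the same data source
     while resolving to different schemas. -->
<env name="database/databases/PegaRULES/dataSource" value="java:comp/env/jdbc/PegaRULES" />
<env name="database/databases/PegaRULES/defaultSchema" value="PEGARULES" />
<env name="database/databases/PegaDATA/dataSource" value="java:comp/env/jdbc/PegaRULES" />
<env name="database/databases/PegaDATA/defaultSchema" value="PEGADATA" />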
For more information on high availability configuration, see the Deploying a highly available system article on
the Pega Community.
KNOWLEDGE CHECK

Why is using single sign-on in a high-availability architecture recommended?


Without SSO, the user must reauthenticate when the user session is moved to another node.
Cluster topologies

Horizontal and vertical cluster topologies


Pega Platform is most often deployed as a traditional Java EE application, running within a JVM inside an
application server.
Pega engines can be scaled vertically or horizontally and are typically deployed in conjunction with a load
balancer and other infrastructure resources such as proxy and HTTP servers.
Note: Load balancers must support session affinity for users who are leveraging the Pega UI. Service clients can
invoke Pega using either stateless or stateful sessions.
Horizontal scaling means that multiple application servers are deployed on separate physical or virtual
machines.
Vertical scaling means that multiple Pega Platform servers are deployed on the same physical or virtual machine
by running them on different port numbers. Pega Platform natively supports a combination setup that uses both
horizontal and vertical clusters.
A cluster may have heterogeneous servers in terms of hardware and operating system. For instance, some
servers can use Linux, and others can use Windows. Usually, the only restriction is that all servers in a cluster
must run the same Pega Platform version. Pega engines can be deployed across heterogeneous platforms.
For example, a collection of WebSphere nodes can be used to support a human user community, and a JBoss
node can be deployed to handle low-volume service requests made by other applications. The Pega engine can
also be deployed on zSeries mainframes. This flexible deployment model allows Pega customers to build a
process once and deploy it seamlessly across multiple platforms and channels without redesigning or rebuilding
the process each time.

In-memory cache management options for multi-node cluster topologies


Pega Platform supports two in-memory cache management options for multi-node cluster topologies: Hazelcast
and Apache Ignite.

Option      Embedded    Client-server
Hazelcast   Default     Not supported
Ignite      Supported   Supported
Hazelcast
Hazelcast is embedded in Pega Platform and is the default in-memory cache management option, or in-memory
data grid, used by Pega in multinode cluster topologies.

Hazelcast is an open source in-memory data grid based on Java. In a Hazelcast grid, data is evenly distributed
among the nodes of a cluster, allowing for horizontal scaling of processing and available storage. Hazelcast can
only run embedded in every node and does not support a client-server topology.
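Pega manages its Hazelcast grid internally, so you never write this code in an application; the following
stand-alone Java sketch only illustrates what an embedded data grid member looks like (the map name and
values are arbitrary):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class EmbeddedGridSketch {
    public static void main(String[] args) {
        // Starting an instance makes this JVM a full member of the grid;
        // every node that does the same joins the cluster and holds a
        // partition of the distributed data.
        HazelcastInstance member = Hazelcast.newHazelcastInstance();

        // Entries in a distributed map are spread evenly across members,
        // which is what enables horizontal scaling of storage.
        Map<String, String> cache = member.getMap("sample-cache");
        cache.put("node-role", "embedded-member");

        member.shutdown();
    }
}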

Apache Ignite
Pega can use Apache Ignite instead of Hazelcast for in-memory cache management.

Apache Ignite is an in-memory cache management platform that can provide greater speed and scalability in
large multinode clusters. Apache Ignite supports high-performance transactions, real-time streaming, and fast
analytics in a single, comprehensive data access and processing layer.
Unlike Hazelcast, Apache Ignite supports client-server mode. Client-server mode provides greater cluster stability
in large clusters and supports the ability for servers and clients to be separately scaled up. Use client-server
mode for large production environments that consist of more than five cluster nodes or if you experience cluster
instability in clusters that contain fewer nodes. The number of nodes at which cluster instability occurs depends
on your environment, so the decision to switch to client-server mode is made case by case.
Client-server mode is a clustering topology that separates Pega Platform processes from cluster communication
and distributed features. Client-server clustering has separate resources and uses a different JVM from Pega
Platform. The client nodes are Pega Platform nodes that perform application jobs and call the Apache Ignite
client to facilitate communication between Pega Platform and the Apache Ignite servers. The servers are
stand-alone Apache Ignite servers that provide base clustering capabilities, including communication between
the nodes and distributed features. At least three Apache Ignite servers are required for one cluster.
The client-server topology adds value to the business by providing the following advantages:
l The cluster member life cycle is independent of the application life cycle since nodes are deployed as
separate server instances.
l Cluster performance is more predictable and reliable because cluster components are isolated and do not
compete with the application for CPU, memory, and I/O resources.
l Identifying the cause of any unexpected behavior is easier because cluster service activity is isolated on its
own server.
l A client-server topology provides more flexibility since clients and servers can be scaled independently.
For more information about enabling client-server mode, see Pega Platform Deployment Guides.
KNOWLEDGE CHECK

When would you use the client-server cluster topology?


You use the client-server cluster topology for large production environments that consist of more than five
cluster nodes or if you experience cluster instability.
Planned and unplanned outages
Applications become unavailable to users for two reasons: system maintenance and system crashes. In either
situation, no loss of work should occur and work can continue to be processed.

Planned outage
In a planned outage, you know when application changes are taking place. For example, if you need to take a
node out of service to increase the heap size on the JVM, you can take that node out of service and move users
to another node without users noticing any difference.

Quiesce
The process of quiescing provides the ability to take a Pega Platform server out of service for maintenance or
other activities.
To support quiescing, the node is first taken out of load balancer rotation. The Pega application passivates, or
stores, the user session, and then activates the session on another node. Passivation works at the page, thread,
and requestor level. The inverse of passivation is activation. Activation brings the persisted data back into
memory on another node.
You can quiesce a node from:
l The High Availability landing pages in Dev Studio (if you are not using multiple clusters)
l The System Management Application (SMA)
l The Autonomic Event Services (AES) application (recommended for use with multiple clusters)
l REST API services (starting in v7.4)
l A custom Pega Platform management console by incorporating cluster management MBeans
When quiesced, the server looks for the accelerated passivation setting. By default, Pega Platform sets the
passivation timeout to five seconds. After five seconds, it passivates all existing user sessions. When users send
another request, their user sessions activate on another Pega Platform server without loss of information.
The five-second passivation timeout might be too aggressive in some applications. System administrators can
increase the timeout value to reduce load. The timeout value should be large enough that a typical user can
submit a request.
Once all existing users are moved from the server, the server can be upgraded. Once the process is complete,
the server is enabled in the load balancer and quiesce is canceled.

High availability roles


Pega Platform provides two roles (PegaRULES:HighAvailabilityQuiesceInvestigator and
PegaRULES:HighAvailabilityAdministrator) that you can add to access groups for administrators who manage
highly available applications.
The High Availability Quiesce Investigator role lets administrators perform diagnostics or debug issues on a
quiesced system. When quiesced, the system reroutes all users without this role.
The High Availability Administrator role gives administrators access to the High Availability landing pages. These
users can also investigate issues on a quiesced system.

Out-of-place upgrade
Pega Platform provides the ability to perform out-of-place upgrades with little or no downtime. An out-of-place,
or parallel, upgrade involves creating a new schema, migrating rules from the old schema to the new schema,
and upgrading the new schema to a new Pega release. The data instances in the existing data schema are also
updated. Once the updates are complete, the database connections are modified to point to the new rules
schema, and the nodes are quiesced and restarted one at a time in a rolling restart.

In-place upgrade
Pega Platform also provides the ability to perform in-place upgrades, which may involve significant downtime
because existing applications need to be stopped. Afterward, pre-upgrade scripts or processes may need to be
run. Before the new version of the Pega rulebase is imported, the database schema is updated manually or
automatically by using the Installation and Upgrade Assistant (IUA). EAR or WAR files, if used, are undeployed
and replaced with the new EAR and WAR files, and the new archives are loaded. Finally, additional
configuration changes may be made using scripts or the Upgrade Wizard.
Unplanned outage
With Pega Platform configured for high availability, the application can recover from both browser and node
crashes. Pega Platform uses dynamic container persistence for relevant work data. The dynamic container
maintains UI and work states, but not the entire clipboard.

Node crash
Pega saves the structure of UI and relevant work metadata on shared storage devices for specific events. When a
specific value is selected on a UI element, the form data is stored in the shared storage as a requestor clipboard
page. When the load balancer detects a node crash, it redirects traffic to another node in the pool of active
servers. The new node that processes the request detects a crash and uses the UI metadata to reconstruct the UI
from the shared storage.
On redirection to a new node, the user must reauthenticate, so a best practice is to use Single Sign-on to avoid
user interruption. Since the user’s clipboard is not preserved from the crash, data that has been entered but not
committed on assign, perform, and confirm harnesses is lost.

Browser crash
When the browser terminates or crashes, users connect to the correct server based on session affinity. The state
of the user session is recovered without loss since the clipboard preserves both metadata for the UI and any
data entered on screens and submitted.
Crash recovery matrix

Event                     Browser crash                         Node crash
UI is redrawn             Yes                                   Yes
User must reauthenticate  No, if redirected to the same node;   No, with single sign-on;
                          yes, if the authentication cookie     yes, without single sign-on
                          was lost
Data entry loss           No, if redirected to the same node;   Data not committed is lost
                          data not committed is lost if the
                          authentication cookie was lost
KNOWLEDGE CHECK

When would you quiesce a node?


When taking the node out of service for maintenance or other activities.
Hardware sizing estimation
Pega offers the Hardware sizing estimation service to help organizations with hardware resource planning.
The sizing estimate can be applied to on-premise, Pega Cloud, or customer cloud implementations.
Using either known or predicted data about the application's usage, Pega applies complex modeling techniques
to estimate hardware sizing for the application server disk, CPU, JVM, and database requirements. The hardware
sizing estimation team constantly reviews, updates, and enhances the model using global production field
information and feedback, in-house metrics, and periodic performance benchmarks to provide the best estimate
possible.
Sizing estimates can be applied to any environment at any point of the project, from sales through enterprise
planning and production. The hardware sizing estimate can be performed again after your application goes into
production. Events such as adding new case types or adding more concurrent users require a new hardware
estimate to ensure your application can handle the additional load.
The hardware sizing estimation service uses a questionnaire-based process to gather information. To request an
estimate, send an email to [email protected].
KNOWLEDGE CHECK

Which environments can the hardware sizing estimate be requested for?


The hardware sizing estimation can be applied to any environment at any point in a project.
Setting up the Pega Platform quiz
Question 1
What are the three basic provider models for deploying Pega Platform on the cloud? (Choose Three)

1. Pega Cloud (correct): Pega Cloud is Pega's managed cloud platform service.
2. Customer Managed Cloud (correct): A customer-owned and controlled cloud environment that runs within
private clouds or on Infrastructure-as-a-Service.
3. Partner Managed Cloud (correct): Owned and controlled by a business partner. Delivers the Pega Platform as
a custom hosting solution or service.
4. Cloud Foundry: Cloud Foundry is a topology manager; it still requires a cloud provider.

Question 2
An environment designed for high availability consists of which two of the following components? (Choose Two)

1. Load balancer (correct): The load balancer monitors the health of the nodes.
2. Shared storage (correct): The shared storage allows stateful application data to be shared across nodes.
3. Cloud Foundry: Cloud Foundry is not part of a high-availability architecture.
4. Database replication: Database replication is not used in a high-availability architecture.
Question 3
Which two of the following statements are correct with regard to server crashes? (Choose Two)

1. The UI is redrawn (correct): The UI state is maintained.
2. The user must always reauthenticate: Reauthentication is needed only if single sign-on is not used.
3. Data not committed is lost (correct): Data that was not committed is lost.
4. The entire clipboard is recovered: Pega Platform uses a dynamic container that maintains UI and work
states, but not the entire clipboard.

Question 4
Which two of the following statements are correct with regard to the hardware sizing estimation service offered
by Pega? (Choose Two)

1. There is no need to make use of this service if a cloud is used: The sizing estimate can be applied to any
deployment type.
2. Sizing estimates can be applied at any point of the project (correct): Sizing estimates can be applied at any
point of the project.
3. Events such as adding new case types require a new hardware estimate (correct): Changes to the application
characteristics require a new hardware estimate.
4. The service is aimed at production environments only: Sizing estimates can be applied to any environment.
Leveraging Pega applications
Introduction to leveraging Pega applications
Pega's customer engagement and industry applications can shorten the delivery time frame of your business
application. By determining the differences, or gaps, between what the Pega application provides and your
organization's requirements, you can deliver a minimum set of functionality to begin providing value in days and
weeks, not months and years.
By the end of this lesson, you should be able to:
l Explain the benefits of using Pega's application offerings
l Describe how the gap analysis approach impacts your application design tasks
Benefits of leveraging a Pega application
Building software applications that return an investment in the form of cost savings, automation, or new
business can be a risky, time consuming, and expensive endeavor for an organization. Organizations want to
minimize as much risk, cost, and time investment as possible.
To achieve those objectives, deliver an application with the minimum set of functionality that is lovable to the
business. The minimum lovable product (MLP) provides an application with the minimum amount of
functionality that your users love, and not just live with. Over time, you iterate and improve that product or
application as business needs change and evolve.
By starting with a Pega application, you are far closer to the MLP than if you start your application design from
the beginning.

Instead of starting with a lengthy requirements gathering process, you can demonstrate the functionality that
the Pega application provides for you and compare that functionality to your business requirements. This
process is called performing a gap analysis.
Once you identify these gaps, you can design for the minimum amount of functionality needed to deliver the
MLP.
KNOWLEDGE CHECK

What is the primary benefit of using a Pega application?


You can more rapidly deliver business value (an MLP) to your end users when starting with a Pega application
compared with creating an application design from scratch. Starting with a Pega application allows business
users to comment on existing application features. Once the business identifies the differences (gaps), you only
have to customize the application to fill those gaps.
Pega's application offerings
Pega offers two categories of applications:
l Customer engagement applications
l Industry applications
Customer engagement applications include Pega Customer Service, Pega Marketing, and Pega Sales
Automation. The Customer Decision Hub centralizes all customer engagement activities and provides intelligent
customer interactions, regardless of channel or application.

The industry applications provide solutions for:


l Communications & Media
l Energy & Utilities
l Financial Services
l Government
l Healthcare
l Insurance
l Life Sciences
l Manufacturing & High Technology
In some cases, the customer engagement applications intersect with an industry solution. For example, Customer
Service for Financial Services and Pega Marketing for Financial Services are both customer engagement
applications with a focus on the Financial Services industry.
Tip: For business context of what each customer engagement and industry application provides, see the
Products page on pega.com.
KNOWLEDGE CHECK

Why do you need to know about Pega's customer engagement and industry application offerings?
Pega's customer engagement and industry applications are key to rapidly delivering a solution that provides
immediate business value. You need to know at a high level what each application provides and where to learn
more about each application.
How to customize a Pega application
Even though each Pega application has a unique set of capabilities to meet specific industry needs, the process
for customizing the application is the same regardless of the application. The following process describes how to
customize and extend a Pega application.
l Acquaint yourself with application features
l Create your application implementation layer
l Perform the solution implementation gap analysis
l Define the Minimum Lovable Product (MLP)
l Record desired customizations using Agile Workbench
l Customize the application
l Evolve the application

Acquaint yourself with the Pega application features


To effectively demonstrate the application to your business users, you need to know the Pega application
features and how those features relate to and solve the business problem at hand. Pega offers several resources
to help you to familiarize yourself with the application. Use the Pega Products & Applications menu on the PDN
to navigate to the appropriate application landing page.
Caution: If you are unaware of the features already provided to you by the Pega application, you will spend time
and resources building features that already exist. Use the application overview in the Pega application itself to
review application features.
Create the application implementation layer
The application implementation layer represents the container for your application customizations. As business
users add case types with Pega Express, the application implementation layer houses the rules created by those
users and any customizations you make to support them. The following diagram illustrates the generalized
structure of a Pega application and how your application implementation layer fits into that structure.

Depending on the installation, a Pega application can have multiple build-on products to provide the full suite of
functionality. For example, the following diagram illustrates the construction of the Pega Customer Service for
Healthcare application, including Pega Sales Automation and Pega Marketing applications.
Note: Use the New Application wizard to create the application implementation layer.

Perform the solution implementation gap analysis


The solution implementation gap analysis process allows you to demonstrate the application while discussing
the organization's needs. This process guides the customer into using capabilities that the application already
provides instead of building new features. The goal is to create a first release that allows the organization to
start seeing business value quickly.
Note: The solution implementation gap analysis differs from the product gap analysis. The product gap
analysis is performed early in the sales cycle to determine if the Pega application is a right fit for the
organization, or if a custom application is required.

Define the Minimum Lovable Product


The Minimum Lovable Product (MLP) is the minimum functionality the business needs to get business value
from the customized application. The MLP is also known as the first production release. You can address
features that are prioritized after the MLP in subsequent iterations of the application. These subsequent
iterations are known as extended production releases.

Record customizations using Agile Workbench


As you demonstrate the application with your business users, you can record the desired customizations with
Agile Workbench. Agile Workbench integrates with Agile Studio to record feedback, bugs, and user stories to
allow you and your team to customize the application accordingly. Agile Workbench also integrates with other
project management tools such as JIRA.
Customize the application
After you have prioritized the backlog captured in Agile Workbench according to the MLP definition, you can start
customizing the application. Typically, the MLP includes creating connections to back-end systems, configuring
security and reports, and creating application-specific rules to meet immediate business requirements. For
example, configuring coaching tips is a configuration step unique to Pega Customer Service.

Evolve the application


During the initial application demonstration with the business users, you captured the user's customization
requirements and requests. Not all of those customizations were delivered with the first production release. As
the business uses the application in production, users have new requirements and enhancements to improve
the application. Continue to use Agile Workbench to capture feedback in the application directly. This way, you
can evolve and improve your application over time and according to business needs.
KNOWLEDGE CHECK

What is the purpose of the Minimum Lovable Product (MLP)?


The purpose of the MLP is to deliver value to the business as soon as possible, while allowing for subsequent
iterations of the application to improve and deepen application functionality.
Leveraging Pega applications quiz
Question 1
The primary benefit of using a Pega application is that it allows you to _______________________.

1. get to the minimum lovable product (MLP) faster (correct): Pega's applications allow you to get to the
minimum lovable product faster, delivering business value to your organization quickly.
2. compete with other industry-specific point solutions: While Pega's applications compete with industry point
solutions, this is not the primary benefit of leveraging a Pega application.
3. integrate with systems of record for lines of business more easily: While the Pega applications provide
connection points to systems of record through data pages, this is not the primary benefit of leveraging a
Pega application.
4. utilize Pega's artificial intelligence (AI) applications more effectively: While Pega's AI capabilities are core to
the customer engagement applications, this is not the primary benefit of leveraging a Pega application.

Question 2
Pega's applications include ______________ and _______________. (Choose Two)

1. customer engagement applications (correct): Customer engagement applications are a part of Pega's
application offerings.
2. marketing applications: Pega offers the Pega Marketing application as part of the overall customer
engagement application suite.
3. industry applications (correct): Industry applications are part of Pega's application offerings.
4. artificial intelligence applications: Artificial intelligence is part of multiple application offerings.

Question 3
Performing a ____________________ allows you to determine the differences between the features that the Pega
application provides and your unique business requirements to support delivery of the minimum lovable
product (MLP).

1. solution implementation gap analysis (correct): Performing a solution implementation gap analysis and
capturing the differences in Agile Workbench allows you to quickly gather unique business requirements.
2. DCO session: DCO consists of a set of tools and behaviors that you can use to facilitate a more
cross-functional, collaborative application development experience between business architects and system
architects.
3. requirements planning session: Requirements planning can take several weeks, postponing the ability to
start providing business value.
4. sprint review: A sprint review is part of the agile delivery methodology.
Designing the case structure
Introduction to designing the case structure
The case structure is one of the most important considerations when designing an application. Designing the
case structure encompasses identifying the individual business processes and deciding how you represent each
process (cases, subcases, subflows). A poorly designed case structure may force refactoring when present or
future requirements become challenging to implement.
After this lesson, you should be able to:
l Identify cases given a set of requirements
l Compare parallel processing options
l Explain advantages and disadvantages of subcase and subflow processing
l Determine when to use tickets
How to identify cases
A case type represents a group of functionality that facilitates both the design-time configuration and run-time
processing of cases. Pega Platform provides many features that support case processing, such as the case life
cycle design (including stages and steps), reporting, and security.
The LSA decides which Pega Platform features to use—flows, subflows, data objects, or other components—when
designing a solution.
In some applications, a case type is easily identified as it represents a single straightforward business process.
However, in other situations, producing a robust design with longevity may require careful consideration.
In Pega Platform, other dependencies on the design include reporting, security, locking, extensibility, and
specialization. Consider all requirements before creating the design since this forms the foundation for the
application.
The case design begins by identifying the processes in the application. Next, you determine if subcases are
warranted, including any additional cases that may benefit from case processing. Identify specialization needs
for these cases. Consider future extensibility requirements before implementing the case designs.
For example, a Purchase Request process involves identifying a list of items and the quantity of each item to be
purchased. The list of items is reviewed and then forwarded for approval by one or more approvers. Then, the
list of items is ordered. When the items are received, the process is complete.
You can use several case designs to represent this business process:
l A single Purchase Request Case for the entire process, with subprocesses for each line item
l A Purchase Request case to gather the initial request and spawn a single Purchase Order Case
l A Purchase Request case to gather the initial request and spawn Purchase Order subcases for each line item
All provided solutions may be initially viable, but the optimal solution takes into account current and possible
future requirements to minimize maintenance.

Guidelines for identifying cases


Basic questions to consider when identifying cases include:
l Does the case represent the item(s) that require processing?
l Is there a benefit to splitting the case (process) into multiple cases or subcases, or are parallel flows
sufficient?
l Is a case or subcase really required?
l If a subcase is created, a general rule is that the processing of the main case depends on the completion of
the subcase(s). Does this hold true?
l Are there other present or future requirements, such as reporting and security, that may be more easily
implemented by adjusting the case design?
Carefully consider all of the questions. Some situations may not require additional case(s); creating them could
instead result in an inefficient use of resources and an overly complex solution. Because creating cases involves
additional processing, always ensure you have a good reason for creating additional cases.
The previous Purchase Request example illustrates these points:
l If there were security requirements based on the approval of individual line item types, you can implement
the case design solution with subcases for individual line items.
l If there is no specific requirement for line item processing, the simple solution involving subprocesses for
each line item is suitable.
l Adding a Purchase Order case may be unnecessary unless there was a requirement specifically stating the
need for it (for example, Reporting).
Case identification may be straightforward, or there may be situations where additional cases or processes could
be advantageous. An example is data that supports the case processing, such as customer or account data. In
situations where the infrastructure provided by case processing—such as an approval process, security, or
auditing—is advantageous for managing data, providing a case may be more suitable than updating the data
directly.
Case processing
Requestors execute the majority of case processing in Pega Platform. Each requestor executes within its own Java
thread. The separate Java threads allow multiple requestors to perform actions in parallel. The most common
requestor types are initiated by Services, Agents, or different users logging on to the system to process cases.
The case design determines how efficiently the case is processed. An efficient case design accounts for steps or
processes that can be performed in parallel by separate requestors. One example is leveraging subprocesses to
gain approval from different users. Each approval process is performed by separate requestors. A more complex
example is queuing tasks to a standard agent in a multinode system. There are limitations to this type of parallel
processing. Limitations differ with the case configuration and the type of processing implemented.
The two major types of parallel processing are same-case processing and subcase processing.

Same-case processing
Same-case processing occurs when multiple assignments associated with the same case are created. Each
assignment is initiated through a child or subprocess that is different from the parent process. Multiple
assignments for a single case are initiated through Split Join, Split For Each, or Spinoff subprocesses. The Split Join
and Split For Each subprocesses pause and then return control to the main flow, depending on the return
conditions specified in the subprocess shape. The Spinoff subprocess is not meant to pause the main flow as it is
an unmanaged process.
All of these subprocess options may result in multiple assignments being created, leading to different requestors
processing the case (assuming they are assigned to different users). One limiting factor is locking. The default
case locking mechanism prevents users from processing (locking) the case at the same time. This has been
alleviated in Pega Platform with the introduction of optimistic locking. Optimistic locking allows multiple users to
access the case at the same time, locking the case only momentarily when an assignment is completed. The
drawback is that once the first user has submitted changes, subsequent users are prompted to refresh their
cases prior to submitting their changes. The probability of two requestors accessing the case at the same time is
low, but the designer should be aware of this possibility and the consequences, especially in cases where the
requestor is a nonuser.

Subcase processing
The primary difference between subcase and same-case processing is that one or more subcases are involved.
The processes for each subcase may create one or more assignments for each subcase. Locking can be a limiting
factor when processing these assignments. If the default locking configuration is specified for all subcases, then
all subcases including the parent are locked while an assignment in any subcase is performed. This can be
alleviated by selecting the Do Not Lock Parent configuration in the subcases. Locking is a significant difference
between subflow and subcase parallelism.
Tip: With the correct locking configuration, simultaneous processing may take place without interruption for
subcases, whereas a possibility exists for interruption when subflows are involved.
This behavior must be accounted for, especially when automated tasks such as agents are involved. A locked
parent case may prevent the agent from completing its task, and error handling must be incorporated, allowing
the agent to retry the task later. A design that leverages subcases with independent locking, so that the agent
operates on the subcase, minimizes the possibility of lock contention. In general, lock subcases independently of
the parent case unless there is a reason for also locking the parent case.
When waiting for subcases to complete processing, a wait step is used to pause the parent case. If subcases
of the same type are involved, you configure the wait shape to allow the main case to proceed after all
subcases are resolved.

If different types of subcases are involved, a ticket is used in conjunction with the wait shape to allow the parent
case to proceed only after all subcases, regardless of the type, are completed. The AllCoveredResolved ticket is
used and is triggered when all the covered subcases are resolved. You configure the ticket in the same flow as
the wait shape, and you place the ticket in the flow at the location at which processing should continue.
Configure the wait shape as a timer with a duration longer than the time to issue the ticket.

Subcase and subflow comparison


You have many factors to consider when deciding on a suitable case design. The following table summarizes
some advantages of leveraging a design incorporating multiple cases or subcases.

Factor                 Consideration
Security               Class-based security offers more options for security refinement using multiple cases.
                       Data security is increased because subcases contain only data pertinent to their case.
Reporting              Reporting on smaller data sets may be easier and offers potential performance gains
                       (this may be a disadvantage if a join is required).
Persistence            You have the ability to persist case data separately.
Locking                You have the ability to lock cases separately and process without interruption
                       (significant in cases involving automated processing).
Specialization         Cases can be extended or specialized with a class or leverage the Case Specialization
                       feature.
Dependency management  Subcase processing can be controlled through the state of parent or sibling cases.
Performance            There is pure parallel processing since separate cases may be accessed using separate
                       requestors.
Ad hoc processing      You can leverage the ad hoc case processing feature.

Advantages of a single case design involving only subflows are listed in the following table.

Factor           Consideration
Data             Data is readily available; no replication of data is necessary.
Reporting        All data is accessible for reports.
Attachments      All attachments are accessible (coding is required for subcases).
Policy Override  Implementing this feature is easy. Managing "Suspend work" when multiple cases are
                 involved is more complex.
Case design - example one
Consider the following requirements by an automobile manufacturing company automating an Illness and Injury
Reporting application.
Like many corporations, the automobile manufacturing company must log work-related fatalities, injuries, and
illnesses. For example, if an employee contracts tuberculosis at work, then the illness must be logged in the
company's safety records.
Certain extreme events, such as death, must be reported to the regulatory agency immediately. These reports
are called submissions. Submission processes and requirements differ by country. Some countries have
additional rules, based on state or province. Typically, these rules are more stringent forms of the national
guidelines. There are also some guidelines that are specific to injury type. A small subset of injuries requires
injury-specific fields to be filled in. For example, with hearing loss, the actual assessment of the loss, measured
in decibels, must be recorded.
The Illness and Injury Reporting application must support two processes.
First, any injury or illness must be recorded. This is a guided and dynamic data-entry procedure that is specific
to the regulations of the country in which the plant is located. The culmination of these entries is an electronic
logbook.
Second, the application must generate a summary of submission records for every plant per year. Each record
summary must be verified, certified, and posted. Notably, severe events must be reported to the regulatory body
of the corresponding country, and the status of this submission must be tracked. The reports of these record
types are separate—there is never a need for a list of records that mixes Injury Records, Annual Summaries,
and Submissions. However, because summaries are a culmination of injury records, and submissions are
spawned by injury records, it is reasonable to assume that injury record information is included in a summary
or submission report.
The following image illustrates the requirements:

Design a case structure to support this application.

Solution
You probably identified three independent case types with no subcase relationships:
l Illness Injury is for logging each illness or injury event
l Annual Summary is to track the end-of-year report for each plant
l Submission is for those events that must be reported to the regulatory agency.

Discussion
An Annual Summary appears to be only a report, but you create a case because the requirements explicitly state
that the status of these reports must be tracked, indicating a dedicated process. Furthermore, these reports
must contain static information. While the original content may be derived from a report, this content must be
fixed and persisted.
Create a Submission case since the requirements stated that the Submission process, and status of each
submission, must be tracked. Submission tracking is performed independently of the original injury record, and
so is best kept as a separate case.
You might consider making Submission a subclass of Illness Injury, but a Submission is not a type of illness or injury. Submission is a case that is spawned by an Illness Injury case. Also, Submission is not a subcase of Illness Injury, since the Illness Injury case does not depend on the Submission processing being completed.
Case design - example two
Consider the following requirements for an automobile manufacturing company automating a warranty claim
application. Two primary processes are supported by the application: a Warranty Claim process and a Recall
process.
For a warranty claim, a customer brings a car to the dealership because something is not working correctly. The
dealer assesses the problem with the car, enters the claim details into the application, and then receives
verification from the application that the repair is covered under warranty. The dealer is subsequently
compensated for the work by the car manufacturer.
Every warranty claim includes one or more claim tasks. Each claim task represents the work that must be
completed to resolve the claim. Most warranty claims are simple and have a single Pay Dealer claim task. Some
warranty claims require more complex processing if disputes are involved.
Recalls are separate from warranty claims. Recalls cover the entire process, from initially notifying customers when a recall is necessary to compensating the dealer for the work completed to support the recall. One particular type of claim task is a "Part Return". This claim task stands apart from the others in that it requires an additional set of business rules, and its process is different.
Design a case structure to support this application.

Solution
At least two cases are possible: Recall and Warranty Claim.
Recall has no dependencies but does have a distinct process. You might represent Recall as a stand-alone case.
You have several design options for the Warranty Claim case.
One option is to create a stand-alone Warranty Claim case with conditional subprocesses spawned for each type
of claim task. This approach is easy to implement, but it limits extensibility and the ability to specialize the claim
tasks.
Another option is to create the Warranty Claim case with a subcase for each claim task. This design option offers the flexibility to create specialized claim tasks such as Parts Return. The Warranty Claim case is the parent, or cover, case of the Claim Task case, since the Warranty Claim depends on all Claim Tasks resolving before the Warranty Claim case can be resolved.
You represent the Parts Return case type as a subclass of the ClaimTask class to indicate that PartsReturn is a
specific type of ClaimTask case. This is an important distinction between subclasses and subcases. The hierarchy
for subcases is established in the Case type rule, similar to the composition relationship between pages in a data
model. A subclass indicates an is-a relationship and is indicated as such in the class structure.

Not enough information is provided in the requirements to determine which solution is more suitable for the Claim Task case design. If the application has many specialization or extensibility requirements, the latter design for the Claim Task is more suitable.
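To make the subclass-versus-subcase distinction concrete, the following is a minimal sketch in plain Java; the class and field names are invented for illustration and are not part of the requirements. A subclass expresses an is-a relationship, while a cover case owning its subcases is closer to composition.

    import java.util.ArrayList;
    import java.util.List;

    class ClaimTask {
        boolean resolved;
    }

    // PartsReturn IS-A ClaimTask: subclass specialization.
    class PartsReturn extends ClaimTask {
        String returnShippingLabel; // rules unique to part returns would go here
    }

    class WarrantyClaim {
        // The cover case HAS claim tasks: composition.
        List<ClaimTask> claimTasks = new ArrayList<>();

        // The parent cannot resolve until every subcase resolves.
        boolean canResolve() {
            return claimTasks.stream().allMatch(task -> task.resolved);
        }
    }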
Case processing quiz
Question 1
Which statement is true when comparing subcases and subflows?

C | # | Response | Justification
| 1 | Subcases are always preferred over subflows | Subcases are not always required. It depends on the specific requirements. There are situations where subflows may be advantageous.
C | 2 | Subcases offer more security options than subflows | Class-based security offers more options for security refinement using multiple cases. Data security increases because subcases only contain data pertinent to their case.
| 3 | Locking consequences can be ignored with subflows | One limiting factor with subflows is locking. The default case locking mechanism prevents multiple users from processing (locking) the case at the same time.
| 4 | Subcases perform better than subflows | There is no evidence that one approach performs better than the other.

Question 2
Select two situations in which the AllCoveredResolved ticket is required in conjunction with the Wait shape. (Choose Two)

C | # | Response | Justification
C | 1 | When the wait shape is configured as a timer with a duration longer than the duration in which the ticket would be issued. | A wait shape can be used to prevent the parent case from proceeding until all the subcases are completed. The AllCoveredResolved ticket is then used to bypass the wait shape when all the subcases are resolved.
C | 2 | When different types of subcases are involved. | The AllCoveredResolved ticket is triggered when all the covered subcases are resolved, regardless of their type.
| 3 | When the wait shape is configured as a timer with a duration shorter than the duration in which the ticket would be issued. | If the wait shape expires before all the subcases are resolved, the AllCoveredResolved ticket may be ignored.
| 4 | When any of the covered subcases are resolved. | This is a default configuration of the Wait shape and does not require the use of an AllCoveredResolved ticket.

Question 3
Select two advantages of using a single case over subcase designs. (Choose Two)

C | # | Response | Justification
C | 1 | No replication of data is necessary with a single case | All case data is readily available to the single case; therefore no replication of data is necessary.
| 2 | Subflows can be processed without interruption | With subcases, you have the ability to lock cases separately and process them without interruption. (This is significant when cases involve automated processing.)
C | 3 | All attachments are readily accessible | All attachments are directly available to the single case, and no coding is required to access them.
| 4 | Dependency management between subflows can be leveraged | Subcase processing can be controlled through the state of the parent or sibling cases.
Starting with App Studio
Introduction to starting with App Studio
App Studio gives you the ability to collaborate with the business to quickly create new applications and, over time, build a library of assets that can be reused across the enterprise.
After this lesson, you should be able to:
• Explain the benefits of App Studio
• Describe the development roles
• Ensure the successful adoption of App Studio
• Reuse assets created in App Studio

Benefits of App Studio


App Studio is designed for everyone, and it enables you to build more through composition, templates, and reuse. Whether that means leveraging your company's IT assets, driving consistency and reuse through templates, or extending Pega applications, App Studio accelerates your project.
Leverage App Studio for enablement, collaboration, and innovation.
Building an application with App Studio in mind ensures guardrail compliance, transparency, and visibility.
KNOWLEDGE CHECK

Name three ways that App Studio can accelerate your projects.
App Studio lets you leverage your company's IT assets, drive consistency and reuse through templates, or
extend Pega applications.

Development roles
App Studio is a development environment designed for new users with less technical expertise, such as business experts. App Studio includes contextual help and tours for self-learning, enabling business experts to quickly create applications.
Staff members certified in Pega act as coaches for teams of business experts to facilitate application
development.
KNOWLEDGE CHECK

App Studio is designed for what type of users?


App Studio is designed for new Pega users who are not technical experts.

How to ensure adoption of App Studio


Leveraging App Studio in business can bring tremendous advantages, but there are some things to consider to ensure its successful adoption.
First, establish governance and evaluate whether applications are a fit for the program. Once the application is reviewed,
pair the business team with a coach. Teams with little or no App Studio experience rely heavily on the coach,
while other more experienced teams are more self-sufficient. Ensure that there are regular meetings between
the project team and the coach.
Publish a release train schedule and hold retrospectives to adjust what was not working. Create a community for
business users to share their experiences and ask questions. Establish an access management and support
strategy for applications in production.
Develop a maturity model for your organization. Pega recommends a four-level maturity model.

1. Creating and Supporting Application Makers
2. Setting Up Organizational Guardrails
3. Leverage Coaches
4. Creating a Center of Excellence
Use the PDN Pega Express community to share ideas and ask questions.
KNOWLEDGE CHECK

Describe the recommended maturity model.


Pega recommends a four-level maturity model: creating and supporting application makers, setting up organizational guardrails, leveraging coaches, and creating a center of excellence.

How to reuse assets and leverage Relevant Records in App Studio
Ensure that assets are built to be reusable, and publish them for reuse in App Studio. Existing assets can be refactored into reusable assets using the refactoring tools.
Use Relevant Records to encourage the reuse of rules in App Studio and the Case Designer.
As an organization's maturity model is implemented, more and more reusable enterprise assets are available.
Manage the common assets through a center of excellence (COE).
KNOWLEDGE CHECK

How are reusable assets best managed?


They are best managed by the COE.

Relevant records designate records of a case or data type as important or reusable in a child class. Relevant
records for a case type can include references to fields (properties), views (sections), processes (flows), user
actions (flow actions), correspondence rules, paragraphs, harnesses, service level agreements, and when rules
that are explicitly important to your case. For a data type, relevant records designate the most important
inherited fields (properties) for that data type.
The relevant records can include records that are defined directly against the class of the case or data type and
fields inherited from parent classes.
Designating a record as a relevant record controls the majority of the prompting and filtering in the Case
Designer and Data Designer. For example, user actions and processes defined as relevant records show up when
adding a step in the Case Designer.

Fields marked as relevant for a case type define the data model of the case. Processes and user actions marked
as relevant appear in Case Designer prompts to encourage reuse. Views marked as relevant appear as reusable
views.
Fields, views, processes, and user actions are automatically marked as relevant records when you create them
within the Case Designer and Data Designer. You can manually designate relevant records on the Relevant
Records landing page. It is also possible to add a Relevant Record using pxSetRelevantRecord.
KNOWLEDGE CHECK

What impact does designating a record as a relevant record have?


Relevant records control design-time prompting and filtering in several areas of Data Designer and Case
Designer.
Starting with App Studio quiz
Question 1
Which three types of records can be marked as relevant records? (Choose Three)

C | # | Response | Justification
| 1 | Activities | Activities cannot be marked as relevant records.
C | 2 | Properties | Properties can be marked as relevant records.
C | 3 | Processes | Processes can be marked as relevant records.
C | 4 | Harnesses | Harnesses can be marked as relevant records.

Question 2
What two benefits does App Studio provide? (Choose Two)

C | # | Response | Justification
C | 1 | An easy-to-use development environment designed for new users | App Studio allows new users to quickly become productive.
| 2 | Quick creation of enterprise assets | Enterprise assets such as data integration and SSO need to be set up in Designer Studio.
C | 3 | Rapid construction of new application concepts | App Studio allows you to rapidly build out new application concepts.
| 4 | Support for scenario testing | App Studio does not support testing.

Question 3
What are two key initiatives to ensure the successful adoption of App Studio? (Choose Two)
C | # | Response | Justification
C | 1 | Evaluate whether applications are a fit for the App Studio program | Ensure the application is a fit for App Studio.
C | 2 | Establish a center of excellence (COE) for the organization | Establish a COE for program governance.
| 3 | Ensure business users are technically self-sufficient before starting a project | Pair the business team with a coach who has technical expertise.
| 4 | Strive towards a decentralized project validation and approval model | Strive towards a centralized model.

Question 4
Which of the following two options are best practices in App Studio? (Choose Two)

C | # | Response | Justification
C | 1 | Manage reusable assets through a center of excellence (COE) | Managing reusable assets is a COE task.
| 2 | Create reusable IT assets using App Studio | IT assets such as SSO integration are created in Designer Studio.
C | 3 | Share business assets across applications | Create business assets, such as data models and reports, that are sharable across applications.
| 4 | Avoid App Studio for DCO sessions | App Studio increases the effectiveness of DCO.
Designing for specialization
Introduction to designing for specialization
Pega Platform provides various solutions for specializing applications to support ever-changing requirements.
This lesson describes these specialization solutions and the best ways to apply them in your applications.
After this lesson, you should be able to:
• Describe the principles and purposes of component applications
• Discuss the advantages of building application layers using multiple component applications
• Specialize an application by overriding rulesets in the built-on application
• Decide when to use ruleset, class, and circumstance specialization
• Decide when to use pattern inheritance and organization hierarchy specialization
• Analyze and discuss various approaches to specializing an application to support a specific set of requirements
Object Oriented Development in Pega
A consideration of Pega asset design and reuse starts with a brief mention of Object Oriented Development (OOD) principles, how Pega leverages them, and how Pega allows you to leverage them.
According to Robert Martin, Object Oriented Development encompasses three key aspects and five principles.

Aspects of OOD
The following are the three essential aspects of OOD.

Encapsulation
Encapsulation is used to hide the values or state of a structured data object inside a class, preventing
unauthorized parties' direct access to the object.

Inheritance
Inheritance is the ability for one object to take on the states, behaviors, and functionality of another object.

Polymorphism
Polymorphism lets you assign various meanings or usages to an entity according to its context. Accordingly, you
can use a single entity as a general category for different types of actions.

SOLID development principles


SOLID is a mnemonic for the five principles of OOD. According to Martin, OOD should adhere to these principles.
• Single Responsibility
• Open/Closed
• Liskov Substitution
• Interface Segregation
• Dependency Inversion

Single Responsibility
The Single Responsibility principle states that every module should have responsibility over a single part of the
functionality provided by the software, and that responsibility should be encapsulated by the module.
Avoid placing functionality that changes for different reasons within the same module. The UIKit application, which contains a single ruleset, is an example of Single Responsibility.
Open/Closed
The Open/Closed principle states that software entities (such as classes, modules, and functions) should be open
for extension, but closed for modification. An entity can allow its behavior to be extended without modifying its
source code.
The Open/Closed principle is most directly related to extensibility in Pega. If implemented correctly, an object,
call it “A”, that uses another object, call it “B”, need not change when features are added to object “B”. Following
this principle helps avoid maintenance-intensive ripple effects when new code is added to support new
requirements. An example of the Open/Closed Principle in the Pega platform is Pega Healthcare's PegaHC-Data-Party class extending the PegaRULES Data-Party class.
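As a rough plain-Java illustration (the class names below are invented; only the Data-Party example above comes from the product), a consumer written against a base class does not change when that class is extended:

    // Closed for modification: existing consumers rely on this contract.
    class Party {
        String name;

        String addressLine() {
            return name;
        }
    }

    // Open for extension: new behavior lives in a subclass,
    // loosely analogous to PegaHC-Data-Party extending Data-Party.
    class HealthcareParty extends Party {
        String providerId;

        @Override
        String addressLine() {
            return name + " (provider " + providerId + ")";
        }
    }

    // Object "A": this code needs no change when Party ("B") is extended.
    class CorrespondenceSender {
        void send(Party recipient, String body) {
            System.out.println("To: " + recipient.addressLine());
            System.out.println(body);
        }
    }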

Liskov Substitution
The Liskov Substitution principle states that objects that reference other objects by their base class need not be aware of how that class has been extended. An example in the Pega platform is how correspondence and routing work the same regardless of whether the class is Data-Party-Person or Data-Party-Org.

Interface Segregation
The Interface Segregation principle (ISP) states that it is better to define multiple interfaces to an object, each
fulfilling a specific purpose, as opposed to exposing a single large and complex interface, parts of which are of
no interest to a client. ISP is intended to keep a system decoupled, and thus easier to refactor, change, and redeploy. Examples of ISP in the Pega platform include Service Packages and parameterized Data Pages. Data
Propagation would also meet the definition of ISP if a single Data instance is passed as opposed to multiple,
individual scalar Properties.
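A minimal plain-Java sketch of ISP follows; the interface and class names are invented. Each client depends only on the small interface it actually uses, so changes to the other interface cannot break it.

    // Two small, purpose-specific interfaces...
    interface Searchable {
        String findById(String id);
    }

    interface Auditable {
        void recordAccess(String user);
    }

    // ...instead of one large interface that would force every client
    // to depend on methods it never calls.
    class CustomerRepository implements Searchable, Auditable {
        public String findById(String id) {
            return "customer-" + id;
        }

        public void recordAccess(String user) {
            System.out.println(user + " accessed customer data");
        }
    }

    // This client sees only Searchable; changes to Auditable cannot affect it.
    class LookupClient {
        String lookup(Searchable source, String id) {
            return source.findById(id);
        }
    }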

Dependency Inversion
The Dependency Inversion principle refers to a software development technique where objects facilitate the
configuration and construction of objects on which they depend. This is in contrast to an object completely encapsulating its dependencies. The Dependency Inversion principle works hand-in-hand with Liskov
Substitution. An example of Dependency Inversion in the Pega platform is Dynamic Class Referencing (DCR). As
opposed to an object strictly using a value hard-coded within a rule’s Pages & Classes tab to create a page, a Data
Page can be asked to supply the value for the page’s pxObjClass property.
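The following plain-Java sketch loosely mirrors this idea; the names are invented and this is an analogy, not Pega code. The concrete dependency is supplied from outside rather than constructed from a hard-coded class name, just as a Data Page can supply the pxObjClass value.

    import java.util.function.Supplier;

    class ShippingCalculator {
        double rate() {
            return 1.0;
        }
    }

    class ExpressShippingCalculator extends ShippingCalculator {
        @Override
        double rate() {
            return 2.5;
        }
    }

    class OrderProcessor {
        // The dependency is injected, not hard-coded -- loosely analogous
        // to a Data Page supplying pxObjClass at run time.
        private final Supplier<ShippingCalculator> calculatorSource;

        OrderProcessor(Supplier<ShippingCalculator> calculatorSource) {
            this.calculatorSource = calculatorSource;
        }

        double shippingCost(double weight) {
            return weight * calculatorSource.get().rate();
        }
    }

A caller decides which implementation to supply, for example new OrderProcessor(ExpressShippingCalculator::new), without OrderProcessor itself changing.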
Specialization design considerations
When deciding on the application layers to be developed, consider the business requirements for specialization.
Selecting a design that introduces more specialization layers than are required can be complex. This complexity
increases the time, resources, and effort required to produce a Minimum Lovable Product (MLP).

Specialization considerations
Always follow object-oriented principles to ensure rules are extensible. For example, use parameterization and
dynamic class referencing (DCR) to support specialization in the future.
When considering specialization, be aware of the following things:
• A specialization layer need not be specific to one type of application. Instead, a specialization layer can support multiple applications across an enterprise.
• Circumstancing, pattern-inheritance, and data modeling techniques may eliminate the need to define a specialization layer for an application.
• A specialization layer can be composed of multiple built-on applications.

Single implementation application


Most development efforts can achieve the MLP by developing a single production application either directly on
the Pega Platform or by leveraging one or more of Pega's Horizontal or Industry foundation applications.
A single application is the best approach in the following scenarios:
l The enterprise does not span multiple regions where business rules vary dramatically.
l The enterprise is only interested in completing the implementation of a framework developed by a vendor.
The enterprise does not need or want to extend its own application.
l The enterprise has divisions that develop division-unique applications.

Specialization layer with multiple implementations


In special cases, the development effort may require a specialization layer (this could be a framework application) on which one or more production applications (a special type of implementation application) are built.
This diagram shows specialization of an application across different regions in North America. The procedures
and policies specific to a region are layered on top. Every time the system interacts with a user or advances a
case, it selects the policy and procedure that is most specific to the situation at hand. This means that only those
policies and procedures that are specific to French-speaking Quebec, for example, need to be defined in that
layer. For all other regional policies and procedures, the more generic layers beneath are consulted in order.
A specialization layer on which one or more production applications are built makes sense in the following
scenarios:
• The enterprise spans multiple regions where business rules vary dramatically and most of the core functionality will be reused across the enterprise.
• The enterprise wants to target distinct customer types while leveraging a core application. Business rules vary dramatically between customer types.

Note: A framework layer IsA specialization layer. "Framework" has a specific meaning: an application that spans every case type and represents an entire layer. A "Framework" is a special type of specialization layer.
Note: A Production application IsAn "Implementation" application; that is, a Production application is a more specific type of implementation application. A Production application is what end users use. It is what you send down a CI/CD pipeline all the way to the end.
How to choose the application structure
Any application can be built on other applications and leverage reusable components. An application can be
specialized in multiple ways such as using class inheritance, ruleset overriding, and circumstancing. When
specializing an application using class inheritance you can use pattern inheritance for specialization within an
application and direct inheritance for specialization across applications. In this lesson we will explore how the
New Application Wizard can be used to define the application structure and how the New Application Wizard
uses direct inheritance for specialization across applications.

Using the New Application Wizard


When creating an application with the New application wizard, you have the option to specify the application
structure in the advanced settings. There, the application structure can be either implementation or framework; the default is implementation. In both cases, a single application is
created, but with different purposes.
Selecting implementation creates an application for a specific business purpose. Users log in to the application
to achieve numerous business outcomes. An implementation can be built on other implementations, reusable
components, or frameworks.

A framework layer typically defines a common work-processing foundation for a set of case types and data
objects. The framework contains the default configuration of the application and its associated classes that is
then specialized by the implementations. The classes of the implementation layer directly inherit from the
classes in the Framework layer. Frameworks are not designed to run on their own; there should always be at
least one implementation. Implementations extend the elements of the framework to create a composite
application that targets a specific group such as a region, customer type, product, organization or division.
For example, the MyCo enterprise makes auto loans, and has an auto loan framework that is composed of the
assets needed for MyCo's standard auto loan process. Each division of MyCo extends that basic auto loan
application to meet their specific divisional needs.
Only create a specialization layer such as a framework if at the start of a project the business requirements
express a need to leverage such a layer throughout the enterprise.
Important: When using the New Application Wizard, do not use the "Framework" option purely for the sake of future-proofing. Maintaining a framework comes at a cost that cannot be justified without clear evidence of its need.
KNOWLEDGE CHECK

When would you create a framework?


When it is clear at the start of a project that an application will be reused across the organization by other
applications that will specialize it for different reasons such as region, customer type, product, or division.
Specialization and component applications
When considering approaches to application specialization, think of applications as components rather than as
frameworks. Components are part of a whole, like the wheels or the engine of a car. In contrast, a framework
describes the essential, static structure, like the chassis of a car.
Note: A framework can also be referred to as a foundation, model, template, or blueprint.
Pega component applications follow the object-oriented programming (OOP) open/closed principle. This
principle states that an object does not need to be changed to support its use by other objects. In addition,
objects need not change if additional features are added to the used object. This avoids maintenance-intensive
ripple effects when new code is added to support new requirements. Modeling a business process according to
the Template Design Pattern used by Pega Platform follows the open/closed principle. You define a foundation
algorithm in a base class. The derived classes implement code at allowed extension points.
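A minimal plain-Java sketch of the Template Design Pattern follows; the process names are invented for illustration. The base class fixes the algorithm, which is closed for modification, and exposes extension points that derived classes fill in.

    abstract class ClaimProcess {
        // The foundation algorithm is defined once in the base class
        // and cannot be reordered by subclasses.
        final void run() {
            validate();
            assess();
            pay();
        }

        // Default behavior that subclasses may keep or override.
        void validate() {
            System.out.println("standard validation");
        }

        // A required extension point.
        abstract void assess();

        void pay() {
            System.out.println("pay dealer");
        }
    }

    class PartsReturnProcess extends ClaimProcess {
        @Override
        void assess() {
            System.out.println("inspect the returned part");
        }

        @Override
        void pay() {
            System.out.println("credit dealer after the part is received");
        }
    }

Calling new PartsReturnProcess().run() always executes validate, assess, and pay in that order; only the extension points vary.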
When you use applications as components, you can take a modular approach to application configuration. You
can create application layers by building on multiple component applications.
Note: Do not confuse a component application with a component that is a collection of rulesets used to create a
small feature that can be added to any Pega Platform application.

Applications as components
You can design Pega applications specifically for use as components. By definition, a component is recursive.
That is, a component can comprise other components in the same way that objects can comprise other objects.
An application satisfies this definition, since an application can be built on multiple applications as discussed below.
The term component implies that an object has a stable interface and can be tested separately according to
published specifications. A component application need not contain its own unit test ruleset(s) but it can,
temporarily, during development. Prior to deployment, unit test rulesets would be moved to a test-only
application built on the component application.
In similar fashion, the first stage of a case type within a component application can be configured as valid only
during development. “During development” can be defined as “when the case is not covered”, if the case type is
only used as a subcase in production. When a Production application extends a case type within the Component
application that includes a test-only stage, the Production application is free to remove that stage from its own
case type rule.

Layers and multiple built-on applications


Prior to Pega 7.2, the Pega Platform layer concept was based on the single built-on-application configuration.
Over time, this configuration can create complex application structures that become challenging to maintain. For
example, a combination of one or multiple frameworks with enterprise rulesets can potentially contain multiple
unrelated rulesets. The combination also might result in your having to clone application definitions that you do
not directly manage. This situation creates a large number of required updates when changes are made,
including having to resynchronize various rulesets and versions across applications.
Starting with Pega 7.2, you can use multiple built-on component applications. Using these built-on applications
breaks up common and stand-alone components into their own applications. This modular design approach lets
you point an application to another application rule. As a result, changes to any required rulesets and versions
across applications are more easily managed. The use of multiple built-on applications also eliminates the
need to use ruleset prerequisites. Instead, you can use Application Validation mode to avoid warnings related to
the use of rulesets that are located across multiple applications.
Use built-on component applications to modularize functionality and promote reuse. Built-on applications
encourage use of Application Validation mode over ruleset prerequisites. Warnings related to use of the same
ruleset across applications are avoided.
For more information, see the Pega Community article Using multiple built-on applications.
For more information about how Pega Platform processes various hierarchical structures of multiple built-on
applications at design time, see the Pega Community article Application stack hierarchy for multiple built-on
applications.
For a video demonstration and discussion of multiple built-on applications, see the Pega Community Tech Talk
video Multiple built-on applications.
How to leverage built-on applications and components
Decomposition refers to the process of breaking down a problem into smaller, more maintainable parts. In
programming, those parts would be called components.
To create a solution that includes case types, you need an application. An application used as a built-on application can be considered a component. A collection of rulesets that does not include case types can also be considered a component; examples include reusable flows or correspondence, integration assets, or a function library. An application can leverage either type of component and then become a component itself, or be deployed as a production application.
The following example shows the built-on applications and components that support a Claims application. This
example is built directly on PegaRULES. You can also build applications on one or more Pega foundation
applications, such as Pega Customer Service.

Numerous applications and components are available for reuse on the Pega Exchange. To contribute to the Pega
Exchange, submit applications and components to make them available to the Pega community. For example, you
can add the PegaDevOpsFoundation application as a sibling built-on application when using the Deployment
Manager application to orchestrate CI/CD.
For applications you want to display in the New Application wizard as potential built-on applications, select
Show this application in the New Application wizard on the Application wizard tab on the application rule.
Use the Components landing page (Designer Studio > Application > Components) to create and manage
components. A component defines a set of ruleset versions and can have other components or applications as
prerequisites.
When creating a case type that does not extend a Foundation application case type, it is advantageous to add that case type to a new, case type-specific ruleset. Doing so facilitates the development of component applications. When this approach is followed and an application is built on PegaRULES, every case type exists in its own ruleset, while the application rule, work pool class, and application data classes exist in the ruleset created by the New Application Wizard.
A case type that appears to be a good candidate for a component application should avoid dependencies on the work pool class and application-level data classes. Instead, the case type component candidate should utilize the data class that the application-level data classes extend. Similarly, work pool rules that the case type component candidate uses can be moved to the layer beneath the current application.
At the point that all dependencies on the current application have been removed, the Refactoring Wizard (System > Refactor > Classes) can be used to remove "-APP-" from the case type's class name. This refactored case type can then be placed within its own application. The original application then defines the new component application as a built-on application. Lastly, the case type rule within the original application must be restored. This can be accomplished by performing a "bottom-up" Save-As of the refactored case type rule into its parent application.
When citizen developers design and implement applications using App Studio, they typically include multiple
discrete components of functionality. As those citizen developers become more familiar with the business needs
of the organization, opportunities to reuse those components in other applications may arise.
To create reusable built-on applications and components from citizen developer built applications, first, identify
the reusable components. Then, refactor the appropriate rules from the existing application into your new
reusable built-on applications and components as described above.
Note: It is important to define relevant records for components, not just applications, to simplify and encourage
their use.
KNOWLEDGE CHECK

When would you create a component rather than an application?


When creating a feature that has no case type and is not runnable on its own.
Ruleset, class, and circumstance specialization
Code developed according to object-oriented principles is inherently extensible. For this reason, rules can be
specialized by ruleset, class, or circumstance.

Ruleset and class specialization


A framework is a common work-processing foundation that you extend to an implementation. Designing a
framework requires knowledge of how more than one implementation would use it.
Without this information, abstracting a model common to both implementations is difficult. Focus on the
implementation at the beginning while watching for future specialization.
The following image shows how the Pega Platform supports ruleset and class specialization dimensions. In this
example, the Center of Excellence (COE) team is responsible for locking down the base 01.01.01 MyApp
application. The COE is also responsible for developing the next version of the foundation or base MyApp
application (for example, 02.01.01).
Production applications, MyAppEU and MyAppUK, remain built on MyApp 01.01.01 until they are ready to
upgrade to the 02.01.01 version. This is similar to upgrading applications to a newer version of the Pega
platform. The difference is that Pega is the COE. The purpose of the Application Version axis is to permit
evolution (upgrading and versioning) of the applications, including the locked down foundation application.
Note how the Production applications leverage the Ruleset dimension, but both applications do not need to
leverage the Class dimension. It is assumed that the UK user population is smaller than the non-UK user
population and that both user populations remain in the original database. It is also assumed that UK wants its case data to be stored separately, so it chooses direct inheritance for its case type classes, although pattern inheritance could also have been selected. Pre-existing UK cases would need to be migrated to the new case type classes, and security would need to be put in place to prevent the MyAppEU application from accessing MyAppUK data and vice versa.
From MyApp's perspective, there is no difference between splitting MyAppUK users out into a new user population and starting out with non-UK users first, then later adding UK users to a UK-specific application. In either case, the MyApp application must be carefully managed since it supports two sets of users.
The MyAppEU application can use Ruleset specialization because there is no need to migrate its user
population’s cases to different class names. This is true whether the same database, or a new database, is used
for the new or split-out user population. The only difference is that inheritance and security are required if the
same database is used. If, however, a new database is used for the new user population, no need exists to
leverage the Class dimension, i.e., the Ruleset dimension would suffice.

Ruleset Specialization Example


A simple example of how the Pega platform uses ruleset specialization is how Pega handles localization.
Localization in Pega is accomplished using rulesets.
After running the localization process, you complete the translation in progress by selecting the new default
ruleset created by the system to store the translated strings. You select an organization ruleset so that the
translations can be shared across applications.
Then you add this ruleset to the access groups that require this translation. The system automatically displays the translated text when the user interface is rendered.

Circumstance specialization
In DEV Studio, circumstanced rules are displayed by expand-and-collapse navigation in the App Explorer.
Circumstanced rules can also be searched for using: Case Management > Tools > Find Rules > Find By
Circumstance.
Note: You can also locate rules that are similarly circumstanced with a report definition that filters by
pyCircumstanceProp and pyCircumstanceVal.
A benefit of circumstancing is that it enables you to see the base rule and its circumstanced variations side-by-
side. The Case Designer supports viewing circumstanced Case Type rules this way, as well.
There are a number of drawbacks to circumstancing case-related rules as opposed to circumstancing other rule
types such as Decision rules. DEV Studio displays rules using Requestor-level scope. Case-related rules are
normally circumstanced using a case-related Property. Hence circumstanced rules would only be active when a
case instance is opened, meaning the scope would be Thread-level.
According to its Requestor-level scope, the Case Designer always displays the base versions of circumstanced
rules such as Flows. Similarly, if a circumstanced rule is opened from another rule, the base version would be
displayed. The rule’s “Action > View siblings” menu option is needed to locate the correct circumstanced
variation of the base rule. For numerous inter-related rules, as is typical for case design, this process can become
tedious. Circumstancing case type rules is not a solution to this drawback unless circumstance-unique names
are used for circumstance-unique rules such as Flows.
Specializing an application by overriding rulesets
To create an application by overriding rulesets in the built-on application, do the following:
1. Create a new ruleset using the Record Explorer.
2. In the Create RuleSet Version form, select the Update my current Application to include the new
version option.
3. Copy the existing Application rule to the new ruleset and give the application a new name that represents its
specialized purpose.
4. Open the new Application rule.
5. Configure the new application as built-on the original application.
6. Remove every ruleset from the application rulesets list except the ruleset you created in step 1.
7. Open the original application again and remove the ruleset you created and added in step 1.
8. Create new access groups that point to the new Application rule you created in steps 2 and 3.
Note: A ruleset override application can be constructed without needing to run the New Application wizard.
Inheritance and organization hierarchy specialization
You can use pattern inheritance as a special type of class specialization within an existing workpool. You can also
leverage pattern inheritance to specialize applications according to organization structure.
The Pega class naming convention is displayed in the following table. There are two optional implicit hierarchies
within each class name: organization and class specialization. In the table below, a class name can be formed
using any combination of values from each of the three columns.

Optional Organization | Optional application qualifier prefix plus standard Data/Work/Int prefixes | Optional class specialization
Org- | [App-]Data-Class | -A
Org-Div- | [App-]Work-CaseType | -B-C
Org-Div-Unit- | [App-]Int-Class | -B-D

Here are some CaseType class name examples using the pattern described in the table above:
Org-App-Work-CaseType
Org-App-Work-CaseType-B
Org-App-Work-CaseType-B-C

Pattern inheritance specialization


Pattern inheritance can leverage Dynamic Class Referencing (DCR) to decide which case type class to construct or
transition to. Similar to circumstancing, the requisite property values must be available for pattern-inheritance
specialization to occur at run-time. For example, a parent case instantiating a child case class must have the
right information available to determine which pattern-inheritance implementing class to use. The parent case
could also decide to change its own class to a pattern-inheriting class using the same information.
Assuming that the number of specialized classes is relatively small, pattern-inheritance specialization enhances
maintenance when using the App Explorer within DEV Studio. All pattern-specialized rules are grouped by class
and therefore can be viewed in the App Explorer broken down by rule category followed by rule type.

Organization hierarchy specialization


The organization structure of the company can be used to specialize case types or reuse component applications
using direct inheritance.
For example, some companies are so large that you can write applications specifically for one division. The code
within those applications is likely not shared with other divisions. For such scenarios, you specialize case type
class names containing a division modifier. For example:
ORG-DIV-APP-[Work/Data/Int]-
Although rarely used, Pega Platform supports specialization down to the org unit level. In those situations, case
type class names contain an org unit modifier. For example:
ORG-DIV-UNIT-APP-[Work/Data/Int]-
In this example, ORG-DIV-UNIT-APP-[Work/Data/Int]- org unit-level classes could inherit directly from ORG-DIV-APP-[Work/Data/Int]- division-level classes, thus allowing for specialization of the division-level classes at the org unit level.
If any application within the organization can reuse component applications, you can specify the component
applications as built-on applications by an enterprise layer application.
Similarly, component applications capable of being reused by any application within a specific division could be
specified as built-on applications by a specific division layer application.
Specialization use cases
This topic describes sample use cases and recommended specialization approaches.

Use case: Define the product at the enterprise level


3Phase Inc. is a large electronics manufacturing company that has been in business for 25 years. Over time, the
ability of the company to stock new products and replacement parts has become increasingly complex. To
manage this issue, the company created a separate Supply Chain Management (SCM) division. Currently, there is
no need to specialize SCM applications by region.
Design a specialization solution that supports this requirement.

Discussion and recommendation


The recommended specialization approach involves defining the product at the enterprise level. Utilities such as a product selector should also be defined at the enterprise level. This approach enables the SCM division and the Sales and Customer Service divisions to share the standard product data definition and utilities.
SCM’s mission is specialized—ensuring that products and replacement parts are in stock within different
regions. The mission of SCM is outside the scope of Sales and Customer Service personnel responsibilities.
As a result of SCM's specialized mission, you develop a unique set of application rules for the SCM division using the case structure PHA-SCM-APP-[Work/Data/Int]. In the future, if you need to specialize rules for a particular region, you can investigate various approaches such as:
• Circumstance rules by the name of the region, including case type rule circumstancing.
• Use pattern inheritance specialization by appending the region's acronym, prefixed with a dash, to case type class names.
• Create a new application specific to the region: first create a wrapper application built on the existing application, and then create a new implementation application built on the existing application.
Note: Only the first two approaches support all regions with a single application. The third option requires
application switching.

Use case: Creating a component implementation layer


GameMatch is a social media company that specializes in introducing members to each other as they play
different types of games.
The process for setting up and playing the game is the same for any game. The rules for each game are different. The entire process runs in the context of the game that members decide they want to play. For reporting purposes, you store each interaction, from game launch to match completion, in a different table according to the selected game.
Design a specialization solution that supports this requirement.
Discussion and recommendation
For the following reasons, the recommended specialization approach involves creating a component
implementation layer.
• The entire end-to-end interaction is similar regardless of the game selected.
• The rules for playing each game are different.
• Requiring users to switch context from one interaction to another is acceptable.
• Separately persisting each interaction is desirable.
You develop an implementation application for each unique type of game, built on a framework specialization layer.
Designing for class specialization quiz
Question # 1
Which two of the following design approaches are most closely related to the concept of application layering?
(Choose Two)

C | # | Response | Justification
C | 1 | Open/Closed Principle | Of the five main object-oriented programming (OOP) development principles, the Open/Closed Principle is the most directly related to extensibility.
C | 2 | Template design pattern | The Template design pattern embodies the concept of layering.
| 3 | Polymorphism | Polymorphism is one of the three essential aspects of OOP. It is not closely related to the concept of layering an application. Polymorphism occurs at the rule level. Layering occurs at the ruleset level.
| 4 | Liskov Substitution | The Liskov Substitution OOP development principle states that any object that implements a particular interface or API can be used equally well by an application. The Open/Closed Principle is more closely related to layering.

Question # 2
Which of the following two terms are associated with the term component? (Choose Two)
C | # | Response | Justification
C | 1 | Recursive | The definition of a component is recursive.
| 2 | Framework | A framework does not imply component in the general sense of the word.
C | 3 | Interface | A component has a stable interface.
| 4 | Ruleset | A ruleset can be used to create a component, but is not a component.

Question # 3
Which three of the following approaches are valid ways to extend rules in Pega Platform? (Choose Three)

C | # | Response | Justification
C | 1 | Ruleset | A rule can be overridden in a different ruleset.
C | 2 | Class | Pattern and direct inheritance can be applied to classes.
C | 3 | Circumstance | Circumstancing is a way to specialize rules.
| 4 | Dynamic class referencing (DCR) | DCR is a way to decide which specialized rule to use, but it is not a way to specialize a rule.
| 5 | Organization hierarchy | Organization hierarchy is a pattern-inheritance-based class naming strategy.

Question # 4
A ruleset override application can be used for which purpose?

C | # | Response | Justification
C | 1 | Creating the first implementation application built on a framework application | The first implementation application built on a framework application can use ruleset override. The New Application Wizard is not required to generate it.
| 2 | Defining applications specific to a division within an enterprise | Division-specific class names can be defined for division-specific applications. Those applications are not called ruleset override applications.
| 3 | Aggregating multiple built-on applications into a single application | Aggregating multiple built-on applications into a single application is something a framework application might do, but not a ruleset override application.
| 4 | A repository for Dependency inversion-based rules | Inversion of Control-based rules can be placed in any ruleset regardless of the type of application.

Question # 5
Which two options are valid reasons for initially creating a single application? (Choose Two)

C | # | Response | Justification
C | 1 | Avoid the development of two application layers simultaneously. | Ideally, the same development team is not charged with developing two layers simultaneously. One application delivery should precede the other application, based on versioning, in the same fashion that ruleset versions are used to upgrade the same application within a progression of environments (development, QA, UAT, Production).
C | 2 | The application is specific to a single government agency. | Rules specific to a government agency typically cannot be applied to other government agencies. The components used to build a government agency application could be shared with other government agencies.
| 3 | Development of component applications takes more time since each has to be tested individually. | In the long run, separately tested, reusable applications increase development speed.
| 4 | The concept of multiple built-on component applications has not been promoted in the past. | This is only true prior to Pega 7.2.2.
Promoting reuse
Introduction to promoting reuse
When building an application, you should consider packaging certain assets separately to promote reuse. This lesson explains how to leverage relevant records and how to decompose an application into reusable built-on applications and components. The lesson also discusses the role of a COE in managing reuse.
After this lesson, you should be able to:
• Simplify reuse with relevant records
• Leverage built-on applications and components for reuse
• Discuss the role of a COE in reuse
Application versioning in support of reuse
The New Application Wizard automatically adds two Org layer rulesets to new applications (Org and OrgInt). By default, these rulesets are set to use Ruleset validation. As you create additional applications and add rules to the Org layer, you may need to add additional rulesets to the Org layer. Eventually, a large number of specialized rulesets in the Org layer is possible. To avoid having to maintain ruleset dependencies within the Org layer, you can set the Org layer rulesets to use Application validation. This raises the possibility that multiple applications reference the same Application-based rulesets, which generates a warning. Packaging the Org layer rulesets as a built-on application helps eliminate these issues.
At the same time that rule management is simplified from an intra-component perspective, complexity increases
from a component client perspective when components are versioned.
For example, if you make major changes to rules within a built-on application, that built-on application may no longer be consistent with the applications that are built on it. An updated validation rule in a built-on application might, for instance, enforce that added properties have values, which could be problematic for applications built on it. In this example, you should consider updating the built-on application's version before deploying the changes.

Reasons to version an application


These are reasons to version an application:
• The application is using upgraded built-on application versions.
• Rulesets in the application ruleset list have been versioned, added, or removed.
When versioning an application, you can control:
• The patch levels of the ruleset versions that the application specifies
• The versions of the application's built-on applications
You can also lock the application to prevent unauthorized updates.
Tip: Detailed documentation of dependencies between applications benefits maintenance. Part of a Center of Excellence's (COE) role is to keep track of application dependencies.

Reasons not to version an application


There are valid reasons for not increasing an application version. A change could entail adding rules that are not
used by the parent application. For example, adding properties that are not validated or Rule-File-Binary rules do
not impact a parent application.
Utility code consists of reusable functions and data transforms. Place reusable utility code in a specialized ruleset that can be added to a built-on application. Parent applications at any version then have access to that utility code.
The role of a COE in reuse
To encourage appropriate and best use of the Pega Platform, organizations can create a Center of Excellence
(COE). A COE is an internal organization that centralizes the resources and expert staff who support all project
roles, including business analysts, lead system architects, and administrators.

Reuse is critical in getting the benefits and value of the Pega Platform. The responsibility of the COE is to manage
and promote reuse across projects for the organization. If no one is responsible and accountable for reuse,
assets are often reinvented.
For more information on establishing a COE, refer to the Pega Community article Creating a Center of Excellence.
KNOWLEDGE CHECK

How is a COE involved in reuse?


The COE maintains a central repository of reusable assets for use across the organization.
Designing for reuse quiz
Question 1
Which three of the following features are available in a built-on application but are not available in a component? (Choose Three)

C | # | Response | Justification
| 1 | Pega Exchange publishing | Both built-on applications and components can be published to Pega Exchange.
C | 2 | Extendable case types | Use a built-on application if you want to extend a case type.
C | 3 | PegaUnit test creation | A built-on application can contain a PegaUnit ruleset and support creation of PegaUnit test case and suite rules. A component by itself cannot place test case and suite rules in a PegaUnit ruleset.
| 4 | Integration rules | Integration rules can be included in both built-on applications and components.
C | 5 | Self-testable | A built-on application can be designed to be self-testable. A component's individual rules can be unit tested, but a component cannot test itself.

Question 2
Which two of the following are key responsibilities of a COE? (Choose Two)

C | # | Response | Justification
| 1 | Provide support to the LSA on the project | The COE provides support to all project roles, not only the LSA role.
C | 2 | Identify new opportunities in the organization | The COE helps business managers identify opportunities.
| 3 | Lead application development efforts | The COE supports individual application development efforts without taking the lead.
C | 4 | Manage and promote reusable assets | The COE is responsible for managing and promoting reuse.
Designing the data model
Introduction to designing the data model
Every application benefits from a well-designed data model. A well-designed data model facilitates reuse and
simplifies maintenance of the application.
At the end of this lesson, you should be able to:
• Design a case data model to support reuse and integrity
• Design a new data model
• Extend a data class to a higher level
• Leverage the Template Design Pattern for Data Instances
Data model reuse layers
Designing a data model for reuse is one of the most critical areas in any software project. A well-designed data
model has a synergistic effect, the whole being greater than the sum of its parts. In contrast, a poorly designed
data model has a negative effect on the quality and maintainability of an application.
A temptation exists, after a case type is created, to immediately define its life cycle. Case life cycles can be rapidly developed in App Studio with views that contain numerous properties, with everything defined at the case level. This approach becomes counterproductive at some point, owing to the lack of reusable data classes and work
pool-level properties that can be shared with other cases. Numerous hours would then be required to refactor
and retest the code.
It is better to start out grouping related properties using Data classes, classes that would contain the views to
display those properties. This is, after all, why an embedded page is now referred to as a “field group”. Creating
Data classes promotes reuse across an application’s case types. Data classes enhance and simplify application
maintenance.
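As a rough illustration in plain Java (the names are invented), grouping related properties into a data class makes them reusable by any case type, whereas flat case-level properties are not:

    // A reusable data class groups related properties (a "field group").
    class Venue {
        String name;
        String address;
        int capacity;
    }

    // Two different case types embed the same data class instead of each
    // declaring flat venueName, venueAddress, and capacity properties.
    class EventBookingCase {
        Venue venue = new Venue();
    }

    class VenueInspectionCase {
        Venue venue = new Venue();
    }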
It could be said that one of the primary purposes of a case is to manage a specific set of data. Managing data is a
different role than being the data. Viewed another way, the primary purpose of data is to be used by one or more
cases.
Cases have two types of properties:

Type of Property                  Examples
1. Customer Data                  Event Type, Number of Attendees, Cost
2. Transaction State and History  pyStatusWork, pxCreateDateTime
It is rarely necessary to define new properties for the second type.


In certain scenarios you may be tempted to define a case type with little or no life cycle or behavior, with the
intention to use the case type primarily to store data locally. The case life cycle might consist of changing
pyStatusWork from New to Open to Resolved-Completed. It could have a single assignment with an SLA.
A case may appear simple at first and not worth having a field group. A build-for-change best practice, though, is
to always associate at least one field group with a case type, one of the field groups being synonymous with the
case type.
Some data is referenced or calculated, while other data is enhanced. The former is read-only, the latter is
updatable.
For example, a Data-Warranty instance that contains static information, such as the terms and conditions of a product's warranty, would be viewed read-only. A Data-Warranty-Claim instance, on the other hand, would contain dynamically varying properties such as PurchaseDate, ClaimDate, ExpirationDate, and Expired. Within the view that displays those additional properties, the first two would be updateable, while the second two would be calculated, and hence read-only.
The sample Event Booking application also follows this pattern. The fields within an FSG-Data-Hotel instance are static from an Event case's perspective. In contrast, the fields within a RoomsRequest instance are dynamic.
The data model for a case should also take into consideration the ease with which customer data is propagated from one case to another. A field group can be populated immediately prior to a subcase or spin-off case being created. A field group used this way would, at a minimum, need to be defined at the work pool class level. If the subcase or spin-off case is an extension of a case type component, the applies-to class for the field group would need to be more generic, for example Work-Cover-.
Proactive Greenfield Data Modeling
How data model design is approached depends on whether a pre-existing Foundation application data model
exists or whether the application being developed has no prior work, hence no constraints. A name for the latter
situation is “greenfield development”. How to work with a Foundation application data model is discussed in the
lesson that follows. The discussion of proactive greenfield data model development begins in this lesson with
some fundamentals.
An often-used technique for developing an object model is to parse a document while extracting nouns and verbs. The nouns are analyzed to determine their suitability as objects and, if suitable, whether each object is a specialization of a more generic object. For example, a document might mention a Car, which is a type of Automobile or Vehicle. The verbs found in the document can be used for various purposes, such as identifying possible cases, processes, or steps within a case, depending on the granularity of the document.
Avoid developing a data model in isolation; perhaps a solution already exists. An example is when the price for a certain quantity of items purchased or consumed must be determined. The price per item can remain constant, i.e., unit pricing; the price per item can decrease as quantity increases, i.e., volume pricing; or the price can vary from range to range until the quantity is consumed, i.e., tiered pricing. In this example, a data model that supports multiple pricing strategies would be, in the long run, the most beneficial.
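To make the distinction concrete, consider a hypothetical rate table (the quantity ranges and prices below are invented for illustration):

  Quantity range   Price per item
  1 - 10           12.00
  11 - 50          10.00
  51 and up         8.50

Under volume pricing, an order of 60 items is priced entirely at the rate its total quantity reaches: 60 x 8.50 = 510.00. Under tiered pricing, each range is priced at its own rate until the quantity is consumed: (10 x 12.00) + (40 x 10.00) + (10 x 8.50) = 605.00. A data model that records a quantity range (for example, QtyFrom and QtyTo properties) and a price per range can support all three strategies; unit pricing is simply a table with a single range.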
The sample FSG Event Booking application contains an FSG-Data-Pricing class. Instead of maintaining prices per item within the database (preferable for a large number of items), this example maintains item prices within Decision Tables since the number of items is small. If prices were to change over time, the Decision Tables could be circumstanced using an "As-of-date" DateTime property. If prices were kept in the database, an AsOfDate property would need to be added, as well as a QtyTo property to support the definition of a quantity range.

Being Proactive Is Key


Ideally, before any greenfield application development takes place within App Studio, an LSA would be made aware of the application's nature and goals. The LSA would analyze this information to derive a rough idea of the initial data model and case structure that App Studio developers would leverage.
The next step is for the LSA to model this information at appropriate levels. The most abstract information, such as Data-Vehicle, would be placed at the lowest layer. Specializations of that class, such as Data-Vehicle-Car, could also be made within the same layer. The LSA must also decide whether the class should possess an Organization prefix or, when specific to a Division, an Org-Div- prefix.
If the potential exists for a Data instance to be obtained through integration, which includes querying data stored locally, Data Pages should be defined. When App Studio is used, a Data type is created in the CustomerData schema. In Dev Studio, it is possible to select the PegaDATA schema when a local data source is added to a Data type. Database tables within the CustomerData schema do not contain "Pega" columns such as pzInsKey or pzPvStream, a.k.a. the BLOB.
The decision whether a Data class will always be used as a Field Group (Page) and never persisted on its own is important. Note the differences below in how that data is persisted if required for reporting.

Data always embedded (Page List): If an embedded Page List property is optimized, a Rule-Declare-Index, Index class, Index table, Index class property, and index table column are generated.

Data persisted outside a BLOB (Page List): A Field Group List (Page List) property can be defined that references a List Data Page where data is stored external to the object, i.e., not within the object's BLOB. An example in the solution provided for this course is the FSG-Data-Pricing data type and the FSG-Booking-Work-BookEvent case's .Pricing Field Group List (Page List), which references D_PricingList, passing it the pzInsKey of the BookEvent case as the Ref parameter value. A second example is the FSG-Data-Address data type and the FSG-Data-Location class's .Addresses Field Group List (Page List) property.

Data always embedded (Page): If an embedded Page property is optimized, a column is created in the database table and mentioned in the Class rule's Mapping tab.

Data persisted outside a BLOB (Page): A Field Group (Page) property can be defined that references a non-List Data Page where data is stored external to the object, i.e., not within the object's BLOB. The Field Group (Page) property only needs to contain the key(s) to a Lookup Data Page. An example in the solution provided in this course is FSG-Booking-Work-RoomsRequest .Hotel, where .Hotel.pyGUID is passed as the pyGUID parameter to D_Hotel.

Note that the RoomsRequest case's .Hotel property was overridden by the production Booking application. Within the component Hotel application, the RoomsRequest case's .Hotel property is defined as a normal Field Group (Page). This was done to support data capture and persistence, to facilitate testing performed against the Hotel case component. The Booking application's purpose is to book events, not define hotel instances. The Booking application's RoomsRequest should only perform an FSG-Data-Hotel lookup using a value it originally obtained from .RoomsRequest.HotelGUID within the InitFromRoomsRequest Hotel subcase spin-off data transform.
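As a generic sketch of this keyed-reference pattern (the configuration shown is illustrative, following the property and Data Page names above):

  .Hotel.pyGUID                      Only the key is stored within the case.
  D_Hotel[pyGUID: .Hotel.pyGUID]     Hotel details are fetched on demand through the Lookup Data Page.

Because the case stores only the key, hotel details are always read from the system of record rather than from a potentially stale embedded copy.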
How to extend a data class to higher layers
Enterprise-level data classes can be referenced by enterprise-level Work- classes. Similarly, implementation-level data classes are normally referenced by implementation-level Work- classes. Enterprise-level <Org>-Data- classes should also be able to be referenced by implementation-level <Org>-<App>-Work cases. So how can an enterprise-level data class (for example, <Org>-Data-Warranty) be referenced by implementation-level <Org>-<App>-Work cases? And how can an enterprise-level Work- case reference an implementation-level data class? The solution is to use Dynamic Class Referencing (DCR).

Using DCR
When a ClipboardPage is constructed, a number of properties with a "px" prefix, including pxObjClass, are instantiated by default. The "px" prefix connotes "should remain static" but does not necessarily mean "must remain static." The values of some "px" properties should not be changed post-construction, such as pxCreateDateTime and pxCreateOperator, as they record historical information. It is possible to change the value of the pxObjClass property post-construction, but care must be taken. An example of where pxObjClass is changed at run time is when Assign-WorkBasket is changed to Assign-Worklist, and vice versa.
A ClipboardPage is a StringMap according to the Engine API. Suppose the value of pxObjClass for an existing ClipboardPage is changed to a more specialized class within the inheritance hierarchy, such as a class that directly inherits from the existing class. This is not the same as constructing a ClipboardPage whose pxObjClass holds the specialized class value from the start.
The difference lies in the values that are initialized when the pyDefault Data Transform, if any, is called at the time of ClipboardPage construction. A pyDefault Data Transform can be configured to call its super (more generic) Data Transform. When case types are generated, the Call superclass data transform check box option is set. When Data classes are created, this option is not set automatically.
A possible approach to this post-construction initialization problem is to use an @baseclass WasPageInitialized Value Group property. Every pyDefault Data Transform within an inheritance hierarchy can be altered as follows.

1. Set param.Class = @replaceAll(.pxObjClass, "-", "_")
2. When .WasPageInitialized(param.Class), exit the data transform
3. <if this transform is called from the application level, as opposed to the enterprise level, apply any pattern-inherited default transform(s) left to right>
4. <steps to initialize the defaults for this class go here>
5. Set .WasPageInitialized(param.Class) = true

Whether a Data class at the root of an inheritance hierarchy, e.g., Org-Data-Vehicle, calls its super Data Transform
is optional.
The figures below are from the FSGGogoRoad and FSGEnt sample applications provided with this course.
FSGGogoRoad is built on the FSGEnt application, and the FSGEnt application is built on the FSG application. The
FSGGogoRoad and FSGEnt applications are intended to mirror the relationship between the OOTB PegaSample and
PegaRULES applications. The FSGEnt application is considered enterprise-level; while the FSGGogoRoad application
is considered to be implementation-level.
The figures show eight pyDefault Data Transforms within the FSG-Examples-Data-Vehicle class hierarchy. Note that an FSG-GogoRoad-Data-Vehicle pyDefault transform would not be called when the super Data Transforms are invoked; this is the reason for having step (3) above. Because the pyDefault name cannot be used for this purpose, GogoRoad's Motorcycle, Truck, and Car classes must each apply, in their step (3), a data transform with a different name, such as DefaultGogoRoad.
The first field within the Identify Vehicle screen within an Assistance Request case establishes the vehicle's Type, i.e., Car, Truck, or Motorcycle. On change, the Type field's value is posted to the server, followed by a Section refresh in which the TypeOnChange activity is called. The TypeOnChange activity passes the Type value as a parameter to the D_VehicleDCR Data Page. The D_VehicleDCR Data Page, in turn, invokes its LoadVehicleDCR data transform. This data transform is overridden by the FSGGogoRoad application as illustrated in the figure below. The TypeOnChange activity then sets the pxObjClass of the case's FSG-Examples-Work .Vehicle Field Group (Page) equal to the pxObjClass of the D_VehicleDCR Data Page.
The LoadVehicleDCR data transform does not need to be complex. A simple Decision Table could be used, which could be modified as new Vehicle types are added. Handling the selected Type value within the LoadVehicleDCR data transforms, at both the enterprise and implementation application levels, is as simple as appending the Type value to the root Vehicle Data class path, as the table below shows.

Ruleset LoadVehicleDCR
FSGGogoRoad .pxObjClass = "FSG-GogoRoad-Data-Vehicle-" + param.Type
FSGExample .pxObjClass = "FSG-Examples-Data-Vehicle-" + param.Type
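A sketch of what such a Decision Table might look like at the FSGGogoRoad level (the rows are assumptions based on the vehicle types mentioned above, not the actual rule from the solution):

  param.Type     Return (.pxObjClass)
  Car            "FSG-GogoRoad-Data-Vehicle-Car"
  Truck          "FSG-GogoRoad-Data-Vehicle-Truck"
  Motorcycle     "FSG-GogoRoad-Data-Vehicle-Motorcycle"
  otherwise      "FSG-GogoRoad-Data-Vehicle"

Adding a new vehicle type then becomes a one-row change to the table rather than a change to procedural logic.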
Leveraging the Template Pattern for Data Instances
A number of ways exist to leverage the Template Pattern when dealing with Data instances. A common example is when a Data class at the enterprise layer declares the class of a Work page as Work-Cover-. The enterprise layer should not be aware of any applications within layers above it, so it should never declare a Work page's class using <Org>-<App>-Work.
At run time, a Work page of class <Org>-<App>-Work-XYZ would exist on the Clipboard. For simplicity, suppose case XYZ contains an enterprise-level field group property. The <Org>-<App> application has had no reason to extend the field group's <Org>-Data- page class to <Org>-<App>-Data-. The <Org>-<App> application could do so if needed, but for this example, it would add unnecessary complexity.
Case XYZ asks its enterprise-level field group to perform an action, for example, apply a data transform. The enterprise-level data transform does something, then wants the Work page to do something as well, for example, execute an activity or data transform. At the enterprise level, that activity or data transform only needs to be defined, i.e., stubbed out; it does not need to execute any code. The application does not need to override that activity or data transform if it does not need to. If it does, the code in that activity or data transform is executed, perhaps updating the same case that had asked the enterprise-level field group to perform an action.
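A minimal sketch of this callback collaboration (every rule name below is hypothetical, chosen only to show the shape of the pattern):

  <Org>-Data-Warranty UpdateClaimDates (enterprise-level data transform)
  1. <set ClaimDate, ExpirationDate, and other field group properties>
  2. Apply data transform PostWarrantyUpdate against the Work page (declared as Work-Cover-)

  Work- PostWarrantyUpdate (extension-point data transform)
  <no steps; defined only so the enterprise layer has something to call>

An application that needs case-level behavior overrides Work- PostWarrantyUpdate in its own work class; applications that do not need it simply inherit the empty stub.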
The solution provided in the course contains an example that goes a step further by enforcing which rules can be overridden and which cannot. The example is the CreateOrUpdateLocation flow, called twice from the Data-Portal AppConfigContainer Section: one link for Hotels, another for Venues. The CreateOrUpdateLocation flow is launched in a modal dialog. The work pages behind the modal dialog, D_RoomsRequestTempCase and D_BookEventTempCase, are declared temporary since they should not be persisted. What should be persisted is either a new or existing FSG-Data-Location, i.e., the base class for FSG-Data-Hotel and FSG-Data-Venue.
Below is the Work- CreateOrUpdateLocation templated screen flow. Because CreateOrUpdateLocation is a screen
flow, it can only call another screen flow, here one named SaveLocation.
Unlike CreateOrUpdateLocation, the SaveLocation flow is non-Final, meaning the FSG-Booking-Work-Event case type class and the FSG-Hotel-Work-RoomsRequest case type class are allowed to override it. The RoomsRequest case overrides the SaveLocation flow by specifying D_HotelSavable as the name of the Savable Data Page to use. The BookEvent case overrides the SaveLocation flow by specifying D_VenueSavable as the name of the Savable Data Page to use.
The table below shows the Template Pattern in action. The consumer of the CreateOrUpdateLocation flow is free to extend it to a new and different class that extends FSG-Data-Location. The consumer only needs to override the five low-complexity rules in the Extension Points column.

Rule Type        Final Rules                    Extension Points
Flow             Work- CreateOrUpdateLocation   Work- SaveLocation
Flow Action      Work- SearchLocation
                 Work- AddLocationInfo
                 Work- ReviewSavedLocation
Section          Work- AddLocationInfo          FSG-Data-Location AddInfo
Activity         Work- SaveLocationAddress
Data Transform   Work- PreSearchLocation        Work- ChangeLocationDataClass
                 Work- PreAddLocationInfo       Work- InitLocation
                 Work- PreSaveLocation          Work- PreSaveLocationExtension
How to maintain data integrity
Maintaining data integrity is crucial in applications that persist data that can be accessed and updated by
multiple requestors. This data includes shared information that can be updated, such as reference data, where
Pega is the system of record. You can mitigate data integrity issues by locking instances, avoiding redundancy,
and accounting for temporal data.

Locking instances
On the Locking tab of a data class rule, select Allow locking to allow instances to be locked. The Locking tab is only displayed when the data class is declared concrete as opposed to abstract. By default, the class key defined on the General tab is used as the lock key.

Avoiding redundancy
A potential data integrity issue can arise if you are querying two different tables in a database for the same
entity. If you are retrieving data from similar columns that exist in both tables, the values in those columns may
be different.
To avoid these potential conflicts, keep in mind the single source of truth principle. This is the practice of
structuring information models and associated schemata such that every data element is stored exactly once.
Within a case hierarchy, you may want to always use data propagation from a parent case to each child case.
However, if the data propagated from the cover is subject to change, then accessing the data from pyWorkCover
directly is better. You can use a data page if you need to access data from a cover’s cover.
The following image illustrates the propagation of information from the WorkPage to a Claim case (W = WorkPage
and C = Claim). W1 is the original WorkPage, W2 is W1’s cover, and C is W2’s cover.

Note that the use of recursion in the above example could be avoided by defining a ClaimID property at Org-App-Work and ensuring the value of ClaimID is propagated to each child case. A subcase's pxCoverInsKey never changes. Not setting the ClaimID property initially, and propagating it, leads to information loss that requires effort to recover. Any property not subject to change can similarly be propagated, especially for purposes such as reporting and security enforcement.
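A minimal sketch of such propagation (the property name follows the example above; exactly how the parent page is referenced depends on where the transform is invoked):

  1. Set .ClaimID = <parent case page>.ClaimID    (copy the immutable claim identifier to the child case)

Because ClaimID never changes, every descendant case can then report on, or enforce security against, the owning Claim without walking pxCoverInsKey references at run time.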
Referencing pages outside a case has the additional benefit of reducing the case’s BLOB size. A smaller BLOB
reduces the amount of clipboard memory consumed. The clipboard is able to passivate memory if it is not
accessed within a certain period of time.
Instead of maintaining large amounts of data within the BLOB as either embedded pages or page lists, consider storing that information in history-like tables, that is, tables that do not contain a BLOB column. These tables let you use data pages to retrieve the data as needed. As with any type of storage, consider exposing a sufficient number of columns in these tables to allow a glimpse of what the pages may contain while avoiding BLOB reads.
KNOWLEDGE CHECK

How does the single source of truth principle help in data integrity?
This principle ensures that every data element is stored exactly once.

Accounting for temporal data


Accounting for temporal data is another way to help ensure data integrity. Temporal data is valid within a
certain time period. One approach to accommodating temporal data is to create data classes using a base class
containing Version and IsValid properties. The base class can also contain properties suited for lists such as
ListType and SortOrder.
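A sketch of such a base class (the class name is hypothetical; the properties are those described above):

  Org-Data-Versioned (abstract)
  .Version     Integer      Monotonically increasing; the highest value is the current version.
  .IsValid     TrueFalse    Set to false once the instance is superseded or withdrawn.
  .ListType    Text         Categorizes instances that participate in lists.
  .SortOrder   Integer      Controls display order within a list.

Concrete temporal data classes, such as a rate or fee schedule, inherit these properties and filter on them, for example by selecting the maximum Version per key within a report definition subreport.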
Alternatively, you can accommodate temporal data using a custom rule. Custom rules can be the best approach
for maintaining a list that contains a list. For example, assume you create a survey that contains a list of
questions. Each question has a list of responses. You could define a Rule-Survey rule class and the following
data model.
• QuestionList — A field group list of class Data-SurveyQuestion
• ResponseList — A field group list of class Data-SurveyResponse
The ResponseList is owned by each Data-SurveyQuestion.
Using data pages to perform SnapShot Pattern lazy loading can compromise data integrity if the data page
lookup parameters do not account for temporal data. The child case could load information newer than the
information that the parent case communicated to a customer.
You could use data propagation to child cases but this approach can cause redundant data storage. To avoid this
issue, you can have child cases refer to their cover case for the information (the questions and possible
responses) sent to the customer. Only the cover needs the SnapShot Pattern. The child case only needs to record
the customer’s response(s) to each question.
KNOWLEDGE CHECK

What are two ways to accommodate temporal data?


Use a data class with a version property, or use a custom rule.
Designing the Data Model Quiz
Question 1
In which of the following situations is data locking required?

1. When a case property uses the Data Page Snapshot pattern
   Justification: Locking is not necessary when copying data.
2. When initially saving a record to a custom queue table
   Justification: Locking is not necessary when initially saving data.
3. When the Allow locking check box is checked within a data class rule and an instance of the data class is opened
   Justification: The fact that a data class allows locking does not mean every instance that is opened has to be locked.
4. When ensuring integrity of a custom queue table (Correct)
   Justification: A custom queue table can be accessed by multiple requestors at the same time - some requestors wanting to update records, and other requestors wanting to delete records.

Question 2
An enterprise-level data class has multiple instances that can be selected from a data page sourced repeating grid. A data transform that adds an instance to a list is to be included in the class, and in the ruleset in which the class is defined. For extensibility, which two of the following options could be implemented? (Choose Two)

1. A lookup data page returns the class of the page according to the application that adds to the list. (Correct)
   Justification: An application may want to extend the enterprise data class while using the data transform as is.
2. Provide a uniquely named, embedded-list property applied to @baseclass that collects the same class. (Correct)
   Justification: An application that overrides the add-to-list data transform should be able to reuse an existing list property and not have to define a new one.
3. Create a class with "-Extension" appended to the data class name. Save the data transform to that class.
   Justification: Creating a new pattern-inheriting class in the same ruleset is an example of specialization but does not by itself increase extensibility.
4. A lookup data page returns the name of the page to use when appending to .pxResults().
   Justification: The .pxResults() list property is already extensible since it can contain instances of any class.

Question 3
Which two hierarchy types are applicable to object-oriented programming inheritance? (Choose Two)

1. Subsumptive (Correct)
   Justification: A subsumptive hierarchy is synonymous with is-a relationships.
2. Compositional (Correct)
   Justification: A compositional hierarchy is one where a thing is the sum of its parts, the parts themselves also capable of being compositions.
3. Programmatic
   Justification: No such hierarchy type exists.
4. Layer-based
   Justification: No such hierarchy type exists.

Question 4
A dependency tree has which characteristics?

1. A level can only reference the previous level.
   Justification: A level only has to reference the previous level once. It may also reference lower levels.
2. Circular references are not allowed within a level.
   Justification: This works, provided a subreport is used to isolate the correct version to use.
3. The number of items in a level tends to decrease as the level increases. (Correct)
   Justification: In general, objects are not specialized infinitely.
4. Layers are synonymous with unique contiguous sets of levels.
   Justification: Levels only approximate the layer concept. Some relationships may be deeper than others, yet belong in a lower layer according to its reusability.

Question 5
Which two of the following options are ways to version data? (Choose Two)

1. Define a custom rule. (Correct)
   Justification: A custom rule is inherently versionable.
2. Add a property such as an as-of date or monotonically increasing version. Filter based on values. (Correct)
   Justification: This works, provided a subreport is used to isolate the correct version to use.
3. When deploying data, remove existing instances first.
   Justification: This by itself does not make data versionable.
4. Define report definitions with subreports that filter on MAX pxUpdateTime.
   Justification: When data is deployed to a new environment, the update time-stamp is set to the current date-time.
Extending an industry framework data
model
Introduction to extending an industry foundation data model
Pega offers a foundation data model for each major industry, including financial services, healthcare, and
communications. Similar to leveraging a Pega application, using the industry foundation data model can give you
an improved starting point when implementing your application.
After this lesson, you should be able to:
• Identify benefits of using an industry foundation data model
• Extend an industry foundation data model
• Use versioning to manage changes to an integration data model
Industry foundation data model benefits
Pega's industry foundation data models allow you to leverage and extend an existing logical data model instead
of building one from scratch. Pega offers industry data models for Healthcare, Communications and Media, Life
Sciences, Insurance, and Financial Services. Instead of building data classes yourself, you can map the system of
record properties to the data class and properties of the industry data model. For example, the following image
illustrates the logical model for the member data types for the Healthcare Industry foundation.

You can embellish the industry foundation data classes to include additional properties from external systems of
record as needed.
Pega's industry foundation data models apply the Separation of Concerns (SoC) design principle: Keep the
business logic separate from the interface that gets the data. The business logic determines when the data is
needed and what to do with that data after it has been retrieved. The interface insulates the business logic from
having to know how to get the data.
In Pega Platform, data pages connect business logic and interface logic. The following image illustrates the
relationship between the data page, the data class, and the mechanism for retrieving the data.

This design pattern allows you to change the integration rules without impacting the data model or application
behavior.
KNOWLEDGE CHECK

What are the two key benefits of using a Pega industry foundation data model?
The industry foundation provides data pages that separate business logic from the source of the data
(interface), and provides a robust starting point for data properties and classes.
Rather than directly extending the industry foundation data model, your project may mandate that you use an Enterprise Service Bus (ESB)-dictated data model. The goal of an ESB is to implement a canonical data model that allows clients who access the bus to talk to any service advertised on the bus.

In a situation where an ESB is used the Pega development team does not define the canonical data model.
However, the team may maintain the mapping between the canonical data model and the foundation data
model.
Note: The best practice is for the Pega development team to leverage and extend the foundation data model as
needed. All mappings of the external SOR properties to the properties in the Pega foundation data model
should be done by the Pega development team within the Pega application using Data Pages.
How to extend an industry foundation data model
Follow this process to extend an industry foundation data model:
• Obtain industry foundation data model documentation
• Obtain the data model for the system of record
• Map the system of record data model to the industry foundation data model
• Add properties to the industry foundation data model and add data classes as needed
• Maintain a data dictionary
Before you begin the mapping process, determine which parts of the data to map. For example, when producing
the initial minimal lovable product (MLP), it may not be necessary to map all of the data from the source before
the go-live.
Note: Building web services can be an expensive and lengthy process. If you discover that you need to build a
new web service, consider using robotic automation.

Obtain industry foundation data model documentation


The Community landing page for each industry foundation data model contains an entity relationship diagram
(ERD) and data dictionary. You will need these documents to help you map the industry foundation data model to
the system of record data model. Acquaint yourself with the relationship of the data types, classes, and
properties that the industry foundation data model provides.
For example, the Pega Customer Service data model has three main classes:
1. Pega-Interface-Account
2. Pega-Interface-Customer
3. Pega-Interface-Contact
In an industry such as banking, a customer typically has multiple accounts such as checking and savings. A
customer should be able to define multiple contacts per account. Therefore, the relationship between Account
and Contact is many-to-many.
Pega industry frameworks do not force the consumers of their data models to use intermediary, many-to-many association classes. Instead, in true Separation of Concerns (SoC) fashion, Pega industry frameworks hide many-to-many relationship complexity by having data consumers reference appropriately named and parameterized data pages.
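As a generic illustration of that approach (the data page name and parameter are hypothetical, not rules shipped with a specific framework):

  D_ContactsForAccount[AccountNumber: .Account.AccountNumber]

A consumer that needs the contacts for an account references the parameterized data page and receives the resolved list; the join through the underlying many-to-many association structure happens inside the data page's source logic, invisible to the caller.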

Obtain the data model from the system of record


Work with the enterprise architecture team at your organization to obtain a model of the system of record. This
data model documentation can take the form of:
• An entity relationship diagram
• A canonical data model (typically used in ESB solutions)
• A WSDL or XSD
• A spreadsheet
Regardless of format, this documentation serves as a source for mapping the industry model to the system of
record.

Map the system of record data model to the industry foundation data model
The next step is to map the system of record data model to the industry foundation data model. To help with this
process, use a tabular format to record this information, such as a spreadsheet. The output is a reference
document to use when mapping property values from the integration response to the application data structure.
During this analysis, you may find that you need new properties for the application. For example, when mapping
the healthcare industry foundation data model, you may find that you need a property to store information when
a claim is submitted outside of the member's home country. Record the name and class where the property
resides because you will need to add it to the application data model.
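A fragment of such a mapping spreadsheet might look like the following (the source column names and notes are invented for illustration):

  Source (system of record)   Industry foundation target                Notes
  MBR_ID                      <Org>-Data-Member .MemberID               Direct copy
  MBR_DOB                     <Org>-Data-Member .DateOfBirth            Convert YYYYMMDD to a Pega Date
  CLM_CTRY_CD                 <Org>-Data-Claim .ServiceCountryCode      New property; supports out-of-country claims

Recording transformation notes alongside each mapping also doubles as the start of the data dictionary discussed below.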

Add data classes and properties


Only create new data classes and properties if your application requires that data from the source system. Use
the data classes and properties from the industry foundation data model as much as possible.
If you create any new properties and data classes, generate the integration rules into the organization level
integration ruleset. Test each data page to ensure the mapping of the source data to the application data model
is correct.
Tip: Over time, web service and REST service definitions can change. Use rulesets to maintain versions of integration rules. As a best practice, allow the center of excellence to manage and deploy these rules to development environments.

Maintain a data dictionary


If the data mapping is not recorded, it may be difficult or impossible for another team maintaining the model to
reverse the mapping if necessary. A data dictionary is especially important if two or more source data items map
to one output data item (this type of relationship is called surjection). For instance, the same type of information
exists in two different paths within the integration data model. Encourage your development team to document
the meaning and proper use of data model properties.
KNOWLEDGE CHECK

What is the primary benefit of using an industry foundation data model?


You do not have to create the data model yourself. You map the system of record data model to the industry
foundation data model, and only add new fields if required by the application.
How to use integration versioning
The development of a Pega application in parallel to the development of an ESB or another form of integration to
a legacy system is common. The integration data model has its own unique internal dependencies. The mapping
code depends on the current state of the integration data model for conversion to the business logic data model.
Even after an application is placed into production, these internal dependencies may change.
Mapping code acts as an insulator between the business data model, which is subject to change, and the integration data model, which is also subject to change. This is also known as loose coupling.
To deal with changes to integration data models, generate the models with new integration base classes, such as <Org>-Int-<Service>V2, <Org>-Int-<Service>V3, and so on.
Note that the new data model may create code redundancy. This issue is easily addressed by generating the mapping code into a different ruleset. Over time, when there is no need to return to a previous ruleset, you can remove that ruleset from the application definition. Ideally, you remove the unused ruleset when versioning the application.
Changes to the integration root classes need to be accounted for; use DCR to accommodate those changes. Since data pages support when conditions, you can use those conditions to determine which integration version to use based on the application version. You can also supply a Data-Admin-System-Setting value to a data page to accommodate the interface data model change.
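A minimal sketch of version selection through DCR (the setting name and class names are hypothetical, and the Dynamic System Setting lookup is shown generically rather than with a specific API):

  1. Set Param.IntVersion = <value of the MyOrg/IntegrationVersion Dynamic System Setting, for example "V2">
  2. Set Param.IntClass = "MyOrg-Int-CustomerService" + Param.IntVersion
  3. <create the response page as a new page of class Param.IntClass and map the service response onto it>

Because consuming business logic reaches the data only through a data page, switching the setting from "V2" to "V3" requires no change to case processing rules.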
Extending an industry foundation data model Quiz
Question 1
Which two statements are true with regard to the industry foundation data model? (Choose Two)

1. The industry foundation data model can only be used as a built-on application in conjunction with Pega applications.
   Justification: While most commonly used with Pega applications, the industry foundation data model can be implemented separately from a Pega application.
2. A best practice is to use the industry foundation data model as-is, as much as possible. (Correct)
   Justification: If you need to add new properties, you can add them to the industry model data classes in your organization-level ruleset. You do not have to add new classes unless the structure of the source data requires that you do so.
3. Pega's industry foundation data model simplifies the complexity of many-to-many relationships. (Correct)
   Justification: The complexity of dealing with many-to-many relationships is handled by data pages provided with the industry foundation.
4. Data pages provided by the industry foundation data model need to be extended for each external integration.
   Justification: The logic to map an integration data model to the industry foundation data model changes from one interface to another. Industry foundation data pages need not change.

Question 2
Which of the following statements is true regarding the mapping of data from a system of record to an industry foundation data model?

1. Every property retrieved from an interface must be mapped to the business data model before an application can go live.
   Justification: It is possible for a minimal lovable product (MLP) to be achieved with a fraction of the total amount of retrievable data being mapped.
2. Maintaining a data dictionary is a wasteful use of team resources.
   Justification: Adding to a data dictionary during development does not significantly increase project scope.
3. The mechanics of how data is mapped is the LSA's primary concern.
   Justification: An LSA should know how to map data in the application but should delegate that responsibility to the team.
4. Leveraging a Center of Excellence (COE) can ensure consistent implementation of the industry foundation data model. (Correct)
   Justification: A best practice is for the COE to maintain integration and data model rules across applications to promote reuse and consistent implementation of the industry foundation data model and integration rules.

Question 3
To accommodate changes to integration rules, an LSA __________________ and _______________. (Choose Two)

1. Creates new ruleset versions using the existing integration base class.
   Justification: A class can only be defined in one ruleset. The integration wizards only generate rules; they do not withdraw rules that are no longer used.
2. Specifies a new integration base class when using a wizard, using dynamic class referencing (DCR) to get the class name at run time. (Correct)
   Justification: A new base class does not overlap with previously generated classes. DCR can be used to obtain the name of the correct integration base class to use.
3. Versions the application when certain integration rulesets become obsolete. (Correct)
   Justification: There is no need to specify a ruleset in an application if that ruleset is never used.
4. Develops custom rules to encapsulate the integration data model, thereby avoiding the complexity of versioning.
   Justification: Custom rules are not recommended for integration. Custom rules are ideal for a simplistic data model where properties stay the same from version to version but the contents of pages and page lists change.
Assigning work
Introduction to assigning work
A case is often assigned to a user to complete a task. For example, an employee expense requires approval by
the manager of a cost center or a refund is processed by a member of the accounting team.
In this lesson, you learn how to leverage work parties in routing and how to customize the Get Next Work
functionality to fulfill the requirements.
After this lesson, you should be able to:
• Compare push routing to pull routing
• Leverage work parties in routing
• Explain the default Get Next Work functionality
• Customize Get Next Work
Push routing and pull routing
The two basic types of routing are push routing and pull routing.
Push routing logic is invoked during case flow processing to determine the next assignment for the case. Push
routing occurs when a pyActivityType=ROUTE activity is used to create either a worklist or workbasket
assignment. When routing to a worklist assignment, Pega can use multiple criteria to select the ultimate owner,
such as availability (whether an operator is available or on vacation), the operator’s work group, operator skills, or
current workload. You can even configure routing to a substitute operator if the chosen operator is not available.
Pull routing, also known as the system-selected assignment model, occurs outside the context of a case creating an assignment. In standard portals, you can pull the next assignment to work on using Get Next Work by clicking Next Assignment at the top of the portal. It is also possible to pull an assignment to work on by checking Look for an assignment to perform after add? within a flow rule.
The Get Next Work feature selects the most urgent assignment from a set of assignments shared across
multiple users. Ownership of the fetched assignment does not occur until either MoveToWorklist is called or the
user submits the fetched assignment's flow action. The GetNextWork_MoveToWorklist Rule-System-Settings rule
must be set to true for the MoveToWorklist activity to be called.
Note: MoveToWorklist is called from pzOpenAssignmentForGetNextWork following the successful execution of the
GetNextWorkCriteria decision tree.
The following image illustrates how pyActivityType=ROUTE activities, when run in a case processing context, are
used to achieve push routing. The image also illustrates how GetNextWork-related rules, executed in a non-case
processing context, are used to achieve pull routing.

There are four main categories of Push Routing activities as shown in the table below.
Common               Organization Based    Decision Based    Skills Based
ToAssignedOperator   ToWorkGroup           ToDecisionMap     ToLeveledGroup
ToCreateOperator     ToWorkGroupManager    ToDecisionTable   ToSkilledGroup
ToCurrentOperator    ToOrgUnitManager      ToDecisionTree    ToSkilledWorkbasket
ToWorkParty
ToNewWorkParty
ToWorkbasket
ToWorklist

The ToCurrentOperator routing activity must be used carefully. If a Change Stage action is performed that moves the case backward, the person who performed the move may not want to become the owner of the new assignment. Instead, that person may want the assignment to be routed to a party associated with the case. In that situation, ToWorkParty routing should be used.
If a background process such as a Standard Agent or Advanced Agent moves a case forward to an assignment that uses ToCurrentOperator routing, a ProblemFlow could result. An assignment using ToCurrentOperator after a Wait shape routes to the requestor that routed the case to the Wait shape.
Routing activities such as ToAssignedOperator, ToCreateOperator, and ToCurrentOperator do not specify the role of the person receiving the assignment. Routing to a role makes it clearer why the assignment was routed to the person who received it. When using ToWorklist routing, it is better to use a property name that indicates the receiver's role rather than a hard-coded Operator ID.
Party roles should also be specific to the solution and not be overly generic. At the beginning, the person who created a case can be considered the "Owner," but later in the case life cycle, after the case has been routed to multiple people, it can become confusing as to who in fact is the "Owner."
ToNewWorkParty and ToWorkParty both route an assignment to the worklist of the party specified by the PartyRole parameter. ToNewWorkParty also adds a work party for the PartyRole if one does not already exist. ToWorkParty, on the other hand, throws an error if the configured PartyRole does not exist prior to the assignment.
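As a simple sketch (the party role name is illustrative), an assignment shape configured as:

  Router:    ToWorkParty
  PartyRole: "Approver"

routes to whichever operator currently fills the Approver party on the case. The configuration expresses the receiver's role rather than a hard-coded Operator ID, which keeps the routing intent readable and resilient to personnel changes.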

KNOWLEDGE CHECK

What feature in the Pega Platform supports the pull routing paradigm?
Get Next Work
How to leverage work parties in routing
Adding work parties to case types allows for consistent design and can simplify the configuration of routing throughout the case life cycle. The layer of abstraction provided by work parties aids the developer by providing a dynamic, extensible, and reusable routing solution. Using work parties in your solutions also allows you to leverage related base-product functionality, such as a ready-to-use data model, validation, UI forms, and correspondence, which simplifies design and maintenance. The basic configuration of work parties has already been described in prerequisite courses. This topic concentrates on forming a deeper understanding of work party functionality and routing configuration to aid the developer in fully leveraging this functionality.

Understanding Work Party Behavior


Most of the existing code for work parties is contained in the Data-Party class which provides the base
functionality and orchestration for rules in derived classes. Several classes extend Data-Party, such as Data-
Party-Operator which overrides the base functionality. It is important to understand this polymorphic behavior
since the behavior will change depending on the class used to define the work party. For example, work party
initialization, validation and display will differ between work parties constructed from the Data-Party-Operator
class compared to those constructed from the Data-Party-Person class. You are encouraged to review and
compare the rules provided in the Data-Party class and subclasses to gain an appreciation for the base
functionality and specialization provided.
An example of a polymorphic rule is the WorkPartyRetrieve activity. This activity is overridden within the Data-
Party-Operator and Data-Party-Person derived classes.

The WorkPartyRetrieve activity is significant since it is invoked every time a page is added to the .pyWorkParty() pages embedded on the case. The .pyWorkParty() page group stores the individual parties added to the case. The property definition contains an on-change activity that ultimately invokes the WorkPartyRetrieve activity.
It may be necessary to override the default behavior of some aspect of work parties, such as validation or display. This can be performed either through ruleset specialization or by extending the work party class and overriding the required rules. If this is required, ensure the changes are scoped correctly so as not to change behavior in unintended ways.
Configuring Work Party rules
A work parties rule defines a contract specifying the possible work parties that can be used in the case. Define work parties for the case in the standard work parties rule pyCaseManagementDefault. Use meaningful party role names to enhance application maintainability. Avoid generic names such as Owner and Originator; generic, non-descriptive role names can be subject to change and may not intuitively describe the party's role with respect to the case.
You can use the visible on entry (VOE) option to add work parties when a case is created. VOE allows you to:
• Enable the user to add work parties in the New harness
• Automatically add the current operator as a work party
• Automatically add a different operator as a work party
Use the data transform CurrentOperator in the work parties rule to add the current operator as a work party
when a case is created, example shown below. You can create custom data transforms to add other work parties
when a case is created.

Context              Add Current Operator   Add Other Operator
VOE Data Transform   CurrentOperator        D_pxWorkgroup[WorkGroup:"EventManagement@Booking"].pyManager

Setting Work Parties When Performing Flow Actions


Prior to routing to a designated work party, the work party property values must be initialized. This can be performed at the time the case is created, as described above, using the VOE option on the work parties rule, or it can be performed dynamically during case processing. When dynamically setting the work party values, leverage the addWorkObjectParty activity, which also allows you to specify a data transform to initialize the values. Use caution with this activity, however, if the work party already exists and is not declared repeatable.

Flow Action Data Transforms

On Perform (Pre-Processing):
  SET Param.PartyClass = "Data-Party-Person"
  SET Param.PartyModel = "NewParty"
  WHEN @pageExists(".pyWorkParty(" + param.PartyRole + ")") = false
  SET Param.Success = @pxExecuteAnActivity("Primary", "addWorkObjectParty")

On Submit (Post-Processing):
  SET Param.PartyClass = "Data-Party-Operator"
  SET Param.PartyModel = "CurrentOperator"
  WHEN @pageExists(".pyWorkParty(" + param.PartyRole + ")"), REMOVE .pyWorkParty(param.PartyRole)
  SET Param.Success = @pxExecuteAnActivity("Primary", "addWorkObjectParty")

Note: Do not attempt to initialize and set the values on the work party page directly, as this may cause unintended results.
Note: As a best practice, define routing for every assignment, including the first assignment. This prevents
routing issues if the case is routed back to the first assignment during case processing, or if the previous step is
advanced automatically via an SLA.
KNOWLEDGE CHECK

What is the advantage of using work parties?


Work parties allows for consistent design and configuration of routing throughout the case life cycle.
Get Next Work
Using the Get Next Work feature, your application can select the next assignment for a user. By choosing the
best, most appropriate assignment to work on next, your application can promote user productivity, timeliness of
processing, and customer satisfaction.
Users typically click Next Assignment in the Case Manager or Case Worker portal to retrieve assignments. An activity then starts and performs several steps to retrieve the assignments. The application calls the @baseclass.doUIAction activity, which calls Work-.GetNextWork, which in turn immediately calls Work-.getNextWorkObject.
What happens next depends on the configuration of the operator record of the user. If Get from workbaskets
first is not selected, the Work-.findAssignmentInWorklist activity is invoked, followed by the
Work-.findAssignmentInWorkbasket activity. If Get from workbaskets first is selected, the
Work-.findAssignmentInWorklist activity is skipped and Work-.findAssignmentInWorkbasket is immediately invoked.
The Work-.findAssignmentInWorklist and Work-.findAssignmentInWorkbasket activities retrieve the assignments with
the Assign-Worklist.GetNextWork and Assign-WorkBasket.GetNextWork list views, respectively.

When multiple workbaskets are listed on a user operator record, the workbaskets are processed from top to
bottom. If you configure an Urgency Threshold for the workbasket, then assignments with an urgency above the
defined threshold are prioritized. Lower urgency assignments are considered only after all applicable
workbaskets are emptied of assignments with an urgency above the threshold. If Merge workbaskets is
selected, the listed workbaskets are treated as a single workbasket.
Instead of specifying the workbaskets to retrieve work from, you can select the Use all workbasket assignments in user's work group option to include all workbaskets belonging to the same work group as the user. When this option is used, care must be taken to exclude workbasket assignments used to wait for subcases to complete.

If you configure the case to route with the ToSkilledWorkbasket router, then the skills defined on the operator
record of the user are considered when retrieving the next assignment. An assignment can have both required
and desired skills. Only required skills are considered by Get Next Work.
Define the user's skills using the Skill and Rating fields on the operator record. Skills are stored in the pySkills property on the OperatorID page. Skills checking is not performed when users fetch work from their own worklist, since they would not own an assignment without the proper skills. The Get Next Work functionality ensures that users can only retrieve assignments from a workbasket if they have all the required skills at or above the defined ratings.
The Assign-Worklist.GetNextWork list view uses the default getContent activity to retrieve assignments. The Assign-
WorkBasket.GetNextWork uses a custom get content activity getContentForGetNextWork to construct a query. The
query varies based on Rule-Admin-System-Settings rules that start with GetNextWork_. By default the query
compares the user's skills to the assignment's required skills, if any.
Before the assignment returned by the list view is selected, the Assign-.GetNextWorkCriteria decision tree checks if
the assignment is ready to be worked on and if the assignment was previously worked on by the user today. The
assignment is skipped if it was previously worked on by the user today.
KNOWLEDGE CHECK

Where are the settings specified on the user's operator record applied when getting the next
assignment?

In the custom get content activity getContentForGetNextWork used in the Assign-WorkBasket.GetNextWork list view
How to customize Get Next Work
You can customize Get Next Work processing to meet the needs of your application and your business
operations. The most common customization requirement is adjusting the prioritization of work returned by Get
Next Work. You change the prioritization of work by adjusting the assignment urgency.
However, adjusting assignment urgency may not be a good long-term solution, since urgency can also be affected by other case or assignment urgency adjustments. A better long-term solution is to adjust the filter criteria in the Assign-WorkBasket.GetNextWork and Assign-Worklist.GetNextWork list views. For example, you can sort by the assignment's create date, or join the assignment with the case or another object to leverage other data for prioritization.
Sometimes different work groups have different Get Next Work requirements. When different business requirements exist, customize the Get Next Work functionality so that each set of requirements is satisfied; a change implemented to satisfy one requirement should not affect the solution to a different requirement. For example, if assignments for gold-status customers should be prioritized for customer service representatives (CSRs), but not for the accounting team, then the change implemented to prioritize gold customers for CSRs must not affect the prioritization for the accounting team.
You can create several circumstanced list views if the requirements cannot be implemented in a single list view,
or if a single list view is hard to understand and maintain.
Use the Assign-.GetNextWorkCriteria decision tree to filter the results returned by the GetNextWork list view. You
can define and use your own when rules in the GetNextWorkCriteria decision tree. Create circumstanced versions
of the GetNextWorkCriteria decision tree if needed.
Note: Using the GetNextWorkCriteria decision tree for filtering has performance impacts since items are iterated
and opened one-by-one. Always ensure the GetNextWork list view performs the main filtering.
Circumstance Example:
  GetNextWork ListView:  Assign-WorkBasket
  Circumstanced:         OperatorID.pyWorkGroup = FacilityCoordinator@Booking
  Criteria:              .pxWorkGroup = FacilityCoordinator@Booking
  Get These Fields:      .pxUrgencyAssign Descending (1), .pxCreateDateTime Ascending (2)
  Show These Fields:     .pxUrgencyAssign, .pxCreateDateTime
Besides circumstancing the GetNextWork ListView, it is also possible to circumstance the GetNextWorkCriteria
Decision Tree for a particular WorkGroup. Other alternatives exist such as specializing the
getContentForGetNextWork Activity to call a different Decision rule to produce the desired results. When
specializing any of these rules, it is important to implement changes efficiently to ensure results will be
performant.
KNOWLEDGE CHECK

How can you change the Get Next Work prioritization without customizing the GetNextWork and
GetNextWorkCriteria rules?
By adjusting the assignment urgency
Alternate Ways to Find Work
Pega's Get Next Work (GNW) feature is by far the most-used way to retrieve an assignment that needs attention. GNW is geared toward short time-frame interactions with a customer where the work being fetched may require operator skill matching. GNW saves valuable time by not returning assignments that contain an error or that are associated with a locked case.
GNW is closely tied to Service Level rule configuration. Service Level rules provide Goal, Deadline, and Passed
Deadline milestones at which urgency can be increased.

Recall the "How to customize Get Next Work" discussion about circumstancing. Suppose the workbasket assignment's pxCreateDateTime property were used as the primary sort rather than the secondary sort. Such a requirement could exist for a work group that simply wants the oldest workbasket assignments processed first, i.e., First In, First Out (FIFO). The pyGetWorkBasket Report Definition behind the D_WorkBasket List Data Page used to display workbasket assignments could be modified to use an ascending sort on pxCreateDateTime, as opposed to the descending sort on pxUrgencyAssign that it uses by default.
A value for pxUrgencyAssign could be derived after the fact, i.e., reverse engineered, by taking into account
@CurrentDateTime() and the time that the workbasket assignment was created. Going a step further, the value for
pxUrgencyAssign need not be computed until displayed in a UI.
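A sketch of that sorting change, expressed as the Report Definition's sort configuration (rank numbers in parentheses):

  Default:       .pxUrgencyAssign   Highest to Lowest (1)
  FIFO variant:  .pxCreateDateTime  Lowest to Highest (1)

The FIFO variant could be delivered either as a circumstanced Report Definition or as an application-level override, depending on whether both behaviors must coexist.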

Taking this example one step further, suppose a customer-specific SLA deadline duration (days, hours, or minutes) is added to the pxCreateDateTime of a Claim case. All ClaimUnit subcases of that case must be completed by that deadline. Different ClaimUnits, however, take different durations, on average, to complete. For efficiency, each ClaimUnit's average time to complete could be made available within a node-level Data Page.
A ClaimUnit spun off to a workbasket could compute its pxDeadlineTime datetime property as: @CurrentDateTime + (customer deadline - average time to complete). A circumstanced version of the Assign-WorkBasket GetNextWork ListView could perform an ascending sort on pxDeadlineTime as opposed to using pxUrgencyAssign descending. A circumstanced version of the pyGetWorkBasket Report Definition could do the same.
The declare expression (Rule-Declare-Expressions, or R-D-E) for pxUrgencyAssign can also be circumstanced. Below is an example of how pxUrgencyAssign could "reverse engineer" its value based on datetime properties.

1. CaseDeadlineDateTime = CaseCreateDateTime + CaseDeadlineDuration
2. (Assign-) pxDeadlineTime = CaseDeadlineDateTime - CaseAvgTimeToComplete
3. pxUrgencyAssign = min(100, 100 * ((CurrentDateTime - pxCreateDateTime) / (pxDeadlineTime - pxCreateDateTime)))
Note how the time units within the numerator and denominator, whether minutes, hours, or days, cancel each
other out, creating a unit-less number, like a percentage. It is easy to see that the value for pxUrgencyAssign will
be 100 using the above formula when CurrentDateTime = pxDeadlineTime. The choice of 100 was arbitrary; any
factor could be used, such as 80, and the value could be defined in a Dynamic System Setting.
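To make the arithmetic concrete, the following minimal Java sketch mirrors the three steps above. All names are hypothetical; in a Pega application this logic would live in the circumstanced Declare Expression rather than in custom Java.

    import java.time.Duration;
    import java.time.ZonedDateTime;

    public class UrgencySketch {

        // Step 2: pxDeadlineTime = CaseDeadlineDateTime - CaseAvgTimeToComplete
        static ZonedDateTime deadlineTime(ZonedDateTime caseDeadline, Duration avgTimeToComplete) {
            return caseDeadline.minus(avgTimeToComplete);
        }

        // Step 3: the millisecond units in the numerator and denominator cancel,
        // yielding a unit-less value capped at the arbitrary factor of 100
        static double urgencyAssign(ZonedDateTime created, ZonedDateTime deadline, ZonedDateTime now) {
            double elapsed = Duration.between(created, now).toMillis();
            double total = Duration.between(created, deadline).toMillis();
            return Math.min(100.0, 100.0 * (elapsed / total));
        }

        public static void main(String[] args) {
            ZonedDateTime created = ZonedDateTime.parse("2019-06-01T09:00:00Z");
            // Step 1: case deadline is creation time plus a 5-hour customer duration
            ZonedDateTime caseDeadline = created.plusHours(5);
            // Step 2: subtract a 1-hour average time to complete, so pxDeadlineTime = created + 4h
            ZonedDateTime deadline = deadlineTime(caseDeadline, Duration.ofHours(1));
            // Three of the four hours have elapsed, so this prints 75.0;
            // at now == deadline it would print 100.0
            System.out.println(urgencyAssign(created, deadline, created.plusHours(3)));
        }
    }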
Also note how the above calculation is more granular than using a Service Level rule’s Goal and Deadline states
to adjust urgency. A Queue Processor or Job Scheduler could periodically update an assignment’s urgency more
frequently than a Service Level agent, but at the expense of performance and efficiency.
Like pxUrgencyAssign, an assignment’s pxGoalTime and pxDeadlineTime can be adjusted from their initial value.
The pxAdjustSLATimes API activity would be used. As usual, you would adjust an assignment’s deadline for a
specific reason, not merely to speed up or slow down the increase in pxUrgencyAssign’s value. Frequent deadline
modification would impact assignment-level reporting.
There are situations where “urgency” would not be based on time at all but on something else such as distance.
A shuttle driver may want to choose their next assignment based on proximity to their current location. Imagine
within the FSG Booking application solution a shuttle driver’s location being used to identify the closest hotel
where event attendees are waiting to be transported to an event.
The Booking application invokes a SQL Connector to compute distance. It would be difficult to incorporate a SQL
Connector call into the Pega-provided Get Next Work implementation. Instead, the “Open assignment” action could
be invoked from a button in place of the “Get next work” action. The assignment key to use could be provided by a
Data Page that, in turn, calls an activity that, in turn, calls the SQL Connector. The value provided to the “Open
assignment” event action could be: D_GetWBAssignment.pzInsKey.
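If a worked example of the distance math itself is useful, the haversine great-circle formula below is one conventional choice; the class name and coordinates are purely illustrative, and in the Booking solution the calculation would instead happen inside the SQL Connector's query.

    public class HaversineSketch {

        static final double EARTH_RADIUS_KM = 6371.0;

        // Great-circle distance between two latitude/longitude points, in kilometers
        static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return EARTH_RADIUS_KM * 2.0 * Math.atan2(Math.sqrt(a), Math.sqrt(1.0 - a));
        }

        public static void main(String[] args) {
            // Compare two hotels with waiting attendees against the driver's current position
            double toHotelA = distanceKm(42.3601, -71.0589, 42.3467, -71.0972);
            double toHotelB = distanceKm(42.3601, -71.0589, 42.2626, -71.8023);
            System.out.println(toHotelA < toHotelB ? "Hotel A is closer" : "Hotel B is closer");
        }
    }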
An “Open assignment” Data Page could also call a Report Definition. The Report Definition could filter out
assignments where pyErrorMessage is not null. A post-query Data Transform could check for the workbasket’s
associated case being locked, skipping it and going on to the next assignment. Skill matching could be performed
using a subreport defined against Index-AssignmentSkills that, in turn, is joined to Index-OperatorSkills.
Designing routing quiz
Question 1
What is the primary difference between push routing and pull routing?

1. Push routing activity names begin with "To," and pull routing activity names begin with either "Get" or "Find."
Justification: Activity names have nothing to do with the fundamental difference between push and pull routing.
2. The security type value for push routing activities is ROUTE, and the security type value for pull routing activities is ACTIVITY.
Justification: The ROUTE activity security type value is used by Pega to identify push routing activities, but is not the primary difference compared to pull routing.
3. (Correct) Push routing is initiated during the context of case processing, and pull routing is initiated outside the context of case processing.
Justification: Push routing occurs either immediately after case creation or when the FinishAssignment activity is executed during case processing.
4. The behavior of push routing is configured in Rule-System-Settings, and pull routing is not.
Justification: The opposite is true. A number of Rule-System-Settings exist to control the behavior of Assign-WorkBasket Get Next Work.

Question 2
The work parties rule directly facilitates routing decisions when accessed in which two configurations? (Choose
Two)

1. (Correct) Accessed when a work party is populated immediately after case instantiation.
Justification: Visible-on-entry is accessed to decide whether a party role should be populated prior to the first assignment.
2. (Correct) Accessed when validating which work party roles are allowed to be used and their cardinality.
Justification: The Work Parties rule is evaluated to detect whether an attempt is made to use an unspecified party role as well as to prevent a singular work party role from being added multiple times.
3. Accessed when Organization routing activities are called, such as ToWorkGroupManager.
Justification: Organization push routing activities do not rely on the Work Parties rule for information.
4. Accessed during Get Next Work to validate whether the operator's primary access role corresponds to a party role.
Justification: Get Next Work does not, by itself, add a party role to a case.

Question 3
What are two ways to customize Get Next Work for a particular type of actor or primary role? (Choose Two)

1. (Correct) Circumstance Get Next Work rules based on a text fragment derived from the operator's primary access group.
Justification: Circumstancing is a consistent way to specialize Get Next Work for one particular type of actor.
2. (Correct) Override certain Get Next Work activities to react differently according to the operator's work group.
Justification: Implementing if/then/else switch-like logic within certain Get Next Work activities is possible. However, customization of the GetNextWork ListView to accommodate different actors is difficult.
3. Define a workgroup-specific variation for each distinct workbasket purpose. Operator records specify workbaskets that correspond to their work group.
Justification: This solution is difficult to maintain. The Assign-WorkBasket GetNextWork ListView is capable of filtering assignments based on the associated pxWorkGroup.
4. Define skill names specific to each work group. Set skills on each workbasket assignment according to the work group to which the workbasket is associated.
Justification: This solution is difficult to maintain. Skill names need not be workgroup-specific. Also, this approach only affects the results obtained by the Get Next Work query and not any decision made downstream.

Question 4
Which two configuration approaches differentiate Get Next Work behavior for different actors without modifying
Get Next Work related rules? (Choose two)

1. (Correct) List workbaskets in a specific order on operator records.
Justification: This approach can be used when a membership status hierarchy exists and those with a higher status are given preferential treatment.
2. (Correct) Increase case urgency prior to a workbasket assignment.
Justification: Adjusting assignment urgency, either through an SLA rule or by modifying case urgency before the assignment is created, is the de facto way to utilize Get Next Work as-is.
3. Override one or more GetNextWork-related Rule-System-Setting rules.
Justification: Overriding a GetNextWork R-S-S rule would apply to everyone. The R-S-S would need to be circumstanced to apply to a particular actor type.
4. Implement a button that actions open assignment, the key for which is supplied by a Data Page.
Justification: This is a separate alternative to Get Next Work, used to either replace or augment it. This approach does not affect how the Pega Platform Get Next Work rules behave.
Defining the authentication scheme
Introduction to defining the authentication scheme
In most cases, you want to authenticate users when logging into an application to establish who they are and
that they are who they say they are. You can implement authentication features that ensure valid users can
access the environments they are authorized to access. The Pega Platform provides a complementary set of
authentication services.
After this lesson, you should be able to:
l Compare the available authentication services
l Determine the appropriate authentication service for a given use case
l Design an authentication scheme
Authentication design considerations

Authentication design considerations


Authentication is proving to the application that you are who you say you are. Each organization has policies on
how users are authenticated into the application. Most organizations use some form of single sign-on. If the
organization is running an enterprise tier deployment, it may be using container-based authentication such as
JAAS or JEE security. If so, this affects how you design your authentication scheme and your application.
In short, the Pega application conforms to the organization's authentication policy. For more information on
authentication protocols supported by Pega, see the Pega Community article: Authentication in the Pega
Platform and the Pega Help Topic: Authentication services .
Pega 8 has replaced PRBasic with Basic credentials and also added support for Anonymous, OAuth 2.0, OIDC
(OpenID Connect), and Kerberos.
Pega can be the Identity Provider (IdP) or the IdP can be external. An example of an external Identity Provider is
Microsoft’s Active Directory Federated Service (ADFS), which is used on-premise as well as within Microsoft’s
Azure cloud offering.
Pega is the IdP when the AuthenticationType used to access Pega is Basic credentials. Someone accessing Pega
this way would not have “Use external authentication” checked within the Security tab on their Operator record.
Defining the authentication scheme quiz
Question 1
What is the default Authentication type in the Pega Platform?

1. PRSecuredBasic
Justification: PRSecuredBasic is no longer a valid authentication type.
2. Custom
Justification: Custom is a valid option for an authentication type but is not the default.
3. (Correct) Basic Credentials
Justification: Basic Credentials is the default authentication type in the Pega Platform.
4. Java
Justification: Java is not a valid authentication type.
Defining the authorization scheme
Introduction to defining the authorization scheme
In most cases, you want to restrict authenticated users from accessing every part of an application. You can
implement authorization features that ensure users can access only the user interfaces and data that they are
authorized to access. The Pega Platform provides a complementary set of access control features called Role-
based access control and Attribute-based access control.
After this lesson, you should be able to:
l Compare role-based and attribute-based access control
l Identify and configure roles and access groups for an application
l Determine the appropriate authorization model for a given use case
l Determine the rule security mode
Another access control capability in Pega is Client-based access control (CBAC). This is more focused on
tracking and processing requests to view, update or remove personal Customer data held across your Pega
applications, such as that required by EU GDPR (and similar) regulations. In itself, it doesn’t influence the
authorization considerations for LSAs when designing a Pega application, and is not discussed further in this
module.
Authorization design considerations

Authorization design considerations


Authorization is about who can do what and who can see what in the application. In general, give the minimum
access needed to perform the job. This rule applies to both end users and developers.
As you are designing your authorization scheme:
l Create a matrix of access roles, privileges, and attributes to be secured. Determine where to use role-based
access control (RBAC) and attribute-based access control (ABAC) in your authorization scheme. For more
information on RBAC and ABAC, see the Pega Community article Authorization models in the Pega Platform.
l Define security on reports and attachments, and background processes. Background processes such as
agents need an associated access group.
l Determine the level of auditing (history) required for each case type. Only write entries when necessary.
Otherwise, you can impact performance when history tables become too large.
l Determine what level of rule auditing is required for developer roles.
l Secure developer access. Not every developer should have administrator rights. Your organization may also
have restrictions on which developers can create activity rules or SQL connector rules.
l Leverage the Deny Rule security mode when defining Access Groups. Some organizations enforce a deny
first policy. In this model, users must be explicitly granted privileges to access certain information. If you have
similar requirements for the application you are designing, review usage of the Rule Security Mode setting on
each access group. For more information on usage of this setting, see the Pega Community article Setting role
privileges automatically for access group Deny mode.
Grasping the importance of security design and analysis of your application is essential. If you need a refresher,
see the Customizing Security Requirements in Pega Applications course on Pega Academy. Also refer to Security
checklist for Pega Platform applications on the Pega Community throughout the design of your application.
KNOWLEDGE CHECK

When should you begin the design of your security model?


Begin designing your security model as early as possible. Several factors can impact how you implement
security in the application. Be aware of those factors to make sure your application meets the organization's
security standards. Failing to meet these standards prevents your application from going to production.
Improperly securing your application opens your organization to unnecessary risk.
Authorization models
Use authorization models to define a user's access to specific features of Pega Platform. For example, you can
restrict an end user's ability to view data or perform certain actions on a specific instance of a class at run time.
You can restrict a business or system architect's ability to create, update, or delete rules at design time or
determine access to certain application development tools such as the Clipboard or Tracer.
The Pega Platform offers two authorization models that are different but complementary: Role-based access
control (RBAC) and Attribute-based access control (ABAC). Role-based and Attribute-based access control
coexist.

Role-based access control (RBAC)


Pega Applications typically bring many personae (“user roles”) together to get work done. Personae (roles) include
Customers, Call Center Workers, Managers, and Administrators. Most users fulfill one of the personae (roles)
applicable to a given Application. RBAC defines the actions that one persona (role) is authorized to perform on a
Class by Class basis, including:
l Class-wide actions: Execute Activities, Run Reports
l Record-level actions: Open, Modify (Create and Update), Delete
l Rule-specific actions as governed by Privileges
Review of role-based access control (RBAC) rule types
Every application has distinct personae (roles) that form the basis for authorization. You configure RBAC using
the following rule types.

l Class: the type of data that is subject to authorization


l Access When: rule defining conditional access control (like a regular When rule) which typically evaluates
one or more properties on the Class subject to authorization. The intent is that the access control outcome
yielded from the Access When rule will vary between different instances of the same Class. For example:

Is the Purchase Request case currently in the “Approval” stage?


Is the Employee’s salary greater than 50,000?
l Privilege: rule that can be attached to certain rule types (or parts of them) to authorize who can execute (part
of) that rule. For example:
Only holders of an “ApprovePR” Privilege can perform the Approve Purchase Request Flow Action
Only holders of a “ShowPR” Privilege can see a “Show Purchase Requests” menu item configured within a
Navigation rule
l Access Role: a collection of Access of Role to Object and Access Deny rules defining all role-based access
control settings that govern what access is granted (and denied) to those users who are allocated the Access
Role
l Access of Role to Object (ARO): association of an Access Role to the operations that holders of that Access
Role are granted/denied to perform on an Object (i.e. an instance of a Class).
l Access Deny: association of an Access Role to the operations that holders of that Access Role are explicitly
denied to perform on an Object.
l Access Group: the collection of Access Roles that, when aggregated together, form the role-based access
control portfolio for a single persona (role).
Note: For a refresher on the configuration aspects of the rule types that comprise a RBAC authorization model,
refer to the "Application Security" module in the "Certified Senior System Architect" course.

Recently emerged RBAC features


The following features have emerged in recent releases of the Pega Platform that further influence role-based
access control. Examples of each are discussed in later lessons.
l Dependent Roles: allows Access Roles to be configured and resolved at runtime based on a dependency
hierarchy of Access Roles
l Privilege Inheritance: allows Privileges granted to a Class for an Access Role (by its ARO) to include those
Privileges specified on AROs of its superclasses within the same Access Role
l Short-circuited access checking: Access Roles that yield an explicit grant or deny outcome can return this
outcome immediately without checking subsequent Access Roles on the Access Group when – particularly in
the case of an explicit grant – the eventual access control decision will be to grant access
l App Studio managed authorization: App Studio provides a basic authorization configuration capability
which supplements the RBAC foundation provided from Dev Studio

Why Privileges are so important


Privileges provide better security because they can be used to control access to individual rules, and sometimes
to parts of them.
Privileges are a critical component of your authorization strategy, because they provide an opportunity for the
execution of certain rules to be explicitly authorized “just in time”. This is regardless of how the application may
have reached a state where that authorization check is being made, reducing the exposure that other changes to
the authorization model or unforeseen usage of the application implicitly authorize the unintended execution of
a rule.

Run-Time Role Resolution and Privilege Inheritance


You define Access of Role to Object (ARO) rules on a class basis. Pega navigates the class hierarchy and
determines the most specific ARO relative to the class of the object for that Role. Any less specific AROs in the
class hierarchy for that Role are ignored. The operation being performed is allowed if the most specific ARO
allows the operation. If the Operator has been granted multiple Roles, the most specific ARO rules are
determined for each Role. The Pega platform performs the operation if the operation is allowed in any of the
most specific AROs.
As stated, privileges can provide more granular security because they are defined on individual rules. For
example, in order to execute a flow action secured by a privilege, the user must be granted the privilege. The
privilege is granted through the most specific AROs for the Class of the object. There is, however, an option on
the Role for inheriting privileges within AROs defined in the class hierarchy. Selecting this option provides the
operator with all the privileges granted by the AROs defined for the classes in the class hierarchy of the
object.
In the following example, the Role has the option for inheriting privileges selected. If the user works on an
Expense Report case, the access rights are defined by the TGB-HRApps-Work-ExpenseReport entry below. Additional privileges are
inherited from the class hierarchy (TGB-HRApps-Work and Work-).

Access class: Work-
Access granted: Read instances (5), Write instances (5), Read rules (5), Execute rules (5), Execute activities (5)
Privileges: AllFlows(5), AllFlowActions(5)

Access class: TGB-HRApps-Work
Access granted: Read instances (5), Write instances (5), Read rules (5), Execute rules (5), Execute activities (5)
Privileges: ManagerReports(5)

Access class: TGB-HRApps-Work-ExpenseReport
Access granted: Read instances (5), Write instances (5), Delete instances (5), Read rules (5), Execute rules (5), Execute activities (5)
Privileges: SubmitExpenseReport(5)

Note: If an operator has multiple Access Roles, the Access Roles are joined with an OR such that only one of the
most specific AROs for each Access Role needs to grant access in order to perform the operation.
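To make the note above concrete, here is a minimal sketch of the resolution order with invented Java types: each Access Role contributes only its most specific ARO for the object's class, and the roles in the Access Group are then joined with an OR.

    import java.util.List;
    import java.util.Optional;

    class MultiRoleRbacSketch {

        interface Aro {
            boolean allows(String operation); // e.g. "Open", "Modify"
        }

        interface AccessRole {
            // Walks the class hierarchy; less specific AROs are ignored
            Optional<Aro> mostSpecificAro(String objectClass);
        }

        static boolean operationAllowed(List<AccessRole> rolesInAccessGroup,
                                        String objectClass, String operation) {
            return rolesInAccessGroup.stream()
                .map(role -> role.mostSpecificAro(objectClass))
                .flatMap(Optional::stream)
                .anyMatch(aro -> aro.allows(operation)); // roles joined with OR
        }
    }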
KNOWLEDGE CHECK

Which Access of Role to Object is used if there are several available in the inheritance path?
The most specific Access of Role to Object in the class hierarchy relative to the class of the object is identified
and defines the access. Any less specific AROs in the class hierarchy are ignored.
Attribute-based access control (ABAC)
ABAC complements RBAC by enabling Access Control Policies to control access to specific attributes of a record
(so long as RBAC has granted access to the record), regardless of where those attributes are used in the
Application (on a screen, in a report). ABAC can also be used to define record-level access control (additional to
RBAC) where the conditions for accessing those records are NOT determined by the persona (role) that the
operator fulfills for the Application.
ABAC is optional and used in conjunction with RBAC. ABAC compares user information to case data on a row-by-
row or column-by-column basis. You configure ABAC using Access Control Policy rules that specify the type of
access and Access Control Policy Condition rules defining a set of policy conditions that compare user
properties or other information on the clipboard to properties in the restricted class.

You define access control policies for classes inheriting from Assign-, Data-, and Work- and use the full
inheritance functionality of Pega Platform. Access control policy conditions are joined with AND when multiple
same-type access control policies exist in the inheritance path with different names. Access is allowed only when
all defined access control policy conditions are satisfied.
Note: When both RBAC and ABAC models are implemented, the policy conditions in the models are joined with
an AND. Access is granted only when both the RBAC policy conditions AND the ABAC policy conditions are met.
In the following example, if the HR application user wants to update a Purchase case, the conditions for the
access control policies defined in the class hierarchy are joined with AND. The user is granted access for
updating the Purchase case only if WorkUpdate AND HRUpdate AND HRPurchaseUpdate evaluates to true.
Access class: Work-
Read: WorkRead; Update: WorkUpdate; Discover: WorkDiscover

Access class: TGB-HR-Work
Update: HRUpdate; Delete: HRDelete; Discover: HRDiscover; PropertyRead: HRPropRead

Access class: TGB-HR-Work-Purchase
Read: HRPurchaseRead; Update: HRPurchaseUpdate; PropertyRead: HRPurchasePropRead; PropertyEncrypt: HRPurchasePropEncrypt
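Expressed as boolean logic, the Update check on a Purchase case combines as in the hypothetical sketch below, with one boolean standing in for each policy condition above and another for the RBAC outcome (derived as in the earlier RBAC sketch).

    class AbacJoinSketch {
        // Hypothetical evaluation results, one per Access Control Policy Condition
        static boolean canUpdatePurchase(boolean rbacGrantsUpdate, boolean workUpdate,
                                         boolean hrUpdate, boolean hrPurchaseUpdate) {
            boolean abacAllows = workUpdate && hrUpdate && hrPurchaseUpdate; // same-type policies AND-ed
            return rbacGrantsUpdate && abacAllows;                           // RBAC AND ABAC
        }
    }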

To enable ABAC, in the Records Explorer, go to Dynamic System Settings and update the
EnableAttributeBasedSecurity value to True.
KNOWLEDGE CHECK

Which access control policy is used if there are several available in the inheritance path?
All access control policies having the same type and different names are considered. The conditions are joined
with AND.
Establishing a Dependent Roles hierarchy
Pega Infinity allows a dependency hierarchy of Access Roles to be defined that allows more persona (user role)
specific RBAC to incrementally override the RBAC that is available from more generic Access Roles.
An Access Role MyApp:User can, for example, be configured dependent on the Pega Platform Access Role
PegaRULES:User4, and ‘inherit’ all authorizations available in the dependent role without defining explicit ARO
records for MyApp:User. You can define this by selecting Manage dependent roles:

Then entering Dependent roles in the Manage dependent roles dialog:

In this example, any Access Group which includes the MyApp:User Access Role remains authorized to Read and
Write instances of any Case Type, despite not having any AROs (Access of Role to Object rules) of its own. This is
because:
l The absence of ARO records on the MyApp:User Access Role means that this Access Role neither grants nor
denies access. It yields no authorization outcome on its own
l As MyApp:User is dependent on PegaRULES:User4, any unresolved authorization checks are deferred to
PegaRULES:User4 to determine if that Access Role in turn yields an outcome
l As PegaRULES:User4 would not define an ARO for a Case Type specific to the MyApp application, the RBAC
algorithm works its way up the inheritance hierarchy of that Case Type’s Class to try and find a relevant ARO
for this check. PegaRULES:User4 does define an ARO for Work- (a superclass of any MyApp Case Type) which is
the ARO that most specifically matches an instance of a Case Type in MyApp
l As the settings of ‘5’ explicitly grant Read and Write access to the Case Type, this outcome from
PegaRULES:User4 is propagated as the outcome for the same authorization check on the MyApp:User Access
Role
Should an Access Role need to largely honour the authorization outcomes of an existing Access Role, but override
the outcomes in certain scenarios, you can also use dependent roles to configure only those AROs in the new
Access Role which override the outcomes that would otherwise be attained from its dependent roles. Any
authorization outcomes not specified in the top-level role continue to be deferred to its dependent roles for an
outcome.
For example, consider a requirement to restrict MyApp users to update all case types only if they are not
Resolved, whilst preserving all other access typically afforded to Pega Platform users. With dependent roles, this
can be implemented using a single ARO in MyApp:User which specifies the required restriction (using an
Application-specific Access When rule):
By leaving all other settings on the ARO unspecified, other authorization checks (e.g. Read access) are deferred to
the dependent role: in this example PegaRULES:User4. Read access would continue to be granted by the
dependent role, as no setting in the top-level role overrides it.
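One compact way to picture this deferral chain, including blank settings deferring onward, is the sketch below; the three-valued outcome and the type names are invented for illustration only.

    import java.util.List;

    class DependentRoleSketch {

        // An ARO setting can explicitly grant, explicitly deny,
        // or be left blank (no outcome of its own)
        enum Outcome { GRANT, DENY, NONE }

        interface AccessRole {
            Outcome ownOutcome(String objectClass, String operation);
            List<AccessRole> dependentRoles();
        }

        static Outcome resolve(AccessRole role, String objectClass, String operation) {
            Outcome own = role.ownOutcome(objectClass, operation);
            if (own != Outcome.NONE) {
                return own; // a top-level setting overrides its dependents
            }
            for (AccessRole dependent : role.dependentRoles()) {
                Outcome deferred = resolve(dependent, objectClass, operation);
                if (deferred != Outcome.NONE) {
                    return deferred; // the first dependent yielding an outcome wins
                }
            }
            return Outcome.NONE; // still unresolved; other roles in the Access Group may answer
        }
    }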
The benefits of using Dependent Role hierarchies are:
1. Eliminating duplication of AROs for application-specific Access Roles: The example above where MyApp
needed one setting to vary an otherwise reusable baseline would – when dependent roles are not used –
require all other settings on the ARO (and potentially other AROs) to be duplicated into the MyApp:User Access
Role.
2. Access Role layering: More generic Access Roles can be created to form the authorization foundation for
application-specific access roles that utilise similar personae (user roles). Application-specific access roles can
then establish a dependency on more generic access roles (which may in turn depend on Pega Platform
access roles), incrementally adding configuration of only that behaviour which differs between the Application
layer and layers on which it depends.
3. Multiple dependencies: Access roles can be configured to have multiple dependent access roles, providing
multiple dependencies to defer to so that an authorization outcome can be attained, based on a collection of
otherwise disparate access roles. Often there are exceptional users who concurrently perform the
responsibilities of multiple personae (user roles). Creating an Access Role for these users and having it
depend on multiple ‘sibling’ Access Roles from the same application may achieve this outcome.
4. Reuse Pega Platform or Pega Application authorization: Often the access roles resident in Pega Platform
(or any Pega Applications you are building on) for typical personae (user roles) such as end users, managers
and system administrators yield most of the required authorization outcomes. Implementing Application-
specific access roles to depend on the corresponding Pega Platform access roles provides a working
authorization baseline with no duplication of AROs.
5. Maintainability: By only configuring the authorization requirements that are unique to your Application-
specific access roles, and deferring the remainder to its dependent roles, it is clearer to maintainers of your
application how your application-specific authorization deviates from a more commonly understood
foundation. Configuring RBAC without dependent roles would lead to a larger number of AROs at the
application level, many of which are often slightly modified clones of the access roles provided by Pega. The
slight modifications can be hard to isolate.
6. Upgrade-ability: By virtue of eliminating duplicated AROs and instead deferring to AROs specified in
dependent roles, upgrades to your Pega Platform or Pega Applications better allow the authorization of your
Applications to immediately reflect authorization changes that are delivered in the upgrade.
Tip: As of Pega Infinity, Application-specific access roles generated by the New Application wizard establish Pega
Platform access roles as their dependent roles.
When dependent roles are not used, your application-specific Access Roles have no links to the new or updated
Pega Access Roles, yielding the following impacts:
l Any changes to Pega’s authorization model in upgraded access roles would be masked by the application-
specific access roles.
l Any new features delivered in any Pega upgrade may depend on Privileges from the upgraded access roles
that would be masked by the application-specific access roles.
KNOWLEDGE CHECK

Which access is granted if there are several dependent access roles defined for a dependent access
role?
All dependent role names are considered. The conditions are joined with OR.
How to create roles and access groups for an application
Each user (operator) of an application has a defined persona (user role) for processing cases. Applications
typically allow some groups of users to create and process cases, and other groups of users to approve or reject
those cases.

For example, in an application for managing purchase requests, any user can submit a purchase request, but
only department managers can approve purchase requests. Each group of users has specific responsibilities
and plays a particular persona (user role) in processing and resolving the case.

Access Roles and their design considerations


An Access Role identifies a job position or responsibility defined for an application. For example, an access role
can define the capabilities of LoanOfficer or CallCenterSupervisor. The system grants users specified capabilities,
such as the capability to modify instances of a certain class, based on the access roles acquired from the access
group in use.
Before you create access roles for your application, assess the role-based access control needs for the
application so as to determine how many distinct access roles are needed. Also determine what each individual
access role should be named so as to describe the authorization each grants. An access role defines what its
holder is authorized to do in the application.
For example, an access role might represent the actions that a manager, fulfillment operator, clerical worker, or
auditor is authorized to perform. A given user can hold multiple access roles at one time. The collection of access
roles held by the user at one time act as a group of capabilities and represents the set of actions that the user is
authorized to perform. For example, fulfillment operators might have access to open customer order records,
while managers may have access to open and modify customer order records.
Consider that an LSA has the following three (3) access role design options to fulfill the access control needs of
fulfillment operators and managers in the above ‘Ordering’ application example:
Option 1:

l Access Role Ordering:FulfillmentOperator: allows Open access for Customer
l Access Role Ordering:Manager: allows Open and Modify access for Customer
l Access Group Ordering:FulfillmentOperators: references the Ordering:FulfillmentOperator Access Role only,
granting Open access to Customer objects
l Access Group Ordering:Managers: references the Ordering:Manager Access Role only, granting Open and
Modify access to Customer objects

Option 2:

l Access Role Ordering:FulfillmentOperator: allows Open access for Customer
l Access Role Ordering:Manager: allows Modify access for Customer
l Access Group Ordering:FulfillmentOperators: references the Ordering:FulfillmentOperator Access Role only,
granting Open access to Customer objects
l Access Group Ordering:Managers: references both the Ordering:FulfillmentOperator and Ordering:Manager
Access Roles, granting Open (from FulfillmentOperator) and Modify (from Manager) access to Customer objects

Option 3:

l Access Role Ordering:FulfillmentOperator: allows Open access for Customer
l Access Role Ordering:Manager: allows Modify access for Customer and specifies Ordering:FulfillmentOperator
as a Dependent Role
l Access Group Ordering:FulfillmentOperators: references the Ordering:FulfillmentOperator Access Role only,
granting Open access to Customer objects
l Access Group Ordering:Managers: references the Ordering:Manager Access Role only, granting Open (inherited
from FulfillmentOperator) and Modify (from Manager) access to Customer objects

The design consideration for which option an LSA should take is whether the access control needs of the Manager
nearly always “build on” (or are a superset of) those of the Fulfillment Operator.
Option 1: allows for the access control needs of each persona (user role) to evolve independently of each other,
with the maintainability overhead of some access control settings being duplicated across each access role;
Option 2: requires the access control needs of the Manager to always be a super-set of the Fulfillment Operator,
as a grant returned from the FulfillmentOperator access role will (without more advanced RBAC configuration) be
enough to grant access to the Managers access group, even if the Manager access role denies access;
Option 3: allows the access control needs of the Manager to be predominantly based on the FulfillmentOperator
access role, whilst allowing the Manager access role to both introduce Manager specific settings as well as
override (i.e. explicitly revoke) settings specified in the dependent FulfillmentOperator access role.
Solution: Option 3 would typically yield an intended authorization design with the fewest AROs and the lowest
likelihood of duplication. This promotes a maintainable and understandable solution, and has flexibility to adapt
as additional Journeys are added that inevitably invalidate some of the authorization decisions reached in earlier
releases.
Note: The use of Access Deny rules and/or the “Stop access checking once a relevant Access of Role to Object
instance explicitly denies or grants access” option on the access group rule-form can help Option 2 achieve the
same outcome as Option 3, but this adds more rules and complexity to the design.
Applications created from the New Application wizard have three foundation Access Roles:
l <ApplicationName>:User
l <ApplicationName>:Manager
l <ApplicationName>:Administrator
Note: Other access roles are also created for RBAC pertaining to Pega API and App Studio usage, but are
inconsequential for this lesson.
Prior to Pega 8, the best practice was to avoid using the foundation Access Roles except as starting points for new
application-specific access roles. The application-specific access roles would be created by cloning the foundation
roles.
Starting with Pega 8 and the introduction of Dependent Roles, the best practice is to create application-specific
access roles which specify the foundation access roles as dependencies.
The naming convention used for access roles is: <ApplicationName>:<RoleName> where RoleName is singular.
Use the Roles for Access landing page (Dev Studio > Configure > Org & Security > Tools > Security > Role
Names) to create new application specific roles.
Note: A capability to clone the AROs from an existing access role to another is available from the Access Roles
landing page. This is typically for backwards-compatibility only with Pega RBAC capabilities from earlier versions
that preceded the availability of Dependent Roles.

Access roles created in newer versions of Pega would typically utilise Dependent Roles as a preference over
cloning.

Access of Role to Object (ARO) design considerations


Use an Access of Role to Object to grant access permissions to objects (instances) of a given class and named
privileges to a role. Access permissions and named privileges can be granted up to a specified Production Level
from 1 through 5 (1 being Experimental, 2 being Development, 3 being QA, 4 being Staging, and 5 being
Production) or conditionally via Access When rules.

When planning the set of AROs to specify for an Access Role Name, consider the following:
l Is the access role they apply to inheriting authorization from Dependent Roles? If so, the AROs needed for
your access role can be limited to those that alter the authorization outcomes otherwise derived from its
Dependent Roles.
l Is the access role utilising Privilege Inheritance? If not, some Privileges from superclass AROs may need to be
re-specified in subclass AROs in the same Access Role.
l Leaving settings blank in an ARO results in Pega deferring to superclass AROs and Dependent Roles to
determine the authorization for that setting. This is a legitimate, object-oriented approach to configuring
authorization, but needs design.
l Is the access role to be used in access groups which “short-circuit” testing access roles once access is
explicitly granted or denied? If so, be conscious of the distinction between configuring a setting value (either
an Access When rule or a Production Level number) both of which could explicitly deny access, and leaving
the setting blank (delegating the authorization outcome to a superclass ARO, dependent role or a subsequent
roles on the Access Group).

Access Deny (RDO) design considerations


Use an Access Deny rule to explicitly deny authorization before evaluating whether any corresponding AROs for
the same Class and Access Role may grant access to the same action.

Defining Access Roles that only contain Access Deny rules, sequencing these Access Roles earlier in the list
shown on an Access Group, and activating the “Stop access checking once a relevant Access of Role to Object
instance explicitly denies or grants access” option facilitates restricting authorization that would otherwise be
granted to the Access Group by Access Roles listed after it on the Access Group. Roles that only contain Access
Deny rules can be described as Access-Deny-Only Access Roles.
Note: Access Deny rules cannot be configured to deny a Privilege that is otherwise granted by AROs in any of
the Access Roles in the Access Group.
The typical use case is where a requirement for a persona (user role) emerges whose authorization is very close
to – but a subset of – an existing persona (user role). For example, given an Ordering application with an existing
Manager persona (user role) (using an Ordering:Manager Access Role), the need for an “Associate Manager”
persona (user role) arises, where the only difference in authorization is the value of Orders they are authorized to
open. An implementation approach for this using Access Deny rules could be:
1. Create an Access-Deny-only Access Role named Ordering:AssociateManagerDeny
2. Create an Access When rule on the Order class that compares the order value to the threshold required by
the business rule
3. Create an Access Deny rule for the new Access Role on the Order class, applying the new Access When as the
Read Instance setting
4. Create the Ordering:AssociateManagers Access Group, adding the following Access Roles:
a. Ordering:AssociateManagerDeny – denying Open Order access according to the business rule
b. Ordering:Manager – granting the same authorization that existing Managers have
5. Turn on the “Stop access checking once a relevant Access of Role to Object instance explicitly denies or grants
access” setting on the Ordering:AssociateManagers Access Group
Note: This scenario can also be implemented using Dependent Roles.

Think about how a Dependent Roles design could achieve the same outcome. What are the advantages and
disadvantages? Could this be used to address the Access Deny rule’s inability to deny Privileges?

Access Group design considerations


An Access Group is associated with a user through the Operator ID record. The Access Group determines the
Access Roles the users in the Access Group hold, the aggregate of which are the actions those users are
authorized to perform.
The naming convention used for Access Groups is: <ApplicationName>:<GroupsName>, where GroupsName is in
plural form, e.g. Customers.
Having an access role dedicated to a particular capability can be useful when multiple personas (user roles)
perform a similar responsibility in addition to their distinct primary responsibilities. For example, the personae
(user roles) in an HR application are Employee, HR generalist, HR manager, and executive. Both HR managers and
executives can update delegated rules. In this case, create an additional access role called
<ApplicationName>:DelegatedRulesAdmin. This access role can then be added to the Access Groups for both HR
managers and executives, so that they can update delegated rules in addition to their respective primary
responsibilities.
Note: Pega Platform delivers some fine-grained access roles to allow particular access groups to be granted
specific Privilege-based capabilities. Examples include:
PegaRULES:BasicSecurityAdministrator
PegaRULES:AgileWorkbench
KNOWLEDGE CHECK

What is the purpose of the default roles?


To be used as the basis when creating application specific roles
How to configure authorization
The Role-based access control (RBAC) and Attribute-based access control (ABAC) authorization models
always coexist. RBAC is defined for every user through the roles specified in the access group, and ABAC is
optionally added to complement RBAC.
RBAC is typically used to specify the access control requirements that pertain to the persona (user role) an
operator fulfills when using a Pega application.
l Stephen is a Call Center Worker when using the Customer Service application, needing authorization to create
Service cases, but is unauthorized to perform account changes for VIP customers.
l Rebecca is a Senior Account Manager when using the Customer Service application, and is granted the
authorization to perform account changes for VIP customers.
Stephen’s and Rebecca’s organizational roles determine what they are each authorized to do in the Customer
Service application. RBAC can restrict the operator to accessing specific UI components, such as audit trail and
attachments, or restrict the operator from performing specific actions on a case using privileges. You can also
use RBAC to restrict access to rules and application tools, such as Tracer and Access Manager during design
time.
You use ABAC to restrict access on specific instances of classes using policies that are not role-based, but instead
based on other attributes known about the user. For example, each operator may be tagged with a Security
Classification, which in itself applies limitations on which data the operator is authorized to access.
For example, in the Customer Service application used by Stephen and Rebecca above, a Security Clearance of
AAA is needed to be able to see a Customer’s address history older than 5 years, as well as their Social Security
Number.
l Stephen holds a Security Clearance of AAA, and whenever he accesses Customer information in the
application, he should be authorized to see full address history and the customer’s Social Security Number,
even though the RBAC for his persona (user role) prohibits him from performing account changes if that
customer is a VIP.
l Rebecca holds a Security Clearance of B, and is therefore only authorized to see a Customer’s address history
up to 5 years old, and is not authorized to see the Customer’s Social Security Number, even though the RBAC
for her persona (role) allows her to make changes to VIP customer accounts.
The above access control policies driven by conditions that are not role-based are typically implemented using
ABAC, which can apply at both the record-level (e.g. visibility of Address records) and attribute-level (e.g. visibility
of the Customer’s Social Security Number).
The following list shows the actions supported by RBAC and ABAC.

l Open/read instances (RBAC, ABAC): Open a case and view case data in reports and searches
l Property Read in instances (ABAC): Restrict data in a case the user can open
l Discover instances (ABAC): Access data in a case without opening the case
l Modify/Update instances (RBAC, ABAC): Create and update a case
l Delete instances (RBAC, ABAC): Delete a case
l Run report (RBAC): Run reports
l Execute activity (RBAC): Execute activities
l Open rules (RBAC): Open and view a rule
l Modify rules (RBAC): Create and update a rule
l Privileges (RBAC): Execute rules requiring specified privileges

Note: You can only define ABAC for classes inheriting from Assign-, Data-, and Work-.
Use the Access Manager to configure RBAC. ABAC is configured by implementing Access Control Policy and
Access Control Policy Condition rules, which may in turn reference Access When rules.
KNOWLEDGE CHECK

When do you configure ABAC?


To complement RBAC by restricting actions on a specific instance.
Rule security mode
The Rule security mode setting on the access group helps enforce a deny first policy. In a deny first policy,
users must be granted privileges to access certain information or perform certain actions. The rule security
mode determines how the system executes rules accessed by members of the access group.
The three supported rule security modes are Allow, Deny, and Warn.
Allow is the default and recommended setting. The system allows users in the access group to execute a rule
that has no privilege defined, or to execute a privileged rule for which the user has the appropriate privilege. If
more specific security is needed for an individual rule, specify a privilege for the rule.
Use Deny to require privileges for all rules and users. This setting is only recommended if your organization
security policies require a granular and strict security definition.
If Deny is selected and a privilege is not defined for a rule, the system automatically generates a privilege for the
rule and checks if the user has been assigned that privilege. The privilege is made up of
<RuleType>:Class.RuleName (5)—for example, Rule-Obj-Flow:MyCo-Purchase-Work-Request.CREATE (5). The generated
privilege is not added to the rule.
If the user has the generated privilege, the system executes the rule. If the user lacks the generated privilege, the
system denies execution and writes an error message to the PegaRULES log.
If a privilege is defined for a rule, the system checks whether the user has the privilege defined on the rule. If
not, the system checks if the user has the generated privilege for the rule. If the user has either privilege, the
system executes the rule. If the user has neither privilege, the system denies execution of the rule and logs an
error message in the PegaRULES log.
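The decision logic for Deny mode described above can be summarized in the following sketch; the Rule and Operator types and helper names are invented for illustration only.

    class DenyModeSketch {

        interface Rule {
            String ruleType();
            String className();
            String name();
            boolean hasDeclaredPrivilege();
            String declaredPrivilege();
        }

        interface Operator {
            boolean hasPrivilege(String privilege);
        }

        static boolean mayExecute(Rule rule, Operator user) {
            // e.g. Rule-Obj-Flow:MyCo-Purchase-Work-Request.CREATE
            String generated = rule.ruleType() + ":" + rule.className() + "." + rule.name();
            if (rule.hasDeclaredPrivilege() && user.hasPrivilege(rule.declaredPrivilege())) {
                return true; // the privilege defined on the rule is held
            }
            if (user.hasPrivilege(generated)) {
                return true; // the generated privilege is held
            }
            System.err.println("Denied execution of " + generated); // written to the PegaRULES log
            return false;
        }
    }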
Use Warn to identify missing privileges for a user role. The system performs the same checking as in Deny mode,
but performs logging only when no privilege has been specified for the rule or the user role. The warning
messages written to the PegaRULES log are used to generate missing privileges for user roles with
the pyRuleExecutionMessagesLogged activity.
Ensure sufficient time and resources are available to perform a system-wide test including all expected users
before changing the rule security mode. See the Pega Community article Setting role privileges automatically for
access group Deny mode.
KNOWLEDGE CHECK

When would you set the Rule security setting to Deny?


When the organization's security policies require a granular security definition across the application.
Defining the authorization scheme quiz
Question 1
When is access granted if both RBAC and ABAC are configured in an application?

1. Access is granted when RBAC evaluates to true, as it overrides ABAC.
Justification: RBAC does not override ABAC.
2. Access is granted when ABAC evaluates to true, as it overrides RBAC.
Justification: ABAC does not override RBAC.
3. (Correct) Access is only granted if both RBAC and ABAC evaluate to true.
Justification: RBAC and ABAC must both evaluate to true.
4. Access is granted if ABAC evaluates to true unless an Access Deny restricts access.
Justification: RBAC and ABAC must both evaluate to true.

Question 2
You have an access group with two access roles with conflicting access privileges. Which two configuration
options are recommended to ensure that access is denied? (Choose Two)

1. (Correct) Use an Access Deny to explicitly restrict access.
Justification: Access Deny rules ensure that access is denied across roles in an Access Group.
2. Create a single role restricting access.
Justification: Using a single role is possible, but this option is not recommended if there are only a few conflicts.
3. Modify the role granting access.
Justification: This option is not recommended because the role might be used in other access groups and the ruleset may be locked.
4. (Correct) Use ABAC to restrict access.
Justification: Both RBAC and ABAC must be true for access to be granted.

Question 3
You want a group of users to view certain properties in a case without being able to open the case. How do you
implement the requirement?

1. RBAC Read action
Justification: Use the Read action to allow the user to open and view the instance.
2. (Correct) ABAC Discover action
Justification: Use the Discover action to enable viewing selected data in an instance the user cannot open.
3. ABAC Property Read action
Justification: Use the Property Read action to restrict access to data in an instance the user can open.
4. RBAC Privilege
Justification: Privileges cannot be used to restrict access.

Question 4
When do you set the Rule security mode on the access group to Warn?

1. (Correct) When verifying that the access role is configured correctly for rule execution.
Justification: Warn is used prior to setting the mode to Deny to ensure the access role is set up correctly.
2. When automatically notifying the system administrator when access is denied to a rule.
Justification: Setting Rule security mode to Warn does not result in notifications being sent.
3. When displaying a custom warn message if rule execution failed.
Justification: This setting is not related to any messages displayed to the user.
4. When writing a message in the log file if ABAC overrode the RBAC setting.
Justification: ABAC cannot override RBAC.
Mitigating security risks
Introduction to mitigating security risks
Securing an application and ensuring that the correct security is set up is important. Correct security entails
making sure users are who they say they are (authentication). Correct security also entails proper authorization
(users can only access cases they are allowed to access and can only see data they are allowed to see).
Correct security also means identifying and addressing security vulnerabilities like cross-site scripting or phishing
attacks. This lesson examines common mistakes that can open up vulnerabilities in the system, and how to
address them to help avoid potential risks.
After this lesson, you should be able to:
l Identify security risks
l Detect and mitigate possible attacks using Content Security Policies
l Identify potential vulnerabilities with the Rule Security Analyzer
l Know how to secure a Pega application in production
l Discuss security best practices
l Use security event logging
Security risks
Every application includes a risk of tampering and unwanted intruders. When an application is developed
traditionally using SQL or another language, vulnerabilities inherent to the language are included, leaving the
systems open to attack. Tampering can occur in many ways, and is often difficult to detect and predict. URL
tampering or cross-site scripting can easily redirect users to malicious sites, so taking the proper steps to protect
your application is essential.
Developing applications using best practices ensures that rules are written properly, and secures the application
against threats. To maximize the integrity and reliability of applications security, features must be implemented
at multiple levels.
Each technique to strengthen the security of an application has a cost. Most techniques have one-time
implementation costs, but some might have ongoing costs for processing or user inconvenience. You determine
the actions that are most applicable and beneficial to your application.
When initially installed, Pega Platform is intentionally configured with limited security. This is appropriate for
experimentation, learning, and application development.
KNOWLEDGE CHECK

What can you do to mitigate security risks when developing applications?


Follow best practices and take actions to strengthen the security.
Content security policies
Content Security Policies (CSP) are used as a layer of security that protects your browser from loading and
running content from untrusted sources. The policies help detect and mitigate certain types of attacks on your
application through a browser, including Cross Site Scripting (XSS) and data injection attacks.
When a browser loads a page, it is instructed to include assets such as style sheets, fonts, and JavaScript files.
The browser has no way of distinguishing script that is part of your application and script that has been
maliciously injected by a third party. As a result, the malicious content could be loaded into your application.
CSPs help protect your application from such attacks.
Note: If an attack takes place, the browser reports to your application that a violation has occurred.
CSPs are a set of directives that define approved sources of content that the user's browser may load. The
directives are sent to the client in the Content-Security-Policy HTTP response header. Each browser type and
version obey as much of the policy as possible. If a browser does not understand a directive, then that directive
is ignored. In other situations, the policy is explicitly followed. Each directive governs a specific resource type that
affects what is displayed in a browser. Special URL schemes that refer to specific pieces of unique content—such
as data:, blob:, and filesystem:—are excluded from matching a policy of any URL and must be explicitly listed.
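For orientation, a policy header carrying a few directives might look like the following illustrative example (not a Pega default):

    Content-Security-Policy: default-src 'self'; script-src 'self' https://apis.example.com; img-src 'self' data:; report-uri /csp-report

In this illustration, scripts may only be loaded from the application's own origin or from apis.example.com, and images may additionally come from data: URLs because that scheme is listed explicitly, consistent with the note above.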
CSPs are instances of the Rule-Access-CSP class in the Security category.
To access the Content Security Policies in an application, you can:
l Specify the Content Security Policy on the Integration & Security tab of the application rule form
l Use the Application Explorer to list the Content Security Policies in your application
l Use the Records Explorer to list all the Content Security Policies that are available to you
For details on how to set Content Security Policies, see the help topic Policy Definition tab on the Content
Security Policies form.
KNOWLEDGE CHECK

Content security policies help detect and mitigate certain types of attacks by __________.
preventing browsers from loading and running content from untrusted sources
Rule Security Analyzer
The Rule Security Analyzer tool identifies potential security risks in your applications that may introduce
vulnerabilities to attacks such as cross-site scripting (XSS) or SQL injection.
Typically, such vulnerabilities can arise only in non-autogenerated rules such as stream rules (HTML, JSP, XML, or
CSS), and custom Java or SQL statements.
The Rule Security Analyzer scans non-autogenerated rules, comparing each line with a regular expressions rule
to find matches. The tool examines text, HTML, JavaScript, and Java code in function rules and individual activity
Java method steps, and other types of information depending on rule type.
The Rule Security Analyzer searches for vulnerabilities in code by searching for matches to regular expressions
(regex) defined in Rule Analyzer Regular Expressions rules. Several Rule Analyzer Regular Expression rules are
provided as examples for finding common vulnerabilities. You may also create your own Rule Analyzer Regular
Expression rules to search for additional patterns.
The most effective search for vulnerabilities is to rerun the Rule Analyzer several times, each time matching
against a different Regular Expressions rule.
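To illustrate the kind of pattern such a rule might contain, the following sketch uses the standard java.util.regex
API to flag string concatenation into a SQL statement, a common SQL injection smell in hand-written code. The
pattern and class name are assumptions for demonstration, not the expressions shipped with the tool.
import java.util.regex.Pattern;

public final class SqlConcatScanSketch {
    // Hypothetical pattern: a SQL keyword followed on the same line by a
    // closing quote and a + operator, suggesting concatenated user input.
    private static final Pattern SQL_CONCAT =
        Pattern.compile("(?i)(select|insert|update|delete)[^;]*\"\\s*\\+");

    static boolean looksRisky(String sourceLine) {
        return SQL_CONCAT.matcher(sourceLine).find();
    }
}
For example, the source line String sql = "select * from data where id = " + userInput; would match, marking the
line for manual security review.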
Important: Use trained security IT staff to review the output of the Rule Security Analyzer tool. They are better
able to identify false positives and remedy any rules that do contain vulnerabilities.
Running the Rule Security Analyzer before locking a ruleset is recommended. This allows you to identify and
correct issues in rules before they are locked. The Rule Security Analyzer takes a couple of minutes to run
through the different regular expressions.
For more information on the Rule Security Analyzer, see How to use the Rule Security Analyzer tool.
KNOWLEDGE CHECK
The Rule Security Analyzer tool helps identify security risks introduced in __________ rules.
non-autogenerated
How to secure an application
Find out who is responsible for application security in the organization and engage them from the start of the
project to find out any specific requirements and standards, and what level of penetration testing is done.
Rules
Perform the following tasks:
l Ensure that properties are of the correct type (integers, dates, not just text).
l Run the Rule Security Analyzer and fix any issues.
l Fix any security issues in the Guardrail report.
Rulesets
Lock each ruleset version, except the production ruleset, before promoting an application from the development
environment. Also, secure the ability to add versions, update versions, and update the ruleset rule itself by
entering three distinct passwords on the security tab on the ruleset record.
Documents
If documents can be uploaded into the application, complete the following tasks:
l Ensure that a virus checker is installed to control which files can be uploaded. You can use the extension
point in the CallVirusCheck activity to invoke the virus checker.
l Ensure file types are restricted by adding a when rule or decision table to the SetAttachmentProperties activity
to evaluate whether a document type is allowed, as sketched after this list.
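A minimal sketch of the kind of extension check such decision logic might perform follows. The allow list and
class name are assumptions; in practice, the permitted types would come from your when rule or decision table.
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public final class AttachmentTypeCheckSketch {
    // Hypothetical allow list of permitted file extensions.
    private static final List<String> ALLOWED =
        Arrays.asList("pdf", "docx", "xlsx", "png");

    static boolean isAllowed(String fileName) {
        int dot = fileName.lastIndexOf('.');
        if (dot < 0 || dot == fileName.length() - 1) {
            return false; // reject files without an extension
        }
        String ext = fileName.substring(dot + 1).toLowerCase(Locale.ROOT);
        return ALLOWED.contains(ext);
    }
}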
Authorization
Verify that the authorization scheme is implemented and has been extensively tested to meet requirements.
Ensure the production level is set to an appropriate value in the System record. Set the production level to 5 for
the production environment. The production-level value affects Rule-Access-Role-Obj and Rule-Access-Deny-Obj
rules. These rules control the classes that can be read and updated by a requestor with an access role. If this
setting interferes with valid user needs, add focused Rule-Access-Role-Obj rules that allow access instead of
lowering the production level.
Authentication
Enable the security policies if out-of-the-box authentication is used (Designer Studio > Org & Security >
Authentication > Security Policies). If additional restrictions are required by a corporate security policy, add a
validation rule. Set up time-outs at the application server level, requestor level, and access group level that are of
an appropriate length.
Integration
Work with the application security team and external system teams to ensure connectors and services are
secured in an appropriate way.
Operators and access groups


If Pega Platform was deployed in secured mode, out-of-the-box users are disabled by default. If the platform
was not deployed in secured mode, disable any users that are not used. Enable security auditing for changes to
operator passwords, access groups, and application rules.
Review the Unauthenticated access group to make sure that it has the minimum required access to rules.

Dynamic system settings


Configure the Dynamic System Settings in a production environment as described in the PDN article Security
checklist for Pega 7 Platform applications.
Note: Do not configure the Dynamic System Settings for a development environment, because they restrict the
Tracer tool and other developer tools.
Deployment
When deploying an application to an environment other than development, limit or block certain features and
remove unnecessary resources. Default settings expose an application to risk because they provide a known
starting point for intruders. Taking defaults out of the equation dramatically reduces overall risk.
Make the following changes to default settings:
l Rename and deploy prweb.war only on nodes requiring it. Knowing the folder and content of prweb.war is a
high security risk as it provides access to the application.
l Remove any unnecessary resources or servlets from the web.xml. Rename default servlets where applicable,
particularly PRServlet.
l Rename prsysmgmt.war and deploy it on a single node per environment. Also, deploy prsysmgmt.war on its
own node as someone could get the endpoint URL from the application server by taking the URL from the help
pop-up window. Password protect access to the SMA servlet on the production environment.
l Rename prhelp.war and deploy it on a single node per environment. In addition, deploy prhelp.war on its
own node, because someone could get the endpoint URL from the application server by taking the URL from the
help pop-up window.
l Rename prgateway.war and rename and secure the prgateway servlet. The prgateway.war contains the
Pega Web Mashup proxy server to connect to a Pega application.
Database
Ensure that the system has been set up using a JDBC connection pool approach through the application server,
rather than the database being set up in the prconfig.xml.
Limit the capabilities and roles that are available to the PegaRULES database account on environments other
than development, removing capabilities such as truncating tables, creating or deleting tables, or otherwise
altering the schema. This limit on capabilities and roles might cause the View/Modify Database Schema tool to
operate in read-only mode.
KNOWLEDGE CHECK
What can you do to mitigate security risks when developing applications?


Follow best practices and take actions to strengthen the security.
Security best practices
Security policies are an important aspect of securing an application, whether Pega is the Identity Provider (IdP)
or the IdP is external. Pega Platform provides multiple ways to enforce security; for example, security policies
are configured under System > Settings > Security Policies. This lesson discusses security best practices.
Avoid Excessive Privilege Grants


As a precaution, do not assign permissive access roles, such as WorkMgr4, early on unless you are certain the
user needs the Work- Perform privilege. For example, a case worker's productivity might be enhanced by being
able to view certain reports, but this does not mean the case worker should be assigned the Manager portal by
adding the WorkMgr4 access role to the case worker's access group. Instead, a dashboard can be added to a
specialized case worker portal.
Rule-Access-Role-Object (Access of Role to Objects or ARO) rules are non-versioned. It is not possible to override
an ARO rule within a different ruleset. There can only be one instance of an ARO rule based on its keys,
pyAccessRole and pyAccessToClass.
As opposed to updating the Work- ARO to remove Perform Privilege as the default, a more specific class can be
added such as the work pool class, e.g., FSG-Booking-Work. Within that new ARO, the Perform Privilege can be
removed. The Perform Privilege can then be granted for the set of case types that a manager oversees.
URL Attack Prevention


Obfuscation of URLs is no guarantee that an attacker will never find a way to form a URL that, if not secured by
the system, allows access to information the attacker should not be allowed to view or, worse, allows the
attacker to modify information in the system.
Remember the saying "obscurity is not security." If a member of a certain work group, or someone with a certain
primary access role, should not be allowed to create case types, then you need to define an authorization policy
that explicitly prevents that access. Hiding the pyCreateCaseMenu within the Data-Portal pyPortalNavInner section
using a When rule is not the proper solution, albeit it can be part of the solution. This is true regardless of
whether the When rule is checking for a Privilege.
Anyone who knows how can create the case, unless true security (via proper authorization) is implemented, by
defining a URL where the QueryString begins as follows:
?pyActivity=doUIAction&action=createNewWork&className=
This doesn’t mean that When rules should never be created that check whether the user has a certain Access
Role or Privilege. Security-checking When rules should only be used when no better means of enforcement exists
or in combination with proper authorization to improve the user experience.
You do not want to be in a situation where When rules must be applied in every situation that requires
enforcement. Doing so is the same as writing code that contains if/else logic; whenever a new “else” value is
invented, if/else logic needs to be updated. The object-oriented way to enforce security is to secure the object
that is ultimately accessed, not every path that can be taken to get to the object.
Example:
Suppose an operator has Read access to a case but not Perform access. The user could issue a URL such as:
?pyActivity=doUIAction&action=openWorkByHandle&key=FSG-BOOKING-WORK EVENT-77
No error message would be displayed. Instead, the screen would say “NoID” and display a button that says
“Open in Review”.
Suppose you do not want that person to view the case in Review mode unless that person is either the current
owner or the last person to update the case. Simply preventing access to the assignment is not sufficient,
because Assign- Read access = canPerform only prevents the assignment from being performed; it does not
prevent the associated case from being opened. Access must also be prevented or allowed at the case level.
Notice the RBAC configuration below for FSG-Booking-Work. The pxRelatedToMe Access When rule allows the case
to be opened only when last updated or resolved by the current operator, or currently owned by the current
operator. A co-worker would not be allowed to open the case.

Before

After
In the Booking application, the primary Access Role for Sales Executives should be cloned from or, even better,
dependent on PegaRULES:User4. Below shows what would happen when SalesExecutive1@Booking attempts to
open a case owned by SalesExecutive2@Booking after the modification above.
?pyActivity=doUIAction&action=openWorkByHandle&key=FSG-BOOKING-WORK EVENT-77
Limit Portal Functionality


The Bulk Actions option is present in the Case Worker portal’s Operator menu. If the requirement is that a
Facility Coordinator should not be allowed to create an Event case and you did not set up the proper
authorization policy, then the Facility Coordinator would be able to use the Bulk Actions option to create the case
as follows:
Step 1:
Step 2:
Review the Security Checklist Before Deploying Applications


Pega has defined multiple areas to review before deploying an application to Production. Each Deployment
Manager pipeline includes a security review step to remind architects of this essential task. The security
guidelines are included in the pxApplicationSecurityChecklist Application Guide rule which can be launched from
an Application rule’s Documentation tab.
For more details, review the Pega Community article Security checklist for deploying applications.
KNOWLEDGE CHECK
What can you do to mitigate security risks before deploying applications to production?
Follow best practices and take actions as suggested in the Security Checklist to strengthen the security.
Security event logging
In addition to data and rule modification auditing, plus recording work history, Pega provides the ability to record
security-related events to a file named PegaRULES-SecurityEvent.log. This log file can be accessed from Dev
Studio using: Configure > System > Operations > Logs > Log files.
Below are two example security event log entries. Notice that each entry is recorded using JSON format.
{"id":"6e11a563-fd93-46d8-9de0-3963fb43a70f","eventCategory":"Security
administration event","eventType":"security event configuration
changed","appName":"Booking","tenantID":"shared","ipAddress":"192.168.118.1","timeSt
amp":"Fri 2019 Jul 12,
17:30:54:274","operatorID":"Admin@Booking","nodeID":"ff9ef7835fd4906aea82694c981938d
0","message":"security event configuration has been
modified.","requestorIdentity":"20190710T213105"}
{"id":"ed76e8a7-ea28-4e9a-8830-01e8e90301ae","eventCategory":"Authentication
event","eventType":"Operator record
change","appName":"Booking","tenantID":"shared","ipAddress":"192.168.118.1","timeSta
mp":"Fri 2019 Jul 12,
17:41:24:976","operatorID":"Admin@Booking","nodeID":"ff9ef7835fd4906aea82694c981938d
0","requestorIdentity":"20190710T213105","operatorRecID":"DATA-ADMIN-OPERATOR-ID
ADMIN@BOOKING","operatorRecName":"Admin","operation":"update"}
The Configure > Org & Security > Tools > Security > Security Event Configuration landing page displays which
types of events are recorded. At the bottom is the option to enable or disable Custom event logging.
Note: The Security Event Configuration only allows you to turn custom events on or off.
This setting does not provide control over when individual custom events are logged. You could, for example,
define a parameterized When rule to control whether a step in a Data Transform or Activity should record a
custom security event. The When rule’s parameter could be used to perform a Data Page-mediated lookup to
see whether logging of the custom event has been enabled.
Custom event logging can be used to facilitate the fulfillment of Client-Based Access Control (CBAC) auditing
requirements.
It is possible to log a custom event within an Activity java step using:
tools.getSecEventLogger().logCustomEvent(PublicAPI tools, String eventType, String
outcome, String message, Map<String, String> customFlds)
With the parameter values:
l eventType: Name of the event type to keep track of custom events
l outcome: The outcome of the event
l message: Any message that a user needs to log as part of the event.
l customFlds: A map of key-value pairs that log extra information for the event.
A better long-term approach, however, is to execute this API within a Rule-Utility-Function, because future
versions of Pega Platform may curtail the use of java steps in Activities.
According to the help topic Adding a custom security event, to record a custom security event you create a
java step within an activity.
It would be overly complex to require code that calls a Function to supply a StringMap (Map<String, String>)
customFlds parameter. The Function could instead accept a text-based ValueGroup property. That ValueGroup
property can be converted to a StringMap within the Function. The following steps describe how you could
configure this function.
1. Create a Library and a Function.
2. Have the Function accept four parameters (String eventType, String outcome, String message, ClipboardProperty customFlds).
3. The supplied ClipboardProperty must be a ValueGroup.
4. The Function converts the ValueGroup ClipboardProperty to a locally declared Map<String, String> customFldsMap
variable:
// Obtain the PublicAPI for the current thread; fail fast if it is unavailable.
PublicAPI tools = null;
PRThread thisThread = (PRThread) ThreadContainer.get();
if (thisThread != null) {
  tools = thisThread.getPublicAPI();
} else {
  throw new PRAppRuntimeException("Pega-RULES", 0,
      "Unable to obtain current thread");
}

// Convert the supplied ValueGroup property to the Map expected by the API.
Map<String, String> customFldsMap = new HashMap<String, String>();
java.util.Iterator iter = customFlds.iterator();
while (iter.hasNext()) {
  ClipboardProperty prop = (ClipboardProperty) iter.next();
  customFldsMap.put(prop.getName(), prop.getStringValue());
}

// Record the custom security event in PegaRULES-SecurityEvent.log.
tools.getSecEventLogger().logCustomEvent(tools, eventType, outcome, message,
    customFldsMap);
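For illustration, a call site might look like the following sketch, assuming the function above was generated as
LogCustomSecurityEvent and that .AuditDetails is a Value Group property on the step page; the function name,
property name, and myStepPage reference are all hypothetical.
// Hypothetical call from an activity or another function. The argument
// values match the example custom event log entry shown below.
ClipboardProperty customFlds = myStepPage.getProperty(".AuditDetails");
LogCustomSecurityEvent("FooBla",        // eventType
                       "Fail",          // outcome
                       "FooBla failed", // message
                       customFlds);     // extra key-value pairs for the log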
Below is an example of a custom security event.


{"id":"c86a4299-9355-418b-b95d-519f842693d1","eventCategory":"Custom
event","eventType":"FooBla","appName":"Booking","tenantID":"shared","ipAddress":"192
.168.118.1", "timeStamp":"Fri 2019 Jul 12,
17:46:05:284","operatorID":"Admin@Booking",
"nodeID":"ff9ef7835fd4906aea82694c981938d0","outcome":"Fail","message":"FooBla
failed","requestorIdentity":"20190710T213105"}
Note: The event category for every custom security event is “Custom Event”. To enable or disable logging of a
specific custom security event type, you would need to use the (custom) eventType value as a When rule
parameter. The When rule would use the parameter to perform a node-level Data Page lookup. If the lookup
shows that logging of the custom eventType is enabled, the When rule would return “true”. In turn, the custom
security event-logging function (RUF) would be called.
KNOWLEDGE CHECK
How do you enable custom event logging?


Using: Configure > Org & Security > Tools > Security > Security Event Configuration.
Mitigating security risks quiz
Question 1
When implementing a security requirement, it is important to _____________________ and _____________________.
(Choose Two)

C # Response Justification
C 1 consider the cost for the It is important to consider the
users user inconvenience of a security
feature.
2 only change default settings if Often it is recommended to
absolutely necessary change default settings to
strengthen security.
C 3 understand vulnerabilities Always consider vulnerabilities
inherent from the application inherent from the application
programming language and programming language and
underlying platform underlying platform.
4 consider if the requirement This is not something to
can be implemented using consider. ABAC and RBAC
ABAC instead of RBAC complement each other.

Question 2
Which two of the following security risks can be identified by the Rule security analyzer? (Choose Two)

C # Response Justification
1 Unsecured rulesets and The Rule security analyzer does
unlocked ruleset versions not check ruleset versions.
C 2 Vulnerabilities in stream rules The Rule security analyzer
checks for vulnerabilities in
hand-crafted stream rules.
3 Properties with incorrect type The Rule security analyzer does
not check property types.
C 4 Hand-crafted Java code The Rule security analyzer
checks for vulnerabilities in
hand-crafted Java code.
Question 3
Content security policies ______________ and ________________. (Choose Two)

C # Response Justification
C 1 protect your browser from Content security policies help
loading and running content by identifying untrusted
from untrusted sources sources.
C 2 help detect and mitigate Content security policies help
Cross Site Scripting (XSS) and mitigate XSS and data injection
data injection attacks attacks.
3 create an event when security This is true for security events.
relevant rules are updated
4 find common vulnerabilities This is true for the Rule security
by searching for matches to analyzer.
regular expressions
Defining a reporting strategy
Introduction to defining a reporting strategy
Defining a reporting strategy goes beyond creating reports in Pega. Many organizations use a data warehousing
solution and have distinct requirements for retaining data.
After this lesson, you should be able to:
l Identify requirements that influence reporting strategy definition
l Discuss alternatives to traditional reporting solutions
l Define a reporting strategy for the organization
Reporting and data warehousing
Organizations often want to combine data from web applications, legacy applications, and other sources in order
to make decisions in real time or near real time. To make these decisions, many organizations use business
intelligence software to collect, format, and store the data, and provide software to analyze this data.
A data warehouse is a system used for reporting and data analysis. The data warehouse is a central repository of
integrated data from one or more separate sources of data. The extract, transform, and load (ETL) process
prepares the data for use by the data warehouse. The following conceptual image illustrates a typical end-to-end
process of extracting data from systems of record and storing the data in the warehouse, then making that data
available to reporting tools.
The key factor that determines whether you design your reports in the Pega application or leverage an external
reporting tool is the impact on application performance. For example, if your reporting requirements state that
you need to show how many assignments are in a workbasket at any given time, creating a report on the
assignment workbasket table is appropriate. If you need to analyze multiple years of case information to perform
some type of trending analysis, use reporting tools suited for that purpose instead. You can provide a link to
those reports from the end user portal in the Pega application.
Business Intelligence Exchange (BIX)


Business Intelligence Exchange (BIX) allows you to extract data from your production application and format the
data to make it suitable for loading into a data warehouse. BIX is an optional add-on product consisting of a
ruleset and a stand-alone Java program that can be run from a command line. Data from the BIX process can
be formatted as XML or comma-separated values (CSV), or can be output directly to a database. The following diagram
depicts the process of extracting the data from the Pega database and preparing the data for use by
downstream reporting processes.

For more information on BIX, see the help topic Business Intelligence Exchange.
Archiving and purging data


Another facet of the data management and warehousing solution is planning how and when to purge data from
the production system. Over time, the work and history tables can grow significantly. In addition to making this
data available for reporting from a data warehouse, create a strategy for managing the size of these tables. This
strategy could include partitioning database tables or moving the data to a staging database. This strategy could
also involve purging this data from the database after it has been archived in the warehouse.
Note: Pega provides a wizard for purging data from production tables. For more information on purging data
using the wizard, see the help topic Purge/Archive wizard.
KNOWLEDGE CHECK
What is the primary reason for using an external reporting tool instead of Pega reporting?
An external reporting tool is used because of the potential impact on system performance. If you need a report
that does heavy analysis or trending type reporting over large quantities of data, use a tool meant for that
purpose. Pega can handle this type of reporting, but be aware of impact to system performance, particularly
when embedding reports in end user portals.
How to define a reporting strategy
Before you define your reporting strategy, assess the overall reporting needs of the organization. The goal is to
get the right information to the right users when they need the information. Treat your reporting strategy design
as you would any large-scale application architecture decision. A robust reporting strategy can help prevent
future performance issues and help satisfy users' expectations.
As you define your reporting strategy, ask yourself the following questions:
l What reporting requirements already exist?
l Who needs the report data?
l When is the report data needed?
l Why is the report data needed?
l How is the report data gathered and accessed?
What reporting requirements already exist?


Organizations rely on reporting and business intelligence to drive decisions in the organization. Sometimes,
government and industry standards drive reporting needs. For example, executive management requires
dashboard reports and Key Performance Indicators (KPIs) to drive strategic decisions for the business, oversee
how the organization is performing, and take action based on that data. Line level managers need reports to
ensure their teams meet business service level agreements (SLAs). When defining a reporting strategy, inventory
the reports that the business uses to make key decisions to help determine the overall reporting strategy.
Who needs the report data?


Once you have an inventory of these reports, create a matrix categorizing the user roles and reports each role
uses to make business decisions. For example, you may create throughput reports for various users. Line
managers use the reports to show the throughput of team members. Managers use reports to optimize routing
of assignments. Executives may want to see a summary of throughput over specific periods. The reports enable
the executives to drill down into the results of individual departments and plan staffing requirements for these
departments in the coming months. Individual team members can see their own throughput metrics to gauge
how close they are to meeting their own goals and to work toward their incentives.
When is the report data needed?


Identify how frequently the data needs to be delivered. The outcome of your research affects configuration
decisions such as report scheduling settings, agent schedules, and refresh strategies on data pages. Other
factors, such as currency requirements, may play a role in your strategy. For example, you may have a data page
that contains exchange rates. This data needs to be current on an hourly basis. In addition, the report that
sources the data page must have a refresh strategy.
Related to frequency is the question of data storage and availability. The answer influences how you architect a
data-archiving or table-partitioning approach. Implementing a data-archiving strategy and partitioning tables
can help with the long-term performance of your reports.
Why is the report data needed?


This question is related to who needs the data. Existing reporting requirements influence what data the report
must contain. As you research the need for each report, you may find the report data is not needed at all. For
example, you may discover that no one in the organization reads the report or uses the report as the basis for
any decisions. On the other hand, you may find opportunities to provide new reports. For example, a department
could not create a necessary report because the current business process management system cannot extract
the data. With the Pega application, you can extract this data using BIX and feed it to a data warehouse where
managers perform analytics on resolved work.
How is the report data gathered?


Pega Platform offers several options for gathering report data within the application. The strategy you
recommend depends on the type of reporting you are doing. If the organization requires heavy trending
reporting and business intelligence (BI) reporting, a data warehouse may be a better fit. If you want to display
the status of work assignments on a dashboard in the application, report definitions with charts are appropriate.
Alternatives to standard reporting solutions


Although Pega offers powerful reporting capabilities, also consider alternatives to traditional reporting and data
warehousing approaches. These approaches may be the best way to meet your reporting requirements. For
example, you can use robotic automation to gather data from external desktop applications. Or, if you are using
data for analytics, consider using adaptive and predictive decisioning features. If you need dynamic queries, you
can also use freeform searching on text such as Elasticsearch instead of constructing a report definition to
gather the data. With the growth in popularity of big data and NoSQL databases, freeform search is becoming
more common.
Starting in v7.4, you can run report definitions against Elasticsearch indexes instead of using SQL queries directly
against the database. Be aware, however, that running report definitions against Elasticsearch indexes is
disabled by default and does not apply to reports that use features that Elasticsearch does not support. If a
report query cannot be run against Elasticsearch indexes, Pega Platform automatically uses an SQL query.
Defining a reporting strategy quiz
Question # 1
Which of the following two actions do you perform when you begin developing your reporting strategy? (Choose
Two)
C # Response Justification
C 1 Inventory the reports the Making an inventory lets you see
organization currently uses the information needs that drive
the organization's reporting
requirements.
C 2 Create a matrix categorizing A matrix helps you understand
the user roles and reports how different roles use specific
each role uses to make reports and discover if there is
business decisions. overlap.
3 Install the BIX product You install BIX only if you need
to extract data from the
organization's database and
export it to a data warehouse.
4 Optimize the properties in all You might optimize properties
the reports to enhance reporting
performance, but optimization is
not part of developing a
reporting strategy.
Question # 2
What benefits do you gain by leveraging a data warehouse coupled with Pega reporting? (Choose Two)

C # Response Justification
C 1 You can use reporting tools suited for This is a benefit of using a data warehouse.
trending and data analysis.
C 2 You can focus on using Pega for This is a benefit of using a data warehouse.
reporting on throughput data.
3 You can purge cases from your Purging is a separate activity from warehousing, though
database along with the data they are related.
warehousing process.
4 You can use BIX to export cases to a While this is a true statement of BIX capabilities, this is not
database, a comma separated file, or a direct benefit of leveraging data warehousing.
XML format.
Question # 3
Your organization uses a large number of business intelligence (BI) reports. Which two approaches would you
consider to be good solutions? (Choose Two)
C # Response Justification
C 1 Data warehouse A data warehouse may be the
best solution if the organization
requires heavy trending
reporting and business
intelligence (BI) reporting.
C 2 BIX BIX is likely the best way to
extract data from the
application server and export it
to the data warehouse.
3 Elasticsearch You use Elasticsearch mainly for
dynamic queries, but it may not
be useful for BI reports.
4 Data archive Archiving data may be useful for
enhancing reporting
performance, but it is not part of
developing a reporting strategy.
Designing reports for performance
Introduction to designing reports for performance
Poorly designed reports can have a major impact on performance. A report may run with no issues in a
development environment. When run with production data, the report may perform poorly. This issue may
impact performance for all application users.
After this lesson, you should be able to:
l Explain the causes of poorly performing reports and the impact poor performance can have on the rest of the
application
l Describe how to design reports to minimize performance issues
l Identify the cause and remedy a poorly performing report
Impact of reports on performance
When an application is first put into production, a report may run with no issue and within established service
level agreements (SLAs). As the amount of application data grows, the report may run more slowly. Poor report
performance can cause memory, CPU, and network issues. These issues can affect all application users, not just
the user running the report.
To help you diagnose and mitigate these issues, Pega generates performance alerts when specific limits or
thresholds are exceeded. For example, the PEGA0005 - Query time exceeds limit alert helps you recognize when
queries are inefficiently designed or when data is loaded indiscriminately.
For more information about performance alerts, see the Pega Community article Performance alerts, security
alerts, and Autonomic Event Services.
Important: Guardrail warnings alert you to reports that could have performance issues. Instruct your teams to
address warnings before moving your application from development to target environments.
Memory impact
Large result sets can cause out-of-memory issues. The application places query results on the users' clipboard
pages. If those pages are not managed, your application eventually shuts down with an out-of-memory (OOM)
error.

CPU impact
Using complex SQL can also have a CPU impact on the database server. When the database is performing poorly,
all users on all nodes are affected. Autonomic Event Service (AES) and Predictive Diagnostic Cloud (PDC) can help
you identify the issues. Your database server administrator can set up performance monitoring for the database
server.

Network impact
Sending large result sets over the network may cause perceived performance issues for individual users,
depending upon their bandwidth, network integrity, and network traffic.
KNOWLEDGE CHECK
How can reporting affect performance across the application?


Poorly designed queries can be CPU intensive, causing issues with the database. When the database is
affected, all users are affected. Not managing large result sets can adversely affect application server memory
and database memory. The most common consequence of improper management of large result sets is an out-
of-memory error on the application server node.
How to configure an application to improve report
performance
A report definition (and the Obj-* methods) is just a query. Pega Platform constructs and optimizes a query
based on parameters defined in the application rule. Then, Pega Platform delivers the results of the query to a
clipboard page to either display to your end users or to use for other purposes, such as running an agent on the
result set.
The same principles you use in tuning a database query can be applied to designing reports for performance.
You can configure the report definition and Obj-* methods for efficiency, apply techniques at the database
level, or take an entirely different approach to gathering data, such as using robotic automation or
Elasticsearch.
The goal is to return data to your users in the most efficient way possible, with as little impact on other users as
possible. The following techniques describe best practices for configuring rules within the application.
Use data pages when possible


The best approach to optimizing a report is to avoid running it at all, and data pages can help you do that. If the
data already exists on a data page, use it. Design the refresh strategy to retrieve data only when required, and
use node-scoped pages when possible.

Paginate results
Paginating results allows you to return groups of data at a time. Returning big groups of records may make it
difficult for users to find the information they are looking for. For example, a report that returns 50 records at a
time may be too much information for a user to sift through. Select the Enable Paging option on the report
definition and specify the page size.
For more information on how to configure paging in reports, see the Pega Community article When and how to
configure paging in reports.

Optimize properties
If you expect to use a property in your selection criteria, optimize that property. Optimizing a property creates a
column in the database table, which you can then index as needed.
For more information about optimizing properties, see the help topic Property optimization using the Property
Optimization tool.

Utilize declare indexes


Declare indexes allow you to expose embedded page list data. For example, the application stores Work-Party
instances in the pr_index_workparty table. This allows you to write a report definition that joins work object data
to work party data instead of extracting the work party data from the pzpvstream column (the BLOB), which can
be expensive.
For more information on how to utilize declare indexes, see the article How to create a Declare Index rule for an
embedded property with the Property Optimization tool.

Leverage a reports database


To reduce impact of queries on the production database, you can run reports against a reports database (also
known as an alternate database). This approach offloads demand from the production database to a replicated
database.
For more information on using a reports database, see the topic Setting up a reports database.

Avoid outer joins


Selecting Include all rows on the Data Access tab of the report definition can be costly. This option causes the
system to use an outer join for the report in which all instances of one of the classes are included in the report
even if they have no matching instances in the other class. If possible, select Only include matching rows.
How to tune the database to improve report performance
You can perform specific database tuning and maintenance tasks to help improve report performance. Enlist the
help of your database administrator to perform these tasks and to provide additional guidance. These tasks vary
depending on the database vendor you are using. Regardless of the database your application is running on,
these techniques can help you improve report performance.

Partition tables
Table partitioning allows tables or indexes to be stored in multiple physical sections. A partitioned index is like
one large index made up of multiple little indexes. Each chunk, or partition, has the same columns, but a
different range of rows. How you partition your tables depends on your business requirements.
For more information on partitioning Pega tables, see the article PegaRULES table partitioning.

Run Explain Plans on your queries


An Explain Plan describes the path the query takes to return a result set. This technique can help you determine
if the database is taking the most efficient route to return results. You can extract the query with substituted
values by using the Database profiler or by tracing the report while it runs. Once you have the query with
substituted values, you can run the Explain Plan for the query in the database client of your choice.
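As a sketch of this technique, assuming a PostgreSQL database reachable over JDBC and a query already
captured with substituted values, the plan can be printed as follows. The class name is illustrative, and EXPLAIN
syntax varies by database vendor.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class ExplainPlanSketch {
    // Prints one plan line per result row for a captured query.
    static void printPlan(Connection conn, String capturedQuery)
            throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("EXPLAIN " + capturedQuery)) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}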

Create table indexes


After you have exposed one or more columns in a database table, you can create an index on that column. Do
not create too many indexes, because too many can degrade performance. In general, create an index on a
column if any of the following statements is true.
l The column is queried frequently.
l A referential integrity constraint exists on the column.
l A UNIQUE key integrity constraint exists on the column.

Drop the pzpvStream column on pr_index tables


Pr_index tables do not require the pzPvStream column. Removing this column prevents replicated data from
being returned to the application and taking up memory on the clipboard.

Purge and archive data


Depending on the retention requirements for your application, consider archiving data to Nearline or Offline
storage, either in another database table or in a data warehouse. Purging and archiving data that is either no
longer needed or infrequently accessed can improve report performance because the application has a smaller
set of records to consider when running the query. You can also use the Purge and Archive wizard to achieve this
purpose.
For more information about purging and archiving data, see the help topic Purge/Archive wizards.
Important: Be sure to consider table relationships to ensure your archiving solution encompasses all
application data.

Load test with realistic production data volumes


You can prepopulate a staging environment with production-like data to test your reports with a realistic volume
of data. Many organizations require any sensitive information be removed (scrubbed) prior to running this type
of test, and this can take some time. Plan your testing accordingly.
Designing reports for performance quiz
Question # 1
Your application database server is experiencing out-of-memory (OOM) errors. You suspect that the issue is
related to a new report you recently put into production. Which of the following options is the likeliest reason
that the report is causing these OOM issues?

C # Response Justification
C 1 Large result sets The application places query
results on the clipboard page of
the users. If those pages are not
managed, your application
eventually shuts down with an
OOM error.
2 Complex SQL requests Using complex SQL typically has
a CPU impact on the database
server, not OOM issues.
3 Heavy network traffic Sending large result sets over
the network that has heavy
traffic may cause perceived
performance issues by the user
but not cause OOM issues.
4 Unoptimized properties Unoptimized properties may
cause prolonged background
processing times but not OOM
issues.

Question # 2
You have created a set of reports that use large numbers of embedded properties. Which two of the following
techniques can you use to help improve report performance? (Choose Two)

C # Response Justification
C 1 Create declare indexes Declare indexes allow you to
expose embedded page list
data.
2 Enable pagination Pagination controls the number
of results a user sees at one
time and is not directly related
to the use of embedded
properties.
C 3 Run reports against a reports This approach offloads demand
database of the production database to a
replicated database dedicated
to reporting.
4 Include all rows when joining This technique causes the
reports system to use an outer join and
is not directly related to the use
of embedded properties.

Question # 3
What database tuning technique do you use to determine if the database is taking the most efficient route to
returning results?

C # Response Justification
C 1 Run explain plans on your An explain plan describes the
queries path the query takes to return a
result set. This technique can
help you determine if the
database is taking the most
efficient route to return results.
2 Create table indexes You create an index for a
database column if the column
is queried frequently or there
are specific constraints on the
column.
3 Partition the database tables Table partitioning allows tables
or indexes to be stored in
multiple physical sections.
4 Perform a load test This technique only tests system
performance based on realistic
data volumes.
Query design
Introduction to Query Design
It is not enough just to know how to configure reports. There can be multiple solutions to the same problem,
one superior to the others, and there can be situations where an ideal solution is not possible using the data
that already exists. Creating data to simplify a query is a technique that can be leveraged. Also, the decision
about how best to store data to facilitate reporting needs to be made early on; queryability should not be an
afterthought.
After this lesson, you should be able to:
l Produce queries that reference ancestors within a case hierarchy
l Produce queries based on generated or reformatted data
l Produce queries that include correlated subqueries
l Produce queries that contain complex SQL
Queries that reference ancestors within a case hierarchy
Within XPath there is the notion of axes, one of which is named “ancestor”. An ancestor is a parent, a
grandparent, and so on.
Suppose a case needs to be reviewed by multiple independent reviewers. A subcase is created for each of the
N required reviewers. The initial assignment in each subcase is a workbasket assignment. Each reviewer asks the
system to pull a Review workbasket assignment into their worklist. The parent of the case that the workbasket
assignment references must not match the parent of any subcase that the reviewer has previously fetched.
Assumption: When a Review workbasket assignment is fetched and moved to the reviewer’s worklist, the reviewer
is immediately persisted as the subcase’s Reviewer work party, i.e., pyWorkParty (Reviewer).

Solution:
select pzInsKey from
{Assign-WorkBasket} WB,
{Org-App-Work-Review} REV
where
WB.pxAssignedOperatorID = [workbasket]
and WB.pxRefObjectKey = REV.pzInsKey
and REV.pxCoverInsKey not in
(select REV2.pxCoverInsKey from
{Org-App-Work-ReviewCover} REV2, {Index-WorkPartyUri} WP
where
WP.pxInsIndexedKey = REV2.pzInsKey
and WP.pxPartyRole = "Reviewer"
and WP.pyPartyIdentifier = OperatorID.pyUserIdentifier);
Suppose, though, that the case being reviewed is two levels above the case to be fetched by a reviewer. Cases
do not possess a “pxCoverCoverInsKey” property. You could define such a property if needed, but you should use
a more meaningful property name. If the case to be reviewed is a Claim, you would define a work pool-level
property named “ClaimKey”. The “parent” and “grandparent” Claim cases would set ClaimKey equal to their
pzInsKey and propagate the ClaimKey property to their subcases. Likewise, those subcases would propagate the
ClaimKey property to their own subcases.
Propagation of data that is assumed to remain static would also simplify the definition of Access When rules.
Care must be taken with this approach because it is possible, however unlikely, for the information to become
stale or inaccurate. For example, the parent case of a subcase can be altered when the pxMove activity is
invoked. This is an example of where a data integrity violation can potentially occur.
An alternative is to perform what is known as a hierarchical or recursive query, which each database vendor
implements differently and which is not supported by the Report Definition rule.
Interestingly, if the goal is to avoid the same person reviewing the same case twice, and the case design does
not involve subcases, the query shown above would work just as well if the word “Cover” were removed. In other
words, if subflows are created using Split-For-Each, as opposed to subcases being spun off, the query below
would prevent the same reviewer, using GetNextWork, from pulling a workbasket assignment for a case that the
reviewer has already reviewed.
select pzInsKey from
{Assign-WorkBasket} WB,
{Org-App-Work-Review} REV
where
WB.pxAssignedOperatorID = [workbasket]
and WB.pxRefObjectKey = REV.pzInsKey
and REV.pxInsKey not in
(select REV2.pxInsKey from
{Org-App-Work-Review} REV2,
{Index-WorkPartyUri} WP
where
WP.pxInsIndexedKey = REV2.pzInsKey
and WP.pxPartyRole = "Reviewer"
and WP.pyPartyIdentifier = OperatorID.pyUserIdentifier);
Queries based on generated or reformatted data
Suppose there is a requirement to produce a trend report where the chart shows the number of cases
created each day as well as the number of cases resolved each day.
Assumption: Cases may take longer than one day to complete.
Note: there would be no point in creating the report if cases were always resolved the same day that they were
created.
The temptation exists to define the trend report against the work pool class of the cases being analyzed but this
would not be correct. A trend report requires a Date against which to plot the two results, i.e., number of cases
created vs number of cases resolved. Neither pxCreateDateTime or pyResolvedTimeStamp can be used as the
plotting date. If so, cases resolved days later would be counted on the day that the case was created. That or
cases created days before being resolved would be counted on the day that the case was resolved.
An attempt could be made to query case history to identify case creation and case resolution events. While it is
possible to use History-Work to identify case creation and case resolution events, doing so would be complex.
Also, the history table contains numerous rows that contain other types of information.
As opposed to searching for case creation and resolution events within case history, a separate Data Type could
be defined. For example:
Data-SimpleCaseHistory
String CaseID
String CaseType
String EventType
DateTime WhenOccurred
Here, the allowed values for EventType would be kept to a minimum, for example, “Create” and “Resolve”.
A trend report could be defined against this table by itself. Or, the work pool class could be joined to this table
using pyID = CaseID. Either way, each EventType would be plotted against the truncation of the WhenOccurred
DateTime value to a Date value. Data instances within this table can be generated retroactively.
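The aggregation such a trend report performs can be sketched in plain Java (the record syntax requires Java 16
or later). The record mirrors the Data-SimpleCaseHistory shape; the class and method names are illustrative.
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.groupingBy;

// Mirrors Data-SimpleCaseHistory: one row per Create or Resolve event.
record CaseEvent(String caseID, String caseType, String eventType,
                 LocalDateTime whenOccurred) {}

public final class TrendAggregationSketch {
    // Counts events per calendar day and per event type, the same grouping
    // the trend report performs when charting created vs. resolved cases.
    static Map<LocalDate, Map<String, Long>> countByDayAndType(
            List<CaseEvent> events) {
        return events.stream().collect(groupingBy(
            e -> e.whenOccurred().toLocalDate(), // truncate DateTime to Date
            groupingBy(CaseEvent::eventType, counting())));
    }
}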
A different solution is to define and populate a Timeline table that contains the Dates to plot against.
Data-Timeline
Date Date
TrueFalse IsWeekStart
TrueFalse IsMonthStart
TrueFalse IsYearStart
This table would need to be populated as far into the past and into the future as would be needed. Other trend
reports could leverage the same table.
However, because it is not possible to define a JOIN based on the result of a SQL Function such as day(),
CreateDate and ResolveDate Date properties would need to be added and exposed within the work pool table.
Those database columns would also need to be indexed. The query using the Timeline table would require two
subreports, one selecting and counting rows where the CreateDate matches a given TimeLine Date. The second
subreport would select and count the number of rows where the ResolveDate matches the same TimeLine date.
The Timeline approach would not be as performant as the SimpleCaseHistory approach due to having to join to
the work pool table twice. It also would only be usable as a List report since each subreport performs a COUNT
instead of the main report performing a GROUP BY aggregation which can, in turn, be charted. Using SQL it
would be possible to UNION the result of the two subreports but this is not supported by Pega’s Report
Definition rule.
The ideal solution would be to base the trend report on the SimpleCaseHistory class alone without joining to the
work pool table. This example demonstrates the benefit of extracting data and persisting it to a different form to
facilitate business intelligence.
Queries that include correlated subqueries
It is not always possible to obtain a desired result in a single query using a Report Definition. Instead, the
desired level of information can be obtained from a follow-on or drill-down query based on the initial query.
Example
XYZ Corp orders office supplies through multiple suppliers. XYZ Corp wants to see detail about the most
expensive line items purchased through each supplier. XYZ has Order cases that have LineItem subcases.
LineItem detail is stored in a separate table.
How would you obtain this information?

Solution
Define a subreport that obtains supplier ID and max(Price) from the line item table. Within the main report query
the line item table joining to the subreport by supplier ID.

Rationale
Multiple products from the same supplier may have the same price, which also happens to be the maximum
price for that supplier within the order.

Dilemma
The result does not show line item detail.

Solution
Drill down to obtain information about the line items for the supplier that share the same price. The drill-down
report could ask for distinct rows.
The syntax below achieves the result in a single query.
Select
ORD.OrderID,
LI.SupplierID,
PROD.SKU,
PROD.Detail,
LI.Price
From
Order ORD,
LineItem LI,
Product PROD
Where
LI.OrderID = ORD.OrderID
and PROD.SKU = LI.SKU
and LI.Price = (select max(price)
From LineItem LI2
Where
LI2.OrderID = LI.OrderID
and LI2.SupplierID = LI.SupplierID)
This type of query is called a correlated subquery. Note how the subquery references LineItem columns using
two different aliases, LI and LI2.
If not handled properly by the database, a correlated subquery can be inefficient, since it implies that a separate
query is executed for every row. Modern databases, based on what their query optimizer tells them, can
“unnest” the subquery so that it acts the same as referencing a (materialized) VIEW.
Report Definitions allow a parameter to be passed to the subquery from the main report. The parameter’s value
does not change from row to row, unlike when a JOIN is defined to a particular column.
When configuring a subreport, no alias is required to reference the main report. Instead, on the right-hand side
of the subreport join, simply enter a property that belongs to the main report’s class. This is where the
correlation occurs.
The query syntax below was generated by the CorrelatedSubqueryTest Report Definition within the provided
BookingReports ruleset. The starting point for this Report Definition was the CloneJOINTest Report Definition, also
contained within the provided BookingReports ruleset. It was not necessary to have defined the CLONE join to
demonstrate that a correlated subquery can be defined. What this example demonstrates is that it is also
possible to reference a value on the right-hand side of the subreport correlated join that is derived from a JOIN
performed by the main report.
SELECT
"PC0"."pyid" AS "pyID",
"PC0"."pystatuswork"
AS "pyStatusWork", TO_CHAR(DATE_TRUNC('day' "PC0"."pxcreatedatetime"::TIMESTAMP),
'YYYYMMDD')
AS "pyDateValue(1)",
"CLONE"."pylabel" AS "pyLabel",
"HOTEL"."srcol4" AS "pyLabel",
"PC0"."pzinskey" AS "pzInsKey"
FROM
pegadata.pc_FSG_Booking_Work "PC0"
INNER JOIN
pegadata.pc_FSG_Booking_Work
"CLONE" ON ( ( "CLONE"."pyid" = "PC0"."pyid" )
AND "CLONE"."pxobjclass" = ?
AND "PC0"."pxobjclass"
IN (? , ? ))
INNER JOIN (
SELECT
"HOTEL"."pyguid" AS "srcol1",
"HOTEL"."brand" AS "srcol2",
"HOTEL"."name" AS "srcol3",
"HOTEL"."pylabel" AS "srcol4",
"HOTEL"."pzinskey" AS "pzinskey"
FROM
pegadata.pr_FSG_Data_Hotel_e0214 "HOTEL"
WHERE "HOTEL"."pxobjclass" = ? ) "HOTEL" ON ( ( "HOTEL"."srcol4" =
"CLONE"."pylabel") AND "CLONE"."pxobjclass" = ? AND "PC0"."pxobjclass" IN (? , ? ))
Note how the first example query in this section included a comparison to an aggregated value, i.e., LI.Price =
(select max(price) from LineItem LI2 ...). This is something that a subreport can do that a regular JOIN cannot do.
Below is another example.
Suppose you want to enforce that only one LineItem subcase within a Purchase Order can be in someone’s
worklist at any given time. Parent case locking could be used to prevent two persons from working on the
Purchase Order at the same time, but this does not prevent simultaneous ownership of LineItem worklist
assignments for the same Purchase Order.
SELECT
LI1.LineItemID from LineItem LI1, (SELECT count(*) as LI2Count
FROM
LineItem LI2,
{Assign-Worklist} WL
WHERE
LI2.PurchaseOrderID = LI1.PurchaseOrderID AND LI2.pzInsKey = WL.pxRefObjectKey) A
WHERE
LI1.pyStatusWork not like 'Resolved%' AND A.LI2Count = 0
The requirement could be changed, for example, to allow up to two (<= 2). In that case, the outer query should
be joined to Assign-WorkBasket so that a LineItem case that is already in someone’s worklist is not returned;
instead, every LineItem case returned by the query must be associated with a workbasket assignment.
Queries that contain complex SQL
ListViews allow direct concatenation of SQL-like syntax that, in turn, is converted to SQL. ListViews do not support
SQL Functions or subreports. For this reason and others, ListViews are deprecated, meaning they should not be
used to define new queries or reports.
There are a number of ways to query data that are not supported by Report Definitions. An example is the
Haversine formula used at the FSG enterprise layer within the provided example Booking application solution.
The query is found on the Browse tab of the FSG-Data-Address HaversineFormula Connect-SQL rule.
SELECT pyGUID AS pyGUID,
Reference AS Reference,
IsFor AS IsFor,
Street AS Street,
City AS City,
State AS State,
PostalCode AS PostalCode,
Country AS Country,
Latitude AS Latitude,
Longitude AS Longitude,
Distance AS Distance
FROM (
SELECT
z.pyguid AS pyGUID,
z.reference AS Reference,
z.isfor AS IsFor,
z.street AS Street,
z.city AS City,
z.state AS State,
z.postalcode AS PostalCode,
z.country AS Country,
z.latitude AS Latitude,
z.longitude AS Longitude,
p.radius,
p.distanceunit
* DEGREES(ACOS(COS(RADIANS(p.latpoint))
* COS(RADIANS(z.latitude))
* COS(RADIANS(p.longpoint - z.longitude))
+ SIN(RADIANS(p.latpoint))
* SIN(RADIANS(z.latitude)))) AS Distance
FROM {Class:FSG-Data-Address} AS z
JOIN ( /* these are the query parameters */
SELECT
{AddressPage.Latitude Decimal } AS latpoint,
{AddressPage.Longitude Decimal } AS longpoint,
{AddressPage.Distance Decimal } AS radius,
{AddressPage.pyNote Decimal } AS distanceunit,
{AddressPage.IsFor } AS isfor ) AS p
ON p.isfor = z.isfor) AS d
WHERE distance <= radius
ORDER BY distance
LIMIT 15
Note the RDB-List step within the Code-Pega-List Connect_SQL_pr_fsg_data_address activity that is sourced by the
D_AddressesWithinDistance Data Page.
It is not possible to define this type of query using a Report Definition because it has two FROM-clause SELECTs, one aliased "z", the other aliased "d". A Connect SQL rule, however, lacks a capability that a Report Definition has: dynamically modifying its filter conditions when a parameter value is empty. Unless a Report Definition is configured to generate "is null" when a parameter lacks a value, Pega drops that filter condition, which, in some cases, can be risky unless a limit is placed on the number of returned rows.
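As a schematic illustration of that behavior, consider a hypothetical City filter on a Report Definition (this is not the actual SQL Pega generates, just the shape of the three outcomes):

-- Param.City = 'Boston'               -> ... WHERE city = 'Boston'
-- Param.City empty, "is null" enabled -> ... WHERE city IS NULL
-- Param.City empty, default behavior  -> the filter is dropped entirely, so
--    without a row limit the query can scan and return the whole table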
Within the HaversineFormula query there is no need to generate the filter conditions dynamically. It does not make sense to execute the query unless a value is supplied for every query parameter, with the exception of the IsFor column, whose value is currently either "HOTEL" or "VENUE".
Care must be taken when using Connect-SQL rules because the column names may not be returned as aliased. For example, despite aliasing the lower-case postalcode column to camel-case PostalCode, the column name is returned all lower-case, the same as it exists in the Postgres database. For this reason the D_AddressesWithinDistance data page calls a post-processing activity named Post_Connect_SQL_pr_fsg_data_address to convert column names to camel case, to match the way property names are spelled within the FSG-Data-Address class.
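This is standard Postgres identifier folding rather than Pega-specific behavior: unquoted aliases fold to lower case, and only double-quoted aliases preserve case. A quick illustration, again assuming the address class maps to pegadata.pr_fsg_data_address:

-- Unquoted alias: Postgres folds the result column name to "postalcode".
SELECT postalcode AS PostalCode FROM pegadata.pr_fsg_data_address;
-- Double-quoted alias: the result column name comes back as "PostalCode".
SELECT postalcode AS "PostalCode" FROM pegadata.pr_fsg_data_address;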
The java step in Post_Connect_SQL_pr_fsg_data_address is brute force. Ideally, the External Mapping tab of the FSG-Data-Address Rule-Obj-Class rule could be leveraged instead; note the pxClassSQL.pxColumnMap PageList within that class rule. Note: in the future, java steps in activities will be forbidden. The code in this java step should be moved to a Rule-Obj-Function.
<?xml version="1.0" encoding="UTF-8"?>
<pxClassSQL>
<pxObjClass>Embed-ClassSQL</pxObjClass>
<pxColumnMap REPEATINGTYPE="PageList">
<rowdata REPEATINGINDEX="1">
<pxObjClass>Embed-ColumnMapping</pxObjClass>
<pxColumnName>city</pxColumnName>
<pyPropertyName_RH>.City</pyPropertyName_RH>
<pyColumnName_RH>city</pyColumnName_RH>
<pxPropertyName>.City</pxPropertyName>
</rowdata>
<rowdata REPEATINGINDEX="2">
<pxObjClass>Embed-ColumnMapping</pxObjClass>
<pxColumnName>country</pxColumnName>
<pyPropertyName_RH>.Country</pyPropertyName_RH>
<pyColumnName_RH>country</pyColumnName_RH>
<pxPropertyName>.Country</pxPropertyName>
</rowdata>
<rowdata REPEATINGINDEX="3">
<pxObjClass>Embed-ColumnMapping</pxObjClass>
<pxColumnName>distance</pxColumnName>
<pyPropertyName_RH>.Distance</pyPropertyName_RH>
<pyColumnName_RH>distance</pyColumnName_RH>
<pxPropertyName>.Distance</pxPropertyName>
</rowdata>
</pxColumnMap>
<!-- Etc.., etc.. -->
</pxClassSQL>
Query design quiz
Question # 1
What are two ways to define a trend report with a chart that shows the number of cases completed each day as
well as the number of cases resolved each day? (Choose Two)
C | # | Response | Justification
C | 1 | Define and report against a history table | A trend report could be defined against this history table by itself and would be the preferred solution.
  | 2 | Use complex SQL requests | Using complex SQL by itself could not yield the desired result, as some of the requisite data points are undefined and it is not possible to define a JOIN based on the result of a SQL function such as day().
C | 3 | Define and report against a Timeline table | Defining and populating a Timeline table that contains the dates to plot against could generate the desired trend report, albeit not ideal.
  | 4 | Use a series of subreports | Defining a series of subreports would not help, as some of the data points are undefined and it is not possible to define a JOIN based on the result of a SQL function such as day().
User experience design and performance
Introduction to user experience design and performance
Application performance and user experience are naturally related. If the application’s user interface is not
responsive or performance is sluggish, the user experience is poor.
After this lesson, you should be able to:
l Identify application functionality that can directly impact the user experience
l Describe strategies for optimizing performance from the perspective of the user
l Design the user experience to optimize performance
How to identify functionality that impacts UX
A good user experience is not only related to the construction of user views and templates. Thoughtful use of
features such as background processing can impact the user experience and overall performance of the
application. Consider these areas of application functionality that directly impact the user experience and use
the following guidelines to ensure the application provides the best possible experience for your end users:
l Background Processing
l SOR Pattern
l External Integration
l Network Latency
l Large Data Sets
l User Feedback
l Responsiveness
Background processing
Moving required processing into the background so the end user does not have to wait can improve the perceived performance of an application. This can also be referred to as asynchronous processing. While the user is busy reading a screen or completing another task, the application performs required tasks in another requestor session, apart from the user's requestor session.
Scenarios where background processing can be leveraged include:
l SOR Pattern
l External Integration
l Network latency

Leverage the system of record (SOR) pattern
When leveraging the SOR pattern, required data is not kept with the case itself. Instead, the data is retrieved when needed at run time from an external SOR. In these scenarios, you can defer the load of the external data until after the initial screen is loaded.

Design for realistic integration response times
External integration with an SOR is almost always a requirement. When integrating with external systems,
establish realistic expectations on the amount of time needed to load data retrieved from the external systems.
By leveraging background processing and asynchronous processing, you can quickly render an initial user
interface. This technique allows the end user to start working while the application gathers additional data. The
application then displays the data as soon as it becomes available.
Estimate network latency accurately
Never underestimate the impact that network latency can have on the amount of time required to retrieve data
from external systems. Whenever possible, colocate the Pega Database repository on the same high-speed
network as the application servers running the Pega Platform or engine. Keep systems you are integrating with
as close as possible to your data center. If the system you are integrating with is located very far away, consider
using replicated data from a nearby data warehouse or proxy system. You can also use Edge Servers for web
content that is referenced frequently.
Avoid usage of large data sets
When it comes to data, less is always better. Avoid retrieving large data sets. Keep your result sets as small as
possible. Only retrieve the data that is immediately required for the task at hand. Consider aggregating data sets
ahead of time by introducing data warehouses and multidimensional databases where quick response times are
critical.

Provide meaningful user feedback
If it will take longer than a couple of seconds to load a screen, provide the end user meaningful feedback about how much time is needed to complete the processing. Give the end user something else to do while the processing is taking place. Also, you could design the interaction so the user can opt to perform it in the background or cancel it if it is taking too long. Always keep the end user in control.
Leverage responsive UI
As the form factor changes, leverage Pega's Responsive UI support and only show the user what is absolutely
necessary to complete the task at hand. Avoid creating the "Everything View" that tries to show every piece of
information all at once. Move unnecessary or optional information away from the screen as the screen size is
reduced. Keep your User Interfaces highly specialized and focused on individual and specific tasks.
Other performance issues that affect the user experience
Many application performance issues affect the user experience directly or indirectly. Because the application
shares resources across users and background processes, an issue in another part of the application can affect
the individual end user in some way.
Sometimes, performance issues show up only after the application has been in production for several weeks and
users over a period of days start to experience a performance degradation. Use performance tools and
troubleshooting techniques to identify causes of poor performance. For more information, see the following Pega
Community article: Support Play: A methodology for troubleshooting performance.
User experience performance optimization strategies
Poor application performance can prevent users from meeting service levels and can seriously impact the overall
user experience. Careful design and usage of the UI components, case types, and service calls can help avoid
performance issues.
Apply the following strategies to provide an optimal user experience:
l Use layout groups to divide a rich user interface
l Leverage case management to divide complex cases
l Run service calls asynchronously
l Investigate PEGA0001 alerts
Use layout groups to divide a rich user interface
Loading a single, complex form into the user's session can impact performance. Design the user interface to
allow the user to focus on completing a specific task. Techniques include:
l Breaking complex forms into logical modules that allow users to focus on an immediate goal
l Using layout groups to break long forms into logical groups
Once the form is divided into layout groups, design each layout group to use the deferred load feature. This approach allows only the data for the initially visible layout group to be loaded when the form loads. The data for the other layout groups loads dynamically as the user selects each layout group in the browser.
Important: Use data pages as the source of data for deferred loading. Cache and reuse the data sets using
appropriately sized and scoped data pages.
Leverage case management to divide complex cases
Dividing complex cases into smaller, more manageable subcases allows you to process each subcase
independently. This technique avoids loading a single large case into the user's session. For example, optimistic
locking avoids the need to open parent cases when a subcase is opened. Each subcase opened in a separate
PRThread instance executes asynchronously and independently of other cases.
Run service calls asynchronously
For any long-running service call, use the run-in-parallel option on the Connect method. This option allows the
service call to run in a separate requestor session. When the service call runs in another requestor, users are not
waiting for the service call to finish and can continue with other tasks.
Investigate PEGA0001 alerts
PEGA0001 alerts typically mask other underlying performance issues that can negatively impact the user
experience. Leverage one of the Pega-provided performance tools such as AES or PDC to identify the underlying
performance issue. Once you have identified the cause of the performance problem, redesign and implement a
solution to address the problem.
Examples of alerts that are behind the PEGA0001 alert messages include:
l PEGA0005 — Query time exceeds limit
l PEGA0020 — Total connect interaction time exceeds limit
l PEGA0026 — Time to connect to database exceeds limit
Tip: In general, avoid retrieving large result sets. Only retrieve the minimal information required for the task at
hand.
Design practices to avoid
Avoid these design practices:
l Misuse of list controls
l Uncoordinated parallel development
Misuse of list controls
Misuse of list controls is a common problem that can easily be avoided during the design of a solution. Configure autocomplete controls to fetch data from data pages that never return more than 100 rows of data. Limit drop-down list boxes to no more than 50 rows of data.
Autocomplete controls negatively impact the user experience if:
l The potential result set is larger than 100 rows
l All the results in the list start with the same three characters
Reduce the result set size for all list controls; if more than 100 rows are needed, use a different UI component or data lookup mechanism. Source list controls from data pages that load asynchronously.
Uncoordinated parallel development
Uncoordinated parallel development efforts can also impact performance for the user. For example, multiple
development teams could invoke the same web service returning the same result set multiple times and within
seconds of each other. Multiple service calls returning the same result set waste CPU cycles and memory. To
avoid this situation, devise a strategy for the development teams to coordinate web service calls through use of
shared data pages.
How to design the user experience to optimize performance
The best way to prevent performance issues is to design the application to avoid them in the first place. Use the
following techniques to optimize user interface performance and for the best possible user experience:
l Leverage asynchronous and background processing
l Implement the system of record (SOR) data retrieval pattern
l Utilize deferred data loading
l Paginate large result sets
l Leverage data pages
l Use repeating dynamic layouts
l Maximize client-side expressions
l Use single-page dynamic containers
l Utilize Layout refresh and Optimized code
l Leverage new Pega Platform user interface features
Asynchronous processing options
Pega provides multiple mechanisms to run processing in parallel or asynchronously when integrating with
external systems. For instance, you may initiate a call to a back-end system and continue your processing without
blocking and waiting for the external system’s response. This is useful when the external system processing time
can be long and when the result of the processing is not needed immediately. This topic presents how the
following Pega Platform features can be leveraged to improve the user experience.
Run connectors in parallel
Imagine the following scenario. In a claims application, you retrieve the data and policies for customers who call
to file a claim. The data is retrieved using two connectors: GetCustomerData and GetPolicyList. To speed up the
loading of customer data, you run the connectors in parallel.
You can use the Run in Parallel option to accomplish this. In this case, each connector runs as a child requestor.
The calling activity, the parent requestor, retains control of the session and does not have to wait for each
connector, in succession, to receive its response. The Run in Parallel feature is useful when subsequent tasks can
be performed while waiting for multiple connectors to retrieve their individual responses.
Execute connectors using queued mode
Imagine the following scenario. You have a SOAP connector called UpdateCustomerData that updates a customer
record in an external system. The response returned by the service is irrelevant for subsequent processing.
Since the customer record might be temporarily locked by other applications, you want to retry the execution if it fails.
In addition to being executed synchronously and in parallel, the SOAP, REST, SAP, and HTTP connectors can also
be executed in queue mode. Select queueing in the Processing Options section on the connector record’s
Service tab to configure queuing.
When queueing is used, each request is queued, and then processed later in the background by an agent. The
next time that the agent associated with the queue runs, it attempts to execute the request. The queueing
characteristics are defined in the connector's Request Processor.
Background processing
Background processing can also be leveraged to allow an initial screen to load, which allows the user to continue working while additional detailed information is retrieved. This strategy is particularly useful when using the SOR design pattern.
Pagination
Pagination can be leveraged to allow long-running reports to retrieve just enough information to load the first page of the report. As the user scrolls down to view the report, additional records are retrieved and displayed as they are needed. Use appropriate pagination settings on grids and repeating dynamic layouts to reduce the amount of markup used in the UI.
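Conceptually, each incremental fetch is just a windowed query against the same ordered result set. A generic sketch of the idea follows; this is not the SQL Pega actually generates, and the standard work-table and column names are used purely for illustration:

-- First page: fetch only the first 50 rows.
SELECT pyID, pyLabel
FROM pegadata.pc_work
ORDER BY pxCreateDateTime DESC
LIMIT 50 OFFSET 0;

-- As the user scrolls, the next window is fetched on demand.
SELECT pyID, pyLabel
FROM pegadata.pc_work
ORDER BY pxCreateDateTime DESC
LIMIT 50 OFFSET 50;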
Deferred data loading
Deferred data loading can be used to significantly improve the perceived performance of a user interface. Through either asynchronous or background processing, the screen is rendered almost immediately, allowing the user to get on with the task at hand while additional information is retrieved and displayed as it becomes available. Use defer load options to display secondary content.
Data pages
Use data pages as the source for list-based controls. Data pages act as a cached data source that can be scoped, invalidated based on configured criteria, and garbage collected.
Repeating dynamic layouts
Use repeating dynamic layouts for nontabular lists. Avoid multiple nested repeating dynamic layouts.
Consolidate server-side processing
Ensure that multiple actions that are processed on the server are bundled together so that there is only a single
round trip.
Client-side expressions
Use client-side expressions instead of server-side expressions whenever possible. Whenever expressions can be
validated on the client, they run on the browser. This is typically true for visibility conditions, disabled conditions,
and required conditions. Enable the Client Side Validation check box (only visible when you have added an
expression), and then tab out of the field.
Single page dynamic containers
Use non-iFrame (iFrame-free) single page dynamic containers (this is the default setting in Pega 7). Single page dynamic containers are much lighter on the browser and enable better design and web-like interaction.
Only embed sections if they are truly being reused. Pega copies the configuration of included sections into the sections in which they are included. It is more efficient to reference a section by simply dragging and dropping the section into a cell.
Layout refresh and optimized code
Use refresh layout instead of refresh section to refresh only what is required. To reduce markup, use the
Optimized code based settings in the dynamic layouts Presentation tab.
Pega Platform features
Leverage all the newest technologies in Pega Platform for better client-side performance and smaller markup.
The newest user interface technology is all HTML 5 and CSS 3.0.
Take advantage of icon fonts and new menus. Use the least number of layouts and controls possible, and always
use the latest components and configurations available.
Use screen layouts, layout groups, dynamic layouts, dynamic containers, and repeating dynamic layouts. Avoid legacy accordion, column repeat, tabbed repeat, and freeform tables, as they run in quirks mode and should not be used. Use layout groups rather than legacy tabs, which have been deprecated. Also avoid inline styles (not recommended although still available), smart layouts, and panel sets.
UX Design and Performance Quiz
Question 1
Which two processing options can help improve the user experience? (Choose Two)
C | # | Response | Justification
C | 1 | Run connectors in parallel | Running connectors in parallel helps complete complex tasks more quickly by running them asynchronously.
  | 2 | Run processes synchronously | Running complex processes synchronously can slow down the user experience. A better approach is to run them in parallel or asynchronously.
C | 3 | Deferred data loading | Deferred data loading can be used to significantly improve the perceived performance of a user interface.
  | 4 | Avoid requeueing failed responses | Requeueing failed responses helps ensure good feedback during transient errors.
Question 2
Which two Pega features can help improve the user experience? (Choose Two)
C | # | Response | Justification
  | 1 | Smart layouts | Avoid using legacy controls and layouts.
C | 2 | Layout groups | Layout groups are the official replacement in Pega Platform for legacy tabs. Legacy tabs run in quirks mode, which is slower and not HTML 5.0 compliant.
C | 3 | Repeating dynamic layouts | Use this new capability introduced in Pega Platform for better client-side performance.
  | 4 | Panel sets | Avoid using legacy controls and layouts.
Question 3
Which two techniques can improve performance and the related user experience? (Choose Two)
C | # | Response | Justification
  | 1 | Maximize server-side processing | Minimize round trips to the server by ensuring that multiple actions processed on the server are bundled together.
  | 2 | Minimize client-side expressions | Use client-side expressions instead of server-side expressions whenever possible.
C | 3 | Use single page dynamic containers | Single page dynamic containers, over legacy i-frames, enable better design and web-like interaction.
C | 4 | Use the Optimized code option for layout mode in dynamic layouts | Using the Optimized code layout mode setting in the dynamic layout's Presentation tab helps to reduce markup.
Conducting usability testing
Introduction to conducting usability testing
Usability testing is a method for determining how easy an application is to use by testing the application with
real users in a controlled setting. Usability testing is an important part of designing and implementing a good
user experience.
After this lesson, you should be able to:
l Discuss the importance of usability testing
l Plan for usability testing
l Describe the six stages of usability testing
Usability testing
Planning for, and building usability testing into, the project plan is essential to designing a positive user
experience. The goal of usability testing is to better understand how users interact with the application, and to
improve the application based on the results of the usability tests.
Usability testing involves interacting with real-world participants—the users of the application—to obtain
unbiased feedback and usability data. During usability testing, you collect both quantitative and qualitative data
concerning the user experience. The quantitative data is generally the most valuable. An example of quantitative
data is: 73% of users performed the given task in 2 seconds or less.
Sometimes qualitative data, such as a user’s opinion of the software, is also collected. An example of qualitative
data is: 67% of users tested agreed with the statement, “I felt the app was easy to use".
Who participates in usability testing?
Effective usability testing involves typical users of your application. This is critical in making usability testing
work. Usability testing cannot be substituted by working with project managers, project sponsors, stakeholders,
or business architects.
Why is usability testing so important?
Usability testing validates the ease of use of the user interface design so that time and resources are not spent
on developing a poor user interface. Usability testing is done to collect valuable design data in the least amount of time possible. The output of usability testing helps identify usability issues. To make the testing effective,
establish goals before you start planning.
What are some of the goals to set?
Set measurable metrics or Key Performance Indicators (KPIs) for quantitative testing. Some examples include:
l Reduce the number of steps to complete the approval process by 10%.
l Reduce the number of errors on a given transaction by 20%.
l Reduce response times throughout the application by at least 30%.
Who performs usability testing?
Usability testing in an enterprise environment is a serious task. The best person to perform it is someone who
has experience conducting these tests with end users and documenting the feedback. Pega offers assistance
with usability testing and recommends that you engage with Pega to review all screens before starting to develop
them.
When do you perform usability testing?
Usability testing is conducted periodically throughout the software development life cycle and can help identify
issues early in the software development life cycle.
How to conduct usability testing
Usability testing is a method for determining how easy an application is to use by testing the application with
real users in a controlled setting. Users are asked to complete tasks to see where they encounter problems and
experience confusion. Observers record the results of the test and communicate those results to the
development team or product owner.
To conduct a usability test of your application, you need to recruit test participants. Select test participants who
have the same skills as your production application users.
After you plan the usability test and recruit usability test participants, you conduct the usability test.
Usability testing typically involves six stages:
1. Select tasks to test
2. Document sequence of testing steps
3. Decide on the testing method
4. Select participants
5. Conduct tests
6. Compile results
Select tasks to test
Work with the product owner to select the tasks to test. The selected tasks cover the most common and
important use cases. For example, you are conducting usability testing for a time-sheet application.
The product owner identifies three tasks for usability testing:
l Users enter their hours worked in the time-sheet application, and then submit the time sheet to their
manager for approval.
l Users can view their vacation balance.
l Managers review and can approve time sheets.
Document sequence of testing steps
Break down each of the identified tasks into a sequence of steps and document them. Then, give this document
to the usability testing participants.
The following steps explain how to enter a time sheet.
1. Use the Timesheet application to enter your hours for the week.
2. Submit the time sheet for your manager’s review and approval.
Use the following table as a reference.
Day of the week Activity
Mon Attended training on Agile Methodology
Tue Attended training on Agile Methodology
Wed Attended training on Agile Methodology
Thu General Work
Fri General Work
3. Begin by adding the appropriate time codes for training and general work for each day of the week.
4. Review, and then submit your completed time sheet.
Decide on a testing method
Usability testing can be conducted in an unmoderated or moderated setting. The brevity and quality of your testing instructions are critical when conducting unmoderated testing because no one can guide the testing participants if they need help. The benefit of conducting moderated testing is that you can immediately respond to user behavior and ask questions.
Select participants
Consult with the product owner to get the list of end user participants who will participate in performing the
usability testing.
Conduct testing
Ensure that the usability testing participants understand the tasks and the sequence of steps. You want the
usability testing participants to perform these tasks without assistance. Monitor the participants as they perform
the testing, take notes, and measure all interactions. While performing the testing, participants should also take notes based on what they observe.
Compile feedback
Compile feedback based on the notes provided by the testing participants, and further discussion with the
participants. Consider measuring both user performance and user preference metrics. User performance and
preference do not always match. Often users perform poorly when using a new application, but their preference
ratings may be high. Conversely, they may perform well but their preference ratings are low.
Conducting usability testing quiz
Question 1
Which three of the following steps are part of usability testing? (Choose Three)
C | # | Response | Justification
C | 1 | Documenting the recommendations needed to fix the issue | One of the outcomes of usability testing is to document recommended improvements.
C | 2 | Identifying the users of the application | Usability testing must involve the users of the application to be effective.
C | 3 | Putting together a list of tasks that need to be tested | Identifying key tasks to test is an important part of usability testing.
  | 4 | Locking the ruleset before usability testing | Usability testing should be performed during DCO and design.
Question 2
Select two reasons why usability testing is important. (Choose Two).
C | # | Response | Justification
  | 1 | Usability testing allows project managers to subjectively define the application's usability | Usability testing cannot be substituted by working with project managers, project sponsors, stakeholders, or business architects.
C | 2 | Usability testing validates the ease of use of the user interface design | Usability testing validates the ease of use of the user interface design so that time and resources are not spent on developing a poor user interface.
C | 3 | Usability testing is done to collect valuable design data | Usability testing is done to collect valuable design data in the least amount of time possible. The output of usability testing helps identify usability issues.
  | 4 | Usability testing is done after the application is deployed to production and is used to compute a net promoter score | Usability testing is conducted periodically throughout the software development life cycle and can help identify issues early in the software development life cycle.
Question 3
What are the last three stages of usability testing? (Choose Three)
C | # | Response | Justification
  | 1 | Select the testing location | Although the testing location is interesting, it is not part of usability testing.
C | 2 | Select participants | Consult with the product owner to get the list of end user participants who will participate in performing the usability testing.
C | 3 | Conduct testing | Monitor the participants as they perform the testing, take notes, and measure all interactions.
C | 4 | Compile feedback | Compile feedback based on the notes provided by the testing participants, and further discussion with the participants.
Designing background processing
Introduction to designing background processing
The design of background processing is crucial to meeting business service levels and automating processes. Background processes must be carefully designed to ensure all work can be completed within the business service levels. Pega Platform provides several features that can be leveraged to provide an optimal solution.
After this lesson, you should be able to:
l Evaluate background processing design options
l Configure asynchronous processing for integration
l Optimize default agents for your application
Background processing options
Pega Platform supports several options for background processing. You can use standard and advanced agents,
service level agreements (SLAs), and the Wait shape to design background processing in your application.
Queue Processors
Queue Processors allow you to focus on configuring the specific operations to perform. When using Queue Processors, Pega Platform provides built-in capabilities for error handling, queuing and dequeuing, and commits. Queue Processors are often used in an application that stems from a common framework, or are used by the Pega Platform itself.
By default, Queue Processors run in the security context of the ASYNCPROCESSOR requestor type. When configuring the Queue-For-Processing method in an Activity, or the Run in Background step in a Stage, it is possible to specify an alternate Access Group. It is also possible for the activity that the Queue Processor runs to change the Access Group. An example is the Rule-Test-Suite pzInitiateTestSuiteRun activity executed by the pzInitiateTestSuiteRun Queue Processor.
Queues are shared across all nodes. The throughput can be improved by leveraging multiple Queue Processors
on separate nodes to process the items in a queue.
Standard agents
Standard agents are generally preferred when you have items queued for processing. Standard agents allow you
to focus on configuring the specific operations to perform. When using standard agents, Pega Platform provides
built-in capabilities for error handling, queuing and dequeuing, and commits.
By default, standard agents run in the security context of the person who queued the task. This approach can be
advantageous in a situation where users with different access groups leverage the same agent. Standard agents
are often used in an application with many implementations that stem from a common framework or in default
agents provided by Pega Platform. The Access Group setting on an Agents rule only applies to Advanced Agents
which are not queued. To always run a standard agent in a given security context, you need to switch the queued
Access Group by overriding the System-Default-EstablishContext activity and invoking the setActiveAccessGroup() java method within that activity.
Queues are shared across all nodes. The throughput can be improved by leveraging multiple standard agents on
separate nodes to process the items in a queue.
Note: There are several examples of default agents using the standard mode. One example is the agent processing SLAs, ServiceLevelEvents, in the Pega-ProCom ruleset.
KNOWLEDGE CHECK
As part of an underwriting process, the application must generate a risk factor for a loan and insert the risk factor into the Loan case. The risk factor generation is an intensive calculation that requires several minutes to run. The calculation slows down the environment. You would like to have all risk factor calculations run automatically between the hours of 10:00 P.M. and 6:00 A.M. to avoid the slowdown during daytime working hours. Design a solution to support this requirement.
Use a Delayed Dedicated Queue Processor. Set the Date Time for Processing to 10:00 P.M.


Create a standard agent to perform the calculation. Include a step in the flow to queue the case for the agent.
Pause the case processing and wait for the agent to complete processing.

This solution delays the loan process and waits for the agent to resume the flow. It can take advantage of other processing agents if enabled on other nodes, which may reduce the time it takes to process all of the loan risk assessments.

KNOWLEDGE CHECK
You need to automate a claim adjudication process in which files containing claims are parsed, verified, and adjudicated. Cases for claims that pass those initial steps are automatically created for further processing. A single file containing up to 1,000 claims is received daily before 5:00 P.M. Claim verification is simple and takes a few milliseconds, but claim adjudication might take up to five minutes.
In an activity, invoke the Queue-For-Processing method against each claim.

OR

Create a standard agent to perform the adjudication. Include a step in the flow to queue the case for the agent. Pause the case processing and wait for the agent to complete processing.

Using the File service activity to only verify claims and then offload the adjudication task to the agent is preferred because it does not significantly impact the intake process. It can also take advantage of multinode processing if available. Furthermore, the modular design of the tasks would allow for reuse and extensibility if required in the future.
However, if you use the same file service activity for claim adjudication, it impacts the time required to process the file. Processing is only available on a single node, and there is little control over the time frame in which the file service runs. Extensibility and error handling might also be more challenging. Consideration must be given to the time an agent requires to perform the task. For example, the time required to process 1,000 claims at five minutes each by a single agent is 5,000 minutes (83.33 hours). This is not suitable for a single agent running on a single node to complete the task. A system with the agent enabled on eight nodes could perform the task in the off-hours (5,000 / 8 = 625 minutes, roughly 10.5 hours per node). If only a single node is available, an alternative solution is to split the file up into smaller parts, which are then scheduled for different agents (assuming there is enough CPU available for each agent to perform its task).
Job Schedulers
Use Job Schedulers when there is no requirement to queue a recurring task. Unlike Queue Processors, the Job
Scheduler not only must decide which records to process, it also must establish each record’s step page context
before performing work on that record. For example, if you need to generate statistics every midnight for
reporting purposes, the output of a report definition can determine the list of items to process. The Job
Scheduler must then operate on each item in the list.
Unlike Queue Processors, a Job Scheduler needs to decide whether a record needs to be locked. It also must
decide whether it needs to commit records that have been updated using Obj-Save. If, say, a Job Scheduler
creates a case, or opens a case with a lock and causes it to move to a new assignment or complete its life cycle, it
would not be necessary for the Job Scheduler to issue a Commit.
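For illustration, a nightly statistics Job Scheduler might derive its work list from a query along these lines. The sketch below is written against a standard Pega work table; the table name, columns, and filter are assumptions for illustration rather than a prescribed implementation:

-- Hypothetical work-list selection: cases resolved during the previous day.
SELECT pzInsKey, pyID
FROM pegadata.pc_work
WHERE pyStatusWork LIKE 'Resolved%'
  AND pxUpdateDateTime >= CURRENT_DATE - INTERVAL '1 day'
  AND pxUpdateDateTime < CURRENT_DATE;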
Advanced agents
Use Advanced agents when there is no requirement to queue and perform a recurring task. Advanced agents can also be used when there is a need for more complex queue processing. When advanced agents perform processing on items that are not queued, the advanced agent must determine the work that is to be performed. For example, if you need to generate statistics every midnight for reporting purposes, the output of a report definition can determine the list of items to process.
Tip: There are several examples of default agents using the advanced mode, including the agent for full text
search incremental indexing FTSIncrementalIndexer in the Pega-SearchEngine ruleset.
In situations where an advanced agent uses queuing, all queuing operations must be handled in the agent
activity.
Tip: The default agent ProcessServiceQueue in the Pega-IntSvcs ruleset is an example of an advanced agent processing queued items.
When running on a multinode configuration, configure agent schedules so that the advanced agents coordinate
their efforts. To coordinate agents, select the advanced settings Run this agent on only one node at a time
and Delay next run of agent across the cluster by specified time period.
KNOWLEDGE CHECK
ABC Company is a distributor of discount wines and uses Pega Platform for order tracking. There are up to 100 orders per day, with up to 40 different line items in each order specifying the product and quantity. There are up to 5,000 varieties of wines that continuously change over time as new wines are added to and dropped from the list. ABC Company wants to extend the functionality of the order tracking application to determine recent hot-selling items by recording the top 10 items ordered by volume each day. This information is populated in a table and used to ease historical reporting.
Use Job Schedulers.

OR

An advanced agent runs after the close of business each day, and it performs the following tasks:
• Opens all order cases for that day and tabulates the order volume for each item type
• Determines the top 10 items ordered and records these in the historical reporting table

The agent activity should leverage a report to easily retrieve and sort the number of items ordered in a day.
When recording values in the historical table, a commit and error handling step must be included in the
activity.
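Expressed as raw SQL, the daily tabulation behind that report could look something like the following sketch; OrderLineItem, ItemType, Quantity, and OrderDate are assumed names for illustration, not the actual order-tracking schema:

-- Tabulate today's order volume per item type and keep the top 10.
SELECT ItemType, SUM(Quantity) AS TotalOrdered
FROM OrderLineItem
WHERE OrderDate = CURRENT_DATE
GROUP BY ItemType
ORDER BY TotalOrdered DESC
LIMIT 10;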
Service level agreements (SLAs)
Using SLAs is a viable alternative to using an agent in some situations. The escalation activity in an SLA provides
a method for you to invoke agent functionality without creating a new agent. For example, if you need to provide
a solution to conditionally add a subcase at a specific time in the future, then adding a parallel step in the main
case incorporating an assignment with an SLA and escalation activity can perform this action.
Tip: The standard connector error handler flow Work-.ConnectionProblem leverages an SLA to retry failed
connections to external systems.
An SLA must always be initiated in the context of a case. Any delay in SLA processing impacts the timeliness of executing the escalation activity.
The SLA Assignment Ready setting allows you to control when the assignment is ready to be processed by the
SLA agent. For example, you can create an assignment today, but configure it to process tomorrow. An operator
can still access the assignment if there is direct access to the assignment through a worklist or workbasket.
Note: Pega Platform records the assignment ready value in the queue item when the assignment is created. If the
assignment ready value is updated, the assignment must be recreated for the SLA to act on the updated value.
Wait shape
The Wait shape provides a viable solution in place of creating a new agent or using an SLA. The Wait shape can
only be applied against a case, within a step in a flow, and waits for a single event (timer or case status) before allowing the case to advance. A single-event trigger applied against a case represents the most suitable use case for the Wait shape; the desired case functionality at the designated time or status follows the Wait shape execution.
Within the provided FSG Booking application there is a good example of where a timer Wait shape could be used. The timer can be used in a loop-back polling situation, where a user may want to have an operation executed immediately within the loop-back. In this example, a user may want to poll for the current weather forecast instead of waiting for the next automated retrieval to occur. This loop-back can be implemented in parallel to a user task such as flagging completion of the weather-preparation setup and teardown tasks. It would be overly complex to update a Queue Processor's record to fire as soon as possible, then have to wait several seconds to see the result. As stated earlier, an SLA should not be used for polling or periodic update situations.
Asynchronous integration
Pega Platform provides multiple mechanisms to perform processing asynchronously. For instance, an application
may initiate a call to a back-end system and continue processing without blocking and waiting for the external
system’s response. This approach is useful when the external system processing time can be an issue and when
the result of the processing is not required immediately. A similar feature is also available for services allowing
you to queue an incoming request.

Commonly Used Asynchronous Integration Approaches
Commonly used asynchronous approaches include use of the Load-DataPage method and use of the Run-In-Parallel option on the Connect-* methods together with the Connect-Wait method.
Load-DataPage
An optimal design pattern for any process that selects filtered rows from the same overall record set could be an
Asynchronous Data Page. Retrieving the same large record set over and over again is a waste of processing
resources.
Running a Connector in the Background using Run-In-Parallel and Connect-Wait
Most connector rules have the capability to run in parallel by invoking the connectors from an activity using the
Connect-* methods with the RunInParallel option selected. When the run in parallel option is selected, a
connector runs as a child requestor. The calling activity continues the execution of subsequent steps. Use the
Connect-Wait method to join the current requestor session with the child requestor sessions.
Note: If you configure several connectors to run in parallel, ensure the response data is mapped to separate
clipboard pages, and error handling is set up.
If a slow-running connector is used to source a data page, the data page can be preloaded using the Load-
DataPage method in an activity to ensure the data is available without delay when needed. Grouping several
Load-DataPage requestors by specifying a PoolID is possible. Use the Connect-Wait method to wait for a specified
interval, or until all requestors with the same PoolID have finished loading data.

Less Commonly Used Asynchronous Integration Approaches
Less commonly used asynchronous integration approaches include asynchronous service processing and
asynchronous connector processing.
Asynchronous service processing
Most service types support asynchronous processing. Email and JSR94 Services are exceptions. The service types
that support asynchronous processing leverage the standard agent queue. These service rules can be
configured to run asynchronously or to perform the request asynchronously after the initial attempt to invoke
the service synchronously fails. In both cases, a queue item ID that identifies the queued request is returned to
the calling application. This item ID corresponds to the queued item that records the information and state of
the queued request. Once the service request is queued, the ProcessServiceQueue agent in the Pega-IntSvcs
ruleset processes the queued item and invokes the service. The results of the service request are stored in the queued item instance, and the service request is kept in the queue until the results are retrieved.
In the meantime, the calling application that initiated the service request stores the queue item ID and continues
its execution. In most cases, the calling application calls back later with the queue item ID to retrieve the results
of the queued service request. The standard activity @baseclass.GetExecutionRequest is used as a service activity
by the service to retrieve the result of the queued service.
When configuring this option for the service, a service request processor that determines the queuing and dequeuing options must be created. The ProcessServiceQueue agent uses this information to perform its tasks.
Asynchronous connector processing
Several connector rules offer an asynchronous execution mode through the queue functionality similar to
asynchronous services. When leveraging this capability, the connector request is stored in a queued item for the
ProcessConnectQueue agent in the Pega-IntSvcs ruleset to make the call to the service at a later time. The queued
connector operates in a fire-and-forget style. This means that there is no response immediately available from
the connector. Before choosing to use this asynchronous processing mechanism, assess whether the fire-and-
forget style is suitable for your requirements.
A connector request processor must also be configured for the asynchronous mode of operation. This
configuration is similar to asynchronous service configuration, with the difference being the class of the queued
object.
KNOWLEDGE CHECK
In which situation would you consider asynchronous integration?
When the response is not immediately required.
Default agents
When Pega Platform is initially installed, many default agents are configured to run in the system (similar to
services configured to run in a computer OS). Review and tune the agent configurations on a production system
because there are default agents that:
l Are unnecessary for most applications because the agents implement legacy or seldom-used features
l Should not run in production
l Run at inappropriate times by default
l Run more frequently than needed, or not frequently enough
l Run on all nodes by default, but should only run on one node
For example, by default, there are several Pega-DecisionEngine agents configured to run in the system. Disable
these agents if decisioning is not applicable to the application(s). Enable some agents only in a development or
QA environment, such as the Pega-AutoTest agents. Some agents are designed to run on a single node in a
multinode configuration.
A complete review of agents and their configuration settings is available in the following Pega Community article: Working with Agents. Because these agents are in locked rulesets, they cannot be modified. To change the configuration for these agents, update the agent schedules generated from the agents rule.
KNOWLEDGE CHECK
Why is it important to review and tune default agents?
Because there might be agents that should not run on the environment or that need to be tuned to fit the application.
Designing background processing quiz
Question 1
Select the true statement regarding agents.
C | # | Response | Justification
C | 1 | Commits must be issued in advanced agents' activities. | Advanced agents must handle their own commits.
  | 2 | An advanced agent cannot be used to process a queue. | Advanced agents can process a queue, but this must be coded in the agent activity.
  | 3 | In a multinode system, standard agents are always preferred. | There are use cases where an advanced agent would be preferred in a multinode system.
  | 4 | An advanced agent should be enabled on only a single node in a multinode system. | Advanced agents can operate on multiple nodes, but the agent activity must be carefully designed to avoid conflicts.
Question 2
Select the true statement regarding SLAs and Wait shapes.
C | # | Response | Justification
C | 1 | An SLA can be used to replace a custom agent in some situations. | The escalation activity in a custom SLA can be used to perform a scheduled task, replacing an agent.
  | 2 | An SLA is a special type of advanced agent. | An SLA is a special type of standard agent.
  | 3 | Wait shapes completely replace SLA functionality. | Wait shapes do not have escalation activities, and cannot completely replace an SLA.
  | 4 | A Wait shape escalation activity can advance the case in a flow. | Wait shapes do not have escalation.
Question 3
Select the two true statements regarding background processing. (Choose Two)
C | # | Response | Justification
C | 1 | Background processing involves separate requestors. | Background processing always involves separate requestors.
  | 2 | Parallel flows are an example of leveraging background processing. | A parallel flow runs in the same requestor session as the flow that initiated it.
  | 3 | Locking effects are negated with background processing. | Locking must be considered with background processing, as separate requestors cannot lock an object at the same time.
C | 4 | The Connect-Wait method can be used to test if background processing for connectors is complete. | The Connect-Wait method can be used after the Run-in-parallel option to wait for child requestors to complete.
Designing Pega for the enterprise
Introduction to designing Pega for the enterprise
Pega is enterprise-grade software designed to transform the way organizations do business and the way they
serve customers. The Pega application not only works with existing enterprise technologies, but also leverages
those technologies to provide an end-to-end architectural solution.
After this lesson, you should be able to:
l Describe the design thinking and approach for architecting Pega for the enterprise
l Describe deployment options and how those deployment choices can affect design decisions
l Describe how Pega interacts with existing enterprise technologies
l Describe the design approach when architecting a Pega application
Designing the Pega enterprise application
You can easily be overwhelmed by the number of external applications and channels you need to work with to
deliver a single application to your business users. With this in mind, the following video describes how to design
the end-to-end Pega enterprise application, starting with Pega in the middle of your design.
Transcript
Pega is not just another web application that sits in your library of web or mobile apps. Pega radically transforms
the way organizations do business. Pega can drastically reduce costs, build customer journeys, and fully
automate work.
Your job, as a lead system architect, is to take the digital transformation vision and turn it into business applications that perform real work for real people and drive business outcomes for even the largest of organizations. It is easy to be overwhelmed by all of the existing technologies, channels, integrations to legacy systems, and trying to figure out how Pega fits into the big picture. But, if you start with Pega in the middle,
and work your way out to those channels and systems of record, one application at a time, the vision becomes
reality, release by release.
The entire digital transformation of a large organization is not realized in one release of the application. At the start of a project, you probably only know a portion of what the end-to-end architecture will look like, and that is OK. Instead of thinking channel-in or system-up, think Pega-out: intelligently routing work from channels through back-end systems, then adding automation where it makes sense, and thinking end to end at all times.
Whether you are designing your application with Pega Platform or you are starting with a Pega CRM or industry
application, designing with Pega in the middle and thinking one application at a time allows you to implement
your application based on what you know today, and gives you the freedom and flexibility to design for whatever
comes tomorrow.
For a great demonstration of Pega Infinity, see the following PegaWorld 2019 presentation: Pega Infinity Demo.
Application deployment and design decisions
Pega works the same regardless of the environment in which it is running. Pega runs the same on Pega Cloud as it does on a customer-managed cloud, such as Amazon Web Services (AWS) or Google Cloud, and as it does on-premise. No matter the environment, Pega follows the standard n-tier architecture you may already recognize.
Because Pega is software that writes software, you can run your application anywhere or move it from one
environment to another. For example, you could start building your application on a Pega Cloud environment,
then move your application to an on-premise environment. The application functions the same way.
Consider these two environment variations when designing your application:
l Requirements to deploy an enterprise archive (.ear)
l Requirements to use multitenancy
Enterprise archive (.ear) deployment
Pega can be deployed as an enterprise archive (.ear) or a web archive (.war). You must use an enterprise (.ear)
deployment if you have one or more of the following requirements:
l You need to use message-driven beans (JMS MDB) to handle messaging requirements
l You need to implement two phase commit or transactional integrity across systems
l You need to implement Java Authentication and Authorization Service (JAAS) or Java Enterprise Edition (JEE)
security
l You need to use the Rule-Service-EJB rule type
l You have enterprise requirements that all applications run on a JEE compliant application server
Pega recommends the deployment of an .ear, but you also have the option of using a .war deployment if, for example, you are running on Tomcat.
For a listing of supported application servers and corresponding deployment archive types, see the Platform
Support Guide.
Multitenancy
Multitenancy allows you to run multiple logically separate applications on the same physical hardware. This
allows the use of a shared layer for common processing across all tenants, yet allows for isolation of data and
customization of rules and processes specific to the tenant.
Multitenancy supports the business processing outsourcing (BPO) model using the Software as a Service (SaaS)
infrastructure. For example, assume the shared layer represents a customer service application offered by
ServiceCo. Each partner of ServiceCo is an independent BPO providing services to a distinct set of customers. The
partner (tenant) can customize processes unique to the business and can leverage the infrastructure and shared
rules that ServiceCo provides.
When designing for multitenancy, consider:
l Release management and software development lifecycle – The multitenant provider must establish guidelines for deploying and managing instances and work with tenants to deploy, test, and monitor applications.
l Multitenant application architecture – The multitenant provider must describe the application architecture to
the tenants and explain how tenant layers can be customized.
l System maintenance – Maintenance activities in multitenancy affect all tenants. For example, when a system
patch is applied, all tenants are affected by the patch.
l Tenant life cycle – The multitenant provider and tenant must work together to plan disk and hardware
resources based on the tenant's plans for the application.
l Tenant security – The two administrators in a multitenant environment include the multitenant provider
administrator and the tenant administrator. The multitenant provider sets up the tenant, and manages
security and operations in the shared layer. The tenant administrator manages security and operations of the
tenant layer.
For more information on multitenancy, see the Multitenancy help topic.
KNOWLEDGE CHECK
Name two situations in which you need to make additional design considerations with respect to how the application is deployed.
Enterprise archive (.ear) deployment requirements and use of multitenancy.
Security design principles
Like performance, security is always a concern, no matter what application you work on or design. Whether on
premise or in a cloud environment, failing to secure your application exposes the organization to huge risk, and
can result in serious damage to the organization's reputation. Take security design and implementation very
seriously and start the security model design early.

Your organization's security standards


Your organization likely has standards on how all applications authenticate users and what data can be accessed
based on role. You may also be required to use third-party authentication tools when invoking web services, or
when another application calls Pega as a service. Ask the enterprise architecture team or technical resources at
the organization for security standards so you know what you need to account for in your design and implement
in the application.
An organization's security policies are often the result of industry regulatory requirements. Many industries have
specific regulations on sharing data outside of the organization as well as within the organization. For example,
in the United States, healthcare organizations comply with HIPAA (Health Insurance Portability and Accountability
Act). Educate yourself with industry and government regulations that apply to the application you are designing.
If the application resides in a cloud environment or is a hybrid cloud/on-premise deployment, acquaint yourself
with the network architecture and security protocols in place. Learn who is performing what role in maintaining
the security of the application. For example, Pega Cloud describes the architecture, security controls, compliance
with government standards, and monitoring services Pega Cloud offers in the Pega Cloud Security Overview
document. Work with the infrastructure teams at your organization to identify security contacts and what
measures are in place to protect application data and customer privacy.
KNOWLEDGE CHECK

When should you begin the design of your security model?


Begin designing your security model as early as possible. Several factors can impact how you implement
security in the application. Be aware of those factors to make sure your application meets the organization's
security standards. Failing to meet these standards prevents your application from going to production.
Improperly securing your application opens your organization to unnecessary risk.
Pega application monitoring
Many organizations have application performance monitoring (APM) tools in place to track and report on
application performance and responsiveness. While these tools can report on data such as memory and
CPU usage on your database and application servers, they do not provide detailed information about the health
of the Pega application itself.
Pega provides two tools designed to monitor the Pega application and provide recommendations on how to
address the alerts it generates. These tools complement any APM tools you might be using to give you a
complete picture of the health of your Pega application.
l Autonomic Event Services (AES) – AES monitors on-premise applications. AES is installed and managed on-
site.
l Predictive Diagnostic Cloud (PDC) – Pega PDC is a Pega-hosted Software as a Service (SaaS) application that
monitors Pega Cloud applications. PDC can also be configured to monitor on-premise applications.
The tool you use depends on your monitoring requirements and if you want to customize the monitoring
application.
The following table compares differences between AES and PDC.

                                            PDC             AES
Hardware provisioning                       Pega            Customer
Installation and upgrades                   Pega            Customer
Ability to customize                        Upon request    Fully customizable
Release schedule                            Quarterly       Yearly
Communication with monitored nodes          One-way         Two-way
Active system management (restart agents,   Not available   Available
listeners, quiesce node)

Both AES and PDC monitor the alerts from and health activity for multiple nodes in a cluster. Both send you a
scorecard that summarizes application health across nodes. The most notable difference, from an architecture
standpoint, is that AES interacts with the monitor node to allow you to manage processes on the monitored
nodes, such as restarting agents and quiescing application nodes.
You can use AES or PDC to monitor development, test, or production environments. For example, you can set up
AES to monitor a development environment to identify any troublesome application area before promoting to
higher environments.
The System Management Application (SMA) can be used to monitor and manage activity on an individual
node. SMA is built on Java Management Extensions (JMX) and provides a standard API to monitor and manage
resources either locally or by remote access.
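Because SMA is built on JMX, any standard JMX client can reach the same management data. The following minimal
sketch, assuming a hypothetical node host name and JMX port, connects remotely and reads a standard JVM MBean
attribute; Pega-specific MBeans are discovered and queried through the same connection.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class NodeHealthCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical host and port; the JMX/RMI port is configured in
            // the application server's JVM arguments.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://pega-node-01:9999/jmxrmi");

            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // Read a standard JVM metric as an example.
                Object heap = mbs.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
                System.out.println("Heap usage: " + heap);
            }
        }
    }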
Pega Platform continually generates performance data (PAL) and issues alerts if the application exceeds
thresholds that you have defined. The following diagram compares the access to monitored nodes to gather and
display that performance data.

For more information on AES, PDC, and SMA, see the following resources:
l Autonomic Event Services (AES) landing page
l Predictive Diagnostic Cloud (PDC) landing page
l System Management Application (SMA) help topic
KNOWLEDGE CHECK

What are some differences between PDC and AES?


The Autonomic Event Services (AES) application communicates with the monitored system in a two-way fashion.
AES allows you to manage requestors, agents, and listeners from the AES console. The Predictive Diagnostic
Cloud (PDC) only reads data from, and does not communicate back to, the monitored system.
Case interaction methods from external applications
You can expose Pega case types to external applications by generating mashup code or by generating
microservice code from within the case type settings in Dev Studio. The method you choose depends on your use
case and requirements.

Pega Web Mashup


Pega Web Mashup, formerly known as the Internet Application Composer (IAC), allows you to embed mashup code
in any website architecture. Use this option when you need to embed Pega UI content into the organization's
website, whether it is hosted on-premise or on Pega Cloud. For example, you could embed a credit card
application case type into a bank's corporate website.

For more information on deployment and configuration options, see the Create a Web Mashup landing page on
the Pega Community.
Microservices
A microservice architecture is a method for developing applications using independent, lightweight services that
work together as a suite. In a microservices architecture, each service participating in the architecture:
l Is independently deployable
l Runs a unique process
l Communicates through a well-defined, lightweight mechanism
l Serves a single business goal
The microservice architectural approach is usually contrasted with the monolithic application architectural
approach. For example, instead of designing a single application with Customer, Product, and Order case types,
you might design separate services that handle operations for each case type. Exposing each case type as a
microservice allows the service to be called from multiple sources, with each service independently managed,
tested, and deployed.
While Pega Platform itself is not a microservice architecture, the Pega Platform complements the microservice
architectural style for the following reasons:
l You can expose any aspect of Pega (including cases) as a consumable service, allowing Pega to participate in
microservice architectures. For more information on Pega API, see the Pega API for the Pega Platform Pega
Community article.
l You can create this service as an application or as an individual service that exists in its own ruleset.
l You can reuse services you create across applications, leveraging the Situational Layer Cake for additional
flexibility in what each service can do, without overloading the service.
Tip: Microservice architecture is a broad topic. Researching benefits and drawbacks of this style before
committing to a microservice architecture is recommended. For further guidance, see the Microservices article
by Martin Fowler.
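As a minimal illustration of the first point above, the following sketch creates a case through the v1 Pega API
over REST. The server URL, operator credentials, and case type ID are placeholder assumptions; consult the Pega
API documentation for the exact request and response schemas in your environment.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class CreateCaseClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical server URL and credentials.
            String baseUrl = "https://pega.example.com/prweb/api/v1";
            String auth = Base64.getEncoder()
                .encodeToString("operator@example.com:password".getBytes());

            // Minimal request body; the "content" map would carry the case
            // properties your case type expects. The case type ID is a
            // placeholder.
            String body =
                "{\"caseTypeID\": \"FSG-Booking-Work-Booking\", \"content\": {}}";

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/cases"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

            // A successful response returns the ID of the newly created case.
            System.out.println(response.statusCode() + " " + response.body());
        }
    }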
KNOWLEDGE CHECK

What is the difference between exposing a case type using mashup and exposing a case type using a
microservice?
A mashup allows you to embed the entire case type UI into the organization's web application(s). A
microservice allows you to call a specific operation on a case type (or other Pega objects, such as assignments)
to run a single purpose operation from one or more calling applications.
Designing Pega for the enterprise
Question 1
When developing a new Pega application, start with _________________ and ___________________. (Choose Two)

C # Response Justification
C 1 one application at a time and base your Build for change allows you to build solutions iteratively
design on what you know today based on what you know today.
2 channel design and work your way into Starting with channels can lead to complexity and
the Pega application inconsistency in user experience. Users can do some
functions on channels, but not on the web self-service
site or when calling into a call center.
3 integration to systems of record and work Starting with systems of record or legacy systems tends
your way back into the Pega application to produce application stacks that are typically not
extensible to multiple channels and result in a rigid
portal-to-system of record design.
C 4 the Pega application in the middle and Starting with Pega in the middle allows for a consistent
work your way out to channels and solution to be developed and evolved over time,
systems of record connecting channels and back end systems to provide
an end to end solution.

Question 2
Which two business requirements impact how you deploy your application? (Choose Two)

C # Response Justification
1 You have requirements to start application You can move your application from Pega Cloud to on-
development in Pega Cloud, and then premise using standard Pega installation and
move your application to an on-premise deployment model.
production environment.
C 2 You have requirements to use message MDB requires an enterprise deployment (.ear).
driven beans (MDB) to deliver messages
from a system of record to the Pega
application.
3 You have requirements for case types to You can expose case types through a microservice
participate in a distributed architecture using the standard Pega installation and deployment
through microservices. model.
C 4 You have requirements to support a The BPO model requires a multitenant deployment.
business process outsourcing (BPO) model
for your organization.

Question 3
What is the primary reason for designing the security model as early as possible?

C # Response Justification
1 To begin penetration testing as early as While important, penetration testing occurs later in
possible the project, prior to go live.
C 2 To ensure you account for and design to This is the primary reason for starting the security
the organization security standards and design early. Security requirements can be complex,
legal requirements and identifying the organization's security standards
early to make sure they are woven into your
application design is crucial.
3 To start your team on building access role, While starting to build early is important, this is not
privileges, and attribute-based controls the primary reason for designing the security model
as early as possible.
4 To quickly address any warnings you see in You do not have any warnings until you start building
the Rule Security Analyzer your security rules.

Question 4
Why would you use PDC or AES when the organization already has application performance monitoring tools in
place?

C # Response Justification
1 PDC and AES collect alerts and health While AES and PDC collect health activity across nodes, this
activity across Pega nodes. is not the primary reason for using either PDC or AES.
C 2 PDC and AES show insights into the PDC and AES show insights in the health of Pega
health of the Pega application. applications, but traditional application server and
database monitoring tools do not.
3 PDC and AES provide the ability to stop AES includes this ability. PDC does not allow you to
and start agents, listeners, and manage agents, listeners, and requestors.
manage requestors.
4 PDC and AES are available both on PDC is available in Pega Cloud and can be used on-
premise and in the Pega Cloud. premise. AES is only available on-premise.
Defining a release pipeline
Introduction to defining a release pipeline
Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development through testing to deployment on your production system. This lesson explains how
to use Pega Platform tools and common third-party tools to implement a release pipeline.
After this lesson, you should be able to:
l Describe the DevOps release pipeline
l Discuss development process best practices
l Identify continuous integration and delivery tasks
l Articulate the importance of defining a test approach
l Develop deployment strategies for applications
How to define a release management approach
Depending on the application release model, development methodologies, and culture of the organization, you
see differences in the process and time frame in which organizations deliver software to production. Some
organizations take more time moving new features to production because of industry and regulatory compliance.
Some have adopted automated testing and code migration technologies to support a more agile delivery model.
Organizations recognize the financial benefit of releasing application features to end users and customers faster
than their competitors, and many have adopted a DevOps approach to streamline their software delivery life
cycle. DevOps is a collaboration between Development, Quality, and Operations staff to deliver high-quality
software to end users in an automated, agile way. By continuously delivering new application features to end
users, organizations can gain a competitive advantage in the market. Because DevOps represents a significant
change in culture and mindset, not all organizations are ready to immediately embrace DevOps.
These are your tasks as the technical lead on the project:
1. Assess the organization's existing release management processes and tooling. Some organizations may
already work with a fully automated release pipeline. Some organizations may use limited automated testing
or scripts for moving software across environments. Some organizations may perform all release management
tasks manually.
2. Design a release management strategy that achieves the goal of moving application features through testing
and production deployment, according to the organization's release management protocols.
3. Evolve the release management process over time to an automated model, starting with testing processes.
The rate of this evolution depends on the organization's readiness to adopt agile methodologies and rely on
automated testing and software migration tools and shared repositories.
Important: While setting up your team release management practices, identify a Release Manager to oversee
and improve these processes. The Release Manager takes care of creating and locking rulesets and ensures that
incoming branches are merged into the correct version.
Release pipeline
Even if the organization releases software in an automated way, most organizations have some form of a manual
(or semi-automated) release pipeline. The following image illustrates the checkpoints that occur in the release
pipeline.

This pipeline highlights developer activities and customer activities. Developer activities include:
l Unit testing
l Sharing changes with other developers
l Ensuring changes do not conflict with other developer's changes
Once the developer has delivered changes to the customer, customer activities typically include:
l Testing new features
l Making sure existing features still work as expected
l Accepting the software and deploying to production
These activities occur whether or not you are using an automated pipeline. The Standard Release process
described in Migrating application changes explains the tasks of packaging and deploying changes to your target
environments. If you are on Pega Cloud, be aware of certain procedures when promoting changes to production.
For more information, see Change management in Pega Cloud Services.
Moving to an automated pipeline
In an organization that deploys software with heavy change management processes and governance, you
contend with old ways of doing things. Explain the benefits of automating these processes, and explain that
moving to a fully automated delivery model takes time. The first step is to ensure that the manual processes in
place, particularly testing, have proven to be effective. Then, automating bit by bit over time, a fully automated
pipeline emerges.
When discussing DevOps, the terms continuous integration, continuous deployment, and continuous delivery are
frequently used.
Use the following definitions for these terms:
l Continuous integration – Continuously integrating changes into a shared repository multiple times per day
l Continuous delivery – Always ready to ship
l Continuous deployment – Continuously deploying or shipping (no manual process involved)
Automating and validating testing processes is essential in an automated delivery pipeline. Create and evolve
your automated test suites using Pega Platform capabilities along with industry testing tools. Otherwise, you are
automating promotion of code to higher environments, potentially introducing bugs found by your end users
that are more costly to fix.
For more information on the DevOps pipeline, see the DevOps release pipeline overview.
KNOWLEDGE CHECK

Is the goal of your release management strategy to move the organization to DevOps?
No. The goal of your release management strategy is to implement a repeatable process for deploying high-
quality applications so users of that application can start realizing business value. Over time, as those
processes become repeatable, they are ideal for automation. Continuous integration and continuous delivery
(and eventually, continuous deployment) benefit the organization and often give it a competitive advantage.
DevOps release pipeline
DevOps is a culture of collaboration within an organization between development, quality assurance, and
operations teams that strives to improve and shorten the organization’s software delivery life cycle.
DevOps involves the concept of software delivery pipelines for applications. A pipeline is an automated process
to quickly move applications from development through testing to deployment. At the beginning of the pipeline
are the continuous integration and continuous delivery (CI/CD) phases.

The Continuous Integration portion of the pipeline is dominated by the development group. The Continuous
Delivery portion of the pipeline is dominated by the quality assurance group.
Pega Platform includes tools to support DevOps. For example, it remains an open platform, provides hooks and
services based on standards, and supports the most popular third-party tools.
The pipeline is managed by some form of orchestration and automation server such as open source Jenkins.
Pega’s version of an automation server is the Deployment Manager available on the Pega Exchange.
The following diagram illustrates an example of a release pipeline for the Pega Platform.
The automation server plays the role of an orchestrator and manages the actions that happen in continuous
integration and delivery. In this example, Pega’s Deployment Manager is used as the automation server.

A pipeline pushes application archives into, and pulls them from, application repositories. The application
repositories are used to store the application archive for each successful build. There should be both a
development repository and a production repository. JFrog Artifactory can be used as an application
repository, but an equivalent tool could be used as well. For example, Pega Cloud applications hosted in the
Amazon Web Services (AWS) cloud computing service would use S3 buckets as an application repository.
Notice that at each stage in the pipeline, a continuous loop provides the development team with real-time
feedback on testing results.
In most cases, the system of record is a shared development environment.
The term system of record is used in a distributed development environment. Separate development
environments can push branches related to the same application to a central server known as the system of
record. The central server is considered a type=Pega repository. Within the system of record, published branches
are merged. Those branches are then removed from the originating development environment. See
Development workflow in the DevOps pipeline on the Pega Community.
Note: Pega Platform is assumed to manage all schema changes.
For more information, review the DevOps release pipeline overview article in the Pega Community.
KNOWLEDGE CHECK

What role does the automation server play in a release pipeline?


The automation server plays the role of an orchestrator.
Continuous integration and delivery
A continuous integration and continuous delivery (CI/CD) pipeline is an automated process to quickly move
applications from development through testing to deployment.

The Pega CI/CD pipeline


The following image depicts the high-level overview of the Pega CI/CD pipeline. Different questions are asked
during every stage of the pipeline. These questions can be grouped into two different categories:
l Developer-centric questions – Are the changes good enough to share, and do they work together with other
developers' changes?
l Customer-centric questions – Does the application with the new changes function as designed and as expected
by customers, and is it ready for customers to use?

Drilling down to specific questions for each step in the pipeline:


l Ready to share – As a developer, am I ready to share my changes with other developers? Ensure that the new
functionality being introduced works and critical existing functionality still continues to work.
l Integrate changes – Do all the integrated changes work together? Once all the changes from multiple
developers have been integrated, does all the critical functionality still work?
l Ready to Accept – Do all the acceptance criteria for the application still pass? This is typically where the
application undergoes regression testing to ensure functional correctness.
l Deploy – Is this application ready for real deployment? The final phase is where the fully validated application
is deployed into production, typically after it was verified in a preproduction environment.

Continuous integration
With continuous integration, application developers frequently check in their changes to the source environment
and use an automated build process to automatically verify these changes. The Ready to Share and Integrate
Changes steps ensure that all the necessary critical tests are run before integrating and publishing changes to a
development repository.
During continuous integration, maintain these best practices:
l Keep the product rule Rule-Admin-Product, referenced in an application pipeline, up-to-date.
l Automatically trigger merges and builds using the Deployment Manager. Alternatively, an export can be
initiated using the prpcServiceUtils.bat tool.
l Identify issues early by running PegaUnit tests and critical integration tests before packaging the application.
If any of these tests fail, stop the release pipeline until the issue is fixed.
l Publish the exported application archives into a repository, such as JFrog Artifactory, to maintain a version
history of deployable applications.
KNOWLEDGE CHECK

What are the key characteristics of the continuous integration process?


Integrate changes frequently; test rapidly and get feedback; identify the cause of a failure and address it
quickly.

Continuous delivery
With continuous delivery, application changes run through rigorous automated regression testing and are
deployed to a staging environment for further testing to ensure that the application is ready to deploy on the
production system.
In the Ready to Accept step, testing runs to ensure that the acceptance criteria are met. The Ready to Deploy step
verifies all the necessary performance, scale, and compatibility tests necessary to ensure the application is ready
for deployment.
The Deploy step validates in a preproduction environment, deploys to production, and runs post-deployment
tests with the potential to roll back as needed.
Follow these best practices to ensure application quality:
l Use Docker or a similar tool to create test environments for user acceptance tests (UAT) and exploratory tests.
l Create a wide variety of regression tests through the user interface and the service layer.
l Check the tests into a separate version control system such as Git.
l If a test fails, roll back the latest import.
l If all the tests pass, annotate the application package to indicate that it is ready to be deployed. Deployment
can be performed manually or automatically.
KNOWLEDGE CHECK

What is the key purpose of the continuous delivery process?


To ensure the application is ready for deployment on the production system by performing extensive
regression testing
Modular development deployment strategies
Dedicating a ruleset to a single case type helps to promote reuse. Other reasons to dedicate a ruleset to a single
case type include:
l Achieving continuous integration (CI) branch-based development
l Encouraging case-oriented user stories using Agile Studio’s scrum methodology to manage project software
releases
l Managing branches that contain rules that originate from different rulesets. When this occurs, a branch
ruleset is generated and the generated ruleset prepends the original ruleset's name to the branch name
l Accommodating multiple user stories in a branch
l Simplifying the ability to populate the Agile Workbench Work item to associate field when checking a rule
into a branch

When you create a project within Agile Studio, a backlog work item is also created. When developing an
application built on a foundation application, the Agile Studio backlog can be prepopulated with a user story for
each foundation application case type. Case types appropriate for the Minimum Lovable Product (MLP) release can
then be selected from that backlog. For more information, see the article Review Case Type Backlog.
Pega’s Deployment Manager provides a way to manage CI/CD pipelines, including support for branch-based
development. It is possible to automatically trigger a Dev-to-QA deployment when a single branch at a time
successfully merges into the primary application. The rules checked into that branch must only belong to the
primary application for this to occur.
When a case type class is created within a case type-specific ruleset, rules generated by Dev Studio's Case
Designer view are also added to that ruleset. This is true even though Case Designer supports the ability to
develop multiple case types within the same application.

Branch-based development review


Application branches are managed within Dev Studio's App view.
While it is not necessary to dedicate a branch to a single case type, as seen in the following image, doing so
simplifies the branch review process.

When a case-related rule in a case-specific ruleset is saved to a branch, a case-specific branch ruleset generates
if one does not already exist. Changes made within the Case Designer that affect that rule occur within the
branch ruleset’s version of that rule. When a branch ruleset is created, it is placed at the top of the application's
ruleset stack.
The merge of a single branch is initiated from the Application view’s Branches tab by right-clicking on the
branch name to display a menu.
At the end of the merge process, the branch will be empty when the Keep all source rules and rulesets after
merge option is not selected. The branch can then be used for the next sets of tasks, issues, or bugs defined in
Agile Studio.

Deployment Manager branch-based development


Consider a scenario in which the Deployment Manager application, running on a separate orchestration server,
is configured to automatically initiate a delivery when a single-branch merge completes for an application
successfully. Also suppose the development environment application, built on the same PegaDevOpsFoundation
application, configures the RMURL (Release Manager URL) Dynamic System Setting (D-S-S) to point to the
orchestration server’s PRRestService. When initiating a single-branch merge, the development environment
sends a request to the Deployment Manager application. The Deployment Manager application orchestrates the
packaging of the application within the development environment, the publishing of that package to a mutual
Dev/QA repository, and the import of that package into the QA environment.

Application packaging
The initial Application Packaging wizard screen asks which built-on applications in addition to the application
being packaged should be included in the generated product rule. Note that components are also mentioned, a
component being a Rule-Application where pyMode = Component.
Having multiple applications reference the same ruleset is highly discouraged. Immediately after saving an
application rule to a new name, warnings appear in both applications, one warning for each dual-referenced
ruleset.
The generated warnings lead to the following conclusions:
l A product rule should contain a single Rule-Application where pyMode = Application.
l Product rules should be defined starting with applications that have the fewest dependencies, ending with
applications that have the greatest number of dependencies.
The deployment strategy differs when the production application being deployed depends on multiple
built-on component applications.
Let's consider the example of the FSG Booking application. The FSGEmail application would be packaged first,
followed by the Hotel application, followed by the Booking application.
While it is possible to define a product rule that packages a component only, there is no need to do so. The
component can be packaged using the component rule itself as shown below.
Currently, the Deployment Manager only supports pipelines for Rule-Application instances where pyMode =
Application. When an application is packaged, and that application contains one or more components, those
components should also be packaged. If a built-on application has already packaged a certain component, that
component can be skipped. In the following image, the FSGEmail application’s product rule includes the
EmailEditor component. Product rules above FSGEmail (for example, Hotel and Booking) do not need to include
the EmailEditor component.
The Open-closed principle applied to packaging and deployment
The goal of the Open-closed principle is to eliminate ripple effects. A ripple effect occurs when an object makes
changes to its interface as opposed to defining a new interface and deprecating the existing interface. The
primary interface for applications on which other applications are built, such as FSGEmail and Hotel, is the data
that built-on applications must supply, typically through data propagation. If the EmailEditor component mandates a new
property, the FSGEmail application needs to change its interface to applications that are built on top of it, such as
the Hotel application. The Hotel application then needs to change its built-on interface to allow the Booking
application to supply the value for the new mandated property.
By deploying applications separately and in increasing dependency order, the EmailEditor component change
eventually becomes available to the Booking application without breaking that application or the applications
below it.
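The principle is easy to see in plain Java. The sketch below is a generic illustration, not Pega code: the
EmailRequest names are hypothetical, and in the Pega scenario the "interface" is the data contract between
built-on applications rather than a Java type. Instead of adding a method to the existing interface (a ripple
effect), a new interface extends, and eventually deprecates, the old one.

    // A built-on application consumes this interface to send email.
    interface EmailRequest {
        String recipient();
        String body();
    }

    // Ripple effect: adding a required method to EmailRequest itself would
    // force every existing implementer to change at once.

    // Open-closed alternative: introduce a new interface that extends the
    // old one, and deprecate the original on your own schedule.
    interface EmailRequestV2 extends EmailRequest {
        String replyTo(); // the newly mandated property
    }

    class EmailSender {
        void send(EmailRequest request) {
            // Existing callers keep working; callers that supply the new
            // interface get the new behavior.
            String replyTo = (request instanceof EmailRequestV2)
                ? ((EmailRequestV2) request).replyTo()
                : "noreply@example.com";
            System.out.println("Sending to " + request.recipient()
                + " (reply-to " + replyTo + "): " + request.body());
        }
    }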
Note: It is not a best practice to update all three applications (FSGEmail, Hotel, and Booking) using a branch
associated to the Booking application.
Best practices for System of Record team-based development
Pega Platform developers use agile practices to create applications in a shared development environment
leveraging branches to commit changes.
Follow these best practices to optimize the development process:
l Leverage multiple built-on applications to develop smaller component applications. Smaller applications are
easier to develop, test, and maintain.
l Use branches when multiple teams contribute to a single application. Use the Branches explorer to view
quality, guardrail scores, and unit tests for branches.
l Peer review branches before merging. Create reviews and assign peer reviews from the Branches explorer
and use Pulse to collaborate with your teammates.
l Use Pega Platform developer tools, such as rule compare and rule form search, to determine how to best
address any rule conflict.
l Hide incomplete or risky work using toggles to facilitate continuous merging of branches.
l Create PegaUnit test cases to validate application data by comparing expected property values to the actual
values returned by running the rule.
Multiteam development flow
The following diagram shows how multiple development teams interact with the system of record (SOR).

The process begins when Team A requests a branch review against the system of record. A Branch Reviewer first
requests conflict detection, then executes the appropriate PegaUnit tests. If the Branch Reviewer detects
conflicts or if any of the PegaUnit tests fail, the reviewer notifies the developer who requested the branch. The
developer stops the process to fix the issues. If the review detects no conflicts and the PegaUnit tests execute
successfully, the branch merges into the system of record. The ruleset versions associated with the branch are
then locked.
Remote Team B can now perform an on-demand rebase of the SOR application’s rules into their system. A rebase
pulls the most recent commits made to the SOR application into Team B's developer system.
The SOR host populates a comma-separated value D-S-S named HostedRulesetsList. Team B defines a type=Pega
Repository that points to the SOR host’s PRRestService. After Team B clicks the Get latest ruleset versions link
within its Application rule and selects the SOR host’s Pega Repository, a request goes to the SOR to return
information about versions for every ruleset within the SOR’s HostedRulesetsList. Included in that information is
each version’s pxRevisionID. Team B’s system then compares its ruleset versions to versions in the response.
Only versions that do not exist in Team B’s system, or where the pxRevisionID does not match the SOR system’s
pxRevisionID, are displayed. Team B then proceeds with the rebase or cancels it. Only versionable rules are
included when a rebase is performed. Non-versioned rules such as Application, Library, and Function are NOT
included in a rebase operation. For this reason, packaging libraries as a component is desirable.
For more information, review the Development workflow in the DevOps pipeline article in the Pega Community.
The following Sequence Diagram describes the process using changes to the FSG Email Application as an
example:
Actors:
l Developer: Member of the Enterprise development team responsible for implementing a new feature in the
FSGEmail application.
l Branch Reviewer: Member of the Enterprise development team responsible for reviewing requests made by the
Developer, and for merging the branch if the code review is successful.
l Pega SOR: Pega instance configured as the SOR. This instance is the master of the last stable changes made to
the FSGEmail application.
l Booking App Team: Development team responsible for the Booking and Hotel applications.
Process:
l Enterprise development team implements changes related to a new feature of the FSGEmail application.
l A developer from the enterprise team requests a branch review against the system of record.
l A Branch Reviewer first requests conflict detection, then executes the appropriate PegaUnit tests.
l If the Branch Reviewer detects conflicts or if any of the PegaUnit tests fail, the reviewer notifies the developer
who requests the branch review. The branch reviewer stops the process to allow the developer to fix the
issues.
l If the review detects no conflicts and the PegaUnit tests execute successfully, the branch merges into the
system of record. The ruleset versions associated to the branch are then locked.
l The Booking App team can now perform an on-demand rebase of the SOR application’s rules into their
system.
l A rebase pulls the most recent commits made to the SOR application into the Booking App team's system.

Always-locked ruleset versions option


When initially developing an application, open ruleset versions are necessary and desirable. At some point, a
transition can be made to where the application’s non-branched rulesets always remain locked.
When merging a branch, an option exists to choose Create new version and Lock target after merge to
facilitate rebase operations. A system that requests a rebase from a ruleset's always-locked SOR host detects
newly created and locked ruleset versions before proceeding with, or canceling, the rebase.

KNOWLEDGE CHECK

When would you use a release toggle?


To exclude work when merging branches
Defining a release pipeline quiz
Question 1
Which of the following three components are needed to support a DevOps release pipeline? (Choose Three)

C # Response Justification
C 1 System of record The system of record is the
shared development
environment.
C 2 Automation server The automation server
orchestrates and manages the
actions.
C 3 Application repository The application repository
stores the application archive
for each version.
4 Enterprise service bus An enterprise service bus is not
required.
5 Virtual server A virtual server is not required.

Question 2
When does the developer trigger a rebase?

C # Response Justification
1 Directly after a branch is After a branch has been
published to the system of published, the conflict detection
record is requested.
2 When the unit tests have The branch is merged after the
executed successfully for a unit tests have executed
branch successfully.
3 When there are no merge The unit tests are executed if
conflicts for the branch there are no merge conflicts.
C 4 When a branch has been Rebase pulls the most recent
successfully merged in the commits from the system of
system of record record.
Question 3
In which stage of the DevOps release pipeline is acceptance testing performed?

C # Response Justification
1 Development Unit tests are created during
development.
2 Continuous integration Unit tests are performed in the
continuous integration stage.
C 3 Continuous delivery Acceptance testing is performed
in the continuous delivery stage.
4 Deployment Acceptance testing must occur
prior to deployment.

Question 4
When defining a release management strategy, what is your primary objective?

C # Response Justification
1 Move the organization to a continuous While CI/CD is a goal for an organization, it is not the
integration/continuous deployment (CI/CD) primary goal of your release management strategy.
model.
2 Introduce automated testing into the Automated testing is a goal to meet on the way to a
organization. repeatable release process, but not the primary goal
of the overall strategy.
C 3 Create a repeatable and sustainable This is the primary goal of the release management
process for ensuring business value is strategy. Moving to DevOps facilitates the ability to
delivered safely and as quickly as possible deliver changes to users quickly.
to end users.
4 Implement a process for developers to Defining a process for reviewing changes and
minimize rule conflicts when merging into a merging is part of ensuring quality and consistency
central repository. of rules, but not the primary goal of the release
management strategy.
Assessing and monitoring quality
Introduction to assessing and monitoring quality
Coupled with automated unit tests and test suites, monitoring the quality of the rules is crucial to ensuring
application quality before application features are promoted to higher environments.
After this lesson, you should be able to:
l Develop a test automation strategy
l Establish quality measures and expectations on your team
l Create a custom guardrail warning
l Customize the rule check-in approval process
Test Automation
Having an effective automation test suite for your Pega application ensures that the features and changes you
deliver to your customers are high quality and do not introduce regressions.
At a high level, this is the recommended test automation strategy for testing your Pega applications:
l Create your automation test suite based on industry best practices for test automation.
l Build up your automation test suite by using Pega Platform capabilities and industry test solutions.
l Run the right set of tests at different stages.
l Test early and test often.
Industry best practices for test automation can be graphically shown as a test pyramid. Test types at the bottom
of the pyramid are the least expensive to run, easiest to maintain, require the least amount of time to run, and
represent the greatest number of tests in the test suite. Test types at the top of the pyramid are the most
expensive to run, hardest to maintain, require the most time to run, and represent the least number of tests in
the test suite. The higher up the pyramid you go, the higher the overall cost and the smaller the benefits.

UI-based functional and scenario tests


Use UI-based functional tests and end-to-end scenario tests to verify that end-to-end cases work as expected.
These tests are the most expensive to run. Pega Platform supports automated testing for these types of tests
through the TestID property in user interface rules. For more information, see the article Test ID and Tour ID for
unique identification of UI elements. By using the TestID property to uniquely identify a user interface element,
you can write dependable automated UI-based tests against any Pega application.
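The following minimal Selenium sketch, assuming a hypothetical application URL and Test ID value, shows how a
UI test targets an element through the data-test-id attribute that Pega renders for rules configured with a
Test ID.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class CaseSubmitUiTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                // Hypothetical application URL; a real test would first
                // script the login steps.
                driver.get("https://pega.example.com/prweb");

                // Locate the element through the data-test-id attribute
                // that Pega renders for UI rules configured with a Test ID.
                // The ID value here is a placeholder.
                driver.findElement(
                    By.cssSelector("[data-test-id='submit-button']")).click();
            } finally {
                driver.quit();
            }
        }
    }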

API-based functional tests


Perform API-based testing to verify that the integration of underlying components work as expected without
going through the user interface. These tests are useful when the user interface changes frequently. In your
Pega application, you can validate case management workflows through the service API layer using the Pega API.
Similarly, you can perform API-based testing on any functionality that is exposed through REST and SOAP APIs.
For more information on the Pega API, see the article Getting started with the Pega API.
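For example, an API-level check might be written as a JUnit test against the standard Pega API casetypes
endpoint, as in the following sketch; the environment URL and credentials are placeholders.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    import org.junit.jupiter.api.Test;

    class CaseTypeApiTest {

        // Hypothetical environment URL and credentials.
        private static final String BASE = "https://pega.example.com/prweb/api/v1";
        private static final String AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("operator@example.com:password".getBytes());

        @Test
        void caseTypesEndpointIsAvailable() throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/casetypes"))
                .header("Authorization", AUTH)
                .GET()
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

            // A functional API test asserts on behavior, not on UI markup;
            // a fuller test would also parse the JSON body and verify that
            // the expected case types are exposed.
            assertEquals(200, response.statusCode());
        }
    }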

Unit tests
Use unit tests for most of your testing. Unit testing looks at the smallest units of functionality and is the least
expensive tests to run. In an application, the smallest unit is the rule. You can unit test rules as you develop
them by using the PegaUnit test framework. For more information, see the article PegaUnit testing.

Automation test suite


Use both Pega Platform capabilities and industry test solutions, such as JUnit, RSpec, and SoapUI to build your
test automation suite. When you build your automation test suite, run it on your pipeline. During your
continuous integration stage, the best practice is to run your unit tests, guardrail compliance, and critical
integration tests. These tests ensure that you get sufficient coverage, quick feedback, and fewer disruptions from
test failures that cannot be reproduced. During the continuous delivery stage, a best practice is to run all your
remaining automation tests to guarantee that your application is ready to be released. Such tests include
acceptance tests, full regression tests, and nonfunctional tests such as performance and security tests.
You receive the following benefits by running the appropriate tests at each stage of development:
l Timely feedback
l Effective use of test resources
l Reinforcement of testing best practices
KNOWLEDGE CHECK

Why is it recommended to use unit testing for most of the testing?


Unit tests test the smallest units of functionality and are the least expensive tests to run.
How to establish quality standards in your team
Fixing a bug costs far more once the bug has reached production users. The pattern of allowing low-quality
features into your production environment results in technical debt. Technical debt means you spend more time
fixing bugs than working on new features that add business value. Allowing unreviewed or lightly tested changes
to move through a continuous integration/continuous deployment (CI/CD) pipeline can have disastrous results
for your releases.
Establishing standard practices for your development team can prevent these types of issues and allows you to
focus on delivering new features to your users. These practices include:
l Leveraging branch reviews
l Establishing rule check-in approval process
l Addressing guardrail warnings
l Creating custom guardrail warnings
l Monitoring alerts and exceptions
Establishing these practices on your team helps to ensure that your application is of the highest quality possible
before promoting to other environments or allowing the change's inclusion in the continuous integration
pipeline.

Leveraging branch reviews


To increase the quality of your application, you or a branch development team can create reviews of branch
contents. For more information on how to create and manage branch reviews, see the help topic Branch reviews.
The Branch quality landing page aids the branch review process, displaying guardrail warnings, merge
conflicts, and unit test results. It is important to maintain a high compliance score and to ensure code has been
tested. The Deployment Manager’s non-optional pxCheckForGuardrails flow will halt a merge attempt when a Get
Branch Guardrails response shows that the weighted guardrail compliance score is less than the minimum-
allowed guardrail score.
Use Pulse to collaborate on reviews. Pulse can also send emails when a branch review is assigned and closed.
Once all comments and quality concerns are addressed, you can merge the branch into the application.

Establishing check-in approval


You can enable and customize the default rule check-in approval process to perform steps you see necessary to
maintain the quality of the checked-in rules. For example, you can modify the check-in approval process to route
check-ins from junior team members to a senior team member for review.

Addressing application guardrail warnings


The Application Guardrails landing page (DEV Studio > Configure > Application > Quality > Guardrails) helps
you understand how compliant your application is with best practices, or guardrails. For more information on the
reporting metrics and key indicators available on the landing page, see the help topic Application Guardrails
landing page.
Addressing the warnings can be time consuming. Review and address these warnings daily so they do not
become overwhelming and prevent you from moving your application features to other environments. For more
information on how to address warnings, see the help topic Improving your compliance score.

Creating custom guardrail warnings


You can create custom guardrail warnings to catch certain types of violations. For example, your organization
wants to place a warning on any activity rule that uses the Obj-Delete activity method. You can create a custom
guardrail warning to display a warning that must be justified prior to moving the rule to another environment.

Monitoring alerts and exceptions


Applications with frequent alerts and exceptions should not be promoted to other environments. Use Autonomic
Event Services (AES) to monitor them. For more information, see the article Introduction to Autonomic Event Services (AES). If you
do not have access to AES, use the PegaRULES Log Analyzer (PLA) to download and analyze the contents of
application and exception logs. For more information, see the topic PegaRULES Log Analyzer (PLA) on Pega
Exchange.
How to leverage the Application Quality landing page
The Application Quality landing page is as important to a Lead System Architect’s role as the rest of DEV Studio.
It is one thing to implement functionality. It is another to prove that the functionality works correctly and meets
established quality standards. Plus it is important to monitor how application quality trends – has it improved or
gotten worse over time? Application Quality affects the rate at which the application moves through a Dev Ops
pipeline. It also affects the rate at which new features can be added to the application.

Settings
The Application Quality landing page provides the following settings:

Setting                        Description                               Effect

Applications Included          Current Application or Include            If Include Built-On Applications is selected, the user
                               Built-On Applications                     can then select which built-on applications to include

Guardrails                     Ignore test rulesets when calculating     When true, excludes from the guardrail score the unit
                               Guardrails score? Defaults to true        test setup activities, transforms, and so on that were
                                                                         placed within test rulesets

Quality trend interval         2 weeks to 6 months                       Defines the quality trend interval period

Test execution look-back       1 week to 6 months                        Defines the test execution look-back duration period
duration

Scenario test case             Configure delay for scenario test         Enables or disables a scenario test case execution
execution                      execution? Defaults to false              delay

Rule Coverage Testing


User level: Single user in a single session. Different users can simultaneously perform their own user-level tests.
Application level: Multi-session. The tab contains a "Start new session" button.
Use cases:
l A team is building or modifying an application and works on a test/sample application that is built on top of the
actual application, maintaining test artifacts in the test application. The team wants to generate a test coverage
report of the actual application by running the tests from the test application.
l A team wants to generate a test coverage report as part of the automated tests run in the CI/CD pipeline and use
it for quality gating purposes.
Coverage and Unit Test Rules    Coverage-only Rules
Activity                        Correspondence
Case type                       Declare Expression
Data page                       Flow action
Data transform                  Section
Decision table                  Validate
Decision tree                   Decision Data
Strategy                        XML Stream
Flow                            HTML
When                            HTML Fragment
                                Harness
                                Paragraph
How to customize the rule check-in approval process
The rule check-in feature allows you to use a process to manage changes to the application. Use this feature to
make sure that checked-in rules are meeting the quality standards by ensuring they are reviewed by a senior
member of the team.
The Pega Platform comes with the Work-RuleCheckIn default work type for the approval process. The work type
contains standard properties and activities and a flow called ApproveRuleChanges that is designed to control the
rule check-in process.
For instructions on how to enable rule check-in approval, see the help topic Configuring the rule check-in
approval process.
When the default check-in approval process is in force for a ruleset version, the flow starts when a developer
begins rule check in. The flow creates a work item that is routed to a workbasket. The standard decision tree
named Work-RuleCheckIn.FindReviewers returns the workbaskets. Rules awaiting approval are moved to the
CheckInCandidates ruleset.
By default, the review work items are assigned to a workbasket with the same name as the candidate ruleset
defined in the Work-RuleCheckIn.pyDefault data transform. Override the Work-RuleCheckIn.FindReviewers decision
tree if you want to route to a different workbasket or route to different workbaskets based on certain criteria.
The approver can provide a comment and take one of three actions:
l Approve the check-in to complete the check-in process and resolve the rule check-in work item.
l Reject the check-in to delete the changed rule and resolve the rule check-in item.
l Send it back to the developer for further work to route the work item to the developer and move the rule to
the developer's private ruleset.
Affected parties are notified by email about the evaluation results.
You can enhance the default rule check-in approval process to meet your organization's requirements.
KNOWLEDGE CHECK

How can the rule check-in approval process help in monitoring quality?
By ensuring rules are reviewed by senior members of the team before they are checked in
How to create a custom guardrail warning
Guardrail warnings identify unexpected and possibly unintended situations, practices that are not
recommended, or variances from best practices. You can create additional warnings that are specific to the
organization's environment or development practices. Unlike rule validation errors, warning messages do not
prevent the rule from saving or executing.
To add or modify rule warnings, override the empty activity called @baseclass.CheckForCustomWarnings. This
activity is called as part of the Rule-.StandardValidate activity, which is called by, for example, Save and Save-As,
and is designed to allow you to add custom warnings.
You typically want to place the CheckForCustomWarnings activity in the class of the rule type to which you want to
add the warning. For example, if you want to add a custom guardrail warning to an activity, place
CheckForCustomWarnings in the Rule-Obj-Activity class. Place the CheckForCustomWarnings activity in a ruleset
available to all developers.
Configure the logic for checking if a guardrail warning needs to be added in the CheckForCustomWarnings activity.
Add the warning using the @baseclass.pxAddGuardrailMessage function in the Pega-Desktop ruleset.
You can control the warnings that appear on a rule form by overriding the standard decision tree Embed-
Warning.ShowWarningOnForm. The decision tree can be used to examine information about a warning, such as
name, severity, or type to decide whether to present the warning on the rule form. Return true to show the
warning, and false if you do not want to show it.
It is not a best practice to suppress rule warnings as a way to improve your application guardrail compliance
score.
KNOWLEDGE CHECK

When would you create a custom guardrail warning?


To identify variances from best practices specific to the organization's environment or development practices
Assessing and monitoring quality quiz
Question 1
Which two of the following tools are useful when assessing the quality of an application? (Choose Two)

C # Response Justification
C 1 Autonomic Event Services Use AES to monitor the health of
(AES) your application.
2 System Management Use the SMA to monitor and
Application (SMA) manage the resources and
processes of Pega Platform
applications.
C 3 PegaRULES Log Analyzer (PLA) Use PLA to analyze the contents
of application and exception
logs.
4 BIX Use BIX to export data to an
external system.

Question 2
Which type of testing is UI based?

C # Response Justification
C 1 Scenario testing UI-based functional tests are
used to verify end-to-end test
scenarios.
2 Functional testing Functional testing is API-based.
3 Unit testing Unit tests test rules, the smallest
units of functionality.
4 API testing API-based testing is used to test
the integration of underlying
components.

Question 3
In which of the following situations would you override the ShowWarningOnForm decision tree?
C # Response Justification
1 To display a custom guardrail Custom warnings are displayed
warning by default.
C 2 To filter guardrail warnings Use the ShowWarningOnForm to
displayed on the rule form define which warnings to show
on the rule form.
3 To define the criteria for The criteria for adding a custom
adding custom guardrail guardrail warning is defined in
warning CheckForCustomWarnings.
4 To invoke the Invoke the
pxAddGuardrailMessage pxAddGuardrailMessage function
function in CheckForCustomWarnings.

Question 4
What two things happen if the standard rule check-in approval process is enabled when a rule is checked in?
(Choose Two)

1. (Correct) The rule is moved to a candidate ruleset. – The rule is moved to the CheckInCandidates ruleset.
2. The rule is routed to an approver. – The assignment is routed to a workbasket from which approvers pull assignments.
3. (Correct) A work item is created. – A work item of type Work-CheckInRule is created.
4. A notification is sent to the approver. – The approver is not known at the time of check-in. Approvers pull items from a workbasket.
Conducting load testing
Introduction to conducting load testing
Load testing is an important part of preparing any application for production deployment. It helps identify performance issues that only become apparent when the application is under load and that are therefore difficult to detect in a normal development environment.
After this lesson, you should be able to:
l Design a load testing strategy
l Leverage load testing best practices
Load testing
Load testing is the process of putting demand on your application and measuring its response. Load testing is
performed to determine a system's behavior under both normal and anticipated peak load conditions. Load
testing helps identify the maximum operating capacity of an application as well as any bottlenecks, and
determines which component is causing degradation.
The term load testing is often used synonymously with concurrency testing, software performance testing, reliability testing, and volume testing. All of these are types of nonfunctional testing used to validate a given software product's suitability for use.

Load testing allows you to validate that your application meets the performance acceptance criteria, such as response times, throughput, and maximum user load. For load testing purposes, a Pega Platform application can be treated like any other web application.
Tip: Performance testing requires skilled and trained practitioners who can design, construct, execute, and review performance tests in accordance with best practices. You can engage Pega's Performance Health Check service to help design and implement your load testing plan.
KNOWLEDGE CHECK

What question does load testing answer?


Will the system meet the expected performance goals?
How to load test a Pega application
To load test your Pega application, you can use any web application load-testing tool, such as JMeter or LoadRunner.
Before running a performance test, the best practice is to exercise the main paths through the application, including all paths to be exercised by the test scripts, and then take a Performance Analyzer (PAL) reading for each path. Investigate and fix any issues that are exposed.
Note: Load testing is not the time to shake out application quality issues. Ensure that the log files are clean
before attempting any load tests. If exceptions and other errors occur often during routine processing, the load
test results are not valid.
Run the load test as a series of iterations, each with goals defined by the business and technical metrics to achieve:
l Test environment baseline – The first test, which establishes that the application, environment, and tools are all working correctly.
l Application baseline – A test run with one user, or one batch process creating a case, in a single JVM. Then increase to 10, and then to 100, users or cases created by the batch process.
l Full end-to-end test – The first full test of the application end to end, still in a single JVM.
l Failure in one JVM – Test what happens if there is a failure in one of the JVMs.
l Span JVMs based on the peak business and technical metrics/goals – Iterate as much as needed to achieve the agreed success metrics.
Begin testing just with HTTP transactions by disabling agents and listeners. Then, test the agents and listeners.
Finally, test with both foreground and background processing.
The performance tests must be designed to mimic the real-world production use. Collect data on CPU utilization,
I/O volume, memory utilization, and network utilization to help understand the influences on performance.
Relate the capacity of the test machines to production hardware. If the test machines have 20 percent of the
performance of the production machines, then the test workload should be 20 percent of the expected
production workload. If you expect to use two or more JVMs per server in production, use the same number when
testing.
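As a hypothetical worked example of this scaling rule: if production must support 1,000 concurrent users and the test machines deliver roughly 20 percent of the production capacity, drive about 200 virtual users (1,000 × 0.20) against the test environment, and if production runs two JVMs per server, configure two JVMs per test server as well. The figures are illustrative, not a sizing recommendation.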
KNOWLEDGE CHECK

Which tool is recommended for load testing a Pega solution?


Use the web application load testing tool your organization is most familiar with.
Load testing best practices
Pegasystems has extensive performance load testing experience, based on hundreds of implementations. The following list provides ten best practices that help you plan for success in the testing and implementation of a Pega solution. For details on each best practice, see the Pega Community article Ten best practices for successful performance load testing.
1. Design the load test to validate the business use
2. Validate performance for each component first
3. Script user log-in only once
4. Set realistic think times
5. Switch off virus-checking
6. Validate your environment first
7. Prime the application first
8. Ensure adequate data loads
9. Measure results appropriately
10. Focus on the right tests
KNOWLEDGE CHECK

Pega recommends testing an application with a step approach. What does that mean?
Test first with 50 users, then 100, 150, and 200, for example. Then generate a simple predictive model to estimate the expected response time for more users.
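A worked sketch of such a model, using hypothetical measurements and a simple linear fit (real systems often degrade nonlinearly as they approach saturation, so treat extrapolations with caution):

Measured averages: 50 users → 1.0 s, 100 users → 1.2 s, 150 users → 1.4 s, 200 users → 1.6 s
Linear fit: responseTime ≈ 0.8 + 0.004 × users
Estimate for 300 users: 0.8 + 0.004 × 300 = 2.0 s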
Conducting load testing quiz
Question 1
The term load testing is often used synonymously with __________ and __________. (Choose Two)

1. (Correct) concurrency testing – Concurrency testing is synonymous with load testing.
2. (Correct) volume testing – Volume testing is synonymous with load testing.
3. functional testing – Functional testing tests against the specifications.
4. unit testing – Unit testing tests individual units of source code.

Question 2
Which three of the following options define part of a valid performance test? (Choose Three)

1. (Correct) Switching off virus-checking – When virus checking runs, it impacts any buffered I/O, and this changes the collected response times.
2. (Correct) Including think times – Think time is important in duration-based tests. Otherwise, more work arrives than is appropriate in the test period.
3. Logging in virtual users before each interaction – Logging in the virtual users before each action is not how real users behave.
4. (Correct) Running a first use assembly before the test starts – Testing during a FUA cycle skews the results.
Estimating hardware requirements
Introduction to estimating hardware requirements
Due to the number of users expected to be on the system at one time, the number of cases you expect to process on any given day, and other factors, your application needs appropriate computing resources. Pega offers the Hardware Sizing Estimate service to guide you through this process.
By the end of this lesson, you should be able to:
l Identify events that cause a hardware (re)sizing
l Describe the process for submitting a hardware sizing estimate
l Submit a hardware sizing estimate request
Hardware estimation events
At the beginning of your project, someone sized the development, testing, and production environments based
on the expected concurrent number of users, case type volume, and other factors that impact application
performance. You may have been part of the initial application sizing exercise, depending on when you arrived
on the project. When you perform formal load testing, you see how well your application performs according to
key performance indicators (KPIs).
Note: Throughout the development cycle, you can monitor the performance of the Pega application using Pega Predictive Diagnostic Cloud or Pega Autonomic Event Services. The tool you use depends on whether you are on premises, using Pega Cloud, or using another cloud environment.
As you add new users and new functionality to the application, the environment infrastructure can become insufficient for what you are asking the application to handle. For example, your new commercial loan request application shortened the process from two weeks to two days. Because the commercial loan application is successful, the personal loan department wants to start using the application. You expect 10 times as many personal loan requests as commercial loan requests, and 700 new personal loan processors to start using the application. The effect is similar to pouring more water into a glass that is too small: to hold more water, you need a larger glass.

Consider initiating a new hardware sizing estimate if you are:


l Increasing the number of concurrent users
l Introducing a new Pega application, such as Pega Customer Service or Pega Sales Automation
l Increasing the number of background processes, such as agents or listeners
l Introducing a new case type
l Introducing one or more new integrations to external systems, including robotic automations
Pega offers the Hardware Sizing Estimate service to help you assess hardware needs based on current and
planned application usage. Even if you are unsure if the application infrastructure needs modification, you can
initiate a request with this service for guidance on how to proceed. The resulting estimate includes
recommended settings for application server memory, number of JVMs, and database server disk and memory
needed to support your application.
KNOWLEDGE CHECK

Why do you initiate a new sizing estimate when you are adding new functionality or users to your
existing application?
Your current application infrastructure may be insufficiently sized to handle the load of new users,
applications, or case types you plan to introduce. To avoid a degradation in application performance as you
evolve your application, estimate new infrastructure requirements and implement whatever is necessary to
support your application requirements.
How to submit a hardware sizing estimate request
You determined that new changes or enhancements to your application could impact the performance or
stability of your production application. The Pega Hardware Sizing Estimate team can help you estimate the
infrastructure needs for your application. The following steps illustrate how to submit a request to the Hardware
Sizing Estimate team.

Initiate the request in one of two ways:


l If you are internal to Pega, create a request in the Hardware Sizing Estimate application accessible through
the Pega Portal. The Hardware Sizing Estimate application prompts you for the environment information
needed to process your sizing estimate request.
l If you are not internal to Pega, send an email to [email protected]. The Hardware Sizing Estimate team sends you an Excel-based questionnaire to complete.
Important: Do not use an existing version of the questionnaire. The Hardware Sizing Estimate team constantly
refines the sizing models. Always request a new questionnaire for new hardware sizing estimates.
Whether you use the sizing request application or the questionnaire process, collect information about the current number of users and details about the database, application server, and JVM configuration. Work with your operations team to gather this information.
The Hardware Sizing Estimate team uses the information you supply about your application to produce an
estimated sizing report. The process takes approximately five business days. The team processes requests on a
first in, first out (FIFO) basis. The Hardware Sizing Estimate team sends you the sizing report when complete.
Note: The process for sizing estimation is the same if your application is running on Pega Cloud. The Hardware
Sizing Estimate team works with Pega Cloud provisioning to communicate environment sizing recommendations.
Estimating hardware requirements quiz
Question 1
What are two consequences of adding a large number of new users to your application without evaluating the
hardware infrastructure to support those users? (Choose Two)

1. (Correct) The connection pool for your application may not be sufficient to handle the number of concurrent users and background processes. – The sizing recommendation is likely to increase the number of connections available in the connection pool for the application.
2. There is no impact; the application resizes automatically to handle the new load. – While cloud environments support elasticity, your application may require more specific configuration changes based on application usage.
3. (Correct) Overall application performance can degrade. – When adding new users to the application, each user's clipboard takes up a certain amount of memory. Depending on how many active users are in the application and how often garbage collection occurs, that memory may not be available for other application processes.
4. Application security configuration is harder to maintain. – While this may or may not be true depending on your security configuration, security configuration does not have a bearing on your hardware infrastructure.

Question 2
Which application changes cause you to request a hardware sizing estimate? (Choose Two)

1. (Correct) Adding a new division of users to your application – A new division of users brings more traffic to the application. Depending on the number of users, this addition may impact the performance of the application. Request a sizing estimate in this situation.
2. (Correct) Introducing a new case type into your application, exposed by Pega Web Mashup in the organization's public-facing website – Exposing a new case type on an organization's website results in many instances of this case type. Depending on the complexity of the case type, this addition could have database impact. Request a sizing estimate in this situation.
3. Modifying a process flow to include a Split For Each shape – A Split For Each occurs within the user's requestor session. Assuming no additional case type volume or new requestor sessions are created as part of the Split For Each processing, the existing infrastructure is sufficient to support this change.
4. Replacing an existing web SOAP call with a REST call – Assuming the response data is of the same type and size, the existing infrastructure is sufficient to support this change.

Question 3
How do you determine if the existing hardware is sufficient to handle a significant increase in the number of cases created in the application?

1. (Correct) Engage the Pega Hardware Sizing Estimate team to produce a recommended sizing. – The Pega Hardware Sizing Estimate team possesses years of experience sizing Pega Platform and Pega application implementations and has developed a sizing algorithm specific to Pega's infrastructure needs.
2. Research hardware sizing estimation tools and choose the tool appropriate for your application server and database. – While these tools exist, they do not take into account the nuances of Pega application usage.
3. Create a request with Pega Support to ask for assistance in recommending infrastructure changes. – Pega Support likely directs you to the Hardware Sizing Estimate team.
4. Ask the organization's infrastructure team to right-size the application to handle modifications to the application. – While the organization's infrastructure team can give estimates based on existing application usage, the Pega Hardware Sizing Estimate team can give you specific guidance on how the new volume affects the Pega application.
Handling flow changes for cases in flight
Introduction to handling flow changes for cases in flight
Business applications change all the time. These changes often impact active cases in a production environment.
Without proper planning, these active cases could fall through the cracks due to a deleted or modified step or
stage.
By properly planning your strategy for upgrading production flows, active cases will be properly accounted for
and the change will be seamlessly integrated. This lesson presents three approaches to safely update flow rules
without impacting existing cases that are already in production.
After this lesson, you should be able to:
l Identify updates that might create problem flows
l Choose the best approach for updating flows in production
l Use problem flows to resolve flow issues
l Use the Problem Flow landing page
Flow changes for cases in flight
Business processes frequently change. These changes can impact cases that are being worked on. Without
proper planning, these in-flight cases could become stuck or canceled due to a deleted or modified step or stage.
For example, assume you have a flow where a case goes from a Review Loan Request step, to a Confirm Request step, and then to a Fulfill Request step. If you remove the Confirm Request step during a process upgrade, what happens to open cases in that step?
By properly planning your strategy for upgrading production flows, in-flight cases will be properly accounted for
and the upgrade will be seamlessly integrated.

Possible reasons for problem flows


Since flow rules hold assignment definitions, altering a flow rule could invalidate existing assignments. Following
are examples of why a problem may occur in a flow:
l You remove a step in which there are open cases. This change causes orphaned assignments.
l You replace a step with a new step with the same name. This change may cause a problem since flow
processing relies on an internal name for each assignment shape.
l You remove or replace other wait points in the flow such as a Subprocess or a Split-For-Each shape. These
changes may cause problems since their shape IDs are referenced in active subflows.
l You remove a stage from a case life cycle while there are in-flight cases. In-flight cases are not able to change stages.

Parent flow information that affects processing


Run-time flow processing relies on flow information contained in assignments. Changing an active assignment's
configuration within a flow, or removing the assignment altogether, will likely cause a problem. Critical flow-
related assignment information includes:
pxTaskName — the shape ID of the assignment shape to which it is linked. For example, Assignment1
pyInterestPageClass — the class of the flow rule. For example, FSG-Booking-Work-Event
pyFlowType — the name of the flow rule. For example, Request_Flow_0
How to manage flow changes for cases in flight
There are three fundamental approaches to safely updating flows that are already in production. Because each
application configuration and business setting is unique, choose the approach that best fits your situation.
Important: Whichever approach you choose, always test the assignments with existing cases, not just the newly
created cases.

Approach 1: Switch to the application version of in-flight cases


This approach allows users to process existing assignments without having to update the flows. Add a new
access group that points to the previous application version. Then, add the access group to the operator ID so
that the operator can switch to the application from the user portal.
In this example, an application has undergone a major reconfiguration. You created a new version of the application that includes newer ruleset versions. Updates include reconfigured flows, as well as decisioning and data management functionality. You decided to create a new access group due to the extent of changes, which go beyond flow updates.

Advantage: The original and newer versions of the application remain intact since no attempt is made to
backport enhancements added to the newer version.
Drawback: Desirable fixes and improvements incorporated into the newer application version are not available to
the older version.
Care must be taken not to process a case created in the new application version when using the older application version, and vice versa. Both cases and assignments possess a pxApplicationVersion property. Security rules, such as Access Deny, can be implemented to prevent access to cases and assignments that do not correspond to the currently used application version.
The user's worklist can either be modified to display only cases that correspond to the currently used application version, or the application version can simply be displayed as a separate worklist column. Likewise, Get Next Work should be modified to return only workbasket assignments that correspond to the currently used application version.
Approach 2: Process existing assignments in parallel with the new flow
This approach preserves certain shapes, such as Assignment, Wait, Subprocess, Split-For-Each, and so on, within
the flow despite those shapes no longer being used by newly created cases. The newer version of the flow is
reconfigured such that new cases never reach the previously used shapes; yet existing assignments continue to
follow their original path.
In this example, you have redesigned a process so that new cases no longer utilize the Review and Correct
assignments. You will replace them with Create and Review Purchase Request assignments. Because you only
need to remove two assignments, you decide that running the two flow variations in parallel is the best
approach.

You make the updates in the new flow version in two steps.
First, drag the Review and Correct assignments to one side of the diagram. Remove the connector from the Start
shape to the Review assignment. Keep the Confirm Request connector intact. This ensures that in-flight
assignments can continue to be processed.
Second, insert the Create and Review Purchase Request assignments at the beginning of the flow. Connect the Review Purchase Request assignment to the Create Purchase Order Smart Shape using the Confirm Request flow action.

Later, you can run a report that checks whether the old assignments are still in process. If not, you can remove
the outdated shapes in the next version of the flow.
Advantage: All cases use the same rule names across multiple versions.
Drawbacks: This approach may not be feasible given configuration changes. In addition, it may result in cluttered
Process Modeler diagrams.

Approach 3: Circumstancing
This approach involves circumstancing as many rules as needed to differentiate the current state of a flow from its desired future state. One type of circumstancing that satisfies this approach is as-of-date circumstancing: a DateTime property within the case, for example pxCreateDateTime, is used as the Start Date within a date range, and the End Date within the date range is left blank. An application-specific DateTime property, such as .CustomerApprovalDate, could be used as well.
Advantage: Simple to implement at first using the App Explorer. No need to switch applications.
Drawbacks: The drawbacks of circumstancing were listed in the Designing for Specialization lesson. The primary drawback is that the Case Designer is affected whenever circumstancing is used, apart from its support for specialized Case Type rules. Because Case Type rules cannot be circumstanced by a DateTime property, as-of-date circumstancing is not allowed for them. This presents a problem because the changes should be carried forward indefinitely.

Since the Case Designer's scope is requestor-level, the Case Designer only "sees" the base versions of circumstanced rules such as flows. Whenever a circumstanced rule is opened from another rule, what is shown is the base version. To locate the correct circumstanced variation of the base rule, use the Actions > View siblings menu option. The greater the number of circumstanced rules, the harder it becomes to picture how the collection of similarly circumstanced and non-circumstanced rules interact.
Approach 4: Move existing assignments
In this approach, you set a ticket that is attached within the same flow, change to a new stage, or restart the
existing stage. In-flight assignments advance to a different assignment where they resume processing within the
updated version.
You run a bulk processing job that locates every outdated assignment in the system affected by the update. For
each affected assignment, bulk processing should call Assign-.OpenAndLockWork followed by Work-.SetTicket,
pxChangeStage, or pxRestartStage. For example, you can execute a Utility shape that restarts a stage
(pxRestartStage).
The following example shows a bulk assignment activity using SetTicket:
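Because the original screenshot is not reproduced here, the following is a sketch of the steps such an activity might contain. The methods Obj-Browse, Assign-.OpenAndLockWork, and Work-.SetTicket are standard; the browse criteria and ticket name are illustrative:

1. Obj-Browse – retrieve the affected assignments, for example Assign-Worklist instances whose pxTaskName equals the ID of the removed shape
2. For each result: Assign-.OpenAndLockWork – open and lock the associated case
3. Work-.SetTicket – raise the ticket (for example, "MoveToReview") that the updated flow listens for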

After you have configured the activity, you deploy the updated flow and run the bulk assignment activity.
Important: The system must be offline when you run the activity.

Example
In this example, a small insurance underwriting branch office processes about 50 assignments a day; most are
resolved within two days. In addition, there is no overnight processing. You run a bulk process because the
number of unresolved assignments is relatively small and the necessary locks can be acquired during the
evening. Note that it is not necessary to use the Commit method.
Advantage: A batch process activity directs assignments by performing the logic outside the flow, so you do not need to update the flow by adding a Utility shape. The activity leaves the processing logic in the flow untouched, which makes upgrades easier. The activity also facilitates flow configuration and maintenance in Pega Express.
Drawback: It might be impractical if the number of assignments is large, or if there is no time period when the
background processing is guaranteed to acquire the necessary locks.
Approach 5: Direct Inheritance and Dynamic Class Referencing (DCR)
This approach is a hybrid solution that involves circumstancing for shared work pool-level rules and direct
inheritance for case-specific rules. For case-specific rules, differentiation of a flow’s current state from its desired
future state is accomplished using direct inheritance and DCR. The example below illustrates this approach.

Work pool-level As-of-Date Circumstancing:
FSG-Booking-Work HypotheticalSharedSubFlow

Case-specific Direct Inheritance:
FSG-Booking-Work-BookEvent-Y2020
Find by name first (Pattern) = false
Parent class (Directed) = FSG-Booking-Work-BookEvent

In combination with the differentiating value being set, the pxObjClass of the case would be changed. In the example above, the value of the differentiating property, pxCreateDateTime, is established immediately upon case creation. The pyDefault Data Transform for the case can determine the current year using the following logic:
Param.CurrentYear = @toInt(@String.substring(@DateTime.CurrentDateTime(),0,4))
Then it can change the case's pxObjClass using the following logic:
.pxObjClass = .pxObjClass + "-Y" + Param.CurrentYear
Although this approach requires a class to be created for every year, it does not behave the same as as-of-date circumstancing.
One way to avoid creating a class for each year is to skip one or more years when pointing to the Direct Parent
class if no flow changes have been made within those years that would affect in-flight cases. Instead of always
using Param.CurrentYear, the most recent year when class specialization occurred could be determined using a
Data Page using the following logic:
Param.ContractYear = D_ContractYearToUse[ObjClass:Primary.pxObjClass, StartYear:Param.CurrentYear].ContractYear
.pxObjClass = .pxObjClass + "-Y" + Param.ContractYear
The logic within D_ContractYearToUse could be:
.ContractYear = Param.StartYear
For each page in D_StartsWithClassNameDesc[ObjClass:Param.ObjClass].pxResults
    Param.Year = <the last 4 characters in> pyClassName
    IF (Param.Year only contains digits AND Param.Year <= Param.StartYear)
    THEN Primary.ContractYear = Param.Year
         Exit Transform
Advantage: Classes defined within an application's ruleset stack are requestor-level information, so they are compatible with the Case Designer's ability to display a case's current state. The Case Designer does so in conjunction with how the application rule's Cases & data tab is configured.
Name Work ID prefix Implementation class
BookEvent EVENT- FSG-Booking-Work-BookEvent-Y2022
WeatherPrep WPREP- FSG-Booking-Work-WeatherPrep-Y2021
RoomsRequest ROOMS- FSG-Booking-Work-RoomsRequest-Y2020

Note that application rules configured as shown above would only need to be used during design and
development. In Production, DCR would be used to establish the pxObjClass that each case type should use.
Drawbacks: A number of arguments can be made against using this approach. Each argument is addressed below.

Argument: Creating classes takes extra time.
Counterargument: It takes very little time to create a new class and configure it to directly extend another class. If you only need to create a class once a year, the amount of time to perform this task is negligible in comparison to the other development tasks that take place within that year.

Argument: The inheritance path can become too long and/or impact rule resolution performance.
Counterargument: There is no restriction on the depth of an inheritance hierarchy. Other limits would be reached long before the inheritance hierarchy became an issue.

Argument: A lengthy inheritance path would impact rule resolution performance.
Counterargument: Rule resolution begins with a database query. Circumstanced rules are evaluated at the end of the 10-step rule resolution algorithm; class names are examined at the beginning. An extra row in the database is an extra row, whether it is due to a different pxObjClass value or to different pyCircumstanceProp and pyCircumstanceVal column values. Rule resolution is only performed so many times before the resolved rule is cached. The rule cache is based on usage patterns; over time, the rule cache evolves whether as-of-date circumstancing or date-based direct inheritance is used.

Argument: Extra classes would complicate future database table storage decisions.
Counterargument: Class groups, work pools, and Data-Admin-DB-Table records determine where data is persisted.

Argument: The approach does not scale.
Counterargument: Increasing the number of unique pxObjClass values in the same database table does not affect system architecture.
How to use problem flows to resolve flow issues
When an operator completes an assignment and a problem arises with the flow, the primary flow execution is
paused and a standard problem flow starts. A standard problem flow enables an administrator to determine
how to resolve the flow.
Pega Platform provides two standard problem flows: FlowProblems for general process configuration issues, and
pzStageProblems for stage configuration issues. The problem flow administrator identifies and manages problem
flows on the Flow Errors landing page.
Note: As a best practice, override the default workbasket or problem operator settings in the
getProblemFlowOperator routing activity in your application to route the problem to the appropriate destination.

Customizing FlowProblems
You can copy the FlowProblems flow to your application to support your requirements. Do not change the name
key. In this example, you add a Send Email Smart Shape to each of the CancelAssignment actions so that the
manager is notified when the cancellations occur.

Managing stage-related problem flows


Problem flows can arise due to stage configuration changes, such as when a stage is removed or relocated. When
an assignment is unable to process due to a stage-related issue, the system starts the standard pzStageProblems
flow.
In the following example, assume the Booking stage has been refactored into a separate case type. A Booking case creation step and a wait step were added at the end of the Request stage's process, and you removed the now unnecessary Booking stage from the parent case's life cycle. Finally, assume that the in-flight assignments that existed in the Booking stage were not moved back to the Request stage by a bulk processing activity.

When a user attempts to advance a case formerly situated in the removed Booking stage, the pzStageProblems
flow would be initiated. Within this flow, the operator can use the Actions menu to select Change stage.

The operator can then manually move the case to another stage, the Request stage being the most appropriate
choice.
For backward compatibility, consider temporarily keeping an outdated stage and its steps as they are. For newly created cases, use a Skip stage when condition in the Stage Configuration dialog to bypass the outdated stage, as sketched below.
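A minimal sketch of such a skip condition, assuming the new life cycle went live at a known date and time. The property pxCreateDateTime is a standard case property; the parameter name CutoverDateTime is hypothetical:

Skip stage when: .pxCreateDateTime >= Param.CutoverDateTime

Cases created before the cutover keep the outdated Booking stage; newer cases bypass it.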
Handling flow changes for cases in flight quiz
Question # 1
Which shape is least likely to cause a problem when removed from a flow already in production?

1. Split-For-Each – Removing a Subprocess or a Split-For-Each shape may cause problems since their shape IDs are referenced in active subflows.
2. Assignment – Removing an Assignment shape for which there are open assignments results in orphaned assignments.
3. Subprocess – Removing a Subprocess or a Split-For-Each shape may cause problems since their shape IDs are referenced in active subflows.
4. (Correct) Decision – A Decision shape does not contain properties that are referenced by assignments.

Question # 2
Which assignment property is not critical in terms of flow processing?

1. pyFlowType (the name of the flow rule) – This property is available on an assignment.
2. pyInterestPageClass (the class of the flow rule) – This property is available on an assignment.
3. (Correct) pxTaskLabel (the label given to the assignment task) – This property is not critical to flow processing.
4. pxTaskName (the shape ID of the assignment shape) – This property is available on an assignment.

Question # 3
Which one of the following is usually the best approach to safely change flows in production?

1. Create a new circumstanced flow and leave the old flow intact to support outdated assignments – Circumstancing the flow is not a valid approach since more rules than just the flow rule would likely need to be circumstanced as well.
2. (Correct) The best approach to use depends on the nature of the application – The best approach depends upon a number of factors, such as the complexity of the changes made to the application.
3. Revert the user's application version when processing older assignments – Having to switch applications to process older cases may not be acceptable to certain users.
4. Use tickets, change stage, or restart the current stage to process old assignments before applying changes – This approach might be impractical if the number of assignments is large, or if there is no moment when the background processing is guaranteed to acquire the necessary locks.
Extending an application
Introduction to extending an application
Case specialization describes how an existing application can be transformed into a framework / model / template / blueprint application without having to rename the classes of existing case type instances. As an LSA, you are sometimes asked to take an existing application and evolve it into a foundation for more specialized implementations.
After this lesson, you should be able to:
l Describe how an existing application can be transformed into a framework / model / template / blueprint
application
l Extend an application to a new user population
l Split an existing user population
How to extend existing applications
Extending a production application can occur for various reasons, planned or unplanned. Some of these reasons
include:
l The enterprise has planned to sequentially roll out extensions to a foundation application due to budgetary
and development resource limitations
l The enterprise has discovered the need to:
Extend the production application to a new set of users
Split the production application to a new set of users
In either situation, the resulting user populations access their own application derived from the original
production application.
The previous scenarios fall into two major approaches:
l Extending the existing production application to support a new user population
l Splitting the existing production application to support subsets of the existing user population
Within each of the two major approaches are two deployment approaches: either to a new database or to the
same database.

Deployment approaches
Whether extending or dividing an application, you can host the user populations on either a new database or the
original database.

Deploying to a new database


When you deploy the application to a new database, the data in both applications are isolated from each other.
For instance, you can use the same table names in each database. Use ruleset specialization to differentiate the
rules specific to each application's user population. This approach is similar to using foundation data model
classes — embellishment is preferable to extension. You do not need to use class specialization.

Deploying to the original database


When you deploy to the original database, use class specialization to differentiate the data. Class specialization
creates new Data-Admin-DB-ClassGroup records and work pools. As a result, case data is written to tables that are
different from the original tables.
Security enforcement between applications hosted on the same database is essential. Unlike case data,
assignments and attachments cannot be stored to different database tables. You can avoid this issue by using
Pega’s multitenant system. Discuss with the organization whether Pega’s multitenant system is a viable option.
Applications, cases, and assignments contain various organization properties. Use these properties as
appropriate to restrict access between applications hosted in the same database.
Organization Properties

Application            Case                              Assignment
pyOwningOrganization   pyOrigOrg, pyOwnerOrg             pxAssignedOrg
pyOwningDivision       pyOrigDivision, pyOwnerDivision   pxAssignedOrgDiv
pyOwningUnit           pyOrigOrgUnit, pyOwnerOrgUnit     pxAssignedOrgUnit
                       pyOrigUserDivision

Run the New Application wizard to achieve class specialization. In the Name your application screen, click the
Advanced Configuration link. Under Organization settings, enter at least one new value in the Organization,
Division, and Unit Name fields.
Suppose the new user population is associated with a new division, and there is a requirement to prevent an operator in the new division from accessing an assignment created by the original division. The easiest solution is to implement a Read Work- Access Policy that references the following Work- Access Policy Condition.

pxOwnerDivision = Application.pyOwningDivision
AND pxOwnerOrganization = Application.pyOwningOrganization
Alternatively, you can also define an access deny rule.
Note: Using De Morgan's law, define the access when rule invoked by the Access Deny rule as the negation of the access when rule that a single access role would use.

When...
pxOwnerDivision != Application.pyOwningDivision
OR pxOwnerOrganization != Application.pyOwningOrganization

Extending an application to a new user population


If you extend an application to support a new user population, the extended application can be:
l An application previously defined as a foundation application
l An application that becomes a template, framework, blueprint, or model application on top of which new
implementations are built

Extending the application to a new database


When deploying to a new database, ruleset specialization is sufficient to differentiate the existing application’s
user population. Use the ruleset override procedure described in the Designing for Specialization lesson to
specialize the existing application and to define the new application.

Extending the application to an existing database


To support a new user population within an existing database, run the New Application wizard to generate an
application that extends the classes of the existing application’s case types. Then use the ruleset override
procedure described in the Designing for Specialization lesson to specialize the existing application.
Splitting an application's existing user population
In some situations, you may want to split an application's existing user population into subsets. The resulting
subsets each access a user population-specific application built on the original application.
When active cases exist throughout a user population and there is a mandate to subdivide that user population
into two distinct applications, reporting and security become problematic. Cloning the existing database is not a good approach because it makes duplicate access, such as by agents, difficult to control.

Moving a subset of the existing user population to a new database


If you create a new database to support a subdivided user population, and immediate user migration is not
required, you can gradually transition user/account data from the existing database to the new database. Ideally,
transfer user/account data starting with classes that have the fewest dependencies. For example, attachment
data does not reference other instances.
Copy resolved cases for a given user/account to the new database, but do not purge resolved cases from the
original system immediately. Wait until the migration process is complete for that user/account. Use the
Purge/Archive wizard to perform this task (Designer Studio > System > Operations > Purge/Archive).
Optionally, modify case data organization properties to reflect the new user population.
A requirement to immediately move a subset of an existing user population to a new database is more complex
due to the likelihood of open cases. Use the Package Work wizard to perform this task (Designer Studio > Distribution > Package Work).

Creating subsets of the existing user population within the original database
The most complex situation is when immediate user population separation is mandated within the same
database. To support this requirement, a subset of the existing cases must be refactored to different class
names.
Manipulating the case data model for an entire case hierarchy while a case is active is risky and complex. For this
reason, seek advice and assistance before attempting a user population split for the same application within the
same database.

Case type class names


Avoid refactoring every case type class name when splitting a user population within an existing database. Refactoring class names is a time-consuming process, and businesses prefer the most expedient and cost-effective change management process. The most cost-effective approach keeps the largest percentage of users in the existing work pool class and moves the smaller user population to a new work pool class.
Pega auto-generates database table names, and Pega Express generates names for rules such as When rules, flow names, and sections. Case type class names need not exactly reflect their user populations. An application's name, its organization properties, and associated static content are sufficient to distinguish one specialized application from another.
The notion of defining a framework, foundation, template, model, or blueprint layer that abstracts a business process is sound. In the past, these foundation classes used the FW (FrameWork) abbreviation in their class names. Naming case classes with the FW abbreviation sometimes occurs at the beginning of the development process. If an implementation application becomes a framework application after it is in production, its class names do not contain the FW abbreviation. The abbreviation is an optional, not a necessary, naming convention.
Extending existing applications quiz
Question # 1
What are two benefits of using a completely new database for a new user population? (Choose Two)

1. Multitenancy can be avoided – Adding a new user population to an existing database, while keeping that user population's data separated from the existing user population's data, can be achieved using a multitenancy approach.
2. (Correct) No need exists to use class specialization – With complete database separation, there is no need to use class specialization. Each implementation application can use ruleset specialization to differentiate the rules specific to its user population.
3. Assignments and attachments can be saved to different tables – Work pool data can be saved to different tables according to their associated Data-Admin-DB-ClassGroup records, but assignments and attachments cannot.
4. (Correct) The same table names can be used in both databases – If the user population is completely new, then a completely new database can be used. Doing so achieves total isolation even if the same table names are used in both databases.
Question # 2
Which two statements are true when subdividing an existing user population into two distinct applications?
(Choose Two)

1. (Correct) User/account data can be gradually migrated – If a new database is created to support the transition and immediate migration is not required, user/account data can be gradually migrated from the existing database to the new database until the user population separation is complete.
2. Reporting and security are simplified – When active cases exist throughout a user population and there is a mandate to subdivide that user population into two distinct applications, reporting and security become problematic.
3. (Correct) Resolved cases can be duplicated during migration – Resolved cases for a given user/account can be duplicated, but not purged from the original system, until the migration process is complete.
4. Cloning the existing database is desirable – Cloning the existing database is not desirable since it would be difficult to control duplicate access, for example by agents.

Question # 3
Which two statements are true regarding rule names? (Choose Two)
1. All foundation classes must contain FW – The need to specialize an application may not be discovered until it is in production. At that point, the application becomes a foundation. Refactoring that application's case type names to contain FW would be wasteful.
2. (Correct) Refactoring class names is a time-consuming process – Refactoring class names is a time-consuming process. Businesses prefer that changes be implemented in the most expedient way, which is also the most cost-effective way.
3. (Correct) Pega Express generates names for rules – Database table names are auto-generated, and Pega Express generates names for rules such as When rules, flow names, and sections.
4. Case type class names must exactly reflect their user population, otherwise developer productivity may suffer – Claiming that case type class names must exactly reflect their user population, or that developer productivity would otherwise suffer, is a weak argument.
Leveraging AI and robotic automation
Introduction to leveraging AI and robotic automation
Artificial intelligence (AI) and robotic automation technology changes the way people work. AI and robotics both
automate work, each in a different way. Choosing the most appropriate automation technology depends on the
result you want to achieve.
After this lesson, you should be able to:
l Compare AI and robotic automation technologies
l Identify opportunities to leverage AI in your application
l Identify opportunities to leverage robotic automation in your application
Artificial intelligence and robotic automation comparison
Artificial intelligence (AI) and robotic automation are similar in that they perform a task or tasks instead of a human being. Unlike human beings, AI and robotic automation solutions are not constrained by geography and are not prone to error. However, the application of each technology differs based on what you are trying to achieve.
You could also design a solution that uses AI and robotic automation capabilities in tandem; they are not
mutually exclusive technologies. Grasping the benefits and differences between AI and robotic automation
allows you to identify opportunities to use these technologies and to design an application that can radically
change the way the organization performs work. Pega offers the following technology to meet these needs:
l AI capabilities, in the form of the Intelligent Virtual Assistant, Customer Decision Hub, the Decision
Management features
l Robotic automation capabilities, including Robotic Desktop Automation (RDA) , Robotic Process Automation
(RPA), and Workforce Intelligence (WFI)
The following table summarizes the key differences between Robotic Desktop Automation (RDA), Robotic Process
Automation (RPA), Workforce Intelligence (WFI) and AI capabilities.

Capability                                              RDA   RPA   WFI   AI

Assists end users with routine manual tasks              X
Fully replaces the end user's involvement in the task          X           X
Identifies opportunities for process improvement                     X
Self-learning technology, requiring no programming                         X

Artificial intelligence
Artificial intelligence (AI) can be defined as anything that makes the system seem smart. An artificial intelligence solution learns from the data available to it. This data can be structured or unstructured (such as big data), and can include image, sound, or text inputs. The value of AI increases as the solution gains age and experience, not unlike a human being.
To be self-learning, an AI solution uses experience, not programmed inputs, to form its basis of knowledge. The Adaptive Decision Manager (ADM) service is an example of adaptive learning technology. For example, when training an AI solution to recognize a cat, you do not tell the AI to look for ears, whiskers, a tail, and fur. Instead, you show the AI pictures of cats. When the AI responds with a rabbit, you coach the AI to distinguish a cat from a rabbit. Over time, the AI becomes better at identifying a cat. This technology can be a powerful ally in building a customer's profile, preferences, and attributes.
An AI solution can also predict the next action a customer will take. This ability allows an organization to serve
the customer in a far more effective way. The organization can know why a customer is contacting them before
the customer even calls. For example, an AI solution can guide a customer service representative to offer
products or services that the customer actually wants, based on previous behavior of the customer. Predictive
Analytics provides this capability.
The Customer Decision Hub combines both predictive and adaptive analytics to provide a seamless customer
experience and only shows offers relevant to that customer. The Customer Decision Hub is the centerpiece of the
Pega Sales Automation, Pega Customer Service, and Pega Marketing applications.
AI uses natural language processing (NLP) to detect patterns in text or speech and determine the intent or sentiment of a question or statement. For example, a bank customer uses the Facebook Messenger channel to check his account balance. In the background, the bank's software analyzes the intent of the question in the message, performs the balance inquiry, and returns the response to the customer. The Intelligent Virtual Assistant is an example of NLP in action.
Note: AI is a powerful tool. AI can also carry risk, if you are not cautious. For more information on this topic, see
the AI Customer Engagement: Balancing Risk and Reward presentation on Pega.com.

Robotic automation
Robotic automation is technology that allows software to replace human activities that are rule based, manual,
and repetitive. Pega robotic automation applies this technology with:
l Robotic desktop automation (RDA)
l Robotic process automation (RPA)
l Workforce intelligence (WFI)
Robotic desktop automation (RDA) automates routine tasks to simplify the employee experience. RDA mimics the actions of a user interacting with another piece of software on the desktop. For example, a customer service representative (CSR) logs in to five separate desktop applications to handle customer inquiries throughout the day. You can use RDA to log that CSR in to these applications automatically, which allows the CSR to focus on better serving the customer.
Use of RDA is also known as user-assisted robotics.
Robotic process automation (RPA) fully automates routine and structured manual processes; no user involvement is required. With RPA, you assign a software robot to perform time-consuming, routine tasks with no interaction with a user. These software robots perform work on one or more virtual servers. For example, a bank requires several pieces of documentation about a new customer before the bank can onboard that customer. Gathering this information can take one person an entire day. You can use RPA to gather these documents from one or more source systems; the software robot can perform the same process in minutes.

Use of RPA is also known as unattended robotics.


Workforce intelligence (WFI) connects desktop activity monitoring to cloud-based analytics to gain insight about your people, processes, and technology. WFI enables the organization to find opportunities to streamline processes or user behavior. For example, this technology can identify where a user is repeatedly copying and pasting, switching screens, or typing the same information over and over, which allows the organization to detect areas for process improvement. When you implement changes to those processes, the organization can realize significant time and money savings.
For more information on RDA, RPA, and WFI, see the Pega Robotic Automation landing page on the Pega
Community.
KNOWLEDGE CHECK

What characteristics distinguish Artificial Intelligence from robotic automation? What are some
examples of robotic automation and Artificial Intelligence?
Robotic automation mimics user behavior through software. Software robots perform routine, sometimes onerous, tasks instead of users. Robotic automation can solve problems where a web services or data warehousing solution was previously required. An example of robotic automation is Robotic Desktop Automation, which can invoke automations to gather data from legacy systems from the user's desktop. Artificial intelligence solutions learn based on available inputs. AI solutions also need to be trained to refine their ability to predict the future behavior of those interacting with the AI solution. The Customer Decision Hub and the Intelligent Virtual Assistant are two examples of Pega's implementations of AI.
Leveraging AI and robotic automation quiz
Question 1
You want to reduce call-center volume and increase customer service representative effectiveness by offloading
routine tasks to a chatbot through a conversational UI. Which tool do you choose?

1. Robotic desktop automation (RDA) – Robotic desktop automation mimics user interaction with desktop applications.
2. Robotic process automation (RPA) – Robotic process automation performs routine tasks on behalf of the user in the background.
3. (Correct) Intelligent Virtual Assistant – The Intelligent Virtual Assistant lets you introduce a conversational UI into your organization, allowing you to deliver contextual and personalized service.
4. Natural language processing – Natural language processing is part of an overall AI solution.

Question 2
What two key aspects distinguish an AI solution from a robotic automation solution? (Choose Two)

1. (Correct) An AI solution needs to be trained or coached. – An AI solution needs to be trained or coached to arrive at a correct conclusion. Over time, the AI becomes more effective.
2. An AI solution must be carefully programmed. – One of the benefits of AI is that it does not require programming; AI learns on its own.
3. (Correct) An AI solution takes in unstructured input to help it learn. – One of the benefits of AI is that it can take in multiple forms of data, structured or unstructured.
4. An AI solution replaces manual, repetitive tasks. – While an AI solution can replace manual, repetitive tasks, this work is better suited to an RDA or RPA solution.

Question 3
Tasks that are best suited for a robotic automation solution are _____________ and _____________. (Choose Two)
1. (Correct) highly manual – Highly manual tasks are well suited for a robotic automation solution.
2. (Correct) routine – Routine tasks are well suited for a robotic automation solution.
3. require analysis – Tasks that require analysis are suited for human beings, or possibly AI.
4. require judgment – Tasks that require judgment are suited for human beings, or possibly AI.
COURSE SUMMARY

Lead System Architect summary


Now that you have completed this course, you should be able to:
l Design the Pega application as the center of the digital transformation solution
l Describe the benefits of starting with a Pega customer engagement or industry application
l Recommend appropriate use of robotics and artificial intelligence in the application solution
l Leverage assets created by business users who are building apps in Pega Express
l Design case types and data models for maximum reusability
l Design an effective reporting strategy
l Design background processes, user experience, and reporting for optimal performance
l Create a release management strategy, including DevOps, when appropriate
l Ensure your team is adhering to development best practices and building quality application assets
l Evolve your application as new business requirements and technical challenges arise

Next steps
To further your learning and share in discussions pertinent to lead system architects, including the latest
information on certification requirements, see the Lead System Architects space on the Pega Community.
