c1791 Student Guide
8.3.1
Student Guide
© 2019
Pegasystems Inc., Cambridge, MA
All rights reserved.
Trademarks
For Pegasystems Inc. trademarks and registered trademarks, all rights reserved. All other trademarks or service marks are property of their
respective holders.
For information about the third-party software that is delivered with the product, refer to the third-party license file on your installation media
that is specific to your release.
Notices
This publication describes and/or represents products and services of Pegasystems Inc. It may contain trade secrets and proprietary information
that are protected by various federal, state, and international laws, and distributed under licenses restricting their use, copying, modification,
distribution, or transmittal in any form without prior written authorization of Pegasystems Inc.
This publication is current as of the date of publication only. Changes to the publication may be made from time to time at the discretion of
Pegasystems Inc. This publication remains the property of Pegasystems Inc. and must be returned to it upon request. This publication does not
imply any commitment to offer or deliver the products or services described herein.
This publication may include references to Pegasystems Inc. product features that have not been licensed by you or your company. If you have
questions about whether a particular capability is included in your installation, please consult your Pegasystems Inc. services consultant.
Although Pegasystems Inc. strives for accuracy in its publications, any publication may contain inaccuracies or typographical errors, as well as
technical inaccuracies. Pegasystems Inc. shall not be liable for technical or editorial errors or omissions contained herein. Pegasystems Inc. may
make improvements and/or changes to the publication at any time without notice.
Any references in this publication to non-Pegasystems websites are provided for convenience only and do not serve as an endorsement of these
websites. The materials at these websites are not part of the material for Pegasystems products, and use of those websites is at your own risk.
Information concerning non-Pegasystems products was obtained from the suppliers of those products, their publications, or other publicly
available sources. Address questions about non-Pegasystems products to the suppliers of those products.
This publication may contain examples used in daily business operations that include the names of people, companies, products, and other third-
party publications. Such examples are fictitious and any similarity to the names or other data used by an actual business enterprise or individual is
coincidental.
Pegasystems Inc.
One Rogers Street
Cambridge, MA 02142-1209
USA
Phone: 617-374-9600
Fax: (617) 374-9620
www.pega.com
COURSE OVERVIEW
Setting up the Pega Platform
Leveraging Pega applications
Designing the case structure
Starting with App Studio
Designing for specialization
Promoting reuse
Designing the data model
Extending an industry framework data model
Assigning work
Defining the authorization scheme
Mitigating security risks
Defining a reporting strategy
How to define a reporting strategy
Query design
User experience design and performance
Conducting usability testing
Designing background processing
Designing Pega for the enterprise
Defining a release pipeline
Assessing and monitoring quality
Conducting load testing
Estimating hardware requirements
Handling flow changes for cases in flight
Extending an application
Leveraging AI and robotic automation
The VM environment is available offline and persists after you complete the course.
Front Stage Event Booking business scenario
Front Stage Event Booking assists customers with booking large-scale, high-profile corporate and musical events,
hosting between 5,000 and 18,000 guests per event. Front Stage has been in business for 30 years, and uses a
range of technology. Some technology is old, such as the reservation system that runs on a mainframe. Some
technology is new, such as the most recent mobile application that helps sales executives track leads.
Front Stage relies on the Information Technology (IT) department to maintain legacy applications, as well as to
support their highly mobile sales organization. In the past, IT created applications that were difficult to use and
did not meet the needs of the end users of these applications. In some cases, the new application slowed the
business instead of making the users more productive.
Front Stage is aware of several smaller event booking companies who are using newer technology to gain a
competitive edge. These smaller companies have started to cut into the corporate event booking segment, and
Front Stage sees a dip in sales in this segment as a result. Front Stage's CEO, Joe Schofield, recognizes that if
Front Stage avoids investing in technology to transform the way they operate, then Front Stage will be out of
business in two years.
On-premise
On-premise refers to systems and software that are installed and operate on customer sites, instead of in a
cloud environment.
Cloud choice
Running on the cloud in any form is an attractive option for many organizations. Pega Platform provides flexible
support across different cloud platforms and topology managers.
Your platform choice depends on your needs and your environment.
The three basic models for deploying Pega Platform on the cloud include:
• Pega Cloud – Pegasystems’ managed cloud platform service offering, architected for Pegasystems’ applications. Pega Cloud offers the fastest time to value. For more information about Pega Cloud, see the PDN article Pega Cloud.
• Customer Managed Cloud – Customer-managed cloud environments run within private clouds or on Infrastructure-as-a-Service (IaaS) offerings delivered by providers such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform.
• Partner Managed Cloud – Cloud environments owned and controlled by a business partner, who delivers the Pega Platform as a service.
Pivotal Cloud Foundry (PCF) is a topology manager; using a topology manager such as PCF still requires one of the cloud providers above. For more information, see the PDN article Deploying Pega Platform Service Broker on Pivotal Cloud Foundry by using the Ops Manager.
To have greater control over the deployment, use BOSH to deploy the Pega Service Broker. For more information, see the PDN article Deploying the Pega Platform Service Broker on Cloud Foundry by using BOSH.
Docker container
Pega can run as a Docker container. Docker is a Container-as-a-Service (CaaS) infrastructure. Docker is a cost-
effective and portable way to deploy a Pega application because you do not need any software except the Docker
container and a Docker host system.
Developers use Docker to eliminate certain problems when collaborating on code with co-workers. Operators use
Docker to run and manage apps side-by-side in isolated containers. Enterprises use Docker to build agile
software delivery pipelines.
Containers provide a way to package software in a format that can run isolated on a shared operating system.
For more information on Docker support, see the PDN article Pega Platform Docker Support.
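As a rough sketch (the image name and environment variables below are assumptions modeled on Pega's publicly available Docker images, not taken from this guide; consult the article above for the exact invocation for your release), starting a Pega node in a container might look like the following:

# Hypothetical sketch: start a Pega Platform node as a Docker container.
# The image name and variable names are assumptions; verify them against
# the Pega Platform Docker Support article for your release.
docker run -d --name pega-web -p 8080:8080 \
  -e JDBC_URL="jdbc:postgresql://db-host:5432/pega" \
  -e DB_USERNAME="pegauser" \
  -e DB_PASSWORD="secret" \
  pegasystems/pega

The only external dependency is the database that the container points to, which matches the claim above that no software beyond the container and a Docker host is required.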
Load Balancing
Load balancing is a methodology to distribute the workload across multiple nodes in a multinode clustered environment. Load balancers monitor node health and direct requests to healthy nodes in the cluster. Because the requestor information (session, PRThread, and clipboard page) is stored in memory, Pega requires all requests from the same browser session to go to the same JVM. In network parlance, this is known as sticky sessions or session affinity.
Session affinity is configured on the load balancer. It ensures that all requests from a user are handled by the same Pega Platform server. Load balancing Pega nodes in a multinode clustered environment can be achieved by using hardware routers that support “sticky” HTTP sessions; Cisco Systems Inc. and F5 Networks Inc. are examples of vendors that offer such hardware. Software, virtual, and cloud-based load balancer solutions, such as Amazon's Elastic Load Balancing, are also available.
The load balancers must support session affinity and cookie persistence. Production load balancers offer a range of options for configuring session affinity. The Pega Platform supports cookie-based affinity. You can configure cookies for high availability session affinity using the following variables:
• session/ha/quiesce/customSessionInvalidationMethod
• session/ha/quiesce/cookieToInvalidate
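For illustration, the following is a minimal prconfig.xml sketch. The setting names are the two listed above; the values are assumptions that depend on how your load balancer issues and invalidates its affinity cookie.

<!-- Sketch: cookie-based session affinity settings in prconfig.xml.
     The values shown are illustrative; substitute the cookie name and
     invalidation method that match your load balancer configuration. -->
<env name="session/ha/quiesce/customSessionInvalidationMethod" value="cookieInvalidation" />
<env name="session/ha/quiesce/cookieToInvalidate" value="JSESSIONID" />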
SSO Authentication
Single sign-on (SSO) authentication, though not required, provides a seamless experience for the end user.
Without SSO, the user reauthenticates when the user session is moved to another node.
Shared Storage
Users' session data is persisted in shared storage in the event of failover or server quiesce. Shared storage allows stateful application data to be moved between nodes. Pega supports shared storage on a shared disk drive, a Network File System (NFS), or a database; all three options require read/write access for Pega to write data. By default, Pega uses database persistence in an HA configuration. Organizations that decide on a different shared storage system need to make sure that it integrates with Pega Platform. It is essential to configure shared storage to support quiesce and crash recovery.
Split schema
The database tier must have a failover solution that meets production service-level requirements. The Pega
Platform database has a split schema. With split schema, the Pega Database Repository is defined in two
separate schemas: Rules and Data. Rules includes the rule base and system objects, and Data includes data and
work objects. Both can be configured during installation and upgrade.
When users save a change to a process flow, they are saving a record to the Rules schema. The Data schema stores run-time information such as process state, case data, assignments, and audit history. The split schema design is used to support solutions that need to be highly available.
Split schema can also be used to separate customer transactional data and rules from an operational perspective. For example, data typically changes more often than rules, so it can be backed up more frequently. Rule upgrades and rollbacks can be managed independently from data.
With split schema, rolling restarts can be performed to support upgrades, reducing server downtime. In this scenario, the rules schema is copied and upgraded once. Each node in the cluster is then quiesced, redirected to the updated rules schema, and restarted one at a time.
For more information on high availability configuration, see the Deploying a highly available system article on
the Pega Community.
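As a sketch (the schema names are example values, and installers normally record these settings during installation or upgrade rather than editing them by hand), each node identifies the two schemas through database settings of this general form in prconfig.xml:

<!-- Sketch: point a node at separate rules and data schemas.
     PEGARULES and PEGADATA are example schema names. -->
<env name="database/databases/PegaRULES/defaultSchema" value="PEGARULES" />
<env name="database/databases/PegaDATA/defaultSchema" value="PEGADATA" />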
Hazelcast
Hazelcast is an open source in-memory data grid based on Java. In a Hazelcast grid, data is evenly distributed
among the nodes of a cluster, allowing for horizontal scaling of processing and available storage. Hazelcast can
only run embedded in every node and does not support a client-server topology.
Apache Ignite
Pega can use Apache Ignite instead of Hazelcast for in-memory cache management.
Apache Ignite is an in-memory cache management platform that can provide greater speed and scalability in
large multinode clusters. Apache Ignite supports high-performance transactions, real-time streaming, and fast
analytics in a single, comprehensive data access and processing layer.
Unlike Hazelcast, Apache Ignite supports client-server mode. Client-server mode provides greater cluster stability
in large clusters and supports the ability for servers and clients to be separately scaled up. Use client-server
mode for large production environments that consist of more than five cluster nodes or if you experience cluster
instability in clusters that contain fewer nodes. The number of nodes in the cluster that can lead to cluster
instability depends on your environment, and switching to client-server mode is determined individually.
Client-server mode is a clustering topology that separates Pega Platform processes from cluster communication
and distributed features. Client-server mode clustering technology has separate resources and uses a different
JVM from the Pega Platform. The client nodes are Pega Platform nodes that perform application jobs and call the
Apache Ignite client to facilitate communication between Pega Platform and the Apache Ignite servers. The servers are stand-alone Apache Ignite servers that provide base clustering capabilities, including communication between the nodes and distributed features. At least three Apache Ignite servers are required for one cluster.
The client-server topology adds value to the business by providing the following advantages:
• The cluster member life cycle is independent of the application life cycle, since nodes are deployed as separate server instances.
• Cluster performance is more predictable and reliable because cluster components are isolated and do not compete with the application for CPU, memory, and I/O resources.
• Identifying the cause of any unexpected behavior is easier because cluster service activity is isolated on its own server.
• A client-server topology provides more flexibility since clients and servers can be scaled independently.
For more information about enabling client-server mode, see Pega Platform Deployment Guides.
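As an illustration only (the setting shown is an assumption; the stand-alone Ignite server setup and the complete list of required settings are covered in the deployment guides), selecting Apache Ignite as the cluster protocol is a node-level configuration change of this general form:

<!-- Sketch: select Apache Ignite as the cluster protocol for a node.
     The setting name and value are assumptions; verify them, and the
     client-server options, in the Pega Platform deployment guide. -->
<env name="initialization/clusterprotocol" value="ignite" />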
Planned outage
In a planned outage, you know when application changes are taking place. For example, if you need to take a node out of service to increase the heap size on the JVM, you can take that node out of service and move users to another node without users noticing any difference.
Quiesce
The process of quiescing provides the ability to take a Pega Platform server out of service for maintenance or
other activities.
To support quiescing, the node is first taken out of load balancer rotation. The Pega application passivates, or
stores, the user session, and then activates the session on another node. Passivation works at the page, thread,
and requestor level. The inverse of passivation is activation. Activation brings the persisted data back into
memory on another node.
You can quiesce a node from:
• The High Availability landing pages in DEV Studio (if you are not using multiple clusters)
• The System Management Application (SMA)
• The Autonomic Event Services (AES) application (recommended for use with multiple clusters)
• REST API services (starting in v7.4; see the sketch after this list)
• A custom Pega Platform management console by incorporating cluster management MBeans
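As a sketch of the REST option (the URL path, port, credentials, and node ID format are assumptions for illustration, not the documented endpoint; consult the Pega API documentation for your release), quiescing a node might look like the following:

# Hypothetical sketch: quiesce a node through the management REST API.
# The endpoint path and node ID format are assumptions.
curl -X POST -u administrator@pega.com:password \
  "https://pega-host:8443/prweb/api/v1/nodes/NODE_ID/quiesce"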
When quiesced, the server looks for the accelerated passivation setting. By default, Pega Platform sets the passivation timeout to five seconds. After five seconds, it passivates all existing user sessions. When users send another request, their user session activates on another Pega Platform server without loss of information.
The five-second passivation timeout might be too aggressive in some applications. System administrators can increase the timeout value to reduce load. The timeout value should be large enough that a typical user can submit a request.
Once all existing users are moved off the server, the server can be upgraded. When the process is complete, the server is re-enabled in the load balancer and the quiesce is canceled.
Out-of-place upgrade
Pega Platform provides the ability to perform out-of-place upgrades with little or no downtime. An out-of-place,
or parallel, upgrade involves creating a new schema, migrating rules from the old schema to the new schema,
and upgrading the new schema to the new Pega release. The data instances in the existing data schema are also updated. Once the updates are complete, the database connections are modified to point to the new rules schema, and the nodes are quiesced and restarted one at a time in a rolling restart.
In-place upgrade
Pega Platform also provides the ability to perform in-place upgrades, which may involve significant downtime because existing applications need to be stopped. Afterward, pre-upgrade scripts or processes may need to be run. Prior to importing the new version of the Pega rulebase, the database schema is updated manually or automatically using the Installation and Upgrade Assistant (IUA). EAR or WAR files, if used, are undeployed and replaced with the new EAR or WAR files, and the new archives are loaded. Afterward, additional configuration changes may be made using scripts or the Upgrade Wizard.
Unplanned outage
With Pega Platform configured for high availability, the application can recover from both browser and node
crashes. Pega Platform uses dynamic container persistence for relevant work data. The dynamic container
maintains UI and work states, but not the entire clipboard.
Node crash
Pega saves the structure of UI and relevant work metadata on shared storage devices for specific events. When a
specific value is selected on a UI element, the form data is stored in the shared storage as a requestor clipboard
page. When the load balancer detects a node crash, it redirects traffic to another node in the pool of active
servers. The new node that processes the request detects a crash and uses the UI metadata to reconstruct the UI
from the shared storage.
On redirection to a new node, the user must reauthenticate, so a best practice is to use Single Sign-on to avoid
user interruption. Since the user’s clipboard is not preserved from the crash, data that has been entered but not
committed on assign, perform, and confirm harnesses is lost.
Browser crash
When the browser terminates or crashes, users connect to the correct server based on session affinity. The state
of the user session is recovered without loss since the clipboard preserves both metadata for the UI and any
data entered on screens and submitted.
Crash recovery matrix
Event | Browser crash | Node crash
UI is redrawn | Yes | Yes
User must reauthenticate | No, if redirected to the same node; yes, if the authentication cookie was lost | No, with single sign-on; yes, without single sign-on
Data entry loss | No, if redirected to the same node; data not committed is lost if the authentication cookie was lost | Data not committed is lost
KNOWLEDGE CHECK

Question 1
Which of the following are models for deploying Pega Platform on the cloud? (Choose Three)

1. Pega Cloud (correct) – Pega Cloud is Pega's managed cloud platform service.
2. Customer Managed Cloud (correct) – A customer-owned and controlled cloud environment that runs within private clouds or on Infrastructure-as-a-Service.
3. Partner Managed Cloud (correct) – Owned and controlled by a business partner, who delivers the Pega Platform as a service.
4. Cloud Foundry – Cloud Foundry is a topology manager; it still requires a cloud provider.
Question 2
An environment designed for high availability consists of which two of the following components? (Choose Two)

1. Load balancer (correct) – The load balancer monitors the health of the nodes.
2. Shared storage (correct) – Shared storage allows stateful application data to be shared across nodes.
3. Cloud Foundry – Cloud Foundry is not part of a high-availability architecture.
4. Database replication – Database replication is not used in a high-availability architecture.
Question 3
Which two of the following statements are correct with regard to server crashes? (Choose Two)

1. The UI is redrawn. (correct) – The UI state is maintained.
2. The user must always reauthenticate. – Reauthentication is only needed if single sign-on is not used.
3. Data not committed is lost. (correct) – Data that was not committed is lost.
4. The entire clipboard is recovered. – The Pega Platform uses a dynamic container that maintains UI and work states, but not the entire clipboard.
Question 4
Which of the following two statements are correct with regard to the hardware sizing estimation service offered by Pega? (Choose Two)

1. There is no need to make use of this service if a cloud is used. – The sizing estimate can be applied to any deployment type.
2. Sizing estimates can be applied at any point of the project. (correct) – Sizing estimates can be applied at any point of the project.
3. Events such as adding new case types require a new hardware estimate. (correct) – Changes to the application characteristics require a new hardware estimate.
4. The service is aimed at production environments only. – Sizing estimates can be applied to any environment.
Leveraging Pega applications
Introduction to leveraging Pega applications
Pega's customer engagement and industry applications can shorten the delivery time frame of your business
application. By determining the differences, or gaps, between what the Pega application provides and your
organization's requirements, you can deliver a minimum set of functionality to begin providing value in days and
weeks, not months and years.
By the end of this lesson, you should be able to:
• Explain the benefits of using Pega's application offerings
• Describe how the gap analysis approach impacts your application design tasks
Benefits of leveraging a Pega application
Building software applications that return an investment in the form of cost savings, automation, or new business can be a risky, time-consuming, and expensive endeavor for an organization. Organizations want to minimize risk, cost, and time investment as much as possible.
To achieve those objectives, deliver an application with the minimum set of functionality that is lovable to the
business. The minimum lovable product (MLP) provides an application with the minimum amount of
functionality that your users love, and not just live with. Over time, you iterate and improve that product or
application as business needs change and evolve.
By starting with a Pega application, you are far closer to the MLP than if you start your application design from
the beginning.
Instead of starting with a lengthy requirements gathering process, you can demonstrate the functionality that
the Pega application provides for you and compare that functionality to your business requirements. This
process is called performing a gap analysis.
Once you identify these gaps, you can design for the minimum amount of functionality needed to deliver the
MLP.
KNOWLEDGE CHECK
Why do you need to know about Pega's customer engagement and industry application offerings?
Pega's customer engagement and industry applications are key to rapidly delivering a solution that provides
immediate business value. You need to know at a high level what each application provides and where to learn
more about each application.
How to customize a Pega application
Even though each Pega application has a unique set of capabilities to meet specific industry needs, the process
for customizing the application is the same regardless of the application. The following process describes how to
customize and extend a Pega application.
• Acquaint yourself with application features
• Create your application implementation layer
• Perform the solution implementation gap analysis
• Define the Minimum Lovable Product (MLP)
• Record desired customizations using Agile Workbench
• Customize the application
• Evolve the application
Depending on the installation, a Pega application can have multiple built-on products to provide the full suite of
functionality. For example, the following diagram illustrates the construction of the Pega Customer Service for
Healthcare application, including Pega Sales Automation and Pega Marketing applications.
Note: Use the New Application wizard to create the application implementation layer.
Question 1
The primary benefit of leveraging a Pega application is the ability to ______________.

1. get to the minimum lovable product (MLP) faster (correct) – Pega's applications allow you to get to the minimum lovable product faster, delivering business value to your organization quickly.
2. compete with other industry-specific point solutions – While Pega's applications compete with industry point solutions, this is not the primary benefit of leveraging a Pega application.
3. integrate with systems of record for lines of business more easily – While the Pega applications provide connection points to systems of record through data pages, this is not the primary benefit of leveraging a Pega application.
4. utilize Pega's artificial intelligence (AI) applications more effectively – While Pega's AI capabilities are core to the customer engagement applications, this is not the primary benefit of leveraging a Pega application.
Question 2
Pega's applications include ______________ and _______________. (Choose Two)

1. customer engagement applications (correct) – Customer engagement applications are a part of Pega's application offerings.
2. marketing applications – Pega offers the Pega Marketing application as part of the overall customer engagement application suite.
3. industry applications (correct) – Industry applications are part of Pega's application offerings.
4. artificial intelligence applications – Artificial intelligence is part of multiple application offerings.
Question 3
Performing a ____________________ allows you to determine the differences between the features that the Pega application provides and your unique business requirements to support delivery of the minimum lovable product (MLP).

1. solution implementation gap analysis (correct) – Performing a solution implementation gap analysis and capturing the differences in Agile Workbench allows you to quickly gather unique business requirements.
2. DCO session – DCO consists of a set of tools and behaviors that you can use to facilitate a more cross-functional, collaborative application development experience between business architects and system architects.
3. requirements planning session – Requirements planning can take several weeks, postponing the ability to start providing business value.
4. sprint review – A sprint review is part of the agile delivery methodology.
Designing the case structure
Introduction to designing the case structure
The case structure is one of the most important considerations when designing an application. Designing the
case structure encompasses identifying the individual business processes and how you represent each process
(cases, subcases, subflows). A poorly designed case structure may cause refactoring if present or future
requirements become challenging to implement.
After this lesson, you should be able to:
• Identify cases given a set of requirements
• Compare parallel processing options
• Explain advantages and disadvantages of subcase and subflow processing
• Determine when to use tickets
How to identify cases
A case type represents a group of functionality that facilitates both the design time configuration and run-time
processing of cases. Pega Platform provides many features that support case processing, such as the case life
cycle design (including stages and steps), reporting, and security.
The LSA decides which Pega Platform features to use—flows, subflows, data objects, or other components—when designing a solution.
In some applications, a case type is easily identified as it represents a single straightforward business process.
However, in other situations, producing a robust design with longevity may require careful consideration.
In Pega Platform, other dependencies on the design include reporting, security, locking, extensibility, and
specialization. Consider all requirements before creating the design since this forms the foundation for the
application.
The case design begins by identifying the processes in the application. Next, you determine if subcases are
warranted, including any additional cases that may benefit from case processing. Identify specialization needs
for these cases. Consider future extensibility requirements before implementing the case designs.
For example, a Purchase Request process involves identifying a list of the items, and quantity of each item to be
purchased. The list of items is reviewed and then forwarded for approval by one or more approvers. Then, the
list of items is ordered. When the items are received, the process is complete.
You can use several case designs to represent this business process:
• A single Purchase Request case for the entire process, with subprocesses for each line item
• A Purchase Request case to gather the initial request and spawn a single Purchase Order case
• A Purchase Request case to gather the initial request and spawn Purchase Order subcases for each line item
All provided solutions may be initially viable, but the optimal solution takes into account current and possible
future requirements to minimize maintenance.
Same-case processing
Same-case processing occurs when multiple assignments associated with the same case are created. Each
assignment is initiated through a child or subprocess that is different from the parent process. Multiple
assignments for a single case are initiated through Split Join, Split For Each, or Spinoff subprocesses. The Split Join
and Split For Each subprocesses pause and then return control to the main flow, depending on the return
conditions specified in the subprocess shape. The Spinoff subprocess is not meant to pause the main flow as it is
an unmanaged process.
All of these subprocess options may result in multiple assignments being created, leading to different requestors
processing the case (assuming they are assigned to different users). One limiting factor is locking. The default
case locking mechanism prevents users from processing (locking) the case at the same time. This has been
alleviated in Pega Platform with the introduction of optimistic locking. Optimistic locking allows multiple users to access the case at the same time, locking the case only momentarily when an assignment is completed. The
drawback is that once the first user has submitted changes, subsequent users are prompted to refresh their
cases prior to submitting their changes. The probability of two requestors accessing the case at the same time is
low, but the designer should be aware of this possibility and the consequences, especially in cases where the
requestor is a nonuser.
Subcase processing
The primary difference between subcase and same-case processing is that one or more subcases are involved.
The processes for each subcase may create one or more assignments for each subcase. Locking can be a limiting
factor when processing these assignments. If the default locking configuration is specified for all subcases, then
all subcases including the parent are locked while an assignment in any subcase is performed. This can be
alleviated by selecting the Do Not Lock Parent configuration in the subcases. Locking is a significant difference
between subflow and subcase parallelism.
Tip: With the correct locking configuration, simultaneous processing may take place without interruption for
subcases, whereas a possibility exists for interruption when subflows are involved.
This behavior must be accounted for, especially when automated tasks such as agents are involved. A locked
parent case may prevent the agent from completing its task, and error handling must be incorporated, allowing
the agent to retry the task later on. If a design leveraging subcases with independent locking is used, such that the agent operates on the subcase, the possibility of lock contention is minimized. In general, lock subcases
independently of the parent case unless there is a reason for also locking the parent case.
When waiting for the subcases to complete processing, a wait step is used to pause the parent case. If subcases
of the same type are involved, you configure the same wait shape to allow the main case to proceed after all
subcases are resolved.
If different types of subcases are involved, a ticket is used in conjunction with the wait shape to allow the parent
case to proceed only after all subcases, regardless of the type, are completed. The AllCoveredResolved ticket is
used and is triggered when all the covered subcases are resolved. You configure the ticket in the same flow as
the wait shape, and you place the ticket in the flow at the location at which processing should continue.
Configure the wait shape as a timer with a duration longer than the time to issue the ticket.
Advantages of a subcase design are listed in the following table.

Factor | Consideration
Security | Class-based security offers more options for security refinement using multiple cases. Data security increases because subcases only contain data pertinent to their case.
Reporting | Reporting on smaller data sets may be easier and offers potential for performance increases (this may be a disadvantage if a join is required).
Persistence | You have the ability to persist case data separately.
Locking | You have the ability to lock cases separately and process without interruption (significant in cases involving automated processing).
Specialization | Subcases can be extended or specialized with a class, or leverage the Case Specialization feature.
Dependency Management | Subcase processing can be controlled through the state of parent or sibling cases.
Performance | There is pure parallel processing, since separate cases may be accessed using separate requestors.
Ad hoc processing | You can leverage the ad hoc case processing feature.
Advantages of a single case design involving only subflows are listed in the following table.
Factor | Consideration
Data | Data is readily available; no replication of data is necessary.
Reporting | All data is accessible for reports.
Attachments | All attachments are accessible (coding is required to access attachments of subcases).
Policy Override | Implementing this feature is easy. Managing “Suspend work” when multiple cases are involved is more complex.
Case design - example one
Consider the following requirements by an automobile manufacturing company automating an Illness and Injury
Reporting application.
Like many corporations, the automobile manufacturing company must log work-related fatalities, injuries, and
illnesses. For example, if an employee contracts tuberculosis at work, then the illness must be logged in the
company's safety records.
Certain extreme events, such as death, must be reported to the regulatory agency immediately. These reports
are called submissions. Submission processes and requirements differ by country. Some countries have
additional rules, based on state or province. Typically, these rules are more stringent forms of the national
guidelines. There are also some guidelines that are specific to injury type. A small subset of injuries requires
injury-specific fields to be filled in. For example, with hearing loss, the actual assessment of the loss, measured
in decibels, must be recorded.
The Illness and Injury Reporting application must support two processes.
First, any injury or illness must be recorded. This is a guided and dynamic data-entry procedure that is specific
to the regulations of the country in which the plant is located. The culmination of these entries is an electronic
logbook.
Second, the application must generate a summary of submission records for every plant per year. Each record
summary must be verified, certified, and posted. Notably, severe events must be reported to the regulatory body
of the corresponding country, and the status of this submission must be tracked. The reports for these record types are separate—there is never a need for a list of records that is a mix of Injury Records, Annual Summaries, and Submissions. However, because summaries are a culmination of injury records, and submissions are spawned by injury records, it is reasonable to assume that injury record information is included in a summary or submission report.
The following image illustrates the requirements:
Solution
You probably identified three independent case types with no subcase relationships:
• Illness Injury, for logging each illness or injury event
• Annual Summary, to track the end-of-year report for each plant
• Submission, for those events that must be reported to the regulatory agency
Discussion
An Annual Summary appears to be only a report, but you create a case because the requirements explicitly state
that the status of these reports must be tracked, indicating a dedicated process. Furthermore, these reports
must contain static information. While the original content may be derived from a report, this content must be
fixed and persisted.
Create a Submission case since the requirements stated that the Submission process, and status of each
submission, must be tracked. Submission tracking is performed independently of the original injury record, and
so is best kept as a separate case.
You might consider that Submission could be a subclass of Injury Illness, but Submission is not a type of illness
injury. Submission is a case that is spawned by an Illness injury case. Also, Submission is not a subcase of Illness
Injury since the Illness Injury is not dependent on the Submission processing being completed.
Case design - example two
Consider the following requirements for an automobile manufacturing company automating a warranty claim
application. Two primary processes are supported by the application: a Warranty Claim process and a Recall
process.
For a warranty claim, a customer brings a car to the dealership because something is not working correctly. The
dealer assesses the problem with the car, enters the claim details into the application, and then receives
verification from the application that the repair is covered under warranty. The dealer is subsequently
compensated for the work by the car manufacturer.
Every warranty claim includes one or more claim tasks. Each claim task represents the work that must be
completed to resolve the claim. Most warranty claims are simple and have a single Pay Dealer claim task. Some
warranty claims require more complex processing if disputes are involved.
Recalls are separate from warranty claims. Recalls cover the entire process, from initially notifying customers when a recall is necessary to compensating the dealer for the work completed to support the recall. One particular type of claim task is a "Parts Return". This claim task stands apart from others in that it requires an additional set of business rules, and its process is different.
Design a case structure to support this application.
Solution
At least two cases are possible: Recall and Warranty Claim.
Recall has no dependencies but does have a distinct process. You might represent Recall as a stand-alone case.
You have several design options for the Warranty Claim case.
One option is to create a stand-alone Warranty Claim case with conditional subprocesses spawned for each type
of claim task. This approach is easy to implement, but it limits extensibility and the ability to specialize the claim
tasks.
Another option is to create the Warranty Claim case with a subcase for each claim task. This design option offers
the flexibility to create specialized claim tasks such as Parts Return. The Warranty Claim case is the parent, or
cover, case of the Claim Task case, since the Warranty Claim depends on all Claim Task subcases resolving before the Warranty Claim case can be resolved.
You represent the Parts Return case type as a subclass of the ClaimTask class to indicate that PartsReturn is a
specific type of ClaimTask case. This is an important distinction between subclasses and subcases. The hierarchy
for subcases is established in the Case type rule, similar to the composition relationship between pages in a data
model. A subclass indicates an is-a relationship and is indicated as such in the class structure.
Not enough information is provided in the requirements to determine which solution is more suitable for the Claim Task case design. If there are many specialization or extensibility requirements for the application, the latter design for the Claim Task is more suitable.
Case processing quiz
Question 1
Which statement is true when comparing subcases and subflows?

1. Subcases are always preferred over subflows – Subcases are not always required; it depends on the specific requirements. There are situations where subflows may be advantageous.
2. Subcases offer more security options than subflows (correct) – Class-based security offers more options for security refinement using multiple cases. Data security increases because subcases only contain data pertinent to their case.
3. Locking consequences can be ignored with subflows – One limiting factor with subflows is locking. The default case locking mechanism prevents users from processing (locking) the case at the same time.
4. Subcases perform better than subflows – There is no evidence that one approach performs better than the other.
Question 2
Select two reasons when the AllCoveredResolved ticket is required in conjunction with the wait shape. (Choose Two)

1. When the wait shape is configured as a timer with a duration longer than the duration in which the ticket would be issued. (correct) – A wait shape can be used to prevent the parent case from proceeding until all the subcases are completed. The AllCoveredResolved ticket is then used to bypass the wait shape when all the subcases are resolved.
2. When different types of subcases are involved. (correct) – The AllCoveredResolved ticket is used and is triggered when all the covered subcases are resolved, regardless of their type.
3. When the wait shape is configured as a timer with a duration shorter than the duration in which the ticket would be issued. – If the wait shape expires before all subcases are resolved, the AllCoveredResolved ticket may be ignored.
4. When any of the covered subcases are resolved. – This is a default configuration of the wait state and does not require the use of an AllCoveredResolved ticket.
Question 3
Select two advantages of using a single case over subcase designs. (Choose Two)

1. No replication of data is necessary (correct) – All case data is readily available with a single case; therefore no replication of data is necessary.
2. Subflows can be processed without interruption – With subcases you have the ability to lock cases separately and process them without interruption. (This is significant when cases involve automated processing.)
3. All attachments are readily accessible (correct) – All attachments are directly available to the single case and no coding is required to access them.
4. Dependency management between subflows can be leveraged – Subcase processing can be controlled through the state of the parent or sibling cases.
Starting with App Studio
Introduction to starting with App Studio
Pega Express gives you the ability to collaborate with the business to quickly create new applications and, over
time, build a library of assets that can be reused across the enterprise.
After this lesson, you should be able to:
• Explain the benefits of App Studio
• Describe the development roles
• Ensure the successful adoption of App Studio
• Reuse assets created in App Studio
Name three ways that App Studio can accelerate your projects.
App Studio lets you leverage your company's IT assets, drive consistency and reuse through templates, or
extend Pega applications.
Development roles
Pega Express is a development environment designed for new users with less technical expertise, such as
business experts. Pega Express includes contextual help and tours for self-learning, enabling business experts to
quickly create applications.
Staff members certified in Pega act as coaches for teams of business experts to facilitate application
development.
Relevant records
Relevant records designate records of a case or data type as important or reusable in a child class. Relevant
records for a case type can include references to fields (properties), views (sections), processes (flows), user
actions (flow actions), correspondence rules, paragraphs, harnesses, service level agreements, and when rules
that are explicitly important to your case. For a data type, relevant records designate the most important
inherited fields (properties) for that data type.
The relevant records can include records that are defined directly against the class of the case or data type and
fields inherited from parent classes.
Designating a record as a relevant record controls the majority of the prompting and filtering in the Case
Designer and Data Designer. For example, user actions and processes defined as relevant records show up when
adding a step in the Case Designer.
Fields marked as relevant for a case type define the data model of the case. Processes and user actions marked
as relevant appear in Case Designer prompts to encourage reuse. Views marked as relevant appear as reusable
views.
Fields, views, processes, and user actions are automatically marked as relevant records when you create them
within the Case Designer and Data Designer. You can manually designate relevant records on the Relevant
Records landing page. It is also possible to add a Relevant Record using pxSetRelevantRecord.
KNOWLEDGE CHECK

Question 1
Which three of the following record types can be designated as relevant records? (Choose Three)

1. Activities – Activities cannot be marked as relevant records.
2. Properties (correct) – Properties can be marked as relevant records.
3. Processes (correct) – Processes can be marked as relevant records.
4. Harnesses (correct) – Harnesses can be marked as relevant records.
Question 2
What two benefits does App Studio provide? (Choose Two)

1. An easy-to-use development environment designed for new users (correct) – App Studio allows new users to quickly be productive.
2. Quick creation of enterprise assets – Enterprise assets such as data integration and SSO need to be set up in Designer Studio.
3. Rapid construction of new application concepts (correct) – App Studio allows you to rapidly build out new application concepts.
4. Support for scenario testing – App Studio does not support testing.
Question 3
What are two key initiatives to ensure the successful adoption of App Studio? (Choose Two)

1. Evaluate that applications are fit for the App Studio program (correct) – Ensure the application is a fit for App Studio.
2. Establish a center of excellence (COE) for the organization (correct) – Establish a COE for program governance.
3. Ensure business users are technically self-sufficient before starting a project – Pair the business team with a coach with technical expertise.
4. Strive towards a decentralized validation and approval model – Strive towards a centralized model.
Question 4
Which of the following two options are best practices in App Studio? (Choose Two)

1. Manage reusable assets through a center of excellence (COE) (correct) – Managing reusable assets is a COE task.
2. Create reusable IT assets using App Studio – IT assets such as SSO integration are done in Designer Studio.
3. Share business assets across applications (correct) – Create business assets, such as data models and reports, that are sharable across applications.
4. Avoid App Studio for DCO sessions – App Studio increases the effectiveness of DCO.
Designing for specialization
Introduction to designing for specialization
Pega Platform provides various solutions for specializing applications to support ever-changing requirements.
This lesson describes these specialization solutions and the best ways to apply them in your applications.
After this lesson, you should be able to:
• Describe the principles and purposes of component applications
• Discuss the advantages of building application layers using multiple component applications
• Specialize an application by overriding rulesets in the built-on application
• Decide when to use ruleset, class, and circumstance specialization
• Decide when to use pattern inheritance and organization hierarchy specialization
• Analyze and discuss various approaches to specializing an application to support a specific set of requirements
Object Oriented Development in Pega
A consideration of Pega asset design and reuse starts with a brief mention of Object Oriented Development (OOD) principles, how Pega leverages them, and how Pega allows you to leverage them.
According to Robert Martin, Object Oriented Development encompasses three key aspects and five principles.
Aspects of OOD
The following are the three essential aspects of OOD.
Encapsulation
Encapsulation is used to hide the values or state of a structured data object inside a class, preventing
unauthorized parties' direct access to the object.
Inheritance
Inheritance is the ability for one object to take on the states, behaviors, and functionality of another object.
Polymorphism
Polymorphism lets you assign various meanings or usages to an entity according to its context. Accordingly, you
can use a single entity as a general category for different types of actions.
Single Responsibility
The Single Responsibility principle states that every module should have responsibility over a single part of the
functionality provided by the software, and that responsibility should be encapsulated by the module.
Avoid placing functionality where changes can occur for different reasons within the same module. The UIKit application, which contains a single ruleset, is an example of Single Responsibility.
Open/Closed
The Open/Closed principle states that software entities (such as classes, modules, and functions) should be open
for extension, but closed for modification. An entity can allow its behavior to be extended without modifying its
source code.
The Open/Closed principle is most directly related to extensibility in Pega. If implemented correctly, an object,
call it “A”, that uses another object, call it “B”, need not change when features are added to object “B”. Following
this principle helps avoid maintenance-intensive ripple effects when new code is added to support new
requirements. An example of the Open/Closed principle in the Pega Platform is Pega Healthcare's PegaHC-Data-Party class extending the PegaRULES Data-Party class.
Liskov Substitution
The Liskov Substitution principle states that objects that reference other objects by their base class need not be
aware how that class has been extended. An example in the Pega platform is how correspondence and routing
works the same regardless of the class being Data-Party-Person or Data-Party-Org.
Interface Segregation
The Interface Segregation principle (ISP) states that it is better to define multiple interfaces to an object, each
fulfilling a specific purpose, as opposed to exposing a single large and complex interface, parts of which are of
no interest to a client. ISP is intended to keep a system decoupled and thus easier to re-factor, change, and
redeploy. Examples of ISP in the Pega platform include Service Packages and parametrized Data Pages. Data
Propagation would also meet the definition of ISP if a single Data instance is passed as opposed to multiple,
individual scalar Properties.
Dependency Inversion
The Dependency Inversion principle refers to a software development technique where objects facilitate the configuration and construction of the objects on which they depend, in contrast to an object completely encapsulating its dependencies. The Dependency Inversion principle works hand-in-hand with Liskov
Substitution. An example of Dependency Inversion in the Pega platform is Dynamic Class Referencing (DCR). As
opposed to an object strictly using a value hard-coded within a rule’s Pages & Classes tab to create a page, a Data
Page can be asked to supply the value for the page’s pxObjClass property.
Specialization design considerations
When deciding on the application layers to be developed, consider the business requirements for specialization.
Selecting a design that introduces more specialization layers than are required can be complex. This complexity
increases the time, resources, and effort required to produce a Minimum Lovable Product (MLP).
Specialization considerations
Always follow object-oriented principles to ensure rules are extensible. For example, use parameterization and
dynamic class referencing (DCR) to support specialization in the future.
When considering specialization, be aware of the following things:
• A specialization layer need not be specific to one type of application. Instead, a specialization layer can support multiple applications across an enterprise.
• Circumstancing, pattern-inheritance, and data modeling techniques may eliminate the need to define a specialization layer for an application.
• A specialization layer can be composed of multiple built-on applications.
Note: A framework layer is a specialization layer. "Framework" has a specific meaning: a framework is an application that spans every case type and represents an entire layer. A framework is a special type of specialization layer.
Note: A Production application is a specific type of "Implementation" application: it is what end users use, and what you send down a CI/CD pipeline all the way to the end.
How to choose the application structure
Any application can be built on other applications and leverage reusable components. An application can be
specialized in multiple ways such as using class inheritance, ruleset overriding, and circumstancing. When
specializing an application using class inheritance you can use pattern inheritance for specialization within an
application and direct inheritance for specialization across applications. In this lesson we will explore how the
New Application Wizard can be used to define the application structure and how the New Application Wizard
uses direct inheritance for specialization across applications.
A framework layer typically defines a common work-processing foundation for a set of case types and data
objects. The framework contains the default configuration of the application and its associated classes that is
then specialized by the implementations. The classes of the implementation layer directly inherit from the
classes in the Framework layer. Frameworks are not designed to run on their own; there should always be at
least one implementation. Implementations extend the elements of the framework to create a composite
application that targets a specific group such as a region, customer type, product, organization or division.
For example, the MyCo enterprise makes auto loans, and has an auto loan framework that is composed of the
assets needed for MyCo's standard auto loan process. Each division of MyCo extends that basic auto loan
application to meet their specific divisional needs.
Only create a specialization layer such as a framework if at the start of a project the business requirements
express a need to leverage such a layer throughout the enterprise.
Important: When using the New Application Wizard, do not use the "Framework" option purely for the sake of future-proofing. Maintaining a framework comes at a cost that cannot be justified without clear evidence of its need.
Applications as components
You can design Pega applications specifically for use as components. By definition, a component is recursive.
That is, a component can comprise other components in the same way that objects can comprise other objects.
An application satisfies this definition, since an application can be built on multiple applications, as discussed below.
The term component implies that an object has a stable interface and can be tested separately according to
published specifications. A component application need not contain its own unit test ruleset(s) but it can,
temporarily, during development. Prior to deployment, unit test rulesets would be moved to a test-only
application built on the component application.
In similar fashion, the first stage of a case type within a component application can be configured as valid only
during development. “During development” can be defined as “when the case is not covered”, if the case type is
only used as a subcase in production. When a Production application extends a case type within the Component
application that includes a test-only stage, the Production application is free to remove that stage from its own
case type rule.
Numerous applications and components are available for reuse on the Pega Exchange. To contribute to the Pega
Exchange, submit applications and components to make them available to the Pega community. For example, you can add the PegaDevOpsFoundation application as a sibling built-on application when using the Deployment Manager application to orchestrate CI/CD.
For applications you want to display in the New Application wizard as potential built-on applications, select
Show this application in the New Application wizard on the Application wizard tab on the application rule.
Use the Components landing page (Designer Studio > Application > Components) to create and manage
components. A component defines a set of ruleset versions and can have other components or applications as
prerequisites.
When creating a case type that does not extend a Foundation application case type, it is advantageous to add that case type to a new, case type-specific ruleset. Doing so facilitates the development of component applications. When this approach is followed and an application is built on PegaRULES, every case type exists in its own ruleset, while the application rule, work pool class, and application data classes exist in the ruleset created by the New Application Wizard.
A case type that appears to be a good candidate for a component application should avoid dependencies on the work pool class and application-level data classes. Instead, the case type component candidate should utilize the data class that the application-level data classes extend. Similarly, work pool rules that the case type component candidate uses can be moved to the layer beneath the current application.
Once all dependencies on the current application have been removed, the Refactoring Wizard (System > Refactor > Classes) can be used to remove "-APP-" from the case type's class name. The refactored case type can then be placed within its own application. The original application then defines the new component application as a built-on application. Lastly, the case type rule within the original application must be restored. This can be accomplished by performing a "bottom up" Save-As of the refactored case type rule to its parent application.
When citizen developers design and implement applications using App Studio, they typically include multiple discrete components of functionality. As those citizen developers become more familiar with the business needs of the organization, opportunities to reuse those components in other applications may arise.
To create reusable built-on applications and components from citizen developer-built applications, first identify the reusable components. Then, refactor the appropriate rules from the existing application into your new reusable built-on applications and components as described above.
Note: It is important to define relevant records for components, not just applications, to simplify and encourage
their use.
KNOWLEDGE CHECK
Circumstance specialization
In Dev Studio, circumstanced rules are displayed with expand-and-collapse navigation in the App Explorer. Circumstanced rules can also be searched for using Case Management > Tools > Find Rules > Find By Circumstance.
Note: You can also locate rules that are similarly circumstanced with a report definition that filters by
pyCircumstanceProp and pyCircumstanceVal.
A benefit of circumstancing is that it enables you to see the base rule and its circumstanced variations side by side. The Case Designer supports viewing circumstanced case type rules this way as well.
There are a number of drawbacks to circumstancing case-related rules as opposed to circumstancing other rule types such as decision rules. Dev Studio displays rules using requestor-level scope, but case-related rules are normally circumstanced using a case-related property. Hence, circumstanced rules would only be active when a case instance is opened, meaning the scope would be thread-level.
In keeping with its requestor-level scope, the Case Designer always displays the base versions of circumstanced rules such as flows. Similarly, if a circumstanced rule is opened from another rule, the base version is displayed. The rule's "Actions > View siblings" menu option is needed to locate the correct circumstanced variation of the base rule. For the numerous interrelated rules typical of case design, this process can become tedious. Circumstancing case type rules is not a solution to this drawback unless circumstance-unique names are used for circumstance-unique rules such as flows.
Specializing an application by overriding rulesets
To create an application by overriding rulesets in the built-on application, do the following:
1. Create a new ruleset using the Record Explorer.
2. In the Create RuleSet Version form, select the Update my current Application to include the new
version option.
3. Copy the existing Application rule to the new ruleset and give the application a new name that represents its
specialized purpose.
4. Open the new Application rule.
5. Configure the new application as built-on the original application.
6. Remove every ruleset from the application rulesets list except the ruleset you created in step 1.
7. Open the original application again and remove the ruleset you created and added in step 1.
8. Create new access groups that point to the new Application rule you created in step 3.
Note: A ruleset override application can be constructed without needing to run the New Application wizard.
Inheritance and organization hierarchy specialization
You can use pattern inheritance as a special type of class specialization within an existing workpool. You can also
leverage pattern inheritance to specialize applications according to organization structure.
The Pega class naming convention is displayed in the following table. There are two optional implicit hierarchies
within each class name: organization and class specialization. In the table below, a class name can be formed
using any combination of values from each of the three columns.
Here are some CaseType class name examples using the pattern described in the table above:
Org-App-Work-CaseType
Org-App-Work-CaseType-B
Org-App-Work-CaseType-B-C
C # Response | Justification
C 1 Open/Closed Principle | Of the five main object-oriented programming (OOP) development principles, the Open/Closed Principle is the most directly related to extensibility.
C 2 Template design pattern | The Template design pattern embodies the concept of layering.
3 Polymorphism | Polymorphism is one of the three essential aspects of OOP. It is not closely related to the concept of layering an application. Polymorphism occurs at the rule level. Layering occurs at the ruleset level.
4 Liskov Substitution | The Liskov Substitution OOP development principle states that any object that implements a particular interface or API can be used equally well by an application. The Open/Closed Principle is more closely related to layering.
Question # 2
Which of the following two terms are associated with the term component? (Choose Two)
C # Response | Justification
C 1 Recursive | The definition of a component is recursive.
2 Framework | A framework does not imply component in the general sense of the word.
C 3 Interface | A component has a stable interface.
4 Ruleset | A ruleset can be used to create a component, but is not a component.
Question # 3
Which three of the following approaches are valid ways to extend rules in Pega Platform? (Choose Three)
C # Response | Justification
C 1 Ruleset | A rule can be overridden in a different ruleset.
C 2 Class | Pattern and direct inheritance can be applied to classes.
C 3 Circumstance | Circumstancing is a way to specialize rules.
4 Dynamic class referencing (DCR) | DCR is a way to decide which specialized rule to use, but it is not a way to specialize a rule.
5 Organization hierarchy | Organization hierarchy is a pattern-inheritance-based class naming strategy.
Question # 4
A ruleset override application can be used for which purpose?
C # Response | Justification
C 1 Creating the first implementation application built on a framework application | The first implementation application built on a framework application can use ruleset override. The New Application Wizard is not required to generate it.
2 Defining applications specific to a division within an enterprise | Division-specific class names can be defined for division-specific applications. Those applications are not called ruleset override applications.
3 Aggregating multiple built-on applications into a single application | Aggregating multiple built-on applications into a single application is something a framework application might do, but not a ruleset override application.
4 A repository for Dependency Inversion-based rules | Inversion of Control-based rules can be placed in any ruleset regardless of the type of application.
Question # 5
Which two options are valid reasons for initially creating a single application? (Choose Two)
C # Response | Justification
C 1 Avoid the development of two application layers simultaneously. | Ideally, the same development team is not charged with developing two layers simultaneously. One application delivery should precede the other application, based on versioning, in the same fashion that ruleset versions are used to upgrade the same application within a progression of environments (development, QA, UAT, Production).
C 2 The application is specific to a single government agency. | Rules specific to a government agency typically cannot be applied to other government agencies. The components used to build a government agency application could be shared with other government agencies.
3 Development of component applications takes more time since each has to be tested individually. | In the long run, separately tested, reusable applications increase development speed.
4 The concept of multiple built-on component applications has not been promoted in the past. | This is only true prior to Pega 7.2.2.
Promoting reuse
Introduction to promoting reuse
When building an application, you should consider packaging certain assets separately to promote reuse. This lesson explains how to leverage relevant records and how to decompose an application into reusable built-on applications and components. The lesson also discusses the role of a COE in managing reuse.
After this lesson, you should be able to:
• Simplify reuse with relevant records
• Leverage built-on applications and components for reuse
• Discuss the role of a COE in reuse
Application versioning in support of reuse
The New Application Wizard automatically adds two Org layer rulesets to new applications (Org and OrgInt). By
default these rulesets are set to use Ruleset validation. As you create additional applications and add rules to
the Org layer you may need to add additional rulesets to the Org layer. Eventually a large number of specialized
rulesets in the Org layer is possible. To avoid having to maintain ruleset dependencies within the Org layer you
can set the Org layer rulesets to use Application validation. This raises the possibility that multiple applications
reference the same Application-based rulesets, and this generates a warning. Packaging the Org layer rulesets
as a built-on application helps eliminate these issues.
At the same time that rule management is simplified from an intra-component perspective, complexity increases
from a component client perspective when components are versioned.
For example, if you make major changes to rules within a built-on application, that built-on application may no
longer be consistent with the application which were built-on it. For example, an updated validation rule in a
built-on application might enforce that added properties have values. This could be problematic to applications
built-on it. In this example, you should consider updating the built-on application's version before deploying the
changes.
Reuse is critical to realizing the benefits and value of the Pega Platform. The responsibility of the COE is to manage and promote reuse across projects for the organization. If no one is responsible and accountable for reuse, assets are often reinvented.
For more information on establishing a COE, refer to the Pega Community article Creating a Center of Excellence.
KNOWLEDGE CHECK
C # Response | Justification
1 Pega Exchange publishing | Both built-on applications and components can be published to the Pega Exchange.
C 2 Extendable case types | Use a built-on application if you want to extend a case type.
C 3 PegaUnit test creation | A built-on application can contain a PegaUnit ruleset and support creation of PegaUnit test case and suite rules. A component by itself cannot place test case and suite rules in a PegaUnit ruleset.
4 Integration rules | Integration rules can be included in both built-on applications and components.
C 5 Self-testable | A built-on application can be designed to be self-testable. A component's individual rules can be unit tested, but a component cannot test itself.
Question 2
Which two of the following are key responsibilities of a COE? (Choose Two)
C # Response | Justification
1 Provide support to the LSA on the project | The COE provides support to all project roles, not only the LSA role.
C 2 Identify new opportunities in the organization | The COE helps business managers identify opportunities.
3 Lead application development efforts | The COE supports individual application development efforts without taking the lead.
C 4 Manage and promote reusable assets | The COE is responsible for managing and promoting reuse.
Designing the data model
Introduction to designing the data model
Every application benefits from a well-designed data model. A well-designed data model facilitates reuse and simplifies maintenance of the application.
At the end of this lesson, you should be able to:
• Design a case data model to support reuse and integrity
• Design a new data model
• Extend a data class to a higher level
• Leverage the Template Design Pattern for data instances
Data model reuse layers
Designing a data model for reuse is one of the most critical areas in any software project. A well-designed data
model has a synergistic effect, the whole being greater than the sum of its parts. In contrast, a poorly designed
data model has a negative effect on the quality and maintainability of an application.
A temptation exists, after a case type is created, to immediately define its life cycle. Case life cycles can be rapidly developed in App Studio with views that contain numerous properties, everything defined at the case level. This approach becomes counterproductive at some point owing to the lack of reusable data classes and work pool-level properties that can be shared with other cases. Numerous hours would then be required to refactor and retest the code.
It is better to start out grouping related properties using Data classes, classes that would contain the views to
display those properties. This is, after all, why an embedded page is now referred to as a “field group”. Creating
Data classes promotes reuse across an application’s case types. Data classes enhance and simplify application
maintenance.
It could be said that one of the primary purposes of a case is to manage a specific set of data. Managing data is a
different role than being the data. Viewed another way, the primary purpose of data is to be used by one or more
cases.
Cases have two types of properties:
Note that the RoomsRequest case's .Hotel property was overridden by the production Booking application. Within the component Hotel application, the RoomsRequest case's .Hotel property is defined as a normal field group (page). This was done to support data capture and persistence to facilitate testing performed against the Hotel case component. The Booking application's purpose is to book events, not define hotel instances. The Booking application's RoomsRequest should only perform an FSG-Data-Hotel lookup using a value it originally obtained from .RoomsRequest.HotelGUID within the InitFromRoomsRequest Hotel subcase spin-off data transform.
How to extend a data class to higher layers
Enterprise-level data classes can be referenced by enterprise-level Work- classes. Similarly, implementation-level data classes are normally referenced by implementation-level Work- classes. Enterprise-level <Org>-Data- classes should also be able to be referenced by implementation-level <Org>-<App>-Work cases. So how can an enterprise-level data class (for example, <Org>-Data-Warranty) be referenced by an implementation-level <Org>-<App>-Work case? Or how can an enterprise-level Work- case reference an implementation-level data class? The solution is to use Dynamic Class Referencing (DCR).
Using DCR
When a ClipboardPage is constructed, a number of properties with a "px" prefix, including pxObjClass, are instantiated by default. The "px" prefix connotes "should remain static" but does not necessarily mean "must remain static." The values of some "px" properties, such as pxCreateDateTime and pxCreateOperator, should not be changed post-construction because they record historical information. It is possible to change the value of the pxObjClass property post-construction, but care must be taken. An example of where pxObjClass is changed at run time is when an Assign-WorkBasket assignment is changed to Assign-Worklist and vice versa.
A ClipboardPage is a StringMap according to the Engine API. Suppose the value of pxObjClass for an existing ClipboardPage is changed to a more specialized class within the inheritance hierarchy, such as a class that directly inherits from the existing class. This is not the same as constructing a ClipboardPage from the start with the specialized class as its pxObjClass value.
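To make this concrete, the following is a minimal sketch of an activity Java step that re-points an existing page at a more specialized class. The page name, class name, and parameter are assumptions based on this lesson's examples, and the caveat just described applies: the specialized class's pyDefault initialization is not applied retroactively.

// Minimal sketch (assumed names): re-point an existing page at a more
// specialized class in the same hierarchy using the Engine API.
ClipboardPage vehicle = tools.findPage("Vehicle");
String baseClass = vehicle.getString("pxObjClass"); // e.g., "FSG-Examples-Data-Vehicle"
String type = tools.getParamValue("Type");          // e.g., "Car"
// Note: pyDefault for the specialized class is NOT applied by this change.
vehicle.putString("pxObjClass", baseClass + "-" + type);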
The difference lies in the values that are initialized when the pyDefault Data Transform, if any, is called at the
time of ClipboardPage construction. A pyDefault Data Transform can be configured to call its super (more generic)
Data Transform. When case types are generated, the Call super data transform check-box option is set. When
Data classes are created, this option is not set automatically.
A possible approach to this post-construction initialization problem when using the Call super data transform option is to use an @baseclass WasPageInitialized value group property. Every pyDefault data transform within an inheritance hierarchy can be altered as follows.
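A sketch of the altered pyDefault data transform, with the step details assumed from the discussion of step (3) later in this lesson:

1. When .WasPageInitialized("<applies-to class>") = "true", exit the data transform (this class's defaults were already applied).
2. Set .WasPageInitialized("<applies-to class>") = "true".
3. Apply a class-specific data transform with a unique name, for example DefaultGogoRoad, to perform this class's own defaulting.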
Whether a Data class at the root of an inheritance hierarchy, e.g., Org-Data-Vehicle, calls its super Data Transform
is optional.
The figures below are from the FSGGogoRoad and FSGEnt sample applications provided with this course. FSGGogoRoad is built on the FSGEnt application, and the FSGEnt application is built on the FSG application. The FSGGogoRoad and FSGEnt applications are intended to mirror the relationship between the OOTB PegaSample and PegaRULES applications. The FSGEnt application is considered enterprise-level, while the FSGGogoRoad application is considered implementation-level.
The figures show eight pyDefault data transforms within the FSG-Examples-Data-Vehicle class hierarchy. Note that an FSG-GogoRoad-Data-Vehicle pyDefault transform would not be called when the super data transforms are invoked. This is the reason for having step (3) above. Because GogoRoad's Motorcycle, Truck, and Car classes each override pyDefault, each must apply a data transform with a different name, such as DefaultGogoRoad, in its step (3).
The first Field within the Identify Vehicle screen within an Assistance Request case establishes the vehicle’s Type,
i.e., Car, Truck, or Motorcycle. On change, the Type field’s value is posted to the Server followed by a Section
refresh where the TypeOnChange activity is called. The TypeOnChange activity passes the Type value as a
parameter to the D_VehicleDCR Data Page. The D_VehicleDCR Data Page, in turn, invokes its LoadVehicleDCR data
transform. This data transform is overridden by the FSGGogoRoad application as illustrated in the figure below.
The TypeOnChange activity then sets the pxObjClass of the case’s FSG-Example-Work .Vehicle Field Group (Page)
equal to the pxObjClass of the D_VehicleDCR Data Page.
The LoadVehicleDCR data transform does not need to be complex. A simple Decision Table could be used which
could be modified as new Vehicle types are added.
Applying the Type value in the LoadVehicleDCR data transforms at both the enterprise and implementation application levels is as simple as appending the Type value to the root Vehicle data class path.
Ruleset LoadVehicleDCR
FSGGogoRoad .pxObjClass = "FSG-GogoRoad-Data-Vehicle-" + param.Type
FSGExample .pxObjClass = "FSG-Examples-Data-Vehicle-" + param.Type
Leveraging the Template Pattern for Data Instances
A number of ways exist to leverage the Template Pattern when dealing with Data instances. A common example
is when a Data class at the enterprise layer declares the class of a Work page as Work-Cover-. The Enterprise
layer should not be aware of any applications within layers above it so should never declare a Work page a class
using <Org>-<App>-Work.
When this is done, a Work page of class <Org>-<App>-Work-XYZ would exist on the clipboard. For simplicity, suppose case XYZ contains an enterprise-level field group property. The <Org>-<App> application has had no reason to extend the field group's <Org>-Data- page class to <Org>-<App>-Data-. The <Org>-<App> application could do this if it needs to, but for this example, it would add unnecessary complexity.
Case XYZ asks its enterprise-level field group to perform an action, for example, apply a data transform. The enterprise-level data transform does something, then wants the Work page to do something as well, for example, execute an activity or data transform. At the enterprise level, that activity or data transform only needs to be defined, i.e., stubbed out; it does not need to execute any code. The application does not need to override that activity or data transform if it does not need to. If it does, the code in that activity or data transform is executed, perhaps updating the same case that had asked the enterprise-level field group to perform an action.
The solution provided in the course contains an example that goes a step further by enforcing which rules can be overridden and which cannot. The example is the CreateOrUpdateLocation flow, called twice from the Data-Portal AppConfigContainer section: one link for Hotels, another for Venues. The CreateOrUpdateLocation flow is launched in a modal dialog. The work pages behind the modal dialog, D_RoomsRequestTempCase and D_BookEventTempCase, are declared temporary since they should not be persisted. What should be persisted is either a new or existing FSG-Data-Location, i.e., the base class for FSG-Data-Hotel and FSG-Data-Venue.
Below is the Work- CreateOrUpdateLocation templated screen flow. Because CreateOrUpdateLocation is a screen
flow, it can only call another screen flow, here one named SaveLocation.
Unlike CreateOrUpdateLocation, the SaveLocation flow is non-final, meaning the FSG-Booking-Work-Event case type class and the FSG-Hotel-Work-RoomsRequest case type class are allowed to override it. The RoomsRequest case overrides the SaveLocation flow by specifying D_HotelSavable as the name of the savable data page to use. The BookEvent case overrides the SaveLocation flow by specifying D_VenueSavable as the name of the savable data page to use.
The table below shows the Template Pattern in action. The consumer of the CreateOrUpdateLocation is free to
extend it to a new and different class that extends FSG-Data-Location. The consumer only needs to override the
five low-complexity rules on the right.
Locking instances
On the Locking tab of a data class rule, select Allow locking to allow instances to be locked. The Locking tab is only displayed when the data class is declared concrete as opposed to abstract. By default, the class key defined on the General tab is used as the lock key.
Avoiding redundancy
A potential data integrity issue can arise if you are querying two different tables in a database for the same
entity. If you are retrieving data from similar columns that exist in both tables, the values in those columns may
be different.
To avoid these potential conflicts, keep in mind the single source of truth principle: the practice of structuring information models and associated schemata such that every data element is stored exactly once.
Within a case hierarchy, you may want to always use data propagation from a parent case to each child case. However, if the data propagated from the cover is subject to change, then accessing the data from pyWorkCover directly is better. You can use a data page if you need to access data from a cover's cover.
The following image illustrates the propagation of information from the WorkPage to a Claim case (W = WorkPage
and C = Claim). W1 is the original WorkPage, W2 is W1’s cover, and C is W2’s cover.
It should be noted that the use of recursion in the above example could be avoided by defining a ClaimID
property at Org-App-Work and ensuring the value of ClaimID is propagated to each child case. A subcase’s
pxCoverInsKey would never change. Not setting the ClaimID property initially and propagating it leads to
information loss which requires effort to recover. Any property not subject to change can similarly be
propagated, especially for purposes such as reporting and security enforcement.
Referencing pages outside a case has the additional benefit of reducing the case’s BLOB size. A smaller BLOB
reduces the amount of clipboard memory consumed. The clipboard is able to passivate memory if it is not
accessed within a certain period of time.
Instead of maintaining large amounts of data within the BLOB as either embedded pages or page lists, consider
storing that information in history-like tables, for example, tables which do not contain a BLOB column. These
tables let you use data pages to retrieve the data as needed. As with any type of storage, consider exposing a
sufficient number of columns in these tables to allow a glimpse of what the pages may contain while avoiding
BLOB reads.
KNOWLEDGE CHECK
How does the single source of truth principle help in data integrity?
This principle ensures that every data element is stored exactly once
C # Response | Justification
1 When a case property uses the Data Page Snapshot pattern | Locking is not necessary when copying data.
2 When initially saving a record to a custom queue table | Locking is not necessary when initially saving data.
3 When the Allow locking check box is checked within a data class rule and an instance of the data class is opened | The fact that a data class allows locking does not mean every instance that is opened has to be locked.
C 4 When ensuring integrity of a custom queue table | A custom queue table can be accessed by multiple requestors at the same time; some requestors want to update records, and other requestors want to delete records.
Question 2
An enterprise-level data class has multiple instances that can be selected from a data page sourced repeating
grid. The class and ruleset in which the class is defined wants to include a data transform that adds an instance
to a list. For extensibility, which two of the following options could be implemented? (Choose Two)
C # Response | Justification
C 1 A lookup data page returns the class of the page according to the application that adds to the list. | An application may want to extend the enterprise data class using the data transform as is.
C 2 Provide a uniquely named, embedded-list property applied to @baseclass that collects the same class. | An application that overrides the add-to-list data transform should be able to reuse an existing list property and not have to define a new one.
3 Create a class with "-Extension" appended to the data class name. Save the data transform to that class. | Creating a new pattern-inheriting class in the same ruleset is an example of specialization but does not by itself increase extensibility.
4 A lookup data page returns the name of the page to use when appending to .pxResults(). | The .pxResults() list property is already extensible since it can contain instances of any class.
Question 3
Which two hierarchy types are applicable to object-oriented programming inheritance? (Choose Two)
C # Response | Justification
C 1 Subsumptive | A subsumptive hierarchy is synonymous with is-a relationships.
C 2 Compositional | A compositional hierarchy is one where a thing is the sum of its parts, its parts themselves capable of being compositions.
3 Programmatic | No such hierarchy type exists.
4 Layer-based | No such hierarchy type exists.
Question 4
A dependency tree has which characteristics?
C # Response | Justification
1 A level can only reference the previous level. | A level only has to reference the previous level once. It may also reference lower levels.
2 Circular references are not allowed within a level. | This works, provided a subreport is used to isolate the correct version to use.
C 3 The number of items in a level tends to decrease as the level increases. | In general, objects are not specialized infinitely.
4 Layers are synonymous with unique contiguous sets of levels. | Levels only approximate the layer concept. Some relationships may be deeper than others, yet belong in a lower layer according to its reusability.
Question 5
Which two of the following options are ways to version data? (Choose Two)
C # Response | Justification
C 1 Define a custom rule. | A custom rule is inherently versionable.
C 2 Add a property such as an as-of date or monotonically increasing version. Filter based on values. | This works, provided a subreport is used to isolate the correct version to use.
3 When deploying data, remove existing instances first. | This by itself does not make data versionable.
4 Define report definitions with subreports that filter on MAX pxUpdateTime. | When data is deployed to a new environment, the update timestamp is set to the current date-time.
Extending an industry framework data model
Introduction to extending an industry foundation data model
Pega offers a foundation data model for each major industry, including financial services, healthcare, and
communications. Similar to leveraging a Pega application, using the industry foundation data model can give you
an improved starting point when implementing your application.
After this lesson, you should be able to:
• Identify benefits of using an industry foundation data model
• Extend an industry foundation data model
• Use versioning to manage changes to the integration data model
Industry foundation data model benefits
Pega's industry foundation data models allow you to leverage and extend an existing logical data model instead
of building one from scratch. Pega offers industry data models for Healthcare, Communications and Media, Life
Sciences, Insurance, and Financial Services. Instead of building data classes yourself, you can map the system of
record properties to the data class and properties of the industry data model. For example, the following image
illustrates the logical model for the member data types for the Healthcare Industry foundation.
You can embellish the industry foundation data classes to include additional properties from external systems of
record as needed.
Pega's industry foundation data models apply the Separation of Concerns (SoC) design principle: Keep the
business logic separate from the interface that gets the data. The business logic determines when the data is
needed and what to do with that data after it has been retrieved. The interface insulates the business logic from
having to know how to get the data.
In Pega Platform, data pages connect business logic and interface logic. The following image illustrates the
relationship between the data page, the data class, and the mechanism for retrieving the data.
This design pattern allows you to change the integration rules without impacting the data model or application
behavior.
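As a plain-Java analogy (not a Pega API; all names here are invented for illustration), the data page plays the role of an interface that hides how the data is retrieved:

// The "data page" contract: business logic sees only this interface.
interface MemberSource {
    Member getMember(String memberId);
}

// One interchangeable retrieval mechanism (for example, a REST connector).
class RestMemberSource implements MemberSource {
    public Member getMember(String memberId) {
        // ... call the service and map the response to the data class ...
        return new Member(memberId);
    }
}

// The industry-model data class the business logic works with.
class Member {
    final String id;
    Member(String id) { this.id = id; }
}

Swapping RestMemberSource for a different implementation changes the integration without touching the business logic, which is the same decoupling the data page provides.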
KNOWLEDGE CHECK
What are the two key benefits of using a Pega industry foundation data model?
The industry foundation provides data pages that separate business logic from the source of the data
(interface), and provides a robust starting point for data properties and classes.
Rather than directly extend the industry foundation data model, your project may mandate that you use an Enterprise Service Bus (ESB)-dictated data model. The goal of an ESB is to implement a canonical data model that allows clients who access the bus to talk to any service advertised on the bus.
In a situation where an ESB is used, the Pega development team does not define the canonical data model. However, the team may maintain the mapping between the canonical data model and the foundation data model.
Note: The best practice is for the Pega development team to leverage and extend the foundation data model as
needed. All mappings of the external SOR properties to the properties in the Pega foundation data model
should be done by the Pega development team within the Pega application using Data Pages.
How to extend an industry foundation data model
Follow this process to extend an industry foundation data model:
• Obtain industry foundation data model documentation
• Obtain the data model for the system of record
• Map the system of record data model to the industry foundation data model
• Add properties to the industry foundation data model and add data classes as needed
• Maintain a data dictionary
Before you begin the mapping process, determine which parts of the data to map. For example, when producing the initial minimum lovable product (MLP), it may not be necessary to map all of the data from the source before the go-live.
Note: Building web services can be an expensive and lengthy process. If you discover that you need to build a
new web service, consider using robotic automation.
Map the system of record data model to the industry foundation data model
The next step is to map the system of record data model to the industry foundation data model. To help with this
process, use a tabular format to record this information, such as a spreadsheet. The output is a reference
document to use when mapping property values from the integration response to the application data structure.
During this analysis, you may find that you need new properties for the application. For example, when mapping
the healthcare industry foundation data model, you may find that you need a property to store information when
a claim is submitted outside of the member's home country. Record the name and class where the property
resides because you will need to add it to the application data model.
C # Response | Justification
1 The industry foundation data model can only be used as a built-on application in conjunction with Pega applications. | While most commonly used with Pega applications, the industry foundation data model can be implemented separately from a Pega application.
C 2 A best practice is to use the industry foundation data model as-is, as much as possible. | If you need to add new properties, you can add them to the industry model data classes in your organization-level ruleset. You do not have to add new classes unless the structure of the source data requires that you do so.
C 3 Pega's industry foundation data model simplifies the complexity of many-to-many relationships. | The complexity of dealing with many-to-many relationships is handled by data pages provided with the industry foundation.
4 Data pages provided by the industry foundation data model need to be extended for each external integration. | The logic to map an integration data model to the industry foundation data model changes from one interface to another. Industry foundation data pages need not change.
Question 2
Which of the following statements is true regarding the mapping of data from a system of record to an industry
foundation data model?
C # Response | Justification
1 Every property retrieved from an interface must be mapped to the business data model before an application can go live. | It is possible for a minimum lovable product (MLP) to be achieved with a fraction of the total amount of retrievable data being mapped.
2 Maintaining a data dictionary is a wasteful use of team resources. | Adding to a data dictionary during development does not significantly increase project scope.
3 The mechanics of how data is mapped is the LSA's primary concern. | An LSA should know how to map data in the application but should delegate that responsibility to the team.
C 4 Leveraging a Center of Excellence (COE) can ensure consistent implementation of the industry foundation data model. | A best practice is for the COE to maintain integration and data model rules across applications to promote reuse and consistent implementation of the industry foundation data model and integration rules.
Question 3
To accommodate changes to integration rules, an LSA __________________ and _______________? (Choose Two)
C # Response | Justification
1 Creates new ruleset versions using the existing integration base class. | A class can only be defined in one ruleset. The integration wizards only generate rules. They do not withdraw rules no longer used.
C 2 Specifies a new integration base class when using a wizard. Uses dynamic class referencing (DCR) to get the class name at run time. | A new base class does not overlap with previously generated classes. DCR can be used to obtain the name of the correct integration base class to use.
C 3 Versions the application when certain integration rulesets become obsolete. | There is no need to specify a ruleset in an application that is never used.
4 Develops custom rules to encapsulate the integration data model, thereby avoiding the complexity of versioning. | Custom rules are not recommended for integration. Custom rules are ideal for a simplistic data model where the properties stay the same from version to version but the contents of pages and page lists change.
Assigning work
Introduction to assigning work
A case is often assigned to a user to complete a task. For example, an employee expense requires approval by
the manager of a cost center or a refund is processed by a member of the accounting team.
In this lesson, you learn how to leverage work parties in routing and how to customize the Get Next Work
functionality to fulfill the requirements.
After this lesson, you should be able to:
• Compare push routing to pull routing
• Leverage work parties in routing
• Explain the default Get Next Work functionality
• Customize Get Next Work
Push routing and pull routing
The two basic types of routing are push routing and pull routing.
Push routing logic is invoked during case flow processing to determine the next assignment for the case. Push
routing occurs when a pyActivityType=ROUTE activity is used to create either a worklist or workbasket
assignment. When routing to a worklist assignment, Pega can use multiple criteria to select the ultimate owner,
such as availability (whether an operator is available or on vacation), the operator’s work group, operator skills, or
current workload. You can even configure routing to a substitute operator if the chosen operator is not available.
Pull routing, also known as the system-selected assignment model, occurs outside the context of a case creating an assignment. In standard portals, you can pull the next assignment to work on using Get Next Work by clicking Next Assignment at the top of the portal. It is also possible to pull an assignment to work on by checking Look for an assignment to perform after add? within a flow rule.
The Get Next Work feature selects the most urgent assignment from a set of assignments shared across
multiple users. Ownership of the fetched assignment does not occur until either MoveToWorklist is called or the
user submits the fetched assignment's flow action. The GetNextWork_MoveToWorklist Rule-System-Settings rule
must be set to true for the MoveToWorklist activity to be called.
Note: MoveToWorklist is called from pzOpenAssignmentForGetNextWork following the successful execution of the
GetNextWorkCriteria decision tree.
The following image illustrates how pyActivityType=ROUTE activities, when run in a case processing context, are
used to achieve push routing. The image also illustrates how GetNextWork-related rules, executed in a non-case
processing context, are used to achieve pull routing.
There are four main categories of Push Routing activities as shown in the table below.
Common Organization Based Decision Based Skills Based
ToAssignedOperator ToWorkGroup ToDecisionMap ToLeveledGroup
ToCreateOperator ToWorkGroupManager ToDecisionTable ToSkilledGroup
ToCurrentOperator ToOrgUnitManager ToDecisionTree ToSkilledWorkbasket
ToWorkParty
ToNewWorkParty
ToWorkbasket
ToWorklist
The ToCurrentOperator routing activity must be used carefully. If a Change Stage action were performed that moves the case backward, the person who performed the move may not want to become the owner of the new assignment. Instead, the person performing the move may want the assignment to be routed to a party associated with the case. In that situation, ToWorkParty routing should be used.
If a background process such as a Standard Agent or Advanced Agent moves a case forward to an assignment that uses ToCurrentOperator routing, a ProblemFlow could result. An assignment using ToCurrentOperator after a Wait shape will route to the requestor who routed the case to the Wait shape.
Routing activities such as ToAssignedOperator, ToCreateOperator, and ToCurrentOperator do not specify the role of the person receiving the assignment. Routing to a role makes it clearer why the assignment was routed to the person who received it. When using ToWorklist routing, it is better to use a property name that indicates the receiver's role rather than a hard-coded Operator ID.
Party roles should also be specific to the solution and not overly generic. At the beginning, the person who created a case can be considered the "Owner," but later in the case life cycle, after the case has been routed to multiple people, it can become confusing as to who in fact is the "Owner."
ToNewWorkParty and ToWorkParty both route an assignment to the worklist of the party specified by the PartyRole parameter. ToNewWorkParty also adds a work party for the PartyRole if one does not already exist. ToWorkParty, on the other hand, throws an error if the configured PartyRole did not exist prior to the assignment.
KNOWLEDGE CHECK
What feature in the Pega Platform supports the pull routing paradigm?
Get Next Work
How to leverage work parties in routing
Adding work parties to case types allows for consistent design and can simplify configuration of routing throughout the case life cycle. The layer of abstraction provided by work parties aids the developer by providing a dynamic, extensible, and reusable routing solution. Using work parties in your solutions allows you to leverage related base product functionality, such as a ready-to-use data model, validation, UI forms, and correspondence, which simplifies design and maintenance. The basic configuration of work parties has already been described in prerequisite courses. This topic concentrates on forming a deeper understanding of work party functionality and routing configuration to aid the developer in fully leveraging this functionality.
The WorkPartyRetrieve activity is significant since it is invoked every time a page is added to the .pyWorkParty() pages embedded on the case. The .pyWorkParty() page group stores the individual parties added to the case. The property definition contains an on-change activity that ultimately invokes the WorkPartyRetrieve activity.
It may be necessary to override the default behavior of some aspect of work parties, such as validation or display. This can be done either through ruleset specialization or by extending the work party class and overriding the required rules. If this is required, ensure the scope of the changes is correct so as not to change behavior in unintended ways.
Configuring Work Party rules
A work parties rule defines a contract that specifies the possible work parties that can be utilized in the case. Define work parties for the case in the standard work parties rule pyCaseManagementDefault. Use meaningful party role names to enhance application maintainability. Avoid generic names such as Owner and Originator. Generic, non-descriptive role names such as Owner are subject to change and may not intuitively describe the party's role with respect to the case.
You can use the visible on entry (VOE) option to add work parties when a case is created. VOE allows you to:
• Enable the user to add work parties in the New harness
• Automatically add the current operator as a work party
• Automatically add a different operator as a work party
Use the data transform CurrentOperator in the work parties rule to add the current operator as a work party
when a case is created, example shown below. You can create custom data transforms to add other work parties
when a case is created.
Note: Do not attempt to initialize and set the values on the work party page directly, as this may cause unintended results.
Note: As a best practice, define routing for every assignment, including the first assignment. This prevents
routing issues if the case is routed back to the first assignment during case processing, or if the previous step is
advanced automatically via an SLA.
KNOWLEDGE CHECK
When multiple workbaskets are listed on a user's operator record, the workbaskets are processed from top to bottom. If you configure an Urgency Threshold for a workbasket, then assignments with an urgency above the defined threshold are prioritized. Lower-urgency assignments are considered only after all applicable workbaskets are emptied of assignments with an urgency above the threshold. If Merge workbaskets is selected, the listed workbaskets are treated as a single workbasket.
Instead of specifying the workbaskets to retrieve work from, you can select the Use all workbasket assignments in user's work group option to include all workbaskets belonging to the same work group as the user. When this option is used, care must be taken to exclude workbasket assignments used to wait for subcases to complete.
If you configure the case to route with the ToSkilledWorkbasket router, then the skills defined on the operator
record of the user are considered when retrieving the next assignment. An assignment can have both required
and desired skills. Only required skills are considered by Get Next Work.
Define the user's skills using the Skill and Rating fields on the operator record. Skills are stored in the pySkills property on the OperatorID page. Skills checking is not performed when users fetch work from their own worklist, since they would not own an assignment without the proper skills. The Get Next Work functionality ensures that users can only retrieve assignments from a workbasket if they have all the required skills with at least the ratings defined.
The Assign-Worklist.GetNextWork list view uses the default getContent activity to retrieve assignments. The Assign-WorkBasket.GetNextWork list view uses a custom get-content activity, getContentForGetNextWork, to construct a query. The query varies based on Rule-Admin-System-Settings rules whose names start with GetNextWork_. By default, the query compares the user's skills to the assignment's required skills, if any.
Before the assignment returned by the list view is selected, the Assign-.GetNextWorkCriteria decision tree checks whether the assignment is ready to be worked on and whether the user previously worked on it today. The assignment is skipped if the user previously worked on it today.
KNOWLEDGE CHECK
Where are the settings specified on the user's operator record applied when getting the next
assignment?
In the custom get content activity getContentForGetNextWork used in the Assign-WorkBasket.GetNextWork list view
How to customize Get Next Work
You can customize Get Next Work processing to meet the needs of your application and your business
operations. The most common customization requirement is adjusting the prioritization of work returned by Get
Next Work. You change the prioritization of work by adjusting the assignment urgency.
Adjusting assignment urgency may not be a good long-term solution, however, since urgency can also be affected by other case or assignment urgency adjustments. A better long-term solution is to adjust the filter criteria in the Assign-WorkBasket.GetNextWork and Assign-Worklist.GetNextWork list views. For example, you can sort by the assignment's create date, or join the assignment with the case or another object to leverage other data for prioritization.
Sometimes different work groups have different Get Next Work requirements. When different business requirements exist, you customize the Get Next Work functionality to satisfy both sets of requirements, such that a change implemented to satisfy one requirement does not affect the solution to a different requirement. For example, if assignments for gold-status customers should be prioritized for customer service representatives (CSRs), but not for the accounting team, then the change implemented to prioritize gold customers for CSRs must not affect the prioritization for the accounting team.
You can create several circumstanced list views if the requirements cannot be implemented in a single list view,
or if a single list view is hard to understand and maintain.
Use the Assign-.GetNextWorkCriteria decision tree to filter the results returned by the GetNextWork list view. You
can define and use your own when rules in the GetNextWorkCriteria decision tree. Create circumstanced versions
of the GetNextWorkCriteria decision tree if needed.
Note: Using the GetNextWorkCriteria decision tree for filtering has performance impacts since items are iterated
and opened one-by-one. Always ensure the GetNextWork list view performs the main filtering.
Circumstance Example:
GetNextWork ListView: Assign-WorkBasket
Circumstanced: OperatorID.pyWorkGroup = FacilityCoordinator@Booking
Criteria: .pxWorkGroup = FacilityCoordinator@Booking
Get These Fields: .pxUrgencyAssign Descending (1), pxCreateDateTime Ascending (2)
Show These Fields: .pxUrgencyAssign, pxCreateDateTime
Besides circumstancing the GetNextWork list view, it is also possible to circumstance the GetNextWorkCriteria decision tree for a particular work group. Other alternatives exist, such as specializing the getContentForGetNextWork activity to call a different decision rule to produce the desired results. When specializing any of these rules, it is important to implement changes efficiently to ensure the results are performant.
KNOWLEDGE CHECK
How can you change the Get Next Work prioritization without customizing the GetNextWork and
GetNextWorkCriteria rules?
By adjusting the assignment urgency
Alternate Ways to Find Work
Pega's Get Next Work (GNW) feature is by far the most-used way to retrieve an assignment that needs attention. GNW is geared toward short time-frame interactions with a customer, where the work being fetched may require operator skill matching. GNW saves valuable time by not returning assignments that contain an error or that are associated with a case that is locked.
GNW is closely tied to service level rule configuration. Service level rules provide Goal, Deadline, and Passed Deadline milestones at which urgency can be increased.
Recall the "How to customize Get Next Work" discussion about circumstancing. Suppose the workbasket assignment's pxCreateDateTime property was used as the primary sort and not the secondary sort. Such a requirement could exist for a work group that simply wants the oldest workbasket assignments processed first, i.e., first in, first out (FIFO). The pyGetWorkBasket report definition behind the D_WorkBasket list data page used to display workbasket assignments could be modified to use an ascending sort on pxCreateDateTime as opposed to the descending sort on pxUrgencyAssign that it uses by default.
A value for pxUrgencyAssign could be derived after the fact, i.e., reverse engineered, by taking into account
@CurrentDateTime() and the time that the workbasket assignment was created. Going a step further, the value for
pxUrgencyAssign need not be computed until displayed in a UI.
Taking this example one step further, suppose a customer-specific SLA deadline duration (days, hours, or minutes) is added to the pxCreateDateTime of a Claim case. All ClaimUnit subcases of that case must be completed by that deadline. Different ClaimUnits, however, take different durations, on average, to complete. For efficiency, each ClaimUnit's average time to complete could be made available within a node-level data page.
A ClaimUnit spun off to a workbasket could compute its pxDeadlineTime datetime property as: @CurrentDateTime + (customer deadline – average time to complete). A circumstanced version of the Assign-WorkBasket GetNextWork list view could perform an ascending sort on pxDeadlineTime as opposed to using pxUrgencyAssign descending. A circumstanced version of the pyGetWorkBasket report definition could do the same.
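As a rough illustration of the deadline arithmetic in plain Java rather than a Pega expression (the duration values are invented for the example):

// Hedged sketch of the pxDeadlineTime computation; values are assumptions.
long now = System.currentTimeMillis();
long customerDeadlineMs  = 3L * 24 * 60 * 60 * 1000; // e.g., 3-day customer SLA
long avgTimeToCompleteMs = 8L * 60 * 60 * 1000;      // e.g., 8-hour ClaimUnit average
java.util.Date pxDeadlineTime =
    new java.util.Date(now + (customerDeadlineMs - avgTimeToCompleteMs));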
The Rule-Declare-Expressions (R-D-E) rule for pxUrgencyAssign can also be circumstanced. Below is a sketch of how pxUrgencyAssign could "reverse engineer" its value based on datetime properties.
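A minimal illustration in plain Java (the base value of 10 and the cap of 100 are assumptions; in Pega this logic would live in the declare expression itself):

// createDateTime: a java.util.Date holding the assignment's pxCreateDateTime.
long elapsedHours =
    (System.currentTimeMillis() - createDateTime.getTime()) / (60L * 60 * 1000);
int pxUrgencyAssign = (int) Math.min(100L, 10L + elapsedHours);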
C # Response | Justification
1 Push routing activity names begin with "To," and pull routing activity names begin with either "Get" or "Find." | Activity names have nothing to do with the fundamental difference between push and pull routing.
2 The security type value for push routing activities is ROUTE, and the security type value for pull routing activities is ACTIVITY. | The ROUTE activity security type value is used by Pega to identify push routing activities, but is not the primary difference compared to pull routing.
C 3 Push routing is initiated during the context of case processing, and pull routing is initiated outside the context of case processing. | Push routing occurs either immediately after case creation or when the FinishAssignment activity is executed during case processing.
4 The behavior of push routing is configured in Rule-System-Settings, and pull routing is not. | The opposite is true. A number of Rule-System-Settings exist to control the behavior of Assign-WorkBasket Get Next Work.
Question 2
The work parties rule directly facilitates routing decisions when accessed in which two configurations? (Choose
Two)
C # Response | Justification
C 1 Accessed when a work party is populated immediately after case instantiation | Visible-on-entry is accessed to decide whether a party role should be populated prior to the first assignment.
C 2 Accessed when validating which work party roles are allowed to be used and their cardinality | The work parties rule is evaluated to detect whether an attempt is made to use an unspecified party role, as well as to prevent a singular work party role from being added multiple times.
3 Accessed when Organization routing activities are called, such as ToWorkGroupManager | Organization push routing activities do not rely on the work parties rule for information.
4 Accessed during Get Next Work to validate whether the operator's primary access role corresponds to a party role | Get Next Work does not, by itself, add a party role to a case.
Question 3
Which two of the following are ways to customize Get Next Work for a particular type of actor or primary role? (Choose Two)
C # Response | Justification
C 1 Circumstance Get Next Work rules based on a text fragment derived from the operator's primary access group. | Circumstancing is a consistent way to specialize Get Next Work for one particular type of actor.
C 2 Override certain Get Next Work activities to react differently according to the operator's work group. | Implementing if/then/else switch-like logic within certain Get Next Work activities is possible. However, customization of the GetNextWork list view to accommodate different actors is difficult.
3 Define a workgroup-specific variation for each distinct workbasket purpose. Operator records specify workbaskets that correspond to their work group. | This solution is difficult to maintain. The Assign-WorkBasket GetNextWork list view is capable of filtering assignments based on the associated pxWorkGroup.
4 Define skill names specific to each work group. Set skills on each workbasket assignment according to the work group to which the workbasket is associated. | This solution is difficult to maintain. Skill names need not be workgroup-specific. Also, this approach only affects the results obtained by the Get Next Work query and not any decision made downstream.
Question 4
Which two configuration approaches differentiate Get Next Work behavior for different actors without modifying
Get Next Work related rules? (Choose two)
C # Response | Justification
C 1 List workbaskets in a specific order on operator records. | This approach can be used when a membership status hierarchy exists and those with a higher status are given preferential treatment.
C 2 Increase case urgency prior to a workbasket assignment. | Adjusting assignment urgency, either through an SLA rule or by modifying case urgency before the assignment is created, is the de facto way to utilize Get Next Work as-is.
3 Override one or more GetNextWork-related Rule-System-Setting rules. | Overriding a GetNextWork R-S-S rule would apply to everyone. The R-S-S would need to be circumstanced to apply to a particular actor type.
4 Implement a button that opens an assignment whose key is supplied by a data page. | This is a separate alternative to Get Next Work, used to either replace or augment it. This approach does not affect how the Pega Platform Get Next Work rules behave.
Defining the authentication scheme
Introduction to defining the authentication scheme
In most cases, you want to authenticate users when they log in to an application to establish that they are who they say they are. You can implement authentication features that ensure valid users can access the environments they are authorized to access. The Pega Platform provides a complementary set of authentication services.
After this lesson, you should be able to:
l Compare the available authentication services
l Determine the appropriate authentication service for a given use case
l Design an authentication scheme
Authentication design considerations
1. PRSecuredBasic
   Justification: PRSecuredBasic is no longer a valid authentication type.

2. Custom
   Justification: Custom is a valid option for an authentication type but is not the default.

3. (Correct) Basic Credentials
   Justification: Basic Credentials is the default authentication type in the Pega Platform.

4. Java
   Justification: Java is not a valid authentication type.
Defining the authorization scheme
Introduction to defining the authorization scheme
In most cases, you want to restrict authenticated users from accessing every part of an application. You can
implement authorization features that ensure users can access only the user interfaces and data that they are
authorized to access. The Pega Platform provides a complementary set of access control features called Role-
based access control and Attribute-based access control.
After this lesson, you should be able to:
l Compare role-based and attribute-based access control
l Identify and configure roles and access groups for an application
l Determine the appropriate authorization model for a given use case
l Determine the rule security mode
Another access control capability in Pega is Client-based access control (CBAC). This is more focused on
tracking and processing requests to view, update or remove personal Customer data held across your Pega
applications, such as that required by EU GDPR (and similar) regulations. In itself, it doesn’t influence the
authorization considerations for LSAs when designing a Pega application, and is not discussed further in this
module.
Authorization design considerations
Each row of the ARO grid specifies, for an Access class, the settings Read instances, Write instances, Delete instances, Read rules, Write rules, Delete rules, Execute rules, Execute activities, and Privileges. In this example:
l Work- grants five of these settings at level 5; Privileges: AllFlows (5), AllFlowActions (5)
l TGB-HRApps-Work grants five of these settings at level 5; Privileges: ManagerReports (5)
l TGB-HRApps-Work-ExpenseReport grants six of these settings at level 5; Privileges: SubmitExpenseReport (5)
Note: If an operator has multiple Access Roles, the Access Roles are joined with an OR such that only one of the
most specific AROs for each Access Role needs to grant access in order to perform the operation.
KNOWLEDGE CHECK
Which Access of Role to Object is used if there are several available in the inheritance path?
The most specific Access of Role to Object in the class hierarchy relative to the class of the object is identified and defines the access. Any less specific AROs in the class hierarchy are ignored.
Attribute-based access control (ABAC)
ABAC complements RBAC by enabling Access Control Policies to control access to specific attributes of a record
(so long as RBAC has granted access to the record), regardless of where those attributes are used in the
Application (on a screen, in a report). ABAC can also be used to define record-level access control (additional to
RBAC) where the conditions for accessing those records are NOT determined by the persona (role) that the
operator fulfills for the Application.
ABAC is optional and used in conjunction with RBAC. ABAC compares user information to case data on a row-by-row or column-by-column basis. You configure ABAC using Access Control Policy rules that specify the type of access, and Access Control Policy Condition rules defining a set of policy conditions that compare user properties or other information on the clipboard to properties in the restricted class.
You define access control policies for classes inheriting from Assign-, Data-, and Work- and use the full
inheritance functionality of Pega Platform. Access control policy conditions are joined with AND when multiple
same-type access control policies exist in the inheritance path with different names. Access is allowed only when
all defined access control policy conditions are satisfied.
Note: When both RBAC and ABAC models are implemented, the policy conditions in the models are joined with
an AND. Access is granted only when both the RBAC policy conditions AND the ABAC policy conditions are met.
In the following example, if the HR application user wants to update a Purchase case, the conditions for the access control policies defined in the class hierarchy are joined with AND. The user is granted access to update the Purchase case only if WorkUpdate AND HRUpdate AND HRPurchaseUpdate all evaluate to true.
Access class          Read            Update            Delete    Discover      PropertyRead        PropertyEncrypt
Work-                 WorkRead        WorkUpdate                  WorkDiscover
TGB-HR-Work                           HRUpdate          HRDelete  HRDiscover    HRPropRead
TGB-HR-Work-Purchase  HRPurchaseRead  HRPurchaseUpdate                          HRPurchasePropRead  HRPurchasePropEncrypt
To enable ABAC, in the Records Explorer, go to Dynamic System Settings and update the
EnableAttributeBasedSecurity value to True.
KNOWLEDGE CHECK
Which access control policy is used if there are several available in the inheritance path?
All access control policies having the same type and different names are considered. The conditions are joined
with AND.
Establishing a Dependent Roles hierarchy
Pega Infinity allows you to define a dependency hierarchy of Access Roles, in which more persona (user role) specific RBAC incrementally overrides the RBAC that is available from more generic Access Roles.
An Access Role MyApp:User can, for example, be configured as dependent on the Pega Platform Access Role PegaRULES:User4, and ‘inherit’ all authorizations available in the dependent role without defining explicit ARO records for MyApp:User. You can define this by selecting Manage dependent roles.
In this example, any Access Group which includes the MyApp:User Access Role remains authorized to Read and Write instances of any Case Type, despite the role not having any AROs (Access of Role to Object records) of its own. This is because:
l The absence of ARO records on the MyApp:User Access Role means that this Access Role neither grants nor
denies access. It yields no authorization outcome on its own
l As MyApp:User is dependent on PegaRULES:User4, any unresolved authorization checks are deferred to
PegaRULES:User4 to determine if that Access Role in turn yields an outcome
l As PegaRULES:User4 would not define an ARO for a Case Type specific to the MyApp application, the RBAC
algorithm works its way up the inheritance hierarchy of that Case Type’s Class to try and find a relevant ARO
for this check. PegaRULES:User4 does define an ARO for Work- (a superclass of any MyApp Case Type) which is
the ARO that most specifically matches an instance of a Case Type in MyApp
l As the settings of ‘5’ explicitly grant Read and Write access to the Case Type, this outcome from
PegaRULES:User4 is propagated as the outcome for the same authorization check on the MyApp:User Access
Role
Should an Access Role need to largely honour the authorization outcomes of an existing Access Role, but override
the outcomes in certain scenarios, you can also use dependent roles to configure only those AROs in the new
Access Role which override the outcomes that would otherwise be attained from its dependent roles. Any
authorization outcomes not specified in the top-level role continue to be deferred to its dependent roles for an
outcome.
For example, consider a requirement to restrict MyApp users to update all case types only if they are not
Resolved, whilst preserving all other access typically afforded to Pega Platform users. With dependent roles, this
can be implemented using a single ARO in MyApp:User which specifies the required restriction (using an
Application-specific Access When rule).
By leaving all other settings on the ARO unspecified, other authorization checks (e.g. Read access) are deferred to
the dependent role: in this example PegaRULES:User4. Read access would continue to be granted by the
dependent role, as no setting in the top-level role overrides it.
The benefits of using Dependent Role hierarchies are:
1. Eliminating duplication of AROs for application-specific Access Roles: The example above where MyApp
needed one setting to vary an otherwise reusable baseline would – when dependent roles are not used –
require all other settings on the ARO (and potentially other AROs) to be duplicated into the MyApp:User Access
Role.
2. Access Role layering: More generic Access Roles can be created to form the authorization foundation for
application-specific access roles that utilise similar personae (user roles). Application-specific access roles can
then establish a dependency on more generic access roles (which may in turn depend on Pega Platform
access roles), incrementally adding configuration of only that behaviour which differs between the Application
layer and layers on which it depends.
3. Multiple dependencies: Access roles can be configured to have multiple dependent access roles, providing multiple dependencies to defer to so that an authorization outcome can be attained, based on a collection of otherwise disparate access roles. Often there are exceptional users who concurrently perform the responsibilities of multiple personae (user roles). Creating an Access Role for these users and having it depend on multiple ‘sibling’ Access Roles from the same application may achieve this outcome.
4. Reuse Pega Platform or Pega Application authorization: Often the access roles resident in Pega Platform
(or any Pega Applications you are building on) for typical personae (user roles) such as end users, managers
and system administrators yield most of the required authorization outcomes. Implementing Application-
specific access roles to depend on the corresponding Pega Platform access roles provides a working
authorization baseline with no duplication of AROs.
5. Maintainability: By only configuring the authorization requirements that are unique to your Application-
specific access roles, and deferring the remainder to its dependent roles, it is clearer to maintainers of your
application how your application-specific authorization deviates from a more commonly understood
foundation. Configuring RBAC without dependent roles would lead to a larger number of AROs at the
application level, many of which are often slightly modified clones of the access roles provided by Pega. The
slight modifications can be hard to isolate.
6. Upgrade-ability: By virtue of eliminating duplicated AROs and instead deferring to AROs specified in dependent roles, upgrades to your Pega Platform or Pega Applications allow the authorization of your applications to immediately reflect the authorization changes that are delivered in the upgrade.
Tip: As of Pega Infinity, Application-specific access roles generated by the New Application wizard establish Pega
Platform access roles as their dependent roles.
When dependent roles are not used, your application-specific Access Roles have no links to the new or updated
Pega Access Roles, yielding the following impacts:
l Any changes to Pega’s authorization model in upgraded access roles would be masked by the application-specific access roles.
l Any new features delivered in any Pega upgrade may depend on Privileges from the upgraded access roles
that would be masked by the application-specific access roles.
KNOWLEDGE CHECK
Which access is granted if there are several dependent access roles defined for an access role?
All dependent role names are considered. The conditions are joined with OR.
How to create roles and access groups for an application
Each user (operator) of an application has a defined persona (user role) for processing cases. Applications
typically allow some groups of users to create and process cases, and other groups of users to approve or reject
those cases.
For example, in an application for managing purchase requests, any user can submit a purchase request, but
only department managers can approve purchase requests. Each group of users has specific responsibilities and plays a particular persona (user role) in processing and resolving the case.
Three candidate configurations of access groups and access roles for the Fulfillment Operator and Manager personas (Options 1, 2, and 3) are shown in the accompanying diagrams.
The design consideration for which option an LSA should take is whether the access control needs of the Manager nearly always “build on” (or are a superset of) those of the Fulfillment Operator.
Option 1: allows for the access control needs of each persona (user role) to evolve independently of each other,
with the maintainability overhead of some access control settings being duplicated across each access role;
Option 2: requires the access control needs of the Manager to always be a super-set of the Fulfillment Operator,
as a grant returned from the FulfillmentOperator access role will (without more advanced RBAC configuration) be
enough to grant access to the Managers access group, even if the Manager access role denies access;
Option 3: allows the access control needs of the Manager to be predominantly based on the FulfillmentOperator
access role, whilst allowing the Manager access role to both introduce Manager specific settings as well as
override (i.e. explicitly revoke) settings specified in the dependent FulfillmentOperator access role.
Solution: Option 3 would typically yield an intended authorization design with the fewest AROs and the lowest
likelihood of duplication. This promotes a maintainable and understandable solution, and has flexibility to adapt
as additional Journeys are added that inevitably invalidate some of the authorization decisions reached in earlier
releases.
Note: The use of Access Deny rules and/or the “Stop access checking once a relevant Access of Role to Object
instance explicitly denies or grants access” option on the access group rule-form can help Option 2 achieve the
same outcome as Option 3, but this adds more rules and complexity to the design.
Applications created from the New Application wizard have three foundation Access Roles:
l <ApplicationName>:User
l <ApplicationName>:Manager
l <ApplicationName>:Administrator
Note: Other access roles are also created for RBAC pertaining to Pega API and App Studio usage, but are inconsequential for this lesson.
Prior to Pega 8, the best practice was to avoid using the foundation Access Roles except as starting points for new application-specific access roles. The application-specific access roles would be created by cloning the foundation roles.
Starting with Pega 8 and the introduction of Dependent Roles, the best practice is to create application-specific access roles which specify the foundation access roles as dependencies.
The naming convention used for access roles is: <ApplicationName>:<RoleName> where RoleName is singular.
Use the Roles for Access landing page (DEV Studio > Configure > Org & Security > Tools > Security > Role
Names) to create new application specific roles.
Note: A capability to clone the AROs from an existing access role to another is available from the Access Roles
landing page. This is typically for backwards-compatibility only with Pega RBAC capabilities from earlier versions
that preceded the availability of Dependent Roles.
Access roles created in newer versions of Pega would typically utilise Dependent Roles as a preference over
cloning.
When planning the set of AROs to specify for an Access Role Name, consider the following:
l Is the access role they apply to inheriting authorization from Dependent Roles? If so, the AROs needed for
your access role can be limited to those that alter the authorization outcomes otherwise derived from its
Dependent Roles.
l Is the access role utilising Privilege Inheritance? If not, some Privileges from superclass AROs may need to be
re-specified in subclass AROs in the same Access Role.
l Leaving settings blank in an ARO results in Pega deferring to superclass AROs and Dependent Roles to
determine the authorization for that setting. This is a legitimate, object-oriented approach to configuring
authorization, but needs design.
l Is the access role to be used in access groups which “short-circuit” testing access roles once access is explicitly granted or denied? If so, be conscious of the distinction between configuring a setting value (either an Access When rule or a Production Level number), both of which could explicitly deny access, and leaving the setting blank (delegating the authorization outcome to a superclass ARO, a dependent role, or a subsequent role on the Access Group).
Defining Access Roles that only contain Access Deny rules, sequencing these Access Roles earlier in the list
shown on an Access Group, and activating the “Stop access checking once a relevant Access of Role to Object
instance explicitly denies or grants access” option facilitates restricting authorization that would otherwise be
granted to the Access Group by Access Roles listed after it on the Access Group. Roles that only contain Access
Deny rules can be described as Access-Deny-Only Access Roles.
Note: Access Deny rules cannot be configured to deny a Privilege that is otherwise granted by AROs in any of
the Access Roles in the Access Group.
The typical use case is where a requirement for a persona (user role) emerges whose authorization is very close to – but a subset of – an existing persona (user role). For example, given an Ordering application with an existing Manager persona (user role) (using an Ordering:Manager Access Role), the need for an “Associate Manager” persona (user role) arises, where the only difference in authorization is the value of Orders they are authorized to open. An implementation approach for this using Access Deny rules could be:
1. Create an Access-Deny-only Access Role named Ordering:AssociateManagerDeny
2. Create an Access When rule on the Order class that compares the order value to the threshold required by
the business rule
3. Create an Access Deny rule for the new Access Role on the Order class, applying the new Access When as the
Read Instance setting
4. Create the Ordering:AssociateManagers Access Group, adding the following Access Roles:
a. Ordering:AssociateManagerDeny – denying Open Order access according to the business rule
b. Ordering:Manager – granting the same authorization that existing Managers have
5. Turn on the “Stop access checking once a relevant Access of Role to Object instance explicitly denies or grants
access” setting on the Ordering:AssociateManagers Access Group
Note: This scenario can also be implemented using Dependent Roles.
Think about how a Dependent Roles design could achieve the same outcome. What are the advantages and
disadvantages? Could this be used to address the Access Deny rule’s inability to deny Privileges?
Note: You can only define ABAC for classes inheriting from Assign-, Data-, and Work-.
Use the Access Manager to configure RBAC. ABAC is configured by implementing Access Control Policy and
Access Control Policy Condition rules, which may in turn reference Access When rules.
KNOWLEDGE CHECK

1. Access is granted when RBAC evaluates to true as it overrides ABAC.
   Justification: RBAC does not override ABAC.

2. Access is granted when ABAC evaluates to true as it overrides RBAC.
   Justification: ABAC does not override RBAC.

3. (Correct) Access is only granted if both RBAC and ABAC evaluate to true.
   Justification: RBAC and ABAC must both evaluate to true.

4. Access is granted if ABAC evaluates to true unless an Access Deny restricts access.
   Justification: RBAC and ABAC must both evaluate to true.
Question 2
You have an access group with two access roles with conflicting access privileges. Which two configuration options are recommended to ensure that access is denied? (Choose Two)

1. (Correct) Use an Access Deny to explicitly restrict access.
   Justification: Access Deny rules ensure that access is denied across roles in an Access Group.

2. Create a single role restricting access.
   Justification: Using a single role is possible, but this option is not recommended if there are only a few conflicts.

3. Modify the role granting access.
   Justification: This option is not recommended because the role might be used in other access groups and the ruleset may be locked.

4. (Correct) Use ABAC to restrict access.
   Justification: Both RBAC and ABAC must be true for access to be granted.
Question 3
You want a group of users to view certain properties in a case without being able to open the case. How do you
implement the requirement?
1. RBAC Read action
   Justification: Use the Read action to allow the user to open and view the instance.

2. (Correct) ABAC Discover action
   Justification: Use the Discover action to enable viewing selected data in an instance the user cannot open.

3. ABAC Property Read action
   Justification: Use the Property Read action to restrict access to data in an instance the user can open.

4. RBAC Privilege
   Justification: Privileges cannot be used to restrict access.
Question 4
When do you set the Rule security mode on the access group to Warn?
1. (Correct) When verifying that the access role is configured correctly for rule execution.
   Justification: Warn is used prior to setting the mode to Deny to ensure the access role is set up correctly.

2. When automatically notifying the system administrator when access is denied to a rule.
   Justification: Setting Rule security mode to Warn does not result in notifications being sent.

3. When displaying a custom warn message if rule execution failed.
   Justification: This setting is not related to any messages displayed to the user.

4. When writing a message in the log file if ABAC overrode the RBAC setting.
   Justification: ABAC cannot override RBAC.
Mitigating security risks
Introduction to mitigating security risks
Securing an application and ensuring that the correct security is set up is important. Correct security entails making sure users are who they say they are (authentication). Correct security also entails proper authorization (users can only access cases they are allowed to access and can only see data they are allowed to see). Correct security also means identifying and addressing security vulnerabilities such as cross-site scripting or phishing attacks. This lesson examines common mistakes that can open up vulnerabilities in the system, and how to address them to help avoid potential risks.
After this lesson, you should be able to:
l Identify security risks
l Detect and mitigate possible attacks using Content Security Policies
l Identify potential vulnerabilities with the Rule Security Analyzer
l Know how to secure a Pega application in production
l Discuss security best practices
l Use security event logging
Security risks
Every application includes a risk of tampering and unwanted intruders. When an application is developed traditionally using SQL or another language, vulnerabilities inherent to the language are included, leaving the system open to attack. Tampering can occur in many ways and is often difficult to detect and predict. URL tampering or cross-site scripting can easily redirect users to malicious sites, so taking the proper steps to protect your application is essential.
Developing applications using best practices ensures that rules are written properly, and secures the application against threats. To maximize the integrity and reliability of applications, security features must be implemented at multiple levels.
Each technique to strengthen the security of an application has a cost. Most techniques have one-time
implementation costs, but some might have ongoing costs for processing or user inconvenience. You determine
the actions that are most applicable and beneficial to your application.
When initially installed, Pega Platform is intentionally configured with limited security. This is appropriate for
experimentation, learning, and application development.
KNOWLEDGE CHECK
Content security policies help detect and mitigate certain types of attacks by __________.
preventing browsers from loading and running content from untrusted sources
Rule Security Analyzer
The Rule Security Analyzer tool identifies potential security risks in your applications that may introduce
vulnerabilities to attacks such as cross-site scripting (XSS) or SQL injection.
Typically, such vulnerabilities can arise only in non-autogenerated rules such as stream rules (HTML, JSP, XML, or
CSS), and custom Java or SQL statements.
The Rule Security Analyzer scans non-autogenerated rules, comparing each line with a regular expressions rule
to find matches. The tool examines text, HTML, JavaScript, and Java code in function rules and individual activity
Java method steps, and other types of information depending on rule type.
The Rule Security Analyzer searches for vulnerabilities in code by searching for matches to regular expressions
(regex) defined in Rule Analyzer Regular Expressions rules. Several Rule Analyzer Regular Expression rules are
provided as examples for finding common vulnerabilities. You may also create your own Rule Analyzer Regular
Expression rules to search for additional patterns.
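As an illustration of the kind of pattern such a regular expression might flag, consider a hand-crafted RDB query. This sketch is illustrative only; the Data-Customer class and CustomerName parameter are hypothetical, and it simplifies Pega's Connect-SQL substitution syntax:

/* Risky: an {ASIS:} reference splices the raw parameter text into the
   statement, so a crafted value such as  x' OR '1'='1  changes the
   query logic (SQL injection). */
select pyID from {Data-Customer}
where CustomerName = '{ASIS:CustomerName}'

/* Safer: a plain reference lets the engine substitute the value with
   proper quoting and escaping. */
select pyID from {Data-Customer}
where CustomerName = {CustomerName}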
The most effective search for vulnerabilities is to rerun the Rule Analyzer several times, each time matching
against a different Regular Expressions rule.
Important: Use trained security IT staff to review the output of the Rule Security Analyzer tool. They are better
able to identify false positives and remedy any rules that do contain vulnerabilities.
Running the Rule Security Analyzer before locking a ruleset is recommended. This allows you to identify and
correct issues in rules before they are locked. The Rule Security Analyzer takes a couple of minutes to run
through the different regular expressions.
For more information on the Rule Security Analyzer, click How to use the Rule Security Analyzer tool.
KNOWLEDGE CHECK
The Rule Security Analyzer tool helps identify security risks introduced in __________ rules.
non-autogenerated
How to secure an application
Find out who is responsible for application security in the organization and engage them from the start of the
project to find out any specific requirements and standards, and what level of penetration testing is done.
Rules
Perform the following tasks:
l Ensure that properties are of the correct type (integers, dates, not just text).
l Run the Rule Security Analyzer and fix any issues.
l Fix any security issues in the Guardrail report.
Rulesets
Lock each ruleset version, except the production ruleset, before promoting an application from the development
environment. Also, secure the ability to add versions, update versions, and update the ruleset rule itself by
entering three distinct passwords on the security tab on the ruleset record.
Documents
If documents can be uploaded into the application, complete the following tasks:
l Ensure that a virus checker is installed to scan uploaded files. You can use the extension point in the CallVirusCheck activity to invoke the virus checker.
l Ensure file types are restricted by adding a when rule or decision table to the SetAttachmentProperties activity
to evaluate whether a document type is allowed.
Authorization
Verify that the authorization scheme is implemented and has been extensively tested to meet requirements.
Ensure the production level is set to an appropriate value in the System record. Set the production level to 5 for
the production environment. The production-level value affects Rule-Access-Role-Obj and Rule-Access-Deny-Obj
rules. These rules control the classes that can be read and updated by a requestor with an access role. If this
setting interferes with valid user needs, add focused Rule-Access-Role-Obj rules that allow access instead of
lowering the production level.
Authentication
Enable the security policies if out-of-the-box authentication is used (Designer Studio > Org & Security >
Authentication > Security Policies). If additional restrictions are required by a computer security policy, add a
validation rule. Set up time-outs at the application server level, requestor level, and access group level that are of
an appropriate length.
Integration
Work with the application security team and external system teams to ensure connectors and services are
secured in an appropriate way.
Deployment
When deploying an application to an environment other than development, limit or block functionality to certain features and remove unnecessary resources. Default settings expose risks in an application since they provide a known starting point for intruders. Taking defaults out of the equation reduces overall risk dramatically.
Make the following changes to default settings:
l Rename and deploy prweb.war only on nodes requiring it. Knowing the folder and content of prweb.war is a
high security risk as it provides access to the application.
l Remove any unnecessary resources or servlets from the web.xml. Rename default servlets where applicable,
particularly PRServlet.
l Rename prsysmgmt.war and deploy it on a single node per environment. Also, deploy prsysmgmt.war on its
own node as someone could get the endpoint URL from the application server by taking the URL from the help
pop-up window. Password protect access to the SMA servlet on the production environment.
l Rename prhelp.war and deploy it on a single node per environment. In addition, deploy prhelp.war on its own node as someone could get the endpoint URL from the application server by taking the URL from the help pop-up window.
l Rename prgateway.war and rename and secure the prgateway servlet. The prgateway.war contains the
Pega Web Mashup proxy server to connect to a Pega application.
Database
Ensure that the system has been set up using a JDBC connection pool approach through the application server,
rather than the database being set up in the prconfig.xml.
Limit the capabilities and roles that are available to the PegaRULES database account on environments other than development to remove capabilities such as truncating tables, creating or deleting tables, or otherwise altering the schema. This limit on capabilities and roles might cause the View/Modify Database Schema tool to operate in read-only mode.
Example:
Suppose an operator has Read access to a case but not Perform access. The user could issue a URL such as:
?pyActivity=doUIAction&action=openWorkByHandle&key=FSG-BOOKING-WORK EVENT-77
No error message would be displayed. Instead, the screen would say "NoID" and display a button that says "Open in Review".
Suppose you do not want that person to view the case in Review mode unless that person is either the current owner or the last person to update the case. Simply preventing access to the assignment is not sufficient, because Read access on the Assign- class (canPerform) only prevents the assignment from being performed; it does not prevent the associated case from being opened. Access must also be prevented or allowed at the case level.
Notice the RBAC configuration below for FSG-Booking-Work. The pxRelatedToMe Access When rule allows the case
to be opened only when last updated or resolved by the current operator, or currently owned by the current
operator. A co-worker would not be allowed to open the case.
Before and After screenshots show the FSG-Booking-Work ARO configuration prior to and following this change.
In the Booking application, the primary Access Role for Sales Executives should be cloned from, or even better, made dependent on PegaRULES:User4. The URL below shows what happens when SalesExecutive1@Booking attempts to open a case owned by SalesExecutive2@Booking after the modification above.
?pyActivity=doUIAction&action=openWorkByHandle&key=FSG-BOOKING-WORK EVENT-77
KNOWLEDGE CHECK
What can you do to mitigate security risks before deploying applications to production?
Follow best practices and take actions as suggested in the Security Checklist to strengthen the security.
Security event logging
In addition to data and rule modification auditing and the recording of work history, Pega provides the ability to record security-related events to a file named PegaRULES-SecurityEvent.log. This log file can be accessed from DEV Studio using: Configure > System > Operations > Logs > Log files.
Below are two example security event log entries; each entry is recorded in JSON format (reformatted here for readability).

{
  "id": "6e11a563-fd93-46d8-9de0-3963fb43a70f",
  "eventCategory": "Security administration event",
  "eventType": "security event configuration changed",
  "appName": "Booking",
  "tenantID": "shared",
  "ipAddress": "192.168.118.1",
  "timeStamp": "Fri 2019 Jul 12, 17:30:54:274",
  "operatorID": "Admin@Booking",
  "nodeID": "ff9ef7835fd4906aea82694c981938d0",
  "message": "security event configuration has been modified.",
  "requestorIdentity": "20190710T213105"
}

{
  "id": "ed76e8a7-ea28-4e9a-8830-01e8e90301ae",
  "eventCategory": "Authentication event",
  "eventType": "Operator record change",
  "appName": "Booking",
  "tenantID": "shared",
  "ipAddress": "192.168.118.1",
  "timeStamp": "Fri 2019 Jul 12, 17:41:24:976",
  "operatorID": "Admin@Booking",
  "nodeID": "ff9ef7835fd4906aea82694c981938d0",
  "requestorIdentity": "20190710T213105",
  "operatorRecID": "DATA-ADMIN-OPERATOR-ID ADMIN@BOOKING",
  "operatorRecName": "Admin",
  "operation": "update"
}
The Security Event Configuration landing page (Configure > Org & Security > Tools > Security > Security Event Configuration) displays which types of events are recorded. At the bottom is the option to enable or disable Custom event logging.
Note: The Security Event Configuration only allows you to turn custom events on or off.
This setting does not provide control over when individual custom events are logged. You could, for example, define a parameterized When rule that controls whether a step in a Data Transform or Activity should record a custom security event. The When rule’s parameter could be used to perform a Data Page-mediated lookup to see whether logging of the custom event has been enabled.
Custom event logging can be used to facilitate the fulfillment of Client-Based Access Control (CBAC) auditing
requirements.
It is possible to log a custom event within an Activity Java step using:
tools.getSecEventLogger().logCustomEvent(PublicAPI tools, String eventType, String
outcome, String message, Map<String, String> customFlds)
With the parameter values:
l eventType: Name of the event type to keep track of custom events
l outcome: The outcome of the event
l message: Any message that a user needs to log as part of the event.
l customFlds: A map of key-value pairs that log extra information for the event.
A better long-term approach, however, would be to execute this API with a Rule-Utility-Function. This is because future versions of the Pega Platform may curtail the use of Java steps in Activities.
According to the help topic Adding a custom security event, to record a custom security event you would create a Java step within an activity.
It would be overly complex to require code that calls a Function to supply a StringMap (Map<String, String>) customFlds parameter. The Function could instead accept a text-based ValueGroup Property. That ValueGroup Property can be converted to a StringMap within the Function. The following steps describe how you could configure this function.
1. Create a Library and Function
2. Have the Function accept four parameters (String, String, String, ClipboardProperty)
3. The supplied ClipboardProperty must be a ValueGroup
4. The Function converts the ValueGroup ClipboardProperty to a locally declared Map<String, String> customFlds
variable
// Obtain the PublicAPI for the current requestor thread.
PublicAPI tools = null;
PRThread thisThread = (PRThread) ThreadContainer.get();
if (thisThread != null) {
    tools = thisThread.getPublicAPI();
} else {
    throw new PRAppRuntimeException("Pega-RULES", 0, "Unable to obtain current thread");
}

// Convert the supplied ValueGroup ClipboardProperty into the
// Map<String, String> expected by logCustomEvent().
Map<String, String> customFldsMap = new HashMap<String, String>();
java.util.Iterator iter = customFlds.iterator();
while (iter.hasNext()) {
    ClipboardProperty prop = (ClipboardProperty) iter.next();
    customFldsMap.put(prop.getName(), prop.getStringValue());
}

// Record the custom security event.
tools.getSecEventLogger().logCustomEvent(tools, eventType, outcome, message, customFldsMap);
KNOWLEDGE CHECK

1. (Correct) Consider the cost for the users.
   Justification: It is important to consider the user inconvenience of a security feature.

2. Only change default settings if absolutely necessary.
   Justification: Often it is recommended to change default settings to strengthen security.

3. (Correct) Understand vulnerabilities inherent in the application programming language and underlying platform.
   Justification: Always consider vulnerabilities inherent in the application programming language and underlying platform.

4. Consider if the requirement can be implemented using ABAC instead of RBAC.
   Justification: This is not something to consider. ABAC and RBAC complement each other.
Question 2
Which two of the following security risks can be identified by the Rule Security Analyzer? (Choose Two)

1. Unsecured rulesets and unlocked ruleset versions.
   Justification: The Rule Security Analyzer does not check ruleset versions.

2. (Correct) Vulnerabilities in stream rules.
   Justification: The Rule Security Analyzer checks for vulnerabilities in hand-crafted stream rules.

3. Properties with incorrect type.
   Justification: The Rule Security Analyzer does not check property types.

4. (Correct) Hand-crafted Java code.
   Justification: The Rule Security Analyzer checks for vulnerabilities in hand-crafted Java code.
Question 3
Content security policies ______________ and ________________. (Choose Two)

1. (Correct) Protect your browser from loading and running content from untrusted sources.
   Justification: Content security policies help by identifying untrusted sources.

2. (Correct) Help detect and mitigate Cross Site Scripting (XSS) and data injection attacks.
   Justification: Content security policies help mitigate XSS and data injection attacks.

3. Create an event when security relevant rules are updated.
   Justification: This is true for security events, not content security policies.

4. Find common vulnerabilities by searching for matches to regular expressions.
   Justification: This is true for the Rule Security Analyzer, not content security policies.
Defining a reporting strategy
Introduction to defining a reporting strategy
Defining a reporting strategy goes beyond creating reports in Pega. Many organizations use a data warehousing
solution and have distinct requirements for retaining data.
After this lesson, you should be able to:
l Identify requirements that influence reporting strategy definition
l Discuss alternatives to traditional reporting solutions
l Define a reporting strategy for the organization
Reporting and data warehousing
Organizations often want to combine data from web applications, legacy applications, and other sources in order
to make decisions in real time or near real time. To make these decisions, many organizations use business
intelligence software to collect, format, and store the data, and provide software to analyze this data.
A data warehouse is a system used for reporting and data analysis. The data warehouse is a central repository of
integrated data from one or more separate sources of data. The extract, transform, and load (ETL) process
prepares the data for use by the data warehouse. The following conceptual image illustrates a typical end-to-end
process of extracting data from systems of record and storing the data in the warehouse, then making that data
available to reporting tools.
The key factor that determines whether you design your reports in the Pega application or leverage an external
reporting tool is the impact on application performance. For example, if your reporting requirements state that
you need to show how many assignments are in a workbasket at any given time, creating a report on the
assignment workbasket table is appropriate. If you need to analyze multiple years of case information to perform some type of trending analysis, use reporting tools suited for that purpose instead. You can provide a link to those reports from the end user portal in the Pega application.
For more information on BIX, see the help topic Business Intelligence Exchange.
What is the primary reason for using an external reporting tool instead of Pega reporting?
An external reporting tool is used because of the potential impact on system performance. If you need a report
that does heavy analysis or trending type reporting over large quantities of data, use a tool meant for that
purpose. Pega can handle this type of reporting, but be aware of impact to system performance, particularly
when embedding reports in end user portals.
How to define a reporting strategy
Before you define your reporting strategy, assess the overall reporting needs of the organization. The goal is to
get the right information to the right users when they need the information. Treat your reporting strategy design
as you would any large-scale application architecture decision. A robust reporting strategy can help prevent
future performance issues and help satisfy users' expectations.
As you define your reporting strategy, ask yourself the following questions:
l What reporting requirements already exist?
l Who needs the report data?
l When is the report data needed?
l Why is the report data needed?
l How is the report data gathered and accessed?
1. (Correct) Inventory the reports the organization currently uses.
   Justification: Making an inventory lets you see the information needs that drive the organization's reporting requirements.

2. (Correct) Create a matrix categorizing the user roles and the reports each role uses to make business decisions.
   Justification: A matrix helps you understand how different roles use specific reports and discover if there is overlap.

3. Install the BIX product.
   Justification: You install BIX only if you need to extract data from the organization's database and export it to a data warehouse.

4. Optimize the properties in all the reports.
   Justification: You might optimize properties to enhance reporting performance, but optimization is not part of developing a reporting strategy.
Question 2
What benefits do you gain by leveraging a data warehouse coupled with Pega reporting? (Choose Two)

1. (Correct) You can use reporting tools suited for trending and data analysis.
   Justification: This is a benefit of using a data warehouse.

2. (Correct) You can focus on using Pega for reporting on throughput data.
   Justification: This is a benefit of using a data warehouse.

3. You can purge cases from your database along with the data warehousing process.
   Justification: Purging is a separate activity from warehousing, though they are related.

4. You can use BIX to export cases to a database, a comma separated file, or XML format.
   Justification: While this is a true statement of BIX capabilities, this is not a direct benefit of leveraging data warehousing.
Question 3
Your organization uses a large number of business intelligence (BI) reports. Which two approaches would you consider to be good solutions? (Choose Two)

1. (Correct) Data warehouse
   Justification: A data warehouse may be the best solution if the organization requires heavy trending reporting and business intelligence (BI) reporting.

2. (Correct) BIX
   Justification: BIX is likely the best way to extract data from the application server and export it to the data warehouse.

3. Elasticsearch
   Justification: You use Elasticsearch mainly for dynamic queries, but it may not be useful for BI reports.

4. Data archive
   Justification: Archiving data may be useful for enhancing reporting performance, but it is not part of developing a reporting strategy.
Designing reports for performance
Introduction to designing reports for performance
Poorly designed reports can have a major impact on performance. A report may run with no issues in a
development environment. When run with production data, the report may perform poorly. This issue may
impact performance for all application users.
After this lesson, you should be able to:
l Explain the causes of poorly performing reports and the impact poor performance can have on the rest of the
application
l Describe how to design reports to minimize performance issues
l Identify the cause and remedy a poorly performing report
Impact of reports on performance
When an application is first put into production, a report may run with no issue and within established service
level agreements (SLAs). As the amount of application data grows, the report may run more slowly. Poor report
performance can cause memory, CPU, and network issues. These issues can affect all application users, not just
the user running the report.
To help you diagnose and mitigate these issues, Pega generates performance alerts when specific limits or
thresholds are exceeded. For example, the PEGA0005 - Query time exceeds limit alert helps you recognize when
queries are inefficiently designed or when data is loaded indiscriminately.
For more information about performance alerts, see the Pega Community article Performance alerts, security
alerts, and Autonomic Event Services.
Important: Guardrail warnings alert you to reports that could have performance issues. Instruct your teams to
address warnings before moving your application from development to target environments.
Memory impact
Large result sets can cause out-of-memory issues. The application places query results on the clipboard pages of users. If those pages are not managed, your application eventually shuts down with an out-of-memory (OOM) error.
CPU impact
Using complex SQL can also have a CPU impact on the database server. When the database is performing poorly,
all users on all nodes are affected. Autonomic Event Service (AES) and Predictive Diagnostic Cloud (PDC) can help
you identify the issues. Your database server administrator can set up performance monitoring for the database
server.
Network impact
Sending large result sets over the network may cause perceived performance issues for individual users depending upon their bandwidth, network integrity, and network traffic.
Paginate results
Paginating results allows you to return data in groups of manageable size. Returning big groups of records may make it difficult for users to find the information they are looking for; for example, a report that returns 50 records at a time may be too much information for a user to sift through. Select the Enable Paging option on the report definition and specify the page size.
For more information on how to configure paging in reports, see the Pega Community article When and how to
configure paging in reports.
Optimize properties
If you expect to use a property in your selection criteria, optimize that property. Optimizing a property creates a
column in the database table, which you can then index as needed.
For more information about optimizing properties, see the help topic Property optimization using the Property
Optimization tool.
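As a minimal sketch of the follow-on indexing step (PostgreSQL syntax; the customername column is a hypothetical exposed property column on a work pool table):

-- Index a hypothetical exposed column so report selection criteria
-- against it can use an index scan instead of a full table scan.
CREATE INDEX idx_work_customername
    ON pegadata.pc_fsg_booking_work (customername);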
Partition tables
Table partitioning allows tables or indexes to be stored in multiple physical sections. A partitioned index is like
one large index made up of multiple little indexes. Each chunk, or partition, has the same columns, but a
different range of rows. How you partition your tables depends on your business requirements.
For more information on partitioning Pega tables, see PegaRULES table partitioning.
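As a simplified sketch of what range partitioning looks like at the database level (PostgreSQL 11+ syntax; the table and columns are illustrative rather than the actual Pega schema):

-- Parent table declares the partitioning column.
CREATE TABLE work_history (
    pzinskey         VARCHAR(255) NOT NULL,
    pxcreatedatetime TIMESTAMP    NOT NULL
) PARTITION BY RANGE (pxcreatedatetime);

-- Each partition stores one year of rows; queries filtered by date
-- touch only the relevant partition.
CREATE TABLE work_history_2019 PARTITION OF work_history
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');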
1. (Correct) Large result sets
   Justification: The application places query results on the clipboard pages of users. If those pages are not managed, your application eventually shuts down with an OOM error.

2. Complex SQL requests
   Justification: Using complex SQL typically has a CPU impact on the database server, not OOM issues.

3. Heavy network traffic
   Justification: Sending large result sets over a network that has heavy traffic may cause perceived performance issues for the user but does not cause OOM issues.

4. Unoptimized properties
   Justification: Unoptimized properties may cause prolonged background processing times but not OOM issues.
Question 2
You have created a set of reports that use large numbers of embedded properties. Which two of the following techniques can you use to help improve report performance? (Choose Two)

1. (Correct) Create declare indexes.
   Justification: Declare indexes allow you to expose embedded page list data.

2. Enable pagination.
   Justification: Pagination controls the number of results a user sees at one time and is not directly related to the use of embedded properties.

3. (Correct) Run reports against a reports database.
   Justification: This approach offloads demand from the production database to a replicated database dedicated to reporting.

4. Include all rows when joining reports.
   Justification: This technique causes the system to use an outer join and is not directly related to the use of embedded properties.
Question 3
What database tuning technique do you use to determine if the database is taking the most efficient route to returning results?

1. (Correct) Run explain plans on your queries.
   Justification: An explain plan describes the path the query takes to return a result set. This technique can help you determine if the database is taking the most efficient route to return results.

2. Create table indexes.
   Justification: You create an index for a database column if the column is queried frequently or there are specific constraints on the column.

3. Partition the database tables.
   Justification: Table partitioning allows tables or indexes to be stored in multiple physical sections.

4. Perform a load test.
   Justification: This technique only tests system performance based on realistic data volumes.
Query design
Introduction to Query Design
It is not enough just to know how to configure reports. There can be multiple solutions to the same problem, one superior to the others. There can be situations where an ideal solution is not possible using the data that already exists. Creating data to simplify a query is a technique that can be leveraged. Also, the decision about how best to store data to facilitate reporting needs to be made early on; the ability to query data should not be an afterthought.
After this lesson, you should be able to:
l Produce queries that reference ancestors within a case hierarchy
l Produce queries based on generated or reformatted data
l Produce queries that include correlated subqueries
l Produce queries that contain complex SQL
Queries that reference ancestors within a case hierarchy
Within XPath there is the notion of axes, one of which is named “ancestor”. The definition of ancestor is parent, grandparent, and so on.
Suppose a case needs to be reviewed by multiple independent reviewers. A subcase is created for each of the N required reviewers. The initial assignment in each subcase is a workbasket assignment. Each reviewer asks the system to pull a Review workbasket assignment into their worklist. The parent of the case that the workbasket assignment references should not match the parent of any subcase that the reviewer has previously fetched.
Assumption: When a Review workbasket assignment is fetched and moved to the reviewer’s worklist, the reviewer is immediately persisted as the subcase’s Reviewer work party, i.e., pyWorkParty(Reviewer).
Solution:
select pzInsKey from
  {Assign-WorkBasket} WB,
  {Org-App-Work-Review} REV
where
  WB.pxAssignedOperatorID = [workbasket]
  and WB.pxRefObjectKey = REV.pzInsKey
  and REV.pxCoverInsKey not in
    (select REV2.pxCoverInsKey from
       {Org-App-Work-Review} REV2, {Index-WorkPartyUri} WP
     where
       WP.pxInsIndexedKey = REV2.pzInsKey
       and WP.pxPartyRole = 'Reviewer'
       and WP.pyPartyIdentifier = OperatorID.pyUserIdentifier);
Suppose, though, that the case being reviewed is two levels above the case to be fetched by a reviewer. Cases do not possess a “pxCoverCoverInsKey” property. You could define such a property if needed, but you should use a more meaningful property name. If the case to be reviewed is a Claim, you would define a work pool-level property named “ClaimKey”. The “parent” and “grandparent” Claim case would set ClaimKey equal to its pzInsKey and propagate that ClaimKey property to its subcases. Likewise those subcases would also propagate the ClaimKey property to their subcases.
Propagation of data assumed to remain static would also simplify the definition of Access When rules. Care must be taken with this approach as it is possible, however unlikely, that the information can become stale or inaccurate. For example, the parent case of a subcase can be altered when the pxMove activity is invoked. This is an example where a data integrity violation can potentially occur.
An alternative is to perform what is known as a hierarchical or recursive query, which each database vendor implements differently and which is not supported by the Report Definition rule.
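For illustration only, the following sketch uses PostgreSQL's WITH RECURSIVE syntax to walk up the cover hierarchy from a subcase to its top-level ancestor; such a query would have to run outside a Report Definition, for example through a Connect-SQL rule. The table and starting key are hypothetical.

WITH RECURSIVE ancestors AS (
  -- Anchor: the subcase to start from (illustrative key).
  SELECT pzinskey, pxcoverinskey
  FROM pegadata.pc_fsg_booking_work
  WHERE pzinskey = 'FSG-BOOKING-WORK R-123'
  UNION ALL
  -- Recursive step: join each row to its parent (cover) case.
  SELECT w.pzinskey, w.pxcoverinskey
  FROM pegadata.pc_fsg_booking_work w
  JOIN ancestors a ON w.pzinskey = a.pxcoverinskey
)
-- The row with no cover key is the top-level ancestor.
SELECT pzinskey FROM ancestors
WHERE pxcoverinskey IS NULL OR pxcoverinskey = '';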
Interestingly, if the goal is to avoid the same person reviewing the same case twice, and the case design does not involve subcases, essentially the same query works once the pxCoverInsKey comparisons are replaced with pzInsKey comparisons. In other words, if subflows are created using Split-For-Each, as opposed to subcases being spun off, the query below would prevent the same reviewer, using GetNextWork, from pulling a workbasket assignment for a case that the reviewer has already reviewed.
select pzInsKey from
  {Assign-WorkBasket} WB,
  {Org-App-Work-Review} REV
where
  WB.pxAssignedOperatorID = [workbasket]
  and WB.pxRefObjectKey = REV.pzInsKey
  and REV.pzInsKey not in
    (select REV2.pzInsKey from
       {Org-App-Work-Review} REV2,
       {Index-WorkPartyUri} WP
     where
       WP.pxInsIndexedKey = REV2.pzInsKey
       and WP.pxPartyRole = 'Reviewer'
       and WP.pyPartyIdentifier = OperatorID.pyUserIdentifier);
Queries based on generated or reformatted data
Suppose there is a requirement to produce a trend report where the chart shows the number of cases created each day as well as the number of cases resolved each day.
Assumption: Cases may take longer than one day to complete.
Note: There would be no point in creating the report if cases were always resolved the same day that they were created.
The temptation exists to define the trend report against the work pool class of the cases being analyzed, but this would not be correct. A trend report requires a Date against which to plot the two results, i.e., the number of cases created vs the number of cases resolved. Neither pxCreateDateTime nor pyResolvedTimeStamp alone can be used as the plotting date. If pxCreateDateTime were used, cases resolved days later would be counted on the day that the case was created; if pyResolvedTimeStamp were used, cases created days before being resolved would be counted on the day that the case was resolved.
An attempt could be made to query case history to identify case creation and case resolution events. While it is
possible to use History-Work to identify case creation and case resolution events, doing so would be complex.
Also, the history table contains numerous rows that contain other types of information.
As opposed to searching for case creation and resolution events within case history, a separate Data Type could
be defined. For example:
Data-SimpleCaseHistory
String CaseID
String CaseType
String EventType
DateTime WhenOccurred
Here, the allowed values for EventType would be kept to a minimum, for example, “Create” and “Resolve”.
A trend report could be defined against this table by itself. Or, the work pool class could be joined to this table using pyID = CaseID. Either way, each EventType would be plotted against the truncation of the WhenOccurred DateTime value to a Date value. Data instances within this table can be generated retroactively.
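A minimal sketch of such a trend query (PostgreSQL syntax), assuming the Data-SimpleCaseHistory type maps to a hypothetical data_simplecasehistory table:

SELECT
  DATE_TRUNC('day', whenoccurred) AS eventdate,  -- truncate DateTime to a Date
  eventtype,                                     -- 'Create' or 'Resolve'
  COUNT(*) AS eventcount
FROM pegadata.data_simplecasehistory
GROUP BY DATE_TRUNC('day', whenoccurred), eventtype
ORDER BY eventdate, eventtype;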
A different solution is to define and populate a Timeline table that contains the Dates to plot against.
Data-Timeline
Date Date
TrueFalse IsWeekStart
TrueFalse IsMonthStart
TrueFalse IsYearStart
This table would need to be populated as far into the past and into the future as would be needed. Other trend
reports could leverage the same table.
However, because it is not possible to define a JOIN based on the result of a SQL Function such as day(),
CreateDate and ResolveDate Date properties would need to be added and exposed within the work pool table.
Those database columns would also need to be indexed. The query using the Timeline table would require two
subreports, one selecting and counting rows where the CreateDate matches a given TimeLine Date. The second
subreport would select and count the number of rows where the ResolveDate matches the same TimeLine date.
The Timeline approach would not be as performant as the SimpleCaseHistory approach due to having to join to
the work pool table twice. It also would only be usable as a List report since each subreport performs a COUNT
instead of the main report performing a GROUP BY aggregation which can, in turn, be charted. Using SQL it
would be possible to UNION the result of the two subreports but this is not supported by Pega’s Report
Definition rule.
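In raw SQL, the UNION approach might look like the following sketch, assuming exposed createdate and resolvedate columns on a hypothetical pc_work work pool table and the Data-Timeline type mapped to a data_timeline table with a timelinedate column (PostgreSQL syntax):

SELECT t.timelinedate AS plotdate,
       'Created' AS eventtype,
       COUNT(w.pzinskey) AS eventcount  -- 0 for dates with no matching cases
FROM pegadata.data_timeline t
LEFT JOIN pegadata.pc_work w ON w.createdate = t.timelinedate
GROUP BY t.timelinedate
UNION ALL
SELECT t.timelinedate,
       'Resolved',
       COUNT(w.pzinskey)
FROM pegadata.data_timeline t
LEFT JOIN pegadata.pc_work w ON w.resolvedate = t.timelinedate
GROUP BY t.timelinedate
ORDER BY plotdate, eventtype;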
The ideal solution would be to base the trend report on the SimpleCaseHistory class alone without joining to the
work pool table. This example demonstrates the benefit of extracting data and persisting it to a different form to
facilitate business intelligence.
Queries that include correlated subqueries
It is not always possible to obtain a desired result in a single query using a Report Definition. Instead, the
desired level of information can be obtained from a follow-on or drill-down query based on the initial query.
Example
XYZ Corp orders office supplies through multiple suppliers. XYZ Corp wants to see detail about the most
expensive line items purchased through each supplier. XYZ has Order cases that have LineItem subcases.
LineItem detail is stored in a separate table.
How would you obtain this information?
Solution
Define a subreport that obtains the supplier ID and max(Price) from the line item table. Within the main report,
query the line item table, joining to the subreport by supplier ID.
Rationale
Multiple products from the same supplier may have the same price, which also happens to be the maximum
price for that supplier within the order.
Dilemma
The result does not show line item detail.
Solution
Drill down to obtain information about the line items for the supplier that share the same price. The drill-down
report could ask for distinct rows.
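A hedged sketch of that drill-down query follows; the {parameter} placeholders stand for values carried over from the selected row and are assumptions.
-- Distinct rows for one supplier at its maximum price within the order.
select distinct LI.SupplierID, PROD.SKU, PROD.Detail, LI.Price
from LineItem LI, Product PROD
where PROD.SKU = LI.SKU
and LI.OrderID = {OrderID}
and LI.SupplierID = {SupplierID}
and LI.Price = {MaxPrice};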
The syntax below achieves the result in a single query.
Select
ORD.OrderID,
LI.SupplierID,
PROD.SKU,
PROD.Detail,
LI.Price
From
Order ORD,
LineItem LI,
Product PROD
Where
LI.OrderID = ORD.OrderID
and PROD.SKU = LI.SKU
and LI.Price = (select max(price)
From LineItem LI2
Where
LI2.OrderID = LI.OrderID
and LI2.SupplierID = LI.SupplierID)
This type of query is called a correlated subquery. Note how the subquery references LineItem
columns using two different aliases, LI and LI2.
If not handled properly by the database, a correlated subquery can be inefficient, since it implies that a separate
query is executed for every row. Modern databases, based on what their query optimizer tells them, can
"unnest" the subquery so that it acts the same as referencing a (materialized) VIEW.
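To make unnesting concrete, a database might internally rewrite the earlier query along the lines of the following sketch, joining to a derived table of per-supplier maximums; this rewrite is illustrative rather than actual optimizer output.
select ORD.OrderID, LI.SupplierID, PROD.SKU, PROD.Detail, LI.Price
from Order ORD, LineItem LI, Product PROD,
(select OrderID, SupplierID, max(Price) as MaxPrice
from LineItem
group by OrderID, SupplierID) MAXLI      -- per-order, per-supplier maximums
where LI.OrderID = ORD.OrderID
and PROD.SKU = LI.SKU
and MAXLI.OrderID = LI.OrderID
and MAXLI.SupplierID = LI.SupplierID
and LI.Price = MAXLI.MaxPrice;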
Report Definitions allow a parameter to be passed to the subquery from the main report. The parameter's value
does not change from row to row, unlike when a JOIN is defined to a particular column.
When configuring a subreport, no alias is required to reference the main report. Instead, on the right-hand side
of the subreport join, simply enter a property that belongs to the main report’s class. This is where the
correlation occurs.
The query syntax below was generated by the CorrelatedSubqueryTest Report Definition within the provided
BookingReports ruleset. The starting point for this Report Definition was the CloneJOINTest Report Definition, also
contained within the provided BookingReports ruleset. It was not necessary to have defined the CLONE join to
demonstrate that a correlated subquery can be defined. What this example demonstrates is that it is also
possible to reference a value on the right-hand side of the subreport correlated join that is derived from a JOIN
performed by the main report.
SELECT
"PC0"."pyid" AS "pyID",
"PC0"."pystatuswork"
AS "pyStatusWork", TO_CHAR(DATE_TRUNC('day' "PC0"."pxcreatedatetime"::TIMESTAMP),
'YYYYMMDD')
AS "pyDateValue(1)",
"CLONE"."pylabel" AS "pyLabel",
"HOTEL"."srcol4" AS "pyLabel",
"PC0"."pzinskey" AS "pzInsKey"
FROM
pegadata.pc_FSG_Booking_Work "PC0"
INNER JOIN
pegadata.pc_FSG_Booking_Work
"CLONE" ON ( ( "CLONE"."pyid" = "PC0"."pyid" )
AND "CLONE"."pxobjclass" = ?
AND "PC0"."pxobjclass"
IN (? , ? ))
INNER JOIN (
SELECT
"HOTEL"."pyguid" AS "srcol1",
"HOTEL"."brand" AS "srcol2",
"HOTEL"."name" AS "srcol3",
"HOTEL"."pylabel" AS "srcol4",
"HOTEL"."pzinskey" AS "pzinskey"
FROM
pegadata.pr_FSG_Data_Hotel_e0214 "HOTEL"
WHERE "HOTEL"."pxobjclass" = ? ) "HOTEL" ON ( ( "HOTEL"."srcol4" =
"CLONE"."pylabel") AND "CLONE"."pxobjclass" = ? AND "PC0"."pxobjclass" IN (? , ? ))
Note how the first example query in this section included a comparison to an aggregated value, that is,
LI.Price = (select max(price) from LineItem LI2 ...). This is something that a subreport can do that a regular JOIN cannot.
Below is another example.
Suppose you want to enforce that only one LineItem subcase within a Purchase Order can be in someone's
worklist at any given time. Parent case locking could be used to prevent two people from working on the
Purchase Order at the same time. This does not, however, prevent simultaneous ownership of LineItem worklist
assignments for the same Purchase Order.
SELECT
LI1.LineItemID from LineItem LI1, (SELECT count(*) as LI2Count
FROM
LineItem LI2,
Assign-Worklist WL
WHERE
LI2.PurchaseOrderID = LI1.PurchaseOrderID AND LI2.pzInsKey = WL.pxRefObjectKey) A
WHERE
LI1.pyStatusWork not like 'Resolved%' AND A.LI2Count = 0
The requirement could be changed, for example, to allow up to two LineItem cases in worklists (LI2Count <= 2). In
that case, the outer query should also be joined to Assign-WorkBasket so that a LineItem case already in someone's
worklist is not returned; instead, every LineItem case returned by the query is associated with a workbasket assignment.
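A hedged sketch of that relaxed variant follows, written in the same pseudo-SQL style as the query above; the Assign-WorkBasket join and its columns are assumptions.
SELECT
LI1.LineItemID
from LineItem LI1, Assign-WorkBasket WB, (SELECT count(*) as LI2Count
FROM
LineItem LI2,
Assign-Worklist WL
WHERE
LI2.PurchaseOrderID = LI1.PurchaseOrderID AND LI2.pzInsKey = WL.pxRefObjectKey) A
WHERE
WB.pxRefObjectKey = LI1.pzInsKey
AND LI1.pyStatusWork not like 'Resolved%' AND A.LI2Count <= 2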
Queries that contain complex SQL
ListViews allow direct concatenation of SQL-like syntax that, in turn, is converted to SQL. ListViews do not support
SQL Functions or subreports. For this reason, among others, ListViews are deprecated, meaning they should not be
used to define new queries or reports.
There are a number of ways to query data that are not supported by Report Definitions. An example is the
Haversine formula used at the FSG enterprise layer within the provided example Booking application. The
query is found in the Browse tab of the FSG-Data-Address HaversineFormula Connect-SQL rule.
Note the RDB-List step within the Code-Pega-List Connect_SQL_pr_fsg_data_address activity that sources the
D_AddressesWithinDistance Data Page.
It is not possible to define this type of query using a Report Definition since it has two FROM-clause SELECTs, one
aliased "z", the other aliased "d". Unlike a Report Definition, a Connect SQL rule lacks the ability to dynamically
modify its filter conditions when a parameter value is empty. Unless a Report Definition is configured
to generate "is null" when a parameter lacks a value, Pega ignores the filter condition, which, in some cases,
can be risky unless a limit is placed on the number of returned rows.
Within the HaversineFormula query there is no need to dynamically generate the filter conditions. It does not
make sense to execute the query unless a value is supplied for every query parameter, with the exception of the
IsFor column, currently either "HOTEL" or "VENUE".
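For orientation, the shape of such a query can be sketched as follows; the column names, {parameter} placeholders, and the 3959-mile Earth radius are assumptions for illustration and may differ from the actual HaversineFormula rule.
-- Two FROM-clause SELECTs, aliased z and d, as described above.
select d.pyGUID, d.City, d.Distance
from (select z.pyGUID, z.City,
2 * 3959 * asin(sqrt(
power(sin((z.Lat2 - z.Lat1) / 2), 2)
+ cos(z.Lat1) * cos(z.Lat2)
* power(sin((z.Lon2 - z.Lon1) / 2), 2))) as Distance
from (select A.pyGUID, A.City,
radians({Latitude}) as Lat1, radians(A.Latitude) as Lat2,
radians({Longitude}) as Lon1, radians(A.Longitude) as Lon2
from pr_fsg_data_address A
where A.IsFor = {IsFor}) z) d
where d.Distance <= {MaxDistance}
order by d.Distance;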
Care must be taken when using Connect-SQL rules, as the column names may not be returned as aliased. For
example, despite aliasing the lower-case postalcode column to camel-case PostalCode, the column name is
returned all lower-case, the same as it exists in a Postgres database. For this reason, the D_
AddressesWithinDistance Data Page calls a post-processing activity named Post_Connect_SQL_pr_fsg_data_
address to convert column names to camel-case to match the way property names are spelled within the FSG-
Data-Address class.
The Java step in Post_Connect_SQL_pr_fsg_data_address uses a brute-force approach. Ideally, the External Mapping
tab of the FSG-Data-Address Rule-Obj-Class rule could be leveraged; note the pxClassSQL.pxColumnMap PageList
within that class rule. Note: In the future, Java steps in activities will be forbidden. The code in this Java step
should be moved to a Rule-Obj-Function.
C # Response | Justification
C 1 Define and report against a history table | A trend report could be defined against this history table by itself and would be the preferred solution.
2 Use complex SQL requests | Using complex SQL by itself could not yield the desired result, as some of the requisite data points are undefined and it is not possible to define a JOIN based on the result of a SQL Function such as day().
C 3 Define and report against a Timeline table | Defining and populating a Timeline table that contains the Dates to plot against could generate the desired trend report, albeit not an ideal one.
4 Use a series of subreports | Defining a series of subreports would not help, as some of the data points are undefined and because it is not possible to define a JOIN based on the result of a SQL Function such as day().
User experience design and performance
Introduction to user experience design and performance
Application performance and user experience are naturally related. If the application’s user interface is not
responsive or performance is sluggish, the user experience is poor.
After this lesson, you should be able to:
l Identify application functionality that can directly impact the user experience
l Describe strategies for optimizing performance from the perspective of the user
l Design the user experience to optimize performance
How to identify functionality that impacts UX
A good user experience is not only related to the construction of user views and templates. Thoughtful use of
features such as background processing can impact the user experience and overall performance of the
application. Consider these areas of application functionality that directly impact the user experience and use
the following guidelines to ensure the application provides the best possible experience for your end users:
l Background Processing
l SOR Pattern
l External Integration
l Network Latency
l Large Data Sets
l User Feedback
l Responsiveness
Background processing
Moving required processing into the background so the end user does not have to wait can improve the
perceived performance of an application. This can also be referred to as asynchronous processing. While the
user is busy reading a screen or completing another task, the application performs required tasks in another
requestor session, apart from the user’s requestor session.
Scenarios where background processing can be leveraged include:
l SOR Pattern
l External Integration
l Network latency
Leverage responsive UI
As the form factor changes, leverage Pega's Responsive UI support and only show the user what is absolutely
necessary to complete the task at hand. Avoid creating the "Everything View" that tries to show every piece of
information all at once. Move unnecessary or optional information away from the screen as the screen size is
reduced. Keep your User Interfaces highly specialized and focused on individual and specific tasks.
Background processing
Background processing can also be leveraged to allow an initial screen to load which allows the user to continue
working while additional detailed information is retrieved. This strategy is particularly useful when using the
SOR design pattern.
Pagination
Pagination can be leveraged to allow long-running reports to retrieve just enough information to load the first
page of the report. As the user scrolls down to view the report additional records are retrieved and displayed as
they are needed. Use appropriate pagination settings on grids and repeating dynamic layouts to reduce the
amount of markup used in the UI.
Data pages
Use data pages as the source for list-based controls. Data pages act as a cached data source that can be scoped,
invalidated based on configured criteria, and garbage collected.
Client-side expressions
Use client-side expressions instead of server-side expressions whenever possible. Whenever expressions can be
validated on the client, they run on the browser. This is typically true for visibility conditions, disabled conditions,
and required conditions. Enable the Client Side Validation check box (only visible when you have added an
expression), and then tab out of the field.
C # Response | Justification
C 1 Run connectors in parallel | Running connectors in parallel helps complete complex tasks more quickly by running them asynchronously.
2 Run processes synchronously | Running complex processes synchronously can slow down the user experience. A better approach is to run them in parallel or asynchronously.
C 3 Deferred data loading | Deferred data loading can be used to significantly improve the perceived performance of a user interface.
4 Avoid requeueing failed responses | Requeueing failed responses helps ensure good feedback during transient errors.
Question 2
Which two Pega features can help improve the user experience? (Choose Two)
C # Response | Justification
1 Smart layouts | Avoid using legacy controls and layouts.
C 2 Layout groups | Layout Groups are the official replacement in Pega Platform for legacy tabs. Legacy tabs run in quirks mode, which is slower and not HTML 5.0 compliant.
C 3 Repeating dynamic layouts | Use this newer capability introduced in Pega Platform for better client-side performance.
4 Panel sets | Avoid using legacy controls and layouts.
Question 3
Which two techniques can improve performance and the related user experience? (Choose Two)
C # Response | Justification
1 Maximize server-side processing | Minimize round trips to the server by ensuring that multiple actions processed on the server are bundled together.
2 Minimize client-side expressions | Use client-side expressions instead of server-side expressions whenever possible.
C 3 Use single page dynamic containers | Single Page Dynamic Containers, rather than legacy iframes, enable better design and web-like interaction.
C 4 Use the Optimized code option for layout mode in dynamic layouts | Using the Optimized code layout mode setting in the dynamic layout's Presentation tab helps to reduce markup.
Conducting usability testing
Introduction to conducting usability testing
Usability testing is a method for determining how easy an application is to use by testing the application with
real users in a controlled setting. Usability testing is an important part of designing and implementing a good
user experience.
After this lesson, you should be able to:
l Discuss the importance of usability testing
l Plan for usability testing
l Describe the six stages of usability testing
Usability testing
Planning for, and building usability testing into, the project plan is essential to designing a positive user
experience. The goal of usability testing is to better understand how users interact with the application, and to
improve the application based on the results of the usability tests.
Usability testing involves interacting with real-world participants—the users of the application—to obtain
unbiased feedback and usability data. During usability testing, you collect both quantitative and qualitative data
concerning the user experience. The quantitative data is generally the most valuable. An example of quantitative
data is: 73% of users performed the given task in 2 seconds or less.
Sometimes qualitative data, such as a user’s opinion of the software, is also collected. An example of qualitative
data is: 67% of users tested agreed with the statement, “I felt the app was easy to use".
Select participants
Consult with the product owner to get the list of end user participants who will participate in performing the
usability testing.
Conduct testing
Ensure that the usability testing participants understand the tasks and the sequence of steps. You want the
usability testing participants to perform these tasks without assistance. Monitor the participants as they perform
the testing, take notes, and measure all interactions. While participants perform the testing, they should also
take notes based on what they observe.
Compile feedback
Compile feedback based on the notes provided by the testing participants, and further discussion with the
participants. Consider measuring both user performance and user preference metrics. User performance and
preference do not always match. Often users perform poorly when using a new application, but their preference
ratings may be high. Conversely, they may perform well but their preference ratings are low.
Conducting Usability Testing
Question 1
Which three of the following steps are part of usability testing? (Choose Three)
C # Response | Justification
C 1 Documenting the recommendations needed to fix the issue | One of the outcomes of usability testing is to document recommended improvements.
C 2 Identifying the users of the application | Usability testing must involve the users of the application to be effective.
C 3 Putting together a list of tasks that need to be tested | Identifying key tasks to test is an important part of usability testing.
4 Locking the ruleset before usability testing | Usability testing should be performed during DCO and design.
Question 2
Select two reasons why usability testing is important. (Choose Two).
C # Response | Justification
1 Usability testing allows project managers to subjectively define the application's usability | Usability testing cannot be substituted by working with project managers, project sponsors, stakeholders, or business architects.
C 2 Usability testing validates the ease of use of the user interface design | Usability testing validates the ease of use of the user interface design so that time and resources are not spent on developing a poor user interface.
C 3 Usability testing is done to collect valuable design data | Usability testing is done to collect valuable design data in the least amount of time possible. The output of usability testing helps identify usability issues.
4 Usability testing is done after the application is deployed to production and is used to compute a net promoter score | Usability testing is conducted periodically throughout the software development life cycle and can help identify issues early in the software development life cycle.
Question 3
What are the last three stages of usability testing? (Choose Three)
C # Response | Justification
1 Select the testing location | Although the testing location is interesting, it is not part of usability testing.
C 2 Select participants | Consult with the product owner to get the list of end user participants who will participate in performing the usability testing.
C 3 Conduct testing | Monitor the participants as they perform the testing, take notes, and measure all interactions.
C 4 Compile feedback | Compile feedback based on the notes provided by the testing participants, and further discussion with the participants.
Designing background processing
Introduction to designing background processing
The design of background processing is crucial to meeting business service levels and automating processes.
Background processes must be carefully designed to ensure all work can be completed within the business
service levels. There are several features provided by Pega Platform which can be leveraged to provide an
optimal solution.
After this lesson, you should be able to:
l Evaluate background processing design options
l Configure asynchronous processing for integration
l Optimize default agents for your application
Background processing options
Pega Platform supports several options for background processing. You can use standard and advanced agents,
service level agreements (SLAs), and the Wait shape to design background processing in your application.
Queue Processors
Queue Processors allow you to focus on configuring the specific operations to perform. When using Queue
Processors, Pega Platform provides built-in capabilities for error handling, queuing and dequeuing, and commits.
Queue Processors are often used in applications that stem from a common framework, or by the Pega
Platform itself.
By default, Queue Processors run in the security context of the ASYNCPROCESSOR requestor type. When
configuring the Queue-For-Processing method in an Activity, or the Run in Background step in a Stage, it is possible
to specify an alternate Access Group. It is also possible for the activity that the Queue Processor runs to change
the Access Group. An example is the Rule-Test-Suite pzInitiateTestSuiteRun activity executed by the
pzInitiateTestSuiteRun Queue Processor.
Queues are shared across all nodes. The throughput can be improved by leveraging multiple Queue Processors
on separate nodes to process the items in a queue.
Standard agents
Standard agents are generally preferred when you have items queued for processing. Standard agents allow you
to focus on configuring the specific operations to perform. When using standard agents, Pega Platform provides
built-in capabilities for error handling, queuing and dequeuing, and commits.
By default, standard agents run in the security context of the person who queued the task. This approach can be
advantageous in a situation where users with different access groups leverage the same agent. Standard agents
are often used in an application with many implementations that stem from a common framework, or in default
agents provided by Pega Platform. The Access Group setting on an Agents rule applies only to advanced agents,
which are not queued. To always run a standard agent in a given security context, you need to switch the queued
Access Group by overriding the System-Default-EstablishContext activity and invoking the setActiveAccessGroup()
Java method within that activity.
Queues are shared across all nodes. The throughput can be improved by leveraging multiple standard agents on
separate nodes to process the items in a queue.
Note: There are several examples of default agents using the standard mode. One example is the
ServiceLevelEvents agent in the Pega-ProCom ruleset, which processes SLAs.
KNOWLEDGE CHECK
As part of an underwriting process, the application must generate a risk factor for a loan and insert
the risk factor into the Loan case. The risk factor generation is an intensive calculation that requires several
minutes to run. The calculation slows down the environment. You would like to have all risk factor calculations
run automatically between the hours of 10:00 P.M. and 6:00 A.M. to avoid the slowdown during daytime
working hours. Design a solution to support this
Use a Delayed Dedicated Queue Processor. Set the DateTime for Processing to 10:00 P.M.
OR
Create a standard agent to perform the calculation. Include a step in the flow to queue the case for the agent.
Pause the case processing and wait for the agent to complete processing.
This solution delays the loan process and waits for the agent to resume the flow. It can take advantage of other
processing agents if enabled on other nodes, which may reduce the time it takes to process all of the
loan risk assessments.
KNOWLEDGE CHECK
You need to automate a claim adjudication process in which files containing claims are parsed,
verified, and adjudicated. Claims that pass those initial steps are automatically created as cases for further
processing. A single file containing up to 1,000 claims is received daily before 5:00 P.M. Claim verification is
simple and takes a few milliseconds, but claim adjudication might take up to five minutes.
In an activity invoke the Queue-For-Processing method against each claim.
OR
Create a standard agent to perform the calculation. Include a step in the flow to queue the case for the agent.
Pause the case processing and wait for the agent to complete processing
Using the file service activity to only verify claims and then offload the task to the agent is preferred because it
does not significantly impact the intake process. It can also take advantage of multinode processing if available.
Furthermore, the modular design of the tasks would allow for reuse and extensibility if required in the future.
However, if you use the same file service activity for claim adjudication, it impacts the time required to process
the file. Processing is only available on a single node and there is little control over the time frame while the file
service runs. Extensibility and error handling might also be more challenging. Consideration must be given to
the time an agent requires to perform the task. For example, the time required to process 1,000 claims at up to
five minutes each by a single agent is 5,000 minutes (83.33 hours). This is not suitable for a single agent running
on a single node. A system with the agent enabled on eight nodes could perform the task in the off-hours. If
only a single node is available, an alternative solution is to split the file into smaller parts, which are then
scheduled for different agents (assuming there is enough CPU available for each agent to perform its task).
Job Schedulers
Use Job Schedulers when there is no requirement to queue a recurring task. Unlike Queue Processors, a Job
Scheduler must not only decide which records to process, it must also establish each record's step page context
before performing work on that record. For example, if you need to generate statistics every midnight for
reporting purposes, the output of a report definition can determine the list of items to process. The Job
Scheduler must then operate on each item in the list.
Unlike Queue Processors, a Job Scheduler needs to decide whether a record needs to be locked. It also must
decide whether it needs to commit records that have been updated using Obj-Save. If, say, a Job Scheduler
creates a case, or opens a case with a lock and causes it to move to a new assignment or complete its life cycle, it
would not be necessary for the Job Scheduler to issue a Commit.
Advanced agents
Use advanced agents when there is no requirement to queue and perform a recurring task. Advanced agents
can also be used when there is a need for more complex queue processing. When advanced agents perform
processing on items that are not queued, the advanced agent must determine the work that is to be performed.
For example, if you need to generate statistics every midnight for reporting purposes, the output of a report
definition can determine the list of items to process.
Tip: There are several examples of default agents using the advanced mode, including the agent for full text
search incremental indexing FTSIncrementalIndexer in the Pega-SearchEngine ruleset.
In situations where an advanced agent uses queuing, all queuing operations must be handled in the agent
activity.
Tip: The default agent ProcessServiceQueue in the Pega-IntSvcs ruleset is an example of an advanced agent
processing queued items.
When running on a multinode configuration, configure agent schedules so that the advanced agents coordinate
their efforts. To coordinate agents, select the advanced settings Run this agent on only one node at a time
and Delay next run of agent across the cluster by specified time period.
KNOWLEDGE CHECK
ABC Company is a distributor of discount wines and uses Pega Platform for order tracking. There are
up to 100 orders per day, with up to 40 different line items in each order specifying the product and quantity.
There are up to 5,000 varieties of wines that continuously change over time as new wines are added to and
dropped from the list. ABC Company wants to extend the functionality of the order tracking application to
determine recent hot-selling items by recording the top 10 items ordered by volume each day. This information
is populated in a table and used to ease historical reporting.
Use Job Schedulers.
OR
An advanced agent runs after the close of business each day, and it performs the following tasks:
• Opens all order cases for that day and tabulates the order volume for each item type
• Determines the top 10 items ordered and records these in the historical reporting table
The agent activity should leverage a report to easily retrieve and sort the number of items ordered in a day.
When recording values in the historical table, a commit and error handling step must be included in the
activity.
Wait shape
The Wait shape provides a viable solution in place of creating a new agent or using an SLA. The Wait shape can
only be applied to a case within a flow step, and waits for a single event (timer or case status) before
allowing the case to advance. A single-event trigger applied against a case represents the most suitable use case
for the Wait shape; the desired case functionality at the designated time or status follows the Wait shape
execution.
Within the provided FSG Booking application there is a good example of where a Timer Wait Shape could be
used. The Timer can be used in a loop-back polling situation, where a user may want to have an operation
executed immediately within the loop-back. In this example, a user may want to poll for the current weather
forecast instead of waiting for the next automated retrieval to occur. As shown, this loop-back can be
implemented in parallel to a user task such as flagging weather preparation set up and tear down task
completion. It would be overly complex to update a Queue Processor’s record to fire as soon as possible, then
have to wait several seconds to see the result. As stated earlier, an SLA should not be used for polling or periodic
update situations.
Asynchronous integration
Pega Platform provides multiple mechanisms to perform processing asynchronously. For instance, an application
may initiate a call to a back-end system and continue processing without blocking and waiting for the external
system’s response. This approach is useful when the external system processing time can be an issue and when
the result of the processing is not required immediately. A similar feature is also available for services allowing
you to queue an incoming request.
Load-DataPage
An Asynchronous Data Page can be an optimal design pattern for any process that selects filtered rows from the
same overall record set. Retrieving the same large record set over and over again is a waste of processing
resources.
Note: If you configure several connectors to run in parallel, ensure the response data is mapped to separate
clipboard pages, and error handling is set up.
If a slow-running connector is used to source a data page, the data page can be preloaded using the Load-
DataPage method in an activity to ensure the data is available without delay when needed. Grouping several
Load-DataPage requestors by specifying a PoolID is possible. Use the Connect-Wait method to wait for a specified
interval, or until all requestors with the same PoolID have finished loading data.
When configuring this option for the service, you must create a service request processor that determines the
queuing and dequeuing options. This information is used by the ProcessServiceQueue agent to perform the tasks.
C # Response | Justification
C 1 Commits must be issued in advanced agent activities | Advanced agents must handle their own commits.
2 An advanced agent cannot be used to process a queue | Advanced agents can process a queue, but this must be coded in the agent activity.
3 In a multinode system, standard agents are always preferred | There are use cases where an advanced agent would be preferred in a multinode system.
4 An advanced agent should be enabled on only a single node in a multinode system | Advanced agents can operate on multiple nodes, but the agent activity must be carefully designed to avoid conflicts.
Question 2
Select the true statement regarding SLAs and Wait shapes.
C # Response | Justification
C 1 An SLA can be used to replace a custom agent in some situations | The escalation activity in a custom SLA can be used to perform a scheduled task, replacing an agent.
2 An SLA is a special type of advanced agent | An SLA is a special type of standard agent.
3 Wait shapes completely replace SLA functionality | Wait shapes do not have escalation activities, and cannot completely replace an SLA.
4 Wait shape escalation activity can advance the case in a flow | Wait shapes do not have escalation.
Question 3
Select the two true statements regarding background processing. (Choose Two)
C # Response | Justification
C 1 Background processing involves separate requestors | Background processing always involves separate requestors.
2 Parallel flows are an example of leveraging background processing | A parallel flow runs in the same requestor session as the flow that initiated it.
3 Locking effects are negated with background processing | Locking must be considered with background processing, as separate requestors cannot lock an object at the same time.
C 4 The Connect-Wait method can be used to test whether background processing for connectors is complete | The Connect-Wait method can be used after the Run-in-parallel method to wait for child requestors to complete.
Designing Pega for the enterprise
Introduction to designing Pega for the enterprise
Pega is enterprise-grade software designed to transform the way organizations do business and the way they
serve customers. The Pega application not only works with existing enterprise technologies, but also leverages
those technologies to provide an end-to-end architectural solution.
After this lesson, you should be able to:
l Describe the design thinking and approach for architecting Pega for the enterprise
l Describe deployment options and how those deployment choices can affect design decisions
l Describe how Pega interacts with existing enterprise technologies
l Describe the design approach when architecting a Pega application
Designing the Pega enterprise application
You can easily be overwhelmed by the number of external applications and channels you need to work with to
deliver a single application to your business users. With this in mind, the following video describes how to design
the end-to-end Pega enterprise application, starting with Pega in the middle of your design.
Transcript
Pega is not just another web application that sits in your library of web or mobile apps. Pega radically transforms
the way organizations do business. Pega can drastically reduce costs, build customer journeys, and fully
automate work.
Your job, as a lead system architect, is to take the digital transformation vision and transform business
applications that perform real work for real people and drive business outcomes for even the largest of
organizations. It is easy to be overwhelmed by all of the existing technologies, channels, integrations to legacy
systems, and trying to figure out how Pega fits into the big picture. But, if you start with Pega in the middle,
and work your way out to those channels and systems of record, one application at a time, the vision becomes
reality, release by release.
The entire digital transformation of a large organization is not realized in one release of the application. At the
start of a project, you probably only know a portion of what the end to end architecture will look like, and that is
ok. Instead of thinking channel-in or system-up, think of Pega out—intelligently routing work from channels
through back end systems, then adding automation where it makes sense, and thinking end to end at all times.
Whether you are designing your application with Pega Platform or you are starting with a Pega CRM or industry
application, designing with Pega in the middle and thinking one application at a time allows you to implement
your application based on what you know today, and gives you the freedom and flexibility to design for whatever
comes tomorrow.
For a great demonstration of Pega Infinity, see the following PegaWorld 2019 presentation: Pega Infinity
Demo
Application deployment and design decisions
Pega works the same regardless of the environment in which it is running. Pega runs the same on Pega Cloud as
it does on a customer cloud, such as Amazon Web Services (AWS) or Google Cloud, or on-premise. No
matter the environment, Pega follows the standard n-tier architecture you may already recognize.
Because Pega is software that writes software, you can run your application anywhere or move it from one
environment to another. For example, you could start building your application on a Pega Cloud environment,
then move your application to an on-premise environment. The application functions the same way.
Consider these two environment variations when designing your application:
l Requirements to deploy an enterprise archive (.ear)
l Requirements to use multitenancy
Multitenancy
Multitenancy allows you to run multiple logically separate applications on the same physical hardware. This
allows the use of a shared layer for common processing across all tenants, yet allows for isolation of data and
customization of rules and processes specific to the tenant.
Multitenancy supports the business process outsourcing (BPO) model using the Software as a Service (SaaS)
infrastructure. For example, assume the shared layer represents a customer service application offered by
ServiceCo. Each partner of ServiceCo is an independent BPO providing services to a distinct set of customers. The
partner (tenant) can customize processes unique to the business and can leverage the infrastructure and shared
rules that ServiceCo provides.
When designing for multitenancy, consider:
l Release management and software development lifecycle – The multitenant provider must establish
guidelines for deploying and managing instances, and work with tenants to deploy, test, and monitor
applications.
l Multitenant application architecture – The multitenant provider must describe the application architecture to
the tenants and explain how tenant layers can be customized.
l System maintenance – Maintenance activities in multitenancy affect all tenants. For example, when a system
patch is applied, all tenants are affected by the patch.
l Tenant life cycle – The multitenant provider and tenant must work together to plan disk and hardware
resources based on the tenant's plans for the application.
l Tenant security – The two administrators in a multitenant environment include the multitenant provider
administrator and the tenant administrator. The multitenant provider sets up the tenant, and manages
security and operations in the shared layer. The tenant administrator manages security and operations of the
tenant layer.
For more information on multitenancy, see the Multitenancy help topic.
KNOWLEDGE CHECK
Name two situations in which you need to make additional design considerations with respect to how the
application is deployed.
Enterprise tier deployment (.ear) requirements and use of multitenancy
Security design principles
Like performance, security is always a concern, no matter what application you work on or design. Whether on
premise or in a cloud environment, failing to secure your application exposes the organization to huge risk, and
can result in serious damage to the organization's reputation. Take security design and implementation very
seriously and start the security model design early.
PDC | AES
Hardware provisioning: Pega | Customer
Installation and upgrades: Pega | Customer
Ability to customize: Upon request | Fully customizable
Release schedule: Quarterly | Yearly
Communication with monitored nodes: One-way | Two-way
Active system management (restart agents, listeners, quiesce node): Not available | Available
Both AES and PDC monitor the alerts from, and health activity for, multiple nodes in a cluster. Both send you a
scorecard that summarizes application health across nodes. The most notable difference, from an architecture
standpoint, is that AES interacts with the monitored nodes to allow you to manage processes on those
nodes, such as restarting agents and quiescing application nodes.
You can use AES or PDC to monitor development, test, or production environments. For example, you can set up
AES to monitor a development environment to identify any troublesome application area before promoting to
higher environments.
The System Management Application (SMA) can be used to monitor and manage activity on an individual
node. SMA is built on Java Management Extensions (JMX) and provides a standard API to monitor and manage
resources either locally or by remote access.
Pega Platform continually generates performance data (PAL) and issues alerts if the application exceeds
thresholds that you have defined. The following diagram compares the access to monitored nodes to gather and
display that performance data.
For more information on AES, PDC, and SMA, see the following resources:
l Autonomic Event Services (AES) landing page
l Predictive Diagnostic Cloud (PDC) landing page
l System Management Application (SMA) help topic
For more information on deployment and configuration options, see the Create a Web Mashup landing page on
the Pega Community.
Microservices
A microservice architecture is a method for developing applications using independent, lightweight services that
work together as a suite. In a microservices architecture, each service participating in the architecture:
l Is independently deployable
l Runs a unique process
l Communicates through a well-defined, lightweight mechanism
l Serves a single business goal
The microservice architectural approach is usually contrasted with the monolithic application architectural
approach. For example, instead of designing a single application with Customer, Product, and Order case types,
you might design separate services that handle operations for each case type. Exposing each case type as a
microservice allows the service to be called from multiple sources, with each service independently managed,
tested, and deployed.
While Pega Platform itself is not a microservice architecture, the Pega Platform complements the microservice
architectural style for the following reasons:
l You can expose any aspect of Pega (including cases) as a consumable service, allowing Pega to participate in
microservice architectures. For more information on Pega API, see the Pega API for the Pega Platform Pega
Community article.
l You can create this service as an application or as an individual service that exists in its own ruleset.
l You can reuse services you create across applications, leveraging the Situational Layer Cake for additional
flexibility in what each service can do, without overloading the service.
Tip: Microservice architecture is a broad topic. Researching benefits and drawbacks of this style before
committing to a microservice architecture is recommended. For further guidance, see the Microservices article
by Martin Fowler.
KNOWLEDGE CHECK
What is the difference between exposing a case type using mashup and exposing a case type using a
microservice?
A mashup allows you to embed the entire case type UI into the organization's web application(s). A
microservice allows you to call a specific operation on a case type (or other Pega objects, such as assignments)
to run a single purpose operation from one or more calling applications.
Designing Pega for the enterprise
Question 1
When developing a new Pega application, start with _________________ and ___________________. (Choose Two)
C # Response | Justification
C 1 one application at a time and base your design on what you know today | Build for change allows you to build solutions iteratively based on what you know today.
2 channel design and work your way into the Pega application | Starting with channels can lead to complexity and inconsistency in user experience. Users can do some functions on channels, but not on the web self-service site or when calling into a call center.
3 integration to systems of record and work your way back into the Pega application | Starting with systems of record or legacy systems tends to produce application stacks that are typically not extensible to multiple channels and result in a rigid portal-to-system-of-record design.
C 4 the Pega application in the middle and work your way out to channels and systems of record | Starting with Pega in the middle allows for a consistent solution to be developed and evolved over time, connecting channels and back-end systems to provide an end-to-end solution.
Question 2
Which two business requirements impact how you deploy your application? (Choose Two)
C # Response | Justification
1 You have requirements to start application development in Pega Cloud, and then move your application to an on-premise production environment. | You can move your application from Pega Cloud to on-premise using the standard Pega installation and deployment model.
C 2 You have requirements to use message driven beans (MDB) to deliver messages from a system of record to the Pega application. | MDB requires an enterprise deployment (.ear).
3 You have requirements for case types to participate in a distributed architecture through microservices. | You can expose case types through a microservice using the standard Pega installation and deployment model.
C 4 You have requirements to support a business process outsourcing (BPO) model for your organization. | The BPO model requires a multitenant deployment.
Question 3
What is the primary reason for designing the security model as early as possible?
C # Response | Justification
1 To begin penetration testing as early as possible | While important, penetration testing occurs later in the project, prior to go-live.
C 2 To ensure you account for and design to the organization's security standards and legal requirements | This is the primary reason for starting the security design early. Security requirements can be complex, and identifying the organization's security standards early to make sure they are woven into your application design is crucial.
3 To start your team on building access roles, privileges, and attribute-based controls | While starting to build early is important, this is not the primary reason for designing the security model as early as possible.
4 To quickly address any warnings you see in the Rule Security Analyzer | You do not have any warnings until you start building your security rules.
Question 4
Why would you use PDC or AES when the organization already has application performance monitoring tools in
place?
C # Response | Justification
1 PDC and AES collect alerts and health activity across Pega nodes. | While AES and PDC collect health activity across nodes, this is not the primary reason for using either PDC or AES.
C 2 PDC and AES show insights into the health of the Pega application. | PDC and AES show insights into the health of Pega applications, but traditional application server and database monitoring tools do not.
3 PDC and AES provide the ability to stop and start agents and listeners, and manage requestors. | AES includes this ability. PDC does not allow you to manage agents, listeners, and requestors.
4 PDC and AES are available both on-premise and in the Pega Cloud. | PDC is available in Pega Cloud and can be used on-premise. AES is only available on-premise.
Defining a release pipeline
Introduction to defining a release pipeline
Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development through testing to deployment on your production system. This lesson explains how
to use Pega Platform tools and common third-party tools to implement a release pipeline.
After this lesson, you should be able to:
l Describe the DevOps release pipeline
l Discuss development process best practices
l Identify continuous integration and delivery tasks
l Articulate the importance of defining a test approach
l Develop deployment strategies for applications
How to define a release management approach
Depending on the application release model, development methodologies, and culture of the organization, you
see differences in the process and time frame in which organizations deliver software to production. Some
organizations take more time moving new features to production because of industry and regulatory compliance.
Some have adopted automated testing and code migration technologies to support a more agile delivery model.
Organizations recognize the financial benefit of releasing application features to end users and customers faster
than their competitors, and many have adopted a DevOps approach to streamline their software delivery life
cycle. DevOps is a collaboration between Development, Quality, and Operations staff to deliver high-quality
software to end users in an automated, agile way. By continuously delivering new application features to end
users, organizations can gain a competitive advantage in the market. Because DevOps represents a significant
change in culture and mindset, not all organizations are ready to immediately embrace DevOps.
These are your tasks as the technical lead on the project:
1. Assess the organization's existing release management processes and tooling. Some organizations may
already work with a fully automated release pipeline. Some organizations may use limited automated testing
or scripts for moving software across environments. Some organizations may perform all release management
tasks manually.
2. Design a release management strategy that achieves the goal of moving application features through testing
and production deployment, according to the organization's release management protocols.
3. Evolve the release management process over time to an automated model, starting with testing processes.
The rate of this evolution depends on the organization's readiness to adopt agile methodologies and rely on
automated testing and software migration tools and shared repositories.
Important: While setting up your team release management practices, identify a Release Manager to oversee
and improve these processes. The Release Manager takes care of creating and locking rulesets and ensures that
incoming branches are merged into the correct version.
Release pipeline
Even if the organization releases software in an automated way, most organizations have some form of a manual
(or semi-automated) release pipeline. The following image illustrates the checkpoints that occur in the release
pipeline.
This pipeline highlights developer activities and customer activities. Developer activities include:
l Unit testing
l Sharing changes with other developers
l Ensuring changes do not conflict with other developers' changes
Once the developer has delivered changes to the customer, customer activities typically include:
l Testing new features
l Making sure existing features still work as expected
l Accepting the software and deploying to production
These activities occur whether or not you are using an automated pipeline. The Standard Release process
described in Migrating application changes explains the tasks of packaging and deploying changes to your target
environments. If you are on Pega Cloud, be aware of certain procedures when promoting changes to production.
For more information, see Change management in Pega Cloud Services.
Moving to an automated pipeline
In an organization that deploys software with heavy change management processes and governance, you
contend with old ways of doing things. Explain the benefits of automating these processes, and explain that
moving to a fully automated delivery model takes time. The first step is to ensure that the manual processes in
place, particularly testing, have proven to be effective. Then, automating bit by bit over time, a fully automated
pipeline emerges.
When discussing DevOps, the terms continuous integration, continuous deployment, and continuous delivery are
frequently used.
Use the following definitions for these terms:
l Continuous integration – Continuously integrating into a shared repository multiple times per day
l Continuous delivery – Always ready to ship
l Continuous deployment – Continuously deploying or shipping (no manual process involved)
Automating and validating testing processes is essential in an automated delivery pipeline. Create and evolve
your automated test suites using Pega Platform capabilities along with industry testing tools. Otherwise, you are
automating promotion of code to higher environments, potentially introducing bugs found by your end users
that are more costly to fix.
For more information on the DevOps pipeline, see the DevOps release pipeline overview.
KNOWLEDGE CHECK
Is the goal of your release management strategy to move the organization to DevOps?
No. The goal of your release management strategy is to implement a repeatable process for deploying high-
quality applications so users of that application can start realizing business value. Over time, as those
processes become repeatable, they are ideal for automation. Continuous integration and continuous delivery
(and eventually, continuous deployment) benefit the organization and often give it a competitive advantage.
DevOps release pipeline
DevOps is a culture of collaboration within an organization between development, quality assurance, and
operations teams that strives to improve and shorten the organization’s software delivery life cycle.
DevOps involves the concept of software delivery pipelines for applications. A pipeline is an automated process
to quickly move applications from development through testing to deployment. At the beginning of the pipeline
are the continuous integration and continuous delivery (CI/CD) phases.
The Continuous Integration portion of the pipeline is dominated by the development group. The Continuous
Delivery portion of the pipeline is dominated by the quality assurance group.
Pega Platform includes tools to support DevOps. For example, it keeps the platform open, provides hooks and
services based on standards, and supports the most popular tools.
The pipeline is managed by some form of orchestration and automation server, such as the open source Jenkins.
Pega's version of an automation server is the Deployment Manager, available on Pega Exchange.
The following diagram illustrates an example of a release pipeline for the Pega Platform.
The automation server plays the role of an orchestrator and manages the actions that happen in continuous
integration and delivery. In this example, Pega’s Deployment Manager is used as the automation server.
A pipeline pushes application archives into, and pulls them from, application repositories. The application
repositories are used to store the application archive for each successful build. There should be both a
development repository and a production repository. JFrog's Artifactory can be used as an application
repository, but an equivalent tool could be used as well. For example, Pega Cloud applications hosted in the
Amazon Web Services (AWS) cloud computing service would use S3 buckets as an application repository.
Notice that at each stage in the pipeline, a continuous loop provides the development team with real-time
feedback on testing results.
In most cases, the system of record is a shared development environment.
The term system of record is used in a distributed development environment. Separate development
environments can push branches related to the same application to a central server known as the system of
record. The central server is configured as a repository of type Pega. Within the system of record, published
branches are merged. Those branches are then removed from the originating development environment. See
Development workflow in the DevOps pipeline on the Pega Community.
Note: Pega Platform is assumed to manage all schema changes.
For more information, review the DevOps release pipeline overview article in the Pega Community.
Continuous integration
With continuous integration, application developers frequently check in their changes to the source environment
and use an automated build process to automatically verify these changes. The Ready to Share and Integrate
Changes steps ensure that all the necessary critical tests are run before integrating and publishing changes to a
development repository.
During continuous integration, maintain these best practices:
l Keep the product rule (Rule-Admin-Product) referenced in an application pipeline up to date.
l Automatically trigger merges and builds using the Deployment Manager. Alternatively, an export can be
initiated using the prpcServiceUtils.bat tool.
l Identify issues early by running PegaUnit tests and critical integration tests before packaging the application.
If any of these tests fail, stop the release pipeline until the issue is fixed.
l Publish the exported application archives into a repository, such as JFrog Artifactory, to maintain a version
history of deployable applications.
Continuous delivery
With continuous delivery, application changes run through rigorous automated regression testing and are
deployed to a staging environment for further testing to ensure that the application is ready to deploy on the
production system.
In the Ready to Accept step, testing runs to ensure that the acceptance criteria are met. The Ready to Deploy step
verifies all the necessary performance, scale, and compatibility tests necessary to ensure the application is ready
for deployment.
The Deploy step validates in a preproduction environment, deploys to production, and runs post-deployment
tests with the potential to roll back as needed.
Follow these best practices to ensure application quality:
l Use Docker or a similar tool to create test environments for user acceptance tests (UAT) and exploratory tests.
l Create a wide variety of regression tests through the user interface and the service layer.
l Check the tests into a separate version control system such as Git.
l If a test fails, roll back the latest import.
l If all the tests pass, annotate the application package to indicate that it is ready to be deployed. Deployment
can be performed manually or automatically.
When you create a project within Agile Studio, a backlog work item is also created. When developing an
application built on a foundation application, the Agile Studio backlog can be prepopulated with a user story for
each foundation application case type. Case types appropriate for the Minimum Lovable Product (MLP) release
can then be selected from that backlog. For more information, see the article Review Case Type Backlog.
Pega's Deployment Manager provides a way to manage CI/CD pipelines, including support for branch-based
development. A Dev-to-QA deployment can be triggered automatically when a single branch successfully merges
into the primary application. For this to occur, the rules checked into that branch must belong only to the
primary application.
When a case type class is created within a case type-specific ruleset, rules generated by Designer Studio's Case
Designer view are also added to that ruleset. This is true even though the Case Designer supports developing
multiple case types within the same application.
When a case-related rule in a case-specific ruleset is saved to a branch, a case-specific branch ruleset is generated
if one does not already exist. Changes made within the Case Designer that affect that rule occur within the
branch ruleset’s version of that rule. When a branch ruleset is created, it is placed at the top of the application's
ruleset stack.
The merge of a single branch is initiated from the Application view’s Branches tab by right-clicking on the
branch name to display a menu.
At the end of the merge process, the branch will be empty when the Keep all source rules and rulesets after
merge option is not selected. The branch can then be used for the next set of tasks, issues, or bugs defined in
Agile Studio.
Application packaging
The initial Application Packaging wizard screen asks which built-on applications in addition to the application
being packaged should be included in the generated product rule. Note that components are also mentioned, a
component being a Rule-Application where pyMode = Component.
Multiple applications referencing the same ruleset is highly discouraged. Immediately after saving an application
rule to a new name, warnings appear in both applications, one warning for each dual-referenced ruleset.
The generated warnings lead to the following conclusions:
l A product rule should contain a single Rule-Application where pyMode = Application.
l Product rules should be defined starting with applications that have the fewest dependencies, ending with
applications that have the greatest number of dependencies.
The deployment strategy is different when the production application being deployed depends on multiple other
built-on applications and components.
Let's consider the example of the FSG Booking application. The FSGEmail application would be packaged first,
followed by the Hotel application, followed by the Booking application.
While it is possible to define a product rule that packages a component only, there is no need to do so. The
component can be packaged using the component rule itself as shown below.
Currently, the Deployment Manager only supports pipelines for Rule-Application instances where pyMode =
Application. When an application is packaged, and that application contains one or more components, those
components should also be packaged. If a built-on application has already packaged a certain component, that
component can be skipped. In the following image, the FSGEmail application’s product rule includes the
EmailEditor component. Product rules above FSGEmail (for example, Hotel and Booking) do not need to include
the EmailEditor component.
The Open-closed principle applied to packaging and deployment
The goal of the Open-closed principle is to eliminate ripple effects. A ripple effect occurs when an object changes
its interface as opposed to defining a new interface and deprecating the existing one. The primary interface for
applications on which other applications are built, such as FSGEmail and Hotel, is the data those applications
require, which is supplied using data propagation. If the EmailEditor component mandates a new property, the
FSGEmail application needs to change its interface to the applications that are built on top of it, such as the Hotel
application. The Hotel application then needs to change its built-on interface to allow the Booking application to
supply the value for the newly mandated property.
By deploying applications separately and in increasing dependency order, the EmailEditor component change
eventually becomes available to the Booking application without breaking that application or the applications
below it.
Note: It is not a best practice to update all three applications (FSGEmail, Hotel, and Booking) using a branch
associated with the Booking application.
Best practices for System of Record team-based development
Pega Platform developers use agile practices to create applications in a shared development environment
leveraging branches to commit changes.
Follow these best practices to optimize the development process:
l Leverage multiple built-on applications to develop smaller component applications. Smaller applications are
easier to develop, test, and maintain.
l Use branches when multiple teams contribute to a single application. Use the Branches explorer to view
quality, guardrail scores, and unit tests for branches.
l Peer review branches before merging. Create reviews and assign peer reviews from the Branches explorer
and use Pulse to collaborate with your teammates.
l Use Pega Platform developer tools, such as rule compare and rule form search, to determine how to best
address any rule conflict.
l Hide incomplete or risky work using toggles to facilitate continuous merging of branches.
l Create PegaUnit test cases to validate application data by comparing expected property values to the actual
values returned by running the rule.
Multiteam development flow
The following diagram shows how multiple development teams interact with the system of record (SOR).
The process begins when Team A requests a branch review against the system of record. A Branch Reviewer first
requests conflict detection, then executes the appropriate PegaUnit tests. If the Branch Reviewer detects
conflicts or if any of the PegaUnit tests fail, the reviewer notifies the developer who requested the branch. The
developer stops the process to fix the issues. If the review detects no conflicts and the PegaUnit tests execute
successfully, the branch merges into the system of record. The ruleset versions associated with the branch are then locked.
Remote Team B can now perform an on-demand rebase of the SOR application’s rules into their system. A rebase
pulls the most recent commits made to the SOR application into Team B's developer system.
The SOR host populates a comma-separated value in a Dynamic System Setting (DSS) named HostedRulesetsList.
Team B defines a repository of type Pega that points to the SOR host's PRRestService. After Team B clicks the Get
latest ruleset versions link within its Application rule and selects the SOR host's Pega repository, a request goes
to the SOR to return information about the versions of every ruleset within the SOR's HostedRulesetsList.
Included in that information is each version's pxRevisionID. Team B's system then compares its ruleset versions
to the versions in the response. Only versions that do not exist in Team B's system, or whose pxRevisionID does
not match the SOR system's pxRevisionID, are displayed. Team B then proceeds with the rebase or cancels it.
Only versionable rules are included when a rebase is performed. Non-versioned rules such as Application,
Library, and Function are NOT included in a rebase operation. For this reason, packaging libraries as a component
is desirable.
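For example, the SOR host's Dynamic System Setting might resemble the following sketch, in which the ruleset
names are illustrative assumptions:

HostedRulesetsList = FSG,FSGInt,FSGEmail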
For more information, review the Development workflow in the DevOps pipeline article on the Pega Community.
The following sequence diagram describes the process, using changes to the FSGEmail application as an
example:
Actors:
l Developer: Member of the Enterprise development team responsible for implementing a new feature in the
FSGEmail application.
l Branch Reviewer: Member of the Enterprise development team responsible for reviewing merge requests from
the Developer, and for merging if the code review is successful.
l Pega SOR: Pega instance configured as the SOR. This instance is the master of the latest stable changes made to
the FSGEmail application.
l Booking App Team: Development team responsible for the Booking and Hotel applications.
Process:
l Enterprise development team implements changes related to a new feature of the FSGEmail application.
l A developer from the enterprise team requests a branch review against the system of record.
l A Branch Reviewer first requests conflict detection, then executes the appropriate PegaUnit tests.
l If the Branch Reviewer detects conflicts or if any of the PegaUnit tests fail, the reviewer notifies the developer
who requested the branch review. The Branch Reviewer stops the process to allow the developer to fix the
issues.
l If the review detects no conflicts and the PegaUnit tests execute successfully, the branch merges into the
system of record. The ruleset versions associated with the branch are then locked.
l The Booking App team can now perform an on-demand rebase of the SOR application’s rules into their
system.
l A rebase pulls the most recent commits made to the SOR application into the Booking App team's system.
Defining a release pipeline quiz
Question 1
Which three of the following are components of a Pega DevOps release pipeline? (Choose Three)
C # Response Justification
C 1 System of record The system of record is the
shared development
environment.
C 2 Automation server The automation server
orchestrates and manages the
actions.
C 3 Application repository The application repository
stores the application archive
for each version.
4 Enterprise service bus An enterprise service bus is not
required.
5 Virtual server A virtual server is not required.
Question 2
When does the developer trigger a rebase?
C # Response Justification
1 Directly after a branch is After a branch has been
published to the system of published, the conflict detection
record is requested.
2 When the unit tests have The branch is merged after the
executed successfully for a unit tests have executed
branch successfully.
3 When there are no merge The unit tests are executed if
conflicts for the branch there are no merge conflicts.
C 4 When a branch has been Rebase pulls the most recent
successfully merged in the commits from the system of
system of record record.
Question 3
In which stage of the DevOps release pipeline is acceptance testing performed?
C # Response Justification
1 Development Unit tests are created during
development.
2 Continuous integration Unit tests are performed in the
continuous integration stage.
C 3 Continuous delivery Acceptance testing is performed
in the continuous delivery stage.
4 Deployment Acceptance testing must occur
prior to deployment.
Question 4
When defining a release management strategy, what is your primary objective?
C # Response Justification
1 Move the organization to a continuous While CI/CD is a goal for an organization, it is not the
integration/continuous deployment (CI/CD) primary goal of your release management strategy.
model.
2 Introduce automated testing into the Automated testing is a goal to meet on the way to a
organization. repeatable release process, but not the primary goal
of the overall strategy.
C 3 Create a repeatable and sustainable This is the primary goal of the release management
process for ensuring business value is strategy. Moving to DevOps facilitates the ability to
delivered safely and as quickly as possible deliver changes to users quickly.
to end users.
4 Implement a process for developers to Defining a process for reviewing changes and
minimize rule conflicts when merging into a merging is part of ensuring quality and consistency
central repository. of rules, but not the primary goal of the release
management strategy.
Assessing and monitoring quality
Introduction to assessing and monitoring quality
Coupled with automated unit tests and test suites, monitoring the quality of the rules is crucial to ensuring
application quality before application features are promoted to higher environments.
After this lesson, you should be able to:
l Develop a test automation strategy
l Establish quality measures and expectations on your team
l Create a custom guardrail warning
l Customize the rule check-in approval process
Test Automation
Having an effective automation test suite for your Pega application ensures that the features and changes you
deliver to your customers are high quality and do not introduce regressions.
At a high level, this is the recommended test automation strategy for testing your Pega applications:
l Create your automation test suite based on industry best practices for test automation.
l Build up your automation test suite by using Pega Platform capabilities and industry test solutions.
l Run the right set of tests at different stages.
l Test early and test often.
Industry best practices for test automation can be graphically shown as a test pyramid. Test types at the bottom
of the pyramid are the least expensive to run, easiest to maintain, require the least amount of time to run, and
represent the greatest number of tests in the test suite. Test types at the top of the pyramid are the most
expensive to run, hardest to maintain, require the most time to run, and represent the smallest number of tests in
the test suite. The higher up the pyramid you go, the higher the overall cost and the lower the benefit.
Unit tests
Use unit tests for most of your testing. Unit testing looks at the smallest units of functionality and is the least
expensive type of testing to run. In an application, the smallest unit is the rule. You can unit test rules as you develop
them by using the PegaUnit test framework. For more information, see the article PegaUnit testing.
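For example, a PegaUnit test case for a decision table might run the rule against a test page and compare
expected property values to the values the rule actually returns. A minimal sketch, in which the rule and
property names are illustrative assumptions:

Given: .CreditScore = 720 and .LoanAmount = 250000 on the test page
Run: the DetermineLoanStatus decision table
Assert: the expected value .LoanStatus = "Approved" equals the actual value returned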
Settings
Establishing standard practices for your development team can prevent these types of issues and allow you to
focus on delivering new features to your users. These practices include:
How can the rule check-in approval process help in monitoring quality?
By ensuring rules are reviewed by senior members of the team before they are checked in.
How to create a custom guardrail warning
Guardrail warnings identify unexpected and possibly unintended situations, practices that are not
recommended, or variances from best practices. You can create additional warnings that are specific to the
organization's environment or development practices. Unlike rule validation errors, warning messages do not
prevent the rule from saving or executing.
To add or modify rule warnings, override the empty activity called @baseclass.CheckForCustomWarnings. This
activity is called as part of the Rule-.StandardValidate activity, which is called by, for example, Save and Save-As,
and is designed to allow you to add custom warnings.
You typically want to place the CheckForCustomWarnings activity in the class of the rule type to which you want to
add the warning. For example, if you want to add a custom guardrail warning to an activity, place
CheckForCustomWarnings in the Rule-Obj-Activity class. Place the CheckForCustomWarnings activity in a ruleset
available to all developers.
Configure the logic for checking if a guardrail warning needs to be added in the CheckForCustomWarnings activity.
Add the warning using the @baseclass.pxAddGuardrailMessage function in the Pega-Desktop ruleset.
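A minimal sketch of a CheckForCustomWarnings override placed in the Rule-Obj-Activity class follows. The
condition, message text, and parameter list are illustrative assumptions; confirm the pxAddGuardrailMessage
parameters in your environment before relying on them.

Step 1: Jump to the end of the activity when the activity being validated does not violate the
organization's standard (for example, when it contains no hand-written Java step)
Step 2: Call @baseclass.pxAddGuardrailMessage(<warning type>, <severity>, <message>) to attach
the custom warning to the rule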
You can control the warnings that appear on a rule form by overriding the standard decision tree Embed-
Warning.ShowWarningOnForm. The decision tree can examine information about a warning, such as its name,
severity, or type, to decide whether to present the warning on the rule form. Return true to show the warning,
and false if you do not want to show it.
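A minimal sketch of such an override, assuming the Embed-Warning page exposes type and severity values (the
property names and warning type are illustrative assumptions):

IF .pyWarningType = "CustomOrgStandard" AND the severity is informational THEN return false (hide)
OTHERWISE return true (show)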
It is not a best practice to suppress rule warnings as a way to improve your application guardrail compliance
score.
Assessing and monitoring quality quiz
Question 1
Which two tools can you use to monitor application health and analyze application logs? (Choose Two)
C # Response Justification
C 1 Autonomic Event Services Use AES to monitor the health of
(AES) your application.
2 System Management Use the SMA to monitor and
Application (SMA) manage the resources and
processes of Pega Platform
applications.
C 3 PegaRULES Log Analyzer (PLA) Use PLA to analyze the contents
of application and exception
logs.
4 BIX Use BIX to export data to an
external system.
Question 2
Which type of testing is UI based?
C # Response Justification
C 1 Scenario testing UI-based functional tests are
used to verify end-to-end test
scenarios.
2 Functional testing Functional testing is API-based.
3 Unit testing Unit tests test rules, the smallest
units of functionality.
4 API testing API-based testing is used to test
the integration of underlying
components.
Question 3
In which of the following situations would you override the ShowWarningOnForm decision tree?
C # Response Justification
1 To display a custom guardrail Custom warnings are displayed
warning by default.
C 2 To filter guardrail warnings Use ShowWarningOnForm to
displayed on the rule form define which warnings to show
on the rule form.
3 To define the criteria for The criteria for adding a custom
adding a custom guardrail guardrail warning are defined in
warning CheckForCustomWarnings.
4 To invoke the Invoke the
pxAddGuardrailMessage pxAddGuardrailMessage function
function in CheckForCustomWarnings.
Question 4
What two things happen if the standard rule check-in approval process is enabled when a rule is checked in?
(Choose Two)
C # Response Justification
C 1 The rule is moved to a The rule is moved to the
candidate ruleset. CheckInCandidates ruleset.
2 The rule is routed to an The assignment is routed to a
approver. workbasket from which
approvers pull assignments.
C 3 A work item is created. A work item of type Work-
CheckInRule is created.
4 A notification is sent to the The approver is not known at
approver. the time of check-in. Approvers
pull items from a workbasket.
Conducting load testing
Introduction to conducting load testing
Load testing is an important part of preparing any application for production deployment. It helps identify
performance issues that may only become apparent when the application is under load. Performance issues
found in load testing are not easy to detect in a normal development environment.
After this lesson, you should be able to:
l Design a load testing strategy
l Leverage load testing best practices
Load testing
Load testing is the process of putting demand on your application and measuring its response. Load testing is
performed to determine a system's behavior under both normal and anticipated peak load conditions. Load
testing helps identify the maximum operating capacity of an application as well as any bottlenecks, and
determines which component is causing degradation.
The term load testing is often used synonymously with concurrency testing, software performance testing,
reliability testing, and volume testing. All of these are types of nonfunctional testing used to validate any given
software's suitability for use.
Load testing allows you to validate that your application meets the performance acceptance criteria, such as
response times, throughput, and maximum user load. The Pega Platform can be treated like any other web
application when performing load testing.
Tip: Performance testing requires skilled and trained practitioners who are able to design, construct, execute,
and review performance tests in accordance with best practices. You can engage Pega's Performance Health
Check service to help design and implement your load testing plan.
KNOWLEDGE CHECK
Pega recommends testing an application with a step approach. What does that mean?
Test first with 50 users, then with 100, 150, and 200, for example. Then generate a simple predictive model to
estimate the expected response time for more users.
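For example, after measuring the average response time at each user level, you might fit a simple linear model.
This is a sketch only; real systems often behave nonlinearly as they approach saturation, so treat any
extrapolation with caution.

t(n) = a + b * n, where n is the number of concurrent users and the coefficients a and b are fit
to the measurements taken at 50, 100, 150, and 200 users

The fitted model can then be used to estimate, for example, t(300) before committing to a larger test.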
Conducting load testing quiz
Question 1
The term load testing is often used synonymously with __________ and __________. (Choose Two)
C # Response Justification
C 1 concurrency testing Concurrency testing is
synonymous with load testing.
C 2 volume testing Volume testing is synonymous
with load testing.
3 functional testing Functional testing tests against
the specifications.
4 unit testing Unit testing tests individual
units of source code.
Question 2
Which three of the following options define part of a valid performance test? (Choose Three)
C # Response Justification
C 1 Switching off virus-checking When virus checking runs, it
impacts any buffered I/O, and
this changes the collected
response times.
C 2 Including think times Think time is important in
duration-based tests. Otherwise,
more work arrives than is
appropriate in the test period.
3 Logging in a virtual user before Logging in the virtual users
each interaction before each action is not how
real users behave.
C 4 Run a first-use assembly (FUA) Testing during a FUA cycle
before the test starts skews the results.
Estimating hardware requirements
Introduction to estimating hardware requirements
Due to the number of users expected to be on the system at one time, the number of cases you expect to
process on any given day, and other factors, your application needs appropriate computing resources. Pega
offers the Hardware Sizing Estimate service to guide you through this process.
By the end of this lesson, you should be able to:
l Identify events that cause a hardware (re)sizing
l Describe the process for submitting a hardware sizing estimate
l Submit a hardware sizing estimate request
Hardware estimation events
At the beginning of your project, someone sized the development, testing, and production environments based
on the expected concurrent number of users, case type volume, and other factors that impact application
performance. You may have been part of the initial application sizing exercise, depending on when you arrived
on the project. When you perform formal load testing, you see how well your application performs according to
key performance indicators (KPIs).
Note: Throughout the development cycle, you can monitor the performance of the Pega application using Pega
Predictive Diagnostic Cloud or Pega Autonomic Event Services. The tool you use depends on whether you are on
premises, using Pega Cloud, or using another cloud environment.
As you add new users and new functionality to the application, the environment infrastructure can become
insufficient for what you are asking the application to handle. For example, your new commercial loan request
application shortened the process from two weeks to two days. Because the commercial loan application is
successful, the personal loan department wants to start using the application. You expect 10 times as many
personal loan requests as commercial loan requests, and 700 new personal loan processors to start using the
application. The effect is similar to adding water to a glass that is not large enough to hold the amount of water
you need it to hold: To hold more water, you need a larger glass.
Why do you initiate a new sizing estimate when you are adding new functionality or users to your
existing application?
Your current application infrastructure may be insufficiently sized to handle the load of new users,
applications, or case types you plan to introduce. To avoid a degradation in application performance as you
evolve your application, estimate new infrastructure requirements and implement whatever is necessary to
support your application requirements.
How to submit a hardware sizing estimate request
You determined that new changes or enhancements to your application could impact the performance or
stability of your production application. The Pega Hardware Sizing Estimate team can help you estimate the
infrastructure needs for your application. The following steps illustrate how to submit a request to the Hardware
Sizing Estimate team.
Estimating hardware requirements quiz
Question 1
Which two of the following can occur if you add a significant number of new users to your application without
resizing the infrastructure? (Choose Two)
C # Response Justification
C 1 The connection pool for your The sizing recommendation is likely to increase the number of
application may not be sufficient to connections available in the connection pool for the
handle the number of concurrent application.
users and background processes.
2 There is no impact. The application While cloud environments support elasticity, your application
resizes automatically to handle the may require more specific configuration changes based on
new load. application usage.
C 3 Overall application performance When adding new users to the application, each user's
can degrade. clipboard takes up a certain amount of memory. Depending on
how many active users are in the application and how often
garbage collection occurs, that memory may not be available
for other application processes.
4 Application security configuration is While this may or may not be true depending on your security
harder to maintain. configuration, security configuration does not have a bearing
on your hardware infrastructure.
Question 2
Which application changes cause you to request a hardware sizing estimate? (Choose Two)
C # Response Justification
C 1 Adding a new division of users to A new division of users brings more traffic to the application.
your application. Depending on the number of users, this addition may impact the
performance of the application. Request a sizing estimate in this
situation.
C 2 Introducing a new case type into Exposing a new case type on an organization's website results in
your application exposed by Pega many instances of this case type. Depending on the complexity of
Web Mashup in the organization's the case type, this addition could have database impact. Request
public facing website. a sizing estimate in this situation.
3 Modifying a process flow to A split for each occurs within the user's requestor session.
include a split for each shape. Assuming no additional case type volume or new requestor
sessions are created as part of the split for each processing,
existing infrastructure is sufficient to support this change.
4 Replacing an existing web SOAP Assuming the response data is of the same type and size, existing
call with a REST call. infrastructure is sufficient to support this change.
Question 3
How do you determine if the existing hardware is sufficient to handle a significant increase in the number of
cases created in the application?
C # Response Justification
C 1 Engage the Pega Hardware Sizing The Pega Hardware Sizing Estimate team possesses years of
Estimate team to produce a experience with sizing multiple Pega Platform and Pega
recommended sizing. application implementations and developed a sizing
algorithm specific to Pega's infrastructure needs.
2 Research hardware sizing estimation While these tools exist, they do not take into account
tools and choose the tool appropriate nuances of Pega application usage.
for your application server and
database.
3 Create a request with Pega Support to Pega Support likely directs you to the Hardware Sizing
ask for assistance in recommending Estimate team.
infrastructure changes.
4 Ask the organization's infrastructure While the organization's infrastructure team can give
team to right size the application to estimates based on existing application usage, the Pega
Hardware Sizing Estimate team can give you specific
handle modifications to the guidance on how the new volume affects the Pega
application. application.
Handling flow changes for cases in flight
Introduction to handling flow changes for cases in flight
Business applications change all the time. These changes often impact active cases in a production environment.
Without proper planning, these active cases could fall through the cracks due to a deleted or modified step or
stage.
By properly planning your strategy for upgrading production flows, active cases will be properly accounted for
and the change will be seamlessly integrated. This lesson presents several approaches to safely updating flow rules
without impacting existing cases that are already in production.
After this lesson, you should be able to:
l Identify updates that might create problem flows
l Choose the best approach for updating flows in production
l Use problem flows to resolve flow issues
l Use the Problem Flow landing page
Flow changes for cases in flight
Business processes frequently change. These changes can impact cases that are being worked on. Without
proper planning, these in-flight cases could become stuck or canceled due to a deleted or modified step or stage.
For example, assume you have a flow where a case goes from a Review Loan Request step, to a Confirm
Request step, and then to a Fulfill Request step. If you remove the Confirm Request step during a process
upgrade, what happens to open cases in that step?
By properly planning your strategy for upgrading production flows, in-flight cases will be properly accounted for
and the upgrade will be seamlessly integrated.
Approach 1: Maintain both application versions
Advantage: The original and newer versions of the application remain intact, since no attempt is made to
backport enhancements added to the newer version.
Drawback: Desirable fixes and improvements incorporated into the newer application version are not available to
the older version.
Care must be taken not to process a case created in the new application version when using the older
application version and vice versa. Both cases and assignments possess a pxApplicationVersion property. Security
rules, such as Access Deny, can be implemented to prevent access to cases and assignments that do not
correspond to the currently used application version.
The user's worklist can either be modified to display only the cases that correspond to the currently used
application version, or the application version can simply be displayed as a separate worklist column. Likewise,
Get Next Work should be modified to return only workbasket assignments that correspond to the currently used
application version.
Approach 2: Process existing assignments in parallel with the new flow
This approach preserves certain shapes, such as Assignment, Wait, Subprocess, Split-For-Each, and so on, within
the flow despite those shapes no longer being used by newly created cases. The newer version of the flow is
reconfigured such that new cases never reach the previously used shapes; yet existing assignments continue to
follow their original path.
In this example, you have redesigned a process so that new cases no longer utilize the Review and Correct
assignments. You will replace them with Create and Review Purchase Request assignments. Because you only
need to remove two assignments, you decide that running the two flow variations in parallel is the best
approach.
You make the updates in the new flow version in two steps.
First, drag the Review and Correct assignments to one side of the diagram. Remove the connector from the Start
shape to the Review assignment. Keep the Confirm Request connector intact. This ensures that in-flight
assignments can continue to be processed.
Second, insert the Create and Review Purchase Request assignments at the beginning of the flow. Connect the
Review Purchase Request to the Create Purchase Order Smart Shape using the Confirm Request flow action.
Later, you can run a report that checks whether the old assignments are still in process. If not, you can remove
the outdated shapes in the next version of the flow.
Advantage: All cases use the same rule names across multiple versions.
Drawbacks: This approach may not be feasible given configuration changes. In addition, it may result in cluttered
Process Modeler diagrams.
Approach 3: Circumstancing
This approach involves circumstancing as many rules as needed to differentiate the current state of a flow from
its desired future state. One type of circumstancing that would satisfy this approach is called as-of-date
circumstancing. With as-of-date circumstancing, a DateTime property within the case, for example
pxCreateDateTime, is identified and used as the Start Date within a date range; the End Date within the date
range is left blank. An application-specific DateTime property, such as .CustomerApprovalDate, could be used as well.
Advantage: Simple to implement at first using the App Explorer. No need to switch applications.
Drawbacks: The drawbacks to the use of circumstancing were listed in the Designing for Specialization lesson.
The primary drawback is that the Case Designer is affected whenever circumstancing is used, the exception being
its support for specialized Case Type rules. Because Case Type rules cannot be specialized by a DateTime
property, as-of-date circumstancing of Case Type rules is not allowed. This presents a problem because the
changes should be carried forward indefinitely.
Because the Case Designer’s scope is requestor-level, the Case Designer only “sees” the base versions of
circumstanced rules such as Flows. Whenever a circumstanced rule is opened from another rule, what is shown
is the base version. To locate the correct circumstanced variation of the base rule, the “Action > View siblings”
menu option must be used. The greater the number of circumstanced rules, the harder it becomes to “picture”
how the collection of similarly circumstanced rules and non-circumstanced rules interacts.
Approach 4: Move existing assignments
In this approach, you set a ticket that is attached within the same flow, change to a new stage, or restart the
existing stage. In-flight assignments advance to a different assignment where they resume processing within the
updated version.
You run a bulk processing job that locates every outdated assignment in the system affected by the update. For
each affected assignment, bulk processing should call Assign-.OpenAndLockWork followed by Work-.SetTicket,
pxChangeStage, or pxRestartStage. For example, you can execute a Utility shape that restarts a stage
(pxRestartStage).
The following example shows a bulk assignment activity using SetTicket:
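In outline, such an activity performs the following steps. The Obj-Browse filter and the ticket name are
illustrative assumptions:

Step 1: Obj-Browse to locate the outdated assignments, for example Assign-Worklist instances
where pxTaskName = "ConfirmRequest"
Step 2: For each result, call Assign-.OpenAndLockWork to open and lock the associated work item
Step 3: Call Work-.SetTicket with the ticket name, for example "RequestUpdated", to advance
the case along the updated flow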
After you have configured the activity, you deploy the updated flow and run the bulk assignment activity.
Important: The system must be offline when you run the activity.
Example
In this example, a small insurance underwriting branch office processes about 50 assignments a day; most are
resolved within two days. In addition, there is no overnight processing. You run a bulk process because the
number of unresolved assignments is relatively small and the necessary locks can be acquired during the
evening. Note that it is not necessary to use the Commit method.
Advantage: A batch process activity directs assignments by performing the logic outside the flow. You do not
need to update the flow by adding a Utility shape to the existing flow. The activity enables you to keep the
processing logic out of the flow, which makes upgrades easier. The activity also facilitates flow configuration and
maintenance in Pega Express.
Drawback: It might be impractical if the number of assignments is large, or if there is no time period when the
background processing is guaranteed to acquire the necessary locks.
Approach 5: Direct Inheritance and Dynamic Class Referencing (DCR)
This approach is a hybrid solution that involves circumstancing for shared work pool-level rules and direct
inheritance for case-specific rules. For case-specific rules, differentiation of a flow’s current state from its desired
future state is accomplished using direct inheritance and DCR. The example below illustrates this approach.
In combination with the differentiation value being set, the pxObjClass of the case would be changed. In the
example above, the value of the differentiating property, pxCreateDateTime, is established immediately upon
case creation. The pyDefault Data Transform for the case can determine the current year using the following logic:
Param.CurrentYear = @toInt(@String.substring(@DateTime.CurrentDateTime(),0,4))
Then subsequently change the case’s pxObjClass using the following logic:
.pxObjClass = .pxObjClass + "-Y" + Param.CurrentYear
Although this approach requires a class to be created for every year, it does not behave the same as as-of-date
circumstancing.
One way to avoid creating a class for each year is to skip one or more years when pointing to the Direct Parent
class if no flow changes have been made within those years that would affect in-flight cases. Instead of always
using Param.CurrentYear, the most recent year when class specialization occurred could be determined using a
Data Page using the following logic:
Param.ContractYear = D_ContractYearToUse[ObjClass:Primary.pxObjClass, StartYear:Param.CurrentYear].ContractYear
.pxObjClass = .pxObjClass + "-Y" + Param.ContractYear
The logic within D_ContractYearToUse could be:
.ContractYear = Param.StartYear
For each page in D_StartsWithClassNameDesc[ObjClass:Param.ObjClass].pxResults
    Param.Year = <the last 4 characters in> pyClassName
    IF (Param.Year only contains digits AND Param.Year <= Param.StartYear)
    THEN Primary.ContractYear = Param.Year
         Exit Transform
Advantage: Classes defined within an application's ruleset stack are requestor-level information, so they are
compatible with the Case Designer's ability to display a case's current state. It does so in conjunction with how
the application rule's Cases & data tab is configured.
Name Work ID prefix Implementation class
BookEvent EVENT- FSG-Booking-Work-BookEvent-Y2022
WeatherPrep WPREP- FSG-Booking-Work-WeatherPrep-Y2021
RoomsRequest ROOMS- FSG-Booking-Work-RoomsRequest-Y2020
Note that application rules configured as shown above would only need to be used during design and
development. In Production, DCR would be used to establish the pxObjClass that each case type should use.
Drawbacks: A number of arguments can be made against using this approach. Each argument is addressed
below.
Argument: Creating classes takes extra time.
Rebuttal: It takes very little time to create a new class and configure it to directly extend another class. If you
only need to create a class once a year, the amount of time to perform this task is negligible in comparison to
other development tasks that would take place within that year.
Argument: The inheritance path can become too long and/or impact rule resolution performance.
Rebuttal: There is no restriction on the depth of an inheritance hierarchy. Other limits would be reached long
before the inheritance hierarchy would become an issue.
Argument: A lengthy inheritance path would impact rule resolution performance.
Rebuttal: Rule resolution begins with a database query. Circumstanced rules are evaluated at the end of the
10-step rule resolution algorithm; class names are examined at the beginning. An extra row in the database is an
extra row, whether due to a different pxObjClass value or different pyCircumstanceProp and pyCircumstanceVal
column values. Rule resolution is only performed so many times before the resolved rule is cached. The rule
cache is based on usage patterns. Over time, the rule cache will evolve whether as-of-date circumstancing or
date-based direct inheritance is used.
Argument: Extra classes would complicate future database table storage decisions.
Rebuttal: Class groups, work pools, and Data-Admin-DB-Table records determine where data is persisted.
Argument: This approach does not scale.
Rebuttal: Increasing the number of unique pxObjClass values in the same database table does not affect system
architecture.
How to use problem flows to resolve flow issues
When an operator completes an assignment and a problem arises with the flow, the primary flow execution is
paused and a standard problem flow starts. A standard problem flow enables an administrator to determine
how to resolve the flow.
Pega Platform provides two standard problem flows: FlowProblems for general process configuration issues, and
pzStageProblems for stage configuration issues. The problem flow administrator identifies and manages problem
flows on the Flow Errors landing page.
Note: As a best practice, override the default workbasket or problem operator settings in the
getProblemFlowOperator routing activity in your application to route the problem to the appropriate destination.
Customizing FlowProblems
You can copy the FlowProblems flow to your application to support your requirements. Do not change the name
key. In this example, you add a Send Email Smart Shape to each of the CancelAssignment actions so that the
manager is notified when the cancellations occur.
When a user attempts to advance a case formerly situated in the removed Booking stage, the pzStageProblems
flow would be initiated. Within this flow, the operator can use the Actions menu to select Change stage.
The operator can then manually move the case to another stage, the Request stage being the most appropriate
choice.
For backward compatibility, consider temporarily keeping an outdated stage and its steps as they are. For newly
created cases, use a Skip stage when condition in the Stage Configuration dialog to bypass the outdated stage.
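The Skip stage when condition might resemble the following sketch, in which the when rule name and cutoff
date are illustrative assumptions:

SkipOutdatedBookingStage: .pxCreateDateTime >= "20220101T000000.000 GMT"

Cases created on or after the date the new process was deployed bypass the outdated stage, while older cases
can still pass through it.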
Handling flow changes for cases in flight quiz
Question # 1
Which shape is least likely to cause a problem when removed from a flow already in production?
C # Response Justification
1 Split-For-Each Removing a Subprocess or a
Split-For-Each shape may cause
problems since their shape IDs
are referenced in active
subflows.
2 Assignment Removing an Assignment shape
for which there are open
assignments results in
orphaned assignments.
3 Subflow Removing a Subprocess or a
Split-For-Each shape may cause
problems since their shape IDs
are referenced in active
subflows.
C 4 Decision A Decision shape does not
create assignments, so no open
assignments reference it.
Question # 2
Which assignment property is not critical in terms of flow processing?
C # Response Justification
1 pyFlowType — The name of This property is available on an
the flow rule assignment.
2 pyInterestPageClass — The This property is available on an
class of the flow rule assignment.
C 3 pxTaskLabel — The label given This property is not critical to
to the assignment task flow processing.
4 pxTaskName — The shape ID This property is available on an
of the assignment shape assignment.
Question # 3
Which one of the following is usually the best approach to safely change flows in production?
C # Response Justification
1 Create a new circumstanced Circumstancing the flow is not a
flow and leave the old flow valid approach since more rules
intact to support outdated than just the flow rule would
assignments likely need to be circumstanced
as well.
C 2 The best approach to use The best approach to use
depends on the nature of the depends upon a number of factors,
application such as the complexity of the
changes made to the
application.
3 Revert the user’s application Having to switch applications to
version when processing process older cases may not be
older assignments acceptable to certain users.
4 Use tickets, change stage, or It might be impractical if the
restart the current stage to number of assignments is large,
process old assignments to or if there is no moment when
be removed before applying the background processing is
changes guaranteed to acquire the
necessary locks.
Extending an application
Introduction to extending an application
Case specialization describes how an existing application can be transformed into a framework / model /
template / blueprint application without having to rename the classes of existing case type instances. As an LSA,
you are sometimes asked to take an existing application and evolve it so as to use it as a foundation for more
specialized implementations.
After this lesson, you should be able to:
l Describe how an existing application can be transformed into a framework / model / template / blueprint
application
l Extend an application to a new user population
l Split an existing user population
How to extend existing applications
Extending a production application can occur for various reasons, planned or unplanned. Some of these reasons
include:
l The enterprise has planned to sequentially roll out extensions to a foundation application due to budgetary
and development resource limitations
l The enterprise has discovered the need to:
Extend the production application to a new set of users
Split the production application to a new set of users
In either situation, the resulting user populations access their own application derived from the original
production application.
The previous scenarios fall into two major approaches:
l Extending the existing production application to support a new user population
l Splitting the existing production application to support subsets of the existing user population
Within each of the two major approaches are two deployment approaches: either to a new database or to the
same database.
Deployment approaches
Whether extending or dividing an application, you can host the user populations on either a new database or the
original database.
Run the New Application wizard to achieve class specialization. In the Name your application screen, click the
Advanced Configuration link. Under Organization settings, enter at least one new value in the Organization,
Division, and Unit Name fields.
Suppose the new user population is associated with a new division, and there is a requirement to prevent an
operator in the new division from accessing an assignment created by the original division. The easiest solution
is to implement a Read Work- Access Policy that references the following Work- Access Policy Condition.
pxOwnerDivision = Application.pyOwningDivision
AND pxOwnerOrganization = Application.pyOwningOrganization
Alternatively, you can also define an access deny rule.
Note: Using De Morgan’s law, define the Access When rule invoked by the access deny rule as the negation of
how the corresponding access-role Access When rule would be defined.
When...
pxOwnerDivision != Application.pyOwningDivision
OR pxOwnerOrganization != Application.pyOwningOrganization
Creating subsets of the existing user population within the original database
The most complex situation is when immediate user population separation is mandated within the same
database. To support this requirement, a subset of the existing cases must be refactored to different class
names.
Manipulating the case data model for an entire case hierarchy while a case is active is risky and complex. For this
reason, seek advice and assistance before attempting a user population split for the same application within the
same database.
Extending an application quiz
Question # 1
Which two statements are true when deploying a new user population to a completely new database? (Choose Two)
C # Response Justification
1 Multitenancy can be avoided Adding a new user population
to an existing database, yet
wanting to keep that user
population’s data separated
from the existing user
population’s data, can be
achieved using a multi-tenancy
approach.
C 2 No need exists to use class With complete database
specialization separation, there is no need to
use class specialization. Each
implementation application can
use ruleset specialization to
differentiate the rules specific
to its user population.
3 Assignments and Work pool data can be saved to
attachments can be saved to different tables according to
different tables their associated Data-Admin-
DB-ClassGroup records, but
assignments and attachments
cannot.
C 4 The same table names can be If the user population is
used in both databases completely new, then a
completely new database can
be used. Doing so would
achieve total isolation even if
the same table names are used
in both databases.
Question # 2
Which two statements are true when subdividing an existing user population into two distinct applications?
(Choose Two)
C # Response Justification
C 1 User/account data can be If a new database is created to
gradually migrated support the transition and
immediate migration is not
required, user/account data can
be gradually migrated from the
existing database to the new
database until the user
population separation is
complete.
2 Reporting and security is When active cases exist
simplified throughout a user population
and there is a mandate to
subdivide that user population
into two distinct applications,
reporting and security become
problematic.
C 3 Resolved cases can be Resolved cases for a given
duplicated during migration user/account can be duplicated,
but not purged from the original
system, until the migration
process is complete.
4 Cloning the existing database What is not desirable is to clone
is desirable the existing database since it
would be difficult to control
duplicate access, for example by
agents.
Question # 3
Which two statements are true regarding rule names? (Choose Two)
C # Response Justification
1 All foundation classes must The need to specialize an
contain FW application may not be
discovered until it is in
production. At that point the
application would become a
foundation. Refactoring that
application's case type names to
contain FW would be wasteful.
C 2 Refactoring class names is a Refactoring class names is a
time-consuming process time-consuming process.
Businesses prefer that changes
be implemented in the most
expedient way, which would also
be the most cost-effective way.
C 3 Pega Express generates Database table names are auto-
names for rules generated and Pega Express
generates names for rules such
as When rules, Flow names, and
Sections.
4 Case type class names must Claiming that case type class
exactly reflect their user names must exactly reflect their
population otherwise user population, otherwise
developer productivity may developer productivity would
suffer suffer, is a weak argument.
Leveraging AI and robotic automation
Introduction to leveraging AI and robotic automation
Artificial intelligence (AI) and robotic automation technology changes the way people work. AI and robotics both
automate work, each in a different way. Choosing the most appropriate automation technology depends on the
result you want to achieve.
After this lesson, you should be able to:
l Compare AI and robotic automation technologies
l Identify opportunities to leverage AI in your application
l Identify opportunities to leverage robotic automation in your application
Artificial intelligence and robotic automation comparison
Artificial Intelligence (AI) and robotic automation are similar in that they perform a task or tasks instead of a
human being. Unlike human beings, AI and robotic automation solutions are not impacted by geography and are
not prone to error. However, the application of each technology differs based on what you are trying to achieve.
You could also design a solution that uses AI and robotic automation capabilities in tandem; they are not
mutually exclusive technologies. Grasping the benefits and differences between AI and robotic automation
allows you to identify opportunities to use these technologies and to design an application that can radically
change the way the organization performs work. Pega offers the following technology to meet these needs:
l AI capabilities, in the form of the Intelligent Virtual Assistant, the Customer Decision Hub, and the Decision
Management features
l Robotic automation capabilities, including Robotic Desktop Automation (RDA), Robotic Process Automation
(RPA), and Workforce Intelligence (WFI)
The following table summarizes the key differences between Robotic Desktop Automation (RDA), Robotic Process
Automation (RPA), Workforce Intelligence (WFI) and AI capabilities.
Artificial intelligence
Artificial intelligence (AI) can be defined as anything that makes the system seem smart. An artificial
intelligence solution learns from the data available to it. This data can be structured or unstructured (such as big
data), and can take in images, sound, or text inputs. The value of AI increases as the solution gains age and
experience, not unlike a human being.
For an AI solution to be self-learning, the AI solution uses experiences, not programmed inputs, to form its basis
of knowledge. The Adaptive Decision Manager (ADM) service is an example of adaptive learning
technology. For example, when training an Artificial Intelligence solution to recognize a cat, you do not tell the AI
to look for ears, whiskers, a tail, and fur. Instead, you show the AI pictures of cats. When the AI responds with a
rabbit, you coach the AI to distinguish a cat from a rabbit. Over time, the AI becomes better at identifying a cat.
This technology can be a powerful ally in building a customer's profile, preferences, and attributes.
An AI solution can also predict the next action a customer will take. This ability allows an organization to serve
the customer in a far more effective way. The organization can know why a customer is contacting them before
the customer even calls. For example, an AI solution can guide a customer service representative to offer
products or services that the customer actually wants, based on previous behavior of the customer. Predictive
Analytics provides this capability.
The Customer Decision Hub combines both predictive and adaptive analytics to provide a seamless customer
experience and only shows offers relevant to that customer. The Customer Decision Hub is the centerpiece of the
Pega Sales Automation, Pega Customer Service, and Pega Marketing applications.
AI uses natural language processing (NLP) to detect patterns in text or speech to determine the intent or
sentiment of the question or statement. For example, a bank customer uses the Facebook messenger channel to
check his account balance. In the background, the bank's software analyzes the intent of the question in the
message, performs the balance inquiry and returns the response to the customer. The Intelligent Virtual
Assistant is an example of NLP in action.
Note: AI is a powerful tool. AI can also carry risk, if you are not cautious. For more information on this topic, see
the AI Customer Engagement: Balancing Risk and Reward presentation on Pega.com.
Robotic automation
Robotic automation is technology that allows software to replace human activities that are rule based, manual,
and repetitive. Pega robotic automation applies this technology with:
l Robotic desktop automation (RDA)
l Robotic process automation (RPA)
l Workforce intelligence (WFI)
Robotic desktop automation (RDA) automates routine tasks to simplify the employee experience. RDA mimics
the actions of a user interacting with another piece of software on the desktop. For example, a customer service
representative (CSR) logs into five separate desktop applications to handle customer inquiries throughout the
day. You can use RDA to log that CSR into these applications automatically. This allows the CSR to focus on better
serving the customer.
Usage of RDA is also known as user-assisted robotics.
Robotic process automation (RPA) fully automates routine, structured manual processes; no user involvement
is required. With RPA, you assign a software robot to perform time-consuming, routine tasks with no
interaction with a user. These software robots perform work on one or more virtual servers. For example, a bank
requires several pieces of documentation about a new customer before the bank can onboard that customer.
Gathering this information can take one person an entire day. You can use RPA to gather these documents from
one or more source systems; the software robot can perform the same process in minutes.
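To make the unattended pattern concrete, here is a minimal sketch of a robot gathering onboarding documents. It is illustrative only; the source systems, document types, and customer ID are invented for the example.

    REQUIRED_DOCS = {
        # document type -> source system (all names invented)
        "proof_of_identity": "identity_system",
        "proof_of_address":  "address_registry",
        "credit_report":     "credit_bureau",
    }

    def fetch_document(source_system, customer_id, doc_type):
        # A real robot would drive the source system's UI or API; simulated here
        return f"{doc_type} for {customer_id} from {source_system}"

    def onboard(customer_id):
        # Minutes of robot time instead of a full day of manual gathering
        return [fetch_document(source, customer_id, doc)
                for doc, source in REQUIRED_DOCS.items()]

    for line in onboard("CUST-0017"):
        print(line)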
Workforce intelligence (WFI) complements RDA and RPA: it analyzes how employees interact with applications
on their desktops to reveal inefficiencies and identify tasks that are good candidates for automation.
What characteristics distinguish artificial intelligence from robotic automation? What are some
examples of robotic automation and artificial intelligence?
Robotic automation mimics user behavior through software. Software robots perform routine, sometimes
onerous, tasks instead of users. Robotic automation can solve problems where a web services or data
warehousing solution was previously required. An example of robotic automation is Robotic Desktop
Automation, which can invoke automations to gather data from legacy systems from the user's desktop. Artificial
intelligence solutions learn based on available inputs. AI solutions also need to be trained to refine their ability
to predict the future behavior of those interacting with them. The Customer Decision Hub and the Intelligent
Virtual Assistant are two examples of Pega's implementations of AI.
Leveraging AI and robotic automation quiz
Question 1
You want to reduce call-center volume and increase customer service representative effectiveness by offloading
routine tasks to a chatbot through a conversational UI. Which tool do you choose?
1. Robotic desktop automation (RDA). Justification: Robotic desktop automation mimics user interaction with desktop applications.
2. Robotic process automation (RPA). Justification: Robotic process automation performs routine tasks on behalf of the user in the background.
3. (correct) Intelligent Virtual Assistant. Justification: The Intelligent Virtual Assistant lets you introduce a conversational UI into your organization, allowing you to deliver contextual and personalized service.
4. Natural language processing. Justification: Natural language processing is part of an overall AI solution.
Question 2
What two key aspects distinguish an AI solution from a robotic automation solution? (Choose Two)
1. (correct) An AI solution needs to be trained or coached. Justification: An AI solution needs to be trained or coached to arrive at a correct conclusion. Over time, the AI becomes more effective.
2. An AI solution must be carefully programmed. Justification: One of the benefits of AI is that it does not require programming. AI learns on its own.
3. (correct) An AI solution takes in unstructured input to help it learn. Justification: One of the benefits of AI is that it can take in multiple forms of data, structured or unstructured.
4. An AI solution replaces manual, repetitive tasks. Justification: While an AI solution can replace manual, repetitive tasks, this work is better suited for an RDA or RPA solution.
Question 3
Tasks that are best suited for a robotic automation solution are _____________ and _____________. (Choose Two)
1. (correct) highly manual. Justification: Highly manual tasks are well suited for a robotic automation solution.
2. (correct) routine. Justification: Routine tasks are well suited for a robotic automation solution.
3. require analysis. Justification: Tasks that require analysis are suited for human beings, or possibly AI.
4. require judgment. Justification: Tasks that require judgment are suited for human beings, or possibly AI.
COURSE SUMMARY
Next steps
To further your learning and share in discussions pertinent to lead system architects, including the latest
information on certification requirements, see the Lead System Architects space on the Pega Community.