A ruleset identifies a major subset of rules; every rule belongs to a ruleset. Rulesets play a major role in access control, in grouping and managing interrelated rules, and in moving applications between systems. The term ruleset sometimes informally refers to the contents of the ruleset.
Describe a Ruleset
A ruleset is a container or an organizational construct that is used to identify, store, and manage a set of
rules that define an application or a major part of an application. Every instance of every rule type belongs
to a ruleset. A ruleset’s primary function is to group rules together for deployment.
Operators receive access to an application through an access group, which references the application. An application is composed of a list of rulesets. The application rulesets are typically created by the New Application Wizard (also known as Application Express).
The purchasing application displayed here has an integration ruleset (SupplierIntegration) in addition to the
implementation and organizational rulesets.
Rulesets have versions. The version has a three-part key, for example 01-02-03, where:
01 is the major version number
02 is the minor version number
03 is the patch number
References to version numbers sometimes omit the patch number. A reference to version 03-07, for example, means that all the versions between 03-07-01 and 03-07-99 are included.
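To make the version key concrete, here is a minimal Python sketch of how a three-part key and a major-minor reference relate; this is purely illustrative, not anything Pega exposes:

```python
# Illustration only: interpreting a three-part ruleset version key.

def parse_version(key):
    """'01-02-03' -> (major, minor, patch) as integers."""
    major, minor, patch = (int(part) for part in key.split("-"))
    return major, minor, patch

def matches_reference(version, reference):
    """True when a full version falls under a major-minor reference;
    a reference like '03-07' covers 03-07-01 through 03-07-99."""
    major, minor, _patch = parse_version(version)
    ref_major, ref_minor = (int(part) for part in reference.split("-"))
    return (major, minor) == (ref_major, ref_minor)

print(matches_reference("03-07-42", "03-07"))  # True
print(matches_reference("03-08-01", "03-07"))  # False
```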
A few rule types belong to a ruleset but not a ruleset version:
Application
Class
Ruleset and Ruleset Version
Access of Role to Object
Access Deny
Library
Use Checkout and Private Edit Features
Applications are typically developed by a team. The optional rule checkout facility supports an application
development team by allowing multiple members of the team to work on the same application in a
coordinated way.
Enable Checkout
Select Use checkout? on the Security tab of the ruleset to enable the checkout facility.
Operators must also have Allow rule check out selected on the Security tab of their operator record to be able to update rules in rulesets that require checkout.
Checkout
The checkout button appears when the ruleset the rule belongs to is unlocked, uses checkout, and the operator is allowed to check out rules.
If a developer checks out a rule, no one else can check out that rule and make changes to it until the developer checks it in.
The check out button does not display if the rule is checked out by someone else. We can click the lock icon to see who has checked out the rule.
If a rule is not available for checkout, because it is checked out by someone else or because it is in a locked ruleset version, the private edit button appears instead of the check out button.
Private edit is a special case of the standard check out. It allows a developer to prototype or test changes
to a rule that is not available for standard check out.
To be able to do a private edit a developer needs to have the pxAllowPrivateCheckout privilege. The
standard access role PegaRULES:SysAdm4 provides this privilege.
When we check out a rule, we are making a private copy of the rule in our personal ruleset. The same is true for private edits.
We can view our checkouts and private edits in the Private Explorer or by using the checkmark icon in the
header.
Use the Development landing page (DesignerStudio > Application > Development > Checked Out Rules) to
view all rules checked out in the application. Private edits are not shown in this list.
Check-in
Checking in a checked-out rule replaces the original base rule. A comment describing the changes to the rule is required. The check-in comments can be viewed on the rule’s History tab.
Private edits can also be checked in. We need to select an unlocked ruleset version in which to check in the private edit.
If we want to delete a check out or private edit we can use the discard button. The discard button deletes
the rule in our private ruleset.
We can use the bulk action feature found in the Private Explorer menu or under the checkmark icon in the header to check in, open, or delete several checked-out rules at the same time.
Note that private edits cannot be checked in in bulk.
Explain Ruleset Prerequisites
A ruleset always requires at least one ruleset as a prerequisite. For example, an application ruleset might
have a division or organizational ruleset as a prerequisite. Here in our case the Purchasing application
ruleset depends on the ADV organizational ruleset.
Enforcing prerequisites during development helps ensure that rule references across rulesets are correct
during rule resolution at runtime. For example, assume that during development we create and test a rule
in ruleset Alpha:01-01-01 that depends upon a rule in ruleset Beta:01-01-01. If, at runtime in a production system, a user executes the rule in Alpha:01-01-01 that references the rule in ruleset Beta:01-01-01, but ruleset Beta:01-01-01 is not present on the production system (or is not in the user’s ruleset list), the Alpha rule could fail or possibly run and produce an incorrect result.
Rule validation is performed against the prerequisites. When we save a rule, the system assembles a
complete required ruleset version list from:
All ruleset versions on the prerequisite list
All ruleset versions listed as prerequisites for those ruleset versions
All lower-numbered ruleset versions of this ruleset and the rulesets included in the same major version
Therefore, if you save a rule into Delta:01-01-01 which depends on Alpha:02-01-01, and Alpha:02-01-01 depends on Beta:02-01-01, only list Alpha:02-01-01 as a required ruleset for Delta:01-01-01. There is no need to list Beta:02-01-01 because it is already listed in Alpha:02-01-01.
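To illustrate how Beta:02-01-01 is picked up transitively, here is a minimal Python sketch; the prerequisite map and helper are hypothetical, and the real assembly also folds in the lower-numbered versions described above:

```python
# Hypothetical prerequisite map; not Pega internals.
prerequisites = {
    "Delta:01-01-01": ["Alpha:02-01-01"],
    "Alpha:02-01-01": ["Beta:02-01-01"],
    "Beta:02-01-01": [],
}

def required_versions(ruleset_version):
    """Transitively collect every prerequisite ruleset version."""
    required = set()
    pending = [ruleset_version]
    while pending:
        for prereq in prerequisites[pending.pop()]:
            if prereq not in required:
                required.add(prereq)
                pending.append(prereq)
    return required

# Delta lists only Alpha, yet Beta is pulled in through Alpha.
print(sorted(required_versions("Delta:01-01-01")))
# ['Alpha:02-01-01', 'Beta:02-01-01']
```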
Note that rules in versions below the major version are not visible to rule resolution. For example, if you list Alpha:02-01-01, which depends on Beta:02-01-01, Alpha:02-01-01 won’t see rules in Beta:01-XX-XX.
If your ruleset only depends on the PRPC product, enter the Pega-ProcessCommander ruleset as a prerequisite.
A 99 patch version of the Pega-ProcessCommander ruleset is available in the product. Use that ruleset version as a prerequisite in your ruleset to avoid having to update the ruleset after product updates.
The Pega-ProcessCommander ruleset lists all product rulesets, so we don’t need to list any product rulesets below Pega-ProcessCommander.
The prerequisite information is also validated when importing rulesets. When a ruleset is imported to a target system, the system and/or import file is checked to verify that the ruleset versions listed as prerequisites exist; if they do not, a warning is shown.
Explain Ruleset Lists
While ruleset prerequisites govern rule development and import, the ruleset list governs rule execution at
runtime. Because ruleset prerequisites are enforced during development, the system does not need to
validate them during runtime processing.
The ruleset list is the list of rulesets that are “in play” for a given operator’s session. The ruleset list is
available in the operator profile (Operator menu > Profile).
The order of the rulesets is important as it is used by the rule resolution algorithm. We generally refer to
the rulesets with higher precedence as being “on top” of those with lower precedence.
The list is assembled during login. The process starts by finding the versioned application rule referenced
on the access group of the operator. Note that in rare configurations the access group can actually come
from the requestor definition, organization or division record.
For most applications, the ruleset list is primarily comprised of rulesets referenced on the application form.
The “built-on” applications are recursively processed until the PegaRULES application is found.
The applications are then processed bottom-up, adding each application’s ruleset list on top of the previously added rulesets. For example, if application A has two rulesets and is built on application B, which has two rulesets and is in turn built on the PegaRULES application, we end up with a ruleset stack with the rulesets of application A on top of those of application B, on top of the Pega rulesets. Finally, if the operator is allowed to check out rules, a personal ruleset is added at the very top of the list. The personal ruleset has the name of the operator ID.
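The bottom-up assembly can be sketched in Python as follows; the application data, ruleset versions, and helper are invented for illustration, and the sketch omits details such as branch and production rulesets:

```python
# Simplified sketch of ruleset list assembly at login; hypothetical data.
applications = {
    "A": {"rulesets": ["A1:01-01", "A2:01-01"], "built_on": "B"},
    "B": {"rulesets": ["B1:01-01", "B2:01-01"], "built_on": "PegaRULES"},
    "PegaRULES": {"rulesets": ["Pega-ProcessCommander:07-10"],
                  "built_on": None},
}

def ruleset_list(app_name, operator_id=None):
    """Walk the built-on chain, then stack each application's rulesets
    on top of those of the application it is built on."""
    chain = []
    while app_name is not None:
        chain.append(app_name)
        app_name = applications[app_name]["built_on"]
    stack = []
    for app in reversed(chain):        # bottom-up: PegaRULES first
        stack = applications[app]["rulesets"] + stack
    if operator_id:                    # personal ruleset at the very top
        stack = [operator_id] + stack
    return stack

print(ruleset_list("A", operator_id="ASmith"))
# ['ASmith', 'A1:01-01', 'A2:01-01', 'B1:01-01', 'B2:01-01',
#  'Pega-ProcessCommander:07-10']
```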
Explore Best Practices
A ruleset version can be locked, preventing changes to rules in that version. A ruleset is typically locked when development is finished and the application is moved to the test environment.
It is best practice to never unlock a locked ruleset. If changes are required to a locked ruleset, create a new patch version of the ruleset. That way consistency is ensured across environments.
Development should occur only in the highest version of a ruleset, and only that version should be unlocked. A ruleset version cannot be locked if any rules in that version are checked out.
The RuleSet Stack tab on the Application Structure landing page (DesignerStudio > Application > Structure
> RuleSet Stack) helps in following the best practices of ruleset locking and versioning.
We can use the Lock and Roll feature to lock and roll (increment) versions in a single step. Before using
the Lock and Roll we need to make sure that there aren’t any rules checked out.
If the last release built was promoted to testing or production it might be worth considering skipping a few
patch numbers reserving those for emergency fixes.
For example, consider the following situation. Ruleset version 01-02-04 was promoted to QA and then production. The next release, with a few new features, is planned to go into production in two weeks. The development of the next release is done in 01-02-10, reserving 01-02-05 to 01-02-09 for emergency fixes.
After the release went into production, two defects were found. The decision was made to push the fix for one defect immediately, but the other required some additional testing. The first defect was fixed in 01-02-05 and the second one in 01-02-06. This is a snapshot of the environments after the first fix was promoted to production, with the second fix still being tested on QA.
Manage Rulesets
There are several tools to help us manage rulesets and ruleset versions. The ruleset refactoring tools are
found under DesignerStudio > System > Tools > Refactor RuleSets.
Use the Copy/Merge RuleSet tool to copy rulesets and ruleset versions into new versions or to merge
several ruleset versions into one version.
The Delete a Ruleset tool allows us to delete an entire ruleset or a specific version of a ruleset. Consider the effect before deleting rulesets that have been promoted to production.
The Skim a Ruleset tool collects the highest version of every rule in the ruleset and copies them to a new major or minor version of that ruleset on the same system, with patch version 01. Consider a skim for each major application release.
Use the Rulebase Compare tool to identify differences in the rules on two different systems.
Conclusion
In this lesson, we looked at rulesets. A ruleset is a fundamental building block of an application. We looked at their purpose
and impact on the application as well as some features and best practices.
Now, you should understand what a ruleset is and how rulesets are configured. You should also know how rulesets are used
during development vs runtime. Finally, you should be able to enable and use the checkout feature and utilize ruleset best
practices.
Introduction
For all but the smallest projects, development efforts by multiple teams need to be managed. As teams
achieve development milestones and are ready to add their improvements to the main application rulesets,
there is a chance that some of their changes to existing rules may conflict with the changes other teams
are making.
Identify when Branches are needed
The branching ability is especially useful when teams need to work simultaneously on the same rules, and
all members on one team need to see each other’s work, while also isolating their development changes
from other teams. In this way, one team’s rule changes do not affect the other teams until the changes are
stable, conflicts resolved, and approval is granted to make the new functions available to the entire
development organization.
For example, Team Alpha wants to work on an inventory selection feature and at the same time Team
Beta wants to work on a purchase order feature in the Purchasing application and both features involve
adding and editing rules in the same application rulesets. The teams’ day-to-day development activity is largely independent. We do not know ahead of time which feature will go into production first.
With the features of branches and branch rulesets, each team creates their own branch in which to do the
work to implement their feature. When the work is done, the team resolves any conflicts that the system
detects between their branch’s rules and other instances of the rules, and then merges their branch’s rules
into the base application.
So let’s see what we need to do in Pega 7 to make this work.
Set Up a Team
First we need to create the team application, which is built on the main application. The team application is typically named in a way that reflects the base application, team name, or focus of the team.
For example, we could call the application Purchasing_TeamAlpha or Purchasing_InventorySelection.
The built-on application and version need to be updated to reference the main base application. The Include parent checkbox should be selected. We do not configure any application rulesets at this stage.
Create an access group that references the team application. The typical name of the access group uses
the application name plus the standard user type, in our case
Purchasing_InventorySelection:Administrators.
Update each team member’s operator ID to include the access group.
Create a Branch
The development branches are created using the Add Branch button on the Definition tab of the
application rule.
A branch name must start with a letter and contain only alphanumeric and dash characters, up to a
maximum of 16 characters. Best practice naming convention is to relate the branch name to the planned
development to be done in that branch. In this case we’ll call it InventorySel.
The order of the branches listed must match the order in which the rules should be resolved for the
developers on the team. When a developer logs into this application, the system assembles the
developer’s ruleset list and adds the branch rulesets on top of the ruleset stack according to the sequence
in which the branches are listed in this list.
Save the application record and click the Create Branch Ruleset button to create a branch ruleset. A branch ruleset is branched from a ruleset in the base application, or any of its built-on applications. The team needs to determine which rulesets contain the rules they expect to modify for a planned enhancement and create the branch rulesets based off those rulesets. In our case we want to branch the Purchasing ruleset.
Branch rulesets are not versioned. There is only one ruleset version, 01-01-01, and the ruleset it branches from is listed as a prerequisite.
The check out feature (on the Security tab) is enabled by default for branch rulesets since it is a best
practice to use checkout when doing team development.
We can use the Actions menu to delete a branch from the system or remove it from the application.
Deleting the branch from the system will delete all the rulesets and rules belonging to the branch.
Develop Rules with Branched Rulesets
Team members can implement the planned features by performing rule development in the branches
using the branch rulesets. To update an existing rule, we do a Save As of the base rule and save it into the
associated branch ruleset and then work on that copy.
As a team works with rules from the base ruleset in its branches, other teams are working on rules from the base rulesets in their branches. If a rule has been changed in the base ruleset since it was copied to the branch, a conflict message appears as shown below.
If the rule is copied into two different branches, a merge warning message appears, since the rules will need to be merged later.
To create a new rule that is needed to implement the enhancement, save it directly in the branch ruleset
that is branched from the base ruleset that will contain the new rules when development is complete.
Merge Branches
The branch’s contents are usually merged into the base rulesets when the development in the branch is
complete and stable. Start the merge process by selecting Merge from the Actions menu.
All rules in the branch rulesets need to be checked in to complete the merge. It is a best practice to lock the branch rulesets before merging the branch. Use the lock option in the Actions menu to lock the branch ruleset. We can use the package option if we want to move the branch to another environment.
Select Create New Version to create a new version of the base ruleset for the branch. Alternatively select
the ruleset version you want to merge into.
Click on the conflicts and warnings to open the notices window that displays each rule that the wizard has
identified as having a conflict or warning.
If a rule in the branch ruleset has been updated in the base ruleset, a merge conflict is shown. It is not possible to complete a merge if such conflicts exist. The branch and base versions of the rules need to be merged manually before proceeding.
More than one team might have branched the same rulesets and introduced changes to rules that conflict with each other.
These types of conflicts show up as warnings, but do not prevent a merge. The team merging their branch first can do so
without any problems since the base rule has not changed. However, after the first team has merged their changes the second
team will get a conflict that must be resolved since the base rule was updated by the first team.
Click the compare button to view the differences between the branch copy of the rule and the one in the base RuleSet. If the
changes are minor open the branch rule and apply the same changes to it so that it matches the base rule. If the changes are
more complex, or if they conflict with the logic in the branch rule, contact the individual who modified the base rule and
negotiate what the final set of changes should be and the procedure for testing them.
When the changes to the branch rule are completed and tested, select “conflict resolved, merge this rule”. Click OK when all conflicts reported in the notices window are marked resolved, and return to the Merge Branch Rulesets wizard to continue with the merge process.
We have the option to keep the source rules and rulesets after the merge. It is a best practice to provide a password to lock the target ruleset after the merge. By default, the branch and branch rulesets are deleted after a successful merge.
Explain Common Issues
To reduce the chance of losing bug fixes, developers working on fixes need to check whether rules they
need to work on have already been modified in branches. As each service pack becomes available, the
service pack team should coordinate with the teams that are affected to resolve inconsistencies and
preserve bug fixes.
Data instances are not contained in rulesets, and cannot be versioned or filtered with ruleset lists. Teams
must communicate regularly to avoid creating conflicting data instances, in particular with the release of
Service Packages for new versions of services. Limiting create/update access for data instances
(organizations, work groups, workbaskets, access groups, and so on) to a single developer on each team
simplifies coordination between teams.
If branches share rulesets on the enterprise and divisional reuse layers, any changes to those rulesets need to be closely coordinated to ensure that they don’t cause any issues with other applications using the rules.
Conclusion
In this lesson, we looked at the branching feature that supports parallel development in a shared rule base. This is a powerful
feature that is often used for medium and large sized projects with several development teams.
Now, you should understand how to set up applications and access groups for development teams. You should also know how to develop with and merge branches. Finally, you should understand common issues in parallel development and how to address them.
Introduction
This lesson covers the Rule Resolution process, which is the backbone of how the system ensures the
correct rules are executed at the correct time.
Define Rule Resolution
So we’ve just seen what kinds of challenges face businesses today and some ways to address those
challenges. PRPC’s best tool for addressing those challenges is its ability to use Rule Resolution. Rule
Resolution lets the system remain dynamic and flexible. Applications that have been designed using Rule
Resolution eliminate the need for developers to worry about building every possible scenario. By
leveraging Rule Resolution, these applications can adapt to the immediate scenario and determine the
next best action without any human intervention.
So what is Rule Resolution?
Rule Resolution is the process PRPC uses to determine the most appropriate rule to execute. There are two parts to Rule Resolution. First, it eliminates all the rules that are simply not applicable to the current scenario. Next, it ranks the remaining rules to determine which one is the ‘most’ applicable.
Let’s take a look under the hood of the Rule Resolution process.
State the Rule Resolution Process
Rule resolution follows 10 basic steps when determining which rule to use:
1. Check the rule cache. If the rule is present in the cache, go to Step 8.
2. Choose all instances with the correct purpose.
3. Discard rules where Availability is set to No/Draft.
4. Discard inapplicable rulesets and versions.
5. Discard all candidates not defined on a class in the “ancestor tree”.
6. Rank remaining candidates by:
a. Class
b. Ruleset
c. Circumstance
d. Circumstance Date
e. Date/Time
Then remove all candidates that are withdrawn or hidden by other withdrawn candidates. Select the first default rule encountered and discard the candidates in the remainder of the list.
7. Set the cache.
8. Find the best instance (and check to make sure there is not a duplicate rule).
9. Check that the Availability is not set to BLOCKED.
10. Security – Verify that the user is authorized to see the rule.
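As an orientation before walking through the steps one by one, here is a condensed Python sketch of the filtering-and-ranking phase (steps 2 through 6); the field names are hypothetical, and this is an illustration rather than Pega’s internal implementation:

```python
# Condensed, illustrative sketch of steps 2-6; not Pega internals.

def build_candidate_list(rules, purpose, ruleset_stack, ancestor_classes):
    # Step 2: keep instances with the correct purpose
    # (rule type + name + any extra keys, minus the Applies-To class).
    candidates = [r for r in rules if r["purpose"] == purpose]
    # Step 3: discard No/Draft availability.
    candidates = [r for r in candidates
                  if r["availability"] not in ("No", "Draft")]
    # Step 4: discard rulesets/versions not visible on the operator's list.
    candidates = [r for r in candidates
                  if r["ruleset_version"] in ruleset_stack]
    # Step 5: discard classes outside the ancestor tree.
    candidates = [r for r in candidates
                  if r["applies_to"] in ancestor_classes]
    # Step 6: rank; earlier positions in each list mean higher precedence.
    candidates.sort(key=lambda r: (
        ancestor_classes.index(r["applies_to"]),    # class
        ruleset_stack.index(r["ruleset_version"]),  # ruleset
        r["qualifier_rank"],                        # circumstance/date ranks
        -r["version_number"],                       # newest version first
    ))
    return candidates  # step 7 caches this list for reuse
```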
Let’s take a closer look at each of these steps.
Step 1: Check the Rule Cache
During this step, the system checks to see if Rule Resolution has already been run for this rule. If it has,
the cache will have a list of all possible candidates for this rule. By using this list, the system can avoid the
need to query the database for rules every time they are executed.
It is not necessary to cover exactly how caches work at this time. There will be separate lessons that describe the various caches in the system. However, it is important to note that the cache exists and is used for this step. The keys for this cache are covered when we examine step 7.
If a list is found in the cache, the system uses the candidates from the list and proceeds to step 8, where
the list is evaluated. If it is not found, the system proceeds to build the list.
Example:
Let’s walk through how the system finds the section “CreateRequest” shown below while performing a
PurchaseRequest:
First the system checks the cache for a previous Rule Resolution of the Section rule “CreateRequest” that
used the same parameters as this current request. Since this is our first time running this rule, the rule is
not found in the cache so the system proceeds to the next step.
Step 2: Choose all instances with the correct purpose.
During this step, the system gathers a list of all rules that match the purpose. Though often used
interchangeably, the purpose is more than just the rule name. It is the combination of all keys to a rule,
except the class on which the rule is defined (known as the ‘AppliesTo’ class). At a minimum, this will be
the Rule Type and the Rule Name, with additional keys as needed depending on the rule type.
Example:
The system needs to find all instances of a ‘Rule-HTML-Section’ called ‘CreateRequest’. It doesn’t do any other filtering at this time, which still provides us a pretty lengthy list:
Step 3: Discard rules where Availability = No/Draft.
During this step, the system filters the list of candidates and removes any rules where the Availability is set
to either ‘No’ or ‘Draft’.
Availability is defined on a rule to control how this rule should be treated during Rule Resolution. An
availability of “No” or “Draft” indicates that this rule is not able to be executed and should not be considered
for Rule Resolution.
Example:
Continuing with our example above, there are only three candidates that have their Availability set to ‘No’
as indicated by the shaded lines below.
These rules are removed from the list of candidates.
Step 4: Discard inapplicable Rulesets and Versions.
During this step, the system looks at the operator’s ruleset list (known as the Ruleset Stack). The Ruleset Stack is covered in a different lesson. For this lesson, it is only important to know that the Ruleset Stack defines the list of available rulesets for this operator, and that the rules in those rulesets must have the same major version number and a minor version number lower than or equal to the specified one.
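Expressed as a hedged Python sketch (a hypothetical helper, not Pega code), the visibility test looks like this:

```python
# Hypothetical helper illustrating the step 4 visibility test.

def visible(rule_version, ruleset_list):
    """rule_version like 'Purchasing:01-01-01'; ruleset_list maps a
    ruleset name to the 'major-minor' entry on the operator's list."""
    name, version = rule_version.split(":")
    if name not in ruleset_list:
        return False                      # ruleset not on the list at all
    major, minor, _patch = version.split("-")
    list_major, list_minor = ruleset_list[name].split("-")
    return major == list_major and int(minor) <= int(list_minor)

stack = {"Purchasing": "02-01", "ADV": "03-01"}
print(visible("Purchasing:01-01-01", stack))  # False: major version differs
print(visible("Purchasing:02-02-01", stack))  # False: minor version too high
print(visible("Purchasing:02-01-05", stack))  # True
print(visible("XYZ:01-01-01", stack))         # False: ruleset not listed
```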
Example:
In our example, our operator’s list is defined as:
Purchasing:02-01
ADV:03-01
This removes eleven candidates from our list:
The three candidates from XYZ and Quoting are removed because their rulesets are not defined in our
Ruleset Stack. This makes sense because they belong to a completely different application.
Within Purchasing, the three candidates from the 01-01-01 versions have been removed because they don’t match our current Major version of 02. The candidate from 02-02-01 matches the current Major version, but is still removed since its Minor version of 02 is higher than our current Minor version of 01.
Within ADV, only one candidate matched our current Major version of 03. All the others have been discarded.
Step 5: Discard all candidates not defined on a class in the “ancestor tree”.
The “ancestor tree” refers to a rule’s inheritance. Inheritance is defined in more detail in another lesson.
For this lesson, it is only important to remember that a class can inherit from other classes, which allows for
the reuse of rules.
This step is the first time that the system examines the ‘AppliesTo’ class of the candidates. If the ‘AppliesTo’ class does not match the current class, or one of the current class’s parents, those candidates are discarded.
Example:
In our example, we are currently executing a Purchase Request. There is only one candidate that is not in the ancestor tree of the Purchase Request:
Therefore, the candidate defined on ADV-Purchasing-Work-PurchaseOrder is removed from the list.
Step 6: Rank remaining candidates.
At this point, the system is trying to determine the order in which the candidates are the most correct. A lot happens during the ranking step; three different sub-steps actually occur.
1. First, the system sorts the candidates according to:
1. Class
2. Ruleset
3. Circumstance
4. Circumstance Date
5. Date/Time Range
6. Version
The first two ranks – Class and Ruleset – provide the basics of Rule Resolution. The closer the candidate
is to the class where it executes and the higher it is in the ruleset stack determine which is the default rule.
The next three ranks – Circumstance, Circumstance Date, and Date/Time Range – are qualifiers to those
basics. They allow us to specialize even further to address all the possible outlier cases. The exact details of these qualifiers and an in-depth look at this portion of Rule Resolution are covered in the Circumstancing lesson in this course.
Finally, the last rank – Version – ranks the candidates by the ruleset version that contains them. This
ensures that circumstanced rules are not automatically overridden if the base rule is updated in a more
recent ruleset version.
2. During the ranking process, the system evaluates any of the candidates that have their Availability
set to ”Withdrawn”. Withdrawn is a special Availability that lets us skip over some candidates. When
a rule is marked as Withdrawn, it is removed from the list of candidates as well as any additional
candidates that match all of the following:
a. The same ‘AppliesTo’ Class
b. The same Ruleset Name
c. The same Ruleset Major version
d. The same Purpose
e. The same Qualifiers
This is the definition of how to match a Withdrawn rule. However, during Rule Resolution, the incompatible
Major versions and Purposes have already been discarded in the preceding steps. At this point, we’re only
concerned with the rule being in the same class, in the same RuleSet and having the same qualifiers.
3. The last thing the system does is determine the ‘default’ candidate. A default candidate is the first
candidate (highest ranked) that has no qualifiers. This default candidate is the last possible rule to
be executed as it will always be a match to any additional requests for this rule. Additional
candidates ranked below this one are discarded.
Once these three steps of ranking are complete, the only candidates remaining will each have some kind of qualifier, except for a single non-qualified candidate.
Example:
Continuing our example, we’ll first rank the results. Conveniently, our list is already in the order in which it was ranked.
Next, we see that there is a Withdrawn candidate defined on ADV-Purchasing-Work-PurchaseRequest in the Purchasing ruleset with no qualifiers. We’ll remove all candidates on that same class and ruleset that also have no qualifiers.
Once those have been removed, we look for the first candidate that does not have any qualifiers. We find that ADV-Purchasing-Work in the Purchasing ruleset, Version 02-01-05, meets the criteria, so the remainder of the list is removed.
This leaves us with only three candidates, two that have a qualifier and the default candidate.
Step 7: Set the cache.
This list is then stored in the cache for future Rule Resolution requests to leverage (in Step 1). The
system caches all the possible qualified combinations in a single list in order to improve its reuse. If it only
stored the exact match, there would have to be a new cache for every possible scenario. This cache is
keyed by the Ruleset Stack and the class that was being executed.
Example:
In our example, the result is that all the candidates are from ADV-Purchasing-Work.
However, since we were executing a Purchase Request, the key used when putting this list in the cache is ADV-Purchasing-Work-PurchaseRequest.
Step 8: Find the best instance.
This is the final step that determines which candidate is the most likely one to be executed. Each of the
qualifiers is examined to see if there is a match. The system stops on the first match that’s available or the
default if none of the qualifiers match. This allows us to use the qualifiers, such as Circumstance, to
specify the outlier processes while still maintaining a base ‘default’ rule for the majority of cases.
Example:
In our example, we have three possible candidates that are evaluated.
First PRPC evaluates if the condition for the Circumstance was met. Let’s say the Circumstance was
“Supplier=Restricted”. If the value of “Supplier” equals “Restricted” then the system uses the first
candidate. Our “Supplier” is set to “Open” so we don’t match and the system moves to the next evaluation.
Now it evaluates a Date/Time range. If the range on the candidate is specified as “Before June 1st, 2000” and we’re executing this rule now, we don’t match the date range, so the system moves to the next evaluation.
The candidate does not have any qualifiers, so the system automatically chooses this candidate. We now
know which rule to execute.
Step 9: Check that Availability does not show Blocked.
Even though the system’s narrowed down the selection to a single rule, Rule Resolution doesn’t stop
there. There are still a couple more things to do. The system is going to check to see if the Availability is
set to “Blocked”. Like the other Availability settings, Blocked is used to instruct the Rule Resolution
process. In this case, a candidate that is “Blocked” tells the system that the rule cannot be run. This is
different from the “No” availability: where “No” says “Don’t pick me, pick another rule”, “Blocked” says “You have the right rule, but I don’t want you to run.”
To summarize the Availability settings:
Yes = “I’m OK to be executed.”
No = “I’m not OK to be executed, pick someone else.”
Withdrawn = “I’m not OK to be executed and neither are any of my earlier versions. Pick
someone else.”
Blocked = “I’m not OK to be executed, but do not pick anyone else. Just don’t execute.”
Example:
In our example, the chosen candidate has Availability set to “Yes”, so this rule is not blocked and is
available to run.
If you remember, we did have a blocked rule in the original list of candidates, but that candidate was
discarded in Step 4.
Step 10: Security – Verify that the user is authorized to see the rule.
The final piece of Rule Resolution is to check that the user has the correct authorization to execute the
rule. If the user does not have the correct authorization, they are prevented from executing it. The features of security and how to configure them are covered in a different lesson in this course.
Example:
In our example, the user has the correct authorization, so the section displays.
Introduction
In this lesson we’ll cover how to plan your class structure with designs for future growth
and some best practices for achieving maximum reuse.
Define the Enterprise Class Structure
So what is an ECS and why do I need one? This is one of the most important pieces of developing a
robust PRPC application. The Enterprise Class Structure is the backbone of reuse in our application.
The Recommended Enterprise Class Structure
PRPC generates an Enterprise Class Structure for us when we run the Application Express. Depending on
the settings we select, we might not have all of these layers shown below in our application.
The most common of these settings generates four layers in our ECS and they are the:
Organization Layer
Division Layer
Framework Layer
Implementation Layer
The Organization Layer
The organization layer exists at the very top of our enterprise class structure. This layer is intended to hold
all rules that apply corporate wide.
The kinds of rules most often encountered here are:
Security (for example, LDAP access, how users are granted rights, etc…)
Corporate Logos and Skins
Corporate wide data models (for example, Customer, Insured, etc…)
Some Integration Points (Note that these should only be placed here if they are used organization
wide. Access to a customer database is a good example of a potential organization wide
integration. However a list of automotive repair centers would not be a good idea as it is most
likely only applicable to a single line of business and would not be used organization wide.)
The Division Layer
The division layer exists to hold all the rules for a logical grouping within that business. Most often, this is
based on the line of business, but it is not restricted to that and instead could be regions, brands, or
departments. The key is to identify if these rules span across different processes within the business.
For example, an insurance company has several processes, Quoting, Underwriting, Claims, etc… Each of
these processes would be handled by different departments, but across all of them, the definition of a
vehicle is the same. By placing this definition at the division layer for the Automotive Line of Business, we’re
providing for reuse across all the different applications that deal with vehicles.
The kinds of rules most often encountered here are:
Division wide data models (like the vehicle example)
Division wide letterheads, signature lines, office locations, telephone numbers, etc…
The Framework Layer
The framework layer is all about the process. This is what empowers an organization to achieve the
majority of their reuse. Processes within an organization are often very similar between the different
divisions. By placing this common process in a framework, we just have to build it once for all the different
applications within that organization.
For example, if we look at an insurance company again, the claims process is generally the same no
matter which line of business.
Automotive, Home, Life, Health, etc… It doesn’t matter which kind of insurance, a claim still needs to follow
these same basic steps. This is what a framework provides. Frameworks should not be tied to a single
line of business, as this limits their reuse, but instead should always look at the broad picture across the
whole organization.
The kinds of rules most often encountered here are:
Case types
Flows
Flow actions
Sections
The Implementation Layer
The implementation layer is where it all comes together. This layer is the application that a user leverages
to perform their tasks. You can think of it as the ‘glue’ that holds together the rules from the framework,
organization and division layers. The rules that are the most specific would exist here.
The kinds of rules most often encountered here are:
Class groups
SLAs
Routers
One key aspect to the implementation layer is that this is where any of our work classes should be
instantiated. This allows the class to be the most specific and can then ensure it’s leveraging the
appropriate rule in all instances. It also keeps the work for a particular division tied to that one division and
avoids possible cross division contamination of work.
The Benefits to a Good Enterprise Class Structure
Well, we’ve covered the basics of how to organize a class structure, but we still haven’t covered why we
may want one. This is one area that is often missed when explaining to a business the benefits of the
ECS. An ECS provides minimal yield in the very first application. The major yields come as the
organization grows and additional applications are built.
Take a look at the following diagram.
This is an example of a typical organization that has been building and using PRPC applications for a little while. Now is when this organization is reaping the benefits of their ECS. By thinking ahead, they were able to plan frameworks and divisions that enabled them to build 5 different applications with relative ease and reuse.
Does this create more work?
No, it shouldn’t. A well-designed Enterprise Class Structure should never contain 100% of the rules in any one layer. Instead, the same number of rules that would have been used in a single application are simply spread throughout the layers. Building an ECS should not take any more effort than asking yourself “What does this rule represent?”
There’s no magic number to the balance of rules in any one of these layers. Each and every application is
going to have its own unique balance. The key to planning is keeping the purpose of the rule in mind when
building it. If the rule is related to the process, it goes in the Framework. If it’s a business standard, it goes
in the division or organization layers, as appropriate. If it’s specific to this one application, it goes in the
Implementation.
Inheritance – What makes the Enterprise Class Structure Work?
So how does it all work? How can rules existing in different classes be used with each
other? Inheritance!
Inheritance is what allows a class to use rules from its parent as if they were its own.
For example, suppose we have Classes A and B. If Class A (the child) inherits from Class B (the parent), then A can use any rule defined on A or B. The opposite is not true: parents cannot access rules from their children, so Class B can only use the rules defined on B.
Inheritance is an Object-Oriented Programming concept. Inheritance is defined as being “a kind of”. In most Object-Oriented languages, we can only define one thing as being “a kind of” one other thing, like a desk ‘is a kind of’ furniture, or a poodle ‘is a kind of’ dog.
But not everything in life can be described as only being of one kind. For example, let’s
look at a swallow. A swallow is a living creature that flies. We couldn’t describe this
with a single inheritance path because not all things that fly are living creatures, and not
all things that are living creatures fly.
For example, let’s assume the following properties exist for each of these classes:

Class | Properties
Things That Fly | Height from ground, Airspeed
Living Creatures | Number of Offspring
Swallow | Length of tail feathers
If the Swallow could inherit from both Things That Fly and Living Creatures, it would be able to use all 4 of
the listed properties. Thankfully, in PRPC we can do that. PRPC has the ability to define two kinds of
inheritance, Pattern and Directed. By providing two paths of inheritance, we can achieve multiple reuse
patterns.
Pattern Inheritance
Pattern inheritance is defined by the name of the class. For example, let’s look at the class:
ADV-Purchasing-Work-PurchaseOrder
Anything before the last ‘-’ is considered the class’s parent. Pattern inheritance continues to chain all the way up to the very first class referenced in the chain. So for our class, the chain of parents is:
ADV-Purchasing-Work
ADV-Purchasing
ADV
Our class can use any of the rules defined on any of these three parents.
Directed Inheritance
Directed inheritance is how we explicitly specify an alternative parent for a class. We specify the directed inheritance on the class’s rule form.
For our example, the directed parent is Work-Cover-. Directed parents also chain, so if we were to open each of the class definitions we would find that this class’s parents are:
Work-Cover-
Work-
@baseclass
So our class is also able to use any of the rules in these classes.
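Both inheritance paths are easy to sketch in Python; the directed-parent mapping below is hypothetical and simply mirrors the example above (in Pega the directed parent is read from each class definition):

```python
# Hypothetical directed-parent map mirroring the example above.
directed_parent = {
    "ADV-Purchasing-Work-PurchaseOrder": "Work-Cover-",
    "Work-Cover-": "Work-",
    "Work-": "@baseclass",
}

def pattern_parents(class_name):
    """Strip one '-'-delimited segment at a time from the right."""
    parents = []
    parts = class_name.split("-")
    while len(parts) > 1:
        parts.pop()
        parents.append("-".join(parts))
    return parents

def directed_chain(class_name):
    chain = []
    while class_name in directed_parent:
        class_name = directed_parent[class_name]
        chain.append(class_name)
    return chain

print(pattern_parents("ADV-Purchasing-Work-PurchaseOrder"))
# ['ADV-Purchasing-Work', 'ADV-Purchasing', 'ADV']
print(directed_chain("ADV-Purchasing-Work-PurchaseOrder"))
# ['Work-Cover-', 'Work-', '@baseclass']
```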
How can I see all the parents of my class?
In order to see all the parents a class has, we just need to right-click on the class and select Inheritance, as shown below.
This opens the Inheritance Viewer. The Inheritance Viewer allows us to see all the parents of the class as
shown here:
The parents are ranked according to the order they’re checked during rule resolution. When a rule is
requested, the system first looks in the child class, then works its way through the parents starting from 1.
Example:
We’ve requested to execute Rule ‘A’. Looking at all the available Rule ‘A’s in the system, we find it on:
1. ADV
2. Work-
3. ADV-Purchasing-Work
4. @baseclass
When the system ranks these, it selects the copy of the rule in #3, since it follows the order shown in the Inheritance Viewer.
Conclusion
In today’s applications, the enterprise class structure is what drives the reuse model. A well-planned class structure should incorporate the needs for reuse, both process-oriented and business-oriented.
Introduction
In this lesson, we’ll discuss how we can use circumstancing to provide flexibility to a
PRPC system to address the complex demands of outlier business rules.
Define Circumstancing
Circumstancing is a way we can create specialized copies of rules to use based on the evaluation of a
condition. This helps us handle the outlier cases that go beyond the basic process. For example, we
might need the system to apply a different tax calculation if a person is retired. This is an easy case, and
potentially could have been handled directly in the tax calculation. But, what if we have a more complex
situation? Let’s say we need to apply a different tax calculation if a person is retired, they retired after Jan
1st, 2000 and they have no next of kin. Now what do we do?
These are the kinds of outlier situations that exist in businesses today. Complex business rules vary from state to state, by time, by demographic group, and by any and all possible combinations. Trying to account for all of these possible combinations in different decisions and paths in the flows leads to a complex nest of possibilities that becomes harder and harder to maintain.
By using circumstancing, we can keep the processes streamlined, and allow the system to determine when to do things differently based on the information available at the time the rule executes. There are four types of circumstancing, but they can be summed up as being based on data and/or based on time.
The Base Rule
One of the important concepts in Circumstancing is the base rule. The base rule is the uncircumstanced
version of the rule that acts as the default for the basic process. In fact, since a circumstance is a variation
of this base rule, the circumstanced version can’t even exist unless the base rule has been created first.
Creating a circumstance
So, how do we create a circumstanced rule? Easy: first we select our base rule, then we select ‘Specialize by circumstance’ to create a new rule.
In this lesson we are discussing the options for “Specialize by circumstance”. The other option to
“Specialize by class or ruleset” is described in the Reuse and Specialization lesson.
Which rules can be circumstanced?
So this means we can circumstance any and every rule, right? Not so fast. Only certain rule types can be circumstanced. In order for a rule to be circumstanced, that rule type must have at least one of these two options enabled.
Allow selection of rules based on property values?
We use this option for circumstancing on the data available. In some cases, this data might be time
related, such as the creation date. However this is still static data and should not be confused with the
second option.
Allow rules that are only valid for a certain period of time?
We use this option for circumstancing based on the time the rule executes.
Identify Different Kinds of Circumstancing
So far we’ve discussed the different basic ways that rules can be circumstanced.
But, let’s say we need to specialize a rule based on if:
The customer is retired
The customer retired after Jan 1st, 2000
The customer is retired and has no next of kin
We’re currently in tax season, turn around the request in 2 days instead of 3.
Below are the types of circumstancing we can use. Let’s take a look at each of them in more detail and in
each one, we’ll evaluate a different part of our original example.
Single Property
As of Date Processing
Multivariate
Time-Qualified
Single Property
We use single property circumstancing when we look at the data available.
When we use single property circumstancing, the property needs to be a single value. Lists and Groups
are not compatible with Single Property circumstancing. The property entered here is compared to the
entered value. This comparison looks for an exact match and is case sensitive. For this reason, Single
Property Circumstancing is not recommended for customer entered data.
Example
A single property circumstance is perfect for evaluating the first of our conditions. We can create a circumstanced rule based on an “IsRetired” property. If the value of “IsRetired” equals “Yes” then this rule is chosen. If the value is “No” then the rule is not chosen. Since the comparison is case sensitive, if the value were “yes” this rule would not be chosen. This is why it is very important to always have control over the values that are being compared.
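A two-line sketch makes the case-sensitive comparison concrete (an illustration only, not Pega code):

```python
# Illustration of the exact, case-sensitive comparison described above.

def circumstance_applies(property_value, circumstance_value):
    return property_value == circumstance_value  # no case folding, no trimming

print(circumstance_applies("Yes", "Yes"))  # True: circumstanced rule chosen
print(circumstance_applies("yes", "Yes"))  # False: base rule runs instead
```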
As of Date
As of Date is very similar to Single Property, except that it:
1. Can only be used with date/time type properties.
2. Looks for circumstances where the property value is past the specified date, instead of an exact match.
As of Date first looks for all circumstances where the value in the property is past the specified date or
time. It then chooses the circumstance that is closest to the value in the property. To illustrate, let’s say
we had three circumstances with the following dates defined:
Rule A.1: September 1st, 2012
Rule A.2: June 1st, 2012
Rule A.3: July 1st, 2012
If the system needs to pick one of these, and the property value is August 15th, 2012 then the system
would discard Rule A.1 because August 15th, 2012 is not past September 1st, 2012. It would keep the
other circumstances Rule A.2 and Rule A.3 because our date is past the dates of the circumstance.
Between A.2 and A.3, it would look for whichever circumstance is closer to the value in the property. This
is A.3 since July 1st, 2012 is later than June 1st, 2012, which makes it closest to our value of August 15th,
2012.
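Here is a small Python sketch of that selection logic, assuming (for illustration) that being “past” a date includes the date itself:

```python
# Sketch of the As-of-Date selection described above.
from datetime import date

circumstances = {
    "Rule A.1": date(2012, 9, 1),
    "Rule A.2": date(2012, 6, 1),
    "Rule A.3": date(2012, 7, 1),
}

def pick_as_of_date(property_value):
    """Keep circumstances whose date the value has passed, then pick
    the one closest to (latest before) the value."""
    eligible = {name: d for name, d in circumstances.items()
                if d <= property_value}
    if not eligible:
        return None   # fall back to the non-circumstanced base rule
    return max(eligible, key=eligible.get)

print(pick_as_of_date(date(2012, 8, 15)))  # 'Rule A.3'
```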
We’ll cover the exact process rule resolution uses to make these determinations a little later in this lesson.
Example
This kind of circumstancing is perfect for the second of our conditions. In this case we would create a
circumstance based on a “RetiredDate” property and specify Jan 1st, 2000 as the Date Value. This rule is
chosen anytime the “RetiredDate” is after Jan 1st, 2000. If the RetiredDate is before this date, for instance Dec 31st, 1999, then the system drops to the default rule that has no circumstances defined.
Multivariate
But what if we need to evaluate more than one property? This is where multivariate circumstancing is
used. Multivariate requires the use of two additional rules, Circumstance Templates and Circumstance
Definitions. Using these two rules, we can define a circumstance that spans multiple properties and
multiple values.
In PRPC, Circumstance Definition and Circumstance Template rules are part of the Technical category and are instances of the “Rule-Circumstance-Definition” and “Rule-Circumstance-Template” rule types, respectively.
The Circumstance Template defines which properties take part in determining if the circumstance is valid.
What we define here becomes the list of properties that a Circumstance Definition can leverage.
The Circumstance Definition uses the Circumstance Template to set up the columns of valid values in this
Definition. The definition can be any number of rows and can contain any value, or a range of values, for
each property in that row.
Bringing this all together, with a Circumstance Template we can state that we want to evaluate multiple properties. A Circumstance Definition then extends that Template to let us evaluate multiple values for those multiple properties. A Circumstance Definition cannot be created without its corresponding Circumstance Template, and a Circumstance Template cannot be leveraged without at least one Circumstance Definition being defined.
Once the Template and at least one Definition have been created, we can create a multivariate
circumstance by choosing to specialize by Template. On the new form, we can then specify the already
created Template and Definition to use for this rule.
Example
This kind of circumstancing is used for our third condition, since we will need to check if the person is
retired and if they have no next of kin. So, first we create a Circumstance Template to define that we’re
circumstancing on the “IsRetired” and “NextOfKin” properties. Once we’ve created the Circumstance
Template, we can create the Circumstance Definition that says we’re looking for someone who is retired
(IsRetired = ‘True’) and has no next of kin (NextOfKin = ‘None’). Finally, we specialize by Template and
select the template and definition we just created.
So that gives us four rules in the system to handle this special circumstance:
1. The Base rule
2. A Circumstance Template
3. A Circumstance Definition
4. The Circumstanced Rule
Whenever this rule needs to be executed the system uses the Circumstance Definition and Template to
determine if the Circumstance is valid. If yes, it runs the Circumstanced version of the rule. If no, it runs
the Base rule.
Time-Qualified (Date Range)
Time-qualified rules are a little different from the other circumstances. These rules aren’t based on values. Instead, these rules depend on when they are executed. Time-qualified rules are used for temporary purposes. For example, we might need to adjust our business based on a particular season, such as providing end-of-year discounts, or maybe providing a longer turnaround time for grade requests outside of the school year. This is different from circumstancing by a date property, and care should be taken to ensure the proper type of specialization is used to produce the desired effects.
When specializing by date, only two additional values can be specified: the Start Date and the End Date. Only one of these is required, though both can be, and often are, specified. The system executes this version of the rule if the current time is after the start date, but before the end date. If either the Start Date or End Date is missing, then this becomes an open-ended circumstance, which applies at all times after a specified start date or before a specified end date.
Example
This kind of circumstance satisfies our fourth condition, determining whether or not we’re in tax season. To create this circumstance, we start with the SLA rule that specifies the turnaround time. We save this rule to a new copy, provide a Start Date of Jan 1st and an End Date of Apr 15th. The system now chooses this circumstance whenever we’re between those dates.
Describe Circumstancing in Rule Resolution
The Rule Resolution lesson introduces the concept of Ranking. Aside from its other functions, as covered
in that lesson, Ranking also impacts Circumstanced Rules. During the ranking process, all valid
circumstance variants are sorted according to their circumstance values.
The ranking process orders by:
1. Override
2. Class
3. Ruleset
4. Circumstance Value, in alphabetical order. (Multivariate circumstances rank by the order in which the properties are listed in the Template.)
5. Circumstance Date, in descending order.
6. Time-Qualified by end date, in ascending order.
7. Time-Qualified by start date, in descending order.
8. Non-Qualified
9. Version
Each of these sort orders is within the previous one. Override, Class, Ruleset and Versions are covered as
part of the Rule Resolution lesson, so we’ll just focus on how the Circumstanced rules are ranked.
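To make the nesting of these sort orders concrete, here is a hedged Python sketch of items 4 through 9 for candidates already tied on override, class, and ruleset; the field names are invented for illustration and this is not Pega’s internal representation:

```python
# Illustrative sketch of ranking items 4-9; not Pega internals.
from datetime import date

FAR_PAST_ORD = date.min.toordinal()
FAR_FUTURE_ORD = date.max.toordinal()

def rank_key(rule):
    """Sort key for candidates already tied on override/class/ruleset.
    Missing qualifiers push a rule toward the non-qualified default slot,
    so item 8 falls out of the sentinel values below."""
    value = rule.get("value")            # 4: circumstance value, A-Z
    circ_date = rule.get("circ_date")    # 5: circumstance date, descending
    end = rule.get("end_date")           # 6: time-qualified end, ascending
    start = rule.get("start_date")       # 7: time-qualified start, descending
    return (
        value is None,
        value or "",
        -(circ_date.toordinal() if circ_date else FAR_PAST_ORD),
        end.toordinal() if end else FAR_FUTURE_ORD,
        -(start.toordinal() if start else FAR_PAST_ORD),
        -rule["version"],                # 9: newest version first
    )

candidates = [
    {"value": None, "circ_date": None, "end_date": None,
     "start_date": None, "version": 15},                 # default rule
    {"value": "Accounting", "circ_date": None, "end_date": None,
     "start_date": None, "version": 1},                  # circumstanced
]
candidates.sort(key=rank_key)  # the circumstanced variant ranks first
```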
Using our previous examples, if we were to take a look at every possible circumstance combination of:
If the customer is retired
If the customer retired after Jan 1st, 2000
If the customer is retired and has no next of kin
If we’re currently in tax season, turn around the request in 2 days instead of 3.
If we were to account for every one of these possible combinations, we’d have a large number of rules to
maintain as shown in the table below.
Rank Order | Circumstance Value 1 (IsRetired) | Circumstance Value 2 (NextOfKin) | Circumstance Date (RetiredDate) | End Date | Start Date
1 | True | None | Jan 1st, 2000 | Apr 15th, 2014 | Jan 1st, 2014
2 | True | None | Jan 1st, 2000 | - | -
3 | True | None | - | Apr 15th, 2014 | Jan 1st, 2014
4 | True | None | - | - | -
5 | True | - | Jan 1st, 2000 | Apr 15th, 2014 | Jan 1st, 2014
6 | True | - | Jan 1st, 2000 | - | -
7 | True | - | - | Apr 15th, 2014 | Jan 1st, 2014
8 | True | - | - | - | -
9 | - | - | - | Apr 15th, 2014 | Jan 1st, 2014
10 (Default) | - | - | - | - | -
In most cases though, the business doesn’t need to account for every possible combination. Often one or
more of these rules supersedes another. Clarify with your Business Architect (BA) or Subject Matter Expert
(SME) the intent of these specializations to ensure the correct number of rules is being created. If we were
to create exactly the specializations requested, we’d only have 5 variations as shown in the table below.
Rank Order | Circumstance Value 1 (IsRetired) | Circumstance Value 2 (NextOfKin) | Circumstance Date (RetiredDate) | End Date | Start Date
1 | True | None | - | - | -
2 | True | - | Jan 1st, 2000 | - | -
3 | True | - | - | - | -
4 | - | - | - | Apr 15th, 2014 | Jan 1st, 2014
5 (Default) | - | - | - | - | -
This looks much easier to maintain, and the system can now be relied upon to choose the correct rule to
execute based on the ranking order.
This table is written to the cache and evaluated as a part of every rule resolution request. This allows the
cache the maximum amount of reuse by using a single cache for every circumstance variation. Every
execution then confirms against this table the exact circumstanced version that needs to be executed for
the exact values held at runtime.
The Base Rule Flag
It is important to note that the version of a rule is less important than the circumstance. This is relevant
when an update is required to one of the variations of the rule, or to the base rule itself. Let’s take a look
at the last circumstance.
If we’re currently in tax season, turnaround the request in 2 days instead of 3.
What if the base changes from 3 days to 4 days? This should be relatively easy; we just save the rule into a new version and update it. But what about the circumstanced versions of the rules? Do we need to save those too? No, because the system ranks version as less important than the circumstance. So it is possible to have a circumstanced rule in version 01-01-01 and the base rule in 01-01-15. At run time, if the request matches the circumstance, the system executes the rule from 01-01-01. Otherwise it executes the one from 01-01-15.
So how do we get rid of a circumstance? If the system always honors the old circumstanced rules, is there
no way to remove them? Thankfully, there is. Any version of any rule that can be circumstanced can
also be designated as a base rule. This is done by checking the Base Rule flag while setting the rule’s
availability.
Checking this flag designates that this version of the rule is now considered this rule’s base and any
previous circumstances no longer apply. Let’s take a look at the following list of all variations of a rule:
#  | Version  | Circumstance
1  | 01-01-01 | None
2  | 01-01-01 | .Dept = Accounting
3  | 01-01-01 | .Dept = Engineering
4  | 01-01-15 | None
5  | 01-01-20 | .Dept = Engineering
6  | 01-01-25 | None, Base Rule flag checked
7  | 01-01-30 | .Dept = Accounting
8  | 01-01-35 | None
Given this list, if we were to execute this rule when .Dept = Accounting, we get the 7th rule (ver. 01-01-30).
If we were to execute this rule when .Dept = Engineering, we get the 8th rule (ver. 01-01-35).
This is because the 6th rule (ver. 01-01-25) has the Base Rule flag checked, so all the rules previous
to that version (rules 1 through 5) are no longer applicable to our ranking. When we look at only those
rules that are available for ranking, we can see that Engineering is not an applicable circumstance, so the
system chooses the highest version of the rule with no circumstances defined.
#  | Version  | Circumstance
6  | 01-01-25 | None, Base Rule flag checked
7  | 01-01-30 | .Dept = Accounting
8  | 01-01-35 | None
Pega Naming Conventions
Standard naming conventions help to ensure a uniform understanding of an application and its
components. Careful consideration of naming conventions at the outset of a project reduces the risk of
duplicated effort and rework. Additionally, some of the names employed in development are visible on user
forms and portals, so creating clear, meaningful labels from the beginning ensures that you enhance the
user experience and improve productivity.
The conventions presented here are suggestions. Adopt or adapt the ones you want based on your
application and organizational requirements. In any event, make sure the conventions are uniform and
apply them consistently.
Stages
Stages describe the phases of a case transaction. Use nouns or gerunds (verbs ending in “ing”)
as a convention. For example, an auto loan case might have the following stages: submission, review,
underwriting, and financing. Avoid using verbs that describe an action or a status: order placed, shipped,
account paid, and closed.
Steps
Steps represent a process, an assignment, or another case. Steps are actionable “things to do”.
The name should describe the step’s overall purpose. Use a verb plus noun convention. For example, an
auto loan case might have a step named “Submit Loan Application”.
Case Types
Case types represent business transactions (cases). For example, a lending application
might have auto loan, personal loan, and home equity loan transactions. In other words, a case represents
what must be processed, not how it is processed, which is described by stages and steps. Use a noun
convention. Avoid using vague, non-specific names such as Auto or Personal, which do not provide
enough context. For instance, in a financial services organization, there may be Auto Loan cases and Auto
Accident cases.
Data Object Types
Data object types are classes that contain information, such as properties and
data pages, that is necessary for a case to accomplish its actions or achieve its business outcome. In
other words, they represent “things.” Use a noun convention; for example, Cost Center, Country, and
Exchange Rate.
Flow Actions
Use a verb and noun convention for flow actions, for example, Approve Correspondence.
A flow action can reference other records. When associated with certain activities, data transforms, and
validation records, prefix the other record’s name with:
Pre — if used with a pre-activity or data transform; for example, PreApproveCorrespondence
Post — if used with a post-activity or data transform; for example, PostApproveCorrespondence
Val — if used with a validate record; for example, ValApproveCorrespondence
When naming other records related to the flow action, follow these guidelines:
Section – use the same name as the flow action
Privilege – use the same name as the flow action
General Guidelines for Record Naming
The general guidelines for all records are:
Choose names that reflect the purpose of the record.
Keep names short, but not so short that the purpose of the record is not evident.
Start all names with a capital letter. Start words within names with capital letters. Capitalize any
acronyms within names.
Do not include underscores or other special characters.
Start the names of active rules, such as activities and flows, with a verb indicating the main action
of the rule. Usually, include a noun or noun phrase indicating what the rule operates on. For
example, CreateClaim and ApprovePayment.
For passive or object records, such as parameters, properties, work parties, libraries, and roles,
choose nouns or noun phrases indicating the use of the rule. For example, ProjectGoalDate,
ClaimID, and DefenseAttorney.
Documenting Rules
Use the History tab to document the record. Enter text that describes the purpose in the description field. Enter guidance to users
of this rule, for example design-time instructions or special runtime considerations, in the usage field.
The Custom Fields section displays a list of name-value pairs associated with the rule. Custom fields can be used to help
identify specific development purposes, such as indicating a theme, a bug or feature request identifier, or the person who
requested the change. Custom fields can be associated with any unlocked rule, but are not available for data instances.
There is a wizard provided to search for rule instances by custom field names. Start the wizard by selecting DesignerStudio >
Process & Rules > Tools > Find Rules By Custom Field.
The wizard searches only rule types that belong to the decision category. To meet specific reporting requirements we can
create custom report definitions in the Index-CustomFields class.
The Feedback Feature
The Feedback feature allows users, testers, and developers working with the application to record and submit ideas, suggestions,
bugs, issues, and proposed solutions.
The Feedback feature needs to be enabled for the application on the DesignerStudio > System > Settings > Project Management
landing page.
In addition, the access role PegaRULES:Feedback needs to be added to the access groups of operators that should be able to
send feedback.
Note: The feedback pushpin is not available on the developer portal.
Click and drag the pin to the area on the screen that you want to reference in the feedback. The email address in the operator
profile automatically populates in the From field.
A case of class Pega-Feedback is created for each feedback submission and routed to the workbasket called Feedback. The
Feedback workbasket can also be accessed from the Show Feedback button next to the Pin button in the portal. Click the
Feedback item you want to review.
The Feedback item contains the details provided along with the screenshot. There is a standard feedback flow called
pyUserFeedback that can be used as is or customized. The standard flow allows us to resolve, reject, or withdraw the feedback
item. It is also possible to request assistance from an operator on the system. The feedback item is then assigned to that operator’s
worklist. Note that the feedback work object always remains in the Feedback workbasket, even after it has been resolved.
Although the Feedback feature can be used and provides value on its own, it is intended to be used with the Project Management
Framework. The Project Management Framework (PMF) is a powerful project management application for managing Scrum
projects. PMF can be used to jump-start product planning by using its capability to import user stories and other data directly from
an Application Profile or Microsoft Excel spreadsheet. PMF’s interactive Scrumboard, reporting, and self-documentation features
improve visibility into project plans, progress, and status. PMF enables teams to update their tasks and bugs directly from the
Designer Studio.
Custom Guardrail Warnings
Guardrail warnings identify unexpected and possibly unintended situations, practices not recommended, or variances from best
practices. We are going to look at how we can create additional warnings that are specific to our environment or development
practices.
Unlike rule validation errors, warning messages do not prevent the rule from saving or executing. For example, activities have a
warning message, but can be saved and executed.
The warning details are stored in a page list property called pxWarnings on the rule.
We can review the warnings for an application on the landing page found at DesignerStudio > Application > Guardrails. The
Warning Summary tab gives us an overview of existing warnings and provides a filter feature. The Warnings Details tab lists all
warnings.
To add or modify rule warnings, override the empty activity called CheckForCustomWarnings in @baseclass. This activity is called
as part of the StandardValidate activity, which is invoked by, for example, Save and Save As, and is designed to allow us to add
warnings.
Typically we want to place the CheckForCustomWarnings activity in the class of the rule type to which we want to add the warning.
We can add a CheckForCustomWarnings activity higher up in the class hierarchy, for example in Rule. However, keep in mind that
this activity is not called for rule types that have a more specific CheckForCustomWarnings activity in place.
This CheckForCustomWarnings activity adds a custom guardrail warning to the flow action rules missing a description. The
CheckForCustomWarnings activity has the rule that is being saved as its primary page.
In the first step we check if the description property has a value. If it does, we exit the activity; otherwise we set the properties for the
warning. The warning is added in the second step using the standard function rule pxAddGuardrailMessage. The
pxAddGuardrailMessage function does not have a return value and must therefore be called from a Java step.
Let’s have a look at the parameters for the pxAddGuardrailMessage function:
addToProperty – Clipboard property to add warning to
ruleMessageName – Name of message rule
messageParameters – HashStringMap of parameters
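Putting these together, the body of such a Java step might look roughly like the following sketch. This is
illustrative pseudocode, not verbatim generated code: the message rule name and parameter key are made up,
the HashStringMap usage is an assumption, and in a real Java step the function rule is invoked through its
generated function library rather than as a bare method call.

    // Sketch of a Java step body; myStepPage is the rule instance being saved.
    if (myStepPage.getString("pyDescription").length() == 0) {
        // Collect the message parameters; HashStringMap API usage is assumed here.
        com.pega.pegarules.pub.util.HashStringMap params =
            new com.pega.pegarules.pub.util.HashStringMap();
        params.put("RuleName", myStepPage.getString("pyLabel"));

        // Hypothetical call; the real invocation goes through the generated
        // function library for pxAddGuardrailMessage.
        pxAddGuardrailMessage(
            myStepPage.getProperty("pxWarnings"),  // addToProperty
            "FlowActionMissingDescription",        // ruleMessageName (illustrative)
            params);                               // messageParameters
    }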
The message rule looks like this.
Flow actions without a description now show a warning.
When we add a new warning, it does not appear until the rules are saved again. Also, it is important to make sure that the developers
who should see the warning include the ruleset containing the CheckForCustomWarnings activities in their ruleset lists.
The Organizational Structure
The organizational structure consists of six main components. The organization is the top level of the three-level hierarchy available
to all of our applications. One system can support multiple organizations. The division is the middle level of the hierarchy.
Organization units form the bottom level of the hierarchy.
Operators, sometimes also called users, caseworkers, agents, or CSRs, are named users in the system. An operator always
belongs to exactly one unit. Each operator has a worklist containing the work assigned to him or her.
In addition to the operators there are workbaskets, which similarly belong to an organizational unit. A workbasket can be
accessed by multiple operators and is associated with a work group.
A work group identifies a manager and a set of operators and/or workbaskets as a team. Work groups are not part of the
three-level hierarchy and can span units, divisions, and organizations.
In addition, a calendar is used to define the business days and holidays for the organization.
The organizational structure is typically used to:
Route work to the correct operator or workbasket
Determine the access rights of operators
Report on the activity of users by division, unit, and work group
The following diagram describes the relationship between the organizational objects.
To summarize:
An organization is linked to a calendar that defines the business days.
A division belongs to an organization. An organization can have several divisions.
An organization unit belongs to a division. A division can have several organization units.
An organization unit can report to one or more other organization units.
An organization unit has a manager.
An operator or workbasket belongs to an organization unit.
An operator’s or workbasket’s availability is defined by a calendar.
An operator or workbasket belongs to a work group.
A work group has a manager.
A work group has a default workbasket.
An operator reports to a manager.
The organizational objects can be accessed from the Organization category in the Records Explorer.
The Organization landing page found at DesignerStudio > Org & Security > Organization has two tabs. The Operators tab displays
a list of operators in the system and information about them.
Use the Organizational Chart tab to view, create, or modify the organizational hierarchy and operators in the system. Right-click to
add a child node or operator, view operators, or open an instance.
In certain situations the organizational structure might not be able to accommodate the requirements. This
might be the case when, for example, an operator reports to several managers, or belongs to several work
groups or units.
Requirements like this are typically related to routing, for example, a user might have one cost center
manager approving his or her expenses and a primary and secondary manager approving other types of
requests.
In order to meet such requirements, the organizational structure needs to be extended. Properties can be
added to the classes representing the organizational objects, for example Data-Admin-Operator-ID for
operators. This is a good option if the properties are maintained in an external system, for example, if they
are provided by the LDAP server at login.
However, if the properties are to be maintained in Pega it is better to create a data table for easy
maintenance. In either case custom router activities utilizing the new properties or data table need to be
created.
Let’s have a look at how the fictional company LES is organized and how it is mapped to Pega’s organizational structure in a
Product Inventory and Procurement application.
LES is divided into regions, currently U.S. and EU, and each region has a number of departments. There are currently three
departments: Sales, Fulfillment, and Audit. The sales department in the U.S. is divided into three geographical areas: east, central,
and west.
Each geographical area has its own cost center manager, who reports to the department manager. Some people in sales, such as
the department supervisors, work across geographical areas.
The staff is divided into units with a unit manager. There are units in sales that span several geographical areas. Each unit
manages their own pool of work. Each sales representative reports to one of the department supervisors in addition to his or her
unit manager. Each department supervisor reports to the department manager.
Let’s have a look at how this can be mapped to the organizational model in Pega. The company LES is mapped as the organization
with the regions mapped as divisions.
The departments are mapped as units. The geographical areas in sales are mapped as units reporting to the sales unit. The cost
center manager is mapped in the manager field of the unit. Each cost center forms its own work group.
The units are mapped as work groups with the unit manager being the work group manager. Each work group has its own
workbasket. Each sales representative is mapped to a department supervisor through the Reports To field on the operator record.
The sales supervisors and department managers that do not belong to a specific region are mapped directly to the sales unit. They
form their own work group called sales management. The sales supervisors report to the sales manager.
There are a few things to note when mapping the organization structure:
Try to understand routing and reporting requirements.
Ignore terminology when mapping the organizational structure. For example, in our case what was called unit was mapped
as work group.
Do not try to map directly to the organizational chart of the organization. Instead try to understand what you need to
achieve with the organizational structure.
A good organizational structure is one that supports the routing and reporting requirements and is extensible.
Ruleset Validation Mode
Ruleset validations are performed every time a rule is saved. The purpose of ruleset validation is to guarantee that rules referenced
are available on the target system when the ruleset is promoted. It is important to understand that ruleset validation does not affect
rule resolution at runtime; it is only applied at design time.
The selected validation mode applies to all versions of that ruleset. Application-Based Validation is selected by default for new
rulesets.
Ruleset Prerequisite-Based Validation
Ruleset Prerequisite-Based Validation (RSP) requires us to specify prerequisites. The Required RuleSets And Versions section
appears when RSP is selected.
Enter one or more ruleset versions on which this ruleset version depends. For example, if we are creating a ruleset version for an
application that uses rules in Claims:01-01-01, then that ruleset version needs to be specified as a prerequisite. Only rules in the
rulesets specified as prerequisites can be referenced.
Every ruleset has a dependency on the base product, so if our ruleset version doesn’t have any other prerequisite ruleset version
(one that directly or through its prerequisites depends on the base product), we need to specify the Pega-ProcessCommander
ruleset as a prerequisite.
The Pega-ProcessCommander ruleset lists all product rulesets, so we don’t need to list any product rulesets below
Pega-ProcessCommander.
There is a 99 patch version of the Pega-ProcessCommander ruleset available in the product. Use that ruleset version as a
prerequisite to avoid having to update the ruleset after product updates.
Ruleset prerequisites cannot be cyclic. For example, if Alpha:01-01-01 defines Beta:01-01-01 as a prerequisite, then Alpha:01-01-01
can’t be a prerequisite for Beta:01-01-01.
Consider using RSP for rulesets shared across multiple applications, such as organizational rulesets. This promotes isolation and
allows reuse by other applications. The organizational rulesets created by the New Application wizard are of mode RSP.
Application-Based Validation
If the Application-Based Validation (ABV) mode is used, rules in the ruleset can reference all rules in the rulesets defined in the
same application and rulesets belonging to any built-on application. They cannot reference rules outside the current application
stack or above the defining application.
An ABV branched ruleset has access to all rulesets in the application, including other rulesets in the same branch as well as any
other branches defined in the same application definition. Hence, all branches within a given application can refer to each
other.
The ABV mode of operation provides significant performance advantages over the alternative RSP approach and is recommended
in order to close the gap between design time and runtime. The framework and application rulesets created by the New Application
wizard are of mode ABV.
It is a best practice to create applications for common rulesets and use the built-on functionality to include them in the application.
Another best practice is to include unlocked ABV rulesets in one application only. This prevents ABV rulesets from referring to rules
that may not exist in all applications in which the ruleset is contained.
Let’s have a look at a simple example; it is assumed all rulesets are configured as ABV. Each shaded or unshaded cell represents
an application and each name represents a ruleset.
Within a given ruleset name, we cannot call rules in a higher ruleset version. For example, rules in LoanPricing:01-01-01 cannot
reference rules in LoanPricing:01-01-03. For versions across different ruleset names, the standard ABV logic is applied.
ABV, in contrast to RSP, allows for co-dependent rulesets within the same application. That is, rules in LoanPricing can
reference rules in LoanUnderwriting, and rules in LoanUnderwriting can reference rules in LoanPricing.
The next example uses a mixture of ABV and RSP. Rulesets with brackets indicate RSP rulesets with their respective prerequisites.
With RSP, we are unable to call ABV rulesets that are not in the prerequisites.
Changing the application definition might invalidate rules. The Validation tool is designed to quickly identify any invalid rules in the
application. The tool can be started by selecting DesignerStudio > Application > Tools > Validation. Use the Validation tool after
critical changes and milestones, such as changes to the application ruleset list or built-on application, and before lock/export.
Manage Rules, Rulesets and Applications
Let’s have a look at a few features that help us to manage development. These features are especially useful
for large development teams or teams with inexperienced resources.
Ruleset and Application Management
Let’s have a look at the ruleset Security tab.
It is possible to define passwords to restrict access to updating the ruleset and its versions. Specify the
passwords in the Define Passwords section.
When updating the ruleset record or adding or updating a version we need to provide the password.
We might also want to consider requiring a password to update the application record.
Rule Management
We can specify an approval process for rule checkins in the Rule Management section.
This allows the lead developer to review each and every rule change before the rule is checked in. Specify
the name of the class and flow of the approval process we want to use. There is a standard flow called
ApproveRuleChanges provided in the Work-RuleCheckIn class. The standard approval process is described in more
detail in the lesson “Delegating rules to the business user”.
Refactor
There are four utilities available to support refactoring of rulesets. They can be started from the DesignerStudio > System > Refactor
> RuleSets landing page.
Copy/Merge RuleSet
Let’s start by having a look at the Copy/Merge RuleSet utility.
With this utility we can:
Rename a ruleset
Change a ruleset version number
Merge several rulesets into one
Merge ruleset versions into one version
The main difference between copy and merge is that copy always leaves the source ruleset versions unaltered. Merge provides us
with the option to delete the source ruleset versions.
Merging also provides us with the option to modify non-versioned rules, which copy doesn’t.
Delete a RuleSet
The ruleset delete utility can be used to delete an entire ruleset or a specific version of a ruleset.
Never delete a ruleset version that contains rules that are in use. Review the application and make sure it does not reference rules
to be deleted. Also, review data instances such as access groups, operator IDs and integration components to ensure that they do
not reference rules to be deleted.
The ruleset to be deleted can’t be locked or have checked out rules.
The utility saves the deleted ruleset in an archive file named <RulesetName>_<RulesetVersion>_DELETE.zip in the ServiceExport
directory located under the temp directory. To recover a ruleset, specify the name and version number and then click the Restore
button, or alternatively use the Import utility.
We can also delete a ruleset version directly from the ruleset form using the delete button, provided there are no rules in it. Once all
ruleset versions have been deleted, the ruleset itself can be deleted.
Skim a Ruleset
Skimming is the act of taking the highest version of each rule and saving it into a new, higher ruleset version.
There are two types of skims, major and minor. These reflect the three-part naming convention for ruleset versions:
major-minor-patch. Rules are stored in a higher minor version during a minor skim, and in a higher major version during a major
skim. For example:
Major: skimming rules in 06-05-01 through 06-09-25 into 07-01-01
Minor: skimming rules in 06-05-01 through 06-09-25 into 06-10-01
The rules carried forward during a skim depend on the type of skim and the availability of the rule.
           | Available (Yes) | Not Available (No) | Final | Withdrawn | Blocked
Major skim | Yes             | No                 | Yes   | No        | Yes
Minor skim | Yes             | No                 | Yes   | Yes       | Yes
Blocked rules are carried forward because a blocked rule can block rules in other rulesets and that relationship should be
maintained.
Rules in versions below the major version are not visible to rule resolution. For example, if we list Alpha:02-01-01, which depends
on Beta:02-01-01, Alpha:02-01-01 won’t see rules in Beta:01-XX-XX. Therefore, it is a best practice to create new major versions by
doing a major skim.
Also consider skimming when the product rule becomes large or when the version numbers are close to exhaustion.
Rulebase Compare
The Rulebase Compare tool is discussed in the Advanced Application Migration lesson.
Deciding When to Use Dynamic Referencing
The fundamental goal of a framework is that it is easy to extend to meet the requirements of a particular
implementation, but without unnecessary maintenance cost. This translates to an important edict: don’t
always copy rules. Although rule resolution dictates that implementation-layer rules are chosen over framework
rules, copying from the framework to the implementation layer isn’t always the right thing to do.
Let’s consider a couple of examples. If the process laid out by a flow rule is completely different between
implementations, then copying the flow rule from the framework to the implementation is entirely appropriate. There
is no better way to maintain the details of the process than having a dedicated rule.
On the other hand, let’s say the process is the same, but only a small element of the process, for example the
workbasket of a particular assignment, is different. In this case, copying the rule would be an inappropriate design.
Doing so would introduce unnecessary maintenance costs, as now there would be multiple places to update the
flow should it change some day.
Further, having separate rules would give the false impression that the processes are different, when in fact they
are the same. Dynamic referencing is a mechanism that provides an alternative to copying rules, allowing a
variable to be used for a parameter value rather than a hard-coded value.
The use of dynamic referencing is not limited to frameworks. Consider the following situation. In a claims application
we can handle both home and auto insurance claims. Some of the flows are used for both claim types and are placed in
the class group (work pool) class.
Some flows can be reused asis for both claim types, but some require minor customization. For example, there
might be a step in which the case is routed to a workbasket, which differs depending on the claim type and/or other
parameters, such as the organizational unit. This is also a perfect use case for dynamic referencing.
Extending a Framework
When configuring dynamic referencing for a framework, it is often advisable to create a data class to house the properties used as
the dynamic reference variables. That way it is easy to get an overview of the extension points that need to be configured when
extending the framework.
Here we’ve chosen the name “AppExtension” for the class housing the extension properties. This class should be in the framework
layer.
We then create a page property of the AppExtension class in the framework’s class group. This property is then available to all
application layers.
The property points to a data page holding the extension values.
The value is set for each application. Typically a data transform would be used, which is overridden in each application ruleset. In
this case we are using a data transform called AppExtension.
However, we could use an activity with, for example, a decision table.
The AppExtension property can then be referenced, for example, as shown here to specify the workbasket name.
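For instance, the workbasket router in the shared flow could take its destination from the extension page instead of a
hard-coded value. The sketch below is illustrative: the WorkbasketName property is an assumption, and the parameter
name follows the standard ToWorkbasket router.

    Router: ToWorkbasket
    WorkBasket parameter: .AppExtension.WorkbasketName
    // Resolves per application, via the data transform overridden in each application ruleset.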
Configure Dynamic Class Referencing
Consider the following scenario. In our claims application there is a subrogation subcase for some claims.
The subrogation class is defined in the framework layer, but is also defined at the application layer. How can we tell the
pxAddChildWork rule, which is called in the framework layer, to create an application-layer subcase? Again, we can do this using
dynamic referencing. This particular use case demonstrates the “Dynamic Class Referencing” pattern.
First, we add the SubrogationClassName property to our AppExtension class.
Then we set the value in the pyDefault data transform in the application layer. As we said earlier, the data transform is picked by
way of rule resolution. We have a similar copy of the data transform in each implementation.
We can use the property in the pxAddChildWork utility in the framework layer to create an application layer case type.
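A minimal sketch of the two halves, assuming the class and property names used above (the literal class name and the
way the parameter is wired are illustrative):

    // In each implementation layer's pyDefault data transform:
    Set .AppExtension.SubrogationClassName = "MyCo-Claims-Work-Subrogation"

    // In the framework flow, the pxAddChildWork utility reads the property
    // instead of a hard-coded class name:
    class = .AppExtension.SubrogationClassName

Because the data transform is resolved against the implementation class, each application supplies its own subcase
class without any change to the framework flow.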
What Solution Frameworks Are
A solution framework is a reusable set of functionality for a targeted business need. These frameworks are
typically industry-specific. For example, “Smart Investigate for Payments” supports banking operations,
whereas “CMF (Care Management Framework)” manages processes for healthcare companies.
When a framework is used, the amount of configuration that is specific to the organization’s application is
lessened, thereby reducing implementation, support, and maintenance costs.
Frameworks can range in scope. Some provide a set of reusable components that can be leveraged when
building an application, whereas others provide a near-complete application, requiring very little
customization.
Many frameworks are provided by Pegasystems, particularly those that are near-complete applications.
However, frameworks can be built by Pegasystems’ partners and customers as well.
The Impact of Using a Framework
The use of a framework impacts the approach taken when entering into DCO sessions.
When a framework is not used, DCO sessions are for capturing objectives. This is fitting. After all, the application is essentially
being built “from scratch”, so it is necessary to capture the requirements for the new application. On the other hand, if a framework
is used, there should be a shift to gap analysis discussions.
There should be an effort to exploit the features of the solution framework. The idea here is that the application isn’t really being
built; rather, customer-specific framework gaps are filled in, in order to deem the application complete.
With this in mind, it is essential to understand the details of what the framework actually does, and also the reasons for choosing to
use a framework. Of course, those features that were central to that decision should be key pieces of the application and
springboards for gap analysis discussions. All of these specifics must be understood well before DCO sessions have begun.
The use of a framework can impact the approach to building the application. As we said before, frameworks can range in
comprehensiveness, from a reusable set of components to an almost-complete application.
Pega “Foundation” frameworks, such as the Financial Services Industry Framework (FSIF), are good examples of frameworks on
the “reusable component” end of the spectrum, whereas others are closer to complete applications.
When the framework is a set of reusable components, there is a fair amount of configuration in the institution’s application and
framework classes. However, even if the framework is used as-is, it is highly recommended that application classes are still set up.
It should always be assumed that some bit of institution-specific configuration is required, even if not needed right away.
Extending a Solution Framework
Do not confuse the concepts of a “Solution Framework” and “Application Framework Classes”. The framework layer of
classes is an integral part of the enterprise class structure. When a solution framework is used, the application
framework layer is built on the solution framework, rather than on the base product.
Let’s take a closer look at an example of how the New Application wizard is used to extend a solution framework. The
Financial Services Industry Foundation provides a reusable set of components that can be used to jump-start the
building of a financial institution application. It includes 7 case types, including those used to set up a new account and
investigate suspicious activity.
Let’s say we want to create a Loan Application for our institution MyCo. We run the New Application wizard to extend the
Pega Financial Services Foundation Application framework. We specify the solution framework as the built-on
application.
We need to decide on the application structure. There are three options available, and depending on how we want to use
the solution framework we can use any of them. Select ‘Implementation Only’ if there is only one implementation or if
there will be no need for reuse at the application level. Select ‘Framework Only’ or ‘Framework and Implementation’ if
you want to build a framework that extends the solution framework.
The business objectives are pulled in from the solution framework and can be edited to fit our implementation.
The case types available in the solution framework are listed in step 3. We can select the case types we want to include
in our implementation.
The last step allows us to select data types we want to include in the application.
Let’s preview the application to be built. The structure looks the same as when we build on the base product. This is a
critical design concept: We still use the enterprise class structure, but inherit from FSIF in order to leverage its assets.
Flow rules are also created, but notice that these are not copies. They are wrappers that point to the solution framework
flows. These flows can be edited and augmented as needed.
If instead we built a loan application framework extending the solution framework, with several applications, then our loan
application framework classes inherit from the solution framework, and the application classes inherit from our loan
application framework classes.
Common Mistakes to Avoid When Using a Framework
For the final segment of this lesson, let’s identify a few mistakes that should be avoided when using a
solution framework. The first mistake is when the solution framework is not understood early enough. As
we said earlier, it is essential that both the functionality of the framework, as well as the business drivers for
choosing to use the framework, are clearly known well before DCO sessions begin.
Another mistake is when the features of the framework are not adequately used. Don’t look at a framework
as an example of what should be built; it should be the foundation of, and an integral part of, the eventual
application.
Finally, while this may sound obvious, we simply must point it out: read the instructions. While there is a lot
of consistency in how frameworks are installed and how they are used as starting points for new
applications, there are some differences. Be sure to understand these distinctions before getting started.
Rename a Class
Using the Rename a Class tool we can rename a class and all of its pattern-inheritance dependent classes and associated
objects, such as work instances. We can only rename a class if the following conditions are met:
The class must be visible in your access group.
All rules to be modified must be checked in.
All rules to be modified must belong to unlocked ruleset versions.
The new name of the class must not already exist in the system.
It is not possible to change an abstract class to a concrete class or vice versa.
We recommend that a full backup be performed before running the Rename a Class wizard. The wizard creates a backup of the
renamed classes in the ServiceExport directory. The file is named with the old class name with “_REFACTOR” appended and
can be imported using the import tool.
Select DesignerStudio > System > Tools > Refactor Rules > Rename a Class to start the wizard.
Alternatively, select a class in the App Explorer, right-click, and select Rename class from the context menu.
Select the class to be renamed and enter the new name for the class.
By default the search is limited to rulesets in the operator’s ruleset list. Select No if you want to select from all rulesets in the
system. Standard Pega rulesets are always excluded, since all versions of these rulesets are locked.
The ruleset’s scope allows us to select the rulesets to search when renaming.
This step appears only if there are work item instances associated with any of the classes being renamed. Selecting Yes
updates the work item instances by changing the pxInsName and pzInsKey values to reference the new class name.
Click No to leave the work items unaffected. If we choose No, work items that referenced the original class can only be viewed as
XML.
This page lists all the classes that will be renamed, including subclasses of the class specified that are associated by pattern
inheritance.
The page has no input fields. Click Next to confirm the changes.
This step appears if the wizard finds rules to be modified in locked ruleset versions. For each ruleset version, supply a
password to unlock the ruleset or select Skip.
If a password is provided, the wizard unlocks the ruleset version, renames the rules as specified, and then relocks the ruleset.
If we choose Skip, the wizard does not rename rules in the ruleset version. These rules are listed on the wizard summary page
with a status of SKIPPED.
This step appears if the wizard finds rules to be modified that are checked out. Click Display to view checked out rules.
Check in any rules that you checked out and arrange for any rules checked out by other users to be checked in before
proceeding.
It is possible to continue the rename process with checked-out rules. The wizard renames the original rule, but checked-out
instances of the rule are not modified. As a result, the checked-out instances can no longer be checked in, and
changes to the checked-out instances are lost. It is therefore recommended that all rules being modified be checked in
before proceeding.
The next page has no input fields and displays the number of occurrences of the class name that will be changed. Use the
Export To Excel button if you want to review all the rules that will be changed.
The next page displays matches with the class name we asked to change that are found in strings that are not clearly class
names. The wizard cannot determine whether these strings should be changed when the class names are changed. Select the
instances to be renamed.
Clicking the Next button starts the renaming.
The final page lists all rule instances that could not be changed successfully. In this case all rules could be changed successfully
so the list is empty.
The following error conditions are reported:
Fail – The system was unable to refactor the rule.
Skipped – Rules that were not selected for refactoring are marked as skipped.
Refactored with errors – The refactored rule was saved to the system, but is not valid. These rules must be
reviewed and corrected.
The old class is not deleted until all changes to dependent classes and Work instances are completed. If the renaming process
is interrupted while in progress, we can restart it by executing the utility from the beginning.
Use the Review Log button to view the details of all changes.
When complete, the Rename a Class wizard has changed the system in the following ways:
The class is renamed.
Direct and pattern inheritance references the new class name. As a result, the rename has no effect on rule resolution.
Work instances associated with the class are renamed, if the option was selected.
All rules that referenced the original class now reference the class by the new name.
The Data-Admin-DB-Table instance is updated to reflect the new class name if the class is a work pool or defines an
external table.
The wizard creates a new history class for the new class, and the history class for the old class name is deleted.
Move Rules
Using the Move Rules tool we can move rules from one class to another. It is possible to use the Move Rules tool only if the
following conditions are met:
The class to move from and to must be visible in our access group.
All rules to be modified must belong to unlocked ruleset versions.
Select a class in the App Explorer, right-click, and select Refactor > Move rules from the context menu to start the tool.
First we select the class and ruleset version to which we want to move the rules, and then select the rules we want to move.
Checked-out rules are listed separately and are moved independently of the main version. We want to move the
DateOfLoss and PolicyHolder properties to the parent class.
Delete a Class
Use the Delete a Class wizard to remove a class. The wizard removes the following:
The class itself and all of its pattern inheritance dependent classes.
Associated rules such as properties and activities, including checked out ones.
Instances, including work items.
Attachments and assignments.
Classes related to deleted classes by direct inheritance are not deleted.
The wizard creates a backup of the deleted class in the ServiceExport directory. The file is named with the class name with
“_DELETE” appended and can be imported using the import tool.
It is possible to delete a class only if the following conditions are met:
The class to be deleted must be visible in our access group.
All rules to be modified must belong to unlocked ruleset versions.
Deleting a class might render an application inoperative. It is therefore recommended that a full backup be performed before
running this utility.
Select DesignerStudio > System > Refactor > Rules > Delete a Class to start the wizard. First select the class to delete.
The next page displays the class and subclasses that will be deleted.
If there are rules that the wizard was unable to delete, those are listed here. The message column explains why the rule was not
deleted and the actions required to delete it.
If there are undeleted rules, the class cannot be deleted. However, all other rules associated with the class have been deleted. To
complete deleting the class, modify the rules listed so that they can be deleted and then run the wizard again.
Use the Reference Search dropdown to set the scope over which the wizard is to search for references to the deleted class.
The final step displays references to the deleted objects that may need to be fixed. If this search is made over the entire rulebase
on a large system, it may take some time for this page to display. Review the list and correct any invalid references that might
interfere with your system.
Search/Replace a String
Use the Search/Replace a String wizard to search for and replace a string wherever it appears in rules in the entire rulebase.
Strings are replaced only in rules and data instances, not in work items, assignments, attachments, or other objects.
It is possible to search and replace a string only if the following conditions are met:
All rules to be modified must be checked in.
All rules to be modified must belong to unlocked ruleset versions, or the passwords need to be at hand.
The wizard uses full-text search to reduce search time and increase accuracy. It is therefore recommended that we enable
full-text search for rules before running this utility, unless the scope is limited to a few rulesets. Open the DesignerStudio > System >
Settings > Search landing page to check whether search has been enabled for rules.
Rules updated by the wizard are not automatically revalidated and may become invalid as a result of string replacement. As a
precaution, it is recommended that a full backup be performed before running this utility, in case the results of the wizard are not
satisfactory.
Select DesignerStudio > System > Tools > Refactor Rules > Search/Replace a String to start the wizard. Specify the string value to
be replaced in the Original String Value field and the string to replace the original string value in the New String Value field. We can
limit the rulesets we want to search.
This page is shown if the string to be replaced is found in a class or ruleset name that would be renamed. Although it is possible to
proceed, it is strongly recommended that you cancel the wizard, use the Rename a Class wizard to rename any classes listed
and the Copy/Merge Ruleset wizard to rename any rulesets listed, and then rerun this wizard. Changing class and ruleset
names with the Search/Replace a String wizard is very likely to cause problems.
The next page is shown if the wizard finds rules to be modified that are checked out. It is not possible to continue the search and
replace process until the rules have been checked in.
The next page lists all the rules that contain a match with the string to be replaced. Select the instances to be replaced. Only
selected instances are changed.
If we choose to change rules in subclasses of Work- or any of its properties, the subsequent processing of work items in those
classes may encounter problems.
Click Finish to start the update of the selected rules. Depending upon the number of rules affected, this may take minutes or
longer to complete.
The final page lists all selected rule instances that could not be updated successfully. The following error conditions are reported:
Fail – The system was unable to refactor the rule.
Skipped – Rules that were not selected for refactoring are marked as skipped.
Refactored with errors – The refactored rule was saved to the system, but is not valid. These rules must be
reviewed and corrected.
Rule Compare
The Rule Compare wizard lets us compare any two rule instances of the same rule type. Select DesignerStudio > System >
Refactor > Rules > Compare Rules to start the wizard.
In the panel labeled Instance #1 we need to specify the search terms for the first of the two rule instances we want to compare.
Select the rule to compare from the list of returned results.
Use the panel labeled Instance #2 for the second rule instance.
The next page shows a sidebyside comparison of the differences found. Each row in the list shows the name of the property that
is different with its value in the first rule instance, its value in the second rule instance, and whether the difference was a value
change, an insertion, or a deletion.
Click a row to see more details.
This concludes the lesson on Refactoring an Application.
Application Design Case Study: Injury Reporting Application
This case study involves architecting the class structure for a system used to log work-related injuries taking place
at manufacturing plants of an automobile and boat manufacturer.
The company that will be using this application is Velo Motor Company, an international automobile and boat
maker.
Velo has two divisions for their auto business:
First, the well-known "Velo" line, which represents their primary line of cars and trucks.
There is also the "Luxo" line, which comprises three luxury models.
There is one marine division; this is the "Floato" line, which consists of two models.
Velo has five worldwide manufacturing plants, in the United States, Canada, Germany, Brazil, and Japan.
Here is a list of Velo's current models. The six models in the Velo division are all prefixed with the letter "V", and the
two truck models start with "VT". Likewise, the Luxo and Floato Models are prefixed with "L" and "F".
The relevant Velo organizations consist of manufacturing, marketing, and sales.
Part manufacturing is performed only in Canada, and many parts are reused across models and model lines. The
chassis used in the ultra-plush L40 is the same one used in the economical V10.
Boat Assembly is only done in Brazil, and cars and trucks are assembled at all plants.
Marketing and Sales are separated by Product line and Division. For example, the Luxo division has a completely
different website than the primary Velo line.
Application-Specific Requirements
Illness and Injury Reporting system background:
Corporations in many industries must log work-related fatalities, injuries, and illnesses. For example, if a
hospital employee were to contract tuberculosis at work, then this event must be logged in the hospital’s records.
Manufacturing plants, such as Velo’s auto and boat plants, must also track injuries and illnesses.
The guidelines are country-specific, as they are set by the regulatory agency for the country. For example, OSHA
sets safety and health regulations for the United States, CCOHS does so for Canada, GDA for Germany, and so
on. Certain extreme events, such as death, must be reported to the regulatory agency immediately. In our case
study, we refer to these communiqués as “submissions”. Submission processes and requirements also differ by
country. Some countries have updated or additional rules based on state or province. Typically, these rules are
more stringent forms of the national guidelines. There are also some guidelines that are specific to injury type. A
very small subset of injuries requires injury-specific fields to be filled in. For example, with hearing loss, the actual
assessment of the loss, measured in decibels, must be recorded.
Velo’s Specific Requirements
There are three processes that must be supported:
1. An injury or illness must be recorded. This is more than simply filling out a form; it is a guided and dynamic
data-entry procedure that is specific to the regulations of the plant’s country. The culmination of these
entries is an electronic “Logbook”.
2. A Summary of these records must be generated, for every plant and year. Each Summary record must be
verified, certified, and posted. Notably severe events must be reported to the regulatory body of the
corresponding country, and the status of this submission must be tracked. As for reporting, the reports of
these record types will be separate; that is, there will never be a need for a list of records that is a mix of
Injury Records, Annual Summaries, and Submissions. However, because Summaries are basically a
culmination of Injury Records, and Submissions are spawned by Injury Records, it is reasonable to assume
that Injury Record information might be included in Summary or Submission Reports.
3. Each plant must be able to manage their own business rules, because each is in a separate country.
Mission
Build the class structure for the Velo Injury and Illness Reporting and Submission (IIRS) Application. Do not build
the complete application such as flows, properties, and user interface. We are only responsible for building the
class structure that can house these rules once the complete application is built.
Consider what additional questions we would ask of Velo to help us with the application design.
Approach
Be sure to consider the requirements that have been laid out thus far, including manufacturing plants, state- or
province-specific regulations, and injury-specific data entry.
Take into account the Velo requirements and information that has been provided; however, not every detail is
pertinent to the application design. This is meant to be a simulation of real life. It is not uncommon to be provided
with a deluge of information, only some of which is germane to the application design.
Use common sense and PRPC best practices, in addition to the Velospecific requirements.
Keep in mind there is not necessarily one right answer.
Construct a New Data Structure
We can use the Data Explorer to add or remove data types.
Deselecting a data type removes it from the Data Explorer, but the underlying rules are not deleted. Select
View all to display all the data types available.
Select Create New to add a new data type to the application. We want to add an Address data type.
The wizard creates the class specified in the ID field using the given details.
In the next step we can add properties to the data type.
A property can be defined as a single value, value list, or value group, in which case the type must be
specified, or as a page, page list, or page group, in which case we need to specify the page class.
In the next step we define how we want to display the properties.
In the final screen an overview of the properties to be created is displayed. Click Submit to create the
properties.
Properties can also be created directly from the Application Explorer. Use the New link to create a single
property, or the Create properties link to create several properties in a similar way to how we created them
using the Data Explorer.
In many situations it is very useful to quickly understand the data model of an application that is in
development or already developed. There are two landing pages (DesignerStudio > Data Model > Classes
& Properties) that provide a more complete view than the Application Explorer.
The Property Tree tab shows the properties defined, and their types, for a given Applies To class; here we
see our general work class.
The Class Relationship tab shows the relationships of just the page, page list, and page group properties.
This can be helpful as a high-level relationship model.
Now that we have created the classes and property definitions, we must also understand how to actually
create pages of the classes and instantiate properties on those pages. In most cases PRPC manages the
creation of pages for us. Pages can be created to represent new objects, for example when we create new
work, or to represent an already persisted object, such as when opening existing work. PRPC also creates
embedded pages for us whenever the property is referenced.
From a more programmatic standpoint, we can explicitly create new pages using the Page-New activity
method, or bring an already existing instance onto the clipboard as a page using Obj-Open.
There are many other ways pages get created but these are some of the most common ones.
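As a rough illustration, a hypothetical two-step activity might use these methods as follows (the page names, class,
and parameters are made up for the example):

    Step 1: Page-New   Step page: TempAddress
            // Creates a new, empty clipboard page of the class the page definition specifies.
    Step 2: Obj-Open   Step page: CustomerData
            // Opens a persisted instance onto the clipboard; we supply the class,
            // the key values, and whether to acquire a lock.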
Property instantiation is even easier than page creation. Properties are created as they are used or referenced.
Conclusion
Understanding and taking full advantage of the data model is essential in building an application. Take business
requirements, such as reporting and case persistence, but also flexibility and scalability, into account when building the data
model.
We now understand the fundamentals of data modelling. We understand the role properties have and how the data model is
related to the class structure and inheritance. Finally, we know the best practices and how to effectively construct a new data
structure.
Compare the Property Type Options
Let’s take a look at the property rule form and some of the settings we can define for a property.
The first and most important part of defining a property is to select the appropriate property type.
The most straightforward mode is Single Value, which is used to represent simple, scalar values, for
example a price.
The value list and value group modes represent collections of single value properties. The value list is an
ordered list subscripted by a number; the value group is subscripted by a string and is unordered.
The page type allows a single property to represent a collection of properties as a page of a specific class.
This is very powerful as it allows us to embed and reuse complex data structures in various parts of an
application. For example an “Account” structure could be used by multiple work types.
Page List and Page Group are similar to value lists and groups in how they are subscripted, but their
elements represent pages rather than single values. Page lists and page groups are generally preferred
over value lists and value groups because they are more flexible.
There are also a number of Java-related types; these are used when interfacing with external Java libraries
or beans. They are not covered in this lesson.
Each property type has additional options associated with it; let’s look at Single Value first. A property of
mode Single Value also has a type. The type represents the kind of data that is valid for that property.
Common types include Text, which can store strings; a variety of numeric types such as Integer, Decimal,
and Double; and temporal types such as DateTime, Date, and TimeOfDay. Each type is well
documented in PRPC help and can be reviewed there.
The Value List and Value Group modes have the same options as Single Value, which makes sense since
value lists and groups are just collections of single values.
Now let’s look at the options for the Page mode. First we see that instead of defining a type, like text or
integer, we define a page class. The class determines what this page represents.
For example, if we enter our supplier class then our property will “be a” supplier. We can reference
properties of the supplier class, for example SupplierID, using dot notation, for example
.Supplier.SupplierID.
Like the Page mode, the Page List mode requires a page class definition. This property now represents a list of
line items rather than a single line item. The list can be referenced using dot notation with a numeric
subscript, for example .LineItems(1).ProductID.
The Page Group mode is similar to the Page List except that the subscript is a text string and the list’s order
is not guaranteed. Page Groups can be referenced using a string subscript, for example
.Vendors(VendorA).VendorID. Page Groups are useful when a specific item in the list needs to be retrieved based
on a key.
Compare the Data Access Options
The options for Data Access differ if the property is of single value or page type.
Single Value
For a single value type property there are two data access options; Manual and Automatic reference to
class instances.
Select Manual if the user adds data to this property through the UI or if data transforms or other rules may
be required to manipulate the value.
Select Automatic reference to class instances (linked) to establish a direct relationship to a single
instance of a different concrete class also known as the target.
At runtime an instance of the specified class is retrieved in read-only mode, hence without a lock. In this
example we are showing the standard pxCreateOperator property.
If there is only one key, the value of the source property becomes the key used to open the object at run
time. If there is more than one key, the key properties become input fields where users may enter values
or properties that contain the appropriate values at run time.
We can use the linked property as a Page property in property references. For example, we can reference
the name of the create operator as .pxCreateOperator.pyFullName in a case type.
Help contains a list of standard linked properties, see Atlas – Standard linked properties.
Page
For the page type property there are three data access options: Manual, Refer to a data page, and Copy
data from a data page.
Select Manual if the user adds data to this page property through the UI or if data transforms or other
rules may be required to manipulate the value.
Use Refer to a data page to point to a data page. The data is not persisted with the case, but instead is
always fetched when needed. We use this setting if we want the most up-to-date information, such as a
customer’s address.
Use Copy data from a data page to copy data from a data page to the property. The data is not
reloaded unless one of the data page parameters changes. The data is persisted with the case. We use
this setting if we want a snapshot of the data, such as the details of an insurance policy.
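The distinction is essentially reference semantics versus snapshot semantics. A minimal Java analogy, assuming an illustrative Address class (not a Pega construct):

    public class ReferVsCopy {
        static class Address {
            String street;
            Address(String street) { this.street = street; }
            Address(Address other) { this.street = other.street; } // copy constructor
        }

        public static void main(String[] args) {
            Address current = new Address("1 Main St");

            Address refer = current;              // "Refer to a data page": always the latest
            Address copy = new Address(current);  // "Copy data from a data page": a snapshot

            current.street = "9 New Rd";          // the source data changes later

            System.out.println(refer.street);     // 9 New Rd - reflects the change
            System.out.println(copy.street);      // 1 Main St - point-in-time snapshot
        }
    }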
How to configure a property to use a Data Page is covered in more detail in one of the Data Modelling
lessons.
Linked Property vs Data Page
The best practice is to use data pages over linked properties whenever possible. Data pages are more
flexible, with both reference and copy options. In addition, the data source can easily be changed, for
example from a database to SOAP.
Linked properties should only be used if we can guarantee that we’ll only ever need read-only access and
that the objects will always live in the database.
Configure Display and Validation
The Display and Validation options are available when a Single Value property is selected.
The UI Control field is populated with a default control for the specific property type. For example, if we
select Time of Day the control automatically gets set to pxDateTime.
The control listed on the property form defines how the property displays in the UI if Inherit from
property is selected.
The Table Type field provides validation against a list of valid values. The list is determined by the table
type. This validation occurs both at runtime and design time when saving business rules.
Local List allows us to define a simple list of strings that define the valid values.
Prompt List uses two values: a standard value, which is stored on the clipboard and in the database, and a
prompt value, which is shown to users.
For example, we can enter ‘Approve’ for the Standard value and ‘Approve Order’ for the Prompt value.
This allows us to use Approve in our rules but have the user see a more descriptive label. This provides
flexibility in separating the display from the rule logic.
Local Lists and Prompt Lists are useful for simple cases; however, they do not allow for reuse, as the list is
associated with a single property.
Class Key Value allows us to specify a class whose instances become the “allowed values”. The class
key value requires us to enter a validation class, which is the class whose instances are used for validation.
For example, if our property was to represent a workbasket defined in the system, we’d enter
Data-Admin-WorkBasket. Clicking Preview shows us how this will be rendered: the list of defined workbaskets.
The actual field displayed is the key property for the class specified, which is why the table is called “Class
Key Value”.
The Subset Name field is used when the class has a multi-part key and we wish to use only a subset of
values. For more details review the help file.
The Display Only (Not For Validation) option indicates that the list is used for display purposes only, not for
validation; this is commonly used with localization.
The last table type we will review is the Field Value. This table type allows us to utilize field value rules
which have the added value of being fully localizable.
Since field values are rule-resolved, we supply the Class first. Here we are going to use the pyChannel field
values defined at @baseclass.
You may have also noticed a table type called Remote List, this table type is deprecated and should not
be used.
All of the table types can be used with the Dropdown control. Select As defined on property as the list
source type.
Another powerful aspect of table edits is that they provide validation in business rules that might be delegated,
such as decision tables.
Configure Advanced Settings
There are also many advanced features that can be set on a property definition. These options are
dependent on the property mode and type set on the general tab. Here we can see the options for a
Single Value, Text property.
Max Length allows us to set the maximum length, in characters, of the field. Any attempt to set the string
to a longer value results in a validation error.
Expected Length is used by some UI controls to set the width of a text field. No warning or validation
error occurs if more text is entered.
Override Sort Function allows us to specify a custom sort function to be used when sorting a list.
Access When setting is used for encrypted text to determine when the clear text value can be accessed.
Edit Input allows us to set the Rule-Edit-Input rule that applies to this property. An edit input rule is used to
format a value when it is entered on a user screen. This formatting occurs on the server side.
Use Validate specifies a Rule-Edit-Validate rule that is applied when user input from a screen is sent to the
server. This rule is applied after the edit input and can add validation messages to the property.
Column Inclusion provides guidance to database administrators (DBAs) as to whether a column should
be exposed for direct reporting. This field does not have direct impact on the runtime behavior of the
system.
If we want the value of this property to be omitted when a page containing the property is committed to the
database we can select Do not save property data. Marking appropriate properties improves
performance by reducing the size of the Storage Stream property value.
Select Cannot be Declarative Target to prevent a property from being used as a declarative target. This
is helpful to indicate to other architects that a property should not be used as a declarative target.
Select Cannot be included as Input Field to prevent users from directly entering a value for this property
in an HTML form. This can be useful as an additional security measure for critical properties.
Select Allow use as Reference Property in Activities to make this property a reference property that can
link to a source property.
Select Cannot be localized in UI controls to prevent this property from being localized.
A qualifier is essentially metadata about a property. There are a few standard qualifiers, such as
pyDecimalPrecision.
Page Properties share many of the advanced options with the single value property, but also have a few
additional ones.
Select Validate embedded page to validate this page’s data, even though it is embedded. In almost all
cases this should be left checked.
The Java page option is specific to working with Java objects.
Conclusion
A property defines and labels a value that is associated with a class. The first and most important part of defining a property is
to select the appropriate property mode. However, property definitions can also be used to load data and to define consistent
presentation, access and validation of data across an application or even an enterprise.
Now, we understand the fundamentals of data modelling. We understand the role properties have and how the data model is
related to the class structure and inheritance. Finally we know the best practices and how to effectively construct a new data
structure.
Polymorphism, as defined by Wikipedia, is the ability to create a variable, a function, or an object that has more than one form. Let’s
look at a general example of polymorphism.
Here we see that a shape defines a few attributes, such as type and color, as well as a few methods, such as calculating the
area or rotating. Triangles, rectangles and circles inherit these attributes of shape since they have an “is a” relationship. That is, a
rectangle “is a” shape. The triangle, rectangle and circle extend the shape by adding new attributes and overriding the definition of
the area calculation. This is all very common and is often implemented using class inheritance to define the “is a” relationship.
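In Java, the example just described looks roughly like the following minimal sketch; the class and method names are illustrative.

    abstract class Shape {
        String type;
        String color;
        abstract double area();              // each subclass overrides the area calculation
        void rotate(double degrees) { }      // common behavior shared by all shapes
    }

    class Circle extends Shape {
        double radius;
        Circle(double radius) { this.radius = radius; }
        @Override double area() { return Math.PI * radius * radius; }
    }

    class Rectangle extends Shape {
        double width, height;
        Rectangle(double width, double height) { this.width = width; this.height = height; }
        @Override double area() { return width * height; }
    }

    public class ShapeDemo {
        public static void main(String[] args) {
            // Declared as Shape; the actual type is determined at runtime
            Shape shape = new Circle(2.0);
            System.out.println(shape.area());    // runs Circle's area calculation
            shape = new Rectangle(3.0, 4.0);
            System.out.println(shape.area());    // runs Rectangle's area calculation
        }
    }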
Not as common, but equally powerful, is the ability to declare a variable as a shape and allow its actual type to be determined at
runtime. In Pega 7 terms this allows a page or an embedded page’s class to dynamically change at runtime.
In an insurance company’s quote application there is a list of vehicles to be covered as part of an auto quote. The list of vehicles
can include things such as cars, motorcycles and trucks. Each of these vehicle types has differences in their business rules and
processes.
To implement this we need an abstract Vehicle class from which the car, motorcycle and truck classes inherit. Since each
vehicle type is its own class we have the benefit of clearly visualizing which rules are common and which are specialized for each
vehicle type.
The property VehiclesToCover is defined as a page list with the Vehicle class as the page class. Polymorphism allows any class
that inherits from Vehicle to be added to our list. Hence, a single property contains all of our cars, motorcycles and trucks.
The VehiclesToCover property only knows about the properties defined at Vehicle, not the specialized properties. This is a key
concept to remember: at design time, only rules defined at the abstract class are visible; at runtime, however, the system finds rules
based on the actual class on the clipboard.
Now let’s jump into building an interface for adding pages to our page list and then appropriately setting the class of each page.
Here we have the GetVehicles flow action with a section that has a repeating grid for adding new vehicles.
We can see that the class of each page in the list is our Vehicle class. The section allows us to enter vehicles; however, we must
specify which class each vehicle belongs to.
We do so using a simple data transform that is called at the end of our GetVehicles flow action. Our data transform uses the type
property of each vehicle to appropriately map the vehicles to the right class. By setting the pxObjClass property we are effectively
changing the class of the embedded page.
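Conceptually, the data transform acts as a factory: a type string decides which concrete class each page becomes. A rough Java equivalent with illustrative class names (in Pega this is done by setting pxObjClass, not by Java code):

    abstract class Vehicle { }
    class Car extends Vehicle { }
    class Motorcycle extends Vehicle { }
    class Truck extends Vehicle { }

    class VehicleFactory {
        // A type string selects the concrete class, much like the data transform
        // sets .pxObjClass based on each vehicle's type property
        static Vehicle fromType(String type) {
            switch (type) {
                case "Car":        return new Car();
                case "Motorcycle": return new Motorcycle();
                case "Truck":      return new Truck();
                default: throw new IllegalArgumentException("Unknown type: " + type);
            }
        }
    }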
After GetVehicles is submitted, our next flow action is CaptureVehicleDetails, where we collect additional information for each
vehicle. Here we collect different information for each vehicle type.
We can achieve that by overriding the DetailedInfo section defined in the Vehicle class. The runtime class of each page is used to
find the appropriate section rule.
In a general sense polymorphism makes the application more flexible and easier to change. Polymorphism often leads to more
reuse, which in turn leads to faster development and fewer errors. Another benefit is that, when properly used, polymorphism can
improve the readability of your rules.
Alternatives
First, we could create separate page list properties for cars, motorcycles and trucks. Each page list would have a static page
class and developers would likely have to create separate user interfaces for each.
Another option would be to have just one page class for all vehicles and a single page list. In this case, we would have to use
conditional logic and/or circumstancing to introduce process and rule differences.
In both cases, reusability is reduced as we need to define our structures more statically. With only one page class, business
rules become harder to maintain. With multiple page lists, creating a seamless user interface is more challenging, as is implementing
certain types of business rules.
More importantly, what happens when a new vehicle type is added, for example boats? Having multiple page lists will likely cause
us to need to touch multiple rules in order to implement this change.
Using polymorphism provides us with a more elegant solution. Rule resolution uses the runtime class of each page to apply the
correct rules, processes and user interface.
In a general sense polymorphism makes the application more flexible and easier to change. Polymorphism often leads to more
reuse, which in turn leads to faster development and fewer errors.
A reference property is a type of property that acts like a pointer to another property or page. Another way to think about it is as an
alias for a property.
To make a property a reference property we go to the Advanced tab and select the Allow use as Reference Property in Activities checkbox.
Reference properties are most commonly used to link related pages within a work object. They can be used to link other top-level
pages, but this requires special care as the developer is responsible for making sure the page is available on the clipboard when the
reference property is referenced.
Let’s look at an example of where a reference property may be helpful. Our example is for the auto insurance quoting application.
Our data structure has two page lists embedded directly within the case type. One is for the list of drivers on the policy. The other is
for the vehicles to be covered by the policy. Each page list has a number of embedded properties and pages. Each vehicle has a
primary driver and we want to be able to display driver details with the vehicle.
Our question is how to design our data structure to properly link the primary driver with the vehicle? Without using reference
properties we have a few options we could consider.
First we could copy and embed the driver information inside the VehicleToCover page list. However, doing this leads to multiple
copies of data throughout the data structure. These copies can affect performance and make updates to the data more difficult as
we will need to traverse the data structure.
We could just store the subscript of the driver we wished to link. However this leads to very ugly property references and is difficult
to debug.
At runtime, using the Property-Ref activity method, the PrimaryDriver page can be linked to the applicable driver page in the
DriversOnPolicy page list property. This allows us to establish a relationship without copying any data.
The Property-Ref method is quite simple. On the left we list the reference property and on the right the page or property we wish to
map to. We are able to refer to these properties using the same syntax as if this was a regular property.
Once linked, the references are maintained until the link is explicitly broken or changed using the Property-Ref method. Property
references cannot be circular.
Let’s have a look at how these properties look on the clipboard. We see that the clipboard shows us the reference from
PrimaryDriver to the driver page in the DriversOnPolicy.
In summary, reference properties are not commonly needed. However, in more advanced data structures that require the linking of
various embedded entities they can be very powerful. They can help improve runtime performance and also make design time easier
by making property references simpler and more intuitive.
Common Data Access Patterns
Patterns are often well documented and understood by the community, and they are usually easier to implement than a “from
scratch” design. Patterns are more reliable and reduce the risk of unknown side effects when compared to one-off solutions.
The point of these patterns isn’t for us to memorize the details but to understand the concepts behind them and how to tailor them
to meet our application’s needs. The best way to master them is to experiment with them in different situations.
SOR Pattern
The System of Record (SOR) pattern describes a situation where our case needs to access related data that is stored in
another system or application. In most situations the case doesn’t own the referenced object, but rather may display its data for
context or use the data in rules. For example, a loan application or a credit card dispute may need to access the customer’s account
information and history.
Another common trait of this pattern is that the case needs to have access to the most current data. For example, if the account
holder’s phone number changes we want that to be reflected when the data is accessed from the case. Usually, the data loaded
comes from an external data source.
Let’s have a look at how we can implement this pattern in a claims application for the customer account information. We start with
the D_Customer data page, which represents our customer data. The data is loaded from a SOAP connector and the customer ID
is passed in as a parameter to the data page.
One part of this pattern that differs from application to application is when the data needs to be refreshed. Under the refresh
strategy section we need to balance application performance with data accuracy. If we select reload once per interaction, the
lookup or service call can be made once for each user interaction. This provides near-real-time access to the data. However, this
selection can call the service more often than necessary, which may lead to performance issues.
In situations where the data isn’t frequently updated we can choose a temporal refresh strategy. For example if we determine that
data no older than one hour is acceptable we can set the refresh to one hour. Remember the data also refreshes if the data page
does not exist yet or has been explicitly removed.
Once we have set the refresh strategy we’ll want to determine how we want to access the page. Using an auto-populate property
provides convenient access to the customer data without having to reference D_Customer directly. Since the customer’s account
data should be refreshed throughout the claim’s lifecycle we select the refer option rather than the copy option.
This property now provides easy access to the customer information without ever having to refer to the data page directly. There is
no need to set anything up on the Pages & Classes tab or run an activity.
Snapshot Pattern
In the snapshot pattern the case does not point to a data page but instead the data from the data page is copied into the case
when the data is accessed. Once the data is copied into the case the data page is not accessed on subsequent property
references.
This pattern is especially useful when the data needs to reflect a specific point in time. For example, an insurance claim may want a
copy of the policy data as it exists when the claim is filed. If the policy changes AFTER the claim we DON’T want it updated. This is
the opposite of the SOR pattern we discussed earlier.
However, if the parameters used by the data page change, the data is copied into the case again. In our claims application we
configure the policy property to copy data from a data page. Since the data is stored in the case, it is persisted to the database with
the case, making it available for reporting.
Reference Pattern
The next pattern we’ll look at is one of the most common and simplest patterns. We call it the reference data pattern. In this pattern
we need to reference a list of data that is usually not directly connected to a given case.
This could be a list of products, or countries, or perhaps a list of valid values for a drop down. In many cases the same list can be
used by other cases or even other applications. In many cases the list is used to populate UI controls.
One permutation of this pattern is where the list needs to be filtered based on the selection of a previous value. For example a list
of cities may be populated based on a selected country. Let’s look at the configuration of using two data pages to implement these
types of cascading selects now.
The first data page with the country list is loaded via a report definition from the local data storage. Since this list can be shared by
all users we can make it a node level page to improve performance. Also, since this list is not based on any input it does not require
any parameters.
Our second data page, the city data page, is defined similarly. One important difference to notice is that the city page takes the
country as a parameter. This is needed so that our city drop down reflects the selected country.
Next, let’s look at the UI configuration which connects our two data pages. Our first drop down for the country field is defined as a
dropdown control using the data page as the source. We specify the properties we want to store when the item is selected, as well
as the property to use for display. This is all that is needed for our first drop down.
For the city field things are a little more interesting. Here we see that since our data page has a parameter defined on it, we are
prompted for the parameter. We can use the country property, which holds the value from our first drop down.
Even though the country property is not populated when the UI is rendered the relationship is detected between the country and
city drop downs and the controls are updated dynamically. This is all that is needed to implement this common yet simple design
pattern.
Keyed Access Pattern
The keyed access pattern is not as common as the previous patterns, but when appropriately applied it can significantly
improve an application’s performance and maintainability.
The primary aspect of this pattern is that one data page can be utilized as both a list and a single page. All of the data is loaded into
a single list data page during the initial load and can subsequently be accessed as a single page via an auto-populating
property.
This serves as an alternative to having two separate data pages, which makes management simpler and can also improve
performance. This pattern is useful when the entire dataset we are working with can be loaded in a single service call and
stored efficiently. It is also useful in cases where users may need to frequently switch back and forth between pages in the list.
An example would be a list of currencies and their exchange rates to a certain base currency. A single data page could load the
various currency rates. That same page could then be used to both select a currency from a list and also load the currency rate to
be used by expressions or other rules. Let’s look at the details of such a configuration.
It starts with our single data page, which is defined as type list. The data is sourced by a REST service that takes the base currency
as a request parameter and returns a list of exchange rates. The keyed page access checkbox is selected and the currency code is
defined as the key; the key is the currency we wish to display. The multiple pages per key option allows a large list to be filtered to
a smaller list rather than to the single page we are configuring here.
The page property called ExchangeRate is a “refer to” page property pointing to the data page, which requires the currency as a
key in addition to the base currency parameter. The page contains the currency information, including the exchange rate between
our base currency and the currency provided as the key.
Finally, let’s look at the UI definition, which shows our data page used as both a list and a single page. We can use the data page to
populate a drop down by leaving the key empty; this way the list returns all currencies.
Once a currency is selected, we can use the ExchangeRate property to display the rate for that currency.
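The idea of loading the list once and then treating the same data as a lookup table resembles the following Java sketch; the service call is simulated with hardcoded values, and the class and field names are illustrative.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class KeyedAccessSketch {
        static class CurrencyRate {
            final String code;   // e.g. "EUR"
            final double rate;   // rate against the base currency
            CurrencyRate(String code, double rate) { this.code = code; this.rate = rate; }
        }

        public static void main(String[] args) {
            // One service call loads the whole list, like the list data page
            List<CurrencyRate> loaded = Arrays.asList(
                    new CurrencyRate("EUR", 0.92), new CurrencyRate("JPY", 151.4));

            // Keyed page access: index the loaded list by the currency code
            Map<String, CurrencyRate> byCode = new HashMap<>();
            for (CurrencyRate r : loaded) {
                byCode.put(r.code, r);
            }

            // Leaving the key "empty" ~ use the whole list (e.g. for a drop down)
            loaded.forEach(r -> System.out.println(r.code));

            // Supplying the key ~ retrieve a single page from the same data
            System.out.println(byCode.get("EUR").rate);
        }
    }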
Preload a Data Page
One of the primary benefits of data pages is that they automatically manage the loading of data, taking that responsibility away from
the consumer of the data page. Sometimes a data page takes time to load, which may negatively impact the customer
experience. In such situations, we may want to proactively load the data before it is actually needed.
For example, when a customer contacts a customer service representative it is highly likely that the customer’s account and activity
information will be needed to properly service the customer which may take time to load and aggregate since it often resides in
multiple external systems.
Rather than waiting to load the data until it is actually needed, we could load it while the representative takes a few moments to
verify the customer’s identity and determine the intention of the call. To accomplish this, an explicit call is made to the necessary
data page in an activity using the Load-DataPage method, which causes the data to load.
If at any point we require the data to finish loading before proceeding, we can use the Connect-Wait method to force the system to
wait for a desired period of time before proceeding, or return a fail status if the load does not complete in a timely manner.
Both the Load-DataPage and Connect-Wait methods have a PoolID parameter, which makes it possible to pair a
Load-DataPage method with a Connect-Wait method by matching their PoolIDs. Before using these methods, be sure the
performance gain outweighs the cost of loading these pages procedurally and thus possibly unnecessarily.
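Outside of Pega-specific terms, Load-DataPage followed by Connect-Wait behaves like starting an asynchronous task and later joining it with a timeout. Below is a rough Java analogy using CompletableFuture; it is not the Pega API, and loadAccountData is a hypothetical stand-in for the slow service call.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class PreloadAnalogy {
        static String loadAccountData() {       // hypothetical slow service call
            return "account data";
        }

        public static void main(String[] args) throws Exception {
            // ~ Load-DataPage: kick off the load while other work continues
            CompletableFuture<String> preload =
                    CompletableFuture.supplyAsync(PreloadAnalogy::loadAccountData);

            // ... the CSR verifies the caller's identity in the meantime ...

            // ~ Connect-Wait: block until loaded, or fail after a time limit
            try {
                String data = preload.get(30, TimeUnit.SECONDS);
                System.out.println(data);
            } catch (TimeoutException e) {
                System.out.println("Data did not load in time");
            }
        }
    }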
Configure Error Handling for Data Pages
Data Page errors are treated like any top-level page errors. A message on a property stops flow processing if the property is visible
on the client. A page message, on the other hand, does not stop flow processing.
If the Data Page is referenced to auto-populate a property, then both page and property messages propagate from the Data Page
and block flow processing from moving forward.
Use the post-load processing activity on data pages to handle errors. The ConnectionProblems flow defined on the connector never
gets invoked by a data page, because data pages catch all exceptions and add page messages so that error handling can be
done in the post-load activity.
First check for technical errors at the data layer and handle them if possible so that the messages can be cleared. Leave
unhandled errors on the page so that they can be handled at the work layer. Remember to set the message on a case property visible
on the client to block the flow if auto-populate is not used.
Case Study: Data Modeling
Scenario
ResQuest is a premium support team for an online travel site. They have initiated the design of a PRPC application for
managing reservation change requests for their premium clients. The primary work type is “Manage Reservation”. Each
reservation request can be for airfare, hotel, or both.
ResQuest’s Marketing department has come up with the idea of customers being able to define “shared” accounts. An
example of a shared account is where a husband and wife have separate accounts, thereby able to maintain separate travel
itineraries, yet are able to access each other’s information as well as share earned bonus miles and hotel reward points.
High Level Process – Manage Reservation
1. A customer calls or emails ResQuest and provides account ID, email, or name.
2. A ResQuest customer service representative (CSR) verifies the provided information.
3. After verification, recent and upcoming itinerary summaries are retrieved and displayed.
4. In addition, general information about the customer’s account, such as name, address, status level, associated
accounts, etc., is also retrieved and displayed.
5. When a specific itinerary is selected, detailed information about that itinerary is retrieved and shown to the CSR.
6. The CSR can perform a number of actions on the selected itinerary such as change a flight, change hotel dates, add
notes, or request an upgrade.
7. Certain actions, such as an upgrade request, may require approval by a supervisor.
8. Actions may also cause the work object to stay open for a number of weeks. For example, an upgrade request that
cannot be fulfilled immediately must stay open to be attempted again.
9. All reservation changes require a call to an external service to update the system of record for reservations.
10. Within the context of the same service request, a shared-account customer is able to rapidly switch between their own
account and a related account, for example their spouse’s.
Additional Information
An itinerary may consist of a flight portion, a hotel portion, or both. The flight portion may contain multiple flight
segments or “legs”. The hotel portion may also contain multiple hotel reservations.
A SOAP service is used to retrieve a list of summarized itineraries for an account. A different SOAP service is used
to retrieve the full details for a given itinerary.
The list of itineraries is displayed on the CSR’s screen while working on a Manage Reservation work object.
Account information is also retrieved via a SOAP service.
The Account information displayed to the CSR should always be up to date, i.e., not persisted within each Manage
Reservation work object.
When a shared account customer switches focus to a related account, that related account’s list of summarized
itineraries and account information should be displayed.
An individual itinerary must be selected for a Manage Reservation work object to proceed.
Each Manage Reservation work object may only process one itinerary.
When a Manage Reservation work object is routed to others, the original CSR’s full name and phone number should
be shown on the screen.
Approach
Keeping in mind the requirements stated above, write down your approach to each question below.
1. Design the data classes that would be embedded into the work object. Which are pages, page lists or groups?
2. How would you design the retrieval / placement of the customer’s list of itineraries on the clipboard as well as how that
data is displayed on the CSR’s screen?
3. How would you design the retrieval / placement of the customer’s account data on the clipboard as well as how it is
displayed on the screen?
4. How would you implement the requirement: “When work is routed to others, the original CSR’s full name and phone
number should be shown on the screen.”?
5. What effect does the statement “Some actions may cause the work object to stay open for a number of weeks”
have on your design?
Using the New Application wizard, i.e., PegaRULES > New Application, build a simple Framework-only “ResQuest”
Application for the “RESQ” Organization. Create a single ManageReservation work type. Create the minimal properties and
other rules to prove out your design. Use data transforms to simulate external data. Create a UI that is sufficient to
demonstrate your data model and how data is managed. Do not attempt to build an entire production-quality application.
Instead, focus on the rules needed to demonstrate your design.
The Data Transform rule is a powerful tool in Pega 7 which makes data manipulation on properties and
pages easy and efficient. Data Transforms map data from a source to a target, performing any required
conversions or transformations on that data. The source could be one case such as a purchase request
case and the target could be another case, such as a purchase order case. To create the purchase order,
we might require data regarding the vendor from whom we bought the items on the purchase request.
In PRPC, Data Transforms are part of the Data Model category and are instances of the Rule-Obj-Model
rule type.
Data Transform Rules
There are multiple situations in which the use of a Data Transform is more appropriate than any other
rule type. Whenever we need to set property values on a given page, whether they are initial values
or updated values, a Data Transform is the best option. For example, we can set the default value of “Full
Time” for the employment type of any new hire. This can be changed in the user interface later, if needed.
Also, when updating property values on a page using properties from another existing page there is no
better option than using a Data Transform rule. For example, if we plan to have our purchase delivered to
our home, then our billing address and shipping address are likely the same. We can have a Data
Transform rule read property values from one page, such as the billing address, and copy those properties
to another page – in this case, to the shipping address. We can also use it when there is a need to append
a new or existing page to a page list or to append and map properties from one page to another page.
Multiple features in PRPC accept a reference to a Data Transform rule.
In general, there are only a few situations in which a different approach might be more appropriate than a
data transform:
A property value should be set declaratively, rather than procedurally. To set a property value
declaratively, use a declare expression rule instead.
When defining source data for a data page. Depending on the source of the data, use of a report
definition, database lookup, or activity may be more appropriate.
Updating data in a database. Data transforms do not provide any means to write data to a
database.
First, let’s look at the Data Transform rule form itself (shown below). The “Definition” tab is where we define
the actions to be taken. They are presented as a sequence of rows in a tree grid. We will discuss the
structure in detail shortly. The next tab is the “Parameters” tab. On the Parameters tab we can list the
variables that input data to, or return data from the Data Transform. Variables defined here are referenced
on the “Definition” tab by the notation Param.ParamName. We use the “Pages & Classes” tab to specify
the pages referenced on the fields of the “Definition” tab. These two tabs are standard tabs for most rules.
Let’s look at this example to review the capabilities of the Data Transform rule.
In the first row, we delete the vendorlist page from the purchase request.
In the second row, we iterate through each of the line items that comprise the “LineItems” page
list. The rows indented under the second row are performed for each page of the page list.
In the third row, we check whether the vendor listed in the specific line item is unique or not, using
the when condition “IsDistinctVendor.” Any rows indented under this row execute only when the
condition is true. Otherwise, the data transform skips the indented row(s).
In the fourth row, the unique vendor information is copied from the source line items to the target
vendor list whenever the vendor is considered “unique”, as determined by the when condition in
the third row.
Data transforms can be chained with other data transforms defined in their parent classes. To do this, we
need to enable the “Call superclass data transform?” checkbox, and the data transforms must share the
same name.
PRPC checks for the same name in all the parents of this class following both pattern and directed
inheritance. It then applies all the data transforms, starting with the most distant ancestor and continuing
down through each child class.
From the pyDefault Data Transform for the Purchase Request case, we can use the open icon next to the
check box to open the Data Transform of the parent class, and keep going until we find the highest
ancestor. From the chain, we can see that nothing is set for @baseclass first; the data transform in Work-
sets pyWorkIDPrefix to W. Then, when the data transform in Work-Cover- is applied, it sets the same property
to C. Finally, the data transform in the Purchase Request class is applied, so pyWorkIDPrefix is PR when a
new purchase request instance is created.
This is a powerful feature to set default values for the properties at the appropriate level.
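This ancestor-first ordering is analogous to constructor chaining in Java, where each superclass constructor body runs before the subclass body, so the most specific assignment wins. A minimal sketch with illustrative class names:

    class Work {
        String workIDPrefix;
        Work() { workIDPrefix = "W"; }                 // ~ data transform at the Work- level
    }

    class WorkCover extends Work {
        WorkCover() { workIDPrefix = "C"; }            // runs after Work's constructor
    }

    class PurchaseRequest extends WorkCover {
        PurchaseRequest() { workIDPrefix = "PR"; }     // runs last, so "PR" wins
    }

    public class ChainingAnalogy {
        public static void main(String[] args) {
            // Ancestors run first, descendants last, like the chain described above
            System.out.println(new PurchaseRequest().workIDPrefix); // prints PR
        }
    }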
Structure of Data Transform Definition
We use the Definition tab to manipulate the data. Understanding this tab helps us ensure we have the
correct steps in the right place and avoid potential defects due to the wrong data being transformed. The
Data Transform rule definition is a sequence of rows. The system invokes each row in sequence starting
with the first row, except for the conditional and transitional types of actions.
There are four columns on this tab.
Action – This is where we identify what the step is for. This is required for each step. We will look
at the different actions that are possible shortly.
Target – Most actions require a target. In cases where a target is not required, we cannot select
one. The smart prompt box acts both as a select and a text box. We can select a property from
the list or we can type text.
Relation – This is an optional column. The typical relationship between source and target is
"equal to." There are a few actions that use other relationships.
Source – This becomes selectable only if the action requires a source. We can specify a literal
value, properties or expressions. We can, for example, set the target property “Employment Type”
to the literal value “Full Time,” the target property “Full Name” to the concatenation of two other
properties, “First Name” and “Last Name,” and the target property “Rate” to an expression, namely
the sum of the source properties “BaseRate,” “LoanRateAdjustment,” and “RiskRateAdjustment.”
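Expressed as plain Java assignments, the three Set examples just described amount to the following; the variable names and sample values are illustrative.

    public class SetExamples {
        public static void main(String[] args) {
            String firstName = "Sara";                 // sample source values
            String lastName = "Connor";
            double baseRate = 3.5;
            double loanRateAdjustment = 0.25;
            double riskRateAdjustment = 0.75;

            String employmentType = "Full Time";       // Set from a literal value
            String fullName = firstName + " " + lastName;                       // concatenation
            double rate = baseRate + loanRateAdjustment + riskRateAdjustment;   // expression

            System.out.println(employmentType + ", " + fullName + ", " + rate);
        }
    }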
To add rows we can use one of two options: we can click the “Add a row” icon or we can right-click on any
row to access the context menu. The right-click menu is context sensitive, so the choices depend on where
we click. In some cases, adding a child creates a tree-like nested structure. To remove a row, we can use the
“Delete this row” icon or right-click and select Delete from the menu.
As a best practice, and to improve readability, do not create more than 25 steps. If we need to define more
than 25 steps, we can group some of the actions and define them in another data transform rule. We will
see how we can reference another data transform rule from one data transform rule shortly.
Let’s look at some of the key actions available to us.
Set is used to set the target from a source. We can set a value on a Single Value property that exists on
the top-level page, such as pyWorkIDPrefix, or on an embedded page, such as LoanType on the
LoanInfo page.
Remove is used to delete the target and any associated values from the clipboard.
We can also use Update Page to set the properties defined on an embedded page. When we use Update
Page we need to set individual properties using the Set action with the nested rows after the row with
“Update Page.” In fact, we have the option of selecting any of the actions shown above for the nested rows
below the “Update Page” row.
We can reference another data transform rule from our existing data transform rule. This might occur if we
are going over 25 steps and want to break the rule into smaller manageable data transforms. Or we might
have a reusable data transform, such as one for initializing a case with some default values. Whenever we
need to invoke a “Data Transform” from another data transform rule, we use “Apply Data Transform.”
Data transforms execute all the actions in the order defined. However, there are a few conditional actions
available to add logic and perform steps based on a condition.
When — To conditionally perform actions.
Otherwise When and Otherwise — To provide actions for the alternative to the When actions.
We can also iterate over a page list using the For Each Page In action. Using this action we are able to apply
the changes in all the nested child rows to each page of the page list. We have the option of selecting any of
the actions for the nested child rows, such as “Update Page.” “Update Page” is primarily for a single page, while
“For Each Page In” is for a page list. We can use the “Append to” action to copy a page from the source to the
target. For instance, if we want to add a new page to the Assets page list, we can select new page. We can
also append another existing page, or copy all the pages from another page list, by selecting the appropriate
values in the drop down in the Relation column. “Append and Map to” is used to map individual properties
in the page list. When we select this action, at least one nested child row is used with the “Set” action.
If we don’t want to loop through all the pages in the page list, we can use the “Exit For Each” condition to
exit the loop, or we can exit from processing all the remaining steps with the “Exit Data Transform”
condition. Typically, these steps follow a when condition.
Use of symbolic indexes
When we are iterating through the pages or updating a page, we don’t know the index of the page on
which we are working. We iterate through the pages in both Data Transform and Activity rules.
As we iterate through the list of the line items page list, “<CURRENT>” represents the current index of the
iteration.
Here is the list of symbolic indexes that we can use when we loop through the iteration:
<CURRENT> – Identifies the index of the current iteration
<APPEND> – Inserts the element at a new position at the end
<LAST> – Retrieves the highest index
<PREPEND> – Inserts the element at the top
<INSERT> – Inserts the element at a specific position indicated by a number
Param.pyForEachCount – Same as <CURRENT>. Identifies the index of the current iteration. This can be
used for the Page index and in the expression as well, while <CURRENT> can be used only for the Page
index.
For Data Transform rules, we cannot use the <APPEND> keyword in the For Each Page In action. Instead,
we need to follow the For Each Page In action with an Append to action or an Append and Map to action. We
can use <APPEND> with the “Update Page” action in a Data Transform, and also when looping through the
steps of an Activity rule.
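For readers who think in Java terms, the symbolic indexes map roughly onto the following list operations. This is only an analogy; PRPC page list indexes are 1-based, while Java lists are 0-based.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class SymbolicIndexes {
        public static void main(String[] args) {
            List<String> items = new ArrayList<>(Arrays.asList("A", "B", "C"));

            items.add("D");                             // <APPEND>: new element at the end
            items.add(0, "Z");                          // <PREPEND>: new element at the top
            String last = items.get(items.size() - 1);  // <LAST>: element at the highest index
            items.add(2, "Y");                          // <INSERT>: element at a given position

            // <CURRENT> / Param.pyForEachCount: the index of the current iteration
            for (int i = 0; i < items.size(); i++) {
                System.out.println((i + 1) + ": " + items.get(i));
            }
            System.out.println("last=" + last);
        }
    }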
While we are discussing symbolic indexes, let’s understand two more keywords that are useful to access
the correct pages. These are used in a host of rules, wherever pages are referenced.
Top – Refers to the topmost page containing the embedded page.
Parent – Refers to the parent page of the embedded page. If the embedded page has only one parent,
and no grandparents in the hierarchy, Top and Parent refer to the same page.
Data Transforms in the Business Processes
Data transforms can be referenced in multiple rule types during the execution of the business process.
Let’s look at the key data transforms that would be used in our business scenarios.
In the starting processes of a case, the data transform rule is defined on the Processes tab, which is
invoked to set initial properties for the case instances. The starting process has the “Creates a new work
object” setting selected as shown below.
Any time we create a new case, whether through an explorer or when cases are created by a new
application built with the Application creation wizard, PRPC creates a pyDefault data transform. We always
recommend using the same name for all starting processes to take advantage of chaining. The main purpose
of referencing a Data Transform on the Processes tab of the starter flow rule is to initialize the properties of
the case instance when it is instantiated.
Here is an example where we initialize the properties of a Purchase Request case. When a case instance
is instantiated, it is initialized with who it is requested for, the cost center, and the currency.
In the case hierarchy, we use the data propagation configuration to propagate the data from the parent
case to the subcases when the subcases are instantiated.
We can propagate the data from the parent case, Purchase Request, into the subcase, Purchase Order. If
we are simply taking the data without making any changes, we can use the data propagation option and
do not need to select “Also apply Data Transform.” If we are using the data conditionally, looping through a
page list, or need to reuse the propagation logic in other rules, we can select the “Also apply Data
Transform” option and use a Data Transform rule.
We can also use a Data Transform rule in a step where a subcase is instantiated from the case designer of
the parent case. We propagate the data this way if we don’t want to rely on the Case Designer “Data
Propagation” configuration. Let’s say we are creating an “Inventory Selection” step in two steps of different
stages in the “Purchase Request” case, and in those steps the data that needs to be propagated is different.
In this scenario, we need to reference different data transform rules in each step configuration, instead of
relying on the one data transform rule referenced in the data propagation settings on the Details tab of the
case designer of the parent case. The example shown here uses the “CopyLineItemsPO” data transform
when creating multiple subcase instances of the Purchase Order case in the step configuration of the
“Create Orders” step of the “Purchase Request” case stage design.
For each step, there is an appropriate flow rule associated with it. A step of step type “Case” uses the
“Create Case(s)” smart shape in the flow rule for that step. Referencing a data transform on step
configuration is the same as setting the properties on the smart shape in the flow rule. We can modify it in
either place and it is reflected in the other place.
To avoid accidentally overwriting data when a subcase is instantiated, it is important to understand the
order in which an application evaluates these Data Transforms:
1. First, the application applies the Data Transform on the starting flow of the subcase
2. Next, the application applies the data propagation and Data Transform configured on parent case
type rule (defined on Case Designer – Details tab)
3. Finally, the application applies the Data Transform defined on the case instantiation step in the Case
Designer of the parent case, or the Data Transform defined in the “Create Case(s)” smart shape in
the “create case” flow rule of the Parent case.
In flows, on any connector between two shapes, we can set properties or reference data transform
rules. If it is a simple setting of one or two properties, we can use “Set Properties.” But if we are using data
conditionally, looping through a page list, or need to reuse the data transform in other places, we can use
the “Also apply Data Transform” option and use a Data Transform rule. For example, when the flow is
processed on a decision shape, we can use the appropriate data transform rules on the connectors, based
on the decision made by the end user.
In flow actions, we can specify a data transform on the flow action's Action tab. To populate properties
from another page, we can invoke a data transform in the “before this action” area. If we want to copy the
values submitted in the user session to another page, we can invoke a data transform after the action is
submitted.
We can reference a data transform in a section rule, directly for the change event of any cell, and also when
we select "Refresh This Section" for a client change event. This is handy when we change a value from a
drop down or select a radio button. Let’s say a business requirement states that the state tax for Texas is
0% and for California it is 6%, and so on. We can use a data transform rule when the state is selected from a
drop down and the user interface is refreshed to show the tax to be deducted based on the selected state.
We can reference a data transform from another data transform rule with “Apply Data Transform” action,
as we described earlier in this lesson. We can also reference a data transform rule from an activity rule,
which we will discuss in the Activities lesson.
Conclusions
We learned that Data Transforms can process steps in sequence.
Data transforms rules can be used for setting and manipulating data such as:
To set default values for the properties defined on a clipboard page.
To copy one clipboard page to another clipboard page.
To map properties defined on one page to another page.
To append properties from one page list property to another page list property.
To iterate over pages in a page list property to map data on each page.
We can reference Data Transform rules in a host of other rules, wherever we need to set and manipulate
data as in the list above.
In PRPC, Activity rules are part of the Technical category and are instances of the Rule-Obj-Activity
rule type.
Each step can call a PRPC method, transfer control to another activity, or execute custom inline
Java code. As a programming tool, it also provides features such as iterations and conditions.
While activities can appear to some as an easy and flexible way to automate work processing, they
can quickly become complex to analyze, execute, debug and maintain. Consequently, if writing an
activity is our only option, we must keep the following best practices in mind: keep activities
short, no more than 25 steps, and avoid inline hand-coded Java as much as possible by using
library functions instead.
Understand the Purpose of an Activity rule
Activities are structured as a series of steps that are executed in sequence. Each step references a
method or another activity, or can contain plain Java code.
Let’s look at alternatives before diving into writing an activity. Programmers tend to write too many
activities because activities resemble the procedural programming languages they already know. However,
activities are harder to maintain and not as easy to construct as other rules in PRPC.
We should not use an activity for calculations or property validations that we can implement as constraint
rules or Declare Expression rules. Use of a declarative rule (rather than an activity) eliminates the need to
control how and when an activity executes. A best practice is to use declarative processing rather than
activities when feasible. For data manipulation, we can use a Data Transform instead of an activity. To
query an external database, we can use Report Definition rules instead of activities.
We may need to automate the processing of cases with a sequence of steps to be executed. Most of the
time, this is possible with steps in the case designer, multiple shapes in a flow rule, declarative rules, data
transform rules, or a combination of those and other rules.
Before writing our own activities, let’s look at the standard activities that come with PRPC. These are called
API activities; they are accessible from the landing pages. We can search for an activity and each activity
has a description explaining what it is used for.
There are times when we need to use activities; to find out whether a suitable one already exists, we can
search these API activities. Some standard activities we can use are CorrNew to create and send emails,
AddWork to create a new case instance, and UpdateStatus to set the status of the case instance.
Here are some scenarios where we cannot avoid writing an activity. For example:
To perform Case related functions such as creating a case instance, routing the case, or updating
the work status, as part of certain operations such as parsing an input file.
To send requests to external systems or to receive requests from external systems.
To perform functions such as writing to a log file or to the history database.
Let’s take a look at the Activity rule form, starting with the standard activity, “UpdateLocaleSettings.” The
“steps” tab is where we define the steps to be processed. They are presented as a sequence of rows in a
tree grid. We will discuss its structure in detail shortly.
Configure the Tabs of an Activity rule
We need to understand the Parameters, Pages and Classes, and Security tabs first to understand the
Steps tab.
We use the Parameters tab to identify any variables serving as input or output parameters to the activity
rule. We can reference the parameters listed here using the Param keyword (Param.ParamName) in the Steps
tab for any processing. The Local variables section identifies scalar variables, which are stored as fields
in the Java class that PRPC generates to implement the activity. They are often used to pass information
from step to step. We can reference the variables listed here using the Local keyword (Local.VariableName) in
the Steps tab for any processing.
Before writing an activity, it is important to understand the three common page types we will interact with.
They are:
• Primary pages
• Step pages and
• Parameter pages
A Primary page is a clipboard page which has the same class as the “Applies To” class of the activity and is
designated when the activity is called. This page is the default location of properties referenced with a dot
and no preceding page name. For greater clarity, we can reference a property on the primary page using
the keyword “Primary” followed by a dot and the property name.
When a Branch or Call instruction executes as defined in a step in the Steps tab, the page in the Step Page
column of the step becomes the primary page of the called activity. If the Step Page column is blank, the
primary page of the current activity becomes the primary page of the called or branchedto activity. That is,
the primary page of an activity becomes the step page of each step, except for steps where the Step Page
column is not blank. The step page becomes the primary page for the duration of this step's execution.
A parameter page contains parameter names and values, as listed in the parameters tab. It has no name
or associated class, and is not visible through the Clipboard tool. However, we can display the contents of
the parameter page with the Tracer tool.
On the “Pages and Classes” tab, we list the pages used in the steps of the activity along with their classes.
The Security tab has a few settings that allow us to set who can access the activity and how.
“Allow direct invocation from the client or a service” check box indicates whether user input
processing can start the activity or it must only be called from another activity.
The authenticate checkbox, when selected, only allows authenticated users to run the activity.
The Privilege Class and name identify a privilege a user must have in order to be allowed to
execute the activity.
The Usage type determines whether and how the activity can be referenced in other rules. Select
one usage type depending on the intent of the activity.
Configure an Activity Step
We use the Steps tab to define the activity's sequential processing steps.
Each step of an activity contains multiple parts. There are non-action items such as Label, Description and Step Page. Other parts such as Loop,
When, Method and Jump indicate an action or a condition for an action.
The Label provides an identifier for the step that can be referenced from other steps. The label name is used in the When and Jump conditions,
which we’ll look at later. We can also put two slash characters as a step label to indicate to Process Commander not to execute the step; such
steps can be used as comments.
The Loop allows us to set up iteration through the elements of a Value List, Page List, Value Group or Page Group and performs the provided
activity method on each value or each embedded page.
As we iterate through the loop, we can select the “For Each Page” option to sequence through all pages of a specified class or classes and perform a
method or instruction for each page. Leave the Step Page field blank; each time the step repeats, the step's page changes automatically.
Use the “For Each Embedded Page” option to apply a method or instruction to each embedded page in a Page List or Page Group property. Identify
the property containing the embedded pages in the Step Page field.
For the optional Valid Classes (“Only loop for certain classes” as shown below) parameter for these two “Repeat” conditions, we can enter a class or
classes. We can click the “Add Row” icon to add more than one class. When valid classes are populated, iteration processing only processes pages
of the valid class list and the ones derived from them and skips over the pages of classes that are not in the list.
We can select the “For Each Element in a Value List” option to repeat the step for each element in a Value List property. When we select this
iteration form, a Property Name field appears. Identify the Value List property in the Property Name field.
We can select the “For Each Element in a Value Group” option to repeat the step for each element in a Value Group property. When we select this
iteration form, a Property Name field appears. Identify the Value Group property in the Property Name field.
We select the “For Loop” option to repeat the step a number of times determined by the values of integer constants or integer properties. Enter
integer constant values or integer property references for the Start, Stop, and Increment fields. The Increment must be a positive integer.
To add a child step to an iteration, right-click the iteration step and select “Add Child”.
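As a sketch (the class and property names are invented for this illustration), an iteration step that totals each line item of an order might be configured as follows:

    Loop:       For Each Embedded Page
    Step Page:  .LineItems       (a Page List property)
    Method:     Property-Set     Set .LineTotal = .Quantity * .UnitPrice on each embedded page

Because the step page changes to each embedded page in turn, the dot-prefixed properties resolve against the current line item on every pass.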
The When allows us to define a precondition that controls whether the step is executed or skipped.
We need to check the “Enable conditions before this action” checkbox to display the row where we can enter a condition or reference a When rule.
Depending on whether the rule returns true or false, we can select one of the following actions:
Continue Whens: to continue to evaluate other when conditions, if any
Skip Whens: to skip processing the rest of the when conditions, if any
Jump to Later Step: to move to a later step, identified by the step label noted in the param field
Skip Step: to skip the step and move to the next step
Exit Iteration: to skip the current iteration and move to the next iteration of the same step
Exit Activity: to skip processing the rest of the steps in the activity
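For example (the condition is illustrative), a precondition might be configured as:

    If:        .OrderTotal > 1000
    If true:   Continue Whens
    If false:  Skip Step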
The Method indicates which Process Commander method the step will execute. We will look at some common methods that we can use, later in this
lesson.
The Step Page, as mentioned earlier, identifies a page to be used as the context for referenced properties within the step.
The Description is text that explains to other developers the action that the step is performing. As a best practice, always provide a comment for
each activity step.
The Jump condition, or post condition, is similar to the When precondition. After the step is executed, a condition is evaluated. Based on the result, the
activity may perform different actions, such as jump to another step or exit the activity. There is also an option to specify what should happen if an
exception occurs during step processing.
Understand some Common Methods in an Activity
Process Commander provides an extensive library of methods that we can use in our activities. We can
use the “expand method” button to expand the parameters for the selected method or activity.
Some common methods are:
Property-Set — used to set the value of one or more properties. If this is the only step, we would
be using a data transform rule instead of an activity rule. When setting the properties, we can use any
of the symbolic indexes described earlier in this lesson.
Call — used to find and execute another activity. After the called activity completes, the calling
activity resumes processing.
Page-New — creates a blank clipboard page of the specified class.
Page-Remove — used to delete one or more named pages from the clipboard. If no page is
specified, the page referenced by the step page is removed.
Apply-DataTransform — used to invoke a Data Transform rule directly from a step.
A full list of the methods provided is available in Developer Help.
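As an illustration only (the step page and property values are invented for this sketch), a small activity might combine these methods as follows:

    1. Page-New              Step page: TempOrder   Create a blank page of an order class
    2. Apply-DataTransform   Step page: TempOrder   Apply a data transform to set initial values
    3. Property-Set          Step page: TempOrder   Set .pyLabel = "New order"
    4. Page-Remove           Step page: TempOrder   Delete the temporary page when finished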
PRPC enhancements are continuously reducing the need to create activities, but there might be occasions
when we cannot avoid using them. Some of the activity methods that operate on pages let us reference
a Data Transform. When using the Page-New method, a Data Transform rule can be specified to set
some initial property values on the new page. Similarly, the Page-Copy method allows a reference to a
Data Transform rule, which is applied to the destination page before copying data from the source page.
The Page-Change-Class method also allows a reference to a Data Transform rule in the target class or any
of its parents, to be applied to the page when changing its class.
Conclusion
Best practice is to use data transforms, declarative rules, report definitions, and so on, in place of activity
rules. All the actions provided by a Data Transform rule are also possible through an activity. However,
using a Data Transform rule speeds up development as its rule form is easier to read than the activity rule
form. In addition, a Data Transform rule provides runtime performance improvements over an activity.
In some instances an activity rule cannot be avoided, and at times activities are required; for example,
Service rule request processing.
PegaPulse is a rich social platform that adds social activity streaming capabilities to the user interface. We
can collaborate and converse with other application users or developers.
By default, the PegaPulse section is included in the standard portals, where we can share messages, URLs,
and files with others in our work group. It is included in the standard perform harness as well; hence, when
processing cases, we can share messages, URLs, and files. Here we can set the shared content to be
viewed in either public mode or private mode. In private mode, only those who have access to the case
can view the content. In public mode, anyone who has access to the case and also users in the workgroup
can view the content. Workgroup users can view the content in the standard portal interface.
A Social User Profile is the profile for any user that includes their name, position, phone, etc. When we
click on the link of any user’s profile, not only can we see that information, but we can also send a
PegaPulse posting to only that user.
PegaPulse is limited to attaching conversations, URLs and files, compared to Case Attachments where we
can attach screen shots, scanned documents, etc. In PegaPulse, we can take actions such as creating a
task, which we cannot do with Case Attachments. We can use the “Post to Pulse” smart shape to have a
PegaPulse posting added to case instances. Similarly, we can use the “Attach Content” smart shape to
add attachments to case instances. The attachment types are limited in PegaPulse, while case
management lets us add more attachment types. For more information on Case Attachments, see the
Case Attachments lesson in this course.
At the end of this lesson, you should be able to:
Describe PegaPulse as a social platform
Explain the Contextual Activity Stream in different contexts
Determine how to view Social User Profile and send individual messages.
Configuring an Application to Automatically Post to Pega Pulse
To automate the posting of an update to the contextual activity stream, we can use the Post to Pulse smart shape. The properties
panel for this shape contains four configuration options that govern how updates can be viewed.
The User posting dropdown list lets us select the user under whom the system adds the update to the contextual activity stream.
By default, posts are attributed to the assigned operator (Current Operator). We can also select Other, then select the operator ID
we want to use.
The Message field contains the content of the automated post.
The Message pertains to the current case and Make secure post checkboxes allow us to configure the availability of the post.
The Make secure post option is available only when Message pertains to the current case has been selected.
The configuration options produce the following results:
Message pertains to the current case disabled: the post is public and appears in the Pega Pulse feed in the Case Manager.
Message pertains to the current case enabled, Make secure post disabled: the post is public, appears in the Pega Pulse feed in the Case Manager, and references the case ID.
Both Message pertains to the current case and Make secure post enabled: the post is private and appears in the Pega Pulse feed only when the case is opened.
Introduction
During the normal sequence of processing a case instance, we may have to add
attachments to enable users to take the appropriate actions and also for future
reference. When we process an insurance claim for an automobile accident, we can
attach the police report to identify who is at fault.
A case attachment can be a file, screen shot capture, URL, or text note. By default, case
instance attachments are stored in the “pc_data_workattach” table as instances of
concrete classes derived from the “Data-WorkAttach-” class. They are not stored
directly with the case instances in the same table.
We learn how to add attachments manually and automatically. Suppose we want to
provide access to certain users so they can add attachments, and we want to provide
rights to delete attachments only to managers. Using the Attachment Category rule type
we can enable security and restrict access for certain attachment operations.
Add Attachments Manually
PRPC comes with six types of attachments. They are identified by concrete classes derived from the
“Data-WorkAttach-” class. We have:
Note attachments from the “Data-WorkAttach-Note” class;
File attachments from the “Data-WorkAttach-File” class;
URL attachments from the “Data-WorkAttach-URL” class;
Screenshot attachments from the “Data-WorkAttach-ScreenShot” class;
Scanned document/image attachments from the “Data-WorkAttach-ScanDocument” class; and
Enterprise Content Management attachments from the “Data-WorkAttach-ECM” class.
While processing a case, end users can add attachments for future reference. We can do so
easily with the standard perform and review harnesses. On the right-hand side, we can use the “Case
Attachments” section to add attachments. We can use the “Add” or “Advanced” link to add attachments.
We can attach a File or a URL using the “Add” link.
When we click the “Advanced” link, an attachment listing window pops up and it has an “Add” dropdown
button. Using the button, we can attach a Note, File, URL, Screenshot and a Scanned Document.
The Attachment user interface is slightly different for each of these choices so let’s quickly look at how they
work.
When we click “Attach a File” from either menu, we get the “Attach a File” dialog box, where we can drag
and drop a file or browse to the file folder and select a file to attach it.
When we click “Attach a URL” from either menu, we get the “Attach a Link” dialog box, where we can enter
a subject for a description or why we are attaching the link, and the URL.
When we click the “Attach a Note” link in the advanced menu we see the “Attachment” dialog box and we
can enter a subject and the text for our note.
When we click the “Attach a Screen Shot” link in the advanced menu, we see the “Attachment” dialog box
and we can select a window name from the drop down and enter a note as description/reason for the
attachment.
When we click the “Scan and Attach a Document” link in the advanced menu, we see the Attach Scan
Document dialog. We first need to scan a document before we can attach it. So from the Select Source
button, select a scanner. Next, select the Scan button and we will see the scanned document in the viewer.
As mentioned earlier, there is also the “Enterprise Content Management” attachment type. We can
configure our application to store attachments of type file in an external enterprise content management
(ECM) system, such as Microsoft SharePoint or IBM FileNet, using the CMIS protocol. We can enable
this option by selecting the content management integration checkbox on the “Integration” tab of the
application rule. Two more fields are needed in order to properly ensure the attached files are stored
in the ECM.
We use the “Connector Name” field to specify the Connect CMIS rule used to connect to the external
content management system.
If the Connect CMIS rule has not been created before, we can click the magnifying glass icon to create the
rule now. The second field, “CMIS Folder”, provides the “Browse” button to select the directory location on
the content management system in which we are going to store work item attachments for this application.
Anyone who has access to this case can view the list of the attachments and can also view the contents of
the attachments. In the Attachments section, we can see the list of attachments associated with this case.
To view the contents of an attachment, just click on the link.
In the Advanced attachments window, attachments are grouped by type. Here we get to see the timing,
description, and who attached it as well. If permitted, we can also remove the attachment if the attachment
is not required. For quick viewing, we can use the attachments section in the harnesses. To see other
details and/or to remove the attachments, we can use the advanced attachments window.
If the perform or review standard harnesses are customized or some other standard or custom harness is
used, the case attachment section may not be available to end users to add attachments. In that scenario,
the system architects can provide a local action to the end users who can then select the local action menu
option from the other actions menu. All attachment types can be added as local actions.
AddAttachments allows us to add a file attachment and assign it to an attachment category.
AddMultipleAttachments provides the ability to add multiple file attachments, with a
category for each, in a single action. For each attachment type, there is a corresponding flow action:
AttachAFile, as its name implies, is used to attach a file to the work item, including an explanatory
note but no attachment category.
AttachANote flow action prompts users to type in a text note that becomes an attachment to the
case instance.
AttachAScreenShot is used to select one window from a list of windows on the Windows desktop,
and then capture and attach a screen shot JPG image to the case instance with an explanatory
note.
AttachAUrl is the flow action you might use to attach a URL reference to the work object with an
explanatory note.
AttachFromScanner allows us to start the Image Viewer tool to scan, view or update a TIFF file
from a scanner and attach it to the case instance.
AttachmentForECM allows us to attach a document from content repository to the case instance.
Add Attachments Automatically
There may be times where we want to add the attachments automatically to the case, so users do not
need to add them manually as optional steps. For example, if the same file should be attached to every
case instance of a certain case type, there's no reason to use a flow action and require user input.
To add a note or link or file, we can use the “Attach Content” smart shape in the flow rules. In the flow rule,
when the smart shape is processed, an attachment is attached to the case instance automatically. End
users can view the attachment in the same way as before, in the perform and review harnesses.
When attaching a file, we can choose either to attach a binary file from a folder or to reference an
attachment from the case type.
The “Attach a URL” option requires a URL and a description; the “Attach a note” option requires a note
and a description.
As part of attaching a file automatically, we can generate a PDF file and attach it to the case. To
accomplish this, we can use the “Create PDF” smart shape. For example, a list of the purchase request
items can be generated as a PDF file and attached to the case instance. We provide the name of the
section that has the data to be included in the PDF file, plus the description. The description is used for
the PDF file name.
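For example (the section and description names are illustrative), the shape might be configured as:

    Smart shape:  Create PDF
    Section:      PurchaseItemsList     (the section whose data goes into the PDF)
    Description:  PurchaseRequestItems  (also used as the PDF file name)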
Once the PDF file is generated and attached, it can be viewed the same as any other attachments.
Restrict Access to Attachments
Process Commander provides a powerful tool to restrict operations that can be performed on case
attachments. We restrict access using the attachment category. The attachment category is a business
classification to indicate the content or significance of the attachment. It is different from attachment type.
At the same time, it is used to control operations on attachments, including create, edit, view, and delete.
In PRPC, attachment categories are part of the Security category and are instances of the
“Rule-Obj-AttachmentCategory” rule type. Below is the standard File attachment category rule. This rule can be
specialized by copying it to a different ruleset and “Applies To” class. We can reuse the standard
categories or create an entirely new attachment category rule instance, such as invoice, packing slip etc.
Two main tabs are available on the rule form. On the first tab, Security, we can control access to the
attachments through Privileges and/or When rules.
In the “Access Control List by Privilege” section, we can select a privilege rule in the “Privilege Name” field.
The system uses the “Applies To” class of the attachment category to validate the privilege name. We can
select any of the following checkboxes that apply. If the checkbox next to an operation is not selected, the
qualified user cannot perform that operation.
Create — to grant users the ability to add attachments of this category if they have the privilege in
the Privilege Name field.
Edit — to grant users the ability to edit attachments of this category if they have the privilege. The
permission to Edit implies permission to view.
View — to grant users the ability to view attachments of this category if they have the privilege.
Delete Own — to grant users the ability to delete attachments of this category that they added
earlier if they have the privilege.
Delete Any — to grant users the ability to delete any attachments of this category if they have the
privilege. The permission to delete any implies permission to delete own.
We can add multiple privileges by using the “Add a row” icon. When we have multiple rows, users must
hold at least one privilege to gain access. The order of rows in this section is not significant.
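For example (the privilege names are invented for this sketch), a category for invoices might be configured as:

    Privilege Name     Create  Edit  View  Delete Own  Delete Any
    InvoiceUser        yes     no    yes   yes         no
    InvoiceManager     yes     yes   yes   yes         yes

With rows like these, an operator holding either privilege can add and view invoice attachments, but only a holder of InvoiceManager can edit them or delete attachments added by others.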
We can also define a list of When rules to control access to the attachments. In this case though, all When
rules must evaluate to true for a qualified user to be granted access.
In the “Access Control List by When” section, we select a When rule in the “Rule” field. The system uses
the “Applies to” class of the attachment category rule to find the When rule. Next, select any of the
operation checkboxes that apply. If an operation checkbox is not selected, the user cannot perform that
operation. The operations are similar to control by Privilege with the difference that they are now controlled
by a When rule rather than a Privilege rule.
The “Enable Attachment Level Security” option allows the operator who, at runtime, attaches a work
attachment of this category to identify one or more work groups that have access to the attachment. When
enabled, this attachment-level restriction operates in addition to, and independently of, any restrictions
defined on this tab for the category.
The next tab, “Availability”, provides a list of the attachment types. We select the checkboxes for the
attachment types that are valid for the category and the work type identified by the “Applies to” class of this
attachment category rule.
In this example, only “File” attachments are valid for this category.
If no types are selected, the category is not accessible. Instead, a default category rule in the Work class is
used for the respective type. The default category has no security restrictions.
When adding an attachment to the work item, users have the ability to choose the “category” to which the
attachment belongs.
Conclusion
End users can add attachments to case instances manually as needed, or we can add attachments
automatically at a certain point in the flow. We can view the added attachments easily through the section
included in standard harnesses.
We can restrict who can add, view and remove attachments through the Attachment Category rule type.
We can reuse the standard attachment categories or create a new attachment category. When adding an
attachment, the end users can select an appropriate category for the attachment to ensure that the
attachments are secure.
As we learned in the introductory courses, an assignment of a case can be routed to an operator’s worklist or to
a queue known as a workbasket. Assignments can be pushed to or pulled by an individual operator. In this
lesson, we will look at more details of the push versus the pull concepts.
Sometimes, we want to assign a task to an external person who is outside of the organization and may not have access
to PRPC. For example, when we are processing a purchase request, we may want to get quotes from multiple preferred
vendors. The quoting task assignment is sent to the vendors, and they can respond to the assignment through the web. This
process is called Directed Web Access (DWA). We will learn how we can configure DWA and how the external users
can respond to the assignments in this lesson.
We have seen some basic standard routers such as “ToWorkList,” “ToWorkBasket” etc., in the introductory courses.
Now we will learn about standard routers for advanced requirements.
Explain Push and Pull Routing
There are two basic types of routing: Push routing and Pull routing.
Push routing routes the assignment to a single user’s worklist, and as such there is immediate ownership
as soon as the assignment is created. PRPC can use multiple criteria to select the owner, including
availability (if an operator is available or on vacation), the operator’s work group, his/her skills, or current
workload. Depending on how the routing is configured, it is possible to route to a substitute operator if the
chosen operator is not available. The worklist shows all the assignments for that operator, which can be
filtered to show only specific case type instances and can be sorted using any column title. Users can work
on any assignment from the worklist.
Pull routing, on the other hand, routes to a workbasket that is shared across multiple users. Ownership
does not occur until an operator takes action on it. Most applications use “GetNextWork” logic to select the
most urgent assignment from multiple workbaskets. When this happens, the assignment is removed from
the workbasket and is assigned to the operator’s worklist. GetNextWork logic is covered in more detail in
another lesson. In most of the standard portals, we can pull the next assignment to work with “Get Next
Work” logic by clicking on “Next Assignment” button at the top of the portal.
Let’s take a look at how routing is incorporated into a typical user experience, especially from a work
group manager’s perspective.
The case manager portal, typically for work group managers and other managers, shows a list of operators
in their work group, on the right side of the dashboard.
As a work group manager, he/she has the privilege to view individual members’ worklists. Selecting a
particular operator (in this case, the operator named “Purchasing Manager”) shows all the items that have
been pushed to his/her worklist. There are at least three that have not been processed for a long time.
The work group managers and all the other operators can see the list of workbaskets to which they have
access, on their portal, on the bottom right hand side of the dashboard.
Selecting a workbasket shows the assignments in a particular workbasket. They are waiting to be pulled by
an operator.
An operator can select one of these items directly, which promotes it to his/her worklist, or use the
GetNextWork function, which selects an item for them. GetNextWork queries both worklist and workbasket
items, and is a great way to ensure that the most appropriate item gets the most attention. This
way, we can prevent an operator from cherry-picking the items they want to work on from the
workbasket while neglecting the items in their worklist.
Configure Directed Web Access (DWA)
We might need to get information from external users who don’t have operator credentials to log in to PRPC.
During the purchase request process, for the items that we are
planning to acquire, we can request a quote for a specific item from an external vendor. The vendor is
not an operator who can log into PRPC.
Sending the assignment to an external user is called “Directed Web Access” (DWA), because the external
user accesses the assignment on the internet through a web browser. As a prerequisite, we need to
configure PRPC to send emails. (See Designer Studio > Integration > Email > Outbound Email.)
We can send the assignment to an external user, by selecting the assignment type as “External” in the
assignment tab. When we select this assignment type, we can configure what correspondence we want
sent to the vendor.
When the case reaches “request for quote” external type assignment in the flow, PRPC sends an email to
the vendor. The vendor gets an email as shown below, with a link.
When the recipient at the vendor organization clicks the link, they get the quote request flow action in a
web page. When they fill in the requested data and submit the flow action, the assignment and the flow
action are complete and the flow moves on to the next shape.
We need to configure the URL that is to be sent in the emails. We can configure this through Designer
Studio > System > Settings > URLs > Public Link URL. The Public Link URL is typically set in the form shown
below.
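For example (the host and port are illustrative), the URL points at the PRPC web application, which by default uses the /prweb context root, on a server that external users can reach:

    http://pegaserver.mycompany.com:8080/prweb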
Identify and Use Some Advanced Standard Routers
When we have the assignment type as “WorkList” or “WorkBasket,” we can select a host of standard
routers. The introductory courses covered some of the standard routers and the purpose of selecting
them. Now, let’s look at some more standard routers.
We can see the list of standard routers in the smart prompt for the router field.
Most of these routers are for push-based routing, which routes to the worklist of a particular operator. Most
of the routers are parameterized and we can enter the appropriate parameters when we select a router.
“ToWorklist” routes to a particular operator by name.
“ToWorkParty” routes to a work party. The actual operator depends on the work object; for
example, if the workparty is “Initiator”, the actual operator is different for every work object. Work
Parties are covered in more detail in another lesson.
“ToWorkGroupManager” and “ToOrgUnitManager” route to the manager of the operator’s
workgroup and Organizational Unit.
“ToWorkgroup” routes to a random operator within a workgroup.
“ToSkilledGroup” also routes to an operator in a workgroup and takes required skills into account.
“ToLeveledGroup” is similar to “ToSkilledGroup”, but instead of the system choosing an operator
randomly, operators are sorted by workload and desired skills.
We should not get confused by the naming of some of these routers. ToWorkgroup, ToSkilledGroup, and
ToLeveledGroup do NOT route to a group; they route to an operator, as is the case with all push routing.
These routers simply use the workgroup as a criterion for selecting the operator.
A couple of workbasket routers are worth noting. Workbasket routers are used with pull-based routing.
The “ToWorkbasket” router does exactly that: it routes to a particular workbasket.
“ToSkilledWorkbasket” also routes to a particular workbasket. We will look at the details of this
router later in this lesson.
Determine Work Balanced and Skill Based Routing
To assist both push and pull routing, we can associate skills to operators and assignments.
The skill rule stores very little information: a description, and a rating range from low to high, with the high
score representing the highest level of proficiency in the skill.
Operators are associated with any number of these skills, and the appropriate rating is supplied. The
assignment is then associated with skills as well, depending on the type of router selected.
If the operator has a rating that is equal to or above the level required by the assignment, that operator can
be chosen by the router. Note that a skill can be marked as required or not. If a skill is not required, the
system treats it as a “nice-to-have”: it is used in ranking the choice, but is not considered an absolute
requirement.
Let’s look at a few out-of-the-box routers that incorporate skills into their algorithm.
For skill-based push routing, an operator is selected for a specified group and required skill. For the
“ToSkilledGroup” router, this selection is random.
For pull-based routing, the “ToSkilledWorkbasket” router adds an assignment to a workbasket, and also
associates a skill with that assignment. Subsequently, GetNextWork will take these skills into account when
finding an assignment for an operator to work on.
Please do not be confused by the term “ToSkilledWorkbasket”. There is no such thing as a “skilled
workbasket”. This router merely sends an assignment to a workbasket and marks the assignment as
requiring a particular set of skills.
For the “ToLeveledGroup” router, the selection is prioritized by load and desired skills. A high load, in this
case, refers to when an operator has a high number of assignments that are past deadline.
Conclusions
As we saw, work can be pushed to a worklist, or workbasket assignments can be pulled to create a
worklist assignment.
We can send the assignments to external users and external users can process the assignment over the
web by clicking the link in the email.
There are a variety of standard routers available to us and we can choose the appropriate one based on
our need. We saw how some of them relate to load and skill based routing.
Default “Get Next Work” Functionality
PRPC provides different ways to access the “Get Next Work” functionality. On the top left corner of the
case manager or case worker portal, the “Next Assignment” link provides a way to access the
GetNextWork functionality.
On the other hand the “Get Most Urgent” button may also appear at the bottom of the user forms on the
confirmation harness allowing users to access the next available task right after completing one.
When users click the Next Assignment link or the Get Most Urgent button, PRPC starts the standard activity
Work.GetNextWork. This activity calls the final activity Work.getNextWorkObject.
This final activity (Work.getNextWorkObject) examines the property pyWorkBasketFirst. The
“pyWorkBasketFirst” property corresponds to the “Get from workbaskets first?” checkbox on the Work tab
of the operator ID data instance, identified with number (1) below.
When “Get from workbaskets first” is checked, the activity Work.findAssignmentinWorkbasket is called.
Otherwise, PRPC calls Work.findAssignmentinWorklist before examining the workbaskets.
The Work.findAssignmentinWorkbasket activity uses a standard list view rule,
Assign-Workbasket.GetNextWork.ALL, which returns a list of tasks sorted in decreasing order of urgency.
On the same operator ID rule form, the Work Settings tab has another check box, “Merge workbaskets?,”
which is identified with number (2) in the screen shot above. It indicates whether or not all tasks in all the
workbaskets listed should be combined for the operator in a single list before sorting them. If this check
box is not checked, the task assigned could come from the first workbasket, even though there could be a
task with a higher urgency in subsequent workbaskets listed in the work tab for this operator.
If the “Merge workbaskets?” checkbox is checked, PRPC displays another checkbox “Use all workbasket
assignments in the user’s work group,” which is identified with number (3) in the screen shot above. This
checkbox, when selected, indicates to PRPC to consider tasks only from workbaskets for which the “work
group” field (on the workbasket rule form) is the same as the work group field of this operator (identified
with number (4)).
The next step is to filter the list to get to the most appropriate task. To do that, PRPC applies the decision
tree Assign.GetNextWorkCriteria to each task in the sorted list.
This decision tree rule first checks to see if the task is ready to be worked on. This means that the
pyActionTime property of the task is NOT in the future.
If it is ready to be worked on, the decision tree checks if the current worker has already worked on or
updated the task earlier that day.
If the current worker did not work on the task, the Assign.GetNextWorkCriteria now examines whether the
worker has the required skills to work on this task. The worker’s skills are recorded on the work tab of the
operator ID rule form. Note that skills are covered in detail in the “Routing” lesson of this course.
The search ends when PRPC finds the first surviving task (if any) that meets all the criteria.
The GetNextWork activity creates and populates the newAssignPage page and locks the case instance. If
the System Setting rule GetNextWork_MoveAssignmentToWorklist is set to true, the selected task is
moved to the operator’s worklist.
If no task is found in the workbasket-related operations, PRPC repeats the process but uses the
Work.findAssignmentinWorklist activity and the list view Assign-Worklist.GetNextWork.ALL.
Thus the default “Get Next Work” functionality can be summarized into these steps:
1. Users click either the “Next Assignment” link or the “Get Most Urgent” button.
2. The Work.GetNextWork activity starts.
3. The Work.getNextWorkObject activity starts.
4. The pyWorkBasketFirst property (the “Get from workbaskets first?” checkbox on the Work tab of
the Operator ID rule form) is examined.
   a. If TRUE, the Work.findAssignmentinWorkbasket activity is called first, and then the worklist is
   examined.
   b. If FALSE, the Work.findAssignmentinWorklist activity is called first, and then the workbaskets
   are examined.
5. The Work.findAssignmentinWorkbasket activity starts and uses the list view
Assign-Workbasket.GetNextWork.ALL. The list of assignments is sorted in decreasing order of
assignment urgency (property Assign.pxUrgencyAssign).
   a. If the “Merge workbaskets?” checkbox is NOT checked, the workbaskets are searched in the
   order defined on the Work tab.
   b. If the “Merge workbaskets?” checkbox is checked and “Use all workbasket assignments in the
   user’s work group?” is checked, assignments from all workbaskets with the same work group
   as this user are assembled into a single list.
   c. If the “Merge workbaskets?” checkbox is checked and “Use all workbasket assignments in the
   user’s work group?” is NOT checked, assignments from all workbaskets listed on the Work tab
   are assembled into a single list.
6. The Assign.GetNextWorkCriteria decision tree is used to filter for the most appropriate task.
7. The first task that passes all the above requirements is selected.
8. A new newAssignPage is populated with this task and the case instance is locked. Based on the
GetNextWork_MoveAssignmentToWorklist system setting rule, the assignment may be moved to the
operator’s worklist.
9. If no assignment is found in the workbaskets, steps 5 to 8 are repeated with the
Work.findAssignmentinWorklist activity and the standard list view Assign-Worklist.GetNextWork.ALL.
Note: The order in step 9 is reversed if the pyWorkBasketFirst property is false, as explained in step 4: the
worklist is processed first and then the workbaskets.
Conclusion
The “Next Assignment” link and the “Get Most Urgent” button use the “Get Next Work” functionality. This
functionality can be customized for each operator using the settings in the Work tab of the operator ID data
instance. The next assignment to be worked on can come from a workbasket or first from a worklist. All
the workbaskets listed in the tab can be merged before pulling the next assignment. Either the workbaskets
listed in the tab are considered, or workbaskets belonging to the operator’s workgroup are considered.
Your PRPC application selects and provides the best, most appropriate task to operators when they click
the “Next Assignment” link or the “Get Most Urgent” button.
Major components of PRPC Notifications
In order to better understand the notification functionality in PRPC, let’s take a quick step back and identify
the major components of email communication, even outside the PRPC world. This helps to frame our
discussion on PRPC notifications.
An email is sent from a sender to one or more recipients, and uses a four-step model.
1. For recipients, we need very little information: an email address, and perhaps the recipients’ names.
2. The sender, in addition to needing an address and name, requires account information, and perhaps
a provider, server and port. If an email client program is used, these are typically set once early on in
the process.
3. The body of the email contains the actual message that needs to be communicated to the recipient.
4. In between the message and the recipient, let’s add a “delivery” step. Gone are the times when
every email required someone to click a “send” button. We still send these ad-hoc messages, but our
inboxes fill up with other emails as well, such as task-triggered emails, like “You were recently sent
a new credit card. Call to activate.” We also get (friendly) reminder emails, like “Your credit card is
now two weeks past due. Pay now or expect a large fee.”
Let’s look at the PRPC rules and data that fit into this fourstep model.
1. The recipients of the emails are represented as PRPC work parties, or as an email address typed
directly in certain configurations. If the recipient is internal to the system, the email address is
stored in the operator record.
2. The Sender information is stored in an “Email Account” rule. This rule holds account information, and
as with life outside PRPC, a single account is used to send multiple emails.
3. The message itself is created from a “Correspondence” rule, which dictates the actual content and
layout of the email body. Correspondence rules can also contain smaller, reusable “Correspondence
Fragment” rules and/or “Paragraph” rules.
4. On the delivery front, task-driven notification is set up from the PRPC representation of tasks:
assignments. Assignments point to a special subset of activities, called “notification activities”. Or they
can be set up with the “Send Email” smart shape after the assignment shape in the flow rules.
Reminders are set at the SLA level. That is, when the goal or deadline of a service level agreement
has been reached, a notification email is sent out. This is done using an “Escalation Action”. Ad-hoc
emails are typically sent using the out-of-the-box “Send Correspondence” flow action, which can be
configured as a local, flow-wide, stage-wide, or case-wide flow action.
Creation of an Email Account
In order to send emails, we need to set up an email account. The easiest way to create an email account is
through the Email wizard, which can be accessed by selecting the DesignerStudio button > Integration >
Email > Email wizard.
In the wizard, change the dropdown value to “Configure an email account”. Note that this account is
shared for a workpool. Multiple cases, even multiple assignments for a case type, share the same
email account record; therefore, we only need to do this once for a workpool. When creating an email
account using the wizard and selecting a workpool name, PRPC makes the Email Account rule name
the same as the workpool name, and implicitly associates this account rule with the subsequent
notifications.
Use the form below to provide the account information. To help us get started, click the “Select Email
Provider” button to prepopulate the form with information from a wellknown email provider.
Let’s select Yahoo! Mail from this list.
We then fill out other critical information, such as the address, host, user ID, and password. It’s possible to
use different account information for sending and receiving. For now, we’ll use the same.
Correspondence Creation and Configuration
In PRPC, we can create a correspondence rule from any explorer, under the Process category.
In the correspondence rule form dialog, we need to specify the type of correspondence by selecting one
of the standard correspondence types: Email, Fax, Mail, or PhoneText. We also need to make sure the
context (application layer, Applies To class, and ruleset name) is accurate.
A correspondence rule is a template that contains the standard body of the message that we wish to send.
We can insert PRPC properties and include logic as well as text in this content. The correspondence rule
provides a Rich Text editor with a wide variety of formatting tools to change the appearance of our
correspondence. The toolbar includes selectors for bold, italics, fonts, font size, color, spell checking, list
formatting, alignment and including images and graphic elements.
PRPC properties are marked using double angle brackets (<< >>), and these are replaced with the actual
values at runtime before the correspondence is sent to the recipient.
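For example (the properties and wording here are illustrative), the body of an email correspondence might contain:

    Dear <<.pyWorkParty(Originator).pyFullName>>,
    Your purchase request <<.pyID>> has been approved for a total of <<.OrderTotal>>.

At runtime, each << >> reference is replaced with the property’s current value on the case.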
We can also include larger fragments of text, including paragraphs, sections, other correspondence rules,
and correspondence fragments, by clicking the “Include a Rule” button, when we are editing the message
contents of a correspondence rule.
Correspondence Fragment rules are part of the Process category, and Paragraph and Section are part of
the User Interface category.
Add correspondence to a flow
Once the correspondence rule is created, we can configure the flows and assignments to utilize the rule, to
send the correspondence when the case instance is being processed.
We have covered the details of the “Send Email” smart shape in the SAE I course. We can use the smart
shape in a flow rule to send the correspondence to any work party. The correspondence rule referenced
in the correspondence template field should be of the Email correspondence type.
Instead of the smart shape, we can also use a utility shape and call a correspondence activity. PRPC
includes several standard correspondence activities. The two main ones are CorrNew, which is typically
used when generating correspondence from a utility, and CorrQuickStart, which is a simpler
correspondence generation activity with fewer options than CorrNew. CorrNew is rarely called directly
anymore, as it is the activity used behind the scenes by the “Send Email” smart shape, which offers
easier configuration.
In a flow rule, instead of using the Send Email smart shape or a utility shape that calls a correspondence
activity, we can configure the Notify tab of an assignment to send correspondence. Assignment
correspondence is generally used to send automatic notification messages to the users who have the
assignments, alerting them that more work has arrived in their worklist, or acknowledging to some other
party that the specific assignment has happened. For example, in an UnderWriteLoan flow of an insurance
application, an assignment can alert the loan officer that a new loan is ready for his or her review. In the
purchase request application, the “Ship Purchase Items” assignment can notify the requestor that the items
requested are about to be shipped.
Note that, since notification is now set up in the notification tab, a mail icon appears on the assignment
shape, similar to the “Send Email” smart shape. We can enter one of the activity rules “Notify,” “NotifyAll,”
“NotifyAllAssignees,” “NotifyAssignee” and “NotifyParty” in the Notify field, and enter the appropriate
parameters.
NotifyAssignee sends a message to the assignee, or to the first contact in the workbasket for
workbasket assignments.
NotifyAllAssignees is similar, but sends to all workbasket contacts.
NotifyParty sends a message to designated work parties.
NotifyAll sends to all work parties.
“Notify” is a template for creating customized notification activities.
Some of these rules can optionally check for urgency, and only send email when the urgency is equal to or
above a specified value.
If the service level is not met for an assignment, we can send the correspondence to the appropriate
recipients. Previously we learned that service Level rules are referenced in the Assignment tab of the
assignment shape in a flow.
In the service level rules, we can define one or more escalation actions that run when the goal or deadline
has passed without being met. The list of possible actions is shown below.
Notify actions can be used to send correspondence. We can see what activities are called when using the
Notify Assignee and Notify Manager activities.
Whenever an email is sent automatically as part of processing the case instance, or manually (ad-hoc), a
copy of the email is attached to the case instance, and we can view it from the attachments section in
the perform harness.
Clicking the attachment link opens up the copy of the email with the contents and identifies who sent it.
Ad-hoc Notifications
We’ve looked at use cases for automated emails: a smart shape in a flow, on an assignment, and triggered
from an SLA. Now, let’s look at ad-hoc messaging. Imagine a situation in which a manager wants to send
an initiator a question about the current purchase request. He doesn’t want to approve the request until
he gets his answer. He could just email the requestor directly, but prefers doing it in PRPC for better record
keeping. The assignment is set with the local flow action “SendCorrespondence”. This could also be set as
a flow-wide, stage-wide, or case-wide flow action.
At runtime, from the other actions menu, we can click the “Send Correspondence” link, which enables us to
send ad-hoc notifications.
When we click on the link, the correspondence action is initiated and we see the work parties dropdown.
The Owner work party is configured as the operator who initiated the purchase request case.
Now, we select from a list of out-of-the-box correspondence template rules. We can select “SampleLetter”,
which allows us to edit the correspondence before sending it.
We can click the Next button to finish, and the correspondence is sent back to the requester. Please note
that all emails sent from PRPC are included as attachments in the case instances, just as we have seen.
Introduction
During the normal sequence of processing a case instance, an event may be considered a
business exception. When that event occurs, a ticket may redirect the processing to another point
in the processing of the case.
In PRPC, tickets are part of the Process category and are instances of the “Rule-Obj-Ticket” rule
type.
We raise the ticket when the business exception happens, and we reference the ticket at a point
where we want the flow of the case to resume again. In this lesson, we will learn how to raise and
reference tickets.
Know When to Use A Ticket
With the introduction of alternate stages in Pega 7, the use of tickets is diminished. An alternate stage is
meant for exception flows and processes that deviate from the happy-path primary flow. Let’s look at one
example. During the processing of a purchase request, anyone can reject the request for any reason.
Whenever the request is rejected, we have a different starting point for the special processing of the
rejection. This can involve sending a notification to the requestor, returning the funds allocated for the
purchase request, and so on. This special processing can be done with the use of tickets or alternate
stages. We recommend the use of alternate stages. Whenever the request is rejected, we change the
stage to an alternate stage such as “Rejection,” where we can have the steps to notify the requestor,
return the funds, and so on.
Alternate stages are discussed in detail as part of the Case Lifecycle Management lesson group.
In another example, we want to hold the purchase request case from processing further until all the
subcase instances of purchase orders are resolved. This can be done either with the use of tickets or by
using the advanced Wait shape, with Case Dependency being the wait type. We recommend using the
Wait shape. The Wait shape is covered in a different lesson.
But, we still might have instances, where using a ticket is the easiest or the only way to satisfy a
requirement. For example, when a purchased item is delivered to the requestor, they can confirm delivery
or they can return the purchase if the item is defective or did not meet their expectations. So, in the
purchase order sub case, a ticket is raised whenever the purchased item is returned. It is referenced in the
purchase request parent case to restart the purchase request process.
A ticket rule only defines a name and description and not the processing to occur in the case of an
exception.
Raising a Ticket
In the flows for a case, we define when an event may be considered a “business exception” by raising a
ticket.
PRPC provides us with two distinct ways to raise a ticket. One way is to provide users the option of
selecting the Work.SetTicket or @baseclass.SetTicket flow actions when processing an assignment. If an
assignment is presenting a case instance with a condition that meets the business exception criteria, users
can select this flow action to trigger the exception processing. As shown below, users select a
ticket from the dropdown to set the ticket during the processing of the case. An HTML
rule, ActionSetTicket, displays the list of tickets. These standard flow actions require users to have the
“ActionSetTicket” privilege.
The second way to raise a ticket is by calling the standard activity @baseclass.SetTicket, which takes
the ticket name as a parameter and turns the ticket on or off. Instead of using the standard SetTicket flow
action, we call this standard activity from the flow action in which the business exception occurs. In the
return-to-purchasing business exception flow action, we raise the “ToPurchasing” ticket that we created
earlier by passing the name of the ticket as the parameter to the activity.
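As a sketch, the step that raises the ticket might look like this (per the description above, the activity also takes a setting that turns the ticket on or off):

    Method:  Call SetTicket
    Param:   TicketName = ToPurchasing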
Reference a Ticket
In the flows for a case, we also need to define the starting point of the processing that needs to occur
once a ticket is raised. We do this by referencing the ticket.
Once a ticket is raised, flow processing stops and resumes at the point in the flow where the ticket is
referenced.
In the process modeler, specific shapes have the ticket functionality built into the Properties panel.
Use the “Tickets” tab of the Properties panel on the shape we want the processing to resume from when
the ticket is raised.
The following shapes provide a “Tickets” tab in their properties panel:
Assignment
Decision
Subprocess
Utility
End
Split For Each
Split Join
Integrator
Assignment Service
Most of the smart shapes
The shape has a ticket indicator with the description next to it as shown below.
We can use the Ticket landing page, available by selecting the Pega button > Process and Rules > Work
Management > Tickets, to verify which ticket rules are in active use in our current application, and by which
flows or activities.
Identify and Use Standard Tickets
Several standard tickets are available in PRPC. Among those, there are a few common ones. The advantage
of using these tickets is that we don’t need to raise them. We just need to reference them where the flow
has to resume.
“Work.StatusResolved” is automatically raised by the system when a work item is resolved. We
can then reference it in our flows to indicate the starting point of the processing we would like to
occur when our work item is resolved.
“Work.Withdraw” is similar but is raised when the work item is withdrawn. If this needs to be
handled in a special way in our flow, we can reference it to indicate the starting point of the special
processing.
The “Work-Cover.AllCoveredResolved” ticket is raised by PRPC when all covered work items
are resolved. The ticket alerts the cover work item, and any needed special processing
referenced by this ticket is triggered. The parent purchase request case is waiting in a
workbasket, and when all the subcases are resolved, the ticket is raised. It is referenced here in
the Update Status utility shape to resolve the status of the parent case automatically.
The “Data-Party.ApprovalDecisionMade” ticket is quite common to use when processing needs
to occur as soon as enough parties have reached a decision, either approved or rejected.
Conclusions
Tickets are a powerful feature that helps to handle business exceptions easily and efficiently, when
alternate stages or Wait shape and dependencies cannot be used. Turning tickets on and off is simple.
They are also a great tool to build flexibility into our applications.
The Ticket rule itself is just a name and description. We raise the ticket where the business exception is
happening and we reference the ticket where we want the flow of a case instance to resume.
There are some standard tickets available for us to reference in our flows and they are automatically raised
by PRPC.
Introduction
Traditional programming models involve procedural execution of logic; in the case of a BPM application, this logic is tied to the
business process. In certain scenarios this programming model poses significant challenges, especially when it comes to
adapting to change quickly.
Let’s look at an example: a customer is filling out a questionnaire during the enrollment process that is used to determine the
quote for his insurance plan. There are several considerations here: the insurance calculation depends on a variety of
factors; the execution of the calculation should not dictate the order in which the questions are answered; and the model
should be flexible enough to handle new or changing factors. PRPC offers a powerful declarative engine that can compute
declarative calculations. These declarative rules work in tandem with the business process and do not require any explicit
reference in the business process to be invoked.
Explain the Benefits of Declarative Processing
Declarative rules are not invoked from other rules. This eliminates dependencies between rules,
making the creation and deletion of rules independent of each other. For example, suppose management
decides to eliminate the discount offered to all employees. If we had used a procedural rule like a data
transform to calculate the discount, to effect this change we would have to delete the data transform and
change the other rules that invoke it. If the discount is written as a declare expression, we just need to
delete the expression.
Keeping the rules independent makes it easy for us to separate business rules from the process. This
gives us the option to delegate these rules to business managers. A declare expression can invoke a
decision table and the managers can modify decision tables directly in production.
Declarative rules offer speedier development because developers only need to configure the rules, not
decide where to invoke them. Not having developers configure how these rules are invoked offers
multiple benefits. Development time is shorter and the implementation is less prone to errors, since the
declarative engine handles the processing of the rules. Maintenance is easier, since new team members do
not need to dig through the application to locate these rules, and the rules are easier to configure than
activities. We can use the Declarative Network landing page to access the declaratives defined in the
application.
Declarative rules support the features that apply to other rules, such as class specialization, ruleset
and rule specialization, circumstancing, and effective dates.
When developing PRPC applications, we may have some situations where we need to decide whether this
rule can be written as a declarative or as a procedural rule. In most cases, the answer to that question is to
write the rule as a declarative rule. For example, the unit total for a selected line item is the product of the
line item’s price and quantity (UnitTotal = LineItemPrice * LineItemQuantity).
This logic can be written using either a data transform or a declare expression. In this case, it should be
written using a declare expression so it’s easier to maintain and modify.
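A sketch of the corresponding declare expression (the property names follow the example above):

    Target property:  .UnitTotal
    Expression:       .LineItemPrice * .LineItemQuantity
    Calculate value:  Whenever inputs change  (forward chaining; see below)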
Another benefit of declarative processing is performance. Well-written declarative rules offer better
application performance than procedural rules. When working on a purchase request application,
the customer can order any number of items. Declarative calculation of line item totals can automatically
trigger the calculation of the grand total for the purchase request. Declarative rules allow us to use chaining
to any number of levels which creates a declarative network.
Differentiate Between Forward Chaining and Backward Chaining
Since declarative rules are not referenced explicitly, PRPC must determine when to execute them. Most
declarative rules use one of two modes: forward chaining or backward chaining. All declarative rules use
source and target properties.
The target property is what is being calculated or computed using these declaratives and
Source properties (in some cases there is more than one property) are the ones which are used
as an input.
Forward Chaining
Forward chaining executes when the value of any of the source properties changes. Most
declarative rules follow forward chaining.
In the case of declare expressions, the expression is calculated whenever any of the source properties
changes. In our example, the subtotal gets calculated whenever the quantity or the unit price changes.
Declare Expressions are configured in Forward Chaining Mode when we select the option Whenever
inputs change.
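To make the mechanics concrete, here is a minimal sketch of forward chaining in plain Java. This is a toy illustration, not Pega’s declarative engine: the ForwardChainingDemo class and the property names are hypothetical, and a real network would also handle evaluation order and cycle detection.

import java.util.*;
import java.util.function.Supplier;

public class ForwardChainingDemo {
    static final Map<String, Double> props = new HashMap<>();
    static final Map<String, List<String>> dependents = new HashMap<>();
    static final Map<String, Supplier<Double>> formulas = new HashMap<>();

    // Forward chaining: assigning a source value immediately recomputes
    // every target that depends on it, and the recursion carries the
    // change through the rest of the network.
    static void set(String name, double value) {
        props.put(name, value);
        for (String target : dependents.getOrDefault(name, List.of())) {
            set(target, formulas.get(target).get());
        }
    }

    public static void main(String[] args) {
        // Subtotal = Quantity * UnitPrice (hypothetical properties)
        formulas.put("Subtotal", () ->
            props.getOrDefault("Quantity", 0.0) * props.getOrDefault("UnitPrice", 0.0));
        dependents.put("Quantity", List.of("Subtotal"));
        dependents.put("UnitPrice", List.of("Subtotal"));

        set("Quantity", 3.0);    // Subtotal recomputed: 0.0
        set("UnitPrice", 9.99);  // Subtotal recomputed: 29.97
        System.out.println(props.get("Subtotal"));
    }
}

Backward chaining inverts this: nothing is computed on assignment, and the formula runs only when the target is read.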
In the case of Constraints, when the value changes, the constraint is executed to verify that the value
conforms to the conditions expressed. Constraints can have any number of rows and the order in which
they are listed is not significant. The message text can be entered directly in the rule form or saved as a
message rule which can be referenced as shown below.
Declare OnChange rules can track several properties that are saved as part of the case. When one or
more of these properties change, the rule invokes an activity that contains the logic to be performed. The
declarative engine invokes the activity only once, even if more than one property changes.
A Declare Trigger executes when instances of its class are created, saved, or deleted in the database.
On the database event, it invokes an activity that contains the logic to be performed. Triggers can be
used when we want to capture the history of how a property’s value changed over the case
lifecycle. For example, suppose the audit trail must contain information about the different values stored in the
discount property. Whenever the discount value changes, the trigger fires the standard activity
that writes an entry to the audit trail.
A Declare Index executes when the values of a page list or page group change, which requires the
system to update the records saved in the index table (corresponding to the index class). This is
primarily used in reporting, when we expose page list properties to improve performance. Whenever a
page in the list is created or deleted, or when one or more of its values change, the Declare Index
automatically creates or updates the records stored in the index table.
Backward Chaining
Backward chaining mode executes when the target property is referenced. Backward chaining is
supported only for declare expressions.
Declare expressions provide three options for Backward chaining.
1. When used, if no value is present
2. When used, if property is missing
3. Whenever used
The target property is considered used when it’s referenced by its name, such as in a UI rule, in a decision
tree or in a data transform.
The “if no value is present” and “if property is missing” options ensure the expression is calculated only once
in the case lifecycle, unless the property is removed from the clipboard.
When we use Whenever used, the system throws a performance warning, indicating that the expression
fires every time the target property is referenced.
Calling Declare Expressions Procedurally
The remaining two options – When applied by a rule collection and When invoked procedurally – allow a
declare expression rule to be used in a procedural manner, by a collection rule. These two options
behave similarly, except that chaining into or out of the expression is not supported by the When invoked
procedurally option.
Both options allow the declare expression to refer to itself, as in the expression .Property = .Property
+ 1. With either forward chaining or backward chaining, this type of expression would result in a never-ending
recursive calculation.
Collection rules are an advanced topic and are not covered in this course. As a result, the remainder of this
lesson focuses only on forward and backward chaining.
What are the Performance Impacts of Declarative Processing
Using declarative rules can significantly improve the overall performance of an application, since the
declarative engine controls the execution, and well-designed declaratives often outperform a procedural rule such as an activity
or a data transform. There are some key points to consider, though, when designing declaratives.
The decision to use forward or backward chaining for a declare expression depends on the
application requirements, especially how the target and source properties are referenced.
Choosing incorrectly can severely impact performance.
Forward chaining becomes more performance intensive if the expression uses many factors and they
change frequently. For example, let’s say we are calculating an insurance quote based on fifty
different factors, and these fifty factors are collected across various screens. With forward
chaining, the declare expression executes each time the user changes one of those factors, and if the
user changes the same factor more than once, it recalculates every single time. If we only
display the insurance quote after we collect all the factors, then forward chaining is not a good
choice, because the quote gets calculated 50 times instead of once. If we had used backward
chaining (one of the “When used” choices), the quote would be calculated only once.
However, in the same example, if the application should reflect the quote on each of these screens, so
users see how their selections impact the quote, then we have no option other
than forward chaining.
Similarly, when we display a shopping cart where users add items and the cart should immediately reflect
the total as it changes, forward chaining is the perfect choice. By default, a declare
expression uses forward chaining. Developers should think about where the source properties change
and how the target property is referenced to make sure the execution mode is optimal.
When using backward chaining, we need to choose the execution option based on how the target property
is referenced. Going back to the same example, suppose the insurance quote requires approval from
various people. If we use backward chaining with Whenever used, the
declare expression fires every time the application looks up the insurance
quote for display. The system throws a performance warning when the “Whenever used” option is selected. This
does not mean we should never select that option. Although we want to minimize the number of times the declare
expression is calculated, we also need to make sure the value is accurate. So we cannot select “When used,
if no value is present” or “When used, if property is missing” if we know the source properties can change.
Let’s look at another example: a purchase request application where users tend to
order more than 30 items. In this case, if we have a declare expression for the total price written as
Whenever inputs change, it gets calculated 30 times. However, if we configure the expression for the total
price to use “Whenever used”, the total price is calculated only once, when we display it. If we
display the total price on five different screens, then the expression is calculated five times. If we use
“When used, if no value is present”, it calculates only once; however, if the user goes back and changes
the request to add or delete line items, the order total does not reflect the change.
Using the Tracer helps us identify places where we can optimize the declarative calculations, as well as
make sure the expressions always display the correct values.
Finally, it’s important to remember that declarative evaluation can cause severe application errors that
would not otherwise occur with procedural evaluation. When configured incorrectly, a declarative rule may
cause the system to access pages that have not been created or properties that have not been set,
leading to unpredictable results. With a procedural alternative, we can better control rule evaluation. So, if
we need to perform a calculation at a specific point in a process, a procedural approach may prove less
error-prone than a declarative approach.
Conclusion
In this lesson we learned about declarative processing, and how it provides an alternative to traditional
procedural processing. The use of declarative rules can reduce errors and maintenance effort, simplify
development, and improve performance – if used properly.
We also learned about forward chaining and backward chaining, and how PRPC controls the execution of
the declarative calculations using these modes. We will learn more about writing declarative expressions in
another lesson. The Lead System Architect (LSA) course provides more detail on advanced rules such
as Triggers and OnChange.
Business calculations involving user-defined fields should all be implemented using declaratives. Some
calculations use simple mathematical operations like add, subtract, multiply, or divide, while some
advanced operations might involve creating a Java function.
Business calculations may span multiple layers that together form a deep network. For example, the total price of
an order is calculated as the sum of the subtotals. The subtotal is calculated as the sum of all line item totals
minus any discounts. The line item total is calculated as quantity * unit price. Discounts may involve other
calculations depending on the customer type, geography, time when the order is placed, and so on.
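Expressed in declare expression terms, this network might look like the following sketch; the property names are illustrative, not taken from a specific application.

.LineItemTotal = .Quantity * .UnitPrice
.Subtotal = (sum of .LineItems().LineItemTotal) - .Discount
.TotalPrice = sum of all .Subtotal values

A change to any quantity forward-chains up through the line item total and subtotal to the total price, without any rule explicitly calling another.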
Design Declare Expressions
A declare expression can be created by right-clicking the property in the App Explorer and clicking Define
Expression.
In the new rule form, we need to identify the Target Property.
If the target property is of page list or page group then we need to identify the Page Context.
Declare expressions have a number of prebuilt calculation types. The list is different for each type of target
property. In this case UnitPrice is a decimal, so we are presented with the list of calculation types used for
decimals. To set a scalar value we use Value Of and then select the free-form expression, which allows us to write
any expression. There are three other choices that apply to decimals.
If the property is an integer it brings up other functions.
Other calculation types for decimals and integers include: sum of, min/max of, average of, count of, and the two
index calculations. Each of these loops over a list and performs its respective calculation.
The “result of decision tree/table/map value” calculation types allow us to call other business rules and set the
target property to their return value.
The “Value of first matching property in parent pages” option allows declare expressions to “reflect” a value for a
property of the same name on a parent page. The class of the target page must equal or be a descendent of the
class of the declare expressions rule.
Let’s take a look at examples of declare expressions that can be used to calculate these values. The line item
total (subtotal) is calculated as the quantity multiplied by the unit price,
while the total price is calculated as the sum of all line item totals.
When the target property is a string, the available functions vary, and the selections are only Value Of and Results of
Decision rules. We can also apply a condition to evaluate which expression to pick.
Similarly, a date property comes up with a different list that applies to date functions. We can also apply a condition in
the In Which field, which applies a condition for each item in a page list or page group.
Identify and Use Utility Functions
Some complex declare expressions require using a function rule. Defining these is easier if we open the
expression builder and pick the function from the list of available libraries.
A function is a rule type in Pega that is used in various other rules. Functions allow us to extend PRPC
capabilities with custom Java code. Functions belong to the Technical category (Rule-Utility-Function) and
are grouped by library.
The Library rule is primarily used for grouping related functions. PRPC ships with various libraries, each
containing a list of functions we can use as-is. The expression builder allows us to select a library,
which in turn provides the list of functions available in that category. In most cases we should be
able to use these functions as they are; only in rare situations do we need to write new ones.
In the expression builder, first we need to select the library; if we are not sure which one the function might belong to,
we can select All.
When a function is selected, it opens the parameters panel. Functions require one or more parameters
to be passed as inputs. After adding the parameters, we click Insert to add the function. The expression
builder provides functionality to validate that the function call is valid.
Functions are also used in other rule forms like when, decision trees, etc. We will learn more about these in
other lessons.
Use Context-Free and Context-Sensitive Expressions
Declare Expressions can be configured to execute either using forward or backward chaining modes. The
lesson on Declarative processing provides more information on this topic.
The other important concept we need to learn for declare expressions is the context execution behavior.
There are two general types of context settings: context sensitive, which is set by selecting the first choice,
and context free, which is set by choosing the “regardless of any pages it is contained in” option.
The third option is a hybrid where we explicitly list the classes for which the expression behaves like a context-free
expression. In practice this option has limited appeal, as maintaining such a list reduces reusability.
The context sensitive option is the default for expressions. The term context sensitive really means that the
path defined by the page context plus the target property must be complete from the top-level page that derives
from the Applies To class.
In the examples we saw earlier, TotalPrice, which uses the sum of all line item totals, fires only when the
top-level page is of the work class to which it belongs (in this case ADV-Purchasing-Work-PurchaseRequest).
The LineItemTotal (subtotal) calculation that we saw earlier is used to calculate the total of each line
item. This expression is also context sensitive. The only difference is that the expression
uses .LineItems() as the page context property, so the expression fires based on the page list property
LineItems that is defined in the work class.
Context-free expressions allow us to define the expression without taking into account the pages in which
the target is embedded. This option is best used for calculations that are ALWAYS necessary and always the same,
regardless of where the object is embedded.
In our base application we implemented the unit price using a context-free expression. In this rule, we copy
the unit price from a page property named SelectedProduct. The page property SelectedProduct is
auto-populated by a data page, so the unit price is copied from the data page irrespective of where the line item is
added.
Instead of defining the Applies To class at the work object and supplying the page context, we define the
expression directly on the data class and set it to context-free.
Explore the Declarative Network Analysis
For many applications the complexity of the calculations can be quite significant, with potentially hundreds
of nested calculations. Luckily, PRPC provides tools for viewing the expressions directly.
The first tool is the Declarative Network landing page. We can get to the landing page from the Pega
button by clicking Process & Rules > Business Rules > Declarative Network.
The landing page shows us the topmost expressions. The expressions used by these top
expressions are not shown here. Think of this as showing the final results, not the intermediate
calculations. We can open our Total Price expression for further testing by clicking the highlighted icon.
Here we can see that the total price uses the subtotal, which is an expression that in turn uses quantity and unit
price. The unit price is itself an expression that uses another unit price property.
For complex networks we can zoom out (or in) to see different components of the network. PRPC also
provides alternative views of the network. Here we can see the basic tree view as well as the org tree view.
Another useful feature for more complex networks is the ability to expand the network one level at a time.
Here we can see that, starting at level one and expanding one level at a time, we can clearly see the
properties that factor into each calculation.
Unit Test a Declare Expression
We can unit test expressions in a couple of ways.
In the declarative network, on a specific property, we can click the run button to see the test
expression screen.
As we can see, all the dependent properties needed to determine the total have been identified. Each
property is editable, so we can test directly from here.
To enter a value, we click a property that does not have fx to its left; the ones with fx are
calculated by expressions. We then enter the value in the text box and click update.
Now if we update our Unit Price to 10, the discount expression fires and automatically updates the Unit
Price. Since we have already defined a value for Quantity, the subtotal and total price are also calculated.
For properties that are lists, such as our LineItemList, we can also add additional pages directly from this
screen. The expression testing tool has a number of other features which can be very helpful when
reviewing the calculations with other developers or members of the business team. This form can also
be accessed directly from the declare expression rule, by using the Actions menu and clicking Run.
Conclusion
Declare expressions are the most popular of all declarative rules, and we learned how to define them. Declare expressions
are useful when we want to calculate the total price of a purchase request, or derive the quote for an insurance policy.
Some applications require us to perform complex business calculations, which require function rules. Remember that
function rules are grouped using libraries.
We also looked at examples of how to write context-free and context-sensitive declare expressions. Finally, we learned about
the declarative network analysis tool for exploring the nested levels of declarative rules, and how we can unit test a declare
expression to make sure it works.
Enterprise applications require us to design the application so it is flexible and supports modifying the
business logic in real time. Pega offers us the ability to delegate rules to business users, who can
change the logic directly in a production system without going through the longer development cycle.
Let’s look at a couple of simple examples that show how a business user can make changes directly in a
production system.
1. When an employee completes a purchase request, the request is sent to their manager if the order
total is more than 500 dollars. This led to employees gaming the system by entering requests
just under the limit so that their requests would be auto-approved. Managers wanted to
periodically change the amount so that employees were not aware of the limit and their requests
would get assigned to their managers depending on the new limit.
2. Consider that the company is now selling a new product line, so we need to update the discount table to
include the new product line.
Delegation of Rules
One of the key benefits of PRPC applications is that they are very adaptive to change. A well-designed
PRPC application should involve business users not just in capturing requirements in the product, but also
by providing them with the option to change rules in real time.
Some of the changes are related to data, for example the list of products that a user can order or a list of
accounts owned by the customer or the list of transactions made by the customer. This data comes from
either internal or external sources and is usually updated outside of the application. Pega 7 helps us in
designing our application to separate data from the business logic so that changes in data can be handled
easily without impacting the application.
In this lesson, we will look at some changes that impact case processing logic.
1. The bank decides that it wants to automatically approve a credit card dispute if it is less than 30
dollars and if the customer has had the card for more than 2 years.
2. When fulfilling the order, the invoice amount should apply the tax rate based on the state in which
the customer lives.
In the first scenario, the bank might decide to increase the auto-approval amount for credit card disputes
from 30 to 40 dollars, reduce it to 25 dollars, or even add another factor, such as customer type, so that
the amount is based on the customer type. By delegating these rules, these types of changes can be
made by the business user without the need for a developer to be involved.
Suggested steps
1. Identify the group of delegated users – These users must be part of a specific access group and the
rules should be in a specific production ruleset that these users can access. These users should also
be able to access the delegated rules from their portal.
2. Identify the rules that can be delegated – Typically any of the decisioning rules can be delegated.
This makes it easier to make changes, and PRPC supports additional configuration on decisioning
rules which makes it easier for users to update them.
3. Improve readability and customization options to make it easier for users to make their updates.
Let’s take a detailed look at how to implement these steps.
Setup Delegation Users
The first step in delegation is to create a new ruleset in which end users make changes. The ruleset should
use the application ruleset as its prerequisite. The ruleset should remain unlocked in production so that
the users can make changes.
If the rules are delegated to more than one user, then we should enable the checkout option so the
changes are not overwritten.
The ruleset should then be added as a production ruleset in the application rule.
The access group of the users should also be modified to include the production ruleset.
Lastly, the operator IDs should be configured to allow checkout of rules.
Delegate Rules to the Business
Decisioning rules are good candidates for delegation to business users. These rules change frequently
and are easier for business users to modify. After a rule is identified, we need to save a
copy of the rule in the delegation ruleset.
After the rule is saved, we delegate the rules by using the actions menu to add it to favorites.
This gives us the choice of whether to add it as a favorite for a specific user or for an access group. The last two
choices appear only after we add the rule to a production ruleset.
Once added, the delegated rules appear as part of the user portal for users to access.
1. For users of Designer Studio, these rules can be seen in the Favorites Explorer.
2. For users of the Case Manager portal, these rules are accessible in the operator menu.
Improving Readability of Decision Trees
Decision trees usually look complex, with multiple if-then-else statements, but we can make them more
user-friendly in several ways. We recommend you follow these guidelines when delegating rules to business
users, so it’s easier for them to configure the rules.
1. Display Label – The Display Label button displays the short descriptions saved in the properties instead
of the property names, so meaningful short descriptions make the decision tree more readable.
Shown below is how the decision tree originally looks. Note it uses the property identifier
(.PropertyName or Page.PropertyName), which makes it harder to read.
When we click the Display Label button, the same tree looks like the screenshot below.
2. Function Alias – Using a function alias adds more meaning to the condition than using the
actual function signature. A function signature looks like this:
@(PegaRULES:DateTime).isWithinDaysOfNow({theDate}, {days}).
We can modify that by using a function alias rule and the tree now looks like this.
Function Alias Rules
These rules are part of the Technical category and they reference a function. So we either need to create a
function and then create an alias, or we can create an alias for an existing function.
The key factor is identifying how the parameters of the function get their values (they can be assigned
a property value, like theDate, or can present an option for users to select when the alias is chosen).
In this case the function requires two parameters – the date and the number of days from that date. We set
the first parameter to the case creation date, so this alias is used only on cases where we want to select x
number of days from the case creation date.
A function alias helps make decision trees very readable, simulating the reading of if-then-else
condition logic.
Customize Options to Reduce Errors
When delegating rules to business users we recommend that you restrict the choices from which users can
select. As shown below, users can select one of three choices in the CustomerType field.
This can be enabled both in trees and tables, and the choices can be added in one of two ways.
We can add these directly in the Allowed Results table. Or, the results can be stored as part of a property and then we can
reference the property in the Allowed Values Property field.
We can also set other properties when a result is chosen. These property sets are hidden from users, and we
can set one or more properties for each result.
Decision Trees
The options fields must be carefully selected on rules that are delegated to business users. In most cases
the ability to change functions, call decisions, or take actions should be disabled. If there is a case where we
need to let users change functions, we should add the list of functions in the Functions Allowed field.
Then users can select from that list rather than the entire list.
Similarly, if users need to take actions, we use the Allowed Action Functions field to add the action
functions users are allowed to select.
Decision Tables
The Allowed Results section in decision tables works similarly to how it does in decision trees, except that in the
decision table rule form the options are listed as Delegation Options. Only the highlighted flags impact how
users can make changes to delegated rules.
1. Allow to update row layout — When this field is disabled, business users can only modify the
values in existing rows. They can neither add new rows nor delete existing rows.
2. Allow to update column layout — When this field is disabled, business users can only
change the values in existing columns. They cannot add or delete columns in the table. In
addition, they cannot change the properties referenced in the columns.
3. Allowed to build expressions — This field enables the end user to use expressions in the
cells of the decision table. If disabled, users can enter either a constant or a property in a cell.
Map Values
Map Values offer two options for delegated users — they can change the matrix layout or use expressions in the
cells.
Introduction
Applications become powerful when we automate their decision-making capabilities. Decisioning can be
applied to the action paths taken by the case, or to assign a specific value based on a set of specific
conditions. Let’s look at a couple of examples:
1. A customer calls in to dispute a transaction that appears on his statement. The company would like
to automatically approve all disputes that are smaller than a specific amount since it is not cost
effective for the company to process the dispute as a claim. In some cases, we may need to include
other factors like the number of such claims filed by the customer in the past year to make the
decision.
2. When completing a purchase request the system determines the discount for which this requestor
qualifies. The discount is determined using a set of specific conditions such as the department,
customer type (platinum, gold, silver), time period in which the order is made and so on.
Know Which Decision Rule to Choose Based on the Requirement
Decision rules play a key role in any enterprise application that involves business processes. They
represent the decisions and policies that drive business processes and case management.
Decision rules can be invoked using a decision shape in the flow, using a declare expression or using a
method in an activity step. The referencing of the decision rule depends on the context in which we need
them.
We use flows in case processing to determine which path the case goes down. Using decision rules in our
flows automates the case processing. For example, a decision table can decide if the case requires
approval from another manager or if it can move to the next step in the business process.
Declare expressions can use decision trees or decision tables to get a specific value based on a set of
conditions. For example, if the customer lives in Massachusetts, has a credit score above 720, and is
paying at least 20% as a down payment, then the APR for the car loan is 1.7%.
Decision trees are used in activities when we want the activity to decide whom to route the case to or what
the escalation activity on the SLA should perform.
There are four types of decision rules that we can write to evaluate decisions:
When
Decision trees
Decision tables
Map Value
When Rules
When rules belong to the Decision category; however, they can return only one of two values – true or false.
When rules are used in processes to decide which one of two paths a case takes. For example, if
the purchase request total is less than 500 dollars, then the purchase request does not require approval.
When rules are also used in other rules – UI rules, data transforms, other decisioning rules, and declare
expressions – to evaluate a condition. Though most of these rules support adding the condition directly, we
recommend that you use a when rule, so the condition is entered once in the when rule and reused
everywhere.
When rules can use a Boolean expression, a function rule that is part of a library shipped with the product, or a
custom function defined as part of the application. When rules can involve any number of conditions, which
can be combined using AND, OR, and NOT.
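For example, the approval check above could be a when rule whose condition is a simple Boolean expression; the property names here are illustrative, not from a specific application:

.TotalPrice < 500 OR (.CustomerType = "Gold" AND NOT .IsNewCustomer)

If the expression evaluates to true, the request skips approval; otherwise it is routed to the manager.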
Decision Trees
Decision trees are useful for applying the if-then-else construct. Decision trees can be constructed with
many branches which evaluate different properties to determine a decision. Decision trees can also
use function rules or when rules to build conditions.
One of the main reasons to use decision trees is that they allow nesting, which means a tree can invoke another
decision tree or other decision rules, such as a decision table or a map value.
Decision Tables
Decision tables are useful for presenting a set of conditions in a tabular structure. This is very
user-friendly for managers, since it resembles a spreadsheet with rows and columns. Decision tables are
suited to cases where we use a set of properties to arrive at a decision.
Some of the main reasons we use Decision Tables are that they:
1. Give us the option to evaluate all rows to arrive at a decision.
2. Let us increment values on a specified condition, which is useful in implementing scoring.
3. Can invoke another decision table.
Map Values
Map value rules let us determine a specific value based on one or two properties; think of this as a map,
determining a location based on latitude and longitude.
Map values are usually used in special circumstances where the values of one or two factors decide
the outcome. For example, suppose we want to determine interest rates by bank account type and customer
type.
If we have five different types of customers and eight different types of accounts, we would need forty rows in a
decision table to present this, while a map value can do it in a 5x8 matrix. Map values can also be chained to other map
values if we need more than two properties to determine the outcome.
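For instance, the interest rate map value might lay out customer types as rows and account types as columns; the rates shown here are made-up values for illustration only:

            Checking   Savings   CD
Gold        0.10%      1.20%     2.10%
Silver      0.05%      1.00%     1.90%
Standard    0.00%      0.80%     1.70%

The cell at the intersection of the two input values is the returned rate.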
Configure and use a Decision Tree
The decision tree can use a direct expression, such as .pyStatusWork = “Resolved-Completed”,
reference a when rule named StatusIsResolved, or use a function rule.
Decision trees usually return a value; these values can then be used by the calling rule, such as a declare
expression or a flow.
The decision tree can also be configured to set values on properties, using the Take
Action option in the menu. Continue is used to check additional conditions, and otherwise is always used as
an else condition.
Decision trees can also be configured to check the values of a specific property and decide the course of
action accordingly. Evaluate is used in such scenarios.
A new tab, Input, appears when the Evaluate option is selected, where we enter the actual property that is
evaluated.
Though the Decision tab is where we configure the decision, developers need to use the entries on the
Results tab to control how the decision tree can be used.
The Options area has two modes – basic (the default) and advanced. Selecting Advanced enables all the
checkboxes in the Options area. Besides these options, the Functions Allowed section determines
which function rules are available for selection. Similarly, we can configure which options are available for
Take Actions in the Allowed Action Functions area.
The decision tree can also have restricted return values; in this case the decision tree can return either
true or false. We can also enter a property in the Allowed Values property field, in which case the tree uses the values
saved as part of the local list in the property definition. Decision trees should be configured with all these
selections if they are delegated to business users. This helps our users select only the results that are
allowed, use the functions that are required, and take only the appropriate actions.
Configure and use a Decision Table
Decision tables are easier to delegate to business users and, in most cases, are the preferred decision rule if
the same logic can be written using either a tree or a table. A decision table resembles tools like MS Excel, so
most developers also feel comfortable using them. Decision tables are more apt when we are
using a smaller number of unique properties to evaluate the decisions. Since we use tables, each property
gets a column, and it makes sense to add rows in a way that gives each of these columns a value.
The table columns use properties for specifying a set of conditions. A property can be compared
against a single value, like Customer Type = Retail, or against a range, like Total Price is greater than or
equal to 300 and less than or equal to 1000. A hypothetical example follows.
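A discount table combining both kinds of conditions might look like this; the customer types, ranges, and percentages are illustrative values, not from a specific application:

Customer Type   Total Price            Return
Retail          >= 300 and <= 1000     5%
Retail          > 1000                 10%
Wholesale       (any)                  15%
otherwise                              0%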
Decision tables also offer additional capabilities, such as OR conditions. Suppose we want to introduce a new
column for Supplier, and for two of the three customer types the values in the other columns do not change.
We could use OR to split a cell instead of adding new rows. In the example below we have a single cell
using OR for suppliers (Named Vendor, New Vendor) for the first two rows.
We can return values, or enable the property set field to set properties directly; using the other icons we can
return more than a single property based on the condition.
The Results tab provides a similar set of options to those we saw in decision trees – we can restrict the results
returned using a property or a set of values. The main difference between the tree and the table is the first
option (Evaluate all rows); if this is selected, the decision table continues evaluating other conditions even after
one condition is satisfied. This behavior is not enabled by default and is used only in specific cases. We
will review one such case later in this lesson. When this setting is enabled, the decision table cannot return
values, and hence we cannot execute the decision rule from a declare expression.
The delegation options apply only for the delegated user. Refer to Pega 7 Help for the details of each of
these options.
Check for Consistency and Unit Test the Rules
Some rules, especially trees, become increasingly complex, and before we release our rules we need to
verify that they are correct. To help us, PRPC provides two options – Check for Consistency and Check for
Completeness.
The Check for Consistency button verifies that there are no unreachable paths in the rule and
no logic conflicts. For example, if we configure two conditions that evaluate the same thing, say
the first condition is urgency < 40 and the second condition is urgency > 20, then the two rows
are in conflict, since a value of 30 satisfies both conditions.
The Check for Completeness button populates rows for all values that currently fall through to otherwise,
giving us an opportunity to review, and perhaps add logic for, those conditions. Completeness is just a suggestion by
PRPC and we need not always use it.
The rules can be tested using the Run option in the Actions menu.
The form shows the first column; we enter a value and click the Run Again button to see the second column.
Introduction
Validation of data ensures the quality of the information being used in the application and in the business
process.
In general, data validation involves examining incoming values to ensure that they meet the application’s
requirements. The values coming into the application from a user or external source are compared against
predefined criteria. If the values do not meet the criteria, the system raises an error, and further action can
be taken, such as prompting for the correct information, or rejecting the incoming values entirely.
Validation of data input by the users has been covered in detail in the System Architect Essentials (SAE) I
and II courses, as well as in the user experience lessons in this course. In this lesson, we will look at the
overall validation process. We will review, at a high level, some of the concepts that have already been covered, and
we will explain new concepts in detail.
In PRPC,
Property rules are part of the Data Model category and are instances of the “Rule-Obj-Property”
rule type.
Control rules are part of the User Interface category and are instances of the “Rule-HTML-Property”
rule type.
Validate rules are part of the Process category and are instances of the “Rule-Obj-Validate” rule
type.
Edit Validate rules are part of the Data Model category and are instances of the “Rule-Edit-Validate”
rule type.
Constraints rules are part of the Decision category and are instances of the “Rule-Declare-Constraints”
rule type.
At the end of this lesson, you should be able to:
Explain the different levels of validation that PRPC provides
Configure Constraints validation
Describe Case level validation
Use the appropriate validation mechanism for different validation requirements
Different Levels of Validation
PRPC supports different levels of validation:
Mode validation: This validation is automatic and always in force.
Dictionary validation: Part of this level of validation occurs automatically; another part occurs
when it is built into the application, at specific times during process execution.
Object validation: This occurs only when explicitly designed into the application.
Constraint validation: Another optional level that occurs only when built into the application.
Based on declarative constraints rules, it is evaluated automatically by the
system each time a property identified in a constraints rule changes.
Mode Validation leverages the property mode setting. A property mode is identified on the “General” tab of
the property rule form and it is combined with property type, as shown below. The Mode Validation
enforces the property mode when a value is being assigned to a property. For example, you cannot set the
value of a Single Value property to a page. As stated earlier this validation is always in force and done
automatically by the system.
The Dictionary validation examines a property value in the context of the corresponding property rule. It
includes multiple validation tests. The first validation test is to ensure that the property value is compatible
with its type. For example, a property of Integer type cannot contain strings as a value. The list of available
standard property modes and types is shown above.
The next validation test is related to the Maximum length assigned to the property. On the “Advanced” tab
of the Property rule form, Pega 7 allows us to specify a character limit for Value mode properties of type
Password, Text or Identifier. The system uses any value specified in this field to restrict the property value
to a specific maximum number of characters. If the length of the input string exceeds this limit, the
clipboard keeps the longer value, but the dictionary validation adds an associated error message.
We may also reference an “Edit Validate” rule, which defines a Java routine that tests the validity of an input
value. Normally, if user input fails such processing, the input is rejected and a red X appears next to the
input field in error, along with messages that may convey more about the error and the suggested remedy.
Users can change the input and resubmit the form. “Edit Validate” validation is only possible with
properties of mode “Single Value”, “Value List”, or “Value Group.” Any architect who has Java knowledge
can build new edit validate rules, or we can use any of the standard rules that are available to us.
In this example, we used a standard edit validate rule, USZipCode, which checks whether the pattern is
5 digits.
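As a sketch, the heart of such a test can be expressed in a few lines of plain Java; the class and method names here are hypothetical and only illustrate the kind of check the rule encapsulates, since the actual edit validate rule form generates its own Java scaffolding around the test.

public class ZipCodeCheck {
    // Hypothetical helper mirroring the basic USZipCode test:
    // accept exactly five digits, reject everything else.
    public static boolean isValidUSZipCode(String theValue) {
        return theValue != null && theValue.matches("\\d{5}");
    }

    public static void main(String[] args) {
        System.out.println(isValidUSZipCode("02142")); // true
        System.out.println(isValidUSZipCode("2142x")); // false
    }
}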
The other validation tests performed at this level are the Control and Table validations. Both are configured
in the “General” tab of the property rule form.
The control shown here is for a date property and when this property is included in a UI rule, PRPC
validates that the user can enter a date or select a date from a calendar.
For other properties, such as states or countries in an address, we can have users select from a list
of values instead of possibly typing an invalid value. The Table type field is used for this purpose. To use
table validation, first make sure to select Single Value, Value List, or Value Group for the Property Mode
field, as we cannot use table validation for properties of any other mode. Table values can come from any
one of the sources in the list shown.
As we just saw, “Dictionary validation” includes multiple different tests. These tests are performed by the
system when:
User input entered on an HTML form is placed on the clipboard.
The Page-Validate method (in an activity rule) is applied to a page containing the property, or to
the property itself if it has mode Page.
An Edit Input rule and the Property-Validate method (in an activity rule) are applied to the property. [An
edit input rule provides a conversion facility. We can use edit input rules to convert data entered
by a user (or received from an external system) from a format that our application doesn't use into
another format. We can reference an edit input rule in the Edit Input Value field on the Advanced
tab of the property rule.]
This type of control and display validation can be enabled in a user interface rule as well, under the
“Presentation” tab of the cell, where the property is included. We can configure “required” fields validation
in the “General” tab.
The next level of validation is “Object validation”. This validation is optional and occurs when a flow action
rule calls a “Validate” rule, when called by a correspondence rule to validate user responses to
prompts, or when the Obj-Validate method is called in an activity. This method executes a validate rule
to test a set of property values. The details of validate rules have been covered in the SAE I and SAE II
courses.
A validation error message invalidates the page, preventing it from being persisted. The display of the
validation error message can be configured or customized; this has been covered in the SAE I and SAE II
courses.
Process Commander provides one more level of validation: constraints validation with declare
constraints rules, which we discuss next.
Constraint Validation
Constraints rules provide an automatic form of property validation every time the property's value changes,
in addition to the validation provided by the property rule or other means. The technique used is called
“forward chaining,” like in many other declarative rule types.
The system automatically adds a message to any property that is present on the clipboard and fails a
constraint. No other rules explicitly reference constraints rules. When we save a constraints rule, Process
Commander enforces it immediately and thereafter.
We can create the constraints rule using any explorer. In the Constraints rule, use the “Constraints” tab to
record the configurations that constrain the property values.
The first row checks a when condition. Here we are checking whether the client is a preferred client for any
application that involves customer service. The second row is the constraint; here we are checking
whether the client name has a value or not. The third line, “Else add message”, allows us to associate a specific
additional message with that property, or with another property, when the property fails
the constraint. Like in many other rules, we can use or build our own expressions for the when and constraint
conditions. We may add more constraints by using the “Add new Constraint before this one” or “Add new
Constraint after this one” icons.
At runtime, when the system detects that a constraint fails, it adds a message to the page of the property
identified in the left portion of the “Require That” field, if any.
Because constraint validation is enforced automatically by the system anytime the property’s value
changes, it might cause a performance impact. Therefore, it is critical to evaluate the potential impact
before choosing this type of validation.
Case level Validation
We learned in other courses how to use validate rules in flow actions in a flow rule. In the Details tab of
the Case Designer, we can configure validate rules to run when a case is instantiated and/or
whenever the case is saved to the database. The advantage of having validation at the case level is that,
whenever we need to validate certain data every time a case instance is saved, we can configure it once in the
Case Designer instead of needing to call the validate rule from multiple flow actions. When a purchase
request is made, for example, we can validate whether the parent case has enough funds to process the request before
adding the case to the Program Fund parent case.
Clicking these two links opens the Work-Cover-.OnAdd and Work-Cover-.Validate rules. These can be
specialized by saving the rules into the work class of the work pool or of a specific case. The validate rules
can be configured like any other validate rule that we used in the SAE I and SAE II courses.
Similarly, we can perform validation upon stage entry, which we configure in the Case Designer. For
example, we can have a validate rule that checks whether all the vendor information has been entered,
whether filled in by the requestor or by the acquisition department, before final approval by the VP.
Since we would not know who fills in that information, we can put the validation on the stage
entry instead of on each flow action meant for the requestor and the acquisition department. We can also
require that specified attachments be added to the case before the case can enter a stage.
Usage of Validation rules
Most of the validations explained in the previous sections, such as property mode, property type, control
(such as pxDateTime), expected length, required fields, and table values, are self-explanatory, and it is easy to
understand how they are used. But are these enough for validation? Let’s look at an example. We want
to restrict users so that they cannot enter a date in the future for a date of birth field, since that is not valid.
By defining date of birth as a date property and using the calendar control, users cannot choose anything
other than a date value; however, users can still select a future date. This is where Validate, Edit Validate, and
Constraints rules come into the picture.
Let’s compare the first two to understand when to use them appropriately.
The edit validate rule tests the validity of a single input value, and the tests are built using Java. It is
specified in the property rule for the tested property. The system has a number of standard edit validate
rules that we can use in our application.
In contrast, the validate rule can test the validity of multiple input values at once, and the tests are built
using easier-to-understand if-and-or logical expressions rather than Java code.
It is specified on the flow action rule that presents the form to the user, and multiple flow actions can use
the same validate rule. It can also be used for case level or stage level validation.
In general, it is a best practice to use validate rules over edit validate rules, because:
Validate rules are easier for non-programmers to design and understand. Edit validate rules
require Java programming skills.
Validate rules are more connected with the business process and the user interface, because flow
actions and case designer can call them.
Validate rules simplify application design, because one validate rule can verify multiple input
values at once.
Constraints validation happens automatically when a property changes its value. If we want automatic
validation at the property level, then we use constraints validation. But this might have a performance
impact, so we need to use it judiciously.
Constraints, Edit Validate, and Validate rules are server-side validation – validation occurs on the server
and the result is sent back to the client. All other validations are client-side validations. The “Enable
client side validation” flag must be checked in the “Advanced” tab of the harness, or in the “HTML” tab of
the flow action rule, for client-side validations to work. The harness rule enables the option for the whole
flow, if the harness is not changed, while the flow action rule can be used to override it for a specific action.
Triggers and OnChange
Triggers and OnChange belong to the declarative rules category. They have many similar characteristics
along with some important differences. Let’s start by understanding the similarities between these two rule
types.
Both rules allow us to execute activities in response to events in the system. Both rules allow for monitoring
property changes that are to be part of the tracked events. And both run on the application server. This is
important as Declare Triggers are sometimes confused with Database triggers which run on the database.
Declare OnChange rules are sometimes confused with OnChange JavaScript events which run on the
browser.
Triggers and OnChange rules differ in some significant ways as well. Triggers are associated with
persistence related events, for example, when objects are saved, deleted or committed. Triggers can
execute their activities asynchronously, as well as track the previous values of properties. These features
are all unique to triggers.
OnChange rules on the other hand, are fired purely based on changes to the clipboard. No persistence of
the object is required. This makes them especially useful in pure business rules engine applications which
often cannot rely on persistence events. Finally, OnChange rules help drive a unique BPM feature, Policy
Overrides. Policy Overrides allow for the dynamic and declarative override of a flow, based on changes to
the data on the clipboard. This is covered in more detail in a separate lesson.
Trigger and OnChange rules both help solve some common business requirements. For example, one
of the more common requirements is property auditing, where we need to track, and sometimes take action,
when critical properties change. Or perhaps users need to be notified when a property goes over a
certain threshold.
Another common use case is when integrating with systems of record. We can utilize triggers to
synchronize data with an external system of record. In applications with complex calculations OnChange
rules can be used to execute a calculation activity when values change.
Most of the use cases we just discussed can be implemented without these rules, in a more procedural
way. However, there are some key benefits to using declarative rules in this manner. Since these rules are
declarative, they are executed by Pega 7, reducing the chance that a developer forgets to call them. This is
particularly helpful in creating applications that are built for change, as we can define the policy and let
Pega 7 enforce it at the engine level. This leads to an application that is easier to maintain and debug.
Features and Behavior of Triggers
Let’s take a look at how a trigger rule can be used to track a specific property. This is also known as the Field Level Audit pattern, and
the rules can be created automatically using the Field Level Auditing landing page (accessed by clicking Process Management > Work
Management > Field Level Auditing).
On the landing page, we select the property we would like to track and then click Submit.
The Field Level Audit gadget creates a trigger and a data transform rule. The trigger rule, named pyTrackedSecurityChanges, is
created in the appropriate class.
There are two important fields that need to be configured in a trigger rule. The first field is the “Trigger When An Instance Is” field.
Here, it uses “Saved”, which means the trigger is executed whenever an applicable object is saved using Obj-Save. Executing the
trigger involves calling the TrackSecurityChanges activity, which is a standard activity shipped with the product. The field level audit
pattern creates a data transform that is used by this activity.
In the first step of the activity it opens the data transform rule created by the Field Level Audit gadget.
The data transform sets the property which we are tracking; if we track multiple properties on the same class, those
properties are added to this data transform. This rule displays a warning message because TotalPRCost is calculated by a Declare
Expression.
At run time, when we look at the audit trail of a case, it looks like the screenshot below.
Now let’s talk about the configuration we need if we are going to create a trigger rule that performs tasks other than
tracking properties. In the trigger rule, we have other choices for when the trigger rule gets executed.
Let’s look at the rest of these choices:
Deleted — executes the trigger whenever an instance that belongs to the Applies To class, or a descendant of that class, is
deleted using Obj-Delete.
Committed Save — executes when saves are committed to the database.
Committed Delete — executes when deletes are committed to the database.
Saved and — executes when an applicable object is saved using Obj-Save AND one of the listed properties has been
modified since the last save.
Note: Since Pega 7 normally defers the committing of saved and deleted objects, these two events can occur at different times in
the process.
The second field we need to configure in this rule is the Trigger Activity. This is the activity which runs when the trigger gets
executed.
Activities called by a trigger rule should be of type ‘Trigger’. This is set in the Security tab on the activity rule.
A trigger activity can run immediately or in the background. When running in the background, the primary page is copied to a
separate child requestor and run asynchronously. While this can be useful in specific situations, it is generally not advised, as
troubleshooting activities that run asynchronously can be challenging.
Triggers also allow us to specify a page context. The page context allows a trigger to run for each page in an embedded page list.
For example, we can specify SelectedCourses and the appropriate class (SAE-HRServices-Data-Course). As shown, the trigger
activity runs for each page in the line item list. Note that while the activity is run for each page in the list, the Applies To class of the
activity is still expected to be that of the trigger rule (SAE-HRServices-Work) and NOT the page context. In practice, the use of page
context on triggers is rarely implemented.
Pega 7 creates a clipboard page named pyDeclarativeContext that is available during the life of the trigger activity. This page is of
type Code-Pega-DeclarativeContext and has a value list of the changed properties. In some cases it may be useful to
programmatically examine this page to see which properties caused the trigger to execute.
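For example, a Java step in the trigger activity might walk that value list. This is a minimal sketch, assuming the value list on the page is named pyChangedProperties; verify the actual property name by inspecting the page in the Clipboard tool.

    // Hypothetical Java step: log which properties caused this trigger to fire.
    // pyChangedProperties is an assumed name; check the actual page structure.
    ClipboardPage ctx = tools.findPage("pyDeclarativeContext");
    if (ctx != null) {
        ClipboardProperty changed = ctx.getProperty("pyChangedProperties");
        for (java.util.Iterator it = changed.iterator(); it.hasNext(); ) {
            ClipboardProperty entry = (ClipboardProperty) it.next();
            oLog.infoForced("Trigger fired because " + entry.getStringValue() + " changed");
        }
    }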
Features and Behavior of OnChange
OnChange rules execute based on a property instead of an event that occurs on the database. In an
OnChange rule we can add multiple properties and when multiple properties are listed a change to any
one property causes the action logic to fire. To determine which property or properties changed we can
examine the pyDeclarativeContext page as previously discussed. The conditions section allows us to
define a when rule as well as the action.
There are two allowed actions, calling an activity and suspending work; the latter is also known as Policy
Overrides. Policy Overrides are a unique feature of OnChange rules and allow us to declaratively alter the
processing of work. This lesson does not discuss Policy Overrides in detail.
If we select the Calling an activity action, we can specify an activity based on if the when rule returns true
or false. If no when rule is specified the “when true” activity runs. The Security tab of this activity is set to
type OnChange.
OnChange rules, unlike triggers, execute an activity based on changes to the properties on the clipboard
and not the database or persistence events. These changes are tracked using standard forward chaining
logic. Activities of type OnChange do not fire other forward chaining declarative rules, such as expressions,
during the activity. The forward chaining rules are executed after the OnChange rule completes. This
avoids any infinite loops.
Like triggers, OnChange rules can specify a page context so that the rule applies to all elements in a list.
However, unlike triggers, when using a page context the activity called is expected to be of the page
context’s class, not the Applies To class of the rule itself.
Custom Functions
Functions are rule types written to extend the capability of the Pega 7 system, and like other rules in the system they belong to a
ruleset and can be versioned. However, they do not apply to a class and cannot be circumstanced. Functions provide an extremely
powerful way for us to implement custom functionality. That power has a downside: we can easily erode many of the advantages of
a Pega 7 based application if functions are overused. Functions, as compared to model-driven rules, are not Built for Change® and
can be challenging to debug. With that said, utility functions are preferred over adding Java steps in an activity since they clearly
isolate the Java code in a separate rule and can often be made more reusable.
Before ever writing a function from scratch, we recommend checking to see if one exists, as many common functions already ship in
the product. Functions, as we learned in the SSA course, are grouped by libraries. We also learned about the expression builder,
how to identify an existing function, and how to use functions in applications. Some out-of-the-box utility functions that can be
used by an expression are standard deviation, median, and average. In some cases Pega may not have an out-of-the-box utility
function for what you need; examples that require a custom function include finding the exponent, formatting a date, or
generating a random string.
Before creating custom functions, we need to create a library in which the functions can be grouped. The library can optionally
include all Java packages used in the functions belonging to it. Shown here is the Dataloader library we defined for the Data loader
application that we created to generate simulated cases for the reporting exercises.
We can look at all the functions defined for this specific library in the Records Explorer.
Let’s take a look at a couple of functions in this library. The function rule uses the library name as part of its definition. Function rules
use parameters, which are passed when the function is referenced. In this case, we have two parameters, start and end, with their
data types.
The logic of the rule is the actual Java code entered on the Java Code canvas. Notice it uses the java.util package
we saw earlier.
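As an illustration, the Java body of a randomNumber(start, end) function could be as simple as the sketch below. This is an assumption about the rule’s contents, not a copy of the actual Dataloader rule.

    // Minimal sketch of a randomNumber(int start, int end) function body.
    // nextInt(n) returns 0..n-1, so the +1 makes the range inclusive of end.
    java.util.Random generator = new java.util.Random();
    return start + generator.nextInt(end - start + 1);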
Now, let’s look at the randomDate function rule. There are a few things that are different from the previous function rule. This function
rule references other functions. In the first line, the function references a function named differenceBetweenDays that is defined in
the business calendar library. The second line references the randomNumber function that we looked at above.
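In plain Java, the idea behind randomDate can be sketched as below. This is only an approximation: the actual rule composes the differenceBetweenDays and randomNumber functions with the DateTimeUtils API, and the startDate/endDate parameter names here are hypothetical.

    // Approximate the randomDate logic: pick a random day between two dates.
    long diffMillis = endDate.getTime() - startDate.getTime();   // hypothetical java.util.Date parameters
    int days = (int) (diffMillis / (24L * 60 * 60 * 1000));      // whole days between the two dates
    java.util.Calendar cal = java.util.Calendar.getInstance();
    cal.setTime(startDate);
    cal.add(java.util.Calendar.DAY_OF_MONTH, new java.util.Random().nextInt(days + 1));
    return cal.getTime();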
The function also references the Engine API class DateTimeUtils. When opening the Engine API from the Help menu, we can see
DateTimeUtils defined in it. We can also access the PublicAPI interface, part of the Engine API, in the function rule; the system
recommends using the keyword tools to access the PublicAPI interface.
Collections
A Collection is a business rule that can procedurally execute a sequence of other rules. Collections are similar to business flows,
though a collection orchestrates rules that execute one after another and does not present any UI for a user to take action. The
Collection rule is an extremely powerful feature that can be used to easily track all referenced rules and rule executions. Collections
also allow for grouping of “like” rule execution. Any time you need to group a series of decision rules resulting in a single outcome
(e.g., “Approved” or “Rejected”), consider using a Collection rule.
Collections are invoked using the “Collect” activity method.
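A hypothetical activity step is sketched below; the exact parameter names of the Collect method should be confirmed against the method’s help before use.

    Step 1:  Method = Collect
             Step page = (the primary case page)
             Collection to run = DetermineLoanOutcome   (a hypothetical collection name)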
Collection Rule Components
The Rules tab contains two parts. The left side of the screen is where you tell the Collection which rules to run and in which order
they are to run. It also allows you to give a description of what is happening on each step. In this example, the rules to execute
include a Function rule, a Decision Tree and a Map Value.
When you click on each row, the right side of the screen opens. This allows you to specify when each action is run, the context
in which it is run, and how each response should be processed. In this example, if the loan amount is not in the given range, an
audit message is added to the case, and Collection execution stops.
The Pre/Post Actions tab allows you to perform an action before and after the rules in the collection run. These actions may simply
be to record the start and completion of the Collection execution. A typical use of the “Before This Collection” section is to initialize a
return value from the Collection.
The Specifications tab allows you to relate the Specification to a step in the Collection to provide traceability of the business case to
each step in the Collection.
Benefits of using the Collection Rule
Why not just use an Activity rule to do these types of evaluation? Many of the same features are available in the Activity rule.
However, the Collection rule has been designed for the rule engine processing and offers these benefits:
The Collection form itself makes it easy for the business to maintain, along with the decision and declarative rules that are
part of the Collection
Function aliases can be used to make the form easier to read and define.
Response actions implement common patterns without activities.
Improved performance over old collection rules and flows for rule orchestration.
Collections can also call other collections. In a strictly rules-engine-based environment where there is no user interaction with the
case, the ability to create a network of collection rules provides robust processing logic, all of which can be
maintained by the business owner.
Delegation Decisions
One of the keys to successfully engaging the business is to consider a number of factors throughout
development, not just at the end.
1. Who are the users we are delegating to?
2. Which rules and rule types are we planning to delegate?
3. How can we ensure the rules we want to delegate are simple enough for business users?
4. How does a business user access these delegated rules?
5. In which environment will the user make the changes?
Delegated Users
Business users differ in terms of their roles and their technical proficiency. Their familiarity with building
rules is a huge factor when we decide which rules we want to allow them to change and the kind of
changes we want them to be able to make. If the user is a specialist involved in making business decisions,
then the changes might be, for example, amending the discounted rate or updating the factors that
determine the discounted rate. If the user is a manager involved in making decisions impacting how the
case is processed, the changes might be, for example, amending the service level in terms of
urgency, what to do when the goal or deadline is reached, and so on.
Delegated Rule Types
Pega 7 does not restrict which rules can be delegated. Therefore, we need to decide
which rules are best to delegate. Some rules that are commonly delegated are decision trees, decision tables, map
values, expressions, when rules, and custom rules. That does not mean we cannot delegate other rules;
some, such as sections, data transforms, and validation rules, might require a little more training and
experience. Please note that flows and UI rules are excellent candidates for business users to work on
during the initial development phase; however, once the application is in production these rules become
more challenging for a business user to maintain.
Easier to Configure
The specific rule type selected determines how we can simplify the way a business user changes it.
Decision rules like decision trees and decision tables allow developers to configure the return values that a
user can change. Restricting the operations a user can perform (for example, not allowing them to add or
delete a row, or to add or delete a column) helps ensure that users do not
make errors. Decision trees can also reference function alias rules, which improves readability. Service
level rules come with a set of choices available for selection, such as sending an email, transferring to a
user, and so on.
That leaves us with the final two questions. Let’s look at them in a little more detail now.
How Do Business Users Access Delegated Rules
Delegated rules are internally saved in the same class where favorites are saved. Each time we delegate a rule, it creates or
updates an instance of System-User-MyRules. If we look closer, we see that these instances are marked as either individual or
access group favorites. Unlike the actual delegated rules, which are saved in production rulesets, these instances are not part of or
associated with a ruleset. They must be manually added to the Product file if we are moving them from one environment to another.
If the business user has access to Designer Studio, then they need to click the Favorites explorer to see the delegated rules. This
explorer provides quick access to both the rules we have saved as personal favorites and the ones that are saved as access
group favorites. Business users can click the rule name and the rule form opens up allowing them to modify the rule.
If users access a different portal, we can use the My Rules gadget. This gadget is accessed in the standard Case
Manager portal by clicking Options > My Rules. From there, users can expand the gadget and see the delegated rules. When
they attempt to open a rule, a separate window opens.
Out-of-the-box interfaces are just one option; if we are working with highly specialized requirements, we can use the MyRules
instances to provide a custom interface. Since these instances are data instances, we can easily use them programmatically to
build custom interfaces for our business users.
Deciding Where Delegated Users Work
Delegated users can work on these rules in various environments and there is no one right way to handle
rule delegation. It is important to consider various factors and find an approach that is best suited to the
application and the users who are making the changes.
1. In Development: When we delegate rules in the development environment, the rules are changed
and managed by the business, but the promotion of the rules follows a normal product delivery
lifecycle. This approach has the least risk but provides the least agility. It is useful when the changes
can wait and are not time bound. It is a safe approach; however, it adds a dependency that requires
migrating rules before the changes can be seen, which largely defeats the purpose of delegation.
2. In Production: On the other end of the spectrum is managing rules directly in production. This
approach has the most risk but provides the most agility. It is useful when there is a small set of
volatile rules that can be managed with minimal risk. We can minimize the risk by using the check-
in/checkout feature, which allows users to test a rule before it is used by others and ensures
some kind of approval process is built in. (We will learn more about this shortly.) This approach is
used in most cases, due to the quickness with which the changes can be seen. Remember, though,
that this option requires planning and risk mitigation even as it provides the business with a
lot of agility.
3. In a Separate Environment: A nice compromise is to set up a separate authoring environment. Here
rules can be managed by the business and tested without the risk of affecting production in any way.
Once tested, the rules can be promoted into production on a separate cycle from the standard
development environment. Though this approach looks ideal, it may not be practical because it
requires setting up a separate environment just for delegated users. Setting up a cloud instance can
make this easier by removing much of the cost overhead.
Approval Process
It is highly recommended that we use a check-in approval process, especially when users are making changes directly in the
production system.
The production ruleset (which is typically used in this case) should use the checkout option so that multiple users cannot update
rules at the same time. Checkout also helps us implement an approval process.
The production ruleset must also have the Approval Required option enabled, which is set on the Versions tab.
After making these changes in the ruleset we need to:
1. Include Work-RuleCheckIn as a workpool in the access group of the users.
2. Enable Allow rule checkout for each of the Operator IDs responsible for making changes.
3. Add the ruleset named CheckInCandidates to the access group of the operators who approve the check-ins.
4. Make sure the operators who approve the rules have an access role with the privilege UpdatePrivateRuleSets.
Pega 7 ships a standard check-in process already implemented for us. We can review it and make changes. We can also
override the flow in an application ruleset different from the production ruleset to use a different process.
Chaining Expressions
When discussing expressions, the key concept to understand is change tracking. Forward chaining
indicates that the expression is computed when any of the source properties change. For example, with Total =
Quantity * UnitPrice, Total is calculated when Quantity or UnitPrice changes. However, if you request
Total before either of them has a value, the expression does not get calculated. In addition to the
properties used in the expression, we can also identify additional properties in the Additional Dependencies
field. The order in this array is not significant, and the expression is calculated when any of these properties
change.
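For illustration, a hypothetical forward chaining expression with an additional dependency might be configured as follows:

    Target property:          .Total
    Expression (Value of):    .Quantity * .UnitPrice
    Additional dependencies:  .TaxRate   (recalculates .Total when .TaxRate changes,
                              even though .TaxRate is not referenced in the expression)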
Backward chaining, as the name suggests, computes the expression based on the target
property. This can be set in one of three ways:
1. When the target property referenced does not have a value.
2. When the target property is not present in the clipboard.
3. Whenever the target property is referenced.
Each of these has its own use, but the system generates a warning when we use “Whenever
used”. This option can nonetheless be appropriate in some cases, such as when the expression involves values
from properties on multiple pages or when the property is not referenced in many places in the application.
Declare expressions can be invoked procedurally by using a collection rule. When creating a collection
rule, we can include a declare expression along with other rules. If the declare expression is included in a
collection rule, then the declare expression rule should use the “Invoked procedurally” option. Pega versions
prior to 7.1.6 do not have this option.
If the declare expression used in the collection rule is chained to other expressions then we should use the
option “When applied by a rule collection”. This option is there in the product to support this use case and
also for backward compatibility.
Impact on Functions
Declarative expressions are rules that usually define formulas or calculations for a property. The Value of
option lets us enter calculations directly in the expression.
They also provide a number of out-of-the-box calculation types, such as Sum of, Average of, Count of, and so
on.
In addition, we can call utility functions defined as part of the application, which makes declare expressions
extremely powerful. When using functions, however, unexpected behavior can arise, especially when
using forward chaining.
Forward chaining, when used by expressions, determines which properties the system should watch for
changes. The list of properties to watch is determined when the expression is saved, by reading the rule data
and looking for property references. When a function dynamically refers to a property, or the property is
referenced indirectly as a string rather than as a property reference, unexpected behavior can arise. In other
words, if your property reference has quotes around it, there may be an issue.
Let’s look at some examples of how expressions can be written and the effect it has on change tracking.
Number Expression Change Tracked?
1 @Exp(.myValue) Yes
2 @getPropertyValue(".myProperty") No
3 @SizeOfPropertyList(.Borrowers) Yes
4 @countInPageList(true,".IsPrimaryBorrower",.Borrowers) No*
5 @indexOfMax(.Borrowers().FicoScore) Yes
1. The first expression uses the exponent function. Since the property is referenced normally, forward
chaining occurs whenever myValue changes.
2. The second example, calling the getPropertyValue function, uses myProperty as a literal string and not a
property reference. In this case forward chaining does not occur: the expression is not
recalculated when myProperty changes.
3. The SizeOfPropertyList example uses a page list property named Borrowers, and forward chaining
occurs. Since it is a page list, the expression is recomputed when a page is added or removed.
4. The countInPageList example is an interesting one, as it references two properties in different ways.
The Borrowers page list is a property reference – notice there are no quotes. On the other hand,
IsPrimaryBorrower is a literal string. In this example, the expression is calculated on creation or
removal of a Borrower page; however, it is not calculated when the IsPrimaryBorrower
property changes, since that property was referenced as a string.
5. The final example introduces a new syntax. In this example a ClipboardPropertyCollection is passed to
the function. The system tracks both the page list and the single value property changes. The called
function, indexOfMax, is able to iterate over the collection of FICO scores to find the index of the borrower
with the highest FICO score.
For cases like the second example, we can configure the declare expression’s Additional Dependencies field
to track .myProperty explicitly; this supplements the standard change tracking
algorithm.
In some cases it may be difficult to pass all the property references and values in a way that works with
forward chaining, so we may need to use backward chaining instead.
Goal Seek Pattern
This pattern is used to ultimately calculate or determine the value of a single decision, for example, the price of a quote or
the acceptance of a submission. This property is the “goal” in goal seek. The pattern uses backward chaining expressions to
determine which values are missing to determine the goal. For each value that is not available, the system can prompt the
user for its value. This is the “seek” part of goal seek.
The goal seek pattern is useful when we need to seek values for one of these dependent properties. For example, assume we
are calculating an expression that uses discount; if discount does not have a value, the total price does not get calculated.
For each missing value, the system can prompt the user or we can procedurally provide a value.
Pega 7 provides two standard flow actions, VerifyProperty and VerifyPropertyWithListing, as examples of goal seek. We need
to add the flow action in the flow either as a connector action or on a control such as a button.
The system is adaptive enough to prompt users only for properties that do not have values; for example, it can ask them to
enter a value for Discount if Discount does not have a value. We can configure what appears as the label using the short
description field.
Let’s explain the implementation of this pattern using an example that determines the value of the total actual
cost of an order. Here is the list of properties that we are using for this example.
Let’s look at the expressions. The total actual cost uses two other properties, LineItemTotalPrice and Discount.
Both of these properties use declare expressions to get their values. LineItemTotalPrice is the product of Quantity and Unit Price,
whereas Discount uses a decision table for its calculation,
and the decision table is configured as shown below.
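Putting the pieces together, the dependency chain in this example looks roughly like the sketch below; how Discount combines with LineItemTotalPrice in the top expression is an assumption, since the exact formula appears only in the screenshot.

    .TotalActualCost     <= expression over .LineItemTotalPrice and .Discount   (the goal)
    .LineItemTotalPrice  =  .Quantity * .UnitPrice
    .Discount            =  result of a decision table keyed on customer type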
After defining the properties and expressions, the next step is to include the standard flow action VerifyPropertyWithListing.
For the demonstration purposes, we created a sample flow with just this flow action followed by another flow action to display
the result.
The standard flow action VerifyPropertyWithListing has to be modified to change the goal seek property in both pre and post
actions.
Now that we have set up all these rules, if we run the flow we get the following output:
The highlighted text comes from the short description of the property. Looking at the expression, the total actual cost
requires LineItemTotalPrice and Discount. The system first attempts to calculate LineItemTotalPrice, and that
expression requires Quantity and Unit Price, so the system requests the value for Quantity first. Once we enter the quantity
and click submit, the system then requests the value for Unit Price.
Entering the unit price calculates the line item total, and the system goes back to seek the value for Discount. Discount
uses the decision table, so the system asks for the customer type.
When we use one of the standard flow actions for the goal seek pattern, the runtime screen presented to users displays the
value entered in the short description field, so it makes sense to set the short description of all fields involved in
the calculation. In most cases the requirements dictate whether we can use the standard flow action as-is or need
additional customization.
In summary, explore what’s under the covers of the goal seek pattern; looking at the flow action rule itself and the activity it
uses helps to further understand the rules engine. Note that the flow action carries a warning because the standard
flow action uses a deprecated rule type.
Keep goal seek in mind the next time you find yourself manually creating a screen that feels like a questionnaire or that
duplicates logic already in trees and tables. Goal seek is simple to use, so experiment with it and see if it fits your
application’s needs.
Understanding How the ServiceLevelEvents Agent Processes SLAs
In many situations, knowledge of how the ServiceLevelEvents agent works is unnecessary. Developing a
solution to a complex SLA requirement, however, benefits from knowing the basic “mechanics” of how the
ServiceLevelEvents agent operates.
The ServiceLevelEvents agent is defined as a Standard agent and by default runs every 30 seconds. No
attempt is made to process a System-Queue-ServiceLevel queue item until the current DateTime is greater
than or equal to the value of the queue item’s pyMinimumDateTimeForProcessing property. Passing that
test, no attempt is made to invoke the agent’s ProcessEvent activity unless a lock can be
obtained against the queue item’s associated case. If the case happens to be locked, the pxErrorList of
the queue item is set similar to the example below; the pxLastExecutionDateTime is also set to the
time when the lock was attempted.
A System-Queue-ServiceLevel instance has a pxEvent property, the values for which are limited to “Goal”,
“Deadline”, and “Late”. Whenever an assignment is created, the Assign.AddAssign activity is called. If a
ServiceLevel rule is configured against the assignment, AddAssign creates a ServiceLevel queue item with
the value of pxEvent set to “Goal”.
When the ProcessEvent activity is eventually called, a check is made to determine whether the current
queue item should be permanently dequeued, in other words, whether the ServiceLevel has run its course. If not, a
new queue item is constructed using values from the currently examined queue item. The value of the new
queue item’s pxEvent property is set using the ternary-expression statement shown below.
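A plausible reconstruction of that statement, based on the three pxEvent values described above, is the following; the actual generated code may differ in its variable names.

    // A "Goal" item is requeued as "Deadline"; anything past that becomes "Late".
    String nextEvent = "Goal".equals(currentEvent) ? "Deadline" : "Late";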
At the end of the ProcessEvent activity, the ExecuteSLA activity is called. The ExecuteSLA activity is where
the assignment’s ServiceLevel rule, if any, is opened and processed. An assignment stores the name of its
associated ServiceLevel rule within its pxServiceLevelName property. An examination of the System-
Queue-ServiceLevel class shows a number of properties whose names begin with “pxGoal” and
“pxDeadline”, which is by design. The main purpose of the ExecuteSLA activity is to recompute the
assignment’s urgency as well as to execute the list of escalation activities, if any, associated with the type of
event that has transpired.
Understanding the Difference between Work and Task SLAs
Work SLAs
The OverallSLA flow is a special flow that runs in parallel to any case that configures the ".pySLAName"
property when created. The Case Designer supports this value being set within the “Case Details” tab. The
value of “pySLAName” is the name of the ServiceLevel rule to be invoked by the single workbasket
assignment within the OverallSLA flow. The name of this workbasket is expected to be “default@” +
lowercase(org name). The value of the case’s “.pySLAName” property is transferred to the OverallSLA
Flow workbasket assignment’s “.pxServiceLevelName” property.
It is also possible to define “.pySLAGoal” and “.pySLADeadline” DateTime properties on each case. For
these properties to take effect, the ServiceLevel rule represented by “.pySLAName” must be configured to
use those properties as the Goal and Deadline times, respectively.
There may be some confusion about two of the options used to configure an SLA’s “Assignment Ready”
value, namely “Dynamically defined on a Property” and “Timed delay”, particularly the latter.
Setting the SLA start time in the future does not prevent the assignment from being completed or, in the
case of the overall SLA, prevent the case from being worked on. The effect of using these options can be
observed when assignments are processed back-to-back. Even though the user may have permission to
perform the second assignment, if the start time for the SLA is in the future, a Confirm harness is displayed.
That user, however, can still perform that assignment should they choose; there is no penalty for
completing work early. The “Assignment Ready” SLA value is set into Assign.pyActionTime.
The values for a case’s pySLAGoalExecute and pySLADeadlineExecute DateTime properties are strictly set
by overall SLA workbasket assignment ServiceLevels. These values are used by various case-wide
“Quality” and “Processes” reports.
“Quality” and “Processes” reports report against case tables. The difference between the two is that
“Quality” reports report against resolved cases (see: https://pdn.pega.com/reporting/understanding-
analyze-quality-reports-in-the-monitor-activity-workspace) whereas “Processes” reports report against
unresolved cases (see: https://pdn.pega.com/reporting/understanding-monitor-processes-reports-in-the-
monitor-activity-workspace).
Task-based SLAs
“Performance” reports (see: https://pdn.pega.com/reporting/understanding-analyze-performance-reports-in-
the-monitor-activity-workspace) and “Assignments” reports
(see: https://pdn.pega.com/reporting/understanding-monitor-assignments-reports-in-the-monitor-activity-
workspace) report against case history records that contain a “pyHistoryType” field. A pyHistoryType value
of “F” indicates the record applies to a flow action; a value of “A” indicates the record applies to an
assignment. “Performance” reports report against resolved items whereas “Assignments” reports
report against open assignments.
These pyHistoryType History-Work records indicate whether the flow action or assignment did or did not
meet the intended Goal and Deadline times. The properties used to record this information are
“.pxAssignmentPastGoal” and “.pxAssignmentPastDeadline”. If both values are null, performance fell
within the Goal time period. If only “.pxAssignmentPastDeadline” is null, the task at least
met the intended Deadline. However, if both values are non-null, the task is considered “Late”.
Understanding How Work and Assignment Urgencies are Computed
Below is the Rule-Declare-Expression (RDE) that defines pxUrgencyWork.
If, for example, a Decision Table were used to compute the “should be” value for pxUrgencyWork, the value of
pyUrgencyWorkAdjust would be computed as shown below.
Assignment urgency is computed via the Declare Expression below.
Here if a Decision Table were used to compute the “should be” urgency (pxUrgencyAssign), the value of pyUrgencyAssignAdjust
would be computed as shown below.
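One plausible reading of the computation both screenshots describe is sketched below; the variable names are hypothetical and the formula is an inference from the surrounding text, not a copy of the product rules.

    // Move the adjust property by the gap between the Decision Table's
    // "should be" urgency and the currently computed urgency, so that the
    // declarative sum lands on the target value.
    int delta = shouldBeUrgency - currentUrgency;
    pyUrgencyWorkAdjust = pyUrgencyWorkAdjust + delta;   // analogous logic applies to pyUrgencyAssignAdjust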
It is important to note that the assignment’s value for pxUrgencyWork is the value of the Work.pxUrgencyWork RDE at the
time the Assignment was created. That is, should the OverallSLA’s pySLAName ServiceLevel increase work urgency after an
assignment is created, any existing assignment would be unaffected.
Suppose a complex requirement where the work and assignment urgency are required to be identical and to increase over
time according to a unique urgency-versus-business-days timeline that each major customer is allowed to negotiate.
Each major customer would also negotiate unique overall goal and deadline values. It must be possible to set individual goal
and deadline times for each task within the case. Finally, the customer is not known until after the case is created.
The solution to adjusting the overall goal and deadline for a case once the customer is known is straightforward. The overall SLA
ServiceLevel rule identified by pySLAName could be circumstanced by customer, as this would cause the customer-
circumstanced version of the ServiceLevel rule to be opened. However, the ServiceLevel rule is not opened until the Goal-
dictated pyMinimumDateTimeForProcessing value is exceeded. A better approach is to invoke the pxAdjustSLATimes activity as
if called by the pxAdjustSLATimes local flow action. Prior to invoking this activity, the values for the Goal and Deadline “override”
properties, namely “.pyAdjustSLAGoalDateTime” and “.pyAdjustSLADeadlineDateTime”, should be set. See
https://pdn.pega.com/case-management/using-service-levels-slas-in-pega-7.
The problem remains of how to periodically update the overall work urgency, which, in turn, is the value that every case
assignment’s pxUrgencyAssign RDE should compute. An advanced agent could be implemented that periodically
examines every case; however, an advanced agent would need to implement case locking and database commit logic.
Similar to the OverallSLA flow, a different parallel flow that contains a single workbasket assignment could be spun off when
each case is created. In contrast to the OverallSLA flow, however, this spun-off flow’s workbasket assignment would loop back
on itself. The ServiceLevel rule for this flow’s workbasket assignment would merely set its Goal time equal to the next time that
the overall work urgency should be computed. Within a ServiceLevel rule, the Goal time property is named
“.pyEscalationProperty”.
At the Goal exceeded event, the SLA runs the following SLA Escalation Actions.
“Run Activity” UpdateUrgencies (custom)
“Run Activity” ComputeNextGoalTime (custom)
“Advance Flow” (OOTB)
The “UpdateUrgencies” activity performs the following:
Call a customercircumstanced Decision Table that computes “should be” work urgency.
Modify pxUrgencyWorkAdjust such that the pxUrgencyWork RDE computes to the “should be” work urgency
For each case assignment, modify pyUrgencyAssignAdjust such that .pxUrgencyAssign computes to the “should be”
work urgency
The ComputeNextGoalTime activity simply computes the next value of the Goal property. Any case property can be used; the
ServiceLevel rule would use the “Set to the value of a property” option to name this property. This is the same option that the
OverallSLA ServiceLevel rule is expected to use when specifying “.pySLAGoal” as the property that contains the overall case’s
Goal time.
If the SLA urgency requirement is based on business days, an even simpler option is to use the “Interval from when assignment
is ready” option as shown below.
If using this option, the ComputeNextGoalTime Activity would be unnecessary.
Understanding the Relationship Between SLAs and Locking Timeouts
As explained in Help, the System record defines the duration of a lock timeout, the default value being 30
minutes. If a case is idle for 30 minutes, someone or something else can open it and “steal” the lock.
That “something else” can be the ServiceLevelEvents agent. Help does say, “However, even after a lock is
marked as soft, the lock holder retains the lock and can save and commit the updated instance.” What this
means is that the lock holder retains the lock provided someone or something else has not “stolen” it.
Suppose the case is parked at an assignment with a ServiceLevel rule that defines an escalation such as
“Advance Flow” or “Transfer”. The escalation action could very well result in the case no longer being owned
by the now-former lock owner.
Another concern with SLAs and locking is that the ServiceLevelEvents agent cannot act on a work object that
is currently locked. Suppose a user forgets to close a case or log out of the application and instead leaves
the browser in its current state or closes it. Suppose also that this scenario takes
place just prior to the Deadline time for that case, where the Deadline specifies one or more escalation
actions.
The result would be a 30-minute delay before the ServiceLevelEvents agent can perform those escalation
actions. Whether a 30-minute delay is significant depends on the business use case. Suppose the goal and
deadline times are very short; a customer is waiting for a response.
One approach to dealing with this situation is to define a shorter locking timeout for time-critical cases using
the Case Designer “Detail” tab’s Locking strategy screen.
Note, however, that this setting affects every assignment within the case.
In extreme situations, it makes sense not to rely solely on a ServiceLevel rule being directly associated with a
sub-hour, time-critical case assignment. A possible alternative is to spin off a subcase prior to the time-critical
assignment. Either case propagation could be used to communicate the values of the parent case’s Goal
and Deadline properties, or the child case could refer to those properties directly using “pyWorkCover”. The
child case’s locking strategy would be defined such that it is not locked when the parent case is locked; for
example, the DetermineLockString activity is overridden so the lock string is defined as “.pzInsKey”.
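Such an override could be as small as a single step; the sketch below assumes the activity returns its result through a LockString parameter, so verify against the standard DetermineLockString activity before copying it.

    Step 1:  Method = Property-Set
             Param.LockString = .pzInsKey   (lock the child case on its own handle)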
If the parent case completes prior to its deadline, it could implement a strategy similar to the
UpdateCoveredTasks Declare Trigger, which sets a ticket against open subcases. If, however, the subcase
reaches the deadline first, it could send one or more notifications that the assignment is running late. Other
escalation actions are possible but may run the risk of uncommitted, browser-entered data being lost.
Case Study: Business Rules
Scenario
SAE wants to enhance the Candidate Interview stage by asking five basic questions as shown in the Conduct Interview screen
below.
SAE would like to evaluate the “Correctly Answered” values within a Business Rules Engine (BRE) following the submission of the
Conduct Interview screen.
The first step in the rule engine process is to check whether evaluation of the five questions can be skipped entirely. The criteria for
question evaluation exclusion are:
Recommended by Employee
All three ratings, i.e., Interview Rating, Technical Skill Rating, and Interpersonal Skill Rating, have a score greater than 0.70.
If not excluded, the rule engine should proceed to evaluate each question from the perspective of each of the three types of ratings.
Each rating type should be able to add an “Audit Item” to any question. An Audit Item has both an integer “adjustment” and a
“message”. The table below shows the criteria for when and how to add Audit Items to the five interview questions.
In this example, the same question is never evaluated twice. However, the solution must account for that possibility in the future.
For the ultimate interview question score, SAE would like the sum of the adjustments subtracted from 100 and then divided by 100,
to become a decimal value between 0 and 1. The resulting interview question score should then be used in the overall automated
candidate scoring decision. SAE wants any candidate who, in terms of the interview questions, scores less than or equal to .60 to be
rejected. Hence, in the above example, the candidate must correctly answer four out of the five questions, since answering only three
out of five results in a score of exactly .60.
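In other words, assuming each incorrectly answered question carries a 20-point adjustment (an assumption consistent with three correct answers yielding exactly .60):

    score = (100 - sum of adjustments) / 100
    two incorrect answers:  (100 - 2 x 20) / 100 = 0.60  -> rejected (score must exceed .60)
    one incorrect answer:   (100 - 1 x 20) / 100 = 0.80  -> passes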
SAE is adamant that the rules making the actual decisions regarding exclusion and audit item scoring be sufficiently
simple to allow delegation to a non-technical person, who can make changes as he/she decides. At the same time, it must be
possible to audit when any change has been made to these decisions.
Finally, SAE would like certain components of the solution to be reusable across all existing and potential applications as opposed to
being specific to the HR Services application only.
Approach
Implement the solution to the requirements stated above. It is not necessary to modify the Conduct Interview screen, nor is it
necessary to present the results afterward. Instead, it is sufficient to demonstrate that the rule engine works correctly. This
can be achieved by executing a data transform prior to invoking the rule engine, with that transform simulating any number of
checkboxes being checked.
Add a reusable ruleset to the HR Services application. After doing so, add a branch ruleset specific to this case study’s solution. The
branch ruleset would declare a dependency on the reusable ruleset.