
DevOps

DevOps is a set of practices that bridge application development and operational behavior to reduce time to market
without compromising on quality and operational effectiveness. It allows application developers and business
owners to quickly respond to customer needs, develop a quicker feedback cycle, and ultimately achieve business
value faster.

DevOps encourages a culture of collaboration between development, quality, and operations teams to reduce or
eliminate barriers through fundamental practices such as continuous integration, continuous delivery, and
continuous deployment. Adopting these practices and the tools to support them creates a standardized deployment
process so that you can deploy predictable, high-quality releases.

Pega Platform provides the tools necessary to support continuous integration, delivery, and deployment through
Deployment Manager, which provides a low-code, model-driven experience to configure and run continuous
integration and delivery (CI/CD) workflows or deployment pipelines for your application. Deployment Manager
provides out-of-the-box tools to enforce best CI/CD practices for your application. You can fully automate
deployment pipelines, starting with automated integration of developer changes through branch merging and
validation, application packaging, artifact repository management, deployments, test execution, guardrail
compliance, and test coverage enforcement.

Pega Platform also includes support for open DevOps integration using popular third-party tools such as Jenkins and
Microsoft Azure DevOps by providing an open platform, with all the necessary hooks and services. With open
DevOps integration, you can build a deployment pipeline using third-party tools to automate branch merging,
application packaging and deployment, test execution, and quality metric enforcement.

For more information about configuring DevOps workflows, see the following topics:

Understanding best practices for DevOps-based development workflows

In a DevOps workflow, the most important best practice for application developers to adopt is continuous
integration. Continuous integration is the process by which development changes to an application are
integrated as frequently as possible, at least once a day and preferably multiple times a day, every time
developers complete a meaningful unit of work.

Understanding the DevOps release pipeline

Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development through testing to deployment on your production system. Use Pega Platform tools
and common third-party tools to implement DevOps.

Understanding best practices for version control in the DevOps pipeline

Change the application version number each time you deploy changes to a production system. As a best
practice, use semantic versioning, because it offers a logical set of rules about when to increase each version
number.

Understanding continuous integration and delivery pipelines

DevOps is a culture of collaboration by development, quality, and operations teams to address issues in their
respective areas. To sustain progress and bring continued improvement, tools and processes are put in place.
Use DevOps practices such as continuous integration and delivery (CI/CD) pipelines to break down code into
pieces and automate testing tasks, so that multiple teams can work on the same features and achieve faster
deployment to production.

Installing and enabling the Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 3

To create a connection between Pega Platform or Deployment Manager and Nexus Repository Manager 3, use
the Sonatype Nexus Repository component. Use this repository for centralized storage, versioning, and
metadata support for your application artifacts.

Installing and enabling the Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 2

Create a connection between Pega Platform or Deployment Manager and Sonatype Nexus Repository Manager
2 with the Sonatype Nexus Repository component. Use this repository for centralized storage, versioning, and
metadata support for your application artifacts.

Automatically deploying applications with prpcUtils and Jenkins

You can use Jenkins to automate exporting and importing Pega Platform applications. Download the
prpcServiceUtils command-line tool and configure Jenkins to export or import archives. You can use a single
Jenkins build job to both export and import an application archive, or you can create separate jobs for each
task.

Migrating application changes

With minimal disruption, you can safely migrate your application changes throughout the application
development life cycle, from development to deployment on your staging and production environments. In the
event of any issues, you can roll back the deployment and restore your system to a state that was previously
known to be working.

Deploying application changes to your staging or production environment

As part of the Standard Release process, after you set up and package a release on your shared development
environment, you can deploy your application changes to your staging or production environment.

Packaging a release on your development environment

As part of the Standard Release process for migrating your application changes from development to
production, you set up and package the release on your shared development environment.

Understanding application release changes, types, and processes

The following tables provide information about the types of changes that you can make within a release, the
release types, and the release management process to follow based on the types of changes that you want to
deploy.

Testing applications in the DevOps pipeline

Having an effective automation test suite for your application in your continuous delivery DevOps pipeline
ensures that the features and changes that you deliver to your customers are of high-quality and do not
introduce regressions.

Understanding model-driven DevOps with Deployment Manager

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a standardized deployment process so that you
can deploy predictable, high-quality releases without using third-party tools.

Understanding distributed development for an application

When you use continuous integration and delivery (CI/CD) workflows, you set up the systems in your
environment based on your workflow requirements. For example, if only one team is developing an application,
you can use a single system for application development and branch merging.

Understanding continuous integration and delivery pipelines with third party automation servers

Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development, through testing, and to deployment. Use Pega Platform tools and common third-
party tools to implement DevOps.

Understanding best practices for DevOps-based development workflows
In a DevOps workflow, the most important best practice for application developers to adopt is continuous
integration. Continuous integration is the process by which development changes to an application are integrated as
frequently as possible, at least once a day and preferably multiple times a day, every time developers complete a
meaningful unit of work.

To enforce best practices when developing an application and to ensure that application changes are of high quality,
developers should use Pega Platform features such as branches. Before merging branches and integrating changes,
developers should also verify that the application meets guardrail compliance and that unit tests pass. If the
validation of development changes passes, the branch is merged into the application ruleset.

However, if validation fails, then the merge is rejected, and developers should be notified so that they can address
the failure and resubmit their changes. The feedback cycle of validating and integrating development changes
should be as fast as possible, preferably 15 minutes or less, because it increases productivity in the following ways:

Developers do not spend unnecessary time waiting to learn whether their changes are valid, which enables them
to make incremental changes.
Incremental changes tend to be easier to validate, debug, and integrate.
Other developers spend less time coordinating changes and can be confident that they are building
on validated functionality.

How you implement best practices for continuous integration depends on whether you have a smaller scale
development with one or two scrum teams using a shared development environment or multiple distributed
development teams. See the following topics for more information:

Understanding development best practices working in a shared environment

Development environments can be shared by one or more teams collaborating on the production application.
To practice continuous integration, use a team application layer, branches, and release toggles.

Understanding development best practices in a distributed development environment with multiple teams

If you have multiple teams working on the same application, each team should have a separate, remote
development server on which developers work. A central Pega Platform server acts as a source development
system, which allows teams to integrate features into the application in a controlled manner and avoid
unexpected conflicts between teams working in the same rulesets.

Understanding development best practices working in a shared environment
Development environments can be shared by one or more teams collaborating on the production application. To
practice continuous integration, use a team application layer, branches, and release toggles.

Build a team application layer that is built on top of the main production application. The team application layer
contains branches, tests, and other development rulesets that are not intended to go into production. For more
information, see Using multiple built-on applications on Pega Community.
Create a branch of your production ruleset in the team application. For more information, see Adding branches
to your application.
Perform all development work in the branch.
Optional: Use release toggles to disable features that are not ready for general use. Using toggles allows you to
merge branch content frequently even if some content is not final. For more information, see Toggling features
on and off.
Optional: Create formal review tasks for other members of the development team to review your content. For
more information, see Creating a branch review.
Optional: Use the branch developer tools to review the content and quality of your branch. For more
information, see Reviewing branches.
Optional: Lock the branch. For more information, see Locking a branch.
Frequently merge the branch from the team application layer to the production rulesets. For more information,
see Merging branches into target rulesets.

It is recommended that no more than two or three scrum teams share a development environment.

Understanding development best practices in a distributed development environment with multiple teams
If you have multiple teams working on the same application, each team should have a separate, remote
development server on which developers work. A central Pega Platform server acts as a source development
system, which allows teams to integrate features into the application in a controlled manner and avoid unexpected
conflicts between teams working in the same rulesets.

Remote development systems


Follow these best practices on the remote development systems:

Multiple teams can share a development system; whether they should depends on the geographical distribution
of the teams, system load, the risk of teams making system-wide changes, and the demand for system restarts.
Build a team application layer that is built on top of the main production application. The team application layer
contains branches, tests, and other development rulesets that are not intended to go into production. For more
information, see Using multiple built-on applications.
Put all necessary configuration information for the development server in a development application that you
can maintain, package, and deploy on demand so that you can quickly start up new remote development
systems.
Create a branch of your production ruleset in the team application. For more information, see Adding branches
to your application.
Name the branch with the feature or bug ID from your project management tool so that you can associate
changes with a corresponding feature or bug.
Perform all development work in the branch in versioned rules. Use branches for targeted collaboration and so
that you can use development best practices such as conducting branch reviews and monitoring application
quality metrics. You can also quickly test and roll back changes before you merge branches.
Do not do branch development on the source development system, to avoid having multiple versions of the
same branch on both the source development system and the remote development system. The same branch
might contain different contents that conflict with each other.
Avoid developing rules in unlocked rulesets. Lock rulesets to ensure that rules are not accidentally and directly
changed, because changes should be introduced only when branches are merged. Use a continuous integration
server such as Deployment Manager to ensure that passwords to locked rulesets are not shared publicly.

For more information, see Versions tab on the Ruleset form.

Optional: Use release toggles to disable features that are not ready for general use. Using toggles allows you to
merge branch content frequently even if some content is not final. For more information, see Toggling features
on and off.
Optional: Create formal review tasks for other members of the development team to review your content. For
more information, see Creating a branch review.
Optional: Use the branch developer tools to review the content and quality of your branch. For more
information, see Reviewing branches.
Optional: Lock the branch before you migrate it to the source development system. For more information, see
Locking a branch.
Avoid deleting a branch before you migrate it into the main development system.
Delete a branch after you import it into the main development system so that you do not import older data or
rule instances and unintentionally merge them into the main application.
Maintain a branch only as long as you need it. The longer that you keep a branch, the likelihood increases that
the branch will have conflicts, which can be difficult to resolve.
Be aware that you cannot make some rule updates in branches, such as updates to application records,
classes, libraries, and schema. Senior application architects on the team should make these changes directly on
the main development system.

Source development system


Follow these best practices on the source development system:

Use an established and reliable backup and restore process.


Maintain high availability on the source development system so that development teams are not affected by
extended periods of downtime.
Limit and restrict developer access to the main development system so that developers cannot make
impromptu application changes without going through the DevOps workflow.

Understanding the DevOps release pipeline


Use DevOps practices such as continuous integration and continuous delivery to quickly move application changes
from development through testing to deployment on your production system. Use Pega Platform tools and common
third-party tools to implement DevOps.

The release pipeline in the following diagram illustrates the best practices for using Pega Platform for DevOps. At
each stage in the pipeline, a continuous loop presents the development team with feedback on testing results. This
example includes the following assumptions:

Pega Platform manages all schema changes.


Jenkins is the automation server that helps to coordinate the release pipeline, and JFrog Artifactory is the
application repository; however, other equivalent tools could be used for both.

Open DevOps release pipeline overview

Understanding development

Pega Platform developers use Agile practices to create applications and commit the changes into branches in a
shared development environment. Automated and manual testing provides rapid feedback to developers so
that they can improve the application.

Understanding continuous integration

With continuous integration, application developers frequently check in their changes to the source
environment and use an automated build process to automatically verify these changes. Continuous integration
identifies issues and pinpoints them early in the cycle. Use Jenkins with the prpcServiceUtils tool and the
execute test service to automatically generate a potentially deployable application and export the application
archive to a binary repository such as JFrog Artifactory.

Understanding continuous delivery

With continuous delivery, application changes run through rigorous automated regression testing and are
deployed to a staging environment for further testing, to ensure high confidence that the application is
ready to deploy on the production system.
Understanding deployment

After an application change passes the testing requirements, use Jenkins and the prpcServiceUtils tools to
migrate the changes into production after complete validation through automated testing on the staging
system. Use application release guidelines to deploy with minimal downtime.

Understanding development
Pega Platform developers use Agile practices to create applications and commit the changes into branches in a
shared development environment. Automated and manual testing provides rapid feedback to developers so that
they can improve the application.

Follow these best practices to optimize the development process:

Leverage multiple built-on applications to develop and process smaller component applications. Smaller
applications move through the pipeline faster and are easier to develop, test, and maintain.
Create one Pega Platform instance as a source environment that acts as a single source of truth for the
application. This introduces stability into the developer environment and ensures that a problem in one
developer environment does not affect other environments.
Use Pega Platform developer tools, for example:
The rule compare feature allows you to see the differences between two versions of a specific rule.
The rule form search tool allows you to find a specific rule in your application.
Follow branch-based development practices:
Developers can work on a shared development environment or local environments.
Content in branches migrates from the development environments to merge into the source environment.
Create an archive by exporting and storing backup versions of each branch in a separate location in the
application repository. If a corrupted system state requires you to restore the source environment to a
previous known good application version, the branches can be down-merged to reapply the changes in
those branches that were lost as part of the restore.
Use unit tests to ensure quality.
Ensure that the work on a ruleset is reviewed and that the changes are validated. Lock every complete and
validated ruleset.
Regularly synchronize the development environments with the source environment.

For more information, see the following articles and help topics:

Application development
Understanding development best practices in the DevOps pipeline
Using multiple built-on applications
Searching for a rule
Checking out a rule
Checking in a rule
Comparing rules by version
Understanding best practices for version control in the DevOps pipeline
Branching
Rule development in branches
Merging branches into target rulesets
Using the Lock and Roll feature for managing ruleset versions
Adding a branch from a repository
Publishing a branch to a repository
Creating a toggle
Testing
Pega Platform application testing in the DevOps pipeline
PegaUnit testing

Understanding continuous integration


With continuous integration, application developers frequently check in their changes to the source environment
and use an automated build process to automatically verify these changes. Continuous integration identifies issues
and pinpoints them early in the cycle. Use Jenkins with the prpcServiceUtils tool and the execute test service to
automatically generate a potentially deployable application and export the application archive to a binary repository
such as JFrog Artifactory.

During continuous integration, maintain the following best practices:

To automatically generate a valid application, properly define the application Rule-Admin-Product rule and
update the rule whenever the application changes. The prpcServiceUtils tool requires a predefined Rule-Admin-
Product rule.
To identify issues early, run unit tests and critical integration tests before packaging the application. If any one
of these tests fails, stop the release pipeline until the issue is fixed (see the sketch after this list).
Publish the exported application archives into a repository such as JFrog Artifactory to maintain a version
history of deployable applications.
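
For teams scripting this stage, the following Python sketch shows the shape of an automated test gate: it calls the Execute Tests service and stops the pipeline on failure. The endpoint path, suite identifier, and credentials are illustrative assumptions; confirm the service URL and parameters against the Execute Tests service documentation for your Pega Platform version.

    # Illustrative CI gate: trigger a test suite via the Execute Tests service.
    # Endpoint path and parameter name are assumptions to verify for your version.
    import sys
    import requests

    PEGA = "https://dev.example.com/prweb"  # hypothetical development system
    ENDPOINT = PEGA + "/PRRestService/Rule-Test-Unit-Case/01-01/pzExecuteTests"

    response = requests.post(
        ENDPOINT,
        params={"TestSuiteID": "MyAppSmokeSuite"},  # hypothetical suite ID
        auth=("ci.operator", "secret"),             # Basic authentication
        timeout=300,
    )

    if response.status_code != 200:
        print("Execute Tests call failed with HTTP", response.status_code)
        sys.exit(1)                                  # stop the release pipeline
    print("Test run completed; inspect the response for pass/fail details:")
    print(response.text)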

For more information, see the following articles and help topics:

Pega unit tests


Running test cases and suites with the Execute Tests service
Application packaging
Packaging a release on your development environment
Automatically deploying applications with prpcUtils and Jenkins

Understanding continuous delivery


With continuous delivery, application changes run through rigorous automated regression testing and are deployed
to a staging environment for further testing, to ensure high confidence that the application is ready to
deploy on the production system.

Use Jenkins with the prpcServiceUtils tool to deploy the packaged application to test environments for regression
testing or for other testing such as performance testing, compatibility testing, acceptance testing, and so on. At the
end of the continuous delivery stage, the application is declared ready to deploy to the production environment.
Follow these best practices to ensure quality:

Use Docker or a similar tool to create test environments for user acceptance tests (UAT) and exploratory tests.
Create a wide variety of regression tests through the user interface and the service layer.
Check the tests into a separate version control system such as Git.
If a test fails, roll back the latest import.
If all the tests pass, annotate the application package to indicate that it is ready to be deployed. Deployment
can be done either automatically with Jenkins and JFrog Artifactory or manually.

For more information, see the following articles and help topics:

Deploying to a staging system


Deploying application changes to your staging or production environment
Automatically deploying applications with prpcUtils and Jenkins
Using restore points to enable error recovery

Understanding deployment
After an application change passes the testing requirements, use Jenkins and the prpcServiceUtils tools to migrate
the changes into production after complete validation through automated testing on the staging system. Use
application release guidelines to deploy with minimal downtime.

For more information, see the following articles and help topics:

Deploying to the production system


Understanding best practices for version control in the DevOps pipeline

Deploying application changes to your staging or production environment


Automatically deploying applications with prpcUtils and Jenkins
Migrating application changes
Understanding application release changes, types, and processes
Enabling changes to the production system
Updating access groups by submitting a request to an active instance

Understanding best practices for version control in the DevOps pipeline
Change the application version number each time you deploy changes to a production system. As a best practice,
use semantic versioning, because it offers a logical set of rules about when to increase each version number.

When you use semantic versioning, the part of the version number that is incremented communicates the
significance of the change. Additional information about semantic versioning is available on the web.

The version number, in the format NN-NN-NN, defines the major version (first two digits), minor version (middle
digits), and patch version (last digits), for example, 03-01-15.

Major versions include significant features that might cause compatibility issues with earlier releases.
Minor versions include enhancements or incremental updates.
Patch versions include small changes such as bug fixes.

Rulesets include all versions of each rule. Skimming reduces the number of rules by collecting the highest version of
rules in the ruleset and copying them to a new major or minor version of that ruleset, with patch version 01. For
more information about skimming, see Skim to create a higher version.
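
The following Python sketch, purely illustrative and not a Pega utility, applies these rules to the NN-NN-NN format; the convention of restarting lower-order numbers at 01 follows the skimming behavior described above and is otherwise an assumption.

    # Illustrative only: bump a Pega-style NN-NN-NN version string.
    def bump(version: str, part: str) -> str:
        major, minor, patch = (int(n) for n in version.split("-"))
        if part == "major":      # significant, possibly incompatible features
            major, minor, patch = major + 1, 1, 1
        elif part == "minor":    # enhancements or incremental updates
            minor, patch = minor + 1, 1
        else:                    # "patch": small changes such as bug fixes
            patch += 1
        return "{:02d}-{:02d}-{:02d}".format(major, minor, patch)

    print(bump("03-01-15", "patch"))  # 03-01-16
    print(bump("03-01-15", "minor"))  # 03-02-01, patch restarts at 01 as after a skim
    print(bump("03-01-15", "major"))  # 04-01-01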

Best practices for development


Follow these best practices for version control in development:

Work in branches.
Consider creating a major version of your application if you upgrade your application server or database server
to a major new version.
For small single scrum teams:
Increment both the patch and the minor version during every merge.
Developers merge into the next incremented patch version.
For multiple scrum teams:
Assign a user the role of a release manager, who determines the application version and release strategy.
This user should be familiar with concepts and features such as rulesets, ruleset versioning, application
records, and application migration strategies.
The release manager selects a development ruleset version number that includes a patch version
number.
Developers merge into the highest available ruleset version.
Frequently increment ruleset versions to easily track updates to your application over time.
Maintain an application record that is capped at major and minor versions of its component rulesets.

Best practices for deployment


Follow these best practices when you deploy your application to production:

Define target ruleset versions for production deployment.


Use lock and roll to password-protect versions and roll changes to higher versions. For more information, see
RuleSet Stack tab.
Increment the ruleset version every time you migrate your application to production, unless the application is
likely to reach the patch version limit of 99.
Create restore points before each deployment. For more information about restore points, see Using restore
points to enable error recovery.

Understanding continuous integration and delivery pipelines


DevOps is a culture of collaboration by development, quality, and operations teams to address issues in their
respective areas. To sustain progress and bring continued improvement, tools and processes are put in place. Use
DevOps practices such as continuous integration and delivery (CI/CD) pipelines to break down code into pieces and
automate testing tasks, so that multiple teams can work on the same features and achieve faster deployment to
production.

A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous delivery. In
the continuous integration stage, developers can continuously merge branches into a target application. In the
continuous delivery stage, the target application is packaged and moved through quality testing before it is
deployed to a production environment.

You can set up a CI/CD pipeline for your Pega Platform application using one of two methods:

Use third-party tools, such as Jenkins, to start a job and perform operations on your software. For more
information, see the article Continuous integration and delivery pipelines using third-party automation servers
on Pega Community.
Use Deployment Manager, where you use Pega Platform as the orchestration server that runs the pipeline,
packages your application, and manages importing packages from and exporting packages to repositories that
connect from one system to another. For more information, see the article Deployment Manager on Pega
Community.

Adding a branch from a repository

If you are working in a continuous integration and delivery (CI/CD) pipeline, you can add a branch from a
repository to your development application. You cannot add a branch that contains branched versions of a
ruleset that is not in your application stack.

Publishing a branch to a repository

If you are using a continuous integration and delivery (CI/CD) pipeline with third-party tools such as Jenkins, you
can publish a branch from your development application to a Pega repository on the main development system
(remote system of record) to start a merge.

Understanding rule rebasing

If you are using continuous integration in a CI/CD pipeline with either Deployment Manager or third-party
automation servers such as Jenkins, after you merge branches, you can rebase your development application to
obtain the most recently committed rulesets. Rebasing allows development teams working in separate
development environments to share their changes and keep their local rule bases synchronized. Having the
most recently committed rules on your development system decreases the probability of conflicts with other
development teams.

Rebasing rules to obtain latest versions

If you are using a continuous integration and continuous delivery (CI/CD) pipeline with Deployment Manager or
third-party automation servers such as Jenkins, you can rebase your development application to obtain the most
recently committed rulesets through Pega repositories after you merge branches on the source development
system.

PegaUnit testing
Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, thereby improving application quality.

Adding a branch from a repository


If you are working in a continuous integration and delivery (CI/CD) pipeline, you can add a branch from a repository
to your development application. You cannot add a branch that contains branched versions of a ruleset that is not in
your application stack.

1. In the navigation panel, click App, and then click Branches.

2. Right-click the application into which you want to import a branch and select Add branch from repository.

3. In the Add branch from repository dialog box, from the Repository list, select the repository that contains the
branch that you want to import.

4. In the Branch name field, press the Down Arrow key and select the branch that you want to import.

5. Click Import.

6. Click OK.

7. If you are using multiple branches, reorder the list of branches so that it matches the order in which rules
should be resolved.

For more information, see Reordering branches.

8. Create rules and add them to your branch.

When you create rules, you can select the branch and ruleset into which you want to save them. Rulesets are
automatically created. For more information, see Rule development in branches.

Adding branches to your application

Publishing a branch to a repository


If you are using a continuous integration and delivery (CI/CD) pipeline with third-party tools such as Jenkins, you can
publish a branch from your development application to a Pega repository on the main development system (remote
system of record) to start a merge.

For more information, see Using continuous integration and delivery builds with third-party tools on Pega
Community.

1. In the navigation panel, click App, and then click Branches.

2. Right-click the branch that you want to push to a repository and click Publish to repository.

3. In the Push branch to repository dialog box, select the repository from the Repository list.

4. Click Publish.

You receive a message if the repository is not configured properly or is down, and you cannot push the branch
to the repository.

5. Click Close.

Branches and branch rulesets


Integrating with file and content management systems

Understanding rule rebasing


If you are using continuous integration in a CI/CD pipeline with either Deployment Manager or third-party automation
servers such as Jenkins, after you merge branches, you can rebase your development application to obtain the most
recently committed rulesets. Rebasing allows development teams working in separate development environments
to share their changes and keep their local rule bases synchronized. Having the most recently committed rules on
your development system decreases the probability of conflicts with other development teams.

For example, you can publish a branch from your development application to a Pega repository on a source
development system, which starts a job on Jenkins as your automation server and merges branches. You can also
use the Merge branches wizard to start a Deployment Manager build by first merging branches in a distributed or
nondistributed, branch-based environment. After the merge is completed, you can rebase the rulesets on your
development application to obtain the merged rulesets.

Configuring settings for rebasing

Before you can rebase your development system, you must first configure a Pega repository and then enable
ruleset versions for them. You must also have the appropriate permissions for rebasing.
Adding a Pega repository
Rebasing rules to obtain latest versions

If you are using a continuous integration and continuous delivery (CI/CD) pipeline with Deployment Manager or
third-party automation servers such as Jenkins, you can rebase your development application to obtain the most
recently committed rulesets through Pega repositories after you merge branches on the source development
system.

Integrating with file and content management systems

Configuring settings for rebasing


Before you can rebase your development system, you must first configure a Pega repository and then enable ruleset
versions for them. You must also have the appropriate permissions for rebasing.

1. If you are using Pega repositories with third-party automation servers such as Jenkins, enable the Pega
repository type.

You do not need to enable Pega repositories if you are using Deployment Manager. For more information, see
Enabling the Pega repository type.

2. Create a connection to a Pega type repository that supports ruleset version artifacts. For more information, see
Adding a Pega repository.

3. Enable ruleset versions for Pega repositories by configuring the HostedRulesetsList dynamic system setting on
the system of record. For more information, see Enabling ruleset versions for Pega repositories for rebasing.

4. Ensure that you have the pxCanRebase privilege so that you can rebase rules. This privilege is associated with the
sysadmin4 role.

If you do not have this privilege, you can add it to your role. For more information, see Specifying privileges for
an Access of Role to Object rule.

Adding a Pega repository


Add a Pega repository to move application packages when you are using continuous integration and delivery (CI/CD)
pipelines with third-party automation tools. Also add a Pega repository when you are rebasing your development
application when you are using third-party automation tools or Deployment Manager.

The Pega repository type is shown only if you are using Deployment Manager; otherwise, you must enable it. For
more information, see Enabling the Pega repository type.

This content applies only to on-premises deployments and applications. For Cloud services and applications, an S3
repository is created for you.

To create a repository, your access group must have the PegaRULES:RepositoryAdministrator role. To use a
repository, your access group must have the PegaRULES:RepositoryUser role.

1. In the header of Dev Studio, click Create > SysAdmin > Repository.

2. Enter a short description of the repository and the repository name and click Create and open.

3. In the Definition tab, click Select and select the repository type.

4. In the Host ID field, enter the location of the Pega node. It must start with http:// or https:// and must end with
PRRestService. For example: http://10.1.5.207:8181/prweb/PRRestService. (A validation sketch follows these steps.)

5. Optional: In the Use authentication section, specify whether the repository authenticates the user.

6. Optional: In the Authentication Profile section, specify the authentication profile. If there is no authentication
profile, the username must have PRPC:Administrators as an access group.

7. In the Security section, configure security:

Secure protocol
The lowest level of security that can be used. If this minimum level of security is not available, the
repository call fails with an error message.
Truststore
A Truststore record that contains the server certificate to use in the TLS/SSL handshake.
Keystore
A Keystore record that stores the Pega Platform client's private/public key pair that is used by the server
to authenticate the client.

8. Click Test connectivity to verify the credentials.


9. Click Save.

10. If you are rebasing your development application, enable ruleset versions for Pega repositories by configuring
the HostedRulesetsList dynamic system setting on the main development system. For more information, see
Enabling ruleset versions for Pega repositories for rebasing.
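
As referenced in step 4, a minimal Python sketch (not part of Pega Platform) can validate the Host ID constraints and probe the URL before you save the record; the reachability check and the script itself are illustrative assumptions.

    # Sanity-check a Pega repository Host ID per the rules in step 4.
    import requests

    def check_host_id(host_id: str) -> None:
        if not host_id.startswith(("http://", "https://")):
            raise ValueError("Host ID must start with http:// or https://")
        if not host_id.endswith("PRRestService"):
            raise ValueError("Host ID must end with PRRestService")
        # Optional reachability probe; any HTTP response shows the node is up.
        status = requests.get(host_id, timeout=10).status_code
        print("{} responded with HTTP {}".format(host_id, status))

    check_host_id("http://10.1.5.207:8181/prweb/PRRestService")  # example from step 4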

Configuring external storage options for attachments


Storing case attachments using external storage
Requirements and restrictions for case attachments in a file storage repository

Enabling the Pega repository type


When you use continuous integration and delivery (CI/CD) pipelines with third-party automation servers, you use
Pega Platform as a binary repository for rule artifacts during development. You also use Pega repositories when you
rebase your development application with third-party automation servers or Deployment Manager.

If you are using Deployment Manager, the Pega repository type is already enabled; otherwise, you must first enable
it for your application by completing the following steps:

1. Click Records > Decision > When and open the pyCanRebase rule that applies to @baseclass.

2. Click Save As > Specialize by class or ruleset.

3. Choose a ruleset in your application, then click Create and open.

4. In the Conditions tab, click Actions > Edit and change the condition to true.

5. Click Submit.

6. Click Save.

7. If you are rebasing rules to refresh your development system with the latest rulesets that are hosted on a
remote development system, enable ruleset versions for Pega repositories. For more information, see Enabling
ruleset versions for Pega repositories for rebasing.

Creating a repository
Integrating with file and content management systems

Enabling ruleset versions for Pega repositories for rebasing


When you rebase rules, you must enable ruleset versions for Pega repositories so that they can host ruleset
versions. To enable ruleset versions, configure the HostedRulesetsList dynamic system setting on the remote
development system on which you are merging branches.

1. Complete one of the following tasks:

Open the HostedRulesetsList dynamic system setting if it exists:
1. Click Records > SysAdmin > Dynamic System Settings.
2. Click the record with the HostedRulesetsList Setting Purpose and the Pega-ImportExport Owning
Ruleset.
Create the record if it does not exist:
1. Click Create > SysAdmin > Dynamic System Settings.
2. Enter a short description.
3. In the Owning Ruleset field, enter Pega-ImportExport.
4. In the Setting Purpose field, enter HostedRulesetsList.
5. Click Create and open.

2. On the Settings tab, in the Value field, enter a comma-separated list of the rulesets on the remote development
system. Enclose each ruleset value within quotation marks, for example, "HRApp" (see the sketch after these steps).

3. Click Save.
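
As referenced in step 2, this small Python sketch shows the expected shape of the Value field; HRApp comes from the example above, and HRAppInt is a hypothetical second ruleset.

    # Build the comma-separated, quoted value for HostedRulesetsList.
    # "HRAppInt" is a hypothetical second ruleset name.
    rulesets = ["HRApp", "HRAppInt"]
    value = ",".join('"{}"'.format(name) for name in rulesets)
    print(value)  # "HRApp","HRAppInt"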

Enabling the Pega repository type

When you use continuous integration and delivery (CI/CD) pipelines with third-party automation servers, you use
Pega Platform as a binary repository for rule artifacts during development. You also use Pega repositories when
you rebase your development application with third-party automation servers or Deployment Manager.

Understanding rule rebasing

If you are using continuous integration in a CI/CD pipeline with either Deployment Manager or third-party
automation servers such as Jenkins, after you merge branches, you can rebase your development application to
obtain the most recently committed rulesets. Rebasing allows development teams working in separate
development environments to share their changes and keep their local rule bases synchronized. Having the
most recently committed rules on your development system decreases the probability of conflicts with other
development teams.
Rebasing rules to obtain latest versions
If you are using a continuous integration and continuous delivery (CI/CD) pipeline with Deployment Manager or third-
party automation servers such as Jenkins, you can rebase your development application to obtain the most recently
committed rulesets through Pega repositories after you merge branches on the source development system.

To rebase rules, you must first merge branches to make changes to rules. Changes made to rules in an unlocked
ruleset version are not visible to the rebase functionality.
Only one rebase event at a time is supported per development system to prevent accidentally overriding a rebase
event that is in progress.
You can improve rebase performance by frequently incrementing the application patch version.

To rebase rules, complete the following steps:

1. If you are migrating and merging branches separately, manually migrate branches for an application that has a
new major or minor version. Rebase only pulls the ruleset version that is visible to your current application.

For example, if you previously migrated a branch for an application of version 1.x but are now working on a 2.x
application version, migrate the 2.x branch ruleset to the main development system before rebasing.
Otherwise, rebase refreshes your development system with the 1.x ruleset versions.

2. In the header of Dev Studio, click the name of your application, and then click Definition.

3. On the Definition tab, click Get latest ruleset versions.

4. In the Select repository list, select the repository from which to retrieve rules to see a list of ruleset versions
that will be rebased.

5. Click Rebase.

If there are no import conflicts, your development application is refreshed with the rules.
If there are import conflicts, the system displays them. For example, a conflict can occur if you made a
change to the same ruleset version on your local development system or if you modified a non-resolved
rule in the ruleset, such as the Application record. To resolve a conflict, complete the following step.

6. If you have conflicts, you must resolve them before rebasing continues, either by overwriting or rejecting the
changes on your development system. Complete the following steps to import the ruleset and either overwrite
or reject the changes that you made to the ruleset on the development system:

a. For each ruleset, click the Download Archive link and save the .zip file.

b. Click the Click here to launch the Import wizard link at the top of the Rebase rule form to open the Import
wizard, which you use to import the .zip files. For more information, see Importing rules and data from a
product rule by using the Import wizard.

c. In the wizard, specify whether to use the older version of the ruleset or overwrite the older version with
the newer version.

d. After you resolve all conflicts, restart the rebase process by starting from step 1.

Understanding rule rebasing

If you are using continuous integration in a CI/CD pipeline with either Deployment Manager or third-party
automation servers such as Jenkins, after you merge branches, you can rebase your development application to
obtain the most recently committed rulesets. Rebasing allows development teams working in separate
development environments to share their changes and keep their local rule bases synchronized. Having the
most recently committed rules on your development system decreases the probability of conflicts with other
development teams.

Integrating with file and content management systems

Installing and enabling the Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 3
To create a connection between Pega Platform or Deployment Manager and Nexus Repository Manager 3, use the
Sonatype Nexus Repository component. Use this repository for centralized storage, versioning, and metadata
support for your application artifacts.

The component for Sonatype Nexus Repository Manager 3 supports Pega 8.1, 8.2, 8.3, and 8.4.

Because of potential conflicts, you should not use both Sonatype Nexus Repository Manager 2 and Sonatype Nexus
Repository Manager 3 type repositories in one application. If you want to use both repository types, contact
[email protected].

For questions or issues, send an email to [email protected].

Downloading and enabling the component


Download and enable the component so that you can configure a Nexus Repository Manager 3 repository.

Creating a Sonatype Nexus Repository Manager 3 repository

After downloading and enabling the Sonatype Nexus Repository Manager 3 component, create a repository in
Pega Platform.

Understanding API usage

When you use repository APIs to interact with Nexus Repository Manager 3, note the following information:

Integrating with file and content management systems


Repository APIs

Downloading and enabling the component


Download and enable the component so that you can configure a Nexus Repository Manager 3 repository.

To download and enable the component, do the following steps:

1. Download the component from Pega Marketplace.

2. In the header of Dev Studio, click the name of your application, and then click Definition.

3. In the Application rule form, on the Definition tab, in the Enabled components section, click Manage
components.

4. Click Install new, select the file that you downloaded from Pega Marketplace, and then click Open.

5. Select the Enabled check box to enable this component for your application, and then click OK.

6. In the list of enabled components, select PegaNexus3Repository, select the appropriate version, and then click
Save.

7. If you are using Deployment Manager, on each candidate system and on the orchestration system, perform one
of the following tasks:

Download and enable the component by repeating steps 1 - 6.


Add the PegaNexus3:01-01 and PegaNexusCommon:01-01 rulesets as production rulesets to the
PegaDevOpsFoundation:Administrators access group.

Creating a Sonatype Nexus Repository Manager 3 repository

After downloading and enabling the Sonatype Nexus Repository Manager 3 component, create a repository in
Pega Platform.

Creating a Sonatype Nexus Repository Manager 3 repository


After downloading and enabling the Sonatype Nexus Repository Manager 3 component, create a repository in Pega
Platform.

You can create only raw type repositories.


To create a repository, do the following steps:

1. In the header of Dev Studio, click Create > SysAdmin > Repository.

2. In the Create Repository rule form, enter a description and name for your repository, and then click Create
and open.

3. In the Edit Repository rule form, on the Definition tab, click Select.

4. In the Select repository type dialog box, click Nexus 3.

5. In the Repository configuration section, configure location information for the repository:

a. In the System URL field, enter the URL of your Nexus Repository Manager 3 server.

b. In the Repository name field, enter the name of the repository.

c. In the Root path field, enter the path of the folder where repository assets are stored. Do not include the
repository folder in the path, and do not start or end the path with the slash (/) character.

To store assets in a folder with the URL http://mynexusrepo.com/repository/raw/myCo/devops, enter the following
information:
System URL: http://mynexusrepo.com
Repository name: raw
Root path: myCo/devops

The connector allows you to browse the assets in this folder from inside Pega Platform. (The sketch after
these steps shows how the three fields recompose this URL.)
6. In the Authentication section, configure authentication information:

a. In the Authentication profile field, enter the name of a new authentication profile, and then click the Open
icon to configure the profile.

The authentication profile stores the credentials that Pega Platform needs to authenticate with the Nexus
Repository Manager 3 API.

b. In the Create Authentication Profile rule form, in the Type list, select Basic.

Only Basic authentication is supported. For more information about Basic authentication profiles, see
Configuring a Basic authentication profile.

c. Enter a name and description for the authentication profile.

d. Click Create and open.

7. In the Edit Authentication Profile rule form, configure authentication information:

a. Enter the user name, password, realm, and host name required to authenticate with Sonatype Nexus
Repository Manager 3. For more information, see the Sonatype Nexus Repository Manager 3
documentation.

b. Select the Preemptive authentication check box.

c. Click Save.

8. To verify that the system URL, authentication profile, and repository name are configured properly, in the Edit
Repository rule form, on the Definition tab, click Test connectivity.

If there are any errors, ensure that the credentials in the authentication profile are correct and that Pega
Platform can access the system URL that you entered.

Testing connectivity does not verify that the root path is configured properly.

9. Click Save.
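
As a cross-check for step 5, the following Python sketch (illustrative only) recomposes the browseable asset URL from the three fields, matching the example given there; it assumes the standard Nexus 3 raw layout of /repository/<repository name>/<root path>.

    # Recompose the Nexus 3 asset URL from the repository rule's three fields.
    def asset_url(system_url: str, repository_name: str, root_path: str) -> str:
        return "{}/repository/{}/{}".format(
            system_url.rstrip("/"), repository_name, root_path.strip("/")
        )

    # Values from the example in step 5.
    print(asset_url("http://mynexusrepo.com", "raw", "myCo/devops"))
    # -> http://mynexusrepo.com/repository/raw/myCo/devops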

Downloading and enabling the component

Download and enable the component so that you can configure a Nexus Repository Manager 3 repository.

Understanding API usage


When you use repository APIs to interact with Nexus Repository Manager 3, note the following information:

Sonatype Nexus Repository Manager 3 does not support the create folder API (D_pxNewFolder), because the
repository cannot have empty folders.
The create file API (D_pxNewFile) and get file API (D_pxGetFile) only support Basic Authentication and support a
file size of up to 5 GB.
The delete API (D_pxDelete) does not work on folders, only files. If all the files in a folder are deleted, the folder
is also deleted.

Creating a Sonatype Nexus Repository Manager 3 repository

After downloading and enabling the Sonatype Nexus Repository Manager 3 component, create a repository in
Pega Platform.

Installing and enabling the Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 2
Create a connection between Pega Platform or Deployment Manager and Sonatype Nexus Repository Manager 2
with the Sonatype Nexus Repository component. Use this repository for centralized storage, versioning, and
metadata support for your application artifacts.

For questions or issues, send an email to [email protected].

Downloading and enabling the component

Download and enable the component so that you can configure a Sonatype Nexus Repository Manager 2
repository.

Creating a Sonatype Nexus Repository Manager 2 repository

After downloading and enabling the component for Sonatype Nexus Repository Manager 2, create a repository
in Pega Platform.

Understanding repository API usage

When you use repository APIs to interact with Nexus Repository Manager 2, note the following information:
Integrating with file and content management systems
Repository APIs

Downloading and enabling the component


Download and enable the component so that you can configure a Sonatype Nexus Repository Manager 2 repository.

To download and enable the component, do the following steps:

1. Download the component from Pega Marketplace.

2. In the header of Dev Studio, click the name of your application, and then click Definition.

3. In the Application rule form, on the Definition tab, in the Enabled components section, click Manage
components.

4. Click Install new, select the file that you downloaded from Pega Marketplace, and then click Open.

5. Select the Enabled check box to enable this component for your application, and then click OK.

6. In the list of enabled components, select Pega Nexus Repository Connector, select the appropriate version, and
then click Save.

7. If you are using Deployment Manager, on each candidate system and on the orchestration system, perform one
of the following tasks:

Download and enable the component by repeating steps 1 - 6.


Add the PegaNexus:01-01 and PegaNexusCommon:01-01 rulesets as production rulesets to the
PegaDevOpsFoundation:Administrators access group.

Creating a Sonatype Nexus Repository Manager 2 repository

After downloading and enabling the component for Sonatype Nexus Repository Manager 2, create a repository
in Pega Platform.

Creating a Sonatype Nexus Repository Manager 2 repository


After downloading and enabling the component for Sonatype Nexus Repository Manager 2, create a repository in
Pega Platform.

To create a repository, do the following steps:

1. In the header of Dev Studio, click Create > SysAdmin > Repository.

2. In the Create Repository rule form, enter a description and name for your repository, and then click Create
and open.

3. In the Edit Repository rule form, on the Definition tab, click Select.

4. In the Select repository type dialog box, click Nexus 2.

5. In the Repository configuration section, configure location information for the repository:

a. In the System URL field, enter the URL of your repository.

b. In the Repository ID field, enter the ID of the repository, which you can find on the Configuration tab in
Nexus Repository Manager 2.

For more information, see the documentation for Nexus Repository Manager 2.

c. In the Root path field, enter the path of the folder where repository assets are stored. Do not include the
repository folder in the path, and do not start or end the path with the slash (/) character.

To store assets in a folder with the URL http://mynexusrepo.com/repository/raw/myCo/devops, enter the following
information:
System URL: http://mynexusrepo.com
Repository ID: raw
Root path: myCo/devops

The connector allows you to browse the assets in this folder from inside Pega Platform.

6. In the Authentication section, configure authentication information:

a. In the Authentication profile field, enter the name of a new authentication profile and click the Open icon
to configure the profile.

The authentication profile stores the credentials that Pega Platform needs to authenticate with the Nexus
Repository Manager 2 API.
b. In the Create Authentication Profile rule form, in the Type list, select Basic.

Only Basic authentication is supported.

c. Enter a name and description for your authentication profile.

d. Click Create and open.

7. In the Edit Authentication Profile rule form, configure authentication information:

a. Enter the user name, password, realm, and host name required to authenticate with Nexus Repository
Manager 2. For more information, see the Nexus Repository Manager 2 documentation.

b. Select the Preemptive authentication check box.

c. Click Save.

8. To verify that the system URL and authentication profile are configured properly, in the Edit Repository rule
form, on the Definition tab, click Test connectivity.

If there are any errors, ensure that the credentials in the authentication profile are correct and that Pega
Platform can access the system URL that you entered.

Testing connectivity does not verify that the repository ID or root path are configured properly.

9. Click Save.
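Because testing connectivity does not validate the repository ID or root path, you can spot-check them manually. For example, for the folder from the earlier example, an HTTP request with the same credentials should return a success status (a hypothetical check; the user name and exact content URL layout depend on your Nexus Repository Manager 2 installation):

curl -u nexusUser -I "http://mynexusrepo.com/repository/raw/myCo/devops/"

If the request returns 401 or 404, recheck the credentials, repository ID, or root path.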


Understanding repository API usage


When you use repository APIs to interact with Nexus Repository Manager 2, note the following information:

Sonatype Nexus Repository Manager 2 does not consider the recursiveDelete parameter for the delete API
(D_pxDelete); all folder deletes are recursive.
The create file API (D_pxNewFile) and get file API (D_pxGetFile) support only Basic authentication and a
file size of up to 5 GB.


Automatically deploying applications with prpcUtils and Jenkins


You can use Jenkins to automate exporting and importing Pega Platform applications. Download the prpcServiceUtils
command-line tool and configure Jenkins to export or import archives. You can use a single Jenkins build job to both
export and import an application archive, or you can create separate jobs for each task.

For more information about prpcServiceUtils for service-enabled scripting, see Using service-enabled scripting and
the prpcServiceUtils tool.

Ensure that your system includes the following items:

Jenkins 1.651.1 or later


Jenkins Plugins:
Ant Plugin
Environment Injector Plugin
Build with Parameters Plugin
Ant version 1.9 or later
JDK version 1.7 or later
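For example, you can confirm the Ant and JDK versions from a shell on the Jenkins server:

ant -version     # expect Apache Ant 1.9 or later
java -version    # expect JDK 1.7 or later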

Downloading and installing prpcServiceUtils

Download and install prpcServiceUtils so that you can use it with Jenkins to deploy applications.

Configuring the Jenkins build environment

Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass
parameters to import or export the RAP.

Configuring the Jenkins project

Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass
parameters to import or export the RAP.
Adding build steps to import or export the archive

You can enter build steps to import an archive or export an archive, or you can do both in one job.

Importing or exporting the archive by running the Jenkins job

Run a Jenkins job to import or export the application archive.

Downloading and installing prpcServiceUtils


Download and install prpcServiceUtils so that you can use it with Jenkins to deploy applications.

To download and install prpcServiceUtils, do the following steps:

1. Download the prpcServiceUtils.zip file onto the Jenkins server.

For more information, see Remote configuration command-line tool (prpcServiceUtils).

2. Extract the files onto any location to which Jenkins has access.
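For example, on a UNIX Jenkins server, the following commands extract the tool and make the shell script executable (the target directory is a placeholder; choose any location that Jenkins can read). This location becomes the PEGA_HOME value that you configure in the next topic:

unzip prpcServiceUtils.zip -d /opt/pega/prpcServiceUtils
chmod +x /opt/pega/prpcServiceUtils/scripts/utils/prpcServiceUtils.sh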


Configuring the Jenkins build environment


Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass parameters
to import or export the RAP.

To configure the Jenkins build environment, do the following steps:

1. Verify that the following Jenkins plugins are installed:

Ant Plugin
Environment Injector Plugin
Build with Parameters Plugin

2. Open a web browser and navigate to the Jenkins server.

3. Click Manage Jenkins.

4. Click Configure System.

5. Configure the PEGA_HOME environment variable:

a. In the Global properties section, select Environment variables.

b. In the name field, enter PEGA_HOME.

c. In the value field, enter the location where you extracted the prpcServiceUtils.zip file.

d. Click Add.

6. ​Click Apply, and then click Save.
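To confirm that builds can see the new global property, you can add a temporary Execute shell build step to any job (a quick sanity check; the path assumes the UNIX layout used by the build steps later in this document):

ls "$PEGA_HOME/scripts/utils/prpcServiceUtils.sh"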


Configuring the Jenkins project


Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass parameters
to import or export the RAP.

To configure the Jenkins project, do the following steps:
1. Complete one of the following actions:

Create a project if you have not already done so.


Open an existing project.

2. Click Configure.

3. Select This build is parameterized.

4. Click Add Parameter and create the parameters that Jenkins passes to the prpcServiceUtils tool:

To import or export a RAP, create the following parameters:


Parameter name    Type    Default value
productName       String  The name of the RAP rule used to generate the archive
productVersion    String  The version number of the RAP rule

To import or export an application, create the following parameters:

Parameter name      Type    Default value
applicationName     String  The name of the application
applicationVersion  String  The version number of the application

5. Select Prepare an environment for the run.

6. In the Source Code Management section, select None. Then, under Inject environment variables to the build
process, in the Properties Content section, set the following property:

SystemName=$BUILD_TAG

7. In the Properties Content section, set the ImportExistingInstances property to one of the following values. The
default is unset:

override

For rules - If a rule with the same key exists in the system, but the rule resolution properties differ (for
example, ruleset or version), replace the existing rule with the imported rule.

For work - If a work object with the same key exists but belongs to a different application (for example, it
has a different class hierarchy but same classgroup name and same ID prefix), replace the existing work
object with the imported work object.

skip

For rules - If a rule with the same key exists in the system, and the rule resolution properties differ, do not
replace the existing rule.

For work - If a work object with the same key exists but belongs to a different application, do not replace
the existing work object.

unset: The import will fail if keys already exist either for rule instances that have different rule resolution
properties or for work objects that belong to different applications that use the same classgroup name.

8. Set the artifact directory where exported logs and files are downloaded, in the following format: ARTIFACTS_DIR=
path to artifact directory

The default is the logs directory.

You can also set the directory later by specifying -artifactsDir when you run the batch file.

9. In the Properties Content section, enter the static properties in the format ParameterName=Value.

Source properties for export:

Parameter name    Default value
SourceHost        The host name and port number of the Pega Platform server from which to export the archive file.
SourceUser        The operator name. This operator must have export privileges.
SourcePassword    The operator password.

Target properties for import:

Parameter name    Default value
TargetHost        The host name and port number of the Pega Platform server that contains the archive file to import.
TargetUser        The operator name. This operator must have import privileges.
TargetPassword    The operator password.
10. Click Save.
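Taken together, steps 6 through 9 produce a Properties Content block similar to the following sketch. Jenkins also exposes the build parameters from step 4, such as productName and productVersion, to the build steps as environment variables. All host names, directories, and credentials below are placeholders:

SystemName=$BUILD_TAG
ImportExistingInstances=override
ARTIFACTS_DIR=/var/lib/jenkins/artifacts
SourceHost=devpega.example.com:8080
SourceUser=exportOperator
SourcePassword=export-operator-password
TargetHost=stagingpega.example.com:8080
TargetUser=importOperator
TargetPassword=import-operator-password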


Adding build steps to import or export the archive


You can enter build steps to import an archive or export an archive, or you can do both in one job.

Adding export build steps

Adding import build steps


Adding export build steps


To add export steps to your build job, do the following steps:

1. Add an Invoke Ant build step:

a. In the Build section, click Add build step and select Invoke Ant.

b. In the Targets field, enter exportprops.

c. In the Build File field, enter the path to the build file:

On Windows, enter the following path: $PEGA_HOME\samples\Jenkins-build.xml


On UNIX, enter the following path: $PEGA_HOME/scripts/samples/jenkins/Jenkins-build.xml

2. Add a build step to run either prpcServiceUtils.bat or prpcServiceUtils.sh.

If you are on Windows, go to step 3.


If you are on UNIX, go to step 4.

3. On Windows, create an Execute Windows batch command build step:

a. In the Build section, click Add build step and select Execute Windows batch command.

b. In the Command field, enter the following command: %PEGA_HOME%\scripts\utils\prpcServiceUtils.bat export --connPropFile
%WORKSPACE%\%SystemName%_export.properties --artifactsDir %WORKSPACE%

4. On UNIX, create an Execute Shell batch command build step:

a. In the Build section, click Add build step and select Execute Shell batch command​.

b. In the Command field, enter the following command: $PEGA_HOME/scripts/utils/prpcServiceUtils.sh export --connPropFile
$WORKSPACE/${SystemName}_export.properties --artifactsDir $WORKSPACE


Adding import build steps


To add import build steps to your build job, do the following steps:

1. Add an Invoke Ant build step:

a. In the Build section, click Add build step and select Invoke Ant.

b. In the Targets field, enter importprops.


c. In the Build File field, enter the path to the build file:

On Windows, enter the following path: $PEGA_HOME\samples\Jenkins-build.xml


On UNIX, enter the following path: $PEGA_HOME/scripts/samples/jenkins/Jenkins-build.xml

2. Add a build step to run either prpcServiceUtils.bat or prpcServiceUtils.sh.

If you are on Windows, go to step 3.


If you are on UNIX, go to step 4.

3. On Windows, create an Execute Windows batch command build step:

a. In the Build section, click Add build step and select Execute Windows batch command.

b. In the Command field, enter the following command: %PEGA_HOME%\scripts\utils\prpcServiceUtils.bat import --connPropFile
%WORKSPACE%\%SystemName%_import.properties --artifactsDir %WORKSPACE%

4. On UNIX, create an Execute Shell batch command build step:

a. In the Build section, click Add build step and select Execute Shell batch command​.

b. In the Command field, enter the following command: $PEGA_HOME/scripts/utils/prpcServiceUtils.sh import --connPropFile
$WORKSPACE/${SystemName}_import.properties --artifactsDir $WORKSPACE


Importing or exporting the archive by running the Jenkins job


Run a Jenkins job to import or export the application archive.

Do the following steps:

1. In Jenkins, click Build with Parameters.

2. When the parameter values are displayed, verify the default settings and edit any values.

3. Set the artifact directory where exported logs and files are downloaded, in the following format: -artifactsDir=
path to artifact directory

The default directory is the logs directory.

4. Click Build.
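If you prefer to trigger the parameterized job from a script instead of the Jenkins UI, the standard Jenkins CLI accepts the same parameters (the server URL, job name, and parameter values below are placeholders):

java -jar jenkins-cli.jar -s http://jenkins.example.com:8080/ build pega-export -p productName=MyApp -p productVersion=01.01.01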


Migrating application changes


With minimal disruption, you can safely migrate your application changes throughout the application development
life cycle, from development to deployment on your staging and production environments. In the event of any
issues, you can roll back the deployment and restore your system to a state that was previously known to be
working.

The process that you use to release changes to your application is different depending on the types of changes that
you are making. This topic describes the Standard Release process that you can use to deploy changes to rules,
data instances, and dynamic system settings. The Standard Release process is a self-service way to deploy changes
without downtime. Other methods for releasing changes to your application are not covered in this article. For more
information, see Application release changes, types, and processes.

This Standard Release process applies to both on-premises and Pega Cloud Services environments. As a Pega Cloud
Services customer, if you use this self-service process to release changes to your application, you are responsible for
those changes. For more information, see Change management in Pega Cloud Services and Service level agreement
for Pega Cloud Services.

The Standard Release process includes the following steps and is scalable to the number and types of environments
that you have:

1. Package the release on your shared development environment. For more information, see Packaging a release
on your development environment.
2. Deploy the changes to your staging or production environment. For more information, see Deploying
application changes to your staging or production environment.

Understanding application migration requirements

Before you migrate your application, both your environment and application must meet certain requirements.
For example, you must be able to download a RAP archive to a file system location with the required available
space.

Understanding application migration scenarios

The Standard Release migration process supports the following scenarios:

Understanding application migration requirements


Before you migrate your application, both your environment and application must meet certain requirements. For
example, you must be able to download a RAP archive to a file system location with the required available space.

Your environments and applications must meet the following requirements:

You have at least two existing Pega Platform environments. These environments can be any combination of
sandbox and production environments, and can be on-premises or in the Pega Cloud virtual private cloud
(VPC).
You can log in to a machine from which you will complete the release process, such as your organization's
laptop, workstation, or server.
You have a location on a file system with enough available space to store the RAP archives. You must be able
to download the RAP archive to this location and upload a RAP archive to another system from this location.
You have complied with stated application guideline and guardrail requirements.
Your application rule must specify rulesets that have a patch version. Most application rules have the ruleset
version in the stack set to 01-01, such as "Ruleset: 01-01". You must change this so that each ruleset is
specified down to the patch level of its version, such as "Ruleset: 01-01-01". This is required for your top-level
application rule and all built-on applications, except the base PegaRULES built-on application. This application
structure gives you greater control over the release of your software, minimizes the impact of the release on
application users, and provides for the smoothest recovery path in case of a troubled deployment.
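For example, an application rule that meets this requirement pins each ruleset in the stack to a patch version (the application and ruleset names are placeholders):

MyApp: 01-01        Not sufficient: specifies only the major-minor version
MyApp: 01-01-01     Required: pinned to the patch version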

Understanding application migration scenarios


The Standard Release migration process supports the following scenarios:

All developers in the organization use a single shared development environment (recommended by
Pegasystems).
The organization follows a distributed development model, where individual developers or development teams
have their own isolated development environments.

The release process works for either development scenario, because it begins after changes have been merged into
the appropriate ruleset versions. Regardless of development scenario or team size, development teams must use
branching and merging for releasing applications. Otherwise, you cannot take full advantage of the tools and
capabilities of the Pega Platform. For more information, see Understanding application migration scenarios.

Deploying application changes to your staging or production environment


As part of the Standard Release process, after you set up and package a release on your shared development
environment, you can deploy your application changes to your staging or production environment.
As part of the Standard Release process, after you set up and package a release on your shared development
environment, you can deploy your application changes to your staging or production environment.

This Standard Release process applies to both on-premises and Pega Cloud Services environments. As a Pega Cloud
Services customer, if you use this self-service process to release changes to your application, you are responsible for
those changes. For more information, see Change management in Pega Cloud Services and Service level agreement
for Pega Cloud Services.

This process involves completing the following steps:

1. Deploying the application archives


2. Testing the deployment
3. Activating the release for all users

In the event of any issues, you can roll back the deployment and restore your system to a state that was previously
known to be working.

Before you deploy application changes, you must know about the types of changes that you can make within a
release, the release types, and the release management process to follow based on the changes you want to deploy.
For example, SQL changes that remove columns from database tables or remove data types can interrupt service
for users of the application. You must deploy these types of changes during scheduled downtime when users are
offline. For more information, see Understanding application release changes, types, and processes.
Deploying the application archives

After you create the application archives, deploy them to your target system. This process is the best way to
deploy changes into your staging or production environment, control their activation, and recover from
problematic deployments.

Testing the deployment

After you deploy the changes, the release engineer and specified users can test the changes. For the staging
environment, test the performance and the user interface, run automated tests, and do acceptance testing. For
the production environment, perform validation tests.

Activating the release for all users

After your Rules archive and Data archive are successfully deployed, changes are activated in various ways.
Activation is the process by which a category of changes becomes usable by appropriate users of the system, if
they have access.

Rolling back a problematic deployment

In the event of a problematic deployment, the first goal is to prevent further issues from occurring. Then you
can roll back the deployment and restore your system to a state that was previously known to be working.

Deploying the application archives


After you create the application archives, deploy them to your target system. This process is the best way to deploy
changes into your staging or production environment, control their activation, and recover from problematic
deployments.

The user who imports the archives must have the zipMoveImport and SchemaImport privileges on the target system.

1. Ensure that you have connectivity to both the target system and to the location where the archives are stored.

2. Use the prpcServiceUtils command line utility to import the archives to the target system:

Deploy the Rules archive by following the steps in Importing rules and data by using a direct connection to
the database. Your changes are sent to the system, imported to the database, and ready for activation.
Deploy the Data archive by following the steps in Rolling back and committing tracked data. When you
deploy the Data archive, you use the same tool that you used to deploy the Rule archive but with different
properties. You can roll back these changes if required.

For information about allowing automatic schema changes, see Editing administrator privileges for importing
archives with schema changes into production.
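For example, from the machine where you extracted prpcServiceUtils, the two imports follow the same command pattern as the Jenkins build steps earlier in this document; run the Rules archive import first, and then the Data archive import with its own properties file (the file names and directories below are placeholders):

$PEGA_HOME/scripts/utils/prpcServiceUtils.sh import --connPropFile /release/MyApp_rules_import.properties --artifactsDir /release/artifacts
$PEGA_HOME/scripts/utils/prpcServiceUtils.sh import --connPropFile /release/MyApp_data_import.properties --artifactsDir /release/artifacts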

Testing the deployment


After you deploy the changes, the release engineer and specified users can test the changes. For the staging
environment, test the performance and the user interface, run automated tests, and do acceptance testing. For the
production environment, perform validation tests.

Do the following steps:

1. On the target system, create a copy of the access group for your application. This step is a one-time process;
the copied access group is then available anytime you deploy changes.

2. Update the copied access group so that it references the new application version.

3. Find the operator ID record for a test user and give that operator ID record access to the access group that you
just created.

You can now safely test your changes in the system at the same time as other users who are running on the
previous version.

When you are satisfied that your release was deployed successfully, Release Engineering can activate the release
for all users in the production environment.

If you experience any issues, see Rolling back a problematic deployment.

Activating the release for all users


After your Rules archive and Data archive are successfully deployed, changes are activated in various ways.
Activation is the process by which a category of changes becomes usable by appropriate users of the system, if they
have access.

Data changes, including schema changes, take effect immediately after being imported to the system. Your
application might be able to access these fields immediately. After testing and when you are sufficiently
comfortable, you should commit these changes. To commit data changes, follow the steps in Rolling back and
committing tracked data.
To activate rule changes, you need to update the access groups that point to the prior version of your application
rule:

1. In Dev Studio, click Records.

2. Click Security > Access Group.

3. Search for the access groups to be updated by specifying your application name in the search box and filtering
the list.

4. After you locate the access group, open the record and increment the version number for your application to
the new release version.

5. Click Save.

If you deploy code changes that need to be compiled, you must restart the system. Code changes cannot be made
without downtime, and your System Administrator must perform a system restart. For information about the types of
changes that you can make within a release, the release types, and the release management process to follow, see
Understanding application release changes, types, and processes.

Rolling back a problematic deployment


In the event of a problematic deployment, the first goal is to prevent further issues from occurring. Then you can roll
back the deployment and restore your system to a state that was previously known to be working.

Do the following steps:

1. Ensure that no new operators can access the problematic application. You can temporarily disable access to
the entire system. For more information, see How to temporarily disallow new interactive logins with a Dynamic
System Setting.

2. Roll back the problematic rules changes. You can roll back changes by updating the access group for your
application and specifying the previous version of your application.

3. Roll back the data instances that you changed to their previous version. To roll back data changes, use the
prpcServiceUtils command line utility. For more information, see Rolling back and committing tracked data. This
process replaces those modified data instances with their prior definition, rolling back your data changes to the last
known, good state.

Packaging a release on your development environment


As part of the Standard Release process for migrating your application changes from development to production,
you set up and package the release on your shared development environment.

This Standard Release process applies to both on-premises and Pega Cloud Services environments. As a Pega Cloud
Services customer, if you use this self-service process to release changes to your application, you are responsible for
those changes. For more information, see Change management in Pega Cloud Services and Service level agreement
for Pega Cloud Services.

This process involves completing the following steps:

1. Creating the release target (ruleset version)


2. Locking the release
3. Creating the application archives

After you set up and package the release, you are ready to deploy the changes to your staging or production
environment.

Creating the release target

When developers merge changes by using the Merge Wizard, they must select the ruleset version to which to
merge them. The release engineer is responsible for ensuring that each release has an unlocked ruleset
version that acts as the release target and into which these merges can be performed. Developers are
responsible for merging their branches into the correct, unlocked ruleset version and addressing any conflicts.

Locking the release

After all merges are completed, the release engineer locks the applications and rulesets to be released. They
are also responsible for creating the new, higher-level ruleset versions and higher-level application rules for the
next release.

Creating the application archives

For each release, you create one or two RAP archives, depending on the changes you made to your application.
The user who exports the archives must have the zipMoveExport privilege.

Creating the release target


When developers merge changes by using the Merge Wizard, they must select the ruleset version to which to merge
them. The release engineer is responsible for ensuring that each release has an unlocked ruleset version that acts
as the release target and into which these merges can be performed. Developers are responsible for merging their
branches into the correct, unlocked ruleset version and addressing any conflicts.

Locking the release


After all merges are completed, the release engineer locks the applications and rulesets to be released. They are
also responsible for creating the new, higher-level ruleset versions and higher-level application rules for the next
release.

To lock the release, do the following steps:

1. In Dev Studio, click Application > Structure > RuleSet Stack.

2. Click Lock & Roll.

As a best practice, lock your built-on applications first, and then work your way up the stack to your top-level
application. This way, as each higher version application rule is created, you can open that rule and update the
version of the built-on application.

3. For each ruleset:

a. Click Lock and provide a password.

b. Click Roll.

4. Click Create a new version of my Application.

5. Click Run.

The application rules and ruleset versions for the current release are locked and require passwords to make
changes. Also, you will have created the higher-level ruleset versions and application rules that will be used for the
next release.

RuleSet Stack tab

Creating the application archives


For each release, you create one or two RAP archives, depending on the changes you made to your application. The
user who exports the archives must have the zipMoveExport privilege.

These RAP archives include:

The Rules RAP, which contains the Rules portion of your release, instances from Rules- types only, and all rules
changes.
The Data RAP, which contains the Data portion of your release, instances from Data classes only, and all data
changes.

Splitting the release into a Rules RAP and a Data RAP provides you with more control over the deployment and
activation of your changes on other systems.

More RAP archives might be created during the development process. Import these RAP archives to a single system
from which the Rules RAP and Data RAP will be created. This method provides the greatest level of control over the
release by separating the release process from the development process.

1. Define the RAPs by using the Application Packaging wizard or by copying an existing RAP rule. For more
information, see Product rules.

2. Export each RAP as an archive:

a. Export the rules. For more information, see Export rules into an archive file.

b. Provide a standard name for the archive, such as Application-01-01-02.zip.

c. Store these archives in a standard location to which you have access, and that will be accessible during
deployment.

A Rules archive and a Data archive are created as the result of this process.

Rules by name

Understanding application release changes, types, and processes


The following tables provide information about the types of changes that you can make within a release, the release
types, and the release management process to follow based on the types of changes that you want to deploy.
Table 1. Change types

Rules (including non-rule-resolved rules)
Technical changes: Rule-, including Rule-Application-Rule, Rule-Obj-Class, Rule-Ruleset-Name, Rule-Ruleset-Version, Rule-Access-Role-Obj, and Rule-Access-Deny-Obj
Activates by access group: Yes
Activates immediately: Yes
Activation requires restart: No
Release frequency: Daily/Weekly
Requires a Support Request (Pega Cloud Services only): No

Data instances
Technical changes: Data-
Activates by access group: No
Activates immediately: Yes
Activation requires restart: No
Release frequency: Weekly
Requires a Support Request (Pega Cloud Services only): No

Dynamic system settings
Technical changes: Data-Admin-System-Settings
Activates by access group: No
Activates immediately: Yes. Certain dynamic system settings activate only on system restart and require you to follow the Environment release process.
Activation requires restart: No. Certain dynamic system settings activate only on system restart and require you to follow the Environment release process.
Release frequency: Monthly
Requires a Support Request (Pega Cloud Services only): No

Functional
Technical changes: Rule-Utility-Function, Rule-Utility-Library
Activates by access group: Yes
Activates immediately: Yes. Treat functional changes that reference code as a Code release, which requires a system restart to activate if you are making code changes.
Activation requires restart: No. Treat functional changes that reference code as a Code release, which requires a system restart to activate if you are making code changes.
Release frequency: Monthly
Requires a Support Request (Pega Cloud Services only): Yes

Data model
Technical changes: SQL
Activates by access group: No
Activates immediately: Yes
Activation requires restart: No
Release frequency: Monthly
Requires a Support Request (Pega Cloud Services only): Yes

Code
Technical changes: Java JAR file, Java .class file
Activates by access group: No
Activates immediately: No
Activation requires restart: Yes
Release frequency: Monthly
Requires a Support Request (Pega Cloud Services only): Yes

Environment
Technical changes: Changes outside of Pega (JVM, XML configuration)
Activates by access group: No
Activates immediately: No
Activation requires restart: Yes
Release frequency: Quarterly
Requires a Support Request (Pega Cloud Services only): Yes

In Table 1:

Change type – The high-level category of changes that you can make in a release.
Technical changes – The rule types or artifacts for a change type. Rule- and Data- include all subtypes under that parent type, unless specifically identified for a different change type.
Activates by access group – Rule resolution for this change type is controlled by the access groups of an operator.
Activates immediately – Rule resolution uses this change type immediately after deployment.
Activation requires restart – This change type requires a system restart before it is available to the rule resolution process.
Release frequency – The period in which you can deploy this type of change to production.
Requires a Support Request (Pega Cloud only) – As a Pega Cloud customer, you are responsible for any application changes that you make; however, as a best practice, inform and engage Pega Support before releasing application changes. You can open a Support Request on My Support Portal. For more information, see My Support Portal FAQ.

Table 2. Release types

Bug fix
Activates for users: Immediately
Application users affected: All
Significant UX impact: No
Release frequency: Daily
New application version: No
Self-service: Yes
Requires a Support Request (Pega Cloud only): No

Standard release
Activates for users: On access group update
Application users affected: By access group
Significant UX impact: No
Release frequency: Weekly
New application version: Yes
Self-service: Yes
Requires a Support Request (Pega Cloud only): No

Database release
Activates for users: Immediately
Application users affected: All
Significant UX impact: No
Release frequency: Monthly
New application version: Yes
Self-service: No
Requires a Support Request (Pega Cloud only): Yes

Code release
Activates for users: After restart
Application users affected: All
Significant UX impact: No
Release frequency: Monthly
New application version: Yes
Self-service: No
Requires a Support Request (Pega Cloud only): Yes

Environment release
Activates for users: After restart
Application users affected: All
Significant UX impact: No
Release frequency: Quarterly
New application version: Yes
Self-service: No
Requires a Support Request (Pega Cloud only): Yes

Major release
Activates for users: Per change type. Activation of a Major release occurs based on the change types that the release contains. For information about how each change type is activated, see Table 1.
Application users affected: All
Significant UX impact: Yes
Release frequency: Quarterly
New application version: Yes
Self-service: No
Requires a Support Request (Pega Cloud only): Yes

In Table 2:

Release type – The high-level category of releases that you can deploy.
Activates for users – When this release type takes effect for users.
Application users affected – The scope of application users that see the effect of this release type.
Significant UX impact – This release type might require users to significantly relearn a process or has significant layout changes.
Release frequency – The frequency of this type of release.
New application version – Whether you must create a new application version for this release.
Self-service – A user with appropriate permissions can execute this release type using the Pega Platform, and a Pega System Administrator is not required to roll back changes.
Requires a Support Request (Pega Cloud Services only) – As a Pega Cloud Services customer, you are responsible for any application changes that you make; however, as a best practice, inform and engage Pega Support before releasing application changes. You can open a Support Request on My Support Portal. For more information, see My Support Portal FAQ.

Table 3. Choosing a release process

To determine which release process to follow, and whether it requires a Support Request (Pega Cloud Services only), match the changes that your release contains to a release process:

Bug fix – Rules and Data instances changes. No Support Request is required.
Standard release – Rules, Data instances, Dynamic system settings, and Functional changes. No Support Request is required.
Database release – Data instances and Data model changes. Requires a Support Request.
Code release – Code changes. Treat functional changes that reference code as a Code release, which requires a system restart to activate if you are making code changes. Requires a Support Request.
Environment release – Environment changes. Certain dynamic system settings activate only on system restart and require you to follow the Environment release process. Requires a Support Request.
Major release – Any combination of Rules, Data instances, Dynamic system settings, Functional, Data model, Code, and Environment changes, as well as changes with a significant UX impact. Requires a Support Request.

In Table 3:

Rules – Are you deploying Rules- records in this release?
Data instances – Are you deploying Data- records in this release?
Dynamic System Settings – Are you loading Data-Admin-System-Settings records in this release?
Significant UX impact – Will users need to significantly relearn a process, or are there significant layout changes?
Code – Are you loading JAR files as part of this release?
Data model – Are there changes to your data model in this release (SQL)?
Environment changes – Will there be operating system or application server changes in this release?
Follow this release process – Based on your answers to these questions, follow this release process.
Requires a Support Request (Pega Cloud Services only) – As a Pega Cloud Services customer, you are responsible for any application changes that you make; however, as a best practice, inform and engage Pega Support before releasing application changes. You can open a Support Request on My Support Portal. For more information, see My Support Portal FAQ.

Testing applications in the DevOps pipeline


Having an effective automation test suite for your application in your continuous delivery DevOps pipeline ensures
that the features and changes that you deliver to your customers are of high quality and do not introduce
regressions.

At a high level, the recommended test automation strategy for testing your Pega applications is as follows:

Create your automation test suite based on industry best practices for test automation
Build up your automation test suite by using Pega Platform capabilities and industry test solutions
Run the right set of tests at different stages of your delivery pipeline
Test early and test often

Industry best practices for test automation can be graphically shown as a test pyramid. Test types at the bottom of
the pyramid are the least expensive to run, easiest to maintain, take the least amount of time to run, and should
represent the greatest number of tests in the test suite. Test types at the top of the pyramid are the most expensive
to run, hardest to maintain, take the most time to run, and should represent the least number of tests in the test
suite. The higher up the pyramid you go, the higher the overall cost and the lower the benefits.

Ideal test pyramid

Analyzing application quality metrics

Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

Setting up for test automation

Before you create Pega unit test cases and test suites, you must configure a test ruleset in which to store the
tests.

PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, thereby improving application quality.

UI testing

Perform UI-based functional tests and end-to-end scenario tests to verify that end-to-end cases work as
expected. Use the third party Selenium starter kit for CRM or the built-in scenario testing tool to perform the UI
testing.

Analyzing application quality metrics


Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

Viewing application quality metrics

Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

Changing application quality metrics settings

The Application Quality settings landing page provides configurable options related to quality metrics. You can
change the default settings for metrics displayed to meet your business needs.

Estimating test coverage

View historical test coverage metrics and generate reports containing the number of executable rules and their
test coverage. Use the data to analyze changes in test coverage, and to verify which rules require testing.

Viewing test coverage reports

View a report that contains the results of test coverage sessions to determine which rules in your application
are not covered with tests. You can improve the quality of your application by creating tests for all uncovered
rules that are indicated in the reports.

Viewing application quality metrics


Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

For example, view your application's compliance score and see the number and severity of guardrail violations that
were found in your application. You can then improve your application's compliance score and overall quality by
investigating and resolving the violations.

To open the Application Quality dashboard, from the Dev Studio header, click Configure > Application Quality >
Dashboard.

You can view the following metrics:

Rule, case, and application – View the number of executable rules (functional rules that are supported by test
coverage) and the number of case types in the selected applications. To view metrics for a different
combination of applications, select a different list on the Application: Quality Settings page.
Guardrail compliance – View the compliance score and the number of guardrail violations for the included
applications, as well as a graph of changes to the compliance score over time. To see more details about the
application's guardrail compliance, click View details.
Test coverage – View the percentage and number of rules that are covered by tests, and the last generation
date of the application-level coverage report for the selected applications, as well as a graph of changes to
application-level coverage over time. To see test coverage reports or to generate a new coverage report, click
View details.
If the EnableBuiltOnAppSelectionForQuality switch is turned on, coverage session metrics are also
displayed on the Application Quality dashboard for the built-on applications selected in Application: Quality
Settings.
Unit testing – View the percentage and number of Pega unit test cases that passed for the selected
applications, over the period selected on the Application Quality Settings landing page. The graph illustrates
the changes to the test pass rate over time. To see reports about test compliance and test execution, click
View details.
Case types – View guardrail score, severe guardrail warnings, test coverage, unit test pass rate, and scenario
test pass rate for each case type in the applications. To view additional details about a case type, click View
details.
Data types – View guardrail score, severe guardrail warnings, test coverage, and unit test pass rate for each
data type in the applications. To view additional details about a data type, click View details.
Other rules – View guardrail score, test coverage, test pass rate, the number of warnings, a list of rules with
warnings, the number and list of uncovered rules, and the number and list of failed test cases for rules that are
used in the selected applications but that are not a part of any case type.

Application quality metrics

The Application Quality dashboard displays metrics for guardrails, test coverage, and unit testing that you can
use to assess the overall health of your application and identify areas that require improvement. You can
change the default ranges for the color codes by modifying the corresponding when rules in the Data-
Application-Quality class.

Changing application quality metrics settings

The Application Quality settings landing page provides configurable options related to quality metrics. You can
change the default settings for metrics displayed to meet your business needs.


Estimating test coverage

View historical test coverage metrics and generate reports containing the number of executable rules and their
test coverage. Use the data to analyze changes in test coverage, and to verify which rules require testing.

Application quality metrics


The Application Quality dashboard displays metrics for guardrails, test coverage, and unit testing that you can use
to assess the overall health of your application and identify areas that require improvement. You can change the
default ranges for the color codes by modifying the corresponding when rules in the Data-Application-Quality class.

The following list describes the relationship between colors, default ranges, and when rules. For each metric and
color, the entry shows the default range followed by the corresponding when rule in parentheses. Red means stop
development and fix issues; Orange means continue development and fix issues; Green means continue development.

Guardrails – Weighted score
Red: 0–59 (pyIsWeightedScoreSevere)
Orange: 60–89 (pyIsWeightedScoreModerate)
Green: 90–100 (pyIsWeightedScorePermissible)

Guardrails – Number of warnings
Red: Not applicable
Orange: More than 0 (pyIsWarningsModerate)
Green: 0 (pyIsWarningsPermissible)

Guardrails – Number of severe warnings
Red: More than 0 (pyAppContainSevereWarning)
Orange: Not applicable
Green: 0 (pyAppContainNoSevereWarning)

Test coverage – Rules covered
Red: 0%–59% (pyIsRuleCoverageScoreSevere)
Orange: 60%–89% (pyIsRuleCoverageScoreModerate)
Green: 90%–100% (pyIsRuleCoverageScorePermissible)

Unit testing – Test pass rate
Red: 0%–59% (pyIsTestPassRateScoreSevere)
Orange: 60%–89% (pyIsTestPassRateScoreModerate)
Green: 90%–100% (pyIsTestPassRateScorePermissible)


Changing application quality metrics settings


The Application Quality settings landing page provides configurable options related to quality metrics. You can
change the default settings for metrics displayed to meet your business needs.

To change settings on the landing page and to enable the EnableBuiltOnAppSelectionForQuality toggle that allows
you to select which built-on applications are included, your operator ID must have the SysAdm4 privilege.

On the Application Quality settings landing page, you can modify the following settings:

Application(s) included – If you want the test coverage report to include only rules from the current application,
select Current application only. If you want the test coverage report to also include rules from built-on
applications, select Include built-on applications. By default, only the current application is selected. If you
enable the EnableBuiltOnAppSelectionForQuality toggle, you can select which built-on applications are included.
If a master user starts an application-level coverage session for an application, then that user's configuration of
this setting is in effect for all users that execute test coverage for the duration of this session.
Ignore test rulesets when calculating Guardrail score – When you enable this setting, Guardrail score is
calculated without taking test rulesets into account. This is the default behavior. When you disable this setting,
test rulesets are taken into account when calculating Guardrail score.
Quality trends – Use this setting to change the date range of the trend graphs on the Application Quality,
Application: Test coverage and Application: Unit testing landing pages. The default value is Last 2 weeks.
Test case execution – Use this setting to change the number of days, counted from the time that tests are run,
during which tests are treated as executed by the Application Quality dashboard and coverage reports. By default,
a test executed more than seven days ago is considered too old to be included on the Application Quality dashboard
and in reports.
Scenario test case execution – Use this setting to add a delay (in milliseconds) to the execution of steps in a
scenario test.

Improving your compliance score

Estimating test coverage


View historical test coverage metrics and generate reports containing the number of executable rules and their test
coverage. Use the data to analyze changes in test coverage, and to verify which rules require testing.

On the Test Coverage landing page, view a chart displaying test coverage metrics and generate specific user-level,
application-level, and merged coverage reports. User-level reports contain the results of a single test coverage
session that a user performs, while application-level reports contain results from multiple test coverage sessions
that many users run. Merged reports contain results from multiple most recent application-level reports.

The following rule types are included in test coverage reports:

Activity
Case type
Collection
Correspondence
Data page
Data transform
Decision data
Decision table
Decision tree
Declare expression
Declare trigger
Flow
Flow action
Harness
HTML
HTML fragment
Map value
Navigation
Paragraph
Report definition
Scorecard
Section
Strategy
Validate
When
XML Stream

Generating a user-level test coverage report

Generate a user-level test coverage report to identify which executable rules in your currently included
applications are covered and not covered by tests. The results of this type of report are not visible on the
Application Quality Dashboard.

Generating an application-level test coverage report

Generate an application-level coverage report that contains coverage results from multiple users. Use this
report to identify which executable rules in your currently included applications are covered and not covered by
tests. The results of this type of report are visible on the Application Quality Dashboard.

Participating in an application-level test coverage session

When an application-level coverage session is running, you can perform tests of the application to contribute to
an application-level test coverage report that identifies the executable rules in your application that are
covered and not covered by tests.

Generating a merged coverage report

Generate application-level coverage reports for every application in your system and in your application stack,
and then merge the most recent reports to a single report, to gain a consolidated overview of test coverage for
all your top-level or built-on applications.

Generating a user-level test coverage report


Generate a user-level test coverage report to identify which executable rules in your currently included applications
are covered and not covered by tests. The results of this type of report are not visible on the Application Quality
Dashboard.

1. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

2. Click User level.

If the Application level coverage is in progress message is displayed, you cannot start a user-level coverage session.

3. Click Start new session.


4. Enter the title of the coverage report, and then click OK.

5. To provide data for the report, run all of the tests that are available for your included applications, for example,
Pega unit automated tests and manual tests.

6. Click Stop coverage, and then click Yes.

If you close the tab or log out without clicking Stop, the report is not generated.

7. Review the results of the coverage session. In the Coverage history section, click Show Report.

8. Optional:

To see whether coverage reports were generated by other users, click Refresh.

9. Optional:

To see a list of application-level coverage reports, click Application level.


Generating an application-level test coverage report


Generate an application-level coverage report that contains coverage results from multiple users. Use this report to
identify which executable rules in your currently included applications are covered and not covered by tests. The
results of this type of report are visible on the Application Quality Dashboard.

1. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

2. Click Application level.

3. Click Start new session.

To start application-level coverage, your operator ID must have the pzStartOrStopMasterAppRuleCoverage privilege.

4. Enter the title of the coverage report, and then click OK.

5. Optional:

To provide data for the report, run all of the tests that are available for your currently included applications, for
example, Pega unit automated tests and manual tests.

6. Inform all relevant users that they can log in to the application and start running tests.

7. Wait until all users have completed their tests and have logged off.

If you stop an application coverage session before a user has logged off, the coverage data of this user is not
included in the report.

8. Click Stop coverage, and then click Yes.

9. Review the results of the coverage session. In the Coverage history section, click Show Report.

10. Optional:

To see whether coverage reports were generated by other users, click Refresh.

11. Optional:

To see a list of user-level coverage reports, click User level.

Participating in an application-level test coverage session


When an application-level coverage session is running, you can perform tests of the application to contribute to an
application-level test coverage report that identifies the executable rules in your application that are covered and
not covered by tests.

Ensure that application-level coverage is in progress before you log in. If application coverage is started after you log
in, you cannot contribute to it unless you log off and log in again. Only users with the
pzStartOrStopMasterAppRuleCoverage privilege can initiate application-level coverage.

1. Check if application-level coverage is in progress.

a. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.
b. Verify that you see the Application level coverage is in progress message.

If you do not see the message, application-level coverage is not active; however, you can still start a user-level test coverage session.

2. To provide data for the report, execute all the tests that are available for the included applications, for
example, Pega unit automated tests and manual tests.

During the coverage session, your local configuration for included applications is overridden by the configuration of the user who started the application-level coverage session.

3. Click your profile icon and then click Log off.

If you do not log off before the rule coverage session is stopped, you do not contribute to the report. If you log off and then log in again while the coverage session is still active, your coverage data is saved as a new session that is included in the application coverage report.

Generating an application-level test coverage report

Generate an application-level coverage report that contains coverage results from multiple users. Use this
report to identify which executable rules in your currently included applications are covered and not covered by
tests. The results of this type of report are visible on the Application Quality Dashboard.

Generating a merged coverage report


Generate application-level coverage reports for every application in your system and in your application stack, and
then merge the most recent reports to a single report, to gain a consolidated overview of test coverage for all your
top-level or built-on applications.

Insight from a merged application report helps you avoid creating duplicate tests for rules that are used across multiple applications. Because a merged report is an instance of an application-level report, when a merged report is the most recent one for an application, it is included in the next merged report.

Before you begin, ensure that your operator ID has the pzStartOrStopMasterAppRuleCoverage privilege, and generate at least one application-level coverage report for another application in your system or for a built-on application in your current application. For more information, see Generating an application-level test coverage report.
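Conceptually, a merged report is the union of the covered-rule sets from the most recent application-level reports. The following Python sketch is an illustration only, not Pega code; the application and rule names are invented:

# Illustrative sketch only (not Pega code): merging coverage reports
# amounts to taking the union of each report's covered-rule set.
app_reports = {
    "CustomerApp": {"ValidateAddress", "ApplyDiscount"},
    "SharedFoundation": {"ApplyDiscount", "FormatName"},
}

merged_covered = set().union(*app_reports.values())
print(sorted(merged_covered))  # ['ApplyDiscount', 'FormatName', 'ValidateAddress']

# A rule already covered anywhere in the stack needs no duplicate test.
assert "ApplyDiscount" in merged_covered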

1. Switch to the main application that you want to use as the baseline for the merged report. See Switching
between applications.

2. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

3. Click the Application level tab.

4. In the Coverage history section, click Merge reports.

5. Enter the title of the merged report, and then click Next.

6. In the list of the most recent reports, select the reports that you want to include in the merged report, and then
click Create.

7. Close the Merge confirmation window.

Dev Studio automatically adds the MRG_ prefix to every merged report to differentiate merged reports from standard application-level coverage reports and to make them easier to find.

8. Open the merged report: in the Coverage history section, find the merged report that you created, and then click Show report.

9. Optional:

To open a report that is included in the merged report, in the Merged reports section, click the report name.

Viewing test coverage reports


View a report that contains the results of test coverage sessions to determine which rules in your application are not
covered with tests. You can improve the quality of your application by creating tests for all uncovered rules that are
indicated in the reports.

1. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

2. Choose the type of report that you want to view:

To view application-level test coverages, click the Application level tab.


To view user-level test coverages, click the User level tab.

3. In the Coverage history section, hover over the row with the relevant test coverage session, and then click
Show Report.
4. Optional:

Choose the data you want to include in the report:

To include only the rules that were updated after a specific date, in the Rules updated after field, click the
calendar icon, select a date and time, and then click Apply.
To include all the rules that are covered with tests, click Covered.
To include all the rules that are not covered with tests, click Uncovered.
To filter the rules, in the column header that you want to filter, click the filter icon, enter the filter criteria,
and then click Apply.
To open a single report that is included in a merged report, in the Merged reports section, click the
report name.
To open a rule that is included in the report, click the rule name.

Setting up for test automation


Creating a test ruleset to store test cases

Before you can create unit test cases or scenario tests, you must configure a test ruleset in which to store the
tests.

Creating a test ruleset to store test cases


Before you can create unit test cases or scenario tests, you must configure a test ruleset in which to store the tests.

1. In the Dev Studio header, open your application rule form by clicking your application and selecting Definition.

2. In the Application rulesets section, click Add ruleset.

3. Complete one of the following actions:

Enter a ruleset name and version of an existing ruleset.


Click the Open icon and create a new ruleset.

The test ruleset must always be the last ruleset in your application stack.

4. Open the test ruleset and click the Category tab.

5. Select the Use this ruleset to store test cases check box.

6. Save the test and application ruleset forms.

When you save test cases for rules, they are saved in this ruleset.

PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, thereby improving application quality.

PegaUnit testing
Automated unit testing is a key stage of a continuous development and continuous integration model of application
development. With continuous and thorough testing, issues are identified and fixed prior to releasing an application,
thereby improving application quality.

For example, an account executive wants to ensure that a 10% discount is applied to all VIP customers. You create a
test case that verifies that this discount is applied to all VIP customers in the database. If the test does not pass, the
results indicate where the 10% discount is not applied.

Automated unit testing involves creating unit test cases for tests run against individual rules, grouping multiple test cases into test suites, running the tests, and viewing the results. When the tests run, the results are compared to the expected results defined in assertions.
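In Pega, test cases and assertions are configured in Dev Studio rather than written as code. As a conceptual analogy only, the following Python unittest sketch expresses the VIP discount example above; the apply_discount function is invented for illustration:

import unittest

def apply_discount(price, customer_type):
    # Invented stand-in for the business logic under test:
    # VIP customers receive a 10% discount.
    return price * 0.9 if customer_type == "VIP" else price

class DiscountAssertions(unittest.TestCase):
    def test_vip_customer_gets_ten_percent_discount(self):
        # Like a Pega assertion, this compares the expected result
        # with the actual result returned by running the rule.
        self.assertAlmostEqual(apply_discount(100.0, "VIP"), 90.0)

    def test_standard_customer_pays_full_price(self):
        self.assertAlmostEqual(apply_discount(100.0, "Standard"), 100.0)

if __name__ == "__main__":
    unittest.main()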

Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

Understanding unit test cases

A test case identifies one or more testable conditions (assertions) used to determine whether a rule returns an expected result. Reusable test cases support the continuous delivery model, providing a means to test rules on a recurring basis to identify the impacts of new or modified rules.

Grouping test cases into suites

You can group related unit test cases or test suites into a test suite so that you can run multiple test cases and
suites in a specified order. For example, you can run related test cases in a regression test suite when changes
are made to application functionality.

Setting up and cleaning the context for a test case or test suite

You can set up the environment and conditions required for running a test case, determine how to clean up test
data at the end of the test run, and set pages on which to automatically run rules.
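In Pega, setup and cleanup are configured on the test case or test suite form. As a conceptual analogy only, the same pattern in Python's unittest separates environment setup, the test itself, and data cleanup; the property names are invented:

import unittest

class CaseWithContext(unittest.TestCase):
    def setUp(self):
        # Arrange the environment and conditions before each test run,
        # analogous to the setup configured for a test case.
        self.case_data = {"CustomerType": "VIP", "Items": []}

    def tearDown(self):
        # Clean up test data at the end of the run.
        self.case_data.clear()

    def test_customer_type_is_set(self):
        self.assertEqual(self.case_data["CustomerType"], "VIP")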

Viewing unit test reports

View a graph with test pass rate trend data, a summary of Pega unit tests that were run, and an overview of
Pega unit test compliance for currently included applications on the Reports tab on the Unit Testing landing
page.

Viewing unit tests without rules

On the Application: Unit testing landing page you can display a list of unit tests that are not associated with any
rule and export this list to an XLS or a PDF file. You should deactivate these unit tests because they will always
fail.

Running test cases and suites with the Execute Tests service

You can use the Execute Tests service (REST API) to validate the quality of your code after every build is
created by running unit test cases that are configured for the application.
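As a sketch of how a build server might call such a service: the endpoint path, parameters, and response format below are placeholder assumptions, not the documented interface; consult the Execute Tests service documentation for your Pega Platform version before wiring this into a pipeline.

import requests

# Placeholder values: the endpoint path and parameter name are assumptions,
# not the documented interface; check your release's documentation.
SERVER = "https://pega.example.com/prweb"
ENDPOINT = SERVER + "/PRRestService/..."  # hypothetical path

response = requests.post(
    ENDPOINT,
    auth=("test.operator", "password"),  # an operator authorized to run tests
    params={"ApplicationInformation": "MyApp:01.01"},  # hypothetical parameter
)
response.raise_for_status()
print(response.text)  # for example, a pass/fail summary returned by the service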

Understanding Pega Platform 7.2.2 and later behavior when switching between Pega unit testing and
Automated Unit Testing features

Beginning with Pega 7.2.2, you can use Pega unit testing to create test cases to validate the quality of your
application by comparing the expected test output with results that are returned by running rules.

Working with the deprecated AUT tool

In older versions of Pega Platform, automated unit tests were created using the Automated Unit Testing (AUT) tool, which has since been replaced by PegaUnit testing. If you have automated unit tests that were created using AUT and have not been converted to PegaUnit test cases, you can switch back to AUT to manage those tests.

Unit testing individual rules


An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration errors
such as incorrectly routed assignments, unit test individual rules as you develop them.

To unit test a rule, open the rule form and select Actions > Run. For some rule types, such as binary file rules,
Pega does not provide an option for unit testing. If the rule cannot be unit tested, the Run option does not appear in
the Actions menu. The appearance of the Run Rule window varies across rule types, so how you run a rule varies
by its type.

In general, you complete the following tasks:

Specify whether to create a test page, or, if any pages of the appropriate class already exist on the clipboard, to
copy one.
Select a data transform to use when Pega Platform creates a test page.
For services, specify whether the service rule is to run in your session or as a newly created service requestor.
If the service is configured to run as an authenticated user, you are prompted for a user name and password.
Provide test data to use when the rule runs.

When you run the rule, the system uses rule resolution. If you click the Run button and there is a higher version of
the rule, the system displays a status message stating it will run the higher version.

To test a circumstance rule, ensure that the circumstances are correct for the rule. Otherwise, the base rule is run.

Clipboard pages created by the Run Rule feature

After running a rule, you can open the Clipboard tool and examine the output as it appears on the resulting clipboard pages.

Unit testing a harness

The Run Rule feature enables you to test a harness individually before testing it in the context of the
application you are developing. You specify a test page for the rule to use, provide sample data as the input,
run the rule, and examine the results.

Unit testing a section

The Run Rule feature enables you to test a section individually before testing it in the context of the application
you are developing. You specify a test page for the rule to use, provide sample data as the input, run the rule,
and examine the results.
Unit testing a data page

Test a data page to ensure that you get the expected results by using the Run Rule feature before testing it in
the context of the application that you are developing. Additionally, you can convert the test run into a Pega
unit test case for reuse.

Unit testing a data transform

You can use the Run Rule feature to test a data transform individually before testing it in the context of the
application that you are developing. Additionally, you can convert the test run into a Pega unit test case.

Viewing test cases for a data type

You can view, run, and add test cases for a data type in the Data Designer.

Flow markers

A flow marker saves test data and decisions that advance a flow execution to a specific point (mark). It allows
you to jump directly to a specific point in the flow rule without having to input the same information every time
to reach that point. In automated unit testing runs, the flow markers are used to advance the execution of a
flow to that point.

Unit testing a decision table

You can test a decision table individually, before testing it in the context of the application that you are
developing. Additionally, you can convert the test run to a Pega unit test case.

Unit testing a decision tree

You can use the Run Rule feature to test a decision tree individually before testing it in the context of the
application that you are developing.

Unit testing a when rule

You can test a when rule individually before testing it in the context of the application that you are developing.
Additionally, you can convert the test into a Pega unit test case to validate application data by comparing
expected property values to the actual values that are returned by the test.

Unit testing a map value

You can test a map value individually, before testing it in the context of the application that you are
developing. Additionally, you can convert the test run to a Pega unit test case.

Unit testing a collection

You can test a collection individually, before testing it in the context of the application that you are developing.

Unit testing a declare expression

You can test a declare expression individually, before testing it in the context of the application that you are
developing. Additionally, you can convert the test run to a Pega unit test case.

Unit testing a flow

You can test a flow individually before testing it in the context of the entire application that you are developing.

Unit testing a flow action

The Run Rule feature enables you to test a flow action individually before testing it in the context of the
application you are developing. You specify a test page for the rule to use, provide sample data as the input,
run the rule, and examine the results.

Unit testing a report definition

You can test a report definition individually, before testing it in the context of the application that you are
developing. Additionally, you can convert the test into a Pega unit test case to validate application data.

Unit testing an activity

You can test an activity individually before testing it in the context of the application that you are developing.
Additionally, you can convert the test run to a Pega unit test case.

Unit testing a Parse Delimited rule

You can test a Parse Delimited rule directly, separate from the activity or other context in your application in
which it will eventually operate.

Unit testing a Parse XML rule

You can test a Parse XML rule directly, separate from the activity or other context in your application in which it
will ultimately operate.
Unit testing a Parse Structured rule

You can test a Parse Structured rule directly, separate from the activity or other context in your application in
which it will ultimately operate.

Unit testing service rules

Clipboard pages created by the Run Rule feature


After running a rule, you can open the Clipboard tool and examine the output as it appears on the resulting clipboard
pages. The Run Rule operation creates the following pages:

RuleToRun — The clipboard representation of the rule that you tested.


runRulePage — Holds the output from the rule.
temp_ pages — Pages created or copied by the Run Rule feature when it ran the rule. The names of these
pages begin with the literal temp_.
pySimulationDataPage — For service rules, a page of the helper class Data-Admin-IS-ClientSimulation. Contains
information about the simulated request and response.

Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

Unit testing a harness


The Run Rule feature enables you to test a harness individually before testing it in the context of the application you
are developing. You specify a test page for the rule to use, provide sample data as the input, run the rule, and
examine the results.

Determine how you will provide the sample data to use when testing the rule. If possible, open a work item of the
appropriate class.

For general information about the Run Rule feature, including a list of the clipboard pages that are generated when
a rule runs, see Unit testing individual rules.

1. Save the rule.

2. Optional:

To review and update your preferences, select Preferences from the Operator menu.

These determine the skin rule used to style the test.

3. Complete any preprocessing necessary to create the appropriate clipboard context and, if the rule is
circumstanced or time-qualified, to set the conditions you want to test.

4. Click the Run toolbar button. The Run Rule window appears.

5. In the Test Page section, specify which page to use as the main page by performing one of the following
actions:

If any pages of the rule's Applies To class already exist, select one to be copied.

If this harness applies to an embedded page, identify a top-level page that contains the embedded page or
pages and supply a Page Context.

Select Create or Reset Test page. Then, in the Apply field, select the data transform to use for the test
page.

6. If the rule being tested is circumstance-qualified, select Set circumstance properties to run exact version of
rule.

7. In the lower section of the Run Rule window, enter the test data and click Execute.

The system runs the harness and displays the results.

Harnesses

Unit testing a section


The Run Rule feature enables you to test a section individually before testing it in the context of the application you
are developing. You specify a test page for the rule to use, provide sample data as the input, run the rule, and
examine the results.

Before you begin

Before you begin, determine how you will provide the sample data to use when testing the rule. If possible, open a
work item of the appropriate work type.

For general information about the Run Rule feature, including a list of the clipboard pages that are generated when
a rule runs, see Unit testing individual rules.

Run the rule

To run the rule, complete the following steps:

1. Save the rule.

2. Complete any processing to create the appropriate clipboard context. If the rule is circumstanced or time-
qualified, set the conditions you want to test.

3. Click the Run toolbar action. The Run Rule window appears.

4. In the Test Page section, specify which page to use as the main page. Do one of the following:

If any pages of the rule's Applies To class already exist, select one to be copied. (If this section applies to an
embedded page, identify a top-level page that contains the embedded page or pages and supply a Page
Context.)
Otherwise, select Create or reset Test page. Then, in the Apply field, select the data transform to use for the
test page.

If the rule being tested is circumstance-qualified, select Set circumstance properties to run exact version of rule.

If the rule being tested contains one or more parameter definitions on the Parameters tab, you are prompted for
parameter values. Supply a text value for any required parameters.

5. In the lower section of the Run Rule window click Execute. The system runs the section and displays the results.

Sections

Unit testing a data page


Test a data page to ensure that you get the expected results by using the Run Rule feature before testing it in the
context of the application that you are developing. Additionally, you can convert the test run into a Pega unit test
case for reuse.

1. In the navigation pane of Dev Studio, click Records > Data Model > Data Page, and then select the data page that you want to open.

2. Click Actions > Run.

3. From the Thread list in the Run context pane, select the thread in which you want to run the rule.

4. In the main test page, enter values for parameters, if any, to pass to the rule.

5. Select the Flush all instances of this data page before execution check box to delete any existing instances of
the selected data page.

6. Click Run.

7. Optional:

To convert the test into a unit test case for automated testing, click Convert to Test, and then configure the
test case. For more information, see Configuring unit test cases.

8. Optional:

Click Show Clipboard to open the Clipboard and examine the pages that are generated by the unit test. For
more information, see Clipboard pages created by the Run Rule feature.

9. Optional:

If the rule has errors, click Trace to debug the rule with the Tracer tool.

Data page testing

Data page test cases are a way to validate that application data is loaded correctly. Data page test cases
compare the expected value of one or more properties with their actual values in a data page.

Data pages
Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

Using the Clipboard tool


Application debugging using the Tracer tool
Creating a data page
Viewing rule history
More about data page rules

Data page testing


Data page test cases are a way to validate that application data is loaded correctly. Data page test cases compare
the expected value of one or more properties with their actual values in a data page.
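Conceptually, each assertion is an expected-versus-actual comparison of property values. The following Python sketch is illustrative only, not Pega code; load_customer_page is an invented stand-in for the data page load:

# Illustrative sketch only: a data page test case as expected-vs-actual
# property comparisons. load_customer_page stands in for loading the page.
def load_customer_page(customer_id):
    return {"CustomerType": "VIP", "Discount": 0.10}  # stubbed source data

expected = {"CustomerType": "VIP", "Discount": 0.10}
actual = load_customer_page("C-1001")

mismatches = {
    name: (value, actual.get(name))
    for name, value in expected.items()
    if actual.get(name) != value
}
assert not mismatches, f"Property mismatches: {mismatches}"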

Before you begin creating tests, your application must be configured for testing. See Creating a test ruleset.

The data page test case landing page lets you manage your data page unit tests. It lists all of the data page tests in
your application.

From the landing page, you can selectively run data page test cases and view whether they have passed or failed, see whether data page run times have started to change, and create new data page test cases. You can access tests from the Data Explorer by selecting View all test cases from the menu at the top right of the Data Explorer.

Configuring unit test cases


Converting unit tests to test cases
Viewing test case results

After you run a unit test case, you can view the results of the test run.

Unit testing a data transform


You can use the Run Rule feature to test a data transform individually before testing it in the context of the
application that you are developing. Additionally, you can convert the test run into a Pega unit test case.

1. In the navigation pane of Dev Studio, click Records > Data Model > Data Transform, and then select the data transform that you want to open.

2. Click Actions > Run.

3. In the Run context pane, select the thread, test page, and data transform you want to use for the test.

a. In the Thread list, select the thread in which you want to run the rule.

b. In the Page list, select whether to copy parameter values from an existing page or to create an empty
page:

To use parameter values from an existing clipboard page in the selected thread, click Copy existing
page, and then select the page you want to copy.

To start from a page containing no parameter values, click Empty test page.

c. Optional:

To apply a data transform, select the Apply data transform check box, and then select the transform to
apply.

The system always runs the rule instance on the RunRecordPrimaryPage, regardless of the page that you
select from this list. If you convert this test run to a test case and the RunRecordPrimaryPage requires
initial values, then configure the clipboard so that it populates the page with initial values. For more
information, see Setting up your test environment.
4. Optional:

To change the values that are passed to the rule, provide the new values on the main test page.

5. Click Run to run the test.

6. To convert the test run into a Pega unit test case, click Convert to test, and then configure the test case. For
more information, see Configuring unit test cases.

7. Optional:

If the rule has errors, then click Trace to debug it using the Tracer tool. For more information, see Application
debugging using the Tracer tool.

8. Click Clipboard to open the Clipboard and examine the pages that are generated by the unit test. For more
information, see Clipboard pages created by the Run Rule feature.

Data Transforms

Viewing test cases for a data type


You can view, run, and add test cases for a data type in the Data Designer.
1. In the navigation pane of Dev Studio, click Data types, and then select the data type for which you want to view
test cases.

2. In the Data Designer, click the Test cases tab. The test case name, data page, last run date and time, the
actual and expected run time of the data page, and the result are displayed.

3. Sort or filter the cases:

a. To sort the cases, click the column name that you want to sort by.

b. To filter the cases, click the triangle to the right of the column name that you want to filter on.

4. Click Run selected to run the selected test.

5. Click Add new to create a new test case.

Creating a data type


Adding a data page to a data type
Running a unit test case

Run a unit test case to validate rule functionality.

Converting unit tests to test cases

Flow markers
A flow marker saves test data and decisions that advance a flow execution to a specific point (mark). It allows you to
jump directly to a specific point in the flow rule without having to input the same information every time to reach
that point. In automated unit testing runs, the flow markers are used to advance the execution of a flow to that
point.

The following tabs are available on this form:

Inputs
Results

Ordinarily, flow markers are created automatically as you test flows. They are not typically created by using a rule
form. However, you can open and review the Flow Marker rule forms.

Flow marker rules belong to a RuleSet and version, but they do not appear in the Application Explorer display.

Access
Use the Test Cases tab of the rule form for the associated flow to see the saved flow markers for that flow.

Where referenced
Flow marker rules are referenced:

In the Test Cases tab of a flow's rule form.


During playback of a flow's recorded test case.

Category
Test case flow markers are instances of the Rule-AutoTest-Case-FlowMarker class. They belong to the SysAdmin
category.

Flow markers — Completing the Create or Save As form

Because flow markers exist only in the context of the flow rule they are created in, the New window for a flow
marker opens only when you click the Save Flow Marker button while running a flow. In the New window, you
enter a name and a short description; the name that you enter is stored as the Purpose key part of the flow
marker.

Flow Marker form – Understanding the Inputs tab

The Inputs tab shows you the flow data that is saved in the flow marker from a unit testing run of a flow. This
data represents the inputs and conditions at the point where the flow marker was created.

Flow Marker form — Understanding the Results tab

The Results tab shows you the results that were captured at the point in the flow when this flow marker was
saved.

Flow markers — Completing the Create or Save As form


Because flow markers exist only in the context of the flow rule they are created in, the New window for a flow
marker opens only when you click the Save Flow Marker button while running a flow. In the New window, you enter
a name and a short description; the name that you enter is stored as the Purpose key part of the flow marker.

When stored, a flow marker rule has three key parts, listed in the following table.

Field       Description
Class Name  Read only. Set to the type of the rule being tested; here, Rule-Obj-Flow, because flow markers are for flows.
InsName     Read only. Set to the pxInsName value of the flow rule to which this flow marker applies.
Purpose     Set to the Flow Marker Name that is specified in the New window when the flow marker is saved.

Create a separate RuleSet to hold flow markers, typically the same RuleSet as your test cases and unit test suites,
rather than using a RuleSet that will be moved to a production system. For more information, see Testing
Applications in Pega Community.

For general information about the Create and Save As forms, see:

Completing the Create form.


Completing the Save As form.

Rule resolution

As with most rules, when you search for a flow marker, the system shows you only those rules that belong to a
RuleSet and version that you have access to.

Flow marker rules are associated with a single tested flow rule (identified by the second key, InsName) and they
cannot be qualified by time or circumstance.

Flow markers

Flow markers

A flow marker saves test data and decisions that advance a flow execution to a specific point (mark). It allows
you to jump directly to a specific point in the flow rule without having to input the same information every time
to reach that point. In automated unit testing runs, the flow markers are used to advance the execution of a
flow to that point.

Flow Marker form – Understanding the Inputs tab


The Inputs tab shows you the flow data that is saved in the flow marker from a unit testing run of a flow. This data
represents the inputs and conditions at the point where the flow marker was created.

This tab is best used to examine the initial clipboard pages so that you can review the initial properties and values
that were used when this flow marker was saved. If you change values here and then save, inconsistent results from
using this flow marker might occur. Instead, to update a saved flow marker with new input data, the best practice is
to play back the associated flow test case and overwrite this flow marker during the playback.

Data Transform
Read-only. The name of the data transform that was used to create a test page in the unit test run of the flow
(typically, pyDefault ).
Initial Pages
When testing rules, sometimes clipboard pages are created, loaded, or copied before the rule can run. These
pages are saved when you create a flow marker, so that you don't need to recreate the initial conditions before
jumping to that flow marker in the execution of the flow. Use the controls in this section to view those pages
and examine the initial properties and values.
Input Parameters
For review only. A list of the test data that was used — the input properties and their values. If you make
changes here and save them, inconsistent results might occur when this flow marker is used.

Flow markers

A flow marker saves test data and decisions that advance a flow execution to a specific point (mark). It allows
you to jump directly to a specific point in the flow rule without having to input the same information every time
to reach that point. In automated unit testing runs, the flow markers are used to advance the execution of a
flow to that point.

Flow markers — Completing the Create or Save As form

Because flow markers exist only in the context of the flow rule they are created in, the New window for a flow
marker opens only when you click the Save Flow Marker button while running a flow. In the New window, you
enter a name and a short description; the name that you enter is stored as the Purpose key part of the flow
marker.

Flow Marker form — Understanding the Results tab


The Results tab shows you the results that were captured at the point in the flow when this flow marker was saved.
This tab is best used to examine the results' clipboard pages so that you can review the expected properties and
values captured when this flow marker was saved. If you change values here and then save, inconsistent results
from using this flow marker might occur. Instead, to update a saved flow marker, the best practice is to play back
the associated flow test case and overwrite this flow marker during the playback.

Result Value
The value that must be returned when you jump to the flow marker for the results to be considered expected.
Show Result Pages
The clipboard pages that held the results when the flow marker was created. This section allows you to view
the resulting property and values up to the point the flow marker was created.

Flow markers

A flow marker saves test data and decisions that advance a flow execution to a specific point (mark). It allows
you to jump directly to a specific point in the flow rule without having to input the same information every time
to reach that point. In automated unit testing runs, the flow markers are used to advance the execution of a
flow to that point.

Flow markers — Completing the Create or Save As form

Because flow markers exist only in the context of the flow rule they are created in, the New window for a flow
marker opens only when you click the Save Flow Marker button while running a flow. In the New window, you
enter a name and a short description; the name that you enter is stored as the Purpose key part of the flow
marker.

Unit testing a decision table


You can test a decision table individually, before testing it in the context of the application that you are developing.
Additionally, you can convert the test run to a Pega unit test case.

Unit testing a decision table involves specifying a test page for the rule to use, providing sample data as the input,
running the rule, and examining the results.
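Conceptually, a decision table evaluates condition rows from top to bottom against the test page and returns the first matching result, which is what the default decision result assertion captures when you convert the run to a test case. The following Python sketch is illustrative only, not Pega code, and the table contents are invented:

# Illustrative sketch only: a decision table as ordered (condition, result)
# rows evaluated top to bottom against the input values.
def evaluate_table(rows, inputs, otherwise=None):
    for condition, result in rows:
        if condition(inputs):
            return result
    return otherwise

discount_table = [
    (lambda row: row["CustomerType"] == "VIP", "10% discount"),
    (lambda row: row["Orders"] >= 50, "5% discount"),
]

# The input you enter and the result returned become the default
# decision result assertion when the run is converted to a test case.
assert evaluate_table(discount_table, {"CustomerType": "VIP", "Orders": 3}) == "10% discount"
assert evaluate_table(discount_table, {"CustomerType": "New", "Orders": 0},
                      otherwise="no discount") == "no discount"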

1. In the navigation pane of Dev Studio, click Records > Decision > Decision Table, and then select the decision table that you want to test.

2. Click Actions > Run.

3. In the Data Context list, click the thread in which you want to run the rule.

4. Select a method for creating the test page.

Select Copy existing page to copy values from a thread of an existing clipboard page to the main test
page.
Select Create or reset test page to create a new test page or reset the values of an existing test page.
To apply a data transform to the values on the test page, click the link in the Apply field and then
select a data transform.
To clear the Result pane, click Reset Page.
To switch the current context, select a thread in the Data Context list and then click Switch Context.

5. To display the test results, enter data in the Result panel and then click Run again.

The value that you enter and the result that is returned are the values that are used for the default decision
result assertion that is generated when you convert this test to a test case.
6. Optional:

To view the pages that are generated by the unit test, click Show Clipboard. For more information, see
Clipboard pages created by the Run Rule feature.

7. To convert the test into a Pega unit test case, click Convert to Test. For more information, see Configuring unit
test cases.

8. Optional:

To view the row on the Table tab that produced the test result, click the Result Decision Paths link. If the
Evaluate All Rows option on the Results tab is selected, all the rows that are true are highlighted.

Debugging decision tables with the Tracer

If your decision table rule does not give you the results that you expect, and you cannot determine the problem
by running the rule and examining the clipboard pages, run the Tracer tool. With the Tracer, you can watch
each step in the evaluation of a decision table as it occurs.

Debugging decision trees with the Tracer

If your decision tree does not give you the results you expect and you cannot determine the problem by
running the rule and examining the clipboard pages, run the Tracer tool. With the Tracer you can watch each
step in the evaluation of a decision tree as it occurs.
Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

About Decision tables


Creating decision tables
Viewing rule history
More about Decision Tables

Debugging decision tables with the Tracer


If your decision table rule does not give you the results that you expect, and you cannot determine the problem by
running the rule and examining the clipboard pages, run the Tracer tool. With the Tracer, you can watch each step in
the evaluation of a decision table as it occurs.

1. On the developer toolbar, click Tracer.

2. Select the ruleset that contains the rule to be traced.

a. In the Tracer window, click Settings.

b. In the Event Types to Trace section, select the Decision Table check box.

c. Select the ruleset that contains the rule to be traced.

d. Clear the other rulesets to avoid memory usage from tracing events occurring in other rulesets.

e. Click OK.

3. Return to the main portal and run the decision table.

4. Watch the Tracer output as the rule runs.

Unit testing a decision table

You can test a decision table individually, before testing it in the context of the application that you are
developing. Additionally, you can convert the test run to a Pega unit test case.

Unit testing a decision tree


You can use the Run Rule feature to test a decision tree individually before testing it in the context of the
application that you are developing.

You specify a test page for the rule to use, provide sample data as the input, run the rule, and examine the results.
Additionally, you can convert the test run to a Pega unit test case.

1. In the navigation pane of Dev Studio, click Records > Decision > Decision Tree, and then select the decision tree that you want to test.

2. Click Actions > Run.

3. In the Data Context list, click the thread in which you want to run the rule.

4. Select a method for creating the test page.

Select Copy existing page to copy values from a thread of an existing clipboard page to the main test
page.
Select Create or reset test page to create a new test page or reset the values of an existing test page.
To apply a data transform to the values on the test page, click the link in the Apply field and then
select a data transform.
To clear the Result pane, click Reset Page.
To switch the current context, select a thread in the Data Context list and then click Switch Context.

5. To display the test results, enter data in the Result panel and then click Run again.

The value that you enter and the result that is returned are the values that are used for the default decision
result assertion that is generated when you convert this test to a test case.
6. Optional:

To view the pages that are generated by the unit test, click Show Clipboard. For more information, see
Clipboard pages created by the Run Rule feature.

7. To convert the test into a Pega unit test case, click Convert to Test. For more information, see Configuring unit
test cases.

8. Optional:

To view the row on the Decision tab that produced the test result, click the Result Decision Paths link.
Debugging decision trees with the Tracer

If your decision tree does not give you the results you expect and you cannot determine the problem by
running the rule and examining the clipboard pages, run the Tracer tool. With the Tracer you can watch each
step in the evaluation of a decision tree as it occurs.

Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

About Decision Trees


Creating decision trees
Viewing rule history
More about Decision Trees

Debugging decision trees with the Tracer


If your decision tree does not give you the results you expect and you cannot determine the problem by running the
rule and examining the clipboard pages, run the Tracer tool. With the Tracer you can watch each step in the
evaluation of a decision tree as it occurs.

1. On the developer toolbar, click Tracer.

2. Select the ruleset that contains the rule to be traced.

a. In the Tracer window, click Settings.

b. In the Event Types to Trace section, select the Decision Tree check box.

c. Select the ruleset that contains the rule to be traced.

d. Clear the other rulesets to avoid memory usage from tracing events occurring in other rulesets.

e. Click OK.

3. Return to the main portal and run the decision tree.

4. Watch the Tracer output as the rule runs.

Unit testing a decision tree

You can use the Run Rule feature to test a decision tree individually before testing it in the context of the
application that you are developing.

Unit testing a when rule


You can test a when rule individually before testing it in the context of the application that you are developing.
Additionally, you can convert the test into a Pega unit test case to validate application data by comparing expected
property values to the actual values that are returned by the test.

In a continuous delivery environment, Pega unit testing provides you with feedback on the quality of your
applications, so that you can quickly identify issues and correct them.

By default, when you run the when rule, the result assertion uses the input value that you enter and the result that
is returned. The assertion is generated when you convert this test to a test case.

1. In the navigation pane of Dev Studio, click Records > Decision > When, and then select the when rule that you want to test.

2. Click Actions > Run.

3. In the Run context pane, select the thread, test page, and data transform you want to use for the test.

a. In the Thread list, select the thread in which you want to run the rule.

b. In the Page list, select whether to copy parameter values from an existing page or to create an empty
page:

To use parameter values from an existing clipboard page in the selected thread, click Copy existing
page, and then select the page you want to copy.

To start from a page containing no parameter values, click Empty test page.

c. Optional:

To apply a data transform, select the Apply data transform check box, and then select the transform to
apply.
The system always runs the rule instance on the RunRecordPrimaryPage, regardless of the page that you
select from this list. If you convert this test run to a test case and the RunRecordPrimaryPage requires
initial values, then configure the clipboard so that it populates the page with initial values. For more
information, see Setting up your test environment.
4. Optional:

To change the values that are passed to the rule, provide the new values on the main test page.

5. Click Run to run the test.

6. To convert the test run into a Pega unit test case, click Convert to test, and then configure the test case. For
more information, see Configuring unit test cases.

7. To view the pages that are generated by the unit test, click Clipboard. For more information, see Clipboard
pages created by the Run Rule feature.

8. If the rule has errors, then debug it using the Tracer tool. For more information, see Application debugging
using the Tracer tool.

Debugging when rules with the Tracer

If your when rule does not give you the results you expect and you cannot determine the problem by running
the rule and examining the clipboard pages, run the Tracer tool. With the Tracer, you can watch each step in
the evaluation of a when rule as it occurs.

When Condition rules


Using the Clipboard tool
Application debugging using the Tracer tool

Debugging when rules with the Tracer


If your when rule does not give you the results you expect and you cannot determine the problem by running the
rule and examining the clipboard pages, run the Tracer tool. With the Tracer, you can watch each step in the
evaluation of a when rule as it occurs.

1. Click Tracer on the developer toolbar in Dev Studio.

2. In the Tracer window, click Settings.

3. In the Event Types to Trace section, select When rule.

4. Select the ruleset that contains the rule to be traced. Clear the other rulesets to avoid memory usage from
tracing events occurring in other rulesets.

5. Click OK.

6. Return to the main portal and run the when rule.

7. Watch the Tracer output as the rule runs.

Unit testing a when rule

You can test a when rule individually before testing it in the context of the application that you are developing.
Additionally, you can convert the test into a Pega unit test case to validate application data by comparing
expected property values to the actual values that are returned by the test.

Unit testing a map value


You can test a map value individually, before testing it in the context of the application that you are developing.
Additionally, you can convert the test run to a Pega unit test case.

Testing a map value involves specifying a test page for the rule to use, providing sample values for required
parameters, running the rule, and then examining the test results.

1. In the navigation pane of Dev Studio, click Records > Decision > Map Value, and then click the map value that you want to test.

2. Click Actions > Run.

3. In the Test Page pane, select the context and test page to use for the test:

a. In the Data Context list, click the thread in which you want to run the rule. If a test page exists for the
thread, then it is listed and is used for creating the test page.

b. To discard all previous test results and start from a blank test page, click Reset Page.

c. To apply a data transform to the values on the test page, click the data transform link, and then select the
data transform you want to use.
4. Enter sample values to use for required parameters in the Results pane and then click Run Again.

The value that you enter and the result that is returned are the values that are used for the default decision
result assertion that is generated when you convert this test to a test case.
5. Optional:

To view the pages that are generated by the unit test, click Show Clipboard.

6. To convert the test into a Pega unit test case, click Convert to Test. For more information, see Configuring unit
test cases.

7. Optional:

To view the row that produced the test result, click a Result Decision Paths link.

Creating an email listener


Using the Clipboard tool
Application debugging using the Tracer tool

Unit testing a collection


You can test a collection individually, before testing it in the context of the application that you are developing.

You specify a test page for the rule to use, provide sample data as the input, run the rule, and examine the results.
Additionally, you can convert the test run into a Pega unit test case.

1. In the navigation pane of Dev Studio, click Records > Decision > Collection, and then select the collection that you want to test.

2. Click Actions > Run.

3. In the Run context pane, select the thread, test page, and data transform you want to use for the test.

a. In the Thread list, select the thread in which you want to run the rule.

b. In the Page list, select whether to copy parameter values from an existing page or to create an empty
page:

To use parameter values from an existing clipboard page in the selected thread, click Copy existing
page, and then select the page you want to copy.

To start from a page containing no parameter values, click Empty test page.

c. Optional:

To apply a data transform, select the Apply data transform check box, and then select the transform to
apply.

The system always runs the rule instance on the RunRecordPrimaryPage, regardless of the page that you
select from this list. If you convert this test run to a test case and the RunRecordPrimaryPage requires
initial values, then configure the clipboard so that it populates the page with initial values. For more
information, see Setting up your test environment.
4. Optional:

To change the values that are passed to the rule, provide the new values on the main test page.

5. Click Run to run the test.

6. To convert the test run into a Pega unit test case, click Convert to test, and then configure the test case. For
more information, see Configuring unit test cases.

7. Optional:

If the rule has errors, then click Trace to debug it using the Tracer tool. For more information, see Application
debugging using the Tracer tool.

8. Click Clipboard to open the Clipboard and examine the pages that are generated by the unit test. For more
information, see Clipboard pages created by the Run Rule feature.

Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

Running a unit test case

Run a unit test case to validate rule functionality.

Viewing test case results


After you run a unit test case, you can view the results of the test run.

Exporting a list of test cases

You can export a list of all the Pega unit test cases that are in your application or configured on a rule form.

Unit testing a declare expression


You can test a declare expression individually, before testing it in the context of the application that you are
developing. Additionally, you can convert the test run to a Pega unit test case.

Unit testing a declare expression involves specifying a test page for the rule to use, providing sample data as the
input, running the rule, and examining the results.

You can unit test Declare Expression rules for which all properties referenced in the expression belong to a single
Applies To class or to one class plus superclasses of that class. For example, you cannot use this facility to test a
computation that involves both a Data-Account property and an Assign-Workbasket property.

The following considerations apply when unit testing a declare expression:

Using data transforms – You can unit test a declare expression by using a data transform to set property values
or by manually entering values if all properties in the computation are Single Value properties and they belong
to a single Applies To class.
Testing expressions involving aggregate properties – If the expressions involve aggregate properties, use this
facility only with Page Group or Page List properties with a small number of elements.
Avoiding infinite loops – To avoid infinite loops caused by recursion, this facility uses an internal limit of 999
computations, counting both iterations over List and Group properties and expression evaluations. If this limit is
reached during a test execution, the evaluation ends with an error message (see the sketch after this list).

Testing expressions involving special properties – Your expression can involve special properties (standard
properties with names starting with px or your properties marked as special), Value List, or Value Group
properties. You cannot use manual inputs to change the value of a special property. Instead, you can make
changes to special properties by using a data transform.
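The 999-computation limit behaves like a shared evaluation budget. The following Python sketch is illustrative only; apart from the 999 figure stated above, the accounting details are invented and do not reflect Pega internals:

# Illustrative sketch only: a recursion guard that charges every expression
# evaluation, and every iteration over list or group members, to one budget.
LIMIT = 999  # the internal limit described above

def evaluate(expression, counter):
    counter[0] += 1
    if counter[0] > LIMIT:
        raise RuntimeError("Evaluation ended: computation limit reached")
    if isinstance(expression, list):      # aggregate: charge each element
        return [evaluate(item, counter) for item in expression]
    if callable(expression):              # dependent expression
        return evaluate(expression(), counter)
    return expression                     # literal value

counter = [0]
print(evaluate([1, 2, [3, 4]], counter), "evaluations:", counter[0])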

1. In the navigation pane of Dev Studio, click Records > Decision > Declare Expression, and then select the declare expression that you want to test.

2. Click Actions > Run.

3. In the Test Page pane, select the context and test page to use for the test:

a. In the Data Context list, click the thread in which you want to run the rule. If a test page exists for the
thread, then it is listed and is used for creating the test page.

b. To discard all previous test results and start from a blank test page, click Reset Page.

c. To apply a data transform to the values on the test page, click the data transform link, and then select the
data transform you want to use.

4. In the bottom area of the window, enter values to use for the properties in the declarative network. For each
property value, click the property, enter a value for the property in the Property field, and then click Update.
Each time an input property is changed, the expression is automatically reevaluated.

5. If the unit test of the Declare Expression rule requires creating an input that is part of an embedded Page List or
Page Group property, create a new page by performing the following actions:

a. Select the Add new page check box.

b. For a Page Group, enter an identifier as a subscript value.

For a Page List, the system appends the new page after existing pages.

c. From the Class list, select the class of the page.

d. Click Update.

You can also add inputs to Value List or Value Group properties.

6. To convert the test run into a Pega unit test case, click Convert to Test. For more information, see Configuring
unit test cases.

About Declare Expression rules


Declarative network
More about Declare Expression rules

Unit testing a flow


You can test a flow individually before testing it in the context of the entire application that you are developing.
You run through the flow, provide sample data as the input, and examine the behavior and results to see whether they are what you expect. When you test a flow, the system creates a new test page, starts running the flow, and creates a case.

1. In the navigation pane of Dev Studio, click Records > Process > Flow, and then select the flow.

2. Click Actions > Run from the toolbar.

3. Step through each decision or assignment, providing input in each step.

4. Run through the rule as many times as necessary, testing each possible path.

Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

Flows

Unit testing a flow action


The Run Rule feature enables you to test a flow action individually before testing it in the context of the application
you are developing. You specify a test page for the rule to use, provide sample data as the input, run the rule, and
examine the results.

Before you begin

Before you begin, determine how you will provide the sample data to use when testing the rule. If possible, open a
work item of the appropriate work type.

For general information about the Run Rule feature, including a list of the clipboard pages that are generated when
a rule runs, see Unit testing individual rules.

Run the rule

To run the rule, complete the following steps:

1. Save the rule.


2. Complete any preprocessing necessary to create the appropriate clipboard context and, if the rule is
circumstanced or time-qualified, to set the conditions you want to test.
3. Click the Run toolbar button. The Run Rule window appears.

4. In the Test Page section, specify which page to use as the main page. Do one of the following:

If any pages of the rule's Applies To class already exist, select one to be copied. (If this flow action applies to an embedded page, identify a top-level page that contains the embedded page or pages and supply a Page Context.)
Otherwise, select Create or Reset Test page. Then, in the Apply field, select the data transform to use for the test page.

5. If the rule being tested is circumstance-qualified, select Set circumstance properties to run exact version of rule.

6. In the lower section of the Run Rule window, enter the test data and click Run Again. The system runs the flow action and displays the results.

7. Optional: Click Show Clipboard to open the Clipboard and examine the pages. Click the Hide Clipboard button to close the tool. For information about the clipboard pages that are generated, see Unit testing individual rules.

About Flow Actions


Flow Actions - Completing the New or Save As form
More about Flow Actions

Unit testing a report definition


You can test a report definition individually, before testing it in the context of the application that you are
developing. Additionally, you can convert the test into a Pega unit test case to validate application data.

In a continuous delivery environment, Pega unit testing provides you with feedback on the quality of your
applications, so that you can quickly identify issues and correct them.

1. In the navigation pane of Dev Studio, click Records > Reports > Report Definition, and then select the report definition that you want to test.

2. Click Actions > Run.

3. To convert the report definition test run into a Pega unit test case, click Convert to test and then configure the
unit test case. For more information, see Configuring unit test cases.

4. Optional:

To edit the report, click Edit Report and then make the changes. For more information, see Editing a report.
5. Optional:

In the Actions list, you can also select to refresh the report, save a copy of the report, summarize and sort the
report, convert the report from a summary into a list (if applicable), and export the report to a PDF or Excel file.


Unit testing an activity


You can test an activity individually before testing it in the context of the application that you are developing.
Additionally, you can convert the test run to a Pega unit test case.

You can unit test the activity if one of the following criteria is met:

The Require Authentication to run check box is selected on the Security tab of the Activity form.
Your access role allows you to update rules that have the Applies To class of this activity.

1. In the navigation pane of Dev Studio, click Records > Technical > Activity, and then select the activity that you
want to open.

2. Click Actions > Run.

3. In the Run context pane, select the thread, test page, and data transform you want to use for the test.

a. In the Thread list, select the thread in which you want to run the rule.

b. In the Page list, select whether to copy parameter values from an existing page or to create an empty
page:

To use parameter values from an existing clipboard page in the selected thread, click Copy existing
page, and then select the page you want to copy.

To start from a page containing no parameter values, click Empty test page.

c. Optional:

To apply a data transform, select the Apply data transform check box, and then select the transform to
apply.

The system always runs the rule instance on the RunRecordPrimaryPage, regardless of the page that you
select from this list. If you convert this test run to a test case and the RunRecordPrimaryPage requires
initial values, then configure the clipboard so that it populates the page with initial values. For more
information, see Setting up your test environment.
4. Optional:

To change the values that are passed to the rule, provide the new values on the main test page.

5. Click Run to run the test.

6. To convert the test run into a Pega unit test case, click Convert to test, and then configure the test case. For
more information, see Configuring unit test cases.

7. Optional:

If the rule has errors, then click Trace to debug it using the Tracer tool. For more information, see Application
debugging using the Tracer tool.

Clipboard contents created from activity unit tests

When you test activities with the Run Rule feature, the system might create one or more clipboard pages,
including copies of the named pages that you specified in the Run dialog box. The system retains these pages
when you close the dialog box; properties on these pages remain available for you to use when you perform
subsequent testing of this or other rules with the Run Rule feature.

Testing an activity in context

If the activity requires extensive setup and is more appropriately tested in context rather than through unit
testing, you can trigger the Tracer to start only when the activity is reached.

Activities
Unit testing individual rules

An incorrect rule configuration in an application can cause delays in case processing. To avoid configuration
errors such as incorrectly routed assignments, unit test individual rules as you develop them.

Using the Clipboard tool


Application debugging using the Tracer tool

Clipboard contents created from activity unit tests


When you test activities with the Run Rule feature, the system might create one or more clipboard pages, including
copies of the named pages that you specified in the Run dialog box. The system retains these pages when you close
the dialog box; properties on these pages remain available for you to use when you perform subsequent testing of
this or other rules with the Run Rule feature.

If the activity returns results to a parameter marked as Out on the Parameter tab of the activity, you cannot see the
results in the Clipboard. Out parameters reside on the parameter page of the activity, which you can view only with
the Tracer tool. To simplify testing, you can modify the activity temporarily to store an Out parameter value in a
property that is visible on a clipboard page. Delete the temporary step when you finish testing.
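For example, the temporary step might be a Java step like the following minimal sketch, which assumes the activity
declares an Out parameter named OutResult (a hypothetical name) and that TestOutValue is a Single Value property
that you created for inspection:

myStepPage.putString("TestOutValue", tools.getParameterPage().getString("OutResult"));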

Testing an activity in context


If the activity requires extensive setup and is more appropriately tested in context rather than through unit testing,
you can trigger the Tracer to start only when the activity is reached.

To test an activity in the context of other activities and rules that produce the needed initial conditions, complete
the following actions:

1. Open the activity.

2. Select Actions > Trace. The Tracer tool starts.

3. To avoid using more memory than needed while tracing, click Settings in the Tracer window and ensure that
only the ruleset containing the activity is selected in the Rulesets To Trace section.

4. Begin the other processing that eventually calls the activity in your own requestor session. Tracer output
begins when the activity starts.

Application debugging using the Tracer tool

Unit testing a Parse Delimited rule


You can test a Parse Delimited rule directly, separate from the activity or other context in your application in which
it will eventually operate.

Preparation
For a simple test, obtain test data. You can choose to type or paste the data into a form, store it in a local Windows
file, or upload it into a text file rule.

Conducting the test


For basics of unit testing, see Unit testing individual rules.

1. Save the Parse Delimited form.


2. Click the Run toolbar button or press the equivalent keyboard shortcut, CTRL + R. A guided test window opens.
3. Select a radio button to indicate the source of test data.
4. If the data is to be entered directly, type or paste the data into the text area. If the data is in a local file, click
Browse, navigate to the file, and click OK. If the test data is in a text file rule, enter all three key parts of the
rule separated by periods (for example, webwb.testdata.txt, where the key parts are the directory, the file
name, and the file type).
5. Click Execute to evaluate the rule. An XML document appears in a new window, showing properties and the
corresponding parsed values. The clipboard is not altered; no properties are updated.

Parse Delimited rules

Unit testing a Parse XML rule


You can test a Parse XML rule directly, separate from the activity or other context in your application in which it will
ultimately operate.

Preparation
For a simple test, obtain an XML document containing test data. You can choose to type or paste the data into a
form, store it in a local Windows file, or upload it into a text file rule.

Conducting the test


For basics of unit testing, see Unit testing individual rules.

1. Save the Parse XML form.


2. Click Run or press the equivalent keyboard shortcut, CTRL + R. A test window opens.
3. Select a radio button to indicate the source of test data.
4. If the data is to be entered directly, type or paste the data into the text area. If the data is in a local file, click
Browse and navigate to the file. Click OK. If the test data is in a text file rule, enter all three key parts of the
rule separated by periods.
5. Click Execute. A resulting XML document appears in a new window, showing properties and the corresponding
values. The clipboard is not altered.

Using the Tracer


To trace Parse XML rules:

1. Open the Tracer.


2. Click Settings to open the Tracer Settings panel.
3. Under the Rule Types to Trace section, select the Parse Rules check box.

See Configuring Tracer settings.

Parse XML rules

Unit testing a Parse Structured rule


You can test a Parse Structured rule directly, separate from the activity or other context in your application in which
it will ultimately operate.

Preparation
For a simple test, obtain an XML document that contains test data. You can choose to type or paste the document
into a form, store it in a local Windows file, or upload it into a Text File rule.

Conducting the test


For basics of unit testing, see Unit testing individual rules.

1. Save the Parse Structured form.


2. Click Run or press the equivalent keyboard shortcut, CTRL + R. A test window opens.
3. Select a radio button to indicate whether a new empty page or an existing page is to provide input property
values for the test.
4. Select a radio button to indicate the source of test data.
5. If the data is to be entered directly, type or paste the data into the text area. If the data is in a local file, click
Browse and navigate to the file. Click OK. If the test data is in a text file rule, enter all three key parts of the
rule separated by periods.
6. Click Execute. The resulting parsed XML document appears in a new window. The clipboard is not altered.

The parseState object and debugging


As it executes, a Parse Structured rule creates and maintains a Java object in memory named parseState. This
corresponds to a method variable in the generated Java.

This object is not visible through the Tracer or the Clipboard tool, but you can use Public API functions to examine it.
To do this, include a Java step containing the Public API call:

myStepPage.putString("debug3", Long.toString(parseState.getStreamOffset()) );

where debug3 is a Single Value property. This call places the current byte offset position within the parse stream into a
clipboard value. You can review the clipboard value with the Tracer or Clipboard tool.

Four Pega Platform methods operate on the parseState object:

Parse-Byte-Pos (for byte streams)
Parse-Char-Pos (for character streams)
Parse-Fixed-Binary (for byte streams)
Parse-Packed-Decimal (for byte streams)

The result of each method is stored in parseState.lastToken (which can be accessed in a Java step) and optionally is
stored as a property value.
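For example, to examine the most recent token with the Clipboard tool, you can copy it into a visible property from a
Java step. The following line is a minimal sketch only, assuming debugToken is a Single Value property; String.valueOf
guards against a non-string token value:

myStepPage.putString("debugToken", String.valueOf(parseState.lastToken));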

The parseState object is defined in the PublicAPI interface in the com.pega.pegarules.pub.runtime package. This
facility is based on java.io.* capabilities (not the newer java.nio.* capabilities). The PublicAPI includes methods to
query or operate on the object.

Using the Tracer


To trace the start and end of Parse Structured rule executions:

1. Open the Tracer.


2. Click Settings to open the Tracer Settings panel.
3. Under the Rule Types to Trace section, select the Parse Rules check box.

See Configuring Tracer settings.

Parse structured rules

Unit testing service rules


Unit testing a Service EJB rule

Use the unit testing feature to verify that the operations of a Service EJB rule function correctly before you add
the external client to your testing process.

Unit testing a Service Email rule

Use the unit testing feature to verify that the operations of a Service Email rule function correctly before you
add an external component to your testing process.

Unit testing a Service File rule

Use the unit testing feature to verify that the operations of a service file rule function correctly before you add
an external component to your testing process.

Unit testing a Service HTTP rule

Use the unit testing feature to verify that the operations of a Service HTTP rule function correctly before you
add the external client to your testing process.

Unit testing a Service Java rule

Use the unit testing feature to verify that the operations of a Service Java rule function correctly before you add
the external client to your testing process.

Unit testing a Service JMS rule

Use the unit testing feature to verify that the operations of a Service JMS rule function correctly before you add
an external component to your testing process.

Unit testing a Service JSR 94 rule

Use the unit testing feature to verify that the operations of a Service JSR 94 rule function correctly before you
add the external client to your testing process.

Unit testing a Service dotNet rule

Use the unit testing feature to verify that the operations of a Service dotNet rule function correctly before you
add the external client to your testing process.

Unit testing a Service MQ rule

Use the unit testing feature to verify that the operations of a Service MQ rule function correctly before you add
an external component to your testing process.

Unit testing a Service SAP Rule

Services start their processing in response to a request from an external application. Before you add the
external application to your testing process, use the Simulate SOAP Service Execution feature to verify that the
service processes data appropriately. When using this feature, you manually provide some representative data
to process. You specify a test page for the rule to use, provide sample data as the input, run the rule, and
examine the results to see if they are what you expect.

Unit testing a Service SAPJCo rule

Use the unit testing feature to verify that the operations of a Service SAPJCo rule function correctly before you
add the external client to your testing process.

Unit testing a Service SOAP rule

Services start their processing in response to a request from an external application. Before you add the
external application to your testing process, use the Simulate SOAP Service Execution feature to verify that the
service processes data appropriately. When using this feature, you manually provide some representative data
to process. You specify a test page for the rule to use, provide sample data as the input, run the rule, and
examine the results to see if they are what you expect.

Unit testing a Service EJB rule


Use the unit testing feature to verify that the operations of a Service EJB rule function correctly before you add the
external client to your testing process.
Service EJB rules are no longer being actively developed, and are being considered for deprecation in upcoming
releases. Using Service EJB rules does not follow Pega development best practices. Consider other implementation
options instead.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see the Pega Community article Testing Services and Connectors.

Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — This option appears only if the EJB method parameters are scalar values, such as strings, numbers, or booleans.

Invoke Initialization activity — A test activity creates values for the EJB method parameters.

Method Parameter Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each EJB method parameter declared on the Parameters tab. Enter a value that corresponds to the Java data type listed.

Activity - If you selected Invoke Initialization activity, enter here the Activity Name key part of an activity that creates EJB method parameters. The system assumes the Applies To class of the activity matches the Primary Page Class value on the Service tab. If the activity applies to a different class, enter the class name, a period, and the activity name.
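If you select Invoke Initialization activity, the named test activity must create the method parameter values before
the service method runs. The following one-line Java step is a minimal sketch of one plausible approach, not a
definitive implementation: it writes a value to the parameter page through the PublicAPI. The parameter name
customerId and its value are hypothetical, and the exact mechanism depends on how your service maps its
parameters:

tools.getParameterPage().putString("customerId", "CUST-1001");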

Service EJB rules

Unit testing a Service Email rule


Use the unit testing feature to verify that the operations of a Service Email rule function correctly before you add an
external component to your testing process.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see Testing Services and Connectors, a document in the Integration area of the Pega Community.

Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor context - Select a context to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and
current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if
the service package requires authentication, another Operator ID instance.

Authentication user ID - If you selected Initialize service requestor context, and the service package instance for the
service requires authentication, enter the Operator ID to be used to test the service.

Authentication password - If you selected Initialize service requestor context, and the service package instance for
the service requires authentication, enter a password for the Operator ID.

Enter request data:
Message header values - Enter one value for each message header declared on the Request tab.

Message content - Type or paste in the text that forms the content of the arriving email message.

Activity for adding attachments - If the arriving test message is to include email attachments, identify here the
Activity Name key part of the test activity described above.

Service Email rules


Tracing services

Unit testing a Service File rule


Use the unit testing feature to verify that the operations of a service file rule function correctly before you add an
external component to your testing process.

Before you begin, see How to provide test data when testing service rules.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see Testing Services and Connectors, a document available from the Integration pages of the Pega
Community.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select one of the following options:

Supply File Content — Select if you will type in or paste in the text of a test file. This option is not available if non-text input is expected — that is, if the Data Mode field on the Method tab is set to a value other than text only.

Upload a local file — Select if the test file is on your workstation or network.

File Content - Enter the contents of a test file, including delimiters. This text area appears when you choose Supply File Content for the previous field.

File Location - Click Browse to upload a test file. This field appears when you choose Upload a local file for the previous field.

Service File rules

Unit testing a Service HTTP rule


Use the unit testing feature to verify that the operations of a Service HTTP rule function correctly before you add the
external client to your testing process.

Service HTTP rules are no longer being actively developed. To avoid upgrade issues when these rules are
deprecated, use Service REST rules instead. For more information about Service REST rules, see Service REST rules.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see the Pega Community article Testing Services and Connectors.

Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — Select if you want to manually enter the values for the message data in the Message Buffer text box.

Invoke Initialization activity — Select if you want to run an activity that creates the string for the message data.

HTTP Header Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each Header Field row on the Request tab.

Message Buffer - If you selected Specify individual request values, enter or paste the test message data in this text box.

Activity - If you selected Invoke Initialization activity, specify the Activity Name key part of an activity that creates the message. The system assumes the Applies To class of the activity matches the Primary Page Class value on the Service tab. If the activity applies to a different class, enter the class name, a period, and the activity name.

Service HTTP rules

Unit testing a Service Java rule


Use the unit testing feature to verify that the operations of a Service Java rule function correctly before you add the
external client to your testing process.

Service Java rules are no longer being actively developed, and are being considered for deprecation in upcoming
releases. Using Service Java rules does not follow Pega development best practices. Consider other implementation
options instead.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see the Pega Community article Testing Services and Connectors.

Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — This option appears only if the Java method parameters are scalar values, such as strings, numbers, or booleans.

Invoke Initialization activity — A test activity creates values for the Java method parameters.

Method Parameter Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each Java method parameter declared on the Parameters tab. Enter a value that corresponds to the Java data type listed.

Activity - If you selected Invoke Initialization activity, enter here the Activity Name key part of an activity that creates Java method parameters. The system assumes the Applies To class of the activity matches the Primary Page Class value on the Service tab. If the activity applies to a different class, enter the class name, a period, and the activity name.

Service Java rules

Unit testing a Service JMS rule


Use the unit testing feature to verify that the operations of a Service JMS rule function correctly before you add an
external component to your testing process.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see the Pega Community article Testing Services and Connectors.

Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — This option appears only if all the JMS buffer fields are scalar values, such as strings, numbers, or Booleans.

Invoke Initialization activity — A test activity creates values for the JMS request message and for any headers or JMS properties.

Message Header Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each header field declared on the Request tab. Enter a value that corresponds to the Java data type listed.

Message Property Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each message field declared on the Request tab. Enter a value that corresponds to the Java data type listed.

Message Buffer Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each message buffer field declared on the Request tab. Enter a value that corresponds to the Java data type listed.

Activity for Initializing Message Data - If you selected Invoke Initialization activity, enter here the Activity Name key part of an activity that creates JMS message elements. The system assumes the Applies To class of the activity matches the Primary Page Class value on the Service tab. If the activity applies to a different class, enter the class name, a period, and the activity name.

Unit testing a Service JSR 94 rule


Use the unit testing feature to verify that the operations of a Service JSR 94 rule function correctly before you add
the external client to your testing process.

Service JSR94 rules are no longer being actively developed, and are being considered for deprecation in upcoming
releases. Using Service JSR94 rules does not follow Pega development best practices. Consider other
implementation options instead.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see the Pega Community article Testing Services and Connectors.
Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — This option appears only if the JSR94 input parameters are scalar values, such as strings, numbers, or booleans.

Invoke Initialization activity — A test activity creates values for the JSR94 input parameters.

Input Parameter Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each input method parameter declared on the Request tab. Enter a value that corresponds to the Java data type listed.

Activity - If you selected Invoke Initialization activity, enter here the Activity Name key part of an activity that creates JSR94 input parameters. The system assumes the Applies To class of the activity matches the Primary Page Class value on the Service tab. If the activity applies to a different class, enter the class name, a period, and the activity name.

Service JSR94 rules

Unit testing a Service dotNet rule


Use the unit testing feature to verify that the operations of a Service dotNet rule function correctly before you add
the external client to your testing process.

Service dotNet rules are no longer being actively developed, and are being considered for deprecation in upcoming
releases. Using Service dotNet rules does not follow Pega development best practices. Use Service SOAP rules
instead. For more information, see Service SOAP rules.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see Testing Services and Connectors, a document on the Integration pages of the Pega Community.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — This option appears only when all elements of the message are simple text values, not objects or complex values of type XML Page.

Supply SOAP Envelope — You provide the entire SOAP message, including the header.

SOAP Header Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each Header Field row on the Request tab. Enter a value that matches the XSD type shown.

SOAP Parameter Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each Request Parameters row listed on the Request tab. Enter a value that corresponds to the XSD data type shown.

SOAP Request Envelope - If you selected Supply SOAP Envelope, enter or paste a well-formed and valid XML document in the SOAP Request Envelope text area, starting with the <?xml version="1.0"?> declaration. If the service expects requests containing an array element or XML Page elements, a skeleton document is provided as a starting point.

Service dotNet rules

Unit testing a Service MQ rule


Use the unit testing feature to verify that the operations of a Service MQ rule function correctly before you add an
external component to your testing process.

Service MQ rules are no longer being actively developed. To avoid upgrade issues when these rules are deprecated,
use Service JMS rules instead. For more information about Service JMS rules, see Service JMS rules.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see the Pega Community article Testing Services and Connectors.

Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — This option appears only if the Message Data of the Request tab contains only scalar values, such as strings, numbers, or Boolean values.

Invoke Initialization activity — A test activity creates values for the request message.

Message Header Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each Header Field row on the Request tab. Enter a value that corresponds to the Java data type listed.

Message Buffer Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each Message Data row listed on the Request tab. Enter a value that corresponds to the Java data type listed.

Activity - If you selected Invoke Initialization activity, enter here the Activity Name key part of an activity that creates the information from a test MQ request message. The system assumes the Applies To class of the activity matches the Primary Page Class value on the Service tab. If the activity applies to a different class, enter the class name, a period, and the activity name.

Unit testing a Service SAP Rule


Services start their processing in response to a request from an external application. Before you add the external
application to your testing process, use the Simulate SOAP Service Execution feature to verify that the service
processes data appropriately. When using this feature, you manually provide some representative data to process.
You specify a test page for the rule to use, provide sample data as the input, run the rule, and examine the results to
see if they are what you expect.

Service SAP rules are no longer being actively developed, and are being considered for deprecation in upcoming
releases. Using Service SAP rules does not follow Pega development best practices. Use Service SOAP rules instead.
For more information, see Service SOAP rules.

If you have the AutomatedTesting privilege (through an access role), you can use the features of Automated Testing
such as saved test cases and unit test suites for testing your Service SAP rule. See About Automated Unit Testing
and Working with the Test Cases tab for more information.

Before you begin


Before you begin testing the Service SAP rule, determine how you will provide the sample data for the service rule to
process. For help with this step, and for information about additional ways to test your services, see the articles
about testing services and connectors in the Testing Applications category of the Pega Community.

Run the rule


Complete the following steps:

1. Save the rule.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Fill out the fields in the form as described below:

Test Cases - If you have the AutomatedTesting privilege, Run Against a Saved Test Case, Show Saved Results, and Run Test Case are available if this rule has saved test cases. To run the rule and see how its behavior compares to that in a previously saved test case, select a choice from the Run Against a Saved Test Case drop-down list.

After making your selection, click Run Test Case. If differences are found between the results of running the current state of the rule and the saved test case, they are displayed, and you have the options of ignoring the differences for future test runs, overwriting the saved test case, and saving the results. (See the Playing back saved test cases section in Working with the Test Cases tab.)

Click Show Saved Results to view any previously saved test case results.

Requestor Context - Select one of the following items to specify which requestor session to use in the test:

Use current requestor context — Runs the rule in your session, that is, with your RuleSet list, privileges, and current clipboard.

Initialize service requestor context — Runs the rule in a newly created service requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package for the service requires authentication, enter the Operator ID to use to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select one of the following to define the source of request data values for this test:

Specify individual request values — This option appears only when all elements of the message are simple text values, not arrays or complex values of type XML Page.

Supply SOAP Envelope — You can enter an entire SOAP message, including the header.

SOAP Header Values - If you selected Specify individual request values for the previous field, in the Value field, enter a literal constant value for each Header Field row on the Request tab. Enter a value that matches the XSD type shown.

SOAP Parameter Values - If you selected Specify individual request values for the previous field, in the Value field, enter a literal constant value for each Request Parameters row listed on the Request tab. Enter a value that corresponds to the XSD data type shown.

SOAP Request Envelope - If you selected Supply SOAP Envelope, enter or paste a well-formed and valid XML document in the SOAP Request Envelope text area, starting with the <?xml version="1.0"?> declaration. If the service expects requests containing an array element or elements, a skeleton document is provided as a starting point.

5. Click Execute to run the rule. The system runs the rule and displays the results.
6. Click the Clipboard icon in the Quick Launch area to see the clipboard pages that were generated.
7. Run the rule again using different data, as necessary.
8. Optional. If you have the AutomatedTesting privilege, the Save as Test Case button is available and you can
click it to create a Test Case that holds the test data and the results.
Service SAP rules

Unit testing a Service SAPJCo rule


Use the unit testing feature to verify that the operations of a Service SAPJCo rule function correctly before you add
the external client to your testing process.

Service SAPJCo rules are no longer being actively developed. To avoid upgrade issues when these rules are
deprecated, use other web-based SAP capabilities.

Unit testing provides only partial evidence of a correct implementation. For more comprehensive information on
testing services, see the Pega Community article Testing Services and Connectors.

Before you begin, see How to provide test data when testing service rules.

To run a unit test, complete the following steps:

1. Save the rule form.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Complete the form as described below, and then click Execute.

Requestor Context - Select a radio button to define the requestor session to be used in the test:

Use current requestor context — Use your current requestor session (including your RuleSet list, privileges, and current clipboard).

Initialize service requestor context — Create a new requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter the Operator ID to be used to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select a radio button to define the source of request data values for this test:

Specify individual request values — This option appears only if the Java method parameters are scalar values, such as strings, numbers, or booleans.

Invoke Initialization activity — A test activity creates values for the Java method parameters.

Method Parameter Values - If you selected Specify individual request values for the previous field, enter in the Value field a literal constant value for each Java method parameter declared on the Parameters tab. Enter a value that corresponds to the Java data type listed.

Activity - If you selected Invoke Initialization activity, enter here the Activity Name key part of an activity that creates Java method parameters. The system assumes the Applies To class of the activity matches the Primary Page Class value on the Service tab. If the activity applies to a different class, enter the class name, a period, and the activity name.

Service SAPJCo rules

Unit testing a Service SOAP rule


Services start their processing in response to a request from an external application. Before you add the external
application to your testing process, use the Simulate SOAP Service Execution feature to verify that the service
processes data appropriately. When using this feature, you manually provide some representative data to process.
You specify a test page for the rule to use, provide sample data as the input, run the rule, and examine the results to
see if they are what you expect.

If you have the AutomatedTesting privilege (through an access role), you can use the features of Automated Testing
such as saved test cases and unit test suites for testing your Service SOAP rule. See About Automated Unit Testing
and Working with the Test Cases tab for more information.

Before you begin


Before you begin testing the Service SOAP rule, determine how you will provide the sample data for the service rule
to process. For help with this step, and for information about additional ways to test your services, see the articles
about testing services and connectors in the Testing Applications category of the Pega Community.

Run the rule


Complete the following steps:

1. Save the rule.


2. Start the Tracer by clicking Actions > Trace. For more information, see Tracing services.
3. Click Actions > Run.
4. Fill out the fields in the form as described below:

Test Cases - If you have the AutomatedTesting privilege, Run Against a Saved Test Case, Show Saved Results, and Run Test Case are available if this rule has saved test cases. To run the rule and see how its behavior compares to that in a previously saved test case, select a choice from the Run Against a Saved Test Case drop-down list.

After making your selection, click Run Test Case. If differences are found between the results of running the current state of the rule and the saved test case, they are displayed, and you have the options of ignoring the differences for future test runs, overwriting the saved test case, and saving the results. (See the Playing back saved test cases section in Working with the Test Cases tab.)

Click Show Saved Results to view any previously saved test case results.

Requestor Context - Select one of the following items to specify which requestor session to use in the test:

Use current requestor context — Runs the rule in your session, that is, with your RuleSet list, privileges, and current clipboard.

Initialize service requestor context — Runs the rule in a newly created service requestor session based on the APP requestor type and, if the service package requires authentication, another Operator ID instance.

Authentication User ID - If you selected Initialize service requestor context, and the service package for the service requires authentication, enter the Operator ID to use to test the service.

Authentication Password - If you selected Initialize service requestor context, and the service package instance for the service requires authentication, enter a password for the Operator ID.

Enter Request Data - Select one of the following to define the source of request data values for this test:

Specify individual request values — This option appears only when all elements of the message are simple text values, not arrays or complex values of type XML Page.

Supply SOAP Envelope — You can enter an entire SOAP message, including the header.

SOAP Header Values - If you selected Specify individual request values for the previous field, in the Value field, enter a literal constant value for each Header Field row on the Request tab. Enter a value that matches the XSD type shown.

SOAP Parameter Values - If you selected Specify individual request values for the previous field, in the Value field, enter a literal constant value for each Request Parameters row listed on the Request tab. Enter a value that corresponds to the XSD data type shown.

SOAP Request Envelope - If you selected Supply SOAP Envelope, enter or paste a well-formed and valid XML document in the SOAP Request Envelope text area, starting with the <?xml version="1.0"?> declaration. If the service expects requests containing an array element or XML Page elements, a skeleton document is provided as a starting point. A minimal sample envelope appears after this procedure.

5. Click Execute to run the rule. The system runs the rule and displays the results.
6. Click the Clipboard icon in the Quick Launch area to see the clipboard pages that were generated.
7. Run the rule again using different data, as necessary.
8. Optional. If you have the AutomatedTesting privilege, the Save as Test Case button is available and you can
click it to create a Test Case that holds the test data and the results.
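
For reference, the following minimal sketch shows the shape of a well-formed SOAP 1.1 request envelope of the kind
you might paste into the SOAP Request Envelope text area. The operation element GetCustomerDetails and its
CustomerID parameter are hypothetical placeholders; use the elements that your service's Request tab actually
defines.

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    <GetCustomerDetails>
      <CustomerID>CUST-1001</CustomerID>
    </GetCustomerDetails>
  </soapenv:Body>
</soapenv:Envelope>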

Service SOAP rules


Service SOAP rules - Completing the Create, Save As, or Specialization form
More about Service SOAP rules

Understanding unit test cases


A test case identifies one or more testable conditions (assertions) used to determine whether a rule returns an
expected result. Reusable test cases support the continuous delivery model, providing a means to test rules on a
recurring basis to identify the impacts of new or modified rules.

You can create unit test cases for the following types of rules:

Activities
Case types
Collections
Data pages
Data transforms
Decision tables
Decision trees
Declare expressions
Flows
Map values
Report definitions
Strategies
When rules

You can use one or more data pages, data transforms, or activities to set up the clipboard data before running the
rule as part of the test case. You can also use activities to create any required test data such as work or data
objects. After you run a unit test case or test suite, data pages used to set up the test environment are
automatically removed. You can also apply additional data transforms or activities to remove other pages or
information on the clipboard.

Creating unit test cases

After you successfully unit test a rule, you can create a test case based on the result of the test. Creating unit
test cases involves converting unit tests to test cases, configuring the unit test case, and defining expected test
results with assertions.

Opening a unit test case

You can view a list of the Pega unit test cases that have been created for your application and select the one
that you want to open.

Viewing test case results

After you run a unit test case, you can view the results of the test run.

Running a unit test case

Run a unit test case to validate rule functionality.

Exporting a list of test cases

You can export a list of all the Pega unit test cases that are in your application or configured on a rule form.

Creating unit test cases


After you successfully unit test a rule, you can create a test case based on the result of the test. Creating unit test
cases involves converting unit tests to test cases, configuring the unit test case, and defining expected test results
with assertions.

Configuring unit test cases


Creating a unit test case for a flow or case type

When you create a unit test case for a flow or case type, you run the flow or case type and enter data for
assignments and decisions. The system records the data that you enter in a data transform, which is created
after you save the test form. You can start recording at any time.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Viewing test results

Configuring unit test cases


Configure unit test cases to compare the expected output of a rule to the results that the test case returns.
Before you begin, unit test a rule and convert the test run into a test case. For more information, see Unit testing individual rules.

1. In the upper-right corner of the Definition tab, click the Gear icon.

2. In the Edit Details dialog box, change the class, tested rule, or parameters that are sent to the test.

Strategy rules have the following parameters.

componentName – The name of the component (for example, Switch) that you are testing.
pzRandomSeed – Internal parameter, which is the random seed for the Split and Champion Challenger
shapes. It is generated for all components, but applies only to the Split and Champion Challenger
components.

If you rename either the componentName or pzRandomSeed parameter, the test case does not return the
expected results.

If you configure the test to run on a different component, the test might fail if a property is not found.
If you change the pzRandomSeed value on the Split or Champion Challenger shapes, the test fails.

For flow rules and case types, view the path of the flow or case type. The path does not display child cases that
have no flow actions, or flow actions on which postprocessing is not performed.

3. Click Submit.
4. In the Expected results section, provide the results expected of the test by configuring assertions. For more
information, see Assertions.

5. On the Setup & Cleanup tab, configure the actions to perform and the objects and clipboard pages to make
available before and after the test runs. You can also clean up the clipboard after the test is run by
applying additional data transforms or activities. For more information, see Setting up your test environment.

The Setup & Cleanup tab is displayed for strategy rules only when you create test cases for them in Dev Studio.
6. Optional:

Prevent the test from being run as a part of a test suite or from a REST service by selecting the Disable check
box.

The test case is executed only when it is run manually by clicking Actions > Run.

7. In the banner, view more details about the latest test result by clicking View details.

You can view details only after running a test case. For more information, see Viewing test results.

8. In the banner, click View previous runs to open a dialog box that displays the test date, the run time, the
expected run time, and the test result, and then view detailed test results by clicking the row of a test case.

This link is displayed after a test case has been run.

9. Click Save. If you save the form for the first time, you can modify the identifier; however, after you save the rule
form, you cannot modify this field.

Rules development
Converting unit tests to test cases

Creating a unit test case for a flow or case type


When you create a unit test case for a flow or case type, you run the flow or case type and enter data for
assignments and decisions. The system records the data that you enter in a data transform, which is created after
you save the test form. You can start recording at any time.

For information about the data that you can record, see Data that you can record for flows and case types.

Some properties, like .pyID, are not processed when a Pega unit test case is run, because these properties vary with
every test run. The pyDataCapturePropertyIgnores data transform lists the properties that Pega unit tests do not
process.
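
The effect of this ignore list is easiest to see outside Pega. The following sketch, in plain Java, strips volatile properties before comparing recorded data to actual data; it is a conceptual analogy only, not Pega's implementation, and the IGNORED set and property values are hypothetical stand-ins for the data transform's contents.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    public class IgnoredPropertiesSketch {
        // Hypothetical ignore list; in Pega this comes from pyDataCapturePropertyIgnores.
        static final Set<String> IGNORED = Set.of("pyID", "pxCreateDateTime");

        // Returns a copy of the page data with the ignored properties removed.
        static Map<String, String> withoutIgnored(Map<String, String> page) {
            Map<String, String> copy = new HashMap<>(page);
            copy.keySet().removeAll(IGNORED);
            return copy;
        }

        public static void main(String[] args) {
            Map<String, String> recorded = Map.of("pyID", "C-1001", "Customer", "VIP");
            Map<String, String> actual = Map.of("pyID", "C-2047", "Customer", "VIP");
            // Prints true: the comparison passes even though pyID differs between runs.
            System.out.println(withoutIgnored(recorded).equals(withoutIgnored(actual)));
        }
    }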

1. Exclude properties in your work class from the test by modifying the pyDataCapturePropertyIgnores data
transform.

Complete the following steps:

a. Click App > Classes.

b. In the search field, enter Work-.

c. Expand Data Model > Data Transform.

d. Click the pyDataCapturePropertyIgnores data transform.

e. Save the data transform to your Work- class and in your test ruleset.

f. On the Definition tab, click the plus sign.

g. From the Action list, select Set.

h. In the Target field, enter the property that you want to exclude.

i. In the Source field, enter two double quotation marks separated by a space (" ").

j. To specify additional properties that you want to exclude, repeat step f through step i.

k. Save the data transform.

2. Complete one of the following tasks:

Open a flow by searching for it or by using the Application Explorer.


Open a case type by clicking Cases and then clicking the case type that you want to open.

3. From the toolbar, click Actions > Record test case. The system starts running the flow or case type.

4. Enter input as you step through the flow or case type.

5. Click Create test case in the bottom-right corner of the browser to create a test case in a new tab. The test
case contains all the information that you entered up until you created the test case.

You can continue to run the flow or case type and create additional test cases in the tab that is running the flow
or case type.

6. Configure the unit test case. See Configuring unit test cases for more information.

After you save the test case, a data transform, which captures the input that you entered, is created and associated
with the test case. You can edit this data transform to modify the test case input. The Edit test case form also
displays the path of the flow or case type.

Data that you can record for flows and case types

When you create a Pega unit test case for a flow or case type, the system records the data that you enter.

Creating unit test cases

After you successfully unit test a rule, you can create a test case based on the result of the test. Creating unit
test cases involves converting unit tests to test cases, configuring the unit test case, and defining expected test
results with assertions.

Opening a unit test case

You can view a list of the Pega unit test cases that have been created for your application and select the one
that you want to open.

Running a unit test case

Run a unit test case to validate rule functionality.

Viewing test case results

After you run a unit test case, you can view the results of the test run.

Exporting a list of test cases

You can export a list of all the Pega unit test cases that are in your application or configured on a rule form.

Data that you can record for flows and case types
When you create a Pega unit test case for a flow or case type, the system records the data that you enter.

You can record the following types of information:

Starter and non-starter flows.

Subprocesses that are configured as part of a flow.

The Assignment, Utility, and Approval shapes. For flows, assignments must be routed to the current operator so
that the recording of the flow continues and the system captures data as part of the test case.

Data that is captured on the pyWorkPage.

When a flow or case type runs, a pyWorkPage is created on the clipboard and captures information such as
data that you enter for assignments. It also captures information such as case ID, date and time that the case
was created, and the latest case status.

Three additional assertions that you can configure for flows and case types include case status, assigned to,
and attachment exists assertions. For these assertions, the system compares expected values to the value that
is recorded on the pyWorkPage.

If you refresh or cancel recording the flow or case type, data that is on the pyWorkPage might not be accurate.

Local actions and flow actions that are configured as part of the flow or case type.

Child cases that are created and finish running before the flow or test case resumes running.

All properties, excluding properties that begin with either px or pz.

Creating a unit test case for a flow or case type

When you create a unit test case for a flow or case type, you run the flow or case type and enter data for
assignments and decisions. The system records the data that you enter in a data transform, which is created
after you save the test form. You can start recording at any time.

Defining expected test results with assertions


Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the rule.
To define the expected output, you configure assertions (test conditions) on the test cases that the test, when run,
compares to the results returned by the rule.
When a test runs, it applies assertions in the order that you define them on the Definition tab of the test case. All
assertions, except for run time assertions, must pass for the test to be successful.

For example, an account executive wants to ensure that a 10% discount is applied to all VIP customers. You can
create a test case that verifies that this discount is applied to all VIP customers in the database. If the test does not
pass, the results indicate where the 10% discount is not applied.
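
Conceptually, each assertion is an expected-versus-actual comparison, just like an assertion in any xUnit-style framework. The following JUnit 5 sketch expresses the VIP discount check in plain Java; it is an analogy only, not how Pega evaluates assertions, and discountFor is a hypothetical stand-in for the rule under test.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    public class VipDiscountTest {
        // Hypothetical rule under test: VIP customers get a 10% discount.
        static double discountFor(String segment) {
            return "VIP".equals(segment) ? 0.10 : 0.0;
        }

        @Test
        void vipCustomersReceiveTenPercentDiscount() {
            // The assertion compares the expected output to the actual result.
            assertEquals(0.10, discountFor("VIP"), 1e-9);
        }
    }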

On decision trees and decision rules, you cannot configure properties from a read-only data page or a data page
that is a declarative target.

Configuring activity status assertions

You can verify that an activity returns the correct status when it runs by configuring an activity status
assertion. You can also assert if an activity has an error and, if it does, what the message is so that you can
validate that the message is correct.

Configuring assigned to assertions

For flows and case types, you can use the assigned to assertion to verify that an assignment is routed to the
appropriate work queue or operator.

Configuring attachment exists assertions

For flows and case types, you can verify that the flow or case type has an attachment of type file or note
(attached using the Attach Content shape) or email (attached using the Send Email shape).

Configuring case instance count assertions

For flows and case types, you can verify the number of cases that were created when the case type or flow was
run.

Configuring case status assertions

You can configure a case status assertion on a flow or case type to verify the status of the case.

Configuring decision result assertions

After you create a unit test case for a decision table or decision tree, the system generates a decision result
assertion. This assertion displays the input values for testing the rule, and the result that is generated by the
rule.

Configuring expected run-time assertions

You can create an assertion for the expected run time of the rule. The assertion passes if the actual run time is
less than or equal to the amount of time that you specify, in seconds.

Configuring list assertions

You can create list assertions for page lists on a rule to determine whether the expected result is anywhere in
the list of results returned by the rule. Even if the order of results changes, the test continues to work.

Configuring page assertions

Some rules, such as activities and data transforms, can create or remove pages from the system. You can
create page assertions to determine whether or not a page exists after a unit test case runs. You can also
assert if a property has an error and, if it does, what the message is so that you can validate that the message
is correct.

Configuring property assertions

You can configure property assertions to validate that the actual values of properties returned by a rule are the
expected values. You can also assert if a property has an error and, if it does, what the message is so that you
can validate that the message is correct.

Configuring result count assertions

You can configure assertions to compare the number of items returned in a page list, page group, value list, or
value group on the rule to the result that you expect to see on the clipboard.

Configuring activity status assertions


You can verify that an activity returns the correct status when it runs by configuring an activity status assertion. You
can also assert if an activity has an error and, if it does, what the message is so that you can validate that the
message is correct.

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. In the Assertion type list, click Activity status.


3. In the Value list, click the status that you expect the activity to return when the test runs.

4. To validate the message that displays for the activity, select Validate message, select a Comparator, and then
enter the message that you want to validate in the Value box.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and then click OK.

6. Click Save.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Creating unit test suites

To create a unit test suite, add test cases and test suites to the suite and then modify the order in which you
want them to run. You can also modify the context in which to save the test suite, such as the development
branch or the ruleset.

Configuring assigned to assertions


For flows and case types, you can use the assigned to assertion to verify that an assignment is routed to the
appropriate work queue or operator.

If you have multiple assignments on a flow or test case, you can route each assignment to an operator ID or work
queue. Clipboard pages are created for each assignment under the pyWorkPage page and capture the assignment
details, including the operator ID or work queue to which the assignment was routed. The assigned to assertion
compares the operator ID or work queue to the last assignment that is configured on the flow or case type, which
depends on where you stop recording the flow or case type.

For example, your flow has a Customer Details assignment, which is routed to the operator ID johnsmith . It also has a
subprocess with an Account Information assignment, which is routed to the account_processing work queue.

If you record only the Customer Details assignment, the assigned to value is johnsmith . If you also record the Account Information
assignment, the assigned to value is account_processing .
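
The "last recorded assignment wins" behavior can be sketched in plain Java. The following JUnit 5 analogy is not Pega's implementation; the list of assignees is a hypothetical stand-in for the recorded clipboard data, and it shows why stopping the recording later changes the asserted value.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    public class AssignedToSketch {
        // The assertion compares against the last assignment that was recorded.
        static String lastRoutedTo(List<String> recordedAssignees) {
            return recordedAssignees.get(recordedAssignees.size() - 1);
        }

        @Test
        void assertionComparesAgainstLastRecordedAssignment() {
            // Recording stopped after the Account Information assignment,
            // so the assigned-to value is the work queue, not johnsmith.
            List<String> recorded = List.of("johnsmith", "account_processing");
            assertEquals("account_processing", lastRoutedTo(recorded));
        }
    }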

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Assigned to.

3. From the Assigned to list, select Operator or Work queue.

4. Select a comparator from the Comparator list.

5. In the Value field, press the Down Arrow key and select the operator ID or work queue.

6. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

7. Click Save.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases

Configuring attachment exists assertions


For flows and case types, you can verify that the flow or case type has an attachment of type file or note
(attached using the Attach Content shape) or email (attached using the Send Email shape).

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Attachment exists.


3. From the Attachment type list, select one of the following options, and then provide the value for each field:

File: Select to specify that the attachment type is file, and then enter the following values:

Description: Enter the text that was provided as the description in the Attach Content shape.

Name: Enter the name of the file that was provided in the Attach Content shape.

Note: Select to specify that the attachment type is note, and then enter text that was entered as the note
description in the Attach Content shape.
Email: Select to specify that the attachment type is an email, and then enter the email subject that was
provided in the Send Email shape.

4. Repeat steps 1 through 3 to add additional attachment assertions.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

6. Click Save.

Attachment exists assertions

On case types and flows, you can test whether an attachment exists: a file or note attached using the Attach
Content shape, or an email attached using the Send Email shape.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases
Attaching content to a case
Sending automatic emails from cases

Attachment exists assertions


On case types and flows, you can test whether an attachment exists: a file or note attached using the Attach
Content shape, or an email attached using the Send Email shape.

If you have multiple attachments on a flow or test case that match the expected value of the assertion, the assertion
runs for every attachment that exists. The system iterates over all the attachments on the flow or case type, and if
it finds an attachment that matches the assertion value, the assertion passes. If no attachment exists, the
assertion fails.

The system compares the expected output on attachments that are recorded on the pyWorkPage page. For
example, if a case type has a parent case that spins off a child case, and you record just the child case, the
pyWorkPage page records attachments for only the child case and not the parent case, which is recorded on the
pyWorkCover page.

In addition, if you create a test case from a parent case that generates a child case that is returned to the parent
case after the child case runs, the pyWorkPage page records the attachments only on the parent case.

For example, your case has an Attach Content shape that attaches a Process immediately note in the first stage of the
case type. In the third stage, your case has a Send Email shape that attaches an email with the subject Request
approved. The assertion passes if you searched for either the Process immediately note or Request approved email subject.
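
The pass condition behaves like an any-match search over the recorded attachments. The following JUnit 5 sketch is an analogy only; the Attachment record and its values are hypothetical stand-ins for the attachment data recorded on pyWorkPage.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    public class AttachmentExistsSketch {
        record Attachment(String type, String text) {}

        @Test
        void passesIfAnyAttachmentMatches() {
            // Hypothetical attachments recorded during the test run.
            List<Attachment> attachments = List.of(
                new Attachment("note", "Process immediately"),
                new Attachment("email", "Request approved"));
            // The assertion iterates over all attachments and passes if one matches.
            assertTrue(attachments.stream()
                .anyMatch(a -> a.type().equals("note")
                            && a.text().equals("Process immediately")));
        }
    }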

Configuring attachment exists assertions

For flows and case types, you can verify that the flow or case type has an attachment of type file or note
(attached using the Attach Content shape) or email (attached using the Send Email shape).

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Attaching content to a case


Sending automatic emails from cases

Configuring case instance count assertions


For flows and case types, you can verify the number of cases that were created when the case type or flow was run.

For example, a Job Application case type runs a child case that processes background checks. If you record the
entire Job Application case type and the child case type, the number of case instances for the Job Application case
type is one, and the number of case instances of the Background Check child case type is one.

If you do not run the Background Check child case type when you create the test case, the number of
Background Check case instances is zero.
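
In effect, the assertion counts the case instances of a given type that were created during the recorded run. The following JUnit 5 sketch is a minimal analogy only, with hypothetical case-type names; it is not Pega's implementation.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    public class CaseInstanceCountSketch {
        @Test
        void countsCasesCreatedDuringTheRecordedRun() {
            // Hypothetical case types instantiated while the test case was recorded.
            List<String> createdCases = List.of("JobApplication", "BackgroundCheck");
            long backgroundChecks = createdCases.stream()
                .filter("BackgroundCheck"::equals).count();
            // One Background Check child case was run, so the count is one.
            assertEquals(1, backgroundChecks);
        }
    }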

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Case instance count.

3. In the Of case type field, do one of the following:

To select a case type from your work pool, press the Down Arrow key and select the case type.
Enter a case type that is not part of your work pool.

4. Select a comparator from the Comparator list.

5. In the Value field, enter the number of cases to compare against the output.

6. Optional:

Click Add to add another case instance count assertion, and then repeat steps 3 through 5.

7. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

8. Click Save.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases

Configuring case status assertions


You can configure a case status assertion on a flow or case type to verify the status of the case.

If you have multiple assignments on a flow or case type, you can configure a case status on each assignment. The
pyWorkPage on the clipboard captures the latest case status, which depends on where you stop recording the flow
or case type.

For example, your flow has a Customer Details assignment, with the case status set as New. It also has a subflow with an
Account Information assignment, with the case status set as Pending .

If you record only the Customer Details assignment, the case status, which is captured in the .pyStatusWork property on
the pyWorkPage, is set to New. If you also record the Account Information assignment, the case status is set to Pending.

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Case status.

3. Select the comparator from the Comparator list.

4. In the Value field, press the Down Arrow key and select the case status.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

6. Click Save.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases

Configuring decision result assertions


After you create a unit test case for a decision table or decision tree, the system generates a decision result
assertion. This assertion displays the input values for testing the rule, and the result that is generated by the rule.

You can manually update the input values, add properties, remove properties, and modify the default decision result
if the test is modified.

This assertion is supported on when rules, decision tables, and decision trees only.

Open the unit test case. For more information, see Opening a unit test case.

1. Click the Definition tab.

2. Add input values and results to the assertion, or add other assertions, by performing one of the following
actions.

You can add multiple input values and results to this assertion, but you cannot add other assertion types to this
test case unless the assertion has a single input and result entry.

To add multiple input values and results to the assertion:

a. Select the Multiple input combinations check box.

b. Enter values for the input and result that you expect the assertion to generate when the test stops
running.

c. Click Add and enter values for each additional input and result that you want to test.

To use one input value and result, enter the values that you expect the assertion to generate when the
test stops running. You can then add additional assertions to the test case.

3. To update the assertion to reflect properties that were added to the rule, click Refresh.

Refresh updates the assertion with properties that are added to the rule. If properties have been removed from
the rule, then you need to manually remove the properties from the assertion.

4. Add or remove properties by clicking Manage properties and then entering the changes. You need to enter data
for properties that were added to the rule.

The properties are reflected as unexpected results in test case results.

5. In the rule form, click Save.

The test case runs the decision tree or decision table with each input combination and compares the result with
the expected decision result for that combination.

Other decision result combinations or other configured assertions then run. If the expected result of any of the
input combinations in the decision result assertion does not match the result that the rule returns, the
assertion fails.
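
Running each input combination against its expected result is the same idea as a parameterized unit test. The following JUnit 5 sketch is an analogy only; creditDecision is a hypothetical stand-in for a decision table, and each @CsvSource row plays the role of one input combination in the decision result assertion.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    public class DecisionResultSketch {
        // Hypothetical decision logic standing in for a decision table.
        static String creditDecision(int score) {
            return score >= 700 ? "Approve" : "Refer";
        }

        // Each row is one input combination with its expected decision result.
        @ParameterizedTest
        @CsvSource({"720, Approve", "650, Refer"})
        void eachInputCombinationIsCheckedAgainstItsExpectedResult(int score, String expected) {
            assertEquals(expected, creditDecision(score));
        }
    }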

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases

Configuring expected run-time assertions


You can create an assertion for the expected run time of the rule. The assertion passes if the actual run time is
less than or equal to the amount of time that you specify, in seconds.

An actual run time that is significantly longer than the expected run time can indicate an issue. For example, if you
are using a report definition to obtain initial values for a data page from a database, there might be a connectivity
issue between the application and the database.

By default, after you create a Pega unit test case for a data page, the system generates the expected run-time
assertion. The default value of the expected run time is the time that is taken by the rule to fetch results when the
test was first run. The system compares that time against future run-time tests.

You can change the default value and configure expected run time assertions for all rule types.
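
The behavior resembles a timeout assertion in a conventional test framework. The following JUnit 5 sketch is an analogy only, not Pega's implementation; the Thread.sleep call is a hypothetical stand-in for running the rule, such as loading a data page.

    import static org.junit.jupiter.api.Assertions.assertTimeout;
    import java.time.Duration;
    import org.junit.jupiter.api.Test;

    public class ExpectedRunTimeSketch {
        @Test
        void ruleCompletesWithinExpectedRunTime() {
            // Fails if the body takes longer than the expected run time.
            assertTimeout(Duration.ofSeconds(2), () -> {
                // Hypothetical stand-in for running the rule under test.
                Thread.sleep(100);
            });
        }
    }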

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Expected run time.

3. In the Value field, enter a value, in seconds, that specifies the amount of time within which the execution of the
rule should be completed.
4. Optional:

If you want the test case to fail when the rule is not run within the specified time, select the Fail the test case
when this validation fails check box.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

6. Click Save.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Creating unit test suites

To create a unit test suite, add test cases and test suites to the suite and then modify the order in which you
want them to run. You can also modify the context in which to save the test suite, such as the development
branch or the ruleset.

Configuring list assertions


You can create list assertions for page lists on a rule to determine whether the expected result is anywhere in the list
of results returned by the rule. Even if the order of results changes, the test continues to work.

For example, you can verify if a product is present in a product list in a data page, regardless of where the product
appears in the list results. You can also verify if there is at least one employee with the name John in the results of
the Employee list data page.

You can also configure assertions for page lists to apply assertions to all the results that are returned by a rule so
that you do not have to manually create assertions for each result in the list.

For example, you can verify that a department name is Sales and that a department ID starts with SL for each
department in the list of results in the Sales department data page. You can also verify if a discount of 10% is
applied to each customer in the list of results of the VIP customers data page.

You can configure list assertions for page lists on a rule to apply assertions to all the results that are returned by the
rule. Configure an ordered list assertion so that you do not have to manually create assertions for each result in the
list.

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select List.

3. Add properties to the assertion.

a. Click Add properties.

b. If you are adding properties for flows, case types, decision trees, decision tables, or data pages:

1. In the of object field, enter the path of the object with which the properties are compared during the
assertion.

2. Proceed to step d.

c. If you are adding properties for data transforms or activities, complete the following tasks:

1. From the Thread list in the Actual results section, select the thread that contains the page whose
properties or pages you want to add.

2. In the Page field, enter the page whose properties or pages you want to add.

3. In the of object field, enter the path of the object with which the properties are compared during the
assertion.

4. Proceed to step d.

d. Select the properties or pages that you want to add. You can search for a property or its value by entering
text in the search bar and pressing Enter.

If you select a page, all embedded pages and properties from the page are added. Added properties are
displayed in the right pane.

When you add multiple properties, the assertion passes if the expected output and results match for all
properties.

4. Optional:

In the Filter field, enter a property and value on which to filter results or open the Expression Builder by clicking
the Gear icon to provide an expression that is used to filter results. The list assertion applies only to the page
list entries that are specified for this filter value.

5. From the Comparator list, select the comparator that you want to use to compare the property with a specified
value.

Select the is in comparator to compare a text, integer, or decimal property to multiple values. The assertion
passes if the property matches any of the values that you specify.

6. In the Value field, either enter a value with which to compare the property or open the Expression Builder by
clicking the Gear icon to enter an expression that is used to provide the value.

The Gear icon is not displayed until after you have saved the rule form.

7. To add a comment, click the Add comment icon, enter a comment, and click OK.

8. Click Done.

9. Click Save.

When you run the test case, the system searches for the specified properties in the page list. One of the following
occurs:

If you selected In ANY instance, the assertion passes if all the properties in the set match the expected values
in at least one entry in the page list. If no entry matches, the assertion does not pass.

If you selected In ALL instances, the assertion passes if all the properties in the set match the expected values
in every entry in the page list. If any property does not match in any entry, the assertion
does not pass.
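
The two modes map naturally onto any-match and all-match checks over the returned list. The following JUnit 5 sketch is an analogy only; the Customer record and its values are hypothetical stand-ins for page list entries.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    public class ListAssertionSketch {
        record Customer(String name, double discount) {}

        // Hypothetical page list returned by the rule under test.
        List<Customer> results = List.of(
            new Customer("John", 0.10),
            new Customer("Mary", 0.10));

        @Test
        void inAnyInstancePassesIfOneEntryMatches() {
            assertTrue(results.stream().anyMatch(c -> c.name().equals("John")));
        }

        @Test
        void inAllInstancesRequiresEveryEntryToMatch() {
            assertTrue(results.stream().allMatch(c -> c.discount() == 0.10));
        }
    }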

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Building expressions with the Expression Builder
Configuring unit test cases

Configuring page assertions


Some rules, such as activities and data transforms, can create or remove pages from the system. You can create
page assertions to determine whether or not a page exists after a unit test case runs. You can also assert if a
property has an error and, if it does, what the message is so that you can validate that the message is correct.

You can configure page assertions for embedded pages, data pages, data pages with parameters, embedded
pages within data pages that either have or do not have parameters, and top-level pages.

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. In the Assertion type list, select Page.

3. In the Page field, enter the name of the page.

4. In the Comparator list, select the comparator that you want to use to compare the property with a specified
value:

To ensure that the page is created after the unit test runs, select exists. The assertion passes if the
system finds the page.
To ensure that the page is removed after the unit test runs, select does not exist. The assertion passes if
the system does not find the page.
To ensure that the page has an error after the unit test runs, select has errors. The assertion passes if the
system finds errors on the page.
To ensure that the page is free of errors after the unit test runs, select has no errors. The assertion passes
if the system finds no errors on the page.
To ensure that the page has a specific error message after the unit test runs, select has error with
message and then enter the message in the Value box or click the Gear icon to build an expression. The
assertion passes if the page contains the complete error message.
To ensure that the page has a portion of an error message after the unit test runs, select has error
message that contains and then enter the message in the Value box or click the Gear icon to build an
expression. The assertion passes if the page contains the words or phrases in the error message.
5. Optional:

To add another page to the assertion, click Add pages, and then perform steps 3 through 4.

6. Optional:

To add a comment, click the Add comment icon, enter a comment, and then click OK.

7. Click Save.

An activity runs every week to check the last login time of all operators and deletes any operator record (page) from
the system if the last login was six months ago. When you test this activity, you can:

1. Set up the clipboard to load an operator page that has the last login time as six months ago.
2. Create a page assertion that ensures that the page no longer exists after the activity runs.
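
This scenario behaves like asserting on the presence or absence of a key after the code under test runs. The following JUnit 5 sketch is an analogy only; the map is a hypothetical stand-in for the clipboard, and the remove call stands in for the activity that deletes stale operator records.

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import java.util.HashMap;
    import java.util.Map;
    import org.junit.jupiter.api.Test;

    public class PageAssertionSketch {
        @Test
        void operatorPageIsRemovedAfterTheActivityRuns() {
            // Hypothetical clipboard: page name mapped to page contents.
            Map<String, Object> clipboard = new HashMap<>();
            clipboard.put("OperatorPage", Map.of("LastLogin", "6 months ago"));

            // Stand-in for the activity that deletes stale operator records.
            clipboard.remove("OperatorPage");

            // Equivalent of a "does not exist" page assertion.
            assertFalse(clipboard.containsKey("OperatorPage"));
        }
    }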

Page assertions

You can configure page assertions to determine if a page exists on the clipboard or if a page has errors.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases

Page assertions
You can configure page assertions to determine if a page exists on the clipboard or if a page has errors.

You can configure page assertions on the following types of pages:

Embedded pages
Data pages
Data pages with parameters
Embedded pages within data pages that either have or do not have parameters
Top-level pages

For example, an activity runs every week to check the last login time of all operators and deletes any operator
record (page) from the system if the last login was six months ago. When you test this activity:

Set up the clipboard to load an operator page that has the last login time as six months ago.
Create a page assertion that ensures that the page no longer exists after the activity runs.

Configuring page assertions

Some rules, such as activities and data transforms, can create or remove pages from the system. You can
create page assertions to determine whether or not a page exists after a unit test case runs. You can also
assert if a property has an error and, if it does, what the message is so that you can validate that the message
is correct.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Configuring property assertions


You can configure property assertions to validate that the actual values of properties returned by a rule are the
expected values. You can also assert if a property has an error and, if it does, what the message is so that you can
validate that the message is correct.

For example, you can create an assertion that verifies that a customer ID, which appears only once on a data page,
is equal to 834234.

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. In the Assertion type list, select Property, and then click Add properties.

3. Select the properties to add by doing one of the following.

For data transforms, activities, flows, or case types, in the Actual results section, select the page
containing the properties to add.
For other rules, select the property or page that you want to add.
Properties are displayed in the right pane. If you selected a page, then all embedded pages and properties
from the page are added.

4. To add another property or page, click Add row, and then repeat step 3.

When you add multiple properties, the assertion passes if the expected output and results match for all
properties.

5. In the Comparator list, select the comparator that you want to use to compare the property with a specified
value. Do one of the following:

Select the is in comparator to compare a text, integer, or decimal property to multiple values. The assertion
passes if the property matches any of the values that you specify.

Select the is not in comparator. The assertion passes if the property does not match any of the values that
you specify.
Select the has error with message comparator to verify that the property has the exact message that you specify
in the Value box.
Select the has error message that contains comparator to verify that the property has a portion of the message that
you specify in the Value box.

6. In the Value field, enter a value with which to compare the property. Separate multiple values by
using the pipe (|) character. For text properties, use double quotation marks at the beginning and end of the
value, for example, "23|15|88" .

For example, if you want the assertion to pass when the Age property matches either 5 or 7, configure
the assertion as .Age is in 5|7 (see the sketch after this procedure).

7. Optional:

To add a comment, click the Add comment icon, enter a comment, and then click OK.

8. Click Save.
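
The is in comparator is a membership test. The following JUnit 5 sketch is a minimal analogy of the .Age example above; the value is hypothetical, and this is not Pega's implementation.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.util.Set;
    import org.junit.jupiter.api.Test;

    public class IsInComparatorSketch {
        @Test
        void passesIfThePropertyMatchesAnyListedValue() {
            int age = 5; // hypothetical .Age property value
            // Equivalent of the assertion ".Age is in 5|7".
            assertTrue(Set.of(5, 7).contains(age));
        }
    }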

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases
Building expressions with the Expression Builder

Configuring result count assertions


You can configure assertions to compare the number of items returned in a page list, page group, value list, or
value group on the rule to the result that you expect to see on the clipboard.

For example, you can create an assertion that verifies that the number of employees returned is greater than,
less than, or equal to a number of employees that you specify.
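
The assertion reduces to comparing a collection's size against an expected value. The following JUnit 5 sketch is a minimal analogy only; the pxResults contents are hypothetical.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    public class ResultCountSketch {
        @Test
        void comparesTheNumberOfReturnedItems() {
            // Hypothetical pxResults page list returned by a data page.
            List<String> pxResults = List.of("Employee A", "Employee B", "Employee C");
            // Equivalent of a result count assertion with an "is equal to" comparator.
            assertEquals(3, pxResults.size());
        }
    }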

Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. For activities and data transforms, complete the following tasks:

a. In the Page field, enter the page that contains the property for which you want to test the result count.

b. In the Page class field, select the class to which the page belongs.

3. In the of object field, enter the path of the object with which the results are compared or counted against during
the assertion.

For data pages, this value is usually .pxResults.


For data transforms and activities, you can use any page list on a page.
4. Optional:

In the Filter field, enter a property and value on which to filter results, or open the Expression Builder by clicking
the Gear icon to provide an expression that is used to filter results. The result count assertion applies only to the
page list entries that match this filter value.

5. Select the appropriate comparator from the Comparator list.

6. In the Value field, enter the value that you want to compare with the object.

7. Optional:
To add a comment, click the Add comment icon, enter a comment, and click OK.

8. Click Save.

When you run the test case, the assertion fails if the expected value does not match the result count returned from
the page list, page group, value list, or value group.

Defining expected test results with assertions

Use Pega unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Converting unit tests to test cases


Configuring unit test cases

Opening a unit test case


You can view a list of the Pega unit test cases that have been created for your application and select the one that
you want to open.

1. Open the test case. Complete one of the following actions:

In the navigation pane of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing
> Test Cases.

Click the Test Cases tab on the rule form.

2. In the Test case name column, click the test case that you want to open.

Running a unit test case

Run a unit test case to validate rule functionality.

Configuring unit test cases


PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, thereby improving application quality.

Viewing test results

Viewing test case results


After you run a unit test case, you can view the results of the test run.

The following information is displayed:

When the test was last run and the user that ran it.
The rule associated with the test.
The parameters sent.
Errors for failed tests.

Unexpected results for failed tests. This information also includes the run time of the test and the expected run
time of the test if the expected run time assertion fails.

1. Open the test case.

2. In the Run history column, click View for the test case that you want to view.

3. In the Test Runs Log dialog box, click the row for the instance of the test case that you want to view to open
the test results in a new tab in Dev Studio.

You can also view test case results in the Edit Test Case form immediately after you run the test, on the Test
Cases tab of the rule form or, for data pages, on the Data Page testing landing page.

PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, thereby improving application quality.

Viewing test results

Running a unit test case


Run a unit test case to validate rule functionality.
1. Open the test case.

2. Complete one of the following actions:

To run multiple test cases, select the test cases that you want to run, and then click Run selected.
To run a disabled test case or a single test case, click the test case to open it, and then click Actions > Run.

PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, thereby improving application quality.

Viewing test results

Exporting a list of test cases


You can export a list of all the Pega unit test cases that are in your application or configured on a rule form.

1. Export a list of all the Pega unit test cases that are in your application or for a rule type.

Complete one of the following actions:

To export a list of all the Pega unit test cases that are in your application, in the header of Dev Studio,
click Configure > Application > Quality > Automated Testing > Unit Testing > Test Cases.
To export a list of Pega unit test cases that are configured on a rule form, click Test Cases in the rule form.

2. Click Export to Excel.

PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, thereby improving application quality.

Viewing test results

Grouping test cases into suites


You can group related unit test cases or test suites into a test suite so that you can run multiple test cases and
suites in a specified order. For example, you can run related test cases in a regression test suite when changes are
made to application functionality.

Creating unit test suites

To create a unit test suite, add test cases and test suites to the suite and then modify the order in which you
want them to run. You can also modify the context in which to save the test suite, such as the
development branch or the ruleset.

Opening a unit test suite

You can view a list of the unit test suites that have been created for your application and select the one that
you want to open.

Running a unit test suite

You can run a unit test suite to validate rule functionality by comparing the expected value to the output
produced by running the rule. Test cases are run in the order in which they appear in the suite.

Viewing unit test suite run results

After you run a unit test suite, you can view the test run results. For example, you can view the expected and
actual output for assertions that did not pass.

Adding cases to a test suite

You can add test cases to a Pega unit test suite. When you run a test suite, the test cases are run in the order
in which they appear in the suite.

Viewing unit test suite run results

Creating unit test suites


To create a unit test suite, add test cases and test suites to the suite, and then modify the order in which you want
them to run. You can also modify the context in which to save the test suite, such as the development
branch or the ruleset.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.
2. Click Create new suite.

3. Optional:

In Description, enter information that you want to include with the test suite. For example, enter information
about when to run the test suite.

4. In the Category list, click the type of test suite that you are creating:

To informally test a feature, select Ad-hoc.


To verify critical application functionality, select Smoke.
To confirm that changes have not adversely affected other application functionality, select Regression.
5. Optional:

In the Expected max runtime field, provide a value, in seconds, that specifies the length of time within which the
test suite run should complete. If you want the test suite to fail when the expected run time is exceeded,
select the Fail the test suite when runtime validation fails check box.

6. Add unit test cases or other test suites to the test suite:

To add test cases to the test suite, in the Test cases section, click Add, select the test cases to include in
the suite, and then click Add.
To add test suites to the test suite, in the Test suites section, click Add, select the test suites to include
in the suite, and then click Add.

To filter information by multiple criteria, click the Advanced filter icon.


7. Optional:

To change the order in which the test cases or test suites run, drag them to a different position in the
sequence.

8. Save the test suite:

a. Click Save, and then enter a Label that describes the purpose of the test suite.

Pega Platform automatically generates the Identifier based on the label that you provide. The identifier
identifies the test suite in the system and must be unique. To change the identifier, click Edit.
b. Optional:

In the Context section, change details about the environment in which the test suite will run. You can:

Change the development branch in which to save the test suite.
Select a different application for which to run the test suite.
Select a different ruleset in which to save the test suite.

9. Click Save.

10. Complete any of the following actions:

Remove test cases or suites from the test suite by selecting them and clicking Remove.
Apply one or more data pages, data transforms, or activities to set up the clipboard before running a test
suite in the Setup section of the Setup & Cleanup tab. You can also create objects, load work and data
objects, and add user pages from the clipboard which will be available on the clipboard when running the
test suite. For more information, see Setting up your test environment.
Apply additional data transforms or activities to clean up the clipboard in the Cleanup section of the Setup
& Cleanup tab. You can also prevent the test data from being removed after the test suite runs. For more
information, see Cleaning up your test environment.
Run a configured test suite by clicking Actions > Run.
If you made changes to the suite, such as adding or removing test cases or test suites, save those changes
before running the suite. Otherwise, the last saved version of the suite will run.
View more details about the latest result by clicking View details in the banner. Viewing details is possible
after a test suite runs. For more information, see Viewing unit test suite run results.
To view historical information about previous test runs, such as test date, the run time, expected run time,
and whether test passed or failed, click View previous runs.

11. Click Save. If you are saving the form for the first time, you can modify the Identifier. After you save the rule
form, you cannot modify this field.

Rules development
Converting unit tests to test cases

Opening a unit test suite


You can view a list of the unit test suites that have been created for your application and select the one that you
want to open.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.
2. In the Test suite name column, click the test suite that you want to open.

Running a unit test case

Run a unit test case to validate rule functionality.

Viewing test results

Running a unit test suite


You can run a unit test suite to validate rule functionality by comparing the expected value to the output produced
by running the rule. Test cases are run in the order in which they appear in the suite.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.

2. Select the check box for each test suite that you want to run.

3. Click Run selected. The test cases run, and the Result column is updated with the result, which you can click to
open test results.

You can stop the test run by clicking Stop test execution.

The test suite continues to run even if you close or log out of the Pega Platform, close the Automated Testing
landing page, or switch to another Dev Studio tab.

Viewing test results

Viewing unit test suite run results


After you run a unit test suite, you can view the test run results. For example, you can view the expected and actual
output for assertions that did not pass.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.

2. In the Run history column, click View for the test suite that you want to view.

To quickly view results of the most recent run, click the result in the Result column.

3. In the Test suite runs log dialog box, click the row for the instance of the test suite run that you want to view
to open the results of that run in a new tab in Dev Studio.

You can also view test results after you run the test in the Edit Test Suite rule form.

Running a unit test suite

You can run a unit test suite to validate rule functionality by comparing the expected value to the output
produced by running the rule. Test cases are run in the order in which they appear in the suite.

Viewing test results

Adding cases to a test suite


You can add test cases to a Pega unit test suite. When you run a test suite, the test cases are run in the order in
which they appear in the suite.

1. Optional:

Open the Pega unit test suite, if it is not already open.

2. Click Add test cases.

3. In the Add test cases dialog box, select the test cases that you want to add to the test suite.

You can click the Advanced filter icon to filter information by multiple criteria.

4. Click Add.

5. Save the rule form.

Setting up and cleaning the context for a test case or test suite
You can set up the environment and conditions required for running a test case, determine how to clean up test
data at the end of the test run, and set pages on which to automatically run rules.

You can set clipboard pages, apply data transforms, load data pages, execute activities, create and load objects. All
the referenced data pages, and data objects and user pages that were created during a test run will be
automatically removed at the end of each run. To further clean up the clipboard, add steps to apply additional data
transforms and execute activities. You can set up or clean up the clipboard if you are running a test for which the
output or execution depends on other data pages or information.

For example, your application contains a data page D_AccountTransactionsList. This data page is sourced by a
report definition or activity that loads the transactions of the logged-in customer, based on the account type for
which the customer views transactions.

The customer number and account type that the customer selects are dynamic properties that are stored on the
work page of the case. The report definition or activity retrieves these properties as parameters from the work page
and filters the results as it obtains the results for the data page.

When you create a test case for D_AccountTransactionsList, ensure that one of the following conditions is met:

The parameter properties are on the work page of the clipboard before running the data page test.
Your data page has an activity or report definition that refers to the properties of another data page that is on
the clipboard to filter the results.

The system always runs data transforms, activities, and strategies on the RunRecordPrimaryPage page, regardless
of which page you chose when you unit tested the rule in the Run dialog box. The system also runs flows and case
types on the pyWorkPage. To update the page with any information required to set up test data, click the Setup tab.
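
This setup-and-cleanup lifecycle parallels the before and after hooks in conventional unit-testing frameworks. The following JUnit 5 sketch is an analogy only; the map stands in for the clipboard, and the property names are hypothetical.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.HashMap;
    import java.util.Map;
    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    public class SetupCleanupSketch {
        // Hypothetical stand-in for the clipboard.
        Map<String, String> clipboard = new HashMap<>();

        @BeforeEach
        void setUp() {
            // Like the Setup section: put the parameters the rule needs on the work page.
            clipboard.put("CustomerNumber", "1001");
            clipboard.put("AccountType", "Checking");
        }

        @Test
        void ruleRunsAgainstThePreparedContext() {
            assertEquals("Checking", clipboard.get("AccountType"));
        }

        @AfterEach
        void cleanUp() {
            // Like the Cleanup section: remove pages so later tests start clean.
            clipboard.clear();
        }
    }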

Setting up your test environment

Configure which actions you want to run and which objects and pages you want to view on the clipboard before,
during, and after the test is run.

Cleaning up your test environment

After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default. You can apply additional data transforms or activities to
remove other pages or information on the clipboard before you run more test cases or suites.

Converting unit tests to test cases


Data pages
Activities
Data Transforms

Setting up your test environment


Configure which actions you want to run and which objects and pages you want to view on the clipboard before,
during, and after the test is run.

To set up the environment and conditions that are required before running the test case, copy or create clipboard
pages, apply data transforms, load data pages, execute activities, and create and load objects. Then, define the
connections to data pages or third-party databases to simulate during the test. Finally, configure the conditions
that are required after the test case runs by applying data transforms, loading data pages, executing
activities, and creating and loading objects.

Open the test case or test suite that you want to set up. For more information, see Opening a unit test case or
Creating unit test suites.

1. Click the Setup & Cleanup tab.

2. Optional:

To make specific conditions available during test execution, expand the Before rule execution section, and then
configure the conditions:

a. Copy or create clipboard pages.

For more information, see Copy or create clipboard pages.

b. Add additional clipboard data.

For more information, see Add additional clipboard data.

3. Optional:

To define simulation settings for the test, expand the Simulation section, and then configure the simulations.

For more information, see Simulating data pages and third-party connections.

4. Optional:

To make specific conditions available after test execution, expand the After rule execution section, and then
add additional clipboard data.

For more information, see Adding additional clipboard data.

You can set up actions after rule execution for test cases only.
5. To run the rule on a specific page and avoid copying the entire page to RunRecordPrimaryPage, in the Advanced
section, enter the page under which you want to run the rule.

6. Click Save.

Copying and creating clipboard pages in setup

When setting up your test environment, you can choose to copy or create clipboard pages before the test runs.

Adding additional clipboard data

When setting up your test environment, you can add additional clipboard data before or after the test runs. You
can apply data transforms, load data pages, execute activities, load objects, create data objects, and
create work objects.

Simulating data pages and third-party connections

When setting up your test environment, you can simulate data pages and third-party connections. Such
simulations let you run your tests without depending on the availability of third-party servers.

Clipboard tool
Converting unit tests to test cases
Data pages
Activities
Data Transforms

Copying and creating clipboard pages in setup


When setting up your test environment, you can choose to copy or create clipboard pages before the test runs.

1. On the Setup & Cleanup tab for the test case or test suite for which you want to set up the context, expand the
Before rule execution section, and then expand the Setup data section.

2. Click Add data.

3. In the Description box, enter a description of the clipboard page you want to copy or create.

4. Optional:

Copy a clipboard page:

a. In the Type section, select Copy page.

b. To copy a clipboard page from a different thread, click the current thread name, and then click the desired
thread name.

c. Select the check box next to the page that you want to be available in the clipboard during test execution,
and then click Next.

d. Edit the clipboard page, and then click OK. You can rename the parent page, modify the values of existing
properties, or add new properties and their values to the parent page and child pages.

5. Optional:

Create a clipboard page:

a. In the Type section, select Create page.

b. In the Page Name field, enter a name of the page you want to create.

c. In the Class field, enter or select the class of the page you want to create.

d. Click Next.

e. Edit the clipboard page, and then click OK. You can rename the parent page, modify the values of existing
properties, or add new properties and their values to the parent page and child pages.

6. Save the test case or test suite.

Adding additional clipboard data


When setting up your test environment, you can add additional clipboard data before or after the test runs. You can
apply data transforms, load data pages, execute activities, load objects, create data objects, and create work
objects.

1. On the Setup & Cleanup tab for the test case or test suite for which you want to set up the context, choose
whether to add the data before or after the rule runs.

To add the data before the test rule runs, expand the Before rule execution section, and then expand the
Additional clipboard data subsection.
To add the data after the test rule runs, expand the After rule execution section.

2. For each action you want to perform to the clipboard data, click Add step and then select the action.

To apply a data transform, select Apply data transform, and then, in the Name field, enter or select the
name of the data transform to apply.
To load a data page, select Load data page, and then, in the Name field, enter or select the name of the
data page to load.
To execute an activity, select Execute activity, and then, in the Name field, enter or select the name of the
activity to execute.
To load an object, select Load object, enter or select the class of the object in the Of class field, and then
enter a name for the page in the Load on page field.
To create a data object, select Create data object, enter or select the class of the object in the Of class
field, and then enter a name for the page in the Load on page field.
To create a work object, select Create work object, and then, in the Of class field, enter or select the class
of the work object.
3. Optional:

If parameters are configured on rules, you can modify them by clicking the gear icon, providing values in the
Configure parameters dialog box, and then clicking Submit.

4. Optional:

If keys are configured on loaded or created objects, you can define their values by clicking the With Keys
gear icon and providing values in the Configure parameters dialog box.

5. Save the test case or test suite.

Simulating data pages and third-party connections


When setting up your test environment, you can simulate data pages and third-party connections. Such simulations
let you run your tests without depending on the availability of third-party servers.

1. In Dev Studio, open the unit test case that you want to configure. For more information, see Opening a unit test case.

2. Click the Setup & Cleanup tab.

3. In the Setup section, expand the Simulation section, and then click Add rules.

4. To include a rule that the test rule directly references, on the Referenced rules tab, select the rules to simulate,
and then click Add.

5. To include any rule that the test rule does not directly reference, do the following for each rule:

a. Click the Other rules tab and then click Add.

b. In the Rule type list, click the type of rule that you want to simulate.

c. In the Class box, enter the class of the rule that you want to simulate.

d. In the Rules field, enter the rule that you want to simulate.

e. Click the Add button.

The selected rules display in the Simulation section on the Setup & Cleanup tab.

6. In the Simulate with list for each rule, click a simulation method:

Select As defined in the rule to use the default simulation defined in the rule.
Select Select data transform rule to define your own data transform rule. You can reuse this rule in other
test cases.
Select Define data here to manually provide test data specific to this particular test case. You can copy
pages from the clipboard or create new pages and populate them with required test data.
Select None to disable the simulation.


Cleaning up your test environment


After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default. You can apply additional data transforms or activities to
remove other pages or information on the clipboard before you run more test cases or suites.

Open the unit test case. For more information, see Opening a unit test case.

1. Click the Setup & Cleanup tab.

2. To keep the test data on the clipboard at the end of the test run, clear the Cleanup the test data at the end of
run check box in the Cleanup section.

3. In the Cleanup section, click Add step.

4. Select Apply data transform or Execute activity, and then provide the appropriate data transform or activity in
the next field.

5. If parameters are configured on the rule, you can configure them by clicking the Parameters link and providing
values in the Configure parameters dialog box.

6. Optional:

Provide additional information in the Enter comments field.

7. Click Save.

Test environment cleanup

After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default.


Test environment cleanup


After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default.

You can override this behavior if you want the data from the current test to be available to the subsequent tests.

You can also apply additional data transforms or activities to remove other pages or information on the clipboard
before you run more tests. Cleaning up the clipboard ensures that data pages or properties on the clipboard do not
interfere with subsequent tests. For example, when you run a test case, you can use a data transform to set the
values of the pyWorkPage data page with the AvailableDate, ProductID, and ProductName properties.

You can use a data transform to clear these properties from pyWorkPage. Clearing these values ensures that, if
setup data changes between test runs, the test uses the latest information. For example, if you change the value
of the AvailableDate property, clearing it before the next run ensures that the data page uses the new value
rather than the stale value from the earlier run.


Viewing unit test reports


View a graph with test pass rate trend data, a summary of Pega unit tests that were run, and an overview of Pega
unit test compliance for currently included applications on the Reports tab on the Unit Testing landing page.

By default, a test case is considered executed if it ran in the last 7 days. You can change the number of days for
which a test is considered executed on the Application: Quality Settings landing page. The overview also
includes the percentage of rules on which Pega unit test cases are configured. View Pega unit test reports to check
the quality of your application and identify rules that did not pass Pega unit testing.

1. In the header of Dev Studio, click Configure > Application Quality > Automated Testing > Unit Testing > Reports.
2. Optional:

To filter information by more than one criterion, click the Advanced filter icon.

3. Optional:

Generate and export a report for test coverage and test runs for a rule type.

a. For the rule type for which you want to export a report, click a number in the table column for either
pie chart.

b. Click Actions.

c. Click Export to PDF or Export to Excel.


Viewing unit tests without rules


On the Application: Unit testing landing page you can display a list of unit tests that are not associated with any rule
and export this list to an XLS or a PDF file. You should deactivate these unit tests because they will always fail.

1. In the header of Dev Studio, click Configure > Application Quality > Automated Testing > Unit Testing.

2. Click Tests without rules.

3. Optional:

Generate and export a report that contains a list of test cases that are not associated with any rules.

To export to PDF format, click Actions > Export to PDF.

To export to XLS format, click Actions > Export to Excel.


Running test cases and suites with the Execute Tests service
You can use the Execute Tests service (REST API) to validate the quality of your code after every build is created by
running unit test cases that are configured for the application.

A continuous integration (CI) tool, such as Jenkins, calls the service, which runs all the unit test cases or test suites in
your application and returns the results in xUnit format. The continuous integration tool interprets the results and, if
the tests are not successful, you can correct errors before you deploy your application.

When you use Jenkins, you can also use the Execute Tests service to run unit tests after you merge a branch on a
remote system of record and start a job. For more information, see Remotely starting automation jobs to perform
branch operations and run unit tests.

The service comprises the following information:

Service name: pzExecuteTests
Service class: Rule-Test-Unit-Case
Service package: PegaUnit
End point: http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests

You can quarantine a test case by marking it Disabled. A disabled test case is not run by the Execute Tests service.
Test case quarantines prevent noncritical tests from running if they are causing failures so that the service can
continue to run.
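For a quick check outside a CI tool, you can call the service directly. The following is a minimal sketch using Python's requests library, assuming basic authentication with an operator that belongs to the appropriate access group; the host name and credentials are placeholders:

import requests

# Placeholder values; replace with your environment's URL and operator credentials.
ENDPOINT = "http://myapp.example.com/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests"

# The service is called with POST and authenticates as a Pega Platform operator.
response = requests.post(ENDPOINT, auth=("operator.id@example.com", "password"))
response.raise_for_status()

# The body is an xUnit-format XML document that describes the test results.
print(response.text)

The request parameters described in the next topic are passed as query string parameters on this same endpoint.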

Request parameters

The Execute Tests service takes certain string request parameters.

Response

The service returns the test results in an XML file in xUnit format and stores them in the location that you
specified in the LocationOfResults request parameter.

Configuring your default access group


When you run the Execute Tests service, you can specify the access group that is associated with the
application for which you want to run all unit test cases or a test suite. If you do not specify an access group or
application name and version, the service runs the unit test cases or test suite for the default access group that
is configured for your Pega Platform operator ID.

Configuring your build environment

Configure your build environment so that it can call the Execute Tests service and run all the unit test cases or
a test suite in your application. Your configuration depends on the external validation engine that you use.

Running tests and verifying results

After you configure your validation engine, run the service and verify the test results. Your test suites and test
cases must be checked in so that you can run them.

Test failures

Test cases and suites that are run by using the Execute Tests service can fail for several reasons.

Request parameters
The Execute Tests service takes certain string request parameters.

The strings are:

ApplicationInformation – Optional. The name and version of the application for which you want to run Pega unit
test cases. You can pass it instead of the AccessGroup parameter.
If you pass only this parameter, the service runs all the test cases in the application.
If you do not pass this parameter, the service runs all the test cases in the application that are associated
with the default access group that is configured for your operator.

Use the format ApplicationInformation=<application_name:application_version>.

AccessGroup – Optional. The access group that is associated with the application for which you want to run
Pega unit test cases. You can pass it instead of the ApplicationInformation parameter.
If you pass this parameter, the service runs all the test cases in the application that are associated with
this access group.
If you do not pass this parameter, the service runs all the test cases in the application that are associated
with the default access group that is configured for your operator.

Use the format AccessGroup=<access_group_name:access_group_user>.

TestSuiteID – The pxInsName of the test suite that you want to run. You can find this value in the XML
document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. You can run one
test suite at a time. When you use this parameter, all the test cases in the test suite are run, but no other test
cases in your application are run. This parameter is required for Pega unit test suites. If test suites share the
same name among applications:
If you pass the ApplicationInformation or AccessGroup parameter with the TestSuiteID parameter, the
service runs the test suite in the application that you specified.
If you do not pass the ApplicationInformation parameter or the AccessGroup parameter with the
TestSuiteID parameter, the system runs the test suite in the application that is associated with the default
access group.

Use the format TestSuiteID=<pxInsName>.

LocationOfResults – The location where the service stores the XML file that contains the test results. This
parameter is optional for test cases and test suites.
RunWithCoverage – Determines whether the application-level test coverage report is generated after the
Execute Tests service runs all relevant test cases or the selected test suite. For more information, see
Generating an application-level test coverage report.
If you set the parameter to False, the application-level test coverage report is not generated. This is the
default behavior.
If you set the parameter to True, and application-level coverage is not running, the Execute Tests service
starts application-level coverage mode, runs all unit tests, stops coverage mode, and generates the
application-level coverage report. This report is displayed on the test coverage landing page in the
Application level section.
If you set the parameter to True, and application-level coverage is already running, the Execute Tests
service returns an error.
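Putting these parameters together, the following sketch runs a single test suite for a specific application with coverage enabled; the application name, suite pxInsName, and results location are illustrative values, not defaults:

import requests

ENDPOINT = "http://myapp.example.com/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests"

# Illustrative parameter values; substitute your own application, suite, and path.
params = {
    "ApplicationInformation": "MyApp:01.01.01",  # <application_name:application_version>
    "TestSuiteID": "MyRegressionSuite",          # pxInsName of the test suite
    "LocationOfResults": "/tmp/pegaunit",        # where the server stores the xUnit XML
    "RunWithCoverage": "true",
}

# Multiple parameters are joined with the & character, which requests handles automatically.
response = requests.post(ENDPOINT, params=params, auth=("operator.id@example.com", "password"))
print(response.status_code)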

Response
The service returns the test results in an XML file in xUnit format and stores them in the location that you specified
in the LocationOfResults request parameter.

The output is similar to the following example:

<test-case errors="2" failures="0" label="Purchase order transformation with a bad element in the output expected" name="report-bad-element-name" skip="0" tests="7">
  <nodes expected="/" result="/">
    <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
      <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
      <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
    </nodes>
  </nodes>
  <sysout>This text is captured by the report</sysout>
  <syserr/>
</test-case>
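If your CI tool does not interpret xUnit output for you, you can inspect the counts yourself. A small sketch, assuming a result file shaped like the example above (the root element and file name may differ in your environment):

import xml.etree.ElementTree as ET

# Parse the xUnit-format result file written by the service.
root = ET.parse("results.xml").getroot()

# In the example above, the counts are attributes of the test-case element.
errors = int(root.get("errors", "0"))
failures = int(root.get("failures", "0"))

# Fail the build step if any test did not pass.
if errors or failures:
    raise SystemExit(f"{errors} error(s), {failures} failure(s) reported")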

Configuring your default access group


When you run the Execute Tests service, you can specify the access group that is associated with the application for
which you want to run all unit test cases or a test suite. If you do not specify an access group or application name
and version, the service runs the unit test cases or test suite for the default access group that is configured for your
Pega Platform operator ID.

1. In the navigation pane of Dev Studio, click the Operator menu, and then click Operator.
2. In the Application Access section, select your default access group.
3. Click Save.

Configuring your build environment


Configure your build environment so that it can call the Execute Tests service and run all the unit test cases or a
test suite in your application. Your configuration depends on the external validation engine that you use.

For example, the following procedure describes how to configure the Jenkins server to call the service.

1. Open a web browser and go to the location of the Jenkins server.

2. Install the HTTP request plug-in for Jenkins to call the service and the JUnit plug-in so that you can view reports
in xUnit format.

a. Click Manage Jenkins.

b. Click Manage Plugins.

c. On the Available tab, select the HTTP Request Plugin and the JUnit Plugin check boxes.

d. Specify whether to install the plug-in without restarting Jenkins or to download the plug-in and install it
after restarting Jenkins.

3. Configure the Pega Platform credentials for the operator who authenticates the Execute Tests service.

a. Click Credentials, and then click System.

b. Click the drop-down arrow next to the domain to which you want to add credentials, and click Add
credentials.

c. In the Username field, enter the operator ID that is used to authenticate the service. This operator should
belong to the access group that is associated with the application for which you want to run test cases and
test suites.

d. In the Password field, enter the password.

e. Click OK.

4. Configure the Jenkins URL that runs the service.

a. Click Manage Jenkins, and then click Configure System.

b. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server.

c. Click Apply, and then click Save.

5. Add a build step to be run after the project is built.

a. Open an existing project or create a project.

b. Click Configure.

c. In the Build section, click Add build step, and select HTTP Request from the list.

d. In the HTTP Request section, in the URL field, enter the endpoint of the service. Use one of the following
formats:

http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests

http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=<access_group_name:access_group_user>
http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?TestSuiteID=<pxInsName>
http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application_name:application_version>

If you are using multiple parameters, separate them with the ampersand (&) character, for example,
http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application_name:application_version>&TestSuiteID=<pxInsName>

6. From the HTTP mode list, select POST.

7. Click Advanced.

8. In the Authorization section, from the Authenticate list, select the Pega Platform operator ID that authenticates
the service that you configured in step 3.

9. In the Response section, in the Output response to file field, enter the name of the XML file where Jenkins
stores the output that it receives from the service. This field corresponds to the LocationOfResults request
parameter.

10. In the Post-build Actions section, from the Add post-build action list, select Publish JUnit test result
report, and then enter **/*.xml in the Test Report XML field. This setting publishes the results in xUnit format,
which provides information about test results, such as a graph of test result trends. These results are displayed
on your project page in Jenkins.

11. Click Apply, and then click Save.

Running tests and verifying results


After you configure your validation engine, run the service and verify the test results. Your test suites and test cases
must be checked in so that you can run them.

For example, in Jenkins, complete the following steps.

1. Open the project and click Build Now.

2. In the Build History pane, click the build that you ran.

3. On the next page, click Test Result.

4. In the All Tests section, click root. The results of all tests are displayed.

5. Optional:

Expand a test result in the All Failed Tests section and view details about why the test was not successful.

Test failures
Test cases and suites that are run by using the Execute Tests service can fail for several reasons.

Reasons for failed tests:

The operator does not have access to the location of the results.
The access group that is passed by the service either does not exist or no access group is associated with the
operator ID.
The application name and version that are passed do not exist.
An application is not associated with the access group that is passed by the service.
No Pega unit test cases or test suites are in the application.
The test suite pxInsName does not exist for the application name and version or for the access group that is
passed by the service.

Understanding Pega Platform 7.2.2 and later behavior when switching between Pega unit testing and Automated Unit Testing features

Beginning with Pega 7.2.2, you can use Pega unit testing to create test cases to validate the quality of your
application by comparing the expected test output with results that are returned by running rules.

In addition, if you have the AutomatedTesting privilege, you can use Automated Unit Testing (AUT) and switch
between Pega unit testing and AUT, for example, if you want to view test cases that you created in AUT. The
following list describes the application behavior when you use Pega unit testing and AUT:

When you unit test activities that are supported by both Pega unit testing and AUT, the Run Rule dialog box
displays updated options for creating unit tests for Pega unit testing. However, you cannot create unit test
cases for AUT by using this dialog box.
When you use Pega unit testing, you can create, run, and view the results of Pega unit testing on the Test
Cases tab for the supported rule types.
You can view, run, and review the results of Pega unit test cases by clicking Dev Studio > Automated Testing >
Test Cases. You can also switch to the AUT landing page by clicking Switch to old version.
When you switch to the AUT landing page, you can create, run, and view the results of unit test cases for AUT
on the Test Cases tab for activities, data transforms, and data tables, which are supported by both Pega unit
testing and AUT. You can create unit test cases only by clicking the Record test case button and using the older
Run Rule dialog box.
On the Automated Unit Testing landing page, you can restore the Automated Testing landing page by clicking
Switch to new version. When you click the Test cases tab in an activity, decision table, or decision tree,
the tab displays options for creating Pega unit test cases.
If you use the Automated Unit Testing landing page, and then log out of the system, Dev Studio displays the
Dev Studio > Application > Automated Unit Testing menu option instead of the Dev Studio > Application >
Automated Testing option. To return to the Automated Testing landing page, click Switch to new version on the
Automated Unit Testing landing page.

Working with the deprecated AUT tool


In older versions of Pega Platform, automated unit tests were created using the Automated Unit Testing (AUT) tool,
which has since been replaced by PegaUnit testing. If you have automated unit tests that were created using AUT
and have not been converted to PegaUnit test cases, you can switch back to AUT to manage those tests.

AUT has been deprecated and is not supported in the current version of Pega Platform. Switch to and use AUT only if
you have existing automated unit tests created with AUT. See PegaUnit testing for more information.

Note the following behavior:

To use AUT, your operator ID must have the AutomatedTesting privilege through an access role.

Switch from PegaUnit testing to AUT by clicking Configure > Automated Testing > Test Cases and then clicking
Switch to old version on the Automated Unit Testing landing page.

Click the Test cases tab of the Automated Unit Testing landing page to display options for creating unit tests for
activities, decision tables, and decision trees.

If you are using the Automated Unit Testing landing page and then log out of the system, you can click
Configure > Application > Automated Unit Testing, and then click Switch to new version to restore the Automated
Testing landing page.

Viewing, playing back, and rerecording test cases


1. Click the Automated Unit Tests tab.
2. Select Unit Test Cases in the Show field.
To play back a test case, click its name in the Name column.
To rerecord a test case, right-click the test case name and click Re-record.
If the underlying test case rule belongs to a ruleset that uses the check-out feature, you must have the
test case rule checked out to you before re-recording the test case.

Opening rules in test cases and unit test suites


1. Click the Automated Unit Tests tab.
2. Right-click a test case or suite and click Open to open its rule.

Withdrawing test cases and unit test suites


1. Click the Automated Unit Tests tab.
2. Right-click a test case or suite and click Withdraw.

Withdrawn test cases and suites are not displayed on the Automated Unit Tests tab.

Unit test suite run results


You can view the results of your recent unit test suite runs on either the Dashboard tab or the Reports tab. The
Dashboard tab displays the ten most recent runs. The Reports tab displays earlier results and, for a given unit test
suite, shows results from the last 50 runs of that suite.

If you ran a unit test against a saved test case for a decision table, decision tree, activity, or Service SOAP rule and
selected the All Cases option in the Run Rule form, those results also appear in the Dashboard tab.

For activity test cases, if the activity test case has an approval list, differences are reported only for pages and
properties on the list. If the test case has an approval list and the only differences are for pages and properties not
on the list, those differences are not reported. If differences are found for items on the approval list, you can remove
the item from the approval list for that test case.

Creating and scheduling unit test suites


To create a unit test suite:

1. Click the Schedule tab.


2. Click Create Suite.
3. In the New Rule form, enter the requested information for creating a unit test suite.
To run a unit test suite or to schedule a run:

1. Click the Schedule tab.


2. Click the Calendar icon in the Schedule column for the unit test suite you want to run.
3. In the Pattern section of the Schedule Unit Test Suite window, specify how to run this unit test suite. When the
run is complete, the system displays the results on the Dashboard tab. When you select the option to run
immediately, the system runs the test suite in the foreground; for all other options, the system runs the test in
the background.
4. For scheduled runs, you can specify additional options.
a. To run the unit test suite by using a different operator ID, enter the operator ID in the Override Default
and Run Suite As field in the Advanced Settings section. The system runs the unit test suite by using the
rulesets and access rights associated with that operator. If the operator ID form has multiple access
groups, the default access group is used.
b. To send the completion email to multiple email addresses, specify the addresses in the Send Completion
Email to field.

If you do not want any emails sent, clear the Send Completion Email field.

5. Click OK.

By default, the Pega-AutoTest agents check for scheduled unit test suite runs every five minutes. When the suite is
finished, the agent activity sends an email with the results. By default, this email is sent to the operator who
requested the unit test suite run and to any email addresses listed in the Send Completion Email array. If no email
addresses appear in that field, no email message is sent.

Creating test cases with AUT

You can automate testing of rules by creating test cases for automated unit testing. Automated unit testing
validates application data by comparing expected output to the actual output that is returned by running rules.

Creating unit test suites with AUT

Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose
credentials are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test
cases together and make unit testing more efficient.

AUT test suite – Create or Save as form

Unit Test Suites – Completing the Create or Save As form

AUT test suite – Contents form

Use the Contents tab to define the unit test suite. Specify a user (Operator ID) that the Pega-AutoTest agents
are to use by default when running the suite, and select the test cases to include.


Creating test cases with AUT


You can automate testing of rules by creating test cases for automated unit testing. Automated unit testing
validates application data by comparing expected output to the actual output that is returned by running rules.

To create test cases, you must have a ruleset that can store test cases. For more information, see Creating a test
ruleset.

AUT is deprecated. Use Pega unit testing instead to create automated test rules.

Automated unit testing information is available on the Testing Applications landing page on Pega Community and in
the automated unit testing topics in the help.

AUT test cases

Create test cases for automated unit testing to validate application data by comparing expected output to the
actual output that is returned by running rules.


AUT test cases


Create test cases for automated unit testing to validate application data by comparing expected output to the actual
output that is returned by running rules.

You can create automated unit testing test cases for the following rule types:

Activity
Decision table
Decision tree
Flow
Service SOAP

AUT is deprecated. Use Pega unit testing instead to create automated test rules.

Automated unit testing information is available on the Testing Applications landing page on Pega Community and in
the automated unit testing topics in the help.


Creating unit test suites with AUT


Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose credentials are
used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test cases together and
make unit testing more efficient.

The Unit Test Suite rule form consists of the Contents tab.

AUT is deprecated. Use Pega unit testing instead to create automated test rules.

Automated unit testing information is available on the Testing Applications landing page on Pega Community and
in the automated unit testing topics in the help.

You must have the AutomatedTesting privilege to work with unit test suite rules.

You can create a unit test suite that includes all the test cases for a specific rule type, or you can select individual
rules and specify the sequence in which to run them.

To run a unit test suite, use the Schedule gadget on the Automated Unit Testing landing page. From that gadget,
you can choose to run the unit test suite immediately or schedule the run for a future time.

For unit test suites that are scheduled to run at future times, an agent activity in the Pega-AutoTest agents rule
checks for unit test suite requests every five minutes and runs those that are due. When the agent activity finishes
running a unit test suite, it sends an email message with the results. By default, this completion email message is
sent to the person who scheduled the unit test suite run, and to any additional email addresses specified at the time
the run is scheduled. If no email addresses are specified at the time the run was scheduled, no email message is
sent.

Access
Use the Automated Unit Testing landing page to work with the Unit Test Suites that are available to you. You can:

Use the Automated Unit Tests gadget to see the Unit Test Suites and the test cases in each.
Use the Create Suite button on the Schedule gadget to create unit test suites.
Use the calendar button on the Schedule gadget to run unit test suites immediately and to schedule unit test
suite runs.

Category
Unit Test Suites are instances of the Rule-AutoTest-Suite class. They belong to the SysAdmin category.

AUT test suite – Create or Save as form

Unit Test Suites – Completing the Create or Save As form

AUT test suite – Contents form

Use the Contents tab to define the unit test suite. Specify a user (Operator ID) that the Pega-AutoTest agents
are to use by default when running the suite, and select the test cases to include.

AUT test suite – Create or Save as form


Unit Test Suites – Completing the Create or Save As form

To create a unit test suite rule, use the Create Suite button on the Schedule gadget of the Automated Unit Testing
landing page. To open the Automated Unit Testing landing page, select Dev Studio > Application > Automated Unit
Testing.

You must have the AutomatedTesting privilege to be able to create unit test suites. For information about how to
enable this privilege, see About Automated Unit Testing.

A unit test suite rule has a single key part, the unit test suite name:

Name – Enter a short, descriptive name for the unit test suite.

Create a separate RuleSet to hold test cases and unit test suites, rather than using a RuleSet that will be moved to a
production system. For more information, consult the articles in the Testing Applications category of Pega
Community.

For general information about the Create and Save As forms, see:

Completing the Create form.


Completing the Save As form.

Rule resolution

As with most rules, when you search for a Unit Test Suite, the system shows you only those rules that belong to a
RuleSet and version that you have access to.

Unit Test Suite rules cannot be qualified by circumstance or time.


AUT test suite – Contents form


Use the Contents tab to define the unit test suite. Specify a user (Operator ID) that the Pega-AutoTest agents are to
use by default when running the suite, and select the test cases to include.

The Operator ID specified here is the default one used to run the unit test suite. When defining the unit test suite's
run schedule using the Schedule gadget of the Automated Unit Testing landing page, you have the option to specify
a different Operator ID and override the one specified here.

You can specify Test Cases in both the Rule Types To Include section and the Query Test Cases To Include
section of this form. If you specify Test Cases in both sections, when the unit test suite runs, those test cases defined
in the Rule Types To Include section will run before the test cases in the Query Test Cases To Include section.

1. In the RuleSets for Test Cases field, select the RuleSet that holds the test cases you want to include in this test
suite.

If the test cases are in more than one RuleSet, click the Add icon to add rows to specify the additional RuleSets.

2. In the User ID for Agent Processing field, select the Operator ID for the Pega-AutoTest agents to use by default
when they run this test suite.

This ID must provide access to the RuleSet that this test suite belongs to, as well as access to the RuleSets
listed in the RuleSets field.

3. Optional:

To specify that the work items created during the test case execution are to be deleted afterwards, select the
Remove Test Work Objects? check box.

The fields in the Application Test Cases To Include section provide options to specify the test cases by
application name and version.

4. In the Application Name field, select the name of the application that has the test cases you want to include in
the unit test suite.

5. In the Application Version field, select the version of the application that has the test cases you want to include
in the unit test suite.

The fields in the Rule Types To Include section provide options to select the test cases by rule type. You can
specify that all the test cases for a particular rule type are included in this unit test suite, or you can constrain
the list with a When condition rule.

6. In the Rule Type field, select those rule types for which you want to include their test cases in this unit test
suite:

Activities
Decision Tables
Decision Trees
Flows
Service SOAP service records

7. In the When Filter field, do one of the following:


Leave blank to include all the test cases that were created for rules of the type specified in the Rule Type
field.

Select the appropriate when condition rule to constrain the list.

The test cases that meet the conditions in the when condition rule are included in the unit test suite.

The fields in the Query Test Cases To Include section provide options to select specific Test Cases to include
in this unit test suite. List the test cases in the order in which you want them to be run.

8. In the Test Case Name field, enter a search string for the test case you want to find.

9. To list test cases that match the query string in the Test Case Name field, click Query.

The list is not limited by RuleSet. If test cases exist that match the search string, the List Test Case window
appears. Select the test cases you want to include and then click OK. The test cases are added to the list in this
section of the form.

10. In the Test Case Key field, enter the three-part key of a Test Case rule.

The key consists of the following parts:

Class Name
Instance Name (Ins Name)
Purpose

When you use the Query button to find and add a test case, the system automatically fills in this field.

11. In the Description field, enter the short description of the Test Case.

When you use the Query button to find and add a test case, the system automatically fills in this field.

12. In the RuleSet field, enter the RuleSet of the test case.

When you use the Query button to find and add a test case, the system automatically fills in this field.

Verify that this RuleSet is included in the RuleSets for Test Cases list at the top of this form. If the RuleSet for
the test case is not in that list, add it now. Otherwise, the Test Case does not run when the unit test suite runs.


UI testing
Perform UI-based functional tests and end-to-end scenario tests to verify that end-to-end cases work as expected.
Use the third-party Selenium starter kit for CRM or the built-in scenario testing tool to perform the UI testing.

Testing with Selenium starter kit for CRM

Pega provides a Selenium-based UI test framework and sample UI tests that you can leverage to build a test
automation suite for your Pega application. These test frameworks are built with maintenance and best
practices in mind.

Creating UI-based tests with scenario testing

Run scenario tests against a user interface to verify that the end-to-end scenarios are functioning correctly. The
UI-based scenario testing tool allows you to focus on creating functional and useful tests, rather than writing
complex code.

Testing with Selenium starter kit for CRM


Pega provides a Selenium-based UI test framework and sample UI tests that you can leverage to build a test
automation suite for your Pega application. These test frameworks are built with maintenance and best practices in
mind.

The starter kit comes with a generic Selenium-based UI test framework that you can use for creating UI page objects
and UI tests for your Pega application. It includes a sample UI test framework that supports testing the core Pega
CRM applications: Pega Sales Automation, Pega Customer Service, and Pega Marketing. The kit also comes with
out-of-the-box (OOTB) sample tests that validate real core use cases of those CRM applications. You can use this kit
as a reference when creating your own UI page objects and end-to-end UI test scripts. The framework and tests are
Behavior Driven Development (BDD) based and leverage the Cucumber framework.

For more information, including guidelines on getting started, running, and writing UI tests, see Selenium Starter Kit
on Pega Marketplace.
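To illustrate the page-object pattern that such a framework is organized around, here is a minimal sketch in Python; the starter kit defines its own framework classes, so the class name, locators, and URL below are purely illustrative assumptions:

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    # Illustrative page object; the element locators and URL path are placeholders.
    def __init__(self, driver):
        self.driver = driver

    def open(self, base_url):
        self.driver.get(base_url + "/prweb")

    def log_in(self, user, password):
        self.driver.find_element(By.ID, "txtUserID").send_keys(user)
        self.driver.find_element(By.ID, "txtPassword").send_keys(password)
        self.driver.find_element(By.ID, "sub").click()

driver = webdriver.Chrome()
page = LoginPage(driver)
page.open("http://myapp.example.com")
page.log_in("operator.id@example.com", "password")
driver.quit()

Keeping locators inside page objects like this means that a UI change requires updating one class rather than every test script, which is the maintainability goal that the starter kit is built around.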

Creating UI-based tests with scenario testing


Run scenario tests against a user interface to verify that the end-to-end scenarios are functioning correctly. The UI-
based scenario testing tool allows you to focus on creating functional and useful tests, rather than writing complex
code.

You can test either a specific case type or an entire portal by clicking Scenario Testing in the run-time toolbar to
open the test recorder. When you use the test recorder and hover over a testable element, an orange highlight
indicates that the element can be tested. Interactions are recorded in a visual series of steps and the execution of a
test step can include a delay.

Provide data to your test cases with a predefined data page. This data page can provide unique values for each
execution of the test case. You can populate the data page by using any source, including activities or data
transforms.

Tests are saved in a test ruleset. After they are saved, tests are available on the Application: Scenario testing
landing page. From the landing page you can run a test or view the results of a previous test run.

Creating scenario tests

Record a set of interactions for a case type or portal in scenario tests. You can run these tests to verify and
improve the quality of your application.

Opening a scenario test case

You can view a list of the scenario test cases that have been created for your application and select the one
that you want to open.

Updating scenario tests

Ensure that the test covers your current portal or case type scenario by updating an existing scenario test
when the user interface or process flow changes. You can save time and effort by keeping existing tests
functional instead of creating new ones.

Running scenario tests

Grouping scenario tests into suites

Group related scenario tests into test suites to run multiple scenario test cases in a specified order. You can
then run the scenario test suites as part of purpose-specific tests, such as smoke tests, regression tests, or
outcome-based tests. Additionally, you can disable or quarantine individual scenario tests for an application so
that they are not executed when the test suite runs.

Application: Scenario testing landing page

The scenario testing landing page provides a graphical test creation tool that you can use to increase test
coverage without writing complex code.

Creating scenario tests


Record a set of interactions for a case type or portal in scenario tests. You can run these tests to verify and improve
the quality of your application.

Create a test ruleset in which to store the scenario test. For more information, see Creating a test ruleset to store
test cases.

1. Launch the portal in which you want to record the test.

2. On the lower right part of the screen, toggle the run-time toolbar, and then click the Toggle Automation
Recorder icon.

3. In the Scenario tests pane, click the Create test case button, and then select the test type:

To record a test for a portal, select Portal.


To record a test for a case, select Case type.
When you select the case type, a new case of that type is created.

4. Record the steps for the test by clicking the user interface elements.

When you hover over a testable element, an orange highlight box appears. When you click an element, you
record an implicit assertion and add the interaction to the list of test steps.
5. Optional:

To add an explicit assertion to the test, do the following steps:

a. Hover over an element.


b. Click the Mark for assertion icon on the orange highlight box.

c. In the Expected results section, click Add assertion.

d. Define the assertion by completing the Name, Comparator, and Value fields.

e. Click Save step.

6. When you finish adding steps, in the Test case pane, click Stop and save test case.

7. On the New test case form, save the test:

a. Enter a name and a description for the test.

b. In the Context section, select a branch or ruleset in which you want to save the test.

c. In the Apply to field, enter the name of a class that is relevant to the test.

d. Click Save.

The test case appears on the Scenario tests landing page.


Opening a scenario test case


You can view a list of the scenario test cases that have been created for your application and select the one that you
want to open.

1. In the navigation pane of Dev Studio, click Configure > Application Quality > Automated Testing > Scenario
Testing > Test Cases.

2. In the Test case name column, click the test case that you want to open.


Updating scenario tests


Ensure that the test covers your current portal or case type scenario by updating an existing scenario test when the
user interface or process flow changes. You can save time and effort by keeping existing tests functional instead of
creating new ones.

Create a scenario test case for a portal or case type. For more information, see Creating scenario tests.

1. Launch the portal in which you want to run the test.

2. On the lower right part of the screen, toggle the run-time toolbar, and then click the Toggle Automation
Recorder icon.

3. In the Scenario tests pane, click the name of the test that you want to edit, and then click Edit.

4. Update the test sequence by clicking the More icon next to a step, and then selecting an action:

To remove a step from the test case, click Remove step.


To add a step to the test case, click Add steps.

The test runs from the start and stops at the selected step so that you can add steps to the specific part of
the test sequence.

To record the test case again from a specific step, click Re-record from here.

All steps after the selected step are removed. The test runs from the start and stops at the selected step
so that you can add steps to the end of the test sequence.

5. Optional:

To modify a step, click the Edit icon next to a step, modify the assertions and other properties of the step, and
then click Save step.

6. Click Save.
Your scenario test case now matches the current user interface and process flow. Run the test to check the quality
of your current portal or case type scenario.


Running scenario tests


Once the test case is saved, it is listed in the right panel. When you click the test, the panel refreshes and shows
the detailed steps. The same panel has Run and Debug & Modify buttons.

Run mode: Clicking Run runs the test in normal mode. If a step fails in this mode, test execution does not stop.

Debug & Modify mode: Clicking Debug & Modify runs the test in debug mode. If a test step failure is encountered,
test execution stops at that step and you can either save the step (if you changed it) or resume the run. Clicking
Save saves and checks in the changes made to the step.

Recorded test cases and reports of test case executions are available on the scenario testing landing page, at
Application > Quality > Automated Testing > Scenario Testing.

Test cases can also be run from the landing page.

Running scenario tests in Dev Studio

Run a scenario test case to validate UI functionality.

Running scenario tests in Dev Studio


Run a scenario test case to validate UI functionality.

1. Open the test case.

2. Complete one of the following actions:

To run multiple test cases, select the test cases that you want to run, and then click Run selected.
To run a disabled test case or a single test case, click the test case to open it, and then click Actions > Run.

Grouping scenario tests into suites


Group related scenario tests into test suites to run multiple scenario test cases in a specified order. You can then
run the scenario test suites as part of purpose-specific tests, such as smoke tests, regression tests, or outcome-
based tests. Additionally, you can disable or quarantine individual scenario tests for an application so that they are
not executed when the test suite runs.

Creating scenario test suites

To create a scenario test suite, add scenario test cases to the suite and then specify the order in which you
want the tests to run. You can also modify the context in which to save the scenario test suite, such as the
development branch or the ruleset.

Running scenario test suites

Run scenario test suites to check application functionality. You can check the run history, add or remove test
cases from the suite, or reorder the test cases before running the suite.

Viewing scenario test suite results

After you run a scenario test suite, you can view the test results. For example, you can view the expected and
actual output for assertions that did not pass.

Creating scenario test suites
To create a scenario test suite, add scenario test cases to the suite and then specify the order in which you want the
tests to run. You can also modify the context in which to save the scenario test suite, such as the development
branch or the ruleset.

When the test suite runs, the test cases run in the order that they are listed. You can reorder cases only on the page
in which they display and cannot move cases or suites from one page to another.

1. In the header of Dev Studio, click Configure > Application Quality > Automated Testing > Scenario Testing >
Test Suites.

2. Click Create new suite.

3. Optional:

In Description, enter information that you want to include with the test suite. For example, enter information
about when to run the test suite.

4. In the Category list, click the type of scenario test suite you are creating:

To informally test a feature, select Ad-hoc.


To verify critical application functionality, select Smoke.
To confirm that changes have not adversely affected other application functionality, select Regression.

5. In the Scenario test cases section, click Add, select the test cases you want to add to the suite, and then click
Add.

To filter information by multiple criteria, click the Advanced filter icon.


6. Optional:

To change the order in which the test cases run, drag the case to a different position in the sequence.

7. Save the scenario test suite:

a. Click Save, and then enter a Label that describes the purpose of the test suite.

Pega Platform automatically generates the Identifier based on the label you provide. The identifier
identifies the scenario test suite in the system and must be unique to the system. To change the identifier,
click Edit.
b. Optional:

In the Context section, change details about the environment in which the test suite will run. You can:

Change the development branch in which to save the scenario test suite.
Select a different application for which to run the scenario test suite.
Change the class to apply to the scenario test suite.
Select a different ruleset in which to save the scenario test.

c. Click Submit.


Running scenario test suites


Run scenario test suites to check application functionality. You can check the run history, add or remove test cases
from the suite, or reorder the test cases before running the suite.

1. In the header of Dev Studio, click Configure > Application Quality > Automated Testing > Scenario Testing >
Test Suites.

2. Optional:

View or modify the test cases included in a test suite.

a. Click the name of the suite.

b. View summary information about previous test results in the header.

To view more information about the latest test results, click View details.
To view information about earlier results, click View previous runs.

c. Modify the test cases in the suite from the Scenario test cases section.

To remove test cases from the suite, click Remove, and then click Save.
To include additional test cases in the suite, click Add, select the test case, click Add, and then click
Save.

d. To change the order in which the test cases will run, drag a case to a different position in the sequence
and then click Save.

e. To prevent individual test cases from running as part of the suite, select the case, click Disable, click Save,
and then close the test case.

f. Close the test suite, return to the Application: Scenario testing page, and then click Actions > Refresh.

3. Optional:

To view details about previous test results, click View in the Run history column.

4. Select the check box for each test suite that you want to run and then click Run selected. The test suites run
and the Result column is updated with the result, which you can click to open test results.

Grouping scenario tests into suites

Group related scenario tests into test suites to run multiple scenario test cases in a specified order. You can
then run the scenario test suites as part of purpose-specific tests, such as smoke tests, regression tests, or
outcome-based tests. Additionally, you can disable or quarantine individual scenario tests for an application so
that they are not executed when the test suite runs.

Running scenario test suites

Run scenario test suites to check application functionality. You can check the run history, add or remove test
cases from the suite, or reorder the test cases before running the suite.

Viewing scenario test suite results

After you run a scenario test suite, you can view the test results. For example, you can view the expected and
actual output for assertions that did not pass.

Creating scenario tests

Record a set of interactions for a case type or portal in scenario tests. You can run these tests to verify and
improve the quality of your application.

Application: Scenario testing landing page

The scenario testing landing page provides a graphical test creation tool that you can use to increase test
coverage without writing complex code.

Viewing scenario test suite results


After you run a scenario test suite, you can view the test results. For example, you can view the expected and actual
output for assertions that did not pass.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Scenario Testing > Test Suites.

2. To view results of the most recent run, click the result in the Result column. For information about why a test
failed, click Failed in the Result column.

3. To view historical details about a specific test suite, in the Run history column, click View.

Grouping scenario tests into suites


Group related scenario tests into test suites to run multiple scenario test cases in a specified order. You can
then run the scenario test suites as part of purpose-specific tests, such as smoke tests, regression tests, or
outcome-based tests. Additionally, you can disable or quarantine individual scenario tests for an application so
that they are not executed when the test suite runs.

Creating scenario test suites

To create a scenario test suite, add scenario test cases to the suite and then specify the order in which you
want the tests to run. You can also modify the context in which to save the scenario test suite, such as the
development branch or the ruleset.

Running scenario test suites

Run scenario test suites to check application functionality. You can check the run history, add or remove test
cases from the suite, or reorder the test cases before running the suite.

Creating scenario tests

Record a set of interactions for a case type or portal in scenario tests. You can run these tests to verify and
improve the quality of your application.

Application: Scenario testing landing page

The scenario testing landing page provides a graphical test creation tool that you can use to increase test
coverage without writing complex code.

Application: Scenario testing landing page


The scenario testing landing page provides a graphical test creation tool that you can use to increase test coverage
without writing complex code.

On the scenario testing landing page, you can view and run scenario test cases. By viewing reports, you can also
identify case types and portals that did not pass scenario testing.

On the scenario testing landing page, you can perform the following tasks:

View test execution and coverage information for case type tests and portal tests.
Open a test case rule where you can add assertions to your test.
View the results of the most recent test run.
Select and run individual test cases, or group tests into test suites that you can use to run multiple tests in a
specified order.
Download a list of tests, their type, the name of a portal or case that is tested, and the time and the result of
the last run.

Creating scenario tests

Record a set of interactions for a case type or portal in scenario tests. You can run these tests to verify and
improve the quality of your application.

Updating scenario tests

Ensure that the test covers your current portal or case type scenario by updating an existing scenario test
when the user interface or process flow changes. You can save time and effort by keeping existing tests
functional instead of creating new ones.

Understanding model-driven DevOps with Deployment Manager


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega
applications from within Pega Platform. You can create a standardized deployment process so that you can deploy
predictable, high-quality releases without using third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application
package generation, artifact management, and package promotion to different stages in the workflow.You can
download Deployment Manager for Pega Platform from the Deployment Manager Pega Marketplace page.

For answers to frequently asked questions, see the Deployment Manager FAQ page.

Deployment Manager release notes

These release notes provide information about enhancements, known issues, issues related to updating from a
previous release, and issues that were resolved in each release of Deployment Manager.

Getting started with Deployment Manager

Deployment Manager is a simple, intuitive, and ready-to-use application that offers built-in DevOps capabilities
to users. It leverages Pegasystems' market-leading case management technology to manage an automated
orchestration engine, enabling you to build and run continuous integration and continuous delivery (CI/CD)
pipelines in a model-driven manner.
Understanding Deployment Manager architecture and workflows

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a standardized deployment process so that you
can deploy predictable, high-quality releases without using third-party tools.

Understanding best practices for using branches with Deployment Manager

Follow these best practices when you use branches in your Deployment Manager pipelines. The specific
practices depend on whether you have a single development team or multiple development teams in a
distributed environment.

Managing test cases separately in Deployment Manager

In Deployment Manager 4.4.x and later, you can package and deploy test cases separately on the candidate
systems in the pipeline. When you configure a pipeline in Deployment Manager, you specify the details of the
test package that you want to deploy, including the stage in the pipeline until which you want to deploy the
package.

Creating and using custom repository types for Deployment Manager

In Deployment Manager 3.1.x and later, you can create custom repository types to store and move your
artifacts. For example, you can create a Nexus repository and use it similarly to how you would use a Pega
Platform-supported repository type such as file system. By creating custom repository types, you can extend
the functionality of Deployment Manager through the use of a wider variety of repository types with your
artifacts.

Configuring Deployment Manager 4.x for Pega Platform 7.4

You can use Deployment Manager 4.x if Pega Platform 7.4 is installed on your candidate systems
(development, QA, staging, and production). You can use many of the latest features that were introduced in
Deployment Manager 4.x, such as managing your deployments in a dedicated portal.

Deployment Manager 4.7.x

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a consistent deployment process so that you can
deploy high-quality releases without the use of third-party tools.

Deployment Manager 3.4.x

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a standardized deployment process so that you
can deploy predictable, high-quality releases without using third-party tools.

Obtaining deprecated Deployment Manager documentation

The Deployment Manager releases for the corresponding versions of documentation are no longer available to
be downloaded from Pega Marketplace.

Deployment Manager release notes


These release notes provide information about enhancements, known issues, issues related to updating from a
previous release, and issues that were resolved in each release of Deployment Manager.

For answers to frequently asked questions, see the Deployment Manager FAQ page.

Deployment Manager 4.7.1

Deployment Manager 4.7.1 includes the following enhancements.

Deployment Manager 4.6.1

Deployment Manager 4.6.1 includes the following enhancements and resolved issues.

Deployment Manager 4.5.1

Deployment Manager 4.5.1 includes the following enhancements and resolved issues.

Deployment Manager 4.4.2

Deployment Manager 4.4.2 includes the following resolved issues.

Deployment Manager 4.4.1

Deployment Manager 4.4.1 includes the following enhancements and known issues.

Deployment Manager 4.3.2

Deployment Manager 4.3.2 includes the following resolved issues.


Deployment Manager 4.3.1

Deployment Manager 4.3.1 includes the following enhancements.

Deployment Manager 4.2.1

Deployment Manager 4.2.1 includes the following enhancements.

Deployment Manager 4.1.1

Deployment Manager 4.1.1 includes the following enhancements.

Deployment Manager 3.4.1

Deployment Manager 3.4.1 includes the following enhancements.

Deployment Manager 3.3.1

Deployment Manager 3.3.1 includes the following enhancements and known issues.

Deployment Manager 3.2.1

Deployment Manager 3.2.1 includes the following enhancements.

Deployment Manager 3.1.1

Deployment Manager 3.1.1 includes the following enhancements.

Deployment Manager 2.1.4

Deployment Manager 2.1.4 includes the following resolved issues.

Deployment Manager 2.1.3

Deployment Manager 2.1.3 includes the following enhancements.

Deployment Manager 2.1.2

Deployment Manager 2.1.2 includes the following known issues.

Deployment Manager 1.1.3

Deployment Manager 1.1.3 includes the following enhancements.

Deployment Manager 1.1.2

Deployment Manager 1.1.2 includes the following known and resolved issues.

Deployment Manager 4.7.1


Deployment Manager 4.7.1 includes the following enhancements.

Enhancements
The following enhancements are available in this release:

Stop all ongoing deployments for a pipeline at once.

You can now stop all the ongoing deployments for a pipeline at once. Stop all deployments to quickly
troubleshoot issues and resolve failed pipelines.

Use a chatbot to obtain information about common issues.

You can now use a self-service chatbot to obtain troubleshooting tips and more information about common
Deployment Manager issues. When you search for information, the chatbot provides you with answers and links
to more information.

Troubleshoot pipelines with enhanced diagnostics.

Deployment Manager now provides enhanced diagnostics so that you can troubleshoot more issues. You
receive warnings if you are using the defaultstore repository or a Pega repository type in any environment.

Perform new tasks with usability enhancements.

Usability enhancements in Deployment Manager 4.7.1 now include:

Start another pipeline by using the Trigger deployment task in an active pipeline, which allows you to add
pipeline stages.
Stop a deployment if a Jenkins task in the pipeline fails.
Archive inactive pipelines. By default, archived pipelines do not appear in the Deployment Manager
interface.
Temporarily disable pipelines that frequently fail to prevent additional deployments on the pipeline.
Start a new test coverage session for the Enable test coverage task every time you run a pipeline. Starting
a new session prevents deployments from failing if a test coverage session is already running on the
pipeline.
Filter pipelines by application name and version on the Deployment Manager landing page.
In deployment logs, view all the new and changed rule and data instances that are in an application package
that is imported into a candidate system.

Use APIs for new features.

Deployment Manager now provides APIs so that you can use the following features programmatically in your applications:

Run diagnostics remotely, and retrieve diagnostics results.


Disable and enable pipelines.
Archive and unarchive pipelines.

The Documentation/readme-for-swagger.md file in the DeploymentManager04_07_0x.zip file provides documentation about API
usage.
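
For orientation only, the following minimal sketch shows how such APIs might be called over REST with basic authentication. The host, credentials, and endpoint paths are illustrative assumptions rather than the documented routes; consult the readme-for-swagger.md file for the actual API contract.

# Hedged sketch of calling Deployment Manager APIs; paths are placeholders.
import requests
from requests.auth import HTTPBasicAuth

BASE_URL = "https://orchestrator.example.com/prweb/api"  # assumed host and root
AUTH = HTTPBasicAuth("DMReleaseAdmin", "password")      # assumed credentials

def disable_pipeline(pipeline_id: str) -> None:
    """Disable a pipeline so that no new deployments start on it."""
    response = requests.post(
        f"{BASE_URL}/pipelines/{pipeline_id}/disable",  # hypothetical route
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()

def run_diagnostics(pipeline_id: str) -> dict:
    """Trigger diagnostics remotely and return the parsed results."""
    response = requests.post(
        f"{BASE_URL}/pipelines/{pipeline_id}/diagnostics",  # hypothetical route
        auth=AUTH,
        timeout=120,
    )
    response.raise_for_status()
    return response.json()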

Deployment Manager 4.6.1


Deployment Manager 4.6.1 includes the following enhancements and resolved issues.

Enhancements
The following enhancements are available in this release:

Ability to use Deployment Manager to automate data migration pipelines

Data migration pipelines allow you to export data from a production environment to a simulation environment
where you can safely test the impact of changes made to your decision framework without having to deploy to
a production environment. You can now use Deployment Manager to create data migration pipelines that allow
you to automatically export data from a production environment and import it into a simulation environment.
Additionally, you can configure a job scheduler rule to run pipelines during a specified period of time.

For a tutorial on configuring simulation pipelines, including how to use Deployment Manager with them, see
Deploying sample production data to a simulation environment for testing.

For more information about configuring and using simulation pipelines with Deployment Manager, see Data
migration pipelines with Deployment Manager 4.6.x.

Ability to provide access to Dev Studio to a role

You can now allow a role to access Dev Studio, and all the users of that role can switch to Dev Studio from the
Operator icon. By being able to switch to Dev Studio, users can access Dev Studio tools to further troubleshoot
issues that Deployment Manager cannot diagnose.

Ability to easily move to new orchestration systems by configuring a dynamic system setting

When you move from an existing orchestration system to a new one, you can now configure a dynamic system
setting that specifies the URL of the new orchestration system.

Resolved issues in Deployment Manager 4.6.1


The following issues were resolved in this release:

The position of the Validate test coverage task was not retained.

If you added a Validate test coverage task in a pipeline, the task automatically moved under the Add task
menu option after you saved the pipeline configuration. The position of the task is now saved.

Deployment Manager installation failed on IBM Db2.

Deployment Manager installations on systems running on Db2 failed with a database error. You can now install
Deployment Manager on Db2.

Not all API requests included PRRestService.

Some HTTP requests to the api service package did not include PRRestService in the URL. PRRestService is now
included in all requests when it is needed to direct traffic to the API node.

Tasks could not be added before the Deploy task in Deployment Manager 4.5.1 when using the API.

When you used the API to create pipelines, you could not add tasks before the Deploy task, although you could
add a task when you configured the pipeline in Deployment Manager. You can now add tasks before the Deploy
task with the API.
Test changes in branches were merged into incorrect ruleset versions.

Sometimes, test changes in branches were merged into an incorrect ruleset version if multiple application
versions were used and a test application was configured on the pipeline. Test changes in branches are now
merged into the correct ruleset versions.

Deployment Manager displayed a message for reaching the limit for pending changes.

Sometimes, Deployment Manager displayed an error message that you reached the maximum limit for pending
changes. The limit has been increased, and the error no longer appears.

The Jenkins configuration diagnostics check failed when cross-site request forgery (CSRF) protection was
disabled.

When CSRF protection was disabled in Jenkins, pipeline diagnostics for Jenkins configuration failed with an error
message that the Jenkins server was not reachable, even though the Jenkins task in the pipeline worked
correctly. Jenkins diagnostics checks no longer fail in this scenario.

Deployment Manager 4.5.1


Deployment Manager 4.5.1 includes the following enhancements and resolved issues.

Enhancements
The following enhancements are provided in this release:

Ability to add tasks before Deploy and Publish tasks

For additional validation or environment provisioning, you can now add any task before the Deploy and Publish
tasks, which are automatically added to the pipeline. You can add tasks before the Deploy task in any stage of
the pipeline or before the Publish task in the development stage.

Ability to associate bugs and user stories to branch merges

When you start a deployment by submitting a branch into the Merge Branches wizard, you can now associate
user stories and bugs from Agile Workbench so that you can track branch merges.

New REST API to deploy existing artifacts

Deployment Manager now provides a REST API to deploy existing artifacts so that you can start a production
pipeline with the output of the development pipeline for the same application. You can view the
Documentation/readme-for-swagger.md file for more information on using the API.
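
As a sketch only, a call to this API might look like the following; the route and payload field are assumptions for illustration, so rely on the readme-for-swagger.md file for the documented request shape.

import requests
from requests.auth import HTTPBasicAuth

def deploy_existing_artifact(pipeline_id: str, artifact_path: str) -> None:
    # Hypothetical route and payload; the real contract is in readme-for-swagger.md.
    response = requests.post(
        f"https://orchestrator.example.com/prweb/api/pipelines/{pipeline_id}/deployments",
        json={"artifactPath": artifact_path},
        auth=HTTPBasicAuth("DMReleaseAdmin", "password"),  # assumed credentials
        timeout=60,
    )
    response.raise_for_status()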

Ability to access and pass all relevant parameters of the current deployment for Jenkins tasks

For Jenkins tasks, you can now access and pass all the relevant Jenkins parameters for the current deployment,
which include PipelineName, DeploymentID, RepositoryName, and ArtifactPath. When you configure the
Jenkins task in a pipeline, the values of the parameters are automatically populated.

More diagnostics to troubleshoot pipelines

You can now automatically diagnose more issues with your pipeline so that you spend less time manually
troubleshooting. For example, you can now verify that Jenkins steps are properly configured, and you can also
obtain more information about repository connections with enhanced troubleshooting tips.

Elimination of post-upgrade steps when upgrading from Deployment Manager versions 3.2.1 and later

For upgrades from Deployment Manager 3.2.1 or later to version 4.5.1, you no longer need to run activities or
do any other post-upgrade steps. After the upgrade completes, Deployment Manager performs health checks
before running post-upgrade steps for both on-premises and Pega Cloud Services environments.

Resolved issues
The following issue is resolved in Deployment Manager 4.5.1:

Unable to configure keystores in Pega Cloud Services environments

If your target environment is SSL-enabled with private certificates, you can now set the keystore for
Deployment Manager connectors so that they can receive and process tokens. You first configure a keystore
and then update a dynamic system setting to reference the keystore ID. For more information, see "Step 3a:
Configuring authentication profiles on the orchestration server and candidate systems" for your version of
Installing, upgrading, and configuring Deployment Manager.

Deployment Manager 4.4.2


Deployment Manager 4.4.2 includes the following resolved issues.
Resolved issues
The following issues were resolved in this release:

Incorrect status displayed for the Run Pega unit test task

If you refreshed a merge request quickly, the status of the Run Pega unit tests task might have been incorrectly
displayed as the status of the merge. The correct status for the task is now displayed.

Duplicate operator IDs displayed for the Manual task

When you assigned manual tasks to an operator ID, the Manual task auto-complete displayed duplicate entries
for the same operator ID if the operator ID was added as an administrator or user for multiple applications. The
Manual task no longer displays duplicate entries.

Pipeline deployments sometimes froze

Sometimes, a pipeline deployment might freeze if it could not update the task with the status that it received
from the task. The pipeline no longer freezes.

No error messages displayed for issues with artifacts and repositories

The Deploy existing artifact dialog box now validates the repository that you select. Error messages are also
displayed when the repository does not list available artifacts or if the repository does not have any artifacts in
it.

Verify security checklist task failed and displayed a Pega Diagnostic Cloud (PDC) error

The Verify security checklist task failed when a pipeline had only one stage (development) and the Production ready
check box was selected on the pipeline configuration. A PDC error message was displayed. The task no longer
fails for pipelines with such a configuration.

32-character token limit for Jenkins tasks

For the Jenkins task, you could enter a token of only up to 32 characters to remotely start a Jenkins job. You
can now enter a token with more than 32 characters.

Dependent applications were not deployed

Dependent applications that were configured on a pipeline were not deployed. They are now deployed
correctly.

Deployment Manager 4.4.1


Deployment Manager 4.4.1 includes the following enhancements and known issues.

Enhancements
The following enhancements are provided in this release:

Simplified configuration and workflow when merging branches in a distributed branch-based environment

The process for merging branches in distributed branch-based environments has been simplified. On the
remote development system, you can now merge branches and start a deployment by using the Merge
Branches wizard to merge branches onto the source development system without having to use a Pega
repository type.

Ability to submit locked branches to the Merge Branches wizard

You can now submit locked branches to the Merge Branches wizard so that you can follow best practices when
working with branches. Best practices include locking branches to prevent changes from being made to them.

Using the Merge Branches wizard to make merge requests now stores the branch in the development
repository

When you use the Merge Branches wizard to merge branches and start a deployment, the wizard now stores
the branch in the development repository. Also, after the merge is completed, Deployment Manager deletes
the branch from the development system. By storing branches in the development repository, Deployment
Manager keeps a history, which you can view, of the branches in a centralized location.

Ability to create separate product rules for test cases

You can now separately manage both application changes and test cases in the same pipeline by using a
separate product rule that contains only test cases. You can also choose a stage until which test cases are
deployed to ensure that test cases are not deployed on environments such as staging and production, where
they might not be needed. When you create test and production applications in Deployment Manager on your
development system by using the New Application wizard, the wizard automatically creates separate product
rules for your production and test applications.
API documentation now available

Documentation for Deployment Manager APIs is now included in the Documentation/readme-for-swagger.md
file. This file is included in the DeploymentManager04_04_0x.zip file, which you can download from Pega
Exchange. For example, you can use the APIs to quickly create pipelines without using the Deployment Manager
interface.

Usability enhancements
For the Check guardrail compliance task, the default guardrail compliance score has been increased to 97.
Email notifications for Jenkins jobs now include a link to the Jenkins job.
You can now start a Jenkins job when Jenkins has cross-site request forgery (CSRF) protection enabled.
For pipelines that have Jenkins tasks, job history details for successful deployments have a link to the
Jenkins job.
The Pipeline list in the Merge Branches wizard no longer displays pipelines that are not configured to
support branches; previously, you received an error after submitting pipelines that did not support
branches.
If you are using the Merge Branches wizard but do not have pipelines configured for an application, you
can still use the wizard to merge branches into target applications.

Known issues
The following are known issues in this release:

The Pega Platform 8.1 and 8.2 versions of the Rule rebasing and Rebasing rules to obtain latest versions help
topics should state that rule rebasing is supported in Deployment Manager.
The Publishing a branch to a repository help topic should state that you can use Deployment Manager to start a
deployment by publishing a branch to the source development system even if you have multiple pipelines per
application version. Also, the note in this help topic no longer applies.

Deployment Manager 4.3.2


Deployment Manager 4.3.2 includes the following resolved issues.

Resolved issues
The following issue has been resolved:

Pipelines not visible on the Deployment Manager landing page

On systems running Pega CRM applications, pipelines were not visible on the Deployment Manager landing
page when the datapage/newgenpages dynamic system setting was set to false. This setting disabled the new
clipboard implementation for optimized read-only data pages. Pipelines are now visible regardless of the
dynamic system setting value.

Deployment Manager 4.3.1


Deployment Manager 4.3.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Ability to configure notifications in Deployment Manager

You can now configure notifications in Deployment Manager without having to configure an email account and
listener in Dev Studio. You can also choose which notifications to receive, such as whether Pega unit test tasks
succeeded or failed. You can receive notifications through email, in the notification gadget, or both, and you
can create custom notification channels to receive notifications through other means, such as text messages or
mobile push notifications.
To use notifications, you must install or upgrade to Pega Platform 8.1.3 on the orchestration server.

Publishing application changes has been consolidated with viewing application versions in App Studio

You can now publish application changes in App Studio and view information about your Deployment Manager
application versions on one page. By accessing publishing features and viewing information in one place, you
can more intuitively use Deployment Manager with App Studio.

Deployment Manager 4.2.1


Deployment Manager 4.2.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:
Ability to add and manage roles, privileges, and users

Deployment Manager now provides default roles that specify privileges for super administrators and application
administrators. Super administrators can add roles and specify their privileges, and both super administrators
and application administrators can add users and assign them roles for specified applications. By specifying
roles and privileges for Deployment Manager users, you can manage your users more effectively by controlling
access to features for each type of user.

New Deployment Manager portal

Deployment Manager now provides a dedicated Deployment Manager portal that does not require access to
the Dev Studio portal to access Deployment Manager features. The portal also provides enhancements such as
a navigation panel from which you can easily access features such as reports, without having to open specific
pipelines. Additionally, when you add a pipeline or modify pipeline settings, you can now open the rule forms
for repositories and authentication profiles in Dev Studio from within Deployment Manager.

Ability to merge branches that span multiple application layers

You can now merge a branch that has rulesets that are in multiple applications if all the rulesets are in the
application stack for the pipeline application. By doing so, you can, for example, merge changes that affect
both a framework and an application layer. You can also merge test assets with the rules that you are testing
without the test assets and rules being in the same application.

Deployment Manager 4.1.1


Deployment Manager 4.1.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Redesigned, more intuitive landing page and user interface

Deployment Manager has been redesigned to have a more intuitive interface so that you can quickly access
features as you interact with your pipeline. The Deployment Manager landing page now displays a snapshot of
your pipeline configuration, which provides status information such as whether a deployment failed and on
what stage the failure occurred. Additionally, when you click a pipeline to open it, Deployment Manager now
displays important information about your pipeline such as the number of branches that are queued for
merging on the development system.

Manage aged updates

You can now manage rules and data types, which are in an application package, that are older than the
instances that are on a system. By importing aged updates, skipping the import, or manually deploying
application packages on a system, you have more flexibility in determining the application contents that you
want to deploy.

New testing tasks, which include running Pega scenario tests

Several new test tasks have been added so that you deliver higher quality software by ensuring that your
application meets the test criteria that you specify. On the candidate systems in your pipeline, you can now
perform the following actions:
Run Pega scenario tests, which are end-to-end, UI-based tests that you create within Pega Platform.
Start and stop test coverage at the application level to generate a report that identifies the executable
rules in your application that are covered or not covered by tests.
Refresh the Application Quality dashboard with the latest information so that you can see the health of
your application and identify areas that need improvement before you deploy your application.

Enhancements to publishing application changes to a pipeline in App Studio

You can submit application changes to a pipeline in App Studio to start a deployment in Deployment Manager.
The following enhancements have been made:
When you submit application changes into a pipeline, patch versions of the main application are now
created.
You can now add comments, which will be published with your application.
You can now associate user stories and bugs with an application.
You can now view information, such as who published the application and when, for the application versions
that you have submitted.
Run Pega unit tests on branches before merging

You can now run Pega unit tests on branches before they are merged in the pipeline for either the pipeline
application or an application that is associated with an access group. By validating your data against Pega unit
tests, you can deploy higher quality applications.

Deployment Manager 3.4.1


Deployment Manager 3.4.1 includes the following enhancements.
Enhancements
The following enhancements are provided in this release:

Manage aged updates

You can now manage rules and data types, which are in an application package, that are older than the
instances that are on a system. By importing aged updates, skipping the import, or manually deploying
application packages on a system, you have more flexibility in determining the application contents that you
want to deploy.

Ability to merge branches that span multiple application layers

You can now merge a branch that has rulesets that are in multiple applications if all the rulesets are in the
application stack for the pipeline application. By doing so, you can, for example, merge changes that affect
both a framework and an application layer. You can also merge test assets with the rules that you are testing
without the test assets and rules being in the same application.

Deployment Manager 3.3.1


Deployment Manager 3.3.1 includes the following enhancements and known issues.

Enhancements
The following enhancements are provided in this release:

New Verify security checklist task

You can now use the Verify security checklist task to ensure that your pipeline complies with security best
practices. It is automatically added to the stage before production when you create a pipeline.

Ability to diagnose pipelines

You can now diagnose your pipeline to verify information such as whether the target application and product
rule are on the development environment, connectivity between systems and repositories is working, and pre-
merge settings are correctly configured. You can also view troubleshooting tips and download logs.

Known issues
The following known issue exists in this release:

Rollback does not work for Pega CRM applications

If you are using a CRM application, you cannot roll back a deployment to a previous deployment.

Deployment Manager 3.2.1


Deployment Manager 3.2.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Simplified pipeline setup

Pipeline setup has been simplified when you install Deployment Manager and when you configure pipelines.
The following enhancements have been made:
Deployment Manager now provides the Pega Deployment Manager application with default operators and
authentication profiles when you install it. You do not need to create authentication profiles for
communication between candidate systems and the orchestration server.
If you are using Pega Cloud, Deployment Manager is automatically populated with the URLs of all the
systems in your pipeline so that you do not need to configure them.

New Check guardrail compliance task.

You can now use the Check guardrail compliance task to ensure that the deployment does not proceed if the
application does not comply with best practices for building applications in Pega Platform. This task is
automatically added to all the stages in your pipeline.

New Approve for production task

Deployment Manager now provides an Approve for production task, which is automatically added to the stage
before production when you create a pipeline. You can assign this task to a user who approves the application
changes before the changes are deployed to production.

Ability to specify the test suite ID and access group for Pega unit testing tasks

For Pega unit testing tasks, you can now run all the Pega unit tests that are defined in a test suite for the
application pipeline. By using a test suite ID, you can run a subset of Pega unit tests instead of all Pega unit
tests for a pipeline application. You can also run all the Pega unit tests for an application that is associated with
an access group so that you can run Pega unit tests for an application other than the pipeline application.

Deployment Manager now supports first time deployments

Deployment Manager now supports first-time deployments, so you do not have to import your application into
each Pega Platform server on your candidate systems the first time that you configure Deployment Manager.

Deployment Manager 3.1.1


Deployment Manager 3.1.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Ability to create custom repository types

You can now create custom repository types and manage your artifacts with them when you use Deployment
Manager. For example, you can create a Nexus repository type and use it to move your application package
between candidate systems in a pipeline. By creating custom repository types, you can use a wider variety of
repository types with your artifacts to extend the functionality of Deployment Manager.

Use the Merge Branches wizard to submit branches into a continuous integration and delivery pipeline.

You can now submit branches into a continuous integration and delivery (CI/CD) pipeline by using the Merge
Branches wizard in Designer Studio. Deployment Manager can then run pre-merge criteria on branches on one
system so that you do not need to configure additional systems for both branch development and merging.

Support for Pega Cloud.

Beginning with Pega 7.4, all current and new Pega Cloud customers have a free dedicated sandbox to run
Deployment Manager, which provides the following features:
Default repositories that store and move your application package between systems in the pipeline.
Ability to view, download, and remove application packages from repositories so that you can manage
your cloud storage space.
Ability to deploy an existing application package.
Ability to create multiple pipelines for one version of an application. For example, you can create a
pipeline with only a production stage if you want to deploy a build to production separately from the rest of
the pipeline.

Ability to manage application package artifacts.

You can now browse, download, and delete application package artifacts from the orchestration server. You do
not have to log in to repositories to delete artifacts from them.

Ability to move existing artifacts through pipelines.

You can move existing artifacts through your pipelines. Existing artifacts are maintained in repositories, and
you can move them through progressive stages in the pipeline.

Deployment Manager 2.1.4


Deployment Manager 2.1.4 includes the following resolved issues.

Issues addressed in this release


The following issue was addressed in this release:

Publishing application packages to the production repository sometimes fails in multinode environments

In multinode staging environments, a node retrieves an application package from the development repository
and places it into its service export folder to be published to the production repository. However, Deployment
Manager sometimes could not publish the package to the production repository, because the request might have
been sent to a different node. This issue has been fixed so that if Deployment Manager sends a request to a
node that does not have the application package, that node retrieves the package from the development
repository and publishes it to the production repository.

Deployment Manager 2.1.3


Deployment Manager 2.1.3 includes the following enhancements.
Enhancements
The following enhancement is provided in this release:

Improved structure and content of email notifications.

Improvements have been made to the email notifications that are sent to users when an event occurs. For
example, the email that is sent when a PegaUnit test task fails now includes an attached log file that provides
details of each failed PegaUnit test case.

Deployment Manager 2.1.2


Deployment Manager 2.1.2 includes the following known issues.

Known issues
The following issue exists in this release:

The PegaDevOps-ReleaseManager agent points to the wrong access group.

Because this agent is not associated with the correct access group, it cannot process Deployment Manager
activities in the background.

To resolve the issue, after you import and install Deployment Manager 02.01.02, perform the following steps on
the orchestration server:

1. Update your Pega Platform application so that it is built on PegaDeploymentManager 02.01.02:


1. In the Designer Studio header, click the name of your application, and then click Definition.
2. In the Built on application section, in the Version field, press the Down Arrow key and select 02.01.02.
3. Click Save.
2. Update the agent schedule for the Pega-DevOps-ReleaseManager agent to use the
PegaDeploymentManager:Administrators access group.
1. In Designer Studio, click Records > SysAdmin > Agent Schedule.
2. Click the Pega-DevOps-ReleaseManager agent.
3. Click Security.
4. In the Access Group field, press the Down Arrow key and select
PegaDeploymentManager:Administrators.
5. Click Save.

Deployment Manager 1.1.3


Deployment Manager 1.1.3 includes the following enhancements.

Enhancements
The following enhancement is provided in this release:

Improved structure and content of email notifications

Improvements have been made to the email notifications that are sent to users when an event occurs. For
example, the email that is sent when a PegaUnit test task fails now includes an attached log file that provides
details of each failed PegaUnit test case.

Deployment Manager 1.1.2


Deployment Manager 1.1.2 includes the following known and resolved issues.

Known issues
The following issue exists in this release:

The PegaDevOps-ReleaseManager agent points to the wrong access group.

Because this agent is not associated with the correct access group, it cannot process Deployment Manager
activities in the background.

To resolve the issue, after you import and install Deployment Manager 01.01.02, perform the following steps on
the orchestration server:

1. Update your Pega Platform application so that it is built on PegaDeploymentManager 01.01.02:


1. In the Designer Studio header, click the name of your application, and then click Definition.
2. In the Built on application section, in the Version field, press the Down Arrow key and select 01.01.02.
3. Click Save.
2. Update the agent schedule for the Pega-DevOps-ReleaseManager agent to use the
PegaDeploymentManager:Administrators access group.
1. In Designer Studio, click Records > SysAdmin > Agent Schedule.
2. Click the Pega-DevOps-ReleaseManager agent.
3. Click Security.
4. In the Access Group field, press the Down Arrow key and select
PegaDeploymentManager:Administrators.
5. Click Save.

Resolved issues
The following issue was resolved in this release:

Selections that were made to the Start build on merge check box were not applied when editing a pipeline.

When you edit a pipeline and either select or clear the Start build on merge check box, your changes are now
applied. Additionally, the check box is cleared by default.

Getting started with Deployment Manager


Deployment Manager is a simple, intuitive, and ready-to-use application that offers built-in DevOps capabilities to
users. It leverages Pegasystems' market-leading case management technology to manage an automated
orchestration engine, enabling you to build and run continuous integration and continuous delivery (CI/CD) pipelines
in a model-driven manner.

You can run deployments involving your application updates with the click of a button, without the need for third-
party automation services such as Jenkins or Bamboo. Fully automated pipelines help to significantly reduce the
lead time to deliver value to end users.

Using a standardized way to deploy application changes with guardrail-related and testing-related best practices
that are built into the out-of-the-box CI/CD models results in substantial operational efficiencies.

For answers to frequently asked questions, see the Deployment Manager FAQ page.

Understanding key features supported

Deployment Manager provides a number of features so that you can manage your workflows. For example,
Deployment Manager supports continuous integration, continuous delivery, test execution, reporting,
diagnostics, manual approvals, deployment cancellations, change rollbacks, roles and privileges, and
notifications.

Viewing the overview video

The following video provides an overview of Deployment Manager: https://community.pega.com/video-library/overview-infinity-deployment-manager.

Installing Deployment Manager

If you are using Deployment Manager on-premises, you must first install it.

Upgrading to a new release

If you are using Deployment Manager either on-premises or on Pega Cloud Services environments, you must
perform steps to upgrade to a new release.

Setting up and configuring Deployment Manager for a quick start

Deployment Manager is ready to use out of the box. There is no need to build on top of it; however, some initial
configurations are needed before you can get started. For details about how Deployment Manager works, see
Understanding Deployment Manager architecture and workflows.

Using Deployment Manager

After you set up and configure Deployment Manager, you can begin using it to create pipelines. You can also do
a number of other tasks, such as creating Deployment Manager roles and users and configuring the
notifications that you want to receive. For detailed information, see Configuring and running pipelines with
Deployment Manager 4.7.x.

Using troubleshooting tips

If you encounter issues with Deployment Manager, you can troubleshoot it in a number of ways.

Obtaining support

If you experience problems using Deployment Manager, submit a support request to My Support Portal. For
feedback or non-urgent questions, send an email to [email protected].

Understanding key features supported


Deployment Manager provides a number of features so that you can manage your workflows. For example,
Deployment Manager supports continuous integration, continuous delivery, test execution, reporting, diagnostics,
manual approvals, deployment cancellations, change rollbacks, roles and privileges, and notifications.

Viewing the overview video


The following video provides an overview of Deployment Manager: https://community.pega.com/video-library/overview-infinity-deployment-manager.
Installing Deployment Manager
If you are using Deployment Manager on-premises, you must first install it.

On-premises users can download Deployment Manager from
https://community1.pega.com/exchange/components/deployment-manager.

For information about installing Deployment Manager, see Installing or upgrading to Deployment Manager 4.7.x.

Beginning with Pega Platform 7.4, Pega Cloud Services users have a dedicated instance in their virtual private cloud
(VPC) at the time of onboarding with Deployment Manager functionality preinstalled.

This instance is referred to as the orchestration server and contains the “DevOps” keyword in the URL.

Upgrading to a new release


If you are using Deployment Manager either on-premises or on Pega Cloud Services environments, you must
perform steps to upgrade to a new release.

On-premises users can directly download the latest release from
https://community1.pega.com/exchange/components/deployment-manager.

Pega Cloud Services users should create a support ticket to request a new release.

After you obtain the latest release, refer to the upgrade documentation for information about upgrading to the latest
release. For more information, see Installing or upgrading to Deployment Manager 4.7.x.

Setting up and configuring Deployment Manager for a quick start


Deployment Manager is ready to use out of the box. There is no need to build on top of it; however, some initial
configurations are needed before you can get started. For details about how Deployment Manager works, see
Understanding Deployment Manager architecture and workflows.

The following list of terms defines key Deployment Manager concepts:

Candidate systems – the individual environments that host the target application, typically the development,
QA, staging, and production environments.
Repository – the artifact repository that stores the application archive as defined by a product rule.
DMAppAdmin – the operator ID, provided out of the box, that is used by an application pipeline to execute all
the tasks such as deploying, running tests, checking guardrail scores, and so on.
DMReleaseAdmin – the operator ID, provided out of the box, that has administrative privileges for Deployment
Manager. This is the operator ID that you start with when you first log in to Deployment Manager.

You should make changes only in the development environment and then move them to higher environments.
Do not make changes in any other environment.

1. Enable the DMAppAdmin and DMReleaseAdmin operator IDs:

a. Log in to the orchestration server and enable the DMReleaseAdmin operator ID.

b. Log in to candidate systems (development, QA, staging, and production) and enable the DMAppAdmin
operator ID. Ensure that the same password is set on all environments.

c. On the orchestration server, open the DMAppAdmin authentication profile and set the password to the
DMAppAdmin operator ID password that you set in step 1b.

d. On all candidate systems, open the DMReleaseAdmin authentication profile and set the password to the
DMReleaseAdmin operator ID password that you set in step 1a.

For detailed steps, see Configuring authentication profiles.

2. On each candidate system, open your target application and add PegaDevOpsFoundation as a built-on
application. For more information, see Configuring candidate systems.

3. To use branches for application development, set the RMURL dynamic system setting on the development
environment to be the orchestration server URL.

4. For on-premises users, set up repositories for artifact archiving. For more information, see Creating repositories
on the orchestration server and candidate systems.

Deployment Manager leverages JFrog Artifactory, Amazon S3, Microsoft Azure, or file system repository types.
After you configure one of these repositories, you will select one to use when you create your pipelines.
5. Configure the product rule for your application.

You will specify this product rule when you create your pipeline.

6. To receive email notification for deployments, configure email accounts on the orchestration server.

For more information, see Configuring email accounts on the orchestration server.

7. If you are using Jenkins, configure Jenkins so that it can communicate with the orchestration server.

For more information, see Configuring Jenkins.
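
After completing these steps, it can help to confirm that every system in the pipeline is reachable before you run diagnostics in Deployment Manager. The following minimal sketch assumes example hostnames and the standard /prweb context root; it checks HTTP connectivity only, so still run pipeline diagnostics for a full configuration check.

import requests

# Assumed example URLs for the orchestration server and candidate systems.
SYSTEMS = {
    "orchestrator": "https://orchestrator.example.com/prweb",
    "development": "https://dev.example.com/prweb",
    "qa": "https://qa.example.com/prweb",
    "staging": "https://staging.example.com/prweb",
    "production": "https://prod.example.com/prweb",
}

for name, url in SYSTEMS.items():
    try:
        # A plain GET verifies only that the server answers over HTTP.
        status = requests.get(url, timeout=10).status_code
        print(f"{name}: HTTP {status}")
    except requests.RequestException as error:
        print(f"{name}: unreachable ({error})")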

Using Deployment Manager


After you set up and configure Deployment Manager, you can begin using it to create pipelines. You can also do a
number of other tasks, such as creating Deployment Manager roles and users and configuring the notifications that
you want to receive. For detailed information, see Configuring and running pipelines with Deployment Manager
4.7.x.

In general, perform the following steps:

1. Log in to the Deployment Manager portal on the orchestration server with the DMReleaseAdmin operator ID.

2. Create a pipeline by modeling stages and steps and specifying environments, applications, product rules, and
repositories.

3. Run diagnostics by clicking Actions > Diagnose pipeline to verify that your pipeline is correctly configured.

4. Run deployments directly from Deployment Manager or from development environments as you merge your
branches.

Using troubleshooting tips


If you encounter issues with Deployment Manager, you can troubleshoot it in a number of ways.

Remember the following troubleshooting tips:

Run diagnostics and follow troubleshooting tips if your deployments fail to run.
Review pipeline logs that are available on the pipeline landing page and the output from diagnostics to
troubleshoot your workflows.
Attach logs from Deployment Manager and the output from diagnostics in your support tickets.

Obtaining support
If you experience problems using Deployment Manager, submit a support request to My Support Portal. For feedback
or non-urgent questions, send an email to [email protected].

Understanding Deployment Manager architecture and workflows


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega
applications from within Pega Platform. You can create a standardized deployment process so that you can deploy
predictable, high-quality releases without using third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application
package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager supports artifact management on repository types such as Amazon S3, file system, Microsoft
Azure, and JFrog Artifactory. Additionally, in Deployment Manager 3.1.x and later, you can create your own
repository types; for more information, see Creating and using custom repository types for Deployment Manager.
Deployment Manager also supports running automations on Jenkins for tasks that are not supported in Pega
Platform, such as external regression or performance tests. In addition, Pega Cloud pipelines are preconfigured
to use Amazon S3 repositories and to follow several best practices related to compliance and automated testing.

Deployment Manager is installed on the orchestration server, on which release managers configure and run
pipelines. With Deployment Manager, you can see the run-time view of your pipeline as it moves through the CI/CD
workflow. Deployment Manager provides key performance indicators (KPIs) and dashboards that provide
performance information such as the deployment success rate, deployment frequency, and task failures. Use this
information to monitor and optimize the efficiency of your DevOps process.
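
As a simple illustration of how such KPIs are derived, the following sketch computes a deployment success rate and frequency from a handful of records; the record format is an assumption for illustration, not the Deployment Manager data model.

from datetime import date

# Illustrative deployment history; the shape of these records is assumed.
deployments = [
    {"day": date(2020, 1, 6), "succeeded": True},
    {"day": date(2020, 1, 8), "succeeded": False},
    {"day": date(2020, 1, 9), "succeeded": True},
]

total = len(deployments)
successes = sum(1 for record in deployments if record["succeeded"])
days = (max(r["day"] for r in deployments) - min(r["day"] for r in deployments)).days or 1

print(f"Deployment success rate: {successes / total:.0%}")  # 67%
print(f"Deployment frequency: {total / days:.2f} per day")  # 1.00 per day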

Understanding CI/CD pipelines

A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous
delivery.

Understanding systems in the Deployment Manager CI/CD pipeline

The CI/CD pipeline comprises several systems and involves interaction with various Pega Platform servers. For
example, you can use a QA system to run tests to validate application changes.

Understanding repositories in the pipeline

Deployment Manager supports Microsoft Azure, JFrog Artifactory, Amazon S3, and file system repositories for
artifact management of application packages. For each run of a pipeline, Deployment Manager packages and
promotes the application changes that are configured in a product rule. The application package artifact is
generated on the development environment, published in the repository, and then deployed to the next stage
in the pipeline.

Understanding pipelines in a branch-based environment

If you use branches for application development, you can configure merge criteria on the pipeline to receive
feedback about branches, such as whether a branch has been reviewed or meets guardrail compliance scores.

Understanding pipelines in an environment without branches

If you do not use branches for application development, but you use ruleset-based development instead, you
configure the continuous delivery pipeline in Deployment Manager.

Understanding CI/CD pipelines


A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous delivery.

In the continuous integration stage, developers continuously validate and merge branches into a target application.
In the continuous delivery stage, the target application is packaged and moved through progressive stages in the
pipeline. After application changes have moved through testing cycles, including Pega unit, regression,
performance, and load testing, application packages are deployed to a production system either manually or, if you
want to continuously deploy changes, automatically.
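
Purely as an illustration of this two-stage flow, the following sketch models a package moving through continuous integration and then continuous delivery; the stage names and the manual production gate are assumptions for the example, not Deployment Manager configuration.

# Illustrative two-stage flow; task names and gates are assumed.
CI_TASKS = ["validate branches", "merge branches", "package application"]
CD_STAGES = ["QA", "staging", "production"]

def run_pipeline(deploy_to_production_automatically: bool) -> None:
    for task in CI_TASKS:
        print(f"Continuous integration: {task}")
    for stage in CD_STAGES:
        if stage == "production" and not deploy_to_production_automatically:
            print("Continuous delivery: awaiting manual approval for production")
            return
        print(f"Continuous delivery: deploy to {stage}")

run_pipeline(deploy_to_production_automatically=False)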

You should make changes only in the development environment and then move those changes to a higher
environment. Do not make changes in any other environment.

Understanding systems in the Deployment Manager CI/CD pipeline


The CI/CD pipeline comprises several systems and involves interaction with various Pega Platform servers. For
example, you can use a QA system to run tests to validate application changes.

Pipelines comprise the following systems:

Orchestration server – Pega Platform system on which the Deployment Manager application runs and on which
release managers or application teams model and run their CI/CD pipelines. This system manages the CI/CD
workflow involving the candidate systems in the pipeline.
Candidate systems – Pega Platform servers that manage your application's life cycle; they include the following
systems:
Development system – The Pega Platform server on which developers build applications and merge
branches into them. The product rule that defines the application package that is promoted to other
candidate systems in the pipeline is configured on this system. Distributed development environments
might have multiple development systems.

In this environment, developers develop applications on remote Pega Platform development systems and
then merge their changes on a main development system, from which they are packaged and moved in
the Deployment Manager workflow.
QA and staging systems – Pega Platform servers that validate application changes by using various types
of testing, such as Pega unit, regression, security, load, and performance testing.
Production system – Pega Platform server on which end users access the application.

Understanding repositories in the pipeline


Deployment Manager supports Microsoft Azure, JFrog Artifactory, Amazon S3, and file system repositories for artifact
management of application packages. For each run of a pipeline, Deployment Manager packages and promotes the
application changes that are configured in a product rule. The application package artifact is generated on the
development environment, published in the repository, and then deployed to the next stage in the pipeline.

A pipeline uses development and production repositories. After a pipeline is started, the application package moves
through the pipeline life cycle in the following steps:

1. The development system publishes the application package to the development repository.
2. The QA system retrieves the artifact from the development repository and performs tasks on the artifact.
3. The staging system retrieves the artifact from the development repository and publishes it to the production
repository.
4. The production system deploys the artifact from the production repository.
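
The same promotion pattern can be pictured outside of Pega. The following Python sketch uses the boto3 client for Amazon S3, one of the supported repository types; the bucket names, artifact key, and file paths are assumptions for illustration only, not values that Deployment Manager uses.

import boto3

s3 = boto3.client("s3")

DEV_BUCKET = "dev-repository"            # assumed development repository
PROD_BUCKET = "prod-repository"          # assumed production repository
KEY = "MyApp/MyApp_01.01.01.zip"         # assumed application package artifact

# Step 1: the development system publishes the package to the development repository.
s3.upload_file("MyApp_01.01.01.zip", DEV_BUCKET, KEY)

# Step 2: the QA system retrieves the same artifact and performs tasks on it.
s3.download_file(DEV_BUCKET, KEY, "/tmp/qa/MyApp_01.01.01.zip")

# Step 3: the staging system promotes the artifact to the production repository.
s3.copy({"Bucket": DEV_BUCKET, "Key": KEY}, PROD_BUCKET, KEY)

# Step 4: the production system deploys the artifact from the production repository.
s3.download_file(PROD_BUCKET, KEY, "/tmp/deploy/MyApp_01.01.01.zip")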

Understanding pipelines in a branch-based environment


If you use branches for application development, you can configure merge criteria on the pipeline to receive
feedback about branches, such as whether a branch has been reviewed or meets guardrail compliance scores.

If there are no merge conflicts and the merge criteria are met, the branch is merged; the continuous delivery pipeline is then started either manually or automatically.

The workflow of tasks in a branch-based pipeline is as follows:

1. One or more developers make changes in their respective branches.


2. Merge criteria, which are configured in Deployment Manager, are evaluated when branches are merged.
3. Continuous delivery starts in one of the following ways:
1. Automatically, after a branch successfully passes the merge criteria. If another continuous delivery
workflow is in progress, branches are queued and started after the previous workflow has been completed.
2. Manually, if you have multiple development teams and want to start pipelines on a certain schedule.
4. During a deployment run, branches are queued for merging and merged after the deployment has been
completed.

The following figure describes the workflow in a branch-based environment.

Workflow in a branch-based environment
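
The queueing behavior in steps 3 and 4 above can be modeled as a simple first-in, first-out queue. The following Python sketch is a toy model of that policy only; it is not how Deployment Manager is implemented.

from collections import deque

class MergeQueue:
    """Toy model: branches merge immediately unless a deployment is running."""

    def __init__(self):
        self.pending = deque()
        self.deployment_running = False

    def submit(self, branch):
        if self.deployment_running:
            self.pending.append(branch)      # queued until the current run ends
        else:
            self.merge(branch)

    def merge(self, branch):
        print(f"Merging {branch} and starting a deployment")
        self.deployment_running = True       # a continuous delivery run begins

    def on_deployment_complete(self):
        self.deployment_running = False
        if self.pending:
            self.merge(self.pending.popleft())  # next queued branch is merged

queue = MergeQueue()
queue.submit("feature-a")            # merges immediately, deployment starts
queue.submit("feature-b")            # queued while the deployment runs
queue.on_deployment_complete()       # feature-b is merged next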

In a distributed, branch-based environment, you can have multiple development systems, and developers author and test the application on remote Pega Platform development systems. They then merge their changes on a source development system, from which the changes are packaged and moved in the Deployment Manager workflow.

The following figure describes the workflow in a distributed, branch-based environment.

Workflow in a distributed branch-based environment


Understanding pipelines in an environment without branches
If you do not use branches for application development, but you use ruleset-based development instead, you
configure the continuous delivery pipeline in Deployment Manager.

The workflow of tasks in this pipeline is as follows:

1. Developers update rules and check them in directly to the application rulesets on the development system.
2. The product rule that contains the application rules to be packaged and moved through the systems in the
pipeline is on the development system.
3. Continuous delivery is started manually, on a defined schedule, by using Deployment Manager.

The following figure describes the workflow of a pipeline in an environment without branches.

Workflow in an environment without branches

Understanding best practices for using branches with Deployment Manager
Follow these best practices when you use branches in your Deployment Manager pipelines. The specific practices
depend on whether you have a single development team or multiple development teams in a distributed
environment.

If you use branches for application development in a non-distributed environment, developers work on branches and
merge them on the development system, after which the continuous delivery pipeline is started automatically or
manually.

In a distributed branch-based environment, you can have multiple development systems, and developers author and
test the application on a remote development system. They then merge their changes on a source development
system, from which the changes are packaged and moved in the Deployment Manager workflow.

For more information about best practices to follow in the DevOps pipeline, see Understanding the DevOps release
pipeline.

Using branches with Deployment Manager

Best practices for using branches in Deployment Manager depend on whether you have a single development
team or multiple teams in a distributed environment.

Using branches with Deployment Manager


Best practices for using branches in Deployment Manager depend on whether you have a single development team
or multiple teams in a distributed environment.

In general, perform the following steps when you use branches with Deployment Manager:

1. In Deployment Manager, create a pipeline for the target application. If your application consists of multiple
built-on applications, it is recommended that you create separate pipelines for each application.

By using separate pipelines for built-on applications, you can perform targeted testing of each built-on
application, and other developers can independently contribute to application development.

For more information about multiple built-on applications, see Using multiple built-on applications.

2. Ensure that the target application is password-protected on all your systems in the pipeline.

a. In Designer Studio (if you are using Deployment Manager 3.4.x) or Dev Studio (if you are using
Deployment Manager 4.1.x or later), switch to the target application by clicking the name of the
application in the header, clicking Switch Application, and then clicking the target application.

b. In the Designer Studio or Dev Studio header, click the name of the target application, and then click
Definition.

c. Click Integration & Security.

d. In the Edit Application form, select the Require password to update application check box.

e. Click Update password.

f. In the Update password dialog box, enter a password, reenter it to confirm it, and click Submit.

g. Save the rule form.

3. If you want to create a separate product rule for a test application, create a test application that is built on top
of the main target application. For more information, see Using branches and test cases.

4. On the source development system (in a distributed environment) or development system (in a nondistributed environment), create a development application that is built on top of either the target application (if you are not using a test application) or the test application.

5. Include the PegaDevOpsFoundation application as a built-on application for either the team application or the
target application.

a. In either the development application or target application, in the Dev Studio or Designer Studio header,
click the application, and then click Definition.

b. In the Edit Application form, on the Definition tab, in the Built on applications section, click Add application.

c. In the Name field, press the down arrow key and select PegaDevOpsFoundation.

d. In the Version field, press the down arrow key and select the version for the Deployment Manager version
that you are using.

e. Save the rule form.

f. If you are using a distributed environment, import the application package, including the target,
development, and test (if applicable) applications, into the remote development system.

6. Do one of the following actions:

If you are using a distributed environment, add branches to the team application on the remote
development system. For more information, see Adding branches to your application.

If you are using multiple built-on applications, maintain separate branches for each target application. For
more information, see the Pega Community article Using multiple built-on applications.

If you are using a non-distributed environment, create a branch of your production rulesets in the team
application. For more information, see Adding branches to your application. You should create separate
branches for each target pipeline.

7. Perform all development work in the branch.

8. To merge branches, do one of the following actions:

If you are using either a non-distributed network (in any version of Deployment Manager) or a distributed
network (in Deployment Manager 4.4.x or later), first lock the branches that you want to validate and
merge in the application pipeline and then submit the branches in the Merge Branches wizard.

For more information, see Submitting a branch into an application by using the Merge Branches wizard.

If you are using a distributed network and Deployment Manager 4.4.x or later and are publishing branches to a
source development system to start a build, do the following actions:

1. On the remote development system, publish the branch to the repository on the source development
system to start the pipeline. For more information, see Publishing a branch to a repository.
2. If there are merge conflicts, log in to the team application on the source development system, add
the branch to the application, resolve the conflict, and then merge the branch.

If you are using a distributed network and versions of Deployment Manager earlier than 4.4.x with one pipeline
per application, do the following steps so that you can merge branches onto the source development
system:
1. On the remote development system, create a Pega repository that points to the target application on
the source development system. For more information, see Adding a Pega repository.
2. On the remote development system, publish the branch to the repository on the source development
system to start the pipeline. For more information, see Publishing a branch to a repository.
3. If there are merge conflicts, log in to the team application on the source development system, add
the branch to the application, resolve the conflict, and then merge the branch.
If you are using a distributed network and versions of Deployment Manager earlier than 4.4.x with multiple
pipelines per application and application version:
1. Package the branch on the remote development system. For more information, see Packaging a
branch.
2. Export the branch.
3. Import the branch to the source development system and add it to the team application. For more
information, see Importing rules and data from a product rule by using the Import wizard.
4. Merge branches into the target application to start the pipeline by using the Merge Branches wizard.

For more information, see Merging branches into target rulesets.

Managing test cases separately in Deployment Manager


In Deployment Manager 4.4.x and later, you can package and deploy test cases separately on the candidate
systems in the pipeline. When you configure a pipeline in Deployment Manager, you specify the details of the test
package that you want to deploy, including the stage in the pipeline up to which you want to deploy the package.

To use a separate test package, you must create a test application layer on the development systems in your
pipeline.

Configuring the application stack on the development or source development system

You must first configure your application stack on either the development or source development system.

Configuring the application stack on the remote development system in a distributed, branch-based
environment

If you are using a distributed, branch-based environment, you must configure the application stack on the
remote development system.

Using branches and test cases

Branches in the development application can contain rulesets that belong to the target application, test
application, or both. When you start a deployment either by using the Merge Branches wizard or by publishing
a branch to a repository on the main development system, the branches in both the target and test
applications are merged in the pipeline.

Configuring pipelines to use test cases

When you add or modify a pipeline, you specify whether you want to deploy test cases and then configure
details for the test application, including its name and the access group to which it belongs, in the Application test cases section. You also select the stage up to which you want to deploy the test package. For more information
about using Deployment Manager, see the Creating pipelines with Deployment Manager topic for your version
of Deployment Manager.

Configuring the application stack on the development or source development system

You must first configure your application stack on either the development or source development system.

Configure the application stack according to one of the following scenarios:

If you are using a distributed, branch-based environment, complete the following steps on the source development system.
If you are using a branch-based environment, complete the following steps on the development system.
If you are not using branches, complete the following steps on the development system.

Configure the application stack by performing the following steps:

1. Create the target application.

2. Create a test application, which contains the test rulesets that you want to separately deploy, that is built on
the target application.

3. Create a development application that is built on top of the test application, which developers can log in to so
that they can create and work in branches.

4. Lock both the target and test applications.

Configuring the application stack on the remote development system in a distributed, branch-based environment
If you are using a distributed, branch-based environment, you must configure the application stack on the remote
development system.

Complete the following steps:

1. Create the target application.

2. Create a test application, which contains the test rulesets that you want to separately deploy, that is built on
the target application.

3. Lock both the target and test applications.

4. Lock both the target and test application rulesets.

Using branches and test cases


Branches in the development application can contain rulesets that belong to the target application, test application,
or both. When you start a deployment either by using the Merge Branches wizard or by publishing a branch to a
repository on the main development system, the branches in both the target and test applications are merged in the
pipeline.

Configuring pipelines to use test cases


When you add or modify a pipeline, you specify whether you want to deploy test cases and then configure details for
the test application, including its name and the access group to which it belongs, in the Application test cases section. You also select the stage up to which you want to deploy the test package. For more information about using Deployment
Manager, see the Creating pipelines with Deployment Manager topic for your version of Deployment Manager.

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable test
coverage, and Validate test coverage tasks are run for the access group that is specified in the Application test
cases section.

You must also perform the following steps on the candidate system on which you are running tests:

1. Log in to the test application.

2. In the header of Dev Studio, click Configure > Application > Quality > Settings.

3. Select the Include built-on applications radio button, and then click Save.

Creating and using custom repository types for Deployment Manager
In Deployment Manager 3.1.x and later, you can create custom repository types to store and move your artifacts.
For example, you can create a Nexus repository and use it similarly to how you would use a Pega Platform-
supported repository type such as file system. By creating custom repository types, you can extend the functionality
of Deployment Manager through the use of a wider variety of repository types with your artifacts.

To create a custom repository type to use with Deployment Manager, complete the following steps:

1. Create a custom repository type. For more information, see Creating a custom repository type.

2. If you are using Deployment Manager 3.3.x or 4.1.x or later, on each candidate system, add the ruleset that contains the custom repository type as a production ruleset to the PegaDevOpsFoundation:Administrators access group.

a. In the header of either Designer Studio (if you are using Deployment Manager 3.3.x) or Dev Studio (if you
are using Deployment Manager 4.1.x or later), click Records > Security > Access Group.

b. Click PegaDevOpsFoundation:Administrators.

c. Click Advanced.

d. In the Run time configuration section, click the Production Rulesets field, press the Down arrow key, and
select the ruleset that contains the custom repository type.

e. Save the rule form.

3. Import the ruleset on which the custom repository is configured into the orchestration system and add the ruleset to the PegaDeploymentManager application stack.

a. On the orchestration system, import the ruleset by using the Import wizard. For more information, see
Importing rules and data from a product rule by using the Import wizard.

b. In either the Designer Studio or Dev Studio header, in the Application field, click PegaDeploymentManager,
and then click Definition.

c. On the Edit Application rule form, in the Application rulesets field, click Add ruleset.

d. Click the field that is displayed, press the Down arrow key, and then select the ruleset that contains the
custom repository type.

e. Save the rule form.

Configuring Deployment Manager 4.x for Pega Platform 7.4


You can use Deployment Manager 4.x if Pega Platform 7.4 is installed on your candidate systems (development, QA,
staging, and production). You can use many of the latest features that were introduced in Deployment Manager 4.x,
such as managing your deployments in a dedicated portal.

Understanding usage information

When you use Deployment Manager 4.x with Pega 7.4, certain features are not supported. These features
include pipeline tasks and enhancements to the Merge Branches wizard.

Configuring Deployment Manager 4.x to work with Pega 7.4

Configure the orchestration server and candidate systems so that Deployment Manager 4.x works with Pega
7.4. Use Deployment Manager 4.x on the orchestration system with candidate systems that are running Pega
7.4 and Deployment Manager 3.4.x.

Understanding usage information


When you use Deployment Manager 4.x with Pega 7.4, certain features are not supported. These features include
pipeline tasks and enhancements to the Merge Branches wizard.

Note the following usage limitations:

This configuration does not support the following features:


Pipeline tasks:
Validate test coverage
Refresh application quality
Run Pega scenario tests
Enable test coverage
Merge Branches wizard:
Associating user stories and bugs with a branch
Locked branches
Merging branches that span application layers

In Deployment Manager 4.5.x, some of the repository diagnostics do not work for candidate systems that are
running Pega 7.4. These diagnostics work in Deployment Manager 4.6.x.

Configuring Deployment Manager 4.x to work with Pega 7.4


Configure the orchestration server and candidate systems so that Deployment Manager 4.x works with Pega 7.4.
Use Deployment Manager 4.x on the orchestration system with candidate systems that are running Pega 7.4 and Deployment Manager 3.4.x.

1. On the orchestration system, install or upgrade to the latest version of Pega Platform.

2. On the orchestration system, install or upgrade to the latest version of Deployment Manager 4.x. For more
information, see Installing or upgrading to Deployment Manager 4.6.x.

3. If Deployment Manager 3.4.1 is installed on the candidate systems, go to step 4; otherwise, do one of the
following actions:

If Deployment Manager 3.1.x - 3.3.x is installed on the candidate systems, upgrade to Deployment
Manager 3.4.x. For more information, see Upgrading to Deployment Manager 3.4.x.
For systems without Deployment Manager, do the following steps:
1. Install and configure the latest version of Deployment Manager 4.x on your candidate systems. For
more information, see Installing or upgrading to Deployment Manager 4.7.x.
2. Add PegaDevOpsFoundation 3.4.1 to your application stack by going to Pega Marketplace and downloading it.
3. Extract the DeploymentManager_03.04.01.zip file.
4. Use the Import wizard to import the PegaDevOpsFoundation_4.zip file.
5. In the header of Dev Studio, click the name of your application, and then click Definition.

For more information about the Import wizard, see Importing rules and data from a product rule by using the Import wizard.

6. In the Built on application section, click Add application.


7. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.
8. In the Version field, press the Down arrow key and select 3.4.1.
9. Click Save.

4. Create and configure an application pipeline.

For more information, see Configuring an application pipeline.

5. Run diagnostics to ensure that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Deployment Manager 4.7.x


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega
applications from within Pega Platform. You can create a consistent deployment process so that you can deploy
high-quality releases without the use of third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager 4.7.x is compatible with Pega 8.1, 8.2, 8.3, and 8.4. You can download it for Pega Platform
from the Deployment Manager Pega Marketplace page.

For answers to frequently asked questions, see the Deployment Manager FAQ page.

Each customer Virtual Private Cloud (VPC) on Pega Cloud Services has a dedicated orchestrator instance to use
Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application.
To use notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

Installing, upgrading, and configuring Deployment Manager 4.7.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks and allow you to quickly deploy high-quality software to production.

Configuring and running pipelines with Deployment Manager 4.7.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks so that you can quickly deploy high-quality software to production.

Using data migration pipelines with Deployment Manager 4.7.x

Data migration tests provide you with significant insight into how the changes that you make to decision logic
affect the results of your strategies. To ensure that your simulations are reliable enough to help you make
important business decisions, you can deploy a sample of your production data to a dedicated data migration
test environment.

Installing, upgrading, and configuring Deployment Manager 4.7.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and
allow you to quickly deploy high-quality software to production.

You should make changes only in the development environment and then move them to higher environments. Do
not make changes in any other environment.
Each customer virtual private cloud (VPC) on Pega Cloud Services has a dedicated orchestrator instance to use
Deployment Manager. If you are upgrading from an earlier release, contact Pegasystems Global Client Support (GCS) to request a new version.
This document describes the procedures for the latest version of Deployment Manager 4.7.x. To use notifications,
you must install or upgrade to Pega 8.1.3 on the orchestration server.

For information on configuring Deployment Manager for data migration pipelines, see Installing, upgrading, and
configuring Deployment Manager 4.7.x for data migration pipelines.

Installing or upgrading to Deployment Manager 4.7.x

You must install Deployment Manager if you are using it on-premises. Because Pega Cloud Services manages
the orchestration server in any Pega Cloud subscription, Pega Cloud Services manages the installation and
upgrades of Deployment Manager orchestration servers; therefore, only post-upgrade steps are required if you
are upgrading from versions of Deployment Manager earlier than 3.2.1. For more information, see Running post-upgrade steps.

Running post-upgrade steps

If you are upgrading from Deployment Manager versions earlier than 3.2.1, you must run post-upgrade steps to
complete the upgrade. Before you run post-upgrade steps, ensure that no deployments are running, have
errors, or are paused. In Pega Cloud Services environments, the orchestration server name is similar to
[environmentname]-DevOps.

Configuring systems in the pipeline

Configure the orchestration server and candidates in your pipeline for all supported CI/CD workflows. If you are
using branches, you must configure additional settings on the development system after you perform the
required steps.

Configuring the development system for branch-based development

If you are using branches in either a distributed or nondistributed branch-based environment, configure the
development system so that you can start deployments when branches are merged. Configuring the
development system includes defining the URL of the orchestration server, creating development and target
applications, and locking application rulesets.

Configuring additional settings

As part of your pipeline, users can optionally receive notifications through email when events occur. For
example, users can receive emails when tasks or pipeline deployments succeed or fail. For more information
about the notifications that users can receive, see Understanding email notifications.

Installing or upgrading to Deployment Manager 4.7.x


You must install Deployment Manager if you are using it on-premises. Because Pega Cloud Services manages the
orchestration server in any Pega Cloud subscription, Pega Cloud Services manages the installation and upgrades of
Deployment Manager orchestration servers; therefore, only post-upgrade steps are required if you are upgrading
from versions of Deployment Manager earlier than 3.2.1. For more information, see Running post-upgrade steps.

To install Deployment Manager on-premises, do the following steps:

1. Install Pega Platform 8.1, 8.2, 8.3, or 8.4 on all systems in the pipeline.

2. On each system, browse to the Deployment Manager Pega Marketplace page, and then download the
DeploymentManager04.07.0x.zip file for your version of Deployment Manager.

3. Extract the DeploymentManager04.07.0x.zip file.

4. Use the Import wizard to import files into the appropriate systems. For more information about the Import
wizard, see Importing rules and data from a product rule by using the Import wizard.

5. On the orchestration server, import the following files:

PegaDevOpsFoundation_4.7.zip
PegaDeploymentManager_4.7.zip

6. On the candidate systems, import the PegaDevOpsFoundation_4.7.zip file.

7. If you are using distributed development for CI/CD workflows, on the remote development system, import the
PegaDevOpsFoundation_4.7.zip file.

8. Do one of the following actions:

If you are upgrading from version 3.2.1 or later, the upgrade automatically runs, and you can use Deployment Manager without running post-upgrade steps. You do not need to perform any of the required
configuration procedures but can configure Jenkins and email notifications. For more information, see
Configuring additional settings.
If you are not upgrading, continue the installation procedure at Configuring authentication profiles.

Running post-upgrade steps


If you are upgrading from Deployment Manager versions earlier than 3.2.1, you must run post-upgrade steps to
complete the upgrade. Before you run post-upgrade steps, ensure that no deployments are running, have errors, or
are paused. In Pega Cloud Services environments, the orchestration server name is similar to [environmentname]-
DevOps.

If you are upgrading from Deployment Manager 3.2.1 or later, skip this section; otherwise, do the following steps:

1. On each candidate system, update the PegaDevOpsFoundation application version to the version of
Deployment Manager that you are using.

a. In the header of Dev Studio, click the name of your application, and then click Definition.

b. In the Built on application section for the PegaDevOpsFoundation application, in the Version field, press
the Down arrow key and select the version of Deployment Manager that you are using.

c. Click Save.

2. Modify the current release management application so that it is built on PegaDeploymentManager:4.7.x:

a. In the header of Dev Studio, click the name of your application, and then click Definition.

b. In the Edit Application rule form, on the Definition tab, in the Built on application section, for the PegaDeploymentManager application, press the Down arrow key and select 4.7.
c. Click Save.

3. If you do not see the pipelines that you created in earlier releases, run the pxMigrateOldPipelinesTo42 activity:

a. In the header of Dev Studio, search for pxMigrateOldPipelinesTo42, and then click the activity in the dialog
box that displays the results.

b. Click Actions > Run.

c. In the dialog box that is displayed, click Run.

4. On the orchestration server, run the pxUpdateDescription activity.

a. In the header of Dev Studio, search for pxUpdateDescription, and then click the activity in the dialog box that displays the results.

b. Click Actions > Run.

c. In the dialog box that is displayed, click Run.

5. On the orchestration server, run the pxUpdatePipeline activity.

a. In the header of Dev Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that
displays the results.

b. Click Actions > Run.

c. In the dialog box that is displayed, click Run.

6. Merge rulesets to the PipelineData ruleset.

a. Click Configure > System > Refactor > Rulesets.

b. Click Copy/Merge RuleSet.

c. Click the Merge Source RuleSet(s) to Target RuleSet radio button.

d. Click the RuleSet Versions radio button.

e. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and
then click the Move icon.

All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created the application, select all the ruleset versions that contain pipeline data.

f. In the Target RuleSet/Information section, in the Name field, press the Down arrow key and select PipelineData.

g. In the Version field, enter 01-01-01.

h. For the Delete Source RuleSet(s) upon completion of merge? option, click No.

i. Click Next.

j. Click Merge to merge your pipelines into the PipelineData:01-01-01 ruleset.

k. Click Done.

Your pipelines are migrated to the Pega Deployment Manager application.


Log out of the orchestration server and log back in to it with the DMReleaseAdmin operator ID and the password that
you specified for it.
You do not need to perform any of the required steps in the remainder of this document. If you want to use Jenkins tasks or configure email notifications, see Configuring additional settings.

Configuring systems in the pipeline


Configure the orchestration server and candidates in your pipeline for all supported CI/CD workflows. If you are using
branches, you must configure additional settings on the development system after you perform the required steps.

To configure systems in the pipeline, do the following steps:

1. Configuring authentication profiles

2. Configuring the orchestration server

3. Configuring candidate systems

4. Creating repositories on the orchestration server and candidate systems


Configuring authentication profiles

Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate with
the candidate systems.

Configuring the orchestration server

The orchestration server is the system on which the Deployment Manager application is installed and on which release managers configure and manage CI/CD pipelines. Configure settings on it before you can use it in your pipeline.

Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stages in the pipeline.

Creating repositories on the orchestration server and candidate systems

If you are using Deployment Manager on premises, create repositories on the orchestration server and all
candidate systems to move your application between all the systems in the pipeline. You can use a supported
repository type that is provided in Pega Platform, or you can create a custom repository type.

Configuring authentication profiles


Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate with the
candidate systems.

Configure the default authentication profile by following these steps:

1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.

a. Log in to the orchestration server with administrator@pega.com/install.

b. In the header of Dev Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.

c. On the Edit Operator ID rule form, click the Security tab.

d. Clear the Disable Operator check box.

e. Click Save.

f. Click Update password.

g. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then
click Submit.

h. Log out of the orchestration server.

2. On each candidate system, which includes the development, QA, staging, and production systems, enable the
DMAppAdmin operator ID.

If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation application.

a. Log in to each candidate system with administrator@pega.com/install.

b. In the header of Dev Studio, click Records > Organization > Operator ID, and then click DMAppAdmin.

c. In the Explorer panel, click the operator ID initials, and then click Operator.

d. On the Edit Operator ID rule form, click the Security tab.

e. Clear the Disable Operator check box.

f. Click Save.

g. Click Update password.

h. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click
Submit.

i. Log out of each candidate system.

3. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All
candidate systems use this authentication profile to communicate with the orchestration server about the
status of the tasks in the pipeline.

a. Log in to each candidate system with the DMReleaseAdmin operator ID and the password that you
specified.

b. In the header of Dev Studio, click Records > Security > Authentication Profile.


c. Click DMReleaseAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.

4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The
orchestration server uses this authentication profile to communicate with candidate systems so that it can run
tasks in the pipeline.

a. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified.

b. In the header of Dev Studio, click Records > Security > Authentication Profile.

c. Click DMAppAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.

5. If your target environment is SSL-enabled with private certificates, configure the Deployment Manager connectors so that they can receive and process tokens by setting the keystore (a sketch of the underlying trust requirement follows this procedure):

a. In the header of Dev Studio, create and configure a keystore. For more information, see Creating a
keystore.

b. Configure the Pega-DeploymentManager/TrustStore dynamic system setting to reference the keystore ID by clicking Records > SysAdmin > Dynamic System Settings.

c. Click the Pega-DeploymentManager/TrustStore dynamic system setting.

d. On the Settings tab, in the Value field, enter the ID of the keystore that you created in the previous step.

e. Click Save.

For more information about dynamic system settings, see Creating a dynamic system setting.

6. Do one of the following actions:

If you are upgrading to Deployment Manager 4.7.x, resume the post-upgrade procedure from step 2. For
more information, see Running post-upgrade steps.
If you are not upgrading, continue the installation procedure. For more information, see Configuring the
orchestration server.
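
Outside of Pega, the trust requirement in step 5 looks like the following Python sketch: a client that calls an SSL-enabled endpoint presenting a privately signed certificate must be told explicitly which certificate authority to trust, which is what the keystore provides to the Deployment Manager connectors. The URL and certificate path are assumptions for illustration.

import requests

# Hypothetical SSL-enabled candidate system with a privately signed certificate.
CANDIDATE_URL = "https://qa.example.com/prweb"

# Without the private CA, certificate verification fails; pointing verify at a
# PEM copy of the CA is the requests equivalent of configuring a truststore.
response = requests.get(CANDIDATE_URL, verify="/etc/pki/private-ca.pem", timeout=10)
print(response.status_code)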

Understanding default authentication profiles and operator IDs

When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs,
and authentication profiles are installed. Authentication profiles enable communication between the
orchestration server and candidate systems.

Understanding default authentication profiles and operator IDs


When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and
authentication profiles are installed. Authentication profiles enable communication between the orchestration server
and candidate systems.

On the orchestration server, the following items are installed:

The Pega Deployment Manager application.


The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager
application. You must enable this operator ID and specify its password.
The DMAppAdmin authentication profile. The orchestration server uses this authentication profile to
communicate with candidate systems so that it can run tasks in the pipeline. You must update this
authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is
configured on all the candidate systems.

On all the candidate systems, the following items are installed:

The PegaDevOpsFoundation application.


The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this
operator ID and specify its password.
The DMReleaseAdmin authentication profile. All candidate systems use this authentication profile to
communicate with the orchestration server about the status of the tasks in the pipeline. You must update this
authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is
configured on the orchestration server.
The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords.


Configuring the orchestration server


The orchestration server is the system on which the Deployment Manager application is installed and on which release managers configure and manage CI/CD pipelines. Configure settings on it before you can use it in your pipeline.

To configure the orchestration server, do the following steps:

1. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. In the header of Dev Studio, click Records > Integration-Resources > Service Package.

b. Click api.

c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

2. To move the orchestration server to a different environment, first migrate your pipelines to the new
orchestration server, and then configure its URL on the new orchestration server.

This URL will be used for callbacks and for diagnostics checks. A reachability check for this URL is sketched after this procedure.

a. In the header of Dev Studio, click Create > SysAdmin > Dynamic System Settings.

b. In the Owning Ruleset field, enter Pega-DeploymentManager.

c. In the Setting Purpose field, enter OrchestratorURL.

d. Click Create and open.

e. On the Settings tab, in the Value field, enter the URL of the new orchestration server in the format http://hostname:port/prweb.

f. Click Save.

3. Configure the candidate systems in your pipeline.

For more information, see Configuring candidate systems.
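
As noted in step 2, the OrchestratorURL value must be reachable for callbacks and diagnostics. A quick reachability check of the configured value can be sketched in Python as follows; the host name is an assumption, and an HTTP response of any kind (including a redirect to the login page) indicates that /prweb is being served at that address.

import requests

# Assumed value of the OrchestratorURL dynamic system setting.
ORCHESTRATOR_URL = "http://orchestrator.example.com:8080/prweb"

response = requests.get(ORCHESTRATOR_URL, timeout=10, allow_redirects=True)
print(response.status_code, response.url)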

Configuring candidate systems


Configure each system that is used for the development, QA, staging, and production stages in the pipeline.

To configure candidate systems, complete the following steps:

1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.
a. In the header of Dev Studio, click the name of your application, and then click Definition.

b. In the Built on application section, click Add application.

c. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.

d. In the Version field, press the Down arrow key and select the version of Deployment Manager that you are
using.

e. Click Save.

2. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. Click Records > Integration-Resources > Service Package.

b. Click api.

c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

3. To use a product rule for your target application, test application, or both, other than the default rules that are
created by the New Application wizard, on the development system, create product rules that define the test
application package and the target application package that will be moved through repositories in the pipeline.

For more information, see Creating a product rule that includes associated data by using the Create menu.

When you use the New Application wizard, a default product rule for your target application is created that has
the same name as your application. Additionally, if you are using a test application, a product rule is created
with the same name as the target application, with _Tests appended to the name.

4. Configure repositories through which to move artifacts in your pipeline.

For more information, see Creating repositories on the orchestration server and candidate systems.

Creating repositories on the orchestration server and candidate systems
If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate
systems to move your application between all the systems in the pipeline. You can use a supported repository type
that is provided in Pega Platform, or you can create a custom repository type.

If you are using Deployment Manager on Pega Cloud Services, default repositories, named "pegacloudcustomerroot"
for both the development and production repositories, are provided. If you want to use repositories other than the
ones provided, you can create your own.

For more information about creating a supported repository, see Creating a repository.

For more information about creating a custom repository type, see Creating and using custom repository types for
Deployment Manager.

When you create repositories, note the following information:

You cannot use the Pega repository type to store application artifacts for the following reasons:
The Pega repository type points to the temporary folder where the Pega Platform node that is associated
with Deployment Manager stores caches. This node might not be persistent.
This repository type is suitable only for single-node deployments. In multinode deployments, each time a requestor is authenticated, the requestor could be on a different node, and published artifacts are not visible to the repository.
At most companies, the security practice is that lower environments should not connect to higher
environments. Using a Pega repository typically means that a lower environment can access a higher
environment.

You can use Pega-type repositories only if you are rebasing your development system to obtain the most recently committed rulesets after merging them.

You can use file system type repositories if you do not want to use proprietary repositories such as Amazon S3 or JFrog Artifactory.

You cannot use the defaultstore repository type to host artifacts or product archives for the production
applications. It is a system-managed file system repository; it points to the temporary folder where the Pega
Platform node that is associated with Deployment Manager stores caches.
Ensure that each repository has the same name on all systems.
When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog
Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must
select the Preemptive authentication check box.

After you configure a pipeline, you can verify that the repository connects to the URL of the development and
production repositories by clicking Test Connectivity on the Repository rule form.
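
For JFrog Artifactory, the preemptive authentication requirement can be illustrated outside of Pega. The Python requests library sends the Authorization header on the first request when given basic credentials, which is the same preemptive behavior that the check box on the authentication profile enables. The host, repository key, and credentials below are assumptions, and the repository lookup uses Artifactory's documented REST API.

import requests
from requests.auth import HTTPBasicAuth

BASE = "https://artifactory.example.com/artifactory"  # assumed Artifactory host
REPO = "pega-artifacts"                               # assumed Generic repository key

# Basic credentials are sent preemptively, on the first request, rather than
# waiting for a 401 challenge from the server.
response = requests.get(
    f"{BASE}/api/repositories/{REPO}",
    auth=HTTPBasicAuth("deploy-user", "secret"),
    timeout=10,
)
print(response.status_code, response.json().get("packageType"))  # expect "generic"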

Configuring the development system for branch-based development


If you are using branches in either a distributed or nondistributed branch-based environment, configure the
development system so that you can start deployments when branches are merged. Configuring the development
system includes defining the URL of the orchestration server, creating development and target applications, and
locking application rulesets.

1. On the development system (in a nondistributed environment) or the main development system (in a distributed
environment), create a dynamic system setting to define the URL of the orchestration server, even if the
orchestration server and the development system are the same system.

a. Click Create > SysAdmin > Dynamic System Settings.

b. In the Owning Ruleset field, enter Pega-DevOps-Foundation.

c. In the Setting Purpose field, enter RMURL.

d. Click Create and open.

e. On the Settings tab, in the Value field, enter the URL of the orchestration server in the format http://hostname:port/prweb/PRRestService.

f. Click Save.

For more information about dynamic system settings, see Creating a dynamic system setting. A reachability check for this URL is sketched after this procedure.

2. Complete the following steps on either the development system (in a non-distributed environment) or the
remote development system (in a distributed environment).

a. Use the New Application wizard to create a new development application that developers will log in to.

This application allows development teams to maintain a list of development branches without modifying
the definition of the target application.

b. Add the target application of the pipeline as a built-on application layer of the development application by first logging in to the development application.
c. In the header of Dev Studio, click the name of your application, and then click Definition.

d. In the Built-on application section, click Add application.

e. In the Name field, press the Down arrow key and select the name of the target application.

f. In the Version field, press the Down arrow key and select the target application version.

g. Click Save.

3. Lock the application rulesets to prevent developers from making changes to rules after branches have been
merged.

a. In the header of Dev Studio, click the name of your application, and then click Definition.

b. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.

c. Click Lock and Save.

4. Copy the development repository that you configured on the remote development system to the source
development system.

5. If you are managing test cases separately from the target application, create a test application. For more
information, see Managing test cases separately in Deployment Manager.

6. Optional:

To rebase your development application to obtain the most recently committed rulesets after you merge your
branches, configure Pega Platform so that you can use rule rebasing.

For more information, see Understanding rule rebasing.
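
The RMURL value configured in step 1 is what the development system uses to call back to the orchestration server when branches are merged. A reachability check similar to the orchestration-server sketch earlier can confirm that the PRRestService endpoint answers; the host name here is an assumption.

import requests

# Assumed value of the RMURL dynamic system setting.
RMURL = "http://orchestrator.example.com:8080/prweb/PRRestService"

# Any HTTP response at all shows that the service endpoint is reachable from
# the development system; an authenticated client is still required for real calls.
response = requests.get(RMURL, timeout=10)
print(response.status_code)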

Configuring additional settings


As part of your pipeline, users can optionally receive notifications through email when events occur. For example,
users can receive emails when tasks or pipeline deployments succeed or fail. For more information about the
notifications that users can receive, see Understanding email notifications.

For either new Deployment Manager installations or upgrades, you must configure settings on the orchestration
server so that users can receive email notifications. For more information, see Configuring email accounts on the
orchestration server.

Additionally, you can configure Jenkins if you are using Jenkins tasks in a pipeline. For more information, see
Configuring Jenkins.

Configuring email accounts on the orchestration server

Deployment Manager provides the Pega-Pipeline-CD email account and the DMEmailListener email listener. If
you are configuring email accounts for the first time, specify your details for this account in Pega Platform. For
more information, see Configuring email accounts for new Deployment Manager installations.

Configuring Jenkins

If you are using a Run Jenkins step task in your pipeline, configure Jenkins so that it can communicate with the
orchestration server.

Configuring email accounts on the orchestration server


Deployment Manager provides the Pega-Pipeline-CD email account and the DMEmailListener email listener. If you
are configuring email accounts for the first time, specify your details for this account in Pega Platform. For more
information, see Configuring email accounts for new Deployment Manager installations.

Otherwise, if you are upgrading, do the appropriate steps for the email account that you are using. See one of the
following topics for more information:

Configuring an email account when upgrading and using the Pega-Pipeline-CD email account
Configuring email accounts when upgrading and using the Default email account.

Configuring email accounts for new Deployment Manager installations

For new Deployment Manager installations, on the orchestration server, configure the Pega-Pipeline-CD email
account so users can receive email notifications for events such as task completion or failure.

Configuring an email account when upgrading and using the Pega-Pipeline-CD email account

If you are upgrading to Deployment Manager 4.7.x and using the Pega-Pipeline-CD email account for sending
emails, the DMEmailListener email listener always listens to the Pega-Pipeline-CD account. If you are using a
different listener, you must delete it.

Configuring email accounts when upgrading and using the Default email account

If you are upgrading to Deployment Manager and using the Default email account, after you upgrade to
Deployment Manager 4.7.x, you must do certain steps so that you can send email notifications.

Understanding email notifications

Emails are preconfigured with information about each notification type. For example, when a deployment
failure occurs, the email that is sent provides information, such as the pipeline name and URL of the system on
which the deployment failure occurred.

Configuring email accounts for new Deployment Manager installations
For new Deployment Manager installations, on the orchestration server, configure the Pega-Pipeline-CD email
account so users can receive email notifications for events such as task completion or failure.

Do the following steps:

1. In the navigation pane of Dev Studio, click Records, and then click Integration-Resources > Email Account.

2. Click Pega-Pipeline-CD.

3. In the Edit Email Account rule form, configure and save the email account.

For more information about configuring email accounts, see Creating an email account in Dev Studio.

Configuring an email account when upgrading and using the Pega-Pipeline-CD email account
If you are upgrading to Deployment Manager 4.7.x and using the Pega-Pipeline-CD email account for sending emails,
the DMEmailListener email listener always listens to the Pega-Pipeline-CD account. If you are using a different
listener, you must delete it.
Delete the listener that is listening to the Pega-Pipeline-CD account by doing the following steps:

1. In the header of Dev Studio, click Configure > Integration > Email > Email listeners.

2. On the Email: Integration page, on the Email Listeners tab, click the listener that you want to delete.

3. Click Delete.

Configuring email accounts when upgrading and using the Default email account
If you are upgrading to Deployment Manager and using the Default email account, after you upgrade to Deployment
Manager 4.7.x, you must do certain steps so that you can send email notifications.

Do the following steps:

1. Update the email sender and recipient in Pega Platform.

a. In the navigation pane of Dev Studio, click Records, and then click Integration-Resources > Email Account.

b. Click Default.

c. In the Edit Email Account form, configure and save the email account.

For more information about configuring email accounts, see Creating an email account in Dev Studio.

2. If you have an email listener that listens to the same email address that you configured in Deployment
Manager in the previous step, delete the listener to ensure that the DMEmailListener is listening to the email
account that you configured.

a. In the header of Dev Studio, click Configure > Integration > Email > Email listeners.

b. On the Email: Integration page, on the Email Listeners tab, click the listener that you want to delete.

c. Click Delete.

Understanding email notifications


Emails are preconfigured with information about each notification type. For example, when a deployment failure
occurs, the email that is sent provides information, such as the pipeline name and URL of the system on which the
deployment failure occurred.

Preconfigured emails are sent in the following scenarios:

Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using
branches, to the operator who started a deployment.
Deployment step completion or failure – When a step either completes or fails, an email is sent to the release
manager and, if you are using branches, to the operator who started the branch merge. The deployment
pauses if there are any errors.
Deployment completion – When a deployment is successfully completed, an email is sent to the release
manager and, if you are using branches, to the operator who started the branch merge.
Stage completion or failure – When a stage in a deployment process either succeeds or fails, an email is sent to
the release manager and, if you are using branches, to the operator who started the branch merge.
Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to
the user, who can approve or reject the task from the email.
Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are
using branches, to the operator who started the branch merge.
Pega unit testing success or failure – If you are using the Run Pega unit tests task, and the task either succeeds
or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the
branch merge.
Schema changes required – If you do not have the required schema privileges to deploy schema changes on
application packages that require those changes, an email is sent to the operator who started the deployment.
Guardrail compliance score success or failure – If you are using the Check guardrail compliance task, an email is
sent to the release manager if the task either succeeds or fails.
Approve for production – If you are using the Approve for production task, which requires approval from a user
before application changes are deployed to production, an email is sent to the user. The user can reject or
approve the changes.
Verify security checklist success or failure – If you are using the Verify security checklist task, which requires
that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with
security best practices, an email is sent to the release manager if the test either succeeds or fails.
Pega scenario testing success or failure – If you are using the Run Pega scenario tests task, an email is sent to
the release manager and, if you are using branches, to the operator who started the branch merge, if Pega
scenario testing either succeeds or fails.
Start test coverage success or failure – If you are using the Enable test coverage task to generate a test
coverage report, an email is sent to the release manager if the task either fails or succeeds.
Verify test coverage success or failure – If you are using the Verify test coverage task, an email is sent to the
release manager if the task either fails or succeeds.
Application quality statistics refreshed – If you are using the Refresh application quality statistics task, an email
is sent to the release manager when the task is run.
Jenkins job success or failure – If you are using a Jenkins task, an email is sent to the release manager if a
Jenkins job either succeeds or fails.

Configuring Jenkins
If you are using a Run Jenkins step task in your pipeline, configure Jenkins so that it can communicate with the
orchestration server.

1. On the orchestration server, create an authentication profile that uses Jenkins credentials.

If you are using a version of Jenkins earlier than 2.17.6, create an authentication profile on the
orchestration server that specifies the credentials to use.
a. Click Create > Security > Authentication Profile.
b. Enter a name, and then click Create and open.
c. In the User name field, enter the Jenkins user ID.
d. Click Set password, enter the Jenkins password, and then click Submit.
e. Click the Preemptive authentication check box.
f. Click Save.
g. Go to step 4.

For more information about configuring authentication profiles, see Creating an authentication profile.

If you are using Jenkins 2.17.6 or later and want to use an API token for authentication, go to step 2.
If you are using Jenkins 2.17.6 or later and want to use a Crumb Issuer for authentication, go to step 3.

2. If you are using Jenkins version 2.17.6 or later and want to use an API token for authentication, do the following
steps:

a. Log in to the Jenkins server.

b. Click People, click the user who is running the Jenkins job, and then click Configure > API token.

c. Generate the API token.

d. Create an authentication profile on the orchestration server by clicking Create > Security > Authentication
Profile.

e. In the User name field, enter the Jenkins user ID.

f. Click Set password, enter the API token that you generated, and then click Submit.

g. Click the Preemptive authentication check box.

h. Click Save.

i. Go to step 4.

For more information about configuring authentication profiles, see Creating an authentication profile.

3. If you are using Jenkins version 2.17.6 or later and want to use a Crumb Issuer for authentication, do the
following steps:

a. Log in to the Jenkins server.

b. Click Manage Jenkins > Manage Plugins and select the check box for the Strict Crumb Issuer plug-in.

c. Click Manage Jenkins > Configure Global Security.

d. In the CSRF protection section, in the Crumb Issuer list, select Strict Crumb Issuer.

e. Click Advanced, and then clear the Check the session ID check box.

f. Click Save.

g. Create an authentication profile on the orchestration server by clicking Create > Security > Authentication
Profile.

h. In the User name field, enter the Jenkins user ID.

i. Click Set password, enter the Jenkins password, and then click Submit.

j. Click the Preemptive authentication check box.

k. Click Save.

l. Go to step 4.

For more information about configuring authentication profiles, see Creating an authentication profile.
4. Install the Post build task plug-in.

5. Install the curl command on the Jenkins server.
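
To confirm that the curl command is available to the Jenkins user, you can run a quick check on the Jenkins server, for example:

curl --version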

6. Create a new freestyle project.

7. On the General tab, select the This project is parameterized check box.

8. Add the BuildID and CallBackURL parameters.

a. Click Add parameter, and then select String parameter.

b. In the String field, enter BuildID.

c. Click Add parameter, and then select String parameter.

d. In the String field, enter CallBackURL.

9. To add parameters that you can use in Run Jenkins tasks in the pipeline, click Add parameter, select String
parameter, and enter the string of the parameter. The system automatically populates these values in Jenkins
tasks; see the example after this list. You can add any of the following strings:

PipelineName: Pipeline name on which the Jenkins task is configured.
RepositoryName: Repository that the Deploy task uses for the stage (for example, development) on which
the Jenkins task is configured.
DeploymentID: ID of the current deployment.
DeploymentArtifactName: Artifact name that the Deploy task uses on the stage on which the Jenkins task
is configured.
StartedBy: Operator ID who started the deployment.
CurrentStage: Name of the stage on which the Jenkins task is configured.
CurrentStageURL: URL of the system on which the Jenkins task is configured.
ArtifactPath: Full path to the artifact that the Deploy task uses.
OrchestratorURL: URL of the orchestration server. Add this parameter to stop a pipeline when the Run
Jenkins step fails in a pipeline.
PipelineID: ID of the pipeline on which the Jenkins task is configured. Add this parameter to stop a pipeline
when the Run Jenkins step fails in a pipeline.
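
For example, a Jenkins build step can read these parameters as environment variables, in the same way that the post-build scripts in steps 15 and 16 read BuildID and CallBackURL. A minimal sketch for a Linux shell build step; the echo lines are illustrative only:

# Log the Deployment Manager context that was passed to this job
echo "Deployment $DeploymentID of pipeline $PipelineName is in stage $CurrentStage"
echo "Deploying artifact: $ArtifactPath"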

10. In the Build Triggers section, select the Trigger builds remotely check box.

11. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely.
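
Deployment Manager uses this token to start the job remotely. For reference only, a remote trigger of a parameterized Jenkins job through the standard buildWithParameters endpoint generally has the following shape; the host, job name, token, credentials, and parameter values here are placeholders:

curl --user jenkinsUser:apiToken "https://jenkins.example.com/job/MyJob/buildWithParameters?token=MyAuthToken&BuildID=101&CallBackURL=https://orchestrator.example.com/prweb"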

12. In the Build Environment section, select the Use Secret text(s) or file(s) check box.

13. In the Bindings section, do the following actions:

a. Click Add, and then select User name and password (conjoined).

b. In the Variable field, enter RMCREDENTIALS.

c. In the Credentials field, click Specific credentials.

d. Click Add, and then select Jenkins.

e. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager
operator that is configured on the orchestration server.

f. In the Password field, enter the password.

g. Click Save.

14. Add post-build tasks by doing one of the following actions:

a. If Jenkins is running on Microsoft Windows, go to step 15.

b. If Jenkins is running on Linux, go to step 16.

15. If Jenkins is running on Microsoft Windows, add the following post-build tasks:

a. Click Add post-build action, and then select Post build task.

b. In the Post-Build Actions section, in the Log text field, enter a unique string for the message that is
displayed in the build console output when a build fails, for example, BUILD FAILURE.

c. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"

d. Click Add another task.

e. In the Post-Build Actions section, in the Log text field, enter a unique string for the message that is
displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
f. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "
{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}"
"%CallBackURL%"

g. Click Save.

h. Go to step 17.

16. If Jenkins is running on Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent
sign (%) to access the environment variables:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build fails, for example, BUILD FAILURE.

c. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "
{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

d. Click Add another task.

e. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build is successful, for example, BUILD SUCCESS.

f. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "
{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

g. Click Save.

h. Go to step 17.

17. To stop a pipeline deployment if a Jenkins build fails, add a post-build script:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console output
when a build fails, for example, JENKINS BUILD FAILURE.

c. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X PUT --data "{"AbortNote":"Aborted from
jenkins job"}" %OrchestratorURL%/PRRestService/cicd/v1/pipelines/%PipelineID%/builds/%DeploymentID%/abort

d. Click Save.
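
If Jenkins is running on Linux, use the dollar sign ($) instead of the percent sign (%) to access the environment variables, as in step 16; a sketch of the same call:

curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X PUT --data "{\"AbortNote\":\"Aborted from Jenkins job\"}" $OrchestratorURL/PRRestService/cicd/v1/pipelines/$PipelineID/builds/$DeploymentID/abort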

Configuring and running pipelines with Deployment Manager 4.7.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks so
that you can quickly deploy high-quality software to production.

On the orchestration server, release managers use the Deployment Manager landing page to configure CI/CD
pipelines for their Pega Platform applications. The landing page displays all the running and queued application
deployments, branches that are to be merged, and reports that provide information about your DevOps
environment such as key performance indicators (KPIs).

These topics describe the features for the latest version of Deployment Manager 4.7.x.
To use notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

For more information about using Deployment Manager and data migration pipelines, see Exporting and importing
simulation data automatically with Deployment Manager.

Logging in to Deployment Manager

Deployment Manager provides a dedicated portal from which you can access features.

Accessing the Dev Studio portal

If your role has the appropriate permission, you can access Dev Studio from within Deployment Manager. You
can switch to Dev Studio to access features such as additional tools to troubleshoot issues. You can also open,
modify, and create repositories and authentication profiles.

Accessing API documentation

Deployment Manager provides REST APIs for interacting with resources that are in the Deployment Manager
interface. Use these APIs to create and manage pipelines by using automated scripts or external information.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.
Understanding Deployment Manager notifications

You can enable notifications to receive updates about the events that occur in your pipeline. For example, you
can choose to receive emails about whether unit tests failed or succeeded. You can receive notifications in the
Deployment Manager notifications gadget, through email, or both. By default, all notifications are enabled for
users who are configured in Deployment Manager.

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery
workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you
can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

Accessing systems in your pipeline

You can open the systems in your pipeline and log in to the Pega Platform instances on each system. For
example, you can access the system on which the QA stage is installed.

Filtering pipelines in the dashboard

You can filter the pipelines that the dashboard displays by application name, version, and pipeline deployment
status. By filtering pipelines, the dashboard displays only the information that is relevant to you.

Viewing merge requests

You can view the status of the merge requests for a pipeline to gain more visibility into the status of your
pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged.

Viewing deployment reports for a specific deployment

Deployment reports provide information about a specific deployment. You can view information such as the
number of tasks that you configured on a deployment that have been completed and when each task started
and ended. If there were schema changes on the deployment, the report displays the schema changes.

Starting deployments

You can start deployments in a number of ways. For example, you can start a deployment manually if you are
not using branches, by submitting a branch into the Merge Branches wizard, or by publishing application
changes in App Studio to create a patch version of your application. Your user role determines if you can start a
deployment.

Pausing and resuming deployments

When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at
the next step. Your user role determines if you can pause a deployment.

Stopping a deployment

If your role has the appropriate permissions, you can stop a deployment to prevent it from moving through the
pipeline.

Managing a deployment that has errors

If a deployment has errors, the pipeline stops processing on it. You can do actions such as rolling back the
deployment or skipping the step on which the error occurred.

Troubleshooting issues with your pipeline

Deployment Manager provides several features that help you troubleshoot and resolve issues with your
pipeline.

Understanding schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega
Platform orchestration server checks the candidate system to verify that you have the required privileges to
deploy the schema changes, and one of several results occurs.

Completing or rejecting a manual step

If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either
complete it or reject it if your role has the appropriate permissions. For example, if a user was assigned a task
and completed it, you can complete the task in the pipeline to continue the deployment. Deployment Manager
also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either
within the pipeline or through email.

Managing aged updates

If your role has the appropriate permissions, you can manage aged updates in a number of ways, such as
importing them, skipping the import, or manually deploying applications. Managing aged updates gives you
more flexibility in how you deploy application changes.
Managing artifacts generated by Deployment Manager

You can view, download, and delete application packages in repositories that are on the orchestration server. If
you are using Deployment Manager on Pega Cloud Services, application packages that you have deployed to
cloud repositories are stored on Pega Cloud Services. To manage your cloud storage space, you can download
and permanently delete the packages.

Archiving and activating pipelines

If your role has the appropriate permissions, you can archive inactive pipelines so that they are not displayed
on the Deployment Manager landing page.

Disabling and enabling a pipeline

If your role has the appropriate permissions, you can disable a pipeline on which errors continuously cause a
deployment to fail. Disabling a pipeline prevents branch merging, but you can still view, edit, and stop
deployments on a disabled pipeline.

Deleting a pipeline

If your role has the appropriate permission, you can delete a pipeline. When you delete a pipeline, its
associated application packages are not removed from the repositories that the pipeline is configured to use.

Logging in to Deployment Manager


Deployment Manager provides a dedicated portal from which you can access features.

To log in to Deployment Manager, on the orchestration server, enter the DMAppAdmin operator ID and the password
that you specified for it.

Accessing the Dev Studio portal


If your role has the appropriate permission, you can access Dev Studio from within Deployment Manager. You can
switch to Dev Studio to access features such as additional tools to troubleshoot issues. You can also open, modify,
and create repositories and authentication profiles.

To access Dev Studio, click the Operator icon, and then click Switch to Dev Studio.

For more information on enabling a role to access Dev Studio, see Providing access to the Dev Studio portal.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Accessing API documentation


Deployment Manager provides REST APIs for interacting with resources that are in the Deployment Manager interface.
Use these APIs to create and manage pipelines by using automated scripts or external information.

To access API documentation, open the Documentation/readme-for-swagger.md file in the
DeploymentManager04_07_0x.zip file that you downloaded.
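
The Swagger file documents the exact endpoints and payloads. Purely as an illustration, a request to the Deployment Manager REST service might look like the following, assuming the same base path as the pipeline abort call in Configuring Jenkins; the credentials, host, and endpoint here are placeholders rather than confirmed API details:

curl --user releaseManager:password -H "Content-Type: application/json" -X GET "https://orchestration-server.example.com/prweb/PRRestService/cicd/v1/pipelines"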

Understanding roles and users


Define roles and users to manage which users can access Deployment Manager and which features they can access.
For example, you can create a role that does not permit users to delete pipelines for a specific application.

Deployment Manager provides two default roles, which you cannot modify or delete, that define privileges for super
administrators and application administrators. Privileges for super administrators are applied across all applications,
and privileges for application administrators are applied to specific applications. Super administrators can also add
roles and specify the privileges to assign to them. Super administrators and application administrators can add
users and assign them access to the applications that they manage.

Using roles and privileges by creating a dynamic system setting

You can create roles that have specific privileges and then assign users to those roles to manage Deployment
Manager users. To use roles and privileges, you must first create the EnableAttributeBasedSecurity dynamic
system setting.

Adding and modifying roles

If you are a super administrator, you can add and modify roles. Users within a role share defined responsibilities
such as starting a pipeline.

Providing access to Dev Studio to a role


Deployment Manager provides a dedicated portal from which you can access features. From within
Deployment Manager, when you configure pipeline details, you can open, modify, and create repositories and
authentication profiles in Dev Studio if you have permissions to use the Dev Studio portal.

Adding users and specifying their roles

If you are a super administrator or application administrator, you can add users to Deployment Manager and
specify their roles. Only super administrators can create other super administrators or application
administrators who can access one or more applications. Application administrators can create other
application administrators for the applications that they manage.

Modifying user roles and privileges

Super administrators can give other users super administrative privileges or assign them as application
administrators to any application. Application administrators can assign other users as application
administrators for the applications that they manage.

Modifying your user details and password

You can modify your own user details, such as first and last name, and you can change your password.

Deleting users

If you are a super administrator or application administrator, you can delete users for the applications that you
manage.

Using roles and privileges by creating a dynamic system setting


You can create roles that have specific privileges and then assign users to those roles to manage Deployment
Manager users. To use roles and privileges, you must first create the EnableAttributeBasedSecurity dynamic system
setting.

Do the following steps:

1. In the header of Dev Studio, click Create > SysAdmin > Dynamic System Settings.

2. In the Short Description field, enter a short description.

3. In the Owning Ruleset field, enter Pega-RulesEngine.

4. In the Setting Purpose field, enter EnableAttributeBasedSecurity.

5. Click Create and open.

6. On the Settings tab, in the Value field, enter true.

7. Click Save.

Adding and modifying roles


If you are a super administrator, you can add and modify roles. Users within a role share defined responsibilities
such as starting a pipeline.

If you are a super administrator, add or modify a role by doing the following steps:

1. In the navigation pane, click Users, and then click Roles and privileges.

2. Do one of the following actions:

To add a role, click Add role.


To modify a role, click a role, and then click Edit.

3. In the Add role or Edit role dialog box, in the Name field, enter a name for the role.

4. Select the privileges that you want to assign to the role.

5. Click Submit.

Providing access to Dev Studio to a role


Deployment Manager provides a dedicated portal from which you can access features. From within Deployment
Manager, when you configure pipeline details, you can open, modify, and create repositories and authentication
profiles in Dev Studio if you have permissions to use the Dev Studio portal.

To provide access to the Dev Studio portal for a role, complete the following steps:

1. In the navigation pane, click Users, and then click Roles and privileges.

2. Do one of the following actions:

To add a role, click Add role.


To modify a role, click the role, and then click Edit.
3. In the Add role or Edit role dialog box, in the Name field, enter a name for the role.

4. Click Access to Dev Studio.

5. Click Submit.

If you specify Dev Studio as a default portal for the PegaDeploymentManager:Administrators access group, all the
users that you add in the Deployment Manager portal can access Dev Studio.

Adding users and specifying their roles


If you are a super administrator or application administrator, you can add users to Deployment Manager and specify
their roles. Only super administrators can create other super administrators or application administrators who can
access one or more applications. Application administrators can create other application administrators for the
applications that they manage.

To add users, do the following steps:

1. In the navigation pane, click Users, and then click People.

2. On the People page, click Add user.

3. In the Add user dialog box, click the User field, and do one of the following actions:

Press the Down arrow key and select the user that you want to add.
Enter an email address.

4. Click Add.

5. From the Role list, select the role to assign to the user.

6. If you selected the App admin role or a custom role, in the Applications field, enter the application name that
the user can access.

7. Click Send invite to send the user an email that contains the user name and a randomly generated password
for logging in to Deployment Manager.

Modifying user roles and privileges


Super administrators can give other users super administrative privileges or assign them as application
administrators to any application. Application administrators can assign other users as application administrators for
the applications that they manage.

1. In the navigation pane, click Users, and then click People.

2. On the People page, click the user.

3. In the Roles and privileges section, modify the user role and applications that they can access, as appropriate.

4. Click Save.

Modifying your user details and password


You can modify your own user details, such as first and last name, and you can change your password.

To update your information, do the following steps:

1. In the navigation pane, click Users, and then click People.

2. On the People page, click your user name.

3. In the Personal details section, modify your name, email address, and phone number, as appropriate.

4. To change your password:

a. Click Update password.

b. In the Change operator ID dialog box, enter your new password, reenter it to confirm it, and then click
Submit.

5. Click Save.

Deleting users
If you are a super administrator or application administrator, you can delete users for the applications that you
manage.

To delete users, do the following steps:


1. In the navigation pane, click Users, and then click People.

2. On the People page, click the Delete icon for the user that you want to delete.

Understanding Deployment Manager notifications


You can enable notifications to receive updates about the events that occur in your pipeline. For example, you can
choose to receive emails about whether unit tests failed or succeeded. You can receive notifications in the
Deployment Manager notifications gadget, through email, or both. By default, all notifications are enabled for users
who are configured in Deployment Manager.

Viewing and updating email accounts for notifications

Receiving email notifications requires that an email account is configured on the orchestration server. You can
view and update your email settings in Deployment Manager.

Creating custom Deployment Manager notification channels

You can extend Deployment Manager notification capabilities by creating custom notification channels. For
example, you can send text messages to mobile devices when tasks start, stop, and are unsuccessful.

Managing notifications

Enable and receive notifications so that you can remain informed about important tasks in your pipeline. For
example, you can receive emails when certain tasks fail.

Viewing and updating email accounts for notifications


Receiving email notifications requires that an email account is configured on the orchestration server. You can view
and update your email settings in Deployment Manager.

Changing your email settings requires access to Dev Studio, so your user role must have permission to access Dev
Studio. For more information, see Understanding roles and users.

1. In the navigation pane of Deployment Manager, click Settings > Email configuration.

2. To update your email settings, perform the following steps:

a. At the top of the Settings: Email configuration page, click Dev Studio.

b. In the Edit Email Account rule form, configure and save the email account that you want to use to receive
notifications.

c. In the bottom left corner of Dev Studio, click Back to Deployment Manager to return to the Deployment
Manager portal.

d. Click the refresh icon to refresh your email configuration information.

Creating custom Deployment Manager notification channels


You can extend Deployment Manager notification capabilities by creating custom notification channels. For
example, you can send text messages to mobile devices when tasks start, stop, and are unsuccessful.

To create a custom notification channel, complete the following steps:

1. On the orchestration server, in Pega Platform, create a custom notification channel.

For more information, see Adding a custom notification channel.

2. Add the application ruleset, which contains the channel that you created, to the Deployment Manager
application.

a. In the header of Dev Studio, click Deployment Manager, and then click Definition.

b. On the Edit Application rule form, in the Application rulesets section, click Add ruleset.

c. Press the Down arrow key and select the ruleset and version that contains the custom notification
channel.

d. Save the rule form.

3. Enable the channel that you created on the appropriate notifications by saving the notification in the
application ruleset that contains the channel.

For example, if you want to use the Mobile channel for the pyStartDeployment notification, save the
pyStartDeployment notification in the application ruleset that contains the Mobile channel.
4. Enable the channel on the notification.

a. Open the notification by clicking Records > Notifications.

b. Click the Channels tab.

c. On the Channel configurations page, select the channel that you want to use.

d. Save the rule form.

Understanding custom Deployment Manager notification channels

When notifications are enabled, you can receive notifications about the events that occur in your pipeline, such
as when tasks start or stop. You can receive notifications through email, the Deployment Manager notifications
gadget, or both. You can also create custom notification channels to meet application requirements such as
sending notifications as phone text messages or as push notifications on mobile devices.

Understanding custom Deployment Manager notification channels


When notifications are enabled, you can receive notifications about the events that occur in your pipeline, such as
when tasks start or stop. You can receive notifications through email, the Deployment Manager notifications gadget,
or both. You can also create custom notification channels to meet application requirements such as sending
notifications as phone text messages or as push notifications on mobile devices.

Deployment Manager provides the following notifications to which you can add channels:

pyAbortDeployment
pyTaskFailure
pyTaskCompletion
pyStartDeployment
pyStageCompletion
pySchemaChange
pyDeploymentCompletion
pyAgedUpdateActionTaken
pyAgedUpdateActionRequired

Managing notifications
Enable and receive notifications so that you can remain informed about important tasks in your pipeline. For
example, you can receive emails when certain tasks fail.

To enable notifications and select the notifications that you want to receive, do the following steps:

1. In the navigation pane, click your profile icon.

2. Click Notification preferences.

3. Select the events for which you want to receive notifications.

4. Specify how you want to receive notifications.

5. Click Submit.

Configuring an application pipeline


When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery
workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can
specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in
the following scenarios:

To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline
that has only a production stage or development and production stages.
To use parallel development and hotfix life cycles for your application.

Adding a pipeline on Pega Cloud Services

If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application name
and version for the pipeline. Many fields are populated by default, such as the URL of your development system
and product rule name and version.

Adding a pipeline on premises

When you add a pipeline on premises, you define all the stages and tasks that you want to do on each system.
For example, if you are using branches, you can start a build when a branch is merged. If you are using a QA
system, you can run test tasks to validate application data.
Modifying an application pipeline

You can modify the details of your pipeline, such as configuring tasks, updating the repositories that the
pipeline uses, and modifying the URLs of the systems in your environment. You cannot modify information if
your pipeline is running.

Adding a pipeline on Pega Cloud Services


If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application name and
version for the pipeline. Many fields are populated by default, such as the URL of your development system and
product rule name and version.

To add a pipeline on Pega Cloud Services, do the following steps:

1. In the navigation pane, click Pipelines > Application pipelines.

2. Click New.

3. Specify the details of the application for which you are creating the pipeline.

a. To change the URL of your development system, which is populated by default with your development
system URL, in the Development environment field, press the Down arrow key and select the URL.

This is the system on which the product rule that defines the application package that moves through the
repository is located.

b. In the Application field, press the Down arrow key and select the name of the application.

c. In the Version field, press the Down arrow key and select the application version.

d. Click the Access group field and select the access group for which pipeline tasks are run.

This access group must be present on all the candidate systems and have at least the sysadmin4 role.
Ensure that the access group is correctly pointing to the application name and version that is configured
in the pipeline.

e. In the Pipeline name field, enter a unique name for the pipeline.

4. If you are using a separate product rule to manage test cases, to deploy a test case, in the Application test
cases section, select the Deploy test applications check box; then, complete the following steps:

a. In the Test application field, enter the name of the test application.

b. In the Version field, enter the version of the test case product rule.

c. In the Access group field, enter the access group for which test cases are run.

d. In the Product rule field, enter the name of the test case product rule.

e. From the Deploy until field, select the pipeline stage until which the test case product rule will be deployed.

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable
test coverage, and Verify test coverage tasks are run for the access group that is specified in this section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group
that is associated with the test application.

5. To change the product rule that defines the contents of the application, in the Product rule field, enter the
name of the product rule, which is populated by default with the application name.

6. To change the product rule version, in the Version field, enter the version, which is populated by default with
the application version.

7. Click Create.

The system adds tasks that are required to successfully run a workflow, for example, Deploy and Generate
Artifact; you cannot delete these tasks. For Pega Cloud Services, the system also adds mandatory tasks that
must be run on the pipeline, for example, the Check guardrail compliance task and the Verify security checklist
task.

8. Add tasks that you want to perform on your pipeline, such as Pega unit testing.

For more information, see Modifying stages and tasks in the pipeline.

9. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Adding a pipeline on premises


When you add a pipeline on premises, you define all the stages and tasks that you want to do on each system. For
example, if you are using branches, you can start a build when a branch is merged. If you are using a QA system,
you can run test tasks to validate application data.

To add a pipeline on premises, complete the following steps:

1. Click Pipelines > Application pipelines.

2. Click New.

3. Specify the details of the application for which you are creating the pipeline.

a. In the Development environment field, enter the URL of the development system.

This is the system on which the product rule that defines the application package that moves through the
repository is located.

b. In the Application field, press the Down arrow key and select the name of the application.

c. In the Version field, press the Down arrow key and select the application version.

d. In the Access group field, press the Down arrow key and select the access group for which pipeline tasks
are run.

This access group must be present on all the candidate systems and have at least the sysadmin4 role.

e. In the Pipeline name field, enter a unique name for the pipeline.

f. In the Product rule field, enter the name of the product rule that defines the contents of the application.

g. In the Version field, enter the product rule version.

4. If you are using a separate product rule to manage test cases, in the Application test cases section, to deploy a
test case, select the Deploy test applications check box; then, complete the following steps:

a. In the Test application field, enter the name of the test application.

b. In the Version field, enter the version of the test case product rule.

c. In the Access group field, enter the access group for which test cases are run. Ensure that the access
group is correctly pointing to the application name and version that is configured in the pipeline.

d. In the Product rule field, enter the name of the test case product rule.

e. From the Deploy until field, select the pipeline stage until which the test case product rule will be
deployed.

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable
test coverage, and Verify test coverage tasks are run for the access group that is specified in this section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group
that is associated with the test application.

5. To configure dependent applications, click Dependencies.

a. Click Add.

b. In the Application name field, press the Down arrow key and select the application name.

c. In the Application version field, press the Down arrow key and select the application version.

d. In the Repository name field, press the Down arrow key and select the repository that contains the
production-ready artifact of the dependent application.

If you want the latest artifact of the dependent application to be automatically populated, ensure that the
repository that contains the production-ready artifact of the dependent application is configured to
support file updates.

e. In the Artifact name field, press the Down arrow key and select the artifact.

For more information about dependent applications, see Listing product dependencies.

f. Click Next.

6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the
authentication profile that each system uses to communicate with the orchestration system.

a. In the Environments field for the system, press the Down arrow key and select the URL of the system.

b. If you are using your own authentication profiles, in the Authentication field for the system, press the Down
arrow key and select the authentication profile that the orchestration server uses to communicate with
the system.

By default, the fields are populated with the DMAppAdmin authentication profile.

7. In the Artifact management section, specify the development and production repositories through which the
application package that the product rule defines moves through the pipeline.

8. In the Development repository field, press the Down arrow key and select the development repository.

9. In the Production repository field, press the Down arrow key and select the production repository.

10. In the External orchestration server section, if you are using a Jenkins step in a pipeline, specify the Jenkins
details.

a. In the URL field, enter the URL of the Jenkins server.

b. In the Authentication profile field, press the Down arrow key and select the authentication profile on the
orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.

11. Click Next.

12. Specify whether you are using branches in your application.

If you are not using branches, click the No radio button, and then go to step 14.
If you are using branches, go to step 13.

13. To specify branch options, do the following steps:

a. Click the Yes radio button.

b. Do one of the following actions:

To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.

To merge branches into a new ruleset, click New ruleset.

c. In the Password field, enter the password that locks the rulesets on the development system.

14. Click Next.

The system adds tasks that are required to successfully run a workflow, for example, Deploy and Generate
Artifact; you cannot delete these tasks. The system also adds other tasks to enforce best practices, such as
Check guardrail compliance and Verify security checklist.

15. To specify that a branch must meet a compliance score before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check guardrail compliance.

c. In the Weighted compliance score field, enter the minimum required compliance score.

d. Click Submit.

For more information about compliance scores, see Compliance score logic.

16. To specify that a branch must be reviewed:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check review status.

c. Click Submit.

For more information about branch reviews, see Branch reviews.

17. To run Pega unit tests on the branches for the pipeline application or for an application that is associated with
an access group before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Pega unit testing.

c. To run all the Pega unit tests for an application that is associated with an access group, in the Access
Group field, enter the access group.

d. Click Submit.

For more information about creating Pega unit tests, see Creating Pega unit test cases.

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable test
coverage, and Verify test coverage tasks are run for the access group that is specified in the Application test
cases section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group that is
associated with the test application.

18. To start a deployment automatically when a branch is merged, click the Trigger deployment on merge check
box.

Do not select this check box if you want to manually start deployments.

For more information, see Manually starting a deployment in Deployment Manager.

19. Clear a check box for a deployment life cycle stage to skip it.

20. In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline. See
the following topics for more information:

Running Pega unit tests by adding the Run Pega unit tests task
Running Jenkins steps by adding the Run Jenkins step task
Specifying that an application meet a compliance score by adding the Check guardrail compliance score
task
Ensuring that the Application Security Checklist is completed by adding the Verify security checklist task
Starting test coverage by adding the Enable test coverage task
Stopping test coverage by adding the Validate test coverage task
Running Pega scenario tests by adding the Run Pega scenario tests task
Refreshing application quality by adding the Refresh application quality task
Modifying the Approve for production task

21. Clear the Production ready check box if you do not want to generate an application package, which is sent to
the production repository.

You cannot clear this check box if you are using a production stage in the life cycle.

22. Click Finish.

23. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Running Pega unit tests by adding the Run Pega unit tests task

If you use Pega unit tests to validate application data, add the Pega unit testing task on the pipeline stage
where you want to run it. For example, you can run Pega unit tests on a QA system.

Running Jenkins steps by adding the Run Jenkins step task

If you are using Jenkins to perform tasks in your pipeline, you can add the Run Jenkins step to the stage on
which you want it to run. If you have configured the Jenkins OrchestratorURL and PipelineID parameters, when
this task fails, the pipeline stops running. For more information about configuring these parameters, see
Configuring Jenkins.

Continuing or stopping a deployment by adding the Perform manual step task

Use manual steps so that users must take an action before a pipeline deployment can continue. Users can
either accept the task to continue the deployment or reject the task to stop it.

Specifying that an application meet a compliance score by adding the Check guardrail compliance score task

You can use the Check guardrail compliance score task so that an application must meet a compliance score
for the deployment to continue. The default value is 97, which you can modify.

Starting another pipeline by adding the Trigger deployment task

You can start another pipeline by adding the Trigger deployment task to a stage in your current pipeline. By
starting another pipeline from a current pipeline, you can add more stages to your pipeline.

Ensuring that the Application Security Checklist is completed by adding the Verify security checklist task

For your pipeline to comply with security best practices, you can add a task to ensure that all the steps in the
Application Security Checklist are performed.

Starting test coverage by adding the Enable test coverage task

Add the Enable test coverage task to start test coverage. Starting and stopping test coverage generates a
report that identifies the executable rules in your application that are either covered or not covered by tests. As
a best practice, to ensure application quality, you should test all the rules in your application for which testing
is supported.

Stopping test coverage by adding the Validate test coverage task


Add this task to stop a test coverage session. Starting and stopping test coverage generates a report that
identifies the executable rules in your application that are either covered or not covered by tests. As a best
practice, to ensure application quality, you should test all the rules in your application for which testing is
supported.

Running Pega scenario tests by adding the Run Pega scenario tests task

If you are using Pega scenario tests, you can run them in your pipeline by using the Run Pega scenario tests
task. Deployment Manager supports Selenium 3.141.59.

Refreshing application quality by adding the Refresh application quality task

To refresh the Application Quality dashboard, which provides information about the health of your application,
on the candidate system, add the Refresh application quality task. You can refresh the dashboard after running
Pega unit tests, checking guardrail compliance, running Pega scenario tests, and starting or stopping test
coverage.

Modifying the Approve for production task

The Approve for production task is added to the stage before production. Use this task if you want a user to
approve application changes before those changes are sent to production.

Running Pega unit tests by adding the Run Pega unit tests task
If you use Pega unit tests to validate application data, add the Pega unit testing task on the pipeline stage where
you want to run it. For example, you can run Pega unit tests on a QA system.

When you use separate product rules for test cases and run a pipeline, the Pega unit testing task is run for the
access group that is specified in the Application test cases section, which you configure when you add or modify a
pipeline.

To run Pega unit tests for either the pipeline application or for an application that is associated with an access
group, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the task list, click Pega unit testing.

3. Do one of the following actions:

To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID
field, enter the pxInsName of the test suite.

You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the
Edit Test Suite form in Dev Studio. If you do not specify a test suite, all the Pega unit tests for the
pipeline application are run.

To run all the Pega unit tests for an application that is associated with an access group, in the Access
Group field, enter the access group.

For more information about creating Pega unit tests, see Creating Pega unit test cases.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Running Jenkins steps by adding the Run Jenkins step task


If you are using Jenkins to perform tasks in your pipeline, you can add the Run Jenkins step to the stage on which
you want it to run. If you have configured the Jenkins OrchestratorURL and PipelineID parameters, when this task
fails, the pipeline stops running. For more information about configuring these parameters, see Configuring Jenkins.

To add this task, do the following steps:

1. Do one of the following actions:

Click a manually added task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Run Jenkins step.

3. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you
want to run.
4. In the Token field, enter the Jenkins authentication token.

5. In the Parameters field, enter parameters, if any, to send to the Jenkins job.

6. Click Submit.

7. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Continuing or stopping a deployment by adding the Perform manual step task

Use manual steps so that users must take an action before a pipeline deployment can continue. Users can either
accept the task to continue the deployment or reject the task to stop it.

To add a manual step that a user must perform in the pipeline, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Perform manual step.

3. In the Job name field, enter text that describes the action that you want the user to take.

4. In the Assigned to field, press the Down arrow key and select the operator ID to assign the task to.

5. Click Submit.

6. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Specifying that an application meet a compliance score by adding the Check guardrail compliance score task

You can use the Check guardrail compliance score task so that an application must meet a compliance score for the
deployment to continue. The default value is 97, which you can modify.

To specify that an application must meet a compliance score, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Check guardrail compliance.

3. In the Weighted compliance score field, enter the minimum required compliance score.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Starting another pipeline by adding the Trigger deployment task


You can start another pipeline by adding the Trigger deployment task to a stage in your current pipeline. By starting
another pipeline from a current pipeline, you can add more stages to your pipeline.

To add the Trigger deployment task to a stage in your pipeline, perform the following steps:

1. In Deployment Manager, do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
From within a stage, click Add task.

2. In the Task list, click Trigger deployment.

3. In the Application name field, press the Down arrow key and then select the application that you want to
deploy.

4. In the Pipeline name field, press the Down arrow key and then select the pipeline that you want to start.
5. If you want to deploy the artifact that you are deploying in the current pipeline, select the Deploy current
artifact check box. Otherwise, the system deploys a new application on the pipeline.

6. Click Submit.

Ensuring that the Application Security Checklist is completed by adding the Verify security checklist task

For your pipeline to comply with security best practices, you can add a task to ensure that all the steps in the
Application Security Checklist are performed.

You must log in to the system for which this task is configured, and then mark all the tasks in the Application
Security checklist as completed for the pipeline application. For more information about completing the checklist,
see Preparing your application for secure deployment.

To add the Verify security checklist task, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Verify security checklist.

3. Click Submit.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Starting test coverage by adding the Enable test coverage task


Add the Enable test coverage task to start test coverage. Starting and stopping test coverage generates a report
that identifies the executable rules in your application that are either covered or not covered by tests. As a best
practice, to ensure application quality, you should test all the rules in your application for which testing is supported.

For more information about application-level coverage reports, see Generating an application-level test coverage
report.

When you use separate product rules for test cases and run a pipeline, the Enable test coverage task is run for the
access group that is specified in the Application test cases section, which you configure when you add or modify a
pipeline.

To add this task, complete the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Enable test coverage.

3. Select the Start a new session check box to start a test coverage session every time that the pipeline runs the
deployment. If you do not select this check box and a test coverage session is already running, the pipeline
pauses and returns an error.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Adding a pipeline on premises

When you add a pipeline on premises, you define all the stages and tasks that you want to do on each system.
For example, if you are using branches, you can start a build when a branch is merged. If you are using a QA
system, you can run test tasks to validate application data.

Adding a pipeline on Pega Cloud Services

If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application name
and version for the pipeline. Many fields are populated by default, such as the URL of your development system
and product rule name and version.

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that
moves through the pipeline.

Stopping test coverage by adding the Validate test coverage task


Add this task to stop a test coverage session. Starting and stopping test coverage generates a report that identifies
the executable rules in your application that are either covered or not covered by tests. As a best practice, to ensure
application quality, you should test all the rules in your application for which testing is supported.

For more information about application-level coverage reports, see Generating an application-level test coverage
report.

When you use separate product rules for test cases and run a pipeline, the Validate test coverage task is run for the
access group that is specified in the Application test cases section, which you configure when you add or modify a
pipeline.

1. Add this task below the Enable test coverage task by doing one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Validate test coverage.

3. Click Submit.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Running Pega scenario tests by adding the Run Pega scenario tests task

If you are using Pega scenario tests, you can run them in your pipeline by using the Run Pega scenario tests task.
Deployment Manager supports Selenium 3.141.59.

To add the Run Pega scenario tests task, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Run Pega scenario tests.

3. In the User name field, enter the user name for the Pega Platform instance on which you are running scenario
tests.

For the Run Pega scenario tests task, if you are using a separate product rule for a test application, the user
name that you provide should belong to the access group that is associated with the test application.

4. In the Password field, enter the Pega Platform password.

5. From the Test Service Provider field, select the test service provider that you are using to run the scenario tests
in the pipeline.

6. Do one of the following actions:

If you selected CrossBrowserTesting, BrowserStack, or SauceLabs, go to step 7.


If you selected Standalone, go to step 8.

7. If you selected CrossBrowserTesting, BrowserStack, or SauceLabs:

a. In the Provider auth name field, enter the auth name that you use to log in to the test service provider.

b. In the Provider auth key field, enter the key for the test service provider.

c. Go to step 9.

8. If you selected Standalone, in the Provider URL field, enter the URL of the Selenium Standalone Server by using
one of the following formats (see the sketch after these steps):

a. Hub hostname and port: Use the format hubhostname:port.

b. IP address: Enclose the IP address in double quotation marks.

9. In the Browser field, enter the browser that you are using to record scenario tests.

10. In the Browser version field, enter the browser version.


11. In the Platform field, enter the development platform that you are using to record tests.

12. In the Screen resolution field, enter the resolution at which you are recording scenario tests.

13. Click Submit.

14. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline
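
To illustrate the Standalone provider URL format from step 8, the following sketch shows how a Selenium 3.141.59 Python client addresses a Selenium Standalone Server. The host name, port, and browser are placeholder values for the example, not values that Deployment Manager requires.

    from selenium import webdriver
    from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

    # Hub hostname and port in the hubhostname:port format from step 8;
    # "selenium-host" and 4444 are placeholders.
    provider_url = "http://selenium-host:4444/wd/hub"

    # Connect to the standalone server with the browser used to record the tests.
    driver = webdriver.Remote(
        command_executor=provider_url,
        desired_capabilities=DesiredCapabilities.CHROME,
    )
    driver.quit()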

Refreshing application quality by adding the Refresh application quality task

To refresh the Application Quality dashboard on the candidate system, which provides information about the health
of your application, add the Refresh application quality task. You can refresh the dashboard after running Pega
unit tests, checking guardrail compliance, running Pega scenario tests, and starting or stopping test coverage.

To add this task, complete the following steps:

1. Do one of the following actions:

a. Click a task, click the More icon, and then click either Add task above or Add task below.

b. Click Add task in the stage.

2. In the Task list, click Refresh application quality.

3. Click Submit.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Modifying the Approve for production task


The Approve for production task is added to the stage before production. Use this task if you want a user to approve
application changes before those changes are sent to production.

To modify the Approve for production task, do the following steps:

1. Click the Info icon.

2. In the Job name field, enter a name for the task.

3. In the Assign to field, press the Down arrow key and select the user who approves the application for
production.

An email is sent to this user, who can approve or reject application changes from within the email.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Modifying an application pipeline


You can modify the details of your pipeline, such as configuring tasks, updating the repositories that the pipeline
uses, and modifying the URLs of the systems in your environment. You cannot modify information if your pipeline is
running.

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that
moves through the pipeline.

Modifying URLs and authentication profiles

You can modify the URLs of your development and candidate systems and the authentication profiles that are
used to communicate between those systems and the orchestration server.

Modifying repositories

You can modify the development and production repositories through which the product rule that contains
application contents moves through the pipeline. All the generated artifacts are archived in the development
repository, and all the production-ready artifacts are archived in the production repository.

Configuring Jenkins server information for running Jenkins jobs

If you are using a Run Jenkins step, configure Jenkins server information so that you can run Jenkins jobs.

Modifying merge options for branches

If you are using branches in your application, specify options for merging branches into the base application.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can
skip a stage or add tasks such as Pega unit testing to be done on the QA stage.

Modifying application details


You can modify application details, such as the product rule that defines the content of the application that moves
through the pipeline.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. Click Actions Pipeline settings .

3. Click Application details.

4. In the Development environment field, enter the URL of the development system, which is the system on which
the product rule that defines the application package that moves through the pipeline is located.

5. In the Version field, press the Down arrow key and select the application version.

6. In the Product rule field, enter the product rule that defines the contents of the application.

7. In the Version field, enter the product rule version.

8. If you are using a separate product rule to manage test cases, in the Application test cases section, complete
the following steps:

a. To deploy test cases, select the Deploy test applications check box.

b. In the Test application field, enter the name of the test application.

c. In the Version field, enter the version of the test case product rule.

d. In the Access group field, enter the access group for which test cases are run.

e. In the Product rule field, enter the name of the test case product rule.

f. From the Deploy until field, select the last pipeline stage to which the test case product rule is
deployed.

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable
test coverage, and Verify test coverage tasks are run for the access group that is specified in this section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group
that is associated with the test application.

9. If the application depends on other applications, in the Dependencies section, add those applications.

a. Click Add.

b. In the Application name field, press the Down arrow key and select the application name.

c. In the Application version field, press the Down arrow key and select the application version.

d. In the Repository name field, press the Down arrow key and select the repository that contains the
production-ready artifact of the dependent application.

If you want the latest artifact of the dependent application to be automatically populated, ensure that the
repository that contains the production-ready artifact of the dependent application is configured to
support file updates.

e. In the Artifact name field, press the Down arrow key and select the artifact.

For more information about dependent applications, see Listing product dependencies.

10. Click Save.


11. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Modifying URLs and authentication profiles


You can modify the URLs of your development and candidate systems and the authentication profiles that are used
to communicate between those systems and the orchestration server.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. Click Actions Pipeline settings .

3. Click Deployment stages.

4. In the Environments field for the system, press the Down arrow key and select the URL of the system.

5. In the Authentication field for the system, press the Down arrow key and select the authentication profile that
you want to use for communication from the orchestration server to the system.

6. Click Save.

7. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Modifying repositories

You can modify the development and production repositories through which the product rule that contains
application contents moves through the pipeline. All the generated artifacts are archived in the development
repository, and all the production-ready artifacts are archived in the production repository.

If you are using Pega Cloud Services, you do not need to configure repositories, because default repositories are
provided; however, you can use different repositories instead.

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. Click Actions Pipeline settings .

3. Click Artifact Management.

4. If you are using Deployment Manager on premises, or on Pega Cloud Services with default repositories,
complete the following tasks:

a. In the Application repository section, in the Development repository field, press the Down arrow key and
select the development repository.

b. In the Production repository field, press the Down arrow key and select the production repository.

5. If you are using Deployment Manager on Pega Cloud Services and want to use repositories other than the
default repositories, complete the following tasks:

a. In the Artifact repository section, click Yes.

b. In the Development repository field, press the Down arrow key and select the development repository.

c. In the Production repository field, press the Down arrow key and select the production repository.

6. Click Save.

7. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Configuring Jenkins server information for running Jenkins jobs


If you are using a Run Jenkins step, configure Jenkins server information so that you can run Jenkins jobs.

1. If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines Application pipelines ,
and then click the name of the pipeline.

2. Click Actions Pipeline settings .

3. Click External orchestration server.


4. In the URL field, enter the URL of the Jenkins server.

5. In the Authentication profile field, press the Down arrow key and select the authentication profile on the
orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.

6. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.
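
For context, remotely triggering a Jenkins job amounts to an authenticated POST to the job's build endpoint, which is the standard Jenkins REST interface. The following Python sketch shows that call with placeholder server, job, and credential values; it illustrates the mechanism only and is not Deployment Manager's own implementation.

    import requests

    # Placeholder values; substitute your Jenkins server, job, and credentials.
    jenkins_url = "https://jenkins.example.com"
    job_name = "deploy-my-app"
    user, api_token = "builder", "my-jenkins-api-token"

    # Standard Jenkins remote-trigger endpoint; parameterized jobs use
    # /buildWithParameters instead of /build.
    response = requests.post(
        f"{jenkins_url}/job/{job_name}/build",
        auth=(user, api_token),
    )
    response.raise_for_status()
    print("Jenkins job queued:", response.status_code)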

Modifying merge options for branches


If you are using branches in your application, specify options for merging branches into the base application.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. Click Actions Pipeline settings .

3. Click Merge policy.

4. If you are not using branches, click the No radio button, and then go to step 6.

5. If you are using branches, do the following actions:

a. Click Yes.

b. Do one of the following actions:

To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
To merge branches into a new ruleset, click New ruleset.

c. In the Password field, enter the password that locks the rulesets on the development system.

6. Click Save.

7. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Modifying stages and tasks in the pipeline


You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can skip
a stage or add tasks such as Pega unit testing to be done on the QA stage.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. Click Pipeline model.

3. To specify that a branch must meet a compliance score before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check guardrail compliance.

c. In the Weighted compliance score field, enter the minimum required compliance score.

d. Click Submit.

For more information about compliance scores, see Compliance score logic.

4. To specify that a branch must be reviewed before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check review status.

c. Click Submit.

For more information about branch reviews, see Branch reviews.

5. To run Pega unit tests on the branches for the pipeline application, or for an application that is associated with
an access group, before they can be merged:

a. In the Merge criteria pane, click Add task.


b. From the Task list, select Pega unit testing.

c. To run all the Pega unit tests for an application that is associated with an access group, in the Access
Group field, enter the access group.

d. Click Submit.

For more information about creating Pega unit tests, see Creating Pega unit test cases.

6. To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check
box. Do not select this check box if you want to manually start a deployment.

For more information, see Manually starting a deployment in Deployment Manager.

7. Clear a check box for a deployment life cycle stage to skip it.

8. In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline. See
the following topics for more information:

Running Pega unit tests by adding the Run Pega unit tests task
Running Jenkins steps by adding the Run Jenkins step task
Specifying that an application meet a compliance score by adding the Check guardrail compliance score
task
Ensuring that the Application Security Checklist is completed by adding the Verify security checklist task
Starting test coverage by adding the Enable test coverage task
Stopping test coverage by adding the Validate test coverage task
Running Pega scenario tests by adding the Run Pega scenario tests task
Refreshing application quality by adding the Refresh application quality task
Modifying the Approve for production task

9. Clear the Production ready check box if you do not want to generate an application package, which is sent to
the production repository. You cannot clear this check box if you are using a production stage in the life cycle.

10. Click Finish.

11. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Accessing systems in your pipeline


You can open the systems in your pipeline and log in to the Pega Platform instances on each system. For example,
you can access the system on which the QA stage is installed.

To access systems, do the following steps:

1. If the pipeline is not already open, in the navigation pane, click Pipelines Application pipelines , and then click
the name of your pipeline.

2. Click the pop-out arrow for the system that you want to open.

Filtering pipelines in the dashboard


You can filter the pipelines that the dashboard displays by application name, version, and pipeline deployment
status. By filtering pipelines, the dashboard displays only the information that is relevant to you.

To filter the display of pipelines, perform the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines Application pipelines .

2. At the top of the dashboard, in the View lists, select the information with which you want to filter the display of
pipelines, and then click Apply.

Viewing merge requests


You can view the status of the merge requests for a pipeline to gain more visibility into the status of your pipeline.
For example, you can see whether a branch was merged in a deployment and when it was merged.

To view merge requests, do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. In the Development stage, click X Merges in queue to view all the branches that are in the queue or for which a
merge is in progress.

3. In the Merge requests ready for deployment dialog box, click View all merge requests to view all the branches
that are merged into the pipeline.

Viewing deployment reports for a specific deployment

Deployment reports provide information about a specific deployment. You can view information such as the number
of tasks that you configured on a deployment that have been completed and when each task started and ended. If
there were schema changes on the deployment, the report displays the schema changes.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. Perform one of the following actions:

To view the report for the current deployment, click the More icon, and then click View report.
To view the report for a previous deployment, expand the Deployment History pane and click Reports for
the appropriate deployment.

Viewing reports for all deployments

Reports provide a variety of information about all the deployments in your pipeline. For example, you can view
the frequency of new deployments to production.

Understanding schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega
Platform orchestration server checks the candidate system to verify that you have the required privileges to
deploy the schema changes. One of the following results occurs:

Viewing reports for all deployments


Reports provide a variety of information about all the deployments in your pipeline. For example, you can view the
frequency of new deployments to production.

You can view the following key performance indicators (KPIs); a small computation sketch follows the list:

Deployment Success – Percentage of deployments that are successfully deployed to production


Deployment Frequency – Frequency of new deployments to production
Deployment Speed – Average time taken to deploy to production
Start frequency – Frequency at which new deployments are triggered
Failure rate – Average number of failures per deployment
Merges per day – Average number of branches that are successfully merged per day
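
As a rough illustration of how two of these KPIs relate to the underlying deployment records, the following Python sketch computes Deployment Success and Failure rate from an invented data structure; Deployment Manager calculates these values internally.

    # Hypothetical deployment records, invented for this example.
    deployments = [
        {"to_production": True, "succeeded": True, "failures": 0},
        {"to_production": True, "succeeded": False, "failures": 2},
        {"to_production": False, "succeeded": True, "failures": 1},
    ]

    # Deployment Success: percentage of production deployments that succeeded.
    prod = [d for d in deployments if d["to_production"]]
    deployment_success = 100 * sum(d["succeeded"] for d in prod) / len(prod)

    # Failure rate: average number of failures per deployment.
    failure_rate = sum(d["failures"] for d in deployments) / len(deployments)

    print(f"Deployment success: {deployment_success:.0f}%")   # 50%
    print(f"Failure rate: {failure_rate:.1f}")                # 1.0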

Do the following steps:

1. Open the pipeline by doing one of the following actions:

If the pipeline is open, click Actions View report .


If a pipeline is not open, in the navigation pane, click Reports. Next, in the Pipeline field, press the Down
arrow key and select the name of the pipeline for which to view the report.

2. From the list that appears in the top right of the Reports page, select whether you want to view reports for all
deployments, the last 20 deployments, or the last 50 deployments.

Starting deployments
Manually starting a deployment in Deployment Manager

You can start a deployment manually if you are not using branches and are working directly in rulesets. You
can also start a deployment manually if you do not want deployments to start automatically when branches are
merged.

Publishing application changes in App Studio

You can publish application changes that you make in App Studio to the pipeline. Publishing your changes
creates a patch version of the application and starts a deployment. For example, you can change a life cycle,
data model, or user interface elements in a screen and submit those changes to systems in the pipeline.

Starting a deployment as you merge branches from the development environment

In either a branch-based or distributed, branch-based environment, you can immediately start a deployment by
submitting a branch into a pipeline in the Merge Branches wizard. The wizard displays the merge status of
branches so that you do not need to open Deployment Manager to view it.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.
Manually starting a deployment in Deployment Manager

You can start a deployment manually if you are not using branches and are working directly in rulesets. You can also
start a deployment manually if you do not want deployments to start automatically when branches are merged.

To start a deployment manually, do the following steps:

1. If you do not want deployments to start automatically when branches are merged:

a. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the
name of the pipeline.

b. Click Pipeline model.

c. Clear the Trigger deployment on merge check box.

2. Do one of the following actions:

If the pipeline that you want to start is open, click Start deployment.
In the navigation pane, click Pipelines Application pipelines , and then click Start deployment for the
pipeline that you want to start.

3. In the Start deployment dialog box, start a new deployment or deploy an existing application by completing
one of the following actions:

To deploy a new application package, go to step 4.


To deploy an application package that is on a cloud repository, go to step 5.

4. To start a deployment and deploy a new application package, do the following steps:

a. Click Generate new artifact.

b. In the Deployment name field, enter the name of the deployment.

c. Click Deploy.

5. To start a deployment and deploy an application package that is on a cloud repository, do the following steps:

a. Click Deploy an existing artifact.

b. In the Deployment name field, enter the name of the deployment.

c. In the Select a repository field, press the Down arrow key and select the repository.

d. In the Select an artifact field, press the Down arrow key and select the application package.

6. Click Deploy.

Publishing application changes in App Studio


You can publish application changes that you make in App Studio to the pipeline. Publishing your changes creates a
patch version of the application and starts a deployment. For example, you can change a life cycle, data model, or
user interface elements in a screen and submit those changes to systems in the pipeline.

If you do not have a product rule for the pipeline application, you must create one that has the same name and
version as the pipeline application. For more information, see Creating a product rule that includes associated data
by using the Create menu.

Your pipeline should have at least a quality assurance or staging stage with a manual task so that you do not deploy
changes to production that have not been approved by stakeholders. You can submit applications to a pipeline when
there is only one unlocked ruleset version in each ruleset of your application.

When you publish an application to a stage, your rules are deployed immediately to that system. To allow
stakeholders to inspect and verify changes before they are deployed to a stage, configure a manual task on the
previous stage. When the pipeline runs, it pauses at a manual step that is assigned to a user, which allows
stakeholders to review your changes before they approve the step and resume running the pipeline.

1. In App Studio, do one of the following actions:

Click Turn editing on, and then, in the navigation pane, click Settings Versions .
In the App Studio header, click Publish.

The Settings page displays the stages that are enabled in the application pipeline in Deployment Manager.
The available stages are, in order, quality assurance, staging, and production.

It also displays the application versions that are on each system. The version numbers are taken from the
number at the end of each application deployment name in Deployment Manager. For example, if a
deployment has a name of "MyNewApp:01_01_75", the dialog box displays "v75" (see the sketch after these steps).

2. Submit an application from development to quality assurance or staging in your pipeline by completing the
following steps:

a. Click either Publish to QA or Publish to staging.

b. To add a comment that is published when you submit the application, enter the comment in the
Publish confirmation dialog box.

c. If Agile Workbench has been configured, to associate a bug or user story with the application, in the
Associated User stories/Bugs field, press the Down arrow key and select the bug or user story.

d. Click OK.

Each unlocked ruleset version in your application is locked and rolled to the next highest version and is
packaged and imported into the system. The amount of time that publishing application changes takes
depends on the size of your application.

A new application is also copied from the application that is defined on the pipeline in Deployment
Manager. The application patch version is updated to reflect the version of the new rulesets; for example,
if the ruleset versions of the patch application are 01-01-15, the application version is updated to be
01.01.15. A new product rule is also created.

In addition, this application is locked and cannot be unlocked. You can use this application to test specific
patch versions of your application on quality assurance or staging systems. You can also use it to roll back
a deployment.

3. Make changes to your application in the unlocked rulesets, which you can publish again into the pipeline. If an
application is already on the system, it is overridden by the new version that you publish.

4. If you configured a manual step, request that stakeholders review and test your changes. After they
communicate to you that they have completed testing, you can publish your changes to the next stage in the
pipeline.

5. Publish the application to the next stage in the pipeline by clicking the link that is displayed.

The name of the link is the Job name field of the manual task that is defined on the stage.

If you do not have a manual task defined, the application automatically moves to the next stage.
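
As noted in step 1, the displayed version comes from the number at the end of the deployment name. A minimal Python sketch of that mapping, using the document's own example; the helper function is hypothetical:

    def display_version(deployment_name):
        # "MyNewApp:01_01_75" -> "v75": take the number after the last underscore.
        return "v" + deployment_name.rsplit("_", 1)[-1]

    print(display_version("MyNewApp:01_01_75"))  # v75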

Starting a deployment as you merge branches from the development environment

In either a branch-based or distributed, branch-based environment, you can immediately start a deployment by
submitting a branch into a pipeline in the Merge Branches wizard. The wizard displays the merge status of branches
so that you do not need to open Deployment Manager to view it.

If you are using a separate product rule for a test application, after you start a deployment by using the Merge
Branches wizard, the branches of both the target and test applications are merged in the pipeline.

You can submit a branch to your application and start the continuous integration portion of the pipeline when the
following criteria are met (a conceptual sketch follows the list):

You have created a pipeline for your application in Deployment Manager.


You are merging a single branch.
The RMURL dynamic system setting, which defines the URL of the orchestration server, is configured on the
system.
All the rulesets in your branch belong to a single application that is associated with your pipeline. Therefore,
your branch cannot contain rulesets that belong to different application layers.
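
Conceptually, the wizard's precondition check combines these criteria, as in the following Python sketch; the function and its inputs are invented for illustration.

    def can_submit_branch(pipeline_exists, branch_count, rmurl_configured, branch_apps):
        # All of the criteria above must hold before the merge can start.
        return (
            pipeline_exists
            and branch_count == 1
            and rmurl_configured
            and len(set(branch_apps)) == 1  # every ruleset belongs to one application
        )

    print(can_submit_branch(True, 1, True, ["MyApp", "MyApp"]))        # True
    print(can_submit_branch(True, 1, True, ["MyApp", "MyFramework"]))  # False: mixed layers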

Configuring settings before using the Merge Branches wizard

You can start a branch merge, which triggers a deployment, by using the Merge Branches wizard. You must
configure certain settings before you can submit a branch to your application.

Submitting a branch into an application by using the Merge Branches wizard

You can start a branch merge, which triggers a deployment, by submitting a branch into an application in the
Merge Branches wizard. By using the wizard to start merges, you can start a deployment without additional
configuration.

Configuring settings before using the Merge Branches wizard


You can start a branch merge, which triggers a deployment, by using the Merge Branches wizard. You must
configure certain settings before you can submit a branch to your application.

Before you start a branch merge, do the following tasks:


1. Check all rules into their base rulesets before you merge them.

2. Check if there are any potential conflicts to address before merging branches. For more information, see
Viewing branch quality and branch contents.

3. As a best practice, lock a branch after development is complete so that no more changes can be made. For
more information, see Locking a branch.

Submitting a branch into an application by using the Merge Branches wizard

You can start a branch merge, which triggers a deployment, by submitting a branch into an application in the Merge
Branches wizard. By using the wizard to start merges, you can start a deployment without additional configuration.

To submit a branch into an application by using the Merge Branches wizard, perform the following steps:

1. In the navigation pane of Dev Studio, click App, and then click Branches.

2. Right-click the branch and click Merge.

3. Click Proceed.

The wizard displays a message in the following scenarios:


If there are no pipelines that are configured for your application or there are no branches in the target
application.
If the value for the RMURL dynamic system setting is not valid.

4. To switch to the standard Merge Branches wizard, which merges branches directly into target rulesets, click
Switch to standard merge. For more information, see Merging branches into target rulesets.

5. In the Application pipelines section, from the Pipeline list, select the pipeline into which you want to merge
branches.

6. In the Merge Description field, enter information that you want to capture about the merge.

This information appears when you view deployment details.

7. In the Associated User stories/bugs field, press the Down arrow key, and then select the Agile Workbench user
story or bug that you want to associate with this branch merge.

8. Click Merge.

The system queues the branch for merging, generates a case ID for the merge, and runs the continuous integration
criteria that you specified.

If there are errors, and the merge is not successful, an email is sent to the operator ID of the release manager that is
specified on the orchestration server.

The branch is stored in the development repository and, after the merge is completed, Deployment Manager deletes
the branch from the development system. By storing branches in the development repository, Deployment Manager
keeps a history, which you can view, of the branches in a centralized location.

If your development system is appropriately configured, you can rebase your development application to obtain the
most recently committed rulesets after you merge your branches. For more information, see Rebasing rules to
obtain latest versions.

Pausing and resuming deployments


When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at the
next step. Your user role determines if you can pause a deployment.

To pause a deployment:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines .

2. Click the pipeline.

3. Click Pause.

4. Click the Pause button again to resume the deployment.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Stopping a deployment

If your role has the appropriate permissions, you can stop a deployment to prevent it from moving through the
pipeline.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines .

2. Click the More icon, and then click Abort.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Managing a deployment that has errors


If a deployment has errors, the pipeline stops processing it. You can perform actions such as rolling back the
deployment or skipping the step on which the error occurred.

Do the following actions:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines .

2. Click the More icon, and then do one of the following actions:

To resume running the pipeline from the task, click Resume from current task.
To skip the step and continue running the pipeline, click Skip current task and continue.
To roll back to an earlier deployment, click Rollback.
To stop running the pipeline, click Abort.

Troubleshooting issues with your pipeline


Deployment Manager provides several features that help you troubleshoot and resolve issues with your pipeline.

You can:

View deployment logs for information about the completion status of operations.
Run diagnostics to verify that your environment is correctly configured.
Stop all deployments that are running on a pipeline.
Use a chatbot to obtain information about common issues.

Viewing deployment logs

View logs for a deployment to see the completion status of operations, for example, when a deployment moves
from staging to production. When the Deploy task runs, the application package is imported into the candidate
system. By default, logs record all the new rule and data instances and all the updated rule and data instances
that are in this application package. You can disable the logging of such rule and data types and can change
the logging level to control which events are displayed in the log.

Diagnosing a pipeline

You can diagnose your pipeline to troubleshoot issues and verify that your pipeline is configured properly.

Stopping all deployments

You can stop all the deployments on a pipeline at once to quickly troubleshoot issues and resolve failed
pipelines.

Obtaining information about common issues by using the chatbot

Deployment Manager provides a chatbot that you can use to obtain information about common issues, such as
connectivity between systems, Jenkins configuration, and branch merging. After you enter your search text, the
chatbot provides you with relevant answers and links to more information.

Viewing deployment logs


View logs for a deployment to see the completion status of operations, for example, when a deployment moves from
staging to production. When the Deploy task runs, the application package is imported into the candidate system.
By default, logs record all the new rule and data instances and all the updated rule and data instances that are in
this application package. You can disable the logging of such rule and data types and can change the logging level
to control which events are displayed in the log.

To view a deployment log, do the following steps:

1. In Dev Studio, on the appropriate candidate system, change the logging level to control which events the log
displays.

For example, you can change logging levels of your deployment from INFO to DEBUG for troubleshooting
purposes (see the illustration after these steps). For more information, see Logging Level Settings tool.

2. To disable logging of new and updated rule and data instances in imported application packages, perform the
following steps:

a. On the candidate system for which you want to disable reporting, in the navigation pane of Admin Studio,
click Resources Log categories .

b. On the Log categories page, for the DeploymentManager.DeltaInstanceLogging category, click the More
icon, and then click Change logging level.

c. In the dialog box that appears, in the Update log level of category to list, select OFF.

d. Click Submit.

3. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines .

4. Do one of the following actions:

To view the log for the current deployment, click the More icon, and then click View logs.
To view the log for a previous deployment, expand the Deployment History pane, and then click Logs for
the deployment.
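
Logging levels here behave the way they do in most logging frameworks: lowering the level from INFO to DEBUG makes more events visible. The following generic Python illustration shows the effect; it is not the Pega Logging Level Settings tool.

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("deployment")

    log.debug("Hidden while the level is INFO")
    log.info("Deploy task started")               # shown

    log.setLevel(logging.DEBUG)                   # the INFO -> DEBUG change for troubleshooting
    log.debug("Now visible for troubleshooting")  # shown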

Diagnosing a pipeline

You can diagnose your pipeline to troubleshoot issues and verify that your pipeline is configured properly.

For example, you can determine whether the target application and product rule are in the development
environment, whether connectivity between systems and repositories is working, and whether pre-merge settings
are correctly configured (a connectivity-check sketch follows these steps).

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines .

2. Click Actions Diagnose pipeline .

3. In the Diagnostics window, review the errors, if any.

If the RMURL dynamic system setting is not configured, Deployment Manager displays a message that you can
disregard if you are not using branches, because you do not need to configure the dynamic system setting.
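
One of the checks that diagnostics performs is connectivity between systems. The following standalone Python sketch shows a reachability check of that kind, with placeholder environment URLs; the real diagnostics cover much more than HTTP reachability.

    import requests

    # Placeholder environment URLs, invented for this example.
    environments = {
        "development": "https://dev.example.com/prweb",
        "qa": "https://qa.example.com/prweb",
        "production": "https://prod.example.com/prweb",
    }

    for name, url in environments.items():
        try:
            status = requests.get(url, timeout=10).status_code
            print(f"{name}: reachable (HTTP {status})")
        except requests.RequestException as error:
            print(f"{name}: NOT reachable ({error})")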

Stopping all deployments


You can stop all the deployments on a pipeline at once to quickly troubleshoot issues and resolve failed pipelines.

Take the following steps to stop all deployments on a pipeline:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines .

2. Click Actions Abort open deployments .

3. In the Abort open deployments dialog box, enter a reason for stopping the deployments, and then click OK.

Obtaining information about common issues by using the chatbot


Deployment Manager provides a chatbot that you can use to obtain information about common issues, such as
connectivity between systems, Jenkins configuration, and branch merging. After you enter your search text, the
chatbot provides you with relevant answers and links to more information.

If the chatbot is disabled, enable it. For more information, see Enabling and disabling the chatbot.

To use the Deployment Manager chatbot to help resolve issues, perform the following steps:

1. In the bottom right corner of the Deployment Manager portal, click the chatbot icon.

2. Do one of the following actions:

a. Click the appropriate link from the list of issues that the chatbot displays.

b. Enter text for which you want to receive more information, and then click Enter.

3. To clear the chatbot history, in the chatbot window, click the More icon, and then click Clear chat history.

Enabling and disabling the chatbot

Use the chatbot to obtain more information about common Deployment Manager issues, such as branch
merging and pipeline configuration. You can disable and enable the chatbot. By default, the chatbot is enabled.

Enabling and disabling the chatbot


Use the chatbot to obtain more information about common Deployment Manager issues, such as branch merging
and pipeline configuration. You can disable and enable the chatbot. By default, the chatbot is enabled.

Only super administrators can enable and disable the chatbot. For more information about user roles, see
Understanding roles and users.

1. In the navigation pane, click Settings General settings .

2. Do one of the following actions:

To enable the chatbot, select the Enable self-service Deployment Manager web chatbot check box.
To disable the chatbot, clear the check box.

3. Click Save.

4. At the top of the General Settings page, click the Page back icon.

5. Click the Refresh icon to refresh Deployment Manager and apply your changes.

Understanding schema changes in application packages


If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform
orchestration server checks the candidate system to verify that you have the required privileges to deploy the
schema changes. One of the following results occurs:

If you have the appropriate privileges, schema changes are automatically applied to the candidate system, the
application package is deployed to the candidate system, and the pipeline continues.
If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema
changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can
apply the schema changes. After you complete the step, the pipeline continues. For more information about
completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically deploy schema changes so that you do not have to manually apply
them if you do not have the required privileges. For more information, see Configuring settings to automatically
deploy schema changes.

Your user role must have the appropriate permissions so that you can manage schema changes.

Configuring settings to automatically apply schema changes

You can configure settings to automatically deploy schema changes that are in an application package that is
to be deployed on candidate systems. Configure these settings so that you do not have to apply schema
changes if you do not have the privileges to deploy them.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Configuring settings to automatically apply schema changes


You can configure settings to automatically deploy schema changes that are in an application package that is to be
deployed on candidate systems. Configure these settings so that you do not have to apply schema changes if you do
not have the privileges to deploy them.

Do the following steps:

1. On the candidate system, in Pega Platform, set the AutoDBSchemaChanges dynamic system setting to true to
enable schema changes at the system level.

a. In Dev Studio, search for AutoDBSchemaChanges.

b. In the dialog box that appears for the search results, click AutoDBSchemaChanges.

c. On the Settings tab, in the Value field, enter true.

d. Click Save.

2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more
information, see Specifying privileges for an Access of Role to Object rule.

These settings are applied sequentially. If the AutoDBSchemaChanges dynamic system setting is set to false, you
cannot deploy schema changes, even if you have the SchemaImport privilege.

For more information about the database/AutoDBSchemaChanges dynamic system setting, see Importing rules and
data by using a direct connection to the database.

Schema changes are also attached to the deployment report for the pipeline.
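
Because the settings are evaluated sequentially, the decision reduces to a two-level gate: the system-level setting first, then the user-level privilege. A minimal Python sketch of that logic; the function and its inputs are illustrative only:

    def can_auto_deploy_schema(auto_db_schema_changes, has_schema_import_privilege):
        # The system-level setting is checked first; when it is false, the
        # user-level SchemaImport privilege is never enough on its own.
        if not auto_db_schema_changes:
            return False
        return has_schema_import_privilege

    print(can_auto_deploy_schema(False, True))  # False: blocked at the system level
    print(can_auto_deploy_schema(True, True))   # True: schema changes deploy automatically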

Viewing deployment reports for a specific deployment


Deployment reports provide information about a specific deployment. You can view information such as the
number of tasks that you configured on a deployment that have been completed and when each task started
and ended. If there were schema changes on the deployment, the report displays the schema changes.

Completing or rejecting a manual step


If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either
complete it or reject it if your role has the appropriate permissions. For example, if a user was assigned a task and
completed it, you can complete the task in the pipeline to continue the deployment. Deployment Manager also
sends you an email when there is a manual step in the pipeline. You can complete or reject a step either within the
pipeline or through email.

Deployment Manager also generates a manual step if there are schema changes in the application package that the
release manager must apply. For more information, see Understanding schema changes in application packages.

To complete or reject a manual step, do one of the following actions:

1. To complete or reject a manual step from within an email, click either Accept or Reject.

2. To complete or reject a manual step in the pipeline:

a. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the
name of the pipeline.

b. Accept or reject the step by doing one of the following actions:

To resolve the task so that the deployment continues through the pipeline, click Complete.
To reject the task so that the deployment does not proceed, click Reject.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Continuing or stopping a deployment by adding the Perform manual step task

Use manual steps so that users must take an action before a pipeline deployment can continue. Users can
either accept the task to continue the deployment or reject the task to stop it.

Managing aged updates


If your role has the appropriate permissions, you can manage aged updates in a number of ways, such as importing
them, skipping the import, or manually deploying applications. Managing aged updates gives you more flexibility in
how you deploy application changes.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines , and then click the name
of the pipeline.

2. Click View aged updates to view a list of the rules and data instances, which are in the application package,
that are older than the instances that are on the system.

3. Click the More icon and do one of the following actions:

To import the older rule and data instances that are in the application package into the system, which
overwrites the newer versions that are on the system, click Overwrite aged updates.
To skip the import, click Skip aged updates.
To manually deploy the package from the Import wizard on the system, click Deploy manually and
resume. Deployment Manager does not run the Deploy step on the stage.

Understanding aged updates

An aged update is a rule or data instance in an application package that is older than an instance that is on a
system to which you want to deploy the application package. By being able to import aged updates, skip the
import, or manually deploy your application changes, you now have more flexibility in determining the rules
that you want in your application and how you want to deploy them.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Understanding aged updates


An aged update is a rule or data instance in an application package that is older than an instance that is on a
system to which you want to deploy the application package. By being able to import aged updates, skip the import,
or manually deploy your application changes, you now have more flexibility in determining the rules that you want in
your application and how you want to deploy them.

For example, suppose that you update a dynamic system setting directly on a quality assurance system, and an
application package that contains an older instance of that setting is then deployed to it. Before Deployment
Manager deploys the package, the system detects that the version of the dynamic system setting on the system is
newer than the version in the package and creates a manual step in the pipeline.
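
In essence, detecting an aged update is a timestamp comparison between the packaged instance and the instance already on the target system, as in this Python sketch with invented record dates:

    from datetime import datetime

    def is_aged_update(packaged_updated_at, on_system_updated_at):
        # The packaged instance is "aged" when the target already has a newer one.
        return packaged_updated_at < on_system_updated_at

    in_package = datetime(2023, 1, 10)  # instance inside the application package
    on_system = datetime(2023, 3, 5)    # same instance, updated later on the QA system

    print(is_aged_update(in_package, on_system))  # True: pipeline adds a manual step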

Managing artifacts generated by Deployment Manager


You can view, download, and delete application packages in repositories that are on the orchestration server. If you
are using Deployment Manager on Pega Cloud Services, application packages that you have deployed to cloud
repositories are stored on Pega Cloud Services. To manage your cloud storage space, you can download and
permanently delete the packages.

If you are using a separate product rule to manage a test application, the name of the test product rule is the same
as the name of the main product rule, with _Tests appended to it.

Do the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines Application pipelines .

2. Click the pipeline for which you want to download or delete packages.

3. Click Actions Browse artifacts .

4. Click either Development Repository or Production Repository.

5. To download a package, click the package, and then save it to the appropriate location.

6. To delete a package, select the check boxes for the packages that you want to delete and then click Delete.

Archiving and activating pipelines


If your role has the appropriate permissions, you can archive inactive pipelines so that they are not displayed on the
Deployment Manager landing page.

To archive or activate a pipeline, do the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines Application Pipelines .

2. To archive a pipeline, perform the following steps:

a. Click the More icon, and then click Archive for the pipeline that you want to archive.

b. In the Archive pipeline dialog box, click Submit.

3. To activate an archived pipeline, perform the following steps:

a. Click Pipelines Archived Pipelines .

b. Click Activate for the pipeline that you want to activate.

c. In the Activate pipeline dialog box, click Submit.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Disabling and enabling a pipeline


If your role has the appropriate permissions, you can disable a pipeline on which errors continuously cause a
deployment to fail. Disabling a pipeline prevents branch merging, but you can still view, edit, and stop deployments
on a disabled pipeline.

To disable and enable a pipeline, perform the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines Application pipelines .

2. To disable a pipeline, perform the following steps:

a. Click the More icon, and then click Disable for the pipeline that you want to disable.

b. In the Disable pipeline dialog box, click Submit.


3. To enable a disabled pipeline, click the More icon, and then click Enable.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Deleting a pipeline

If your role has the appropriate permission, you can delete a pipeline. When you delete a pipeline, its associated
application packages are not removed from the repositories that the pipeline is configured to use.

To delete a pipeline, do the following steps:

1. In the navigation pane, click Pipelines Application pipelines .

2. Click the More icon, and then click Delete for the pipeline that you want to delete.

3. In the Delete pipeline dialog box, click Submit.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Using data migration pipelines with Deployment Manager 4.7.x


Data migration tests provide you with significant insight into how the changes that you make to decision logic affect
the results of your strategies. To ensure that your simulations are reliable enough to help you make important
business decisions, you can deploy a sample of your production data to a dedicated data migration test
environment.

When you use Deployment Manager 4.7.x in data migration pipelines, you automate exporting data from the
production environment and importing it into the simulation environment. Data migration pipelines also require the
following:

Pega Platform 8.3.x or 8.4.x


Decision management
Pega Marketing

For more information about data migration pipelines, see these articles on Pega Community:

Deploying sample production data to a simulation environment for testing


Creating simulation tests

For information about using all the Deployment Manager 4.7.x features, see Configuring and running pipelines with
Deployment Manager 4.7.x.

Installing, upgrading, and configuring Deployment Manager 4.7.x for data migration pipelines

You can use Deployment Manager 4.6.x or later in data migration pipelines so that you can automatically
export simulation data from a production system and import it into a simulation system. For more information
about using Deployment Manager 4.7.x with data migration pipelines, see Exporting and importing simulation
data automatically with Deployment Manager.

Exporting and importing simulation data automatically with Deployment Manager

Create and run data migration pipelines in Deployment Manager 4.7.x to automatically export simulation data
from a production environment into a simulation environment in which you can test simulation data. You can
also use Deployment Manager to monitor and obtain information about your simulations, for example, by
running diagnostics to ensure that your environment configurations are correct and by viewing reports that
display key performance indicators (KPIs).

Installing, upgrading, and configuring Deployment Manager 4.7.x for data migration pipelines
You can use Deployment Manager 4.6.x or later in data migration pipelines so that you can automatically export
simulation data from a production system and import it into a simulation system. For more information about using
Deployment Manager 4.7.x with data migration pipelines, see Exporting and importing simulation data automatically
with Deployment Manager.

To install, upgrade, and configure Deployment Manager on the simulation and production environments and on the
orchestration server, perform the following steps:

1. Install or upgrade Deployment Manager.


For first-time installations or upgrades from Deployment Manager 3.2.1, install Deployment Manager on
the candidate systems (production and simulation environments) and the orchestration server. Upgrading
is done automatically, and you do not need to do post-upgrade steps.

For more information, see Installing or upgrading to Deployment Manager 4.7.x.


For upgrades from Deployment Manager releases earlier than 3.2.1, do post-upgrade steps. You do not
need to do post-upgrade steps if you are upgrading from version 3.2.1 or later.

For more information, see Running post-upgrade steps.

2. For first-time installations, configure communication between the orchestration server and the candidate
systems:

a. Enable the default operators on each system.

b. Configure the authentication profiles, which enable communication between systems, on each system.
Deployment Manager provides default authentication profiles, or you can create your own.

For more information, see Configuring authentication profiles.

3. To move the orchestration server to a different environment, migrate your pipelines to the new orchestration
server, and then, on the new orchestration server, configure the URL of the new orchestration server. This URL
is used to update the task status on the orchestration server and diagnostics checks.

For more information, see step 2 in Configuring the orchestration server.

Exporting and importing simulation data automatically with Deployment Manager
Create and run data migration pipelines in Deployment Manager 4.7.x to automatically export simulation data from
a production environment into a simulation environment in which you can test simulation data. You can also use
Deployment Manager to monitor and obtain information about your simulations, for example, by running diagnostics
to ensure that your environment configurations are correct and by viewing reports that display key
performance indicators (KPIs).

Creating a pipeline

Create a pipeline by defining the production and simulation environments and the application details for the
pipeline. By using a data migration pipeline, you can export and import simulation data automatically.

Modifying a pipeline

You can change the URLs of your production and simulation environments. You can also change the application
information for which you are creating the pipeline.

Diagnosing a pipeline

You can diagnose your pipeline to verify its configuration. For example, you can verify that the orchestration
system can connect to the production and simulation environments.

Scheduling a pipeline by creating a job scheduler rule

You can schedule a data migration pipeline to run during a specified period of time by creating and running a
job scheduler. The job scheduler runs a Deployment Manager activity (pzScheduleDataSyncPipeline) on the
specified pipeline, based on your configuration, such as weekly or monthly.

Starting a pipeline manually

If you do not run a data migration pipeline based on a job scheduler, you can run it manually in Deployment
Manager.

Pausing a pipeline

Pause a pipeline to stop processing the data migration. When you pause a data migration, the pipeline
completes the current task and stops the data migration.

Stopping a pipeline

Stop a pipeline to stop data migrations from being exported and imported.

Stopping and resuming a pipeline that has errors

If a data migration has errors, the pipeline stops processing on it, and you can either resume or stop running
the pipeline.

Deleting a pipeline

Viewing data migration logs


View the logs for a data migration to see the completion status of operations, for example, when a data
migration moves to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change the logging level of your deployment from INFO to DEBUG for troubleshooting
purposes. For more information, see Logging Level Settings tool.

Viewing reports for all data migrations in a pipeline

Reports provide a variety of information about all the data migrations in your pipeline so that you can gain
more visibility into data migration processing. For example, you can view the average time taken to complete
data migrations.

Creating a pipeline
Create a pipeline by defining the production and simulation environments and the application details for the
pipeline. By using a data migration pipeline, you can export and import simulation data automatically.

Do the following steps:

1. In the navigation pane, click Pipelines > Data migration pipelines.

2. Click New.

3. On the Environment Details page, if you are using Deployment Manager on-premises, configure environment
details.

This information is automatically populated if you are using Deployment Manager in Pega Cloud Services environments,
but you can change it.

a. In the Environment fields, enter the URLs of the production and simulation environments.

b. If you are using your own authentication profiles, from the Auth profile lists, select the authentication
profiles that you want the orchestration server to use to communicate with the production and simulation
environments.

c. Click Next.

4. On the Application details page, specify the application information for which you are creating the pipeline.

a. From the Application list, select the name of the application.

b. From the Version list, select the application version.

c. From the Access group list, select the access group for which you want to run pipeline tasks. This access
group must be present on the production and simulation environments and have at least the sysadmin4
role.

d. In the Name of the pipeline field, enter the pipeline name.

e. Click Next.

The Pipeline page displays the stages and tasks, which you cannot delete, that are in the pipeline.

5. Click Finish.

Modifying a pipeline
You can change the URLs of your production and simulation environments. You can also change the application
information for which you are creating the pipeline.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the
name of the pipeline.

2. Click Actions > Settings.

3. Modify environment details by doing the following:

a. Click Environment Details.

b. In the Environment fields, enter the URLs of the production and simulation environments.

4. To change the application information for which you are creating the pipeline, click Application details.

a. From the Version list, select the application version.

b. From the Access group list, select the access group for which you want to run pipeline tasks.

This access group must be present on the production and simulation environments and have at least the
sysadmin4 role.
5. Click Save.

Diagnosing a pipeline
You can diagnose your pipeline to verify its configuration. For example, you can verify that the orchestration system
can connect to the production and simulation environments.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines, and then click the name of the pipeline.

2. Click Actions > Diagnose pipeline.

3. In the Diagnostics window, review the errors, if any.

Scheduling a pipeline by creating a job scheduler rule


You can schedule a data migration pipeline to run during a specified period of time by creating and running a job
scheduler. The job scheduler runs a Deployment Manager activity (pzScheduleDataSyncPipeline) on the specified
pipeline, based on your configuration, such as weekly or monthly.

For more information about job scheduler rules, see Job Scheduler rules.

Do the following steps:

1. On the orchestration server, in the navigation pane of Dev Studio, click Records > SysAdmin > Job Scheduler.

2. On the Create Job Scheduler rule form, enter the label of the scheduler and select the ruleset into which to
save the job scheduler.

3. Click Create and open.

4. On the Edit Job Scheduler rule form, on the Definition tab, from the Runs on list, configure the job scheduler to run on all nodes or on a single node:

To run the job scheduler on all nodes in a cluster, click All associated nodes.
To run the job scheduler on only one node in a cluster, click Any one associated node.

5. From the Schedule list, select how often you want to start the job scheduler, and then specify the options for it.

6. Select the context for the activity resolution.

If you want to resolve the pzScheduleDataSyncPipeline activity in the context of Deployment Manager, go
to step 7.
If you want to resolve the activity in the context that is specified in the System Runtime Context, go to
step 8.

7. To resolve the pzScheduleDataSyncPipeline activity in the context of Deployment Manager:

a. From the Context list, select Specify access group.

b. In the Access group field, press the Down arrow key and select the access group that can access
Deployment Manager.

c. Go to step 9.

8. To resolve the activity in the context that is specified in the System Runtime Context:

a. From the Context list, select Use System Runtime Context.

b. Update the access group of the BATCH requestor type with the access group that can access Deployment Manager. In the header of Dev Studio, click Configure > System > General.

c. On the System:General page, on the Requestors tab, click the BATCH requestor type.

d. On the Edit Requestor Type rule form, on the Definition tab, in the Access Group Name field, press the
Down arrow key and select the access group that can access Deployment Manager.

e. Click Save.

9. On the Job Scheduler rule form, in the Class field, press the Down arrow key and select Pega-Pipeline-DataSync.

10. In the Activity field, press the Down arrow key and select pzScheduleDataSyncPipeline.

11. Click the Parameters link that appears below the Activity field.

12. In the Activity Parameters dialog box, in the Parameter value field for the PipelineName parameter, enter the
data migration pipeline that the job scheduler runs.

13. In the Parameter value field for the ApplicationName parameter, enter the application that the data migration
pipeline is running.

14. Click Submit.

15. Save the Job Scheduler rule form.
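For illustration, assuming a data migration pipeline named CustomerSimData that was created for an application named UPlusBank (both names are hypothetical placeholders, not values from this document), the activity parameters would be set as follows:

PipelineName = CustomerSimData (the data migration pipeline that the job scheduler runs)
ApplicationName = UPlusBank (the application that the pipeline is running)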

Starting a pipeline manually


If you do not run a data migration pipeline based on a job scheduler, you can run it manually in Deployment
Manager.

Do the following steps:

1. Do one of the following actions:

If the pipeline for which you want to run a data migration is open, click Start data migration.
If the pipeline is not open, click Pipelines > Data migration pipelines, and then click Start data migration.

2. In the Start data migration dialog box, click Yes.

Pausing a pipeline
Pause a pipeline to stop processing the data migration. When you pause a data migration, the pipeline completes
the current task and stops the data migration.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the
name of the pipeline.

2. Click Pause.

Stopping a pipeline
Stop a pipeline to stop data migrations from being exported and imported.

To stop a pipeline, do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the
name of the pipeline.

2. Click the More icon, and then click Abort.

Stopping and resuming a pipeline that has errors


If a data migration has errors, the pipeline stops processing on it, and you can either resume or stop running the
pipeline.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the
name of the pipeline.

2. Click the More icon, and then do one of the following:

To resume running the pipeline from the task that failed, click Start data migration pipeline.
To stop running the pipeline, click Abort.

Deleting a pipeline
When you delete a pipeline, its associated application packages are not deleted from the pipeline repositories.

1. In the navigation pane, click Pipelines > Data migration pipelines.

2. Click the Delete icon for the pipeline that you want to delete.

3. Click Submit.

Viewing data migration logs


View the logs for a data migration to see the completion status of operations, for example, when a data migration
moves to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change the logging level of your deployment from INFO to DEBUG for troubleshooting purposes. For more
information, see Logging Level Settings tool.

1. If the pipeline is not open, in the navigation pane, click Pipelines > Data migration pipelines, and then click the
name of the pipeline.

2. Do one of the following actions:

To view the log for the current data migration, click the More icon, and then click View logs.
To view the log for a previous data migration, expand the Deployment History pane, and then click Logs for the appropriate deployment.

Viewing reports for all data migrations in a pipeline


Reports provide a variety of information about all the data migrations in your pipeline so that you can gain more
visibility into data migration processing. For example, you can view the average time taken to complete data
migrations.

You can view the following key performance indicators (KPIs):

Data migration success – Percentage of successfully completed data migrations
Data migration frequency – Frequency of new deployments to production
Data migration speed – Average time taken to complete data migrations
Start frequency – Frequency at which new data migrations are triggered
Failure rate – Average number of failures per data migration

To view reports, do the following tasks:

1. Do one of the following actions:

If the pipeline is open, click Actions > View report.


If a pipeline is not open, in the navigation pane, click Reports. Next, in the Pipeline field, press the Down
arrow key and select the name of the pipeline for which to view the report.

2. From the list that appears in the top right of the Reports page, select whether you want to view reports for all
deployments, the last 20 deployments, or the last 50 deployments.

Deployment Manager 3.4.x


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega
applications from within Pega Platform. You can create a standardized deployment process so that you can deploy
predictable, high-quality releases without using third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application
package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager 3.4.x is compatible with Pega 7.4. You can download it for Pega Platform from the Deployment
Manager Pega Exchange page.

These topics describe the features for the latest version of Deployment Manager 3.4.x.

Each customer Virtual Private Cloud (VPC) on Pega Cloud Services has a dedicated orchestrator instance for Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud Services application.

Installing, upgrading, and configuring Deployment Manager 3.4.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks and allow you to quickly deploy high-quality software to production.

Configuring and running pipelines with Deployment Manager 3.4.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks so that you can quickly deploy high-quality software to production.

Installing, upgrading, and configuring Deployment Manager 3.4.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and
allow you to quickly deploy high-quality software to production.

This document describes the features for the latest version of Deployment Manager 3.4.x.

Installing Deployment Manager 3.4.x

Install Deployment Manager 3.4.x on-premises. Each customer virtual private cloud (VPC) on Pega Cloud has a
dedicated orchestrator instance for Deployment Manager. You do not need to install Deployment Manager
to use it with your Pega Cloud application. If you are upgrading from an earlier release to Deployment Manager
3.4.x, contact Pegasystems Global Customer Support (GCS) to request a new version.

Upgrading to Deployment Manager 3.4.x

After you install Deployment Manager 3.4.x, you must do post-upgrade steps. Before you upgrade, ensure that
no deployments are running, have errors, or are paused.

Configuring systems in the pipeline

Complete the following tasks to set up a pipeline for all supported CI/CD workflows. If you are using branches,
you must configure additional settings after you perform the required steps.

Configuring the development system

After you configure the orchestration server and all your candidate systems, configure additional settings so
that you can create pipelines if you are using branches in a distributed or non-distributed branch-based
environment.

Configuring additional settings (optional)

As part of your pipeline, you can optionally send email notifications to users or configure Jenkins if you are
using a Jenkins task.

Installing Deployment Manager 3.4.x


Install Deployment Manager 3.4.x on-premises. Each customer virtual private cloud (VPC) on Pega Cloud has a
dedicated orchestrator instance for Deployment Manager. You do not need to install Deployment Manager to use
it with your Pega Cloud application. If you are upgrading from an earlier release to Deployment Manager 3.4.x,
contact Pegasystems Global Customer Support (GCS) to request a new version.

If you are using Deployment Manager on premises, complete the following steps to install it:

If you are upgrading from Deployment Manager 3.2.1, finish the upgrade immediately after you import the files on premises or after Deployment Manager 3.4.x is deployed on Pega Cloud Services, so that your pipelines work in Deployment Manager 3.4.x.

1. Install Pega 7.4 on all systems in the CI/CD pipeline.

2. Browse to the Deployment Manager Pega Marketplace page, and then download the
DeploymentManager03.0240x.zip file to your local disk on each system.

3. Extract the DeploymentManager03.0240x.zip file.

4. Use the Import wizard to import files into the appropriate systems. For more information about the Import
wizard, see Importing a file by using the Import wizard.

5. On the orchestration server, import the following files:

PegaDevOpsFoundation_03.04.0x.zip
PegaDeploymentManager_03.04.0x.zip

6. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_03.04.0x.zip
file.

7. If you are using a distributed development, on the remote development system, import the
PegaDevOpsFoundation_03.04.0x.zip file.

8. Do one of the following actions:

If you are upgrading to Deployment Manager 3.4.x, perform the upgrade. For more information, see
Upgrading to Deployment Manager 3.4.x.
If you are not upgrading Deployment Manager 3.4.x, continue the installation procedure. For more
information, see Configuring the orchestration server.

Upgrading to Deployment Manager 3.4.x


After you install Deployment Manager 3.4.x, you must do post-upgrade steps. Before you upgrade, ensure that no
deployments are running, have errors, or are paused.

To upgrade to Deployment Manager 3.4.x either on Pega Cloud Services or on premises, perform the following
steps:

1. Enable default operators and configure authentication profiles on the orchestration server and candidate
systems. For more information, see Configuring authentication profiles on the orchestration server and
candidate systems.

2. On each candidate system, add the PegaDevOpsFoundation application to your application stack.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Built on application section, click Add application.

c. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.
d. In the Version field, press the Down arrow key and select the version of Deployment Manager that you are
using.

e. Click Save.

If you are upgrading from Deployment Manager 3.2.1, you do not need to do the rest of the steps in this procedure or the required configuration steps in the remainder of this document. If you are upgrading from earlier releases and have pipelines configured, complete this procedure.

3. On the orchestration server, log in to the release management application.

4. In Designer Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the
results.

5. Click Actions > Run.

6. In the dialog box that is displayed, click Run.

7. Modify the current release management application so that it is built on PegaDeploymentManager:03-04-01.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Edit Application rule form, on the Definition tab, in the Built on application section, for the
PegaDeploymentManager application, press the Down arrow key and select 03.04.01.

c. Click Save.

8. Merge rulesets to the PipelineData ruleset.

a. Click Designer Studio > System > Refactor > Rulesets.

b. Click Copy/Merge RuleSet.

c. Click the Merge Source RuleSet(s) to Target RuleSet radio button.

d. Click the RuleSet Versions radio button.

e. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and
then click the Move icon.

All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created
the application, select all the ruleset versions that contain pipeline data.

9. In the Target RuleSet Information section, in the Name field, press the Down arrow key and select PipelineData.

10. In the Version field, enter 01-01-01.

11. For the Delete Source RuleSet(s) upon completion of merge? option, click No.

12. Click Next.

13. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset.

14. Click Done.

Your pipelines are migrated to the Pega Deployment Manager application. Log out of the orchestration server and
log back in to it with the DMReleaseAdmin operator ID and the password that you specified for it.
You do not need to perform any of the required configuration procedures.

Configuring systems in the pipeline


Complete the following tasks to set up a pipeline for all supported CI/CD workflows. If you are using branches, you
must configure additional settings after you perform the required steps.

To configure systems in the pipeline, do the following steps:

1. Configuring authentication profiles on the orchestration server and candidate systems

2. Configuring the orchestration server

3. Configuring candidate systems

4. Configuring repositories on the orchestration server and candidate systems

Configuring authentication profiles on the orchestration server and candidate systems

Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate with
the candidate systems.

Configuring the orchestration server

The orchestration server is the system on which release managers configure and manage CI/CD pipelines.
Configure it before you use it in your pipeline.

Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stage in the pipeline.

Configuring repositories on the orchestration server and candidate systems

If you are using Deployment Manager on-premises, create repositories on the orchestration server and all
candidate systems to move your application between all the systems in the pipeline. You can use a supported
repository type that is provided in Pega Platform, or you can create a custom repository type.

Configuring authentication profiles on the orchestration server and candidate systems
Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate with the
candidate systems.

Configure the default authentication profile by doing these steps:

1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.

a. Log in to the orchestration server with [email protected]/install.

b. In Designer Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.

c. In the Designer Studio header, click the operator ID initials, and then click Operator.

d. On the Edit Operator ID rule form, click the Security tab.

e. Clear the Disable Operator check box.

f. Click Save.

g. Click Update password.

h. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then
click Submit.

i. Clear the Force password change on next login check box if you do not want to change the password for
the DMReleaseAdmin operator ID the next time that you log in.

j. Log out of the orchestration server.

2. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All
candidate systems use this authentication profile to communicate with the orchestration server about the
status of the tasks in the pipeline.

a. Log in to each candidate system with the DMAppAdmin user name and the password that you specified.

b. Click Records > Security > Authentication Profile.

c. Click DMReleaseAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.

3. On each candidate system, which includes the development, QA, staging, and production systems, enable the
DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the
PegaDevOpsFoundation application.

a. Log in to each candidate system with [email protected]/install.

b. In Designer Studio, click Records > Organization > Operator ID, and then click DMAppAdmin.

c. In the Designer Studio header, click the operator ID initials, and then click Operator.

d. On the Edit Operator ID rule form, click the Security tab.

e. Clear the Disable Operator check box.


f. Click Save.

g. Click Update password.

h. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click
Submit.

i. Clear the Force password change on next login check box if you do not want to change the password for the DMAppAdmin operator ID the next time that you log in.

j. Log out of each candidate system.

4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The
orchestration server uses this authentication profile to communicate with candidate systems so that it can run
tasks in the pipeline.

a. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified.

b. Click Records > Security > Authentication Profile.

c. Click DMAppAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.

5. Do one of the following actions:

a. If you are upgrading to Deployment Manager 3.4.x, resume the upgrade procedure from step 2. For more
information, see Upgrading to Deployment Manager 3.4.x.

b. If you are not upgrading, continue the installation procedure. For more information, see Step 3b:
Configuring the orchestration server.

Understanding default operator IDs and authentication profiles

When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs,
and authentication profiles that communicate between the orchestration server and candidate systems are also
installed.

Understanding default operator IDs and authentication profiles


When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and
authentication profiles that communicate between the orchestration server and candidate systems are also
installed.

On the orchestration server, the following items are installed:

The Pega Deployment Manager application.
The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager application. You must enable this operator ID and specify its password.
The DMAppAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems.

On all the candidate systems, the following items are installed:

The PegaDevOpsFoundation application.
The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this operator ID and specify its password.
The DMReleaseAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server.

The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords.

Configuring the orchestration server


The orchestration server is the system on which release managers configure and manage CI/CD pipelines. Configure
it before you use it in your pipeline.

To configure the orchestration server, complete the following tasks:

1. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. Click Records > Integration-Resources > Service Package.

b. Click api.
c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

2. Configure the candidate systems in your pipeline.

For more information, see Configuring candidate systems.

Configuring candidate systems


Configure each system that is used for the development, QA, staging, and production stage in the pipeline.

Do the following steps:

1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Built on application section, click Add application.

c. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.

d. In the Version field, press the Down arrow key and select the version of Deployment Manager that you are
using.

e. Click Save.

2. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. Click Records > Integration-Resources > Service Package.

b. Click api.

c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.

3. To use product rules for your target application, test application, or both, other than the default rules that the New Application wizard creates, create product rules on the development system that define the test application package and the target application package that will be moved through repositories in the pipeline.

For more information, see Product rules: Completing the Create, Save As, or Specialization form.

When you use the New Application wizard, a default product rule is created that has the same name as your
application.

4. Configure repositories through which to move artifacts in your pipeline.

For more information, see Configuring repositories on the orchestration server and candidate systems.

Configuring repositories on the orchestration server and candidate systems
If you are using Deployment Manager on-premises, create repositories on the orchestration server and all candidate
systems to move your application between all the systems in the pipeline. You can use a supported repository type
that is provided in Pega Platform, or you can create a custom repository type.

If you are using Deployment Manager on Pega Cloud Services, default repositories are provided. If you want to use
repositories other than the ones provided, you can create your own.

For more information about creating a supported repository type, see Creating a repository connection.

For more information about creating a custom repository type, see Creating and using custom repository types for
Deployment Manager.

When you create repositories, note the following information:

The Pega repository type is not supported.
Ensure that each repository has the same name on all systems.
When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog
Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must
select the Preemptive authentication check box.

After you configure a pipeline, you can verify that the repository connects to the URL of the development and
production repositories by clicking Test Connectivity on the Repository rule form.

Configuring the development system


After you configure the orchestration server and all your candidate systems, configure additional settings so that
you can create pipelines if you are using branches in a distributed or non-distributed branch-based environment.

To configure the development system, complete the following steps:

1. On the development system (in a nondistributed environment) or the main development system (in a distributed
environment), create a dynamic system setting to define the URL of the orchestration server, even if the
orchestration server and the development system are the same system.

a. Click Create > Records > SysAdmin > Dynamic System Settings.

b. In the Owning Ruleset field, enter Pega-DevOps-Foundation.

c. In the Setting Purpose field, enter RMURL.

d. Click Create and open.

e. On the Settings tab, in the Value field, enter the URL of the orchestration server. Use this format: http://hostname:port/prweb/PRRestService (a sample value follows this step).

f. Click Save.
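For example, assuming the orchestration server runs on a host named orchestrator.example.com on port 8080 (hypothetical values, for illustration only), the RMURL value would be:

http://orchestrator.example.com:8080/prweb/PRRestService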

2. On either the development system (in a non-distributed environment) or the remote development system (in a
distributed environment), use the New Application wizard to create a new development application that
developers will log in to.

This application allows development teams to maintain a list of development branches without modifying the
definition of the target application.

3. On either the development system or remote development system, add the target application of the pipeline as
a built-on application layer of the development application.

a. Log in to the application.

b. In the Designer Studio header, click the name of your application, and then click Definition.

c. In the Built-on application section, click Add application.

d. In the Name field, press the Down arrow key and select the name of the target application.

e. In the Version field, press the Down arrow key and select the target application version.

f. Click Save.

4. On either the development system or remote development system, lock the application rulesets to prevent
developers from making changes to rules after branches have been merged.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.

c. Click Lock and Save.

5. To publish branches to a development system to start a branch merge, configure a Pega repository.

It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a
branch to the remote development system to start a deployment. Publishing a branch when you have multiple
pipelines per application is not supported.

a. On either the development system or remote development system, in Designer Studio, enable Pega
repository types.

For more information, see Enabling the Pega repository type.

b. Create a new Pega repository type. For more information, see Creating a repository connection.

c. In the Host ID field, enter the URL of the development system.

d. Ensure that the default access group of the operator that is configured for the authentication profile of this
repository points to the pipeline application on the development system (in a nondistributed environment)
or source development system (in a distributed environment).

Configuring additional settings (optional)

Configuring email notifications on the orchestration server

You can optionally configure email notifications on the orchestration server. For example, users can receive
emails when pre-merge criteria are not met and the system cannot create a deployment.

Configuring Jenkins

If you are using a Jenkins task in your pipeline, configure Jenkins.

Configuring email notifications on the orchestration server


You can optionally configure email notifications on the orchestration server. For example, users can receive emails
when pre-merge criteria are not met and the system cannot create a deployment.

To configure the orchestration server to send emails, complete the following steps:

1. Use the Email wizard to configure an email account and listener by clicking Designer Studio > Integration > Email > Email Wizard.

This email account sends notifications to users when events occur, for example, if there are merge conflicts.
For detailed information, see the procedure for “Configuring an email account that receives email and creates
or manages work" in Entering email information in the Email wizard.

2. From the What would you like to do? list, select Receive an email and create/manage a work object.

3. From the What is the class of your work type? list, select Pega-Pipeline-CD.

4. From the What is your starting flow name? list, select NewWork.

5. From the What is your organization? list, select the organization that is associated with the work item.

6. In the What Ruleset? field, select the ruleset that contains the generated email service rule.

This ruleset applies to the work class.

7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule.

8. Click Next to configure the email listener.

9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener
references for incoming and outgoing email.

10. In the Email Listener Name field, enter the name of the email listener.

Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens.

11. In the Folder Name field, enter the name of the email folder that the listener monitors.

Typically, this folder is INBOX.

12. In the Service Package field, enter the name of the service package to be deployed.

Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier.

13. In the Service Class field, enter the service class name.

14. In the Requestor User ID field, press the Down arrow key, and select the operator ID of the release manager
operator.

15. In the Requestor Password field, enter the password for the release manager operator.

16. In the Requestor User ID field, enter the operator ID that the email service uses when it runs.

17. In the Password field, enter the password for the operator ID.

18. Click Next to continue the wizard and configure the service package.

For more information, see Configuring the service package in the Email wizard.

19. After you complete the wizard, enable the listener that you created in the Email Wizard.

For more information, see Starting a listener.

Understanding email notifications

Emails are also preconfigured with information about each notification type. For example, when a deployment
failure occurs, the email that is sent provides information, such as the pipeline name and URL of the system on
which the deployment failure occurred.

Understanding email notifications
Emails are also preconfigured with information about each notification type. For example, when a deployment
failure occurs, the email that is sent provides information, such as the pipeline name and URL of the system on
which the deployment failure occurred.

Preconfigured emails are sent in the following scenarios:

Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using
branches, to the operator who started a deployment.
Deployment step failure – If any step in the deployment process is unsuccessful, the deployment pauses. An
email is sent to the release manager and, if you are using branches, to the operator who started the branch
merge.
Deployment step completion – When a step in a deployment process is completed, an email is sent to the
release manager and, if you are using branches, to the operator who started the branch merge.
Deployment completion – When a deployment is successfully completed, an email is sent to the release
manager and, if you are using branches, to the operator who started the branch merge.
Stage completion – When a stage in a deployment process is completed, an email is sent to the release
manager and, if you are using branches, to the operator who started the branch merge.
Stage failure – If a stage fails to be completed, an email is sent to the release manager and, if you are using
branches, to the operator who started the branch merge.
Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to
the user, who can approve or reject the task from the email.
Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are
using branches, to the operator who started the branch merge.
Pega unit testing failure – If a Pega unit test cannot successfully run on a step in the deployment, an email is
sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Pega unit testing success – If a Pega unit test is successfully run on a step in the deployment, an email is sent
to the release manager and, if you are using branches, to the operator who started the branch merge.
Schema changes required – If you do not have the required schema privileges to deploy the changes on
application packages that require those changes, an email is sent to the operator who started the deployment.
Guardrail compliance score failure – If you are using the Check guardrail compliance task, and the compliance
score is less than the score that is specified in the task, an email with the score is sent to the release manager.
Guardrail compliance score success – If you are using the Check guardrail compliance task, and the task is
successful, an email with the score is sent to the release manager.
Approve for production – If you are using the Approve for production task, which requires approval from a user
before application changes are deployed to production, an email is sent to the user. The user can reject or
approve the changes.
Verify security checklist failure – If you are using the Verify security checklist task, which requires that all tasks
be completed in the Application Security Checklist to ensure that the pipeline complies with security best
practices, the release manager receives an email.
Verify security checklist success – If you are using the Verify security checklist task, which requires that all
tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security best
practices, the release manager receives an email.

Configuring Jenkins
If you are using a Jenkins task in your pipeline, configure Jenkins.

Do the following steps:

1. On the orchestration server, create an authentication profile that uses Jenkins credentials.

a. Click Create > Security > Authentication profile.

b. Enter a name, and then click Create and open.

c. In the User name field, enter the user name of the Jenkins user.

d. Click Set password, enter the Jenkins password, and then click Submit.

e. Select the Preemptive authentication check box.

f. Click Save.

2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF) protection, disable CSRF protection in Jenkins by completing the following steps:

a. In Jenkins, click Manage Jenkins.

b. Click Configure Global Security.

c. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box.

d. Click Save.

3. Install the Post build task plug-in.


4. Install the curl command on the Jenkins server.

5. Create a new freestyle project.

6. On the General tab, select the This project is parameterized check box.

7. Add the BuildID and CallBackURL parameters.

a. Click Add parameter, and then select String parameter.

b. In the String field, enter BuildID.

c. Click Add parameter, and then select String parameter.

d. In the String field, enter CallBackURL.

8. In the Build Triggers section, select the Trigger builds remotely check box.

9. In the Authentication Token field, enter the token that you want to use when you start Jenkins jobs remotely. A sample trigger request that uses this token appears after this procedure.

10. In the Build Environment section, select the Use Secret text(s) or file(s) check box.

11. In the Bindings section, do the following actions:

a. Click Add, and then select User name and password (conjoined).

b. In the Variable field, enter RMCREDENTIALS.

c. In the Credentials field, click Specific credentials.

d. Click Add, and then select Jenkins.

e. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager
operator that is configured on the orchestration server.

f. In the Password field, enter the password.

g. Click Save.

12. Configure information in the Post-Build Actions section, depending on your operating system:

If Jenkins is running on Microsoft Windows, go to step 13.
If Jenkins is running on Linux, go to step 14.

13. If Jenkins is running on Microsoft Windows, add the following post-build tasks:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example BUILD FAILURE.

c. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%".

d. Click Add another task.

e. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example BUILD SUCCESS.

f. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"

g. Click Save.

14. If Jenkins is running on Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent
sign (%) to access the environment variables:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example BUILD FAILURE.

c. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

d. Click Add another task.

e. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example BUILD SUCCESS.

f. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

g. Click Save.
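To see how these pieces fit together, the following sketch shows a remote trigger request of the kind that starts this parameterized job, followed by the JSON body that the post-build script posts back to the orchestration server. The host names, job name, token, credentials, and parameter values are hypothetical placeholders, not values defined by Deployment Manager:

# Hypothetical remote trigger of the parameterized freestyle job, using the
# Jenkins "Trigger builds remotely" endpoint; jenkins.example.com, MyDeployJob,
# MYTOKEN, and B-1001 are placeholder values.
curl -X POST --user jenkinsuser:password \
  "http://jenkins.example.com:8080/job/MyDeployJob/buildWithParameters?token=MYTOKEN&BuildID=B-1001&CallBackURL=http%3A%2F%2Forchestrator%3A8080%2Fprweb%2FPRRestService"

Expanded for readability, the JSON that the post-build script sends to the CallBackURL resolves to the following structure:

{
  "jobName": "MyDeployJob",
  "buildNumber": "42",
  "pyStatusValue": "SUCCESS",
  "pyID": "B-1001"
}

Here, jobName and buildNumber come from the JOB_NAME and BUILD_NUMBER Jenkins environment variables, pyStatusValue is FAIL or SUCCESS depending on which post-build task ran, and pyID echoes the BuildID parameter that was passed when the job was triggered.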

Configuring and running pipelines with Deployment Manager 3.4.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks so that you can quickly deploy high-quality software to production.

On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their
Pega Platform applications. The landing page displays all the running and queued application deployments,
branches that are to be merged, and reports that provide information about your DevOps environment such as key
performance indicators (KPIs).

These topics describe the features for the latest version of Deployment Manager 3.4.x.

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery
workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you
can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

Manually starting a deployment

You can start a deployment manually if you are not using branches and are working directly in rulesets. You
can also start a deployment manually if you do not want deployments to start automatically when branches are
merged.

Starting a deployment in a distributed, branch-based environment

If you are using Deployment Manager in a distributed, branch-based environment and using multiple pipelines
per application, first export the branch to the source development system, and then merge it.

Completing or rejecting a manual step

If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either
complete it or reject it. For example, if a user was assigned a task and completed it, you can complete the task
to continue the deployment. Deployment Manager also sends you an email when there is a manual step in the
pipeline. You can complete or reject a step either within the pipeline or through email.

Managing aged updates

You can manage aged updates in a number of ways such as importing them, skipping the import, or manually
deploying applications. Managing aged updates gives you more flexibility in how you deploy application
changes.

Configuring settings to automatically apply schema changes

You can configure settings to automatically deploy schema changes that are in an application package that is
to be deployed on candidate systems. Configure these settings so that you do not have to apply schema
changes if you do not have the privileges to deploy them.

Pausing a deployment

When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at
the next step.

Stopping a deployment

Stop a deployment to discontinue processing.

Performing actions on a deployment that has errors

If a deployment has errors, the pipeline stops processing on it. You can perform actions on it, such as rolling
back the deployment or skipping the step on which the error occurred.

Viewing branch status

You can view the status of all the branches that are in your pipeline. For example, you can see whether a
branch was merged in a deployment and when it was merged.

Viewing deployment logs

View logs for a deployment to see the completion status of operations, for example, when a data simulation is
moved to the simulation environment. You can change the logging level to control which events are displayed
in the log.

Viewing deployment reports for a specific deployment

Deployment reports provide information about a specific deployment. You can view information such as the
number of tasks that you configured on a deployment that have been completed and when each task started
and ended.

Viewing reports for all deployments

Reports provide a variety of information about all the deployments in your pipeline. For example, you can view
the frequency of new deployments to production.

Deleting an application pipeline

When you delete a pipeline, its associated application packages are not removed from the repositories that the
pipeline is configured to use.

Viewing, downloading, and deleting application packages

You can view, download, and delete application packages in repositories that are on the orchestration server. If
you are using Deployment Manager on Pega Cloud Services, application packages that you have deployed to
cloud repositories are stored on Pega Cloud Services. To manage your cloud storage space, you can download
and permanently delete the packages.

Configuring an application pipeline


When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery
workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can
specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in
the following scenarios:

To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline
that has only a production stage or development and production stages.
To use parallel development and hotfix life cycles for your application.

Adding a pipeline on Pega Cloud Services

If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application name
and version for the pipeline. Many fields are populated by default, such as the URL of your development system
and product rule name and version.

Adding a pipeline on-premises

When you add a pipeline on premises, you define all the stages and tasks that you want to do on each system.
For example, if you are using branches, you can start a build when a branch is merged. If you are using a QA
system, you can run test tasks to validate application data.

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that
moves through the pipeline.

Modifying URLs and authentication profiles

You can modify the URLs of your development and candidate systems and the authentication profiles that are
used to communicate between those systems and the orchestration server.

Modifying repositories

You can modify the development and production repositories through which the product rule that contains
application contents moves through the pipeline. All the generated artifacts are archived in the Development
repository, and all the production-ready artifacts are archived in the Production repository.

Configuring Jenkins server information

If you are using a Jenkins step, specify details about the Jenkins server such as its URL.

Specifying merge options for branches

If you are using branches in your application, specify options for merging branches into the base application.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can
skip a stage or add tasks such as Pega unit testing to be done on the QA stage.

Understanding distributed development for an application


When you use continuous integration and delivery (CI/CD) workflows, you set up the systems in your environment
based on your workflow requirements. For example, if only one team is developing an application, you can use a
single system for application development and branch merging.

However, you can use a distributed development environment if multiple teams are simultaneously developing an
application. A distributed development environment can comprise multiple development systems, on which
developers author and test the application. Developers then migrate their changes to a source development
system, where the changes are merged, packaged, and moved through the CI/CD workflow.

When you configure a distributed development environment, ensure that you are following best practices for
development and version control.

For more information about development best practices, see Understanding best practices for DevOps-based
development workflows.
For more information about versioning best practices, see Understanding best practices for version control in
the DevOps pipeline.

Understanding the benefits of distributed development

Distributed development environments offer a number of benefits when multiple development teams are
working on the same application. For example, each development team can continue to work on its own Pega
Platform server even if other team servers or the source development system are unavailable.

Understanding the components of a distributed development environment

Distributed development consists of several systems, including remote development systems, the source
development system, and an automation server.

Developing applications, merging branches, and deploying changes in a distributed development environment

When you work in a distributed development environment, you generally work in branches and merge them to
incorporate changes into the base application. The implementation of some of your tasks depends on your
specific configuration, such as which automation server you are using.

Understanding the benefits of distributed development


Distributed development environments offer a number of benefits when multiple development teams are working
on the same application. For example, each development team can continue to work on its own Pega Platform
server even if other team servers or the source development system are unavailable.

With distributed development, you can accomplish the following:

Reduce disruption across the development organization.

Each development team can do system-wide configuration and maintenance on its own Pega Platform server
without affecting other team systems.

Increase overall productivity.

Because each team works on its own remote development system, teams can continue working even if system or
application issues occur on the source development system or on another team server.

Ensure higher quality change management.

A distributed development setup helps to insulate the source development system from changes introduced by
developers. Distributed development also reduces or eliminates the creation of unnecessary rules or data
instances that application testing generates.

Reduce latency for geographically distributed teams.

Teams can use co-located development servers with reduced latency, which also increases productivity.

Reduce the need for coordination across teams when introducing changes and packaging the final application.

Distributed development simplifies the application packaging process, because developers package the
application on the development source system, which includes all the latest application rulesets to be
packaged.

Capture application changes.

If you use an automation server such as Deployment Manager, when you merge changes on the source
development system, you can audit application updates.

Understanding the components of a distributed development environment

Distributed development consists of several systems, including remote development systems, the source
development system, and an automation server.

The distributed development environment comprises systems that perform the following roles:

Remote development systems – the systems on which development work takes place, typically in branches.
Each team usually uses one Pega Platform server on each system.

Development teams can use tools such as container management or provisioning scripts to quickly start up
remote development systems.

Source development system – the Pega Platform server that stores the base application, which contains only
the latest production changes. It is also the system from which the application is packaged. You merge
branches on this system from remote development systems.

You should maintain high availability and have a reliable backup and restore strategy for the source
development system.

Automation server – the server that automates continuous integration or continuous delivery jobs that are part
of an application lifecycle, such as automated testing, application packaging, task approval, and deployment.

You can use a number of tools as the automation server, such as Deployment Manager, Jenkins, or Bamboo.

While an automation server is not a requirement, it is recommended that you use one, because it reduces the
manual steps that you need to do in a DevOps workflow.

Developing applications, merging branches, and deploying changes in a distributed development environment

When you work in a distributed development environment, you generally work in branches and merge them to
incorporate changes into the base application. The implementation of some of your tasks depends on your specific
configuration, such as which automation server you are using.

In general, working in a distributed development environment consists of the following tasks and methods:

1. On the remote development system, build a team application layer that is built on top of the main production
application. The team application layer contains branches, tests, and other development rulesets that do not
go into the production application. For more information, see the Pega Community article Using multiple
built-on applications.

2. Lock the application ruleset by performing the following steps:

a. In the header of Dev Studio, click the name of your application, and then click Definition.

b. In the Edit Application rule form, in the Application rulesets section, click the Open icon for the ruleset
that you want to lock.

c. On the Edit Ruleset rule form, click Lock and Save.

d. In the Lock Ruleset Version dialog box, in the Password field, enter the password that locks the ruleset.

e. In the Confirm Password field, reenter the password to confirm it.

f. Click Submit.

g. Save the Edit Ruleset rule form.

h. Save the Edit Application rule form.

3. Create a branch of your production ruleset in the team application. For more information, see Adding branches
to your application.

4. Work in branches on remote development systems.

5. Use release toggles to disable features that are not available for general use. For more information, see
Toggling features on and off.

6. Create a review so that other developers can review branch content. For more information, see Creating a
branch review.

7. Conduct developer reviews to review the content and quality of the branch. For more information, see
Reviewing branches.

8. Lock the branch. For more information, see Locking a branch.

9. Migrate branches to the source development system and then merge and validate the branches. Depending on
your configuration, you can either do both steps at the same time or separately. Do one of the following tasks:

a. To migrate and merge branches at the same time, perform step 10.

b. To migrate and merge branches separately, perform steps 11 through 13.


10. To migrate and merge branches at the same time, do one of the following actions:

Use Deployment Manager to create pipelines and start a deployment. For more information, see Migrating
and merging branches by using Deployment Manager.
Configure third-party automation servers to automatically merge branches after you publish branches to
the source development system. For more information, see Migrating and merging branches with third-
party automation servers.

11. To migrate a branch and then separately merge and validate the branch, migrate branches to the source
development system by doing one of the following tasks:

Publish a branch to the source development system. For more information, see Publishing a branch to a
repository.
Use prpcUtils to automatically package and migrate the application. For more information, see
Automatically deploying applications with prpcUtils and Jenkins.
Manually migrate the application package by packaging and exporting it. For more information, see
Exporting a branch to the source development system.

12. Merge and validate branches by using the Merge Branches wizard. For more information, see Merging branches
into target rulesets.

13. Migrate the merged rules back to the remote development systems by doing one of the following tasks:

Rebase the development application to obtain the latest ruleset versions from the source development
system. For more information, see Understanding rule rebasing.
Use prpcServiceUtils to export a product archive of your application and import it to the remote
development systems. For more information, see Automatically deploying applications with prpcUtils and
Jenkins.
Manually migrate the application by exporting it from the development source system and then importing
it into the remote development system. For information, see Importing a branch into remote development
systems after merging.

Migrating and merging branches by using Deployment Manager

If you are using Deployment Manager as your automation server, you can use it to merge branches on the
source development system. You must configure certain settings on the source development system before
you can create pipelines that model pre-merge criteria and can merge branches.

Migrating and merging branches with third-party automation servers

If you are using a third-party automation server such as Jenkins, you can automatically start a branch merge
after you publish the branch to the development source system.

Publishing a branch to the source development system

You can migrate a branch to the source development system by publishing a branch to it through a Pega
repository.

Exporting a branch to the source development system

In a distributed development environment, developers migrate branches to a source development system on
which they then merge the branches. You can manually migrate a branch to the source development system
by packaging the branch on your remote development system and then exporting it to the source development
system.

Importing a branch into remote development systems after merging

After you merge branches on the source development system, manually migrate the merged branches back to
the remote development system by packaging and then importing it.

Migrating and merging branches by using Deployment Manager


If you are using Deployment Manager as your automation server, you can use it to merge branches on the source
development system. You must configure certain settings on the source development system before you can create
pipelines that model pre-merge criteria and can merge branches.

Do the following tasks to configure Deployment Manager to merge branches on the source development system:

1. Configure the source development system so that you can merge branches on it. For more information, see
Configuring the development system for branch-based development.

2. Create a pipeline for your application, which includes modeling pre-merge criteria, such as adding a task that
developers must complete a branch review before merging branches. For more information, see Configuring an
application pipeline.

3. Start a deployment by doing one of the following tasks:

Submit an application into the Merge Branches wizard. For more information, see Starting a deployment
as you merge branches from the development environment.
Publish application changes in App Studio. For more information, see Publishing application changes in
App Studio.

Migrating and merging branches with third-party automation servers


If you are using a third-party automation server such as Jenkins, you can automatically start a branch merge after
you publish the branch to the development source system.

To publish a branch and automatically start a merge, do the following tasks:

1. Create a Pega repository connection between the remote development system and the development source
system. For more information, see Adding a Pega repository.

2. Configure the pyPostPutArtifactSuccess activity to automatically merge branches after you publish them to the
development source system. For more information, see Configuring the pyPostPutArtifactSuccess activity.

Ensure that you add and configure a step with the Call pxImportArchive method to import the application
package after you publish it to the source development system. If you do not, the package is only copied to the
service export directory.

3. Publish the branch to the source development system through the Pega repository. For more information, see
Publishing a branch to a repository.

Publishing a branch to the source development system


You can migrate a branch to the source development system by publishing a branch to it through a Pega repository.

To automatically merge the branch after publishing it, follow the procedure in Migrating and merging branches with
third-party automation servers.

1. Create a Pega repository connection between the remote development system and the source development
system. For more information, see Adding a Pega repository.

2. Publish the branch to the source development system through the Pega repository. For more information, see
Publishing a branch to a repository.

Exporting a branch to the source development system


In a distributed development environment, developers migrate branches to a source development system on which
they then merge the branches. You can manually migrate a branch to the source development system by packaging
the branch on your remote development system and then exporting it to the source development system.

To migrate a branch to the source development system, do the following tasks:

1. On the remote development system, package the branch. For more information, see Packaging a branch.

2. On the source development system, import the application package by using the Import wizard. For more
information, see Importing rules and data from a product rule by using the Import wizard.

Importing a branch into remote development systems after merging


After you merge branches on the source development system, manually migrate the merged branches back to the
remote development system by packaging and then importing it.

To migrate a branch back to the remote development system, do the following tasks:

1. On the source development system, package the branch. For more information, see Packaging a branch.

2. On the remote development systems, import the application package by using the Import wizard. For more
information, see Importing rules and data from a product rule by using the Import wizard.

Understanding continuous integration and delivery pipelines with third-party automation servers

Use DevOps practices such as continuous integration and continuous delivery to quickly move application changes
from development, through testing, and to deployment. Use Pega Platform tools and common third-party tools to
implement DevOps.

You can set up a continuous integration and delivery (CI/CD) pipeline that uses a Pega repository in which you can
store and test software and a third-party automation server such as Jenkins that starts jobs and performs operations
on your software. Use a CI/CD pipeline to quickly detect and resolve issues before deploying your application to a
production environment.

For example, you can configure an automation server with REST services to automatically merge branches after you
publish them to a Pega repository. You can also configure Jenkins to create branch reviews, run PegaUnit tests, and
return the status of a merge.

Using branches with Pega repositories in a continuous integration and delivery pipeline

When you work in a continuous integration and development environment, you can configure a Pega repository
on a source development system to store and test software. You publish branches to repositories to store and
test them. You can also configure a pipeline with REST services on your automation server to perform branch
operations, such as detecting conflicts, merging branches, and creating branch reviews, immediately after you
push a branch to the repository.

Remotely starting automation jobs to perform branch operations and run unit tests

In a continuous integration and delivery (CI/CD) workflow, repositories provide centralized storage for software
that is to be tested, released, or deployed. You can start a job remotely from an automation server, such as
Jenkins, and use the branches REST and merges REST services to merge branches when you push them from
your development system to a Pega repository on a source development system.

Implementing a CI/CD pipeline with repository APIs

After you have configured an automation server and system of record (SOR) so that you can remotely start jobs
on the automation server, you can implement a continuous integration and development pipeline with the
branches REST and merges REST services. These services detect potential conflicts before a merge, merge
rules in a branch, obtain the status of the merge, and create branch reviews. By remotely starting jobs that
automatically perform branch operations, your organization can deliver higher-quality software more quickly.

Using branches with Pega repositories in a continuous integration and delivery pipeline

When you work in a continuous integration and development environment, you can configure a Pega repository on a
source development system to store and test software. You publish branches to repositories to store and test them.
You can also configure a pipeline with REST services on your automation server to perform branch operations, such
as detecting conflicts, merging branches, and creating branch reviews, immediately after you push a branch to the
repository.

To use branches with Pega repositories, you must perform the following tasks:

1. In Dev Studio, enable the Pega repository type. For more information, see Enabling the Pega repository type.

2. Create a Pega type repository. For more information, see Adding a Pega repository.

3. On the source development system, create a development application that is built on all the applications that
will go into production. You must also create a ruleset in the development application that contains all the rules
that you are using for continuous integration.

For example, if you have a production application MyCoApp with the rulesets MyCo:01-01 and MyCoInt:01-01,
you can create a MyCoDevApp development application that is built on MyCoApp and has only one ruleset,
MyCoCIDev:01-01. This ruleset contains the data transforms that are needed to set default information, such as
the application into which branches will be merged.

You can use the branches REST and merges REST services in your pipeline to perform branch operations. The
branches REST service provides subresources that you can use to detect conflicts, merge branches, and create
branch reviews. You must configure certain settings on the source development system so that you can use
the branches REST service. Complete steps 4 through 6.

4. Specify the application name and version that you want to use for conflict detection and merging:

a. Search for the pySetApplicationDefaults data transform.

b. Save the data transform to the ruleset in your development application that contains the continuous
integration rules.

c. In the Source field for the Param.ApplicationName parameter, enter the name of the application that you
want to use for conflict detection and merging.

d. In the Source field for the Param.ApplicationVersion parameter, enter the application version.

e. Save the rule form.

5. Set the target ruleset version that you want to use for conflict detection and merging.

If you do not perform this step, a new ruleset version is created into which rules are merged. Complete the
following steps:

a. Search for the pySetVersionDefaults data transform.

b. Save the data transform to the ruleset in your development application that contains the continuous
integration rules.

c. In the Source field for the pyTargetRuleSetVersion parameter, enter the ruleset version into which you
want to merge.
d. Save the rule form.

6. Set passwords that are needed during merge operations.

As a best practice, lock these rulesets with a password. Complete the following steps:

a. Search for the pySetVersionPasswordDefaults data transform.

b. Save the data transform to the ruleset in your development application that contains the continuous
integration rules.

c. Specify the passwords that are required for merging.

d. Save the rule form.

7. Configure a continuous integration and development pipeline so that your automation server, such as Jenkins,
starts a job immediately after you push a branch to the source development system.

Use the branches REST and merges REST services in the pipeline to perform branch operations, such as
detecting conflicts and merging branches. For more information, see Remotely starting automation jobs to
perform branch operations and run unit tests.

Remotely starting automation jobs to perform branch operations and run unit tests

In a continuous integration and delivery (CI/CD) workflow, repositories provide centralized storage for software that
is to be tested, released, or deployed. You can start a job remotely from an automation server, such as Jenkins, and
use the branches REST and merges REST services to merge branches when you push them from your development
system to a Pega repository on a source development system.

Pega Platform can communicate with common repository technologies and also can act as a binary repository. Pega
Platform can browse, publish, or fetch artifacts that are created whenever an action creates a RAP file: for example,
exporting an application, product, branch, or component into a remote system of record. By starting jobs remotely
and using the automation server to detect conflicts and merge branches, your organization can deliver higher-
quality software more quickly.

For more information about using branches with repositories, see Using branches with Pega repositories in a
continuous integration and delivery pipeline.

After you push a branch to a system of record, your automation server tool runs a job. Your pipeline can detect
conflicts before a merge. If there are conflicts, the merge does not proceed. If there are no conflicts, the merge
proceeds on the development source system. Your pipeline can run all unit test cases or a test suite to validate the
quality of your build.

After a merge is completed, you can rebase the rules on your development system to import the most recently
committed rules from your system of record. For more information, see Understanding rule rebasing. In addition,
you can configure your pipeline to send emails to users, such as when a job starts or when a conflict is detected.

The following figure displays an example workflow of the pipeline:

[Figure: Workflow of a continuous integration pipeline on a development source system]


Configuring your automation server

Configure your automation server so that you can remotely start jobs on it. Your configuration depends on the
automation server that you use.

Defining the automation server URL

Configure a dynamic system setting on the main development system to define your automation server URL.
Your configuration depends on the automation server that you use.

Configuring the pyPostPutArtifactSuccess activity

If you are using Jenkins, configure the pyPostPutArtifactSuccess activity on your source development system to
create a job after a branch is published on the system of record. If you are using other automation servers,
create and call a connector rule that is supported by your continuous integration tool.

Configuring a continuous integration and delivery pipeline

After you configure your automation server and your source development system, you can configure a pipeline
on your job to automate the testing and merging of rules. Actions that you can perform include retrieving
merge conflicts, creating branch reviews, and running unit tests.

Configuring your automation server


Configure your automation server so that you can remotely start jobs on it. Your configuration depends on the
automation server that you use.

For example, do the following steps to configure Jenkins:

1. Open a web browser and navigate to the location of the Jenkins server.

2. Install the Build Authorization Token Root Plugin.

a. Click Manage Jenkins.

b. Click Manage Plugins.

c. On the Available tab, select the Build Authorization Token Root Plugin check box.

d. Specify whether to install the plug-in without restarting Jenkins or download the plug-in and install it after
restarting Jenkins.

3. Configure your Jenkins job to use parameters.

a. Open the job and click Configure.

b. On the General tab, select the This project is parameterized check box.

c. Click Add Parameter, and then click String Parameter.

d. In the Name field, enter notificationSendToID, which is the operator ID of the user who started the Jenkins job.

Email notifications about the job are sent to the email address that is associated with the user ID.

e. Click Add Parameter, and then click String Parameter.

f. In the Name field, enter branchName.

g. Click Save.

4. Configure the build trigger for your job.

a. Click Configure.

b. On the General tab, in the Build Triggers section, select the Trigger builds remotely (e.g., from scripts)
check box.

c. In the Authentication Token field, enter an authentication token, which can be any string.

d. Click Save.
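
With this configuration in place, any HTTP client can queue the job through the buildByToken endpoint that the
Build Authorization Token Root Plugin exposes. The following Python sketch shows such a trigger; it is
illustrative only, and the server address, job name, token, and parameter values are assumptions, while the
branchName and notificationSendToID parameter names match the string parameters that you defined in step 3:

    import requests

    # Assumed values; replace with your Jenkins server, job name, and token.
    JENKINS_ENDPOINT = "http://myJenkinsServerURL/buildByToken/buildWithParameters"

    params = {
        "job": "MergeBranchJob",         # hypothetical name of the parameterized job
        "token": "myAuthToken",          # the authentication token from step 4
        "branchName": "MyBranch",        # string parameter defined in step 3
        "notificationSendToID": "jdoe",  # operator whose email address receives notifications
    }

    # The plug-in queues the job; raise_for_status() surfaces any HTTP error.
    response = requests.get(JENKINS_ENDPOINT, params=params, timeout=30)
    response.raise_for_status()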

Defining the automation server URL


Configure a dynamic system setting on the main development system to define your automation server URL. Your
configuration depends on the automation server that you use.

For example, do the following steps if you are using Jenkins:

1. In the header of Dev Studio, click Create > SysAdmin > Dynamic System Settings.


2. Enter a description in the Short description field.

3. In the Owning Ruleset field, enter Pega-API.

4. In the Setting Purpose field, enter JenkinsURL.

5. Click Create and open.

6. On the Settings tab, in the Value field, enter http://myJenkinsServerURL/buildByToken/buildWithParameters.

7. Click Save.

Configuring the pyPostPutArtifactSuccess activity


If you are using Jenkins, configure the pyPostPutArtifactSuccess activity on your source development system to
create a job after a branch is published on the system of record. If you are using other automation servers, create
and call a connector rule that is supported by your continuous integration tool.

To configure the activity, complete the following steps:

1. In the navigation pane of Dev Studio, click App.

2. In the search field, enter Pega-RepositoryManagement.

3. Expand Technical > Activity.

4. Click pyPostPutArtifactSuccess.

5. Save the activity to your application ruleset.

6. On the Steps tab, in the Method field, enter Call pxImportArchive.

7. Expand the arrow to the left of the Method field.

8. Select the Pass current parameter page check box to import the archive that was published to the main
development system. If there are errors during import, you can exit the activity.

9. Ensure that the session authenticated by the Pega Repository Service Package has access to the ruleset that
contains the pyPostPutArtifactSuccess activity.

For more information about configuring authentication on service packages, see Service Package form –
Completing the Context tab.

10. Define the page and its class.

a. Click the Pages & Classes tab.

b. In the Page name field, enter a name for the page.

c. In the Class field, enter Pega-API-CI-AutomationServer.

11. Click the Steps tab.

12. Add a step to create the new page on the clipboard.

a. In the Method field, press the Down arrow key and click Property-Set.

b. In the Step page field, enter the name of the page that you entered on the Pages & Classes tab.

13. Configure the parameters to pass to the pzTriggerJenkinsJob activity.

a. Click Add a step.

b. In the Method field, press the Down arrow key and click Property-Set.

c. Click the arrow to the left of the Method field to open the Method Parameters section.

d. In the PropertiesName field, enter Param.Job.

e. In the PropertiesValue field, enter the name of your project.

f. Click the plus sign.

g. In the PropertiesName field, enter Param.Token.

h. In the PropertiesValue field, enter the authentication token that you provided for your project.

i. Click the plus sign.

j. In the PropertiesName field, enter Param.BranchName.


k. In the PropertiesValue field, enter @whatComesBeforeFirst(Param.ArtifactName,'_').

l. To specify a different URL from the JenkinsURL dynamic system setting that you created in Defining the
automation server URL, click the Plus sign icon.

m. In the PropertiesName field, enter Param.OverrideEndPointURL.

n. In the PropertiesValue field, enter the endpoint URL.

o. To send notifications to users if you are calling the activity in a context where there is no operator ID
page, click the plus sign.

p. In the PropertiesName field, enter Param.OverrideNotificationSendToID.

q. In the PropertiesValue field, enter Param.PutArtifactOperatorID.

14. Add a step to call the pzTriggerJenkinsJob activity.

a. Click Add a step.

b. In the Method field, enter Call pzTriggerJenkinsJob.

c. In the Step page field, enter the name of the page.

d. Click the arrow to the left of the Method field to expand it.

e. Select the Pass current parameter page check box.

15. Configure other activity settings, as appropriate. For more information, see Activities.

16. Save the rule form.
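
Taken together, steps 10 through 14 make the activity derive a branch name from the published artifact and
trigger the Jenkins job. The following Python sketch only approximates that run-time behavior for clarity; on
Pega Platform these steps are activity methods, not Python, and the endpoint, job name, and token shown here
are assumptions:

    import requests

    def on_put_artifact_success(artifact_name: str, operator_id: str) -> None:
        # pxImportArchive (step 6 above) imports the published branch archive;
        # it is a platform method, so it is represented only by this comment.
        params = {
            "job": "MergeBranchJob",   # Param.Job: hypothetical project name
            "token": "myAuthToken",    # Param.Token: the job's authentication token
            # @whatComesBeforeFirst(Param.ArtifactName,'_') resolves to everything
            # before the first underscore in the artifact name.
            "branchName": artifact_name.split("_", 1)[0],  # Param.BranchName
            "notificationSendToID": operator_id,  # Param.OverrideNotificationSendToID
        }
        # pzTriggerJenkinsJob (step 14 above) sends these parameters to the URL in
        # the JenkinsURL dynamic system setting, or to Param.OverrideEndPointURL.
        requests.get("http://myJenkinsServerURL/buildByToken/buildWithParameters",
                     params=params, timeout=30)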

Configuring a continuous integration and delivery pipeline


After you configure your automation server and your source development system, you can configure a pipeline on
your job to automate the testing and merging of rules. Actions that you can perform include retrieving merge
conflicts, creating branch reviews, and running unit tests.

You can do the following actions:

Send a notification with the job URL to the user who published the branch or started the job.
Call the branches REST service with GET /branches/{ID}/conflicts to obtain a list of conflicts. If there are no
conflicts, you can continue the job; otherwise, you can end the job and send a notification to the user to
indicate that the job failed.
Use the merges subresource for the branches REST service to merge branches.
Call the merges REST service with GET /branches/{ID}/merge to obtain the status of a merge.
Use the reviews subresource for the branches REST service to create a branch review.
Use the Execute Tests service to run unit test cases or test suites. For more information, see Running test
cases and suites with the Execute Tests service.
Set up Jenkins to poll the merge, using the unique ID that the branches service returned when you merged the
branch, until the status is no longer set to Processing. If the merge is successful, you can continue the job;
otherwise, you can send a notification to the user to indicate that the job failed.
Publish the rulesets into which the branches were merged to a repository such as JFrog Artifactory.
Notify the user that the job is complete.

For more information about the branches REST and merges REST services, see Implementing a CI/CD pipeline with
repository APIs.
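
As an illustration, the following Python sketch chains several of these actions together. Treat it as a sketch
under stated assumptions: the /prweb/api/v1 base path, the credentials, the POST verb on the merge subresource,
the GET /merges/{ID} status path, and the JSON field names are assumptions, while the /branches/{ID}/conflicts
path and the Processing status are quoted from this topic:

    import time
    import requests

    # Assumed values; replace with your source development system and operator.
    BASE = "https://mySourceDevSystem/prweb/api/v1"  # assumed Pega API base path
    AUTH = ("ciOperator", "password")
    BRANCH = "MyBranch"

    # 1. Retrieve the list of conflicts (GET /branches/{ID}/conflicts).
    conflicts = requests.get(f"{BASE}/branches/{BRANCH}/conflicts", auth=AUTH).json()
    if conflicts.get("conflictsCount", 0) > 0:       # field name is an assumption
        raise SystemExit("Job failed: conflicts detected; notify the user")

    # 2. Merge the branch through the merges subresource of the branches REST
    #    service (the POST verb is an assumption).
    merge = requests.post(f"{BASE}/branches/{BRANCH}/merge", auth=AUTH).json()
    merge_id = merge["ID"]                           # unique ID returned by the service

    # 3. Poll until the merge status is no longer "Processing"
    #    (the /merges/{ID} path is an assumption).
    while True:
        status = requests.get(f"{BASE}/merges/{merge_id}", auth=AUTH).json()
        if status.get("Status", "Processing") != "Processing":
            break
        time.sleep(10)

    print("Merge finished with status:", status.get("Status"))
    # On success, run unit tests with the Execute Tests service, publish the merged
    # rulesets to a repository such as JFrog Artifactory, and notify the user.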

Implementing a CI/CD pipeline with repository APIs


After you have configured an automation server and system of record (SOR) so that you can remotely start jobs on
the automation server, you can implement a continuous integration and development pipeline with the branches
REST and merges REST services. These services detect potential conflicts before a merge, merge rules in a branch,
obtain the status of the merge, and create branch reviews. By remotely starting jobs that automatically perform
branch operations, your organization can deliver higher-quality software more quickly.

To access the documentation about the data model, click Resources > API.

For more information about response codes, see Pega API HTTP status codes and errors.

Understanding the branches REST service

Use the branches REST service to retrieve a list of conflicts before you run tests and merge branches, so that
you can perform additional checks on any conflicts before the merge operation. You can also create branch reviews.

Understanding the merges REST service

Understanding the branches REST service


Use the branches REST service to retrieve a list of conflicts before you run tests and merge branches, so that you
can perform additional checks on any conflicts before the merge operation. You can also create branch reviews.

Understanding the conflicts subresource
Understanding the merge subresource
Understanding the review subresource
