
DevOps

DevOps is a set of practices that bridges application development and operations to reduce time to
market without compromising quality or operational effectiveness. It allows application developers and
business owners to respond quickly to customer needs, develop a quicker feedback cycle, and ultimately achieve
business value faster.

DevOps encourages a culture of collaboration between development, quality, and operations teams to reduce or
eliminate barriers through fundamental practices such as continuous integration, continuous delivery, and
continuous deployment. Adopting these practices and the tools to support them creates a standardized
deployment process so that you can deploy predictable, high-quality releases.

Pega Platform provides the tools necessary to support continuous integration, delivery, and deployment through
Deployment Manager, which provides a low-code, model-driven experience to configure and run continuous
integration and delivery (CI/CD) workflows or deployment pipelines for your application. Deployment Manager
provides out-of-the-box tools to enforce best CI/CD practices for your application. You can fully automate
deployment pipelines, starting with automated integration of developer changes through branch merging and
validation, application packaging, artifact repository management, deployments, test execution, guardrail
compliance, and test coverage enforcement.

Pega Platform also includes support for open DevOps integration using popular third party tools such as Jenkins
and Microsoft Azure DevOps by providing an open platform, with all the necessary hooks and services. With open
DevOps integration, you can build a deployment pipeline using third-party tools to automate branch merging,
application packaging and deployment, test execution, and quality metric enforcement.

For more information about configuring DevOps workflows, see the following topics:

Understanding best practices for DevOps-based development workflows

In a DevOps workflow, the most important best practice for application developers to adopt is continuous
integration. Continuous integration is the process by which development changes to an application are
integrated as frequently as possible, at least once a day and preferably multiple times a day, every time
developers complete a meaningful unit of work.

Understanding the DevOps release pipeline

Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development through testing to deployment on your production system. Use Pega Platform
tools and common third-party tools to implement DevOps.

Understanding best practices for version control in the DevOps pipeline

Change the application version number each time you deploy changes to a production system. As a best
practice, use semantic versioning, because it offers a logical set of rules about when to increase each
version number.

Understanding continuous integration and delivery pipelines

DevOps is a culture of collaboration by development, quality, and operations teams to address issues in their
respective areas. To sustain progress and bring continued improvement, tools and processes are put in
place. Use DevOps practices such as continuous integration and delivery (CI/CD) pipelines to break down
code into pieces and automate testing tasks, so that multiple teams can work on the same features and
achieve faster deployment to production.

Installing and enabling the Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 3

To create a connection between Pega Platform or Deployment Manager and Nexus Repository Manager 3,
use the Sonatype Nexus Repository component. Use this repository for centralized storage, versioning, and
metadata support for your application artifacts.

Installing and enabling Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 2

Create a connection between Pega Platform or Deployment Manager and Sonatype Nexus Repository
Manager 2 with the Sonatype Nexus Repository component. Use this repository for centralized storage,
versioning, and metadata support for your application artifacts.

Automatically deploying applications with prpcUtils and Jenkins


You can use Jenkins to automate exporting and importing Pega Platform applications. Download the
prpcServiceUtils command-line tool and configure Jenkins to export or import archives. You can use a single
Jenkins build job to both export and import an application archive, or you can create separate jobs for each
task.

Migrating application changes

With minimal disruption, you can safely migrate your application changes throughout the application
development life cycle, from development to deployment on your staging and production environments. In
the event of any issues, you can roll back the deployment and restore your system to a state that was
previously known to be working.

Deploying application changes to your staging or production environment

As part of the Standard Release process, after you set up and package a release on your shared
development environment, you can deploy your application changes to your staging or production
environment.

Packaging a release on your development environment

As part of the Standard Release process for migrating your application changes from development to
production, you set up and package the release on your shared development environment.

Understanding application release changes, types, and processes

The following tables provide information about the types of changes that you can make within a release, the
release types, and the release management process to follow based on the types of changes that you want
to deploy.

Testing applications in the DevOps pipeline

Having an effective automation test suite for your application in your continuous delivery DevOps pipeline
ensures that the features and changes that you deliver to your customers are of high-quality and do not
introduce regressions.

Understanding model-driven DevOps with Deployment Manager

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for
your Pega applications from within Pega Platform. You can create a standardized deployment process so
that you can deploy predictable, high-quality releases without using third-party tools.

Understanding distributed development for an application

When you use continuous integration and delivery (CI/CD) workflows, you set up the systems in your
environment based on your workflow requirements. For example, if only one team is developing an
application, you can use a single system for application development and branch merging.

Understanding continuous integration and delivery pipelines with third party automation servers

Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development, through testing, and to deployment. Use Pega Platform tools and common third-
party tools to implement DevOps.

Understanding best practices for DevOps-based development workflows

In a DevOps workflow, the most important best practice for application developers to adopt is continuous
integration. Continuous integration is the process by which development changes to an application are integrated
as frequently as possible, at least once a day and preferably multiple times a day, every time developers
complete a meaningful unit of work.

To enforce best practices when developing an application and to ensure that application changes are of high
quality, developers should use Pega Platform features such as branches. Before merging branches and
integrating changes, developers should also verify that the application meets guardrail compliance and that unit
tests pass. If the validation of development changes passes, the branch is merged into the application ruleset.

However, if validation fails, then the merge is rejected, and developers should be notified so that they can
address the failure and resubmit their changes. The feedback cycle of validating and integrating development
changes should be as fast as possible, preferably 15 minutes or less, because it increases productivity in the
following ways:

Developers do not spend unnecessary time waiting to see whether their changes are valid, which enables
them to make incremental changes.
Incremental changes tend to be easier to validate, debug, and integrate.
Other developers spend less time coordinating changes and can be confident that they are building on
validated functionality.

How you implement best practices for continuous integration depends on whether you have a smaller scale
development with one or two scrum teams using a shared development environment or multiple distributed
development teams. See the following topics for more information:

Understanding development best practices working in a shared environment

Development environments can be shared by one or more teams collaborating on the production
application. To practice continuous integration, use a team application layer, branches, and release toggles.

Understanding development best practices in a distributed development environment with multiple teams

If you have multiple teams working on the same application, each team should have a separate, remote
development server on which developers work. A central Pega Platform server acts as a source development
system, which allows teams to integrate features into the application in a controlled manner and avoid
unexpected conflicts between teams working in the same rulesets.

Understanding development best practices working in a shared environment

Development environments can be shared by one or more teams collaborating on the production application. To
practice continuous integration, use a team application layer, branches, and release toggles.

Create a team application layer on top of the main production application. The team application
layer contains branches, tests, and other development rulesets that are not intended to go into production.
For more information, see Using multiple built-on applications on Pega Community.
Create a branch of your production ruleset in the team application. For more information, see Adding
branches to your application.
Perform all development work in the branch.
Optional: Use release toggles to disable features that are not ready for general use. Using toggles allows you
to merge branch content frequently even if some content is not final. For more information, see Toggling
features on and off.
Optional: Create formal review tasks for other members of the development team to review your content.
For more information, see Creating a branch review.
Optional: Use the branch developer tools to review the content and quality of your branch. For more
information, see Reviewing branches.
Optional: Lock the branch. For more information, see Locking a branch.
Frequently merge the branch from the team application layer to the production rulesets. For more
information, see Merging branches into target rulesets.

Note: It is recommended that no more than two or three scrum teams share a development environment.

Understanding development best practices in a distributed development environment with multiple teams

If you have multiple teams working on the same application, each team should have a separate, remote
development server on which developers work. A central Pega Platform server acts as a source development
system, which allows teams to integrate features into the application in a controlled manner and avoid
unexpected conflicts between teams working in the same rulesets.

Remote development systems


Follow these best practices on the remote development systems:

Multiple teams can share a development system; whether they do can depend on the geographical
distribution of teams, system load, the risk of teams making system-wide changes, and demand for system
restarts.
Create a team application layer on top of the main production application. The team application
layer contains branches, tests, and other development rulesets that are not intended to go into production.
For more information, see Using multiple built-on applications.
Put all necessary configuration information for the development server in a development application that
you can maintain, package, and deploy on demand so that you can quickly start up new remote
development systems.
Create a branch of your production ruleset in the team application. For more information, see Adding
branches to your application.
Name the branch with the feature or bug ID from your project management tool so that you can associate
changes with a corresponding feature or bug.
Perform all development work in the branch in versioned rules. Use branches for targeted collaboration and
so that you can use development best practices such as conducting branch reviews and monitoring
application quality metrics. You can also quickly test and roll back changes before you merge branches.
Do not develop in branches on the source development system, to avoid having multiple versions of the
same branch on both the source development system and a remote development system. The two copies of
the branch might contain different contents that conflict with each other.
Avoid developing rules in unlocked rulesets. Lock rulesets to ensure that rules are not accidentally and
directly changed, because changes should be introduced only when branches are merged. Use a continuous
integration server such as Deployment Manager to ensure that passwords to locked rulesets are not shared
publicly.

For more information, see Versions tab on the Ruleset form.

Optional: Use release toggles to disable features that are not ready for general use. Using toggles allows you
to merge branch content frequently even if some content is not final. For more information, see Toggling
features on and off.
Optional: Create formal review tasks for other members of the development team to review your content.
For more information, see Creating a branch review.
Optional: Use the branch developer tools to review the content and quality of your branch. For more
information, see Reviewing branches.
Optional: Lock the branch before you migrate it to the source development system. For more information,
see Locking a branch.
Avoid deleting a branch before you migrate it into the main development system.
Delete a branch after you import it into the main development system so that you do not import older data
or rule instances and unintentionally merge them into the main application.
Maintain a branch only as long as you need it. The longer that you keep a branch, the likelihood increases
that the branch will have conflicts, which can be difficult to resolve.
Be aware that you cannot make some rule updates in branches, such as updates to application records,
classes, libraries, and schema. Senior application architects on the team should make these changes directly
on the main development system.
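The branch-naming practice above, which embeds the feature or bug ID from the project management tool, can be sketched as follows. The CamelCase-slug convention is illustrative, not a Pega Platform requirement:

```python
import re

def branch_name(work_id, summary):
    """Build a branch name that embeds the feature or bug ID so that
    changes can be associated with the corresponding work item.
    The naming convention here is a team convention, not a Pega rule."""
    # Keep only letters and digits from the summary, CamelCased.
    slug = "".join(word.capitalize() for word in re.findall(r"[A-Za-z0-9]+", summary))
    return f"{work_id}{slug}"
```

For example, `branch_name("BUG1234", "fix login timeout")` yields a branch name that a reviewer can trace straight back to the bug record.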

Source development system


Follow these best practices on the source development system:

Use an established and reliable backup and restore process.


Maintain high availability on the source development system so that development teams are not affected by
extended periods of downtime.
Limit and restrict developer access to the main development system so that developers cannot make
impromptu application changes without going through the DevOps workflow.

Understanding the DevOps release pipeline


Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development through testing to deployment on your production system. Use Pega Platform tools
and common third-party tools to implement DevOps.

The release pipeline in the following diagram illustrates the best practices for using Pega Platform for DevOps. At
each stage in the pipeline, a continuous loop presents the development team with feedback on testing results.
This example includes the following assumptions:

Pega Platform manages all schema changes.


Jenkins is the automation server that helps to coordinate the release pipeline, and JFrog Artifactory is the
application repository; however, other equivalent tools could be used for both.

Open DevOps release pipeline overview



Understanding development

Pega Platform developers use Agile practices to create applications and commit the changes into branches
in a shared development environment. Automated and manual testing provides rapid feedback to
developers so that they can improve the application.

Understanding continuous integration

With continuous integration, application developers frequently check in their changes to the source
environment and use an automated build process to automatically verify these changes. Continuous
integration identifies issues and pinpoints them early in the cycle. Use Jenkins with the prpcServiceUtils tool
and the execute test service to automatically generate a potentially deployable application and export the
application archive to a binary repository such as JFrog Artifactory.

Understanding continuous delivery

With continuous delivery, application changes run through rigorous automated regression testing and are
deployed to a staging environment for further testing, to ensure high confidence that the application is
ready to deploy on the production system.

Understanding deployment

After an application change passes the testing requirements, use Jenkins and the prpcServiceUtils tools to
migrate the changes into production after complete validation through automated testing on the staging
system. Use application release guidelines to deploy with minimal downtime.

Understanding development

Pega Platform developers use Agile practices to create applications and commit the changes into branches in a
shared development environment. Automated and manual testing provides rapid feedback to developers so that
they can improve the application.

Follow these best practices to optimize the development process:

Use multiple built-on applications to develop and deliver smaller component applications. Smaller
applications move through the pipeline faster and are easier to develop, test, and maintain.
Create one Pega Platform instance as a source environment that acts as a single source of truth for the
application. This introduces stability into the developer environment and ensures that a problem in one
developer environment does not affect other environments.
Use Pega Platform developer tools, for example:
The rule compare feature allows you to see the differences between two versions of a specific rule.
The rule form search tool allows you to find a specific rule in your application.
Follow branch-based development practices:
Developers can work on a shared development environment or local environments.
Content in branches migrates from the development environments to merge into the source
environment.
Create an archive by exporting and storing a backup version of each branch in a separate location in the
application repository. If a corrupted system state requires you to restore the source environment to a
previous known good application version, you can merge the archived branches again to reapply any
changes that were lost in the restore.
Use unit tests to ensure quality.
Ensure that the work on a ruleset is reviewed and that the changes are validated. Lock every complete and
validated ruleset.
Regularly synchronize the development environments with the source environment.

For more information, see the following articles and help topics:

Application development
Understanding best practices for DevOps-based development workflows
Using multiple built-on applications
Searching for a rule
Checking out a rule
Checking in a rule
Comparing rule versions
Understanding best practices for version control in the DevOps pipeline
Branching
Rule development in branches
Merging branches into target rulesets
Using the Lock and Roll feature for managing ruleset versions
Adding a branch from a repository
Publishing a branch to a repository
Creating a toggle
Testing
Pega Platform application testing in the DevOps pipeline
PegaUnit testing

Understanding continuous integration


With continuous integration, application developers frequently check in their changes to the source environment
and use an automated build process to automatically verify these changes. Continuous integration identifies
issues and pinpoints them early in the cycle. Use Jenkins with the prpcServiceUtils tool and the execute test
service to automatically generate a potentially deployable application and export the application archive to a
binary repository such as JFrog Artifactory.

During continuous integration, maintain the following best practices:

To automatically generate a valid application, properly define the application Rule-Admin-Product rule and
update the rule whenever the application changes. The prpcServiceUtils tool requires a predefined Rule-
Admin-Product rule.
To identify issues early, run unit tests and critical integration tests before packaging the application. If any
one of these tests fails, stop the release pipeline until the issue is fixed.
Publish the exported application archives into a repository such as JFrog Artifactory to maintain a version
history of deployable applications.
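The gating logic behind these practices can be sketched as a small build step: run the unit and critical integration tests first, and only package and publish when everything passes. The function and archive names below are illustrative, not part of Pega Platform or Jenkins:

```python
def ci_gate(test_results, package, publish):
    """Stop the pipeline unless every unit and critical integration test passed.

    test_results: mapping of test name -> bool (True = passed).
    package, publish: callables supplied by the pipeline (illustrative),
    e.g. an export via prpcServiceUtils and a push to a binary repository.
    """
    failures = [name for name, passed in test_results.items() if not passed]
    if failures:
        # Identify issues early: halt the release pipeline until they are fixed.
        raise RuntimeError("Pipeline stopped; failing tests: " + ", ".join(failures))
    archive = package()   # generate the potentially deployable application
    publish(archive)      # maintain a version history of deployable applications
    return archive
```

A failed test raises before any packaging happens, which mirrors the "stop the release pipeline until the issue is fixed" practice above.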

For more information, see the following articles and help topics:

Pega unit tests


Running test cases and suites with the Execute Tests service
Application packaging
Packaging a release on your development environment
Automatically deploying applications with prpcUtils and Jenkins

Understanding continuous delivery


With continuous delivery, application changes run through rigorous automated regression testing and are
deployed to a staging environment for further testing, to ensure high confidence that the application is ready
to deploy on the production system.

Use Jenkins with the prpcServiceUtils tool to deploy the packaged application to test environments for regression
testing or for other testing such as performance testing, compatibility testing, acceptance testing, and so on. At
the end of the continuous delivery stage, the application is declared ready to deploy to the production
environment. Follow these best practices to ensure quality:

Use Docker or a similar tool to create test environments for user acceptance tests (UAT) and exploratory
tests.
Create a wide variety of regression tests through the user interface and the service layer.
Check the tests into a separate version control system such as Git.
If a test fails, roll back the latest import.
If all the tests pass, annotate the application package to indicate that it is ready to be deployed. Deployment
can be done either automatically with Jenkins and JFrog Artifactory or manually.
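The decision at the end of the staging test run, as described in the practices above, can be sketched as follows. The action names are illustrative, not a Pega Platform or Jenkins API:

```python
def delivery_gate(suite_results):
    """Decide the outcome of the staging test run.

    suite_results: mapping of test-suite name -> bool (True = passed).
    """
    failed = sorted(name for name, passed in suite_results.items() if not passed)
    if failed:
        # A failing suite means the latest import is rolled back.
        return {"action": "roll back latest import", "failed_suites": failed}
    # Every suite passed: annotate the package as ready to deploy.
    return {"action": "annotate package", "status": "ready-to-deploy"}
```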

For more information, see the following articles and help topics:

Deploying to a staging system


Deploying application changes to your staging or production environment
Automatically deploying applications with prpcUtils and Jenkins
Using restore points to enable error recovery

Understanding deployment

After an application change passes the testing requirements, use Jenkins and the prpcServiceUtils tools to
migrate the changes into production after complete validation through automated testing on the staging system.
Use application release guidelines to deploy with minimal downtime.
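A Jenkins job typically drives this step through a shell command. The sketch below assembles such a command; the `import` operation and `--connPropFile` option follow the prpcServiceUtils properties-file convention, but verify the exact flags against the tool's documentation:

```python
import shlex

def import_step(conn_prop_file):
    """Assemble the shell command a Jenkins build step could run to import
    the packaged application on the target node. Flag names should be
    checked against the prpcServiceUtils documentation; the connection
    properties file names the target system and the archive to import."""
    cmd = ["./prpcServiceUtils.sh", "import", "--connPropFile", conn_prop_file]
    return shlex.join(cmd)
```

Keeping the connection details in a properties file lets the same job promote an archive to staging or production by swapping one argument.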

For more information, see the following articles and help topics:

Deploying to the production system


Understanding best practices for version control in the DevOps pipeline

Deploying application changes to your staging or production environment


Automatically deploying applications with prpcUtils and Jenkins
Migrating application changes
Understanding application release changes, types, and processes
Enabling changes to the production system
Updating access groups by submitting a request to an active instance

Understanding best practices for version control in the DevOps pipeline

Change the application version number each time you deploy changes to a production system. As a best practice,
use semantic versioning, because it offers a logical set of rules about when to increase each version number.

When you use semantic versioning, the part of the version number that is incremented communicates the
significance of the change. Additional information about semantic versioning is available on the web.

The version number, in the format NN-NN-NN, defines the major version (first two digits), minor version (middle
digits), and patch version (last digits), for example, 03-01-15.

Major versions include significant features that might cause compatibility issues with earlier releases.
Minor versions include enhancements or incremental updates.
Patch versions include small changes such as bug fixes.

Rulesets include all versions of each rule. Skimming reduces the number of rules by collecting the highest version
of rules in the ruleset and copying them to a new major or minor version of that ruleset, with patch version 01.
For more information about skimming, see Skim to create a higher version.
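The NN-NN-NN scheme can be sketched as a small increment function. Resetting lower segments to 01 matches the skim behavior described above (a skim creates patch version 01); whether a major bump also resets the minor segment to 01 is shown here as an illustrative convention:

```python
def bump(version, part):
    """Increment one segment of an NN-NN-NN application version.

    part is "major", "minor", or "patch"; lower segments reset to 01,
    following the skimming convention described in the text.
    """
    major, minor, patch = (int(p) for p in version.split("-"))
    if part == "major":
        major, minor, patch = major + 1, 1, 1
    elif part == "minor":
        minor, patch = minor + 1, 1
    elif part == "patch":
        patch += 1
    else:
        raise ValueError("part must be major, minor, or patch")
    return f"{major:02d}-{minor:02d}-{patch:02d}"
```

For example, a minor skim of 03-01-15 produces 03-02-01, while a bug fix only advances the patch segment to 03-01-16.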

Best practices for development


Follow these best practices for version control in development:

Work in branches.
Consider creating a major version of your application if you upgrade your application server or database
server to a major new version.
For small single scrum teams:
Increment both the patch and the minor version during every merge.
Developers merge into the next incremented patch version.
For multiple scrum teams:
Assign a user the role of a release manager, who determines the application version and release
strategy. This user should be familiar with concepts and features such as rulesets, ruleset versioning,
application records, and application migration strategies.
The release manager selects a development ruleset version number that includes a patch version
number.
Developers merge into the highest available ruleset version.
Frequently increment ruleset versions to easily track updates to your application over time.
Maintain an application record that is capped at major and minor versions of its component rulesets.

Best practices for deployment


Follow these best practices when you deploy your application to production:

Define target ruleset versions for production deployment.


Use lock and roll to password-protect versions and roll changes to higher versions. For more information, see
RuleSet Stack tab.
Increment the ruleset version every time you migrate your application to production, unless the application
is likely to reach the patch version limit of 99.
Create restore points before each deployment. For more information about restore points, see Using restore
points to enable error recovery.

Understanding continuous integration and delivery pipelines


DevOps is a culture of collaboration by development, quality, and operations teams to address issues in their
respective areas. To sustain progress and bring continued improvement, tools and processes are put in place.
Use DevOps practices such as continuous integration and delivery (CI/CD) pipelines to break down code into
pieces and automate testing tasks, so that multiple teams can work on the same features and achieve faster
deployment to production.

A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous delivery.
In the continuous integration stage, developers can continuously merge branches into a target application. In the
continuous delivery stage, the target application is packaged and moved through quality testing before it is
deployed to a production environment.
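The two stages can be modeled as an ordered sequence of tasks that stops at the first failure. The stage and task names below are illustrative, not Deployment Manager configuration:

```python
# A minimal model of the two pipeline stages described above.
PIPELINE = [
    ("continuous integration", ["merge branch", "run unit tests", "package application"]),
    ("continuous delivery", ["deploy to staging", "run regression tests", "deploy to production"]),
]

def run_pipeline(pipeline, execute):
    """Run tasks stage by stage; execute(task) returns True on success.
    The pipeline stops at the first failing task and reports where."""
    for stage, tasks in pipeline:
        for task in tasks:
            if not execute(task):
                return f"stopped in {stage} at: {task}"
    return "deployed to production"
```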

You can set up a CI/CD pipeline for your Pega Platform application using one of two methods:

Use third-party tools, such as Jenkins, to start a job and perform operations on your software. For more
information, see Configuring a continuous integration and delivery pipeline.
Use Deployment Manager, where you use Pega Platform as the orchestration server that runs the pipeline,
packages your application, and manages importing packages from and exporting packages to repositories
that connect from one system to another. For more information, see Understanding model-driven DevOps
with Deployment Manager.

Adding a branch from a repository

If you are working in a continuous integration and delivery (CI/CD) pipeline, you can add a branch from a
repository to your development application. You cannot add a branch that contains branched versions of a
ruleset that is not in your application stack.

Publishing a branch to a repository

If you are using a continuous integration and delivery (CI/CD) pipeline with third-party tools such as Jenkins,
you can publish a branch from your development application to a Pega repository on the main development
system (remote system of record) to start a merge.

Understanding rule rebasing

If you are using continuous integration in a CI/CD pipeline with either Deployment Manager or third-party
automation servers such as Jenkins, after you merge branches, you can rebase your development application
to obtain the most recently committed rulesets. Rebasing allows development teams working in separate
development environments to share their changes and keep their local rule bases synchronized. Having the
most recently committed rules on your development system decreases the probability of conflicts with other
development teams.

Related Content
Article

Rebasing rules to obtain latest versions

Article

PegaUnit testing

Adding a branch from a repository


If you are working in a continuous integration and delivery (CI/CD) pipeline, you can add a branch from a
repository to your development application. You cannot add a branch that contains branched versions of a ruleset
that is not in your application stack.

1. In the navigation panel, click App, and then click Branches.

2. Right-click the application into which you want to import a branch and select Add branch from repository.

3. In the Add branch from repository dialog box, from the Repository list, select the repository that contains the
branch that you want to import.

4. In the Branch name field, press the Down Arrow key and select the branch that you want to import.

5. Click Import.

6. Click OK.

7. If you are using multiple branches, reorder the list of branches so that it matches the order in which rules
should be resolved.

For more information, see Reordering branches.

8. Create rules and add them to your branch.


When you create rules, you can select the branch and ruleset into which you want to save them. Rulesets are
automatically created. For more information, see Rule development in branches.

Related Content
Article

Adding branches to your application

Publishing a branch to a repository


If you are using a continuous integration and delivery (CI/CD) pipeline with third-party tools such as Jenkins, you
can publish a branch from your development application to a Pega repository on the main development system
(remote system of record) to start a merge.

To publish a branch to a repository, do the following steps:

1. In the navigation panel, click App, and then click Branches.

2. Right-click the branch that you want to push to a repository and click Publish to repository.

3. In the Push branch to repository dialog box, select the repository from the Repository list.

4. Click Publish.

Result: If the repository is not configured properly or is down, you receive a message and cannot push
the branch to the repository.

5. Click Close.

Related Content
Article

Branches and branch rulesets

Article

Integrating with file and content management systems

Understanding rule rebasing


If you are using continuous integration in a CI/CD pipeline with either Deployment Manager or third-party
automation servers such as Jenkins, after you merge branches, you can rebase your development application to
obtain the most recently committed rulesets. Rebasing allows development teams working in separate
development environments to share their changes and keep their local rule bases synchronized. Having the most
recently committed rules on your development system decreases the probability of conflicts with other
development teams.

For example, you can publish a branch from your development application to a Pega repository on a source
development system, which starts a job on Jenkins as your automation server and merges branches. You can also
use the Merge branches wizard to start a Deployment Manager build by first merging branches in a distributed or
nondistributed, branch-based environment. After the merge is completed, you can rebase the rulesets on your
development application to obtain the merged rulesets.

Configuring settings for rebasing

Before you can rebase your development system, you must first configure a Pega repository and then enable
ruleset versions for them. You must also have the appropriate permissions for rebasing.

Enabling the Pega repository type

When you use continuous integration and delivery (CI/CD) pipelines with third-party automation servers, you
use Pega Platform as a binary repository for rule artifacts during development. You also use Pega repositories
to rebase your development application when you are using third-party automation servers or Deployment
Manager.

Enabling ruleset versions for Pega repositories for rebasing


When you rebase rules, you must enable ruleset versions for Pega repositories so that they can host ruleset
versions. To enable ruleset versions, configure the HostedRulesetsList dynamic system setting on the
remote development system on which you are merging branches.

Rebasing rules to obtain latest versions

If you are using a continuous integration and continuous delivery (CI/CD) pipeline with Deployment Manager
or third-party automation servers such as Jenkins, you can rebase your development application to obtain the
most recently committed rulesets through Pega repositories after you merge branches on the source
development system.

Related Content
Article

Integrating with file and content management systems

Configuring settings for rebasing


Before you can rebase your development system, you must first configure a Pega repository and then enable
ruleset versions for them. You must also have the appropriate permissions for rebasing.

1. If you are using Pega repositories with third-party automation servers such as Jenkins, enable the Pega
repository type.

You do not need to enable Pega repositories if you are using Deployment Manager. For more information,
see Enabling the Pega repository type.

2. Create a connection to a Pega type repository that supports ruleset version artifacts. For more information,
see Adding a Pega repository.

3. Enable ruleset versions for Pega repositories by configuring the HostedRulesetsList dynamic system setting
on the system of record. For more information, see Enabling ruleset versions for Pega repositories for
rebasing.

4. Ensure that you have the pxCanRebase privilege so that you can rebase rules. This privilege is associated with
the sysadmin4 role.

If you do not have this privilege, you can add it to your role. For more information, see Specifying privileges
for an Access of Role to Object rule.

Enabling ruleset versions for Pega repositories for rebasing


When you rebase rules, you must enable ruleset versions for Pega repositories so that they can host ruleset
versions. To enable ruleset versions, configure the HostedRulesetsList dynamic system setting on the remote
development system on which you are merging branches.

1. Complete one of the following tasks:

   Open the HostedRulesetsList dynamic system setting if it exists:

   1. Click Records > SysAdmin > Dynamic System Settings.
   2. Click the record with the HostedRulesetsList Setting Purpose and the Pega-ImportExport Owning
      Ruleset.

   Create the record if it does not exist:

   1. Click Create > SysAdmin > Dynamic System Settings.
   2. Enter a short description.
   3. In the Owning Ruleset field, enter Pega-ImportExport.
   4. In the Setting Purpose field, enter HostedRulesetsList.
   5. Click Create and open.

2. On the Settings tab, in the Value field, enter a comma-separated list of the rulesets on the remote
development system. Enclose each ruleset value within quotation marks, for example, "HRApp".

3. Click Save.
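For example, to host two rulesets named HRApp and HRAppInt (placeholder names for illustration), the Value field contains:

```
"HRApp","HRAppInt"
```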

Related Content
Article
Enabling the Pega repository type

Article

Understanding rule rebasing

Enabling the Pega repository type


When you use continuous integration and delivery (CI/CD) pipelines with third-party automation servers, you use
Pega Platform as a binary repository for rule artifacts during development. You also use Pega repositories to
rebase your development application when you are using third-party automation servers or Deployment
Manager.

If you are using Deployment Manager, the Pega repository type is already enabled; otherwise, you must first
enable it for your application by completing the following steps:

1. Click Records > Decision > When and open the pyCanRebase rule that applies to @baseclass.

2. Click Save As > Specialize by class or ruleset.

3. Choose a ruleset in your application, then click Create and open.

4. On the Conditions tab, click Actions > Edit and change the condition to true.

5. Click Submit.

6. Click Save.

7. If you are rebasing rules to refresh your development system with the latest rulesets that are hosted on a
remote development system, enable ruleset versions for Pega repositories. For more information, see
Enabling ruleset versions for Pega repositories for rebasing.

Related Content
Article

Creating a repository

Article

Integrating with file and content management systems

Rebasing rules to obtain latest versions


If you are using a continuous integration and continuous delivery (CI/CD) pipeline with Deployment Manager or
third-party automation servers such as Jenkins, you can rebase your development application to obtain the most
recently committed rulesets through Pega repositories after you merge branches on the source development
system.

Before you begin: To rebase rules, you must first merge branches to make changes to rules. Changes made to
rules in an unlocked ruleset version are not visible to the rebase functionality.

Note: Only one rebase event at a time is supported per development system to prevent accidentally overriding a
rebase event that is in progress.

Note: You can improve rebase performance by frequently incrementing the application patch version.

To rebase rules, complete the following steps:

1. If you are migrating and merging branches separately, manually migrate branches for an application that
has a new major or minor version. Rebase only pulls the ruleset version that is visible to your current
application.

For example: If you previously migrated a branch for an application of version 1.x but are now working on a
2.x application version, migrate the 2.x branch ruleset to the main development system before rebasing.
Otherwise, rebase refreshes your development system with the 1.x ruleset versions.

2. In the header of Dev Studio, click the name of your application, and then click Definition.

3. On the Definition tab, click Get latest ruleset versions.

4. In the Select repository list, select the repository from which to retrieve rules to see a list of ruleset versions
that will be rebased.

5. Click Rebase.

If there are no import conflicts, your development application is refreshed with the rules.
If there are import conflicts, the system displays them. For example, a conflict can occur if you made a
change to the same ruleset version on your local development system or if you modified a non-resolved
rule in the ruleset, such as the Application record. To resolve a conflict, complete the following step.

6. If you have conflicts, you must resolve them before rebasing continues, either by overwriting or rejecting the
changes on your development system. Complete the following steps to import the ruleset and either
overwrite or reject the changes that you made to the ruleset on the development system:

a. For each ruleset, click the Download Archive link and save the .zip file.

b. Click the Click here to launch the Import wizard link at the top of the Rebase rule form to open the
Import wizard, which you use to import the .zip files. For more information, see Importing rules and data
by using the Import wizard.

c. In the wizard, specify whether to use the older version of the ruleset or overwrite the older version with
the newer version.

d. After you resolve all conflicts, restart the rebase process by starting from step 1.

Related Content
Article

Understanding rule rebasing

Article

Integrating with file and content management systems

Installing and enabling the Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 3


To create a connection between Pega Platform or Deployment Manager and Nexus Repository Manager 3, use the
Sonatype Nexus Repository component. Use this repository for centralized storage, versioning, and metadata
support for your application artifacts.

The component for Sonatype Nexus Repository Manager 3 supports Pega 8.1, 8.2, 8.3, and 8.4.

Note: Because of potential conflicts, you should not use both Sonatype Nexus Repository Manager 2 and
Sonatype Nexus Repository Manager 3 type repositories in one application. If you want to use both repository
types, contact [email protected].

For answers to frequently asked questions, see the Nexus FAQ page.

For questions or issues, send an email to [email protected].

Downloading and enabling the component

Download and enable the component so that you can configure a Sonatype Nexus Repository Manager 3
repository.

Creating a Sonatype Nexus Repository Manager 3 repository

After downloading and enabling the Sonatype Nexus Repository Manager 3 component, create a repository
in Pega Platform.

Understanding API usage

When you use repository APIs to interact with Nexus Repository Manager 3, note the following information:

Related Content
Article

Integrating with file and content management systems

Article

Repository APIs

Downloading and enabling the component


Download and enable the component so that you can configure a Nexus Repository Manager 3 repository.

To download and enable the component, do the following steps:

1. Download the component from Pega Marketplace.

2. In the header of Dev Studio, click the name of your application, and then click Definition.

3. In the Application rule form, on the Definition tab, in the Enabled components section, click Manage
components.

4. Click Install new, select the file that you downloaded from Pega Marketplace, and then click Open.

5. Select the Enabled check box to enable this component for your application, and then click OK.

6. In the list of enabled components, select PegaNexus3Repository, select the appropriate version, and then
click Save.

7. If you are using Deployment Manager, on each candidate system and on the orchestration system, perform
one of the following tasks:

Download and enable the component by repeating steps 1 - 6.


Add the PegaNexus3:01-01 and PegaNexusCommon:01-01 rulesets as production rulesets to the
PegaDevOpsFoundation:Administrators access group.

Related Content
Article

Creating a Sonatype Nexus Repository Manager 3 repository

Creating a Sonatype Nexus Repository Manager 3 repository


After downloading and enabling the Sonatype Nexus Repository Manager 3 component, create a repository in
Pega Platform.

Note: You can create only raw type repositories.

To create a repository, do the following steps:

1. In the header of Dev Studio, click Create > SysAdmin > Repository.

2. In the Create Repository rule form, enter a description and name for your repository, and then click Create
and open.

3. In the Edit Repository rule form, on the Definition tab, click Select.

4. In the Select repository type dialog box, click Nexus 3.

5. In the Repository configuration section, configure location information for the repository:

a. In the System URL field, enter the URL of your Nexus Repository Manager 3 server.

b. In the Repository name field, enter the name of the repository.

c. In the Root path field, enter the path of the folder where repository assets are stored. Do not include the
repository folder in the path, and do not start or end the path with the slash (/) character.

For example: To store assets in a folder with the URL http://mynexusrepo.com/repository/raw/myCo/devops, enter the
following information:
System URL: http://mynexusrepo.com
Repository name: raw
Root path: myCo/devops

The connector allows you to browse the assets in this folder from inside Pega Platform.

6. In the Authentication section, configure authentication information:

a. In the Authentication profile field, enter the name of a new authentication profile, and then click the
Open icon to configure the profile.

The authentication profile stores the credentials that Pega Platform needs to authenticate with the
Nexus Repository Manager 3 API.

b. In the Create Authentication Profile rule form, in the Type list, select Basic.

Only Basic authentication is supported. For more information about Basic authentication profiles, see
Configuring a Basic authentication profile.

c. Enter a name and description for the authentication profile.

d. Click Create and open.

7. In the Edit Authentication Profile rule form, configure authentication information:

a. Enter the user name, password, realm, and host name required to authenticate with Sonatype Nexus
Repository Manager 3. For more information, see the Sonatype Nexus Repository Manager 3
documentation.

b. Select the Preemptive authentication check box.

c. Click Save.

8. To verify that the system URL, authentication profile, and repository name are configured properly, in the
Edit Repository rule form, on the Definition tab, click Test connectivity.

If there are any errors, ensure that the credentials in the authentication profile are correct and that Pega
Platform can access the system URL that you entered.

Note: Testing connectivity does not verify that the root path is configured properly.

9. Click Save.
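As a sketch of how the repository settings fit together, Nexus Repository Manager 3 serves raw-repository assets at paths composed from the System URL, repository name, and root path. The following shell snippet composes such a URL from the example values used earlier; the asset file name is hypothetical:

```shell
# Compose a Nexus 3 raw-repository asset URL from the repository
# settings shown in the example above. The asset name is hypothetical.
SYSTEM_URL="http://mynexusrepo.com"     # System URL field
REPO_NAME="raw"                  # Repository name field
ROOT_PATH="myCo/devops"          # Root path field (no leading or trailing slash)
ASSET="MyApp_01.01.01.zip"       # hypothetical application artifact

URL="${SYSTEM_URL}/repository/${REPO_NAME}/${ROOT_PATH}/${ASSET}"
echo "$URL"
# Prints: http://mynexusrepo.com/repository/raw/myCo/devops/MyApp_01.01.01.zip
```

This is why the root path must not repeat the repository folder or include leading or trailing slashes: the connector joins the pieces with slashes itself.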

Related Content
Article

Downloading and enabling the component

Understanding API usage


When you use repository APIs to interact with Nexus Repository Manager 3, note the following information:

Sonatype Nexus Repository Manager 3 does not support the create folder API (D_pxNewFolder), because the
repository cannot have empty folders.
The create file API (D_pxNewFile) and get file API (D_pxGetFile) only support Basic Authentication and support
a file size of up to 5 GB.
The delete API (D_pxDelete) does not work on folders, only files. If all the files in a folder are deleted, the
folder is also deleted.

Related Content
Article

Creating a Sonatype Nexus Repository Manager 3 repository

Installing and enabling the Sonatype Nexus Repository component for Sonatype Nexus Repository Manager 2


Create a connection between Pega Platform or Deployment Manager and Sonatype Nexus Repository Manager 2
with the Sonatype Nexus Repository component. Use this repository for centralized storage, versioning, and
metadata support for your application artifacts.

For answers to frequently asked questions, see the Nexus FAQ page.

For questions or issues, send an email to [email protected].

Downloading and enabling the component

Download and enable the component so that you can configure a Sonatype Nexus Repository Manager 2 repository.

Creating a Sonatype Nexus Repository Manager 2 repository

After downloading and enabling the component for Sonatype Nexus Repository Manager 2, create a
repository in Pega Platform.

Understanding repository API usage

When you use repository APIs to interact with Nexus Repository Manager 2, note the following information:

Related Content
Article

Integrating with file and content management systems

Article

Repository APIs

Downloading and enabling the component


Download and enable the component so that you can configure a Sonatype Nexus Repository Manager 2
repository.

To download and enable the component, do the following steps:

1. Download the component from Pega Marketplace.

2. In the header of Dev Studio, click the name of your application, and then click Definition.

3. In the Application rule form, on the Definition tab, in the Enabled components section, click Manage
components.

4. Click Install new, select the file that you downloaded from Pega Marketplace, and then click Open.

5. Select the Enabled check box to enable this component for your application, and then click OK.

6. In the list of enabled components, select Pega Nexus Repository Connector, select the appropriate version,
and then click Save.

7. If you are using Deployment Manager, on each candidate system and on the orchestration system, perform
one of the following tasks:

Download and enable the component by repeating steps 1 - 6.


Add the PegaNexus:01-01 and PegaNexusCommon:01-01 rulesets as production rulesets to the
PegaDevOpsFoundation:Administrators access group.

Related Content
Article

Creating a Sonatype Nexus Repository Manager 2 repository

Creating a Sonatype Nexus Repository Manager 2 repository


After downloading and enabling the component for Sonatype Nexus Repository Manager 2, create a repository in
Pega Platform.
To create a repository, do the following steps:

1. In the header of Dev Studio, click Create > SysAdmin > Repository.

2. In the Create Repository rule form, enter a description and name for your repository, and then click Create
and open.

3. In the Edit Repository rule form, on the Definition tab, click Select.

4. In the Select repository type dialog box, click Nexus 2.

5. In the Repository configuration section, configure location information for the repository:

a. In the System URL field, enter the URL of your repository.

b. In the Repository ID field, enter the ID of the repository, which you can find on the Configuration tab in
Nexus Repository Manager 2.

For more information, see the documentation for Nexus Repository Manager 2.

c. In the Root path field, enter the path of the folder where repository assets are stored. Do not include the
repository folder in the path, and do not start or end the path with the slash (/) character.

For example: To store assets in a folder with the URL http://mynexusrepo.com/repository/raw/myCo/devops, enter the
following information:
System URL: http://mynexusrepo.com
Repository ID: raw
Root path: myCo/devops

The connector allows you to browse the assets in this folder from inside Pega Platform.

6. In the Authentication section, configure authentication information:

a. In the Authentication profile field, enter the name of a new authentication profile and click the Open
icon to configure the profile.

The authentication profile stores the credentials that Pega Platform needs to authenticate with the
Nexus Repository Manager 2 API.

b. In the Create Authentication Profile rule form, in the Type list, select Basic.

Only Basic authentication is supported.

c. Enter a name and description for your authentication profile.

d. Click Create and open.

7. In the Edit Authentication Profile rule form, configure authentication information:

a. Enter the user name, password, realm, and host name required to authenticate with Nexus Repository
Manager 2. For more information, see the Nexus Repository Manager 2 documentation.

b. Select the Preemptive authentication check box.

c. Click Save.

8. To verify that the system URL and authentication profile are configured properly, in the Edit Repository
rule form, on the Definition tab, click Test connectivity.

If there are any errors, ensure that the credentials in the authentication profile are correct and that Pega
Platform can access the system URL that you entered.

Note: Testing connectivity does not verify that the repository ID or root path are configured properly.

9. Click Save.

Related Content
Article
Downloading and enabling the component

Understanding repository API usage


When you use repository APIs to interact with Nexus Repository Manager 2, note the following information:

Sonatype Nexus Repository Manager 2 does not consider the recursiveDelete parameter for the delete API
(D_pxDelete); all folder deletions are recursive.
The create file API (D_pxNewFile) and get file API (D_pxGetFile) only support Basic Authentication and support
a file size of up to 5 GB.

Related Content
Article

Creating a Sonatype Nexus Repository Manager 2 repository

Automatically deploying applications with prpcUtils and Jenkins


You can use Jenkins to automate exporting and importing Pega Platform applications. Download the
prpcServiceUtils command-line tool and configure Jenkins to export or import archives. You can use a single
Jenkins build job to both export and import an application archive, or you can create separate jobs for each task.

For more information about prpcServiceUtils for service-enabled scripting, see Using service-enabled scripting and
the prpcServiceUtils tool.

Ensure that your system includes the following items:

Jenkins 1.651.1 or later


Jenkins Plugins:
Ant Plugin
Environment Injector Plugin
Build with Parameters Plugin
Ant version 1.9 or later
JDK version 1.7 or later

Downloading and installing prpcServiceUtils

Download and install prpcServiceUtils so that you can use it with Jenkins to deploy applications.

Configuring the Jenkins build environment

Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass
parameters to import or export the RAP.

Configuring the Jenkins project

Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass
parameters to import or export the RAP:

Adding build steps to import or export the archive

You can enter build steps to import an archive or export an archive, or you can do both in one job.

Importing or exporting the archive by running the Jenkins job

Run a Jenkins job to import or export the application archive.

Downloading and installing prpcServiceUtils


Download and install prpcServiceUtils so that you can use it with Jenkins to deploy applications.

To download and install prpcServiceUtils, do the following steps:

1. Download the prpcServiceUtils.zip file onto the Jenkins server.

For more information, see Remote configuration command-line tool (prpcServiceUtils).

2. Extract the files onto any location to which Jenkins has access.
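The two steps above can be sketched as a shell session. The install path below is an assumption (any directory that Jenkins can read works), and the unzip command is shown commented out because it depends on the file you downloaded:

```shell
# Install prpcServiceUtils on the Jenkins server.
# The install path is an assumption; use any location Jenkins can access.
INSTALL_DIR=/tmp/pega/prpcServiceUtils
mkdir -p "$INSTALL_DIR"

# After downloading prpcServiceUtils.zip, extract it into place:
# unzip prpcServiceUtils.zip -d "$INSTALL_DIR"

# Point PEGA_HOME (used later in the Jenkins build environment) at
# the extracted tool.
export PEGA_HOME="$INSTALL_DIR"
echo "PEGA_HOME=$PEGA_HOME"
```

The PEGA_HOME value set here is the same one you configure as a global environment variable in the Jenkins build environment, so the two must point at the same location.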
Related Content
Article

Automatically deploying applications with prpcUtils and Jenkins

Article

Configuring the Jenkins build environment

Configuring the Jenkins build environment


Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass
parameters to import or export the RAP.

To configure the Jenkins build environment, do the following steps:

1. Verify that the following Jenkins plugins are installed:

Ant Plugin
Environment Injector Plugin
Build with Parameters Plugin

2. Open a web browser and navigate to the Jenkins server.

3. Click Manage Jenkins.

4. Click Configure System.

5. Configure the PEGA_HOME environment variable:

a. In the Global properties section, select Environmental variables.

b. In the name field, enter PEGA_HOME.

c. In the value field, enter the location where you extracted the prpcServiceUtils.zip file.

d. Click Add.

6. Click Apply, and then click Save.

Related Content
Article

Automatically deploying applications with prpcUtils and Jenkins

Article

Configuring the Jenkins project

Configuring the Jenkins project


Configure your build environment to call the prpcServiceUtils.bat or prpcServiceUtils.sh script and pass
parameters to import or export the RAP:

1. Complete one of the following actions:

Create a project if you have not already done so.


Open an existing project.

2. Click Configure.

3. Select This build is parameterized.

4. Click Add Parameter and create the parameters that Jenkins passes to the prpcServiceUtils tool:

To import or export a RAP, create the following parameters:

Product parameters

Parameter name       Type      Default value
productName          String    The name of the RAP rule used to generate the archive
productVersion       String    The version number of the RAP rule

To import or export an application, create the following parameters:

Application parameters

Parameter name       Type      Default value
applicationName      String    The name of the application
applicationVersion   String    The version number of the application

5. Select Prepare an environment for the run.

6. In the Properties Content section, set the following property:

SystemName=$BUILD_TAG

Additionally, set Source Code Management to None, and select Inject environment variables to the build
process with the Properties Content option.

7. In the Properties Content section, set the ImportExistingInstances property to one of the following values.
The default is unset:

override

For rules - If a rule with the same key exists in the system, but the rule resolution properties differ (for
example, ruleset or version), replace the existing rule with the imported rule.

For work - If a work object with the same key exists but belongs to a different application (for example,
it has a different class hierarchy but same classgroup name and same ID prefix), replace the existing
work object with the imported work object.

skip

For rules - If a rule with the same key exists in the system, and the rule resolution properties differ, do
not replace the existing rule.

For work - If a work object with the same key exists but belongs to a different application, do not
replace the existing work object.

unset: The import fails if keys already exist either for rule instances that have different rule
resolution properties or for work objects that belong to a different application that uses the same
classgroup name.

8. Set the artifact directory where exported logs and files are downloaded, in the following format:
ARTIFACTS_DIR=<path to artifact directory>

The default is the logs directory.

Note: You can also set the directory later by specifying -artifactsDir when you run the batch file.

9. In the Properties Content section, enter the static properties in the format ParameterName=Value.

Source properties for export:

Source properties for export

Parameter
Default value
name

The host name and port number of the Pega Platform server from which to export
SourceHost
the archive file.
SourceUser The operator name. This operator must have export privileges.
Parameter
Default value
name
SourcePassword The operator password.
Target properties for import:

Target properties for import

Parameter
Default value
name

The host name and port number of the Pega Platform server that contains the
TargetHost
archive file to import.

TargetUser The operator name. This operator must have import privileges.

TargetPassword The operator password.

10. Click Save.
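As a sketch of where the source properties end up, the export step later reads a connection properties file (see the --connPropFile argument in the build steps) assembled from these values. The snippet below writes such a file by hand; the host, operator, and password values are placeholder assumptions:

```shell
# Sketch: a connection properties file of the kind the export command
# reads through --connPropFile. All values are placeholder assumptions.
SystemName="jenkins-pega-42"
cat > "/tmp/${SystemName}_export.properties" <<'EOF'
SourceHost=pega-dev.example.com:8080
SourceUser=deployoperator
SourcePassword=changeme
EOF

# Show the generated file.
cat "/tmp/${SystemName}_export.properties"
```

In a real pipeline, the exportprops Ant target and the injected environment variables produce this file for you; the sketch only illustrates the ParameterName=Value format described in step 9.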

Related Content
Article

Automatically deploying applications with prpcUtils and Jenkins

Article

Adding build steps to import or export the archive

Adding build steps to import or export the archive


You can enter build steps to import an archive or export an archive, or you can do both in one job.

Adding export build steps


Adding import build steps

Related Content
Article

Automatically deploying applications with prpcUtils and Jenkins

Article

Adding export build steps

Article

Adding import build steps

Adding export build steps


To add export steps to your build job, do the following steps:

1. Add an Invoke Ant build step:

a. In the Build section, click Add build step and select Invoke Ant.

b. In the Targets field, enter exportprops.

c. In the Build File field, enter the path to the build file:

On Windows, enter the following path: $PEGA_HOME\samples\Jenkins-build.xml


On UNIX, enter the following path: $PEGA_HOME/scripts/samples/jenkins/Jenkins-build.xml

2. Add a build step to run either prpcServiceUtils.bat or prpcServiceUtils.sh.

If you are on Windows, go to step 3.


If you are on UNIX, go to step 4.
3. On Windows, create an Execute Windows batch command build step:

a. In the Build section, click Add build step and select Execute Windows batch command.

b. In the Command field, enter the following command:

%PEGA_HOME%\scripts\utils\prpcServiceUtils.bat export --connPropFile %WORKSPACE%\%SystemName%_export.properties --artifactsDir %WORKSPACE%

4. On UNIX, create an Execute Shell batch command build step:

a. In the Build section, click Add build step and select Execute Shell batch command​.

b. In the Command field, enter the following command: $PEGA_HOME/scripts/utils/prpcServiceUtils.sh export --connPropFile
$WORKSPACE/${SystemName}_export.properties --artifactsDir $WORKSPACE

Related Content
Article

Adding import build steps

Article

Adding build steps to import or export the archive

Adding import build steps


To add import build steps to your build job, do the following steps:

1. Add an Invoke Ant build step:

a. In the Build section, click Add build step and select Invoke Ant.

b. In the Targets field, enter importprops.

c. In the Build File field, enter the path to the build file:

On Windows, enter the following path: $PEGA_HOME\samples\Jenkins-build.xml


On UNIX, enter the following path: $PEGA_HOME/scripts/samples/jenkins/Jenkins-build.xml

2. Add a build step to run either prpcServiceUtils.bat or prpcServiceUtils.sh.

If you are on Windows, go to step 3.


If you are on UNIX, go to step 4.

3. On Windows, create an Execute Windows batch command build step:

a. In the Build section, click Add build step and select Execute Windows batch command.

b. In the Command field, enter the following command:

%PEGA_HOME%\scripts\utils\prpcServiceUtils.bat import --connPropFile %WORKSPACE%\%SystemName%_import.properties --artifactsDir %WORKSPACE%

4. On UNIX, create an Execute Shell batch command build step:

a. In the Build section, click Add build step and select Execute Shell batch command​.

b. In the Command field, enter the following command:

$PEGA_HOME/scripts/utils/prpcServiceUtils.sh import --connPropFile $WORKSPACE/${SystemName}_import.properties --artifactsDir $WORKSPACE

Related Content
Article

Adding export build steps

Article

Adding build steps to import or export the archive


Importing or exporting the archive by running the Jenkins job
Run a Jenkins job to import or export the application archive.

Do the following steps:

1. In Jenkins, click Build with Parameters.

2. When the parameter values are displayed, verify the default settings and edit any values.

3. Set the artifact directory where exported logs and files are downloaded, in the following format:
-artifactsDir <path to artifact directory>

The default directory is the logs directory.

4. Click Build.
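If you want to start the parameterized job from a script rather than the Jenkins UI, you can call the buildWithParameters endpoint of the Jenkins remote access API, as in this sketch. The server URL, job name, credentials, and parameter values are all placeholder assumptions:

```shell
# Trigger the parameterized Jenkins job remotely. Every value below is
# a placeholder for your own server, job, and credentials.
JENKINS_URL="http://jenkins.example.com:8080"
JOB="pega-export"
CMD="curl -X POST ${JENKINS_URL}/job/${JOB}/buildWithParameters --user admin:apitoken --data productName=HRApp --data productVersion=01.01.01"
echo "$CMD"
# Uncomment the next line to actually start the build:
# eval "$CMD"
```

Passing the parameters on the request replaces step 2 above: Jenkins uses the submitted values instead of prompting you to verify the defaults.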

Related Content
Article

Automatically deploying applications with prpcUtils and Jenkins

Migrating application changes


With minimal disruption, you can safely migrate your application changes throughout the application
development life cycle, from development to deployment on your staging and production environments. In the
event of any issues, you can roll back the deployment and restore your system to a state that was previously
known to be working.

The process that you use to release changes to your application is different depending on the types of changes
that you are making. This topic describes the Standard Release process that you can use to deploy changes to
rules, data instances, and dynamic system settings. The Standard Release process is a self-service way to deploy
changes without downtime. Other methods for releasing changes to your application are not covered in this
article. For more information, see Application release changes, types, and processes.

This Standard Release process applies to both on-premises and Pega Cloud Services environments. As a Pega
Cloud Services customer, if you use this self-service process to release changes to your application, you are
responsible for those changes. For more information, see Change management in Pega Cloud Services and
Service level agreement for Pega Cloud Services.

The Standard Release process includes the following steps and is scalable to the number and types of
environments that you have:

1. Package the release on your shared development environment. For more information, see Packaging a
release on your development environment.
2. Deploy the changes to your staging or production environment. For more information, see Deploying
application changes to your staging or production environment.

Understanding application migration requirements

Before you migrate your application, both your environment and application must meet certain
requirements. For example, you must be able to download a RAP archive to a file system location with the
required available space.

Understanding application migration scenarios

The Standard Release migration process supports two development scenarios.

Understanding application migration requirements


Before you migrate your application, both your environment and application must meet certain requirements. For
example, you must be able to download a RAP archive to a file system location with the required available space.

Your environments and applications must meet the following requirements:

You have at least two existing Pega Platform environments. These environments can be any combination of
sandbox and production environments, and can be on-premises or in the Pega Cloud virtual private cloud
(VPC).
You can log in to a machine from which you will complete the release process, such as your organization's
laptop, workstation, or server.
You have a location on a file system with enough available space to store the RAP archives. You must be
able to download the RAP archive to this location and upload a RAP archive to another system from this
location.
You have complied with stated application guidelines and guardrail requirements.
Your application rule must specify rulesets that have a patch version. Most application rules have the ruleset
versions in the stack set to the minor version, such as "Ruleset: 01-01". You must change each entry so that the
ruleset is specified to the patch level of its version, for example "Ruleset: 01-01-01". This change is required for
your top-level application rule and all built-on applications, except the base PegaRULES built-on application. This
application structure gives you greater control over the release of your software, minimizes the impact of the
release on application users, and provides the smoothest recovery path in case of a troubled deployment.
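The minor-version versus patch-version distinction can be checked mechanically. The following shell sketch is illustrative only (it is not a Pega utility, and the ruleset names are invented): it accepts stack entries qualified to the patch level and flags entries that stop at the minor version.

```shell
# Illustrative check (not a Pega utility): a patch-qualified entry has the
# form Name:NN-NN-NN, while a minor-version entry has only Name:NN-NN.
for entry in "MyCoRuleset:01-01-01" "MyIntRuleset:01-01"; do
  if echo "$entry" | grep -Eq ':[0-9]{2}-[0-9]{2}-[0-9]{2}$'; then
    echo "$entry is patch-qualified"
  else
    echo "$entry must be changed to a patch-level version"
  fi
done
```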

Understanding application migration scenarios


The Standard Release migration process supports the following scenarios:

All developers in the organization use a single shared development environment (recommended by
Pegasystems).
The organization follows a distributed development model, where individual developers or development
teams have their own isolated development environments.

The release process works for either development scenario, because it begins after changes have been merged
into the appropriate ruleset versions. Regardless of development scenario or team size, development teams must
use branching and merging for releasing applications. Otherwise, you cannot take full advantage of the tools and
capabilities of the Pega Platform. For more information, see Understanding application migration scenarios.

Deploying application changes to your staging or production environment
As part of the Standard Release process, after you set up and package a release on your shared development
environment, you can deploy your application changes to your staging or production environment.

This Standard Release process applies to both on-premises and Pega Cloud Services environments. As a Pega
Cloud Services customer, if you use this self-service process to release changes to your application, you are
responsible for those changes. For more information, see Change management in Pega Cloud Services and
Service level agreement for Pega Cloud Services.

This process involves completing the following steps:

1. Deploying the application archives
2. Testing the deployment
3. Activating the release for all users

In the event of any issues, you can roll back the deployment and restore your system to a state that was
previously known to be working.

Before you deploy application changes, you must know about the types of changes that you can make within a
release, the release types, and the release management process to follow based on the changes you want to
deploy. For example, SQL changes that remove columns from database tables or remove data types can interrupt
service for users of the application. You must deploy these types of changes during scheduled downtime when
users are offline. For more information, see Understanding application release changes, types, and processes.

Deploying the application archives

After you create the application archives, deploy them to your target system. This process is the best way to
deploy changes into your staging or production environment, control their activation, and recover from
problematic deployments.

Testing the deployment

After you deploy the changes, the release engineer and specified users can test the changes. For the staging
environment, test the performance and the user interface, run automated tests, and do acceptance testing.
For the production environment, perform validation tests.

Activating the release for all users

After your Rules archive and Data archive are successfully deployed, changes are activated in various ways.
Activation is the process by which a category of changes becomes usable by appropriate users of the
system, if they have access.

Rolling back a problematic deployment

In the event of a problematic deployment, the first goal is to prevent further issues from occurring. Then you
can roll back the deployment and restore your system to a state that was previously known to be working.

Deploying the application archives


After you create the application archives, deploy them to your target system. This process is the best way to
deploy changes into your staging or production environment, control their activation, and recover from
problematic deployments.

The user who imports the archives must have the zipMoveImport and SchemaImport privileges on the target
system.

1. Ensure that you have connectivity to both the target system and to the location where the archives are
stored.

2. Use the prpcServiceUtils command line utility to import the archives to the target system:

Deploy the Rules archive by following the steps in Importing rules and data by using a direct connection
to the database. Your changes are sent to the system, imported to the database, and ready for
activation.
Deploy the Data archive by following the steps in Rolling back and committing tracked data. When you
deploy the Data archive, you use the same tool that you used to deploy the Rules archive, but with
different properties. You can roll back these changes if required.

Note: For information about allowing automatic schema changes, see Editing administrator privileges for
importing archives with schema changes into production.
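The sequence above can be sketched as a dry run that only prints the two invocations instead of executing them. The properties file names and artifact path are placeholders, and prpcServiceUtils.sh itself must be available on the machine that performs the release.

```shell
# Dry run: print the two import invocations instead of executing them.
# Both archives use the same entry point; only the properties file differs.
ARTIFACTS_DIR=/releases/Application-01-01-02
for props in rules_import.properties data_import.properties; do
  echo "prpcServiceUtils.sh import --connPropFile $props --artifactsDir $ARTIFACTS_DIR"
done
```

Printing the commands first is a simple way to review a release runbook before granting the script real database credentials.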

Testing the deployment


After you deploy the changes, the release engineer and specified users can test the changes. For the staging
environment, test the performance and the user interface, run automated tests, and do acceptance testing. For
the production environment, perform validation tests.

Do the following steps:

1. On the target system, create a copy of the access group for your application. This step is a one-time process,
because now this access group is available anytime you deploy changes.

2. Update the copied access group so that it references the new application version.

3. Find the operator ID record for a test user and give that operator ID record access to the access group that
you just created.

Result: You can now safely test your changes in the system at the same time as other users who are running on
the previous version.

When you are satisfied that your release was deployed successfully, Release Engineering can activate the release
for all users in the production environment.

If you experience any issues, see Rolling back a problematic deployment.

Activating the release for all users


After your Rules archive and Data archive are successfully deployed, changes are activated in various ways.
Activation is the process by which a category of changes becomes usable by appropriate users of the system, if
they have access.

Data changes, including schema changes, take effect immediately after being imported to the system. Your
application might be able to access these fields immediately. After testing and when you are sufficiently
comfortable, you should commit these changes. To commit data changes, follow the steps in Rolling back and
committing tracked data.

To activate rule changes, you need to update the access groups that point to the prior version of your application
rule:
1. In Dev Studio, click Records.

2. Click Security > Access Group.

3. Search for the access groups to be updated by specifying your application name in the search box and
filtering the list.

4. After you locate the access group, open the record and increment the version number for your application to
the new release version.

5. Click Save.

Result: If you deploy code changes that need to be compiled, you must restart the system. Code changes cannot
be made without downtime, and your System Administrator must perform a system restart. For information about
the types of changes that you can make within a release, the release types, and the release management process
to follow, see Understanding application release changes, types, and processes.

Rolling back a problematic deployment


In the event of a problematic deployment, the first goal is to prevent further issues from occurring. Then you can
roll back the deployment and restore your system to a state that was previously known to be working.

Do the following steps:

1. Ensure that no new operators can access the problematic application. You can temporarily disable access to
the entire system. For more information, see How to temporarily disallow new interactive logins with a
Dynamic System Setting.

2. Roll back the problematic rules changes. You can roll back changes by updating the access group for your
application and specifying the previous version of your application.

3. Roll back the data instances that you changed to their previous versions. To roll back data changes, use
the prpcServiceUtils command line utility. For more information, see Rolling back and committing tracked data.
This process replaces the modified data instances with their prior definitions, rolling back your data changes
to the last known, good state.

Packaging a release on your development environment


As part of the Standard Release process for migrating your application changes from development to production,
you set up and package the release on your shared development environment.

This Standard Release process applies to both on-premises and Pega Cloud Services environments. As a Pega
Cloud Services customer, if you use this self-service process to release changes to your application, you are
responsible for those changes. For more information, see Change management in Pega Cloud Services and
Service level agreement for Pega Cloud Services.

This process involves completing the following steps:

1. Creating the release target (ruleset version)
2. Locking the release
3. Creating the application archives

After you set up and package the release, you are ready to deploy the changes to your staging or production
environment.

Creating the release target

When developers merge changes by using the Merge Wizard, they must select the ruleset version to which
to merge them. The release engineer is responsible for ensuring that each release has an unlocked ruleset
version that acts as the release target and into which these merges can be performed. Developers are
responsible for merging their branches into the correct, unlocked ruleset version and addressing any
conflicts.

Locking the release

After all merges are completed, the release engineer locks the applications and rulesets to be released. They
are also responsible for creating the new, higher-level ruleset versions and higher-level application rules for
the next release.
Creating the application archives

For each release, you create one or two RAP archives, depending on the changes you made to your
application. The user who exports the archives must have the zipMoveExport privilege.

Creating the release target


When developers merge changes by using the Merge Wizard, they must select the ruleset version to which to
merge them. The release engineer is responsible for ensuring that each release has an unlocked ruleset version
that acts as the release target and into which these merges can be performed. Developers are responsible for
merging their branches into the correct, unlocked ruleset version and addressing any conflicts.

Locking the release


After all merges are completed, the release engineer locks the applications and rulesets to be released. They are
also responsible for creating the new, higher-level ruleset versions and higher-level application rules for the next
release.

To lock the release, do the following steps:

1. In Dev Studio, click Application > Structure > Ruleset Stack.

2. Click Lock & Roll.

Note: As a best practice, lock your built-on applications first, and then work your way up the stack to your
top-level application. This way, as each higher version application rule is created, you can open that rule and
update the version of the built-on application.

3. For each ruleset:

a. Click Lock and provide a password.

b. Click Roll.

4. Click Create a new version of my Application.

5. Click Run.

Result: The application rules and ruleset versions for the current release are locked and require passwords to
make changes. You have also created the higher-level ruleset versions and application rules that will be used
for the next release.

Related Content
Article

RuleSet Stack tab

Creating the application archives


For each release, you create one or two RAP archives, depending on the changes you made to your application.
The user who exports the archives must have the zipMoveExport privilege.

These RAP archives include:

The Rules RAP, which contains the Rules portion of your release, instances from Rules- types only, and all
rules changes.
The Data RAP, which contains the Data portion of your release, instances from Data classes only, and all
data changes.

Splitting the release into a Rules RAP and a Data RAP provides you with more control over the deployment and
activation of your changes on other systems.

More RAP archives might be created during the development process. Import these RAP archives to a single
system from which the Rules RAP and Data RAP will be created. This method provides the greatest level of control
over the release by separating the release process from the development process.

1. Define the RAPs by using the Application Packaging wizard or by copying an existing RAP rule. For more
information, see Product rules.

2. Export each RAP as an archive:

a. Export the rules. For more information, see Export rules into an archive file.

b. Provide a standard name for the archive, such as Application-01-01-02.zip.

c. Store these archives in a standard location to which you have access, and that will be accessible during
deployment.

Result: A Rules archive and a Data archive are created as the result of this process.
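As a sketch of the naming convention above (the application name and version are placeholders, and the split into -Rules- and -Data- archive names is an assumed pattern, not one mandated by the wizard):

```shell
# Derive consistent archive names from the application name and the
# release version. All names here are placeholders for illustration.
APP_NAME=MyApp
RELEASE_VERSION=01-01-02
RULES_ARCHIVE="${APP_NAME}-Rules-${RELEASE_VERSION}.zip"
DATA_ARCHIVE="${APP_NAME}-Data-${RELEASE_VERSION}.zip"
echo "$RULES_ARCHIVE"
echo "$DATA_ARCHIVE"
```

Embedding the release version in the file name makes it easy to match an archive in the storage location to the ruleset versions it contains.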

Related Content
Article

Rules by name

Understanding application release changes, types, and processes


The following tables provide information about the types of changes that you can make within a release, the
release types, and the release management process to follow based on the types of changes that you want to
deploy.

Types of changes within a release

| Change type | Technical changes | Activates by access group | Activates immediately | Activation requires restart | Release frequency | Requires a Support Request (Pega Cloud Services only) |
| --- | --- | --- | --- | --- | --- | --- |
| Rules (including non-rule-resolved rules) | Rule-, Rule-Application, Rule-Obj-Class, Rule-Ruleset-Name, Rule-Ruleset-Version, Rule-Access-Role-Obj, Rule-Access-Deny-Obj | Yes | Yes | No | Daily/Weekly | No |
| Data instances | Data- | No | Yes | No | Weekly | No |
| Dynamic system settings | Data-Admin-System-Settings | No | Yes* | No* | Monthly* | No |
| Functional | Rule-Utility-Function, Rule-Utility-Library | Yes | Yes** | No** | Monthly | Yes |
| Data model | SQL | No | Yes | No | Monthly | Yes |
| Code | Java JAR file, Java .class file | No | No | Yes | Monthly | Yes |
| Environment | Changes outside of Pega (JVM, XML configuration) | No | No | Yes | Quarterly | Yes |

* Certain dynamic system settings activate only on system restart and require you to follow the Environment release process.
** Treat functional changes that reference code as a Code release, which requires a system restart to activate if you are making code changes.

Change type – This column lists the high-level category of changes that you can make in a release.
Technical changes – Technical changes describe the rule types or artifacts for a change type. Rule- and Data- include all subtypes under that parent type, unless specifically identified for a different change type.
Activates by access group – Rule resolution for this change type is controlled by the access groups of an operator.
Activates immediately – Rule resolution uses this change type immediately after deployment.
Activation requires restart – This change type requires a system restart before it is available to the rule resolution process.
Release frequency – Release frequency indicates the period in which you can deploy this type of change to production.
Requires a Support Request (Pega Cloud only) – As a Pega Cloud customer, you are responsible for any application changes that you make; however, as a best practice, inform and engage Pega Support before releasing application changes. You can open a Support Request on My Support Portal. For more information, see My Support Portal FAQ.

Understanding release types

| Release type | Activates for users | Application users affected | Significant UX impact | Release frequency | New application version | Self-service | Requires a Support Request (Pega Cloud only) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bug fix | Immediately | All | No | Daily | No | Yes | No |
| Standard release | On access group update | By access group | No | Weekly | Yes | Yes | No |
| Database release | Immediately | All | No | Monthly | Yes | No | Yes |
| Code release | After restart | All | No | Monthly | Yes | No | Yes |
| Environment release | After restart | All | No | Quarterly | Yes | No | Yes |
| Major release | Per change type* | All | Yes | Quarterly | Yes | No | Yes |

* Activation of a Major release occurs based on the change types that the release contains. For information about how each change type is activated, see the Types of changes within a release table.

Release type – This column lists the high-level category of releases that you can deploy.
Activates for users – This column indicates when this release type takes effect for users.
Application users affected – This column provides the scope of application users that see the effect of this release type.
Significant UX impact – This release type might require users to significantly relearn a process or has significant layout changes.
Release frequency – This column provides the frequency of this type of release.
New application version – This column indicates whether you must create a new application version for this release.
Self-service – A user with appropriate permissions can execute this release type using the Pega Platform, and a Pega System Administrator is not required to roll back changes.
Requires a Support Request (Pega Cloud Services only) – As a Pega Cloud Services customer, you are responsible for any application changes that you make; however, as a best practice, inform and engage Pega Support before releasing application changes. You can open a Support Request on My Support Portal. For more information, see My Support Portal FAQ.

Understanding the process to follow based on types of changes

Does your release contain the following changes?

| Rules | Data instances | Dynamic system settings | Significant UX impact | Code | Data model | Environment changes | Follow this release process | Requires a Support Request (Pega Cloud Services only) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| X | X | - | - | - | - | - | Bug fix | No |
| X | X | X | - | - | - | - | Standard release | No |
| - | X | - | - | - | X | - | Database release | Yes |
| - | - | - | - | X | - | - | Code release | Yes |
| - | - | - | - | - | - | X | Environment release | Yes |
| X | X | X | X | X | X | X | Major release | Yes |

Note: Treat functional changes that reference code as a Code release, which requires a system restart to activate if you are making code changes.
Note: Certain dynamic system settings activate only on system restart and require you to follow the Environment release process.

Rules – Are you deploying Rules- records in this release?
Data instances – Are you deploying Data- records in this release?
Dynamic system settings – Are you loading Data-Admin-System-Settings records in this release?
Significant UX impact – Will users need to significantly relearn a process, or are there significant layout changes?
Code – Are you loading JAR files as part of this release?
Data model – Are there changes to your data model in this release (SQL)?
Environment changes – Will there be operating system or application server changes in this release?
Follow this release process – Based on your answers to these questions, follow this release process.
Requires a Support Request (Pega Cloud Services only) – As a Pega Cloud Services customer, you are responsible for any application changes that you make; however, as a best practice, inform and engage Pega Support before releasing application changes. You can open a Support Request on My Support Portal. For more information, see My Support Portal FAQ.

Testing applications in the DevOps pipeline


Having an effective automated test suite for your application in your continuous delivery DevOps pipeline
ensures that the features and changes that you deliver to your customers are of high quality and do not
introduce regressions.

At a high level, the recommended test automation strategy for testing your Pega applications is as follows:

Create your automation test suite based on industry best practices for test automation
Build up your automation test suite by using Pega Platform capabilities and industry test solutions
Run the right set of tests at different stages of your delivery pipeline
Test early and test often
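As an illustration of running the right tests at each stage, the following sketch maps delivery-pipeline stages to test types. The stage names and the mapping are assumptions for illustration, not a Pega-prescribed configuration.

```shell
# Illustrative (assumed) mapping of delivery-pipeline stages to test types.
stage_tests() {
  case "$1" in
    ci)      echo "unit tests (e.g., PegaUnit), guardrail compliance checks" ;;
    staging) echo "API/integration tests, smoke tests" ;;
    preprod) echo "UI scenario tests, acceptance tests" ;;
    *)       echo "unknown stage" ;;
  esac
}
stage_tests ci
```

Fast, cheap checks run on every merge, while the slower UI and acceptance tests run only in the later stages, which keeps feedback quick without sacrificing coverage.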

Industry best practices for test automation can be shown graphically as a test pyramid. Test types at the bottom
of the pyramid are the least expensive to run, the easiest to maintain, and the quickest to run, and they should
make up the greatest number of tests in the test suite. Test types at the top of the pyramid are the most
expensive to run, the hardest to maintain, and the slowest to run, and they should make up the smallest number
of tests in the test suite. The higher up the pyramid you go, the higher the overall cost and the lower the benefits.

Ideal test pyramid

Analyzing application quality metrics

Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

Setting up for test automation

Before you create Pega unit test cases and test suites, you must configure a test ruleset in which to store
the tests.

PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed prior to
releasing an application, which improves the application quality.

UI testing

Perform UI-based functional tests and end-to-end scenario tests to verify that end-to-end cases work as
expected. Use the third-party Selenium starter kit for CRM or the built-in scenario testing tool to perform
UI testing.

Analyzing application quality metrics


Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

Viewing application quality metrics

Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

Changing application quality metrics settings

The Application Quality settings page provides configurable options related to quality metrics. You can change
the default settings for the displayed metrics to meet your business needs.

Estimating test coverage

View historical test coverage metrics and generate reports containing the number of executable rules and
their test coverage. Use the data to analyze changes in test coverage, and to verify which rules require
testing.

Viewing test coverage reports

View a report that contains the results of test coverage sessions to determine which rules in your application
are not covered with tests. You can improve the quality of your application by creating tests for all
uncovered rules that are indicated in the reports.

Creating unit test suites with AUT

Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose
credentials are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test
cases together and make unit testing more efficient.

Viewing application quality metrics


Quickly identify areas within your application that need improvement by viewing metrics related to your
application's health on the Application Quality dashboard.

For example, view your application's compliance score and see the number and severity of guardrail violations
that were found in your application. You can then improve your application's compliance score and overall quality
by investigating and resolving the violations.

To open the Application Quality dashboard, from the Dev Studio header, click Configure > Application > Quality > Dashboard.

You can view the following metrics:

Rule, case, and application – View the number of executable rules (functional rules that are supported by
test coverage) and the number of case types in the selected applications. To view metrics for a different
combination of applications, select a different list on the Application: Quality Settings page.
Guardrail compliance – View the compliance score and the number of guardrail violations for the included
applications, as well as a graph of changes to the compliance score over time. To see more details about the
application's guardrail compliance, click View details.
Test coverage – View the percentage and number of rules that are covered by tests, and the last generation
date of the application-level coverage report for the selected applications, as well as a graph of changes to
application-level coverage over time. To see test coverage reports or to generate a new coverage report,
click View details.

Note: If the EnableBuiltOnAppSelectionForQuality switch is turned on, then coverage session metrics are
also displayed on the Application Quality dashboard for the built-on applications selected on the
Application: Quality Settings page.
Unit testing – View the percentage and number of Pega unit test cases that passed for the selected
applications, over the period selected on the Application Quality Settings landing page. The graph illustrates
the changes to the test pass rate over time. To see reports about test compliance and test execution, click
View details.
Case types – View guardrail score, severe guardrail warnings, test coverage, unit test pass rate, and scenario
test pass rate for each case type in the applications. To view additional details about a case type, click View
details.
Data types – View guardrail score, severe guardrail warnings, test coverage, and unit test pass rate for each
data type in the applications. To view additional details about a data type, click View details.
Other rules – View guardrail score, test coverage, test pass rate, the number of warnings, a list of rules with
warnings, the number and list of uncovered rules, and the number and list of failed test cases for rules that
are used in the selected applications but that are not a part of any case type.
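For orientation, the top-line percentages on the dashboard are simple ratios over the counts listed above. A minimal sketch, not Pega's implementation; the function names and figures are illustrative:

```python
# Illustrative sketch (not Pega source): how dashboard-style percentages
# relate to the underlying counts of rules and test runs.

def coverage_percentage(covered_rules, executable_rules):
    """Share of executable rules that are covered by at least one test."""
    if executable_rules == 0:
        return 0.0
    return round(100.0 * covered_rules / executable_rules, 1)

def pass_rate(passed_tests, total_tests):
    """Share of unit test cases that passed in the selected period."""
    if total_tests == 0:
        return 0.0
    return round(100.0 * passed_tests / total_tests, 1)

print(coverage_percentage(412, 550))  # 74.9
print(pass_rate(95, 100))             # 95.0
```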

Application quality metrics

The Application Quality dashboard displays metrics for guardrails, test coverage, and unit testing that you
can use to assess the overall health of your application and identify areas that require improvement. You can
change the default ranges for the color codes by modifying the corresponding when rules in the Data-
Application-Quality class.

Related Content:
- Changing application quality metrics settings
- Application quality metrics
- Estimating test coverage

Application quality metrics


The Application Quality dashboard displays metrics for guardrails, test coverage, and unit testing that you can
use to assess the overall health of your application and identify areas that require improvement. You can change
the default ranges for the color codes by modifying the corresponding when rules in the Data-Application-Quality
class.

The following table describes the relationship between colors, default ranges, and when rules. For each metric in
the Red, Orange, and Green columns, the top row indicates the default range for each color and the bottom row
indicates the corresponding when rule.
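The banding itself is a straightforward range check on each metric value. A hypothetical sketch with placeholder thresholds, not the shipped defaults; match the cutoffs to your own when rules in Data-Application-Quality:

```python
# Hypothetical sketch of the color-banding idea: a metric value is mapped to
# red, orange, or green by range checks, analogous to the when rules in the
# Data-Application-Quality class. The thresholds are illustrative placeholders.

def color_for_score(score, red_below=70, green_from=90):
    if score < red_below:
        return "red"
    if score >= green_from:
        return "green"
    return "orange"

print(color_for_score(65))   # red
print(color_for_score(85))   # orange
print(color_for_score(97))   # green
```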
Related Content:
- Viewing application quality metrics

Changing application quality metrics settings


The Application Quality settings landing page provides configurable options related to quality metrics. You can
change the default settings for the displayed metrics to meet your business needs.

Note: To change settings on the landing page and to enable the EnableBuiltOnAppSelectionForQuality toggle,
which allows you to select which built-on applications are included, your operator ID must have the SysAdm4 privilege.

On the Application Quality settings landing page, you can modify the following settings:
Application(s) included – To include only rules from the current application in the test coverage report,
select Current application only. To also include rules from built-on applications, select Include built-on
applications. By default, Current application only is selected. If you enable the
EnableBuiltOnAppSelectionForQuality toggle, you can select which built-on applications are included.

Note: If a master user starts an application-level coverage session for an application, then that user's
configuration of this setting is in effect for all users that execute test coverage for the duration of this
session.
Ignore test rulesets when calculating Guardrail score – When you enable this setting, Guardrail score is
calculated without taking test rulesets into account. This is the default behavior. When you disable this
setting, test rulesets are taken into account when calculating Guardrail score.
Quality trends – Use this setting to change the date range of the trend graphs on the Application Quality,
Application: Test coverage and Application: Unit testing landing pages. The default value is Last 2 weeks.
Test case execution – Use this setting to change the number of days after execution during which tests are
treated as current by the Application Quality dashboard and coverage reports. By default, a test executed
more than seven days ago is considered too old to be included on the Application Quality dashboard and in
reports.
Scenario test case execution – Use this setting to add a delay (in milliseconds) to the execution of steps in a
scenario test.
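The execution-window setting above amounts to a recency check on each test run. A rough sketch of that idea, assuming the default seven-day window described above (an illustration, not Pega source):

```python
# Illustrative sketch (assumed behavior, not Pega source): a test execution
# counts toward the dashboard only if it ran within the configured number of
# days; the default window is seven days.
from datetime import datetime, timedelta

def is_execution_current(executed_at, now, max_age_days=7):
    return now - executed_at <= timedelta(days=max_age_days)

now = datetime(2024, 1, 15)
print(is_execution_current(datetime(2024, 1, 10), now))  # True
print(is_execution_current(datetime(2024, 1, 1), now))   # False
```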

Related Content:
- Improving your compliance score
- Viewing application quality metrics
- Estimating test coverage
- Application quality metrics

Estimating test coverage


View historical test coverage metrics and generate reports containing the number of executable rules and their
test coverage. Use the data to analyze changes in test coverage, and to verify which rules require testing.

On the Test Coverage landing page, view a chart that displays test coverage metrics, and generate user-level,
application-level, and merged coverage reports. User-level reports contain the results of a single test
coverage session that one user performs, while application-level reports contain results from multiple test
coverage sessions that many users run. Merged reports combine the results of the most recent application-level
reports from multiple applications.
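Conceptually, a merged report behaves like a union of per-application coverage results, so a rule shared by several applications counts as covered once it is tested anywhere. A sketch with made-up rule names:

```python
# Conceptual sketch (assumption, not Pega's implementation): merging coverage
# reports is a set union, so shared rules only need to be covered once.
app_a_covered = {"ValidateAddress", "CalcDiscount"}
app_b_covered = {"CalcDiscount", "RouteCase"}

merged_covered = app_a_covered | app_b_covered
print(sorted(merged_covered))  # ['CalcDiscount', 'RouteCase', 'ValidateAddress']
```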

The following rule types are included in test coverage reports: Activity, Case type, Collection, Correspondence,
Data page, Data transform, Decision data, Decision table, Decision tree, Declare expression, Declare trigger,
Flow, Flow action, Harness, HTML, HTML fragment, Map value, Navigation, Paragraph, Report definition,
Scorecard, Section, Strategy, Validate, When, and XML Stream.

Generating a user-level test coverage report

Generate a user-level test coverage report to identify which executable rules in your currently included
applications are covered and not covered by tests. The results of this type of report are not visible on the
Application Quality Dashboard.
Generating an application-level test coverage report

Generate an application-level coverage report that contains coverage results from multiple users. Use this
report to identify which executable rules in your currently included applications are covered and not covered
by tests. The results of this type of report are visible on the Application Quality Dashboard.

Participating in an application-level test coverage session

When an application-level coverage session is running, you can perform tests of the application to contribute
to an application-level test coverage report that identifies the executable rules in your application that are
covered and not covered by tests.

Generating a merged coverage report

Generate application-level coverage reports for every application in your system and in your application
stack, and then merge the most recent reports to a single report, to gain a consolidated overview of test
coverage for all your top-level or built-on applications.

Generating a user-level test coverage report


Generate a user-level test coverage report to identify which executable rules in your currently included
applications are covered and not covered by tests. The results of this type of report are not visible on the
Application Quality Dashboard.

1. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

2. Click User level.

Note: If the Application level coverage is in progress message is displayed, you cannot start a user-level coverage
session.

3. Click Start new session.

4. Enter the title of the coverage report, and then click OK.

5. To provide data for the report, run all of the tests that are available for your included applications, for
example, Pega unit automated tests and manual tests.

6. Click Stop coverage, and then click Yes.

Note: If you close the tab or log out without clicking Stop, the report is not generated.

7. Review the results of the coverage session. In the Coverage history section, click Show Report.

8. Optional:

To see whether coverage reports were generated by other users, click Refresh.

9. Optional:

To see a list of application-level coverage reports, click Application level.

Related Content:
- Participating in an application-level test coverage session

Generating an application-level test coverage report


Generate an application-level coverage report that contains coverage results from multiple users. Use this report
to identify which executable rules in your currently included applications are covered and not covered by tests.
The results of this type of report are visible on the Application Quality Dashboard.

1. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

2. Click Application level.


3. Click Start new session.

Note: To start application-level coverage, your operator ID must have the
pzStartOrStopMasterAppRuleCoverage privilege.

4. Enter the title of the coverage report, and then click OK.

5. Optional:

To provide data for the report, run all of the tests that are available for your currently included applications,
for example, Pega unit automated tests and manual tests.

6. Inform all relevant users that they can log in to the application and start running tests.

7. Wait until all users have completed their tests and have logged off.

Note: If you stop an application coverage session before a user has logged off, the coverage data of this
user is not included in the report.

8. Click Stop coverage, and then click Yes.

9. Review the results of the coverage session. In the Coverage history section, click Show Report.

10. Optional:

To see whether coverage reports were generated by other users, click Refresh.

11. Optional:

To see a list of user-level coverage reports, click User level.

Participating in an application-level test coverage session


When an application-level coverage session is running, you can perform tests of the application to contribute to
an application-level test coverage report that identifies the executable rules in your application that are covered
and not covered by tests.

Before you begin: Ensure that application-level coverage is in progress before you log in. If application
coverage is started after you log in, you cannot contribute to it unless you log off and log in again. Only users with
the pzStartOrStopMasterAppRuleCoverage privilege can initiate application-level coverage.

1. Check if application-level coverage is in progress.

a. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

b. Verify that you see the Application level coverage is in progress message.

If you do not see the message, application-level coverage is not active; however, you can still start a
user-level test coverage session.

2. To provide data for the report, execute all the tests that are available for the included applications, for
example, Pega unit automated tests and manual tests.

Note: During the coverage session your local configuration for included applications is overridden by the
configuration of the user that started the application-level coverage session.

3. Click your profile icon and then click Log off.

Note: If you do not log off before the rule coverage session is stopped, you will not contribute to the report. If
you log off and then log in again while the coverage session is still active, your test coverage sessions are
saved as a new session that will be included in the application coverage report.

Related Content:
- Generating an application-level test coverage report


Generating a merged coverage report

Generate application-level coverage reports for every application in your system and in your application stack,
and then merge the most recent reports into a single report to gain a consolidated overview of test coverage for
all your top-level or built-on applications.

Insight from a merged application report helps you avoid creating duplicate tests for rules that are used across
multiple applications.

Note: Because a merged report is an instance of an application-level report, when a merged report is the most
recent one for an application, it is included in the next merged report.
Before you begin: Ensure that your operator ID has the pzStartOrStopMasterAppRuleCoverage privilege, and
generate at least one application-level coverage report for another application in your system or for a built-on
application in your current application. For more information, see Generating an application-level test coverage
report.

1. Switch to the main application that you want to use as the baseline for the merged report. For more
information, see Switching between applications.

2. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

3. Click the Application level tab.

4. In the Coverage history section, click Merge reports.

5. Enter the title of the merged report, and then click Next.

6. In the list of the most recent reports, select the reports that you want to include in the merged report, and
then click Create.

7. Close the Merge confirmation window.

Result: Dev Studio automatically adds the MRG_ prefix to every merged report to differentiate them from
standard application-level coverage reports and to facilitate finding them.

8. Open the merged report: in the Coverage history section, find the merged report that you created, and then
click Show report.

9. Optional:

To open a report that is included in the merged report, in the Merged reports section, click the report name.

Viewing test coverage reports


View a report that contains the results of test coverage sessions to determine which rules in your application are
not covered with tests. You can improve the quality of your application by creating tests for all uncovered rules
that are indicated in the reports.

1. In the header of Dev Studio, click Configure > Application Quality > Test Coverage.

2. Choose the type of report that you want to view:

To view application-level test coverage reports, click the Application level tab.

To view user-level test coverage reports, click the User level tab.

3. In the Coverage history section, hover over the row with the relevant test coverage session, and then click
Show Report.

4. Optional:

Choose the data you want to include in the report:

To include only the rules that were updated after a specific date, in the Rules updated after field, click
the calendar icon, select a date and time, and then click Apply.
To include all the rules that are covered with tests, click Covered.
To include all the rules that are not covered with tests, click Uncovered.
To filter the rules, in the column header that you want to filter, click the filter icon, enter the filter
criteria, and then click Apply.
To open a single report that is included in a merged report, in the Merged reports section, click the
report name.
To open a rule that is included in the report, click the rule name.
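As a mental model, the uncovered rules that a report surfaces are simply the executable rules minus those already covered. A sketch with made-up rule names:

```python
# Illustrative sketch: the set of rules still needing tests is the executable
# rules minus those already covered (rule names here are invented).
executable = {"ValidateAddress", "CalcDiscount", "RouteCase", "NotifyOwner"}
covered = {"CalcDiscount", "RouteCase"}

uncovered = sorted(executable - covered)
print(uncovered)  # ['NotifyOwner', 'ValidateAddress']
```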
Setting up for test automation

Before you create Pega unit test cases and test suites, you must configure a test ruleset in which to store the
tests.

Creating a test ruleset to store test cases

Before you can create unit test cases or scenario tests, you must configure a test ruleset in which to store
the tests.

Creating a test ruleset to store test cases


Before you can create unit test cases or scenario tests, you must configure a test ruleset in which to store the
tests.

1. In the Dev Studio header, open your application rule form by clicking your application and selecting
Definition.

2. In the Application rulesets section, click Add ruleset.

3. Complete one of the following actions:

Enter a ruleset name and version of an existing ruleset.


Click the Open icon and create a new ruleset.

Note: The test ruleset must always be the last ruleset in your application stack.

4. Open the test ruleset and click the Category tab.

5. Select the Use this ruleset to store test cases check box.

6. Save the test and application ruleset forms.

Result: When you save test cases for rules, they are saved in this ruleset.

Related Content:
- PegaUnit testing

PegaUnit testing

Automated unit testing is a key stage of a continuous development and continuous integration model of
application development. With continuous and thorough testing, issues are identified and fixed before an
application is released, which improves application quality.

Automated unit testing involves creating unit test cases for tests that are run against individual rules, grouping
multiple test cases into test suites, running the tests, and viewing the results. When the tests run, the results are
compared to the expected results that are defined in assertions.

Understanding unit test cases

A test case identifies one or more testable conditions (assertions) that are used to determine whether a rule
returns an expected result. Reusable test cases support the continuous delivery model, providing a way to
test rules on a recurring basis to identify the effects of new or modified rules.

Grouping test cases into suites

You can group related unit test cases or test suites into a test suite so that you can run multiple test cases
and suites in a specified order. For example, you can run related test cases in a regression test suite when
changes are made to application functionality.

Setting up and cleaning the context for a test case or test suite

You can set up the environment and conditions required for running a test case, determine how to clean up
test data at the end of the test run, and set pages on which to automatically run rules.
Viewing unit test reports

View a graph with test pass rate trend data, a summary of Pega unit tests that were run, and an overview of
Pega unit test compliance for currently included applications on the Reports tab on the Unit Testing landing
page.

Viewing unit tests without rules

On the Application: Unit testing landing page you can display a list of unit tests that are not associated with
any rule and export this list to an XLS or a PDF file. You should deactivate these unit tests because they will
always fail.

Running test cases and suites with the Execute Tests service

You can use the Execute Tests service (REST API) to validate the quality of your code after every build is
created by running unit test cases that are configured for the application.
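From a build script, calling such a service typically reduces to an authenticated HTTP request. The sketch below only constructs the request; the endpoint path is a deliberate placeholder, so consult the Execute Tests service documentation for your Pega version for the actual URL, parameters, and response format:

```python
# Sketch of preparing a unit-test-execution REST call from a build script.
# The endpoint path below is a placeholder, not the real service URL; check
# your Pega system's Execute Tests service documentation before use.
import base64
from urllib.request import Request

def build_execute_tests_request(host, user, password):
    url = f"https://{host}/prweb/PRRestService/..."  # placeholder path
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return Request(url, headers={"Authorization": f"Basic {token}"})

req = build_execute_tests_request("pega.example.com", "ci_user", "secret")
print(req.get_header("Authorization").startswith("Basic "))  # True
```

In a real pipeline you would send this request after each build and fail the build when the response reports failed test cases.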

Understanding Pega Platform 7.2.2 and later behavior when switching between Pega unit testing and
Automated Unit Testing features

Beginning with Pega 7.2.2, you can use Pega unit testing to create test cases to validate the quality of your
application by comparing the expected test output with results that are returned by running rules.

Working with the deprecated AUT tool

In older versions of Pega Platform, automated unit tests were created using the Automated Unit Testing
(AUT) tool, which has since been replaced by PegaUnit testing. If you have automated unit tests that were
created using AUT and they haven't been changed to PegaUnit test cases, then you can switch back to AUT
to manage those tests.

Creating test cases with AUT


You can automate testing of rules by creating test cases for automated unit testing. Automated unit testing
validates application data by comparing expected output to the actual output that is returned by running rules.

To create test cases, you must have a rule set that can store test cases. For more information, see Creating a test
ruleset to store test cases.

Note: AUT is deprecated. Use PegaUnit testing instead to create automated test rules.

Automated unit testing information is available on the Testing Applications landing page on Pega Community and
in the automated unit testing topics in the help.

AUT test suite – Create or Save as form

Unit Test Suites – Completing the Create or Save As form

AUT test cases

Create test cases for automated unit testing to validate application data by comparing expected output to
the actual output that is returned by running rules.

Creating unit test suites with AUT

Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose
credentials are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test
cases together and make unit testing more efficient.

Related Content:
- Creating a rule
- Copying a rule or data instance
- Creating a specialized or circumstance rule

AUT test cases


Create test cases for automated unit testing to validate application data by comparing expected output to the
actual output that is returned by running rules.

You can create automated unit testing test cases for the following rule types:

Activity
Decision table
Decision tree
Flow
Service SOAP

Note: AUT is deprecated. Use PegaUnit testing instead to create automated test rules.

Automated unit testing information is available on the Testing Applications landing page on Pega Community and
in the automated unit testing topics in the help.

Related Content:
- Creating test cases with AUT
- Rules development

Creating unit test suites with AUT


Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose credentials
are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test cases together
and make unit testing more efficient.

The Unit Test Suite rule form consists of the Contents tab.

Note: AUT is deprecated. Use PegaUnit testing instead to create automated test rules.

Automated unit testing information is available on the Testing Applications landing page on Pega Community
and in the automated unit testing topics in the help.

You must have the AutomatedTesting privilege to work with unit test suite rules.

You can create a unit test suite that includes all the test cases for a specific rule type, or you can select individual
rules and specify the sequence in which to run them.

To run a unit test suite, use the Schedule gadget on the Automated Unit Testing landing page. For more
information, see Working with the deprecated AUT tool. From that gadget, you can choose to run the unit test
suite immediately or schedule the run for a future time.

For unit test suites that are scheduled to run at future times, an agent activity in the Pega-AutoTest agents rule
checks for unit test suite requests every five minutes and runs those that are due. When the agent activity
finishes running a unit test suite, it sends an email message with the results. By default, this completion email
message is sent to the person who scheduled the unit test suite run, and to any additional email addresses
specified at the time the run is scheduled. If no email addresses are specified at the time the run was scheduled,
no email message is sent.

Access
For more information, see Working with the deprecated AUT tool to work with the Unit Test Suites that are
available to you. You can:

Use the Automated Unit Tests gadget to see the Unit Test Suites and the test cases in each.
Use the Create Suite button on the Schedule gadget to create unit test suites.
Use the calendar button on the Schedule gadget to run unit test suites immediately and to schedule unit
test suite runs.

Category
Unit Test Suites are instances of the Rule-AutoTest-Suite class. They belong to the SysAdmin category.

Creating unit test suites with AUT

Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose
credentials are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test
cases together and make unit testing more efficient.

AUT test suite – Create or Save as form

Unit Test Suites – Completing the Create or Save As form


AUT test suite – Contents form

Use the Contents tab to define the unit test suite. Specify a user (Operator ID) that the Pega-AutoTest agents
are to use by default when running the suite, and select the test cases to include.

AUT test suite – Create or Save as form


Unit Test Suites – Completing the Create or Save As form

To create a unit test suite rule, use the Create Suite button on the Schedule gadget of the Automated Unit Testing
landing page. To open the Automated Unit Testing landing page, select Dev Studio > Application > Automated
Unit Testing.

You must have the AutomatedTesting privilege to be able to create unit test suites. For information about how to
enable this privilege, see About Automated Unit Testing.

A unit test suite rule has a single key part, the unit test suite name:

Name – Enter a short, descriptive name for the unit test suite.

Create a separate RuleSet to hold test cases and unit test suites, rather than using a RuleSet that will be moved
to a production system. For more information, consult the articles in the Testing Applications category of Pega
Community.

For general information about the Create and Save As forms, see:

Creating a rule
Copying a rule or data instance

Rule resolution

As with most rules, when you search for a Unit Test Suite, the system shows you only those rules that belong to a
RuleSet and version that you have access to.

Unit Test Suite rules cannot be qualified by circumstance or time.

Related Content:
- Creating unit test suites with AUT

Opening a unit test case


You can view a list of the unit test cases that have been created for your application and select the one that you
want to open.

1. Open the test case. Complete one of the following actions:

In the navigation pane of Dev Studio, click Configure > Application Quality > Automated Testing > Unit
Testing > Test Cases.

Click the Test Cases tab on the rule form.

2. In the Test case name column, click the test case that you want to open.

Related Content:
- Running a unit test case
- Creating unit test cases for rules
- PegaUnit testing
- Viewing test details and results on the Application: Unit testing landing page

Creating unit test cases for rules


For most rules, you can create a reusable test case by converting a unit test to a test case, configuring case
details, and then defining expected test results with assertions (test conditions). When the test case runs, the test
results are compared to the expected results defined for the rule’s assertions. If the test results do not meet the
defined assertions, then the test fails.

Before you begin: Unit test a rule and convert the test run into a test case. For more information, see Unit
testing individual rules.

1. Optional:

To modify the rule or class that is used for the test, in the upper-right corner of the Definition tab, click the
Gear icon, select the rule or class and then click Submit.

If you are testing a strategy rule, then the componentName and pzRandomSeed parameters are also
displayed. If you change either of these parameters, the test case does not return the expected results.

componentName – The name of the component (for example, Switch) that you are testing.

pzRandomSeed – Internal parameter, which is the random seed for the Split and Champion Challenger
shapes.

2. Optional:

To prevent the test from being run as part of a test suite or from a REST service, on the Definition tab, select
the Disable check box.

The test case will be run only when you click Actions > Run.

3. In the Expected results section, add assertions that define the expected results of the test. For more
information about creating assertions, see Assertions.

4. On the Setup & Cleanup tab, configure the actions to perform and the objects and clipboard pages to be
available before and after the test runs. You can also clean up the clipboard after the test is run by applying
additional data transforms or activities. For more information, see Setting up your test environment.

5. Click Save.

6. In the Details dialog box, enter a label that identifies the test case. The test case identifier is generated
based on the label and cannot be modified after it is saved.

Viewing test case results

After you run a unit test case, you can view the results of the test run.

The following information is displayed:

When the test was last run and the user who ran it.
The rule associated with the test.
The parameters sent.
Errors for failed tests.

Unexpected results for failed tests. This information also includes the run time of the test and the expected
run time of the test if the expected run time assertion fails.

1. Open the test case.

2. In the Run history column, click View for the test case that you want to view.

3. In the Test Runs Log dialog box, click the row for the instance of the test case that you want to view to
open the test results in a new tab in Dev Studio.

You can also view test case results in the Edit Test Case form immediately after you run the test, on the Test
Cases tab of the rule form, or, for data pages, on the Data Page testing landing page.

Related Content:
- PegaUnit testing
- Viewing test details and results on the Application: Unit testing landing page

Running a unit test case


Run a unit test case to validate rule functionality.

1. Open the test case.

2. Complete one of the following actions:

To run multiple test cases, select the test cases that you want to run, and then click Run selected.
To run a disabled test case or a single test case, click the test case to open it, and then click Actions > Run.

Related Content:
- PegaUnit testing
- Viewing test details and results on the Application: Unit testing landing page

Exporting a list of test cases


You can export a list of all the unit test cases that are in your application or configured on a rule form.

1. Export a list of all the Pega unit test cases that are in your application or for a rule type.

Complete one of the following actions:

To export a list of all the unit test cases that are in your application, in the header of Dev Studio, click
Configure > Application Quality > Automated Testing > Unit Testing > Test Cases.
To export a list of Pega unit test cases that are configured on a rule form, click Test Cases in the rule
form.
2. Click Export to Excel.

Related Content:
- PegaUnit testing
- Viewing test details and results on the Application: Unit testing landing page

Understanding unit test cases


A test case identifies one or more testable conditions (assertions) that are used to determine whether a rule
returns an expected result. Reusable test cases support the continuous delivery model, providing a way to test
rules on a recurring basis to identify the effects of new or modified rules.

You can run test cases whenever code changes are made that might affect existing functionality. For example, an
account executive wants to ensure that a 10% discount is applied to all preferred customers. You create a test
case that verifies that this discount is applied to all preferred customers in the database. The test case test fails if
there are any preferred customers for which the 10% discount is not applied. You then add a new preferred
customer to the database and run the test case to make sure that the customer is correctly configured to receive
the discount and that the discount for other preferred customers is not affected.
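Expressed outside Pega for illustration, the discount expectation above maps onto an ordinary unit test; in Pega you would capture the same expectation as assertions on a test case. The function and values here are invented:

```python
# Illustrative only: the preferred-customer discount check rewritten as a
# plain Python unit test. In Pega, the same expectation is an assertion on
# a test case rather than code.
import unittest

def discounted_price(price, is_preferred):
    """Apply the 10% preferred-customer discount, else full price."""
    return round(price * 0.9, 2) if is_preferred else price

class PreferredDiscountTest(unittest.TestCase):
    def test_preferred_customers_get_ten_percent_off(self):
        self.assertEqual(discounted_price(100.0, is_preferred=True), 90.0)

    def test_regular_customers_pay_full_price(self):
        self.assertEqual(discounted_price(100.0, is_preferred=False), 100.0)

unittest.main(argv=["discount_test"], exit=False)
```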

Additionally, you can group related unit test cases into a test suite so that you can run multiple test cases and
suites in a specified order. For example, you can run related test cases in a regression test suite when changes
are made to application functionality. For more information about test suites, see Grouping test cases into suites.

After you create test cases and test suites, you can run them in a CI/CD pipeline for your application by using
Deployment Manager or a third-party automation server such as Jenkins. For more information, see
Understanding continuous integration and delivery pipelines.

You can unit test the following types of rules:

Activities
Case types
Collections
Data pages
Data transforms
Decision tables
Decision trees
Declare expressions
Flows
Map values
Report definitions
Strategies
When

Typically, you unit test a rule, and then convert it to a test case. For flow and case type rules, you record the test
case.

Creating unit test cases for rules

For most rules, you can create a reusable test case by converting a unit test to a test case, configuring case
details, and then defining expected test results with assertions (test conditions). When the test case runs,
the test results are compared to the expected results defined for the rule’s assertions. If the test results do
not meet the defined assertions, then the test fails.

Creating unit test suites with AUT

Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose
credentials are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test
cases together and make unit testing more efficient.

Creating unit test cases for flows and case types

When you create a unit test case for a flow or case type, you run the flow or case type and enter data for
assignments and decisions. The system records the data that you enter in a data transform, which is created
after you save the test form. You can start recording at any time.

Defining expected test results with assertions


Use unit test cases to compare the expected output of a rule to the actual results returned by running the
rule. To define the expected output, you configure assertions (test conditions) on the test cases that the test,
when run, compares to the results returned by the rule.

Opening a unit test case

You can view a list of the unit test cases that have been created for your application and select the one that
you want to open.

Running a unit test case

Run a unit test case to validate rule functionality.

Viewing test case results

After you run a unit test case, you can view the results of the test run.

Exporting a list of test cases

You can export a list of all the unit test cases that are in your application or configured on a rule form.

Related Content
Article

Running test cases and suites with the Execute Tests service

Article

Getting started with DevOps for a Pega application

Creating unit test cases for flows and case types


When you create a unit test case for a flow or case type, you run the flow or case type and enter data for
assignments and decisions. The system records the data that you enter in a data transform, which is created after
you save the test form. You can start recording at any time.

Certain conditions apply to the data that you can record for flows and case types. For more information, see Data
that you can record for flows and case types.

Before you begin: Exclude properties in your work class from the test by modifying the
pyDataCapturePropertyIgnores data transform. For more information, see Excluding work class properties from form
and case type tests.

1. On the toolbar, click Actions > Record test case.

The system starts running the flow or case type.

2. Enter input as you step through the flow or case type.

3. Click Create test case in the lower-right of the screen to save the recording as a test case.

4. Click Save, enter a label that identifies the test case, and then click Submit.

5. Optional:

To modify the rule or class that is used for the test, in the upper-right corner of the Definition tab, click the
Gear icon, select the rule or class, and then click Submit.

6. Optional:

To prevent the test from being run as a part of a test suite or from a REST service, on the Definition tab,
select the Disable check box.

The test case will be run only when you manually click Actions > Run.

7. In the Expected results section, add assertions that define the expected results of the test. For more
information about creating assertions, see Defining expected test results with assertions.

8. On the Setup & Cleanup tab, configure the actions to perform and the objects and clipboard pages to be
available before and after the test runs. You can also clean up the clipboard after the test is run by applying
additional data transforms or activities. For more information, see Setting up and cleaning the context for a
test case or test suite.

9. Click Save.

10. Configure the unit test case. See Creating unit test cases for rules for more information.

Result:

After you save the test case, a data transform, which captures the input that you entered, is created and
associated with the test case. You can edit this data transform to modify the test case input. The Edit test case
form also displays the path of the flow or case type.

Data that you can record for flows and case types

When you create a unit test case for a flow or case type, the system records the data that you enter.

Excluding work class properties from form and case type tests

Exclude properties in your work class from the test by modifying the pyDataCapturePropertyIgnores data
transform.

Related Content
Article

Running a unit test case

Article

Viewing test case results

Article

Exporting a list of test cases

Data that you can record for flows and case types
When you create a unit test case for a flow or case type, the system records the data that you enter.

You can record the following type of information:

Starter flows. Non-starter flows can be tested from a starter flow that calls the non-starter flow.

Subprocesses that are configured as part of a flow.


The Assignment, Utility, and Approval shapes. For flows, assignments must be routed to the current operator
so that the recording of the flow continues and the system captures data as part of the test case.
Data that is captured on the pyWorkPage.

When a flow or case type runs, a pyWorkPage is created on the clipboard and captures information such as
data that you enter for assignments. It also captures information such as case ID, date and time that the
case was created, and the latest case status.

There are additional assertions that you can configure for flows and case types, including case status,
assigned to, and attachment exists. For these assertions, the system compares expected values to the value
that is recorded on the pyWorkPage.

If you refresh or cancel recording the flow or case type, data that is on the pyWorkPage might not be
accurate.

Local actions and flow actions that are configured as part of the flow or case type.
Child cases that are created and finish running before the flow or test case resumes running.

All properties, excluding properties that begin with either px or pz.
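As a rough sketch of the last rule above, assuming recorded data reduces to a simple name-value mapping, a filter that drops px and pz properties might look like this (all names are illustrative):

```python
# Illustrative sketch: properties whose names begin with px or pz are not
# captured in the test case's data transform.
def recordable(properties):
    return {name: value for name, value in properties.items()
            if not name.startswith(("px", "pz"))}

page = {"pyStatusWork": "New", "pxCreateDateTime": "20240101",
        "pzInsKey": "WORK-1", "CustomerName": "Ann"}
print(sorted(recordable(page)))  # ['CustomerName', 'pyStatusWork']
```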

Related Content
Article

Creating unit test cases for flows and case types


Excluding work class properties from form and case type tests
Exclude properties in your work class from the test by modifying the pyDataCapturePropertyIgnores data
transform.

Some properties, such as .pyID, are not processed when a Pega unit test case is run, because their values vary for
every test run. The pyDataCapturePropertyIgnores data transform lists the properties that Pega unit tests do not
process.

1. Open the data transform:

a. In the navigation pane, click App > Classes, and enter Work- in the Search field.

b. Expand Data Model > Data Transform and then click pyDataCapturePropertyIgnores.

2. Save the data transform to your Work- class and in your test ruleset.

3. For each property that you want to exclude in the data transform, do the following steps:

a. On the Definition tab, click the Add icon.

b. From the Action list, select Set.

c. In the Target field, enter the property that you want to exclude.

d. In the Source field, enter two double quotation marks, separated by a space: " " .

4. Save the data transform.
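Conceptually, the steps above set each excluded property to an empty value so that it no longer affects test comparisons. A minimal Python sketch of that effect, with hypothetical names:

```python
# Conceptual sketch of the data transform above: each excluded property is
# "Set" to an empty value, mirroring Source = "" in the data transform.
def apply_ignores(recorded, ignored_properties):
    result = dict(recorded)
    for prop in ignored_properties:
        result[prop] = ""  # the Set action blanks out the ignored property
    return result

recorded = {".pyID": "C-123", ".CustomerName": "Ann"}
print(apply_ignores(recorded, [".pyID"]))
# {'.pyID': '', '.CustomerName': 'Ann'}
```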

Defining expected test results with assertions


Use unit test cases to compare the expected output of a rule to the actual results returned by running the rule. To
define the expected output, you configure assertions (test conditions) on the test cases that the test, when run,
compares to the results returned by the rule.

When a test runs, it applies assertions in the order that you define them on the Definition tab of the test case. All
assertions, except for run time assertions, must pass for the test to be successful.

For example, an account executive wants to ensure that a 10% discount is applied to all preferred customers. You
can create a test case that verifies that this discount is applied to all preferred customers in the database. If the
test does not pass, the results indicate where the 10% discount is not applied.

Note: On decision trees and decision tables, you cannot configure properties from a read-only data page or a data
page that is a declarative target.

Configuring activity status assertions

You can verify that an activity returns the correct status when it runs by configuring an activity status
assertion. You can also assert if an activity has an error and, if it does, what the message is so that you can
validate that the message is correct.

Configuring assigned to assertions

For flows and case types, you can use the assigned to assertion to verify that an assignment is routed to the
appropriate work queue or operator.

Configuring attachment exists assertions

For flows and case types, you can verify that the flow or case type has an attachment of type file or note
(attached using the Attach Content shape) or email (attached using the Send Email shape) attached.

Configuring case instance count assertions

For flows and case types, you can verify the number of cases that were created when the case type or flow
was run.

Configuring case status assertions

You can configure a case status assertion on a flow or case type to verify the status of the case.
Configuring decision result assertions

After you create a unit test case for a decision table or decision tree, the system generates a decision result
assertion. This assertion displays the input values for testing the rule, and the result that is generated by the
rule.

Configuring expected run-time assertions

You can create an assertion for the expected run time of the rule. The expected run-time assertion is less
than or equal to an amount of time that you specify, in seconds.

Configuring list assertions

You can create list assertions for page lists on a rule to determine whether the expected result appears anywhere
in the list of results returned by the rule. Even if the order of results changes, the test continues to work.

Configuring page assertions

Some rules, such as activities and data transforms, can create or remove pages from the system. You can
create page assertions to determine whether or not a page exists after a unit test case runs. You can also
assert if a property has an error and, if it does, what the message is so that you can validate that the
message is correct.

Configuring property assertions

You can configure property assertions to validate that the actual values of properties returned by a rule are
the expected values. You can also assert if a property has an error and, if it does, what the message is so
that you can validate that the message is correct.

Configuring result count assertions

You can configure assertions to compare the number of items returned in a page list, page group, value list,
or value group on the rule to the result that you expect to see on the clipboard.

Configuring activity status assertions


You can verify that an activity returns the correct status when it runs by configuring an activity status assertion.
You can also assert if an activity has an error and, if it does, what the message is so that you can validate that the
message is correct.

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. In the Assertion type list, click Activity status.

3. In the Value list, click the status that you expect the activity to return when the test runs.

4. To validate the message that displays for the activity, select Validate message, select a Comparator, and
then enter the message that you want to validate in the Value box.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and then click OK.

6. Click Save.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test suites


Configuring assigned to assertions
For flows and case types, you can use the assigned to assertion to verify that an assignment is routed to the
appropriate work queue or operator.

If you have multiple assignments on a flow or test case, you can route each assignment to an operator ID or work
queue. Clipboard pages are created for each assignment under the pyWorkPage page and capture the
assignment details, including the operator ID or work queue to which the assignment was routed. The assigned to
assertion compares the operator ID or work queue to the last assignment that is configured on the flow or case
type, which depends on where you stop recording the flow or case type.

For example, your flow has a Customer Details assignment, which is routed to the operator ID johnsmith . It also has a
subprocess with an Account Information assignment, which is routed to the account_processing work queue.

If you record only the Customer Details assignment, the assigned to value is johnsmith . If you also record the Account
Information assignment, the assigned to value is account_processing .

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Assigned to.

3. From the Assigned to list, select Operator or Work queue.

4. Select a comparator from the Comparator list.

5. In the Value field, press the Down Arrow key and select the operator ID or work queue.

6. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

7. Click Save.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules

Configuring attachment exists assertions


For flows and case types, you can verify that the flow or case type has an attachment of type file or note
(attached using the Attach Content shape) or email (attached using the Send Email shape) attached.

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Attachment exists.

3. From the Attachment type list, select one of the following options, and then provide the value for each field:

File: Select to specify that the attachment type is file, and then enter the following values:

Description: Enter the text that was provided as the description in the Attach Content shape.

Name: Enter the name of the file that was provided in the Attach Content shape.

Note: Select to specify that the attachment type is note, and then enter text that was entered as the
note description in the Attach Content shape.
Email: Select to specify that the attachment type is an email, and then enter the email subject that was
provided in the Send Email shape.

4. Repeat steps 1 through 3 to add additional attachment assertions.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

6. Click Save.

Attachment exists assertions

On case types and flows, you can test whether an attachment of type file or note, which were attached in the
Attach Content shape, or email, which was attached using the Send Email shape, exists.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules

Article

Attaching content to a case

Article

Sending automatic emails from cases

Attachment exists assertions


On case types and flows, you can test whether an attachment of type file or note, which were attached in the
Attach Content shape, or email, which was attached using the Send Email shape, exists.

If you have multiple attachments on a flow or test case, the assertion runs against every attachment that exists,
iterating over all the attachments on the flow or case type. If the system finds an attachment that matches the
assertion value, the assertion passes. If no matching attachment exists, the assertion fails.

The system compares the expected output on attachments that are recorded on the pyWorkPage page. For
example, if a case type has a parent case that spins off a child case, and you record just the child case, the
pyWorkPage page records attachments for only the child case and not the parent case, which is recorded on the
pyWorkCover page.

In addition, if you create a test case from a parent case that generates a child case that is returned to the parent
case after the child case runs, the pyWorkPage page records the attachments only on the parent case.

For example, your case has an Attach Content shape that attaches a Process immediately note in the first stage of the
case type. In the third stage, your case has a Send Email shape that attaches an email with the subject Request
approved. The assertion passes if you searched for either the Process immediately note or Request approved email subject.
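Assuming each recorded attachment reduces to a type and a value, the matching behavior described above can be sketched as (names are illustrative):

```python
# Illustrative sketch: the assertion iterates over every attachment recorded
# on pyWorkPage and passes if any one matches the expected type and value.
def attachment_exists(attachments, att_type, expected_value):
    return any(a["type"] == att_type and a["value"] == expected_value
               for a in attachments)

attachments = [
    {"type": "note",  "value": "Process immediately"},   # Attach Content shape
    {"type": "email", "value": "Request approved"},       # Send Email shape
]
print(attachment_exists(attachments, "note", "Process immediately"))  # True
print(attachment_exists(attachments, "file", "resume.pdf"))           # False
```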

Related Content
Article

Configuring attachment exists assertions

Article

Defining expected test results with assertions

Article
Attaching content to a case

Article

Sending automatic emails from cases

Configuring case instance count assertions


For flows and case types, you can verify the number of cases that were created when the case type or flow was
run.

For example, a Job Application case type runs a child case that processes background checks. If you record the
entire Job Application case type and the child case type, the number of case instances for the Job Application case type
is one, and the number of case instances of Background Check child case type is one.

If you do not run the Background Check child case type when you create the test case, the number of
Background Check case instances is zero.
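The counting behavior in this example can be sketched as a simple tally, assuming each recorded case reduces to its case type name (illustrative only):

```python
# Conceptual sketch of the case instance count assertion for the example above.
def case_instance_count(recorded_cases, case_type):
    return sum(1 for c in recorded_cases if c == case_type)

recorded_cases = ["Job Application", "Background Check"]
print(case_instance_count(recorded_cases, "Background Check"))  # 1
print(case_instance_count(["Job Application"], "Background Check"))  # 0
```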

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Case instance count.

3. In the Of case type field, do one of the following:

To select a case type from your work pool, press the Down Arrow key and select the case type.
Enter a case type that is not part of your work pool.

4. Select a comparator from the Comparator list.

5. In the Value field, enter the number of cases to compare against the output.

6. Optional:

Click Add to add another case instance count assertion and repeat steps 4 through 6.

7. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

8. Click Save.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules

Configuring case status assertions


You can configure a case status assertion on a flow or case type to verify the status of the case.

If you have multiple assignments on a flow or case type, you can configure a case status on each assignment. The
pyWorkPage on the clipboard captures the latest case status, which depends on where you stop recording the
flow or case type.

For example, your flow has a Customer Details assignment, with the case status set as New. It also has a subflow with
an Account Information assignment, with the case status set as Pending.

If you record only the Customer Details assignment, the case status, which is captured in the .pyStatusWork property
on the pyWorkPage, is set to New. If you also record the Account Information assignment, the case status is set to
Pending.
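Because pyWorkPage keeps only the latest case status, the recorded value depends on where recording stops. A minimal sketch, assuming each assignment reduces to a name and the status it sets (all names are illustrative):

```python
# Sketch: each recorded assignment overwrites .pyStatusWork on pyWorkPage,
# so only the last recorded status survives.
def record(assignments):
    work_page = {}
    for name, status in assignments:
        work_page["pyStatusWork"] = status
    return work_page

flow = [("Customer Details", "New"), ("Account Information", "Pending")]
print(record(flow[:1])["pyStatusWork"])  # New
print(record(flow)["pyStatusWork"])      # Pending
```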

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Case status.

3. Select the comparator from the Comparator list.

4. In the Value field, press the Down Arrow key and select the case status.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

6. Click Save.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules

Configuring decision result assertions


After you create a unit test case for a decision table or decision tree, the system generates a decision result
assertion. This assertion displays the input values for testing the rule, and the result that is generated by the rule.

You can manually update the input values, add properties, remove properties, and modify the default decision
result if the test is modified.

Note: This assertion is supported only on when rules, decision tables, and decision trees.
Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. Click the Definition tab.

2. To add multiple input values and results to the assertion, or add other assertions, perform one of the
following actions:

Note: You can add multiple input values and results to this assertion but cannot add other assertion types to
this test case. You can add other assertion types to this test case only if you have a single input and result
entry for the assertion.
To add multiple input values and results to the assertion:
1. Select the Multiple input combinations check box.
2. Enter values for the input and result that you expect the assertion to generate when the test stops
running.
3. Click Add and enter values for each additional input and result that you want to test.
To use one input value and result, enter the values that you expect the assertion to generate when the
test stops running. You can then add additional assertions to the test case.

3. To update the assertion to reflect properties that were added to the rule, click Refresh.

Note: Refresh updates the assertion with properties that are added to the rule. If properties have been
removed from the rule, then you need to manually remove the properties from the assertion.

4. Add or remove properties by clicking Manage properties and then entering the changes. You need to enter
data for properties that were added to the rule.

Result: The properties are reflected as unexpected results in test case results.
5. In the rule form, click Save.

Result:

The test case runs the decision tree or decision table with each input combination and compares the result
with the expected decision result for that combination.

Other decision result combinations or other configured assertions then run. If the expected result of any of
the input combinations in the decision result assertion does not match the result that the rule returns, the
assertion fails.
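The comparison described above can be sketched in Python, assuming the decision logic reduces to a function of its inputs (discount_table and the input names are hypothetical):

```python
# Illustrative sketch: each recorded input combination is fed to the decision
# rule, and its result is compared with the expected decision result.
def discount_table(customer_type):  # stand-in for a decision table
    return "10%" if customer_type == "Preferred" else "0%"

combinations = [({"customer_type": "Preferred"}, "10%"),
                ({"customer_type": "Standard"}, "0%")]

assertion_passes = all(discount_table(**inputs) == expected
                       for inputs, expected in combinations)
print(assertion_passes)  # True
```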

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules

Configuring expected run-time assertions


You can create an assertion for the expected run time of the rule. The expected run-time assertion is less than or
equal to an amount of time that you specify, in seconds.

An actual run time that is significantly longer than the expected run time can indicate an issue. For example, if
you are using a report definition to obtain initial values for a data page from a database, there might be a
connectivity issue between the application and the database.

By default, after you create a Pega unit test case for a data page, the system generates the expected run-time
assertion. The default value of the expected run time is the time that is taken by the rule to fetch results when
the test was first run. The system compares that time against future run-time tests.

You can change the default value and configure expected run time assertions for all rule types.
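The comparison is a simple less-than-or-equal check on elapsed seconds. A minimal sketch, assuming the rule under test reduces to a callable (illustrative, not Pega's timing mechanism):

```python
# Sketch of the expected run-time assertion: the assertion passes when the
# measured run time is at most the configured value in seconds.
import time

def run_with_timing(rule):
    start = time.perf_counter()
    rule()
    return time.perf_counter() - start

def run_time_assertion(elapsed_seconds, expected_seconds):
    return elapsed_seconds <= expected_seconds

elapsed = run_with_timing(lambda: sum(range(10_000)))
print(run_time_assertion(elapsed, 5.0))
```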

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select Expected run time.

3. In the Value field, enter a value, in seconds, that specifies the amount of time within which the execution of
the rule should be completed.

4. Optional:

If you want the test case to fail when the rule is not run within the specified time, select the Fail the test case
when this validation fails check box.

5. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

6. Click Save.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases


Article

Creating unit test suites

Configuring list assertions


You can create list assertions for page lists on a rule to determine whether the expected result appears anywhere in the
list of results returned by the rule. Even if the order of results changes, the test continues to work.

For example, you can verify if a product is present in a product list in a data page, regardless of where the
product appears in the list results. You can also verify if there is at least one employee with the name John in the
results of the Employee list data page.

You can also configure assertions for page lists to apply assertions to all the results that are returned by a rule so
that you do not have to manually create assertions for each result in the list.

For example, you can verify that a department name is Sales and that a department ID starts with SL for each
department in the list of results in the Sales department data page. You can also verify if a discount of 10% is
applied to each customer in the list of results of the VIP customers data page.

You can also configure an ordered list assertion, which applies the assertion to all the results that are returned by
the rule, so that you do not have to manually create an assertion for each result in the list.

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. From the Assertion type list, select List.

3. Add properties to the assertion.

a. Click Add properties.

b. If you are adding properties for flows, case types, decision trees, decision tables, or data pages:

1. In the of object field, enter the path of the object with which the properties are compared during
the assertion. Proceed to step d.

c. If you are adding properties for data transforms or activities, complete the following tasks:

1. From the Thread list in the Actual results section, select the thread that contains the page whose
properties or pages you want to add. In the Page field, enter the page whose properties or pages
you want to add.
2. In the of object field, enter the path of the object with which the properties are compared during
the assertion.
3. Proceed to step d.

d. Select the properties or pages that you want to add. You can search for a property or its value by
entering text in the search bar and pressing Enter.

If you select a page, all embedded pages and properties from the page are added. Added properties are
displayed in the right pane.

When you add multiple properties, the assertion passes if the expected output and results match for all
properties.

4. Optional:

In the Filter field, enter a property and value on which to filter results or open the Expression Builder by
clicking the Gear icon to provide an expression that is used to filter results. The list assertion applies only to
the page list entries that are specified for this filter value.

5. From the Comparator list, select the comparator that you want to use to compare the property with a
specified value.

Select the is in comparator to compare a text, integer, or decimal property to multiple values. The assertion
passes if the property matches any of the values that you specify.

6. In the Value field, either enter a value with which to compare the property or open the Expression Builder by
clicking the Gear icon to enter an expression that is used to provide the value.

Note: The Gear icon is not displayed until after you have saved the rule form.

7. To add a comment, click the Add comment icon, enter a comment, and click OK.

8. Click Done.

9. Click Save.

Result:

When you run the test case, the system searches for the specified properties in the page list. One of the following
occurs:

If you selected In ANY instance, the assertion passes if all the properties in the set match the expected
values in at least one entry in the page list. If no entry matches all the properties, the assertion does
not pass.

If you selected In ALL instances, the assertion passes if all the properties in the set match the expected
values in every entry in the page list. If any of the properties do not match any entry in the page list, the
assertion does not pass.
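The two modes above map naturally onto any/all logic. A conceptual sketch, assuming each page-list entry reduces to a mapping of property values (all names are illustrative):

```python
# Sketch of the two list-assertion modes: "In ANY instance" needs one matching
# entry; "In ALL instances" needs every entry to match.
def in_any_instance(page_list, expected):
    return any(all(entry.get(k) == v for k, v in expected.items())
               for entry in page_list)

def in_all_instances(page_list, expected):
    return all(all(entry.get(k) == v for k, v in expected.items())
               for entry in page_list)

departments = [{"Name": "Sales", "ID": "SL1"}, {"Name": "Sales", "ID": "SL2"}]
print(in_any_instance(departments, {"ID": "SL1"}))       # True
print(in_all_instances(departments, {"Name": "Sales"}))  # True
print(in_all_instances(departments, {"ID": "SL1"}))      # False
```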

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Building expressions with the Expression Builder

Article

Creating unit test cases for rules

Configuring page assertions


Some rules, such as activities and data transforms, can create or remove pages from the system. You can create
page assertions to determine whether or not a page exists after a unit test case runs. You can also assert if a
property has an error and, if it does, what the message is so that you can validate that the message is correct.

You can configure page assertions for embedded pages, data pages, data pages with parameters, embedded
pages within data pages that either have or do not have parameters, and top-level pages.

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. In the Assertion type list, select Page.

3. In the Page field, enter the name of the page.

4. In the Comparator list, select the comparator that you want to use to compare the property with a specified
value:

To ensure that the page is created after the unit test runs, select exists. The assertion passes if the
system finds the page.
To ensure that the page is removed after the unit test runs, select does not exist. The assertion passes
if the system does not find the page.
To ensure that the page has an error after the unit test runs, select has errors. The assertion passes if
the system finds errors on the page.
To ensure that the page is free of errors after the unit test runs, select has no errors. The assertion
passes if the system finds no errors on the page.
To ensure that the page has a specific error message after the unit test runs, select has error with
message and then enter the message in the Value box or click the Gear icon to build an expression. The
assertion passes if the page contains the complete error message.
To ensure that the page has a portion of an error message after the unit test runs, select has error
message that contains and then enter the message in the Value box or click the Gear icon to build an
expression. The assertion passes if the page contains the words or phrases in the error message.
5. Optional:

To add another page to the assertion, click Add pages, and then perform steps 3 through 4.

6. Optional:

To add a comment, click the Add comment icon, enter a comment, and then click OK.

7. Click Save.

For example: An activity runs every week to check the last login time of all operators and deletes any operator
record (page) from the system if the last login was six months ago. When you test this activity, you can:

1. Set up the clipboard to load an operator page that has the last login time as six months ago.
2. Create a page assertion that ensures that the page no longer exists after the activity runs.

Page assertions

You can configure page assertions to determine if a page exists on the clipboard or if a page has errors.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules

Page assertions
You can configure page assertions to determine if a page exists on the clipboard or if a page has errors.

You can configure page assertions on the following types of pages:

Embedded pages
Data pages
Data pages with parameters
Embedded pages within data pages that either have or do not have parameters
Top-level pages

For example, an activity runs every week to check the last login time of all operators and deletes any operator
record (page) from the system if the last login was six months ago. When you test this activity:

Set up the clipboard to load an operator page that has the last login time as six months ago.
Create a page assertion that ensures that the page no longer exists after the activity runs.

Related Content
Article

Configuring page assertions

Article

Defining expected test results with assertions

Configuring property assertions


You can configure property assertions to validate that the actual values of properties returned by a rule are the
expected values. You can also assert if a property has an error and, if it does, what the message is so that you
can validate that the message is correct.

For example, you can create an assertion that verifies that a customer ID, which appears only once on a data
page, is equal to 834234.

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. In the Assertion type list, select Property, and then click Add properties.

3. Select the properties to add by doing one of the following:

For data transforms, activities, flows, or case types, in the Actual results section, select the page
containing the properties to add.
For other rules, select the property or page that you want to add.
Result: Properties are displayed in the right pane. If you selected a page, then all embedded pages and
properties from the page are added.

4. To add another property or page, click Add row, and then repeat step 3.

Result:

When you add multiple properties, the assertion passes if the expected output and results match for all
properties.

5. In the Comparator list, select the comparator that you want to use to compare the property with a specified
value. Do one of the following:

Select the is in comparator to compare a text, integer, or decimal property to multiple values. The
assertion passes if the property matches any of the values that you specify.

Select the is not in comparator. The assertion passes if the property does not match any of the values that
you specify.

Select the has error with message comparator to verify that the property has the exact message that you
specify in the Value box.
Select the has error message that contains comparator to verify that the property has a portion of the message
that you specify in the Value box.

6. In the Value field, enter a value with which to compare the property. For the is in and is not in
comparators, separate multiple values by using the pipe (|) character. For text properties, enclose the
value in double quotation marks, for example, "23|15|88" .

For example: If you want the assertion to pass when the Age property matches either 5 or 7, configure the
assertion as .Age is in 5|7 .

7. Optional:

To add a comment, click the Add comment icon, enter a comment, and then click OK.

8. Click Save.
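The is in matching described in steps 5 and 6 can be illustrated outside of Pega. The following is a hypothetical Python sketch of the pipe-delimited comparison semantics, not Pega Platform code; the function name is invented for illustration.

```python
def is_in(actual, expected_values):
    """Mimic the 'is in' comparator: pass if the actual property value
    matches any of the pipe-delimited expected values."""
    return str(actual) in expected_values.split("|")

# .Age is in 5|7 passes when Age is 5 or 7
print(is_in(5, "5|7"))          # True
print(is_in(6, "5|7"))          # False
# Text properties use a quoted, pipe-delimited value such as "23|15|88"
print(is_in("15", "23|15|88"))  # True
```

The is not in comparator is simply the negation of this check.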

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules


Article

Building expressions with the Expression Builder

Configuring result count assertions


You can configure assertions to compare the number of items returned in a page list, page group, value list, or
value group on the rule to the result that you expect to see on the clipboard.

For example, you can create an assertion that verifies that the number of employee results returned is
greater than, less than, or equal to a specified value.

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. On the bottom of the Definition tab, click Add expected result.

2. For activities and data transforms, complete the following tasks:

a. In the Page field, enter the page that contains the property for which you want to test the result count.

b. In the Page class field, select the class to which the page belongs.

3. In the of object field, enter the path of the object whose results are counted and compared during the
assertion.

For data pages, this value is usually .pxResults.


For data transforms and activities, you can use any page list on a page.
4. Optional:

In the Filter field, enter a property and value on which to filter results or open the Expression Builder by
clicking the Gear icon to provide an expression that is used to filter results. The list assertion applies only to
the page list entries that are specified for this filter value.

5. Select the appropriate comparator from the Comparator list.

6. In the Value field, enter the value that you want to compare with the object.

7. Optional:

To add a comment, click the Add comment icon, enter a comment, and click OK.

8. Click Save.

Result:

When you run the test case, the assertion fails if the expected value does not match the result count returned
from the page list, page group, value list, or value group.

Related Content
Article

Defining expected test results with assertions

Article

Converting unit tests to test cases

Article

Creating unit test cases for rules

Grouping test cases into suites


You can group related unit test cases or test suites into a test suite so that you can run multiple test cases and
suites in a specified order. For example, you can run related test cases in a regression test suite when changes
are made to application functionality.

Creating unit test suites


To create a unit test suite, add test cases and test suites to the suite and then modify the order in which you
want them to run. You can also modify the context in which to save the test suite, such as the
development branch or the ruleset.

Creating unit test suites with AUT

Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose
credentials are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test
cases together and make unit testing more efficient.

Opening a unit test suite

You can view a list of the unit test suites that have been created for your application and select the one that
you want to open.

Running a unit test suite

You can run a unit test suite to validate rule functionality by comparing the expected value to the output
produced by running the rule. Test cases are run in the order in which they appear in the suite.

Viewing unit test suite run results

After you run a unit test suite, you can view the test run results. For example, you can view the expected and
actual output for assertions that did not pass.

Adding cases to a test suite

You can add test cases to a unit test suite. When you run a test suite, the test cases are run in the order in
which they appear in the suite.


Related Content
Article

Opening a unit test suite

Article

Running a unit test case

Creating unit test suites


To create a unit test suite, add test cases and test suites to the suite and then modify the order in which you want
them to run. You can also modify the context in which to save the test suite, such as the development
branch or the ruleset.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.

2. Click Create new suite.

3. Optional:

In Description, enter information that you want to include with the test suite. For example, enter information
about when to run the test suite.

4. In the Category list, select the type of test suite that you are creating:

To informally test a feature, select Ad-hoc.


To verify critical application functionality, select Smoke.
To confirm that changes have not adversely affected other application functionality, select Regression.
5. Optional:

In the Expected max runtime field, enter the maximum time, in seconds, within which the test suite run
should complete. To fail the test suite when the expected run time is exceeded, select the Fail the test
suite when runtime validation fails check box.
6. Add unit test cases or other test suites to the test suite:

To add test cases to the test suite, in the Test cases section, click Add, select the test cases to include
in the suite, and then click Add.
To add test suites to the test suite, in the Test suites section, click Add, select the test suites to
include in the suite, and then click Add.

Note: To filter information by multiple criteria, click the Advanced filter icon.
7. Optional:

To change the order in which the test cases or test suites run, drag them to a different position in the
sequence.

8. Save the test suite:

a. Click Save and then enter a Label that describes the purpose of the test suite.

Note: Pega Platform automatically generates the Identifier based on the label that you provide. The
identifier identifies the test suite in the system and must be unique to the system. To change the
identifier, click Edit.
b. Optional:

In the Context section, change details about the environment in which the test suite will run. You can:

Change the development branch in which to save the test suite.
Select a different application for which to run the test suite.
Select a different ruleset in which to save the test suite.

9. Click Save.

10. Complete any of the following actions:

Remove test cases or suites from the test suite by selecting them and clicking Remove.
Apply one or more data pages, data transforms, or activities to set up the clipboard before running a
test suite in the Setup section of the Setup & Cleanup tab. You can also create objects, load work and
data objects, and add user pages from the clipboard which will be available on the clipboard when
running the test suite. For more information, see Setting up your test environment.
Apply additional data transforms or activities to clean up the clipboard in the Cleanup section of the
Setup & Cleanup tab. You can also prevent the test data from being removed after the test suite runs.
For more information, see Cleaning up your test environment.
Run a configured test suite by clicking Actions > Run.

Note: If you made changes to the suite, such as adding or removing test cases or test suites, save
those changes before running the suite. Otherwise, the last saved version of the suite will run.
View more details about the latest result by clicking View details in the banner. Viewing details is
possible after a test suite runs. For more information, see Viewing unit test suite run results.
To view historical information about previous test runs, such as test date, the run time, expected run
time, and whether test passed or failed, click View previous runs.

11. Click Save. If you are saving the form for the first time, you can modify the Identifier. After you save the rule
form, you cannot modify this field.

Related Content
Article

Converting unit tests to test cases

Opening a unit test suite


You can view a list of the unit test suites that have been created for your application and select the one that you
want to open.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.

2. In the Test suite name column, click the test suite that you want to open.
Related Content
Article

Running a unit test case

Article

Viewing test details and results on the Application: Unit testing landing page

Running a unit test suite


You can run a unit test suite to validate rule functionality by comparing the expected value to the output
produced by running the rule. Test cases are run in the order in which they appear in the suite.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.

2. Select the check box for each test suite that you want to run.

3. Click Run selected. The test cases run, and the Result column is updated with the result, which you can click
to open test results.

You can stop the test run by clicking Stop test execution.

Note: The test suite continues to run even if you close or log out of the Pega Platform, close the Automated
Testing landing page, or switch to another Dev Studio tab.

Related Content
Article

Viewing test details and results on the Application: Unit testing landing page

Viewing unit test suite run results


After you run a unit test suite, you can view the test run results. For example, you can view the expected and
actual output for assertions that did not pass.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Test Suites.

2. In the Run history column, click View for the test suite that you want to view.

Note: To quickly view results of the most recent run, click the result in the Result column.

3. In the Test suite runs log dialog box, click the row for the instance of the test suite run that you want to
view to open the results of that run in a new tab in Dev Studio.

Note: You can also view test results after you run the test in the Edit Test Suite rule form.

Related Content
Article

Running a unit test suite

Article

Viewing test details and results on the Application: Unit testing landing page

Adding cases to a test suite


You can add test cases to a unit test suite. When you run a test suite, the test cases are run in the order in which
they appear in the suite.

1. Optional:
Open the Pega unit test suite, if it is not already open.

2. Click Add test cases.

3. In the Add test cases dialog box, select the test cases that you want to add to the test suite.

Note: You can click the Advanced filter icon to filter information by multiple criteria.

4. Click Add.

5. Save the rule form.

Related Content
Article

Grouping test cases into suites


Setting up and cleaning the context for a test case or test suite
You can set up the environment and conditions required for running a test case, determine how to clean up test
data at the end of the test run, and set pages on which to automatically run rules.

You can set clipboard pages, apply data transforms, load data pages, execute activities, and create and load
objects. All referenced data pages, data objects, and user pages that were created during a test run are
automatically removed at the end of each run. To further clean up the clipboard, add steps that apply additional
data transforms and execute activities. Set up or clean up the clipboard if you are running a test whose
output or execution depends on other data pages or information.

For example, your application contains a data page D_AccountTransactionsList. This data page is sourced by a
report definition or activity that loads the transactions of the logged-in customer, based on the account type for
which the customer views transactions.

The customer number and account type that the customer selects are dynamic properties that are stored on the
work page of the case. The report definition or activity retrieves these properties as parameters from the work
page and filters the results as it obtains the results for the data page.

When you create a test case for D_AccountTransactionsList, ensure that one of the following conditions is met:
The parameter properties are on the work page of the clipboard before running the data page test.
Your data page has an activity or report definition that refers to the properties of another data page that is
on the clipboard to filter the results.

The system always runs data transforms, activities, and strategies on the RunRecordPrimaryPage page,
regardless of which page you chose when you unit tested the rule in the Run dialog box. The system also runs
flows and case types on the pyWorkPage. To update the page with any information required to set up test data,
click the Setup tab.

Setting up your test environment

Configure which actions you want to run and which objects and pages you want to view on the clipboard
before, during, and after the test is run.

Cleaning up your test environment

After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default. You can apply additional data transforms or activities
to remove other pages or information on the clipboard before you run more test cases or suites.

Related Content
Article

Converting unit tests to test cases

Article

Data pages

Article

Creating unit test suites with AUT

Article

Data Transforms

Article

Unit testing a data transform

Setting up your test environment


Configure which actions you want to run and which objects and pages you want to view on the clipboard before,
during, and after the test is run.

To set up the environment and conditions that are required before running a test case, copy or create clipboard
pages, apply data transforms, load data pages, execute activities, and create and load objects. Then, define the
connections to data pages or third-party systems to simulate during the test. Finally, define the environment and
conditions that are required after the test case runs by applying data transforms, loading data pages,
executing activities, and creating and loading objects.

Before you begin: Open the test case or test suite that you want to set up. For more information, see Opening a
unit test case or Creating unit test suites.

1. Click the Setup & Cleanup tab.

2. Optional:

To make specific conditions available during test execution, expand the Before rule execution section, and
then configure the conditions:

a. Copy or create clipboard pages.

For more information, see Copying and creating clipboard pages in setup.

b. Add additional clipboard data.

For more information, see Adding additional clipboard data.


3. Optional:

To define simulation settings for the test, expand the Simulation section, and then configure the simulations.

For more information, see Simulating data pages and third-party connections.

4. Optional:

To make specific conditions available after test execution, expand the After rule execution section, and then
add additional clipboard data.

For more information, see Adding additional clipboard data.

Note: You can set up actions after rule execution for test cases only.

5. To run the rule on a specific page and avoid copying the entire page to RunRecordPrimaryPage, in the
Advanced section, enter the page under which you want to run the rule.

6. Click Save.

Copying and creating clipboard pages in setup

When setting up your test environment, you can choose to copy or create clipboard pages before the test runs.

Adding additional clipboard data

When setting up your test environment, you can add additional clipboard data before or after the test runs.
You can apply data transforms, load data pages, execute activities, load objects, create data objects,
and create work objects.

Simulating data pages and third-party connections

When setting up your test environment, you can simulate data pages and third-party connections. Such
simulations let you run your tests without depending on the availability of third-party servers.

Related Content
Article

Clipboard tool

Article

Converting unit tests to test cases

Article

Data pages

Article

Creating unit test suites with AUT

Article

Data Transforms

Copying and creating clipboard pages in setup


When setting up your test environment, you can choose to copy or create clipboard pages before the test runs.

1. On the Setup & Cleanup tab for the test case or test suite for which you want to set up the context, expand
the Before rule execution section, and then expand the Setup data section.

2. Click Add data.

3. In the Description box, enter a description of the clipboard page you want to copy or create.

4. Optional:

Copy a clipboard page:


a. In the Type section, select Copy page.

b. To copy a clipboard page from a different thread, click the current thread name, and then click the
desired thread name.

c. Select the check box next to the page that you want to be available in the clipboard during test
execution, and then click Next.

d. Edit the clipboard page, and then click OK. You can rename the parent page, modify the values of
existing properties, or add new properties and their values to the parent page and child pages.

5. Optional:

Create a clipboard page:

a. In the Type section, select Create page.

b. In the Page Name field, enter a name of the page you want to create.

c. In the Class field, enter or select the class of the page you want to create.

d. Click Next.

e. Edit the clipboard page, and then click OK. You can rename the parent page, modify the values of
existing properties, or add new properties and their values to the parent page and child pages.

6. Save the test case or test suite.

Adding additional clipboard data


When setting up your test environment, you can add additional clipboard data before or after the test runs. You
can apply data transforms, load data pages, execute activities, load objects, create data objects, and
create work objects.

1. On the Setup & Cleanup tab for the test case or test suite for which you want to set up the context, choose
whether to add the data before or after the rule runs.

To add the data before the test rule runs, expand the Before rule execution section, and then expand
the Additional clipboard data subsection.
To add the data after the test rule runs, expand the After rule execution section.

2. For each action you want to perform to the clipboard data, click Add step and then select the action.

To apply a data transform, select Apply data transform, and then, in the Name field, enter or select the
name of the data transform to apply.
To load a data page, select Load data page, and then, in the Name field, enter or select the name of the
data page to load.
To execute an activity, select Execute activity, and then, in the Name field, enter or select the name of
the activity to execute.
To load an object, select Load object, enter or select the class of the object in the Of class field, and
then enter a name for the page in the Load on page field.
To create a data object, select Create data object, enter or select the class of the object in the Of class
field, and then enter a name for the page in the Load on page field.
To create a work object, select Create work object, and then, in the Of class field, enter or select the
class of the work object.
3. Optional:

If parameters are configured on the rules, you can modify them by clicking the Gear icon, providing
values in the Configure parameters dialog box, and then clicking Submit.

4. Optional:

If keys are configured on loaded or created objects, you can define their values by clicking the With
Keys gear icon and providing values in the Configure parameters dialog box.

5. Save the test case or test suite.

Simulating data pages and third-party connections


When setting up your test environment, you can simulate data pages and third-party connections. Such
simulations let you run your tests without depending on the availability of third-party servers.

1. In Dev Studio, open a Pega unit test case.

2. Click the Setup & Cleanup tab.

3. In the Setup section, expand the Simulation section, and then click Add rules.

4. To include a rule that the test rule directly references, on the Referenced rules tab, select the rules to
simulate, and then click Add.

5. To include any rule that the test rule does not directly reference, do the following for each rule:

a. Click the Other rules tab and then click Add.

b. In the Rule type list, click the type of rule that you want to simulate.

c. In the Class box, enter the class of the rule that you want to simulate.

d. In the Rules field, enter the rule that you want to simulate.

e. Click the Add button.

Result: The selected rules are displayed in the Simulation section on the Setup & Cleanup tab.

6. In the Simulate with list for each rule, click a simulation method:

Select As defined in the rule to use the default simulation defined in the rule.
Select Select datatransform rule to define your own data transform rule. You can reuse this rule in other
test cases.
Select Define data here to manually provide test data specific to this particular test case. You can copy
pages from the clipboard or create new pages and populate them with required test data.
Select None to disable the simulation.

Related Content
Article

Data page testing

Article

Data Transforms

Article

Creating unit test suites with AUT

Cleaning up your test environment


After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default. You can apply additional data transforms or activities to
remove other pages or information on the clipboard before you run more test cases or suites.

Before you begin: Open the unit test case. For more information, see Opening a unit test case.

1. Click the Setup & Cleanup tab.

2. To keep the test data on the clipboard at the end of the test run, clear the Cleanup the test data at the end
of run check box in the Cleanup section.

3. In the Cleanup section, click Add step.

4. Select Apply data transform or Execute activity, and then provide the appropriate data transform or activity
in the next field.

5. If parameters are configured on the rule, you can configure them by clicking the Parameters link and
providing values in the Configure parameters dialog box.

6. Optional:
Provide additional information in the Enter comments field.

7. Click Save.

Test environment cleanup

After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default.

Related Content
Article

Data page testing

Article

Creating unit test cases for rules

Article

Creating unit test cases for flows and case types

Test environment cleanup


After you run a unit test case or test suite, user and data pages used to set up the test environment and the
parameter page are automatically removed by default.

Note: You can override this behavior if you want the data from the current test to be available to the subsequent
tests.

You can also apply additional data transforms or activities to remove other pages or information from the
clipboard before you run more tests. Cleaning up the clipboard ensures that data pages or properties on the
clipboard do not interfere with subsequent tests. For example, when you run a test case, you can use a data
transform to set the AvailableDate, ProductID, and ProductName properties on the pyWorkPage page.

You can then use a data transform to clear these properties from the pyWorkPage. Clearing these values ensures
that, if setup data changes on subsequent test runs, the test uses the latest information. For example, if you
change the value of the AvailableDate property, you ensure that the test uses the new value rather than the
older information.

Related Content
Article

Cleaning up your test environment

Article

Setting up and cleaning the context for a test case or test suite

Article

Creating unit test suites with AUT

Article

Data Transforms

Viewing unit test reports


On the Reports tab of the Unit Testing landing page, view a graph with test pass rate trend data, a summary of
the Pega unit tests that were run, and an overview of Pega unit test compliance for the currently included
applications.

By default, a test case is considered executed if it ran in the last 7 days. You can change the number of days
for which a test is considered executed on the Application: Quality Settings landing page. The overview also
includes the percentage of rules on which Pega unit test cases are configured. View Pega unit test reports to
check the quality of your application and identify rules that did not pass Pega unit testing.
1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing > Reports.

2. Optional:

To filter information by more than one criterion, click the Advanced filter icon.

3. Optional:

Generate and export a report for test coverage and test runs for a rule type.

a. For the rule type for which you want to export a report, click a number in the column of the table for
either pie chart.

b. Click Actions.

c. Click Export to PDF or Export to Excel.

Related Content
Article

Changing application quality metrics settings

Article

Viewing application quality metrics

Article

Viewing test details and results on the Application: Unit testing landing page

Article

Viewing unit tests without rules

Viewing unit tests without rules


On the Application: Unit testing landing page, you can display a list of unit tests that are not associated with
any rule and export this list to an XLS or a PDF file. Deactivate these unit tests because they will always
fail.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Unit Testing.

2. Click Tests without rules.

3. Optional:

Generate and export a report that contains a list of test cases that are not associated with any rules.

To export to PDF format, click Actions > Export to PDF.
To export to XLS format, click Actions > Export to Excel.

Related Content
Article

Creating unit test cases for rules

Article

Viewing test details and results on the Application: Unit testing landing page

Article

Viewing unit test reports

Running test cases and suites with the Execute Tests service
You can use the Execute Tests service (REST API) to validate the quality of your code after every build by
running the unit test cases that are configured for the application.
A continuous integration (CI) tool, such as Jenkins, calls the service, which runs all the unit test cases or test
suites in your application and returns the results in xUnit format. The CI tool interprets the results so that,
if the tests are not successful, you can correct errors before you deploy your application.

When you use Jenkins, you can also use the Execute Tests service to run unit tests after you merge a branch on a
remote system of record and start a job. For more information, see Remotely starting automation jobs to perform
branch operations and run unit tests.

The service comprises the following information:

Service name: PegaUnit Rule-Test-Unit-Case pzExecuteTests

Service package: PegaUnit
End point: http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests

You can quarantine a test case by marking it Disabled. A disabled test case is not run by the Execute Tests
service. Test case quarantines prevent noncritical tests from running if they are causing failures so that the
service can continue to run.

Request parameters

The Execute Tests service takes certain string request parameters.

Response

The service returns the test results in an XML file in xUnit format and stores them in the location that you
specified in the LocationOfResults request parameter.

Configuring your default access group

When you run the Execute Tests service, you can specify the access group that is associated with the
application for which you want to run all unit test cases or a test suite. If you do not specify an access group
or application name and version, the service runs the unit test cases or test suite for the default access
group that is configured for your Pega Platform operator ID.

Configuring your build environment

Configure your build environment so that it can call the Execute Tests service and run all the unit test cases
or a test suite in your application. Your configuration depends on the external validation engine that you use.

Running tests and verifying results

After you configure your validation engine, run the service and verify the test results. Your test suites and
test cases must be checked in so that you can run them.

Test failures

Test cases and suites that are run by using the Execute Tests service can fail for several reasons.

Request parameters
The Execute Tests service takes certain string request parameters.

The strings are:

ApplicationInformation – Optional. The name and version of the application for which you want to run Pega
unit test cases. You can pass it instead of the AccessGroup parameter.
If you pass only this parameter, the service runs all the test cases in the application.
If you do not pass this parameter, the service runs all the test cases in the application that are
associated with the default access group that is configured for your operator.

Use the format ApplicationInformation=<application_name:application_version>.

AccessGroup – Optional. The access group that is associated with the application for which you want to run
Pega unit test cases. You can pass it instead of the ApplicationInformation parameter.
If you pass this parameter, the service runs all the test cases in the application that are associated with
this access group.
If you do not pass this parameter, the service runs all the test cases in the application that are
associated with the default access group that is configured for your operator.

Use the format AccessGroup=<access_group_name:access_group_user>.


TestSuiteID – The pxInsName of the test suite that you want to run. You can find this value in the XML
document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. You can run
one test suite at a time. When you use this parameter, all the test cases in the test suite are run, but no
other test cases in your application are run. This parameter is required for Pega unit test suites. If test suites
share the same name among applications:
If you pass the ApplicationInformation or AccessGroup parameter with the TestSuiteID parameter, the
service runs the test suite in the application that you specified.
If you do not pass the ApplicationInformation parameter or the AccessGroup parameter with the
TestSuiteID parameter, the system runs the test suite in the application that is associated with the
default access group.

Use the format TestSuiteID=<pxInsName>.

LocationOfResults – The location where the service stores the XML file that contains the test results. This
parameter is optional for test cases and test suites.
RunWithCoverage – Determines whether the application-level test coverage report is generated after the
Execute Tests service runs all relevant test cases or the selected test suite. For more information, see
Generating an application-level test coverage report.
If you set the parameter to False, the application-level test coverage report is not generated. This is the
default behavior.
If you set the parameter to True, and application-level coverage is not running, the Execute Tests
service starts application-level coverage mode, runs all unit tests, stops coverage mode, and generates
the application-level coverage report. This report is displayed on the test coverage landing page in the
Application level section.
If you set the parameter to True, and application-level coverage is already running, the Execute Tests
service returns an error.
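Because these request parameters are ordinary query-string parameters, a CI script can assemble the call programmatically. The following is a minimal sketch using Python's standard library; the host name is a placeholder, and the endpoint path follows the format shown in this section:

```python
from urllib.parse import urlencode

# Placeholder host; substitute your application URL.
BASE = ("http://myapp.example.com/prweb/PRRestService/"
        "PegaUnit/Rule-Test-Unit-Case/pzExecuteTests")

def build_execute_tests_url(application=None, access_group=None,
                            test_suite_id=None, run_with_coverage=None):
    """Assemble the Execute Tests endpoint with optional request parameters."""
    params = {}
    if application:                     # <application_name:application_version>
        params["ApplicationInformation"] = application
    if access_group:                    # <access_group_name:access_group_user>
        params["AccessGroup"] = access_group
    if test_suite_id:                   # pxInsName of the test suite
        params["TestSuiteID"] = test_suite_id
    if run_with_coverage is not None:   # request the application-level coverage report
        params["RunWithCoverage"] = "True" if run_with_coverage else "False"
    query = urlencode(params)           # joins parameters with '&', percent-encodes ':'
    return f"{BASE}?{query}" if query else BASE

url = build_execute_tests_url(application="MyApp:01.01.01", test_suite_id="MySuite")
print(url)
```

Note that `urlencode` percent-encodes the colon in `name:version` pairs; this is an equivalent, valid encoding of the formats listed above.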

Response
The service returns the test results in an XML file in xUnit format and stores them in the location that you
specified in the LocationOfResults request parameter.

The output is similar to the following example:

<test-case errors="2" failures="0" label="Purchase order transformation with a bad element in the output expected" name="report-bad-element-name" skip="0" tests="7">
  <nodes expected="/" result="/">
    <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
      <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
      <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
    </nodes>
  </nodes>
  <sysout>This text is captured by the report</sysout>
  <syserr/>
</test-case>
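A CI job that retrieves the results file can summarize it with a few lines of XML parsing. This sketch uses Python's standard library against a sample shaped like the example above; element and attribute names are taken from that example, not from a full xUnit schema:

```python
import xml.etree.ElementTree as ET

# Sample result in the shape of the example above.
SAMPLE = """<test-case errors="2" failures="0" name="report-bad-element-name" skip="0" tests="7">
  <nodes expected="/" result="/">
    <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
      <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
      <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
    </nodes>
  </nodes>
  <sysout>This text is captured by the report</sysout>
  <syserr/>
</test-case>"""

def summarize(xml_text):
    """Return (tests, errors, failures, messages) from a test-case element."""
    root = ET.fromstring(xml_text)
    counts = tuple(int(root.get(a, "0")) for a in ("tests", "errors", "failures"))
    messages = [e.text for e in root.iter("error")]   # individual comparison errors
    return counts + (messages,)

tests, errors, failures, messages = summarize(SAMPLE)
print(f"{tests} tests, {errors} errors, {failures} failures")  # 7 tests, 2 errors, 0 failures
```

A build script can fail the pipeline when the error or failure count is nonzero, which is what the JUnit plug-in does for Jenkins automatically.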

Configuring your default access group


When you run the Execute Tests service, you can specify the access group that is associated with the application
for which you want to run all unit test cases or a test suite. If you do not specify an access group or application
name and version, the service runs the unit test cases or test suite for the default access group that is configured
for your Pega Platform operator ID.

1. In the navigation pane of Dev Studio, click the Operator menu, and then click Operator.
2. In the Application Access section, select your default access group.
3. Click Save.

Configuring your build environment


Configure your build environment so that it can call the Execute Tests service and run all the unit test cases or a
test suite in your application. Your configuration depends on the external validation engine that you use.

For example, the following procedure describes how to configure the Jenkins server to call the service.

1. Open a web browser and go to the location of the Jenkins server.

2. Install the HTTP request plug-in for Jenkins to call the service and the JUnit plug-in so that you can view
reports in xUnit format.

a. Click Manage Jenkins.

b. Click Manage Plugins.


c. On the Available tab, select the HTTP Request Plugin and the JUnit Plugin check boxes.

d. Specify whether to install the plug-in without restarting Jenkins or to download the plug-in and install it
after restarting Jenkins.

3. Configure the Pega Platform credentials for the operator who authenticates the Execute Tests service.

a. Click Credentials, and then click System.

b. Click the drop-down arrow next to the domain to which you want to add credentials, and click Add
credentials.

c. In the Username field, enter the operator ID that is used to authenticate the service. This operator
should belong to the access group that is associated with the application for which you want to run test
cases and test suites.

d. In the Password field, enter the password.

e. Click OK.

4. Configure the Jenkins URL that runs the service.

a. Click Manage Jenkins, and then click Configure System.

b. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server.

c. Click Apply, and then click Save.

5. Add a build step to be run after the project is built.

a. Open an existing project or create a project.

b. Click Configure.

c. In the Build section, click Add build step, and select HTTP Request from the list.

d. In the HTTP Request section, in the URL field, enter the endpoint of the service. Use one of the following
formats:

http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests

http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=<access_group_name:access_group_user>

http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?TestSuiteID=<pxInsName>

http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application_name:application_version>

If you are using multiple parameters, separate them with the ampersand (&) character, for example:
http://<your application URL>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application_name:application_version>&TestSuiteID=<pxInsName>

6. From the HTTP mode list, select POST.

7. Click Advanced.

8. In the Authorization section, from the Authenticate list, select the Pega Platform operator ID that
authenticates the service that you configured in step 3.

9. In the Response section, in the Output response to file field, enter the name of the XML file where Jenkins
stores the output that it receives from the service. This field corresponds to the LocationOfResults request
parameter. In the Post-build Actions section, from the Add post build section list, select Publish Junit test
result report and enter **/*.xml in the Test Report XML field. This setting configures the results in xUnit
format, which provides information about test results, such as a graph of test results trends. These results
are displayed on your project page in Jenkins.

10. Click Apply, and then click Save.
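Outside Jenkins, the same authenticated POST can be issued from any HTTP client. The following sketch builds the request without sending it, using Python's standard library; the URL, operator ID, and password are placeholders:

```python
import base64
import urllib.request

# Placeholder endpoint and credentials; substitute real values.
url = ("http://myapp.example.com/prweb/PRRestService/PegaUnit/"
       "Rule-Test-Unit-Case/pzExecuteTests?TestSuiteID=MySuiteID")
operator_id, password = "test.operator", "secret"

# HTTP Basic credentials for the operator that authenticates the service.
token = base64.b64encode(f"{operator_id}:{password}".encode()).decode()

request = urllib.request.Request(
    url,
    method="POST",  # the Execute Tests service is called with POST
    headers={"Authorization": f"Basic {token}"},
)

# urllib.request.urlopen(request) would send the call; the response body
# is the xUnit-format XML that you would otherwise write to a results file.
print(request.get_method())
```

This mirrors what the Jenkins HTTP Request plug-in does in steps 5 through 9: a POST to the service endpoint with the operator's Basic credentials, with the response saved for the JUnit report.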

Running tests and verifying results


After you configure your validation engine, run the service and verify the test results. Your test suites and test
cases must be checked in so that you can run them.

For example, in Jenkins, complete the following steps.

1. Open the project and click Build Now.

2. In the Build History pane, click the build that you ran.

3. On the next page, click Test Result.

4. In the All Tests section, click root. The results of all tests are displayed.

5. Optional:

Expand a test result in the All Failed Tests section and view details about why the test was not successful.

Test failures
Test cases and suites that are run by using the Execute Tests service can fail for several reasons.

Reasons for failed tests:

The operator does not have access to the location of the results.
The access group that is passed by the service either does not exist or no access group is associated with
the operator ID.
The application name and version that are passed do not exist.
An application is not associated with the access group that is passed by the service.
No Pega unit test cases or test suites are in the application.
The test suite pxInsName does not exist for the application name and version or for the access group that is
passed by the service.

Understanding Pega Platform 7.2.2 and later behavior when switching between Pega unit testing and Automated Unit Testing features
Beginning with Pega 7.2.2, you can use Pega unit testing to create test cases to validate the quality of your
application by comparing the expected test output with results that are returned by running rules.

In addition, if you have the AutomatedTesting privilege, you can use Automated Unit Testing (AUT) and switch
between Pega unit testing and AUT, for example, if you want to view test cases that you created in AUT. The
following list describes the application behavior when you use Pega unit testing and AUT:

When you unit test activities that are supported by both Pega unit testing and AUT, the Run Rule dialog box
displays updated options for creating unit tests for Pega unit testing. However, you cannot create unit test
cases for AUT by using this dialog box.
When you use Pega unit testing, you can create, run, and view the results of Pega unit testing on the Test
Cases tab for the supported rule types.
You can view, run, and check the results of Pega unit test cases by clicking Dev Studio > Automated Testing > Test Cases. You can also switch to the AUT landing page by clicking Switch to old version.
When you switch to the AUT landing page, you can create, run, and view the results of unit test cases for AUT
on the Test Cases tab for activities, data transforms, and data tables, which are supported by both Pega unit
testing and AUT. You can create unit test cases only by clicking the Record test case button and using the
older Run Rule dialog box.
On the Automated Unit Testing landing page, you can restore the Automated Testing landing page by clicking Switch to new version. When you click the Test cases tab in an activity, decision table, or decision tree, the tab displays options for creating Pega unit test cases.
If you use the Automated Unit Testing landing page and then log out of the system, Dev Studio displays the Dev Studio > Application > Automated Unit Testing menu option instead of the Dev Studio > Application > Automated Testing option. To restore the Automated Testing landing page, click Switch to new version on the Automated Unit Testing landing page.

Working with the deprecated AUT tool


In older versions of Pega Platform, automated unit tests were created using the Automated Unit Testing (AUT)
tool, which has since been replaced by PegaUnit testing. If you have automated unit tests that were created using
AUT and they haven't been changed to PegaUnit test cases, then you can switch back to AUT to manage those
tests.

Note: AUT has been deprecated and is not supported in the current version of Pega Platform. Switch to and use AUT only if you have existing automated unit tests created with AUT. See PegaUnit testing for more information.

Note the following behavior:

To use AUT, your operator ID must have the AutomatedTesting privilege through an access role.

Switch from PegaUnit testing to AUT by clicking Configure > Automated Testing > Test Cases and then clicking Switch to old version on the Automated Testing landing page.

Click the Test cases tab of the Automated Unit Testing landing page to display options for creating unit tests
for activities, decision tables, and decision trees.

If you are using the Automated Unit Testing landing page and then log out of the system, you can click Configure > Application > Automated Unit Testing, and then click Switch to new version to restore the Automated Testing landing page.

Viewing, playing back, and rerecording test cases


1. Click the Automated Unit Tests tab.
2. Select Unit Test Cases in the Show field.
To play back a test case, click its name in the Name column.
To rerecord a test case, right-click the test case name and click Re-record.

Note: If the underlying test case rule belongs to a ruleset that uses the check-out feature, you must
have the test case rule checked out to you before re-recording the test case.

Opening rules in test cases and unit test suites


1. Click the Automated Unit Tests tab.
2. Right-click a test case or suite and click Open to open its rule.

Withdrawing test cases and unit test suites


1. Click the Automated Unit Tests tab.
2. Right-click a test case or suite and click Withdraw.

Withdrawn test cases and suites are not displayed on the Automated Unit Tests tab.

Unit test suite run results


You can view the results of your recent unit test suite runs in either the Dashboard tab or Reports tab. The
Dashboard tab displays the ten most recent runs. The Reports tab displays earlier results and, for a given unit test
suite, shows results from the last fifty (50) runs of that unit test suite.

If you ran a unit test against a saved test case for a decision table, decision tree, activity, or Service SOAP rule
and selected the All Cases option in the Run Rule form, those results also appear in the Dashboard tab.

For activity test cases, if the activity test case has an approval list, differences are reported only for pages and
properties on the list. If the test case has an approval list and the only differences are for pages and properties
not on the list, those differences are not reported. If differences are found for items on the approval list, you can
remove the item from the approval list for that test case.

Creating and scheduling unit test suites


To create a unit test suite:

1. Click the Schedule tab.


2. Click Create Suite.
3. In the New Rule form, enter the requested information for creating a unit test suite.

To run a unit test suite or to schedule a run:

1. Click the Schedule tab.


2. Click the Calendar icon in the Schedule column for the unit test suite you want to run.
3. In the Pattern section of the Schedule Unit Test Suite window, specify how to run this unit test suite. When the run is complete, the system displays the results on the Dashboard tab. If you select the option to run immediately, the system runs the test suite in the foreground; for all other options, the system runs the test suite in the background.
4. For scheduled runs, you can specify additional options.
a. Select to run the unit test suite by using a different operator ID. In the Advanced Settings section, enter
the Operator ID in the Override Default and Run Suite As field. The system runs the unit test suite by
using the rulesets and access rights associated with that operator. If the operator ID form has multiple
access groups, the default access group is used.
b. Send the completion email to multiple email addresses. Use the Send Completion Email to field to
specify the email addresses.

If you do not want any emails sent, clear the Send Completion Email field.

5. Click OK.

By default, the Pega-AutoTest agents run scheduled unit test suites every five minutes. When the suite is finished, the agent activity sends an email with the results. By default, this email is sent to the operator who requested the unit test suite run and to any email addresses listed in the Send Completion Email field. If no email addresses appear in that field, no email message is sent.

Creating test cases with AUT

You can automate testing of rules by creating test cases for automated unit testing. Automated unit testing
validates application data by comparing expected output to the actual output that is returned by running
rules.

Creating unit test suites with AUT

Unit Test Suites identify a collection of Test Cases and their rulesets, and a user (Operator ID) whose
credentials are used to run the Unit Test Suite. Unit Test Suites are used to automatically run groups of test
cases together and make unit testing more efficient.

Related Content
Article

AUT test suite – Create or Save as form

Article

AUT test suite – Contents form

Article

Viewing test details and results on the Application: Unit testing landing page

AUT test suite – Contents form


Use the Contents tab to define the unit test suite. Specify a user (Operator ID) that the Pega-AutoTest agents are
to use by default when running the suite, and select the test cases to include.

The Operator ID specified here is the default one used to run the unit test suite. When defining the unit test
suite's run schedule using the Schedule gadget of the Automated Unit Testing landing page, you have the option
to specify a different Operator ID and override the one specified here.

You can specify Test Cases in both the Rule Types To Include section and the Query Test Cases To Include
section of this form. If you specify Test Cases in both sections, when the unit test suite runs, those test cases
defined in the Rule Types To Include section will run before the test cases in the Query Test Cases To
Include section.

1. In the RuleSets for Test Cases field, select the RuleSet that holds the test cases you want to include in this
test suite.

If the test cases are in more than one RuleSet, click the Add icon to add rows to specify the additional
RuleSets.

2. In the User ID for Agent Processing field, select the Operator ID for the Pega-AutoTest agents to use by
default when they run this test suite.
This ID must provide access to the RuleSet that this test suite belongs to, as well as access to the RuleSets
listed in the RuleSets field.

3. Optional:

To specify that the work items created during the test case execution are to be deleted afterwards, select
the Remove Test Work Objects? check box.

The fields in the Application Test Cases To Include section provide options to specify the test cases by
application name and version.

4. In the Application Name field, select the name of the application that has the test cases you want to include in the unit test suite.

5. In the Application Version field, select the version of the application that has the test cases you want to
include in the unit test suite.

The fields in the Rule Types To Include section provide options to select the test cases by rule type. You
can specify that all the test cases for a particular rule type are included in this unit test suite, or you can
constrain the list with a When condition rule.

6. In the Rule Type field, select those rule types for which you want to include their test cases in this unit test
suite:

Activities
Decision Tables
Decision Trees
Flows
Service SOAP service records

7. In the When Filter field, do one of the following:

Leave blank to include all the test cases that were created for rules of the type specified in the Rule
Type field.

Select the appropriate when condition rule to constrain the list.

The test cases that meet the conditions in the when condition rule are included in the unit test suite.

The fields in the Query Test Cases To Include section provide options to select specific Test Cases to
include in this unit test suite. List the test cases in the order in which you want them to be run.

8. In the Test Case Name field, enter a search string for the test case you want to find.

9. To list test cases that match the query string in the Test Case Name field, click Query.

The list is not limited by RuleSet. If test cases exist that match the search string, the List Test Case window
appears. Select the test cases you want to include and then click OK. The test cases are added to the list in
this section of the form.

10. In the Test Case Key field, enter the three-part key of a Test Case rule.

The key consists of the following parts:

Class Name
Instance Name (Ins Name)
Purpose

When you use the Query button to find and add a test case, the system automatically fills in this field.

11. In the Description field, enter the short description of the Test Case.

When you use the Query button to find and add a test case, the system automatically fills in this field.

12. In the RuleSet field, enter the RuleSet of the test case.

When you use the Query button to find and add a test case, the system automatically fills in this field.

Verify that this RuleSet is included in the RuleSets for Test Cases list at the top of this form. If the RuleSet for
the test case is not in that list, add it now. Otherwise, the Test Case does not run when the unit test suite
runs.
Related Content
Article

Creating unit test suites with AUT

Article

AUT test suite – Create or Save as form

UI testing
Perform UI-based functional tests and end-to-end scenario tests to verify that end-to-end cases work as expected. Use the third-party Selenium starter kit for CRM or the built-in scenario testing tool to perform UI testing.

Testing with Selenium starter kit for CRM

Pega provides a Selenium-based UI test framework and sample UI tests that you can use to build a test automation suite for your Pega application. These test frameworks are built with maintenance and best practices in mind.

Creating UI-based tests with scenario testing

Run scenario tests against a user interface to verify that the end-to-end scenarios are functioning correctly.
The UI-based scenario testing tool allows you to focus on creating functional and useful tests, rather than
writing complex code.

Testing with Selenium starter kit for CRM


Pega provides a Selenium-based UI test framework and sample UI tests that you can use to build a test automation suite for your Pega application. These test frameworks are built with maintenance and best practices in mind.

The starter kit comes with a generic Selenium-based UI test framework that you can use for creating UI page objects and UI tests for your Pega application. It includes a sample UI test framework to support testing the core Pega CRM applications: Pega Sales Automation, Pega Customer Service, and Pega Marketing. The kit comes with out-of-the-box (OOTB) sample tests that validate real core use cases of those CRM applications. You can use this kit as a reference when creating your own UI page objects and end-to-end UI test scripts. The framework and tests are based on Behavior-Driven Development (BDD) and leverage the Cucumber framework.

For more information, including guidelines on getting started, running, and writing UI tests, see Selenium Starter
Kit on Pega Marketplace.

Creating UI-based tests with scenario testing


Run scenario tests against a user interface to verify that the end-to-end scenarios are functioning correctly. The
UI-based scenario testing tool allows you to focus on creating functional and useful tests, rather than writing
complex code.

You can test either a specific case type or an entire portal by clicking Scenario Testing in the run-time toolbar to
open the test recorder. When you use the test recorder and hover over a testable element, an orange highlight
indicates that the element can be tested. Interactions are recorded in a visual series of steps and the execution of
a test step can include a delay.

Provide data to your test cases with a predefined data page. This data page can provide unique values for each
execution of the test case. You can populate the data page by using any source, including activities or data
transforms.

Tests are saved in a test ruleset. After they are saved, tests are available on the Application: Scenario testing
landing page. From the landing page you can run a test or view the results of a previous test run.

Creating scenario tests

Record a set of interactions for a case type or portal in scenario tests. You can run these tests to verify and
improve the quality of your application.

Updating scenario tests



Opening a scenario test case

You can view a list of the scenario test cases that have been created for your application and select the one
that you want to open.

Grouping scenario tests into suites

Group related scenario tests into test suites to run multiple scenario test cases in a specified order. You can
then run the scenario test suites as part of purpose-specific tests, such as smoke tests, regression tests, or
outcome-based tests. Additionally, you can disable or quarantine individual scenario tests for an application
so that they are not executed when the test suite runs.

Application: Scenario testing landing page

The scenario testing landing page provides a graphical test creation tool that you can use to increase test
coverage without writing complex code.

Creating scenario tests


Record a set of interactions for a case type or portal in scenario tests. You can run these tests to verify and
improve the quality of your application.

Before you begin: Create a test ruleset in which to store the scenario test. For more information, see Creating a
test ruleset to store test cases.

1. Launch the portal in which you want to do the test.

2. Do one of the following to open the automation recorder:

In App Studio, on the lower-left side of the screen, click the Test icon.
In Dev Studio, on the lower-right side of the screen, toggle the run-time toolbar, and then click the
Toggle Automation Recorder icon.

3. In the Scenario tests pane, click Create test case, and then select the test type:

To record a test for a portal, select Portal.


To record a test for a case, select Case type, and then select the type of case for which you want to record the test.

Note: When you select the case type, a new case of that type is created.

4. Record the steps for the test by clicking the user interface elements.

Result: When you hover over a testable element, an orange highlight box appears. When you click an
element, you record an implicit assertion and add the interaction to the list of test steps.
5. Optional:

To add an explicit assertion to the test, do the following steps:

a. Hover over an element.

b. Click the Mark for assertion icon on the orange highlight box.

c. In the Expected results section, click Add assertion.

d. Define the assertion by completing the Name, Comparator, and Value fields.

e. Click Save step.

6. When you finish adding steps, in the Test case pane, click Stop and save test case.

7. On the New test case form, save the test:

a. Enter a name and a description for the test.

b. In the Context section, select a branch or ruleset in which you want to save the test.

c. In the Apply to field, enter the name of a class that is relevant to the test.
d. Click Save.

Result: The test case appears on the Scenario testing landing page.

Related Content
Article

Updating scenario tests

Article

Application: Scenario testing landing page

Opening a scenario test case


You can view a list of the scenario test cases that have been created for your application and select the one that
you want to open.

1. In the navigation pane of Dev Studio, click Configure > Application > Quality > Automated Testing > Scenario Testing > Test Cases.

2. In the Test case name column, click the test case that you want to open.

Related Content
Article

Running scenario tests

Grouping scenario tests into suites


Group related scenario tests into test suites to run multiple scenario test cases in a specified order. You can then
run the scenario test suites as part of purpose-specific tests, such as smoke tests, regression tests, or outcome-
based tests. Additionally, you can disable or quarantine individual scenario tests for an application so that they
are not executed when the test suite runs.

Creating scenario test suites

To create a scenario test suite, add scenario test cases to the suite and then specify the order in which you
want the tests to run. You can also modify the context in which to save the scenario test suite, such as the
development branch or the ruleset.


Running scenario test suites

Run scenario test suites to check application functionality. You can check the run history, add or remove test
cases from the suite, or reorder the test cases before running the suite.

Viewing scenario test suite results

After you run a scenario test suite, you can view the test results. For example, you can view the expected
and actual output for assertions that did not pass.

Related Content
Article

Creating scenario tests

Article

Application: Scenario testing landing page


Creating scenario test suites
To create a scenario test suite, add scenario test cases to the suite and then specify the order in which you want
the tests to run. You can also modify the context in which to save the scenario test suite, such as the
development branch or the ruleset.

When the test suite runs, the test cases run in the order in which they are listed. You can reorder cases only on the page in which they are displayed; you cannot move cases or suites from one page to another.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Scenario Testing > Test Suites.

2. Click Create new suite.

3. Optional:

In Description, enter information that you want to include with the test suite. For example, enter information
about when to run the test suite.

4. In the Category list, select the type of scenario test suite that you are creating:

To informally test a feature, select Ad-hoc.


To verify critical application functionality, select Smoke.
To confirm that changes have not adversely affected other application functionality, select Regression.

5. In the Scenario test cases section, click Add, select the test cases you want to add to the suite, and then click
Add.

Note: To filter information by multiple criteria, click the Advanced filter icon.
6. Optional:

To change the order in which the test cases run, drag the case to a different position in the sequence.

7. Save the scenario test suite:

a. Click Save and then enter a Label that describes the purpose of the test suite.

Note: Pega Platform automatically generates the Identifier based on the label that you provide. The
identifier identifies the scenario test suite in the system and must be unique. To change the
identifier, click Edit.
b. Optional:

In the Context section, change details about the environment in which the test suite will run. You can:

Change the development branch in which to save the scenario test suite.
Select a different application for which to run the scenario test suite.
Change the class to apply to the scenario test suite.
Select a different ruleset in which to save the scenario test.

c. Click Submit.

Related Content
Article

Grouping scenario tests into suites

Article

Running scenario test suites

Article

Viewing scenario test suite results

Article

Creating scenario tests


Article

Application: Scenario testing landing page

Running scenario test suites


Run scenario test suites to check application functionality. You can check the run history, add or remove test
cases from the suite, or reorder the test cases before running the suite.

1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Scenario Testing > Test Suites.

2. Optional:

View or modify the test cases included in a test suite.

a. Click the name of the suite.

b. View summary information about previous test results in the header.

To view more information about the latest test results, click View details.
To view information about earlier results, click View previous runs.

c. Modify the test cases in the suite from the Scenario test cases section.

To remove test cases from the suite, click Remove, and then click Save.
To include additional test cases in the suite, click Add, select the test case, click Add, and then
click Save.

d. To change the order in which the test cases will run, drag a case to a different position in the sequence
and then click Save.

e. To prevent individual test cases from running as part of the suite, select the case, click Disable, click
Save, and then close the test case.

f. Close the test suite, return to the Application: Scenario testing page, and then click Actions > Refresh.

3. Optional:

To view details about previous test results, click View in the Run history column.

4. Select the check box for each test suite that you want to run and then click Run selected. The test suites run
and the Result column is updated with the result, which you can click to open test results.

Related Content
Article

Grouping scenario tests into suites

Article

Running scenario test suites

Article

Viewing scenario test suite results

Article

Creating scenario tests

Article

Application: Scenario testing landing page

Viewing scenario test suite results


After you run a scenario test suite, you can view the test results. For example, you can view the expected and
actual output for assertions that did not pass.
1. In the header of Dev Studio, click Configure > Application > Quality > Automated Testing > Scenario Testing > Test Suites.

2. To view results of the most recent run, click the result in the Result column. For information about why a test
failed, click Failed in the Result column.

3. To view historical details about a specific test suite, in the Run history column, click View.

Related Content
Article

Grouping scenario tests into suites

Article

Creating scenario test suites

Article

Running scenario test suites

Article

Creating scenario tests

Article

Application: Scenario testing landing page

Application: Scenario testing landing page


The scenario testing landing page provides a graphical test creation tool that you can use to increase test
coverage without writing complex code.

On the scenario testing landing page, you can view and run scenario test cases. By viewing reports, you can also
identify case types and portals that did not pass scenario testing.

On the scenario testing landing page, you can perform the following tasks:

View test execution and coverage information for case type tests and portal tests.
Open a test case rule where you can add assertions to your test.
View the results of the most recent test run.
Select and run individual test cases, or group tests into test suites that you can use to run multiple tests in a
specified order.
Download a list of tests, their type, the name of a portal or case that is tested, and the time and the result of
the last run.

Related Content
Article

Creating scenario tests

Article

Updating scenario tests

Understanding model-driven DevOps with Deployment Manager


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a standardized deployment process so that you can
deploy predictable, high-quality releases without using third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application
package generation, artifact management, and package promotion to different stages in the workflow. You can
download Deployment Manager for Pega Platform from the Deployment Manager Pega Marketplace page.

For answers to frequently asked questions, see the Deployment Manager FAQ page.
Deployment Manager release notes

These release notes provide information about enhancements, known issues, issues related to updating from
a previous release, and issues that were resolved in each release of Deployment Manager.

Getting started with Deployment Manager

Deployment Manager is a simple, intuitive, and ready-to-use application that offers built-in DevOps
capabilities to users. It leverages Pegasystems’s market-leading case management technology to manage an
automated orchestration engine, enabling you to build and run continuous integration and continuous
delivery (CI/CD) pipelines in a model-driven manner.

Understanding Deployment Manager architecture and workflows

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for
your Pega applications from within Pega Platform. You can create a standardized deployment process so
that you can deploy predictable, high-quality releases without using third-party tools.

Understanding best practices for using branches with Deployment Manager

Follow these best practices when you use branches in your Deployment Manager pipelines. The specific
practices depend on whether you have a single development team or multiple development teams in a
distributed environment.

Managing test cases separately in Deployment Manager

In Deployment Manager 4.4.x and later, you can package and deploy test cases separately on the candidate
systems in the pipeline. When you configure a pipeline in Deployment Manager, you specify the details of
the test package that you want to deploy, including the stage in the pipeline until which you want to deploy
the package.

Creating and using custom repository types for Deployment Manager

In Deployment Manager 3.1.x and later, you can create custom repository types to store and move your
artifacts. For example, you can create a Nexus repository and use it similarly to how you would use a Pega
Platform-supported repository type such as file system. By creating custom repository types, you can extend
the functionality of Deployment Manager through the use of a wider variety of repository types with your
artifacts.

Configuring Deployment Manager 4.x for Pega Platform 7.4

You can use Deployment Manager 4.x if Pega Platform 7.4 is installed on your candidate systems
(development, QA, staging, and production). You can use many of the latest features that were introduced in
Deployment Manager 4.x, such as managing your deployments in a dedicated portal.

Deployment Manager 4.8.x

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for
your Pega applications from within Pega Platform. You can create a consistent deployment process so that
you can deploy high-quality releases without the use of third-party tools.

Obtaining deprecated Deployment Manager documentation

The Deployment Manager releases for the corresponding versions of documentation are no longer available
to be downloaded from Pega Marketplace.

Deployment Manager release notes


These release notes provide information about enhancements, known issues, issues related to updating from a
previous release, and issues that were resolved in each release of Deployment Manager.

For answers to frequently asked questions, see the Deployment Manager FAQ page.

Deployment Manager 4.8.1

Deployment Manager 4.8.1 includes the following enhancements and resolved issues:

Deployment Manager 4.7.1

Deployment Manager 4.7.1 includes the following enhancements.


Deployment Manager 4.6.1

Deployment Manager 4.6.1 includes the following enhancements and resolved issues.

Deployment Manager 4.5.1

Deployment Manager 4.5.1 includes the following enhancements and resolved issues.

Deployment Manager 4.4.2

Deployment Manager 4.4.2 includes the following resolved issues.

Deployment Manager 4.4.1

Deployment Manager 4.4.1 includes the following enhancements and known issues.

Deployment Manager 4.3.2

Deployment Manager 4.3.2 includes the following resolved issues.

Deployment Manager 4.3.1

Deployment Manager 4.3.1 includes the following enhancements.

Deployment Manager 4.2.1

Deployment Manager 4.2.1 includes the following enhancements.

Deployment Manager 4.1.1

Deployment Manager 4.1.1 includes the following enhancements.

Deployment Manager 3.4.1

Deployment Manager 3.4.1 includes the following enhancements.

Deployment Manager 3.3.1

Deployment Manager 3.3.1 includes the following enhancements and known issues.

Deployment Manager 3.2.1

Deployment Manager 3.2.1 includes the following enhancements.

Deployment Manager 3.1.1

Deployment Manager 3.1.1 includes the following enhancements.

Deployment Manager 2.1.4

Deployment Manager 2.1.4 includes the following resolved issues.

Deployment Manager 2.1.3

Deployment Manager 2.1.3 includes the following enhancements.

Deployment Manager 2.1.2

Deployment Manager 2.1.2 includes the following known issues.

Deployment Manager 1.1.3

Deployment Manager 1.1.3 includes the following enhancements.

Deployment Manager 1.1.2

Deployment Manager 1.1.2 includes the following known and resolved issues.

Deployment Manager 4.8.1


Deployment Manager 4.8.1 includes the following enhancements and resolved issues:
Note: A mandatory hotfix (HFix-62670) has been released for Pega Platform 8.4.1 that impacts users of Deployment
Manager 4.8. Apply the hotfix on the orchestrator and on all candidate environments, and then restart the service
to complete the update.

Enhancements
The following enhancements are available in this release:

Assess application security by using the enhanced Pega Security Checklist task

With the Pega Security Checklist, you can easily secure your applications and systems in one convenient
area of Pega Platform™. The Security Checklist:
Performs a detailed assessment of your current security configuration to determine whether the
settings follow best practices for application development.
Provides a status on each task in the Security Checklist page and blocks your application deployment if
any task fails.
Stores an audit trail of the security configuration analysis and status at the time of deployment.

Deploy revision packages by using Deployment Manager

With Deployment Manager 4.8, you can now use Deployment Manager pipelines for deploying application
revisions. If you choose not to migrate your deployments to a DevOps pipeline, Revision Manager remains
backward compatible.

Configure authentication profiles in Deployment Manager

Deployment Manager service administrators can now create or edit authentication profiles in the
Deployment Manager UI instead of accessing profiles in Dev Studio.

Application-level rollback to a restore point

Application-level rollbacks now provide a more granular approach to restore points, which you can use to
revert rules and data instances in a specific application. This feature requires Pega Platform 8.4 and later.

Trigger deployments by using existing deployment artifacts

You can now trigger deployments by using production-ready artifacts from a previous production
deployment. Two new fields, Pipeline and Deployment, replace the existing fields, Select a repository
and Select an artifact, to provide a more efficient deployment approach and eliminate repository
interactions from Orchestrator.

You cannot use development artifacts to trigger a deployment.

Changes to pipeline dependencies configuration

The Deployment Manager UI for updating application dependencies is now read-only for pipelines that are
created in versions 4.7 and earlier. This functionality supports client migrations to the new deployment-
centric method of dependency configuration.

Add Jenkins tasks to any pipeline phase

You can now add Jenkins tasks to the Continuous Integration (CI) phase of a deployment pipeline. For a list of
available parameters for a Jenkins task in CI, see the Configuring Jenkins documentation.

Block draft flows in systems with production level 5

Deployment Manager now blocks deployments in systems with a production level of 5 if the artifact contains
draft flows. If the production level is lower than 5, a warning message is displayed in the Deployment History
and Reports section, which indicates that draft flows might cause production failures.

Resolved issues in Deployment Manager 4.8.1


In this release, minor security issues are resolved.

Deployment Manager 4.7.1


Deployment Manager 4.7.1 includes the following enhancements.
Enhancements
The following enhancements are available in this release:

Stop all ongoing deployments for a pipeline at once.

You can now stop all the ongoing deployments for a pipeline at once. Stop all deployments to quickly
troubleshoot issues and resolve failed pipelines.

Use a chatbot to obtain information about common issues.

You can now use a self-service chatbot to obtain troubleshooting tips and more information about common
Deployment Manager issues. When you search for information, the chatbot provides you with answers and
links to more information.

Troubleshoot pipelines with enhanced diagnostics.

Deployment Manager now provides enhanced diagnostics so that you can troubleshoot more issues. You
receive warnings if you are using the defaultstore repository or Pega type repository in any environment.

Perform new tasks with usability enhancements.

Usability enhancements in Deployment Manager 4.7.1 now include:

Start another pipeline by using the Trigger deployment task in an active pipeline, which allows you to
add pipeline stages.
Stop a deployment if a Jenkins task in the pipeline fails.
Archive inactive pipelines. By default, archived pipelines do not appear in the Deployment Manager
interface.
Temporarily disable pipelines that frequently fail to prevent additional deployments on the pipeline.
Start a new test coverage session for the Enable test coverage task every time you run a pipeline.
Starting a new session prevents deployments from failing if a test coverage session is already running
on the pipeline.
Filter pipelines by application name and version on the Deployment Manager landing page.
In deployment logs, view all the new and changed rule and data instances in an application package
that was imported into a candidate system.

Use APIs for new features.

Deployment Manager now provides APIs so that you can use new features in your applications:

Run diagnostics remotely, and retrieve diagnostics results.


Disable and enable pipelines.
Archive and unarchive pipelines.

The Documentation/readme-for-swagger.md file in the DeploymentManager04_07_0x.zip file provides documentation about API usage.

Deployment Manager 4.6.1


Deployment Manager 4.6.1 includes the following enhancements and resolved issues.

Enhancements
The following enhancements are available in this release:

Ability to use Deployment Manager to automate data migration pipelines

Data migration pipelines allow you to export data from a production environment to a simulation environment,
where you can safely test the impact of changes to your decision framework without deploying to production.
You can now use Deployment Manager to create data migration pipelines that automatically export data from a
production environment and import it into a simulation environment. Additionally, you can configure a job
scheduler rule to run pipelines during a specified period of time.

For a tutorial on configuring simulation pipelines, including how to use Deployment Manager with them, see
Deploying sample production data to a simulation environment for testing.

For more information about configuring and using simulation pipelines with Deployment Manager, see Data
migration pipelines with Deployment Manager 4.6.x.

Ability to provide access to Dev Studio to a role

You can now allow a role to access Dev Studio, and all the users of that role can switch to Dev Studio from
the Operator icon. By being able to switch to Dev Studio, users can access Dev Studio tools to further
troubleshoot issues that Deployment Manager cannot diagnose.

Ability to easily move to new orchestration systems by configuring a dynamic system setting

When you move from an existing orchestration system to a new one, you can now configure a dynamic
system setting that specifies the URL of the new orchestration system.

Resolved issues in Deployment Manager 4.6.1


The following issues were resolved in this release:

The position of the Validate test coverage task was not retained.

If you added a Validate test coverage task in a pipeline, the task automatically moved under the Add task
menu option after you saved the pipeline configuration. The position of the task is now saved.

Deployment Manager installation failed on IBM Db2.

Deployment Manager installations on systems running on Db2 failed with a database error. You can now
install Deployment Manager on Db2.

Not all API requests included PRRestService.

Some HTTP requests to the api service package did not include PRRestService. It is now included in all
requests that need it so that all traffic is directed to the API node.

Tasks could not be added before the Deploy task in Deployment Manager 4.5.1 when using the API.

When you used the API to create pipelines, you could not add tasks before the Deploy task, although you
could add a task when you configured the pipeline in Deployment Manager. You can now add tasks before
the Deploy task with the API.

Test changes in branches were merged into incorrect ruleset versions.

Sometimes, test changes in branches were merged into an incorrect ruleset version if multiple application
versions were used and a test application was configured on the pipeline. Test changes in branches are now
merged into the correct ruleset versions.

Deployment Manager displayed a message for reaching the limit for pending changes.

Sometimes, Deployment Manager displayed an error message that you reached the maximum limit for
pending changes. The limit has been increased, and the error no longer appears.

The Jenkins configuration diagnostics check failed when cross-site request forgery (CSRF) protection was
disabled.

When CSRF protection was disabled in Jenkins, pipeline diagnostics for Jenkins configuration failed with an
error message that the Jenkins server was not reachable, even though the Jenkins task in the pipeline
worked correctly. Jenkins diagnostics checks no longer fail in this scenario.

Deployment Manager 4.5.1


Deployment Manager 4.5.1 includes the following enhancements and resolved issues.

Enhancements
The following enhancements are provided in this release:

Ability to add tasks before Deploy and Publish tasks

For additional validation or environment provisioning, you can now add any task before the Deploy and
Publish tasks, which are automatically added to the pipeline. You can add tasks before the Deploy task in
any stage of the pipeline or before the Publish task in the development stage.
Ability to associate bugs and user stories to branch merges

When you start a deployment by submitting a branch into the Merge Branches wizard, you can now
associate user stories and bugs from Agile Workbench so that you can track branch merges.

New REST API to deploy existing artifacts

Deployment Manager now provides a REST API to deploy existing artifacts so that you can start a production
pipeline with the output of the development pipeline for the same application. You can view the
Documentation/readme-for-swagger.md file for more information on using the API.
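As a rough illustration of how such a REST call might be assembled from a client, the sketch below composes a deployment-trigger request. The endpoint path, parameter names, and payload field are assumptions for illustration only; the authoritative contract is the swagger documentation shipped with Deployment Manager.

```python
import json

# Hypothetical sketch: compose the URL and JSON body for a "deploy existing
# artifact" REST call. The path segments and the "artifactPath" field are
# assumed names, not the documented Deployment Manager API.
def build_deploy_request(base_url, pipeline_name, artifact_path):
    """Return the (url, json_body) pair for a deployment trigger call."""
    url = (f"{base_url}/prweb/api/DeploymentManager/v1"
           f"/pipelines/{pipeline_name}/deployments")
    body = json.dumps({"artifactPath": artifact_path})
    return url, body

url, body = build_deploy_request(
    "https://orchestrator.example.com",        # orchestration server (placeholder)
    "MyAppPipeline",                           # pipeline name (placeholder)
    "repo://prod-artifacts/MyApp_1.0.zip",     # existing artifact (placeholder)
)
print(url)
print(body)
```

A client would then POST the body to the URL with the appropriate authentication profile credentials.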

Ability to access and pass all relevant parameters of the current deployment for Jenkins tasks

For Jenkins tasks, you can now access and pass all the relevant Jenkins parameters for the current
deployment, which include PipelineName, DeploymentID, RepositoryName, and ArtifactPath. When you
configure the Jenkins task in a pipeline, the values of the parameters are automatically populated.
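To show how these deployment parameters reach a Jenkins job, the sketch below builds the standard Jenkins remote-trigger URL for a parameterized build (the /buildWithParameters endpoint). The server URL and job name are placeholders, and the spelling RepositoryName is assumed to be the intended parameter name; Deployment Manager populates the values automatically.

```python
from urllib.parse import urlencode

# Build the remote-trigger URL for a parameterized Jenkins job. The
# /buildWithParameters endpoint is standard Jenkins; the parameter values
# here are illustrative stand-ins for what Deployment Manager passes.
def jenkins_trigger_url(server, job, params):
    """Return the URL that starts the job with the given parameters."""
    return f"{server}/job/{job}/buildWithParameters?{urlencode(params)}"

url = jenkins_trigger_url(
    "https://jenkins.example.com",   # Jenkins server (placeholder)
    "post-deploy-checks",            # job name (placeholder)
    {
        "PipelineName": "MyAppPipeline",
        "DeploymentID": "D-42",
        "RepositoryName": "prod-artifacts",
        "ArtifactPath": "MyApp/MyApp_1.0.zip",
    },
)
print(url)
```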

More diagnostics to troubleshoot pipelines

You can now automatically diagnose more issues with your pipeline so that you spend less time manually
troubleshooting. For example, you can now verify that Jenkins steps are properly configured, and you can
also obtain more information about repository connections with enhanced troubleshooting tips.

Elimination of post-upgrade steps when upgrading from Deployment Manager versions 3.2.1 and later

For upgrades from Deployment Manager 3.2.1 or later to version 4.5.1, you no longer need to run activities
or do any other post-upgrade steps. After the upgrade completes, Deployment Manager performs health
checks before running post-upgrade steps for both on-premises and Pega Cloud Services environments.

Resolved issues
The following issue is resolved in Deployment Manager 4.5.1:

Unable to configure keystores in Pega Cloud Services environments

If your target environment is SSL-enabled with private certificates, you can now set the keystore for
Deployment Manager connectors so that they can receive and process tokens. You first configure a keystore
and then update a dynamic system setting to reference the keystore ID. For more information, see "Step 3a:
Configuring authentication profiles on the orchestration server and candidate systems" for your version of
Installing, upgrading, and configuring Deployment Manager.

Deployment Manager 4.4.2


Deployment Manager 4.4.2 includes the following resolved issues.

Resolved issues
The following issues were resolved in this release:

Incorrect status displayed for the Run Pega unit tests task

If you refreshed a merge request quickly, the status of the Run Pega unit tests task might have been
incorrectly displayed as the status of the merge. The correct status for the task is now displayed.

Duplicate operator IDs displayed for the Manual task

When you assigned manual tasks to an operator ID, the Manual task auto-complete displayed duplicate
entries for the same operator ID if the operator ID was added as an administrator or user for multiple
applications. The Manual task no longer displays duplicate entries.

Pipeline deployments sometimes froze

Sometimes, a pipeline deployment might freeze if it could not update the task with the status that it
received from the task. The pipeline no longer freezes.

No error messages displayed for issues with artifacts and repositories

The Deploy existing artifact dialog box now validates the repository that you select. Error messages are also
displayed when the repository does not list available artifacts or if the repository does not have any artifacts
in it.

Verify security checklist task failed and displayed a Pega Diagnostic Cloud (PDC) error

The Verify security checklist task failed when a pipeline had only one stage (development) and the
Production ready check box was selected in the pipeline configuration. A PDC error message was displayed.
The task no longer fails for pipelines with this configuration.

32 character token limit for Jenkins tasks

For the Jenkins task, you could only enter a 32-character token to remotely start a Jenkins job. You can now
enter a token with more than 32 characters.

Dependent applications were not deployed

Dependent applications that were configured on a pipeline were not deployed. They are now
deployed correctly.

Deployment Manager 4.4.1


Deployment Manager 4.4.1 includes the following enhancements and known issues.

Enhancements
The following enhancements are provided in this release:

Simplified configuration and workflow when merging branches in a distributed branch-based environment

The process for merging branches in distributed branch-based environments has been simplified. On the
remote development system, you can now merge branches and start a deployment by using the Merge
Branches wizard to merge branches onto the source development system without having to use a Pega
repository type.

Ability to submit locked branches to the Merge Branches wizard

You can now submit locked branches to the Merge Branches wizard so that you can follow best practices
when working with branches. Best practices include locking branches to prevent changes from being made
to them.

Using the Merge Branches wizard to make merge requests now stores the branch in the development
repository

When you use the Merge Branches wizard to merge branches and start a deployment, the wizard now stores
the branch in the development repository. Also, after the merge is completed, Deployment Manager deletes
the branch from the development system. By storing branches in the development repository, Deployment
Manager keeps a history, which you can view, of the branches in a centralized location.

Ability to create separate product rules for test cases

You can now separately manage both application changes and test cases in the same pipeline by using a
separate product rule that contains only test cases. You can also choose a stage until which test cases are
deployed to ensure that test cases are not deployed on environments such as staging and production, where
they might not be needed. When you create test and production applications in Deployment Manager on
your development system by using the New Application wizard, the wizard automatically creates separate
product rules for your production and test applications.

API documentation now available

Documentation for Deployment Manager APIs is now included in the Documentation/readme-for-swagger.md
file, which is part of the DeploymentManager04_04_0x.zip file that you can download from Pega
Exchange. With the APIs, you can, for example, quickly create pipelines without using the Deployment
Manager interface.

Usability enhancements
For the Check guardrail compliance task, the default guardrail compliance score has been increased to
97.
Email notifications for Jenkins jobs now include a link to the Jenkins job.
You can now start a Jenkins job when Jenkins has cross-site request forgery (CSRF) protection enabled.
For pipelines that have Jenkins tasks, job history details for successful deployments have a link to the
Jenkins job.
The Pipeline list in the Merge Branches wizard no longer displays pipelines that are not configured to
support branches; previously, you received an error after submitting pipelines that did not support
branches.
If you are using the Merge Branches wizard but do not have pipelines configured for an application, you
can still use the wizard to merge branches into target applications.
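One of the usability items above concerns starting Jenkins jobs when CSRF protection is enabled. In standard Jenkins behavior, a remote POST trigger must carry a "crumb" header obtained from the crumb issuer endpoint (JENKINS_URL/crumbIssuer/api/json); the sketch below shows how that JSON response maps onto the required header. This describes Jenkins itself, not Deployment Manager internals.

```python
# Jenkins's crumb issuer returns a JSON payload with two fields:
#   "crumbRequestField" - the header name Jenkins expects (e.g. "Jenkins-Crumb")
#   "crumb"             - the token value to send in that header
def crumb_header(crumb_response):
    """Turn the crumbIssuer JSON payload into the header Jenkins expects."""
    return {crumb_response["crumbRequestField"]: crumb_response["crumb"]}

# Example payload of the shape returned by /crumbIssuer/api/json:
sample = {"crumbRequestField": "Jenkins-Crumb", "crumb": "abc123"}
print(crumb_header(sample))   # {'Jenkins-Crumb': 'abc123'}
```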

Known issues
The following are known issues in this release:

The Pega Platform 8.1 and 8.2 versions of the Rule rebasing and Rebasing rules to obtain latest versions
help topics should state that rule rebasing is supported in Deployment Manager.
The Publishing a branch to a repository help topic should state that you can use Deployment Manager to
start a deployment by publishing a branch to the source development system even if you have multiple
pipelines per application version. Also, the note in this help topic no longer applies.

Deployment Manager 4.3.2


Deployment Manager 4.3.2 includes the following resolved issues.

Resolved issues
The following issue has been resolved:

Pipelines not visible on the Deployment Manager landing page

On systems running Pega CRM applications, pipelines were not visible on the Deployment Manager landing
page when the datapage/newgenpages dynamic system setting was set to false. This setting disabled the new
clipboard implementation for optimized read-only data pages. Pipelines are now visible regardless of the
dynamic system setting value.

Deployment Manager 4.3.1


Deployment Manager 4.3.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Ability to configure notifications in Deployment Manager

You can now configure notifications in Deployment Manager without having to configure an email account
and listener in Dev Studio. You can also choose which notifications to receive, such as whether Pega unit test
tasks succeeded or failed. You can receive notifications through email, in the notification gadget, or both,
and you can create custom notification channels to receive notifications through other means such as text
messages or mobile push notifications.

Note: To use notifications, you must install or upgrade to Pega Platform 8.1.3 on the orchestration server.

Publishing application changes has been consolidated with viewing application versions in App Studio

You can now publish application changes in App Studio and view information about your Deployment
Manager application versions on one page. By accessing publishing features and viewing information in one
place, you can more intuitively use Deployment Manager with App Studio.

Deployment Manager 4.2.1


Deployment Manager 4.2.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Ability to add and manage roles, privileges, and users

Deployment Manager now provides default roles that specify privileges for super administrators and
application administrators. Super administrators can add roles and specify their privileges, and both super
administrators and application administrators can add users and assign them roles for specified applications.
By specifying roles and privileges for Deployment Manager users, you can manage your users more
effectively by controlling access to features for each type of user.

New Deployment Manager portal

Deployment Manager now provides a dedicated Deployment Manager portal that does not require access to
the Dev Studio portal to access Deployment Manager features. The portal also provides enhancements such
as a navigation panel from which you can easily access features, such as reports, without having to open
specific pipelines. Additionally, when you add a pipeline or modify pipeline settings, you can now open the
rule forms for repositories and authentication profiles in Dev Studio from within Deployment Manager.

Ability to merge branches that span multiple application layers

You can now merge a branch that has rulesets that are in multiple applications if all the rulesets are in the
application stack for the pipeline application. By doing so, you can, for example, merge changes that affect
both a framework and an application layer. You can also merge test assets with the rules that you are testing
without the test assets and rules being in the same application.

Deployment Manager 4.1.1


Deployment Manager 4.1.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Redesigned, more intuitive landing page and user interface

Deployment Manager has been redesigned to have a more intuitive interface so that you can quickly access
features as you interact with your pipeline. The Deployment Manager landing page now displays a snapshot
of your pipeline configuration, which provides status information such as whether a deployment failed and
on what stage the failure occurred. Additionally, when you click a pipeline to open it, Deployment Manager
now displays important information about your pipeline such as the number of branches that are queued for
merging on the development system.

Manage aged updates

You can now manage rules and data types in an application package that are older than the instances on a
system. By importing aged updates, skipping the import, or manually deploying
application packages on a system, you have more flexibility in determining the application contents that you
want to deploy.

New testing tasks, which include running Pega scenario tests

Several new test tasks have been added so that you can deliver higher-quality software by ensuring that your
application meets the test criteria that you specify. On the candidate systems in your pipeline, you can now
perform the following actions:
Run Pega scenario tests, which are end-to-end, UI-based tests that you create within Pega Platform.
Start and stop test coverage at the application level to generate a report that identifies the executable
rules in your application that are covered or not covered by tests.
Refresh the Application Quality dashboard with the latest information so that you can see the health of
your application and identify areas that need improvement before you deploy your application.

Enhancements to publishing application changes to a pipeline in App Studio

You can submit application changes to a pipeline in App Studio to start a deployment in Deployment
Manager. The following enhancements have been made:
When you submit application changes into a pipeline, patch versions of the main application are now
created.
You can now add comments, which will be published with your application.
You can now associate user stories and bugs with an application.
You can now view information, such as who published the application and when, for the application
versions that you have submitted.

Run Pega unit tests on branches before merging

You can now run Pega unit tests on branches before they are merged in the pipeline for either the pipeline
application or an application that is associated with an access group. By validating your data against Pega
unit tests, you can deploy higher quality applications.

Deployment Manager 3.4.1


Deployment Manager 3.4.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Manage aged updates

You can now manage rules and data types in an application package that are older than the instances on a
system. By importing aged updates, skipping the import, or manually deploying
application packages on a system, you have more flexibility in determining the application contents that you
want to deploy.

Ability to merge branches that span multiple application layers

You can now merge a branch that has rulesets that are in multiple applications if all the rulesets are in the
application stack for the pipeline application. By doing so, you can, for example, merge changes that affect
both a framework and an application layer. You can also merge test assets with the rules that you are testing
without the test assets and rules being in the same application.

Deployment Manager 3.3.1


Deployment Manager 3.3.1 includes the following enhancements and known issues.

Enhancements
The following enhancements are provided in this release:

New Verify security checklist task

You can now use the Verify security checklist task to ensure that your pipeline complies with security best
practices. It is automatically added to the stage before production when you create a pipeline.

Ability to diagnose pipelines

You can now diagnose your pipeline to verify information such as whether the target application and product
rule are on the development environment, connectivity between systems and repositories is working, and
pre-merge settings are correctly configured. You can also view troubleshooting tips and download logs.

Known issues
The following known issue exists in this release:

Rollback does not work for Pega CRM applications

If you are using a CRM application, you cannot roll back a deployment to a previous deployment.

Deployment Manager 3.2.1


Deployment Manager 3.2.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Simplified pipeline setup

Pipeline setup has been simplified when you install Deployment Manager and when you configure pipelines.
The following enhancements have been made:
Deployment Manager now provides the Pega Deployment Manager application with default operators
and authentication profiles when you install it. You do not need to create authentication profiles for
communication between candidate systems and the orchestration server.
If you are using Pega Cloud, Deployment Manager is automatically populated with the URLs of all the
systems in your pipeline so that you do not need to configure them.

New Check guardrail compliance task

You can now use the Check guardrail compliance task to ensure that the deployment does not proceed if the
application does not comply with best practices for building applications in Pega Platform. This task is
automatically added to all the stages in your pipeline.
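The behavior of such a gate can be sketched as a small function that blocks a deployment when the compliance score falls below a threshold. The threshold value of 97 here is an illustrative assumption, not the product default; in a real pipeline you configure the score that the task enforces.

```python
# Hypothetical sketch of a guardrail compliance gate, not the actual task.

def guardrail_gate(compliance_score, threshold=97):
    """Return (proceed, message) for a deployment given a compliance score."""
    if compliance_score >= threshold:
        return True, "Guardrail check passed; deployment continues."
    return False, (f"Guardrail check failed: score {compliance_score} is "
                   f"below threshold {threshold}; deployment does not proceed.")
```

For example, guardrail_gate(98) allows the deployment to continue, while guardrail_gate(90) stops it with a message explaining the failing score.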

New Approve for production task

Deployment Manager now provides an Approve for production task, which is automatically added to the
stage before production when you create a pipeline. You can assign this task to a user who approves the
application changes before the changes are deployed to production.

Ability to specify the test suite ID and access group for Pega unit testing tasks

For Pega unit testing tasks, you can now run all the Pega unit tests that are defined in a test suite for the
application pipeline. By using a test suite ID, you can run a subset of Pega unit tests instead of all Pega unit
tests for a pipeline application. You can also run all the Pega unit tests for an application that is associated
with an access group so that you can run Pega unit tests for an application other than the pipeline
application.
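The two selection options above can be sketched as a simple filter over a set of test cases: narrow by test suite ID, by access group, or by both. The test records and field names in this sketch are illustrative assumptions, not Pega data structures.

```python
# Hypothetical sketch of selecting which Pega unit tests run.

def select_tests(tests, suite_id=None, access_group=None):
    """Filter test cases by suite ID and/or access group; None means no filter."""
    selected = tests
    if suite_id is not None:
        selected = [t for t in selected if t.get("suite") == suite_id]
    if access_group is not None:
        selected = [t for t in selected if t.get("access_group") == access_group]
    return selected

# Illustrative test records; names and access groups are made up.
tests = [
    {"name": "TestA", "suite": "Smoke", "access_group": "MyApp:Authors"},
    {"name": "TestB", "suite": "Smoke", "access_group": "OtherApp:Authors"},
    {"name": "TestC", "suite": "Full", "access_group": "MyApp:Authors"},
]
```

Selecting by suite ID runs a subset of tests for the pipeline application; selecting by access group runs the tests of an application other than the pipeline application.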

Deployment Manager now supports first time deployments

Deployment Manager now supports first-time deployments, so you do not have to import your application
into each Pega Platform server on your candidate systems the first time that you configure Deployment
Manager.

Deployment Manager 3.1.1


Deployment Manager 3.1.1 includes the following enhancements.

Enhancements
The following enhancements are provided in this release:

Ability to create custom repository types

You can now create custom repository types and manage your artifacts with them when you use Deployment
Manager. For example, you can create a Nexus repository type and use it to move your application package
between candidate systems in a pipeline. By creating custom repository types, you can use a wider variety
of repository types with your artifacts to extend the functionality of Deployment Manager.

Use the Merge Branches wizard to submit branches into a continuous integration and delivery pipeline

You can now submit branches into a continuous integration and delivery (CI/CD) pipeline by using the Merge
Branches wizard in Designer Studio. Deployment Manager can then run pre-merge criteria on branches on
one system so that you do not need to configure additional systems for both branch development and
merging.

Support for Pega Cloud

Beginning with Pega 7.4, all current and new Pega Cloud customers have a free dedicated sandbox to run
Deployment Manager, which provides the following features:
Default repositories that store and move your application package between systems in the pipeline.
Ability to view, download, and remove application packages from repositories so that you can manage
your cloud storage space.
Ability to deploy an existing application package.
Ability to create multiple pipelines for one version of an application. For example, you can create a
pipeline with only a production stage if you want to deploy a build to production separately from the
rest of the pipeline.

Ability to manage application package artifacts

You can now browse, download, and delete application package artifacts from the orchestration server. You
do not have to log in to repositories to delete artifacts from them.

Ability to move existing artifacts through pipelines

You can move existing artifacts through your pipelines. Existing artifacts are maintained in repositories, and
you can move them through progressive stages in the pipeline.

Deployment Manager 2.1.4


Deployment Manager 2.1.4 includes the following resolved issues.

Issues addressed in this release


The following issues were addressed in this release:

Publishing application packages to the production repository sometimes fails in multinode environments

In multinode staging environments, a node retrieves an application package from the development
repository and places it into its service export folder to be published to the production repository. However,
Deployment Manager sometimes cannot publish it to the production repository, because the request might
be sent to a different node. This issue has been fixed so that if Deployment Manager sends a request to a
node that does not have the application package, that node retrieves the package from the development
repository and publishes it to the production repository.

Deployment Manager 2.1.3


Deployment Manager 2.1.3 includes the following enhancements.

Enhancements
The following enhancement is provided in this release:

Improved structure and content of email notifications

Improvements have been made to the email notifications that are sent to users when an event occurs. For
example, the email that is sent when a PegaUnit test task fails now includes an attached log file that
provides details of each failed PegaUnit test case.

Deployment Manager 2.1.2


Deployment Manager 2.1.2 includes the following known issues.

Known issues
The following issue exists in this release:

The PegaDevOps-ReleaseManager agent points to the wrong access group.

Because this agent is not associated with the correct access group, it cannot process Deployment Manager
activities in the background.

To resolve the issue, after you import and install Deployment Manager 02.01.02, perform the following steps
on the orchestration server:

1. Update your Pega Platform application so that it is built on PegaDeploymentManager 02.01.02:


a. In the Designer Studio header, click the name of your application, and then click Definition.
b. In the Built on application section, in the Version field, press the Down Arrow key and select
02.01.02.
c. Click Save.
2. Update the agent schedule for the Pega-DevOps-ReleaseManager agent to use the
PegaDeploymentManager:Administrators access group.
a. In Designer Studio, click Records > SysAdmin > Agent Schedule.
b. Click the Pega-DevOps-ReleaseManager agent.
c. Click Security.
d. In the Access Group field, press the Down Arrow key and select
PegaDeploymentManager:Administrators.
e. Click Save.
Deployment Manager 1.1.3


Deployment Manager 1.1.3 includes the following enhancements.

Enhancements
The following enhancement is provided in this release:

Improved structure and content of email notifications

Improvements have been made to the email notifications that are sent to users when an event occurs. For
example, the email that is sent when a PegaUnit test task fails now includes an attached log file that
provides details of each failed PegaUnit test case.

Deployment Manager 1.1.2


Deployment Manager 1.1.2 includes the following known and resolved issues.

Known issues
The following issue exists in this release:

The PegaDevOps-ReleaseManager agent points to the wrong access group.

Because this agent is not associated with the correct access group, it cannot process Deployment Manager
activities in the background.

To resolve the issue, after you import and install Deployment Manager 01.01.02, perform the following steps
on the orchestration server:

1. Update your Pega Platform application so that it is built on PegaDeploymentManager 01.01.02:


a. In the Designer Studio header, click the name of your application, and then click Definition.
b. In the Built on application section, in the Version field, press the Down Arrow key and select
01.01.02.
c. Click Save.
2. Update the agent schedule for the Pega-DevOps-ReleaseManager agent to use the
PegaDeploymentManager:Administrators access group.
a. In Designer Studio, click Records > SysAdmin > Agent Schedule.
b. Click the Pega-DevOps-ReleaseManager agent.
c. Click Security.
d. In the Access Group field, press the Down Arrow key and select
PegaDeploymentManager:Administrators.
e. Click Save.

Resolved issues
The following issue was resolved in this release:

Selections that were made to the Start build on merge check box were not applied when editing a pipeline.

When you edit a pipeline and either select or clear the Start build on merge check box, your changes are
now applied. Additionally, the check box is cleared by default.

Getting started with Deployment Manager


Deployment Manager is a simple, intuitive, and ready-to-use application that offers built-in DevOps capabilities to
users. It leverages Pegasystems' market-leading case management technology to manage an automated
orchestration engine, enabling you to build and run continuous integration and continuous delivery (CI/CD)
pipelines in a model-driven manner.

You can run deployments involving your application updates with the click of a button, without the need for
third-party automation services such as Jenkins or Bamboo. Fully automated pipelines help to significantly
reduce the lead time to deliver value to end users.

Using a standardized way to deploy application changes with guardrail-related and testing-related best practices
that are built into the out-of-the-box CI/CD models results in substantial operational efficiencies.
For answers to frequently asked questions, see the Deployment Manager FAQ page.

Understanding key features supported

Deployment Manager provides a number of features so that you can manage your workflows. For example,
Deployment Manager supports continuous integration, continuous delivery, test execution, reporting,
diagnostics, manual approvals, deployment cancellations, change rollbacks, roles and privileges, and
notifications.

Viewing the overview video

The following video provides an overview of Deployment Manager: https://community.pega.com/video-library/overview-infinity-deployment-manager.

Installing Deployment Manager

If you are using Deployment Manager on-premises, you must first install it.

Upgrading to a new release

If you are using Deployment Manager either on-premises or on Pega Cloud Services environments, you must
perform steps to upgrade to a new release.

Setting up and configuring Deployment Manager for a quick start

Deployment Manager is ready to use out of the box. There is no need to build on top of it; however, some
initial configurations are needed before you can get started. For details about how Deployment Manager
works, see Understanding Deployment Manager architecture and workflows.

Using Deployment Manager

After you set up and configure Deployment Manager, you can begin using it to create pipelines. You can also
do a number of other tasks, such as creating Deployment Manager roles and users and configuring the
notifications that you want to receive. For detailed information, see Configuring and running pipelines with
Deployment Manager 4.8.x.

Using troubleshooting tips

If you encounter issues with Deployment Manager, you can troubleshoot it in a number of ways.

Obtaining support

If you experience problems using Deployment Manager, submit a support request to My Support Portal.

Understanding key features supported


Deployment Manager provides a number of features so that you can manage your workflows. For example,
Deployment Manager supports continuous integration, continuous delivery, test execution, reporting, diagnostics,
manual approvals, deployment cancellations, change rollbacks, roles and privileges, and notifications.

Viewing the overview video


The following video provides an overview of Deployment Manager: https://community.pega.com/video-library/overview-infinity-deployment-manager.

Installing Deployment Manager


If you are using Deployment Manager on-premises, you must first install it.

On-premises users can download Deployment Manager from https://community1.pega.com/exchange/components/deployment-manager.

For information about installing Deployment Manager, see Installing or upgrading to Deployment Manager 4.7.x.

Beginning with Pega Platform 7.4, Pega Cloud Services users have a dedicated instance in their virtual private
cloud (VPC) at the time of onboarding with Deployment Manager functionality preinstalled.

Note: This instance is referred to as the orchestration server and contains the “DevOps” keyword in the URL.

Upgrading to a new release


If you are using Deployment Manager either on-premises or on Pega Cloud Services environments, you must
perform steps to upgrade to a new release.

On-premises users can directly download the latest release from https://community1.pega.com/exchange/components/deployment-manager.

Pega Cloud Services users should create a support ticket to request a new release.

After you obtain the latest release, refer to the upgrade documentation for information about upgrading to the
latest release. For more information, see Installing or upgrading to Deployment Manager 4.8.x.

Setting up and configuring Deployment Manager for a quick start


Deployment Manager is ready to use out of the box. There is no need to build on top of it; however, some initial
configurations are needed before you can get started. For details about how Deployment Manager works, see
Understanding Deployment Manager architecture and workflows.

The following list of terms defines key Deployment Manager concepts:

Candidate systems – the individual environments that host the target application, typically the development,
QA, staging, and production environments.
Repository – the artifact repository that stores the application archive as defined by a product rule.
DMAppAdmin – the operator ID, provided out of the box, that is used by an application pipeline to execute all
the tasks such as deploying, running tests, checking guardrail scores, and so on.
DMReleaseAdmin – the operator ID, provided out of the box, that has administrative privileges for
Deployment Manager. This is the operator ID that you use to log in to Deployment Manager.

Note: You should make changes only in the development environment and then move them to higher
environments. Do not make changes in any other environment.

1. Enable the DMAppAdmin and DMReleaseAdmin operator IDs:

a. Log in to the orchestration server and enable the DMReleaseAdmin operator ID.

b. Log in to candidate systems (development, QA, staging, and production) and enable the DMAppAdmin
operator ID. Ensure that the same password is set on all environments.

c. On the orchestration server, open the DMAppAdmin authentication profile and set the password to the
DMAppAdmin operator ID password that you set in step 1b.

d. On all candidate systems, open the DMReleaseAdmin authentication profile and set the password to the
DMReleaseAdmin operator ID password that you set in step 1a.

For detailed steps, see Configuring authentication profiles.

2. On each candidate system, open your target application and add PegaDevOpsFoundation as a built-on
application. For more information, see Configuring candidate systems.

3. To use branches for application development, set the RMURL dynamic system setting on the development
environment to be the orchestration server URL.

4. For on-premises users, set up repositories for artifact archiving. For more information, see Creating
repositories on the orchestration server and candidate systems.

Deployment Manager leverages JFrog Artifactory, Amazon S3, Microsoft Azure, or file system repository
types. After you configure one of these repositories, you will select one to use when you create your
pipelines.

5. Configure the product rule for your application.

You will specify this product rule when you create your pipeline.

6. To receive email notification for deployments, configure email accounts on the orchestration server.

For more information, see Configuring email accounts on the orchestration server.

7. If you are using Jenkins, configure Jenkins so that it can communicate with the orchestration server.
For more information, see Configuring Jenkins.
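The setup steps above can be restated as a checklist over a hypothetical pipeline configuration. Apart from RMURL and PegaDevOpsFoundation, which the steps themselves mention, the dictionary keys, example values, and messages in this sketch are assumptions made for illustration; they are not actual Deployment Manager settings.

```python
# Hypothetical setup checklist; key names and messages are illustrative.

SUPPORTED_REPOS = {"JFrog Artifactory", "Amazon S3", "Microsoft Azure", "file system"}

def setup_problems(config):
    """Return the setup steps that the given configuration has not completed."""
    problems = []
    if not config.get("operators_enabled"):
        problems.append("Enable the DMAppAdmin and DMReleaseAdmin operator IDs.")
    if "PegaDevOpsFoundation" not in config.get("built_on", []):
        problems.append("Add PegaDevOpsFoundation as a built-on application.")
    if config.get("use_branches") and not config.get("RMURL"):
        problems.append("Set the RMURL dynamic system setting to the orchestration server URL.")
    if config.get("repository_type") not in SUPPORTED_REPOS:
        problems.append("Configure a supported artifact repository.")
    if not config.get("product_rule"):
        problems.append("Configure the product rule for the application.")
    return problems

# Example configuration; the URL and product rule name are made up.
complete = {
    "operators_enabled": True,
    "built_on": ["PegaDevOpsFoundation"],
    "use_branches": True,
    "RMURL": "https://orchestrator.example.com",
    "repository_type": "Amazon S3",
    "product_rule": "MyAppProduct",
}
missing = setup_problems({**complete, "RMURL": ""})
```

A complete configuration yields no problems, while omitting RMURL on a branch-based development system surfaces the corresponding step.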

Using Deployment Manager


After you set up and configure Deployment Manager, you can begin using it to create pipelines. You can also do a
number of other tasks, such as creating Deployment Manager roles and users and configuring the notifications
that you want to receive. For detailed information, see Configuring and running pipelines with Deployment
Manager 4.8.x.

In general, perform the following steps:

1. Log in to the Deployment Manager portal on the orchestration server with the DMReleaseAdmin operator ID.

2. Create a pipeline by modeling stages and steps and specifying environments, applications, product rules,
and repositories.

3. Run diagnostics by clicking Actions > Diagnose pipeline to verify that your pipeline is correctly configured.

4. Run deployments directly from Deployment Manager or from development environments as you merge your
branches.

Using troubleshooting tips


If you encounter issues with Deployment Manager, you can troubleshoot it in a number of ways.

Remember the following troubleshooting tips:

Run diagnostics and follow troubleshooting tips if your deployments fail to run.
Review pipeline logs that are available on the pipeline landing page and the output from diagnostics to
troubleshoot your workflows.
Attach logs from Deployment Manager and the output from diagnostics in your support tickets.

Obtaining support
If you experience problems using Deployment Manager, submit a support request to My Support Portal.

Understanding Deployment Manager architecture and workflows


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a standardized deployment process so that you can
deploy predictable, high-quality releases without using third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application
package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager supports artifact management on repository types such as Amazon S3, file system,
Microsoft Azure, and JFrog Artifactory. Additionally, in Deployment Manager 3.3.x and later, you can create your
own repository types; for more information, see Creating and using custom repository types for Deployment
Manager. Deployment Manager also supports running automations on Jenkins that are not supported in Pega
Platform such as running external regression or performance tests. In addition, Pega Cloud pipelines are
preconfigured to use Amazon S3 repositories and are configured to use several best practices related to
compliance and automated testing.

Deployment Manager is installed on the orchestration server, on which release managers configure and run
pipelines. With Deployment Manager, you can see the run-time view of your pipeline as it moves through the
CI/CD workflow. Deployment Manager provides key performance indicators (KPIs) and dashboards that provide
performance information such as the deployment success rate, deployment frequency, and task failures. Use this
information to monitor and optimize the efficiency of your DevOps process.

Understanding CI/CD pipelines

A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous
delivery.

Understanding systems in the Deployment Manager CI/CD pipeline

The CI/CD pipeline comprises several systems and involves interaction with various Pega Platform servers.
For example, you can use a QA system to run tests to validate application changes.
Understanding repositories in the pipeline

Deployment Manager supports Microsoft Azure, JFrog Artifactory, Amazon S3, and file system repositories for
artifact management of application packages. For each run of a pipeline, Deployment Manager packages and
promotes the application changes that are configured in a product rule. The application package artifact is
generated on the development environment, published in the repository, and then deployed to the next
stage in the pipeline.

Understanding pipelines in a branch-based environment

If you use branches for application development, you can configure merge criteria on the pipeline to receive
feedback about branches, such as whether a branch has been reviewed or meets guardrail compliance
scores.

Understanding pipelines in an environment without branches

If you do not use branches for application development, but you use ruleset-based development instead, you
configure the continuous delivery pipeline in Deployment Manager.

Understanding CI/CD pipelines


A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous delivery.

In the continuous integration stage, developers continuously validate and merge branches into a target
application. In the continuous delivery stage, the target application is packaged and moved through progressive
stages in the pipeline. After application changes have moved through testing cycles, including Pega unit,
regression, performance, and load testing, application packages are deployed to a production system either
manually or, if you want to continuously deploy changes, automatically.

Note: You should make changes only in the development environment and then move those changes to a higher
environment. Do not make changes in any other environment.

Understanding systems in the Deployment Manager CI/CD pipeline


The CI/CD pipeline comprises several systems and involves interaction with various Pega Platform servers. For
example, you can use a QA system to run tests to validate application changes.

Pipelines comprise the following systems:

Orchestration server – Pega Platform system on which the Deployment Manager application runs and on
which release managers or application teams model and run their CI/CD pipelines. This system manages the
CI/CD workflow involving candidate systems in the pipeline.
Candidate systems – Pega Platform servers that manage your application's life cycle; they include the
following systems:
Development system – The Pega Platform server on which developers build applications and merge
branches into them. The product rule that defines the application package that is promoted to other
candidate systems in the pipeline is configured on this system. Distributed development environments
might have multiple development systems.

In this environment, developers develop applications on remote Pega Platform development systems
and then merge their changes on a main development system, from which they are packaged and
moved in the Deployment Manager workflow.
QA and staging systems – Pega Platform servers that validate application changes by using various
types of testing, such as Pega unit, regression, security, load, and performance testing.
Production system – Pega Platform server on which end users access the application.

Understanding repositories in the pipeline


Deployment Manager supports Microsoft Azure, JFrog Artifactory, Amazon S3, and file system repositories for
artifact management of application packages. For each run of a pipeline, Deployment Manager packages and
promotes the application changes that are configured in a product rule. The application package artifact is
generated on the development environment, published in the repository, and then deployed to the next stage in
the pipeline.

A pipeline uses development and production repositories. After a pipeline is started, the application package
moves through the pipeline life cycle in the following steps:
1. The development system publishes the application package to the development repository.
2. The QA system retrieves the artifact from the development repository and performs tasks on the artifact.
3. The staging system retrieves the artifact from the development repository and publishes it to the production
repository.
4. The production system deploys the artifact from the production repository.
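The four steps can be traced with the two repositories modeled as plain dictionaries. This sketch shows the promotion flow only; it is not Pega's API, and the package and build names are made up.

```python
# Hypothetical trace of the artifact life cycle through the pipeline.

def promote(package):
    """Trace one application package through the pipeline repositories."""
    dev_repo, prod_repo, log = {}, {}, []
    # 1. The development system publishes the package to the development repository.
    dev_repo[package] = "build-1"
    log.append("development: published to development repository")
    # 2. The QA system retrieves the artifact and performs its tasks on it.
    artifact = dev_repo[package]
    log.append("qa: retrieved %s from development repository" % artifact)
    # 3. The staging system retrieves the artifact from the development
    #    repository and publishes it to the production repository.
    prod_repo[package] = dev_repo[package]
    log.append("staging: published to production repository")
    # 4. The production system deploys the artifact from the production repository.
    log.append("production: deployed %s" % prod_repo[package])
    return log

trace = promote("MyApp_01.01.01.zip")
```

Note that the same artifact moves through every stage: nothing is rebuilt after the development system publishes the package, which is what makes the releases predictable.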

Understanding pipelines in a branch-based environment


If you use branches for application development, you can configure merge criteria on the pipeline to receive
feedback about branches, such as whether a branch has been reviewed or meets guardrail compliance scores.

If there are no merge conflicts, and the merge criteria are met, the branch is merged; the continuous delivery
pipeline is then started either manually or automatically.

The workflow of tasks in a branch-based pipeline is as follows:

1. One or more developers make changes in their respective branches.


2. Merge criteria, which are configured in Deployment Manager, are evaluated when branches are merged.
3. Continuous delivery starts in one of the following ways:
a. Automatically, after a branch successfully passes the merge criteria. If another continuous delivery
workflow is in progress, branches are queued and started after the previous workflow has been
completed.
b. Manually, if you have multiple development teams and want to start pipelines on a certain schedule.
4. During a deployment run, branches are queued for merging and merged after the deployment has been
completed.
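The merge and queueing behavior above can be sketched as follows. The criteria names and the queue mechanics are simplified assumptions for this example, not the product's implementation.

```python
# Hypothetical sketch of branch submission in a branch-based pipeline.
from collections import deque

def submit_branch(branch, criteria, queue, deployment_running):
    """Return 'merged', 'queued', or 'rejected: <unmet criteria>'."""
    unmet = [name for name, passed in criteria.items() if not passed]
    if unmet:
        return "rejected: " + ", ".join(unmet)
    if deployment_running:
        queue.append(branch)   # merged after the current deployment completes
        return "queued"
    return "merged"

queue = deque()
passing = {"reviewed": True, "guardrail_compliant": True}
r1 = submit_branch("feature-1", passing, queue, deployment_running=False)
r2 = submit_branch("feature-2", passing, queue, deployment_running=True)
r3 = submit_branch("feature-3", {"reviewed": False, "guardrail_compliant": True},
                   queue, deployment_running=False)
```

In this sketch, feature-1 merges immediately, feature-2 is queued behind the in-progress deployment, and feature-3 is rejected because it has not been reviewed.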

The following figure describes the workflow in a branch-based environment.

Workflow in a branch-based environment

In a distributed, branch-based environment, you can have multiple development systems, and developers author
and test the application on remote Pega Platform development systems. They then merge their changes on a
source development system, from which the changes are packaged and moved in the Deployment Manager
workflow.

The following figure describes the workflow in a distributed, branch-based environment.

Workflow in a distributed branch-based environment

Understanding pipelines in an environment without branches


If you do not use branches for application development, but you use ruleset-based development instead, you
configure the continuous delivery pipeline in Deployment Manager.

The workflow of tasks in this pipeline is as follows:

1. Developers update rules and check them in directly to the application rulesets on the development system.
2. The product rule that contains the application rules to be packaged and moved through the systems in the
pipeline is on the development system.
3. Continuous delivery is started manually at a defined schedule by using Deployment Manager.

The following figure describes the workflow of a pipeline in an environment without branches.

Workflow in an environment without branches

Understanding best practices for using branches with Deployment Manager

Follow these best practices when you use branches in your Deployment Manager pipelines. The specific practices
depend on whether you have a single development team or multiple development teams in a distributed
environment.

If you use branches for application development in a non-distributed environment, developers work on branches
and merge them on the development system, after which the continuous delivery pipeline is started
automatically or manually.
In a distributed branch-based environment, you can have multiple development systems, and developers author
and test the application on a remote development system. They then merge their changes on a source
development system, from which the changes are merged and moved in the Deployment Manager workflow.

For more information about best practices to follow in the DevOps pipeline, see Understanding the DevOps
release pipeline.

Using branches with Deployment Manager

Best practices for using branches in Deployment Manager depend on whether you have a single
development team or multiple teams in a distributed environment.

Using branches with Deployment Manager


Best practices for using branches in Deployment Manager depend on whether you have a single development
team or multiple teams in a distributed environment.

In general, perform the following steps when you use branches with Deployment Manager:

1. In Deployment Manager, create a pipeline for the target application. If your application consists of multiple
built-on applications, it is recommended that you create separate pipelines for each application.

By using separate pipelines for built-on applications, you can perform targeted testing of each built-on
application, and other developers can independently contribute to application development.

For more information about multiple built-on applications, see Using multiple built-on applications.

2. Ensure that the target application is password-protected on all your systems in the pipeline.

a. In Designer Studio (if you are using Deployment Manager 3.4.x) or Dev Studio (if you are using
Deployment Manager 4.1.x or later), switch to the target application by clicking the name of the
application in the header, clicking Switch Application, and then clicking the target application.

b. In the Designer Studio or Dev Studio header, click the name of the target application, and then click
Definition.

c. Click Integration & Security.

d. In the Edit Application form, click the Require password to update application checkbox.

e. Click Update password.

f. In the Update password dialog box, enter a password, reenter it to confirm it, and click Submit.

g. Save the rule form.

3. If you want to create a separate product rule for a test application, create a test application that is built on
top of the main target application. For more information, see Using branches and test cases.

4. On the source development system (in a distributed environment) or development system (in a
nondistributed environment), create a development application that is built on top of either the target
application (if you are not using a test application) or the test application.

5. Include the PegaDevOpsFoundation application as a built-on application for either the team application or the
target application.

a. In either the development application or target application, in the Dev Studio or Designer Studio
header, click the application, and then click Definition.

b. In the Edit Application form, on the Definition tab, in the Built on applications section, click Add
application.

c. In the Name field, press the down arrow key and select PegaDevOpsFoundation.

d. In the Version field, press the down arrow key and select the version for the Deployment Manager
version that you are using.

e. Save the rule form.

f. If you are using a distributed environment, import the application package, including the target,
development, and test (if applicable) applications, into the remote development system.
6. Do one of the following actions:

If you are using a distributed environment, add branches to the team application on the remote
development system. For more information, see Adding branches to your application.

If you are using multiple built-on applications, maintain separate branches for each target application.
For more information, see the Pega Community article Using multiple built-on applications.

If you are using a non-distributed environment, create a branch of your production rulesets in the team
application. For more information, see Adding branches to your application. You should create separate
branches for each target pipeline.

7. Perform all development work in the branch.

8. To merge branches, do one of the following actions:

If you are using either a non-distributed network (in any version of Deployment Manager) or a
distributed network (in Deployment Manager 4.4.x or later), first lock the branches that you want to
validate and merge in the application pipeline and then submit the branches in the Merge Branches
wizard.

For more information, see Submitting a branch into an application by using the Merge Branches wizard.

If you are using a distributed network and Deployment Manager 4.4.x or later, and are publishing branches to
a source development system to start a build, do the following actions:

1. On the remote development system, publish the branch to the repository on the source
development system to start the pipeline. For more information, see Publishing a branch to a
repository.
2. If there are merge conflicts, log in to the team application on the source development system, add
the branch to the application, resolve the conflict, and then merge the branch.

If you are using a distributed network and versions of Deployment Manager earlier than 4.4.x with one
pipeline per application, do the following steps so that you can merge branches onto the source
development system:

1. On the remote development system, create a Pega repository that points to the target application
on the source development system. For more information, see Adding a Pega repository.
2. On the remote development system, publish the branch to the repository on the source
development system to start the pipeline. For more information, see Publishing a branch to a
repository.
3. If there are merge conflicts, log in to the team application on the source development system, add
the branch to the application, resolve the conflict, and then merge the branch.
If you are using a distributed network and versions of Deployment Manager earlier than 4.4.x with multiple
pipelines per application and application version:
1. Package the branch on the remote development system. For more information, see Packaging a
branch.
2. Export the branch.
3. Import the branch to the source development system and add it to the team application. For more
information, see Importing rules and data by using the Import wizard.
4. Merge branches into the target application to start the pipeline by using the Merge Branches
wizard.

For more information, see Merging branches into target rulesets.

Managing test cases separately in Deployment Manager


In Deployment Manager 4.4.x and later, you can package and deploy test cases separately on the candidate
systems in the pipeline. When you configure a pipeline in Deployment Manager, you specify the details of the test
package that you want to deploy, including the stage in the pipeline until which you want to deploy the package.

To use a separate test package, you must create a test application layer on the development systems in your
pipeline.
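
The "deploy until a stage" behavior can be illustrated with a small sketch. The stage names and the function are hypothetical, not Deployment Manager configuration values.

```python
# Illustrative sketch of deploying a test package only through a chosen
# stage; stage names are typical examples, not fixed Deployment Manager values.
STAGES = ["development", "qa", "staging", "production"]

def stages_receiving_tests(deploy_until):
    # The test package is deployed to every stage up to and including
    # the configured stage, and no further.
    cutoff = STAGES.index(deploy_until)
    return STAGES[: cutoff + 1]

qa_only = stages_receiving_tests("qa")
```

This captures the usual intent: test cases travel with the application through QA but are kept out of later stages such as production.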

Configuring the application stack on the development or source development system

You must first configure your application stack on either the development or source development system.

Configuring the application stack on the remote development system in a distributed, branch-based
environment
If you are using a distributed, branch-based environment, you must configure the application stack on the
remote development system.

Using branches and test cases

Branches in the development application can contain rulesets that belong to the target application, test
application, or both. When you start a deployment either by using the Merge Branches wizard or by
publishing a branch to a repository on the main development system, the branches in both the target and
test applications are merged in the pipeline.

Configuring pipelines to use test cases

When you add or modify a pipeline, you specify whether you want to deploy test cases and then configure
details for the test application, including its name and access group to which it belongs, in the Application
test cases section. You also select the stage until which you want to deploy the pipeline. For more
information about using Deployment Manager, see Configuring an application pipeline.

Configuring the application stack on the development or source development system
You must first configure your application stack on either the development or source development system.

Configure the application stack according to one of the following scenarios:

If you are using a distributed, branch-based environment, complete the following steps on the remote
development system.
If you are using a branch-based environment, complete the following steps on the development system.
If you are not using branches, complete the following steps on the development system.

Configure the application stack by performing the following steps:

1. Create the target application.

2. Create a test application, which contains the test rulesets that you want to separately deploy, that is built on
the target application.

3. Create a development application that is built on top of the test application, which developers can log in to
so that they can create and work in branches.

4. Lock both the target and test applications.
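
The resulting application stack can be pictured as a chain of built-on applications. The application names and the dictionary layout below are invented for illustration.

```python
# Hypothetical model of the application stack described in the steps above:
# a development application built on a test application, which is built on
# the target application. Application names are illustrative.
stack = {
    "MyAppDev":  {"built_on": "MyAppTest", "locked": False},  # developers create branches here
    "MyAppTest": {"built_on": "MyApp",     "locked": True},   # test rulesets, deployed separately
    "MyApp":     {"built_on": None,        "locked": True},   # target application
}

def built_on_chain(app, apps):
    # Walk the built-on chain from the development layer down to the target.
    chain = [app]
    while apps[app]["built_on"]:
        app = apps[app]["built_on"]
        chain.append(app)
    return chain

chain = built_on_chain("MyAppDev", stack)
```

Only the unlocked development layer accepts changes; the locked target and test layers change exclusively through branch merges.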

Configuring the application stack on the remote development system in a distributed, branch-based environment
If you are using a distributed, branch-based environment, you must configure the application stack on the remote
development system.

Complete the following steps:

1. Create the target application.

2. Create a test application, which contains the test rulesets that you want to separately deploy, that is built on
the target application.

3. Lock both the target and test applications.

4. Lock both the target and test application rulesets.

Using branches and test cases


Branches in the development application can contain rulesets that belong to the target application, test
application, or both. When you start a deployment either by using the Merge Branches wizard or by publishing a
branch to a repository on the main development system, the branches in both the target and test applications are
merged in the pipeline.

Configuring pipelines to use test cases


When you add or modify a pipeline, you specify whether you want to deploy test cases and then configure details
for the test application, including its name and access group to which it belongs, in the Application test cases
section. You also select the stage until which you want to deploy the pipeline. For more information about using
Deployment Manager, see Configuring an application pipeline.

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable test
coverage, and Validate test coverage tasks are run for the access group that is specified in the Application test
cases section.

You must also perform the following steps on the candidate system on which you are running tests:

1. Log in to the test application.

2. In the header of Dev Studio, click Configure > Application > Quality > Settings.

3. Select the Include built-on applications radio button, and then click Save.

Creating and using custom repository types for Deployment Manager
In Deployment Manager 3.1.x and later, you can create custom repository types to store and move your artifacts.
For example, you can create a Nexus repository and use it similarly to how you would use a Pega Platform-
supported repository type such as file system. By creating custom repository types, you can extend the
functionality of Deployment Manager through the use of a wider variety of repository types with your artifacts.
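
As a sketch of the idea, a custom repository type supplies the same publish and retrieve operations that a built-in type provides. The class and method names below are an assumption for illustration and do not reflect the actual Pega repository interface.

```python
# Hypothetical shape of a repository abstraction: a custom type plugs in
# beside built-in types such as file system. Method names are illustrative.
class FileSystemRepository:
    def __init__(self):
        self._store = {}

    def publish(self, name, artifact):
        # Store an application package under its file name.
        self._store[name] = artifact

    def retrieve(self, name):
        # Fetch a previously published package.
        return self._store[name]

class NexusRepository(FileSystemRepository):
    # A real custom type would override publish/retrieve with calls to the
    # external service; this sketch reuses the in-memory store.
    pass

repo = NexusRepository()
repo.publish("MyApp_01.01.01.zip", b"package-bytes")
```

Because both types expose the same operations, the pipeline can move artifacts without caring which repository type backs a given stage.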

To create a custom repository type to use with Deployment Manager, complete the following steps:

1. Create a custom repository type. For more information, see Creating a custom repository type.

2. If you are using Deployment Manager 3.3.x or 4.1.x or later on each candidate system, add the ruleset that
contains the custom repository type as a production ruleset to the PegaDevOpsFoundation:Administrators
access group.

a. In the header of either Designer Studio (if you are using Deployment Manager 3.3.x) or Dev Studio (if
you are using Deployment Manager 4.1.x or later), click Records > Security > Access Group.

b. Click PegaDevOpsFoundation:Administrators.

c. Click Advanced.

d. In the Run time configuration section, click the Production Rulesets field, press the Down arrow key, and
select the ruleset that contains the custom repository type.

e. Save the rule form.

3. Import the ruleset on which the custom repository is configured into the orchestration system and add the
ruleset to the PegaDeploymentManager application stack.

a. On the orchestration system, import the ruleset by using the Import wizard. For more information, see
Importing rules and data by using the Import wizard.

b. In either the Designer Studio or Dev Studio header, in the Application field, click
PegaDeploymentManager, and then click Definition.

c. On the Edit Application rule form, in the Application rulesets field, click Add ruleset.

d. Click the field that is displayed, press the Down arrow key, and then select the ruleset that contains the
custom repository type.

e. Save the rule form.

Configuring Deployment Manager 4.x for Pega Platform 7.4


You can use Deployment Manager 4.x if Pega Platform 7.4 is installed on your candidate systems (development,
QA, staging, and production). You can use many of the latest features that were introduced in Deployment
Manager 4.x, such as managing your deployments in a dedicated portal.

Understanding usage information


When you use Deployment Manager 4.x with Pega 7.4, certain features are not supported. These features
include pipeline tasks and enhancements to the Merge Branches wizard.

Configuring Deployment Manager 4.x to work with Pega 7.4

Configure the orchestration server and candidate systems so that Deployment Manager 4.x works with Pega
7.4. Use Deployment Manager 4.x on the orchestration system with candidate systems that are running
Pega 7.4 and Deployment Manager 3.4.x.

Understanding aged updates


An aged update is a rule or data instance in an application package that is older than an instance that is on a
system to which you want to deploy the application package. By being able to import aged updates, skip the
import, or manually deploy your application changes, you now have more flexibility in determining the rules that
you want in your application and how you want to deploy them.

For example, you can update a dynamic system setting on a quality assurance system, which has an application
package that contains the older instance of the dynamic system setting. Before Deployment Manager deploys the
package, the system detects that the version of the dynamic system setting on the system is newer than the
version in the package and creates a manual step in the pipeline.
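
The detection described in this example amounts to a timestamp comparison. The field names and timestamps below are assumptions for illustration, not the actual rule metadata that Deployment Manager inspects.

```python
# Illustrative check for an aged update: an instance in the package that is
# older than the instance already on the target system.
from datetime import datetime

def is_aged_update(package_instance, system_instance):
    # True when the target system already holds a newer version.
    return package_instance["updated"] < system_instance["updated"]

pkg = {"name": "MyDSS", "updated": datetime(2023, 1, 10)}
on_system = {"name": "MyDSS", "updated": datetime(2023, 3, 2)}

# When this is True, Deployment Manager pauses with a manual step: import
# the aged update, skip the import, or deploy the change manually.
aged = is_aged_update(pkg, on_system)
```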

Understanding usage information


When you use Deployment Manager 4.x with Pega 7.4, certain features are not supported. These features include
pipeline tasks and enhancements to the Merge Branches wizard.

Note the following usage limitations:

This configuration does not support the following features:


Pipeline tasks:
Validate test coverage
Refresh application quality
Run Pega scenario tests
Enable test coverage
Merge Branches wizard:
Associating user stories and bugs with a branch
Locked branches
Merging branches that span application layers

In Deployment Manager 4.5.x, some of the repository diagnostics do not work for candidate systems that are
running Pega 7.4. These diagnostics work in Deployment Manager 4.6.x.

Configuring Deployment Manager 4.x to work with Pega 7.4


Configure the orchestration server and candidate systems so that Deployment Manager 4.x works with Pega 7.4.
Use Deployment Manager 4.x on the orchestration system with candidate systems that are running Pega 7.4 and
Deployment Manager 3.4.x.

1. On the orchestration system, install or upgrade to the latest version of Pega Platform.

2. On the orchestration system, install or upgrade to the latest version of Deployment Manager 4.x. For more
information, see Installing or upgrading to Deployment Manager 4.8.x.

3. If Deployment Manager 3.4.1 is installed on the candidate systems, go to step 4; otherwise, do the
following steps:

1. Install and configure the latest version of Deployment Manager 4.x on your candidate systems. For
more information, see Installing or upgrading to Deployment Manager 4.8.x.
2. Add PegaDevOpsFoundation 3.4.1 to your application stack by going to Pega Marketplace and
downloading it.
3. Extract the DeploymentManager_03.04.01.zip file.
4. Use the Import wizard to import the PegaDevOpsFoundation_4.zip file.
5. In the header of Dev Studio, click the name of your application, and then click Definition.

For more information about the Import wizard, see Importing rules and data by using the Import
wizard.

6. In the Built on application section, click Add application.


7. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.
8. In the Version field, press the Down arrow key and select 3.4.1.
9. Click Save.

4. Create and configure an application pipeline.

For more information, see Configuring an application pipeline.

5. Run diagnostics to ensure that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Related Content
Article

Installing or upgrading to Deployment Manager 4.8.x

Enabling and disabling the chatbot


Use the chatbot to obtain more information about common Deployment Manager issues, such as branch merging
and pipeline configuration. You can disable and enable the chatbot. By default, the chatbot is enabled.

Only super administrators can enable and disable the chatbot. For more information about user roles, see
Understanding roles and users.

1. In the navigation pane, click Settings > General settings.

2. Do one of the following actions:

To enable the chatbot, select the Enable self-service Deployment Manager web chatbot check box.
To disable the chatbot, clear the check box.

3. Click Save.

4. At the top of the General Settings page, click the Page back icon.

5. Click the Refresh icon to refresh Deployment Manager and apply your changes.

Deployment Manager 4.8.x


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a consistent deployment process so that you can
deploy high-quality releases without the use of third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application
package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager 4.8.x is compatible with Pega 8.1, 8.2, 8.3, and 8.4. You can download it for Pega Platform
from the Deployment Manager Pega Marketplace page.

For answers to frequently asked questions, see the Deployment Manager FAQ page.

Note: Each customer Virtual Private Cloud (VPC) on Pega Cloud Services has a dedicated orchestrator instance to
use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud
application.

Note: To use notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

Installing, upgrading, and configuring Deployment Manager 4.8.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks and allow you to quickly deploy high-quality software to production.

Configuring and running pipelines with Deployment Manager 3.4.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks so that you can quickly deploy high-quality software to production.
Using data migration pipelines with Deployment Manager 4.8.x

Data migration tests provide you with significant insight into how the changes that you make to decision
logic affect the results of your strategies. To ensure that your simulations are reliable enough to help you
make important business decisions, you can deploy a sample of your production data to a dedicated data
migration test environment.

Installing, upgrading, and configuring Deployment Manager 4.8.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks
and allow you to quickly deploy high-quality software to production.

Note: You should make changes only in the development environment and then move them to higher
environments. Do not make changes in any other environment.

Note: Each customer virtual private cloud (VPC) on Pega Cloud Services has a dedicated orchestrator instance to
use Deployment Manager. If you are upgrading from an earlier release, contact Pegasystems Global Client
Support (GCS) to request a new version.

Note: This document describes the procedures for the latest version of Deployment Manager 4.8.x. To use
notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

For information on configuring Deployment Manager for data migration pipelines, see Installing, upgrading, and
configuring Deployment Manager 4.8.x for data migration pipelines.

Installing or upgrading to Deployment Manager 4.8.x

You must install Deployment Manager if you are using it on-premises. Because Pega Cloud Services
manages the orchestration server in any Pega Cloud subscription, Pega Cloud Services manages the
installation and upgrades of Deployment Manager orchestration servers.

Configuring systems in the pipeline

Configure the orchestration server and candidates in your pipeline for all supported CI/CD workflows. If you
are using branches, you must configure additional settings on the development system after you perform the
required steps.

Configuring the development system for branch-based development

If you are using branches in either a distributed or nondistributed branch-based environment, configure the
development system so that you can start deployments when branches are merged. Configuring the
development system includes defining the URL of the orchestration server, creating development and target
applications, and locking application rulesets.

Configuring additional settings

As part of your pipeline, users can optionally receive notifications through email when events occur. For
example, users can receive emails when tasks or pipeline deployments succeed or fail. For more information
about the notifications that users can receive, see .

Installing or upgrading to Deployment Manager 4.8.x


You must install Deployment Manager if you are using it on-premises. Because Pega Cloud Services manages the
orchestration server in any Pega Cloud subscription, Pega Cloud Services manages the installation and upgrades
of Deployment Manager orchestration servers.

To install Deployment Manager on-premises, do the following steps:

1. Install Pega Platform 8.1, 8.2, 8.3, or 8.4 on all systems in the pipeline.

2. On each system, browse to the Deployment Manager Pega Marketplace page, and then download the
DeploymentManager04.08.0x.zip file for your version of Deployment Manager.

3. Extract the DeploymentManager04.08.0x.zip file.

4. Use the Import wizard to import files into the appropriate systems. For more information about the Import
wizard, see Importing rules and data by using the Import wizard.
5. On the orchestration server, import the following files:

PegaDevOpsFoundation_4.8.zip
PegaDeploymentManager_4.8.zip

6. On the candidate systems, import the PegaDevOpsFoundation_4.8.zip file.

7. If you are using a distributed development for CI/CD workflows, on the remote development system, import
the PegaDevOpsFoundation_4.8.zip file.

8. Do one of the following actions:

If you are upgrading from version 3.2.1 or later, the upgrade automatically runs, and you can use
Deployment Manager after the post-upgrade steps are run. You do not need to perform any of the required
configuration procedures but can configure Jenkins and email notifications. For more information, see
Configuring additional settings.
If you are not upgrading, continue the installation procedure at Configuring authentication profiles.

Configuring systems in the pipeline


Configure the orchestration server and candidates in your pipeline for all supported CI/CD workflows. If you are
using branches, you must configure additional settings on the development system after you perform the
required steps.

To configure systems in the pipeline, do the following steps:

1. Configuring authentication profiles

2. Configuring the orchestration server

3. Configuring candidate systems

4. Creating repositories on the orchestration server and candidate systems

Configuring authentication profiles

Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate
with the candidate systems.

Configuring the orchestration server

The orchestration server is the system on which the Deployment Manager application is installed and release
managers configure and manage CI/CD pipelines. Configure settings on it before you can use it in your
pipeline.

Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stage in the pipeline.

Creating repositories on the orchestration server and candidate systems

If you are using Deployment Manager on premises, create repositories on the orchestration server and all
candidate systems to move your application between all the systems in the pipeline. You can use a
supported repository type that is provided in Pega Platform, or you can create a custom repository type.

Configuring authentication profiles


Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate with the
candidate systems.

Configure the default authentication profile by following these steps:

1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.

a. Log in to the orchestration server with [email protected]/install.

b. In the header of Dev Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.
c. On the Edit Operator ID rule form, click the Security tab.

d. Clear the Disable Operator check box.

e. Click Save.

f. Click Update password.

g. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then
click Submit.

h. Log out of the orchestration server.

2. On each candidate system, which includes the development, QA, staging, and production systems, enable
the DMAppAdmin operator ID.

If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation
application.

a. Log in to each candidate system with [email protected]/install.

b. In the header of Dev Studio, click Records > Organization > Operator ID, and then click DMAppAdmin.

c. In the Explorer panel, click the operator ID initials, and then click Operator.

d. On the Edit Operator ID rule form, click the Security tab.

e. Clear the Disable Operator check box.

f. Click Save.

g. Click Update password.

h. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then
click Submit.

i. Log out of each candidate system.

3. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All
candidate systems use this authentication profile to communicate with the orchestration server about the
status of the tasks in the pipeline.

a. Log in to each candidate system with the DMAppAdmin operator ID and the password that you
specified.

b. In the header of Dev Studio, click Records > Security > Authentication Profile.

c. Click DMReleaseAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.

4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The
orchestration server uses this authentication profile to communicate with candidate systems so that it can
run tasks in the pipeline.

a. Log in to the orchestration server with the DMAppAdmin user name and the password that you
specified.

b. In the header of Dev Studio, click Records > Security > Authentication Profile.

c. Click DMAppAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.


5. If your target environment is SSL-enabled with private certificates, configure the Deployment Manager
connectors so that they can receive and process tokens by setting the keystore:

a. In the header of Dev Studio, create and configure a keystore. For more information, see Creating a
keystore for application data encryption.

b. Configure the Pega-DeploymentManager/TrustStore dynamic system setting to reference the keystore
ID by clicking Records > SysAdmin > Dynamic System Settings.

c. Click the Pega-DeploymentManager/TrustStore dynamic system setting.

d. On the Settings tab, in the Value field, enter the ID of the keystore that you created in the previous
step.

e. Click Save.

For more information about dynamic system settings, see Creating a dynamic system setting.

6. Do one of the following actions:

If you are upgrading to Deployment Manager 4.8.x, resume the post-upgrade procedure from step 2.
For more information, see Running post-upgrade steps.
If you are not upgrading, continue the installation procedure. For more information, see Configuring the
orchestration server.

Understanding default authentication profiles and operator IDs

When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs,
and authentication profiles are installed. Authentication profiles enable communication between the
orchestration server and candidate systems.

Understanding default authentication profiles and operator IDs


When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and
authentication profiles are installed. Authentication profiles enable communication between the orchestration
server and candidate systems.

On the orchestration server, the following items are installed:

The Pega Deployment Manager application.


The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager
application. You must enable this operator ID and specify its password.
The DMAppAdmin authentication profile. The orchestration server uses this authentication profile to
communicate with candidate systems so that it can run tasks in the pipeline. You must update this
authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is
configured on all the candidate systems.

On all the candidate systems, the following items are installed:

The PegaDevOpsFoundation application.


The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this
operator ID and specify its password.
The DMReleaseAdmin authentication profile. All candidate systems use this authentication profile to
communicate with the orchestration server about the status of the tasks in the pipeline. You must update
this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which
is configured on the orchestration server.

Note: The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords.


Configuring the orchestration server


The orchestration server is the system on which the Deployment Manager application is installed and on which
release managers configure and manage CI/CD pipelines. Configure settings on the orchestration server before you use it in your pipeline.

To configure the orchestration server, do the following steps:

1. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. In the header of Dev Studio, click Records > Integration-Resources > Service Package.

b. Click api.

c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.

2. To move the orchestration server to a different environment, first migrate your pipelines to the new
orchestration server, and then configure the URL of the new orchestration server.

This URL is used for callbacks and for diagnostics checks.

a. In the header of Dev Studio, click Create > SysAdmin > Dynamic System Settings.

b. In the Owning Ruleset field, enter Pega-DeploymentManager.

c. In the Setting Purpose field, enter OrchestratorURL.

d. Click Create and open.

e. On the Settings tab, in the Value field, enter the URL of the new orchestration server in the format
http://hostname:port/prweb.

f. Click Save.
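The value entered in the previous step follows a fixed shape. As a quick sanity check outside Dev Studio, you can compose and verify the URL before saving the setting; the host and port below are placeholder assumptions, not values from this guide:

```shell
# Placeholder host and port; substitute your new orchestration server's values.
HOST="orch-host"
PORT="8080"
ORCHESTRATOR_URL="http://${HOST}:${PORT}/prweb"
echo "$ORCHESTRATOR_URL"
# Before saving the setting, you can confirm that the server responds, for example:
#   curl -s -o /dev/null -w "%{http_code}" "$ORCHESTRATOR_URL"
```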

Configure the candidate systems in your pipeline. For more information, see Configuring candidate systems.

Configuring candidate systems


Configure each system that is used for the development, QA, staging, and production stages in the pipeline.

To configure candidate systems, complete the following steps:

1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.

a. In the header of Dev Studio, click the name of your application, and then click Definition.
b. In the Built on application section, click Add application.

c. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.

d. In the Version field, press the Down arrow key and select the version of Deployment Manager that you
are using.

e. Click Save.

2. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. Click Records > Integration-Resources > Service Package.

b. Click api.

c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.

3. If you want to use product rules other than the default rules that the New Application wizard creates for
your target application, test application, or both, create product rules on the development system that
define the target application package and the test application package that will be moved through
repositories in the pipeline.

For more information, see Creating a product rule that includes associated data by using the Create menu.

When you use the New Application wizard, a default product rule for your target application is created that
has the same name as your application. Additionally, if you are using a test application, a product rule is
created with the same name as the target application, with _Tests appended to the name.
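The default naming convention can be expressed as a small sketch; the application name below is a hypothetical example, not one created by this procedure:

```shell
# Hypothetical application name created by the New Application wizard.
APP_NAME="MyApp"
TARGET_PRODUCT="$APP_NAME"          # default product rule for the target application
TEST_PRODUCT="${APP_NAME}_Tests"    # companion product rule for the test application
echo "$TARGET_PRODUCT $TEST_PRODUCT"
```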

Configure repositories through which to move artifacts in your pipeline. For more information, see Creating
repositories on the orchestration server and candidate systems.

Creating repositories on the orchestration server and candidate systems
If you are using Deployment Manager on premises, create repositories on the orchestration server and all
candidate systems to move your application between all the systems in the pipeline. You can use a supported
repository type that is provided in Pega Platform, or you can create a custom repository type.

If you are using Deployment Manager on Pega Cloud Services, default repositories, named
"pegacloudcustomerroot" for both the development and production repositories, are provided. If you want to use
repositories other than the ones provided, you can create your own.

For more information about creating a supported repository, see Creating a repository.

For more information about creating a custom repository type, see Creating and using custom repository types for
Deployment Manager.

When you create repositories, note the following information:

You cannot use the Pega repository type to store application artifacts for the following reasons:
The Pega repository type points to the temporary folder where the Pega Platform node that is
associated with Deployment Manager stores caches. This node might not be persistent.
This repository type is suitable only for single-node deployments. In multinode deployments, each time
a requestor is authenticated, the requestor could be on a different node, and published artifacts are not
visible to the repository.
At most companies, the security practice is that lower environments should not connect to higher
environments. Using a Pega repository typically means that a lower environment can access a higher
environment.

You can use Pega-type repositories only if you are rebasing your development system to obtain the
most recently committed rulesets after merging them.
You can use file system type repositories if you do not want to use proprietary repositories such as
Amazon S3 or JFrog Artifactory.

You cannot use the defaultstore repository type to host artifacts or product archives for the production
applications. It is a system-managed file system repository; it points to the temporary folder where the Pega
Platform node that is associated with Deployment Manager stores caches.
Ensure that each repository has the same name on all systems.
When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog
Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must
select the Preemptive authentication check box.

After you configure a pipeline, you can verify that the repository connects to the URL of the development and
production repositories by clicking Test Connectivity on the Repository rule form.

Configuring the development system for branch-based development
If you are using branches in either a distributed or nondistributed branch-based environment, configure the
development system so that you can start deployments when branches are merged. Configuring the development
system includes defining the URL of the orchestration server, creating development and target applications, and
locking application rulesets.

1. On the development system (in a nondistributed environment) or the main development system (in a
distributed environment), create a dynamic system setting to define the URL of the orchestration server,
even if the orchestration server and the development system are the same system.

a. Click Create > SysAdmin > Dynamic System Settings.

b. In the Owning Ruleset field, enter Pega-DevOps-Foundation.

c. In the Setting Purpose field, enter RMURL.

d. Click Create and open.

e. On the Settings tab, in the Value field, enter the URL of the orchestration server in the format
http://hostname:port/prweb/PRRestService.

f. Click Save.

For more information about dynamic system settings, see Creating a dynamic system setting.
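Unlike the OrchestratorURL setting, the RMURL value must include the /prweb/PRRestService suffix. A minimal format check, assuming a hypothetical host and port:

```shell
# Hypothetical orchestration server address; only the /prweb/PRRestService suffix is fixed.
RMURL="http://orch-host:8080/prweb/PRRestService"
case "$RMURL" in
  */prweb/PRRestService) echo "RMURL format OK" ;;
  *) echo "RMURL must end with /prweb/PRRestService" ;;
esac
```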

2. Complete the following steps on either the development system (in a nondistributed environment) or the
remote development system (in a distributed environment).

a. Use the New Application wizard to create a new development application that developers will log in to.

This application allows development teams to maintain a list of development branches without
modifying the definition of the target application.

b. Log in to the development application, and then add the target application of the pipeline as a built-on
application layer of the development application.

c. In the header of Dev Studio, click the name of your application, and then click Definition.

d. In the Built-on application section, click Add application.

e. In the Name field, press the Down arrow key and select the name of the target application.

f. In the Version field, press the Down arrow key and select the target application version.

g. Click Save.

3. Lock the application rulesets to prevent developers from making changes to rules after branches have been
merged.

a. In the header of Dev Studio, click the name of your application, and then click Definition.

b. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.

c. Click Lock and Save.


4. Copy the development repository that you configured on the remote development system to the source
development system.

5. If you are managing test cases separately from the target application, create a test application. For more
information, see Managing test cases separately in Deployment Manager.

6. Optional:

To rebase your development application to obtain the most recently committed rulesets after you merge
your branches, configure Pega Platform so that you can use rule rebasing.

For more information, see Understanding rule rebasing.

Configuring additional settings


As part of your pipeline, users can optionally receive notifications through email when events occur. For example,
users can receive emails when tasks or pipeline deployments succeed or fail. For more information about the
notifications that users can receive, see Understanding email notifications.

For either new Deployment Manager installations or upgrades, you must configure settings on the orchestration
server so that users can receive email notifications. For more information, see Configuring email accounts on the
orchestration server.

Additionally, you can configure Jenkins if you are using Jenkins tasks in a pipeline. For more information, see
Configuring Jenkins.

Configuring email accounts on the orchestration server

Deployment Manager provides the Pega-Pipeline-CD email account and the DMEmailListener email listener. If
you are configuring email accounts for the first time, specify your details for this account in Pega Platform.
For more information, see Configuring email accounts for new Deployment Manager installations.

Configuring Jenkins

If you are using a Run Jenkins step task in your pipeline, configure Jenkins so that it can communicate with
the orchestration server.

Configuring email accounts on the orchestration server


Deployment Manager provides the Pega-Pipeline-CD email account and the DMEmailListener email listener. If you
are configuring email accounts for the first time, specify your details for this account in Pega Platform. For more
information, see Configuring email accounts for new Deployment Manager installations.

Otherwise, if you are upgrading, do the appropriate steps for the email account that you are using. See one of the
following topics for more information:

Configuring an email account when upgrading and using the Pega-Pipeline-CD email account
Configuring email accounts when upgrading and using the Default email account.

Configuring email accounts for new Deployment Manager installations

For new Deployment Manager installations, on the orchestration server, configure the Pega-Pipeline-CD
email account so users can receive email notifications for events such as task completion or failure.

Configuring an email account when upgrading and using the Pega-Pipeline-CD email account

If you are upgrading to Deployment Manager 4.7.x and using the Pega-Pipeline-CD email account for sending
emails, the DMEmailListener email listener always listens to the Pega-Pipeline-CD account. If you are using a
different listener, you must delete it.

Configuring email accounts when upgrading and using the Default email account

If you are upgrading to Deployment Manager and using the Default email account, after you upgrade to
Deployment Manager 4.7.x, you must do certain steps so that you can send email notifications.

Understanding email notifications

Emails are preconfigured with information about each notification type. For example, when a deployment
failure occurs, the email that is sent provides information, such as the pipeline name and URL of the system
on which the deployment failure occurred.
Configuring email accounts for new Deployment Manager installations
For new Deployment Manager installations, on the orchestration server, configure the Pega-Pipeline-CD email
account so users can receive email notifications for events such as task completion or failure.

Do the following steps:

1. In the navigation pane of Dev Studio, click Records, and then click Integration-Resources > Email Account.

2. Click Pega-Pipeline-CD.

3. In the Edit Email Account rule form, configure and save the email account.

For more information about configuring email accounts, see Creating an email account in Dev Studio.

Configuring an email account when upgrading and using the Pega-Pipeline-CD email account
If you are upgrading to Deployment Manager 4.7.x and using the Pega-Pipeline-CD email account for sending
emails, the DMEmailListener email listener always listens to the Pega-Pipeline-CD account. If you are using a
different listener, you must delete it.

Delete the listener that is listening to the Pega-Pipeline-CD account by doing the following steps:

1. In the header of Dev Studio, click Configure > Integration > Email > Email listeners.

2. On the Email: Integration page, on the Email Listeners tab, click the listener that you want to delete.

3. Click Delete.

Configuring email accounts when upgrading and using the Default email account
If you are upgrading to Deployment Manager and using the Default email account, after you upgrade to
Deployment Manager 4.7.x, you must do certain steps so that you can send email notifications.

Do the following steps:

1. Update the email sender and recipient in Pega Platform.

a. In the navigation pane of Dev Studio, click Records, and then click Integration-Resources > Email Account.

b. Click Default.

c. On the Edit Email Account form, configure and save the email account.

For more information about configuring email accounts, see Creating an email account in Dev Studio.

2. If you have an email listener that listens to the same email address that you configured in Deployment
Manager in the previous step, delete the listener to ensure that the DMEmailListener is listening to the email
account that you configured.

a. In the header of Dev Studio, click Configure > Integration > Email > Email listeners.

b. On the Email: Integration page, on the Email Listeners tab, click the listener that you want to delete.

c. Click Delete.

Understanding email notifications


Emails are preconfigured with information about each notification type. For example, when a deployment failure
occurs, the email that is sent provides information, such as the pipeline name and URL of the system on which the
deployment failure occurred.

Preconfigured emails are sent in the following scenarios:


Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using
branches, to the operator who started a deployment.
Deployment step completion or failure – When a step either completes or fails, an email is sent to the
release manager and, if you are using branches, to the operator who started the branch merge. The
deployment pauses if there are any errors.
Deployment completion – When a deployment is successfully completed, an email is sent to the release
manager and, if you are using branches, to the operator who started the branch merge.
Stage completion or failure – When a stage in a deployment process either succeeds or fails, an email is sent
to the release manager and, if you are using branches, to the operator who started the branch merge.
Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent
to the user, who can approve or reject the task from the email.
Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you
are using branches, to the operator who started the branch merge.
Pega unit testing success or failure – If you are using the Run Pega unit tests task, and the task either
succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who
started the branch merge.
Schema changes required – If you do not have the required schema privileges to deploy schema changes on
application packages that require those changes, an email is sent to the operator who started the
deployment.
Guardrail compliance score success or failure – If you are using the Check guardrail compliance task, an
email is sent to the release manager if the task either succeeds or fails.
Approve for production – If you are using the Approve for production task, which requires approval from a
user before application changes are deployed to production, an email is sent to the user. The user can reject
or approve the changes.
Verify security checklist success or failure – If you are using the Verify security checklist task, which requires
that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with
security best practices, an email is sent to the release manager if the test either succeeds or fails.
Pega scenario testing success or failure – If you are using the Run Pega scenario tests task, an email is sent
to the release manager and, if you are using branches, to the operator who started the branch merge, if
Pega scenario testing either succeeds or fails.
Start test coverage success or failure – If you are using the Enable test coverage task to generate a test
coverage report, an email is sent to the release manager if the task either fails or succeeds.
Verify test coverage success or failure – If you are using the Verify test coverage task, an email is sent to the
release manager if the task either fails or succeeds.
Application quality statistics refreshed – If you are using the Refresh application quality statistics task, an
email is sent to the release manager when the task is run.
Jenkins job success or failure – If you are using a Jenkins task, an email is sent to the release manager if a
Jenkins job either succeeds or fails.

Configuring Jenkins
If you are using a Run Jenkins step task in your pipeline, configure Jenkins so that it can communicate with the
orchestration server.

1. On the orchestration server, create an authentication profile that uses Jenkins credentials.

If you are using a version of Jenkins earlier than 2.17.6, create an authentication profile on the
orchestration server that specifies the credentials to use.
1. Click Create > Security > Authentication Profile.
2. Enter a name, and then click Create and open.
3. In the User name field, enter the Jenkins user ID.
4. Click Set password, enter the Jenkins password, and then click Submit.
5. Select the Preemptive authentication check box.
6. Click Save.
7. Go to step 4.

For more information about configuring authentication profiles, see Creating an authentication
profile.

If you are using Jenkins 2.17.6 or later and want to use an API token for authentication, go to step 2.
If you are using Jenkins 2.17.6 or later and want to use a Crumb Issuer for authentication, go to step 3.

2. If you are using Jenkins version 2.17.6 or later and want to use an API token for authentication, do the
following steps:

a. Log in to the Jenkins server.

b. Click People, click the user who is running the Jenkins job, and then click Configure API token.
c. Generate the API token.

d. Create an authentication profile on the orchestration server by clicking Create > Security >
Authentication Profile.

e. In the User name field, enter the Jenkins user ID.

f. Click Set password, enter the API token that you generated, and then click Submit.

g. Select the Preemptive authentication check box.

h. Click Save.

i. Go to step 4.

For more information about configuring authentication profiles, see Creating an authentication profile.

3. If you are using Jenkins version 2.17.6 or later and want to use a Crumb Issuer for authentication, do the
following steps:

a. Log in to the Jenkins server.

b. Click Manage Jenkins > Manage Plugins, and then select the check box for the Strict Crumb Issuer plug-in.

c. Click Manage Jenkins > Configure Global Security.

d. In the CSRF protection section, in the Crumb Issuer list, select Strict Crumb Issuer.

e. Click Advanced, and then clear the Check the session ID check box.

f. Click Save.

g. Create an authentication profile on the orchestration server by clicking Create > Security >
Authentication Profile.

h. In the User name field, enter the Jenkins user ID.

i. Click Set password, enter the Jenkins password, and then click Submit.

j. Select the Preemptive authentication check box.

k. Click Save.

l. Go to step 4.

For more information about configuring authentication profiles, see Creating an authentication profile.

4. Install the Post build task plug-in.

5. Install the curl command on the Jenkins server.

6. Create a new freestyle project.

7. On the General tab, select the This project is parameterized check box.

8. Add the BuildID and CallBackURL parameters.

a. Click Add parameter, and then select String parameter.

b. In the String field, enter BuildID.

c. Click Add parameter, and then select String parameter.

d. In the String field, enter CallBackURL.

9. To add parameters that you can use in Run Jenkins step tasks in the pipeline, click Add parameter, select
String parameter, and enter the string of the parameter. The system automatically populates these values in
Jenkins tasks. You can add any of the following strings:

a. If you are using Jenkins tasks for continuous integration (you are using branches), add any of the
following strings:

PipelineName: Pipeline name on which the Run Jenkins step task is configured.
ApplicationName: Application for which the pipeline is configured.
RepositoryName: Repository to which the merged branch is published.
BranchName: Branch for which the Run Jenkins step task is configured.
BranchFilePath: Location of the branch.
OrchestratorURL: URL of the orchestration server. Add this parameter to stop a pipeline when
the Run Jenkins step fails in a pipeline.
PipelineID: ID of the pipeline on which the Run Jenkins step task is configured.

b. If you are using Run Jenkins step tasks for continuous delivery (in staging, QA, or production
environments), add any of the following strings:

PipelineName: Pipeline name on which the Run Jenkins step task is configured.
RepositoryName: Repository that the Deploy task uses for the stage (for example, staging) on
which the Run Jenkins step task is configured.
DeploymentID: ID of the current deployment.
DeploymentArtifactName: Artifact name that the Deploy task uses on the stage on which the
Run Jenkins step task is configured.
StartedBy: ID of the operator who started the deployment.
CurrentStage: Name of the stage on which the Run Jenkins step task is configured.
ArtifactPath: Full path to the artifact that the Deploy task uses.
OrchestratorURL: URL of the orchestration server. Add this parameter to stop a pipeline when
the Run Jenkins step fails in a pipeline.
PipelineID: ID of the pipeline on which the Run Jenkins step task is configured. Add this parameter
to stop a pipeline when the Run Jenkins step task fails in a pipeline.

10. In the Build Triggers section, select the Trigger builds remotely check box.

11. In the Authentication Token field, enter the token that you want to use when you start Jenkins jobs remotely.
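With Trigger builds remotely enabled, Deployment Manager starts the job through Jenkins' standard remote build API. A manual smoke test of the same trigger can be sketched with curl; the host, job name, token, credentials, and parameter values below are placeholder assumptions, not values from this procedure:

```shell
# All values are illustrative placeholders.
JENKINS_URL="http://jenkins-host:8080"
JOB_NAME="MyPipelineJob"
TOKEN="my-trigger-token"
TRIGGER_URL="${JENKINS_URL}/job/${JOB_NAME}/buildWithParameters?token=${TOKEN}"
echo "$TRIGGER_URL"
# A manual trigger would then look like:
#   curl --user jenkinsuser:apitoken -X POST "$TRIGGER_URL" \
#        --data-urlencode "BuildID=BUILD-123" \
#        --data-urlencode "CallBackURL=http://orch-host:8080/prweb"
# In a real pipeline, Deployment Manager supplies BuildID and CallBackURL itself.
```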

12. In the Build Environment section, select the Use Secret text(s) or file(s) check box.

13. In the Bindings section, do the following actions:

a. Click Add, and then select User name and password (conjoined).

b. In the Variable field, enter RMCREDENTIALS.

c. In the Credentials field, click Specific credentials.

d. Click Add, and then select Jenkins.

e. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager
operator that is configured on the orchestration server.

f. In the Password field, enter the password.

g. Click Save.

14. Add post-build tasks by doing one of the following actions:

a. If Jenkins is running on Microsoft Windows, go to step 15.

b. If Jenkins is running on Linux, go to step 16.

15. If Jenkins is running on Microsoft Windows, add the following post-build tasks:

a. Click Add post-build action, and then select Post build task.

b. In the Post-Build Actions section, in the Log text field, enter a unique string for the message that is
displayed in the build console output when a build fails, for example BUILD FAILURE.

c. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"

d. Click Add another task.

e. In the Post-Build Actions section, in the Log text field, enter a unique string for the message that is
displayed in the build console output when a build is successful, for example BUILD SUCCESS.

f. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"

g. Click Save.

h. Go to step 17.

16. If Jenkins is running on Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent
sign (%) to access the environment variables:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build fails, for example BUILD FAILURE.

c. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

d. Click Add another task.

e. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build is successful, for example BUILD SUCCESS.

f. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

g. Click Save.

h. Go to step 17.
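The escaped --data arguments in the scripts above all carry the same JSON document. Here it is unescaped for readability; the field values are illustrative, since Jenkins expands JOB_NAME and BUILD_NUMBER and Deployment Manager supplies BuildID and the callback URL at run time:

```shell
# Illustrative payload that the post-build curl call sends to the callback URL.
PAYLOAD='{
  "jobName": "my-jenkins-job",
  "buildNumber": "17",
  "pyStatusValue": "SUCCESS",
  "pyID": "BUILD-123"
}'
echo "$PAYLOAD"
```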

17. To stop a pipeline deployment if a Jenkins build fails, add a post-build script:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console output
when a build fails, for example JENKINS BUILD FAILURE.

c. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X PUT --data "{\"AbortNote\":\"Aborted from Jenkins job\"}" "%OrchestratorURL%/PRRestService/cicd/v1/pipelines/%PipelineID%/builds/%DeploymentID%/abort"

d. Click Save.
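The abort endpoint in the script above is assembled from the OrchestratorURL, PipelineID, and DeploymentID parameters added in step 9. With illustrative placeholder values, the resulting URL can be composed like this:

```shell
# Illustrative values for the parameters that Deployment Manager passes to the job.
OrchestratorURL="http://orch-host:8080/prweb"
PipelineID="PIPE-1"
DeploymentID="DEP-9"
ABORT_URL="${OrchestratorURL}/PRRestService/cicd/v1/pipelines/${PipelineID}/builds/${DeploymentID}/abort"
echo "$ABORT_URL"
```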

Configuring and running pipelines with Deployment Manager 4.8.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks
so that you can quickly deploy high-quality software to production.

On the orchestration server, release managers use the Deployment Manager landing page to configure CI/CD
pipelines for their Pega Platform applications. The landing page displays all the running and queued application
deployments, branches that are to be merged, and reports that provide information about your DevOps
environment such as key performance indicators (KPIs).

Note: These topics describe the features of the latest version of Deployment Manager 4.8.x.

Note: To use notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

For more information about using Deployment Manager and data migration pipelines, see Exporting and
importing simulation data automatically with Deployment Manager.

Logging in to Deployment Manager

Deployment Manager provides a dedicated portal from which you can access features.

Accessing the Dev Studio portal

If your role has the appropriate permission, you can access Dev Studio from within Deployment Manager.
You can switch to Dev Studio to access features such as additional tools to troubleshoot issues. You can also
open, modify, and create repositories and authentication profiles.

Accessing API documentation


Deployment Manager provides REST APIs for interacting with resources that appear in the Deployment Manager
interface. Use these APIs to create and manage pipelines by using automated scripts or external tools.

Understanding roles and users

Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Understanding Deployment Manager notifications

You can enable notifications to receive updates about the events that occur in your pipeline. For example,
you can choose to receive emails about whether unit tests failed or succeeded. You can receive notifications
in the Deployment Manager notifications gadget, through email, or both. By default, all notifications are
enabled for users who are configured in Deployment Manager.

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous
delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be
merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA
stage of the pipeline.

Accessing systems in your pipeline

You can open the systems in your pipeline and log in to the Pega Platform instances on each system. For
example, you can access the system on which the QA stage is installed.

Filtering pipelines in the dashboard

You can filter the pipelines that the dashboard displays by application name, version, and pipeline
deployment status. By filtering pipelines, the dashboard displays only the information that is relevant to you.

Viewing merge requests

You can view the status of the merge requests for a pipeline to gain more visibility into the status of your
pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged.

Viewing deployment reports for a specific deployment

Deployment reports provide information about a specific deployment. You can view information such as the
number of tasks that you configured on a deployment that have been completed and when each task started
and ended.

Starting deployments

You can start deployments in a number of ways. For example, you can start a deployment manually if you
are not using branches, by submitting a branch into the Merge Branches wizard, or by publishing application
changes in App Studio to create a patch version of your application. Your user role determines if you can
start a deployment.

Pausing a deployment

When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment
at the next step.

Stopping a deployment

Stop a deployment to discontinue processing.

Performing actions on a deployment that has errors

If a deployment has errors, the pipeline stops processing on it. You can perform actions on it, such as rolling
back the deployment or skipping the step on which the error occurred.

Troubleshooting issues with your pipeline

Deployment Manager provides several features that help you troubleshoot and resolve issues with your
pipeline.

Understanding schema changes in application packages


If an application package that is to be deployed on candidate systems contains schema changes, the Pega
Platform orchestration server checks the candidate system to verify that you have the required privileges to
deploy the schema changes.

Completing or rejecting a manual step

If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can
either complete it or reject it. For example, if a user was assigned a task and completed it, you can complete
the task to continue the deployment. Deployment Manager also sends you an email when there is a manual
step in the pipeline. You can complete or reject a step either within the pipeline or through email.

Managing aged updates

You can manage aged updates in a number of ways such as importing them, skipping the import, or
manually deploying applications. Managing aged updates gives you more flexibility in how you deploy
application changes.

Viewing, downloading, and deleting application packages

You can view, download, and delete application packages in repositories that are on the orchestration server.
If you are using Deployment Manager on Pega Cloud Services, application packages that you have deployed
to cloud repositories are stored on Pega Cloud Services. To manage your cloud storage space, you can
download and permanently delete the packages.

Archiving and activating pipelines

If your role has the appropriate permissions, you can archive inactive pipelines so that they are not displayed
on the Deployment Manager landing page.

Disabling and enabling a pipeline

If your role has the appropriate permissions, you can disable a pipeline on which errors continuously cause a
deployment to fail. Disabling a pipeline prevents branch merging, but you can still view, edit, and stop
deployments on a disabled pipeline.

Deleting an application pipeline

When you delete a pipeline, its associated application packages are not removed from the repositories that
the pipeline is configured to use.

Logging in to Deployment Manager


Deployment Manager provides a dedicated portal from which you can access features.

To log in to Deployment Manager, on the orchestration server, enter the DMReleaseAdmin operator ID and the
password that you specified for it.

Accessing the Dev Studio portal


If your role has the appropriate permission, you can access Dev Studio from within Deployment Manager. You can
switch to Dev Studio to access features such as additional tools to troubleshoot issues. You can also open, modify,
and create repositories and authentication profiles.

To access Dev Studio, click the Operator icon, and then click Switch to Dev Studio.

For more information on enabling a role to access Dev Studio, see Providing access to the Dev Studio portal.

Related content: Understanding roles and users

Accessing API documentation


Deployment Manager provides REST APIs for interacting with resources that appear in the Deployment Manager
interface. Use these APIs to create and manage pipelines by using automated scripts or external tools.

To access API documentation, open the Documentation/readme-for-swagger.md file in the


DeploymentManager04_07_0x.zip file that you downloaded.
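The swagger file enumerates the full endpoint set; you can drive these APIs from a script with curl. As an illustrative sketch only, the following helper composes the deployment-abort URL that this guide uses in its Jenkins integration; the host and IDs below are made-up placeholders:

```shell
#!/bin/sh
# Compose the Deployment Manager abort endpoint from its parts.
# Arguments: orchestration-server URL, pipeline ID, deployment ID.
# The URL pattern matches the abort call shown in the Jenkins
# post-build step elsewhere in this guide.
abort_endpoint() {
  printf '%s/PRRestService/cicd/v1/pipelines/%s/builds/%s/abort' "$1" "$2" "$3"
}

# Hypothetical values for illustration only:
abort_endpoint "https://orchestrator.example.com/prweb" "PIPELINE-1" "DEPLOYMENT-42"
```

You would send a PUT request to the composed URL with an operator's credentials; consult the swagger documentation for the other endpoints and their request bodies.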

Understanding roles and users


Define roles and users to manage which users can access Deployment Manager and which features they can
access. For example, you can create a role that does not permit users to delete pipelines for a specific
application.

Deployment Manager provides two default roles, which you cannot modify or delete, that define privileges for
super administrators and application administrators. Privileges for super administrators are applied across all
applications, and privileges for application administrators are applied to specific applications. Super
administrators can also add roles and specify the privileges to assign to them. Super administrators and
application administrators can add users and assign them access to the applications that they manage.

Using roles and privileges by creating a dynamic system setting

You can create roles that have specific privileges and then assign users to those roles to manage
Deployment Manager users. To use roles and privileges, you must first create the
EnableAttributeBasedSecurity dynamic system setting.

Adding and modifying roles

If you are a super administrator, you can add and modify roles. Users within a role share defined
responsibilities such as starting a pipeline.

Providing access to Dev Studio to a role

Deployment Manager provides a dedicated portal from which you can access features. From within
Deployment Manager, when you configure pipeline details, you can open, modify, and create repositories
and authentication profiles in Dev Studio if you have permissions to use the Dev Studio portal.

Adding users and specifying their roles

If you are a super administrator or application administrator, you can add users to Deployment Manager and
specify their roles. Only super administrators can create other super administrators or application
administrators who can access one or more applications. Application administrators can create other
application administrators for the applications that they manage.

Modifying user roles and privileges

Super administrators can give other users super administrative privileges or assign them as application
administrators to any application. Application administrators can assign other users as application
administrators for the applications that they manage.

Modifying your user details and password

You can modify your own user details, such as first and last name, and you can change your password.

Deleting users

If you are a super administrator or application administrator, you can delete users for the applications that
you manage.

Using roles and privileges by creating a dynamic system setting


You can create roles that have specific privileges and then assign users to those roles to manage Deployment
Manager users. To use roles and privileges, you must first create the EnableAttributeBasedSecurity dynamic
system setting.

Do the following steps:

1. In the header of Dev Studio, click Create > SysAdmin > Dynamic System Settings.

2. In the Short Description field, enter a short description.

3. In the Owning Ruleset field, enter Pega-RulesEngine .

4. In the Setting Purpose field, enter EnableAttributeBasedSecurity.

5. Click Create and open.


6. On the Settings tab, in the value field, enter true.

7. Click Save.

Adding and modifying roles


If you are a super administrator, you can add and modify roles. Users within a role share defined responsibilities
such as starting a pipeline.

If you are a super administrator, add or modify a role by doing the following steps:

1. In the navigation pane of Deployment Manager, click Users, and then click Roles and privileges.

2. Do one of the following actions:

To add a role, click Add role.


To modify a role, click a role, and then click Edit.

3. In the Add role or Edit role dialog box, in the Name field, enter a name for the role.

4. Select the privileges that you want to assign to the role.

5. Click Submit.

Providing access to Dev Studio to a role


Deployment Manager provides a dedicated portal from which you can access features. From within Deployment
Manager, when you configure pipeline details, you can open, modify, and create repositories and authentication
profiles in Dev Studio if you have permissions to use the Dev Studio portal.

To provide access to the Dev Studio portal for a role, complete the following steps:

1. In the navigation pane of Deployment Manager, click Users, and then click Roles and privileges.

2. Do one of the following actions:

To add a role, click Add role.


To modify a role, click Edit.

3. In the Add role or Edit Role dialog box, in the Name field, enter a name for the role.

4. Click Access to Dev Studio.

5. Click Submit.

Result: If you specify Dev Studio as a default portal for the PegaDeploymentManager:Administrators access
group, all the users that you add in the Deployment Manager portal can access Dev Studio.

Adding users and specifying their roles


If you are a super administrator or application administrator, you can add users to Deployment Manager and
specify their roles. Only super administrators can create other super administrators or application administrators
who can access one or more applications. Application administrators can create other application administrators
for the applications that they manage.

To add users, do the following steps:

1. In the navigation pane of Deployment Manager, click Users, and then click People.

2. On the People page, click Add user.

3. In the Add user dialog box, click the User field, and do one of the following actions:

Press the Down arrow key and select the user that you want to add.
Enter an email address.

4. Click Add.

5. From the Role list, select the role to assign to the user.
6. If you selected the App admin role or a custom role, in the Applications field, enter the application name that
the user can access.

7. Click Send invite to send the user an email that contains the user name and a randomly generated password
with which to log in to Deployment Manager.

Modifying user roles and privileges


Super administrators can give other users super administrative privileges or assign them as application
administrators to any application. Application administrators can assign other users as application administrators
for the applications that they manage.

To modify user roles and privileges, do the following steps:

1. In the navigation pane of Deployment Manager, click Users, and then click People.

2. On the People page, click the user.

3. In the Roles and privileges section, modify the user role and applications that they can access, as
appropriate.

4. Click Save.

Modifying your user details and password


You can modify your own user details, such as first and last name, and you can change your password.

To update your information, do the following steps:

1. In the navigation pane of Deployment Manager, click Users, and then click People.

2. On the People page, click your user name.

3. In the Personal details section, modify your name, email address, and phone number, as appropriate.

4. To change your password:

a. Click Update password.

b. In the Change operator ID dialog box, enter your new password, reenter it to confirm it, and then
click Submit.

5. Click Save.

Deleting users
If you are a super administrator or application administrator, you can delete users for the applications that you
manage.

To delete users, do the following steps:

1. In the navigation pane of Deployment Manager, click Users, and then click People.

2. On the People page, click the Delete icon for the user that you want to delete.

Understanding Deployment Manager notifications


You can enable notifications to receive updates about the events that occur in your pipeline. For example, you
can choose to receive emails about whether unit tests failed or succeeded. You can receive notifications in the
Deployment Manager notifications gadget, through email, or both. By default, all notifications are enabled for
users who are configured in Deployment Manager.


Viewing and updating email accounts for notifications

Receiving email notifications requires that an email account is configured on the orchestration server. You
can view and update your email settings in Deployment Manager.
Creating custom Deployment Manager notification channels

You can extend Deployment Manager notification capabilities by creating custom notification channels. For
example, you can send text messages to mobile devices when tasks start, stop, and are unsuccessful.

Managing notifications

Enable and receive notifications so that you can remain informed about important tasks in your pipeline. For
example, you can receive emails when certain tasks fail.

Viewing and updating email accounts for notifications


Receiving email notifications requires that an email account is configured on the orchestration server. You can
view and update your email settings in Deployment Manager.

Changing your email settings requires access to Dev Studio, so your user role must have permission to access
Dev Studio. For more information, see Understanding roles and users.

1. In the navigation pane of Deployment Manager, click Settings > Email configuration.

2. To update your email settings, perform the following steps:

a. At the top of the Settings: Email configuration page, click Dev Studio.

b. In the Edit Email Account form, configure and save the email account that you want to use to receive
notifications.

c. In the bottom left corner of Dev Studio, click Back to Deployment Manager to return to the Deployment
Manager portal.

d. Click the refresh icon to refresh your email configuration.

Creating custom Deployment Manager notification channels


You can extend Deployment Manager notification capabilities by creating custom notification channels. For
example, you can send text messages to mobile devices when tasks start, stop, and are unsuccessful.

To create a custom notification channel, complete the following steps:

1. On the orchestration server, in Pega Platform, create a custom notification channel.

For more information, see Adding a custom notification channel.

2. Add the application ruleset, which contains the channel that you created, to the Deployment Manager
application.

a. In the header of Dev Studio, click Deployment Manager, and then click Definition.

b. On the Edit Application rule form, in the Application rulesets section, click Add ruleset.

c. Press the Down arrow key and select the ruleset and version that contains the custom notification
channel.

d. Save the rule form.

3. Enable the channel that you created on the appropriate notifications by saving the notification in the
application ruleset that contains the channel.

For example: If you want to use the Mobile channel for the pyStartDeployment notification, save the
pyStartDeployment notification in the application ruleset that contains the Mobile channel.

4. Enable the channel on the notification.

a. Open the notification by clicking Records > Notifications.

b. Click the Channels tab.

c. On the Channel configurations page, select the channel that you want to use.
d. Save the rule form.

Understanding custom Deployment Manager notification channels

When notifications are enabled, you can receive notifications about the events that occur in your pipeline,
such as when tasks start or stop. You can receive notifications through email, the Deployment Manager
notifications gadget, or both. You can also create custom notification channels to meet application
requirements such as sending notifications as phone text messages or as push notifications on mobile
devices.

Understanding custom Deployment Manager notification channels


When notifications are enabled, you can receive notifications about the events that occur in your pipeline, such as
when tasks start or stop. You can receive notifications through email, the Deployment Manager notifications
gadget, or both. You can also create custom notification channels to meet application requirements such as
sending notifications as phone text messages or as push notifications on mobile devices.

Deployment Manager provides the following notifications to which you can add channels:

pyAbortDeployment
pyTaskFailure
pyTaskCompletion
pyStartDeployment
pyStageCompletion
pySchemaChange
pyDeploymentCompletion
pyAgedUpdateActionTaken
pyAgedUpdateActionRequired

Managing notifications
Enable and receive notifications so that you can remain informed about important tasks in your pipeline. For
example, you can receive emails when certain tasks fail.

To enable notifications and select the notifications that you want to receive, do the following steps:

1. In the navigation pane of Deployment Manager, click your profile icon.

2. Click Notification preferences.

3. Select the events for which you want to receive notifications.

4. Specify how you want to receive notifications.

5. Click Submit.

Configuring an application pipeline


When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery
workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you
can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in
the following scenarios:

To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline
that has only a production stage or development and production stages.
To use parallel development and hotfix life cycles for your application.

Adding a pipeline on Pega Cloud Services

If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application
name and version for the pipeline. Many fields are populated by default, such as the URL of your
development system and product rule name and version.

Adding a pipeline on-premises

When you add a pipeline on premises, you define all the stages and tasks that you want to do on each
system. For example, if you are using branches, you can start a build when a branch is merged. If you are
using a QA system, you can run test tasks to validate application data.

Modifying an application pipeline

You can modify the details of your pipeline, such as configuring tasks, updating the repositories that the
pipeline uses, and modifying the URLs of the systems in your environment. You cannot modify information if
your pipeline is running.

Adding a pipeline on Pega Cloud Services


If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application name
and version for the pipeline. Many fields are populated by default, such as the URL of your development system
and product rule name and version.

To add a pipeline on Pega Cloud Services, do the following steps:

1. In the navigation pane, click Pipelines > Application pipelines.

2. Click New.

3. Specify the details of the application for which you are creating the pipeline.

a. To change the URL of your development system, which is populated by default with your development
system URL, in the Development environment field, press the Down arrow key and select the URL.

This is the system on which the product rule that defines the application package that moves through
the repository is located.

b. In the Application field, press the Down arrow key and select the name of the application.

c. In the Version field, press the Down arrow key and select the application version.

d. Click the Access group field and select the access group for which pipeline tasks are run.

This access group must be present on all the candidate systems and have at least the sysadmin4 role.
Ensure that the access group is correctly pointing to the application name and version that is
configured in the pipeline.

e. In the Pipeline name field, enter a unique name for the pipeline.

4. If you are using a separate product rule to manage test cases, to deploy a test case, in the Application test
cases section, select the Deploy test applications check box; then, complete the following steps:

a. In the Test application field, enter the name of the test application.

b. In the Version field, enter the version of the test case product rule.

c. In the Access group field, enter the access group for which test cases are run.

d. In the Product rule field, enter the name of the test case product rule.

e. From the Deploy until field, select the pipeline stage until which the test case product rule will be deployed.

Note: When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests,
Enable test coverage, and Verify test coverage tasks are run for the access group that is specified in
this section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group
that is associated with the test application.

5. To change the product rule that defines the contents of the application, in the Product rule field, enter the
name of the product rule, which is populated by default with the application name.

6. To change the product rule version, in the Version field, enter the version, which is populated by default with
the application version.

7. Click Create.
Result: The system adds tasks, which you cannot delete, to the pipeline that are required to successfully
run a workflow, for example, Deploy and Generate Artifact. For Pega Cloud Services, it also adds mandatory
tasks that must be run on the pipeline, for example, the Check guardrail compliance task and Verify security
checklist task.

8. Add tasks that you want to perform on your pipeline, such as Pega unit testing.

For more information, see Modifying stages and tasks in the pipeline.

9. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Adding a pipeline on premises


When you add a pipeline on premises, you define all the stages and tasks that you want to do on each system.
For example, if you are using branches, you can start a build when a branch is merged. If you are using a QA
system, you can run test tasks to validate application data.

To add a pipeline on premises, complete the following steps:

1. Click Pipelines > Application pipelines.

2. Click New.

3. Specify the details of the application for which you are creating the pipeline.

a. In the Development environment field, enter the URL of the development system.

This is the system on which the product rule that defines the application package that moves through
the repository is located.

b. In the Application field, press the Down arrow key and select the name of the application.

c. In the Version field, press the Down arrow key and select the application version.

d. In the Access group field, press the Down arrow key and select the access group for which pipeline
tasks are run.

This access group must be present on all the candidate systems and have at least the sysadmin4 role.

e. In the Pipeline name field, enter a unique name for the pipeline.

f. In the Product rule field, enter the name of the product rule that defines the contents of the application.

g. In the Version field, enter the product rule version.

4. If you are using a separate product rule to manage test cases, in the Application test cases section, to deploy
a test case, select the Deploy test applications check box; then, complete the following steps:

a. In the Test application field, enter the name of the test application.

b. In the Version field, enter the version of the test case product rule.

c. In the Access group field, enter the access group for which test cases are run. Ensure that the access
group is correctly pointing to the application name and version that is configured in the pipeline.

d. In the Product rule field, enter the name of the test case product rule.

e. From the Deploy until field, select the pipeline stage until which the test case product rule will be
deployed.

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable
test coverage, and Verify test coverage tasks are run for the access group that is specified in this
section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group
that is associated with the test application.

5. To configure dependent applications, click Dependencies.


a. Click Add.

b. In the Application name field, press the Down arrow key and select the application name.

c. In the Pipeline field, press the Down arrow key and select the pipeline.

d. In the Deployment field, press the Down arrow key and select the deployment that contains the
production-ready artifact of the dependent application.

If you want the latest artifact of the dependent application to be automatically populated, select latest.

For more information about dependent applications, see Managing application dependencies.

For more information on updating existing dependencies, see Updating application dependencies in
Deployment Manager.

e. Click Next.

6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the
authentication profile that each system uses to communicate with the orchestration system.

a. In the Environments field for the system, press the Down arrow key and select the URL of the system.

b. If you are using your own authentication profiles, in the Authentication field for the system, press the
Down arrow key and select the authentication profile that you want to communicate from the
orchestration server to the system.

By default, the fields are populated with the DMAppAdmin authentication profile.

7. In the Artifact management section, specify the development and production repositories through which the
product rule that contains the application contents moves.

8. In the Development repository field, press the Down arrow key and select the development repository.

9. In the Production repository field, press the Down arrow key and select the production repository.

10. In the External orchestration server section, if you are using a Jenkins step in a pipeline, specify the Jenkins
details.

a. In the URL field, enter the URL of the Jenkins server.

b. In the Authentication profile field, press the Down arrow key and select the authentication profile on the
orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.

11. Click Next.

12. Specify whether you are using branches in your application.

If you are not using branches, click the No radio button, and then go to step 14.
If you are using branches, go to step 13.

13. To specify branch options, do the following steps:

a. Click the Yes radio button.

b. Do one of the following options:

To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.

To merge branches into a new ruleset, click New ruleset.

c. In the Password field, enter the password that locks the rulesets on the development system.

14. Click Next.

Result: The system adds tasks, which you cannot delete, to the pipeline that are required to successfully
run a workflow, for example, Deploy and Generate Artifact. The system also adds other tasks to enforce best
practices such as Check guardrail compliance and Verify security checklist.

15. To specify that a branch must meet a compliance score before it can be merged:

a. In the Merge criteria pane, click Add task.


b. From the Task list, select Check guardrail compliance.

c. In the Weighted compliance score field, enter the minimum required compliance score.

d. Click Submit.

For more information about compliance scores, see Compliance score logic

16. To specify that a branch must be reviewed:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check review status.

c. Click Submit.

For more information about branch reviews, see Branch reviews.

17. To run Pega unit tests on branches, either for the pipeline application or for an application that is associated
with an access group, before a branch can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Pega unit testing.

c. To run all the Pega unit tests for an application that is associated with an access group, in the Access
Group field, enter the access group.

d. Click Submit.

For more information about creating Pega unit tests, see Creating Pega unit test cases.

Result:

When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests, Enable test
coverage, and Verify test coverage tasks are run for the access group that is specified in the Application test
cases section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group that
is associated with the test application.

18. To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check
box.

Do not select this check box if you want to manually start deployments.

For more information, see Manually starting a deployment in Deployment Manager.

19. Clear a check box for a deployment life cycle stage to skip it.

20. In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline.
See the following topics for more information:

Running Pega unit tests by adding the Run Pega unit tests task
Running Jenkins steps by adding the Run Jenkins step task
Specifying that an application meet a compliance score by adding the Check guardrail compliance score
task
Ensuring that the Application Security Checklist is completed by adding the Verify security checklist
task
Starting test coverage by adding the Enable test coverage task
Stopping test coverage by adding the Validate test coverage task
Running Pega scenario tests by adding the Run Pega scenario tests task
Refreshing application quality by adding the Refresh application quality task
Modifying the Approve for production task

21. Clear the Production ready check box if you do not want to generate an application package, which is sent to
the production repository.

You cannot clear this check box if you are using a production stage in the life cycle.

22. Click Finish.


23. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.
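The merge criteria that steps 15 through 17 configure act together as a gate: a branch merges only if every selected criterion passes. The sketch below models that gating logic in illustrative Python; the function and its parameters are hypothetical, not a Pega API.

```python
# Illustrative sketch only: Deployment Manager evaluates merge criteria
# internally. This models the gate that steps 15-17 configure; every name
# here is hypothetical.

def branch_can_merge(compliance_score: float,
                     min_compliance: float,
                     review_approved: bool,
                     unit_tests_passed: bool) -> bool:
    """A branch merges only when every configured criterion passes."""
    criteria = [
        compliance_score >= min_compliance,  # Check guardrail compliance
        review_approved,                     # Check review status
        unit_tests_passed,                   # Pega unit testing
    ]
    return all(criteria)

# A branch with a 96.5 compliance score against a 97 threshold fails the
# gate even though the review and unit tests passed.
print(branch_can_merge(96.5, 97, True, True))   # False
print(branch_can_merge(98.0, 97, True, True))   # True
```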

Adding the Pega unit testing task

If you use Pega unit tests to validate application data, add the Pega unit testing task on the pipeline stage
where you want to run it. For example, you can run Pega unit tests on a QA system.

Adding the Jenkins task

If you are using Jenkins to perform tasks in your pipeline, you can add the Jenkins task to the stage on which
you want it to run.

Adding the manual step task

Use manual steps so that users must take an action before a pipeline deployment can continue. Users can
either accept the task to continue the deployment or reject the task to stop it.

Adding the Check guardrail compliance score task

You can use the Check guardrail compliance score task so that an application must meet a compliance score
for the deployment to continue. The default value is 97, which you can modify.

Starting another pipeline by adding the Trigger deployment task

You can start another pipeline by adding the Trigger deployment task to a stage in your current pipeline. By
starting another pipeline from a current pipeline, you can add more stages to your pipeline.

Adding the Verify security checklist task

For your pipeline to comply with security best practices, you can add a task to ensure that all the
steps in the Application Security Checklist are performed.

Starting test coverage by adding the Enable test coverage task

Add the Enable test coverage task to start test coverage. Starting and stopping test coverage generates a
report that identifies the executable rules in your application that are either covered or not covered by tests.
As a best practice, to ensure application quality, you should test all the rules in your application for which
testing is supported.

Stopping test coverage by adding the Validate test coverage task

Add this task to stop a test coverage session. Starting and stopping test coverage generates a report that
identifies the executable rules in your application that are either covered or not covered by tests. As a best
practice, to ensure application quality, you should test all the rules in your application for which testing is
supported.

Running Pega scenario tests by adding the Run Pega scenario tests task

If you are using Pega scenario tests, you can run them in your pipeline by using the Run Pega scenario tests
task. Deployment Manager supports Selenium 3.141.59.

Refreshing application quality by adding the Refresh application quality task

To refresh the Application Quality dashboard, which provides information about the health of your
application, on the candidate system, add the Refresh application quality task. You can refresh the
dashboard after running Pega unit tests, checking guardrail compliance, running Pega scenario tests, and
starting or stopping test coverage.

Modifying the Approve for production task

The Approve for production task is added to the stage before production. Use this task if you want a user to
approve application changes before those changes are sent to production.

Running Pega unit tests by adding the Run Pega unit tests task
If you use Pega unit tests to validate application data, add the Pega unit testing task on the pipeline stage where
you want to run it. For example, you can run Pega unit tests on a QA system.

When you use separate product rules for test cases and run a pipeline, the Pega unit testing task is run for the
access group that is specified in the Application test cases section, which you configure when you add or modify a
pipeline.

To run Pega unit tests for either the pipeline application or for an application that is associated with an access
group, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the task list, click Pega unit testing.

3. Do one of the following actions:

To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite
ID field, enter the pxInsName of the test suite.

You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the
Edit Test Suite form in Dev Studio. If you do not specify a test suite, all the Pega unit tests for the
pipeline application are run.

To run all the Pega unit tests for an application that is associated with an access group, in the Access
Group field, enter the access group.

For more information about creating Pega unit tests, see Creating Pega unit test cases.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Running Jenkins steps by adding the Run Jenkins step task


If you are using Jenkins to perform tasks in your pipeline, you can add the Run Jenkins step task to the stage on which
you want it to run. If you have configured the Jenkins OrchestratorURL and PipelineID parameters, when this task
fails, the pipeline stops running. For more information about configuring these parameters, see Configuring
Jenkins.

To add this task, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Run Jenkins step.

3. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that
you want to run.

4. In the Token field, enter the Jenkins authentication token.

5. In the Parameters field, enter parameters, if any, to send to the Jenkins job.

6. Click Submit.

7. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline
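Behind the Run Jenkins step task, Jenkins exposes a standard remote-trigger endpoint, POST /job/&lt;name&gt;/buildWithParameters, which accepts the authentication token and any job parameters. The sketch below shows how such a trigger URL is assembled; the server, job name, token, and parameter values are placeholders, and Deployment Manager performs the actual call for you.

```python
# Sketch of the remote trigger that the Run Jenkins step task relies on.
# All values below are placeholders.
from urllib.parse import urlencode

def jenkins_trigger_url(server: str, job: str, token: str, params: dict) -> str:
    """Build the remote-trigger URL for a parameterized Jenkins job."""
    query = urlencode({"token": token, **params})
    return f"{server.rstrip('/')}/job/{job}/buildWithParameters?{query}"

url = jenkins_trigger_url(
    "https://jenkins.example.com",  # Jenkins server URL (placeholder)
    "deploy-myapp",                 # job name, as entered in the Job name field
    "s3cret-token",                 # authentication token from the Token field
    {"TargetStage": "QA"},          # parameters sent to the Jenkins job
)
print(url)
# An HTTP client (for example, urllib.request) would then POST this URL
# with the Jenkins user's credentials.
```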

Continuing or stopping a deployment by adding the Perform manual step task
Use manual steps so that users must take an action before a pipeline deployment can continue. Users can either
accept the task to continue the deployment or reject the task to stop it.
To add a manual step that a user must perform in the pipeline, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Perform manual step.

3. In the Job name field, enter text that describes the action that you want the user to take.

4. In the Assigned to field, press the Down arrow key and select the operator ID to assign the task to.

5. Click Submit.

6. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Specifying that an application meet a compliance score by adding the Check guardrail compliance score task
You can use the Check guardrail compliance score task so that an application must meet a compliance score for
the deployment to continue. The default value is 97, which you can modify.

To specify that an application must meet a compliance score, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Check guardrail compliance.

3. In the Weighted compliance score field, enter the minimum required compliance score.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline
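As a rough illustration of how the compliance gate behaves, the sketch below uses a simplified, unweighted ratio of warning-free rules; Pega's actual weighted formula is described in Compliance score logic, and every number here is invented.

```python
# Simplified, illustrative compliance gate. This is NOT Pega's formula:
# the real calculation weights warnings as described in "Compliance score
# logic". All figures below are made up.

def compliance_score(total_rules: int, rules_with_warnings: int) -> float:
    """Percentage of rules that carry no guardrail warnings (simplified)."""
    if total_rules == 0:
        return 100.0
    return 100.0 * (total_rules - rules_with_warnings) / total_rules

def gate_passes(score: float, threshold: float = 97.0) -> bool:
    # 97 is the default threshold of the Check guardrail compliance task.
    return score >= threshold

score = compliance_score(total_rules=400, rules_with_warnings=10)
print(round(score, 1), gate_passes(score))  # 97.5 True
```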

Starting another pipeline by adding the Trigger deployment task


You can start another pipeline by adding the Trigger deployment task to a stage in your current pipeline. By
starting another pipeline from a current pipeline, you can add more stages to your pipeline.

To add the Trigger deployment task to a stage in your pipeline, perform the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Trigger deployment.

3. In the Application name field, press the Down arrow key, and then select the application that you want to
deploy.

4. In the Pipeline name field, press the Down arrow key, and then select the pipeline that you want to start.

5. If you want to deploy the artifact that you are deploying in the current pipeline, select the Deploy current
artifact check box. Otherwise, a new application is deployed on the pipeline.

6. Click Submit.

Ensuring that the Application Security Checklist is completed by adding the Verify security checklist task
For your pipeline to comply with security best practices, you can add a task to ensure that all the steps in the
Application Security Checklist are performed. For customers on Pega Platform 8.4 and above, a new Security
Checklist API is available to provide an automated security configuration assessment. Both candidate and
orchestrator environments should be on or above Deployment Manager 4.8 to utilize this functionality.

Before you begin: You must log in to the system for which this task is configured, and then mark all the tasks in
the Application Security checklist as completed for the pipeline application. For more information about
completing the checklist, see Preparing your application for secure deployment.

To add the Verify security checklist task, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Verify Security checklist.

3. Click Submit.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Starting test coverage by adding the Enable test coverage task


Add the Enable test coverage task to start test coverage. Starting and stopping test coverage generates a report
that identifies the executable rules in your application that are either covered or not covered by tests. As a best
practice, to ensure application quality, you should test all the rules in your application for which testing is
supported.

For more information about application-level coverage reports, see Generating an application-level test coverage
report.

When you use separate product rules for test cases and run a pipeline, the Enable test coverage task is run for
the access group that is specified in the Application test cases section, which you configure when you add or
modify a pipeline.

To add this task, complete the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Enable test coverage.

3. Select the Start a new session check box to start a test coverage session every time that the pipeline runs
the deployment. If you do not select this check box and a test coverage session is already running, the
pipeline pauses and returns an error.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline
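The coverage report that a start/stop session produces partitions the executable rules in your application into covered and uncovered sets. The sketch below illustrates that partitioning; the rule names and report layout are made up for the example.

```python
# Hypothetical sketch of the coverage report: executable rules are split
# into covered and uncovered sets. Rule names are invented.

def coverage_report(executable_rules: set, covered_rules: set) -> dict:
    covered = covered_rules & executable_rules
    uncovered = executable_rules - covered_rules
    pct = 100.0 * len(covered) / len(executable_rules)
    return {"covered": sorted(covered),
            "uncovered": sorted(uncovered),
            "percent": round(pct, 1)}

report = coverage_report(
    {"ValidateOrder", "CalcDiscount", "RouteCase", "NotifyOwner"},
    {"ValidateOrder", "CalcDiscount"},
)
print(report["percent"])    # 50.0
print(report["uncovered"])  # ['NotifyOwner', 'RouteCase']
```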

Related Content
Article

Adding a pipeline on premises

Article

Adding a pipeline on Pega Cloud Services

Article
Modifying application details

Stopping test coverage by adding the Validate test coverage task


Add this task to stop a test coverage session. Starting and stopping test coverage generates a report that
identifies the executable rules in your application that are either covered or not covered by tests. As a best
practice, to ensure application quality, you should test all the rules in your application for which testing is
supported.

For more information about application-level coverage reports, see Generating an application-level test coverage
report.

When you use separate product rules for test cases and run a pipeline, the Validate test coverage task is run for
the access group that is specified in the Application test cases section, which you configure when you add or
modify a pipeline.

1. Add this task below the Enable test coverage task by doing one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Validate test coverage.

3. Click Submit.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Running Pega scenario tests by adding the Run Pega scenario tests
task
If you are using Pega scenario tests, you can run them in your pipeline by using the Run Pega scenario tests task.
Deployment Manager supports Selenium 3.141.59.

To add the Run Pega scenario tests task, do the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Run Pega scenario tests.

3. In the User name field, enter the user name for the Pega Platform instance on which you are running
scenario tests.

Note: For the Run Pega scenario tests task, if you are using a separate product rule for a test application,
the user name that you provide should belong to the access group that is associated with the test
application.

4. In the Password field, enter the Pega Platform password.

5. From the Test Service Provider list, select the test service provider that you are using to run the scenario
tests in the pipeline.

6. Do one of the following actions:

If you selected CrossBrowserTesting, BrowserStack, or SauceLabs, go to step 7.


If you selected Standalone, go to step 8.

7. If you selected CrossBrowserTesting, BrowserStack, or SauceLabs:

a. In the Provider auth name field, enter the auth name that you use to log in to the test service
provider.

b. In the Provider auth key field, enter the key for the test service provider.
c. Go to step 9.

8. If you selected Standalone, in the Provider URL field, enter the URL of the Selenium Standalone Server by
using one of the following:

a. Hub hostname and port: Use the format Hubhostname:port.

b. IP address: Enclose the IP address in double quotation marks.

9. In the Browser field, enter the browser that you are using to record scenario tests.

10. In the Browser version field, enter the browser version.

11. In the Platform field, enter the development platform that you are using to record tests.

12. In the Screen resolution field, enter the resolution at which you are recording scenario tests.

13. Click Submit.

14. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline
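The fields in steps 3 through 12 map naturally onto a Selenium-style configuration. The sketch below shows one possible shape of that mapping; it is not the payload that Deployment Manager actually sends, and every value is a placeholder.

```python
# Illustrative mapping of the Run Pega scenario tests form fields onto a
# Selenium 3.x-style capabilities dictionary. All values are placeholders.
scenario_test_config = {
    "username": "test.operator",      # Pega operator running the tests
    "provider": "Standalone",         # or CrossBrowserTesting, BrowserStack, SauceLabs
    "provider_url": "selenium-hub.example.com:4444",  # Hubhostname:port format
    "capabilities": {
        "browserName": "chrome",          # Browser field
        "version": "96.0",                # Browser version field
        "platform": "LINUX",              # Platform field
        "screenResolution": "1920x1080",  # Screen resolution field
    },
}
print(scenario_test_config["capabilities"]["browserName"])  # chrome
```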

Refreshing application quality by adding the Refresh application quality task
To refresh the Application Quality dashboard, which provides information about the health of your application, on
the candidate system, add the Refresh application quality task. You can refresh the dashboard after running Pega
unit tests, checking guardrail compliance, running Pega scenario tests, and starting or stopping test coverage.

To add this task, complete the following steps:

1. Do one of the following actions:

Click a task, click the More icon, and then click either Add task above or Add task below.
Click Add task in the stage.

2. In the Task list, click Refresh application quality.

3. Click Submit.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Modifying the Approve for production task


The Approve for production task is added to the stage before production. Use this task if you want a user to
approve application changes before those changes are sent to production.

To modify the Approve for production task, do the following steps:

1. Click the Info icon.

2. In the Job name field, enter a name for the task.

3. In the Assign to field, press the Down arrow key and select the user who approves the application for
production.

An email is sent to this user, who can approve or reject application changes from within the email.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline
Modifying an application pipeline
You can modify the details of your pipeline, such as configuring tasks, updating the repositories that the pipeline
uses, and modifying the URLs of the systems in your environment. You cannot modify information if your pipeline
is running.

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that
moves through the pipeline.

Modifying URLs and authentication profiles

You can modify the URLs of your development and candidate systems and the authentication profiles that
are used to communicate between those systems and the orchestration server.

Modifying repositories

You can modify the development and production repositories through which the product rule that contains
application contents moves through the pipeline. All the generated artifacts are archived in the Development
repository, and all the production-ready artifacts are archived in the Production repository.

Configuring Jenkins server information

If you are using a Jenkins step, specify details about the Jenkins server such as its URL.

Modifying merge options for branches

If you are using branches in your application, specify options for merging branches into the base application.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you
can skip a stage or add tasks such as Pega unit testing to be done on the QA stage.

Modifying application details


You can modify application details, such as the product rule that defines the content of the application that moves
through the pipeline.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Pipeline settings.

3. Click Application details.

4. In the Development environment field, enter the URL of the development system, which is the system on
which the product rule that defines the application package that moves through the pipeline is located.

5. In the Version field, press the Down arrow key and select the application version.

6. In the Product rule field, enter the product rule that defines the contents of the application.

7. In the Version field, enter the product rule version.

8. If you are using a separate product rule to manage test cases, in the Application test cases section, complete
the following steps:

a. To deploy test cases, select the Deploy test applications check box.

b. In the Test application field, enter the name of the test application.

c. In the Version field, enter the version of the test case product rule.

d. In the Access group field, enter the access group for which test cases are run.
e. In the Product rule field, enter the name of the test case product rule.

f. From the Deploy until field, select the pipeline stage until which the test case product rule will be
deployed.

Note: When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests,
Enable test coverage, and Validate test coverage tasks are run for the access group that is specified in
this section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group
that is associated with the test application.

9. If the application depends on other applications, in the Dependencies section, add those applications.

a. Click Add.

b. In the Application name field, press the Down arrow key and select the application name.

c. In the Application version field, press the Down arrow key and select the application version.

d. In the Repository name field, press the Down arrow key and select the repository that contains the
production-ready artifact of the dependent application.

If you want the latest artifact of the dependent application to be automatically populated, ensure that
the repository that contains the production-ready artifact of the dependent application is configured to
support file updates.

e. In the Artifact name field, press the Down arrow key and select the artifact.

For more information about dependent applications, see Listing product dependencies.

10. Click Save.

11. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.
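Step 9 notes that the latest artifact of a dependent application can be populated automatically when the repository supports file updates. One plausible way to resolve "latest" is to compare version stamps embedded in the artifact names; the file names and parsing rule below are hypothetical, not how Deployment Manager is documented to work.

```python
# Hypothetical "latest artifact" resolution for a dependent application:
# pick the newest production-ready artifact by its version stamp.
# Artifact naming convention and parsing are invented for illustration.
artifacts = [
    "MyDepApp_01.01.05.zip",
    "MyDepApp_01.02.01.zip",
    "MyDepApp_01.01.09.zip",
]

def version_key(name: str) -> tuple:
    stem = name.rsplit(".", 1)[0]   # drop the ".zip" extension
    version = stem.split("_")[-1]   # e.g. "01.02.01"
    return tuple(int(part) for part in version.split("."))

latest = max(artifacts, key=version_key)
print(latest)  # MyDepApp_01.02.01.zip
```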

Modifying URLs and authentication profiles


You can modify the URLs of your development and candidate systems and the authentication profiles that are
used to communicate between those systems and the orchestration server.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Pipeline settings.

3. Click Deployment stages.

4. In the Environments field for the system, press the Down arrow key and select the URL of the system.

5. In the Authentication field for the system, press the Down arrow key and select the authentication profile
that you want to communicate from the orchestration server to the system.

6. Click Save.

7. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Modifying repositories
You can modify the development and production repositories through which the product rule that contains
application contents moves through the pipeline. All the generated artifacts are archived in the development
repository, and all the production-ready artifacts are archived in the production repository.

If you are using Pega Cloud Services, you do not need to configure repositories; however, you can use
repositories other than the default ones that are provided.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Pipeline settings.

3. Click Artifact Management.

4. If you are using Deployment Manager on premises, or on Pega Cloud Services with default repositories,
complete the following tasks:

a. In the Application repository section, in the Development repository field, press the Down arrow key and
select the development repository.

b. In the Production repository field, press the Down arrow key and select the production repository.

5. If you are using Deployment Manager on Pega Cloud Services and want to use repositories other than the
default repositories, complete the following tasks:

a. In the Artifact repository section, click Yes.

b. In the Development repository field, press the Down arrow key and select the development repository.

c. In the Production repository field, press the Down arrow key and select the production repository.

6. Click Save.

7. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Configuring Jenkins server information for running Jenkins jobs


If you are using a Run Jenkins step, configure Jenkins server information so that you can run Jenkins jobs.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Pipeline settings.

3. Click External orchestration server.

4. In the URL field, enter the URL of the Jenkins server.

5. In the Authentication profile field, press the Down arrow key and select the authentication profile on the
orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.

6. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Modifying merge options for branches


If you are using branches in your application, specify options for merging branches into the base application.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Pipeline settings.

3. Click Merge policy.

4. If you are not using branches, click the No radio button, and then go to step 6.

5. If you are using branches, do the following actions:

a. Click Yes.

b. Do one of the following actions:

To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
To merge branches into a new ruleset, click New ruleset.

c. In the Password field, enter the password that locks the rulesets on the development system.

6. Click Save.

7. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Modifying stages and tasks in the pipeline


You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can
skip a stage or add tasks such as Pega unit testing to be done on the QA stage.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Pipeline model.

3. To specify that a branch must meet a compliance score before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check guardrail compliance.

c. In the Weighted compliance score field, enter the minimum required compliance score.

d. Click Submit.

For more information about compliance scores, see Compliance score logic.

4. To specify that a branch must be reviewed before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check review status.

c. Click Submit.

For more information about branch reviews, see Branch reviews.

5. To run Pega unit tests on the branches for the pipeline application or for an application that is associated
with an access group before they can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Pega unit testing.

c. To run all the Pega unit tests for an application that is associated with an access group, in the Access
Group field, enter the access group.
d. Click Submit.

For more information about creating Pega unit tests, see Creating Pega unit test cases.

6. To start a deployment automatically when a branch is merged, select the Trigger deployment on merge
check box. Do not select this check box if you want to manually start a deployment.

For more information, see Manually starting a deployment in Deployment Manager.

7. Clear a check box for a deployment life cycle stage to skip it.

8. In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline.
See the following topics for more information:

Running Pega unit tests by adding the Run Pega unit tests task
Running Jenkins steps by adding the Run Jenkins step task
Specifying that an application meet a compliance score by adding the Check guardrail compliance score
task
Ensuring that the Application Security Checklist is completed by adding the Verify security checklist
task
Starting test coverage by adding the Enable test coverage task
Stopping test coverage by adding the Validate test coverage task
Running Pega scenario tests by adding the Run Pega scenario tests task
Refreshing application quality by adding the Refresh application quality task
Modifying the Approve for production task

9. Clear the Production ready check box if you do not want to generate an application package, which is sent to
the production repository. You cannot clear this check box if you are using a production stage in the life
cycle.

10. Click Finish.

11. Run diagnostics to verify that your pipeline is configured correctly.

For more information, see Diagnosing a pipeline.

Accessing systems in your pipeline


You can open the systems in your pipeline and log in to the Pega Platform instances on each system. For
example, you can access the system on which the QA stage is installed.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click the pop-out arrow for the system that you want to open.

Filtering pipelines in the dashboard


You can filter the pipelines that the dashboard displays by application name, version, and pipeline deployment
status. By filtering pipelines, the dashboard displays only the information that is relevant to you.

To filter the display of pipelines, perform the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines > Application pipelines.

2. At the top of the dashboard, in the View lists, select the information with which you want to filter the display
of pipelines, and then click Apply.

Viewing merge requests


You can view the status of the merge requests for a pipeline to gain more visibility into the status of your
pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged.

To view merge requests, do the following steps:

1. Do one of the following actions:


If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. In the Development stage, click X Merges in queue to view all the branches that are in the queue or for
which merge is in progress.

3. In the Merge requests ready for deployment dialog box, click View all merge requests to view all the
branches that are merged into the pipeline.

Viewing deployment reports for a specific deployment


Deployment reports provide information about a specific deployment. You can view information such as the
number of tasks that you configured on a deployment that have been completed and when each task started and
ended. If there were schema changes on the deployment, the report displays the schema changes.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Perform one of the following actions:

To view the report for the current deployment, click the More icon, and then click View report.
To view the report for a previous deployment, expand the Deployment History pane and click Reports
for the appropriate deployment.

Viewing reports for all deployments

Reports provide a variety of information about all the deployments in your pipeline. For example, you can
view the frequency of new deployments to production.

Related Content
Article

Understanding schema changes in application packages

Viewing reports for all deployments


Reports provide a variety of information about all the deployments in your pipeline. For example, you can view
the frequency of new deployments to production.

You can view the following key performance indicators (KPI):

Deployment Success – Percentage of deployments that are successfully deployed to production


Deployment Frequency – Frequency of new deployments to production
Deployment Speed – Average time taken to deploy to production
Start frequency – Frequency at which new deployments are triggered
Failure rate – Average number of failures per deployment
Merges per day – Average number of branches that are successfully merged per day
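These KPIs are simple aggregates over the pipeline's deployment history. As a rough illustration (not Deployment Manager's internal implementation; the record fields are hypothetical), a few of them could be computed like this:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; field names are illustrative only.
deployments = [
    {"start": datetime(2024, 1, 1, 9), "end": datetime(2024, 1, 1, 11),
     "succeeded": True, "failures": 0},
    {"start": datetime(2024, 1, 3, 9), "end": datetime(2024, 1, 3, 10),
     "succeeded": False, "failures": 2},
    {"start": datetime(2024, 1, 5, 9), "end": datetime(2024, 1, 5, 12),
     "succeeded": True, "failures": 1},
]

# Deployment Success: percentage of deployments that completed successfully.
success_pct = 100 * sum(d["succeeded"] for d in deployments) / len(deployments)

# Deployment Speed: average time taken per deployment.
speed = sum((d["end"] - d["start"] for d in deployments), timedelta()) / len(deployments)

# Failure rate: average number of failures per deployment.
failure_rate = sum(d["failures"] for d in deployments) / len(deployments)

print(f"Success: {success_pct:.0f}%, speed: {speed}, failures/deployment: {failure_rate:.1f}")
# Success: 67%, speed: 2:00:00, failures/deployment: 1.0
```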

Do the following steps:

1. Open the pipeline by doing one of the following actions:

If the pipeline is open, click Actions > View report.


If a pipeline is not open, in the navigation pane, click Reports. Next, in the Pipeline field, press the Down
arrow key and select the name of the pipeline for which to view the report.

2. From the list that appears in the top right of the Reports page, select whether you want to view reports for
all deployments, the last 20 deployments, or the last 50 deployments.

Starting deployments
You can start deployments in a number of ways. For example, you can start a deployment manually if you are not
using branches, by submitting a branch into the Merge Branches wizard, or by publishing application changes in
App Studio to create a patch version of your application. Your user role determines if you can start a deployment.

Manually starting a deployment

You can start a deployment manually if you are not using branches and are working directly in rulesets. You
can also start a deployment manually if you do not want deployments to start automatically when branches
are merged.

Understanding application changes made in App Studio

You can publish application changes that you make in App Studio to the pipeline. Publishing your changes
creates a patch version of the application and starts a deployment. For example, you can change a life cycle,
data model, or user interface elements in a screen and submit those changes to systems in the pipeline.

Starting a deployment as you merge branches from the development environment

In either a branch-based or distributed, branch-based environment, you can immediately start a deployment
by submitting a branch into a pipeline in the Merge Branches wizard. The wizard displays the merge status
of branches so that you do not need to open Deployment Manager to view it.

Related Content
Article

Understanding roles and users

Manually starting a deployment in Deployment Manager


You can start a deployment manually if you are not using branches and are working directly in rulesets. You can
also start a deployment manually if you do not want deployments to start automatically when branches are
merged.

To start a deployment manually, do the following steps:

1. If you do not want deployments to start automatically when branches are merged:

a. If the pipeline is not open, in the navigation pane, click Pipelines > Application pipelines, and then click the name of the pipeline.

b. Click Pipeline model.

c. Clear the Trigger deployment on merge check box.

2. Do one of the following actions:

If the pipeline that you want to start is open, click Start deployment.
In the navigation pane, click Pipelines > Application pipelines, and then click Start deployment for the pipeline that you want to start.

3. In the Start deployment dialog box, start a new deployment or deploy an existing application by completing one of the following actions:

To deploy a new application package, go to step 4.

To deploy an application package from a previous deployment, go to step 5.

4. To start a deployment and deploy a new application package, do the following steps:

a. Click Generate new artifact.

b. In the Deployment name field, enter the name of the deployment.

c. Go to step 6.

5. To start a deployment and deploy an application package that was packaged in a previous production deployment, do the following steps (only production-ready artifacts are available; development artifacts are not supported):

a. Click Deploy an existing artifact.


b. In the Deployment name field, enter the name of the deployment.

c. In the Pipeline field, press the Down arrow key and select the pipeline.

d. In the Deployment field, press the Down arrow key and select the previous deployment of that pipeline.

6. Click Deploy.

Understanding application changes made in App Studio


You can publish application changes that you make in App Studio to the pipeline. Publishing your changes creates
a patch version of the application and starts a deployment. For example, you can change a life cycle, data model,
or user interface elements in a screen and submit those changes to systems in the pipeline.

Ensure the following items are properly configured before making application changes in App Studio.

A product rule exists with the same name and version as the application being deployed. For more information, see Creating a product rule by using the create menu.
A pipeline has been created in Deployment Manager for the application being deployed.
There is at least one unlocked ruleset in the application.
The users who will publish changes are logged into the application being deployed on the development
system.
The users who will publish changes have been granted a role in Deployment Manager that can start
deployments.
The RMURL dynamic system setting, which defines the URL of the orchestration server, is configured on the system.

Your pipeline should have at least one quality assurance or staging stage with a manual task so that you do not
deploy changes to production that have not been approved by stakeholders.

Publishing application changes in App Studio

Configuring settings before using the Merge Branches wizard


You can start a branch merge, which triggers a deployment, by using the Merge Branches wizard. You must
configure certain settings before you can submit a branch to your application.

Before you begin: Before you start a branch merge, do the following tasks.

1. Check all rules into their base rulesets before you merge them.

2. Check if there are any potential conflicts to address before merging branches. For more information, see
Viewing branch quality and branch contents.

3. As a best practice, lock a branch after development is complete so that no more changes can be made. For
more information, see Locking a branch.

Publishing application changes in App Studio


When you publish an application to a stage, your rules are deployed immediately to that system. To allow stakeholders to inspect and verify changes before they are deployed to a stage, configure a manual task on the previous stage. When the pipeline runs, it pauses at a manual step that is assigned to a user, which allows stakeholders to review your changes before they approve the step and resume running the pipeline.

1. In App Studio, do one of the following actions:

Click Turn editing on, and then, in the navigation pane, click Settings > Versions.
In the App Studio header, click Publish.

The Settings page displays the stages that are enabled in the application pipeline in Deployment Manager.
The available stages are, in order, quality assurance, staging, and production.

It also displays the application versions that are on each system. The version numbers are taken from the
number at the end of each application deployment name in Deployment Manager. For example, if a
deployment has a name of "MyNewApp:01_01_75", the dialog box displays "v75".
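A minimal sketch of that display rule, inferred only from the example above (this is not Pega's actual code):

```python
def display_version(deployment_name: str) -> str:
    """Derive the short version label shown in App Studio from a
    deployment name, e.g. 'MyNewApp:01_01_75' -> 'v75'.
    The parsing rule is inferred from the documentation's example."""
    patch = deployment_name.rsplit("_", 1)[-1]  # number after the last underscore
    return f"v{int(patch)}"

print(display_version("MyNewApp:01_01_75"))  # v75
```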

2. Submit an application from development to quality assurance or staging in your pipeline by completing the
following steps:
a. Click either Publish to QA or Publish to staging.

b. Optional: In the Publish confirmation dialog box, add a comment, which is published when you submit the application.

c. If Agile Workbench has been configured, associate a bug or user story with the application: in the Associated User stories/Bugs field, press the Down arrow key, and then select the bug or user story.

d. Click OK.

Result: Each unlocked ruleset version in your application is locked and rolled to the next highest
version and is packaged and imported into the system. The amount of time that publishing application
changes takes depends on the size of your application.

A new application is also copied from the application that is defined on the pipeline in Deployment
Manager. The application patch version is updated to reflect the version of the new rulesets; for
example, if the ruleset versions of the patch application are 01-01-15, the application version is updated
to be 01.01.15. A new product rule is also created.

In addition, this application is locked and cannot be unlocked. You can use this application to test
specific patch versions of your application on quality assurance or staging systems. You can also use it
to roll back a deployment.
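The version bookkeeping described above can be sketched as follows; this illustrates only the documented naming convention (ruleset versions of the form MM-mm-pp and application versions of the form MM.mm.pp), not Pega's actual publishing logic:

```python
def roll_ruleset(version: str) -> str:
    """Roll a ruleset version (MM-mm-pp) to the next patch version,
    e.g. '01-01-14' -> '01-01-15'. Illustrative sketch only."""
    major, minor, patch = version.split("-")
    return f"{major}-{minor}-{int(patch) + 1:02d}"

def application_version(ruleset_version: str) -> str:
    """Map a ruleset version to the application patch version,
    e.g. '01-01-15' -> '01.01.15', per the convention in the text."""
    return ruleset_version.replace("-", ".")

print(roll_ruleset("01-01-14"))         # 01-01-15
print(application_version("01-01-15"))  # 01.01.15
```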

3. Make changes to your application in the unlocked rulesets, which you can publish again into the pipeline. If
an application is already on the system, it is overridden by the new version that you publish.

4. If you configured a manual step, request that stakeholders review and test your changes. After they
communicate to you that they have completed testing, you can publish your changes to the next stage in
the pipeline.

5. Publish the application to the next stage in the pipeline by clicking the link that is displayed.

The name of the link is taken from the Job name field of the manual task that is defined on the stage.

Result: If you do not have a manual task defined, the application automatically moves to the next stage.

Submitting a branch into an application by using the Merge Branches wizard

You can start a branch merge, which triggers a deployment, by submitting a branch into an application in the
Merge Branches wizard. By using the wizard to start merges, you can start a deployment without additional
configuration.

To submit a branch into an application by using the Merge Branches wizard, perform the following steps:

1. In the navigation pane of Dev Studio, click App, and then click Branches.

2. Right-click the branch and click Merge.

3. Click Proceed.

Result: The wizard displays a message in the following scenarios:


If there are no pipelines that are configured for your application or there are no branches in the target
application.
If the value for the RMURL dynamic system setting is not valid.

4. Click Switch to standard merge to switch to the Merge Branches wizard that you can use to merge branches
into target rulesets. For more information, see Merging branches into target rulesets.

5. In the Application pipelines section, from the Pipeline list, select the pipeline for the application into which you want to merge branches.

6. In the Merge Description field, enter information that you want to capture about the merge.

This information appears when you view deployment details.

7. In the Associated User stories/bugs field, press the Down arrow key, and then select the Agile Workbench
user story or bug that you want to associate with this branch merge.

8. Click Merge.
Result:

The system queues the branch for merging, generates a case ID for the merge, and runs the continuous
integration criteria that you specified.

If there are errors, and the merge is not successful, an email is sent to the operator ID of the release manager
that is specified on the orchestration server.

The branch is stored in the development repository and, after the merge is completed, Deployment Manager
deletes the branch from the development system. By storing branches in the development repository,
Deployment Manager keeps a history, which you can view, of the branches in a centralized location.

If your development system is appropriately configured, you can rebase your development application to obtain
the most recently committed rulesets after you merge your branches. For more information, see Rebasing rules to
obtain latest versions.

Starting a deployment as you merge branches from the development environment

In either a branch-based or distributed, branch-based environment, you can immediately start a deployment by
submitting a branch into a pipeline in the Merge Branches wizard. The wizard displays the merge status of
branches so that you do not need to open Deployment Manager to view it.

If you are using a separate product rule for a test application, after you start a deployment by using the Merge Branches wizard, the branches of both the target and test applications are merged in the pipeline.

You can submit a branch to your application and start the continuous integration portion of the pipeline when the following criteria are met:

You have created a pipeline for your application in Deployment Manager.


You are merging a single branch.
The RMURL dynamic system setting, which defines the URL of the orchestration server, is configured on the system.
All the rulesets in your branch belong to a single application that is associated with your pipeline. Therefore,
your branch cannot contain rulesets that belong to different application layers.
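The preconditions above can be sketched as a validation routine; all names and structures here are illustrative, not Deployment Manager's actual check:

```python
def can_start_merge(branches, rmurl, pipeline_apps, branch_ruleset_apps):
    """Validate the documented preconditions for submitting a branch.

    branches            -- branch names being submitted
    rmurl               -- RMURL dynamic system setting value, or None
    pipeline_apps       -- applications that have a Deployment Manager pipeline
    branch_ruleset_apps -- owning application of each ruleset in the branch
    Returns a list of error messages; an empty list means the merge can start.
    """
    errors = []
    if len(branches) != 1:
        errors.append("Only a single branch can be merged at a time.")
    if not rmurl:
        errors.append("The RMURL dynamic system setting is not configured.")
    apps = set(branch_ruleset_apps)
    if len(apps) != 1:
        errors.append("Branch rulesets must belong to a single application.")
    elif not apps & set(pipeline_apps):
        errors.append("No pipeline exists for the branch's application.")
    return errors

# A branch whose rulesets all belong to MyApp, which has a pipeline:
print(can_start_merge(["FeatureBranch"], "https://orch.example.com/prweb",
                      ["MyApp"], ["MyApp", "MyApp"]))  # []
```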

Configuring settings before using the Merge Branches wizard

You can start a branch merge, which triggers a deployment, by using the Merge Branches wizard. You must
configure certain settings before you can submit a branch to your application.

Submitting a branch into an application by using the Merge Branches wizard

You can start a branch merge, which triggers a deployment, by submitting a branch into an application in
the Merge Branches wizard. By using the wizard to start merges, you can start a deployment without
additional configuration.

Pausing and resuming deployments


When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at
the next step. Your user role determines if you can pause a deployment.

To pause a deployment:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click the pipeline.

3. Click Pause.

4. Click the Pause button again to resume the deployment.

Related Content
Article
Understanding roles and users

Stopping a deployment
If your role has the appropriate permissions, you can stop a deployment to prevent it from moving through the pipeline.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click the More icon, and then click Abort.

Related Content
Article

Understanding roles and users

Managing a deployment that has errors


If a deployment has errors, the pipeline stops processing on it. You can perform actions such as rolling back the
deployment or skipping the step on which the error occurred.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click the More icon, and then do one of the following actions:

To resume running the pipeline from the task, click Resume from current task.
To skip the step and continue running the pipeline, click Skip current task and continue.
To roll back to an earlier deployment, click Rollback.
Pega Platform 8.4 supports application-level rollback. To leverage this functionality, candidate and orchestrator environments must be on Deployment Manager 4.8 or later. For older versions of Deployment Manager (4.7.x and earlier) or users on Pega Platform 8.3 or earlier, rollback defaults to the system level.

Refer to the application-level rollback documentation for more information.

To stop running the pipeline, click Abort.

Troubleshooting issues with your pipeline


Deployment Manager provides several features that help you troubleshoot and resolve issues with your pipeline.

You can:

View deployment logs for information about the completion status of operations.
Run diagnostics to verify that your environment is correctly configured.
Stop all deployments that are running on a pipeline.
Use a chatbot to obtain information about common issues.

Viewing deployment logs

View logs for a deployment to see the completion status of operations, for example, when a data simulation
is moved to the simulation environment. You can change the logging level to control which events are
displayed in the log.

Diagnosing a pipeline

You can diagnose your pipeline to troubleshoot issues and verify that your pipeline is configured properly.

Stopping all deployments

You can stop all the deployments on a pipeline at once to quickly troubleshoot issues and resolve failed
pipelines.

Obtaining information about common issues by using the chatbot

Deployment Manager provides a chatbot that you can use to obtain information about common issues, such
as connectivity between systems, Jenkins configuration, and branch merging. After you enter your search
text, the chatbot provides you with relevant answers and links to more information.

Viewing deployment logs


View logs for a deployment to see the completion status of operations, for example, when a deployment moves
from staging to production. When the Deploy task runs, the application package is imported into the candidate system. By default, logs record all the new and updated rule and data instances in this application package. You can disable the logging of these rule and data instances and change the logging level to control which events are displayed in the log.

To view a deployment log, do the following steps:

1. In Dev Studio, on the appropriate candidate system, change the logging level to control which events the log
displays.

For example: You can change the logging level of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool.

2. To disable logging of new and updated rule and data instances in imported application packages, perform
the following steps:

a. On the candidate system for which you want to disable reporting, in the navigation pane of Admin Studio, click Resources > Log categories.

b. On the Log categories page, for the DeploymentManager.DeltaInstanceLogging category, click the More icon, and then click Change logging level.

c. In the Change DeploymentManager.DeltaInstanceLogging log level dialog box, in the Update log level of category to list, select OFF.

d. Click Submit.

3. If the pipeline is not open, in the navigation pane, click Pipelines > Application pipelines.

4. Do one of the following actions:

To view the log for the current deployment, click the More icon, and then click View logs.
To view the log for a previous deployment, expand the Deployment History pane, and then click Logs
for the deployment.

Diagnosing a pipeline
You can diagnose your pipeline to troubleshoot issues and verify that your pipeline is configured properly.

For example, you can determine if the target application and product rule are in the development environment,
connectivity between systems and repositories is working, and pre-merge settings are correctly configured.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Diagnose pipeline.

3. In the Diagnostics window, review the errors, if any.

Note: If the RMURL dynamic system setting is not configured, Deployment Manager displays a message that
you can disregard if you are not using branches, because you do not need to configure the dynamic system
setting.
Stopping all deployments
You can stop all the deployments on a pipeline at once to quickly troubleshoot issues and resolve failed pipelines.

Take the following steps to stop all deployments on a pipeline:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Abort open deployments.

3. In the Abort open deployments dialog box, enter a reason for stopping the deployments, and then click
OK.

Obtaining information about common issues by using the chatbot


Deployment Manager provides a chatbot that you can use to obtain information about common issues, such as
connectivity between systems, Jenkins configuration, and branch merging. After you enter your search text, the
chatbot provides you with relevant answers and links to more information.

Before you begin: If the chatbot is disabled, enable it. For more information, see Enabling and disabling the
chatbot.

To use the Deployment Manager chatbot to help resolve issues, perform the following steps:

1. In the bottom right corner of the Deployment Manager portal, click the chatbot icon.

2. Do one of the following actions:

a. Click the appropriate link from the list of issues that the chatbot displays.

b. Enter text for which you want to receive more information, and then click Enter.

3. To clear the chatbot history, in the chatbot window, click the More icon, and then click Clear chat history.

Enabling and disabling the chatbot

Use the chatbot to obtain more information about common Deployment Manager issues, such as branch
merging and pipeline configuration. You can disable and enable the chatbot. By default, the chatbot is
enabled.

Understanding schema changes in application packages


If an application package that is to be deployed on candidate systems contains schema changes, the Pega
Platform orchestration server checks the candidate system to verify that you have the required privileges to
deploy the schema changes. One of the following results occurs:

If you have the appropriate privileges, schema changes are automatically applied to the candidate system,
the application package is deployed to the candidate system, and the pipeline continues.
If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the
schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so
that you can apply the schema changes. After you complete the step, the pipeline continues. For more
information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically deploy schema changes so that you do not have to manually
apply them if you do not have the required privileges. For more information, see Configuring settings to
automatically deploy schema changes.

Your user role must have the appropriate permissions so that you can manage schema changes.

Configuring settings to automatically apply schema changes

You can configure settings to automatically deploy schema changes that are in an application package that
is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema
changes if you do not have the privileges to deploy them.
Related Content
Article

Understanding roles and users

Configuring settings to automatically apply schema changes


You can configure settings to automatically deploy schema changes that are in an application package that is to
be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes if
you do not have the privileges to deploy them.

Do the following steps:

1. On the candidate system, in Pega Platform, set the AutoDBSchemaChanges dynamic system setting to true
to enable schema changes at the system level.

a. In Dev Studio, search for AutoDBSchemaChanges.

b. In the dialog box that appears for the search results, click AutoDBSchemaChanges.

c. On the Settings tab, in the Value field, enter true.

d. Click Save.

2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more
information, see Specifying privileges for an Access of Role to Object rule.

Result: These settings are applied sequentially. If the AutoDBSchemaChanges dynamic system setting is set to
false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
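In other words, the two settings act as a logical AND, with the system-level setting evaluated first. A minimal sketch of that rule (the setting and privilege names come from this section; the function itself is illustrative, not Pega's actual check):

```python
def can_auto_deploy_schema(auto_db_schema_changes: bool,
                           has_schema_import: bool) -> bool:
    """Schema changes deploy automatically only when the system-level
    AutoDBSchemaChanges setting is true AND the user holds the
    SchemaImport privilege. Illustrative sketch, not Pega's code."""
    if not auto_db_schema_changes:
        return False  # the system-level setting overrides the user privilege
    return has_schema_import

print(can_auto_deploy_schema(True, True))   # True
print(can_auto_deploy_schema(False, True))  # False: DSS false blocks even privileged users
```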

For more information about the database/AutoDBSchemaChanges dynamic system setting, see Importing rules
and data by using a direct connection to the database.

Schema changes are also attached to the deployment report for the pipeline.

Related Content
Article

Viewing deployment reports for a specific deployment

Completing or rejecting a manual step


If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either
complete it or reject it if your role has the appropriate permissions. For example, if a user was assigned a task
and completed it, you can complete the task in the pipeline to continue the deployment. Deployment Manager
also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either
within the pipeline or through email.

Deployment Manager also generates a manual step if there are schema changes in the application package that
the release manager must apply. For more information, see Schema changes in application packages.

To complete or reject a manual step within the deployment, do the following steps:

1. To complete or reject a manual step from within an email, click either Accept or Reject.

2. To complete or reject a manual step in the pipeline:

a. If the pipeline is not open, in the navigation pane, click Pipelines > Application pipelines, and then click the name of the pipeline.

b. Accept or reject the step by doing one of the following actions:

To resolve the task so that the deployment continues through the pipeline, click Complete.
To reject the task so that the deployment does not proceed, click Reject.

Related Content
Article

Understanding roles and users

Article

Continuing or stopping a deployment by adding the Perform manual step task

Managing aged updates


If your role has the appropriate permissions, you can manage aged updates in a number of ways, such as
importing them, skipping the import, or manually deploying applications. Managing aged updates gives you more
flexibility in how you deploy application changes.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click View aged updates to view a list of the rules and data instances, which are in the application package,
that are older than the instances that are on the system.

3. Click the More icon and do one of the following actions:

To import the older rule and data instances that are in the application package into the system, which
overwrites the newer versions that are on the system, click Overwrite aged updates.
To skip the import, click Skip aged updates.
To manually deploy the package from the Import wizard on the system, click Deploy manually and
resume. Deployment Manager does not run the Deploy step on the stage.

Understanding aged updates

An aged update is a rule or data instance in an application package that is older than an instance that is on a
system to which you want to deploy the application package. By being able to import aged updates, skip the
import, or manually deploy your application changes, you now have more flexibility in determining the rules
that you want in your application and how you want to deploy them.
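Conceptually, detecting aged updates amounts to comparing each packaged instance's last-update time with the copy already on the target system; a hedged sketch (the data model here is invented for illustration, not Deployment Manager's):

```python
from datetime import datetime

def aged_updates(package, target):
    """Return the keys of package instances that are older than the
    target system's copy. `package` and `target` map instance keys to
    last-update timestamps; this structure is illustrative only."""
    return [key for key, updated in package.items()
            if key in target and updated < target[key]]

package = {"Rule-A": datetime(2024, 1, 1), "Rule-B": datetime(2024, 3, 1)}
target  = {"Rule-A": datetime(2024, 2, 1), "Rule-B": datetime(2024, 2, 1)}
print(aged_updates(package, target))  # ['Rule-A']
```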

Related Content
Article

Understanding roles and users

Managing artifacts generated by Deployment Manager


You can view, download, and delete application packages in repositories that are on the orchestration server. If
you are using Deployment Manager on Pega Cloud Services, application packages that you have deployed to
cloud repositories are stored on Pega Cloud Services. To manage your cloud storage space, you can download
and permanently delete the packages.

If you are using a separate product rule to manage a test application, the name of the test application's product rule is the same as that of the main product rule, with _Tests appended to it.

Do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click the pipeline for which you want to download or delete packages.

3. Click Actions > Browse artifacts.

4. Click either Development Repository or Production Repository.


5. To download a package, click the package, and then save it to the appropriate location.

6. To delete a package, select the check boxes for the packages that you want to delete and then click Delete.

Archiving and activating pipelines


If your role has the appropriate permissions, you can archive inactive pipelines so that they are not displayed on
the Deployment Manager landing page.

To archive or activate a pipeline, do the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines > Application pipelines.

2. To archive a pipeline, perform the following steps:

a. Click the More icon, and then click Archive for the pipeline that you want to archive.

b. In the Archive pipeline dialog box, click Submit.

3. To activate an archived pipeline, perform the following steps:

a. Click Pipelines > Archived Pipelines.

b. Click Activate for the pipeline that you want to activate.

c. In the Activate pipeline dialog box, click Submit.

Related Content
Article

Understanding roles and users

Disabling and enabling a pipeline


If your role has the appropriate permissions, you can disable a pipeline on which errors continuously cause a
deployment to fail. Disabling a pipeline prevents branch merging, but you can still view, edit, and stop
deployments on a disabled pipeline.

To disable and enable a pipeline, perform the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines > Application pipelines.

2. To disable a pipeline, perform the following steps:

a. Click the More icon, and then click Disable for the pipeline that you want to disable.

b. In the Disable pipeline dialog box, click Submit.

3. To enable a disabled pipeline, click the More icon, and then click Enable.

Related Content
Article

Understanding roles and users

Deleting a pipeline
If your role has the appropriate permission, you can delete a pipeline. When you delete a pipeline, its associated
application packages are not removed from the repositories that the pipeline is configured to use.

To delete a pipeline, do the following steps:

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Application
pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.
2. Click the More icon, and then click Delete for the pipeline that you want to delete.

3. In the Delete pipeline dialog box, click Submit.

Related Content
Article

Understanding roles and users

Using data migration pipelines with Deployment Manager 4.8.x


Data migration tests provide you with significant insight into how the changes that you make to decision logic
affect the results of your strategies. To ensure that your simulations are reliable enough to help you make
important business decisions, you can deploy a sample of your production data to a dedicated data migration test
environment.

When you use Deployment Manager 4.8.x in data migration pipelines, you automate exporting data from the
production environment and into the simulation environment. Data migration pipelines also require the following:

Pega Platform 8.3.x or 8.4.x


Decision management
Pega Marketing

For more information about data migration pipelines, see these articles on Pega Community:

Deploying sample production data to a simulation environment for testing


Creating simulation tests

For information about using all the Deployment Manager 4.8.x features, see Configuring and running pipelines
with Deployment Manager 4.8.x.

Installing, upgrading, and configuring Deployment Manager 4.8.x for data migration pipelines

You can use Deployment Manager 4.6.x or later in data migration pipelines so that you can automatically
export simulation data from a production system and import it into a simulation system. For more
information about using Deployment Manager 4.8.x with data migration pipelines, see Exporting and importing simulation data automatically with Deployment Manager.

Exporting and importing simulation data automatically with Deployment Manager

Create and run data migration pipelines in Deployment Manager 4.8.x to automatically export simulation
data from a production environment into a simulation environment in which you can test simulation data.
You can also use Deployment Manager to monitor and obtain information about your simulations, for
example, by running diagnostics to ensure that your environment configurations are correct and by
viewing reports that display key performance indicators (KPIs).

Installing, upgrading, and configuring Deployment Manager 4.8.x for data migration pipelines
You can use Deployment Manager 4.6.x or later in data migration pipelines so that you can automatically export
simulation data from a production system and import it into a simulation system. For more information about
using Deployment Manager 4.8.x with data migration pipelines, see Exporting and importing simulation data
automatically with Deployment Manager.

To install, upgrade, and configure Deployment Manager on the simulation and production environments and on
the orchestration server, perform the following steps:

1. Install or upgrade Deployment Manager.

For first-time installations or upgrades from Deployment Manager 3.2.1, install Deployment Manager on
the candidate systems (production and simulation environments) and the orchestration server.
Upgrading is done automatically, and you do not need to do post-upgrade steps.

For more information, see Installing or upgrading to Deployment Manager 4.8.x.

2. For first-time installations, configure communication between the orchestration server and the candidate
systems:

a. Enable the default operators on each system.


b. Configure the authentication profiles, which enable communication between systems, on each system.
Deployment Manager provides default authentication profiles, or you can create your own.

For more information, see Configuring authentication profiles.

3. To move the orchestration server to a different environment, migrate your pipelines to the new orchestration
server, and then, on the new orchestration server, configure the URL of the new orchestration server. This
URL is used to update the task status on the orchestration server and diagnostics checks.

For more information, see step 2 in Configuring the orchestration server.

Exporting and importing simulation data automatically with Deployment Manager
Create and run data migration pipelines in Deployment Manager 4.8.x to automatically export simulation data
from a production environment into a simulation environment in which you can test simulation data. You can also
use Deployment Manager to monitor and obtain information about your simulations, for example, by running
diagnostics to ensure that your environment configurations are correct and by viewing reports that display
key performance indicators (KPIs).

Creating a pipeline

Create a pipeline by defining the production and simulation environments and the application details for the
pipeline. By using a data migration pipeline, you can export and import simulation data automatically.

Modifying a pipeline

You can change the URLs of your production and simulation environments. You can also change the
application information for which you are creating the pipeline.

Diagnosing a pipeline

You can diagnose your pipeline to verify its configuration. For example, you can verify that the orchestration
system can connect to the production and simulation environments.

Scheduling a pipeline by creating a job scheduler rule

You can schedule a data migration pipeline to run during a specified period of time by creating and running
a job scheduler. The job scheduler runs a Deployment Manager activity (pzScheduleDataSyncPipeline) on the
specified pipeline, based on your configuration, such as weekly or monthly.

Starting a pipeline manually

If you do not run a data migration pipeline based on a job scheduler, you can run it manually in Deployment
Manager.

Pausing a pipeline

Pause a pipeline to stop processing the data migration. When you pause a data migration, the pipeline
completes the current task and stops the data migration.

Stopping a pipeline

Stop a pipeline to stop data migrations from being exported and imported.

Stopping and resuming a pipeline that has errors

If a data migration has errors, the pipeline stops processing on it, and you can either resume or stop running
the pipeline.

Deleting a pipeline

When you delete a pipeline, its associated application packages are not deleted from the pipeline
repositories.

Viewing data migration logs

View the logs for a data migration to see the completion status of operations, for example, when a data
migration moves to a new stage. You can change the logging level to control which events are displayed in the
log. For example, you can change the logging level of your deployment from INFO to DEBUG for troubleshooting
purposes. For more information, see Logging Level Settings tool.

Viewing a report for a specific data migration

You can view a report for a specific data migration to gain more visibility into data migrations on a pipeline.

Viewing reports for all data migrations in a pipeline

Reports provide a variety of information about all the data migrations in your pipeline so that you can gain
more visibility into data migration processing. For example, you can view the average time taken to
complete data migrations.

Creating a pipeline
Create a pipeline by defining the production and simulation environments and the application details for the
pipeline. By using a data migration pipeline, you can export and import simulation data automatically.

Do the following steps:

1. In the navigation pane of Deployment Manager, click Pipelines > Data migration pipelines.

2. Click New.

3. On the Environment Details page, if you are using Deployment Manager on-premises, configure
environment details.

This information is automatically populated if you are using Deployment Manager in Pega Cloud Services
environments, but you can change it.

a. In the Environment fields, enter the URLs of the production and simulation environments.

b. If you are using your own authentication profiles, from the Auth profile lists, select the authentication
profiles that you want the orchestration server to use to communicate with the production and
simulation environments.

c. Click Next.

4. On the Application details page, specify the application information for which you are creating the
pipeline.

a. From the Application list, select the name of the application.

b. From the Version list, select the application version.

c. From the Access group list, select the access group for which you want to run pipeline tasks. This
access group must be present on the production and simulation environments and have at least the
sysadmin4 role.

d. In the Name of the pipeline field, enter the pipeline name.

e. Click Next.

Result: The Pipeline page displays the stages and tasks in the pipeline, which you cannot delete.

5. Click Finish.

Modifying a pipeline
You can change the URLs of your production and simulation environments. You can also change the application
information for which you are creating the pipeline.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.
2. Click Actions > Settings.

3. Modify environment details by doing the following:

a. Click Environment Details.

b. In the Environment fields, enter the URLs of the production and simulation environments.

4. To change the application information for which you are creating the pipeline, click Application details.

a. From the Version list, select the application version.

b. From the Access group list, select the access group for which you want to run pipeline tasks.

This access group must be present on the production and simulation environments and have at least
the sysadmin4 role.

5. Click Save.

Diagnosing a pipeline
You can diagnose your pipeline to verify its configuration. For example, you can verify that the orchestration
system can connect to the production and simulation environments.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Actions > Diagnose pipeline.

3. In the Diagnostics window, review the errors, if any.
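The diagnostics above verify, among other things, that the configured environment URLs are usable. A rough local analogue is sketched below under the assumption that environments are supplied as a name-to-URL mapping (the function and record shape are hypothetical): it flags URLs that are obviously malformed before any connectivity test runs.

```python
from urllib.parse import urlparse

def precheck_environment_urls(environments):
    """Return the environments whose URLs are obviously malformed.

    Hypothetical pre-check only; Deployment Manager's own diagnostics
    also verify live connectivity and credentials, which this does not.
    """
    problems = {}
    for name, url in environments.items():
        parsed = urlparse(url)
        # Require an explicit http(s) scheme and a host component.
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            problems[name] = url
    return problems
```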

Scheduling a pipeline by creating a job scheduler rule


You can schedule a data migration pipeline to run during a specified period of time by creating and running a job
scheduler. The job scheduler runs a Deployment Manager activity (pzScheduleDataSyncPipeline) on the specified
pipeline, based on your configuration, such as weekly or monthly.

For more information about job scheduler rules, see Job Scheduler rules.

Do the following steps:

1. On the orchestration server, in the navigation pane of Dev Studio, click Records > SysAdmin > Job Scheduler.

2. On the Create Job Scheduler rule form, enter the label of the scheduler and select the ruleset into which to
save the job scheduler.

3. Click Create and open.

4. On the Edit Job Scheduler rule form, on the Definition tab, from the Runs on list, configure the job scheduler
to run on all nodes or on one node:

To run the job scheduler on all nodes in a cluster, click All associated nodes.
To run the job scheduler on only one node in a cluster, click Any one associated node.

5. From the Schedule list, select how often you want to start the job scheduler, and then specify the options for
it.

6. Select the context for the activity resolution.

If you want to resolve the pzScheduleDataSyncPipeline activity in the context of Deployment Manager,
go to step 7.
If you want to resolve the activity in the context that is specified in the System Runtime Context, go to
step 8.

7. To resolve the pzScheduleDataSyncPipeline activity in the context of Deployment Manager:

a. From the Context list, select Specify access group.


b. In the Access group field, press the Down arrow key and select the access group that can access
Deployment Manager.

c. Go to step 9.

8. To resolve the activity in the context that is specified in the System Runtime Context:

a. From the Context list, select Use System Runtime Context.

b. Update the BATCH requestor type access group with the access group that can access Deployment
Manager: in the header of Dev Studio, click Configure > System > General.

c. On the System:General page, on the Requestors tab, click the BATCH requestor type.

d. On the Edit Requestor Type rule form, on the Definition tab, in the Access Group Name field, press the
Down arrow key and select the access group that can access Deployment Manager.

e. Click Save.

9. On the Job Scheduler rule form, in the Class field, press the Down arrow key and select Pega-Pipeline-
DataSync.

10. In the Activity field, press the Down arrow key and select pzScheduleDataSyncPipeline.

11. Click the Parameters link that appears below the Activity field.

12. In the Activity Parameters dialog box, in the Parameter value field for the PipelineName parameter, enter
the data migration pipeline that the job scheduler runs.

13. In the Parameter value field for the ApplicationName parameter, enter the application that the data
migration pipeline is running.

14. Click Submit.

15. Save the Job Scheduler rule form.
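As the steps above show, the job scheduler ultimately passes two parameters, PipelineName and ApplicationName, to the pzScheduleDataSyncPipeline activity. A minimal sketch of assembling and sanity-checking that parameter set follows; the helper function and its validation rules are assumptions for illustration, not Deployment Manager behavior.

```python
def build_scheduler_parameters(pipeline_name, application_name):
    """Assemble the parameter map for pzScheduleDataSyncPipeline.

    The parameter names come from the procedure above; the validation
    is a hypothetical safeguard, not something the activity enforces.
    """
    if not pipeline_name.strip():
        raise ValueError("PipelineName is required")
    if not application_name.strip():
        raise ValueError("ApplicationName is required")
    return {
        "PipelineName": pipeline_name.strip(),
        "ApplicationName": application_name.strip(),
    }
```

Catching an empty parameter here mirrors what would otherwise surface only as a failed scheduled run.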

Starting a pipeline manually


If you do not run a data migration pipeline based on a job scheduler, you can run it manually in Deployment
Manager.

1. Do one of the following actions:

If the pipeline for which you want to run a data migration is open, click Start data migration.
If the pipeline is not open, click Pipelines > Data migration pipelines, and then click Start data migration.

2. In the Start data migration dialog box, click Yes.

Pausing a pipeline
Pause a pipeline to stop processing the data migration. When you pause a data migration, the pipeline completes
the current task and stops the data migration.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click Pause.

Stopping a pipeline
Stop a pipeline to stop data migrations from being exported and imported.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.
2. Click the More icon, and then click Abort.

Stopping and resuming a pipeline that has errors


If a data migration has errors, the pipeline stops processing on it, and you can either resume or stop running the
pipeline.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click the More icon, and then do one of the following:

To resume running the pipeline from the task, click Start data migration pipeline.
To stop running the pipeline, click Abort.

Deleting a pipeline
When you delete a pipeline, its associated application packages are not deleted from the pipeline repositories.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Click the Delete icon for the pipeline that you want to delete.

3. Click Submit.

Viewing data migration logs


View the logs for a data migration to see the completion status of operations, for example, when a data migration
moves to a new stage. You can change the logging level to control which events are displayed in the log. For
example, you can change the logging level of your deployment from INFO to DEBUG for troubleshooting purposes.
For more information, see Logging Level Settings tool.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Do one of the following actions:

To view the log for the current data migration, click the More icon, and then click View logs.
To view the log for a previous data migration, expand the Deployment History pane and click Logs for
the appropriate deployment.
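The INFO-to-DEBUG switch described above has a direct analogue in plain Python logging, sketched here for illustration only (the logger name is made up; Pega's Logging Level Settings tool applies the change per logger category in the UI).

```python
import logging
from io import StringIO

def capture_at_level(level):
    """Return the log lines emitted at a given logging level.

    A plain-Python analogy for raising a logger from INFO to DEBUG;
    it does not interact with Pega's Logging Level Settings tool.
    """
    logger = logging.getLogger("datamigration.demo")
    logger.handlers.clear()
    logger.propagate = False
    logger.setLevel(level)
    buffer = StringIO()
    logger.addHandler(logging.StreamHandler(buffer))
    logger.info("stage completed")        # visible at INFO and DEBUG
    logger.debug("task payload details")  # visible only at DEBUG
    return buffer.getvalue().splitlines()
```

At INFO, only the stage-completion line appears; switching the same logger to DEBUG surfaces the additional detail, which is exactly the troubleshooting trade-off the paragraph above describes.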

Viewing a report for a specific data migration


You can view a report for a specific data migration to gain more visibility into data migrations on a pipeline.

1. Do one of the following actions:

If the pipeline is not open, in the navigation pane of Deployment Manager, click Pipelines > Data
migration pipelines, and then click the name of the pipeline.
If the pipeline is open, click the name of the pipeline.

2. Perform one of the following actions:

To view the report for the current deployment, click the More icon, and then click View report.
To view the report for a previous deployment, expand the Deployment History pane and click Reports
for the appropriate deployment.

Viewing reports for all data migrations in a pipeline


Reports provide a variety of information about all the data migrations in your pipeline so that you can gain more
visibility into data migration processing. For example, you can view the average time taken to complete data
migrations.

You can view the following key performance indicators (KPIs):

Data migration success – Percentage of successfully completed data migrations
Data migration frequency – Frequency of new deployments to production
Data migration speed – Average time taken to complete data migrations
Start frequency – Frequency at which new data migrations are triggered
Failure rate – Average number of failures per data migration

To view reports, do the following tasks:

1. Do one of the following actions:

If the pipeline is open, click Actions > View report.


If a pipeline is not open, in the navigation pane, click Reports. Next, in the Pipeline field, press the Down
arrow key and select the name of the pipeline for which to view the report.

2. From the list that appears in the top right of the Reports page, select whether you want to view reports for
all deployments, the last 20 deployments, or the last 50 deployments.
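The KPIs listed above are simple aggregates over per-migration records. The sketch below shows, under an assumed record shape (a dict per migration with succeeded, duration_minutes, and failures fields; Deployment Manager computes these reports internally), how three of them reduce to arithmetic.

```python
from statistics import mean

def migration_kpis(migrations):
    """Compute a few of the KPIs listed above from raw migration records.

    The record shape is an assumption for illustration; Deployment
    Manager derives these figures from its own deployment history.
    """
    total = len(migrations)
    if total == 0:
        raise ValueError("no migrations to report on")
    return {
        # Percentage of successfully completed data migrations.
        "success_rate_pct": 100.0 * sum(m["succeeded"] for m in migrations) / total,
        # Average time taken to complete data migrations.
        "avg_duration_minutes": mean(m["duration_minutes"] for m in migrations),
        # Average number of failures per data migration.
        "avg_failures": sum(m["failures"] for m in migrations) / total,
    }
```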

Obtaining deprecated Deployment Manager documentation


The Deployment Manager releases that correspond to these documentation versions are no longer available for
download from Pega Marketplace.

The following documentation is archived and available for reference:

Deployment Manager 3.4.x

Deployment Manager 3.4.x

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for
your Pega applications from within Pega Platform. You can create a standardized deployment process so
that you can deploy predictable, high-quality releases without using third-party tools.

Deployment Manager 3.4.x


Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your
Pega applications from within Pega Platform. You can create a standardized deployment process so that you can
deploy predictable, high-quality releases without using third-party tools.

With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application
package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager 3.4.x is compatible with Pega 7.4. You can download it for Pega Platform from the
Deployment Manager Pega Exchange page.

Note: These topics describe the features for the latest version of Deployment Manager 3.4.x.

Note: Each customer Virtual Private Cloud (VPC) on Pega Cloud Services has a dedicated orchestrator instance to
use Deployment Manager. You do not need to install Deployment Manager to use with your Pega Cloud Services
application.

Installing, upgrading, and configuring Deployment Manager 3.4.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks and allow you to quickly deploy high-quality software to production.

Configuring and running pipelines with Deployment Manager 4.8.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate
tasks so that you can quickly deploy high-quality software to production.

Installing, upgrading, and configuring Deployment Manager 3.4.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks
and allow you to quickly deploy high-quality software to production.

Note: This document describes the features for the latest version of Deployment Manager 3.4.x.

Installing Deployment Manager 3.4.x

Install Deployment Manager 3.4.x on-premises. Each customer virtual private cloud (VPC) on Pega Cloud has
a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment
Manager to use it with your Pega Cloud application. If you are upgrading from an earlier release to
Deployment Manager 3.4.x, contact Pegasystems Global Customer Support (GCS) to request a new version.

Upgrading to Deployment Manager 3.4.x

After you install Deployment Manager 3.4.x, you must do post-upgrade steps. Before you upgrade, ensure
that no deployments are running, have errors, or are paused.

Configuring systems in the pipeline

Complete the following tasks to set up a pipeline for all supported CI/CD workflows. If you are using
branches, you must configure additional settings after you perform the required steps.

Configuring the development system

After you configure the orchestration server and all your candidate systems, configure additional settings so
that you can create pipelines if you are using branches in a distributed or non-distributed branch-based
environment.

Configuring additional settings

As part of your pipeline, you can optionally send email notifications to users or configure Jenkins if you are
using a Jenkins task.

Installing Deployment Manager 3.4.x


Install Deployment Manager 3.4.x on-premises. Each customer virtual private cloud (VPC) on Pega Cloud has a
dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to
use it with your Pega Cloud application. If you are upgrading from an earlier release to Deployment Manager
3.4.x, contact Pegasystems Global Customer Support (GCS) to request a new version.

If you are using Deployment Manager on premises, complete the following steps to install it:

Note: If you are upgrading from Deployment Manager 3.2.1, finish the upgrade immediately after you import files
on premises or after Deployment Manager 3.4.x is deployed on Pega Cloud Services, so that your pipelines work in
Deployment Manager 3.4.x.

1. Install Pega 7.4 on all systems in the CI/CD pipeline.

2. Browse to the Deployment Manager Pega Marketplace page, and then download the
DeploymentManager03.0240x.zip file to your local disk on each system.

3. Extract the DeploymentManager03.0240x.zip file.

4. Use the Import wizard to import files into the appropriate systems. For more information about the Import
wizard, see Importing a file by using the Import wizard.

5. On the orchestration server, import the following files:

PegaDevOpsFoundation_03.04.0x.zip
PegaDeploymentManager_03.04.0x.zip

6. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_03.04.0x.zip
file.

7. If you are using a distributed development, on the remote development system, import the
PegaDevOpsFoundation_03.04.0x.zip file.

8. Do one of the following actions:


If you are upgrading to Deployment Manager 3.4.x, perform the upgrade. For more information, see
Upgrading to Deployment Manager 3.4.x.
If you are not upgrading Deployment Manager 3.4.x, continue the installation procedure. For more
information, see Configuring the orchestration server.

Upgrading to Deployment Manager 3.4.x


After you install Deployment Manager 3.4.x, you must do post-upgrade steps. Before you upgrade, ensure that no
deployments are running, have errors, or are paused.

To upgrade to Deployment Manager 3.4.x either on Pega Cloud Services or on premises, perform the following
steps:

1. Enable default operators and configure authentication profiles on the orchestration server and candidate
systems. For more information, see Configuring authentication profiles on the orchestration server and
candidate systems.

2. On each candidate system, add the PegaDevOpsFoundation application to your application stack.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Built on application section, click Add application.

c. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.

d. In the Version field, press the Down arrow key and select the version of Deployment Manager that you
are using.

e. Click Save.

If you are upgrading from Deployment Manager 3.2.1, you do not need to do the rest of the steps in this
procedure or the required steps in the remainder of this document. If you are upgrading from earlier releases
and have pipelines configured, complete this procedure.


3. On the orchestration server, log in to the release management application.

4. In Designer Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the
results.

5. Click Actions > Run.

6. In the dialog box that is displayed, click Run.

7. Modify the current release management application so that it is built on PegaDeploymentManager:03-04-01.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Edit Application rule form, on the Definition tab, in the Built on application section, for the
PegaDeploymentManager application, press the Down arrow key and select 03.04.01.

c. Click Save.

8. Merge rulesets to the PipelineData ruleset.

a. Click Designer Studio > System > Refactor > RuleSets.

b. Click Copy/Merge RuleSet.

c. Click the Merge Source RuleSet(s) to Target RuleSet radio button.

d. Click the RuleSet Versions radio button.

e. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list,
and then click the Move icon.
Note: All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you
created the application, select all the ruleset versions that contain pipeline data.

9. In the Target RuleSet Information section, in the Name field, press the Down arrow key and select
PipelineData.

10. In the Version field, enter 01-01-01.

11. For the Delete Source RuleSet(s) upon completion of merge? option, click No.

12. Click Next.

13. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset.

14. Click Done.

Result: Your pipelines are migrated to the Pega Deployment Manager application. Log out of the orchestration
server and log back in to it with the DMReleaseAdmin operator ID and the password that you specified for it.
Note: You do not need to perform any of the required configuration procedures.

Configuring systems in the pipeline


Complete the following tasks to set up a pipeline for all supported CI/CD workflows. If you are using branches, you
must configure additional settings after you perform the required steps.

To configure systems in the pipeline, do the following steps:

1. Configuring authentication profiles on the orchestration server and candidate systems

2. Configuring the orchestration server

3. Configuring candidate systems

4. Configuring repositories on the orchestration server and candidate systems

Configuring authentication profiles on the orchestration server and candidate systems

Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate
with the candidate systems.

Configuring the orchestration server

The orchestration server is the system on which release managers configure and manage CI/CD pipelines.
Configure it before you use it in your pipeline.

Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stage in the pipeline.

Configuring repositories on the orchestration server and candidate systems

If you are using Deployment Manager on-premises, create repositories on the orchestration server and all
candidate systems to move your application between all the systems in the pipeline. You can use a
supported repository type that is provided in Pega Platform, or you can create a custom repository type.

Configuring authentication profiles on the orchestration server and candidate systems
Deployment Manager provides default operator IDs and authentication profiles. You must enable the default
operator IDs and configure the authentication profiles that the orchestration server uses to communicate with the
candidate systems.
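Behind these rule forms, a Basic-type authentication profile resolves to a standard HTTP Authorization header on each REST call between the systems. Purely as an illustration of that wire-level mechanism (this sketch is not Deployment Manager code, and profiles can use other authentication schemes as well):

```python
import base64

def basic_auth_header(operator_id, password):
    """Build an HTTP Basic Authorization header value.

    Shows what a Basic-type authentication profile sends on the wire;
    the operator IDs named in this section would be the credentials.
    """
    token = base64.b64encode(f"{operator_id}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")
```

This is also why the passwords set on each system must match the profiles on the systems that call them: the header is rebuilt from the stored credentials on every request.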

Configure the default authentication profile by doing these steps:

1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.

a. Log in to the orchestration server with [email protected]/install.


b. In Designer Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.

c. In the Designer Studio header, click the operator ID initials, and then click Operator.

d. On the Edit Operator ID rule form, click the Security tab.

e. Clear the Disable Operator check box.

f. Click Save.

g. Click Update password.

h. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then
click Submit.

i. Clear the Force password change on next login check box if you do not want to change the password for
the DMReleaseAdmin operator ID the next time that you log in.

j. Log out of the orchestration server.

2. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All
candidate systems use this authentication profile to communicate with the orchestration server about the
status of the tasks in the pipeline.

a. Log in to each candidate system with the DMAppAdmin user name and the password that you specified.

b. Click Records > Security > Authentication Profile.

c. Click DMReleaseAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.

3. On each candidate system, which includes the development, QA, staging, and production systems, enable
the DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the
PegaDevOpsFoundation application.

a. Log in to each candidate system with [email protected]/install.

b. In the navigation pane of Designer Studio, click Records > Organization > Operator ID, and then click
DMAppAdmin.

c. In the header of Designer Studio, click the operator ID initials, and then click Operator.

d. On the Edit Operator ID rule form, click the Security tab.

e. Clear the Disable Operator check box.

f. Click Save.

g. Click Update password.

h. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then
click Submit.

i. Clear the Force password change on next login check box if you do not want to change the password for
the DMAppAdmin operator ID the next time that you log in.

j. Log out of each candidate system.

4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The
orchestration server uses this authentication profile to communicate with candidate systems so that it can
run tasks in the pipeline.

a. Log in to the orchestration server with the DMAppAdmin user name and the password that you
specified.

b. Click Records > Security > Authentication Profile.


c. Click DMAppAdmin.

d. On the Edit Authentication Profile rule form, click Set password.

e. In the Password dialog box, enter the password, and then click Submit.

f. Save the rule form.

5. Do one of the following actions:

a. If you are upgrading to Deployment Manager 3.4.x, resume the upgrade procedure from step 2. For
more information, see Upgrading to Deployment Manager 3.4.x.

b. If you are not upgrading, continue the installation procedure. For more information, see Configuring the
orchestration server.


Understanding default operator IDs and authentication profiles


When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and
authentication profiles that communicate between the orchestration server and candidate systems are also
installed.

On the orchestration server, the following items are installed:

The Pega Deployment Manager application.


The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager
application. You must enable this operator ID and specify its password.
The DMAppAdmin authentication profile. You must update this authentication profile to use the password
that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems.

On all the candidate systems, the following items are installed:

The PegaDevOpsFoundation application.


The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this
operator ID and specify its password.
The DMReleaseAdmin authentication profile. You must update this authentication profile to use the password
that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server.

Note: The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords.

Configuring the orchestration server


The orchestration server is the system on which release managers configure and manage CI/CD pipelines.
Configure it before you use it in your pipeline.

To configure the orchestration server, complete the following tasks:

1. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. Click Records > Integration-Resources > Service Package.

b. Click api.

c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.
2. Configure the candidate systems in your pipeline.

For more information, see Configuring candidate systems.
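The TLS/SSL check in step 1 can be sanity-checked from the command line before running a pipeline. A minimal sketch, assuming a hypothetical orchestration server host and port and illustrative context paths (the exact REST paths on your system may differ):

```shell
# Hypothetical orchestration server coordinates; substitute your own.
ORCH_HOST="orchestrator.example.com"
ORCH_PORT="8080"

# Illustrative base URLs for the api and cicd service packages checked in step 1.
API_URL="http://${ORCH_HOST}:${ORCH_PORT}/prweb/api"
CICD_URL="http://${ORCH_HOST}:${ORCH_PORT}/prweb/PRRestService/cicd"

# With the Require TLS/SSL check boxes cleared, a plain-HTTP request should
# not be rejected outright; uncomment to probe a live server:
# curl -s -o /dev/null -w '%{http_code}\n' "$API_URL"
# curl -s -o /dev/null -w '%{http_code}\n' "$CICD_URL"
echo "$API_URL"
echo "$CICD_URL"
```

A 403 response on a plain-HTTP probe is a hint that TLS/SSL is still being required for that service package.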

Configuring candidate systems


Configure each system that is used for the development, QA, staging, and production stage in the pipeline.

Do the following steps:

1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Built on application section, click Add application.

c. In the Name field, press the Down arrow key and select PegaDevOpsFoundation.

d. In the Version field, press the Down arrow key and select the version of Deployment Manager that you
are using.

e. Click Save.

2. If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd
service packages.

a. Click Records > Integration-Resources > Service Package.

b. Click api.

c. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.

d. Click Records > Integration-Resources > Service Package.

e. Click cicd.

f. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is
cleared.

3. To use a product rule for your target application, test application, or both, other than the default rules that
are created by the New Application wizard, on the development system, create product rules that define the
test application package and the target application package that will be moved through repositories in the
pipeline.

For more information, see Product rules: Completing the Create, Save As, or Specialization form.

When you use the New Application wizard, a default product rule is created that has the same name as your
application.

4. Configure repositories through which to move artifacts in your pipeline.

For more information, see Configuring repositories on the orchestration server and candidate systems.

Configuring repositories on the orchestration server and candidate systems

If you are using Deployment Manager on-premises, create repositories on the orchestration server and all
candidate systems to move your application between all the systems in the pipeline. You can use a supported
repository type that is provided in Pega Platform, or you can create a custom repository type.

If you are using Deployment Manager on Pega Cloud Services, default repositories are provided. If you want to use
repositories other than the ones provided, you can create your own.

For more information about creating a supported repository type, see Creating a repository connection.

For more information about creating a custom repository type, see Creating and using custom repository types for
Deployment Manager.

When you create repositories, note the following information:


The Pega repository type is not supported.
Ensure that each repository has the same name on all systems.
When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog
Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must
select the Preemptive authentication check box.

After you configure a pipeline, you can verify that the repository connects to the URL of the development and
production repositories by clicking Test Connectivity on the Repository rule form.
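For JFrog Artifactory repositories, the generic-package layout and preemptive authentication noted above can also be exercised directly with curl before wiring the repository into Pega. A sketch with hypothetical names (the server URL, repository name, and artifact path below are placeholders):

```shell
# Placeholder values; replace with your Artifactory server and repository.
ARTIFACTORY_URL="https://artifactory.example.com/artifactory"
REPO_NAME="pega-pipeline-dev"     # must match the repository name on all systems
ARTIFACT_PATH="smoke-test/app.zip"

# Generic repositories accept plain PUT/GET on <repo>/<path>. curl's -u option
# sends Basic credentials preemptively, which mirrors the Preemptive
# authentication setting on the Pega authentication profile:
# curl -u "user:password" -T app.zip "$ARTIFACTORY_URL/$REPO_NAME/$ARTIFACT_PATH"
# curl -u "user:password" -o app.zip "$ARTIFACTORY_URL/$REPO_NAME/$ARTIFACT_PATH"
echo "$ARTIFACTORY_URL/$REPO_NAME/$ARTIFACT_PATH"
```

If the upload and download both succeed with the same credentials that the authentication profile uses, Test Connectivity on the Repository rule form should also pass.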

Configuring the development system


After you configure the orchestration server and all your candidate systems, configure additional settings so that
you can create pipelines if you are using branches in a distributed or non-distributed branch-based environment.

To configure the development system, complete the following steps:

1. On the development system (in a nondistributed environment) or the main development system (in a
distributed environment), create a dynamic system setting to define the URL of the orchestration server,
even if the orchestration server and the development system are the same system.

a. Click Create > Records > SysAdmin > Dynamic System Settings.

b. In the Owning Ruleset field, enter Pega-DevOps-Foundation.

c. In the Setting Purpose field, enter RMURL.

d. Click Create and open.

e. On the Settings tab, in the Value field, enter the URL of the orchestration server, in the format
http://hostname:port/prweb/PRRestService.

f. Click Save.

2. On either the development system (in a non-distributed environment) or the remote development system (in
a distributed environment), use the New Application wizard to create a new development application that
developers will log in to.

This application allows development teams to maintain a list of development branches without modifying the
definition of the target application.

3. On either the development system or remote development system, add the target application of the pipeline
as a built-on application layer of the development application.

a. Log in to the application.

b. In the Designer Studio header, click the name of your application, and then click Definition.

c. In the Built-on application section, click Add application.

d. In the Name field, press the Down arrow key and select the name of the target application.

e. In the Version field, press the Down arrow key and select the target application version.

f. Click Save.

4. On either the development system or remote development system, lock the application rulesets to prevent
developers from making changes to rules after branches have been merged.

a. In the Designer Studio header, click the name of your application, and then click Definition.

b. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.

c. Click Lock and Save.

5. To publish branches to a development system to start a branch merge, configure a Pega repository.

It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a
branch to the remote development system to start a deployment. Publishing a branch when you have
multiple pipelines per application is not supported.

a. On either the development system or remote development system, in Designer Studio, enable Pega
repository types.

For more information, see Enabling the Pega repository type.

b. Create a new Pega repository type. For more information, see Creating a repository connection.

c. In the Host ID field, enter the URL of the development system.

d. Ensure that the default access group of the operator that is configured for the authentication profile of
this repository points to the pipeline application on the development system (in a nondistributed
environment) or source development system (in a distributed environment).

Configuring additional settings


As part of your pipeline, you can optionally send email notifications to users or configure Jenkins if you are using a
Jenkins task.

Configuring email notifications on the orchestration server

You can optionally configure email notifications on the orchestration server. For example, users can receive
emails when pre-merge criteria are not met and the system cannot create a deployment.

Configuring Jenkins

If you are using a Jenkins task in your pipeline, configure Jenkins.

Configuring email notifications on the orchestration server


You can optionally configure email notifications on the orchestration server. For example, users can receive
emails when pre-merge criteria are not met and the system cannot create a deployment.

To configure the orchestration server to send emails, complete the following steps:

1. Use the Email wizard to configure an email account and listener by clicking Designer Studio > Integration > Email > Email Wizard.

This email account sends notifications to users when events occur, for example, if there are merge conflicts.
For detailed information, see the procedure for “Configuring an email account that receives email and
creates or manages work” in Entering email information in the Email wizard.

2. From the What would you like to do? list, select Receive an email and create/manage a work object.

3. From the What is the class of your work type? list, select Pega-Pipeline-CD.

4. From the What is your starting flow name? list, select NewWork.

5. From the What is your organization? list, select the organization that is associated with the work item.

6. In the What Ruleset? field, select the ruleset that contains the generated email service rule.

This ruleset applies to the work class.

7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule.

8. Click Next to configure the email listener.

9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener
references for incoming and outgoing email.

10. In the Email Listener Name field, enter the name of the email listener.

Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens.

11. In the Folder Name field, enter the name of the email folder that the listener monitors.

Typically, this folder is INBOX.

12. In the Service Package field, enter the name of the service package to be deployed.

Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier.
13. In the Service Class field, enter the service class name.

14. In the Requestor User ID field, press the Down arrow key, and select the operator ID of the release manager
operator.

15. In the Requestor Password field, enter the password for the release manager operator.

16. In the Requestor User ID field, enter the operator ID that the email service uses when it runs.

17. In the Password field, enter the password for the operator ID.

18. Click Next to continue the wizard and configure the service package.

For more information, see Configuring the service package in the Email wizard.

19. After you complete the wizard, enable the listener that you created in the Email Wizard.

For more information, see Starting a listener.


Understanding email notifications


Emails are preconfigured with information about each notification type. For example, when a deployment
failure occurs, the email that is sent provides information such as the pipeline name and the URL of the
system on which the deployment failure occurred.

Preconfigured emails are sent in the following scenarios:

Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using
branches, to the operator who started a deployment.
Deployment step failure – If any step in the deployment process is unsuccessful, the deployment pauses. An
email is sent to the release manager and, if you are using branches, to the operator who started the branch
merge.
Deployment step completion – When a step in a deployment process is completed, an email is sent to the
release manager and, if you are using branches, to the operator who started the branch merge.
Deployment completion – When a deployment is successfully completed, an email is sent to the release
manager and, if you are using branches, to the operator who started the branch merge.
Stage completion – When a stage in a deployment process is completed, an email is sent to the release
manager and, if you are using branches, to the operator who started the branch merge.
Stage failure – If a stage fails to be completed, an email is sent to the release manager and, if you are using
branches, to the operator who started the branch merge.
Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent
to the user, who can approve or reject the task from the email.
Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you
are using branches, to the operator who started the branch merge.
Pega unit testing failure – If a Pega unit test cannot successfully run on a step in the deployment, an email is
sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Pega unit testing success – If a Pega unit test is successfully run on a step in the deployment, an email is
sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Schema changes required – If you do not have the required schema privileges to deploy the changes on
application packages that require those changes, an email is sent to the operator who started the
deployment.
Guardrail compliance score failure – If you are using the Check guardrail compliance task, and the
compliance score is less than the score that is specified in the task, an email with the score is sent to the
release manager.
Guardrail compliance score success – If you are using the Check guardrail compliance task, and the task is
successful, an email with the score is sent to the release manager.
Approve for production – If you are using the Approve for production task, which requires approval from a
user before application changes are deployed to production, an email is sent to the user. The user can reject
or approve the changes.
Verify security checklist failure – If you are using the Verify security checklist task, which requires that all
tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security
best practices, the release manager receives an email.
Verify security checklist success – If you are using the Verify security checklist task, which requires that all
tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security
best practices, the release manager receives an email.

Configuring Jenkins
If you are using a Jenkins task in your pipeline, configure Jenkins.

Do the following steps:

1. On the orchestration server, create an authentication profile that uses Jenkins credentials.

a. Click Create > Security > Authentication Profile.

b. Enter a name, and then click Create and open.

c. In the User name field, enter the user name of the Jenkins user.

d. Click Set password, enter the Jenkins password, and then click Submit.

e. Select the Preemptive authentication check box.

f. Click Save.

2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF), disable it by completing the
following steps:

a. In Jenkins, click Manage Jenkins.

b. Click Configure Global Security.

c. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box.

d. Click Save.

3. Install the Post build task plug-in.

4. Install the curl command on the Jenkins server.

5. Create a new freestyle project.

6. On the General tab, select the This project is parameterized check box.

7. Add the BuildID and CallBackURL parameters.

a. Click Add parameter, and then select String parameter.

b. In the String field, enter BuildID.

c. Click Add parameter, and then select String parameter.

d. In the String field, enter CallBackURL.

8. In the Build Triggers section, select the Trigger builds remotely check box.

9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely.

10. In the Build Environment section, select the Use Secret text(s) or file(s) check box.

11. In the Bindings section, do the following actions:

a. Click Add, and then select User name and password (conjoined).

b. In the Variable field, enter RMCREDENTIALS.

c. In the Credentials field, click Specific credentials.

d. Click Add, and then select Jenkins.

e. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager
operator that is configured on the orchestration server.

f. In the Password field, enter the password.


g. Click Save.

12. Configure information in the Post-Build Actions section, depending on your operating system:

If Jenkins is running on Microsoft Windows, go to step 13.


If Jenkins is running on Linux, go to step 14.

13. If Jenkins is running on Microsoft Windows, add the following post-build tasks:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build fails, for example, BUILD FAILURE.

c. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "
{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"

d. Click Add another task.

e. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build is successful, for example, BUILD SUCCESS.

f. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "
{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}"
"%CallBackURL%"

g. Click Save.

14. If Jenkins is running on Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent
sign (%) to access the environment variables:

a. Click Add post-build action, and then select Post build task.

b. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build fails, for example, BUILD FAILURE.

c. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "
{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

d. Click Add another task.

e. In the Log text field, enter a unique string for the message that is displayed in the build console
output when a build is successful, for example, BUILD SUCCESS.

f. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "
{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"

g. Click Save.
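The remote trigger (steps 8 and 9) and the post-build callback above fit together as follows. This is a sketch with placeholder values only: Jenkins supplies $JOB_NAME and $BUILD_NUMBER at run time, Deployment Manager passes the BuildID and CallBackURL parameters when it starts the job, and the job name, token, and URLs below are invented for illustration.

```shell
# Placeholder values standing in for what Jenkins and Deployment Manager
# provide at run time.
JENKINS_URL="https://jenkins.example.com"
JOB_NAME="pega-deploy-task"
BUILD_NUMBER="17"
TOKEN="my-trigger-token"          # the Authentication Token from step 9
BuildID="42"
CallBackURL="http://orchestrator.example.com:8080/prweb/PRRestService"

# How the job is started remotely: Jenkins' buildWithParameters endpoint
# receives the BuildID and CallBackURL string parameters defined in step 7.
TRIGGER_URL="${JENKINS_URL}/job/${JOB_NAME}/buildWithParameters?token=${TOKEN}&BuildID=${BuildID}&CallBackURL=${CallBackURL}"

# The JSON payload that the post-build curl commands send back to Deployment
# Manager; STATUS is SUCCESS or FAIL depending on which log text matched.
STATUS="SUCCESS"
PAYLOAD="{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"$STATUS\",\"pyID\":\"$BuildID\"}"
echo "$PAYLOAD"

# The live callback, commented out because it needs a running pipeline:
# curl --user "$RMCREDENTIALS" -H "Content-Type: application/json" \
#      -X POST --data "$PAYLOAD" "$CallBackURL"
```

Building the payload in one variable also makes the escaping in the post-build Script fields easier to verify before pasting them into Jenkins.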

Configuring and running pipelines with Deployment Manager 3.4.x


Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks
so that you can quickly deploy high-quality software to production.


On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their
Pega Platform applications. The landing page displays all the running and queued application deployments,
branches that are to be merged, and reports that provide information about your DevOps environment such as
key performance indicators (KPIs).

Note: These topics describe the features for the latest version of Deployment Manager 3.4.x.

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous
delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be
merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA
stage of the pipeline.
Manually starting a deployment in Deployment Manager

You can start a deployment manually if you are not using branches and are working directly in rulesets. You
can also start a deployment manually if you do not want deployments to start automatically when branches
are merged.

Starting a deployment in a distributed, branch-based environment

If you are using Deployment Manager in a distributed, branch-based environment and using multiple
pipelines per application, first export the branch to the source development system, and then merge it.

Completing or rejecting a manual step

If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can
either complete it or reject it if your role has the appropriate permissions. For example, if a user was
assigned a task and completed it, you can complete the task in the pipeline to continue the deployment.
Deployment Manager also sends you an email when there is a manual step in the pipeline. You can complete
or reject a step either within the pipeline or through email.

Managing aged updates

If your role has the appropriate permissions, you can manage aged updates in a number of ways, such as
importing them, skipping the import, or manually deploying applications. Managing aged updates gives you
more flexibility in how you deploy application changes.

Configuring settings to automatically apply schema changes

You can configure settings to automatically deploy schema changes that are in an application package that
is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema
changes if you do not have the privileges to deploy them.

Pausing and resuming deployments

When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment
at the next step. Your user role determines if you can pause a deployment.

Stopping a deployment

If your role has the appropriate permissions, you can stop a deployment to prevent it from moving through
the pipeline.

Managing a deployment that has errors

If a deployment has errors, the pipeline stops processing on it. You can perform actions such as rolling back
the deployment or skipping the step on which the error occurred.

Viewing branch status

You can view the status of all the branches that are in your pipeline. For example, you can see whether a
branch was merged in a deployment and when it was merged.

Viewing deployment logs

View logs for a deployment to see the completion status of operations, for example, when a deployment
moves from staging to production. When the Deploy task runs, the application package is imported into the
candidate system. By default, logs record all the new rule and data instances and all the updated rule and
data instances that are in this application package. You can disable the logging of such rule and data types
and can change the logging level to control which events are displayed in the log.

Viewing deployment reports for a specific deployment

Deployment reports provide information about a specific deployment. You can view information such as the
number of tasks that you configured on a deployment that have been completed and when each task started
and ended. If there were schema changes on the deployment, the report displays the schema changes.

Viewing reports for all deployments

Reports provide a variety of information about all the deployments in your pipeline. For example, you can
view the frequency of new deployments to production.

Deleting a pipeline
If your role has the appropriate permission, you can delete a pipeline. When you delete a pipeline, its
associated application packages are not removed from the repositories that the pipeline is configured to use.

Managing artifacts generated by Deployment Manager

You can view, download, and delete application packages in repositories that are on the orchestration server.
If you are using Deployment Manager on Pega Cloud Services, application packages that you have deployed
to cloud repositories are stored on Pega Cloud Services. To manage your cloud storage space, you can
download and permanently delete the packages.

Configuring an application pipeline


When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery
workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you
can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in
the following scenarios:

To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline
that has only a production stage or development and production stages.
To use parallel development and hotfix life cycles for your application.

Adding a pipeline on Pega Cloud Services

If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application
name and version for the pipeline. Many fields are populated by default, such as the URL of your
development system and product rule name and version.

Adding a pipeline on premises

When you add a pipeline on premises, you define all the stages and tasks that you want to do on each
system. For example, if you are using branches, you can start a build when a branch is merged. If you are
using a QA system, you can run test tasks to validate application data.

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that
moves through the pipeline.

Modifying URLs and authentication profiles

You can modify the URLs of your development and candidate systems and the authentication profiles that
are used to communicate between those systems and the orchestration server.

Modifying repositories

You can modify the development and production repositories through which the product rule that contains
application contents moves through the pipeline. All the generated artifacts are archived in the development
repository, and all the production-ready artifacts are archived in the production repository.

Configuring Jenkins server information for running Jenkins jobs

If you are using a Run Jenkins step, configure Jenkins server information so that you can run Jenkins jobs.

Specifying merge options for branches

If you are using branches in your application, specify options for merging branches into the base application.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you
can skip a stage or add tasks such as Pega unit testing to be done on the QA stage.

Adding a pipeline on Pega Cloud Services


If you are using Pega Cloud Services, when you add a pipeline, you specify details such as the application name
and version for the pipeline. Many fields are populated by default, such as the URL of your development system
and product rule name and version.
To add a pipeline on Pega Cloud Services, do the following steps:

1. In the Designer Studio footer, click Deployment Manager.

2. Click Add pipeline.

3. Specify the details of the application for which you are creating the pipeline.

a. To change the URL of your development system, which is populated by default with your development
system URL, in the Development environment field, press the Down arrow key and select the URL.

This is the system on which the product rule that defines the application package that moves through
the repository is located.

b. In the Application field, press the Down arrow key and select the name of the application.

c. In the Version field, press the Down arrow key and select the application version.

d. Click the Access group field and select the access group for which pipeline tasks are run.

This access group must be present on all the candidate systems and have at least the sysadmin4 role.
Ensure that the access group is correctly pointing to the application name and version that is
configured in the pipeline.

e. In the Pipeline name field, enter a unique name for the pipeline.

4. Click Create.

Result: The system adds tasks, which you cannot delete, to the pipeline that are required to successfully
run a workflow, for example, Deploy and Generate Artifact. For Pega Cloud Services, it also adds mandatory
tasks that must be run on the pipeline, for example, the Check guardrail compliance task and Verify security
checklist task.

5. Add tasks that you want to perform on your pipeline, such as Pega unit testing. For more information, see
Modifying stages and tasks in the pipeline.

Adding a pipeline on-premises


When you add a pipeline on premises, you define all the stages and tasks that you want to do on each system.
For example, if you are using branches, you can start a build when a branch is merged. If you are using a QA
system, you can run test tasks to validate application data.

To add a pipeline on premises, complete the following steps:

1. In the Designer Studio footer, click Deployment Manager.

2. Click Add pipeline.

3. Specify the details of the application for which you are creating the pipeline.

a. In the Development environment field, enter the URL of the development system. This is the system on
which the product rule that defines the application package that moves through the repository is
located.

b. In the Application field, press the Down arrow key and select the name of the application.

c. In the Version field, press the Down arrow key and select the application version.

d. In the Access group field, press the Down arrow key and select the access group for which pipeline
tasks are run.

This access group must be present on all the candidate systems and have at least the sysadmin4 role.

e. In the Pipeline name field, enter a unique name for the pipeline.

f. In the Product rule field, enter the name of the product rule that defines the contents of the application.

g. In the Version field, enter the product rule version.

4. To configure dependent applications, click Dependencies.


a. Click Add.

b. In the Application name field, press the Down arrow key and select the application name.

c. In the Application version field, press the Down arrow key and select the application version.

d. In the Repository name field, press the Down arrow key and select the repository that contains the
production-ready artifact of the dependent application.

If you want the latest artifact of the dependent application to be automatically populated, ensure that
the repository that contains the production-ready artifact of the dependent application is configured to
support file updates.

e. In the Artifact name field, press the Down arrow key and select the artifact.

For more information about dependent applications, see Product rules: Listing product dependencies for
Pega-supplied applications.

f. Click Next.

5. In the Environment details section, in the Stages section, specify the URL of each candidate system and the
authentication profile that each system uses to communicate with the orchestration system.

a. In the Environments field for the system, press the Down arrow key and select the URL of the system.

b. If you are using your own authentication profiles, in the Authentication field for the system, press the
Down arrow key and select the authentication profile to use to communicate from the orchestration
server to the system.

By default, the fields are populated with the DMAppAdmin authentication profile.

6. In the Artifact management section, specify the development and production repositories through which
the product rule that contains the application contents moves as it progresses through the pipeline.

7. In the Development repository field, press the Down arrow key and select the development repository.

8. In the Production repository field, press the Down arrow key and select the production repository.

9. In the External orchestration server section, if you are using a Jenkins step in a pipeline, specify the Jenkins
details.

a. In the URL field, enter the URL of the Jenkins server.

b. In the Authentication profile field, press the Down arrow key and select the authentication profile on the
orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.

10. Click Next.

11. If you are using branches in your application, in the Merge policy section, specify merge options. Do one of
the following actions:

To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
To merge branches into a new ruleset, click New ruleset.

12. In the Password field, enter the password that locks the rulesets on the development system.

13. Click Next.

Result: The system adds tasks, which you cannot delete, to the pipeline that are required to successfully
run a workflow, for example, Deploy and Generate Artifact. The system also adds other tasks to enforce best
practices such as Check guardrail compliance and Verify security checklist.

14. To specify that a branch must meet a compliance score before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check guardrail compliance.

c. In the Weighted compliance score field, enter the minimum required compliance score.

d. Click Submit.
For more information about compliance scores, see Compliance score logic.

15. To specify that a branch must be reviewed before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check review status.

c. Click Submit.

For more information about branch reviews, see Branch reviews.

16. To start a deployment automatically when a branch is merged, select the Trigger deployment on merge
check box. Do not select this check box if you want to manually start deployments.

For more information, see Manually starting a deployment.

17. To skip a deployment life cycle stage, clear its check box.

18. In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline.
See the following topics for more information:

Adding the Pega unit testing task


Adding the Jenkins task
Adding the Check guardrail compliance score task
Adding the Verify security checklist task
Modifying the Approve for production task

19. Click Finish.

Running Pega unit tests by adding the Run Pega unit tests task

If you use Pega unit tests to validate application data, add the Pega unit testing task on the pipeline stage
where you want to run it. For example, you can run Pega unit tests on a QA system.

Running Jenkins steps by adding the Run Jenkins step task

If you are using Jenkins to perform tasks in your pipeline, you can add the Run Jenkins step to the stage on
which you want it to run. If you have configured the Jenkins OrchestratorURL and PipelineID parameters,
when this task fails, the pipeline stops running. For more information about configuring these parameters,
see .

Continuing or stopping a deployment by adding the Perform manual step task

Use manual steps so that users must take an action before a pipeline deployment can continue. Users can
either accept the task to continue the deployment or reject the task to stop it.

Specifying that an application meet a compliance score by adding the Check guardrail compliance score task

You can use the Check guardrail compliance score task so that an application must meet a compliance score
for the deployment to continue. The default value is 97, which you can modify.

Ensuring that the Application Security Checklist is completed by adding the Verify security checklist task

For your pipeline to comply with security best practices, you can add a task to ensure that all the steps in
the Application Security Checklist are performed. For customers on Pega Platform 8.4 and later, a
Security Checklist API is available to provide an automated security configuration assessment. Both
candidate and orchestrator environments must run Deployment Manager 4.8 or later to use this
functionality.

Modifying the Approve for production task

The Approve for production task is added to the stage before production. Use this task if you want a user to
approve application changes before those changes are sent to production.

Adding the Pega unit testing task


If you use Pega unit tests to validate application data, add the Pega unit testing task on the pipeline stage where
you want to run it. For example, you can run Pega unit tests on a QA system.

To run Pega unit tests for either the pipeline application or for an application that is associated with an access
group, do the following steps:

1. Do one of the following actions:

Click a manually added task, click the More icon, and then click either Add task above or Add task
below.
Click Add task in the stage.

2. Select Pega unit testing from the Task list.

3. Do one of the following actions:

To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite
ID field, enter the pxInsName of the test suite.

You can find this value in the XML document that comprises the test suite by clicking Actions XML on the
Edit Test Suite form in Designer Studio. If you do not specify a test suite, all the Pega unit tests for the
pipeline application are run.

To run all the Pega unit tests for an application that is associated with an access group, in the Access
Group field, enter the access group.

For more information about creating Pega unit tests, see Creating Pega unit test cases.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on-premises


Modifying stages and tasks in the pipeline

Adding the Jenkins task


If you are using Jenkins to perform tasks in your pipeline, you can add the Jenkins task to the stage on which you
want it to run.

Do the following steps:

1. Do one of the following actions:

Click a manually added task, click the More icon, and then click either Add task above or Add task
below.
Click Add task in the stage.

2. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that
you want to run.

3. In the Token field, enter the Jenkins authentication token.

4. In the Parameters field, enter parameters, if any, to send to the Jenkins job.

5. Click Submit.

6. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline
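
Under the hood, triggering a Jenkins job with a name, token, and parameters corresponds to Jenkins's standard remote build-trigger endpoints. The following sketch illustrates that Jenkins API, not Deployment Manager's internal code; the server URL, job name, and token values are hypothetical:

```python
from urllib.parse import urlencode

def jenkins_trigger_url(server, job, token, params=None):
    """Build the URL for Jenkins's remote build-trigger API.

    Jenkins exposes /job/<name>/build for parameterless jobs and
    /job/<name>/buildWithParameters for parameterized ones; the token
    query parameter must match the trigger token configured on the job.
    """
    query = {"token": token}
    query.update(params or {})
    endpoint = "buildWithParameters" if params else "build"
    return "{}/job/{}/{}?{}".format(server.rstrip("/"), job, endpoint, urlencode(query))

# Hypothetical server, job, and token names for illustration only.
url = jenkins_trigger_url(
    "https://jenkins.example.com",
    "DeployExternal",
    "my-trigger-token",
    {"OrchestratorURL": "https://orchestrator.example.com", "PipelineID": "MyApp-Pipeline"},
)
```

Posting to such a URL with suitable Jenkins credentials queues the job; OrchestratorURL and PipelineID are examples of parameters a pipeline can pass so that the Jenkins job can report status back.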

Adding the manual step task


Use manual steps so that users must take an action before a pipeline deployment can continue. Users can either
accept the task to continue the deployment or reject the task to stop it.

To add a manual step that a user must perform in the pipeline, do the following steps:

1. Do one of the following actions:

Click a manually added task, click the More icon, and then click either Add task above or Add task
below.
Click Add task in the stage.
2. From the Task list, select Manual.

3. In the Job name field, enter text that describes the action that you want the user to take.

4. In the Assigned to field, press the Down arrow key and select the operator ID to assign the task to.

5. Click Submit.

6. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on-premises


Modifying stages and tasks in the pipeline

Adding the Check guardrail compliance score task


You can use the Check guardrail compliance score task so that an application must meet a compliance score for
the deployment to continue. The default value is 97, which you can modify.

To specify that an application must meet a compliance score, do the following steps:

1. Do one of the following actions:

Click a manually added task, click the More icon, and then click either Add task above or Add task
below.
Click Add task in the stage.

2. From the Task list, select Check guardrail compliance.

3. In the Weighted compliance score field, enter the minimum required compliance score.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises


Modifying stages and tasks in the pipeline

Adding the Verify security checklist task


For your pipeline to comply with security best practices, you can add a task to ensure that all the steps in
the Application Security Checklist are performed.

You must log in to the system for which this task is configured, and then mark all the tasks in the Application
Security checklist as completed for the pipeline application. For more information about completing the checklist,
see Preparing your application for secure deployment.

Do the following steps:

1. Do one of the following actions:

Click a manually added task, click the More icon, and then click either Add task above or Add task
below.
Click Add task in the stage.

2. From the Task list, select Verify Security checklist.

3. Click Submit.

4. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on-premises


Modifying stages and tasks in the pipeline

Modifying the Approve for production task


The Approve for production task is added to the stage before production. Use this task if you want a user to
approve application changes before those changes are sent to production.

Do the following steps:

1. Click the Info icon.


2. In the Job name field, enter a name for the task.

3. In the Assign to field, press the Down arrow key and select the user who approves the application for
production. An email is sent to this user, who can approve or reject application changes from within the
email.

4. Click Submit.

5. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on-premises


Modifying stages and tasks in the pipeline

Modifying application details


You can modify application details, such as the product rule that defines the content of the application that moves
through the pipeline.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines , and then click the name of the pipeline.

2. Click Actions Application details .

3. In the Development environment field, enter the URL of the development system, which is the system on
which the product rule that defines the application package that moves through the repository is located.

4. In the Version field, press the Down arrow key and select the application version.

5. In the Product rule field, enter the product rule that defines the contents of the application.

6. In the Version field, enter the product rule version.

7. If you are using a separate product rule to manage test cases, in the Application test cases section, complete
the following steps:

a. To deploy test cases, select the Deploy test applications check box.

b. In the Test application field, enter the name of the test application.

c. In the Version field, enter the version of the test case product rule.

d. In the Access group field, enter the access group for which test cases are run.

e. In the Product rule field, enter the name of the test case product rule.

f. From the Deploy until field, select the last pipeline stage to which the test case product rule is
deployed.

Note: When you use separate product rules for test cases and run a pipeline, the Run Pega unit tests,
Enable test coverage, and Verify test coverage tasks are run for the access group that is specified in
this section.

For the Run Pega scenario tests task, the user name that you provide should belong to the access group
that is associated with the test application.

8. If the application depends on other applications, in the Dependencies section, add those applications.

a. Click Add.

b. In the Application name field, press the Down arrow key and select the application name.

c. In the Application version field, press the Down arrow key and select the application version.

d. In the Repository name field, press the Down arrow key and select the repository that contains the
production-ready artifact of the dependent application. If you want the latest artifact of the dependent
application to be automatically populated, ensure that the repository that contains the production-ready
artifact of the dependent application is configured to support file updates.

e. In the Artifact name field, press the Down arrow key and select the artifact.
For more information about dependent applications, see Listing product dependencies.

9. Click Save.

Modifying URLs and authentication profiles


You can modify the URLs of your development and candidate systems and the authentication profiles that are
used to communicate between those systems and the orchestration server.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines , and then click the name of the pipeline.

2. Click Actions Environment Details .

3. In the Environments field for the system, press the Down arrow key and select the URL of the system.

4. In the Authentication field for the system, press the Down arrow key and select the authentication profile
to use to communicate from the orchestration server to the system.

5. Click Save.

Modifying repositories
You can modify the development and production repositories through which the product rule that contains
the application contents moves as it progresses through the pipeline. All the generated artifacts are archived
in the development repository, and all the production-ready artifacts are archived in the production repository.

If you are using Pega Cloud Services, you do not need to configure repositories, although you can use
repositories other than the default ones that are provided.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines , and then click the name of the pipeline.

2. Click Actions Artifact Management .

3. If you are using Deployment Manager on premises, or on Pega Cloud Services with default repositories,
complete the following tasks:

a. In the Application repository section, in the Development repository field, press the Down arrow key and
select the development repository.

b. In the Production repository field, press the Down arrow key and select the production repository.

4. If you are using Deployment Manager on Pega Cloud Services and want to use repositories other than the
default repositories, complete the following tasks:

a. In the Artifact repository section, click Yes.

b. In the Development repository field, press the Down arrow key and select the development repository.

c. In the Production repository field, press the Down arrow key and select the production repository.

5. Click Save.

Configuring Jenkins server information


If you are using a Jenkins step, specify details about the Jenkins server such as its URL.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines , and then click the name of the pipeline.

2. Click Actions External orchestration server .

3. Click the Jenkins icon, and then click OK.

4. In the URL field, enter the URL of the Jenkins server.


5. In the Authentication profile field, press the Down arrow key and select the authentication profile on the
orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.

6. Click Save.

Specifying merge options for branches


If you are using branches in your application, specify options for merging branches into the base application.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines , and then click the name of the pipeline.

2. Click Actions Merge policy .

3. Do one of the following actions:

To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
To merge branches into a new ruleset, click New ruleset.

4. In the Password field, enter the password that locks the rulesets on the development system.

5. Click Save.
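
The two merge policies differ only in which ruleset version the branch content lands in. As an illustrative sketch, assuming the usual NN-NN-NN ruleset version format (the function and the patch-increment behavior for the new-ruleset option are simplifications, not Deployment Manager internals):

```python
def merge_target(versions, policy):
    """Pick the ruleset version that branch content is merged into.

    versions: ruleset versions in the application, as "NN-NN-NN" strings.
    policy:   "highest" merges into the highest existing version;
              "new" creates the next patch version above it.
    """
    parts = sorted(tuple(int(p) for p in v.split("-")) for v in versions)
    major, minor, patch = parts[-1]          # highest existing version
    if policy == "new":
        patch += 1                           # next patch version for a new ruleset
    return "%02d-%02d-%02d" % (major, minor, patch)
```

With versions ["01-01-01", "01-01-15"], the highest existing ruleset policy targets 01-01-15, while the new ruleset policy would create 01-01-16.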

Modifying stages and tasks in the pipeline


You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can
skip a stage or add tasks such as Pega unit testing to be done on the QA stage.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines , and then click the name of the pipeline.

2. Click Pipeline model.

3. To specify that a branch must meet a compliance score before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check guardrail compliance.

c. In the Weighted compliance score field, enter the minimum required compliance score.

d. Click Submit.

For more information about compliance scores, see Compliance score logic.

4. To specify that a branch must be reviewed before it can be merged:

a. In the Merge criteria pane, click Add task.

b. From the Task list, select Check review status.

c. Click Submit.

For more information about branch reviews, see Branch reviews.

5. To start a deployment automatically when a branch is merged, select the Trigger deployment on merge
check box. Do not select this check box if you want to manually start a deployment.

For more information, see Manually starting a deployment.

6. To skip a deployment life cycle stage, clear its check box.

7. In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline.
See the following topics for more information:

Adding the Pega unit testing task


Adding the Jenkins task
Adding the Check guardrail compliance score task
Adding the Verify security checklist task
Modifying the Approve for production task

8. Click Finish.

Manually starting a deployment


You can start a deployment manually if you are not using branches and are working directly in rulesets. You can
also start a deployment manually if you do not want deployments to start automatically when branches are
merged.

Do the following steps:

1. If you do not want deployments to start automatically when branches are merged:

a. If the pipeline is not open, in the navigation pane, click Pipelines Application pipelines .
b. Click Pipeline model.
c. Clear the Trigger deployment on merge check box.

2. Do one of the following actions:

If the pipeline that you want to start is open, click Start deployment.
Click Pipelines , and then click Start deployment for the pipeline that you want to start.

3. In the Start deployment dialog box, start a new deployment or deploy an existing application by
completing one of the following actions:

To deploy a new application package, go to step 4.

To deploy an application package that is on a cloud repository, go to step 5.

4. To start a deployment and deploy a new application package, do the following steps:

a. Click Generate new artifact.

b. In the Deployment name field, enter the name of the deployment.

c. Click Deploy.

5. To start a deployment and deploy an application package that is on a cloud repository, do the following
steps:

a. Click Deploy an existing artifact.

b. In the Deployment name field, enter the name of the deployment.

c. In the Select a repository field, press the Down arrow key and select the repository.

d. In the Select an artifact field, press the Down arrow key and select the application package.

6. Click Deploy.

Starting a deployment in a distributed, branch-based environment


If you are using Deployment Manager in a distributed, branch-based environment and using multiple pipelines per
application, first export the branch to the source development system, and then merge it.

Do the following steps:

1. On the remote development system, package the branch. For more information, see Packaging a branch.

2. Export the branch.

3. On the source development system, import the branch by using the Import wizard. For more information,
see Importing a file by using the Import wizard.

4. On the source development system, start a deployment by using the Merge Branches wizard. For more
information, see Submitting a branch into a pipeline.

If you are using one pipeline per application, you can publish a branch to start the merge. For more
information, see Publishing a branch to a repository.

Completing or rejecting a manual step


If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either
complete it or reject it. For example, if a user was assigned a task and completed it, you can complete the task to
continue the deployment. Deployment Manager also sends you an email when there is a manual step in the
pipeline. You can complete or reject a step either within the pipeline or through email.

Deployment Manager also generates a manual step if there are schema changes in the application package that
the release manager must apply. For more information, see Schema changes in application packages.

To complete or reject a manual step, do the following steps:

1. To complete or reject a manual step from within an email, click either Accept or Reject.

2. To complete or reject a manual step in the pipeline, do the following steps:

a. In the Designer Studio footer, click Deployment Manager.

b. Click a pipeline.

c. Accept or reject the step by doing one of the following actions:

To resolve the task so that the deployment continues through the pipeline, click Complete.
To reject the task so that the deployment does not proceed, click Reject.

Managing aged updates


You can manage aged updates in a number of ways, such as importing them, skipping the import, or manually
deploying applications. Managing aged updates gives you more flexibility in how you deploy application changes.

Do the following steps:

1. In the Designer Studio footer, click Deployment Manager.

2. Click the pipeline.

3. Click View aged updates to view a list of the rules and data instances in the application package that are
older than the instances on the system.

4. Click the More icon and do one of the following actions:

To import the older rule and data instances that are in the application package into the system, which
overwrites the newer versions that are on the system, click Overwrite aged updates.
To skip the import, click Skip aged updates.
To manually deploy the package from the Import wizard on the system, click Deploy manually and
resume. Deployment Manager does not run the Deploy step on the stage.

Understanding aged updates

An aged update is a rule or data instance in an application package that is older than an instance that is on a
system to which you want to deploy the application package. By being able to import aged updates, skip the
import, or manually deploy your application changes, you now have more flexibility in determining the rules
that you want in your application and how you want to deploy them.

Understanding aged updates


An aged update is a rule or data instance in an application package that is older than an instance that is on a
system to which you want to deploy the application package. By being able to import aged updates, skip the
import, or manually deploy your application changes, you now have more flexibility in determining the rules that
you want in your application and how you want to deploy them.

For example, you might update a dynamic system setting directly on a quality assurance system, and then
deploy an application package that contains an older instance of that setting. Before Deployment Manager
deploys the package, the system detects that the version of the dynamic system setting on the system is
newer than the version in the package and creates a manual step in the pipeline.
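
The detection amounts to comparing, per instance, the update time in the package against the update time on the target system. A simplified sketch (the real comparison is per rule or data instance inside the archive, and the instance keys here are invented):

```python
from datetime import datetime

def find_aged_updates(package, system):
    """Return the keys of package instances older than the system's copies.

    package, system: dicts mapping an instance key to its last-update time.
    An instance is an "aged update" when the copy in the package predates
    the copy already present on the target system.
    """
    return [
        key
        for key, pkg_time in package.items()
        if key in system and pkg_time < system[key]
    ]

# The QA system's setting was changed after the package was built,
# so the package copy is an aged update.
package = {"MySetting": datetime(2023, 1, 10)}
system = {"MySetting": datetime(2023, 2, 1)}
```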

Configuring settings to automatically apply schema changes

You can configure settings to automatically deploy schema changes that are in an application package that is to
be deployed on candidate systems. Configure these settings so that you do not have to manually apply schema
changes if you do not have the privileges to deploy them.

Do the following steps:

1. On the candidate system, in Pega Platform, set the AutoDBSchemaChanges dynamic system setting to true
to enable schema changes at the system level.

a. In Designer Studio, search for AutoDBSchemaChanges.

b. In the dialog box that appears for the search results, click AutoDBSchemaChanges.

c. On the Settings tab, in the Value field, enter true.

d. Click Save.

2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more
information, see Specifying privileges for an Access or Role to Object rule.

Result: These settings are applied sequentially. If the AutoDBSchemaChanges dynamic system setting is set to
false, you cannot deploy schema changes, even if you have the SchemaImport privilege.

For more information about the database/AutoDBSchemaChanges dynamic system setting, see Importing rules
and data by using a direct connection to the database.
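
The two settings combine as a simple sequential check, which can be sketched as follows (the function name is illustrative, not a Pega API):

```python
def can_auto_apply_schema(auto_db_schema_changes, has_schema_import_privilege):
    """Decide whether schema changes in a package are applied automatically.

    The system-level AutoDBSchemaChanges dynamic system setting is
    evaluated first; when it is false, the user-level SchemaImport
    privilege is never consulted, so schema changes cannot be deployed.
    """
    if not auto_db_schema_changes:
        return False  # system-level setting wins, regardless of privilege
    return has_schema_import_privilege
```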

Understanding schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega
Platform orchestration server checks the candidate system to verify that you have the required privileges to
deploy the schema changes. One of the following results occurs:


If you have the appropriate privileges, schema changes are automatically applied to the candidate system,
the application package is deployed to the candidate system, and the pipeline continues.
If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the
schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so
that you can apply the schema changes. After you complete the step, the pipeline continues. For more
information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically apply schema changes so that you do not have to manually
apply them if you do not have the required privileges. For more information, see Configuring settings to
automatically apply schema changes.

Pausing a deployment
When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at
the next step.

To pause a deployment:

1. If the pipeline is not open, in the navigation pane, click Pipelines .

2. Click the pipeline.

3. Click Pause.

Stopping a deployment
Stop a deployment to discontinue processing.
To stop a deployment:

1. If the pipeline is not open, in the navigation pane, click Pipelines .

2. Click the More icon, and then click Abort.

Performing actions on a deployment that has errors


If a deployment has errors, the pipeline stops processing on it. You can perform actions on it, such as rolling back
the deployment or skipping the step on which the error occurred.

Do the following steps:

1. If the pipeline is not open, in the navigation pane, click Pipelines .

2. Click the More icon, and then do one of the following actions:

To resume running the pipeline from the task, click Resume from current task.
To skip the step and continue running the pipeline, click Skip current task and continue.
To roll back to an earlier deployment, click Rollback.
To stop running the pipeline, click Abort.

Viewing branch status


You can view the status of all the branches that are in your pipeline. For example, you can see whether a branch
was merged in a deployment and when it was merged.

Do the following steps:

1. Click Deployment Manager in the Designer Studio footer.

2. Click a pipeline.

3. Click Actions > View branches.

Viewing deployment logs


View logs for a deployment to see the completion status of operations, for example, when a data simulation is
moved to the simulation environment. You can change the logging level to control which events are displayed in
the log.

For example, you can change logging levels of your deployment from INFO to DEBUG for troubleshooting
purposes. For more information, see Logging Level Settings tool.

Do the following steps:

1. Click Deployment Manager in the Designer Studio footer.

2. Click a pipeline.

3. Click the Gear icon for the deployment for which you want to view the log file.

4. Click View log.

Viewing deployment reports for a specific deployment


Deployment reports provide information about a specific deployment. You can view information such as the
number of tasks that you configured on a deployment that have been completed and when each task started and
ended.

Do the following steps:

1. Click Deployment Manager in the Designer Studio footer.

2. Click a pipeline.

3. Click the Gear icon for the deployment for which you want to view the deployment report.

4. Click View report.


Viewing reports for all deployments
Reports provide a variety of information about all the deployments in your pipeline. For example, you can view
the frequency of new deployments to production.

You can view the following key performance indicators (KPIs):

Deployment Success – Percentage of deployments that are successfully deployed to production
Deployment Frequency – Frequency of new deployments to production
Deployment Speed – Average time taken to deploy to production
Start frequency – Frequency at which new deployments are triggered
Failure rate – Average number of failures per deployment
Merges per day – Average number of branches that are successfully merged per day

1. In the Designer Studio footer, click Deployment Manager.

2. Click a pipeline.

3. Click Actions > Reports.

Deleting an application pipeline


When you delete a pipeline, its associated application packages are not removed from the repositories that the
pipeline is configured to use.

Do the following steps:

1. In the Designer Studio footer, click Deployment Manager.

2. Click the Delete icon for the pipeline that you want to delete.

3. Click Submit.

Viewing, downloading, and deleting application packages


You can view, download, and delete application packages in repositories that are on the orchestration server. If
you are using Deployment Manager on Pega Cloud Services, application packages that you have deployed to
cloud repositories are stored on Pega Cloud Services. To manage your cloud storage space, you can download
and permanently delete the packages.

If you are using a separate product rule to manage a test application, the name of the test product rule is the
same as that of the main product rule with _Tests appended to it.

Do the following steps:

1. In the Designer Studio footer, click Deployment Manager.

2. Click the pipeline for which you want to download or delete packages.

3. Click Actions > Browse artifacts.

4. Click either Development Repository or Production Repository.

5. To download a package, click the package, and then save it to the appropriate location.

6. To delete a package, select the check boxes for the packages that you want to delete and then click Delete.

Understanding distributed development for an application


When you use continuous integration and delivery (CI/CD) workflows, you set up the systems in your environment
based on your workflow requirements. For example, if only one team is developing an application, you can use a
single system for application development and branch merging.

However, you can use a distributed development environment if multiple teams are simultaneously developing an
application. A distributed development environment can comprise multiple development systems, on which
developers author and test the application. They then migrate their changes into and merge them on a
development source system from which those changes are packaged and moved in the CI/CD workflow.

When you configure a distributed development environment, ensure that you are following best practices for
development and version control.

For more information about development best practices, see Understanding best practices for DevOps-based
development workflows.
For more information about versioning best practices, see Understanding best practices for version control in
the DevOps pipeline.

Understanding the benefits of distributed development

Distributed development environments offer a number of benefits when multiple development teams are
working on the same application. For example, each development team can continue to work on its own
Pega Platform server even if other team servers or the source development system are unavailable.

Understanding the components of a distributed development environment

Distributed development consists of several systems, including remote development systems, the source
development system, and an automation server.

Developing applications, merging branches, and deploying changes in a distributed development environment

When you work in a distributed development environment, you generally work in branches and merge them
to incorporate changes into the base application. The implementation of some of your tasks depends on your
specific configuration, such as which automation server you are using.

Understanding the benefits of distributed development


Distributed development environments offer a number of benefits when multiple development teams are working
on the same application. For example, each development team can continue to work on its own Pega Platform
server even if other team servers or the source development system are unavailable.

With distributed development, you can accomplish the following:

Reduce disruption across the development organization.

Each development team can do system-wide configuration and maintenance on its own Pega Platform
server without affecting other team systems.

Increase overall productivity.

Because each team works on its own remote development system, teams can continue working even if system or
application issues are introduced to the development source system or to another team server.

Ensure higher quality change management.

A distributed development setup helps to insulate the source development system from changes introduced
by developers. Distributed development also reduces or eliminates the creation of unnecessary rules or data
instances that application testing generates.

Reduce latency for geographically distributed teams.

Teams can have co-located development servers that have reduced latency, which also increases
productivity.

Reduce the need for coordination across teams when introducing changes and packaging the final
application.

Distributed development simplifies the application packaging process, because developers package the
application on the development source system, which includes all the latest application rulesets to be
packaged.

Capture application changes.

If you use an automation server such as Deployment Manager, when you merge changes on the source
development system, you can audit application updates.

Understanding the components of a distributed development environment

Distributed development consists of several systems, including remote development systems, the source
development system, and an automation server.

The distributed development environment comprises systems that perform the following roles:

Remote development systems – the systems on which development work takes place, typically in branches.
Each team usually uses one Pega Platform server on each system.

Development teams can use tools such as container management or provisioning scripts to quickly start up
remote development systems.

Source development system – the Pega Platform server that stores the base application, which contains only
the latest production changes. It is also the system from which the application is packaged. You merge
branches on this system from remote development systems.

You should maintain high availability and have a reliable backup and restore strategy for the source
development system.

Automation server – the server that automates continuous integration or continuous delivery jobs that are
part of an application lifecycle, such as automated testing, application packaging, task approval, and
deployment.

You can use a number of tools as the automation server, such as Deployment Manager, Jenkins, or Bamboo.

While an automation server is not a requirement, it is recommended that you use one, because it reduces
the manual steps that you need to do in a DevOps workflow.

Developing applications, merging branches, and deploying changes in a distributed development environment

When you work in a distributed development environment, you generally work in branches and merge them to
incorporate changes into the base application. The implementation of some of your tasks depends on your
specific configuration, such as which automation server you are using.

In general, working in a distributed development environment consists of the following tasks and methods:

1. On the remote development system, build a team application layer that is built on top of the main
production application. The team application layer contains branches, tests, and other development rulesets
that do not go into the production application. For more information, see Using multiple built-on applications
on Pega Community.

2. Lock the application ruleset by performing the following steps:

a. In the header of Dev Studio, click the name of your application, and then click Definition.

b. In the Edit Application rule form, in the Application rulesets section, click the Open icon for the ruleset
that you want to lock.

c. On the Edit Ruleset rule form, click Lock and Save.

d. In the Lock Ruleset Version dialog box, in the Password field, enter the password that locks the
ruleset.

e. In the Confirm Password field, reenter the password to confirm it.

f. Click Submit.

g. Save the Edit Ruleset rule form.

h. Save the Edit Application rule form.

3. Create a branch of your production ruleset in the team application. For more information, see Adding
branches to your application.

4. Work in branches on remote development systems.

5. Use release toggles to disable features that are not available for general use. For more information, see
Toggling features on and off.

6. Create a review so that other developers can review branch content. For more information, see Creating a
branch review.

7. Conduct developer reviews to review the content and quality of the branch. For more information, see
Reviewing branches.

8. Lock the branch. For more information, see Locking a branch.

9. Migrate branches to the source development system and then merge and validate the branches. Depending
on your configuration, you can either do both steps at the same time or separately. Do one of the following
tasks:

a. To migrate and merge branches at the same time, perform step 10.

b. To migrate and merge branches separately, perform steps 11 - 13.

10. To migrate and merge branches at the same time, do one of the following actions:

Use Deployment Manager to create pipelines and start a deployment. For more information, see
Migrating and merging branches by using Deployment Manager.
Configure third-party automation servers to automatically merge branches after you publish branches
to the source development system. For more information, see Migrating and merging branches with
third-party automation servers.

11. To migrate a branch and then separately merge and validate the branch, migrate branches to the source
development system by doing one of the following tasks:

Publish a branch to the source development system. For more information, see Publishing a branch to a
repository.
Use prpcUtils to automatically package and migrate the application. For more information, see
Automatically deploying applications with prpcUtils and Jenkins.
Manually migrate the application package by packaging and exporting it. For more information, see
Exporting a branch to the source development system.

12. Merge and validate branches by using the Merge Branches wizard. For more information, see Merging
branches into target rulesets.

13. Migrate the merged rules back to the remote development systems by doing one of the following tasks:

Rebase the development application to obtain the latest ruleset versions from the source development
system. For more information, see Understanding rule rebasing.
Use prpcServiceUtils to export a product archive of your application and import it to the remote
development systems. For more information, see Automatically deploying applications with prpcUtils
and Jenkins.
Manually migrate the application by exporting it from the development source system and then
importing it into the remote development system. For information, see Importing a branch into remote
development systems after merging.

Migrating and merging branches by using Deployment Manager

If you are using Deployment Manager as your automation server, you can use it to merge branches on the
source development system. You must configure certain settings on the source development system before
you can create pipelines that model pre-merge criteria and can merge branches.

Migrating and merging branches with third-party automation servers

If you are using a third-party automation server such as Jenkins, you can automatically start a branch merge
after you publish the branch to the development source system.

Publishing a branch to the source development system

You can migrate a branch to the source development system by publishing a branch to it through a Pega
repository.

Exporting a branch to the source development system

In a distributed development environment, developers migrate branches to a source development system
on which they then merge the branches. You can manually migrate a branch to the source development
system by packaging the branch on your remote development system and then exporting it to the source
development system.

Importing a branch into remote development systems after merging


After you merge branches on the source development system, manually migrate the merged branches back
to the remote development system by packaging and then importing it.

Migrating and merging branches by using Deployment Manager


If you are using Deployment Manager as your automation server, you can use it to merge branches on the source
development system. You must configure certain settings on the source development system before you can
create pipelines that model pre-merge criteria and can merge branches.

Do the following tasks to configure Deployment Manager to merge branches on the source development system:

1. Configure the source development system so that you can merge branches on it. For more information, see
Configuring the development system for branch-based development.

2. Create a pipeline for your application, which includes modeling pre-merge criteria, such as adding a task
that developers must complete a branch review before merging branches. For more information, see
Configuring an application pipeline.

3. Start a deployment by doing one of the following tasks:

Submit an application into the Merge Branches wizard. For more information, see Starting a deployment
as you merge branches from the development environment.
Publish application changes in App Studio. For more information, see Publishing application changes in
App Studio.

Migrating and merging branches with third-party automation servers

If you are using a third-party automation server such as Jenkins, you can automatically start a branch merge after
you publish the branch to the development source system.

To publish a branch and automatically start a merge, do the following tasks:

1. Create a repository connection between the remote development system and the development source
system. For more information, see Creating a repository.

2. Publish the branch to the source development system through the repository. For more information, see
Publishing a branch to a repository.

Publishing a branch to the source development system


You can migrate a branch to the source development system by publishing a branch to it through a Pega
repository.

Note: To automatically merge the branch after publishing it, follow the procedure in Migrating and merging
branches with third-party automation servers.

1. Create a Pega repository connection between the remote development system and the source development
system. For more information, see Adding a Pega repository.

2. Publish the branch to the source development system through the Pega repository. For more information,
see Publishing a branch to a repository.

Exporting a branch to the source development system


In a distributed development environment, developers migrate branches to a source development system on
which they then merge the branches. You can manually migrate a branch to the source development system by
packaging the branch on your remote development system and then exporting it to the source development
system.

To migrate a branch to the source development system, do the following tasks:

1. On the remote development system, package the branch. For more information, see Packaging a branch.

2. On the source development system, import the application package by using the Import wizard. For more
information, see Importing rules and data by using the Import wizard.

Importing a branch into remote development systems after merging

After you merge branches on the source development system, manually migrate the merged branches back to
the remote development system by packaging and then importing it.

To migrate a branch back to the remote development system, do the following tasks:

1. On the source development system, package the branch. For more information, see Packaging a branch.

2. On the remote development systems, import the application package by using the Import wizard. For more
information, see Importing rules and data by using the Import wizard.

Understanding continuous integration and delivery pipelines with third-party automation servers

Use DevOps practices such as continuous integration and continuous delivery to quickly move application
changes from development, through testing, and to deployment. Use Pega Platform tools and common third-party
tools to implement DevOps.

You can set up a continuous integration and delivery (CI/CD) pipeline that uses a Pega repository in which you
can store and test software and a third-party automation server such as Jenkins that starts jobs and performs
operations on your software. Use a CI/CD pipeline to quickly detect and resolve issues before deploying your
application to a production environment.

For example, you can configure an automation server with REST services to automatically merge branches after
you publish them to a Pega repository. You can also configure Jenkins to create branch reviews, run PegaUnit
tests, and return the status of a merge.

Using branches with repositories in a continuous integration and delivery pipeline

When you work in a continuous integration and development environment, you can configure a repository on
a source development system to store and test software. You publish branches to repositories to store and
test them. You can also configure a pipeline with REST services on your automation server to perform
branch operations, such as detecting conflicts, merging branches, and creating branch reviews, immediately
after you push a branch to the repository.

Remotely starting automation jobs to perform branch operations and run unit tests

In a continuous integration and delivery (CI/CD) workflow, repositories provide centralized storage for
software that is to be tested, released, or deployed. You can start a job remotely from an automation server,
such as Jenkins, and use the branches REST and merges REST services to merge branches when you push
them from your development system to a Pega repository on a source development system.

Implementing a CI/CD pipeline with repository APIs

After you have configured an automation server and system of record (SOR) so that you can remotely start
jobs on the automation server, you can implement a continuous integration and development pipeline with
the branches REST and merges REST services. These services detect potential conflicts before a merge,
merge rules in a branch, obtain the status of the merge, and create branch reviews. By remotely starting
jobs that automatically perform branch operations, your organization can deliver higher-quality software
more quickly.

Using branches with repositories in a continuous integration and delivery pipeline

When you work in a continuous integration and development environment, you can configure a repository on a
source development system to store and test software. You publish branches to repositories to store and test
them. You can also configure a pipeline with REST services on your automation server to perform branch
operations, such as detecting conflicts, merging branches, and creating branch reviews, immediately after you
push a branch to the repository.

To use branches with repositories, you must perform the following tasks:

1. In Dev Studio, create a repository. For more information, see Creating a repository.

2. On the source development system, create a development application that is built on all the applications
that will go into production. You must also create a ruleset in the development application that contains all
the rules that you are using for continuous integration.

For example, if you have a production application MyCoApp with the rulesets MyCo:01-01 and MyCoInt:01-01,
you can create a MyCoDevApp development application that is built on MyCoApp and has only one ruleset,
MyCoCIDev:01-01. This ruleset contains the data transforms that are needed to set default information, such
as the application into which branches will be merged.

You can use the branches REST and merge REST services in your pipeline to perform branch operations. The
branches REST service provides subresources that you can use to detect conflicts, merge branches, and
create branch reviews.

3. Configure a continuous integration and development pipeline so that your automation server, such as
Jenkins, starts a job immediately after you push a branch to the source development system.

Use the branches REST and merge REST services in the pipeline to perform branch operations, such as
detecting conflicts and merging branches. For more information, see Remotely starting automation jobs to
perform branch operations and run unit tests.

Remotely starting automation jobs to perform branch operations and run unit tests

In a continuous integration and delivery (CI/CD) workflow, repositories provide centralized storage for software
that is to be tested, released, or deployed. You can start a job remotely from an automation server, such as
Jenkins, and use the branches REST and merges REST services to merge branches when you push them from your
development system to a Pega repository on a source development system.

Pega Platform can communicate with common repository technologies and also can act as a binary repository.
Pega Platform can browse, publish, or fetch artifacts that are created whenever an action creates a RAP file: for
example, exporting an application, product, branch, or component into a remote system of record. By starting
jobs remotely and using the automation server to detect conflicts and merge branches, your organization can
deliver higher-quality software more quickly.

For more information about using branches with repositories, see Using branches with repositories in a
continuous integration and delivery pipeline.

After you push a branch to a system of record, your automation server tool runs a job. Your pipeline can detect
conflicts before a merge. If there are conflicts, the merge does not proceed. If there are no conflicts, the merge
proceeds on the development source system. Your pipeline can run all unit test cases or a test suite to validate
the quality of your build.

After a merge is completed, you can rebase the rules on your development system to import the most recently
committed rules from your system of record. For more information, see Understanding rule rebasing. In addition,
you can configure your pipeline to send emails to users, such as when a job starts or when a conflict is detected.

The following figure displays an example workflow of the pipeline:

Workflow of a continuous integration pipeline on a system of record

Configuring your automation server

Configure your automation server so that you can remotely start jobs on it. Your configuration depends on
the automation server that you use.

Defining the automation server URL

Configure a dynamic system setting on the main development system to define your automation server URL.
Your configuration depends on the automation server that you use.

Configuring a continuous integration and delivery pipeline

After you configure your automation server and your source development system, you can configure a
pipeline on your job to automate the testing and merging of rules. Actions that you can do include obtaining
merge conflicts, creating branch reviews, and running unit tests.

Configuring your automation server


Configure your automation server so that you can remotely start jobs on it. Your configuration depends on the
automation server that you use.

For example, do the following steps to configure Jenkins:

1. Open a web browser and navigate to the location of the Jenkins server.

2. Install the Build Authorization Token Root Plugin.

a. Click Manage Jenkins.

b. Click Manage Plugins.

c. On the Available tab, select the Build Authorization Token Root Plugin check box.

d. Specify whether to install the plug-in without restarting Jenkins or download the plug-in and install it
after restarting Jenkins.

3. Configure your Jenkins job to use parameters.

a. Open the job and click Configure.

b. On the General tab, select the This project is parameterized check box.

c. Click Add Parameter, and then click String Parameter.

d. In the Name field, enter notificationSendToID, which is the operator ID of the user who started the Jenkins job.

Email notifications about the job are sent to the email address that is associated with the user ID.

e. Click Add Parameter, and then click String Parameter.

f. In the Name field, enter branchName.

g. Click Save.

4. Configure the build trigger for your job.

a. Click Configure.

b. On the General tab, in the Build Triggers section, select the Trigger builds remotely (e.g., from scripts)
check box.

c. In the Authentication Token field, enter an authentication token, which can be any string.

d. Click Save.

Defining the automation server URL


Configure a dynamic system setting on the main development system to define your automation server URL. Your
configuration depends on the automation server that you use.

For example, do the following steps if you are using Jenkins:

1. Click Create > SysAdmin > Dynamic System Settings.

2. Enter a description in the Short description field.

3. In the Owning Ruleset field, enter Pega-API.

4. In the Setting Purpose field, enter JenkinsURL.

5. Click Create and open.

6. On the Settings tab, in the Value field, enter http://myJenkinsServerURL/buildByToken/buildWithParameters.

7. Click Save.
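With the Build Authorization Token Root Plugin installed and the JenkinsURL dynamic system setting defined, a job is triggered remotely by appending the job name, authentication token, and string parameters to the buildByToken URL as a query string. The following sketch builds such a trigger URL with the Python standard library; the job name and token values are hypothetical placeholders, while the branchName and notificationSendToID parameter names match the job configuration described above.

```python
from urllib.parse import urlencode

# Base URL as stored in the JenkinsURL dynamic system setting.
JENKINS_URL = "http://myJenkinsServerURL/buildByToken/buildWithParameters"

def build_trigger_url(job, token, branch_name, operator_id):
    """Build the remote-trigger URL for a parameterized Jenkins job.

    The job name and token are hypothetical examples; branchName and
    notificationSendToID are the string parameters configured on the job.
    """
    query = urlencode({
        "job": job,
        "token": token,
        "branchName": branch_name,
        "notificationSendToID": operator_id,
    })
    return f"{JENKINS_URL}?{query}"

# Example: trigger a hypothetical MergeBranch job for branch MyBranch.
url = build_trigger_url("MergeBranch", "my-secret-token", "MyBranch", "jdoe")
print(url)
```

An HTTP GET or POST to the printed URL from your orchestration script then starts the build.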

Configuring a continuous integration and delivery pipeline


After you configure your automation server and your source development system, you can configure a pipeline
on your job to automate the testing and merging of rules. Actions that you can do include obtaining merge
conflicts, creating branch reviews, and running unit tests.

You can do the following actions:

Send a notification with the job URL to the user who published the branch or started the job.
Call the branches REST service with GET /branches/{ID}/conflicts to obtain a list of conflicts. If there are no
conflicts, you can continue the job; otherwise, you can end the job and send a notification to the user to
indicate that the job failed.
Use the merges subresource for the branches REST service to merge branches.
Call the merges REST service with GET /branches/{ID}/merge to obtain the status of a merge.
Use the reviews subresource for the branches REST service to create a branch review.
Use the Execute Tests service to run unit test cases or test suites. For more information, see Running test
cases and suites with the Execute Tests service.
Set up Jenkins to poll the job, using the unique ID that the branches service returned when you merged the
branch, until the status is no longer set to Processing. If the merge is successful, you can continue the job;
otherwise, you can send a notification to the user to indicate that the job failed.
Publish the rulesets into which the branches were merged to a repository such as JFrog Artifactory.
Notify the user that the job is complete.

For more information about the branches REST and merges REST services, see Implementing a CI/CD pipeline with
repository APIs.
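Assuming each REST call above is wrapped in a small client function, the overall job logic can be sketched as a sequence of guarded steps. Every callable here is an injected stand-in rather than a real client, and the "Success" status string is an assumption about the merge status values.

```python
def run_pipeline_job(branch, *, get_conflicts, merge_branch,
                     get_merge_status, run_tests, notify):
    """Sketch of the job logic described above. All callables are
    injected stand-ins for the branches REST, merges REST, and
    Execute Tests services."""
    notify(f"Job started for branch {branch}")
    # Fail fast if the conflicts subresource reports any conflicts.
    if get_conflicts(branch) > 0:
        notify(f"Job failed: branch {branch} has conflicts")
        return False
    # Start the merge and poll until the status leaves Processing.
    merge_id = merge_branch(branch)
    while True:
        status = get_merge_status(merge_id)
        if status != "Processing":
            break
    # Run unit tests only after a successful merge.
    if status != "Success" or not run_tests():
        notify(f"Job failed for branch {branch}")
        return False
    notify(f"Job complete for branch {branch}")
    return True

# Usage with stubs standing in for the live services:
log = []
ok = run_pipeline_job(
    "MyBranch",
    get_conflicts=lambda b: 0,
    merge_branch=lambda b: "MERGE-1",
    get_merge_status=lambda mid: "Success",
    run_tests=lambda: True,
    notify=log.append,
)
print(ok)
```

Replacing each stub with a real HTTP call to the corresponding service yields a working job body for an automation server such as Jenkins.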

Implementing a CI/CD pipeline with repository APIs


After you have configured an automation server and system of record (SOR) so that you can remotely start jobs
on the automation server, you can implement a continuous integration and development pipeline with the
branches REST and merges REST services. These services detect potential conflicts before a merge, merge rules
in a branch, obtain the status of the merge, and create branch reviews. By remotely starting jobs that
automatically perform branch operations, your organization can deliver higher-quality software more quickly.

To access the documentation about the data model, click Resources > API.

For more information about response codes, see Pega API HTTP status codes and errors.

Understanding the branches REST service

Use the branches REST service to retrieve a list of conflicts before you run tests and merge branches and
perform additional tests on conflicts before performing a merge operation. You can also create branch
reviews.

Understanding the merges REST service

Use the merges REST service to obtain the status of the merge that you created by using the merge
subresource.

Understanding the branches REST service


Use the branches REST service to retrieve a list of conflicts before you run tests and merge branches and perform
additional tests on conflicts before performing a merge operation. You can also create branch reviews.

Understanding the conflicts subresource

Use the conflicts subresource to retrieve a list of conflicts before running tests, allowing the pipeline to fail
more quickly so that you can correct errors faster.

Understanding the merge subresource

Use the merge subresource to perform additional tests on conflicts, and then perform a merge operation.

Understanding the review subresource

Use the review subresource to create a branch review.

Understanding the conflicts subresource


Use the conflicts subresource to retrieve a list of conflicts before running tests, allowing the pipeline to fail more
quickly so that you can correct errors faster.

This subresource takes the following parameters:

Request – http://serverURL/prweb/api/v1/branches/{id}/conflicts
Parameter – ID. The name of the branch for which you want to receive conflicts. This parameter is required.
Response – The conflicts subresource returns the number of conflicts.
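A pipeline can use the conflict count as an early gate: if any conflicts are reported, fail before spending time on tests or the merge itself. The sketch below assumes the parsed response exposes the count under a "conflicts" key; check the data model documentation (Resources > API) for the actual field name.

```python
def should_proceed(conflicts_response: dict) -> bool:
    """Return True when the conflicts subresource reports no conflicts.

    The "conflicts" key is an assumed field name for the count returned
    by GET /prweb/api/v1/branches/{id}/conflicts; verify it against the
    published data model.
    """
    return conflicts_response.get("conflicts", 0) == 0
```

A job step would call this with the parsed JSON response and abort the pipeline when it returns False.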

Understanding the merge subresource


Use the merge subresource to perform additional tests on conflicts, and then perform a merge operation.

This subresource takes the following parameters:

Request – http://serverURL/prweb/api/v1/branches/{id}/merge
Parameter – ID. The name of the branch that you want to merge. This parameter is required.
Response – The merge subresource returns a unique ID after a validation event occurs. During the merge,
the status is saved to an instance of the System-Queue-Merge class.

To verify the status of a merge, use the merges REST service, passing the ID returned in the response.
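In a script, the merge step reduces to issuing a POST against the merge subresource and capturing the returned ID. The following sketch builds the request with Python's standard library but does not send it; the server URL and branch name are placeholders, and authentication headers are omitted.

```python
import urllib.request


def build_merge_request(server_url: str, branch_id: str) -> urllib.request.Request:
    """Build a POST request for the merge subresource of the branches REST service.

    The request is constructed but not sent; a real pipeline would add
    credentials and call urllib.request.urlopen, then read the unique
    merge ID from the response body.
    """
    url = f"{server_url}/prweb/api/v1/branches/{branch_id}/merge"
    return urllib.request.Request(url, method="POST")
```

The ID parsed from the response is what you later pass to the merges REST service to check progress.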

You can also use the Queue Management landing page to view information about and remove merge requests
without needing to know the response ID. To open the landing page, click Dev Studio > System > Operations >
Queue Management.

You can also set the logging level for the pzMergeServicePostActionProcessing activity to INFO to log
informational messages. These messages can explain why exceptions are occurring and serve as a reference if
you are working with Pegasystems Global Customer Support. For more information about logging levels, see
Logging Level Settings tool.

Understanding the review subresource


Use the review subresource to create a branch review.

This subresource takes the following parameters:

Request – http://serverURL/prweb/api/v1/branches/{id}/review
Parameter – ID. The name of the branch for which you want to create a review. This parameter is required.
Request body – The email account of the user creating the review and the users who are reviewing the
branches. Use the following format:

{ "author": "<your_userid>", "description": "<description of the review>", "reviewers": [ { "ID": "<reviewer_userid>" } ] }

Response – The review subresource returns the ID of the branch review.
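Building the request body programmatically avoids hand-editing JSON in pipeline scripts. The sketch below follows the documented body format; the author and reviewer IDs are placeholders you would replace with real operator IDs.

```python
import json


def review_body(author: str, description: str, reviewers: list) -> str:
    """Serialize the request body for the review subresource.

    Follows the documented shape: an author, a description, and a list
    of reviewer entries, each keyed by "ID".
    """
    return json.dumps({
        "author": author,
        "description": description,
        "reviewers": [{"ID": reviewer_id} for reviewer_id in reviewers],
    })
```

The resulting string is POSTed to /prweb/api/v1/branches/{id}/review, and the response carries the ID of the new branch review.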

Understanding the merges REST service


Use the merges REST service to obtain the status of the merge that you created by using the merge subresource.

This service takes the following parameters:

Request – http://serverURL/prweb/api/v1/merges/{id}
Parameter – ID. The unique identifier that you obtained by running the merge subresource of the branches
REST service. This parameter is required.
Response – The merges REST service returns the status from the System-Queue-Merge instance.
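Because the merge runs asynchronously through a queue instance, a pipeline usually polls this endpoint until the merge leaves a pending state. The sketch below takes a fetch callable (for example, a function that issues the GET and extracts the status field) so it stays independent of any HTTP client; the status values used here are assumptions for illustration, not the documented vocabulary.

```python
import time


def wait_for_merge(fetch_status, pending=("Queued", "Processing"),
                   interval=0.0, max_polls=10):
    """Poll a status-fetching callable until the merge leaves a pending state.

    fetch_status is expected to return the current status string, e.g. by
    calling GET /prweb/api/v1/merges/{id} and reading the status from the
    System-Queue-Merge instance. The pending status names are placeholders.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status not in pending:
            return status
        time.sleep(interval)
    raise TimeoutError("merge did not complete within the polling budget")
```

Injecting the fetch function also makes the poller easy to test with a stub before wiring it to the live service.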
