IBM Z DevOps Guide
Edition 1.1.0 (December 2024)
© Copyright International Business Machines Corporation 2024.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Notices...................................................................................................................i
Chapter 1. Overview.............................................................................................. 1
Applying DevOps to IBM Z........................................................................................................................... 1
IBM Z DevOps Acceleration Program.....................................................................................................1
CI/CD for z/OS applications......................................................................................................................... 2
CI/CD for z/OS applications....................................................................................................................2
Integrated development environment...................................................................................................5
Source code management......................................................................................................................6
Build......................................................................................................................................................12
Artifact repository................................................................................................................................ 15
Deployment manager...........................................................................................................................16
Pipeline orchestrator............................................................................................................................16
Application audit compliance practices.............................................................................................. 54
Further reading..................................................................................................................................... 55
SCLM-to-Git Migration Tool................................................................................................................101
Manual migration............................................................................................................................... 102
Legal information...............................................................................................125
Trademarks.............................................................................................................................................. 125
Privacy policy considerations.................................................................................................................. 125
Notices
This information was developed for products and services offered in the US. This material might be
available from IBM® in other languages. However, you may be required to own a copy of the product or
product version in that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently available in
your area. Any reference to an IBM product, program, or service is not intended to state or imply that
only that IBM product, program, or service may be used. Any functionally equivalent product, program, or
service that does not infringe any IBM intellectual property right may be used instead. However, it is the
user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can
send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice,
and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to actual people or business enterprises is
entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform
for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
You can click on each component in the following list to learn more about it and see common technology
options:
• Integrated development environment (IDE): The IDE is what the developer uses to check out and edit their code, as well as to check it back into the version control system. Many modern editors have
features that enhance development capabilities, such as syntax highlighting, code completion, outline
view navigation, and variable lookups, as well as integrations such as debugging and unit testing.
• Source code management (SCM, Version control): The SCM is used to store and manage different
versions of source code files, as well as application configuration files, test cases, and more. This
is what enables the application development team to do parallel development. We recommend a
Git-based SCM. For more information about Git and why it is foundational to our recommendations, as
well as an explanation of the Git concepts, see the SCM documentation.
• Build: The build component takes care of understanding dependencies, and then compiling and linking
programs to produce the executable binaries such as load modules and DBRMs. When running this
component, you can also integrate automated steps for unit testing and code quality inspection
(although these are sometimes considered as separate components in the CI/CD pipeline). We
recommend that the build is handled by IBM Dependency Based Build (DBB), which has intelligent
build capabilities that enable you to perform different types of build to support various steps in your
workflow. Some examples of these build types include single-program user builds, full application
builds, and impact builds.
• Artifact repository: Once the build component has created the executable binaries, they are packaged
together and uploaded into the artifact repository, along with metadata to help trace those binaries
back to the source. This component is crucial for decoupling the source code management from the
runtime environments, enabling the key DevOps practice of "Build once, deploy many".
• Deployment manager: The deployment manager is the tool that rolls out the application packages.
When it is time to deploy the application, the deployment manager downloads the package from the
artifact repository and uploads the contents to the target libraries. If there are other steps to perform,
such as installation steps like CICS® NEWCOPY or PHASE-IN, or a bind step when Db2® is involved, the
deployment manager also handles those. Importantly, it also keeps track of the inventory of execution
environments so that you can know what each environment is running.
• Pipeline orchestrator: The pipeline orchestrator oversees all the automated processes in the pipeline.
This component integrates the steps from the different tools together and ensures they all run in the
correct order.
Although it might seem CI/CD requires developers to learn and work with a lot of different tools, they are
primarily just working with the IDE for code editing, the SCM for version control, and performing some
individual user builds. Once developers get to the point where they want to integrate their code changes
into their team's shared codebase, the pipeline is largely automated via the pipeline orchestrator. This
means that once the CI/CD pipeline is in place, if the developer has to interact with any of the automated
components at all, they would mostly just be checking a dashboard or status, performing any intentionally
manual approval steps, and/or verifying the results of the pipeline job.
Resources
This page contains reformatted excerpts from Packaging and Deployment Strategies in an Open and
Modern CI/CD Pipeline focusing on Mainframe Software Development.
Source code management
A source code management (SCM) tool manages and stores different versions of your application
configuration such as source code files, application-specific configuration data, test cases, and more.
It provides capabilities to isolate different development activities and enables parallel development.
In our described continuous integration/continuous delivery (CI/CD) implementation, we showcase Git as
the SCM when applying DevOps to IBM Z®.
Git basics
Git is a distributed "version control system" for source code. It provides many features to allow
developers to check in and check out code with a full history and audit trail for all changes.
Source is stored in repositories (also known as "repos") on hierarchical file systems on Linux®, macOS,
Windows, or z/OS UNIX System Services.
The team stores a primary copy of the repository on a service running Git on a server (see Common Git
provider options). Such services provide all the resilience required to safeguard the code and its history.
Once source code is moved into a repository on the server, that becomes the primary source of truth, so
existing processes to ensure the resilience of copies on z/OS are no longer required.
An application repo can be cloned from the team's chosen Git server (known as the "remote") to any
machine that has Git, including a developer's local computer using popular integrated development
environments (IDEs) such as IBM® Developer for z/OS (IDz) and Microsoft’s Visual Studio Code (VS Code).
By default, clones contain all the files and folders in the repository, as well as their complete version
histories. (Cloning provides many options to select what is copied and synchronized.)
All Git operations that transfer the data held in the repository (clone, push, fetch, and pull) use SSH
or HTTPS secure communications. Pros and cons of each protocol are discussed in "Git on the Server -
The Protocols".
Info: SSH on z/OS
z/OS UNIX System Services includes OpenSSH. z/OS OpenSSH provides the following z/OS
extensions:
• System Authorization Facility (SAF) key ring: z/OS OpenSSH can be configured to allow z/OS
OpenSSH keys to be stored in SAF key rings.
• Multilevel security: This is a security policy that allows the classification of data and users
based on a system of hierarchical security levels combined with a system of non-hierarchical
security categories.
• System Management Facility (SMF): z/OS OpenSSH can be configured to collect SMF Type
119 records for both the client and the server.
• Hardware Cryptographic Support: OpenSSH can be configured to use Integrated Cryptographic Service Facility (ICSF) callable services to implement the applicable SSH session ciphers and HMACs.
The developer can then create "branches" in the repository. Branches allow developers to make and
commit changes to any files in the repository in isolation from other developers working in other
branches, or for an individual developer to work on multiple work items that each have their own branch.
For each task the developer has (such as a bug fix or feature), the developer would generally do their
development work on a branch dedicated to that task. When they are ready to promote their changes,
they can create a "pull request", (also known as a "merge request") which is a request to integrate (or
"merge") those changes back into the team's common, shared branch of code.
With Git’s branching and merging features, changes can be performed in isolation and in parallel with
other developer changes. Git is typically hosted by service providers such as GitHub, GitLab, Bitbucket, or
Azure Repos. Git providers add valuable features on top of the base Git functionality, such as repository
hosting, data storage, and security.
In Git, all changes are committed (saved) in a repo using a commit hash (unique identifier) and a
descriptive comment. Most IDEs provide a Git history tool to navigate changes and drill down to line-by-
line details in Git diff reports. The following image of an Azure Repos example setup shows the Git history
on the right panel, and a Git diff report on the left.
As part of comprehensive integrity assurance, developers can cryptographically sign their commits.
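For example, with a GPG key configured for Git, commits can be signed and verified as follows (the key ID and message are illustrative):

git config user.signingkey 3AA5C34371567BD2              # tell Git which key to sign with
git commit -S -m "Correct rounding in rate calculation"  # -S signs this commit
git log --show-signature -1                              # display and verify the latest signature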
Git branching
A Git "branch" is a reference to all the files in a repo at a certain point in time, as well as their history.
A normal practice is to create multiple branches in a repo, each for a different purpose. In the standard
pattern (incorporated into our branching model for mainframe development) there will be a "main"
branch, which is shared by the development team. The team's repository administrator(s) will usually set
up protections for this branch, requiring approval for any change to be merged into it. The team might
also have additional shared branches for different purposes, depending on their branching strategy. The
repository administrator(s) can also set up branch protections for these branches, as well as any other
branch in the repository.
Info: Branches are not the same as deployment targets
Do not think of branches being aligned to deployment targets (such as test or production
environments). For more on this see No environment branches in our recommended branching
model.
Git merge
Feature branching allows developers to work on the same code, and work in parallel and in isolation. Git
merge is how all the code changes from one branch get integrated into another branch. Once developers
complete their feature development, they initiate a pull request asking to integrate their feature changes
into the team's shared branch of code.
The pull request process is where development teams can implement peer reviews, allowing team
leads or other developers to approve or reject changes. They can also set up other quality gates
such as automated testing and code scanning to run on the PR. Git will automatically perform merge
conflict detection to prevent the accidental overlaying of changes when the pull request is merged in.
Development teams often have a CI pipeline that is triggered to run upon pull request approval/merge for
the integration test phase.
Doing this kind of parallel development is complicated on legacy systems, especially with PDSs, because
developers have to figure out how to merge the code at the end, especially when working on the same
files. Additionally, legacy SCMs typically lock files that are being worked on. In contrast, Git branching
allows the developers to work on the files at the same time, in parallel.
In the Git example illustrated above, Dev1 and Dev2 agreed to work on different parts of the same
program, and they then each make their own pull request to integrate their respective changes back into
the team's shared branch of code when they are ready. Dev1 has done this before Dev2, so his changes
have been approved and merged in first. When Dev2 later makes her request to merge her code changes
into the team's shared branch of code, Git does a line-by-line check to make sure the changes proposed
in Dev2's pull request do not conflict with any of the changes in the shared branch of code (which now
include Dev1's changes). If any issues are found, Git will stop the merge and alert the developers of the
merge conflict. Git will also highlight the conflicting code so that the developers know where to look and
can resolve the conflict, most likely via another commit in Dev2's branch.
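As an illustrative sketch (the file and field names are hypothetical), the conflicting region that Git highlights looks like the following, and Dev2 resolves it by editing the file to keep the intended lines and committing the result:

<<<<<<< HEAD
           05  WS-RATE        PIC 9(3)V99.
=======
           05  WS-RATE        PIC 9(5)V99.
>>>>>>> feature/42-new-mortgage-calculation

git add cobol/epsmort.cbl                      # mark the conflict as resolved
git commit -m "Resolve WS-RATE merge conflict"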
Git tags
A Git tag references a specific, unique commit point in the repo. Tags are optional but are strongly
recommended and broadly used in modern development practices with Git.
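For example, a release candidate can be marked with an annotated tag and shared with the team (the tag name follows the naming conventions used later in this guide):

git tag -a rel-2.0.1 -m "Release candidate for release 2.0.1"   # annotated tag at the current commit
git push origin rel-2.0.1                                       # publish the tag to the remote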
Forking repositories
Repositories can also be forked. A fork is a more independent copy of the original repo created on the
remote Git service either in a different organization or under an individual's account. The original repo
from which a fork is created is commonly known as the upstream repository. A project can impose a
restriction to stop forks being created.
As an independent copy, it has its own branches (including main). Forks have an association with the
original repo, and pull requests can be made from forks to their originating repos.
Best practices
Sharing code
It is a common practice that mainframe applications share common code. For example, COBOL
copybooks are typically shared across applications that process similar data.
The following diagram illustrates how teams can define repos to securely share common code. In this
example, App Team 1 has common code that App Team 2 can clone and use in their build.
Another example (also illustrated in the following diagram) is that an enterprise-wide team can maintain
source that is common across many applications.
Branching conventions
• Follow the IBM-recommended Git branching model for mainframe development, which provides
guidance on branch naming conventions, branch management, and Git workflows to support various
steps in the software development lifecycle.
• Define and communicate the Git workflow being used by the team/organization.
• Commit related changes. A commit should be a wrapper for related changes.
• Write good (descriptive, concise) commit messages.
• Work with small incremental changes that can be merged, tested, and deployed in short sprint cycles.
• Communicate with peers when working on common code.
• After releasing a hotfix, merge it into the main branch for integration with ongoing work.
• Clean up short-living branches (such as features, hotfixes, and so on).
Resources
This page contains reformatted and updated excerpts from Git training for Mainframers.
Build
The Build component of a continuous integration/continuous delivery (CI/CD) pipeline converts the
source code into executable binaries. It supports multiple platforms and languages. In mainframe
environments, it includes understanding dependencies, compile, link-edit, and unit test. The build can
include the inspection of code quality to perform automated validation against a set of coding rules. In
some cases, code quality inspection could also be a component of its own in the pipeline.
While many of the steps in the DevOps flow for mainframe applications can be performed using the
same tooling used by other development teams, the build step in particular needs to remain on
z/OS®. Therefore, this step is primarily handled by IBM® Dependency Based Build (DBB). DBB has
intelligent build capabilities where it can not only compile and link z/OS programs to produce executable
binaries, but it can also perform different types of builds to support the various steps in an application
development workflow. This includes the ability to perform an "impact build", where DBB will only build
programs that have changed since the last successful build and the files impacted by those changes,
saving time and resources during the development process.
DBB is a set of APIs based on open-source Groovy and adapted to z/OS. This enables you to easily
incorporate your z/OS application builds into the same automated CI/CD pipeline used by other teams.
It is possible to use DBB as a basis to write your own build scripts, but we recommend starting with the
zAppBuild framework or DBB zBuilder to provide a template for your build, and then customizing it as
necessary for your enterprise and applications.
The zAppBuild framework helps facilitate the adoption of DBB APIs for your enterprise and applications.
Rather than writing your own Groovy scripts to interact with the DBB APIs, you can fill in properties to
define your build options for zAppBuild, and then let zAppBuild invoke DBB to perform your builds.
The DBB zBuilder is an integrated configuration-based build framework for building z/OS applications
with DBB that was introduced in DBB version 3.0.0 (October 2024). Build configuration is defined in YAML
files under the control of the build engineering team. It is an alternative to the zAppBuild framework.
DBB features
• Perform builds on z/OS and persist build results
• Persist metadata about the builds, which can then be used in subsequent automated CI/CD pipeline
steps, as well as informing future DBB builds
• Can be run from the command line interface, making it easy to integrate into an automated CI/CD
pipeline
zAppBuild features
• Framework template facilitates leveraging DBB APIs to build z/OS applications, letting you focus on
defining the build's properties separately from the logic to perform the build
• High-level enterprise-wide settings that can be set for all z/OS application builds
• Application-level settings for any necessary overrides in individual application builds
• Includes out-of-the-box support for the following languages:
– COBOL
– PL/I
– BMS and MFS
zAppBuild introduction
zAppBuild is a free, generic mainframe application build framework that customers can extend to meet
their DevOps needs. It is available under the Apache 2.0 license, and is a sample to get you started with
building Git-based source code on z/OS UNIX System Services (z/OS UNIX). It is made up of property files
to configure the build behavior, and Groovy scripts that invoke the DBB toolkit APIs.
Build properties can span across all applications (enterprise-level), one application (application-level),
or individual programs. Properties that cross all applications are managed by administrators and define
enterprise-wide settings such as the PDS name of the compiler, data set allocation attributes, and more.
Application- and program-level properties are typically managed within the application repository itself.
The zAppBuild framework is invoked either by a developer using the "User Build" capability in their
integrated development environment (IDE), or by an automated CI/CD pipeline. It supports different build
types.
The main script of zAppBuild, build.groovy, initializes the build environment, identifies what to build,
and invokes language scripts. This triggers the utilities and DBB APIs to then produce runtime artifacts.
The build process also creates logs and an artifact manifest (BuildReport.json) for deployment
processes coordinated by the deployment manager.
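As a sketch, a build engineer or pipeline might launch an impact build from z/OS UNIX as follows (the paths, application name, and high-level qualifier are illustrative):

# build only the programs changed since the last successful build, plus their impacts
$DBB_HOME/bin/groovyz /var/dbb/dbb-zappbuild/build.groovy \
    --workspace /u/build/workspace --application MortgageApp \
    --outDir /u/build/out --hlq BUILD.MORTAPP --impactBuild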
The following chart provides a high-level summary of the steps that zAppBuild performs during a build:
zAppBuild architecture
The zAppBuild framework is split into two parts. The core build framework, called dbb-zappbuild,
is a Git repository that contains the build scripts and stores enterprise-level settings. It resides in a
permanent location on the z/OS UNIX file system (in addition to the central Git repository). It is typically
owned and controlled by the central build team.
The other part of zAppBuild is the application-conf folder that resides within each application
repository to provide application-level settings to the central build framework. These settings are owned,
maintained, and updated by the application team.
application-conf overview
This folder is located within the application's repository, and defines application-level build properties.
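For example, the application.properties file within application-conf might contain entries like the following (the values shown are illustrative):

# folders in the repository that contain build-eligible source
applicationSrcDirs=MortgageApp/cobol,MortgageApp/copybook
# order in which the language scripts are invoked
buildOrder=BMS.groovy,Cobol.groovy,LinkEdit.groovy
# long-living branch used as the baseline for impact builds
mainBuildBranch=main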
Resources
• IBM documentation for DBB
• IBM Dependency Based Build Fundamentals course
This page contains reformatted excerpts from the following documents:
• DBB zAppBuild Introduction and Custom Version Maintenance Strategy
• Packaging and Deployment Strategies in an Open and Modern CI/CD Pipeline focusing on Mainframe
Software Development
Artifact repository
Once the build completes, the pipeline publishes and stores the build outputs as a package in the artifact
repository. This package contains any artifact that will need to be deployed, such as load modules,
DBRMs, DDL, and the configuration files for the subsystems. Importantly, the package also contains the
build artifacts' metadata and other necessary pieces of information that enable any changes to be traced
back to the version control system. Depending on the system, the package can be a WAR or EAR file, a Windows installer package, or another format. The artifact repository can also be used as the publishing
platform to store intermediate files needed in the build phase.
The artifact repository contains a complete history of packages, and therefore also provides access to
older versions. This feature is especially important in cases where a rollback or audit is required. The
artifact repository is meant to be the single point of truth for binaries, much in the same way that a SCM is
the single point of truth for source files.
It is expected that a package will be deployed to several execution environments, each of them being
used for different testing phases. Ultimately, some packages will be deployed to production. In this
arrangement, the artifact repository acts like a proxy for the deployment manager, which is responsible
for deploying the artifacts produced by the build system to one or more runtime environments.
The key mission and benefit of an artifact repository is to decouple source code management (SCM)
configurations from runtime environments. This supports the fundamental DevOps principle of "build
once, deploy many". Once you build and test a set of binaries to verify it, then that is the same set of
binaries that you will want to deploy to the production environment. By ensuring you can use the same set of executables across your deployment environments, from testing to production, you not only reduce the risk of build-time issues reaching your production environments undetected, but also make it much easier to determine whether a deployment problem is the result of a build-time issue or a runtime environment issue.
Resources
This page contains reformatted excerpts from Packaging and Deployment Strategies in an Open and
Modern CI/CD Pipeline focusing on Mainframe Software Development.
Deployment manager
The deployment manager is responsible for understanding the execution environments and maintaining an inventory of each environment's deployed content. It is used to roll out application packages. For
many runtimes, copying artifacts is not enough to actually make them executable. There are numerous
installation steps to perform. A good example of this would be a CICS® NEWCOPY/PHASE-IN, or, when
Db2® is involved, a bind against the database of the environment.
Common options
• UrbanCode® Deploy (UCD)
• Wazi Deploy (Python or Ansible®)
• Ansible z/OS® modules
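As a minimal sketch of the Ansible approach, a playbook using the ibm.ibm_zos_core collection might copy a load module into the target library and then request a CICS NEWCOPY (the host, data set, program, and region names are illustrative):

- name: Deploy application package content
  hosts: zos_host
  tasks:
    - name: Copy the load module into the target library
      ibm.ibm_zos_core.zos_copy:
        src: /u/deploy/package/EPSCMORT
        dest: PROD.APP.LOADLIB(EPSCMORT)
        remote_src: true
    - name: Refresh the program in the CICS region
      ibm.ibm_zos_core.zos_operator:
        cmd: "F CICSPROD,CEMT SET PROGRAM(EPSCMORT) NEWCOPY"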
Resources
This page contains reformatted excerpts from Packaging and Deployment Strategies in an Open and
Modern CI/CD Pipeline focusing on Mainframe Software Development.
Pipeline orchestrator
Also known as the continuous integration (CI) orchestrator, this is where automation happens. The
CI Orchestrator provides connectors to version control, build systems, and packaging and deployment.
Its goal is to remove manual and repetitive tasks as much as possible. It also drives the building of the
application package, includes automated unit tests, and publishes the results in an artifact repository to
make them available to the provisioning and deployment practices.
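For illustration only, a GitLab CI definition wiring these stages together might look like the following sketch, assuming a runner that can reach z/OS UNIX and helper scripts that wrap the packaging and deployment tools (all script names are hypothetical):

stages:
  - build
  - package
  - deploy

build:
  stage: build
  script:
    # run a DBB/zAppBuild impact build for the changed files
    - $DBB_HOME/bin/groovyz dbb-zappbuild/build.groovy --workspace $CI_PROJECT_DIR --application MortgageApp --outDir ./out --hlq BUILD.MORTAPP --impactBuild

package:
  stage: package
  script:
    # bundle the outputs listed in BuildReport.json and upload them to the artifact repository
    - ./scripts/package-build-outputs.sh ./out

deploy:
  stage: deploy
  script:
    # hand the package to the deployment manager for the test environment
    - ./scripts/deploy-to-test.sh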
Resources
This page contains reformatted excerpts from Packaging and Deployment Strategies in an Open and
Modern CI/CD Pipeline focusing on Mainframe Software Development.
Chapter 2. Getting started
Roles
Having a team with the right skills and mindset is critical to a successful DevOps transformation effort.
While the following roles each have their own specific skillsets and tasks, an individual can perform more
than one role if it makes sense for their team and organization. You can click on each role to learn about it.
Architect
The architect helps define the new software delivery process.
Generally, the architect will be someone with strong z/OS skills who understands the infrastructure and
current build processes. This deep background knowledge about the current z/OS infrastructure state and
mainframe application build processes is important for understanding how to translate those processes
into the more modern DevOps pipeline.
Build specialist
The build specialist develops and maintains the build scripts for the new pipeline.
This is a developer type of role that focuses on turning the source code into a deployable artifact,
so familiarity with z/OS build processes is required. The build specialist might adapt a non-mainframe
example of build scripting to z/OS.
• Background skills and knowledge:
– Mainframe build fundamentals (for example, JCL/REXX, understanding of compile/link/bind options,
and so on)
• Skills and concepts to learn:
– Git concepts
– IBM Dependency Based Build (DBB) architecture (for example, dependency management and build
results)
– Groovy scripting
• Tasks:
– Plan and perform migrations
– Develop and maintain the customized build framework
• Job positions that might fill this role:
– Build engineer
– z/OS build administrator
Pipeline specialist
The pipeline specialist assembles and administers the pipeline via the CI/CD orchestrator.
This is a developer type of role that focuses on building, scaling, and maintaining the CI/CD pipeline
structure. The pipeline specialist does not need to be as z/OS-aligned as the build specialist. Rather than
being concerned with building COBOL programs (or other z/OS languages), the pipeline specialist is more
concerned about integrating tools together. This role often already exists elsewhere in the enterprise.
Deployment specialist
The application deployment specialist implements the deployment solution.
This developer type of role may be part of the DevOps team (with the pipeline specialist), and
might already be using a deployment manager with other teams. It is helpful for them to have some
Middleware specialist
The middleware specialist role is an umbrella term that covers different technical roles that help install
and configure the tools for the CI/CD pipeline.
This role might be handled by more than one individual, as it can cover setup tasks on both Linux® and
mainframe environments, depending on the enterprise's needs.
• Background skills and knowledge:
– Background in managing or administering the requisite middleware system
• Skills and concepts to learn (if not already acquired):
– Initial install and configure steps for DevOps tooling
Migration specialist
The migration specialist is typically a transitional role that focuses on facilitating the migration from the
legacy development tools and processes to the modern CI/CD pipeline.
This role can either be handled by a selected team in the enterprise, or by a business partner.
• Background skills and knowledge:
– Mainframe data fundamentals
– Understanding of the legacy development system
• Skills and concepts to learn (if not already acquired):
– Git concepts
– IBM Dependency Based Build fundamentals (for example, DBB Migration Tool)
• Tasks:
– Help move data from legacy z/OS application development systems to Git
• Job positions that might fill this role:
– DevOps implementation architect
– Build engineer and DevOps team
Testing specialist
The testing specialist is a technical role that focuses on quality assurance in the software.
While testing in legacy development workflows is often manual and time consuming, the move to a
modernized DevOps toolchain allows the testing specialist to create tests that can be automatically run by
the developer, and/or as part of a CI/CD pipeline. The scope of these tests can range from individual unit
tests to larger-scale integration tests on dedicated testing platforms.
• Background skills and knowledge:
– Understanding of the z/OS application functionality and use cases
– Experience testing z/OS applications
• Skills and concepts to learn (if not already acquired):
– Git concepts
– IBM Dependency Based Build fundamentals (for example, running a DBB User Build)
– Modern z/OS testing tools such as zUnit, IBM Virtual Dev and Test for z/OS (ZVDT), and/or IBM Z
Virtual Test Platform (VTP)
• Tasks:
– Create and automate testing processes for the CI/CD pipeline (for example, unit and/or integration
testing)
• Job positions that might fill this role:
– Quality engineer
– Quality assurance team
VS Code
IBM Developer for z/OS on VS Code requires the Visual Studio Code (VS Code) IDE to be installed on the
local workstations of z/OS application developers, along with the IBM Z® Open Editor and IBM Z Open
Debug VS Code extensions. These extensions are shipped with advanced features as supported product
components of IBM Developer for z/OS Enterprise Edition (IDzEE). IBM Z® Open Editor is also available
with the base functionality as a free-of-charge VS Code extension.
The VS Code extensions communicate with the mainframe backend via the Zowe framework. The
mainframe connection can be established either via z/OSMF and SSH, or alternatively via IBM RSE API.
Additional details can be found in the IDz on VS Code documentation for setting up the integrations to
interact with z/OS.
Build
IBM Dependency Based Build (DBB) is the recommended build tool for z/OS applications. This is
complemented by the zAppBuild framework, which helps facilitate your build process using DBB APIs.
Many clients start by using zAppBuild and enhancing it to their needs, for example by adding new
language scripts, or by modifying the existing build processing logic.
Tip:
If you prefer a configuration-based build framework over a script-based build framework,
the DBB zBuilder is an alternative to zAppBuild, and is available in DBB version 3.0.0 and
above. Information about setting up and configuring the DBB zBuilder can be found in the DBB
documentation page Setting up and configuring zBuilder.
This section provides a set of instructions for how you can make zAppBuild available in your Git provider
and how to synchronize new features of zAppBuild into your customized fork.
Note: zAppBuild releases new versions through the main branch. New contributions are added first to the develop branch, and are then reviewed and merged into the main branch.
The IBM DBB samples repository contains additional utilities that enhance or integrate with the other
DBB build processes.
Who needs to install DBB, and where?
• System programmers install DBB toolkit on z/OS.
– Set up Db2® for z/OS or Db2 for LUW (Linux®, UNIX, and Windows) for the DBB metadata store.
– See IBM Documentation on Installing and configuring DBB.
• Developers using IDz as their IDE must add the DBB integration to their installation of IDz in order to
use DBB's user build feature.
Who needs to set up zAppBuild (and the IBM DBB samples repository), and where?
• The build engineer and/or DevOps team (in DAT roles: Build specialist and/or Pipeline specialist) should
set this up with the enterprise's Git provider.
c. In your terminal, enter the command for cloning the repository. (The following command uses the
Git repository URL, but the SSH path can also be used if you have SSH keys set up.):
cd <existing_repo>
git remote rename origin old-origin
git remote add origin <Your central Git repository>
git push -u origin --all
git push -u origin --tags
b. On the Git provider's webpage for your new central repository in the browser, you will find that
the repository is now populated with all of zAppBuild's files and history, just like on IBM's public
zAppBuild repository.
• The following screenshot shows an example of a populated central zAppBuild repository with
GitLab as the Git provider:
c. Verify that the new remote is available by issuing the command to list the remotes again: git
remote -v:
d. Fetch the latest information from the official repository, by executing a Git fetch for the official dbb-zappbuild repository:
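Assuming the official repository was added as a remote named zappbuild-official (as referenced in the next step):

git fetch zappbuild-official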
e. Make sure that your feature branch is checked out before attempting to merge the changes from zappbuild-official. To merge the changes into your branch update-zappbuild, run the following command:
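For example, assuming the official repository's default branch is main:

git checkout update-zappbuild        # ensure the feature branch is checked out
git merge zappbuild-official/main    # merge the latest official changes into it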
You might encounter merge conflicts. In the above case, the merge process could not automatically resolve utilities/ImpactUtilities.groovy.
Run the command git status to see which files changed:
Artifact repository
The artifact repository is often already in-place as part of the enterprise's non-mainframe CI/CD pipeline.
Who needs to set up the artifact repository, and where?
• Generally, the DevOps team (pipeline specialist) will work to set this up for z/OS application teams, as
well.
Deployment manager
Who needs to install the deployment manager, and where?
• Depending on the software selected, the deployment manager might require an agent on the z/OS
side, which can be set up by a system programmer (infrastructure team). (Alternatively, the pipeline
orchestrator could SSH into z/OS.)
Resources
This page contains reformatted excerpts from the following documents:
• DBB zAppBuild Introduction and Custom Version Maintenance Strategy
Starting simple
The trunk-based development approach with short-lived feature branches is a simple and structured
workflow to implement, integrate, and deliver changes with an early integration process flow using a
single long-living branch: main. Developers work in isolation in feature branches to implement changes to
the source code, and ideally test the changes in a specific environment. Each feature branch (sometimes
referred to as a "topic branch") is dedicated to a specific developer task such as a feature or bug fix.
A similar workflow is also documented by Microsoft without giving it a name.
The main branch is the point of reference for the entire history of the mainline changes to the code base,
and should be a protected branch. All changes should originate on a separate branch created to hold the work for that change.
Merging a branch
A branch holds all the commits for a change - be that a single commit for a one-liner or a sequence of
commits as the developer refined the change while making it ready for review and merging into main.
The request to merge a branch is made explicitly, but can be as formal or informal as needed by the
team. Protection of main can mean that only certain people can perform the merge, or that a review and
approval of the change is required before merging it, or both.
Scaling up
The use of branches for concurrently planned activities scales extremely well for busier teams.
Additionally, epic and release maintenance branches accommodate specific development workflows and
allow the model to scale even further. The latter two branches exist for the duration of the epic or release
maintenance and are short-living branches.
The implemented changes of the iteration are then delivered collectively as part of the next release. Each
development team decides how long an iteration is. We advocate for working towards smaller, quicker
release cycles, but this model can also be used with longer iterations. Due to business or technical
reasons, the merging of features into the main branch can also be delayed. Although this scenario is
a discouraged practice, the recommendation in this case is to group such features into a specific epic
branch, as described later.
This branching model uses Git tags to identify the various configurations/versions of the application, such
as a release candidate or the version of the application repository that is deployed to production.
Depending on the type of change, the development workflow can vary. In the standard scenario,
developers use the main branch to deliver changes for the next planned release, while the
release maintenance branches allow fixing of the current release running in the production runtime
environment(s). Using epic branches is optional for development teams, but is a grouping mechanism
for multiple features that should be built and tested together, thus allowing teams to increase the
concurrency of working on multiple, larger development initiatives of the application. The epic branch also
represents a way to manage the lifecycle of features that are not planned for the next planned release. In
this way, it is a vehicle to delay merging the set of features into the main branch for a later time.
No environment branches
Notice that main is the only long-running branch. In particular, there are no branches aligned to
environments that may be deployed to. For example, there are no prod, production, QA, or Test
branches.
This branching model exploits the ability to build a deployable package from any branch at any point in its
history, and those packages can be deployed to your environments as needed to verify the changes they
incorporate.
• Developers can build feature branches as and when they need to deploy to unit testing environments
before they consider a pull request for completed work.
• The build of a branch that is the subject of a pull request can be deployed to environments with more
extensive automated testing as part of evaluating whether the changes in the branch are good enough
to be merged.
• The build of a release candidate level of code on main can be deployed to the full range of test
environments to validate whether it meets the quality demanded of being deployed to production.
The build and deployment tools ensure clear traceability back to the point in the history of main (or a feature/epic branch before merging) at which a package was produced. The deployment manager installs the
package into the various testing environments and, upon successful completion of testing and sign-off,
into the production environment as a release. The deployment manager maintains the inventories of the
deployed packages for the various runtime environments.
Additional long-lived branches aligned to more traditional environments will each represent alternative
histories and give rise to possible ambiguity as sequences of commits that will need to be merged to
multiple branches rather than just to main.
The single consolidated history on main serializes the commits of merged feature or fix branches - and
then is punctuated with explicit release tags and/or release branches. Since a branch can be easily
created from any previous commit, even if a release candidate build needs a specific fix, this can be
achieved whenever it is needed.
Naming conventions
Consistent branch naming conventions help indicate the context for the work that is performed.
Throughout this document, the following naming patterns are used:
• main: The only long-living branch, and the branch from which every release is initially derived
• release/rel-2.0.1: The release maintenance branch for an example release named rel-2.0.1
• epic/ai-fraud-detection: An epic branch where "ai-fraud-detection" describes the initiative context (in this example, an initiative to adopt AI technology for fraud detection)
Feature branches also need to relate back to the change request (or issue) from the planning phase and
their context. Some examples are shown in the following list:
• feature/42-new-mortgage-calculation for a planned feature for the next planned release.
• hotfix/rel-2.0.1/52-fix-mortgage-calculation for a fix of the current production version
that is running the rel-2.0.1 release.
• feature/ai-fraud-detection/54-introduce-ai-model-to-mortgage-calculation for a
contribution to the development initiative for adopting AI technology for fraud detection.
Integration branches
Specific branches, such as main, epic, and release branches can be seen as integration branches,
because their purpose is to integrate changes from other branches (typically feature branches). Mechanisms like pull requests are a convenient way to drive the integration of changes into a shared branch of code, as they guide developers through a streamlined workflow. The number of integration
branches required for your development process depends on the needs of the application team. However,
while the cost of creating new branches is low, keeping them up-to-date, for instance by integrating
release bugfixes from the stabilization phase into concurrent epic branches, can be expensive.
Application teams who want to embrace an agile development methodology and sequentially deliver new releases with limited parallel development initiatives can use the main branch and,
optionally, the release maintenance branch as integration branches to implement the next planned
release and potential bug fixes. The following diagram illustrates a branching model for a Git-based
development process with sequential release deliveries.
A common, recommended practice is to squash the different commits created on the feature branch
into a single new commit when merging, which keeps the Git history from becoming cluttered with
intermediate work for the feature. This also helps to maintain a tidy history on the main branch with only
the important commits.
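Most Git providers expose this as a "Squash and merge" option on the pull request itself; the equivalent command-line flow is sketched below (the branch name and message are illustrative):

git checkout main
git merge --squash feature/42-new-mortgage-calculation   # stage the feature's changes as one change set
git commit -m "Add new mortgage calculation (#42)"       # record a single commit on main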
When the work items implemented on the epic branch are planned and ready to be delivered as part of
the next planned release, the development team merges the epic branch into the main branch.
Epic branches can be used to compose various styles of development processes. The documentation for
Working practice variations provides additional samples.
Learn more
This page describes our recommended Git branching model and workflows for mainframe development.
This model is intended to be used as a template, and can be adjusted, scaled up, or scaled down
according to the needs of the development team. Additional variations for the branching strategies and
workflows can be found in Working practice variations.
For recommendations on designing and implementing the workflows described in this branching model,
please refer to Implementing a pipeline for the branching model.
Developers implement their changes by committing to short-living feature branches (visualized in yellow),
and integrate those via pull requests into the long-living main branch (visualized in red), which is
configured to be a protected branch.
At a high level, the development team works through the following tasks:
1. New work items are managed in the backlog. The team decides which work items will be
implemented in the next iteration. Each application team can decide on the duration of the iteration
(which can also be seen as the development cycle). In the above diagram, three work items
3. To start making the necessary modifications for their development task, developers create a copy of
the Git repository on their local workstations through Git's clone operation. If they already have a
local clone of the repository, they can simply update their local clone with the latest changes from
the central Git repository by fetching or pulling updates into their local clone. This process makes
the feature branch available for the developers to work with on their local workstation. They can
then open their local clone of the repository in their integrated development environment (IDE), and
switch to the feature branch to make their code changes.
4. Developers use the Dependency Based Build (DBB) User Build facility of their IDE to validate their
code changes before committing the changes to their feature branch and pushing the feature branch
with their updates to the central Git repository. (Tip: Feature branches created locally can also be
pushed to the central Git server).
Tip:
This branching model is also known as a continuous integration model to reduce merge
conflicts. While developing on the feature branch, a common practice is for developers to
regularly sync their feature branch with the main branch by merging the latest changes
from the main branch into their feature branch. This ensures that developers are operating
based on a recent state of main, and helps to identify any potential merge conflicts so that
they can resolve them in their feature branch.
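In command-line terms, this regular sync might look like the following while the feature branch is checked out (the remote name origin is assumed):

git fetch origin        # pick up the latest commits from the central repository
git merge origin/main   # fold the current state of main into the feature branch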
5. Developers test their changes before requesting to integrate them into the shared codebase. For
example, they can test the build outputs of the User Build step. For a more integrated experience,
6. When developers feel their code changes are ready to be integrated back into the shared main
branch, they create a pull request asking to integrate the changes from their feature branch into the
main branch. The pull request process provides the capability to add peer review and approval steps
before allowing the changes to be merged. As a basic best practice, the changes must be buildable.
If the pull request is associated with a feature branch pipeline, this pipeline can also run automated
builds of the code in the pull request along with tests and code quality scans.
7. Once the pull request is merged into the main branch, the next execution of the Basic Build Pipeline
will build all the changes (and their impacts) of the iteration based on the main branch.
The pipeline can optionally include a stage to deploy the built artifacts (load modules, DBRMs, and so
on) into a shared test environment, as highlighted by the blue DEV-TEST icon in the above diagram.
In this DEV-TEST environment, the development team can validate their combined changes. This first
test environment helps support a shift-left testing strategy by providing a sandbox with the necessary
setup and materials for developers to test their changes early. The installation happens through
the packaging and deployment process of a preliminary package that cannot be installed to higher
environments (because it is compiled with test options), or alternatively through a simplified script
solution performing a copy operation. In the latter case, no inventory or deployment history of the DEV-TEST system exists.
8. In the example scenario for this workflow, the development team decides after implementing Feature
1 and Feature 2 to progress further in the delivery process and build a release candidate package.
Although not depicted in the above diagram, this point in main's history can be tagged to identify it as
a release candidate.
9. The release candidate package is installed in the various test stages and takes a predefined route.
The process can be assisted by the pipeline orchestrator itself, or the development team can use the
deployment manager. In the event of a defect being found in the new code of the release candidate
package, the developer creates a feature branch from the main branch, corrects the issue, and
merges it back into the main branch (while still following the normal pull request process). It is
expected that the new release candidate package with the fix is required to pass all the quality gates
and to be tested again.
10. In this sample walkthrough of an iteration, the development of the third work item (Feature 3) is
started later. The same steps as above apply for the developer of this work item. After merging the
changes back into the main branch, the team uses the Basic Build Pipeline to validate the changes
in the DEV-TEST environment. To create a release candidate package, they make use of the Release
Pipeline. This package (Package RC2 in the following diagram) now includes all the changes delivered
for this iteration -- Feature 1, Feature 2, and Feature 3.
3. The developers fetch the feature branch from the central Git repository into their local clone of the
repository and switch to that branch to start making the necessary modifications. They use the user
build facility of their IDE to vet out any syntax issues. They can use a feature branch pipeline to build
the changed and impacted files. Optionally, the developer can prepare a preliminary package, which
can be used for validating the fix in a controlled test environment.
4. The developer initiates the pull request process, which provides the ability to add peer review and
approval steps before allowing the changes to be merged into the release/rel-2.1.0 release
maintenance branch.
5. A Basic Build Pipeline for the release maintenance branch will build all the changes (and their
impacts).
6. The developer requests a Release Pipeline for the release/rel-2.1.0 branch that builds the
changes (and their impacts), and that includes the packaging process to create the fix package for
the production runtime. The developer will test the package in the applicable test environments, as
shown in the following diagram.
7. After collecting the necessary approvals, the fix package can be deployed to the production
environment. To indicate the new state of the production runtime, the developer creates a Git tag
(2.1.1 in this example) for the commit that was used to create the fix package. This tag indicates the
currently-deployed version of the application.
8. Finally, the developer is responsible for starting the pull request process to merge the changes from
the release/rel-2.1.0 branch back to the main branch to also include the fix into the next release.
9. The release/rel-2.1.0 branch is retained in case another fix is needed for the active release. The
release maintenance branch becomes obsolete when the next planned release (whose starting point is
represented by a more recent commit on the main branch) is deployed to production. In this event, the
new commit point on the main branch becomes the baseline for a new release maintenance branch.
4. The developer initiates the pull request process, which provides the ability to add peer review and
approval steps before allowing the changes to be merged into the epic branch.
5. A Basic Build Pipeline for the epic branch will build all the merged features (both the changes and their
impacts) from the point where the epic branch was branched off.
6. It is important that the team frequently incorporates updates that have been implemented for the next
release and/or released to production via the default development workflow (with the main branch)
into the epic branch to prevent the configurations from diverging too much and making the eventual
merge of the epic branch into main difficult. A common practice is to integrate changes from main
into the epic branch at least after each completion of a release via the default workflow, in order to
merge in the latest stable version updates. More frequent integrations may lead to pulling intermediate
versions of features that might not be fully implemented from a business perspective; however, this
should not deter developers since the main branch should always be in a buildable state.
7. When the development team feels that they are ready to prototype the changes for the initiative in
the initiative's test environment, they request a Release Pipeline for the epic branch that builds the
changes (and their impacts) and includes the packaging process to create a preliminary package.
This preliminary package can then be installed into the initiative's test environment (for example, the
EPIC-DEV-TEST environment). The team will test the package in the assigned test environments for
this initiative, as shown in the following diagram.
8. Once the team is satisfied with their changes for the development initiative, they plan to integrate the
changes of the epic branch into the main branch using the pull request process. This happens when
What is an audit?
"An Information Technology audit is the examination and evaluation of an organization's information
technology infrastructure, applications, data use and management, policies, procedures, and operational
processes against recognized standards or established policies. Audits evaluate if the controls to protect
information technology assets ensure integrity and are aligned with organizational goals and objectives."
— Definition provided by Harvard University
Further reading
While this page highlights key aspects of software development audit requirements, there are many
more topics to consider. DevOps tooling vendors may also have their own specific documentation on
implementing compliance using their products. The following examples are vendor-specific references
with further details:
• Azure® DevOps and RBAC
• GitLab® administration and compliance
Tutorials
IBM Z Systems software trials (IBM Z Trial)
If you are new to DevOps for z/OS® applications, you might want to explore the workflow and tooling
without having to first install and configure an entire technology stack on your own environment. IBM
Z® Systems software trials (also known as IBM Z Trial) allow you to try out a variety of IBM Z software
experiences in a provisioned environment for free, meaning you can get right to learning how to use these
tools and technologies. The following IBM Z Trial experiences are particularly relevant to DevOps and CI/CD
pipelines for z/OS applications:
• IBM® Developer for z/OS on VS Code: Get hands-on with the end-to-end development workflow and
practices for developing and maintaining mainframe applications in a modern DevOps environment. The
products showcased in this IBM Z Trial include IBM Developer for z/OS on VS Code, IBM Z Open Debug,
GitLab, IBM Dependency Based Build (DBB), and IBM Wazi Deploy.
• IBM Application Delivery Foundation for z/OS®: Explore the range of features in the IBM Application
Delivery Foundation for z/OS (ADFz) suite of integrated tooling that can help you analyze, understand,
debug, and modify your COBOL programs.
– Further reading: ADFz Resources contains links to standalone enablement resources for ADFz,
IDz, and IBM Z DevOps including videos, blog links, new release announcements, PDFs, and other
deep-dive learning content.
Courses
The DevOps Acceleration Team (DAT) offers free courses that are beneficial to learners who want to
increase their DevOps skills in a holistic and engaging manner. The available courses are a mix of self-
paced and remote instructor-led courses, and cover a variety of topics for different roles. An IBM®-issued
Credly badge is awarded to the learner upon successful completion of each course.
• Self-paced courses: These courses can be taken at your own pace, whenever it is convenient for you.
• Remote instructor-led courses: Instruction occurs with a remote class and instructor over web
conference on a set schedule, offering interaction opportunities.
Use an intelligent build tool to compile and link your z/OS applications
• IBM Dependency Based Build Fundamentals: This self-paced course provides the audience with an
introduction to building mainframe applications in a DevOps pipeline with IBM Dependency Based Build
(DBB).
Related role(s): Build specialist
docs/
cbsa/
|- application-conf/
|- api/
|  |- src/
|  |- test/
|- zos/
|  |- src/
|  |  |- cobol/
|  |  |- copybooks/
|  |  |- copybooks_gen/
|  |  |- interfaces/ (containing services to other mainframe applications)
|  |- test/ (e.g. Mainframe Unit tests)
|- test/ (e.g. Galasa tests)
|- java/
|  |- src/
|  |  |- com/acme/cbsa/packages
|  |- test/ (JUnit)
Each component folder contains src and test subfolders when applicable, to segregate the source code
from the test artifacts. For the mainframe artifacts, the most common layout is to group by artifact type,
such as by programming language. Inside the zos/src folder, the structure indicates the different
purposes of the application's artifacts. For instance, the interfaces folder contains include files owned
by the application that can be referenced by other applications.
For the pipeline setup, the clone and checkout phase can be enhanced to retrieve external
dependencies from other Git repositories. These steps can also be configured to only retrieve the shared
include files that reside in the interfaces folder. Additionally, standardized dependency resolution rules
can be applied in the build framework configuration.
Having a standardized layout across applications' repositories makes it easier to take advantage of
features that Git providers typically offer. One such feature is policy-based control over who must
review and approve changes (known as CODEOWNERS or branch policies). Such a mechanism is easier
to set up when it is based on the repository's structure.
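For example, a minimal sketch of a CODEOWNERS file based on the layout above, assuming GitHub- or
GitLab-style syntax; the team handles are hypothetical:
# Mainframe sources are reviewed by the mainframe team
cbsa/zos/                   @acme/mainframe-developers
# Shared interfaces additionally require approval from the interface owners
cbsa/zos/src/interfaces/    @acme/interface-owners
# Java components are reviewed by the Java team
cbsa/java/                  @acme/java-developers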
To facilitate the navigation in the codebase, it is possible to further group source code by business
functionality or by architectural composition of the application. In the following sample layout, the
mainframe component, zos, is split into core account management functions (such as create, update,
and delete), payment functions (to handle payments between accounts), and shared functions.
docs/
cbsa/
|- application-conf/
|- api/
|  |- src/
|  |- test/
|- zos/
|  |- src/
Combining various application components into a single repository allows changes to be processed by
a single pipeline with a streamlined build, packaging, and deployment process as part of a cohesive
planning and release cadence. However, if application components are loosely coupled and/or even
maintained by different teams, it often makes more sense to maintain them in separate repositories to
allow for independent planning and release cycles. The following section provides guidance on how to
manage dependencies between separate repositories.
App size     File count   Average lines per program   Size (MB) at 80 bytes per line   Size at 50% compression (MB)
small        1,500        1,000                       114                              57
med          5,000        1,500                       572                              286
large        50,000       5,000                       19,073                           9,537
very large   140,000      5,000                       53,406                           26,703
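The sizes in this table follow directly from multiplying the file count by the average lines per program and
by 80 bytes per line; for example, for the "med" row:
# 5,000 files x 1,500 lines x 80 bytes per line, expressed in MB
echo $(( 5000 * 1500 * 80 / 1024 / 1024 ))    # prints 572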
Total size is half the story. Managing a Git repository's history also plays an important part in maintaining
optimum performance.
1 See the "Using subprograms" chapter of the Enterprise COBOL for z/OS documentation library
programming guide.
It is very common to define multiple copybooks for programs in order to isolate data structures and reuse
them in other areas of an application component. Using copybooks allows more modularity at the source
level and facilitates dealing with private and shared data structures, or even private or shared functions.
As applications communicate, their implementation consumes the public interfaces of the applications
with which they interact. This concept of a public interface is common in Java programs and in the way
communication between applications is defined. This principle can also be applied to existing COBOL and
PL/I programs to help explain the structure required for a modern SCM, and is illustrated in the following
diagram, with the applications' usage of other applications' interfaces indicated in red.
Resources
This page contains reformatted excerpts from Develop Mainframe Software with Open Source Source code
managers and IBM Dependency Based Build.
User build
In an early development phase, developers need the ability to build the single program they are working
on, as well as the unit test programs being created.
The following integrated development environment (IDE) tools provide an option that enables the
developer to compile a selected program quickly and easily, without the need to commit or push the
change to the repository, while still using the same build options defined for the pipeline build. The
purpose is to build fast for personal testing scenarios:
• IBM Developer for z/OS® (IDz)
• IBM Z Open Editor & IBM Developer for z/OS on VS Code
Additional information about performing a user build can be found in the documentation for IBM
Dependency Based Build and for the IDEs listed above.
Pipeline build
A pipeline build is generally a build of changes in one or more commits on a branch in the remote
repo. It can be triggered automatically (for example, when developers push their feature branch to the
remote repo, or when commits are merged to main), or it can be triggered manually (such as for release
candidates). It produces the official binaries: outputs that can be packaged and deployed to different
environments, including production. A clear definition of what went into each build, together with the
official build outputs, ensures that audit records exist.
Full build
A full build compiles the full defined configuration. The full list of source files is provided as input from
the pipeline to the build scripts. One strategy is to allow the specification of the build type in the pipeline
orchestrator, as demonstrated in the following screenshot of a Jenkins sample pipeline. The build script
would then need to handle this input as the build type.
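The following is a sketch of such a pipeline step, assuming zAppBuild as the build framework; the
variable names are hypothetical, and the option names should be verified against your build framework's
documentation:
# Map the orchestrator's build-type parameter to the matching zAppBuild option
if [ "$BUILD_TYPE" = "full" ]; then
    BUILD_OPTION="--fullBuild"
else
    BUILD_OPTION="--impactBuild"
fi
$DBB_HOME/bin/groovyz dbb-zappbuild/build.groovy \
    --workspace $WORKSPACE --application MyApp \
    --outDir $WORKSPACE/out --hlq USER1.DBB.BUILD $BUILD_OPTION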
Resources
This page contains reformatted excerpts from Develop Mainframe Software with Open Source Source code
managers and IBM Dependency Based Build.
The package is represented in an archive format such as .tar (common in the UNIX world). This format is
consistent with non-mainframe applications, where teams usually work with full application packages in
archives such as a WAR or a JAR file.
In all cases, the package consists of two items:
• The actual binaries and files produced by the build manager
• A manifest describing the package's contents (that is, metadata)
For mainframe applications, a package will contain executables required to run your application, such
as program objects, DBRM, JCL, and control cards – as well as a manifest file. An example of package
contents in a typical mainframe application package is shown in the following image.
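As an illustrative sketch only (the folder and file names are hypothetical, and the actual layout depends
on your packaging tooling, for example Wazi Deploy or UrbanCode Deploy), such a package might be
assembled and inspected as follows:
# Assemble the build outputs and the manifest into a package archive
tar -cf mortgage-rc2.tar load/ dbrm/ jcl/ manifest.yml
# List the archive contents for verification
tar -tf mortgage-rc2.tar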
Resources
This page contains reformatted excerpts from Packaging and Deployment Strategies in an Open and
Modern CI/CD Pipeline focusing on Mainframe Software Development.
To be correctly displayed with the IBM-1047 code page, the contents of this file must be transformed.
Please note how the hexadecimal codes for the à and the ! characters changed, respectively from x'7C'
to x'44' and from x'4F' to x'5A':
• Transforming to the IBM-1047 code page:
It is important to understand that the code page plays a determining role not only when displaying a file,
but also when editing it. To ensure the content of a file is consistent when used by different people, the
code page used for editing and displaying must be the same for all users. If Alice edits a file with the
IBM-1147 code page and introduces characters (like accents) specific to that code page, then Bob will
need to use the IBM-1147 code page to display the content of this file. Otherwise, if Bob uses the
IBM-1047 code page for instance, he may experience the situation depicted earlier, where accents are
not displayed correctly.
To be correctly displayed with the IBM-1047 code page, the contents of this file must be transformed.
The hexadecimal codes for the [ and the ] characters must be changed from x'BA' and x'BB' to x'AD' and
x'BD', respectively:
• Transforming to the IBM-1047 code page:
Again, it is very important that anyone and everyone who displays or edits the file uses a consistent code
page. This can sometimes be a challenge, as the code page to be used is generally specified in the 3270
Emulator (TN3270) client session setup. Another challenge is trying to determine the original encoding
used to create the file.
To summarize, the binary content of a file must be transformed to ensure consistency when it is displayed
with another code page. This process is known as code page translation and is key when migrating
your source from the z/OS platform to a non-EBCDIC platform that uses a different code page (today,
most likely UTF-8).
Vocabulary
In this document, we will interchangeably use the coded character set ID (CCSID), its equivalent
name, and the code page to designate an encoding. Although not strictly equivalent, they are all often
interchanged for the sake of convenience. For instance, the IBM-1047 code page is equivalent to the
1047 coded character set (CCSID) and is usually named IBM-1047. The code page for IBM-037 is
equivalent to coded character set (CCSID) 037 and is usually named IBM-037. The CCSID for UTF-8 is
1208 and is linked with many code pages.
To simplify, the code pages will be designated by their common name, for instance IBM-037, IBM-1047,
IBM-1147, and UTF-8.
For reference, a list of code pages is available in the Personal Communications documentation.
This transformation could even be automated (albeit cautiously) through scripting. Other use cases for
these characters should be analyzed carefully, and developers should be encouraged to write their source
code in a way that allows for code page conversion.
Fortunately, IBM® Dependency Based Build (DBB) provides a feature in its migration script to detect these
special characters and helps manage their code page conversion or their transfer as binary.
In any case, the decision about using binary-tagged files in Git, refactoring these files to transform the
non-printable and non-roundtripable characters into their hex values, or not changing the content of
the files should be taken prior to performing the final migration from datasets to Git repositories. If the
migration is started with files that contain either non-printable or non-roundtripable characters, there is a
high risk that files cannot be edited using other editors. Once the files are migrated, it is often very difficult
to resolve the situation after the fact, as there can be information lost during the code page conversion.
In that situation, the best option is to restart the migration from datasets, assuming the original members
are still available until the migration is validated.
# line endings
* text=auto eol=lf
# file encodings
*.cpy zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.cbl zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.bms zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.pli zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.mfs zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.bnd zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.lnk zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.txt zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.groovy zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.sh zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.properties zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.asm zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.jcl zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.mac zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
*.json zos-working-tree-encoding=utf-8 git-encoding=utf-8
The zos-working-tree-encoding parameter specifies the code page used for encoding files on z/OS
UNIX and in PDSs. It must be consistent with file tags for the files that are under the control of Git. The
git-encoding parameter specifies the encoding for files stored in Git outside of the z/OS platform.
The typical value for this parameter is UTF-8.
This file is used by Git for z/OS to understand which conversion process must be performed when files are
added to Git and when files are transferred between a Git repository and the z/OS platform. This file plays
an important role in the migration process of source files from PDSs to Git repositories, and its content
must be thoroughly planned.
An example for this file can be found in the dbb-zappbuild repository. This sample contains the major
definitions that are typically found in the context of a migration project.
File tagging
An additional consideration is the tagging of files to help determine the nature of their content. By
default, files are not tagged on z/OS UNIX, and their content is assumed by most z/OS utilities to be
encoded in IBM-1047, unless stated otherwise. If the original content of those files was created with a
different code page, some characters may not be rendered or read by programs properly.
A required step to ensure that files are correctly read and processed is to tag them. The tagging of
files on z/OS UNIX is controlled by the chtag command. With chtag, you can print the current tagging
information for a file (with the -p option; the ls -T command also displays the tags of listed files) or
change this tagging (with the -t/-c/-b/-m options). It is important to understand that this tagging information is then
used during the migration process into a remote repository by Git for z/OS, which uses that information to
correctly convert the file to the standard Git encoding of UTF-8. Having the correct tagging set for files on
z/OS UNIX is a major step in the migration process to ensure a successful conversion.
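For example, a minimal sketch of tagging a COBOL source file as IBM-1047 text and verifying the result
(the file name is illustrative):
chtag -tc IBM-1047 epsmlist.cbl    # tag the file as IBM-1047 text
chtag -p epsmlist.cbl              # print the tagging information to verify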
The following example shows the output of the ls -alT command for a folder that contains COBOL
source code:
ls -alT
total 448
drwxr-xr-x 2 USER1 OMVS 8192 Feb 22 14:47 .
drwxr-xr-x 7 USER1 OMVS 8192 Jan 12 14:06 ..
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 8930 Feb 22 13:04 epscmort.cbl
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 132337 Jan 12 14:06 epscsmrd.cbl
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 7919 Feb 22 13:04 epsmlist.cbl
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 5854 Jan 12 14:06 epsmpmt.cbl
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 6882 Jan 12 14:06 epsnbrvl.cbl
Automatic conversion
Although not directly involved in the migration process, the Enhanced ASCII feature introduces an
in-flight automatic conversion for files stored in z/OS UNIX. This automatic conversion is mainly
controlled by the _BPXK_AUTOCVT environment variable or by the AUTOCVT parameter defined in a
BPXPRMxx PARMLIB member. By default, programs on z/OS UNIX are operating in EBCDIC. When
the _BPXK_AUTOCVT parameter is activated to either ON or ALL, along with the correct tagging of
files, programs executing in z/OS UNIX can transparently and seamlessly work with ASCII files without
converting them to EBCDIC.
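For instance, a minimal sketch of enabling the automatic conversion for the current z/OS UNIX shell
session (system-wide activation would instead use the AUTOCVT parameter in BPXPRMxx):
# Enable automatic conversion of tagged files for this shell session
export _BPXK_AUTOCVT=ON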
During the migration process to Git, the .gitattributes file is used to describe the conversion format
for all the files under the control of Git. To avoid any manual tasks, it is recommended to enable the
automatic conversion for any thread working with Git on z/OS and the .gitattributes file. This file is
discussed in further detail in the section Defining the code page of files in Git.
Other options can interfere with the automatic conversion process. Without being exhaustive, this list
provides some parameters which can also influence the global behavior of tools ported to z/OS, including
Git for z/OS:
These parameters can affect the overall behavior of the z/OS UNIX environment, so they should
be configured with caution. Detailing the impacts of these parameters is outside the scope of this
document. For more information on how they can affect your configuration, please reference the above
documentation. Additionally, the recommended values for these parameters are described in the DBB
configuration documentation page.
Summary
This document highlighted some pitfalls to avoid when migrating source code from z/OS PDSs to Git:
correctly determining the original code page used when editing and reading source code members in
z/OS is a key activity to ensure a smooth migration of these elements to Git. The second aspect is about
managing the specific EBCDIC characters which are not easily converted to their UTF-8 counterparts. For
these specific characters, known as non-printable and non-roundtripable characters, a decision must be
taken to either refactor the source code to eliminate those characters, or transfer the files as binary. Both
options have drawbacks that should be evaluated prior to the final migration to Git, as there is no easy
way back.
The DBB Migration Tool shipped with IBM Dependency Based Build helps perform this migration activity
by automating the detection of non-printable and non-roundtripable characters, copying the files to z/OS
UNIX, tagging them on z/OS UNIX, and generating a .gitattributes file. You can learn more about this
utility in DBB Migration Tool.
Resources
This page contains reformatted excerpts from Managing the code page conversion when migrating z/OS
source files to Git.
Example setup
To illustrate the scenarios, the following sample PDSs were constructed to highlight some specific
migration situations that may be encountered and how to mitigate potential issues:
• MIGRATE.TECHDOC.SOURCE
• MIGRATE.TECHDOC.COPYBOOK
MIGRATE.TECHDOC.SOURCE
Content of the MIGRATE.TECHDOC.SOURCE dataset:
Member    Description
IBM037    Member that has been created using the code page of IBM-037.
IBM1047   Member that has been created using the code page of IBM-1047.
Example member content (shown here in the IBM-037 code page):
void main(int argc, char *argv[])
MIGRATE.TECHDOC.COPYBOOK
Content of the MIGRATE.TECHDOC.COPYBOOK dataset:
Member    Description
NROUND    Member that contains non-roundtripable characters.
NPRINT    Member that contains non-printable characters.
Migration scenarios
Migration using the default settings
In this scenario, we will migrate all the source members in the MIGRATE.TECHDOC.SOURCE PDS
using the default settings into a local z/OS UNIX Git repository under /u/user1/Migration. This is the
simplest way to invoke the DBB Migration Tool and, in most cases, it is sufficient.
Note that the DBB Migration Tool uses a default encoding of IBM-1047.
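A minimal sketch of such an invocation, following the mapping rule syntax used elsewhere on this page
(options should be verified against the DBB Migration Tool documentation):
$DBB_HOME/migration/bin/migrate.sh -r /u/user1/Migration -m
MappingRule[hlq:MIGRATE.TECHDOC,extension:SRC,toLower:true] "SOURCE"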
An examination of the files on the local z/OS UNIX Git repository will reveal that the files were copied and
were tagged with the default code page of IBM-1047.
ls -alT /u/user1/Migration/source
total 64
drwxr-xr-x 2 USER1 OMVS 8192 May 4 12:33 .
drwxr-xr-x 4 USER1 OMVS 8192 May 4 12:33 ..
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 61 May 4 12:33 ibm037.src
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 61 May 4 12:33 ibm1047.src
Additionally, the .gitattributes file was created (or updated) with the correct encoding mappings.
All source artifacts, except those tagged as binary (to be discussed later), will be stored in the remote
repository using the UTF-8 code page (as defined by the git-encoding=utf-8 parameter), whereas
any artifacts that are copied from the remote repository to z/OS will be translated to the IBM-1047 code
page.
cat /u/user1/Migration/.gitattributes
source/*.src zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
At this point, Git actions such as add, commit or push can be performed on the migrated source artifacts
to introduce them to the remote repository.
git add .
git commit -m "Simple Migration Example"
[main 6436b92] Simple Migration Example
3 files changed, 5 insertions(+)
create mode 100644 .gitattributes
create mode 100644 source/ibm037.src
create mode 100644 source/ibm1047.src
git push
Counting objects: 6, done.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 634 bytes | 211.00 KiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.ibm.com:user1/Migration.git
3d2962a..6436b92 main -> main
Once the Git push command has completed to the remote repository on your Git server, the resulting
files should be translated into the correct UTF-8 code page.
$DBB_HOME/migration/bin/migrate.sh -r /u/user1/Migration -m
MappingRule[hlq:MIGRATE.TECHDOC,extension:SRC,toLower:true,pdsEncoding:IBM-037] "SOURCE(IBM037)"
Note that the DBB Migration Tool uses the override encoding of IBM-037 for a named member. This
override does not necessarily have to be performed on a member-by-member basis, as the DBB Migration
Tool supports the ability to override the encoding for an entire PDS being migrated.
An examination of the files on the local z/OS UNIX Git repository will reveal that the file was copied and
tagged with the override code page of IBM-037.
ls -alT /u/user1/Migration/source
total 64
drwxr-xr-x 2 USER1 OMVS 8192 May 4 13:10 .
drwxr-xr-x 4 USER1 OMVS 8192 May 4 13:10 ..
t IBM-037 T=on -rw-r--r-- 1 USER1 OMVS 61 May 4 14:54 ibm037.src
t IBM-1047 T=on -rw-r--r-- 1 USER1 OMVS 61 May 4 13:10 ibm1047.src
cat /u/user1/Migration/.gitattributes
source/*.src zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
source/*.src zos-working-tree-encoding=IBM-037 git-encoding=utf-8
However, in this example you will notice a slight anomaly: there are two entries for the same
sub-folder pattern source/*.src. This will cause an encoding conflict during the Git add action. To correct
this situation, the .gitattributes file must be manually updated to add the file name. Wildcards can be
used in the file name should there be more than one member that matches this situation. The order of
these entries is important, with the last entry taking precedence. In some cases, additional wildcarding
may be required to prevent further conflicts.
cat /u/user1/Migration/.gitattributes
source/*.src zos-working-tree-encoding=ibm-1047 git-encoding=utf-8
source/ibm037.src zos-working-tree-encoding=IBM-037 git-encoding=utf-8
Once the correction has been made to the .gitattributes file, the Git commit and push actions can
be performed on the updated files to the remote repository:
git add .
git commit -m "IBM037 Code Page Fix"
[main 107c86c] IBM037 Code Page Fix
2 files changed, 2 insertions(+), 1 deletion(-)
git push
Counting objects: 5, done.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 485 bytes | 485.00 KiB/s, done.
Total 5 (delta 2), reused 0 (delta 0)
To github.ibm.com:user1/Migration.git
6436b92..107c86c main -> main
Now when examining the offending file in the remote repository, the contents of the file should be
translated correctly:
The probability that members of a single PDS were written using different code pages, though possible,
is extremely low. However, it is worth pointing out that such a situation could expose an issue in how the
DBB Migration Tool generates the .gitattributes file.
If detected, the DBB Migration Tool will emit a diagnostic message in the console log and will copy the
member to z/OS UNIX as binary and therefore no code page conversion will be performed:
$DBB_HOME/migration/bin/migrate.sh -r /u/user1/Migration -m
MappingRule[hlq:MIGRATE.TECHDOC,extension:CPY,toLower:true,pdsEncoding:IBM-037]
"COPYBOOK(NROUND)"
Note that the DBB Migration Tool has detected numerous non-roundtripable characters on various lines
and has performed the copy as binary.
An examination of the files on the local z/OS UNIX Git Repository will reveal that the file was copied.
The file should automatically be tagged as binary by the DBB Migration Tool, but if not, the chtag -b
command can be used to add the binary tag prior to performing the Git add command.
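For instance, a minimal sketch of manually adding the binary tag, using the path from the listing below:
chtag -b /u/user1/Migration/copybook/nround.cpy    # tag the file as binary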
ls -alT /u/user1/Migration/copybook
total 48
drwxr-xr-x 2 USER1 OMVS 8192 May 8 13:10 .
drwxr-xr-x 5 USER1 OMVS 8192 May 8 13:12 ..
b binary T=off -rw-r--r-- 1 USER1 OMVS 560 May 8 13:10 nround.cpy
Additionally, the .gitattributes file was automatically updated by the DBB Migration Tool to indicate
that the file is mapped as binary:
cat /u/user1/Migration/.gitattributes
copybook/nround.cpy binary
During the Git push to the remote repository, Git will treat this as a binary file and no conversion to UTF-8
will take place. In essence, the resulting file in the remote repository will be the original contents of the
PDS member, in EBCDIC.
git add .
warning: copybook/nround.cpy added file have been automatically tagged BINARY because they were
untagged yet the .gitattributes file specifies they should be tagged
git commit -m "Binary File"
[main 0213795] Binary File
2 files changed, 1 insertion(+)
create mode 100644 .gitattributes
create mode 100644 copybook/nround.cpy
git push
Counting objects: 5, done.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (5/5), 598 bytes | 598.00 KiB/s, done.
Total 5 (delta 0), reused 0 (delta 0)
Once the Git push action has completed to the remote repository, the resulting file will be treated as
binary:
This may not be an ideal situation as described in the documentation on Managing non-printable and
non-roundtripable characters, and should be corrected/reconciled before continuing with the migration.
Note that the DBB Migration Tool has detected numerous non-printable characters on various lines and
has performed the copy as text; the file is tagged on z/OS UNIX using the supplied encoding of IBM-037:
ls -alT /u/user1/Migration/copybook
total 64
drwxr-xr-x 2 USER1 OMVS 8192 May 6 15:39 .
drwxr-xr-x 4 USER1 OMVS 8192 May 6 14:55 ..
t IBM-037 T=on -rw-r--r-- 1 USER1 OMVS 114 May 6 15:39 nprint.cpy
Note that the DBB Migration Tool has detected numerous non-printable characters on various lines and
has performed the copy as binary. The file may be untagged on z/OS UNIX System Services; if that is the
case, tagging the file is still required and should be performed manually:
ls -alT /u/user1/Migration/copybook
total 48
drwxr-xr-x 2 USER1 OMVS 8192 May 6 17:17 .
drwxr-xr-x 4 USER1 OMVS 8192 May 6 17:17 ..
- untagged T=off -rw-r--r-- 1 USER1 OMVS 320 May 6 17:17 nprint.cpy
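In that case, a minimal sketch of the manual tagging step, using the supplied encoding of IBM-037 (path
taken from the listing above):
chtag -tc IBM-037 /u/user1/Migration/copybook/nprint.cpy    # tag the file as IBM-037 text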
With the -l, --log option, a log file can be created to contain all the messages about the migration
process, including the non-printable and non-roundtripable characters encountered during the scan. This
log file can be used by developers to perform the necessary changes in their original source code
members prior to the actual migration.
Many other options of the Mapping Rule parameter can be used to control the behavior of the DBB
Migration Tool. These options are described on the IBM Documentation website.
Resources
• This page contains reformatted excerpts from Managing the code page conversion when migrating z/OS
source files to Git.
Migration process
The SCLM-to-Git Migration Tool moves source members from SCLM to Git in three phases. The steps and
outputs for each phase are detailed in the tool's Migration Process documentation on GitHub. You can
view a video guide for how to perform each phase in the following list.
The three-phase migration process used by the SCLM-to-Git Migration Tool:
1. Extract the SCLM metadata and source: Watch Migrating SCLM to Git Part 1 to learn more.
2. Migrate source members to a Git repository in z/OS UNIX: Watch Migrating SCLM to Git Part 2 to learn
more.
3. Create sample DBB Groovy build scripts: Watch Migrating SCLM to Git Part 3 to learn more.
Resources
• SCLM-to-Git migration tool GitHub repository
• Module 5 - Migrating Source Members to Git, IBM Dependency Based Build Foundation Course
• Migration made easy - host SCLM to Git on Z (webinar)
Manual migration
Manual migration of source data from z/OS® to Git is generally not recommended, as it tends to be slower,
more tedious, and prone to human error. However, it is possible, and can be done in several ways,
including the following:
• Copy the files to z/OS UNIX System Services (z/OS UNIX) via the Interactive System Productivity Facility
(ISPF).
• Copy the files to z/OS UNIX via IBM® Developer for z/OS (IDz).
– Drag and drop members from IDz's Remote System Explorer (RSE) to a local project.
Although manual migration is not recommended, if you do proceed with it, then it is important to
remember that you must also manually create the .gitattributes file used for code page translation
between z/OS and the Git server, and also manually detect and manage code page conversion issues.
When developers start working on a new task, they will first create a feature branch. Feature branches are
created off the latest code state of the source configuration, whether that is the main branch or an epic or
release maintenance branch.
If the feature branch was created on the central Git repository, the developers can use the integrated
development environment (IDE), a terminal, or another Git interface on their local workstation to clone
or pull the new feature branch from the central Git repository. They then switch to the feature branch to
implement their changes.
IDEs supported by IBM® allow developers to perform a Dependency Based Build (DBB) User Build to
quickly gather feedback on the implemented changes. This feature is expected to be used before the
changes are committed and pushed to the remote repository, where a pipeline can process changes
automatically. Developers regularly commit and push their changes to synchronize with their feature
branch in the remote repository.
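A minimal sketch of this rhythm from a developer's local clone (the branch, file, and commit message are
hypothetical):
git add src/cobol/epsmlist.cbl
git commit -m "Apply rounding fix for feature 42"
git push --set-upstream origin feature/42-rounding-fix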
The build uses the dependency metadata managed by IBM Dependency Based Build via DBB collections,
which are consumed by the build framework, zAppBuild. At the first execution of the build process for
feature branches, zAppBuild will duplicate this metadata by cloning the related collections for efficiency
purposes. This cloning phase ensures the accuracy of the dependency information for this pipeline build.
Often, these controlled development test environments are used as shared test environments for multiple
application teams. To use the same runtime environment, such as a CICS region, for both prototyping
and for testing integrated changes, we recommend separating the preliminary (feature) packages from
the planned release packages by separating these types into different libraries. The package for the
Housekeeping recommendations
A housekeeping strategy should be implemented when the feature branch is no longer needed and
therefore removed from the central Git provider. Successful merging adds commits from one branch to
the head of another. Once complete, the branch the commits were merged from can be safely deleted.
(Keeping old branches can cause confusion and does not contribute to the traceability of the history.) This
housekeeping strategy should include the cleanup of the DBB collections, the build workspace on z/OS
UNIX System Services, and the build datasets.
Specific scripts can be integrated into the pipeline to delete collections and build groups, or remove
unnecessary build datasets. When leveraging GitLab CI/CD as the pipeline orchestrator, the use of GitLab
environments helps to automate these steps when a branch is deleted. An implementation sample is
provided via the published technical document Integrating IBM z/OS Platform in CI/CD Pipelines with
GitLab. Generally, webhooks and other extensions of the pipeline orchestrator can be used to perform
these cleanup activities when a branch is deleted.
The Basic Build Pipeline for main, epic, and release branches
It is common practice to build every time the head of the main, epic, or release branch is modified.
When a feature branch is merged into a shared integration branch, a new pipeline is kicked off to build the
merged changes in the context of the configuration of the integration branch.
Additional steps such as automated code reviews or updates of application discovery repositories can be
included in the pipeline process, as shown in the sample pipeline setup in the following screen capture.
For the hotfix workflow, the hotfixes are planned to be implemented from a release maintenance branch
whose baseline reference is the commit (or Git tag) that represents the state of the repository for the
release. This is also the commit from which the respective release maintenance branch was created, as
depicted in the below diagram.
There are two options to deploy the generated artifacts to the shared development test system,
represented by the blue DEV-TEST shape in the above figure.
(Recommended) Option A: Extend the pipeline with a packaging stage and a deployment stage to create
a preliminary package similar to Release Pipeline: Package stage. It is traditionally the responsibility of
the deployment solution to install the preliminary package into different environments. Doing so in this
The Release Pipeline is used by the development team when they want to create a release candidate
package that can be deployed to controlled test environments. The development team manually requests
the pipeline to run. The pipeline is not expected to be used for every merge into the main branch.
The Release Pipeline differs from the previously-discussed pipelines and includes additional steps:
after the stages of building and code scans have successfully completed, the pipeline packages all the
incorporated changes of all merged features for this deliverable to create a package.
The package can be an intermediate release candidate version that can already be tested in the managed
test environments, as outlined in the high-level workflows. When the development team has implemented
all the tasks planned for the iteration, this same pipeline is used to produce the package that will be
deployed to production.
The following diagram outlines the steps of a GitLab pipeline for the Build, Package, and Deploy stages.
The Deploy stage can only be present in the pipeline for the default workflow (with main) when delivering
changes with the next planned release, because the pipeline is unaware of the assigned environments for
the epic and release maintenance workflows.
Implementation details of the Deploy stage can vary based on the pipeline orchestrator being used. In a
GitLab CI/CD implementation, a pipeline can stay on hold and wait for user input. This allows the pipeline
to automatically trigger the deployment of the application package into the first configured environment,
and lets the application team decide when to deploy to the next environment through a manual step (for
instance, deployment to the Acceptance environment).
With Jenkins as the CI/CD orchestrator, it is not common to keep a pipeline in progress over a long time.
In this case, the pipeline engineering team might consider the approach of requesting the deployments
through the user interface of the deployment manager, or alternatively, they can design and set up a
deployment pipeline in Jenkins that can combine the deployment with any automated tests or other
automation tasks.
Deployment to production
When the release candidate package has passed all quality gates and received all the necessary
approvals, it is ready to be deployed to the production environment.
The release manager takes care of this step of the lifecycle and will use the user interface of the
deployment manager, such as UCD's browser-based interface. In the case of a deployment manager
solution with a command-line interface such as Wazi Deploy, the user interface of the pipeline
orchestrator is used by the release manager to drive the deployment to production. A deployment
pipeline definition needs to be configured to roll out the package.
Conclusion
This page provides guidance for implementing a Git branching model for mainframe development with
IBM Dependency Based Build and zAppBuild.
The CI/CD pipeline configurations that were outlined at various stages can be adjusted depending on the
application team's existing and desired development processes and philosophy. Factors that might impact
the design of the pipelines and workflow include test strategies, the number of test environments, and
potential testing limitations.
When designing a CI/CD pipeline, assessment of current and future requirements in the software delivery
lifecycle is key. As CI/CD technologies continue to evolve and automated testing using provisioned
test environments becomes more common in mainframe application development teams, the outlined
branching strategy can also evolve to maximize the benefits from these advances.
Azure Pipelines
• Azure DevOps and IBM® Dependency Based Build Integration
• Building enterprise CI/CD pipelines for mainframe applications using the IBM Z & Cloud Modernization
Stack: See Section 4: "Building a CI/CD pipeline with Azure DevOps and IBM Z & Cloud Modernization
Stack".
GitHub Actions
• Using IBM Dependency Based Build (DBB) with GitHub Actions
GitLab CI
• Build a pipeline with GitLab CI, IBM Dependency Based Build, and IBM UrbanCode® Deploy
• Integrating IBM z/OS platform in CI/CD pipelines with GitLab
Jenkins
• Build a Pipeline with Jenkins, DBB and UCD
• Managing git credentials in Jenkins to access the central git provider
• POC Cookbook – Building a modern pipeline in Mainframe
• Setting up the CI Pipeline Agent on IBM Z as a Started Task
Additional tools
Integrating IBM Application Discovery and Delivery Intelligence (ADDI) in
CI/CD pipelines
Introduction
IBM® Application Discovery and Delivery Intelligence (ADDI) is a product that maps z/OS® artifacts
belonging to mainframe applications, providing developers with reports and graphs that help them
understand the relationships between the different z/OS components. Initially introduced to support the
most common languages of the z/OS platform (COBOL, PL/I, and Assembler), ADDI has been enhanced
over the last few years to support more and more artifact types: CICS® and IMS definitions, Db2® tables,
JCL, job schedulers, and many more.
For IT departments using ADDI, this product has become the one-stop repository that contains all the
necessary information to understand how the different components of a z/OS application work together.
By providing detailed reports on cross-relationships and visual representation of artifact interconnections,
ADDI facilitates the developers’ tasks, especially when it comes to discovering a new application,
searching for a text string over multiple files, or performing an impact analysis before introducing a
change.
To run the Make process from the command line, the Build Client is invoked as follows:
IBMApplicationDiscoveryBuildClient.exe /m1 ProjectName /m2 y|n /m3 y|n
Where:
• /m1 is the parameter that is used to invoke the Make process.
• ProjectName is the name of the project where the Make process is triggered.
• /m2 (y/n) refers to whether the Make process is forced or not as follows:
– /m2 y means that if another AD Component is using the project in read mode, the process starts.
– /m2 n means that if another AD Component is using the project in read mode, the process does not
start until the project is released.
• /m3 (y/n) refers to whether the status of the Make process is logged or not as follows:
– /m3 y means that the status log file BatchMakeStatusFile_timestamp.txt is generated under
the project's folder.
– /m3 n means that the status log file is not generated.
To run the Synchronization process from the command line, the Build Client is invoked as follows:
IBMApplicationDiscoveryBuildClient.exe /umm1 ProjectName
Where:
• /umm1 is the parameter that is used to invoke the Synchronization process.
• ProjectName is the name of the project where the Synchronization process is triggered.
With this configuration, it is not the responsibility of ADDI to retrieve the members from z/OS or from any
other source, as it only checks which files have changed on the filesystem. This is where Git plays a major
role to retrieve these files from a central Git provider. Assuming the source code files of a z/OS application
are stored in a Git repository, a Git client installed on the machine where ADDI is hosted can retrieve the
source files, by issuing Git commands such as git clone and/or git fetch.
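For example (the repository URL is hypothetical):
# Initial retrieval of the application's sources on the ADDI machine
git clone https://git.example.com/org/retirementCalculator.git
# Subsequent refreshes before a synchronization
cd retirementCalculator
git pull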
For a demo of Git support in ADDI, please refer to the Additional resources section of this page.
Setting up prerequisites
In the previous sections, the integration with Git and the command-line options of the Build client
were described. All the necessary pieces are now available to complete the integration of ADDI into
an automated CI/CD pipeline. Before implementing the automated process to update and build ADDI
projects, some technical prerequisites must first be set up.
To enable the use of Git to synchronize source files stored in a Git repository, a Git client must be installed
on the machine where ADDI is running, because the source components will be cloned there. To ensure
the Git repository is accessible and can be safely cloned to the ADDI machine, a git clone operation
can first be performed manually to verify connectivity and credentials.
In this example, the project is called RetirementCalculator. Additional options such as the database
attachment to use are also specified, along with the Cross Applications Analysis or the Business Rules
Discovery. The creation of the project can now be finalized. The database for this project is then created.
The next step is to configure the project to enable the use of a Synchronization file. Using ADDI’s
Administration Dashboard, select the Configure > Install Configurations tab, and navigate to the IBM
Application Discovery Build Client install configuration link. On the displayed panel, the members
synchronization must be enabled, and the path to a Synchronization file must be specified:
The content of the Git repository is now available on the local filesystem of the machine where ADDI runs:
The source files can now be added to the project through the Build Client. For all the artifact types of your
project, right-click on the corresponding virtual folder, and select Add all files from folder. In the next
panel, specify the folder path where your source files were cloned.
In the following example image, the path
C:\Program Files\gitlab-runner\builds\RetirementCalculator\ADDI-Integration\retirementCalculator\cobol
is specified for the zOS Cobol virtual folder:
This command launches the Build Client with no graphical interface, to update the source files from the
local filesystem.
The next command to check is the Make of the project.
When the process is complete, a log file is created and made available in the project folder. It should show
that no updates are found (since the Build was previously performed on the same source files):
The next setup phase is to implement these two command-line actions in the CI/CD pipeline. In this
example, GitLab will be used to drive the execution of the CI/CD pipeline. An additional step of the
pipeline is then declared to call the ADDI Build Client with the two command-line options.
The pipeline description for GitLab is as follows:
ADDI Refresh:
  stage: Analysis
  tags: [addi]
  dependencies: []
  variables:
    ADDI_PROJECT_NAME: RetirementCalculator
  script:
    - |
      & 'C:\Program Files\IBM Application Discovery and Delivery Intelligence\IBM Application Discovery Build Client\Bin\Release\IBMApplicationDiscoveryBuildClient.exe' /umm1 ${ADDI_PROJECT_NAME}
      & 'C:\Program Files\IBM Application Discovery and Delivery Intelligence\IBM Application Discovery Build Client\Bin\Release\IBMApplicationDiscoveryBuildClient.exe' /m1 ${ADDI_PROJECT_NAME} /m2 y /m3 y
In the script section, the two Build Client commands are run in sequence, in a synchronous way. The
first command will synchronize the project based on the content of the Synchronization file, and the
second command will trigger the Make processing in ADDI.
The GitLab Runner has been configured to clone into a specific location, as specified by
the GIT_CLONE_PATH variable. In this sample setup, this variable is set to
$CI_BUILDS_DIR/$CI_PROJECT_NAME/$CI_COMMIT_REF_NAME, which resolves to
C:\Program Files\gitlab-runner\builds\RetirementCalculator\ADDI-Integration on the Windows machine
where ADDI is running. It is necessary to ensure that this path is consistent with the path configured in
the Synchronization file, to take updates of source files into account.
On the machine where ADDI runs, a log file is created in the ADDI project’s folder once the Make process
is finished. This log file shows that the update to the EBUD01 program was correctly processed and built
by ADDI:
Shortly after this successful processing, the updated analysis is available through the Analyze Client in
Eclipse.
Conclusion
This documentation describes how the integration of ADDI can be performed in a CI/CD pipeline.
Depending on the SCM solution and CI/CD orchestrator being used, this integration can differ slightly
and leverage other capabilities those tools provide.
In this sample implementation, only one project is created in ADDI, but it may be interesting to have
different projects for different states of the same application. A project in ADDI could represent the
application in its main (mainline change history) state and another project could represent the application
in production. This implementation would require two distinct projects in ADDI, and some changes
in the Synchronization file and the CI/CD process. In this configuration, the number of entries in the
Synchronization file would double, due to configuration for the two projects referring to different locations
on the filesystem where branches are checked out.
Another option for the implementation would be to optimize the execution of the ADDI Build Client
commands. In the sample implementation described in this documentation, each change to the in-
development branch of the Git repository triggers the pipeline to refresh ADDI. If too many updates are
occurring on the application, especially in its in-development state, it may be preferable to run the
update process only once a day. This can be managed by a CI/CD orchestrator or by using the cron utility.
Additional resources
The following video series demonstrates ADDI's Git support, including the local synchronization process,
command-line interface commands and automation flow, and how to automatically populate and update
ADDI projects from Git.
1. ADDI and Git Support - Part 1
2. ADDI and Git Support - Part 2
Additional resources
Refer to the following links for additional information:
1. DBB Documentation
2. DBB community samples repository
3. zAppBuild repository
4. Discover and plan for z/OS hybrid applications
5. CI for the z/OS DevOps experience
Trademarks
IBM®, the IBM logo, and ibm.com® are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
"Copyright and trademark information" here.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux® is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
Red Hat®, JBoss®, OpenShift®, Fedora®, Hibernate®, Ansible®, CloudForms®, RHCA®, RHCE®, RHCSA®,
Ceph®, and Gluster® are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the
United States and other countries.