JCL - Overview: When To Use JCL
Batch and online processing differ in how the input, the output and the program execution requests are handled. In batch processing, these aspects are specified in a JCL, which is in turn received by the Operating System.
Job Processing
A job is a unit of work which can be made up of many job steps. Each job step is specified in a
Job Control Language (JCL) through a set of Job Control Statements.
The Operating System uses Job Entry System (JES) to receive jobs into the Operating System,
to schedule them for processing and to control the output.
There are many Free Mainframe Emulators available for Windows which can be used to write
and learn sample JCLs.
One such emulator is Hercules, which can be installed on Windows by following a few simple steps:
1. Download and install the Hercules emulator, which is available from the Hercules home site: www.hercules-390.eu
2. Once you have installed the package on a Windows machine, it creates a folder like C:\Mainframes.
3. Run Command Prompt (CMD) and change to the directory C:\Mainframes.
4. The complete guide on various commands to write and execute a JCL can be found at www.jaymoseley.com/hercules/installmvs/instmvs2.htm
Hercules is an open source software implementation of the mainframe System/370 and ESA/390
architectures, in addition to the latest 64-bit z/Architecture. Hercules runs under Linux,
Windows, Solaris, FreeBSD, and Mac OS X.
A user can connect to a mainframe server in a number of ways, such as a thin client, dummy terminal, Virtual Client System (VCS) or Virtual Desktop System (VDS).
Every valid user is given a login id to enter the z/OS interface (TSO/E or ISPF). In the z/OS interface, the JCL can be coded and stored as a member in a Partitioned Dataset (PDS). When the JCL is submitted, it is executed and the output is received as explained in the job processing section of the previous chapter.
Structure of a JCL
The basic structure of a JCL with the common statements is given below:
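The example below is a reconstructed sketch of such a JCL; the job name, dataset names and SORT fields are illustrative, not from the original text, and the numbers in the comment field match the statement descriptions that follow (with (2) taken to be a comment statement):

```
//TTYYSAMP JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID    (1)
//*                                                         (2)
//STEP010  EXEC PGM=SORT                                    (3)
//SORTIN   DD DSN=JCL.SAMPLE.INPUT,DISP=SHR                 (4)
//SORTOUT  DD DSN=JCL.SAMPLE.OUTPUT,                        (5)
//         DISP=(NEW,CATLG,DELETE),SPACE=(CYL,(1,1)),
//         DCB=(LRECL=50,RECFM=FB)
//SYSIN    DD *                                             (6)
  SORT FIELDS=(1,15,CH,A)
/*
```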
(1) JOB statement - Specifies the information required for SPOOLing of the job, such as the job id, the priority of execution and the user-id to be notified upon completion of the job.
(3) EXEC statement - Specifies the PROC/Program to be executed. In the above example, a
SORT program is being executed (i.e., sorting the input data in a particular order)
(4) Input DD statement - Specifies the type of input to be passed to the program mentioned in
(3). In the above example, a Physical Sequential (PS) file is passed as input in shared mode
(DISP = SHR).
(5) Output DD statement - Specifies the type of output to be produced by the program upon execution. In the above example, a PS file is created. If a statement extends beyond the 71st position in a line, then it is continued in the next line, which should start with "//" followed by one or more spaces.
(6) There can be other types of DD statements to specify additional information to the program (in the above example, the SORT condition is specified in the SYSIN DD statement) and to specify the destination for the error/execution log (example: SYSUDUMP/SYSPRINT). The data for DD statements can be held in a dataset (mainframe file) or coded as instream data (information hard-coded within the JCL), as given in the above example.
Each of the JCL statements is accompanied by a set of parameters to help the Operating System in completing the program execution. The parameters can be of two types:
Positional Parameters:
They must appear in a fixed order; their position in the statement conveys their meaning (for example, the accounting information and programmer name in a JOB statement).
Keyword Parameters:
They are coded after the positional parameters, but can appear in any order. Keyword parameters can be omitted if not required. The generic syntax is KEYWORD=value.
Example: MSGCLASS=X, i.e., the job log is redirected to the output SPOOL after the
job completion.
In the above example, CLASS, MSGCLASS and NOTIFY are keyword parameters of
JOB statement. There can be keyword parameters in EXEC statement as well.
These parameters have been detailed out in the subsequent chapters along with appropriate
examples.
Syntax:
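Going by the descriptions that follow, the JOB statement syntax takes the general form:

```
//Job-name JOB Positional-param, Keyword-param
```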
Let us see the description of the terms used in above JOB statement syntax.
Job-name
This gives an id to the job while submitting it to the OS. It can be 1 to 8 alphanumeric characters long and starts just after //.
JOB
This is the keyword to identify it as a JOB statement.
Positional-param
There are two positional parameters:
Account information: This refers to the person or group to which the CPU time is owed. It is set as per the rules of the company owning the mainframes. If it is specified as (*), then it takes the id of the user who is currently logged into the Mainframe Terminal.
Programmer name: This identifies the person or group who is in charge of the JCL. This is not a mandatory parameter and can be replaced by a comma.
Keyword-param
Following are the various keyword parameters which can be used in the JOB statement. You can use one or more parameters based on requirements, separated by commas:
CLASS
Based on the time duration and the number of resources required by the job, companies assign different job classes. These can be visualized as individual schedulers used by the OS to receive the jobs. Placing the jobs in the right scheduler aids in easy execution of the jobs. Some companies have different classes for jobs in test and production environments. Following is the syntax:
CLASS=0 to 9 | A to Z
PRTY
To specify the priority of the job within a job class. If this parameter is not specified, then the job is added to the end of the queue in the specified CLASS. Following is the syntax:
PRTY=N
NOTIFY
To specify the user to whom the success or failure message of the job is sent. Following is the syntax:
NOTIFY=userid | &SYSUID
Here the system sends the message to the user "userid", but if we use NOTIFY=&SYSUID, then the message is sent to the user submitting the JCL.
MSGCLASS
To specify the output destination for the system and job messages when the job is complete. Following is the syntax:
MSGCLASS=CLASS
Valid values of CLASS can be from "A" to "Z" and "0" to "9". MSGCLASS=Y can be set as a class to send the job log to the JMR (JOBLOG Management and Retrieval: a repository within mainframes to store the job statistics).
MSGLEVEL
Specifies the type of messages to be written to the output destination specified in MSGCLASS. Following is the syntax:
MSGLEVEL=(ST, MSG)
Here, ST specifies the type of job control statements written to the log and MSG specifies the type of allocation/termination messages written to the log.
TYPRUN
Specifies a special processing for the job. Following is the syntax:
TYPRUN=SCAN | HOLD
Here, SCAN checks the JCL for syntax errors without executing it, and HOLD puts the job on HOLD in the job queue until it is released.
TIME
Specifies the time span to be used by the processor to execute the job. Following is the syntax:
TIME=(mm, ss) | 1440
Here, mm is minutes and ss is seconds; TIME=1440 states that the job can use the processor without any time limit.
REGION
Specifies the address space required to run a job step within the job. Following is the syntax:
REGION=nK | nM
Here, the region can be specified as nK or nM, where n is a number, K is kilobytes and M is megabytes.
Here, the JOB statement extends beyond the 71st position in a line, so we continue in the next line, which should start with "//" followed by one or more spaces.
Miscellaneous Parameters:
There are a few other parameters which can be used with the JOB statement, but they are not frequently used:
BYTES
Size of data to be written to the output log and the action to be taken when the size is exceeded.
COND and RESTART
These are used in conditional job step processing and are explained in detail while discussing conditional processing.
The purpose of the EXEC statement is to provide required information for the
program/procedure executed in the job step. Parameters coded in this statement can pass data to
the program in execution, can override certain parameters of JOB statement and can pass
parameters to the procedure if the EXEC statement calls a procedure instead of directly
executing a program.
Syntax:
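Mirroring the JOB statement, the EXEC statement syntax takes the general form:

```
//Step-name EXEC Positional-param, Keyword-param
```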
Let us see the description of the terms used in above EXEC statement syntax.
Step-name
This identifies the job step within the JCL. It can be 1 to 8 alphanumeric characters long.
EXEC
This is the keyword to identify it as an EXEC statement.
Positional-param
This can be either PGM or PROC:
PGM: This refers to the program name to be executed in the job step.
PROC: This refers to the procedure name to be executed in the job step. We will discuss it in a separate chapter.
Keyword-param
Following are the various keyword parameters for EXEC statement. You can use one or more
parameters based on requirements and they are separated by comma:
ADDRSPC
This is used to specify whether the job step requires virtual or real storage for execution. Following is the syntax:
ADDRSPC=VIRT | REAL
ACCT
This specifies the accounting information of the job step. Following is the syntax:
ACCT=(userid)
This is similar to the positional parameter accounting information in the JOB statement. If it is coded both in the JOB and the EXEC statement, then the accounting information in the JOB statement applies to all job steps where an ACCT parameter is not coded. The ACCT parameter in an EXEC statement overrides the one present in the JOB statement for that job step only.
Common Keyword Parameters of EXEC and JOB Statement
Following is a simple example of JCL script along with JOB and EXEC statements:
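A minimal sketch of such a JCL (the job and step names are illustrative; IEFBR14 is the standard do-nothing utility program):

```
//TTYYSAMP JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//*
//STEP010  EXEC PGM=IEFBR14,REGION=8K,ADDRSPC=VIRT
```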
JCL - DD Statement
Datasets are mainframe files with records organised in a specific format. Datasets are stored on the Direct Access Storage Device (DASD) or tapes of the mainframe and are basic data storage areas. If they are required to be used/created in a batch program, then the physical name of the file (i.e., dataset) along with the file format and organisation are coded in a JCL.
The definition of each dataset used in the JCL is given using the DD statement. The input and output resources required by a job step need to be described within a DD statement with information such as the dataset organisation, storage requirements and record length.
Syntax:
//DD-name DD Parameters
Let us see the description of the terms used in above DD statement syntax.
DD-name
A DD-NAME identifies the dataset or input/output resource. If this is an input/output file used
by a COBOL/Assembler program, then the file is referenced by this name within the program.
DD
This is the keyword to identify it as a DD statement.
Parameters
Following are the various parameters for DD statement. You can use one or more parameters
based on requirements and they are separated by comma:
DISP
The DISP parameter describes the status of the dataset and its disposition at the end of the job step on normal and abnormal completion. When any of the sub-parameters of DISP are not specified, the default values are as follows: the status defaults to NEW, the normal-disposition defaults to DELETE if the status is NEW and to KEEP otherwise, and the abnormal-disposition defaults to the same value as the normal-disposition.
DCB
The Data Control Block (DCB) parameter details the physical characteristics of a dataset. RECFM is the record format of the dataset. RECFM can hold the values FB, V or VB. FB is a fixed block organisation where one or more logical records are grouped within a single block. V is a variable organisation where one variable length logical record is placed within one physical block. VB is a Variable Block organisation where one or more variable length logical records are placed within one physical block.
BLKSIZE is the size of the physical block. The larger the block, the greater the number of records for a FB or VB file.
The UNIT and VOL parameters are listed in the system catalog for cataloged datasets and hence can be accessed with just the physical DSN name. But for uncataloged datasets, the DD statement should include these parameters. For new datasets to be created, the UNIT/VOL parameters can be specified, or z/OS allocates a suitable device and volume.
UNIT
The UNIT parameter specifies the type of device on which the dataset is stored. The device type can be identified using a Hardware Address or a Device type group. Following is the syntax:
UNIT=DASD | SYSDA
where DASD stands for Direct Access Storage Device and SYSDA stands for System Direct Access and refers to the next available disk storage device.
VOL
The VOL parameter specifies the volume number on the device identified by the UNIT parameter. Following is the syntax:
VOL=SER=(v1,v2)
where v1, v2 are volume serial numbers. You can use the following syntax as well:
VOL=REF=*.DDNAME
Following is an example, which makes use of DD statements along with various parameters
explained above:
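A sketch of such a JCL (the dataset names and space values are illustrative; IEBGENER is the standard copy utility):

```
//TTYYSAMP JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP010  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MYDATA.URMI.INPUT,DISP=SHR
//SYSUT2   DD DSN=MYDATA.URMI.OUTPUT,
//         DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//         SPACE=(CYL,(5,2)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//SYSIN    DD DUMMY
```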
JOBLIB Statement
A JOBLIB statement is used in order to identify the location of the program to be executed in a
JCL. The JOBLIB statement is specified after the JOB statement and before the EXEC
statement. This can be used only for the instream procedures and programs.
Syntax:
//JOBLIB DD DSN=dsnname,DISP=SHR
The JOBLIB statement is applicable to all the EXEC statements within the JCL. The program
specified in the EXEC statement will be searched in the JOBLIB library followed by the system
library.
For example, if the EXEC statement is executing a COBOL program, the load module of the
COBOL program should be placed within the JOBLIB library.
STEPLIB Statement
A STEPLIB statement is used in order to identify the location of the program to be executed
within a Job Step. The STEPLIB statement is specified after the EXEC statement and before the
DD statement of the job step.
Syntax:
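The syntax mirrors that of the JOBLIB statement:

```
//STEPLIB DD DSN=dsnname,DISP=SHR
```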
The program specified in the EXEC statement will be searched in the STEPLIB library followed
by the system library. STEPLIB coded in a job step overrides the JOBLIB statement.
Example
The following example shows the usage of JOBLIB and STEPLIB statements:
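A sketch consistent with the description that follows (the job name and JOB parameters are illustrative; the library and program names are taken from the explanation below):

```
//MYJCL    JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//JOBLIB   DD DSN=MYPROC.SAMPLE.LIB1,DISP=SHR
//STEP1    EXEC PGM=MYPROG1
//*
//STEP2    EXEC PGM=MYPROG2
//STEPLIB  DD DSN=MYPROC.SAMPLE.LIB2,DISP=SHR
```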
Here, the load module of the program MYPROG1 (in STEP1) is searched in the
MYPROC.SAMPLE.LIB1. If not found, it is searched in the system library. In STEP2,
STEPLIB overrides JOBLIB and load module of the program MYPROG2 is searched in
MYPROC.SAMPLE.LIB2 and then in the system library.
INCLUDE Statement
A set of JCL statements coded within a member of a PDS can be included to a JCL using an
INCLUDE statement. When the JES interprets the JCL, the set of JCL statements within the
INCLUDE member replaces the INCLUDE statement.
Syntax:
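The INCLUDE statement takes the general form:

```
//name INCLUDE MEMBER=member-name
```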
The main purpose of INCLUDE statement is reusability. For example, common files to be used
across many JCLs can be coded as DD statements within INCLUDE member and used in a JCL.
Dummy DD statements, data card specifications, PROCs, JOB and PROC statements cannot be coded within an INCLUDE member. An INCLUDE statement can be coded within an INCLUDE member and further nesting can be done up to 15 levels.
JCLLIB Statement
A JCLLIB statement is used to identify the private libraries used in the job. It can be used both
with instream and cataloged procedures.
Syntax:
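The JCLLIB statement takes the general form:

```
//name JCLLIB ORDER=(library1, library2, ..., libraryN)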
The libraries specified in the JCLLIB statement are searched in the given order to locate the programs, procedures and INCLUDE members used in the job. There can be only one JCLLIB statement in a JCL; it is specified after the JOB statement and before the EXEC and INCLUDE statements, and it cannot be coded within an INCLUDE member.
Example
In the following example, the program MYPROG3 and INCLUDE member MYINCL is
searched in the order of MYPROC.BASE.LIB1, MYPROC.BASE.LIB2, system library.
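A sketch of such a JCL (the job name and JOB parameters are illustrative; the library, program and member names are taken from the explanation above):

```
//MYJCL    JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//MYLIB    JCLLIB ORDER=(MYPROC.BASE.LIB1,MYPROC.BASE.LIB2)
//STEP1    EXEC PGM=MYPROG3
//INC1     INCLUDE MEMBER=MYINCL
```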
JCL - Procedures
JCL procedures are a set of statements inside a JCL grouped together to perform a particular function. Usually, the fixed part of the JCL is coded in a procedure, while the varying part of the job is coded within the JCL.
You can use a procedure to achieve parallel execution of a program using multiple input files. A
JCL can be created for each input file, and a single procedure can be called simultaneously by
passing the input file name as a symbolic parameter.
Syntax:
//Step-name EXEC procedure-name
The contents of the procedure are held within the JCL for an instream procedure. The contents
are held within a different member of the base library for a cataloged procedure. This chapter is
going to explain two types of procedures available in JCL and then finally we will see how we
can nest various procedures.
Instream Procedure
When the procedure is coded within the same JCL member, it is called an Instream Procedure. It
should start with a PROC statement and end with a PEND statement.
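A sketch of an instream procedure, consistent with the description below (the dataset names are illustrative; INSTPROC and the symbolic parameters DSNAME and DATAC are from the explanation that follows):

```
//SAMPLE   JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//*
//INSTPROC PROC
//PROC1    EXEC PGM=SORT
//SORTIN   DD DSN=&DSNAME,DISP=SHR
//SORTOUT  DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD DSN=&DATAC,DISP=SHR
//         PEND
//*
//STEP1    EXEC INSTPROC,DSNAME=MYDATA.FILE1,DATAC=MYLIB(DATA1)
//STEP2    EXEC INSTPROC,DSNAME=MYDATA.FILE2,DATAC=MYLIB(DATA2)
```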
In the above example, the procedure INSTPROC is called in STEP1 and STEP2 using different
input files. The parameters DSNAME and DATAC can be coded with different values while
calling the procedure and these are called as symbolic parameters. The varying input to the JCL
such as file names, datacards, PARM values, etc., are passed as symbolic parameters to
procedures.
User-defined symbolic parameters are called JCL Symbols. There are certain symbols called
system symbols, which are used for logon job executions. The only system symbol used in batch
jobs by normal users is &SYSUID and this is used in the NOTIFY parameter in the JOB
statement.
Cataloged Procedure
When the procedure is separated out from the JCL and coded in a different data store, it is called
a Cataloged Procedure. A PROC statement is not mandatory to be coded in a cataloged
procedure. Following is an example of JCL where it's calling CATLPROC procedure:
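A sketch of such a calling JCL (the job name and the BASELB library are illustrative; CATLPROC, PROG and CATPRC1 are from the explanation below):

```
//CATLJCL  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP     EXEC CATLPROC,PROG=CATPRC1,BASELB=MYCOBOL.BASE.LIB1
```

and the cataloged procedure CATLPROC, held as a member of a procedure library (the default value of PROG here is illustrative):

```
//CATLPROC PROC PROG=CATPRC0,BASELB=MYCOBOL.BASE.LIB1
//*
//CATSTP   EXEC PGM=&PROG
//STEPLIB  DD DSN=&BASELB,DISP=SHR
```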
Within the procedure, the symbolic parameters PROG and BASELB are coded. Please note that
the PROG parameter within the procedure is overridden by the value in the JCL and hence PGM
takes the value CATPRC1 during execution.
Nested Procedures
Calling a procedure from within a procedure is called a nested procedure. Procedures can be
nested up to 15 levels. The nesting can be completely in-stream or cataloged. We cannot code an
instream procedure within a cataloged procedure.
A SET statement is used to define commonly used symbolics across job steps or procedures. It initializes or overrides the values of the symbolic names. It has to be defined before the first use of the symbolic names in the JCL.
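A minimal sketch of a SET statement in use (the job, symbol and dataset names are illustrative):

```
//SETSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//MYSET    SET DSNM1=INPUT1
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//*        &DSNM1 resolves to INPUT1, set in the SET statement
//SYSUT1   DD DSN=MYDATA.URMI.&DSNM1,DISP=SHR
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY
```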
The common return codes of a job step and their meanings are given below:
0 = Normal - all OK
4 = Warning - minor errors or problems.
8 = Error - significant errors or problems.
12 = Severe error - major errors or problems, the results should not be trusted.
16 = Terminal error - very serious problems, do not use the results.
A job step execution can be controlled based on the return code of the previous step(s) using the
COND parameter and IF-THEN-ELSE construct, which has been explained in this tutorial.
COND parameter
A COND parameter can be coded in the JOB or EXEC statement of JCL. It is a test on the return
code of the preceding job steps. If the test is evaluated to be true, the current job step execution is
bypassed. Bypassing is just omission of the job step and not an abnormal termination. There can
be at most eight conditions combined in a single test.
Syntax:
COND=(rc,logical-operator)
or
COND=(rc,logical-operator,stepname)
or
COND=EVEN
or
COND=ONLY
The last two conditions, (a) COND=EVEN and (b) COND=ONLY, are explained below in this tutorial.
The COND can be coded either inside JOB statement or EXEC statement, and in both the cases,
it behaves differently as explained below:
When COND is coded in JOB statement, the condition is tested for every job step. When the
condition is true at any particular job step, it is bypassed along with the job steps following it.
Following is an example:
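A sketch of COND coded on the JOB statement (the job, step and program names are illustrative):

```
//CNDSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID,
//         COND=(5,LE)
//*        If 5 <= RC of any step, that step and all
//*        following steps are bypassed.
//STP01    EXEC PGM=MYPROG1
//STP02    EXEC PGM=MYPROG2
```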
When COND is coded in EXEC statement of a job step and found to be true, only that job step is
bypassed, and execution is continued from next job step.
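A sketch of COND coded on an EXEC statement (the step and program names are illustrative):

```
//CNDSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STP01    EXEC PGM=MYPROG1
//*        STP02 is bypassed when 5 <= RC of STP01;
//*        STP03 executes regardless of this test.
//STP02    EXEC PGM=MYPROG2,COND=(5,LE,STP01)
//STP03    EXEC PGM=MYPROG3
```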
When COND=EVEN is coded, the current job step is executed, even if any of the previous steps
abnormally terminate. If any other RC condition is coded along with COND=EVEN, then the job
step executes if none of the RC condition is true.
When COND=ONLY is coded, the current job step is executed, only when any of the previous
steps abnormally terminate. If any other RC condition is coded along with COND=ONLY, then
the job step executes if none of the RC condition is true and any of the previous job steps fail
abnormally.
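A sketch of COND=EVEN and COND=ONLY in use (the step and program names are illustrative):

```
//CNDSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STP01    EXEC PGM=MYPROG1
//*        STP02 executes even if STP01 abends
//STP02    EXEC PGM=MYPROG2,COND=EVEN
//*        STP03 executes only if a previous step abends
//STP03    EXEC PGM=MYPROG3,COND=ONLY
```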
Another approach to control the job processing is by using IF-THEN-ELSE constructs. This
gives more flexibility and user-friendly way of conditional processing.
Syntax:
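The IF-THEN-ELSE construct takes the general form:

```
//name IF condition THEN
          list of statements (action when condition is true)
//name ELSE
          list of statements (action when condition is false)
//name ENDIF
```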
Following is the description of the used terms in the above IF-THEN-ELSE Construct:
name: This is optional. A name can have 1 to 8 alphanumeric characters, starting with a letter, #, $ or @.
Condition: A condition will have a format: KEYWORD OPERATOR VALUE, where
KEYWORDS can be RC (Return Code), ABENDCC (System or user completion code),
ABEND, RUN (step started execution). An OPERATOR can be logical operator (AND
(&), OR (|)) or relational operator (<, <=, >, >=, <>).
Example
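Below is a sketch reconstructed to match the walkthrough that follows; the program names are illustrative, while the step names (STP01 to STP07), IF names (IF1 to IF5), the procedure PRC1 and its proc-step PST1 are taken from the explanation below:

```
//CNDSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//*
//STP01    EXEC PGM=MYPROG1
//IF1      IF STP01.RC = 0 THEN
//STP02    EXEC PGM=MYPROG2
//         ENDIF
//IF2      IF STP01.RUN THEN
//STP03A   EXEC PGM=MYPROG3
//STP03B   EXEC PGM=MYPROG4
//         ENDIF
//IF3      IF STP03B.ABEND = FALSE THEN
//STP04    EXEC PGM=MYPROG5
//         ELSE
//         ENDIF
//IF4      IF (STP01.RC = 0 & STP02.RC <= 4) THEN
//STP05    EXEC PROC=PRC1
//         ENDIF
//IF5      IF STP05.PST1.ABEND THEN
//STP06    EXEC PGM=MYPROG6
//         ELSE
//STP07    EXEC PGM=MYPROG7
//         ENDIF
```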
Let's try to look into the above program to understand it in little more detail:
The return code of STP01 is tested in IF1. If it is 0, then STP02 is executed. Else, the
processing goes to the next IF statement (IF2).
In IF2, If STP01 has started execution, then STP03a and STP03b are executed.
In IF3, If STP03b does not ABEND, then STP04 is executed. In ELSE, there are no
statements. It is called a NULL ELSE statement.
In IF4, if STP01.RC = 0 and STP02.RC <=4 are TRUE, then STP05 is executed.
In IF5, if the proc-step PST1 in PROC PRC1 in jobstep STP05 ABEND, then STP06 is
executed. Else STP07 is executed.
If IF4 evaluates to false, then STP05 is not executed. In that case, IF5 is not tested and the steps STP06 and STP07 are not executed.
The IF-THEN-ELSE will not be executed in the case of abnormal termination of the job, such as the user cancelling the job, job time expiry, or a dataset being backward referenced to a step that is bypassed.
Setting Checkpoints
You can set checkpoint dataset inside your JCL program using SYSCKEOV, which is a DD
statement.
A CHKPT is the parameter coded for multi-volume QSAM datasets in a DD statement. When a
CHKPT is coded as CHKPT=EOV, a checkpoint is written to the dataset specified in the
SYSCKEOV statement at the end of each volume of the input/output multi-volume dataset.
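A sketch of a checkpoint setup, consistent with the description below (the job and program names are illustrative; the datasets SAMPLE.CHK and SAMPLE.OUT are from the explanation, and SAMPLE.OUT is assumed to be a multi-volume QSAM dataset):

```
//CHKSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//*
//STP01    EXEC PGM=MYCKPGM
//SYSCKEOV DD DSN=SAMPLE.CHK,DISP=MOD
//IN1      DD DSN=SAMPLE.IN,DISP=SHR
//OUT1     DD DSN=SAMPLE.OUT,DISP=(NEW,CATLG,DELETE),
//         CHKPT=EOV,LRECL=80,RECFM=FB
```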
In the above example, a checkpoint is written in dataset SAMPLE.CHK at the end of each
volume of the output dataset SAMPLE.OUT.
Restart Processing
You can restart processing either in an automated way using the RD parameter or manually using the RESTART parameter.
RD parameter is coded in the JOB or EXEC statement and it helps in automated JOB/STEP
restart and can hold one of the four values: R, RNC, NR or NC.
RD=R allows automated restarts and considers the checkpoint coded in the CHKPT
parameter of the DD statement.
RD=RNC allows automated restarts, but overrides (ignores) the CHKPT parameter.
RD=NR specifies that the job/step cannot be automatically restarted. But when it is
manually restarted using the RESTART parameter, CHKPT parameter (if any) will be
considered.
RD=NC disallows automated restart and checkpoint processing.
If there is a requirement to do automated restart for specific abend codes only, then it can be
specified in the SCHEDxx member of the IBM system parmlib library.
RESTART parameter is coded in the JOB or EXEC statement and it helps in manual restart of
the JOB/STEP after the job failure. RESTART can be accompanied with a checkid, which is the
checkpoint written in the dataset coded in the SYSCKEOV DD statement. When a checkid is
coded, the SYSCHK DD statement should be coded to reference the checkpoint dataset after the
JOBLIB statement (if any), else after the JOB statement.
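A sketch of a manual restart, consistent with the description below (the job and program names are illustrative; the checkid chk5, the SYSCHK statement and the commented-out SYSCKEOV statement are from the explanation):

```
//CHKSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID,
//         RESTART=(STP01,chk5)
//SYSCHK   DD DSN=SAMPLE.CHK,DISP=OLD
//*
//STP01    EXEC PGM=MYCKPGM
//*SYSCKEOV DD DSN=SAMPLE.CHK,DISP=MOD
//IN1      DD DSN=SAMPLE.IN,DISP=SHR
//OUT1     DD DSN=SAMPLE.OUT,DISP=(NEW,CATLG,DELETE),
//         CHKPT=EOV,LRECL=80,RECFM=FB
```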
In the above example, chk5 is the checkid, i.e., STP01 is restarted at checkpoint5. Please note
that a SYSCHK statement is added and SYSCKEOV statement is commented out in the previous
program explained in Setting Checkpoint section.
DSN=&name | *.stepname.ddname
Temporary datasets need storage only for the job duration and are deleted at job completion.
Such datasets are represented as DSN=&name or simply without a DSN specified.
If a temporary dataset created by a job step is to be used in the next job step, then it is referenced
as DSN=*.stepname.ddname. This is called Backward Referencing.
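A sketch of a temporary dataset passed between steps (the job, step and program names are illustrative; the double ampersand (&&) is the usual way of marking a dataset as temporary):

```
//TMPSAMP  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP01   EXEC PGM=MYPROG1
//OUT1     DD DSN=&&TEMP,DISP=(NEW,PASS),
//         UNIT=SYSDA,SPACE=(CYL,(1,1))
//STEP02   EXEC PGM=MYPROG2
//*        Backward reference to the dataset created in STEP01
//IN1      DD DSN=*.STEP01.OUT1,DISP=(OLD,DELETE)
```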
Concatenating Datasets
If there is more than one dataset of the same format, they can be concatenated and passed as an
input to the program in a single DD name.
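A sketch of such a concatenation (the input dataset names and the sort key are illustrative; SAMPLE.OUTPUT, SORTIN and SORTOUT are from the explanation below):

```
//CONCAT   JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP10   EXEC PGM=SORT
//SORTIN   DD DSN=SAMPLE.INPUT1,DISP=SHR
//         DD DSN=SAMPLE.INPUT2,DISP=SHR
//         DD DSN=SAMPLE.INPUT3,DISP=SHR
//SORTOUT  DD DSN=SAMPLE.OUTPUT,DISP=(NEW,CATLG,DELETE),
//         UNIT=SYSDA,SPACE=(CYL,(5,2))
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
  SORT FIELDS=(1,15,CH,A)
/*
```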
In the above example, three datasets are concatenated and passed as input to the SORT program
in the SORTIN DD name. The files are merged, sorted on the specified key fields and then
written to a single output file SAMPLE.OUTPUT in the SORTOUT DD name.
Overriding Datasets
In a standardised JCL, the program to be executed and its related datasets are placed within a
cataloged procedure, which is called in the JCL. Usually, for testing purposes or for an incident
fix, there might be a need to use different datasets other than the ones specified in the cataloged
procedure. In that case, the dataset in the procedure can be overridden in the JCL.
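A sketch of such an override (MYPROC is assumed to be a cataloged procedure whose step STEP1 defines //IN1 DD DSN=MYDATA.URMI.INPUT,DISP=SHR; the job name is illustrative):

```
//MYJCL    JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//JSTEP1   EXEC MYPROC
//*        Overrides the IN1 DD of STEP1 inside the PROC
//STEP1.IN1 DD DSN=MYDATA.OVER.INPUT,DISP=SHR
```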
In the above example, the dataset IN1 uses the file MYDATA.URMI.INPUT in the PROC,
which is overridden in the JCL. Hence, the input file used in execution is
MYDATA.OVER.INPUT. Please note that the dataset is referred as STEP1.IN1. If there is only
one step in the JCL/PROC, then the dataset can be referred with just the DD name. Similarly, if
there are more than one step in the JCL, then the dataset is to be overridden as
JSTEP1.STEP1.IN1.
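A sketch of overriding only the first dataset of a concatenation (MYPROC is assumed to be a cataloged procedure whose step STEP1 concatenates three datasets under IN1; coding a single overriding DD leaves the remaining concatenated datasets unchanged):

```
//MYJCL    JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//JSTEP1   EXEC MYPROC
//*        Only the first dataset of the IN1 concatenation is
//*        overridden; the rest are retained from the PROC.
//STEP1.IN1 DD DSN=MYDATA.OVER.INPUT1,DISP=SHR
```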
In the above example, out of the three datasets concatenated in IN1, the first one is overridden in
the JCL and the rest is kept as that present in PROC.
Defining GDGs in a JCL
Generation Data Groups (GDGs) are group of datasets related to each other by a common name.
The common name is referred as GDG base and each dataset associated with the base is called a
GDG version.
For example, MYDATA.URMI.SAMPLE.GDG is the GDG base name. The datasets are named
as MYDATA.URMI.SAMPLE.GDG.G0001V00, MYDATA.URMI.SAMPLE.GDG.G0002V00
and so on. The latest version of the GDG is referred as MYDATA.URMI.SAMPLE.GDG(0),
previous versions are referred as (-1), (-2) and so on. The next version to be created in a program
is refered as MYDATA.URMI.SAMPLE.GDG(+1) in the JCL.
The GDG versions can have same or different DCB parameters. An initial model DCB can be
defined to be used by all versions, but it can be overridden when creating new versions.
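A sketch of defining the GDG base with IDCAMS (the job name and LIMIT value are illustrative; GDGSTEP1 and the GDG base name are from the explanation below):

```
//GDGJCL   JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//GDGSTEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
   DEFINE GDG(NAME(MYDATA.URMI.SAMPLE.GDG) -
   LIMIT(7) -
   NOEMPTY -
   SCRATCH)
/*
```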
In the above example, the IDCAMS utility defines the GDG base in GDGSTEP1 with parameters passed in the SYSIN DD statement: NAME gives the physical name of the GDG base, LIMIT sets the maximum number of versions that the GDG can hold, EMPTY/NOEMPTY controls whether all generations or only the oldest one are uncataloged when the limit is reached, and SCRATCH/NOSCRATCH controls whether a generation is physically deleted when it is uncataloged.
IDCAMS can be used to alter the definition parameters of a GDG such as increasing LIMIT,
changing EMPTY to NOEMPTY, etc., and its related versions using the SYSIN command is
ALTER MYDATA.URMI.SAMPLE.GDG LIMIT(15) EMPTY.
Delete GDG in a JCL
IDCAMS can be used to delete the GDG and its related versions using the SYSIN command
DELETE(MYDATA.URMI.SAMPLE.GDG) GDG FORCE/PURGE.
FORCE deletes the GDG versions and the GDG base. If any of the GDG versions are set
with an expiration date which is yet to expire, then those are not deleted and hence the
GDG base is retained.
PURGE deletes the GDG versions and the GDG base irrespective of the expiration date.
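A sketch of referring to GDG versions by relative generation number (the job, step and program names are illustrative; the GDG base name is from the explanation above):

```
//GDGJCL2  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP01   EXEC PGM=MYPROG
//*        Latest existing generation read as input
//IN1      DD DSN=MYDATA.URMI.SAMPLE.GDG(0),DISP=SHR
//*        New generation created as output
//OUT1     DD DSN=MYDATA.URMI.SAMPLE.GDG(+1),
//         DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//         SPACE=(CYL,(5,2))
```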
Here, if the GDG had been referred by the actual name like
MYDATA.URMI.SAMPLE.GDG.G0001V00, then it leads to changing the JCL every time
before execution. Using (0) and (+1) makes it dynamically substitute the GDG version for
execution.
Input-Output Methods
Any batch program executed through a JCL requires data input, which is processed, and an output is created. There are different methods of feeding input to a program and writing output received from a JCL. In batch mode, no user interaction is required; the input and output devices and the required organisation are defined in the JCL, and the job is submitted.
Data Input in a JCL
There are various ways to feed the data to a program using JCL and these methods have been
explained below:
Instream Data
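Example 1 below is a sketch of instream data passed through SYSIN (the job and step names are illustrative; MYPROG and the two data records are from the explanation that follows):

```
//TTYYSAMP JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP10   EXEC PGM=MYPROG
//SYSIN    DD *
CUST1 1000
CUST2 1001
/*
```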
In Example 1, input to MYPROG is passed through SYSIN. The data is provided within the JCL.
Two records of data are passed to the program. Please note that /* marks the end of instream
SYSIN data.
"CUST1 1000" is record1 and "CUST2 1001" is record2. End of data condition is met when the
symbol /* is encountered while reading the data.
As mentioned in most of the examples in previous chapters, data input to a program can be
provided through PS, VSAM or GDG files, with relevant DSN name and DISP parameters along
with DD statements.
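A sketch of input passed through a dataset (the job and step names are illustrative; SAMPLE.INPUT1, MYPROG and the DD name IN1 are from the explanation that follows):

```
//TTYYSAMP JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP10   EXEC PGM=MYPROG
//IN1      DD DSN=SAMPLE.INPUT1,DISP=SHR
```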
In Example 1, SAMPLE.INPUT1 is the input file through which data is passed to MYPROG. It
is referred as IN1 within the program.
Data Output in a JCL
The output in a JCL can be cataloged into a dataset or passed to the SYSOUT. As mentioned in
DD statements chapter, SYSOUT=* redirects the output to the same class as that mentioned in
the MSGCLASS parameter of the JOB statement.
Specifying MSGCLASS=Y saves the job log in the JMR (Joblog Management and Retrieval).
The entire JOB log can be redirected to the SPOOL and can be saved to a dataset by giving the
XDC command against the job name in the SPOOL. When the XDC command is given in the
SPOOL, a dataset creation screen is opened up. The job log can then be saved by giving
appropriate PS or PDS definition.
Job logs can also be saved into a dataset by mentioning an already created dataset for SYSOUT and SYSPRINT. But the entire job log cannot be captured this way (i.e., JESMSG will not be cataloged), as it is in JMR or XDC.
In order to execute a COBOL program in batch mode using JCL, the program needs to be
compiled and a load module is created with all the sub-programs. The JCL uses the load module
and not the actual program at the time of execution. The load libraries are concatenated and
given to the JCL at the time of execution using JCLLIB or STEPLIB.
There are many mainframe compiler utilities available to compile a COBOL program. Some
corporate companies use Change Management tools like Endevor, which compiles and stores
every version of the program. This is useful in tracking the changes made to the program.
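A sketch of a compile JCL using IGYCRCTL, consistent with the description below (the dataset and member names are illustrative; a real compile job typically also allocates several SYSUTn work datasets):

```
//COMPILE  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1    EXEC PGM=IGYCRCTL,PARM=RMODE
//SYSIN    DD DSN=MYDATA.URMI.SOURCES(MYCOBB),DISP=SHR
//SYSLIB   DD DSN=MYDATA.URMI.COPYBOOK,DISP=SHR
//SYSLIN   DD DSN=MYDATA.URMI.OBJECT(MYCOBB),DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(1,1))
```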
IGYCRCTL is an IBM COBOL compiler utility. The compiler options are passed using PARM
parameter. In the above example, RMODE instructs the compiler to use relative addressing mode
in the program. The COBOL program is passed using SYSIN parameter and the copybook is the
library used by the program in SYSLIB.
This JCL produces the load module of the program as output which is used as the input to the
execution JCL.
Below is a JCL example where the program MYPROG is executed using the input file MYDATA.URMI.INPUT, producing two output files written to the spool.
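A sketch of such an execution JCL (the job and step names are illustrative; MYPROG, MYDATA.URMI.INPUT, the DD name INPUT1, the PARM value ACCT5000 and the SYSIN records are from the explanation below):

```
//EXECJCL  JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1    EXEC PGM=MYPROG,PARM=ACCT5000
//INPUT1   DD DSN=MYDATA.URMI.INPUT,DISP=SHR
//OUT1     DD SYSOUT=*
//OUT2     DD SYSOUT=*
//SYSIN    DD *
CUST1 1000
CUST2 1001
/*
```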
Data input to COBOL batch program can be through files, PARAM parameter and SYSIN DD
statement. In the above example:
Data records are passed to MYPROG through file MYDATA.URMI.INPUT. This file
will be referred in the program using the DD name INPUT1. The file can be opened, read
and closed in the program.
The PARM parameter data ACCT5000 is received in the LINKAGE section of the
program MYPROG in a variable defined within that section.
The data in the SYSIN statement is received through ACCEPT statement in the
PROCEDURE division of the program. Every ACCEPT statement reads one whole
record (i.e., CUST1 1000) into a working storage variable defined in the program.
For running a COBOL-DB2 program, a specialised IBM utility is used in the JCL; the program, the DB2 region and the required parameters are passed as input to the utility.
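A sketch of such a JCL (the job name, DBRM library, SSID and plan name are illustrative placeholders; MYCOBB, IKJEFT01, SYSTSIN and STEPLIB are from the explanation below):

```
//DB2JCL   JOB 'TTYYS',CLASS=6,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1    EXEC PGM=IKJEFT01
//STEPLIB  DD DSN=MYDATA.URMI.DBRMLIB,DISP=SHR
//SYSTSPRINT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(SSID)
  RUN PROGRAM(MYCOBB) -
  PLAN(PLANNAME)
  END
/*
```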
In the above example, MYCOBB is the COBOL-DB2 program run using IKJEFT01. Please note
that the program name, DB2 Sub-System Id (SSID), DB2 Plan name are passed within the
SYSTSIN DD statement. The DBRM library is specified in the STEPLIB.
Utility programs are pre-written programs widely used in mainframes by system programmers and application developers to achieve day-to-day requirements such as organising and maintaining data. Examples include IEFBR14, IEBGENER, IEBCOPY, SORT and IDCAMS.
These utility programs need to be used with appropriate DD statements in a JCL in order to achieve the specified functionality.
DFSORT Overview
DFSORT is a powerful IBM utility used to copy, sort or merge datasets. SORTIN and
SORTINnn DD statements are used to specify input datasets. SORTOUT and OUTFIL
statements are used to specify output data.
SYSIN DD statement is used to specify the sort and merge conditions. DFSORT is generally
used to achieve the below functionalities:
SORT the input file(s) in the order of the specified field(s) position in the file.
INCLUDE or OMIT records from the input file(s) based on the specified condition.
SORT MERGE input file(s) in the order of the specified field(s) position in the file.
SORT JOIN two or more input files based on a specified JOIN KEY (field(s) in each
input file).
When there is additional processing to be done on the input files, a USER EXIT program
can be called from the SORT program. For example, if there is a header/trailer to be
added to the output file, then a USER written COBOL program can be called from the
SORT program to perform this functionality. Using a control card, data can be passed to
the COBOL program.
Conversely, a SORT can be called internally from a COBOL program to arrange the input file in a particular order before being processed. Usually, this is not recommended for large files, in view of performance.
ICETOOL Overview
ICETOOL can achieve all the functionalities of DFSORT and offers additional operations.
SPLICE is a powerful operation of ICETOOL which is similar to SORT JOIN, but with
additional features. It can compare two or more files on specified field(s) and create one
or more output files like file with matching records, file with non-matching records, etc.
Data in one file in a particular position can be OVERLAYed into another position in the
same or different file.
A file can be split into n files based on a specified condition. For example, a file
containing names of employees can be split into 26 files, each containing the names
starting with A, B, C and so on.
Different combinations of file manipulation are possible using ICETOOL with a little exploration of the tool.
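For instance, a hedged sketch of an ICETOOL step that splits one file into two based on the first byte of each record (dataset names and conditions are illustrative; a full 26-way split would follow the same pattern):

```jcl
//SPLIT    EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DSN=MYDATA.URMI.NAMES,DISP=SHR
//OUTA     DD DSN=MYDATA.URMI.NAMES.A,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(1,1),RLSE)
//OUTB     DD DSN=MYDATA.URMI.NAMES.B,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(1,1),RLSE)
//TOOLIN   DD *
  COPY FROM(IN) TO(OUTA) USING(CTLA)
  COPY FROM(IN) TO(OUTB) USING(CTLB)
/*
//CTLACNTL DD *
  INCLUDE COND=(1,1,CH,EQ,C'A')
/*
//CTLBCNTL DD *
  INCLUDE COND=(1,1,CH,EQ,C'B')
/*
```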
SYNCSORT Overview
SYNCSORT is used to copy, merge or sort datasets with high performance. It makes the best use of system resources and operates efficiently in 31-bit and 64-bit address spaces.
It can be used along the same lines as DFSORT and offers the same features. It can be invoked by a JCL or from within a program coded in COBOL, PL/1 or Assembler language. It also supports User Exit programs being called from the SYNCSORT program.
Frequently used sort tricks using these utilities are explained in the next chapter. Complex requirements, which would otherwise need extensive programming in COBOL/Assembler, can be achieved using the above utilities in simple steps.
JCL - Basic Sort Tricks
Day-to-day application requirements in the corporate world that can be achieved using utility programs are illustrated below:
1. A file has 100 records. The first 10 records need to be written to output file.
The option STOPAFT stops reading the input file after the 10th record and terminates the program. Hence, 10 records are written to the output.
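A sketch of such a step (dataset names are illustrative):

```jcl
//STEP010  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MYDATA.URMI.INPUT,DISP=SHR
//SORTOUT  DD DSN=MYDATA.URMI.OUTPUT,DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(1,1),RLSE)
//SYSIN    DD *
* COPY ONLY THE FIRST 10 RECORDS
  OPTION COPY,STOPAFT=10
/*
```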
2. Input file has one or more records for same employee number. Write unique records to
output.
SUM FIELDS=NONE removes duplicates on the fields specified in SORT FIELDS. With the employee number in field position 1,15, the output file will contain the unique employee numbers sorted in ascending order.
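The control statements for this trick would be along these lines (field positions follow the description above):

```jcl
//SYSIN    DD *
* SORT ON EMPLOYEE NUMBER AND DROP DUPLICATE KEYS
  SORT FIELDS=(1,15,CH,A)
  SUM FIELDS=NONE
/*
```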
3. Overwriting data within the records
In the input file, the content at position 1,6 is overwritten to position 47,6 and then copied to the output file. The INREC OVERLAY operation is used to rewrite data in the input records before they are copied to the output.
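A minimal sketch of the control statements:

```jcl
//SYSIN    DD *
* COPY BYTES 1-6 OF EACH RECORD INTO POSITIONS 47-52
  OPTION COPY
  INREC OVERLAY=(47:1,6)
/*
```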
4. Adding sequence numbers to a file
data1 1000
data2 1002
data3 1004
A 4-digit sequence number is added in the output at position 10, starting at 1000 and incremented by 2 for every record.
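Such an output can be produced with control statements like the following (a sketch; the position and increment match the sample above):

```jcl
//SYSIN    DD *
* ADD A 4-BYTE SEQUENCE NUMBER AT POSITION 10,
* STARTING AT 1000 IN STEPS OF 2
  OPTION COPY
  INREC OVERLAY=(10:SEQNUM,4,ZD,START=1000,INCR=2)
/*
```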
5. Adding a header/trailer to a file
HDR 20110131
data1
data2
data3
TRL 000000003
TOT calculates the number of records in the input file. HDR and TRL are added as identifiers to the header/trailer; these are user-defined and can be customised as per the user's needs.
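One way to sketch this with DFSORT's OUTFIL; here the COUNT item (an alternative to TOT for record counts) produces the trailer count, and the literals match the sample output:

```jcl
//SYSIN    DD *
* ADD A HEADER WITH THE CURRENT DATE (YYYYMMDD) AND A
* TRAILER WITH A 9-DIGIT RECORD COUNT
  SORT FIELDS=COPY
  OUTFIL REMOVECC,
    HEADER1=('HDR ',&DATENS=(4MD)),
    TRAILER1=('TRL ',COUNT=(M11,LENGTH=9))
/*
```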
6. Conditional Processing
data1select
data2 EMPTY
data3select
Based on the 6th position of the file, the BUILD of the output file varies. If the 6th position is SPACES, the text "EMPTY" is appended to the input record; otherwise, the input record is written to the output as-is.
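A sketch using IFTHEN (record positions follow the sample above):

```jcl
//SYSIN    DD *
* IF POSITION 6 IS A SPACE, APPEND 'EMPTY' AFTER THE DATA;
* RECORDS THAT DO NOT MATCH ARE COPIED UNCHANGED
  SORT FIELDS=COPY
  INREC IFTHEN=(WHEN=(6,1,CH,EQ,C' '),BUILD=(1,6,C'EMPTY'))
/*
```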
7. Backing up a file
IEBGENER copies the file in SYSUT1 to file in SYSUT2. Please note that file in SYSUT2 takes
the same DCB as that of the SYSUT1 in the above example.
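A minimal IEBGENER backup step might look like this (dataset names are illustrative); DCB=*.SYSUT1 makes the output inherit the input's attributes:

```jcl
//STEP010  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MYDATA.URMI.ORIG,DISP=SHR
//SYSUT2   DD DSN=MYDATA.URMI.BACKUP,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(1,1),RLSE),DCB=*.SYSUT1
```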
8. File Comparison
JOINKEYS specifies the field on which the two files are compared.
REFORMAT FIELDS=? places 'B' (matched records), '1' (present in file1, but not in
file2), or '2' (present in file2 but not in file1) in the 1st position of the output BUILD.
JOIN UNPAIRED does a full outer join on the two files.
MATCH File
1000
1003
NOMATCH1 File
1001
1005
NOMATCH2 File
1002
The same functionality can be achieved using ICETOOL also.
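Putting the comparison together, a hedged JOINKEYS sketch that produces the three files above (the keys are assumed to be in positions 1-4 of both inputs; dataset names are illustrative):

```jcl
//STEP010  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTJNF1 DD DSN=MYDATA.URMI.FILE1,DISP=SHR
//SORTJNF2 DD DSN=MYDATA.URMI.FILE2,DISP=SHR
//MATCH    DD DSN=MYDATA.URMI.MATCH,DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(1,1),RLSE)
//NOMATCH1 DD DSN=MYDATA.URMI.NOMATCH1,DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(1,1),RLSE)
//NOMATCH2 DD DSN=MYDATA.URMI.NOMATCH2,DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(1,1),RLSE)
//SYSIN    DD *
  JOINKEYS FILE=F1,FIELDS=(1,4,A)
  JOINKEYS FILE=F2,FIELDS=(1,4,A)
  JOIN UNPAIRED,F1,F2
* THE '?' INDICATOR IN POSITION 1 IS 'B', '1' OR '2'
  REFORMAT FIELDS=(?,F1:1,4,F2:1,4)
  SORT FIELDS=COPY
  OUTFIL FNAMES=MATCH,INCLUDE=(1,1,CH,EQ,C'B'),BUILD=(2,4)
  OUTFIL FNAMES=NOMATCH1,INCLUDE=(1,1,CH,EQ,C'1'),BUILD=(2,4)
  OUTFIL FNAMES=NOMATCH2,INCLUDE=(1,1,CH,EQ,C'2'),BUILD=(6,4)
/*
```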
1) A DD statement in a PROC has a concatenation of datasets. How can the concatenation be overridden in the calling JCL?
Code a DD statement in the overriding JCL for each dataset in the concatenation; specify DD DUMMY for the ones which are not overridden.
//STEP1.IN1 DD DUMMY
// DD DSN=MYDATA.URMI.IN2,DISP=SHR
// DD DUMMY
2) The current version of a GDG is used as input in step1 of a job and a new version is created as output. The output of step1 is used in step2, and the next version is created as output in step2. How is each GDG version referenced in each step?
The relative generation numbers are fixed at the start of the job: step1 reads GDG(0) and writes GDG(+1); step2 reads GDG(+1) and writes GDG(+2). The relative numbers are updated only at the end of the job.
3) How can an empty input file be detected in a JCL?
When the file is used as input in IDCAMS, the job completes with a warning (return code 4) if the file is empty.
4) A JCL has 4 steps and the job abends. How can the job be restarted so that only step 2 runs?
Code RESTART=STEP2 in the JOB statement so that execution resumes at step 2; the subsequent steps can be bypassed using the COND parameter or an IF-THEN-ELSE construct.
5) What are the ways of passing data to a COBOL program from JCL?
Data can be passed to a COBOL program through files, PARM parameter and SYSIN DD
statement.
6) How can the same PROC be re-used and called by many JOBs?
The varying portion of the JCL can be specified using symbolic parameters in the JOB, and the static parts can be specified in the PROC. For example, if the file name changes for every JOB that uses the PROC, the varying portion of the file name can be coded in the JCL using a symbolic parameter.
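A sketch of this pattern with an in-stream PROC (program, PROC and dataset names are illustrative):

```jcl
//MYPROC   PROC DSNAME=
//STEP010  EXEC PGM=MYPROG
//INFILE   DD DSN=&DSNAME,DISP=SHR
//         PEND
//*
//* EACH JOB SUPPLIES ITS OWN FILE NAME
//RUN1     EXEC MYPROC,DSNAME=MYDATA.URMI.JOB1.INPUT
```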
7) How do you create a dataset in a JCL with the same file organisation as that of another
existing dataset?
Use IEBGENER and pass existing file in SYSUT1. Pass new file in SYSUT2 and mention
DCB=*.SYSUT1 to get the same DCB as that of SYSUT1 dataset. Refer to Basic Sort Tricks
chapter for IEBGENER example.
8) How can an uncataloged dataset be accessed in a JCL?
By using the UNIT and VOL=SER parameters in the dataset's DD statement.
9) What are the statements that are not valid to be included in an INCLUDE statement?
Dummy DD statements, data card specifications, PROCs, and JOB or PROC statements cannot be coded within an INCLUDE member. An INCLUDE statement can be coded within an INCLUDE member, and such nesting can go up to 15 levels.
10) A JCL has 2 steps. How to code the JCL such that if step1 abends, then step2 runs; else, the job terminates with step1?
Code COND=ONLY in the EXEC statement of step2. Step2 then runs only when a preceding step has abended; if step1 completes normally, step2 is bypassed.
11) How can a job be restarted automatically when it abends?
By using the RD parameter in the JOB/EXEC statement. The abend codes for which a RESTART needs to be performed can be specified in the SCHEDxx member of the IBM system parmlib library.
12) A JCL has 10 steps. How to run step3 and step7 (only) without using COND parameter
or IF-THEN-ELSE?
Using IEBEDIT in a JCL, selected steps of another JCL can be run. For example, if the input JCL with 10 steps is present in MYDATA.URMI.JOBS(INPUTJOB), STEP3 and STEP7 are specified in the SYSIN of IEBEDIT so that only those two steps are run.
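A sketch of such an IEBEDIT step; SYSUT2 is routed to the internal reader so that the selected steps are submitted (the member and step names follow the example above):

```jcl
//STEP010  EXEC PGM=IEBEDIT
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MYDATA.URMI.JOBS(INPUTJOB),DISP=SHR
//SYSUT2   DD SYSOUT=(*,INTRDR)
//SYSIN    DD *
  EDIT TYPE=INCLUDE,STEPNAME=(STEP3,STEP7)
/*
```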
13) What happens to the generations of a GDG when the LIMIT of the GDG base is reached?
The least recent generation is uncataloged when the LIMIT is reached, if the GDG base has been defined with the NOEMPTY parameter. All generations are uncataloged when it is defined with EMPTY.
14) How can a GDG base be created in a JCL? What is the difference between the EMPTY and SCRATCH parameters while defining/altering a GDG base?
A GDG base can be created using the IDCAMS utility. EMPTY uncatalogs all the generations when the LIMIT is reached. SCRATCH physically deletes a generation when it is uncataloged. (LIMIT specifies the maximum number of versions that the GDG base can hold.)
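An IDCAMS step defining a GDG base might be sketched as follows (the base name and limit are illustrative):

```jcl
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG(NAME(MYDATA.URMI.GDGBASE) -
             LIMIT(5)                  -
             NOEMPTY                   -
             SCRATCH)
/*
```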
15) A dataset contains 2500 records. How can the last 1500 records be copied to an output file?
In the SORT/ICETOOL step, SKIPREC=1000 can be used; it skips the first 1000 records and copies the remaining 1500 to the output file.
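The corresponding control statement, as a sketch:

```jcl
//SYSIN    DD *
* SKIP THE FIRST 1000 RECORDS, COPY THE REMAINING 1500
  OPTION COPY,SKIPREC=1000
/*
```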
16) How can a file of 3n records be split into 3 files each containing n records?
STARTREC and ENDREC restrict the records written to each output file to the specified record-number range of the input file.
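For example, with n=100, a sketch that writes records 1-100, 101-200 and 201-300 to three output files (the OUT1/OUT2/OUT3 DD names are illustrative):

```jcl
//SYSIN    DD *
* SPLIT 300 INPUT RECORDS INTO THREE FILES OF 100 EACH
  SORT FIELDS=COPY
  OUTFIL FNAMES=OUT1,STARTREC=1,ENDREC=100
  OUTFIL FNAMES=OUT2,STARTREC=101,ENDREC=200
  OUTFIL FNAMES=OUT3,STARTREC=201,ENDREC=300
/*
```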
17) What can be done when a job needs more CPU time than its job class allows?
If the data processed by the program is genuinely huge and needs more time than the class limit, the TIME parameter can be coded as TIME=1440, which exempts the job from CPU time limits.
18) In a JCL, a large volume dataset is loaded to a table using BMCLOAD in STEP1 and
an image copy of the loaded table is taken using BMCCOPY in step2. Step2 abends
because the image copy dataset cannot hold the volume of the table. How can this be
rectified?
The SPACE parameter of the image copy dataset can be increased based on the volume of the
table and the job can be restarted from step2.
19) If the submitter of a job wants to inform another user about the job completion, how
can it be done?
NOTIFY=userid of the person (not the submitter) can be specified in the JOB statement so that
the user gets a notification with the return code upon job completion. But the job log is present in
the spool under the submitter's userid only.
The output files below show the same input records with trailing characters removed: trailing '*' in SORTOF01, trailing spaces in SORTOF02 and trailing zeroes in SORTOF03.
SORTOF01:
123 //*Trailing '*' removed
4560000000
123****123
789
SORTOF02:
123******* //*Trailing spaces removed
4560000000
123****123
789
SORTOF03:
123*******
4560 //*Trailing zeroes removed
123****123
789