z/OS DFSMS Implementing System-Managed Storage
SC23-6849-30
Note
Before using this information and the product it supports, read the information in “Notices” on page 267.
This edition applies to Version 2 Release 3 of z/OS (5650-ZOS) and to all subsequent releases and modifications
until otherwise indicated in new editions.
Last updated: July 17, 2017
© Copyright IBM Corporation 1988, 2017.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Figures . . . . . vii
Tables . . . . . ix
About this document . . . . . xi
  z/OS information . . . . . xi
Summary of changes . . . . . xiii
  Summary of changes for z/OS Version 2 Release 3 (V2R3) . . . . . xiii
  Summary of changes for z/OS Version 2 Release 2 (V2R2) . . . . . xiii
  z/OS Version 2 Release 1 summary of changes . . . . . xiii
How to send your comments to IBM . . . . . xv
  If you have a technical problem . . . . . xv
Chapter 1. Introducing System-Managed Storage . . . . . 1
  System-Managed Storage . . . . . 1
  DFSMS in the System-Managed Storage Environment . . . . . 1
  Benefits of System-Managed Storage . . . . . 2
  Managing Data with SMS . . . . . 6
    Using SMS Classes and Groups . . . . . 6
    Using Aggregate Groups . . . . . 14
    Using Automatic Class Selection Routines . . . . . 14
    Defining the Storage Management Subsystem Configuration . . . . . 16
  Software and Hardware Considerations . . . . . 16
  Implementing System-Managed Storage . . . . . 17
Chapter 2. Planning to Implement System-Managed Storage . . . . . 19
  Implementing to Fit Your Needs . . . . . 19
  Using DFSMS FIT to Implement System-Managed Storage . . . . . 19
  Using Milestones to Implement System-Managed Storage . . . . . 20
    Enabling the Software Base . . . . . 20
    Activating the Storage Management Subsystem . . . . . 21
    Managing Temporary Data . . . . . 21
    Managing Permanent DASD Data . . . . . 21
    Managing Tape Data . . . . . 22
  Using Storage Class to Manage Performance and Availability . . . . . 22
    Using Cache to Improve Performance for Directly-Accessed Data Sets . . . . . 23
    Improving Performance for Sequential Data Sets . . . . . 25
    Improving Performance for VSAM Data Sets . . . . . 27
    Improving Performance with Hiperbatch . . . . . 28
    Improving Performance with the Parallel Access Volume Option . . . . . 28
    Improving Availability . . . . . 29
    Improving Availability during Data Set Backup . . . . . 30
    Preallocating Space for Multivolume Data Sets . . . . . 32
  Managing Space and Availability for Data Sets . . . . . 33
    Managing Data with DFSMShsm . . . . . 36
    Using SMS with DFSMShsm Commands . . . . . 37
    Using SMS with Aggregate Backup and Recovery Support . . . . . 39
  Managing DASD Volumes with SMS . . . . . 40
    Pooling Volumes with Storage Groups . . . . . 40
    Selecting Volumes with SMS . . . . . 42
    Managing Virtual I/O with SMS . . . . . 44
    Separating Large Data Sets . . . . . 44
    Avoiding Allocation Failures . . . . . 45
  Managing Tape Data with DFSMSrmm . . . . . 45
    Using Management Classes to Manage Tape Data . . . . . 45
    Using Storage Groups for Volume Pooling . . . . . 45
    Related Reading for Managing Tape Data with DFSMSrmm . . . . . 46
  Designing Your ACS Routines . . . . . 46
    Using the ACS Language and Variables . . . . . 46
    Using ACS Installation Exits . . . . . 47
    Using ACS Indexing Functions . . . . . 47
    Using FILTLIST Statements . . . . . 48
    Using SELECT Statements . . . . . 48
    Using Advanced ACS Routine Design and Coding Techniques . . . . . 50
  Placing Your Volumes under System Management . . . . . 51
    Converting with Data Movement . . . . . 52
    Converting Data In-Place . . . . . 54
  Gaining Support for SMS from Your Users . . . . . 56
    Identifying SMS Benefits to Users . . . . . 57
    Defining Data Classes to Simplify Data Set Allocations . . . . . 58
    Changing the JCL . . . . . 61
    Identifying the User's Role . . . . . 65
Chapter 3. Enabling the Software Base for System-Managed Storage . . . . . 67
  Providing Security in the DFSMS Environment . . . . . 67
    Protecting System-Managed Data Sets . . . . . 68
    Protecting SMS Control Data Sets . . . . . 68
    Protecting Functions and Commands . . . . . 68
    Restricting Access to Fields in the RACF Profile . . . . . 69
    Restricting Access to Classes and Groups . . . . . 70
    Protecting ISMF Functions . . . . . 71
  Using ISMF to View the Starter Set . . . . . 72
    Viewing the Sample SCDS . . . . . 72
    Viewing the Sample ACS Source Routines . . . . . 72
  Using ISMF to Identify Ineligible Data Sets . . . . . 73
    Identifying Unmovable Data Sets and Absolute Track Allocation . . . . . 73
    Making Unmovable Data Sets Eligible for System Management . . . . . 76
  Using ISMF to Manage Storage Devices . . . . . 77
  Implementing a System-Determined Block Size . . . . . 77
    How System-Determined Block Size Works . . . . . 78
Figures
1. Allocating Data Sets or Storing Objects . . . . . 7
2. Using Data Class . . . . . 8
3. Using Storage Class . . . . . 10
4. Using Management Class . . . . . 11
5. Using Storage Groups . . . . . 13
6. Processing ACS Routines . . . . . 15
7. Paths for Implementing System-Managed Storage . . . . . 18
8. Storage Class Define Panel, Page 1 of 2 . . . . . 25
9. Management Class Define Panel, Page 1 of 6 . . . . . 34
10. Management Class Define Panel, Page 2 of 6 . . . . . 35
11. Management Class Define Panel, Page 3 of 6 . . . . . 35
12. Pool Storage Group Define Panel (Page 1) . . . . . 40
13. Pool Storage Group Define Panel (Page 2) . . . . . 41
14. Using the FILTLIST Statement . . . . . 48
15. Example of the Format-1 SELECT Statement . . . . . 49
16. Example of the Format-2 SELECT Statement . . . . . 49
17. Data Class Define Panel, Page 1 . . . . . 58
18. Data Class Define Panel, Page 2 . . . . . 59
19. Data Class Define Panel, Page 4 . . . . . 60
20. Data Class Define Panel, Page 6 . . . . . 60
21. Controlling Management Class Assignments . . . . . 70
22. Defining Resource Class Profiles . . . . . 70
23. Protecting ISMF Functions . . . . . 72
24. Selecting Specified Data Sets Using ISMF . . . . . 74
25. Identifying Unmovable Data Sets . . . . . 74
26. ISMF List of ISAM and Unmovable Data Sets by DSORG . . . . . 75
27. ISMF Data Set List . . . . . 75
28. Identifying Absolute Track Allocation using ISMF . . . . . 76
29. ISMF List of Unmovable Data Sets by ALLOC UNIT . . . . . 76
30. Sample Job for Allocating Control Data Sets . . . . . 82
31. Minimal SMS Configuration . . . . . 83
32. CDS Application Selection Panel . . . . . 84
33. SCDS Base Define Panel, Page 1 of 2 . . . . . 85
34. SCDS Base Define Panel, Page 2 of 2 . . . . . 85
35. Storage Class Application Selection Panel . . . . . 88
36. Storage Class Define Panel, Page 1 of 2 . . . . . 89
37. Listing Storage Classes Defined in the Base Configuration . . . . . 90
38. Initiating a Copy for the SC1 Storage Class . . . . . 90
39. Copying the SC1 Storage Class Construct . . . . . 91
40. Defining a Storage Group for the Minimal Configuration . . . . . 92
41. Defining Pool Storage Group Attributes Page 1 . . . . . 92
42. Defining Pool Storage Group Attributes Page 2 . . . . . 93
43. Defining Storage Group System Status . . . . . 93
44. Defining Non-Existent Volume in Storage Group . . . . . 94
45. Defining Volume System Status . . . . . 95
46. Defining a Management Class for the Minimal Configuration . . . . . 96
47. Management Class Define Panel, Page 1 of 6 . . . . . 96
48. Management Class Define Panel, Page 2 of 6 . . . . . 97
49. Management Class Define Panel, Page 3 of 6 . . . . . 98
50. Writing an ACS Routine . . . . . 99
51. Accessing the ISPF/PDF Editor . . . . . 99
52. Sample Storage Class ACS Routine for the Minimal Configuration . . . . . 100
53. Sample Storage Group ACS Routine for the Minimal Configuration . . . . . 101
54. Translating an ACS Routine . . . . . 102
55. Specifying the SCDS Base Configuration . . . . . 102
56. Validating the SCDS . . . . . 103
57. Storage Group and Volume Status for PRIME80 . . . . . 106
58. Output from the DEVSERV command . . . . . 107
59. System-Managed Temporary Data . . . . . 110
60. Creating ACS Test Cases . . . . . 114
61. Defining ACS Test Cases . . . . . 114
62. ACS Test Case Define Panel, Page 1 of 4 . . . . . 115
63. ACS Test Case Panel, Page 2 of 4 . . . . . 116
64. Test ACS Routines Panel . . . . . 116
65. Creating an ACS Output Listing . . . . . 117
66. Migrating Permanent Data . . . . . 120
67. Sample Fallback Procedure Using DFSMSdss . . . . . 124
68. FILTLISTs for TSO Data Used in Management Class ACS Routine . . . . . 125
69. Management Class ACS Routine for TSO Data . . . . . 126
70. Sample TSO Data Conversion In-Place . . . . . 126
71. Sample TSO Data Conversion with Movement . . . . . 127
72. Assigning a Data Class for VSAM Processing . . . . . 131
73. Assigning a Data Class Based on the Low-Level Qualifier . . . . . 132
74. Management Class ACS Routine Fragment for Batch Data . . . . . 141
75. DFSMSdss Job Migrating Batch Data to System Management . . . . . 143
76. FILTLIST Section for CICS from Storage Class ACS Routine . . . . . 151
77. Segment to Permit Special Users to Override SMS allocation . . . . . 152
78. SELECT Section for CICS from Storage Class ACS Routine . . . . . 153
79. FILTLIST Section for IMS from Storage Class ACS Routine . . . . . 155
80. ACS Code to Permit Special Users to Override SMS Allocation . . . . . 156
81. SELECT Section for IMS from Storage Class ACS Routine . . . . . 157
82. FILTLIST Section for DB2 from Storage Class ACS Routine . . . . . 159
83. Logic to Permit Special Users to Override SMS allocation . . . . . 160
84. SELECT Section for DB2 from Storage Class ACS Routine . . . . . 161
85. SELECT Section for DB2 from Storage Class ACS Routine . . . . . 162
For information about accessibility features of z/OS®, for users who have a
physical disability, please see Appendix D, “Accessibility,” on page 263.
z/OS information
This information explains how z/OS references information in other documents
and on the web.
When possible, this information uses cross document links that go directly to the
topic in reference using shortened versions of the document title. For complete
titles and order numbers of the documents for all products that are part of z/OS,
see z/OS Information Roadmap.
New
v The SCDS Base Define Panel has been updated to include new requirements.
Refer to “Defining the SMS Base Configuration” on page 83 for more
information.
New
v The Data Class Define Panel has been updated to include Guaranteed Space
Reduction. Refer to “Defining Data Classes to Simplify Data Set Allocations” on
page 58 for more information.
v The Pool Storage Group Define Panel has been updated to include Total Space
Alert Threshold % and Track-Managed Space Alert Threshold %. Refer to
“Defining the Storage Group” on page 91 for more information.
Important: If your comment regards a technical problem, see instead “If you have
a technical problem.”
v Send an email to [email protected].
v Send an email from the Contact z/OS web page (www.ibm.com/systems/z/os/
zos/webqs.html).
IBM or any other organizations will use the personal information that you supply
only to contact you about the issues that you submit.
System-Managed Storage
System-managed storage is the IBM automated approach to managing storage
resources. It uses software programs to manage data security, placement,
migration, backup, recall, recovery, and deletion so that current data is available
when needed, space is made available for creating new data and for extending
current data, and obsolete data is removed from storage.
You can tailor system-managed storage to your needs. You define the requirements
for performance, security, and availability, along with storage management policies
used to automatically manage the direct access, tape, and optical devices used by
the operating systems.
This section also briefly describes the following related program products or
features:
v DFSORT
v RACF
v DFSMS Optimizer
v Tivoli® Storage Manager
v CICSVR
The DFSMSdss functional component of DFSMS copies and moves data for z/OS.
DFSORT sorts, merges, and copies data sets. It also helps you to analyze data and
produce detailed reports using the ICETOOL utility or the OUTFIL function.
RACF, a component of the Security Server for z/OS, controls access to data and
other resources in operating systems.
The DFSMS Optimizer feature provides analysis and simulation information for
both SMS and non-SMS data. For more information on the DFSMS Optimizer
feature, see DFSMS Optimizer User's Guide and Reference.
Tivoli Storage Manager is a client-server licensed product that provides storage
management services in a multiplatform computer environment. The
backup-archive client program allows users to back up and archive files from their
workstations or file servers to storage, and restore and retrieve backup versions
and archived copies of files to their local file systems.
You can use the Tivoli Storage Manager for z/OS to back up and recover
individual files within the Hierarchical File System (HFS). The entire data set can
also be backed up and recovered using DFSMShsm or DFSMSdss, though less
frequently. For example, on an I/O error, you can restore the entire data set using
DFSMShsm or DFSMSdss and then use the Tivoli Storage Manager client to
recover individual files that were backed up since the DFSMShsm or DFSMSdss
backup. This should result in faster recoveries.
You can use the CICSVR product to apply forward recovery logs against
recoverable CICS® VSAM data sets after they have been restored using DFSMShsm
or DFSMSdss backups. The forward recovery logs are written by CICS and
CICSTS.
Figure 1. Allocating Data Sets or Storing Objects. You use data class to define model
allocation characteristics for data sets; storage class to define performance and availability
goals; management class to define backup and retention requirements; and storage group to
create logical groupings of volumes to be managed as a unit.
Table 1 shows how a data set, object, DASD volume, tape volume, or optical
volume becomes system-managed.
Table 1. When A Data Set, Object, or Volume Becomes System-Managed

            DASD                       Optical                 Tape
Data Set    Assign Storage Class (1)   Not applicable          Not system-managed (2)
Object      Stored (3)                 Stored                  Stored
Volume      Assign Storage Group       Assign Object or        Assign Storage Group (4)
                                       Object Backup
                                       Storage Group
Rules:
1. A DASD data set is system-managed if you assign it a storage class. If you do
not assign a storage class, the data set is directed to a non-system-managed
DASD or tape volume (one that is not assigned to a storage group), unless you
specify a specific system-managed tape volume, in which case the data set is
allocated on system-managed tape.
2. You can assign a storage class to a tape data set to direct it to a system-managed
tape volume. However, only the tape volume is considered system-managed, not
the data set.
3. OAM objects each have a storage class; therefore, objects are system-managed.
The optical or tape volume on which the object resides is also system-managed.
4. Tape volumes are added to tape storage groups in tape libraries when the tape
data set is created.
Data class attributes define space and data characteristics of data sets that are
normally specified on JCL DD statements, TSO/E ALLOCATE commands, access
method services (IDCAMS) DEFINE commands, dynamic allocation requests, and
ISPF/PDF panels. For tape data sets, data class attributes can also specify the type
of cartridge and recording method, and if the data is to be compacted. Users then
need only specify the appropriate data classes to create standardized data sets.
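Requesting a data class on a JCL DD statement can be sketched as follows; the data class name DCLIST and the data set name are hypothetical installation-defined values, not names taken from this document:

```jcl
//*  Request an installation-defined data class on a DD statement.
//*  The data class supplies space and record-format defaults, so the
//*  user can omit SPACE, RECFM, LRECL, and similar parameters.
//OUTLIST  DD DSN=PAY.D1.LISTING,DISP=(NEW,CATLG),DATACLAS=DCLIST
```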
You can use data class to allocate sequential and VSAM data sets in extended
format for the benefits of compression (sequential and VSAM KSDS), striping, and
large data set sizes (VSAM).
You can also use the data class automatic class selection (ACS) routine to
automatically assign data classes to new data sets. For example, data sets with the
low-level qualifiers LIST, LISTING, OUTLIST, or LINKLIST are usually utility
output data sets with similar allocation requirements, and can all be assigned the
same data class.
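Such a data class ACS routine fragment might look like the following sketch; the class name DCLIST is a hypothetical installation-defined data class, not one defined in this document:

```
PROC DATACLAS
  FILTLIST LISTQ INCLUDE(LIST,LISTING,OUTLIST,LINKLIST)
  SELECT
    WHEN (&LLQ EQ &LISTQ)        /* utility output data sets      */
      SET &DATACLAS EQ 'DCLIST'
    OTHERWISE
      SET &DATACLAS EQ ''        /* no data class assigned        */
  END
END
```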
Figure 2 shows that data sets can be assigned a data class during data set creation.
Figure 2. Using Data Class. Data classes can be used for new allocations of both
system-managed and non-system-managed data sets.
If you change a data class definition, the changes only affect new allocations.
Existing data sets allocated with the data class are not changed, except for the
system-managed buffering attribute. With system-managed buffering, the data class
attributes are retrieved and used when the data set is opened.
Some of the availability requirements you can specify with storage classes can only
be met by DASD volumes attached through one of the following storage control
devices, or a similar device:
v 3990 Model 3
v 3990 Model 6
v RAMAC Array Subsystem
v Enterprise Storage Server® (ESS)
Some of the attributes you can specify require the use of the dual copy device of
the 3990 Model 3 or Model 6 Storage Control or the RAID characteristics of
RAMAC or ESS. The performance goals you set can be met through devices
attached through storage controls with or without cache.
Figure 3 on page 10 shows the storage control configurations needed to use all
storage class attribute values.
Figure 3. Using Storage Class. Storage classes make the best use of fault-tolerant devices
You can use the storage class Availability attributes to assign a data set to
fault-tolerant devices, in order to ensure continuous availability for the data set.
The available fault-tolerant devices include dual copy devices and RAID
architecture devices, such as RAMAC and ESS.
You can use the storage class Accessibility attribute to request that point-in-time
copy be used when data sets or volumes are backed up.
You can specify an I/O response time objective with storage class. During data
allocation, the system attempts to select the available volume closest to the
specified performance objective.
For objects, the system uses the performance goals you set in the storage class to
place the object on DASD, optical, or tape volumes. The storage class is assigned to
an object when it is stored or when the object is transited. The ACS routines can
override this assignment.
If you change a storage class definition, the changes affect the performance service
levels of existing data sets that are assigned that class when the data sets are
subsequently opened. However, the definitional changes do not affect the location
or allocation characteristics of existing data sets.
Figure 4 shows that you can use management class attributes to perform the
following tasks:
v Use early migration for old generations of a generation data group (GDG) by
specifying the maximum number of generations to be kept on primary storage,
and determine what to do with rolled-off generation data sets.
v Delete selected old and unused data sets from DASD volumes.
v Release allocated but unused space from data sets.
v Migrate unused data sets to tape or DASD volumes.
v Specify how often to back up data sets, and whether point-in-time copy should
be used during backup.
v Specify how many backup versions to keep for data sets.
v Specify how long to save backup versions.
v Specify the number of versions of aggregate backups to keep and how long to
retain those versions.
v Specify the number of backup copies of objects (1 or 2).
v Establish the expiration date for objects.
v Establish transition criteria for objects.
v Indicate if automatic backup is needed for objects.
Figure 4. Using Management Class
For objects, you use the class transition attributes to define when an object is
eligible for a change in its performance objectives or management characteristics.
For example, after a certain number of days you might want to move an object
from a high-performance DASD volume to a slower optical volume. You can also
use the management class to specify that the object should have a backup copy
made when the OAM Storage Management Component (OSMC) is running.
If you change a management class definition, the changes affect the management
requirements of existing data sets and objects that are assigned that class.
You can reassign management classes when data sets are renamed.
Storage groups, along with storage classes, help reduce the requirement for users
to understand the physical characteristics of the storage devices which contain
their data.
You can direct new data sets to as many as 15 storage groups, although only one
storage group is selected for the allocation. The system uses the storage class
attributes, volume and storage group SMS status, MVS™ volume status, and
available free space to determine the volume selected for the allocation. In a tape
environment, you can also use tape storage groups to direct a new tape data set to
an automated or manual tape library.
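A storage group ACS routine can return a list of candidate storage groups, from which the system selects one volume for the allocation. A minimal sketch, assuming hypothetical storage class and storage group names:

```
PROC STORGRP
  IF &STORCLAS EQ 'SCCRIT' THEN
    /* Up to 15 candidate groups can be named; SMS selects one    */
    SET &STORGRP EQ 'SGPRIME','SGLARGE'
  ELSE
    SET &STORGRP EQ 'SGPRIME'
END
```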
DFSMShsm uses some of the storage group attributes to determine if the volumes
in the storage group are eligible for automatic space or availability management.
Figure 5. Using Storage Groups. In this example, DASD volumes are grouped so that primary
data sets, large data sets, DB2 data, IMS data, and CICS data are all separated.
The virtual input/output (VIO) storage group uses system paging volumes for
small temporary data sets. The tape storage groups contain tape volumes that are
held in tape libraries. The object storage group can span optical, DASD and tape
volumes. An object backup storage group can contain either optical or tape
volumes within one OAM invocation. Some volumes are not system-managed, and
DFSMShsm owns other volumes for use in data backup and migration. DFSMShsm
migration level 2 tape cartridges can be system-managed if you assign them to a
tape storage group.
You can use data-set-size-based storage groups to help you deal with free-space
fragmentation, and reduce or eliminate the need to perform DFSMSdss DEFRAG
processing. See “Pooling Volumes with Storage Groups” on page 40 for more
information.
For objects, there are two types of storage groups: object and object backup. OAM
assigns an object storage group when the object is stored. The first time an object is
stored to a collection, the storage group ACS routine can override this assignment.
You can specify one or two object backup storage groups for each object storage
group.
You can use aggregate groups as a supplement to using management class for
applications that are critical to your business. You can associate an aggregate group
with a management class. The management class specifies backup attributes for the
aggregate group, such as the copy technique for backing up DASD data sets on
primary volumes, the number of aggregate versions to retain, and how long to
retain versions. Aggregate groups simplify the control of backup and recovery of
critical data sets and applications.
Although SMS must be used on the system where the backups are performed, you
can recover aggregate groups to systems that are not using SMS. You can use
aggregate groups to transfer applications to other data processing installations or
migrate applications to newly-installed DASD volumes. You can transfer the
application's migrated data, along with its active data, without recalling the
migrated data.
The ACS language contains a number of read-only variables, which you can use to
analyze new data allocations. For example, you can use the read-only variable
&DSN to make class and group assignments based on data set or object collection
name, or &LLQ to make assignments based on the low-level qualifier of the data
set or object collection name. You cannot alter the value of read-only variables.
You can use another read-only variable, &SECLABEL, to assign storage groups
based on the type of information in the data set. For example, you might want to
store all of the data for a classified project on specific sets of volumes.
You use the four read-write variables to assign the class or storage group you
determine for the data set or object, based on the routine you are writing. For
example, you use the &STORCLAS variable to assign a storage class to a data set
or object.
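The use of read-only and read-write variables described above can be sketched in a storage class routine; the qualifier PROJ7 and the class names are illustrative assumptions only:

```
PROC STORCLAS
  /* &DSN is read-only; &STORCLAS is the read-write variable      */
  IF &DSN EQ 'PROJ7.**' THEN     /* classified project data       */
    SET &STORCLAS EQ 'SCCRIT'
  ELSE
    SET &STORCLAS EQ 'SCSTD'
END
```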
Related Reading: For a detailed description of the ACS language and its variables,
see z/OS DFSMSdfp Storage Administration.
For each SMS configuration, you can write as many as four routines: one each for
data class, storage class, management class, and storage group. Use ISMF to create,
translate, validate and test the routines.
Figure 6 on page 15 shows the order in which ACS routines are processed. Data
can become system-managed if the storage class routine assigns a storage class to
the data, or if a user-specified storage class is assigned to the data. If this routine
does not assign a storage class to the data, the data cannot reside on a
system-managed volume, unless a specific system-managed tape volume is
specified.

Figure 6. Processing ACS Routines
Because data allocations, whether dynamic or through JCL, are processed through
ACS routines, you can enforce installation standards for data allocation on
system-managed and non-system-managed volumes. ACS routines also enable you
to override user specifications for data, storage, and management class, and
requests for specific storage volumes.
You can use the ACS routines to determine the SMS classes for data sets created by
the Distributed FileManager/MVS. If a remote user does not specify a storage
class, and if the ACS routines decide that the data set should not be
system-managed, the Distributed FileManager/MVS terminates the creation
process immediately and returns an error reply message to the source. Therefore,
when you construct your ACS routines, consider the potential data set creation
requests of remote users.
You can also use your ACS routines to detect a reference to non-SMS-managed
data sets using VOL=REF, and then either allow or fail the referencing allocation.
This is done by testing the &ANYVOL or &ALLVOL read-only variable for a value
of 'REF=NS'. This gives the ACS routines control over whether a new,
non-SMS-managed data set can be allocated on a non-SMS-managed volume or
not. SMS fails the allocation if the ACS routines attempt to make the referencing
data set SMS-managed, since this could cause problems attempting to locate that
data set with DISP=OLD or DISP=SHR and lead to potential data integrity
problems.
For data set allocations that use volume referencing or unit affinity, your ACS
routines can determine the storage residency of the referenced data sets.
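A hedged fragment of a storage class routine that fails such referencing allocations (the WRITE message text is illustrative):

```
PROC STORCLAS
  /* VOL=REF to a non-SMS-managed data set sets the &ANYVOL and   */
  /* &ALLVOL read-only variables to 'REF=NS'                      */
  IF &ANYVOL EQ 'REF=NS' THEN
    DO
      WRITE 'ALLOCATION REFERENCES A NON-SMS-MANAGED DATA SET'
      EXIT CODE(4)               /* nonzero code fails the request */
    END
END
```

To instead allow the allocation as non-SMS-managed, the routine would exit without setting a storage class.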
This information is stored in SMS control data sets, which are VSAM linear data
sets. You can define these control data sets using the access method services
DEFINE CLUSTER command.
Related Reading: For detailed information on creating SMS control data sets, see
z/OS DFSMSdfp Storage Administration.
You must define the control data sets before activating SMS. Although you only
need to allocate the data sets from one system, the active control data set (ACDS)
and communications data set (COMMDS) must reside on a device that can be
accessed by every system to be managed with the SMS configuration.
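The allocation can be sketched with an IDCAMS job; the data set name, volume serial, and space values are illustrative assumptions, not values from this document:

```jcl
//DEFSCDS  JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE CLUSTER(NAME(SYS1.SMS.SCDS)  -
         LINEAR                       -
         VOLUMES(SHARE01)             -
         TRACKS(25 5)                 -
         SHAREOPTIONS(2,3))
/*
```

The same job, repeated with different names, can allocate the ACDS and COMMDS on a device shared by all systems in the SMS complex.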
You do not have to implement and use all of the functions in SMS. For example,
you can implement system-managed tape functions first, without also
implementing SMS on DASD. You can also set up a special pool of volumes (a
storage group) to only exploit the functions provided by extended format data sets,
such as compression, striping, system-managed buffering, partial release, and
candidate volume space amount, to name just a few.
You can use the DFSMS Fast Implementation Techniques (FIT) to guide you in
implementing DFSMS quickly and simply. DFSMS FIT uses a question-and-answer
approach and a data classification process to create a DFSMS design tailored to
your installation. DFSMS FIT also includes a number of tools, sample jobs and
code, and actual installation examples to help shorten the implementation process.
You can also use IBM NaviQuest for z/OS in conjunction with DFSMS FIT.
Related Reading:
v For more information on implementing DFSMS, see “Using Milestones to
Implement System-Managed Storage” on page 20.
v For information about DFSMS FIT, see “Using DFSMS FIT to Implement
System-Managed Storage” on page 19.
v This book does not discuss how to implement system-managed storage to
support objects. If you are implementing the DFSMS environment only to
support objects, see z/OS DFSMS OAM Planning, Installation, and Storage
Administration Guide for Object Support for guidance.
v Alternatively, if your goal is to manage both data and objects in the DFSMS
environment, consider implementing the first three milestones using this guide
first, and then using z/OS DFSMS OAM Planning, Installation, and Storage
Administration Guide for Object Support to help you customize the DFSMS
environment for objects.
Related Reading: For a sample project plan for DFSMS implementation, see
Appendix A, “Sample Project Plan for DFSMS Implementation,” on page 237.
For example, you can implement system-managed tape functions without also
implementing SMS on DASD. You can also set up a special pool of volumes (a
storage group) to only exploit the functions provided by extended format data sets,
such as compression, striping, system-managed buffering (SMB), partial release,
and candidate volume space amount, to name just a few. You can put all your data
(for example, database and TSO) in a pool of one or more storage groups and
assign them appropriate policies at the storage group level to implement
DFSMShsm operations in stages, or to benefit from such SMS features as
compression, extended format, striping, and record-level sharing (RLS).
In conjunction with DFSMS FIT, you can use NaviQuest, a testing and reporting
tool developed specifically for DFSMS FIT. With NaviQuest you can perform the
following tasks:
v Automatically test your DFSMS configuration
v Automatically test your ACS routines
v Perform storage reporting, through ISMF and with DCOLLECT and VMA data
v Report functions on ISMF table data
v Use REXX EXECs to run ISMF functions in batch
v Assist the storage administrator in creating ACS routines
Related Reading:
v For more information about DFSMS FIT, see Get DFSMS FIT: Fast Implementation
Techniques.
v For more information about NaviQuest, see z/OS DFSMSdfp Storage
Administration.
The starter set shipped with DFSMS consists of a sample base configuration and
sample Automatic Class Selection (ACS) routines that can assist you in
implementing system-managed storage. It also contains an SMS configuration,
which is a VSAM linear data set with typical SMS classes and groups. This sample
source configuration data set (SCDS) contains SMS classes and groups that can be
used for your first activation of SMS and for later milestones that manage more of
your data.
Tip: The examples in this book might be more current than samples in the starter
set.
You can use these samples along with this book to phase your implementation of
system-managed storage. The completion of each phase marks a milestone in
implementing system-managed storage.
Appendix B, “Sample Classes, Groups, and ACS Routines,” on page 243 contains
sample ACS routines that correspond to each of the milestones that have SMS
active.
Chapter 5, “Managing Temporary Data,” on page 109 describes how to tailor your
DFSMS environment to manage temporary data.
You can use the sample ACS routines for the permanent milestone and the SMS
configuration in the sample SCDS, along with Chapter 6, “Managing Permanent
Data,” on page 119, to assist your migration to system-managed permanent data.
Managing permanent data is divided into the following stages that are based on
your major data set classifications:
Related Reading: For information on using DFSMS FIT to create a DFSMS design
tailored to your environment and requirements, and for information on using
NaviQuest to test and validate your DFSMS configuration, see “Using DFSMS FIT
to Implement System-Managed Storage” on page 19.
You can also use the IBM Virtual Tape Server (VTS) with or without tape mount
management to optimize your use of tape media.
Related Reading:
v For more information about tape mount management, see Chapter 11,
“Optimizing Tape Usage,” on page 177.
v After using tape mount management, you can migrate tape volumes to system
management. See Chapter 12, “Managing Tape Volumes,” on page 219 for more
information about setting up system-managed tape libraries.
v For information about using DFSMS FIT to create and implement a DFSMS
design tailored to your DASD or tape environment and requirements, see “Using
DFSMS FIT to Implement System-Managed Storage” on page 19.
With the introduction of control units with large caches, sophisticated caching
algorithms, large bandwidths, and such features as the IBM ESS parallel access
volume and multiple allegiance, you no longer have to use the storage class
performance values. But you can still use these values if you want to influence
system-managed buffering for VSAM data sets or require sequential data striping
for performance critical data sets.
The ESS allows for concurrent data transfer operations to or from the same volume
on the same system. A volume used in this way is called a Parallel Access Volume
(PAV). If you are using ESS devices, you can define DFSMS storage classes with
the parallel access volume (PAV) option enabled. If the data set being allocated is
assigned to this new or modified storage class, then the outcome of the volume
selection process is influenced by the way in which the PAV option was
specified. This is described in more detail later.
Design your storage classes early, and use the RACF FACILITY class to authorize
user access to such items as the VTOC or VTOC index. You can also use job accounting,
or RACF user or group information available to your ACS routines to identify
users that require specialized services. For example, you can use the &JOB, &PGM,
and &USER read-only variables to distinguish Distributed FileManager/MVS data
set creation requests. If you do not provide a storage class for Distributed
FileManager/MVS data sets, only limited attribute support is available, affecting
performance and function. Distributed FileManager/MVS rejects file creation
requests that do not result in system-managed data sets.
The following objectives can help you identify the storage hardware services you
require and the storage classes that you need to design:
v Improve performance for directly-accessed data sets
v Improve performance for sequentially-accessed data sets
v Improve data set backup performance
v Improve data set availability
v Place critical data sets on specific volumes
v Preallocate space for multivolume data sets
To use enhanced dynamic cache management, you need cache-capable 3990 storage
controls with the extended platform.
System-managed data sets can assume the following three states with 3990 cache
and DASD fast write services:
The enhanced dynamic cache management of DFSMS ensures that must-cache data
sets have a priority on 3990 cache and DASD fast write services, and that the
may-cache data sets that benefit most from cache and DASD fast write receive
these specialized performance services. You can get the best performance by
assigning most data to the may-cache category. The enhanced dynamic cache
management then supports performance management automation, but lets you
designate selected data as must-cache data sets.
The cacheability of data sets also depends on the applications. Some applications
could access the same data several times, while others (for example, sequential
access) might not.
When first opened, may-cache data sets are cached; DFSMS calculates their hit
ratios to determine whether the data sets are good cache candidates. It does this by
comparing the hit ratios to a specific threshold. If the total hit ratio is less than the
read threshold, reads are inhibited for the data set. If the write hit ratio is less than
the write threshold, DASD fast write is inhibited for the data set.
After a specified number of I/O operations, the data set is again eligible for
caching and fast write, and is evaluated again.
The bias attributes, Sequential and Direct, interact with the Millisecond Response
(MSR) attributes, Sequential and Direct, to determine if a data set requires the
services of a cache-capable storage control. If the Direct or Sequential Millisecond
Response attribute's value can only be satisfied by a cache-capable storage control,
the Direct and Sequential bias attributes are evaluated to see if the data set is
primarily read (R) or written (W). SMS attempts to allocate data sets on the device
that most closely matches the MSR and BIAS that you choose.
Figure 8 shows the ISMF panel that describes storage performance requirements for
data sets that must be cached with the DASD fast write services of a cache-capable
3990 storage control.
Data sets that are accessed sequentially can benefit from dynamic cache
management; however, improved performance can be more effectively realized
through the use of larger block and buffer sizes and parallel I/O processing.
Sequential data striping can be used for physical sequential data sets that cause I/O
bottlenecks for critical applications. Sequential data striping uses extended-format
sequential data sets that SMS can allocate over multiple volumes, preferably on
different channel paths and control units, to improve performance. These data sets
must reside on volumes that are attached to IBM 9340 or RAMAC Array
Subsystems, to IBM 3990 Storage Subsystems with the extended platform, or ESS.
Sequential data striping can reduce the processing time required for long-running
batch jobs that process large, physical sequential data sets. Smaller sequential data
sets can also benefit because of DFSMS's improved buffer management for QSAM
and BSAM access methods for striped extended-format sequential data sets.
Chapter 8, “Managing Batch Data,” on page 135 describes how sequential data
striping can be used in the batch environment.
Recommendation: Use the logical backup and restore techniques for striped data
sets having more than one stripe. These multi-part data sets can only be restored
from physical backup copies if you enter an individual restore command for each
part.
The benefit from sequential data striping must be evaluated in relation to your
ESCON® cache-capable 3990 storage control configuration. For each
serially-attached cache-capable 3990 storage control in a storage group, up to four
paths are available for concurrent I/O operations. Consequently, at most four
stripes can be used effectively per storage control. Newer control units support
more than four paths.
Related Reading: For more information about the DCBE macro, see z/OS DFSMS
Macro Instructions for Data Sets.
For striped data sets, the maximum number of extents on a volume is 123.
You can request sequential data striping by setting Data Set Name Type to
(EXTENDED,P). If striping is not possible, the data set is allocated as non-striped.
SMS determines the number of volumes to use for a striped data set based on the
value of the Sustained Data Rate in the storage class. Sustained Data Rate is the
data transfer rate that DFSMSdfp should sustain during a period of typical I/O
activity for the data set.
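As a sketch only (the data class name and data set name mask are illustrative, not
taken from the starter set), a data class ACS routine fragment might assign a
striping-eligible data class, defined in ISMF with Data Set Name Type set to
(EXTENDED,P), to selected data sets:

```
PROC DATACLAS                     /* data class ACS routine                */
  /* DCSTRIPE is a hypothetical data class defined in ISMF with           */
  /* Data Set Name Type = (EXTENDED,P)                                    */
  IF &DSN(2) = 'STRIPED' THEN
    SET &DATACLAS = 'DCSTRIPE'
END
```

The actual stripe count is then derived from the Sustained Data Rate in the
assigned storage class, as described above.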
Restrictions: The RESET/REUSE option is not supported for VSAM data striping.
The restrictions for data in the striped format are the same as for other VSAM data
sets in the extended format (EF). The KEYRANGE and IMBED attributes are not
supported for any VSAM data set types.
VSAM-striped data sets can be extended on the same volume, equivalent to the
existing data striping for SAM data sets, or to a new volume, which is not
supported for SAM data sets. The ability to extend a stripe, or stripes, to a new
volume is called multi-layering.
VSAM striping is used only for the data component of the base cluster of a VSAM
data set. It is effective for sequential processing when the data set is processed for
non-shared resources (NSR). The following conditions must be met for VSAM data
striping:
v The data set must be system-managed.
v The data set must be in the extended format.
v The stripe count must be greater than one.
v The storage class SDR value must be greater than the minimum for the device
type: 4 MB per second for 3390, or 3 MB per second for 3380, when the request
is for non-guaranteed space.
Definition: The term single-striped data sets refers to data sets that are in the
extended format but are not striped under the above conditions. They are,
therefore, considered non-striped.
For selected data sets, the shared buffer is established on the first open of the data
set by any of the jobs. The access methods use the Data Look-aside Facility to
cache the data in the shared buffer. For example, the shared buffer is invalidated
when another job opens the data set for output.
The candidate data sets for Hiperbatch are defined to the system. For example,
physical sequential data sets accessed using QSAM and VSAM ESDS, RRDS,
VRRDS, and KSDS with a control interval size of a multiple of 4096 bytes are
eligible for Hiperbatch.
Related Reading: For more information about Hiperbatch, see the MVS Hiperbatch
Guide in the z/OS Internet Library (www.ibm.com/systems/z/os/zos/library/
bkserv).
The storage class option for parallel access volumes can be used to influence
volume selection so that data sets that require high performance are directed to
volumes that are being used as parallel access volumes. The
DFSMS volume selection process puts eligible volumes into the primary, secondary,
or tertiary category. For more information about the volume selection process, see
“Selecting Volumes with SMS” on page 42.
Improving Availability
The following options can help you improve availability:
Enterprise Storage Server (ESS)
Provides such copy services as FlashCopy, extended remote copy (XRC),
suspend/resume for unplanned outages, and peer-to-peer remote copy
(PPRC). For detailed descriptions of these copy services, see z/OS DFSMS
Advanced Copy Services.
Data set separation
Used to keep designated groups of data sets separate, on either the physical
control unit (PCU) or volume level, from all the other data sets in the same
group. This reduces the effect of single points of failure. For information on
how to use data set separation, see Using data set separation in z/OS
DFSMSdfp Storage Administration.
Storage class availability attribute
Used to assign a data set to a fault-tolerant device. Such devices ensure
continuous availability for a data set in the event of a single device failure.
The fault-tolerant devices that are currently available are dual copy devices
and RAID architecture devices, such as RAMAC or ESS.
The following options are available for the availability attribute. To ensure
that SMS allocates a data set on a fault-tolerant device, assign the data set
a storage class that specifies the Availability attribute as CONTINUOUS.
CONTINUOUS
Data is placed on a dual copy or RAID device so that it can be
accessed in the event of a single device failure. If neither of the
devices is available, allocation fails. Dual copy, RAMAC, or ESS
volumes are eligible for this setting.
PREFERRED
The system tries, but does not guarantee, to place data on a
fault-tolerant RAID device. Dual copy volumes are not candidates
for selection.
STANDARD
This represents normal storage needs. The system tries to allocate
the data set on a non-fault-tolerant device to avoid wasting
Management class attributes let you choose how DFSMShsm and DFSMSdss
process data sets that are in use during the backup. Point-in-time capabilities,
using either concurrent copy on the 3990-6, or virtual concurrent copy on the
RAMAC Virtual Array, let you use the following backup capabilities:
v Use DFSMSdss to create a point of consistency backup of CICS/VSAM, IMS, or
DB2 databases without needing to quiesce them during the entire backup
process.
v Use DFSMSdss to create backups of data sets without requiring serialization
during the entire backup process.
DFSMSdss serializes the data during the concurrent copy initialization period
(the time between the start of DFSMSdss and the issuing of the ADR734I
message).
v Create and maintain multiple backup versions of DFSMShsm control data sets,
while increasing the availability of DFSMShsm functions, such as recall.
v Use the backup-while-open capability for CICS VSAM data sets, with DFSMSdss
in batch mode or with automated DFSMShsm, to provide backups with data
integrity even when the data sets are being updated. Data integrity is assured
for VSAM KSDSs even when CICS access results in control interval or control
area splits or data set extends.
With concurrent copy, DFSMSdss works with a cache-capable 3990 storage control
and SMS to begin and sustain concurrent copy sessions. DFSMSdss determines a
list of physical extents by volume that are associated with each session. For each
backup session, the storage control ensures that the original track images are
preserved in the cache, while writing any updated track images to DASD. Each
cache-capable 3990 storage control can sustain up to 64 concurrent copy sessions.
When virtual concurrent copy is being used for backup, DFSMSdss uses the
SnapShot feature of the RAMAC Virtual Array to create an interim point-in-time
copy of the data to be backed up. Once the point-in-time copy is created,
serialization is released and the concurrent copy session is logically complete.
DFSMSdss then performs I/O from the interim point-in-time copy to create the
backup. Once this is done, the backup is physically complete and the job ends.
Similar operations are followed for ESS.
With DFSMSdss SnapShot copy support and the RAMAC Virtual Array, you can
make almost instantaneous copies of data. Once the copy is complete, both the
source and target data sets or volumes are available for update.
For non-VSAM data sets, secondary extents are allocated only on the last volume
of the multivolume data sets. All volumes except the last one will have only
primary extents.
With the IBM ESS, the Guaranteed Space attribute of a storage class with specific
volsers is no longer required for data sets other than those that need to be
separated, such as the DB2 online logs and BSDS, or those that must reside on
specific volumes because of their naming convention, such as the VSAM RLS
sharing control data sets. The ESS storage controllers use the RAID architecture
that enables multiple logical volumes to be mapped on a single physical RAID
group. If required, you can still separate data sets on a physical controller
boundary for availability beyond what is inherently built into the RAID
architecture.
The ESS is also capable of parallel access volumes (PAV) and multiple allegiance.
These ESS capabilities, along with its bandwidth and caching algorithms, make it
unnecessary to separate data sets from each other for the purpose of performance.
Traditionally, IBM storage subsystems allow only one channel program to be active
on a disk volume at a time. This means that after the subsystem accepts an I/O
request for a particular unit address, this unit address appears "busy" to
subsequent I/O requests. This ensures that additional requesting channel programs
cannot alter data that is already being accessed. By contrast, the ESS is capable of
If you specify NO for Guaranteed Space, then SMS chooses the volumes for
allocation, ignoring any VOL=SER statements specified on JCL. Primary space on
the first volume is preallocated. NO is the default.
Specifying volsers with the Guaranteed Space attribute of the storage class is
strongly discouraged. If used, the following considerations apply:
v Ensure that the user is authorized to the storage class with the Guaranteed
Space attribute.
v Write a storage group ACS routine that assigns a storage group that contains the
volumes explicitly specified by the user.
v Ensure that all volumes explicitly specified by the user belong to the same
storage group, by directing an allocation that is assigned a Guaranteed Space
storage class to all the storage groups in the installation.
v Ensure that the requested space is available because there is no capability in
SMS to allow specific volume requests except with the Guaranteed Space
attribute.
v Ensure that the availability and accessibility specifications in the storage class
can be met by the specified volumes.
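For example, a storage group ACS routine fragment along the following lines (the
class name GSPACE is hypothetical; PRIME80 and PRIME90 are from the starter set,
and LARGE stands in for any remaining pool storage groups) could direct any
allocation assigned a guaranteed-space storage class to all the pool storage
groups in the installation:

```
PROC STORGRP
  IF &STORCLAS = 'GSPACE' THEN     /* hypothetical guaranteed-space class */
    SET &STORGRP = 'PRIME80','PRIME90','LARGE'  /* all pool storage groups */
END
```

Listing all the pool storage groups ensures that whatever volumes the user
explicitly specified are eligible, regardless of which group contains them.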
This procedure is done automatically by the DB2 subsystem for table spaces
allocated using DB2 STOGROUPS.
Assign all your system-managed permanent data sets to management classes, even
if their management class attributes specify that no space or availability services
are required for the data sets. If a management class is not assigned, DFSMShsm
Tip: You can prevent DFSMShsm processing at the storage group level.
Data sets with varying management requirements coexist on the same volume.
However, you might want to separate certain types of data sets with similar
management requirements in their own storage group. An example is the
production database data placed in a database storage group. You can use the
image copy utilities of DB2 and IMS databases for backup and recovery. Because of
the customized procedures required to back up and restore this data, you can
separate it from data that uses DFSMShsm facilities.
Figure 9 through Figure 11 on page 35 show the ISMF management class panel
definitions required to define the STANDARD management class.
Expiration Attributes
Expire after Days Non-usage . . NOLIMIT (1 to 9999 or NOLIMIT)
Expire after Date/Days . . . . . NOLIMIT (0 to 9999, yyyy/mm/dd or
NOLIMIT)
Use ENTER to Perform Verification; Use DOWN Command to View next Panel;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
The expiration and retention attributes for the STANDARD management class
specify that no expiration date has been set in the management class. These data
sets are never deleted by DFSMShsm unless they have explicit expiration dates.
Migration Attributes
Primary Days Non-usage . . . . 15 (0 to 9999 or blank)
Level 1 Days Non-usage . . . . 30 (0 to 9999, NOLIMIT or blank)
Command or Auto Migrate . . . . BOTH (BOTH, COMMAND or NONE)
Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
For single-volume data sets, DFSMShsm releases any unused space when you
specify partial release. Also, if the data set is not referenced within 15 days, it
moves to migration level 1, and, after 15 more days, moves to migration level 2.
For all VSAM data sets allocated in the extended format and accessed using the
VSAM access method, you can use the Partial Release attribute of the management
class to release allocated but unused space. The system releases the space either
immediately during close processing, or during DFSMShsm space management
cycle processing. This is similar to how the system processes non-VSAM data sets.
Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; Cancel to Exit.
DFSMShsm backs up the data set daily if the data set has been changed. The last
two versions of the data set are retained as long as the data set exists on primary
DFSMShsm uses the Auto Dump attribute to determine the storage groups
containing volumes that should be dumped with DFSMSdss full volume dump.
The Guaranteed Backup Frequency storage group attribute lets you assign a
maximum period of elapsed time before a data set is backed up, regardless of its
change status.
SMS uses storage groups to contain the definitions of volumes that are managed
similarly. Each storage group has a high allocation and low migration threshold
defined. SMS uses the high allocation threshold to determine candidate volumes
for new data set allocations. Volumes with occupancy lower than the high
allocation threshold are selected in preference to those volumes that contain more
data than the high allocation threshold specifies. DFSMShsm uses the low
migration threshold during primary space management, and the interval threshold
during interval migration to determine when to stop processing data.
Use ENTER to Perform Selection; Use DOWN Command to View next Page;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
In Figure 12 on page 40, data sets on volumes in the PRIME90 storage group are
automatically backed up and migrated according to their management class
attributes. These volumes are also automatically dumped and one copy of each
volume is stored offsite.
SMS tries not to allocate above the high threshold, but might allocate a new data
set in a storage group that is already at or above the threshold if it cannot find
another place to put the data. In PRIME90, interval migration is triggered at 50% of the
difference between the high and low threshold values. As shown in Figure 12 on
page 40, DFSMShsm lets the volume fill to near 95%, but can trigger interval
migration if the volume exceeds 88%, which is midway between the low (80%) and
high (95%) thresholds specified on the panel. (For AM=Y storage groups, this
requires SETSYS INTERVAL.)
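The trigger point in this example can be computed as follows (87.5% is reported
as the 88% described above):

```
interval migration trigger = low + (high - low) / 2
                           = 80 + (95 - 80) / 2
                           = 87.5%
```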
For EAS-eligible data sets on volumes that support cylinder-managed space, the
allocation threshold is divided into categories. All categories are assessed to
determine the volume's capability of meeting the threshold requirements. These
include the volume's capability to meet the track-managed space or the
cylinder-managed space thresholds and the total volume space threshold. Note that
the Allocation Threshold percentage applies to both cylinder-managed space and
total volume space.
Volume pools should not contain mixed device types (with different track
geometries) because data set extensions to multiple volumes might result in
problems. You can design the storage group ACS routine to direct allocation to up
to 15 storage groups. You can thus preserve your existing volume pooling
structure.
As an example, the starter set's SMS configuration has a PRIMARY storage group
that has been subdivided into two storage groups, PRIME80 and PRIME90,
because our storage configuration contained both 3380 and 3390 device types. We
balanced allocation to both the 3380 and 3390 devices by coding the following
statement in the storage group ACS routine:
SET &STORGRP = 'PRIME80','PRIME90'
This statement results in all volumes in these two storage groups being considered
equally for allocation. The system selects volumes based on their ability to satisfy
the availability and performance criteria that you specify in the storage class that is
assigned to the data set.
Extended address volume (EAV) and non-EAV volumes may reside in the same
volume pools. SMS prefers EAV volumes for EAS-eligible data sets that are equal
to or larger than the BPV. For non-EAS-eligible requests that are smaller than the
BPV, SMS will not have a preference for EAV volumes.
You can implement data-set-size-based storage groups to help you deal with
free-space fragmentation, and reduce or eliminate the need to perform DFSMSdss
DEFRAG processing. Customers often use DEFRAG to reclaim free space in large
enough chunks on each volume to prevent abends due to space constraints. By
implementing data-set-size-based storage groups, one storage group, for example,
can contain data sets smaller than 25 MB. With this approach, when a data set is
deleted or expired, it leaves behind a chunk of free space that is similar in size to
the next data set to be allocated. Since large data sets are not directed to this
storage group, they are directed to other groups that might have less overall space,
but in larger contiguous chunks. The end result is that the fragmentation index is
high, but since space constraint abends do not occur, DEFRAG processing is not
required.
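A storage group ACS routine fragment can sketch this size-based routing (the
group names and the 25 MB cutoff follow the example above and are illustrative;
&MAXSIZE is an ACS read-only size variable, here assumed to accept an MB literal
in the comparison):

```
PROC STORGRP
  IF &MAXSIZE < 25MB THEN       /* small data sets go to their own group */
    SET &STORGRP = 'SGSMALL'    /* hypothetical group names              */
  ELSE
    SET &STORGRP = 'SGLARGE'
END
```

With this split, the free space left behind in SGSMALL tends to match the size of
the next small allocation, so DEFRAG is rarely needed there.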
Related Reading: For more details about the attribute fields that are displayed on
the Pool Storage Group Define panel and other ISMF panels, see z/OS DFSMSdfp
Storage Administration.
For nonmultistriped data sets, SMS classifies all volumes in the selected storage
groups into the following four volume categories:
Primary Volumes
Primary volumes are online, below threshold, and meet all the specified
criteria in the storage class. Both the volume status and storage group
status are enabled. Volume selection starts from this list.
For EAS-eligible data sets on devices with cylinder-managed space, both
the track-managed space / cylinder-managed space threshold and the total
volume space threshold will be assessed to determine if the volume gets
placed on the primary volume list.
Secondary Volumes
Secondary volumes do not meet all the criteria for primary volumes. SMS
selects from the secondary volumes if no primary volumes are available.
Tertiary Volumes
Volumes are classified as tertiary if the number of volumes in the storage
If allocation is not successful from the primary list, then SMS selects volumes from
the secondary volume list and subsequently from the tertiary volume list. Selection
continues until the allocation is successful, or until there are no more available
volumes and the allocation fails.
For multistriped data sets, volumes are classified as primary and secondary.
Primary volumes are preferred over secondary volumes. A single primary volume
is randomly selected for each unique controller, and all other eligible volumes
behind the same controller are secondary. Secondary volumes are randomly
selected if initial allocation on the primary volume is unsuccessful. Provided the
controller supports striping, SMS has no preference among different controller models.
You can mix devices of varying performance characteristics within one storage
group. For example, if you specify a nonzero IART in the storage class, mountable
volumes are considered before DASD volumes. If the IART is zero, mountable
volumes are not considered and a DASD volume is selected. You can also add new
devices into an existing z/OS complex and take advantage of different
performance and availability characteristics.
After the system selects the primary allocation volume, that volume's associated
storage group is used to select any remaining volumes requested.
SMS interfaces with the system resource manager (SRM) to select from the eligible
volumes in the primary volume list. SRM uses device delays as one of the criteria
for selection and does not prefer a volume if it is already allocated in the jobstep.
This is useful for batch processing when the data set is accessed immediately after
creation. It is, however, not useful for database data that is reorganized at off-peak
hours.
SMS does not use SRM to select volumes from the secondary or tertiary volume
lists. It uses a form of randomization to prevent skewed allocations, in instances
such as when new volumes are added to a storage group or when the free space
statistics are not current on volumes. You can force SMS not to use SRM by
specifying a non-zero IART value.
Related Reading: For more information about volume selection and data set
allocation, see z/OS DFSMSdfp Storage Administration.
Once a storage group has been selected, SMS selects the volumes based on
available space, control unit separation, and performance characteristics if they are
specified in the assigned storage class.
Related Reading: For information about striping volume selection with SMS, see
z/OS DFSMSdfp Storage Administration.
You can control VIO usage on each system that shares your SMS configuration
through the SMS STORAGE GROUP STATUS setting. In this way, you can select
different VIO storage groups with VIOMAXSIZE values tailored to each system.
Setting a specific system's VIO storage group status to DISABLE(ALL) prevents
VIO allocation on that system.
The space requirements for large data sets can limit the free space available to
other data sets. Also, more space must be reserved on volumes to support the new
allocation or DFSMShsm recall of large data sets. Because of this, the high
allocation/migration thresholds for storage groups containing large data sets
should be set lower than for storage groups containing normal-sized data sets.
Your own definition of large should provide for successful DFSMShsm recall in
five or fewer extents, based on the threshold you have set for the storage group.
Management class can be used to limit the negative effects of large data sets
during the space management process. As an example, you can migrate large data
sets directly to migration level 2, rather than letting them move to migration level
1.
Another management option is to place large data sets in your primary storage
group and assign a management class that prevents migration. This is a good
solution if you have only a few large data sets that need to be immediately
available on DASD.
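For example (a sketch only; the class name and the size cutoff are hypothetical),
the management class ACS routine might assign a no-migration class to such data
sets, relying on the &MAXSIZE read-only variable:

```
PROC MGMTCLAS
  IF &MAXSIZE >= 600MB THEN     /* hypothetical "large" cutoff            */
    SET &MGMTCLAS = 'NOMIG'     /* a class defined with Command or Auto   */
END                             /* Migrate = NONE                         */
```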
After you define the SMS classes and groups needed to support your data, you
develop ACS routines to assign these classes and groups to new data sets, as well
as to data sets that result from migrating volumes to system management, or from
copying or moving data to system-managed volumes.
ACS routines use a simple, procedure-oriented language. Each ACS routine begins
with a PROC statement that identifies the type of ACS routine, such as a storage
class ACS routine, and ends with an END statement. Variables that are generated
from the allocation environment for the data, such as data set name and job
accounting information, are supplied to your routines as ACS read-only variables.
You can use these variables to assign a class or group to a data set. This is done by
assigning the corresponding read/write variable in your ACS routine. Each routine
sets only its own read/write variable, as follows:
v The data class ACS routine sets &DATACLAS
v The storage class ACS routine sets &STORCLAS
v The management class ACS routine sets &MGMTCLAS
v The storage group ACS routine sets &STORGRP
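A minimal storage class ACS routine illustrates this structure (the class names
and the filter value are illustrative only):

```
PROC STORCLAS                    /* PROC identifies the routine type      */
  IF &DSN(1) = 'PAYROLL' THEN    /* test a read-only variable             */
    SET &STORCLAS = 'FAST'       /* set the routine's own R/W variable    */
  ELSE
    SET &STORCLAS = 'STANDARD'
END
```

Note that this routine sets only &STORCLAS; the data class, management class, and
storage group routines each set their own variable in the same way.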
DFSMS contains three sample ACS installation exits: one each for data class,
storage class, and management class. After each ACS routine runs, the
corresponding ACS exit routine is called. In an exit, you can call other programs or
services. The parameters passed to the exit are the same as the ACS routine
read-only variables.
You can also use installation exits to override the ACS-assigned classes or to alter
the ACS variables and rerun the ACS routine.
If you do not need these installation exits, do not supply dummy exits. Otherwise,
you incur unnecessary overhead.
Tip: A pre-ACS routine exit is available with the VTS Export/Import SPE. The
purpose of this exit is to let tape management systems provide read-only variables
to the ACS routines to facilitate tape-related decision making.
Related Reading: For more information on using installation exits, see z/OS
DFSMS Installation Exits.
The ACS language provides the following three indexing functions to help you
make class or group assignments:
v &ALLVOL and &ANYVOL
v Accounting information
v Data set qualifier
You can use the accounting information indexing function to refer to specific fields
in the JOB or STEP accounting information.
With the data set qualifier indexing function, you can index the DSN variable to
refer to specific qualifiers. For example:
Value for &DSN is 'A.B.C.D'
Value for &DSN(3) is 'C'
Value for &DSN(&NQUAL) is 'D'
Related Reading: For detailed descriptions of variable syntax and use, see z/OS
DFSMSdfp Storage Administration.
The FILTLIST statement defines a list of literals or masks for variables used in IF
or SELECT statements. The following example shows the syntax for this statement:
FILTLIST name <INCLUDE(filter list)>
              <EXCLUDE(filter list)>
FILTLIST statements cannot contain numeric values and, therefore, cannot be used
in comparisons with the numeric variables &NQUAL, &NVOL, &SIZE,
&MAXSIZE, &EXPDT, or &RETPD.
The ACS routine fragment in Figure 14 shows how you can use the FILTLIST
masks to compare with a read-only variable (such as &DSN) to determine which
data sets should receive specific performance services.
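For example, a fragment in the style of Figure 14 might define a mask list and compare it with &DSN (the names DBDATA and DBFAST and the masks are illustrative):

FILTLIST DBDATA INCLUDE('DB2.**','IMS.PROD.**')
                EXCLUDE('DB2.TEST.**')

IF &DSN = &DBDATA THEN              /* Database data sets          */
  DO
    SET &STORCLAS = 'DBFAST'        /* High-performance class      */
    EXIT
  END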
All IBM-supplied ACS environment conditions are tested by the WHEN clauses.
The comment, /* INSTALLATION EXIT */, indicates that the OTHERWISE clause
is run only if an installation exit has set &ACSENVIR to a value you defined in an
ACS routine exit. Figure 15 shows a format-1 SELECT statement with the select
variable ACSENVIR:
SELECT (&ACSENVIR)
WHEN (’ALLOC’)
DO ... END
WHEN (’RECALL’)
DO ... END
WHEN (’RECOVER’)
DO ... END
WHEN (’CONVERT’)
DO ... END
OTHERWISE /* INSTALLATION EXIT */
DO ... END
END
When coding ACS routines, remember that the WHEN clauses of the SELECT
statement are tested serially. The first WHEN that is true causes its clause to be
run. After the first true WHEN is encountered, the rest of the SELECT is not run.
Figure 16 shows a format-2 SELECT statement:
SELECT
WHEN (&RETPD < 7)
SET &MGMTCLAS = ’DELSOON’
WHEN (&RETPD < 30)
SET &MGMTCLAS = ’FREQBKUP’
WHEN (&RETPD < 60)
SET &MGMTCLAS = ’NORMAL’
OTHERWISE
SET &MGMTCLAS = ’DELNEVER’
END
Design your ACS routines to function reliably for the long term. Use ACS variables
that sustain their meaning over time and in various environments, such as &DSN
rather than &ACCT_JOB. Variables such as &ACCT_JOB can assume different
values depending on the operating environment. &ACCT_JOB might be a
significant variable to test to determine a management class in the environment
when the data set is being allocated. However, if DFSMShsm recalls the data set,
the value of the &ACCT_JOB variable changes.
Within your SMS complex, you might need to know the system and Parallel
Sysplex® where an ACS routine is executing to direct the allocation to a storage
group that is accessible from the current system. You can use the &SYSNAME (set
to the system name) and &SYSPLEX (set to the Parallel Sysplex name of the
system where the ACS routine is executing) read-only variables. You can use these
variables to distribute a single set of ACS routines to all the systems in your
enterprise. The receiving sites do not have to make any changes to the ACS
routines. However, if you have some down-level systems, they do not support
these variables. Also, you should be careful using the &SYSNAME and &SYSPLEX
variables on JES3 systems, because the system where the ACS routines are run
might be different from the system where the job is run.
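For example, a storage group routine fragment can branch on the Parallel Sysplex name (the sysplex and storage group names are illustrative):

SELECT (&SYSPLEX)
  WHEN ('PLEXEAST')                 /* Illustrative sysplex name   */
    SET &STORGRP = 'SGEAST'
  WHEN ('PLEXWEST')
    SET &STORGRP = 'SGWEST'
  OTHERWISE                         /* Any other system            */
    SET &STORGRP = 'SGCOMMON'
END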
Use the same coding rules to ensure maintainability that you would use if coding
one of your applications:
v Divide the routine into logical sections, one for each significant data
classification, such as TSO, database, and so on.
v Keep a change log in the beginning of each ACS routine that includes a
description of the coding change, the initials of the person making the change,
and the change date.
v Create meaningful names for FILTLIST variables. A FILTLIST variable can be 31
characters long. Use the underscore character to make the variable more
readable.
v Create meaningful names for classes and groups. The name should describe the
type of service rather than the data classification.
v When you code a SELECT, always code an OTHERWISE.
v When you make a class or group assignment, code an EXIT statement.
v Use SELECT/WHEN in preference to the IF/THEN structure to enhance
reliability and maintainability of the routine.
v Use comments freely throughout the routine to relate the ACS routine design
objectives to the code that implements the design.
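A management class routine skeleton that follows these coding rules might look like the following sketch (all names are illustrative):

PROC MGMTCLAS
/*************************************************************/
/* CHANGE LOG:                                               */
/* 2017/05/01  ABC  Added TEST_DATA filter list              */
/*************************************************************/
FILTLIST TEST_DATA INCLUDE('TEST.**','TST*.**')
SELECT
  WHEN (&DSN = &TEST_DATA)          /* Test data classification    */
    DO
      SET &MGMTCLAS = 'MCTEST'      /* Short retention service     */
      EXIT
    END
  OTHERWISE                         /* Everything else             */
    DO
      SET &MGMTCLAS = 'MCSTD'       /* Standard service            */
      EXIT
    END
END
END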
You can use NaviQuest to test changes to your ACS routines. For more information
about NaviQuest, see z/OS DFSMSdfp Storage Administration.
The value of SMS VOLUME STATUS shows the relationship of the volume to SMS.
Your volumes can assume three states:
Converted
Indicates that the volume is fully available for system management. All
data sets on the volume have a storage class and are cataloged in an
integrated catalog facility catalog.
Initial
Indicates that the volume is not fully available for system management
because it contains data sets that are ineligible for system management.
An attempt was made to place the volume under system management, but
data sets were determined to be ineligible for system management based
either on SMS eligibility rules or on the decisions made in your ACS
routines. Temporary failure to migrate to system management occurs when
data sets are unavailable (in use by another application) when the
migration is attempted.
No new data set allocations can occur on a volume with initial status. Also,
existing data sets cannot be extended to another volume while the volume
is in this state.
You can place volumes in initial status as you prepare to implement
system management.
Tip: You can use the DFSMSdss CONVERTV function with the TEST
option to determine if your volumes and data sets are eligible for system
management. See “Testing the Eligibility of Your Volumes and Data Sets”
on page 55 for more information on the TEST option.
Non-SMS
The volume does not contain any system-managed data sets and has not
been initialized as system-managed.
You can either do data conversion with movement using DFSMSdss's COPY or
DUMP/RESTORE functions, or you can convert in-place using DFSMSdss's
CONVERTV function. The approach you use to place your data under system
management depends on the following considerations:
v The degree of centralization of the storage management function
One benefit of doing conversion with data movement is that the data is allocated
according to the allocation thresholds that you set for the storage groups, so that
space usage can be balanced.
Tip: When doing conversions-in-place, consider that the ACS variables available to
your routines are more limited when using the DFSMSdss CONVERTV function.
For more information, see the DFSMSdss section of z/OS DFSMSdfp Storage
Administration.
Related Reading: For more information on using AMS to define a BCS and VVDS,
see z/OS DFSMS Managing Catalogs.
When you use the logical COPY function to move the data to system-managed
volumes, DFSMSdss performs the following steps:
1. Searches any catalogs for data sets cataloged outside the standard search order.
You cannot use JOBCAT and STEPCAT statements to locate system-managed
data sets. When placing data sets under system management, you can use the
DFSMSdss INCAT keyword to specify catalogs that must be searched to find
data sets cataloged outside the standard search order.
2. Copies all eligible data sets to system-managed volumes.
These data sets are also assigned an SMS storage class and management class.
3. Places ineligible data sets on the non-system-managed volumes you specified
on the OUTDD or OUTDYNAM parameter.
When you copy or restore a data set when SMS is active, the storage class ACS
routine is always run. The management class and storage group ACS routines
are run if the storage class ACS routine determines that the data set should be
system-managed.
4. Catalogs all system-managed data sets in the standard search order.
Using the BYPASSACS keyword lets you bypass the ACS routines for this run of
DFSMSdss COPY or RESTORE. Either the class names now assigned to the data
set, or the class names specified using the STORCLAS or MGMTCLAS keywords,
are used instead of having the ACS routines determine them. Then you can
determine the storage and management class assigned to the data set. However,
the storage group ACS routine is never bypassed.
Conversely, you can use the COPY and RESTORE commands to remove a data set
from system management. Do this by specifying both the BYPASSACS and
NULLSTORCLAS keywords as part of the command. The ACS routines are then
bypassed, and the data set reverts to non-system-managed status.
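As a sketch, the DFSMSdss SYSIN for removing data sets from system management in this way might look like the following (the data set filter and volume serial are illustrative):

COPY DATASET(INCLUDE(PAYROLL.**)) -
     OUTDYNAM(VOL001)             -
     BYPASSACS(**)                -
     NULLSTORCLAS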
Related Reading: For more information about using these DFSMSdss facilities, see
z/OS DFSMSdss Storage Administration.
CONVERTV then allocates a VVDS if one does not already exist. CONVERTV also
updates the basic catalog structure (BCS) and VVDS with the SMS storage and
management classes assigned by your ACS routines for data sets that meet SMS
eligibility requirements.
If the volume and data sets do not meet all SMS requirements, DFSMSdss sets the
volume's physical status to initial. This status lets data sets be accessed, but not
extended. New allocations on the volume are prohibited. If all requirements are
met, DFSMSdss sets the volume status to converted.
Use the following required steps to prepare for in-place data conversion:
1. Design your SMS configuration including classes, groups, and ACS routines.
2. Determine how your DASD volumes should be assigned to SMS storage
groups.
3. Determine if your volumes and data sets are eligible for system management,
and remove any ineligible ones.
4. Stabilize the volumes prior to placing them under system management.
5. Place the volumes under system management.
Use the TEST option of DFSMSdss's CONVERTV function to verify that your
volume and data sets are eligible to be placed under system management.
You can use the ISMF Data Set Application to identify ineligible data sets before
you perform the in-place conversion. This is described in Chapter 3, “Enabling the
Software Base for System-Managed Storage,” on page 67. Also, you can use
CONVERTV as a line operator from the ISMF Volume Application to build the job
that performs the volume migration. Running CONVERTV with the TEST keyword
simulates the migration to system-managed storage, letting you evaluate the
eligibility of your volume without actually migrating the data. Be aware that the
simulation reports on permanent, rather than transient, error conditions. As an
example, data sets that cannot be serialized are not reported as exceptions during
the simulation.
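A simulation job's SYSIN might look like the following sketch (the volume serial is illustrative):

CONVERTV SMS           -
         DYNAM(SMS001) -
         TEST

Running the same command with NONSMS in place of SMS, together with the TEST keyword, reports on eligibility for removal from system management instead.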
Related Reading: For more information on using the CONVERTV command, see
z/OS DFSMSdss Storage Administration.
You can specify the TEST keyword with NONSMS. The actual change is not
performed, but the function generates a report showing data sets eligible for
removal from system management. It also indicates whether the volume as a
whole is eligible for placement under system management.
VSAM component names are derived from DSNAME. For a key-sequenced VSAM
cluster, the data component is named DSNAME.DATA and the index component is
named DSNAME.INDEX.
Using LIKE and REFDD: The LIKE keyword lets you copy data characteristics
from an existing data set to a new one. Using the LIKE keyword, you can copy the
RECORG, RECFM, SPACE, LRECL, KEYLEN, and KEYOFF attributes to a data set that
you are creating. REFDD can be used when the data set is allocated earlier in the
job.
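For example (data set and step names are illustrative):

//STEP1   EXEC PGM=IEFBR14
//NEWDS1  DD DSN=PROJ.NEW.DATA,DISP=(NEW,CATLG),
//           LIKE=PROJ.MODEL.DATA
//NEWDS2  DD DSN=PROJ.NEW.COPY,DISP=(NEW,CATLG),
//           REFDD=*.NEWDS1

NEWDS1 copies its characteristics from the cataloged model data set; NEWDS2 copies them from the NEWDS1 allocation earlier in the same job.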
Using DATACLAS: You can either specify the DATACLAS keyword or have data
characteristics assigned automatically.
External specification of DATACLAS
Users can specify the DATACLAS keyword to select a set of model data set
characteristics.
Figure 17, Figure 18 on page 59, Figure 19 on page 60 and Figure 20 on page 60
show examples of the ISMF Data Class Define panel, with the attributes used to
allocate an extended format, VSAM key-sequenced data set (KSDS).
Related Reading: For detailed descriptions of the data class attributes listed on the
Data Class Define panel, see z/OS DFSMSdfp Storage Administration.
You can specify additional volume and VSAM attributes on the second panel,
shown in Figure 18 on page 59.
Data Set Name Type . . . . . (EXT, HFS, LIB, PDS, Large or blank)
If Ext . . . . . . . . . . (P, R or blank)
Extended Addressability . . N (Y or N)
Record Access Bias . . . . (S, U, DO, DW, SO, SW or blank)
RMODE31 . . . . . . . . . . (ALL, BUFF, CB, NONE or blank)
Space Constraint Relief . . . N (Y or N)
Reduce Space Up To (%) . . (0 to 99 or blank)
Guaranteed Space Reduction. N (Y or N)
Dynamic Volume Count . . . (1 to 59 or blank)
System Managed Buffering . . (1K to 2048M or blank)
Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
On this panel, you can specify the following attributes for a data class:
v Whether to allocate VSAM data sets in extended format.
v Whether to allocate a VSAM data set to use extended addressability, so that it
can grow beyond the four gigabyte (GB) size (the data set must be allocated in
extended format to be eligible for extended addressability)
v Whether to let VSAM determine how many and which type of buffers to use
when allocating VSAM data sets in extended format
v Whether to retry new volume allocations or extends on new volumes that have
failed due to space constraints
v Whether to dynamically add volumes to a data set when a new volume is
needed, and how many to add (up to 59; valid only when space constraint relief
is Y). See z/OS DFSMSdfp Storage Administration for more information about
dynamic volume count.
v Whether to support VSAM data sets (both system-managed and
non-system-managed data sets) with spanned record formats. Spanned record
formats are those in which a data record can cross control interval boundaries
v Whether extended format KSDSs are able to contain compressed data. You can
request that physical sequential data sets be compressed using either tailored or
generic compression dictionaries. You can use the access method services
DCOLLECT command, the ISMF display function, and SMF type 14, 15, and 64
records to assess the overall space savings due to compression
v Whether the data set can support extended attributes (format 8 and 9 DSCBs)
and optionally reside in EAS (the EATTR option).
Use page 4 of the ISMF Data Class Define panel, shown in Figure 19 on page 60, to
further modify data classes.
Media Interchange
Media Type . . . . . . . (1 to 13 or blank)
Recording Technology . . (18,36,128,256,384,E1,E2-E4,EE2-EE4 or ’ ’)
Performance Scaling . . . (Y, N or blank)
Performance Segmentation (Y, N or blank)
Block Size Limit . . . . . (32760 to 2GB or blank)
Recorg . . . . . . . . . . (KS, ES, RR, LS or blank)
Keylen . . . . . . . . . . (0 to 255 or blank)
Keyoff . . . . . . . . . . (0 to 32760 or blank)
CIsize Data . . . . . . . (1 to 32768 or blank)
% Freespace CI . . . . . . (0 to 100 or blank)
CA . . . . . . (0 to 100 or blank)
Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
Use page 6 of the ISMF Data Class Define panel, shown in Figure 20, to specify
whether to assign attributes for VSAM record-level sharing (RLS) to
system-managed data sets.
You can specify whether the data set is eligible for backup-while-open processing
and whether the data set is recoverable. You can specify the name of the forward
recovery log stream. You can also specify the size of VSAM RLS data that is cached
in the CF cache structure that is defined to DFSMS. You can also specify whether
SMSVSAM is allowed to use 64-bit addressable virtual storage for its data buffers,
moving them above the 2 gigabyte bar.
You can develop data classes as you migrate permanent data to system-managed
storage, or you can defer this until after most permanent data has been migrated.
First, have users externally specify the keyword DATACLAS to select these data
classes. Later, you can further automate by assigning them in your ACS routines.
This latter level of automation requires a plan to identify specific data set types by
interrogating the values of ACS variables. A simple example is the ACS variable,
&LLQ, which represents the data set's low-level qualifier.
Data classes can assist you in enforcing standards for data set allocation. The need
to maintain MVS or DFSMSdfp installation exits that enforce allocation standards
might be eliminated by using the data class ACS routine facilities. All keywords on
the JCL DD statement, TSO ALLOCATE command, or dynamic allocation are
passed to your data class ACS routine to help you determine how to allocate the
data set. You can issue a warning message to your users if their allocation violates
standards, such as specifying a specific volume or not using a secondary allocation
request. Or, you can fail an allocation by setting a non-zero return code in your
data class ACS routine. DFSMSdfp enables you to include any ACS variable in the
informational messages you create.
You can override a data class attribute using JCL or dynamic allocation parameters.
However, overriding a subparameter of a parameter can override ALL of the
subparameters for that parameter. For example, SPACE=(TRK,(1)) in JCL can cause
primary, secondary, and directory quantities, as well as AVGREC and AVGUNIT, in
the data class to be overridden. However, if you also specify DSNTYPE=PDS, the
directory quantity is taken from the data class.
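For example, the following DD statement (names are illustrative) overrides all of the data class space subparameters, not just the primary quantity:

//OUT  DD DSN=PROJ.REPORT.DATA,DISP=(NEW,CATLG),
//        DATACLAS=DCREPT,SPACE=(TRK,(1))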
If you want the data class to supply the default value of a parameter, do not
specify a value for that parameter in the JCL or dynamic allocation. Users cannot
override the data class attributes of dynamically allocated data sets if you use the
IEFDB401 user exit. By default, SMS cannot change values that are explicitly
specified because doing so would alter the original meaning and intent of the
allocation. The default for the 'override space' attribute in Data Class is NO, but
you can use this attribute to allow SMS to change explicitly specified values.
Locate Processing
No STEPCATs or JOBCATs are permitted if system-managed data sets are
referenced, because SMS only supports data set locate using the standard order of
search. SMS does locate processing during step setup, so do not reference data sets
as old that will not be cataloged until the job actually executes. If you do, the
locate fails and you get a JCL error.
You can use DUMMY storage groups to minimize JCL changes for existing data
sets that are located by specifying the volume. If you have placed the data set
under system management and it no longer exists on the original volume or the
volume no longer exists, you can add the volume to a DUMMY type storage
group. When the data set is referenced (and not found), the system uses the
catalog to locate the data set. However, you cannot use dummy storage groups to
handle volume allocations. That is, you cannot use DD statements like:
//DD1 DD VOL=SER=DUMMY1,UNIT=SYSDA,DISP=SHR
This type of statement (with DISP=OLD or SHR) allocates a specific volume. SMS
manages data set allocations, and because this is not really a data set, SMS cannot
take control of it.
VOL=SER Usage
When you use VOL=SER for data set stacking, that is, to place several data sets on
the same tape volume or set of tape volumes, at least one of the volume serial
numbers specified must match one of the volume serial numbers for the data set
on which this data set is being stacked. Use VOL=SER for multivolume,
multiple-data-set stacking within the same job step.
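A sketch of stacking two data sets on the same tape volume with VOL=SER (data set names and the volume serial are illustrative):

//OUT1 DD DSN=BKUP.FILE1,DISP=(NEW,CATLG),UNIT=TAPE,
//        VOL=SER=TAP001,LABEL=(1,SL)
//OUT2 DD DSN=BKUP.FILE2,DISP=(NEW,CATLG),UNIT=TAPE,
//        VOL=SER=TAP001,LABEL=(2,SL)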
Related Reading: For more information on data set stacking, and on when to use
VOL=SER versus VOL=REF, see “Data Set Stacking” on page 209.
VOL=REF Usage
Without SMS, when you specify VOL=REF on the DD statement it indicates that
the data set should be allocated on the same volume as the referenced data set.
With SMS, specifying VOL=REF causes the storage class of the referenced data set
to be assigned to the new data set. The storage group of the referenced data set is
also passed to the storage group ACS routine, so you can code your ACS routine
to perform one of the following three actions:
v Assign the same storage group to the new data set (required for SMS-managed
tape VOL=REFs).
Storage group might be blank if you entered volumes into a tape library with a
blank storage group name. However, you can assign storage group based on
library name in this case.
v Assign a different storage group to the new data set.
v Fail the allocation.
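For example, a storage group routine fragment (the group name is illustrative) can test the read-only volume variables for a reference to an SMS-managed data set:

IF &ANYVOL = 'REF=SD' THEN          /* VOL=REF to SMS DASD or VIO  */
  SET &STORGRP = 'SGPRIME'          /* Group serving SMS DASD      */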
The ACS routines are passed the following values in the &ALLVOL and
&ANYVOL read-only variables when VOL=REF is used:
'REF=SD' - The reference is to an SMS-managed DASD or VIO data set
Remember these restrictions when you design your storage groups and ACS
routines, and eliminate any use of VOL=REF that might violate these restrictions.
Related Reading: For examples of ACS routines used when allocating data sets
using VOL=REF, see the following sections:
v “Data Set Stacking Using Volume Reference” on page 211
v “Using Volume Reference to System-Managed Data Sets” on page 215
v “Using Volume Reference to Non-System-Managed Data Sets” on page 216
With the IBM TotalStorage Enterprise Automated Tape Library (3494 or 3495), data
sets do not have to be cataloged. If a data set is not cataloged, constructs (classes)
assigned to it are lost. Even if the data set is cataloged, unlike the data sets on
DASD, the SMS constructs assigned to the data set are not retained in the catalog.
However, because it is now cataloged, a VOL=REF can be done to it by data set
name.
When no storage class is available from a referenced data set, the storage class
routine must assign a storage class to the referencing data set, enabling the
allocation to be directed to the storage group of the referencing data set.
Otherwise, the allocation fails.
UNIT Usage
If your existing JCL specifies an esoteric or generic device name in the UNIT
parameter for a non-system-managed data set, then make certain that it is defined
on the system performing the allocation. Although the esoteric name is ignored for
a system-managed data set, the esoteric must still exist for non-system-managed
data sets, including DD DUMMY or DD DSN=NULLFILE statements that include
the UNIT parameter.
If the volume is not mounted, a mount message is issued to the operator console.
This occurs regardless of whether a volser is defined to an SMS storage group or to
an SMS dummy storage group.
IEHLIST Processing
You should review jobs that call IEHLIST, because this utility is highly
device-oriented.
IEHMOVE Processing
IEHMOVE is not supported for system-managed data sets.
IEHPROGM Processing
You should review jobs that call IEHPROGM because this utility is highly
device-oriented. The following changes apply to IEHPROGM processing for
system-managed data sets:
v CATLG and UNCATLG options are not valid.
v SCRATCH and RENAME VOL parameter device type and volume lists must
accurately reflect the actual device where the data set was allocated. You and
SMS, not the user, control device selection.
v SCRATCH causes the data set to be deleted and uncataloged.
v RENAME causes the data set to be renamed and recataloged.
v SCRATCH VTOC SYS supports both VSAM and non-VSAM temporary data
sets.
IEBGENER Processing
Jobs that call IEBGENER have a system-determined block size used for the output
data set if RECFM and LRECL are specified, but BLKSIZE is not specified. The
data set is also considered to be system-reblockable.
Tip: You can use DFSORT's ICEGENER facility to achieve faster and more efficient
processing for applications that are set up to use the IEBGENER system utility. For
more information, see z/OS DFSORT Application Programming Guide.
GDG Processing
When you define a generation data group (GDG), you can either scratch or
uncatalog generation data sets (GDSs) as they exceed the GDG limit. Because
SMS-managed data sets cannot be uncataloged, be aware of the following:
v How generation data sets become part of the generation data group
v How the NOSCRATCH option operates
v How you alter the GDG's limit
Creating generation data sets: A GDS state, deferred roll-in, preserves JCL that
creates and references GDSs. When a new GDS is created, it is cataloged but
assigned a status of deferred roll-in, so that the oldest generation is not deleted.
The new GDS is rolled in only at disposition processing at job or step termination
when you code DISP=(OLD,CATLG) in a different job, or when you use the
access method services ALTER ROLLIN command. This has special implications on
restarts after a job failure.
Using NOSCRATCH: Generation data groups that are defined using the
NOSCRATCH option operate differently in the SMS environment. Generations that
exceed the limit are assigned a status of rolled-off. Special management class
attributes affect these data sets. You can have DFSMShsm migrate them
immediately or expire them.
Altering GDG limit: If you use the access method services ALTER command to
increase the GDG limit, no rolled-off GDSs or deferred roll-in GDSs are rolled in.
When you increase the GDG limit, you must use the ALTER ROLLIN command to
roll them in.
The Data Set Application displays all data set characteristics including, for
SMS-managed data sets, the classes assigned to the data set. Users can also view
the most recent date that the data set has been backed up by DFSMShsm, or the
number of stripes that the system has used to allocate the data set for any striped
data sets. By using the Data Class, Storage Class, and Management Class
applications, users can interactively review the attributes that you are using to
manage their data sets. These online applications can supplement your own user's
guide.
Data set lists can be generated by using data set filters. ISMF uses the same filter
conventions that are standard throughout DFSMS. Users can also filter by data set
characteristics. DFSMShsm and DFSMSdss commands can be used as ISMF line
operators in the Data Set Application to perform simple storage management
functions that supplement the automatic functions provided by DFSMS. For
example, DFSMSdss's COPY command might be useful. Users can select COPY
options on ISMF panels that result in a fully-developed batch job, which can be
submitted and run in the background. Using ISMF, you do not need to know
DFSMSdss syntax. Other useful commands are DFSMShsm's RECALL or
RECOVER.
Related Reading:
v For information on using the RACF control features, see z/OS Security Server
RACF Security Administrator's Guide.
v For information on using RACF in a DFSMS environment, see z/OS DFSMSdfp
Storage Administration.
The following example shows the RACF commands issued to activate an SMS
configuration:
SETROPTS CLASSACT(FACILITY)
You can define general resource profiles to protect specialized DFSMSdss and
access method services functions that are designed to protect the integrity of
system-managed data sets. For example, you can use the BYPASSACS keyword
when copying or restoring data sets using DFSMSdss. This overrides SMS class
assignments and creates non-system-managed data sets or system-managed data
sets.
You can create a separate RACF profile to individually authorize each function,
keyword, and command for system-managed data sets. Or, using the common
high-level qualifier STGADMIN, you can create RACF generic profiles for
command or operation authorization.
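As a sketch, protecting the DFSMSdss COPY BYPASSACS keyword with a FACILITY class profile might look like this (the group name STGADMGP is illustrative):

RDEFINE FACILITY STGADMIN.ADR.COPY.BYPASSACS UACC(NONE)
PERMIT STGADMIN.ADR.COPY.BYPASSACS CLASS(FACILITY) ID(STGADMGP) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH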
Related Reading: For a list of the profiles you must define to protect catalog and
DFSMSdss functions for system-managed data sets, see z/OS DFSMSdfp Storage
Administration.
You can define default data, storage, and management class names, and an
application identifier in RACF user or group profiles. SMS retrieves these defaults
and supplies them as input variables to the ACS routines. You can use the
application identifier to associate data sets that have different highest-level
qualifiers and different resource owners.
To use default SMS classes for a data set, define a resource owner or data set
owner in the RACF profile for that data set. RACF uses the resource owner to
determine the user or group profiles that contain the default SMS classes.
Be careful when assigning RACF defaults because it is unlikely that a given SMS
class is applicable to all data sets created by a user or group. However, RACF
defaults can be effectively used to supplement your ACS routine logic and handle
class assignments for data sets that are difficult to identify using other ACS
read-only variables.
Figure 21 on page 70 shows how you can use the RACF default to control the
management class assigned to a data set.
Figure 22 shows an example of a command sequence you can use to define the
SMS-related FIELD resource class profiles. The example enables storage
administrators to update the resource owner field, and enables the user to update
the set of SMS default classes.
With system-managed storage, data ownership is the basis for determining who
can use RACF-protected SMS resources. Previously, checking was based on the
user's authority to use a resource. For system-managed data sets, the owner of the
data set must be authorized to use the SMS classes. The RACF functions, such as
data set protection and authorization control for data, programs, commands, and
keywords, apply to databases as well.
RACF contains two resource classes: STORCLAS and MGMTCLAS. Authorize SMS
storage and management classes by defining them as RACF profiles to the
STORCLAS and MGMTCLAS resource classes. The profile names are the same as
the names of the storage or management classes.
The following example shows the command sequence you can use to define a
general resource class profile for the storage class, DBCRIT, and to authorize the
database administrator to use DBCRIT. In the following example, the storage
administration group is the owner of the general resource profile:
SETROPTS CLASSACT(STORCLAS) RACLIST(STORCLAS)
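The rest of such a command sequence might look like the following sketch (the group names STGADMIN and DBADMIN are illustrative):

RDEFINE STORCLAS DBCRIT UACC(NONE) OWNER(STGADMIN)
PERMIT DBCRIT CLASS(STORCLAS) ID(DBADMIN) ACCESS(READ)
SETROPTS RACLIST(STORCLAS) REFRESH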
The RACF RESOWNER value, based on the high-level qualifier of the data set
name, is the default used to check authorization to use management and storage
classes. Another way to check this authorization is to use the user ID that is
allocating the data set. This prevents the problems that can occur with restoring or
recalling data sets that have a protected storage class and management class, and
that are owned by users whose user or group IDs have been revoked.
Certain authorization functions are necessary beyond the data set level, and are
outside the scope of RACF. Because of the special nature of these functions, some
of them are implemented in particular database products, for example, DB2 and
IMS.
You can define several RACF profiles to limit the use of ISMF applications.
However, make the list and display dialogs available to all users.
Related Reading: For a list of RACF profiles that can be defined to limit the use of
ISMF applications, see z/OS DFSMSdfp Storage Administration.
You can use the SMS configuration and ACS routines on the product tape and in
Appendix B, “Sample Classes, Groups, and ACS Routines,” on page 243 to
familiarize yourself with the implementation-by-milestone sample configurations.
The base SMS configuration contains all the data classes, storage classes,
management classes, and storage groups needed to implement all five milestones.
A set of ACS routines accompanies each milestone, beginning with the activating
SMS milestone, which is discussed in Chapter 4, “Activating the Storage
Management Subsystem,” on page 79. The ACS routine design controls the
assignment of classes to ensure that each milestone uses the subset of classes and
groups needed during the milestone. You translate the ACS routines for the
milestone to fully implement the SMS configuration.
Use the ISMF Data Class, Storage Class, Management Class, and Storage Group
applications to list the attributes of the classes and groups. A list of the sample
classes, groups, and ACS routines included in the starter set is contained in
Appendix B, “Sample Classes, Groups, and ACS Routines,” on page 243.
Most of your data sets can be system-managed. System-managed data sets must
satisfy the following two major requirements:
v They must be cataloged
DFSMSdss supports the automatic cataloging of data sets during migration to
system management. The categories of uncataloged data sets that require
additional planning before they can be migrated to system management are:
– Pattern DSCBs
Batch jobs typically use these data sets to generate DCB information for the
new GDSs. You can defer handling these pattern DSCBs until you migrate the
batch data that requires them. This is discussed in Chapter 8, “Managing
Batch Data,” on page 135.
– Uncataloged data sets whose names collide with data set names that are
already cataloged in the standard search order
v They must be movable
SMS is designed to reduce user awareness of device-oriented considerations.
System-managed data sets should be able to reside on any volume that can
deliver the performance and storage management services required. Unmovable
data sets must be identified and either isolated on non-system-managed
volumes or converted to a format that is supported in the DFSMS environment.
Recommendation: Because there are application dependencies involved when
changing the data set format, weigh the effort required to convert these data
sets.
2. Enter the DOWN command three times to display page 4 of the Data Set
Selection Entry Panel. Select further criteria with the specifications shown in
Figure 25.
To limit the List, Specify a value or range of values in any of the following:
Rel Op Value Value Value Value
------ -------- -------- -------- --------
Allocation Unit . . . .
CF Cache Set Name . . .
CF Monitor Status . . .
CF Status Indicator . .
Change Indicator . . . .
Compressed Format . . .
Data Class Name . . . .
Data Set Environment . .
Data Set Name Type . . .
Data Set Organization eq psu dau isu pou
(1 to 8 Values) ===> is
DDM Attributes . . . . .
Device Type . . . . . .
(1 to 8 Values) ===>
Use ENTER to Perform Selection; Use UP/DOWN Command for other Selection Panel;
Use HELP Command for Help; Use END Command to Exit.
When you complete your selection, a list of the data sets that match the
selection criteria displays. Figure 26 on page 75 shows a sample list of data sets
that match the data set organization criteria.
Figure 26. ISMF List of ISAM and Unmovable Data Sets by DSORG
Enter the RIGHT command to display the data set organization column as
shown in Figure 27.
3. Repeat the process with the specifications shown in Figure 28 on page 76.
To limit the List, Specify a value or range of values in any of the following:
Rel Op Value Value Value Value
------ -------- -------- -------- --------
Allocation Unit . . . .
CF Cache Set Name . . .
CF Monitor Status . . .
CF Status Indicator . .
Change Indicator . . . .
Compressed Format . . .
Data Class Name . . . .
Data Set Environment . .
Data Set Name Type . . .
Data Set Organization eq abs
(1 to 8 Values) ===> is
DDM Attributes . . . . .
Device Type . . . . . .
(1 to 8 Values) ===>
Use ENTER to Perform Selection; Use UP/DOWN Command for other Selection Panel;
Use HELP Command for Help; Use END Command to Exit.
When you complete your selection, a list of the data sets that match the
selection criteria displays. Figure 29 shows a sample list of data sets that match
the allocation unit criteria.
These data sets must have physical sequential, partitioned, or partitioned extended
organization and fixed- or variable-length record formats. Unmovable or BDAM
data sets are not supported.
You can cause existing data sets to use a system-determined block size by
copying them with DFSMSdss and specifying the REBLOCK parameter, or by
assigning a data class with the 'Force System Determined Blocksize' attribute
set to Y. You can modify
the DFSMSdss reblock installation exit to have DFSMSdss mark the data set as
reblockable. Or, you can use DFSMShsm to allocate the data set using a
system-determined block size by migrating and recalling the data set. You must
specify SETSYS CONVERSION in DFSMShsm PARMLIB to enable this support. If
you do not have these components installed, you can use the DCB OPEN user or
installation exit to implement system-determined block size.
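As an illustration of the DFSMSdss approach, a minimal job step sketch follows; the data set name USER1.SEQ.DATA is a placeholder, and reblocking it in place is an assumption about your intent:

```
// EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  COPY DATASET(INCLUDE(USER1.SEQ.DATA)) -
       REBLOCK(USER1.SEQ.DATA)
/*
```

Data sets selected by the REBLOCK filter are marked reblockable and given a system-determined block size on the copied output.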
Before using system-determined block size for your data sets, evaluate the effect on
your applications. Applications that specify block size or data sets with very large
logical record sizes might present implementation problems. Also, additional
virtual storage might be required in the application's address space to store the
larger buffers.
All of these elements are required for a valid SMS configuration, except for the
storage class ACS routine.
Recommendation: Ensure that a storage class ACS routine is part of your minimal
configuration. This prevents users from externally specifying a storage class on
their DD statements, causing the data set to be system-managed before you are
ready.
To use SMS effectively, use the information in this chapter and in z/OS DFSMSdfp
Storage Administration.
Coordinate with your z/OS systems programming group and operations to make
z/OS changes required for SMS activation, and schedule an IPL. There are required
changes for PARMLIB and GRS definitions.
Related Reading: For the formula used to calculate the appropriate SMS control
data set size, see z/OS DFSMSdfp Storage Administration.
Recommendation: Always review the COMMDS size when migrating from prior
DFSMS releases.
// EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DEFINE CLUSTER(NAME(YOUR.OWN.SCDS) LINEAR VOLUME(D65DM1) -
TRK(25 5) SHAREOPTIONS(2,3)) -
DATA(NAME(YOUR.OWN.SCDS.DATA) REUSE)
Specify SHAREOPTIONS(2,3) only for the SCDS. This lets one update-mode user
operate simultaneously with other read-mode users between regions.
Specify SHAREOPTIONS(3,3) for the ACDS and COMMDS. These data sets must
be shared between systems that are managing a shared DASD configuration in a
DFSMS environment.
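Following the pattern of the SCDS example above, the ACDS could be defined with the required share options as in this sketch; the name, volume, and space values are placeholders carried over from that example:

```
DEFINE CLUSTER(NAME(YOUR.OWN.ACDS) LINEAR VOLUME(D65DM1) -
  TRK(25 5) SHAREOPTIONS(3,3)) -
  DATA(NAME(YOUR.OWN.ACDS.DATA) REUSE)
```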
A reserve is issued while updating the control data sets for the following reasons:
v The COMMDS is updated with space statistics at the expiration of the time
interval specified in the IGDSMSxx member in PARMLIB.
v The ACDS is updated whenever a configuration change occurs, such as when an
operator varies a volume offline.
You should place resource name IGDCDSXS in the RESERVE conversion RNL as a
generic entry. This minimizes delays due to contention for resources and prevents
deadlocks associated with the VARY SMS command.
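In the GRSRNLxx member of PARMLIB, the generic conversion entry might look like the following sketch:

```
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(IGDCDSXS)
```

This converts reserves against the SMS control data sets to global enqueues, so updates on one system do not lock out the entire shared volume.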
[Figure: the starter set supplies the test source ACS routines and the minimal
configuration SCDS (source control data set), and the configuration is then
validated.]
Tip: The volume does not have to exist as long as you do not direct allocations to
either the storage group or the volume.
Because a storage class can be assigned using a JCL DD statement, a storage class
ACS routine is not required in a valid SCDS.
Recommendation: Define a storage class ACS routine so that users do not attempt
to use the STORCLAS keyword in JCL. The base configuration consists of the
names for each system or system group in the SMS complex, the default
management class, and the default device geometry and unit information.
The following sections describe the steps you should follow to define a minimal
configuration. The figures show the sample classes, groups, base configuration, and
ACS routines provided in either the starter set or Appendix B, “Sample Classes,
Groups, and ACS Routines,” on page 243.
2. In the CDS Name field, type in the name of the SCDS that is to contain the
base configuration. In this example, the CDS name is YOUR.OWN.SCDS. Enter
a 2 (Define) to view the SCDS Base Define panel shown in Figure 33 on page
85.
Use ENTER to Perform Verification; Use DOWN Command to View next Panel;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL To Exit.
Use the DOWN command to view the second page as shown in Figure 34.
The SCDS name is the same as the value that you specified on the CDS
Application Selection panel (see Figure 32 on page 84).
3. Define a default management class and type it in the Default Management
Class field. In this example, we have used the STANDEF management class, a
management class in the sample configuration.
The default management class is only used when DFSMShsm performs
automatic processing for those data sets that do not have management classes
assigned to them. When no management class is assigned to a data set, the
catalog entry for that data set contains no management class, even though the
default management class controls its backup and availability. You should
periodically search for data sets that are system-managed and have no
Chapter 4. Activating the Storage Management Subsystem 85
management class assigned. DFSMSdss's filtering capabilities can be used to
identify system-managed data sets with no management class assigned, and to
produce a report containing these management class exceptions.
4. You should set the value in the Default Unit field to your system's primary
esoteric name. For Default Device Geometry, specify values for the Bytes/Track
and Tracks/Cylinder attributes. The values for the 3380 are 47476 and 15,
respectively. For the 3390, the values are 56664 and 15, respectively.
You should indicate the characteristics of your predominant device as the
characteristics for the default unit. If your configuration contains 90% 3390-2s
and 10% 3380-Ks, then specify the 3390 geometry characteristics as the default
device geometry.
The JCL UNIT parameter is optional for new data set allocations for both
system-managed and non-system-managed data sets. SMS uses the Default
Unit attribute if no unit is specified when allocating non-system-managed data
sets. The Default Device Geometry attribute converts an allocation request from
tracks or cylinders to KBs or MBs when an esoteric unit is used, or when no
unit is given. Through this conversion, uniform space can be allocated on any
device type for a given allocation.
The space request is always converted to KBs or MBs according to the
following formula:
                     (# tracks) x (bytes/track)
  tracks allocated = --------------------------
                           track capacity
Where:
v bytes/track is derived from the Default Device Geometry attribute.
v track capacity is the capacity of the device selected, including device
overhead.
v The result of the calculation, tracks allocated, is rounded up.
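For example, assuming the 3390 default device geometry and a 3380 target device (an arbitrary pairing chosen for illustration), a request for 100 tracks converts as follows:

```
requested:         100 TRK, default geometry 3390 (56664 bytes/track)
converted:         100 x 56664 = 5,666,400 bytes
target device:     3380 (47476 bytes/track)
tracks allocated:  5,666,400 / 47476 = 119.4, rounded up to 120 tracks
```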
This change can affect your users' existing JCL that specifies the UNIT
parameter. There are two variations of UNIT coding:
v Users specify a generic name, such as 3380 or 3390:
These users have allocation converted to bytes, based on the geometry of that
device.
v Users specify an esoteric name, such as SYSDA:
These users have allocation converted to bytes, based on the Default Device
Geometry attribute.
Use an esoteric name for a more consistent amount of allocated space. It
provides a transition for users to allocation in the system-managed storage
environment.
5. If you have created a data set separation profile, use the optional field DS
Separation Profile to provide SMS with the name of the profile. During volume
selection for data set allocation, SMS attempts to separate, on the PCU or
volume level, the data sets that are listed in the data set separation profile.
You can specify any valid sequential data set name or partitioned data set
member name, with a maximum length of 56 characters, with or without
quotation marks. For
data set names without quotation marks, ISMF will add the TSO user ID prefix
of the person who is defining or updating the base configuration.
The default value is blank, which indicates that data set separation is not
requested for the SMS complex.
Related Reading: For detailed information on defining SMS classes and groups
using ISMF, see z/OS DFSMSdfp Storage Administration.
You can define the class, SC1, in your configuration in one of two ways:
1. Define the class using the define option of the ISMF storage class application.
2. Use the ISMF COPY line operator to copy the definition of SC1 from the starter
set's SCDS to your own SCDS.
2. Give values for the CDS Name and Storage Class Name fields. The CDS
Name must be the same name that you gave for the SCDS on the CDS
Application Selection panel (see Figure 32 on page 84). In this example, the
CDS name is USER6.TEST.SCDS and the storage class name is SC1.
Enter 3 (Define) to display the Storage Class Define panel, shown in Figure 36
on page 89.
3. SCDS Name and Storage Class Name are output fields containing the values
that you specified on the Storage Class Application Selection panel (see
Figure 35 on page 88). Description is an optional field of 120 characters where
you can describe the storage class.
4. For the minimal configuration, allow other storage class attributes to default to
the ISMF values. Do not specify Y for the Guaranteed Space attribute to avoid
allocation failures on specified volumes. You can specify it later for such data
sets as IMS online logs, DB2 online logs, or DB2 BSDS.
Press Enter to verify the attributes. Enter END on the command line or press
PF3 to exit this panel.
To define the storage class by copying the definition from the sample base
configuration:
1. List the storage classes in the base configuration, as shown in Figure 37 on page
90.
2. View the storage classes in the base configuration and enter COPY as a line
operator on the line describing the SC1 class, as shown in Figure 38.
3. Copy the SC1 storage class from the base configuration to your own SCDS, as
shown in Figure 39 on page 91. Enter the name of your SCDS in the Data Set
Name field, and “SC1” in the Construct Name field of the “Copy To” data area.
Press Enter. The SC1 SAVED message indicates that the storage class has been
successfully copied.
Defining a non-existent volume lets you activate SMS without having any
system-managed volumes. No data sets are system-managed at this time. This
condition provides an opportunity to experiment with SMS without any risk to
your data.
2. Specify values for the CDS Name and Storage Group Name fields. The CDS
name must be the same name that you specified for the SCDS on the CDS
Application Selection panel (see Figure 32 on page 84). In this example, the
CDS name is USER6.MYSCDS and the storage group name is SG2.
Enter a 2 (Define) to display the Pool Storage Group Define panels, shown in
Figure 41 and Figure 42 on page 93.
Use ENTER to Perform Selection; Use DOWN Command to View next Page;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
SCDS Name and Storage Group Name are output fields containing the values
that you specified on the Storage Group Application Selection panel (see
Figure 40 on page 92). Description is an optional field of 120 characters that
you can use to describe the storage group type.
If you supply the name of a Parallel Sysplex in the Migrate System Name,
Backup System Name, or Dump System Name field, make sure you enter it
correctly. Otherwise, the expected operation might not occur.
3. Let the storage group attributes default to the ISMF values.
Press Enter to verify and display the SMS Storage Group Status Define panel
shown in Figure 43.
CDS Name, Storage Group Name, and Storage Group Type are output fields
containing the values you specified on previous panels.
6. Enter a 2 (Define) and specify the appropriate volume serial numbers. Each
time you press Enter, you display the SMS Volume Status Define panel, shown
in Figure 45 on page 95.
SCDS Name, Storage Group Name, and Volume Serial Numbers are output
fields containing the values that you entered on the Storage Group Volume
Selection panel (see Figure 44 on page 94).
7. Define the relationship between the volume and each system or system group
by entering DISALL in the SMS VOL STATUS column next to each name in the
System/Sys Group Name column.
The management class, STANDEF, is defined in the starter set's SCDS. You can
copy its definition to your own SCDS in the same way as the storage class was
copied. If you choose to define the default management class:
1. Enter a 3 (Management Class) on the ISMF Primary Option Menu to display
the Management Class Application Selection panel (see Figure 46 on page 96).
Use ENTER to Perform Selection; Use DOWN Command to View next Selection Panel;
Use HELP Command for Help; Use END Command to Exit.
2. Specify values in the CDS Name and Management Class Name fields. The CDS
name must be the same name that you specified for the SCDS on the CDS
Application Selection panel (see Figure 32 on page 84). In this example, the
SCDS name is YOUR.OWN.SCDS and the management class name is
STANDEF.
Enter a 3 (Define) to view the first page of the Management Class Define panel
(see Figure 47).
Expiration Attributes
Expire after Days Non-usage . . NOLIMIT (1 to 93000 or NOLIMIT)
Expire after Date/Days . . . . . NOLIMIT (0 to 93000, yyyy/mm/dd or
NOLIMIT)
Use ENTER to Perform Verification; Use DOWN Command to View next Panel;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
SCDS Name and Management Class Name are output fields containing the
values that you specified on the Management Class Application Selection panel
(see Figure 46). Description is an optional field of 120 characters that you can
use to describe the management class.
Migration Attributes
Primary Days Non-usage . . . . 2 (0 to 9999 or blank)
Level 1 Days Non-usage . . . . 15 (0 to 9999, NOLIMIT or blank)
Command or Auto Migrate . . . . BOTH (BOTH, COMMAND or NONE)
Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
SCDS Name and Management Class Name are output fields containing the
values that you specified on the Management Class Application Selection panel
(see Figure 46 on page 96).
4. Specify N in the Partial Release field to inhibit DFSMShsm's space management
from releasing allocated but unused space. We specified a short life on primary
and migration level 1 for these data sets, to prevent over-commitment of
primary and migration level 1 storage. These data sets should be re-assigned a
management class that is more appropriate than the default. Specify BOTH in
the Command or Auto Migrate field to permit DFSMShsm, the storage
administrator, or the user to manage the data set.
Scroll down to perform the verification and to display the third page of the
Management Class Define panel shown in Figure 49 on page 98.
Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; Cancel to Exit.
SCDS Name and Management Class Name are output fields containing the
values that you entered on the Management Class Application Selection panel
(see Figure 46 on page 96).
5. In the Backup Attributes fields, specify that a minimal number of backup
versions be retained for the data set. Specify BOTH for Admin or User
command Backup, Y for Auto Backup to ensure that the data set is retained on
both DFSMShsm backup volumes and migration level 2, and let Backup Copy
Technique default. Leave the remaining fields blank.
3. Select option 1 (Edit), and press Enter to display the Edit-Entry panel shown in
Figure 51.
ISPF Library:
Project . . . IBMUSER
Group . . . . IMPLACTV . . . . . . . . .
Type . . . . ACSLIB
Member . . . STORCLAS (Blank or pattern for member selection list)
Workstation File:
File Name . . . . .
Options
Initial Macro . . . . / Confirm Cancel/Move/Replace
Profile Name . . . . . Mixed Mode
Format Name . . . . . Edit on Workstation
Data Set Password . . Preserve VB record length
4. Type in the appropriate data set name for the ACS routines. We have shown
the name of the PDS or PDSE corresponding to the sample ACS routines for
this milestone. The storage class ACS routine is allocated in the STORCLAS
member.
Press Enter to access the ISPF/PDF editor.
5. On this screen, enter the source code for the storage class ACS routine, as
shown in Figure 52 on page 100. This routine sets a null storage class and exits.
No data is system-managed.
/**********************************************************************/
/* C H A N G E H I S T O R Y */
/* ============================ */
/* DATE RESP DESCRIPTION OF CHANGE */
/* -------- ---- ----------------------------------------------- */
/* 91/03/24 EG Initial routine created. */
/* PURPOSE: This routine assigns a null storage class to */
/* all data sets. */
/* INPUT: The following ACS variables are referenced: NONE */
/* OUTPUT: Null storage class. */
/* RETURN Zero is the only return code. Data set allocations are */
/* CODES: not failed in this routine. */
/**********************************************************************/
SET &STORCLAS = ''
Figure 52. Sample Storage Class ACS Routine for the Minimal Configuration
6. Enter the END command to save the routine and return to the Edit-Entry panel
shown in Figure 51 on page 99.
7. Enter the name of your storage group ACS routine as the new member name in
the Member field. The sample routine uses the name STORGRP. The ISPF/PDF
editor appears again.
8. On this screen, enter the source code for the storage group ACS routine, as
shown in Figure 53 on page 101. This source code assigns the previously
defined storage group. Because this particular storage group contains a
non-existent volume, no volumes are system-managed.
/**********************************************************************/
/* C H A N G E H I S T O R Y */
/* ============================ */
/* */
/* DATE RESP DESCRIPTION OF CHANGE */
/* -------- ---- ----------------------------------------------- */
/* */
/* 91/03/24 EG Initial routine created. */
/* */
/* PURPOSE: This routine is never run for the minimal */
/* SMS configuration. It only exists to satisfy the */
/* requirement for storage group ACS routine */
/* for every valid SMS configuration. A storage */
/* group containing no real DASD volumes is assigned, */
/* NOVOLS. */
/* */
/* INPUT: The following ACS variables are referenced: NONE */
/* */
/* OUTPUT: The NOVOLS storage group is assigned. */
/**********************************************************************/
SET &STORGRP = 'NOVOLS'
Figure 53. Sample Storage Group ACS Routine for the Minimal Configuration
9. Enter the END command to save the routine and return to the Edit-Entry panel
(see Figure 51 on page 99). From this panel, enter the END command again to
return to the ACS Application Selection panel (see Figure 50 on page 99).
2. Enter the appropriate values for the storage class ACS routine. Press Enter to
perform the translation. If errors are found, the listing contents are displayed. If
the translation is successful, the message ACS OBJECT SAVED is displayed.
3. Enter the END command to return to the Translate ACS Routines panel. Next,
translate the storage group ACS routine.
3. In the SCDS Name field, specify the name of your SCDS. Enter an asterisk in
the ACS Routine Type field to validate the entire SCDS.
You can save a listing of the validation results by specifying a sequential data
set name or partitioned data set member in the Listing Data Set field. If you
leave this field blank, no listing is generated, so you do not see possible errors.
You receive warning messages from the VALIDATE command if there are
classes defined in your SCDS that are not assigned in your ACS routines.
If you have storage groups defined in your SCDS that are not assigned in your
ACS routines, you receive messages from VALIDATE and your SCDS is marked
invalid.
To define SMS to z/OS, you must place a record for SMS in an IEFSSNxx member.
IEFSSNxx defines how z/OS activates the SMS address space. You can code an
IEFSSNxx member with keyword or positional parameters, but not both.
If you choose to use positional instead of keyword parameters, use the following
positional format to define SMS in IEFSSNxx:
SMS[,[IGDSSIIN][,'ID=yy[,PROMPT={NO|YES|DISPLAY}]']]
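For the keyword form that IEFSSNxx also accepts, the record might look like the following sketch; the suffix 00 is an assumed value for your IGDSMSxx member:

```
SUBSYS SUBNAME(SMS) INITRTN(IGDSSIIN) INITPARM('ID=00,PROMPT=DISPLAY')
```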
Where:
ID=yy
yy is the two-character suffix for the SMS initialization member, IGDSMSxx.
PROMPT=DISPLAY
This option displays the contents of the IGDSMSxx member, but you cannot
change the contents.
During initial testing, you probably want to be able to start SMS manually. Omit
IGDSSIIN in the SMS record to do this. Once you are comfortable with SMS
operation, add IGDSSIIN to cause SMS to start automatically during IPL.
Recommendation: Place the SMS record before the JES2 record in IEFSSNxx to
start SMS before starting the JES2 subsystem.
IGDSMSxx also sets the synchronization time interval between systems. This
interval represents how many seconds SMS lets elapse before it checks the
COMMDS for status from other systems.
If you plan to use the RACF default data class, storage class, and management
class for the data set owner, you must specify ACSDEFAULTS=YES. The following
example shows a basic format for the IGDSMSxx member:
SMS ACDS(YOUR.OWN.ACDS)
COMMDS(YOUR.OWN.COMMDS)
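An IGDSMSxx member that also sets the synchronization interval and enables the RACF defaults might look like this sketch; the 15-second interval is an arbitrary illustrative choice:

```
SMS ACDS(YOUR.OWN.ACDS)
    COMMDS(YOUR.OWN.COMMDS)
    INTERVAL(15)
    ACSDEFAULTS(YES)
```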
Related Reading: For more information about activating a new SMS configuration,
see z/OS DFSMSdfp Storage Administration.
Tip: The ACTIVATE command, run from the ISMF CDS application, is equivalent
to the SETSMS operator command with the SCDS keyword specified. If you use
RACF, you can enable storage administrators to activate SMS configurations from
ISMF by defining the facility, STGADMIN.IGD.ACTIVATE.CONFIGURATION, and
issuing permit commands for each storage administrator.
z/OS operator commands complement your ability to monitor and control SMS
operation. You can use the DISPLAY operator command to show information about
the active configuration. The following sample command displays the status of the
storage group PRIME80 and all the volumes defined in the storage group:
DISPLAY SMS,STORGRP(PRIME80),LISTVOL
Activating SMS
To activate SMS, perform the following steps:
1. Define SMS to the operating system.
Update the PARMLIB members on all the systems where SMS is intended to
run. If you are sharing DASD between systems, you only need to activate the
SMS configuration on one system; the COMMDS activates the other systems. If
you want to prevent initial activation on other systems, the ACDS and
COMMDS should reside on non-shared volumes.
2. Initialize the SMS address space.
DEVSERV P,1000,4
IEE459I 10.45.49 DEVSERV PATHS 780
UNIT DTYPE M CNT VOLSER CHPID=PATH STATUS
TC DFW PIN DC-STATE CCA DDC ALT CU-TYPE
1000,3380K,O,000,D65DM1,2E=+ 2F=+
YY YY N SIMPLEX C0 00 3990-3
1001,3380K,O,000,D65DM2,2E=+ 2F=+
YY NY N SIMPLEX C1 01 3990-3
1002,3380K,O,000,D65DM3,2E=+ 2F=+
NY YY N SIMPLEX C2 02 3990-3
1003,3380K,O,000,D65DM4,2E=+ 2F=+
YY NY N SIMPLEX C3 03 3990-3
************************ SYMBOL DEFINITIONS ************************
O = ONLINE + = PATH AVAILABLE
Enforcing Standards
You can use data class ACS routine facilities to automate or simplify storage
allocation standards if you:
v Use manual techniques to enforce standards
v Plan to enforce standards before implementing DFSMS
v Use DFSMSdfp or MVS installation exits to enforce storage allocation standards
The data class ACS routine provides an automatic method for enforcing standards,
because it is called for system-managed and non-system-managed data set
allocations. Standards are enforced automatically at allocation time, rather than
through manual techniques after allocation.
Appendix C, “Installation and User Exits,” on page 259, describes the installation
exits available in the DFSMS environment. Use the information to evaluate if your
installation exit usage continues to apply to system-managed data sets.
Temporary data sets are created and deleted within the same job, job step, or
terminal session. No entries are made in the basic catalog structure (BCS) for these
data sets, but system-managed VSAM data sets do have VVDS entries. Both VSAM
and non-VSAM data sets have VTOC entries. The data set name for temporary
data is either omitted or is a single qualifier with & or && at the beginning.
When the DSNAME is omitted, the system generates a name that begins with SYS
and includes the Julian date and time.
When you request temporary data set allocation, the ACS read-only variable
for data set type, &DSTYPE, is set to TEMP. The storage class ACS routine
determines whether to allocate these data sets to VIO or to volumes in a pool
storage group, depending on the data set usage and size. During automatic space
management, DFSMShsm automatically deletes system-managed temporary data
sets that remain on a volume after an abnormal end of job or system failure.
Figure 59 on page 110 shows how DFSMShsm allocates and manages temporary
data sets.
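A storage class ACS routine fragment along these lines could route temporary data sets by size; the class names SCVIO and SCLARGE and the 20 MB cutoff are hypothetical choices, not values from the sample configuration:

```
PROC STORCLAS
  IF &DSTYPE = 'TEMP' THEN
    IF &MAXSIZE <= 20MB THEN
      SET &STORCLAS = 'SCVIO'
    ELSE
      SET &STORCLAS = 'SCLARGE'
END
```

The storage group ACS routine would then map SCVIO to the VIO storage group and SCLARGE to a pool storage group such as PRIMExx or LARGExx.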
The following major tasks are required for system-managed temporary data sets:
v Review the planning considerations
v Define SMS storage classes and groups
v Create ACS routines
v Test the ACS routines
v Initialize DASD volumes for LARGExx and PRIMExx storage groups
v Reactivate the configuration
The PRIMExx and LARGExx storage groups support temporary data sets that are
too large for VIO support. To tailor the storage groups:
1. Enter 6 (Storage Group) on the ISMF Primary Option Menu to view the Storage
Group Application Selection panel.
2. Type in values for the CDS Name, Storage Group Name and Storage Group
Type fields. In this example, the CDS name is YOUR.OWN.SCDS, the storage
group name is VIO, and the storage group type is VIO.
Enter 3 (Alter) to view the VIO Storage Group Alter panel. The VIO Maxsize
attribute in the storage group determines the largest temporary data set that
can be written to VIO. You determine the size by examining your use of central
and expanded storage and your paging activity.
Type in your threshold as the VIO Maxsize attribute if you need to change the
20 MB value. This is the primary space size plus 15 times the secondary space
size.
Reasonable limits for VIO depend far more on the sizes of paging data sets
than they do on the amount of central storage.
Type in a device type as the VIO UNIT attribute. The VIO device type is virtual
and is unrelated to actual devices on your system. Update the Description field
to reflect your changes.
Press Enter to verify your changes. Then press PF3 to save the updated storage
group. Press PF3 again to return to the Storage Group Application Selection
panel.
3. Now tailor the PRIMExx storage group.
4. Enter a 4 (Volume) on the Storage Group Application Selection panel. Then,
specify a volume or range of volumes and enter a 2 (Define) on the Storage
Group Volume Selection panel. Define the relationship between the volume and
each of your systems or system groups by typing in ENABLE in the SMS VOL
STATUS column next to the appropriate system or system group names in the
System/Sys Group Name column.
5. Optionally, define the LARGExx storage group in the same way as you did for
the PRIMExx storage group.
Restriction: If you use VOL=REF processing to refer to a temporary data set, you
might get different results in storage group assignments than expected. This is
because temporary data sets are assigned a storage group by the system, based on
a list of eligible storage groups, such as VIO, PRIME, and STANDARD. Data sets
that use VOL=REF are assigned a storage group based on this list of eligible
storage groups, not on the name of the storage group used to successfully allocate
the first data set being referenced. This might result in the data sets being allocated
in different storage groups.
Restriction: ACS installation exits are not called during ACS routine testing.
2. Enter a 4 (Test) to view the ACS Test Selection panel, shown in Figure 61.
3. Type in the name of the PDS containing the ACS test case data in the ACS Test
Library field. In this example, the data set name is USER6.TEST.DATA. Type in
the name of the particular library member containing the test case in the ACS
Test Member field. You can type in one test case per member.
Enter a 1 to view the first page of the ACS Test Case Define panel, as shown in
Figure 62 on page 115.
ACS Test Library and ACS Test Member are output fields containing the values
that you specified on the ACS Test Selection panel (see Figure 61 on page 114).
Description is an optional field of 120 characters that you can use to describe
the test case.
4. Specify the appropriate values. The following are sample values for your use:
v DSN: STGADMIN.TEST.TEMPDATA
v DD: SORTWK1
v Dsorg: PS
v Dstype: TEMP
v Xmode: BATCH
v ACSenvir: ALLOC
v MAXSIZE: 400000
Leave the remaining fields blank and scroll down to view the second page of
the ACS Test Case Define panel, shown in Figure 63 on page 116.
5. Specify STGADM01 in the JOB field, SYSADMIN in the GROUP field, and 3390
in the UNIT field. Leave the remaining fields blank.
Press Enter to perform the verification. Enter the END command to return to
the ACS Test Selection panel (see Figure 61 on page 114).
ACS TEST
MEMBER EXIT CODE RESULTS
--------- ---------- ------------------------------------
DESCRIPTION: TEST CASE CREATED 99/11/07 AT 10:24
TEMPT01 0 SC = NORMAL
0 SG = NORMAL
3. After examining the results, enter the END command to view the ACS Output
Listing Disposition panel, on which you can specify whether to keep the output
listing.
TSO, batch, and database data are the usual candidates for migration during the
Managing Permanent Data phase. Tape data sets and volumes are migrated in the
Managing Tape Data milestone. You can migrate database data sets and tape
volumes under SMS in any order.
Most data set types can benefit from SMS, with some types benefiting more than
others.
Related Reading: For specific information about objects, see z/OS DFSMS OAM
Planning, Installation, and Storage Administration Guide for Object Support.
[Figure: overview of system-managed storage groups (application, IMS database, CICS database, and VIO temporary), with unmovable data remaining non-system-managed]
These data sets are relatively easy to place under system management because
allocation requests for TSO data typically use esoteric unit names, and the data sets
are already in storage pools. You can readily convert these pools to the primary
storage groups, PRIME80 and PRIME90.
If you now use DFSMShsm, you are probably already providing management
services for TSO data. Implementing DFSMS-based space management and
availability services requires translating the volume-oriented management
parameters documented in your DFSMShsm PARMLIB member to the
data-set-oriented SMS management classes.
HFS data sets are similar to TSO data sets. HFS files, like PDSE members, cannot
be individually managed or converted. They are created in HFS data sets as
hierarchical files using z/OS UNIX System Services. HFS data sets
must be system-managed.
Data set-level DFSMSdss and DFSMShsm functions can be performed on HFS data
sets. However, file level backups can only be performed using Tivoli Storage
Manager clients.
SMS allocates TSO data in the PRIMExx and LARGExx storage groups, based on
data set size. Data sets larger than 285 MB are directed to the LARGExx storage
group. Listing data sets, such as SYSPRINT output from compilers and linkage editors, are
automatically deleted by DFSMShsm after a short life on primary storage. Active
source, object, and load libraries exist on primary storage indefinitely. If these data
sets are not used, they are migrated by DFSMShsm to migration level 1, but are
recalled automatically when accessed by a user.
Multiple versions of backups of TSO data sets are maintained to minimize the
effect of accidental deletion of application programmer data, such as source or JCL
libraries. These data sets receive better-than-average availability service.
TSO data sets do not have high performance requirements in comparison to other
data categories, and are assigned standard performance services.
No additional logic in the storage class ACS routine is required to assign TSO data
sets to the STANDARD class. The OTHERWISE statement associated with the first
SELECT statement is run for TSO data set allocations, setting the storage class to
STANDARD. Refer to the storage class ACS routine displayed in Appendix B,
“Sample Classes, Groups, and ACS Routines,” on page 243 for a coding example.
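As a hedged sketch of that logic (paraphrasing the Appendix B routine rather than reproducing it; the surrounding WHEN clauses are omitted):

```
SELECT
  /* ... WHEN clauses for other data categories ... */
  OTHERWISE                       /* TSO and all remaining data sets */
    SET &STORCLAS = 'STANDARD'
END
```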
EXTBAK defines the availability and space management attributes for long-term
libraries that contain JCL, source program, CLIST, object, and load modules.
Because the Partial Release attribute is set to CI (conditional immediate), any
primary space that is allocated, but not used, is released immediately if a
secondary space request is specified. EXTBAK retains these data sets on primary
storage indefinitely if they are referenced at least once every 15 days, as dictated
by the value of MIGRATE PRIMARY DAYS NON-USAGE. If 15 days elapse
without the data set being referenced, DFSMShsm moves the data set to migration
level 1 where the data set exists for 60 days before being written to migration level
2. The attribute, Number Backup Versions, Data Set Exists, indicates that the five
most recent versions of the data set are retained by DFSMShsm on backup
volumes as long as the data set exists.
Table 6 on page 125 shows the attributes for these two management classes.
Figure 68. FILTLISTs for TSO Data Used in Management Class ACS Routine
Figure 69 on page 126 shows the mainline logic needed to assign the management
classes. When the LLQ of the data set being allocated satisfies one of the literals or
masks listed in the FILTLIST statement, the management class, EXTBAK or
INTERIM, is assigned to the system-managed data set. Any TSO data sets that do
not have LLQs matching either PGMRDATA or PGMRLIST are assigned the
STANDARD management class. This is done by the last OTHERWISE clause in the
management class ACS routine. See the management class ACS routine in
Appendix B, “Sample Classes, Groups, and ACS Routines,” on page 243 for an
example.
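A minimal sketch of this logic, not the Appendix B routine itself, might look like the following. The FILTLIST names PGMRDATA and PGMRLIST and the class names come from the text; the LLQ values in the INCLUDE lists are hypothetical:

```
FILTLIST PGMRDATA INCLUDE(CNTL,JCL,CLIST,SOURCE,LOAD*)  /* hypothetical LLQs */
FILTLIST PGMRLIST INCLUDE(LIST*,OUTLIST)                /* hypothetical LLQs */
 ...
SELECT
  WHEN (&LLQ = &PGMRDATA)
    SET &MGMTCLAS = 'EXTBAK'      /* long-term programmer libraries */
  WHEN (&LLQ = &PGMRLIST)
    SET &MGMTCLAS = 'INTERIM'     /* short-lived listing data sets  */
  OTHERWISE
    SET &MGMTCLAS = 'STANDARD'    /* all other TSO data sets        */
END
```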
No additions to the storage group ACS routine logic are required to support TSO
data.
In-Place Conversion
If your TSO data is currently pooled and you are satisfied with the overall
performance of the TSO workload, you can convert the volumes in-place using
DFSMSdss CONVERTV. The CONVERTV command processes volumes and
evaluates the eligibility of the volumes and the data sets on the volume to be
system-managed. If the volume and data sets are eligible for system management,
your storage class and management class ACS routines are run to assign storage
classes and management classes to all the data sets on the volume. Figure 70 shows
a sample CONVERTV operation.
CONVERTV -
SMS -
DDNAME(D65DM1)
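As a hedged illustration, a complete job wrapping this CONVERTV statement might look like the following; the step name and the DD statement describing the volume are hypothetical, but DDNAME(D65DM1) must match a DD name in the job:

```
//CONVERT  EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//D65DM1   DD  UNIT=3390,VOL=SER=D65DM1,DISP=OLD
//SYSIN    DD  *
  CONVERTV -
    SMS -
    DDNAME(D65DM1)
/*
```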
Figure 71 is a sample of a job that calls DFSMSdss to move data sets from a
non-system-managed volume to system-managed volumes determined by your
storage group ACS routine. In this example, the source data resides in TSO pools
so the TSO source volume, D65DM1, is specified as the primary DFSMSdss filter in
the LOGINDYNAM parameter. All data sets on the source volume are moved,
excluding a group of system programmer data sets having the high-level qualifier,
SYS1. If your ACS routines determine that any data set should not be
system-managed, the data set is moved to the non-managed volumes that you
specify in the OUTDYNAM parameter. In this example, D65DM2 is the target
volume for all non-system-managed data sets.
//*--------------------------------------------*
//* JOB : DSS COPY *
//* NOTE: Sample DSS Job to Convert a TSO *
//* data to system-managed storage *
//*--------------------------------------------*
//COPY EXEC PGM=ADRDSSU,REGION=4096K
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
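The SYSIN control statements for this job are not shown in this excerpt. Based on the description above (all data sets on D65DM1 except SYS1.**, with D65DM2 as the target for non-system-managed data sets), a hedged sketch might be:

```
  COPY DATASET(INCLUDE(**) -
       EXCLUDE(SYS1.**)) -
    LOGINDYNAM(D65DM1) -
    OUTDYNAM(D65DM2) -
    ALLDATA(*) -
    ALLEXCP -
    CATALOG -
    DELETE -
    PURGE
/*
```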
If you want DFSMSdss to allocate the TSO data sets with the minimum space
required to contain the data, omit ALLEXCP and ALLDATA. Also, use
TGTALLOC(TRACK) to ensure that the minimum space is allocated for data sets
that were originally allocated in cylinders.
Related Reading: For more information about using ALLDATA and ALLEXCP, see
the DFSMSdss section of z/OS DFSMSdfp Storage Administration.
The starter set assumes that you have a data set naming convention based on LLQ.
Sample data classes are included in the starter set to help you define the following:
v VSAM data sets based on the RECORG parameter
With these data classes, users can create key-sequenced, relative record,
entry-sequenced, or linear VSAM data sets using batch JCL run in the
background, or TSO ALLOCATE commands run in the foreground. These data
classes create VSAM data sets with a primary allocation of 400 KB. The user
supplies the information about key offset and length for VSAM key-sequenced
data sets. Any performance-related options are the user's responsibility to
provide.
Table 7 on page 129 shows the attributes for these sample VSAM classes.
v Simple physical sequential data sets including test data sets and output listings.
You can create 80-byte, fixed-block data sets by using the class DATAF. The
primary space requested is 400 KB.
Data sets having variable-blocked records with an average record size of 255 bytes
are defined using DATAV. Based on the primary space request, 1.275 MB are
allocated.
Listing data sets having a primary space of 90 MB are allocated using the
LISTING data class.
Table 8 on page 129 shows the attributes for these physical sequential data
classes.
v Load and source libraries in both PDS and PDSE format
The data class, LOADLIB, is reserved for load libraries that you intend to
allocate as partitioned data sets. SRCFLIB and SRCVLIB are data classes that
allocate PDSEs, based on the value of the DATA SET NAME TYPE attribute.
Table 9 on page 129 shows the attributes for these model libraries.
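As a hedged illustration, a user might request one of these classes explicitly on a DD statement (the data set name is hypothetical; DATAF is the fixed-block class described above):

```
//* Batch JCL: explicit data class request
//OUTDD   DD  DSN=USER6.SAMPLE.DATA,DISP=(NEW,CATLG),DATACLAS=DATAF
```

The TSO foreground equivalent would be ALLOCATE DATASET('USER6.SAMPLE.DATA') NEW DATACLAS(DATAF).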
Table 8 shows the data class attributes for simple physical sequential data sets
including test data sets and output listings:
Table 8. Data Classes for Physical Sequential Data Sets

Attributes         DASD Physical Sequential Data Set Data Classes
Name               DATAF          DATAV          LISTING
Recfm              FB             VB             VBA
Lrecl              80             255            137
Space Avgrec       U              U              U
Space Avg Value    80             255            137
Space Primary      5000           5000           2000
Space Secondary    5000           5000           2000
Volume Count       1              1              1
Table 9 shows data class attributes for load and source libraries in both PDS and
PDSE format:
Table 9. Data Classes for Libraries

Attributes           Partitioned Data Set Data Classes
Name                 LOADLIB        SRCFLIB        SRCVLIB
Recfm                U              FB             VB
Lrecl                —              80             255
Space Avgrec         U              U              U
Space Avg Value      23476          80             255
Space Primary        50             5000           5000
Space Secondary      50             5000           5000
Space Directory      62             62             62
Data Set Name Type   PDS            LIBRARY        LIBRARY
Volume Count         1              1              1
Recommendation: Allow users to assign a data class externally through batch JCL,
TSO ALLOCATE commands, or the ISPF/PDF enhanced data set allocation panel.
The data class ACS routine for DASD data sets has three segments: one for
externally requested data classes, one for VSAM data sets, and one for
non-VSAM data sets. The data class ACS routine performs the following tasks:
v Checks for an externally requested data class and assigns it to the data set if
the data class is part of your active SMS configuration. If it is not, no data class
is assigned.
v Assigns the appropriate VSAM data class, if the DCB characteristic, RECORG,
for the data set indicates that it is a VSAM data set. Otherwise, the LLQ of the
data set name for any non-VSAM data set allocations is compared with the
FILTLIST variables for each of the data types. If it matches, the data class
associated with the variable is assigned.
Figure 73 on page 132 shows the data class ACS routine fragment necessary to
assign the data classes, based on the data set name's LLQ.
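Figure 73 is not reproduced in this excerpt. A hedged sketch of the routine's overall shape, under the three-segment structure just described (the VSAM class name VSAMDC and the FILTLIST masks are hypothetical; LOADLIB is the starter-set class from Table 9):

```
PROC DATACLAS
  FILTLIST LOADLLQ INCLUDE(LOAD,LOADLIB)   /* hypothetical LLQ masks  */
  SELECT
    WHEN (&DATACLAS NE '')                 /* externally requested    */
      EXIT                                 /* class: keep it if it is */
                                           /* defined in the SCDS     */
    WHEN (&RECORG NE '')                   /* VSAM data set: a real   */
      SET &DATACLAS = 'VSAMDC'             /* routine would test each */
                                           /* RECORG value separately */
    WHEN (&LLQ = &LOADLLQ)                 /* non-VSAM: match the LLQ */
      SET &DATACLAS = 'LOADLIB'
    OTHERWISE
      SET &DATACLAS = ''                   /* no data class assigned  */
  END
END
```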
Related Reading: For a full description of data set naming conventions, see z/OS
MVS JCL Reference.
HFS data sets should have a separate data class, and should be placed in a
primary (PRIMExx) storage group.
You can readily migrate batch data to system management, but you need to
understand application cycles to properly define the availability requirements of
the data sets.
You can realize these benefits by migrating batch data to system management, as
follows:
v You can improve batch job performance for I/O-bound jobs with sequential and
VSAM data striping.
Jobs that process large, sequential or VSAM data sets can improve performance
if you convert these data sets to extended-format data sets that are striped.
Sequential and VSAM data striping causes data sets to be written across
multiple DASD volumes on unique storage paths and then read in parallel on
each volume.
Channel load and storage space can also be reduced if you use host-based data
compression.
v You can improve batch job performance by moving tape data sets to DASD.
Batch jobs constrained by tape access speed or drive availability can benefit from
system management. Sequential access speed of striped data sets on DASD is
faster than that of tape, and you spend no time with tape mounts. Production
data sets and backups of production data sets both benefit from system
management:
– Production data sets
Migrating this data to system management gives you an opportunity to
improve performance for batch applications by moving selected data sets to
DASD.
– Backup copies of production data sets
Data sets that are application point-in-time backups can be written to a
system-managed DASD buffer and managed with DFSMShsm according to
the application's space and availability requirements. This strategy is
discussed in Chapter 11, “Optimizing Tape Usage,” on page 177.
v Critical batch data sets benefit from dual copy and from RAID technology.
Unrecoverable I/O errors can cause reruns. A cache-capable 3990 model storage
control's dual copy capability can diminish the effect of hardware outages by
maintaining a secondary copy of the data set on a separate volume. An I/O
error on the primary volume causes the system to switch automatically to the
secondary copy without any disruption to the application. If you use DASD fast
write with dual copy, there is no performance penalty for this increased
availability because the I/O is complete when written to the cache-capable 3990
storage control's non-volatile storage. The update is then destaged to the
primary and secondary volumes.
v Batch data sets benefit from DFSMShsm's fully automatic availability
management.
Most applications' availability requirements are met with the STANDARD
management class. Applications with specialized cycles, such as batch data sets,
might require unique management classes. Using data set-oriented management
classes lets you customize services, based on each application's requirements.
Later, you can simplify or eliminate application-initiated backup and recovery
procedures.
SMS allocates batch data in the PRIMExx and LARGExx storage groups, based on
data set size. Data sets larger than 285 MB are directed to the LARGExx storage
group.
Most data sets are allocated using system-determined block size to optimize space
usage. SMS allocates large, sequential batch data sets having high-performance
requirements in extended format. Critical data sets are maintained on dual copy
volumes.
For sequential data sets, SMS writes a hardware EOF at the beginning of the data
set at initial allocation. This prevents data integrity problems when applications try
to read the data before data is written in the data set.
You can manage the majority of your data with the STANDARD management
class. Data sets having unique management requirements are identified by data set
name or RACF &APPLIC and managed using a specialized management class.
Generation data sets are identified by the ACS variable, &DSTYPE, and receive
special management, based on the nature of the generation data group. If the
generation data group contains backups of data sets, the current copy migrates
quickly to migration level 1. In contrast, if it represents the current version of the
production data set, the current version is retained on primary storage until the
next generation is created. Output data sets containing reports are early candidates
for movement to migration level 1.
Batch data sets can vary greatly from cycle to cycle. Certain data sets should not
have space released automatically because of this variability. However, batch data
sets that are generation data sets are assigned a management class causing unused
space to be released automatically when the data set is closed.
Tip: You can use the DFSMS Optimizer Function to help you select data sets that
can benefit from striping.
To replace pattern DSCBs, create data classes in your SMS configuration to describe
the DCB characteristics. Use the parameter, DATACLAS, on the DD statement to
generate the proper DCB characteristics for the generation data sets.
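As a hedged illustration, a DD statement for a new generation might name such a data class explicitly (the data set name and the class name GDGDATA are hypothetical):

```
//* GDGDATA is a hypothetical data class carrying the DCB attributes
//* formerly supplied by the pattern DSCB
//REPORT  DD  DSN=1STAPPL.PAYROLL.REPORT(+1),DISP=(NEW,CATLG),
//            DATACLAS=GDGDATA
```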
The FASTSEQ storage class is an example of a storage class that allocates a striped
data set. For FASTSEQ, the value of SUSTAINED DATA RATE, 9 MB/sec, causes
the data set to be spread over two 3390 volumes because the data transfer rate for
a 3390 volume is 4.2 MB/sec. When you use the SUSTAINED DATA RATE to
create striped data sets automatically, you must also create a data class for striped
data sets. An example of a data class describing a striped data set is shown in
Table 12 on page 142.
Data sets that must be available for critical applications should be assigned the
CRITICAL storage class. The dual copy feature then maintains multiple copies of
the data set. If an error occurs in the primary data set, the secondary copy is
automatically used by the application.
Table 10 shows the storage classes that are useful for batch data sets.
Table 10. Storage Classes for Batch Data

Attributes                        Storage Classes
Name                              STANDARD     MEDIUM       FASTSEQ      CRITICAL
Direct Millisecond Response       —            10           —            10
Sequential Millisecond Response   —            10           —            10
Availability                      STANDARD     STANDARD     STANDARD     CONTINUOUS
Accessibility                     STANDARD     STANDARD     CONTINUOUS   CONTINUOUS
Guaranteed Space                  NO           NO           NO           NO
Guaranteed Synchronous Write      NO           NO           NO           NO
Sustained Data Rate (MB/sec)      —            —            9.0          —
Managing GDGs
Manage GDG data sets according to their type, either backup or production.
Migrate backup generation data sets to migration level 1 and the older versions to
migration level 2. The backup GDS can be recalled if required. Keep the current
generation of production generation data sets on primary storage until the next is
created.
There are two management classes you can assign to generation data sets.
GDGPROD and GDGBKUP are applicable to production and backup GDSs,
respectively. Table 11 lists the attributes for the GDG data classes.
Table 11. Management Classes for Batch Data

Attributes                        Management Classes
Name                              GDGBKUP      GDGPROD      STANDARD     MONTHMIG
Expire after Days Non-usage       NOLIMIT      NOLIMIT      NOLIMIT      NOLIMIT
Expire after Date/Days            NOLIMIT      NOLIMIT      NOLIMIT      NOLIMIT
Retention Limit                   NOLIMIT      NOLIMIT      NOLIMIT      NOLIMIT
Partial Release                   YES IMMED    YES IMMED    NO           NO
Migrate Primary Days Non-usage    2            15           15           35
Level 1 Days Non-usage            0            60           60           70
Command or Auto Migrate           BOTH         BOTH         BOTH         BOTH
# GDG Elements on Primary         1            1            —            1
Rolled-off GDS Action             EXPIRE       EXPIRE       EXPIRE       EXPIRE
Backup Frequency                  —            0            —            0
Few characteristics of batch data sets can be generalized. Consider these variances
before you automate space and availability management with DFSMShsm:
v Batch data sets might be inactive for a long time, but when needed, must be
available immediately. This type of data set should not be migrated to tape.
v Batch data set sizes might vary greatly, based on the production cycle. These
data sets should be assigned a management class with Partial Release=NO to
inhibit DFSMShsm from releasing space.
v Batch data sets might only be needed for a short period. These data sets might
consist of reports or error listings, and be eligible for early deletion.
You should supplement the basic set of management classes with others that reflect
the data set requirements for specialized applications.
If you have a large, complex batch workload and want to use SMS performance
services without analyzing the management requirements of the data, you can
migrate these data sets to system management and assign a management class that
does not cause any automatic space or availability management actions to occur.
Always assign a management class, because if one is not assigned, the default
management class attributes or DFSMShsm defaults are used. Once you have
migrated your batch data, you can design the management classes and the
management class ACS routines to accommodate batch data. You can assign the
management classes for batch by executing DFSMSdss's CONVERTV, using the
REDETERMINE option for volumes containing batch data.
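A hedged sketch of such a CONVERTV statement follows; the DD name is hypothetical and would point at a volume containing batch data:

```
  CONVERTV -
    SMS -
    REDETERMINE -
    DDNAME(BATVOL)     /* DD describing a volume of batch data */
```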
Figure 74. Management Class ACS Routine Fragment for Batch Data
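Figure 74 itself is not reproduced in this excerpt. Based on the preceding description of GDG handling, its logic might resemble the following sketch (the FILTLIST name BACKUP_GDG is hypothetical):

```
WHEN (&DSTYPE = 'GDS')               /* generation data sets          */
  SELECT
    WHEN (&DSN = &BACKUP_GDG)        /* hypothetical mask for backup  */
      SET &MGMTCLAS = 'GDGBKUP'      /* GDGs: migrate quickly         */
    OTHERWISE
      SET &MGMTCLAS = 'GDGPROD'      /* keep current version on       */
  END                                /* primary until next is created */
```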
You do not need to add logic to the storage group ACS routine for batch data.
Users can assign the data classes created to replace the GDG pattern DSCBs by
specifying the data class, using the DATACLAS parameter on the DD statement for
the GDS. Or, you can automate assignment of data classes by testing the &DSTYPE
variable for the value GDS and the data set name.
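A hedged sketch of that automated test in the data class ACS routine (the data set name mask and the class name GDGDATA are hypothetical):

```
WHEN (&DSTYPE = 'GDS' AND &DSN = 1STAPPL.**)   /* hypothetical mask  */
  SET &DATACLAS = 'GDGDATA'                    /* hypothetical class */
```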
The data is moved to the PRIMExx and LARGExx storage groups, based on the
size of the data sets using DFSMSdss COPY and the source volumes that the
application's data resides on as another filter. Figure 75 on page 143 shows sample
JCL that you can use to migrate the first application's data to system management.
The sample job uses the following parameters:
v The LOGINDYNAM(D65DM2,D65DM3) parameter tells DFSMSdss to move all
data sets on volumes D65DM2 and D65DM3.
v The DATASET(INCLUDE(1STAPPL.**)) indicates that, for the volumes selected,
only data sets having the 1STAPPL HLQ are copied.
You cannot migrate to striped data sets with DFSMSdss. You can exclude these
data sets from the initial COPY and then recreate them as striped data sets through
the use of standard z/OS utilities, such as access method services or IEBGENER.
COPY DATASET(INCLUDE(1STAPPL.**)) -
LOGINDYNAM(D65DM2,D65DM3) -
ALLDATA(*) -
ALLEXCP -
CATALOG -
SPHERE -
DELETE -
PURGE -
TGTALLOC(SOURCE) -
TOLERATE(IOERROR) -
WAIT(2,2)
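The paragraph above notes that excluded data sets can be re-created as striped data sets with standard utilities. A hedged IEBGENER sketch follows; the data set names and the data class name STRIPE are hypothetical, while FASTSEQ is the striping storage class described earlier:

```
//RESTRIPE EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSUT1   DD  DSN=1STAPPL.BIGSEQ.DATA,DISP=OLD
//SYSUT2   DD  DSN=1STAPPL.BIGSEQ.STRIPED,DISP=(NEW,CATLG),
//             STORCLAS=FASTSEQ,DATACLAS=STRIPE
//SYSIN    DD  DUMMY
```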
If you temporarily remove data sets for a batch application from system
management, you can still use the DATACLAS parameter to supply the data set
characteristics. Data classes apply to both system-managed and
non-system-managed data sets.
Table 13 on page 147 summarizes the SMS services for database data.
1. DFW refers to the DASD fast write extended function of a cache-capable 3990 storage
control.
You can design your ACS routines so that SMS restricts the allocation of data sets
in CICS, IMS, and DB2 storage groups to production databases and selected system
data sets. Only specially-identified users, such as the database or storage
administrator can allocate data in these storage groups. Most data sets that support
the database environment, including the recovery and system data sets, are
directed to the PRIMExx storage group. The storage and database administrators
have special SMS authority to assign data sets with critical performance and
availability requirements to specific volumes. Dual copy and RAID technology
provide high availability for selected data sets that are not duplexed by the
database management system. Use DASD fast write and cache to provide superior
performance for databases and recovery data sets.
DFSMS supplements the backup and recovery utilities provided by the database
management system as follows:
v DFSMSdss uses concurrent copy capability and virtual concurrent copy support
to create point-in-time backups.
v Database utilities (except for CICS) invoke DFSMSdss for concurrent copy and
virtual concurrent copy support for point-in-time backups and
backup-while-open.
v DFSMShsm backs up system data sets and end-user database data that is less
critical than production database data. You can use the backup-while-open
function with concurrent copy or virtual concurrent copy support to back up
CICS VSAM data sets while they are open for update.
v DFSMShsm carries out direct migration to migration level 2 for archived
recovery data sets on DASD.
v End-user and test database data is migrated by DFSMShsm through the storage
hierarchy, based on database data usage.
For CICS, IMS, and DB2, you must ensure that any database data sets you plan to
system-manage are cataloged using the standard search order. In particular, check
image copies and logs to ensure that they are cataloged.
CICS data can benefit from compression, extended format, extended addressability,
secondary volume space amount, and dynamic cache management enhancements
when the data sets are KSDSs. Batch programs accessing this data can benefit from
system-managed buffering.
For IMS data, consider converting any OSAM data sets to VSAM. By converting to
VSAM, you can benefit from enhanced dynamic cache management. IMS Version 5
supports enhanced dynamic cache management for OSAM data sets. KSDSs being
used by IMS can be extended format but cannot be compressed because IMS uses
its own form of compression and cannot tolerate compression performed by
DFSMS.
System data sets, such as the CICS availability manager data sets, must also be
available during restart and are good candidates for allocation on fault-tolerant
devices.
v The CAVM message and control data sets require a storage class with
guaranteed space, because they should be placed on different volumes.
v Use DASD fast write for intrapartition transient data to improve transaction
response time.
v Transactions using auxiliary temporary data can benefit from using cache and
DASD fast write.
Three tables have been developed to identify the types of CICS database data sets
having high availability and performance requirements. Table 14 shows the
relationship between the data set, data type, and LLQ that are used to identify the
data sets having high availability requirements in the storage class ACS routine.
Table 15 shows the relationship between the data set, data type, and LLQ that
identify data sets having high write activity in the storage class ACS routine.
Table 14. CICS Data Sets Requiring High Availability

CICS data set                      Data set type   Low-level qualifier
System logs                        Recovery        DFHJ01A/B
Restart data set                   Recovery        DFHRSD
CICS Availability Manager (CAVM)   Recovery        DFHXRMSG (message),
                                                   DFHXRCTL (control)
CICS system definition             System          DFHCSD
Production databases               Databases       —
All other CICS data sets are assigned the STANDARD storage class. SMS storage
classes assigned to these database data sets contain performance objectives and
performance and availability requirements.
SMS ensures that data sets assigned to the DBCRIT storage class are:
v Placed on fault-tolerant devices, because availability is CONTINUOUS
v Using point-in-time copy, because accessibility is CONTINUOUS
Using the Guaranteed Space attribute to specify volumes is not recommended for
most of the data sets for the following reasons:
v SMS uses randomizing techniques to select volumes, which should satisfy most,
if not all, allocations. The randomizing techniques tend to spread data sets
across the available volumes in a storage group.
v With the IBM RVA and the ESS, multiple logical volumes can be mapped to a
physical volume due to their RAID architecture, volume capacity, and, if
applicable, their log structured array architecture.
v The IBM ESS has large cache structures and sophisticated caching algorithms. It
is capable of providing much greater throughput. Its capabilities of parallel
access volume and multiple allegiance allow many concurrent accesses to the
same data. Therefore, specific volume placement and data set separation used
for performance reasons should no longer be required.
SMS attempts to allocate these data sets behind a cache storage control and use
dynamic cache management to deliver good I/O service.
SMS ensures that data sets assigned to the FASTWRIT storage class have
concurrent copy, because accessibility is CONTINUOUS.
Refer to Table 23 on page 161 for the list of attributes associated with these storage
classes.
Figure 76. FILTLIST Section for CICS from Storage Class ACS Routine
CICS database data sets are first identified by the HLQ, PCICS, UCICS, or TCICS.
Selected system and recovery database data sets are identified and assigned a
storage class having attributes set to provide the required services. In this sample
routine, production databases are given DASD fast write and cache services. User
databases receive better than average service and, if allocated behind a
cache-capable 3990 storage control, become may-cache candidates. In keeping with
our ACS coding recommendations, once a data type has been identified, it is
assigned an appropriate storage class and then the routine is exited. Figure 78 on
page 153 shows the coding for the CICS database data sets.
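Figure 76 is not reproduced in this excerpt. The FILTLIST variables referenced by the SELECT section might be defined along these lines, given the PCICS/UCICS/TCICS naming convention and the LLQs in Table 14; the exact masks are hypothetical:

```
FILTLIST CICS              INCLUDE(PCICS.**,UCICS.**,TCICS.**)
FILTLIST CICS_PROD_CSD     INCLUDE(PCICS.**.DFHCSD)
FILTLIST CICS_PROD_RESTART INCLUDE(PCICS.**.DFHRSD)
FILTLIST CICS_USER_DB      INCLUDE(UCICS.**)        /* hypothetical */
```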
/**********************************************************************/
/* Start of CICS Select */
/**********************************************************************/
WHEN (&DSN = &CICS) /* Select CICS datasets, */
DO /* production and test */
SELECT
WHEN (&DSN = &CICS_PROD_CAVM OR /* Dual Copy capability */
&DSN = &CICS_PROD_RESTART OR /* for CICS Avail. Mgr. */
&DSN = &CICS_PROD_CSD) /* Restart, System Def. */
DO
SET &STORCLAS = 'DBCRIT'
EXIT
END
WHEN (&DSN = &CICS_PROD_TEMP OR /* Cache temporary storage*/
&DSN = &CICS_PROD_LIB) /* and applic. libraries */
DO
SET &STORCLAS = 'FAST'
EXIT
END
WHEN (&DSN = &CICS_PROD_INTRA OR /* Use DASD fast write for*/
&DSN = &CICS_PROD_DB) /* intrapartition data and*/
/* some production DBs */
DO
SET &STORCLAS = 'FASTWRIT'
EXIT
END
WHEN (&DSN = &CICS_USER_DB) /* Give user databases */
DO /* better than average */
SET &STORCLAS = 'MEDIUM' /* performance */
EXIT
END
OTHERWISE /* Give all other datasets*/
DO /* average performance */
SET &STORCLAS = 'STANDARD'
EXIT
END
END
END
/**********************************************************************/
/* End of CICS Select */
/**********************************************************************/
Figure 78. SELECT Section for CICS from Storage Class ACS Routine
IMS supports two types of databases: DL/1 and DEDB. IMS can duplex DEDB
databases; however, DL/1 databases are not duplexed. Use fault-tolerant devices to
protect critical DL/1 databases.
Table 17 shows the relationship between the data set, data type, and LLQ that
identify the data sets having high availability requirements in the storage class
ACS routine. The data sets listed in Table 17 are not duplexed by IMS.
Table 17. IMS Data Sets Requiring High Availability

IMS data set                      Data set type   Low-level qualifier
Restart Data Set (RDS)            Recovery        RDS
Message Queues Data Set (MSG Q)   Recovery        QBLKS, SHMSG, LGMSG
DL/1 databases                    Databases       —
Several IMS data types benefit from DASD fast write. For example, Extended
Recovery Facility (XRF) users, with the RECON and OLDS, benefit from DASD fast
write because multiple systems use these database data sets. Also, consider using
DASD fast write for the IMS scratchpad area (SPA), for IMS users with heavy
conversational workloads.
The access reference pattern for the database can affect the caching or DASD fast
write benefits for production databases. Databases having high update activity are
candidates for DASD fast write.
Table 18 shows the relationship between the data set, data type, and LLQ that
identify the data set in the storage class ACS routine.
Table 18. IMS Data Sets Having High Write Activity
IMS data set                       Data set type   Low-level qualifier
Write Ahead Data Set (WADS)        Recovery        WADS0/1
Online Log Data Set (OLDS)         Recovery        OLPxx/OLSxx
Recovery Control Data Set (RECON)  Recovery        RECON
Scratchpad area (SPA)              System          SPA
IMS databases that are predominantly read can also benefit from cache services.
Table 19 on page 155 summarizes this potential benefit for both DEDB and DL/1
databases.
All other IMS data sets are assigned the STANDARD storage class.
Refer to Table 23 on page 161 for the list of attributes associated with these storage
classes.
Figure 79. FILTLIST Section for IMS from Storage Class ACS Routine
This is a common section run before processing IMS database data sets by type. It
lets system programmers, database administrators, and storage administrators
assign storage classes externally, and even allocate these data sets as
non-system-managed.
Figure 80 on page 156 shows the ACS code.
Figure 80. ACS Code to Permit Special Users to Override SMS Allocation
Figure 81 on page 157 shows the logic to process IMS database data sets by type.
Only production database data sets are assigned specialized services in this
routine. The first character of the data set's HLQ defines a production (P) or test
(T) database data set. The three low-order characters of the HLQ are set to IMS.
The DSN mask defined in the FILTLIST section for the data set type describes the
data set type in the LLQ. Consider the following when deciding on your naming
conventions:
v The first WHEN clause verifies that the data set belongs to an IMS system by
checking the FILTLIST variable, IMS.
v Each uniquely named data set type is assigned the recommended storage class.
v Any IMS production database data set not specifically identified, or any test IMS
database data set, is assigned STANDARD storage services.
/**********************************************************************/
/* Start of IMS Select */
/**********************************************************************/
WHEN (&DSN = &IMS) /* Select all IMS data */
DO /* sets, including test */
SELECT
WHEN (&DSN = &IMS_PROD_RESTART OR /* Dual copy capability */
&DSN = &IMS_PROD_QUEUE OR /* for restart, message */
&DSN = &IMS_PROD_SMSG OR /* queues data sets */
&DSN = &IMS_PROD_LMSG)
DO
SET &STORCLAS = ’DBCRIT’
EXIT
END
WHEN (&DSN = &IMS_PROD_WADS OR /* Use DASD fast write */
&DSN = &IMS_PROD_OLDS OR /* for write ahead, online*/
&DSN = &IMS_PROD_SPA) /* log and scratch pad */
DO /* area */
SET &STORCLAS = ’FASTWRIT’
EXIT
END
WHEN (&DSN = &IMS_PROD_DL1) /* Cache DL/1 databases */
DO
SET &STORCLAS = ’FAST’
EXIT
END
OTHERWISE /* Give all other datasets*/
DO /* average performance */
SET &STORCLAS = ’STANDARD’
EXIT
END
END
END
/**********************************************************************/
/* End of IMS Select */
/**********************************************************************/
Figure 81. SELECT Section for IMS from Storage Class ACS Routine
Table 20 shows the relationship between the data set, data set type, and qualifier
that identify the data set in the storage class ACS routine.
Table 20. DB2 Data Sets Requiring High Availability
DB2 data set Data set type 3rd-level qualifier
DB2 catalog System DSNDB06
DB2 directory System DSNDB01
The active logs can benefit from DASD fast write and cache services, because DB2
transactions can wait for logging before completing. The access reference pattern
for the database can affect the caching or DASD fast write benefits for production
databases. The DFSMS I/O statistics provide long term measurement of data set
accesses, response components, and cache statistics. They can be used in
application tuning or batch window reduction. You can benefit significantly from
I/O priority scheduling in a mixed workload environment. For example, to achieve
consistent response times for transaction processing, you can prioritize transaction
processing reads above query reads and DB2 asynchronous writes.
Table 21 shows the relationship between the data set, data type, and LLQ that
identifies the data set in the storage class ACS routine.
Table 21. DB2 Data Sets Having High Write Activity
DB2 data set Data set type Low-level qualifier
Active Log Recovery LOGX/Y
Boot Strap System BSDSX/Y
The access reference pattern for the database can affect the caching benefits for
production DB2 databases. Table 22 summarizes this result.
Table 22. DB2 Data Sets Having High Read Activity
DB2 data set Data set type 2nd-level qualifier
Production databases Databases DSNDBC/D
You can prevent inadvertent migration of DB2 data in one of the following ways:
v Assign a management class with Command or Auto Migrate = None.
v Assign a management class with Command or Auto Migrate = Command. This
specification prevents DFSMShsm from automatically migrating DB2 data during
the primary space management cycle, but it allows you to migrate test databases
on command.
v Assign a management class with Primary Days Non-usage > nn, where nn is
greater than the number of days that these data sets stay open.
v Modify the relevant DB2 database settings. Contact your IBM DB2 Service
Support Representative for further information.
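The first of these approaches can be sketched as a management class ACS routine fragment. In this sketch, the FILTLIST mask and the NOMIG class name are hypothetical, not classes defined in this book; NOMIG stands for a management class defined with Command or Auto Migrate = None.

```
/**********************************************************************/
/* Sketch only: the mask and the NOMIG class name are hypothetical.   */
/**********************************************************************/
FILTLIST DB2_PROD_DB INCLUDE(PDB2.DSNDB%.**)
SELECT
  WHEN (&DSN = &DB2_PROD_DB)        /* Keep DB2 production      */
    DO                              /* database data on primary */
      SET &MGMTCLAS = ’NOMIG’       /* storage: Command or Auto */
      EXIT                          /* Migrate = None           */
    END
END
```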
Figure 82. FILTLIST Section for DB2 from Storage Class ACS Routine
Figure 83 on page 160 shows the common logic that lets privileged users, such as
the database administrator, override SMS allocation decisions or even to allocate
the data set as a non-system-managed data set.
Figure 85 on page 162 shows the logic to assign storage classes to DB2 database
data sets. Only production database data sets are assigned specialized services in
this routine. The first character of the data set's HLQ denotes whether the data set
is production (P), test (T), or end-user (U). The three low-order characters of the
HLQ are set to DB2. The DSN mask defined in the FILTLIST section for the data
set type describes it in the LLQ.
Table 23 shows the attributes for the storage classes assigned to database data sets.
Table 23. Storage Classes for Database Data
Attributes                       Storage Classes
Name                             MEDIUM     FAST        DBCRIT      FASTWRIT
Direct Millisecond Response      10         5           10          5
Direct Bias                      —          —           W           W
Sequential Millisecond Response  10         5           10          5
Sequential Bias                  —          —           —           W
Availability                     STANDARD   STANDARD    CONTINUOUS  STANDARD
Accessibility                    STANDARD   CONTINUOUS  CONTINUOUS  CONTINUOUS
Guaranteed Space                 NO         NO          NO          NO
Guaranteed Synchronous Write     NO         NO          NO          NO
/**********************************************************************/
/* Start of DB2 Select */
/**********************************************************************/
WHEN (&DSN = &DB2) /* Select DB2 data sets */
DO
SELECT
WHEN (&DSN = &DB2_PROD_LOG) /* Use fast write for */
DO /* active logs */
SET &STORCLAS = ’FASTWRIT’
EXIT
END
WHEN (&DSN = &DB2_PROD_CATALOG OR /* Dual copy for catalog */
&DSN = &DB2_PROD_DIRECTRY OR /* directory and boot */
&DSN = &DB2_PROD_BSDS) /* strap data set */
DO
SET &STORCLAS = ’DBCRIT’
EXIT
END
OTHERWISE /* Give all other DB2 data*/
DO /* average performance */
SET &STORCLAS = ’STANDARD’
EXIT
END
END
END
Figure 85. SELECT Section for DB2 from Storage Class ACS Routine
Then, ensure that your data class ACS routine for DB2 includes the following
statements:
FILTLIST DB2WH INCLUDE(DWH*.**)     /* assuming these have a    */
                                    /* high-level qualifier     */
                                    /* of DWH...                */
SELECT
  WHEN (&DSN = &DB2WH)
    DO
      SET &DATACLAS = ’LDSEA’
      EXIT
    END
  OTHERWISE
...
You should not use DFSMShsm to manage most production database data. Instead,
assign the NOACT management class to these data sets. NOACT inhibits
DFSMShsm space and availability management. Specifically, Auto Backup is set to
NO so that DFSMShsm does not back up the data set, Admin or User Command
Backup is set to NONE to prohibit manual backup commands, and expiration
attributes are set to NOLIMIT to prevent data set deletion.
Although production database data does not receive automatic backup service, you
can use DFSMSdss to make point-in-time copies of production database data sets.
Accessibility is set to CONTINUOUS for all storage classes assigned to production
database data sets to ensure that the data set is allocated to a point-in-time capable
volume.
Database data that has less critical availability requirements, typically test or
end-user databases, benefits from system management using DFSMShsm.
Additionally, selected data types for production systems can be effectively
managed using SMS facilities.
For CICS/VSAM systems, extrapartition transient data, test and end-user database
data can be managed with DFSMShsm. Extrapartition transient data is directed to
DFSMShsm's migration level 2 by assigning the DBML2 management class to these
data types. The attributes for DBML2 keep data sets on primary storage for two
days (Migrate Primary Days Non-usage=2) and, if not used, they are migrated to
tape (Level 1 Days Non-usage=0).
End-user and test data sets are assigned the DBSTAN management class. This class
is different from the STANDARD management class because backup copies for
data sets assigned to it are retained much longer than average data sets (Retain
Days Only Backup Version=400).
After doing a data set restore using DFSMSdss (either directly or driven by
DFSMShsm), the databases need to be brought to the point of failure or to a point
of consistency using forward recovery logs. This can be achieved using CICSVR for
CICS VSAM File control data sets. CICS and CICSVR support backup-while-open
using either concurrent copy or virtual concurrent copy support.
For IMS systems, DFSMShsm can manage change accumulation logs and image
copies. These data sets can stay on primary storage for a short time and then
migrate directly to tape. The DBML2 management class is assigned.
For DB2 systems, you can manage archive logs and image copies with DFSMShsm.
These data sets can be retained on primary storage for a short period of time and
then migrated directly to tape. DBML2 management class is provided for these
data set types. End-user database data can also be managed. These data sets are
assigned the DBSTAN management class.
Table 24 shows the attributes for the management classes assigned to database data
sets.
Table 24. Management Classes for Database Data
Attributes                                Management Classes
Name                                      DBML2                DBSTAN               NOACT
Expire after Days Non-usage               NOLIMIT              NOLIMIT              NOLIMIT
Expire after Date/Days                    NOLIMIT              NOLIMIT              NOLIMIT
Retention Limit                           NOLIMIT              NOLIMIT              NOLIMIT
Partial Release                           COND IMMED           NO                   NO
Migrate Primary Days Non-usage            2                    15                   —
Level 1 Days Non-usage                    0                    60                   —
Command or Auto Migrate                   BOTH                 BOTH                 NONE
# GDG Elements on Primary                 1                    1                    —
Rolled-off GDS Action                     EXPIRE               EXPIRE               —
Backup Frequency                          1                    0                    —
Number Backup Versions, Data Set Exists   2                    3                    —
Number Backup Versions, Data Set Deleted  1                    1                    —
Retain Days Only Backup Version           60                   400                  —
Retain Days Extra Backup Versions         30                   100                  —
Admin or User Command Backup              BOTH                 BOTH                 NONE
Auto Backup                               YES                  YES                  NO
Backup Copy Technique                     CONCURRENT REQUIRED  CONCURRENT REQUIRED  STANDARD
Figure 86. FILTLIST Section for Database from Management Class ACS Routine
In the database logic section of the management class routine, the data set name is
matched with the two FILTLIST variables and, if there is a match, the
corresponding management class is assigned. In this routine, any production
database data sets not specifically identified as managed data types are assigned
the NOACT class. Figure 87 on page 166 shows the ACS coding segment for
database data in the management class ACS routine.
/**********************************************************************/
/* Start of Mainline SELECT */
/**********************************************************************/
SELECT
WHEN (&GROUP = &SPECIAL_USERS && /* Let system programmers */
&MGMTCLAS = &VALID_MGMT_CLASS) /* assign externally- */
DO /* specified management */
SET &MGMTCLAS = &MGMTCLAS /* class */
EXIT
END
WHEN (&DSN = &DBML2) /* Send CICS extra- */
DO /* partition, DB2 image */
SET &MGMTCLAS = ’DBML2’ /* copies and archive logs,*/
EXIT /* IMS change accumulation */
END /* and image copies to ML2 */
END
/**********************************************************************/
/* End of Mainline SELECT */
/**********************************************************************/
END /* End of Management Class Procedure */
Figure 87. Management Class ACS Routine Sections for Database Data
DB2 lets the database administrator define a collection of volumes that DB2 uses to
find space for new data set allocations. This collection is known as a DB2
STOGROUP. If you use DB2 STOGROUPs to manage DB2 allocation, ensure that
your strategy for DB2 database data migration does not conflict with your DB2
STOGROUP definitions.
Table 25 shows the attributes for the storage groups defined for CICS/VSAM, IMS,
and DB2 database data. No space management or backup services are performed
by DFSMShsm for these storage groups. However, volumes in the database storage
groups are dumped by DFSMShsm for local recovery required because of a
hardware failure. These volumes are also dumped and stored offsite in preparation
for a site disaster.
Table 25. Storage Groups for Database Data
Attributes                    Storage Groups
Name                          CICS              DB2               IMS
Type                          POOL              POOL              POOL
Auto Migrate                  NO                NO                NO
Auto Backup                   NO                NO                NO
Auto Dump                     YES               YES               YES
Dump Class                    DBONSITE, DBOFFS  DBONSITE, DBOFFS  DBONSITE, DBOFFS
High Threshold                75                75                75
Low Threshold                 60                60                60
Guaranteed Backup Frequency   NOLIMIT           NOLIMIT           NOLIMIT
Figure 88 on page 168 shows the filters that identify the production database data
sets using the data set name's HLQ and isolate them in their own storage group.
You or the database administrator allocate the production database data using a
storage class to specify the data's performance and availability requirements. The
storage class also indicates that the database is being allocated by an administrator
authorized to place data in this storage group. Figure 89 on page 168 shows the
ACS routine statements that identify the production database and database storage
class, and assign the storage group.
Only selected system, recovery, and production database data is selected in the
storage group ACS routine to be allocated in the database storage groups. All other
database data is allocated on volumes in the PRIMExx or LARGExx storage group.
/**********************************************************************/
/* Start of FILTLIST Statements */
/**********************************************************************/
FILTLIST CICS INCLUDE(PCICS*.**)
EXCLUDE(**.DFHXRMSG,**.DFHXRCTL,**.LOADLIB,
**.COB2CICS,**.COB2LIB,**.PLILINK,
**.DFHJ01%,**.DFHRSD,**.DFHCSD,
**.DFHINTRA,**.DFHTEMP)
FILTLIST DB2 INCLUDE(PDB*.**)
EXCLUDE(*.*.DSNDB01.**,**.LOG*)
FILTLIST IMS INCLUDE(PIMS*.**)
EXCLUDE(**.LGMSG,**.OL*,**.QBLKS,
**.RDS,**.SHMSG,
**.SPA,**.WADS*,**.RECON)
FILTLIST SPECIAL_USERS INCLUDE(’SYSPROG’,’STGADMIN’,’DBA’)
/**********************************************************************/
/* End of FILTLIST Statements */
/**********************************************************************/
Figure 88. FILTLIST Section for Database from Storage Group ACS Routine
Figure 89. SELECT Section for Database from Storage Group ACS Routine
To do this, allocate all of the partitions in a single IEFBR14 job step, using JCL. As
long as there is an adequate number of volumes in the storage groups and the
volumes are not above the allocation threshold, the SMS allocation algorithms,
together with SRM, ensure that each partition is allocated on a separate volume.
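This technique can be sketched in JCL as follows. The job name, data set names, storage class, and space values are illustrative assumptions only.

```
//ALLOCPRT JOB (ACCT)
//* Sketch: allocate the partitions in one IEFBR14 step so that SMS
//* can place each partition on a separate volume. Names and the
//* STORCLAS value are illustrative, not defined in this book.
//STEP1    EXEC PGM=IEFBR14
//PART1    DD DSN=PDB2.DSNDBD.PAYROLL.PART1,DISP=(NEW,CATLG),
//            STORCLAS=MEDIUM,SPACE=(CYL,(100,10))
//PART2    DD DSN=PDB2.DSNDBD.PAYROLL.PART2,DISP=(NEW,CATLG),
//            STORCLAS=MEDIUM,SPACE=(CYL,(100,10))
//PART3    DD DSN=PDB2.DSNDBD.PAYROLL.PART3,DISP=(NEW,CATLG),
//            STORCLAS=MEDIUM,SPACE=(CYL,(100,10))
```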
System-managed tape allows you to define requirements for your tape data in
logical, rather than physical, terms. Without requiring changes to programs or JCL,
you can define the policies that the system uses to map those logical requirements
to physical tape resources, which can include mixed device types (for example,
IBM 3490 or 3490E transports), mixed media (for example, cartridge system tapes
or enhanced capacity cartridge system tapes), and tape library dataservers.
This chapter describes the tasks you need to complete as you plan for
system-managed tape. It shows you how to optimize your current tape
environment and convert your tape volumes to system management. See
Chapter 11, “Optimizing Tape Usage,” on page 177 and Chapter 12, “Managing
Tape Volumes,” on page 219 for more information on implementation
considerations and sample plans for managing tape data and volumes under SMS.
Tip: With TMM, you need to extensively analyze tape mounts, modify ACS
routines to redirect allocations intended for tape to a DASD pool, and then migrate
them to tape with the DFSMShsm interval migration. To avoid this complicated
process, you can use the IBM Virtual Tape Server (VTS) to fill tape media, reduce
tape mounts, and save system resources. See “Using the Virtual Tape Server (VTS)
to Optimize Tape Media” on page 179 for more information.
Tape mount management is a methodology for managing tape data sets within the
DFSMS storage hierarchy:
1. Tape data sets are categorized according to size, pattern of usage, and other
criteria, so that appropriate DFSMS policies can be assigned to tape mount
management candidates.
2. Data sets written to tape are intercepted at allocation and, if eligible for tape
mount management, redirected to a system-managed DASD buffer. The buffer
serves as a staging area for these data sets until they are written to tape. The
location of the data is transparent to the application program.
3. DFSMShsm periodically checks the occupancy of the DASD buffer storage
group to ensure that space is available when needed and migrates data sets to
a lower level of the storage hierarchy when they are no longer required on
primary DASD volumes.
4. DFSMShsm eventually moves the data to tape, using single-file format and data
compaction to create full tape cartridges.
This process can significantly reduce tape mounts and the number of cartridges
required to store the data. Operations can benefit from a decreased number of
random tape mounts, while applications benefit from improved job throughput
because the jobs are no longer queued up on tape drives.
Restriction: You cannot use tape mount management for OAM objects that are
written to tape.
Although JCL changes are usually not necessary for implementing tape mount
management, you do need to determine the effects of jobs that leave tape data sets
uncataloged, and special expiration date codes that are used by some tape
management systems, if these practices exist in your current environment. See
Chapter 11, “Optimizing Tape Usage,” on page 177 and Chapter 12, “Managing
Tape Volumes,” on page 219 for more information on planning considerations.
The volume mount analyzer uses your installation's SMF data to analyze tape
mount activity and to produce reports that help you perform the following tasks:
v Identify trends and other information about tape mount events, including data
set name, job, program, and data set size (bytes transferred).
v Evaluate the tape hardware configuration.
v Quantify the benefits of tape mount management in terms of library and tape
mount reduction.
v Determine which data sets are good candidates for tape mount management.
v Determine data class and management class requirements for tape mount
management candidates.
v Develop ACS routine filters to select tape mount management candidates and
exclude other data sets that must remain on tape.
v Determine the size of the DASD buffer and the high and low thresholds needed
for the buffer's storage group.
To run the volume mount analyzer, you must have DFSORT installed.
Related Reading:
v For procedures on performing a volume mount analyzer study and interpreting
the results, see z/OS DFSMS Using the Volume Mount Analyzer.
v For information on implementing tape mount management using the volume
mount analyzer reports, see Chapter 11, “Optimizing Tape Usage,” on page 177
and Chapter 12, “Managing Tape Volumes,” on page 219.
With the Automated Tape Library Dataserver, when you allocate a new data set in
the automated environment, the system selects an appropriate tape cartridge from
a scratch tape pool in the Automated Tape Library Dataserver. If you request a
data set that is already stored on a cartridge in a tape library dataserver, DFSMS
enables the dataserver to automatically locate, mount, demount, and store the
correct tape cartridge for you. With the manual tape library, scratch pool support
can be provided by your tape management system.
DFSMSrmm also plays an important role in this environment. It enables the tape
library dataserver to automatically recycle tapes back into the scratch pool when
the data sets on the tapes expire. This eliminates the need for you to manually
change these tapes back into scratch tapes. DFSMSrmm invokes the object access
method (OAM) to update the volume's status during the recycle process.
Normal SMS processing ignores the UNIT parameter. So, JCL or dynamic
allocations could be specifying unit names that no longer exist in the system.
However, if the new keyword SMSHONOR (in JCL) or DALSMSHR (for dynamic
allocations) is coded along with a valid device name or esoteric name, device
allocation will attempt to allocate to the devices that are common to the UNIT and
device pools selected by SMS.
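As a hedged sketch only, an allocation that uses SMSHONOR might look like the following. The data set name and the esoteric name TAPE3490 are illustrative, and you should verify the exact UNIT subparameter syntax in the z/OS MVS JCL Reference.

```
//* Sketch: ask device allocation to honor the UNIT specification
//* even though the allocation is SMS-managed.
//BKUP     DD DSN=PROD.WEEKLY.BACKUP,DISP=(NEW,CATLG),
//            UNIT=(TAPE3490,,SMSHONOR)
```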
Related Reading: You may need more information during the various stages of the
installation and conversion process. See Chapter 11, “Optimizing Tape Usage,” on
page 177, Chapter 12, “Managing Tape Volumes,” on page 219, and the following
publications:
v For information about JES3 support for tape library dataservers, see z/OS JES3
Initialization and Tuning Guide.
v For more information about using OAM to define system-managed tape libraries
and using the Manual Cartridge Entry programming interface, see z/OS DFSMS
OAM Planning, Installation, and Storage Administration Guide for Tape Libraries.
v For information on how certain DFSMSdfp installation exits affect library
management, see z/OS DFSMS Installation Exits.
v For information about DFSMSrmm support for manual cartridge entry
processing and for the pre-ACS interface, see z/OS DFSMSrmm Implementation
and Customization Guide.
You can implement these tape mount management techniques without changing
your JCL streams or backup and recovery procedures. Tape data sets that cannot
be addressed by tape mount management techniques can also be system-managed.
Refer to Chapter 12, “Managing Tape Volumes,” on page 219 for information on
these data sets.
[Flow diagram: the management class ACS routine and storage group ACS routine
direct data sets assigned storage class 3490E to the TMMBUF90 storage group
(active volumes) and the TMMBFS90 storage group (quiesced volumes);
DFSMShsm migrates data when %Used exceeds the interval threshold and when
actual days exceed LEVEL 1 DAYS.]
The DASD buffer is a staging area for these tape mount management candidates
before they are written to tape. DFSMShsm periodically checks the occupancy of
the DASD buffer's storage group. If the allocated space exceeds the midway point
between low and high threshold (if you specified interval migration) for the
storage group, DFSMShsm moves data sets to its migration level 1 or 2 volumes to
bring the buffer down to the low threshold.
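As a worked example, taking a high threshold of 75 and a low threshold of 60 purely as sample values, the interval migration midway point would be:

```latex
\text{midpoint} = \frac{\text{low} + \text{high}}{2} = \frac{60 + 75}{2} = 67.5\%
```

When occupancy exceeds 67.5 percent, DFSMShsm migrates data sets until occupancy falls back to the 60 percent low threshold.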
Data set movement through the storage hierarchy is based on the management
class that you assign to the data set. DFSMShsm uses single-file format and data
compaction technologies to create a full cartridge for each migration level 2 tape
volume before requesting another.
Very large and offsite tape data sets that must remain in their current form are
written directly to tape. You can also write them in a compacted form to make
better use of both the media and cartridge subsystem.
VTS lets you define up to 32 virtual tape drives to the host. Not visible to the host
are up to 6 physical tape devices. When the host writes to one of the virtual
devices, it actually writes to a virtual volume residing on the VTS DASD buffer.
The VTS, transparent to the host, copies the entire virtual volume onto a logical
volume that is then mapped to physical stacked volumes known only to the VTS.
These logical and physical volumes cannot be ejected directly. However, VTS offers
many other advantages. For example, VTS:
v Does not need a DASD buffer
v Does not use DFSMShsm facilities to fully stack tapes
v Does not require double data movement in the host
v Does not require changes to ACS routines
v Fully uses tape media due to the structure of the virtual volumes on the physical
tape
VTS avoids the extensive analysis required to use tape mount management. You
can, however, use VMA studies to use VTS more effectively, since these studies
identify useful information, such as data sets needing to be stored offsite, or
temporary data sets that can be written to DASD and expired.
If a data set that is eligible for tape mount management references an undefined
unit name, change the JCL or dynamic allocation to specify a unit name that exists
in the configuration and contains tape devices. If you do not, the system might
replace a valid data set with an empty data set, causing data loss.
You might want to allow users to use JCL to delete a tape data set that has been
directed to system-managed DASD. You can use the OVRD_EXPDT keyword in
the IGDSMSxx member of the parmlib, which specifies whether the expiration date
should always, or never, be overridden when deleting system-managed DASD data
sets. You should use the OVRD_EXPDT keyword only in the following situations:
v When management class cannot be used
v For use with tape allocations redirected to DASD
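A sketch of the corresponding IGDSMSxx parmlib entry follows; the ACDS and COMMDS data set names are placeholders for installation-specific values.

```
SMS ACDS(SYS1.SMS.ACDS)          /* placeholder data set name  */
    COMMDS(SYS1.SMS.COMMDS)      /* placeholder data set name  */
OVRD_EXPDT(YES)                  /* always allow deletion to   */
                                 /* override the expiration    */
                                 /* date for system-managed    */
                                 /* DASD data sets             */
```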
For a single processor with GRS not active, data sets that should be written
directly to migration level 2 must be at least one day old before being migrated by
DFSMShsm.
Offsite data sets include interchange, disaster recovery and vital records. Disaster
and vital records data is shipped to a remote retention facility. Interchange data
sets are sent to another data center for processing. Interchange can also be
customer input/output data.
Very large data sets are multivolume data sets that are not practical to intercept
with SMS and manage on DASD, even on a temporary basis. This data includes
volume dumps, very large image copies, and other large data sets. The definition
of large varies. We use 600 MB as the large data set size for the SMS configuration
described in this book. Volume dump data sets can range from 500 MB to 1200
MB, but most of them are greater than 600 MB. You can intercept 600 MB data sets
with tape mount management without increasing the DASD buffer. But, if most of
them are greater than 600 MB, they should be considered as large and sent directly
to tape. Table 26 identifies some typical classes of data sets that must remain on
tape in their current form.
Table 26. Data Sets That Must Remain on Tape
Data Type Description
Interchange Data that is supplied to another site
Identify these data sets to your storage class ACS routine so that the storage class
read/write variable is always set to null.
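An illustrative storage class ACS routine fragment follows; the FILTLIST name and masks are hypothetical. Setting the &STORCLAS read/write variable to null keeps these data sets non-system-managed, so they are written directly to tape.

```
FILTLIST TAPE_ONLY INCLUDE(**.OFFSITE.**,**.VOLDUMP.**)
SELECT
  WHEN (&DSN = &TAPE_ONLY)      /* Interchange, disaster    */
    DO                          /* recovery, and very large */
      SET &STORCLAS = ’’        /* data sets stay on tape,  */
      EXIT                      /* not system-managed       */
    END
END
```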
Data sets that are not truly large are considered normal-sized. These data sets are
eligible to be intercepted and system-managed. They include temporary, active,
and backup data sets. Table 27 describes the tape data sets that are candidates for
re-direction to DASD.
Table 27. Data Sets That Can Be Redirected to DASD
Data Type Description
Temporary Data created and deleted in a single job
Active Data that is read one or more times
Point-in-time Backup Copy of an existing DASD data set written
to tape for backup recovery purposes
Database Image Copy Backup data created by database utilities
Identify these data sets to your storage class ACS routine and assign storage and
management classes appropriate for the data set.
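The interception itself can be sketched as a storage class ACS routine fragment. The FILTLIST name, the TMMBUF storage class, and the assumption that tape requests specify the TAPE esoteric are all illustrative.

```
WHEN (&UNIT = ’TAPE’ &&              /* Intercept normal-sized */
      &DSN = &TMM_CANDIDATE)         /* tape allocations and   */
  DO                                 /* stage them in the      */
    SET &STORCLAS = ’TMMBUF’         /* system-managed DASD    */
    EXIT                             /* buffer                 */
  END
```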
The volume mount analyzer uses your SMF data to analyze tape mounts and
associate them with the job and program initiating the request for the tape data
set. It can also produce reports that identify information about tape mount events,
including the data set name, job, program, and data set size (bytes transferred).
Using the volume mount analyzer provides the following benefits:
v Quantify the benefits of tape mount management through library and tape
mount reduction
v Evaluate your tape and cartridge hardware configuration
v Develop ACS routine filters to select tape mount management candidates and
exclude other data sets that must remain on tape
Figure 91 shows the major input to and output from the volume mount analyzer
process.
[Flow diagram: GFTAXTR reads the SMF data and summarizes records into an
extract file; GFTAVMA then classifies data sets, determines costs and benefits, and
determines ACS filters.]
Figure 91. Volume Mount Analyzer Helps You Design Your Tape Mount Management Environment
GFTAVMA determines where each data set belongs in the storage hierarchy, based
on a simulation of management class attributes, PRIMARY DAYS and LEVEL 1
DAYS.
You can specify values for these attributes or use the volume mount analyzer's
defaults. Each data set has a set of management criteria, based on the usage
classification that the volume mount analyzer assigns to the data set. Possible
classifications include the following:
v DFSMShsm-owned
v Temporary
v Backup
v Single
v BCOPY
v Active
Data sets that are DFSMShsm-owned are on backup and migration tapes created
by DFSMShsm processes.
Data sets classified as temporary have system-generated temporary data set names.
These data sets are allocated on the primary or large storage groups, and are
deleted by DFSMShsm during space management.
Data sets classified as backup are allocated to the DASD buffer and become eligible
for migration the next time that space management is run. They migrate directly to
migration level 2.
Data sets classified as single are referenced on only one day during the sample
period. These data sets can be directed to migration level 2 after one day.
Data sets classified as BCOPY are data sets that are written and read once during
the sample period. Typically, these might be copies of data sets to be taken offsite
as part of your disaster recovery procedures. GFTAVMA does not make any
assumptions about how these data sets should migrate through the storage
hierarchy. They are handled as active data sets.
Any data sets not assigned a usage category are considered active. They are
allocated to the DASD buffer and migrate through the storage hierarchy according
to user-specified values or the volume mount analyzer defaults.
Related Reading: For more information about analyzing tape usage with the
volume mount analyzer, see z/OS DFSMS Using the Volume Mount Analyzer.
This step introduces you to the volume mount analyzer reports and gives you an
overall picture of tape use. These reports include the following:
v Estimate (EST) report, showing your savings in tape mounts and volume counts
as a result of implementing the tape mount management methodology, and the
cost of additional DASD volumes for the DASD buffer
v Gigabyte (GB) report, showing you the maximum gigabyte allocations by hour,
to help you determine free space requirements for the tape mount management
DASD buffer
v Usage (USE) report, showing the maximum number of drives used concurrently
over the sample period for your tape and cartridge subsystems
v Top (TOP) report, showing you which applications are major tape users
You can use the following GFTAVMA keywords to produce the initial volume
mount analyzer reports:
REPORT(EST,GB,USE,TOP)
Use these initial reports to determine a strategy for the follow-on analyses needed
to design your ACS routines and classes, and to size the DASD buffer.
Figure 92 on page 186 is a sample of a savings and cost summary chart contained
in an Estimate Report.
The Estimate Report uses the current date as the last day of the input sample.
DFSMShsm runs automatic space management for the DASD buffer only once a
day. The DASD buffer can be smaller if DFSMShsm space management is
performed hourly.
Compare the CURR STAT MOUNTS and TARG STAT MOUNTS columns in
Figure 93 on page 187 to see the reduction of operator workload in managing all
normal-sized tape data.
Note: The fractional mounts and volume counts shown in the report are called
statistical mounts and statistical volumes, respectively.
These mount and volume counts are fractional because of the presence of multiple
data sets on a single tape volume. The volume mount analyzer assigns a partial
mount or volume count when one of the data sets on a multi-file volume is
requested. It does this so that volume mount analyzer filters, which include or
exclude data sets, can properly predict tape mount management effects on the one
data set, as opposed to the other data sets on the volume. Figure 94 shows the
effect on tape library size.
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|DATA SET |CURR STAT|TARG STAT| DFSMS - PRIMARY | HSM - LEVEL 1 | HSM - LEVEL 2 | TAPE - DIRECT |
|CATEGORY | VOLUMES | VOLUMES |CURR VOL#|TARG VOL#|CURR VOL#|TARG VOL#|CURR VOL#|TARG VOL#|CURR VOL#|TARG VOL#|
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|TEMPORARY| 48.2| | 48.2| | | | | | | |
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|ACTIVE | 86.8| 9.9| 18.7| | 0.0| | 68.1| 9.9| | |
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|ACTV GDG | 635.4| 132.0| 90.4| | 0.0| | 545.0| 132.0| | |
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|BACKUP | 4987.3| 1131.5| 760.7| | | | 4226.6| 1131.5| | |
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|LARGE | 557.8| 323.8| | | | | | | 557.8| 323.8|
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|HSM SFF | 0.0| 0.0| | | | | | | 0.0| 0.0|
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|HSM GEN | 0.0| 0.0| | | | | | | 0.0| 0.0|
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|MGMTCLAS | | 4.2| | | | | | 4.2| | |
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
|TOTAL | 6315.5| 1601.4| 918.0| | 0.0| | 4839.7| 1277.6| 557.8| 323.8|
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
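The fractional accounting behind these statistical counts can be illustrated with a short Python sketch. This is only an illustration of the idea; the sample data and the equal-share apportioning are hypothetical, not GFTAVMA's actual algorithm:

```python
# Statistical (fractional) mount counting: when a multi-file volume is
# mounted, each data set on it is charged a share of the mount, so that
# per-data-set filters still sum to the true physical mount count.
# Hypothetical data and apportioning rule, for illustration only.

def apportion_mounts(volume_contents):
    """Charge each data set an equal share of one mount per volume."""
    stat_mounts = {}
    for datasets in volume_contents.values():
        share = 1.0 / len(datasets)
        for dsn in datasets:
            stat_mounts[dsn] = stat_mounts.get(dsn, 0.0) + share
    return stat_mounts

# Two volumes: VOL001 stacks three data sets, VOL002 holds one.
volumes = {
    "VOL001": ["PROD.A.BACKUP", "PROD.B.BACKUP", "PROD.C.BACKUP"],
    "VOL002": ["PROD.D.BACKUP"],
}
counts = apportion_mounts(volumes)
print(counts["PROD.A.BACKUP"])   # a fractional mount (one third)
print(sum(counts.values()))      # sums back to the 2 physical mounts
```

Filtering on any one data set then predicts only that data set's share of the volume's mounts, as the text describes.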
The Estimate Report also shows the DASD buffer requirements necessary to
manage all normal-sized tape data. Figure 95 on page 188 displays a sample
Estimate Report for DASD buffer sizings.
Using the Estimate Report from Figure 92 on page 186, you can observe the
following:
v Tape mount management can save 81.6% of all mounts and 74.6% of all
cartridges with 28.8 volumes of 3390-3 DASD (assuming that DFSMShsm runs
once every 24 hours). This can be reduced significantly for backup data that can
be moved every hour using interval processing of DFSMShsm.
v Based on the keywords set for this run, the simulation indicates that an 8.8%
recall rate is incurred because of level 2 recalls from DFSMShsm.
v Only a small portion of the mounts are saved (4.4%) by sending large files
directly to a larger capacity cartridge.
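The cartridge-savings figure in the first observation can be reproduced from the TOTAL row of the statistical volume chart (6315.5 current versus 1601.4 target statistical volumes). A minimal check, illustrative only:

```python
# Percent of statistical volumes saved, from the TOTAL row of the
# volume chart (Figure 94): CURR STAT VOLUMES vs. TARG STAT VOLUMES.
curr_volumes = 6315.5   # current statistical volume count
targ_volumes = 1601.4   # target count after tape mount management
saved_pct = 100.0 * (curr_volumes - targ_volumes) / curr_volumes
print(round(saved_pct, 1))   # matches the quoted "74.6% of all cartridges"
```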
From Figure 93 on page 187 and Figure 94 on page 187, you can observe that most
of the mounts and tape volumes saved fall into the tape mount management
category of backup, as follows:
v Mounts reduced from 25476.8 to 1131.5
v Volumes reduced from 4987.3 to 1131.5
From Figure 95, you can observe that because most of the data sets are backup
data sets (38794 out of 41238), there is a good chance that the tape mount
management DASD buffer can be dramatically reduced if hourly interval
processing is done with DFSMShsm.
Table 28 shows that the free space for the tape mount management buffer should
be about 6 GB to support the period with the highest allocation activity.
Table 28. Maximum Gigabyte Allocations by Hour Report
Maximum Tape Mounts: 18 Day Summary Report
MAX     00  01  02  03  04  05  06  07  08  ...  20  21  22  23  MAX   TOT
GB/HR    6   6   5   3   5   5   4   6   6  ...   5   6   5   6    6  2046
The job-related variables are listed so that you can start to identify patterns you
can translate into ACS routine FILTLIST statements for ACS read-only variables,
such as job name, job accounting information, data set name, or program name.
Your standards determine the variables that are most useful in reliably identifying
how your tape data sets should be processed. Program name is normally the most
efficient filter for capturing many mounts, because a single program is often
responsible for multiple data sets stacked on a tape. Intercepting multi-file sets of
data sets together removes the additional complexity involved in breaking up
these sets of data. You can also use job names or accounting information as filters.
Your ACS routines use the filters that you code in FILTLIST statements to intercept
tape allocations for the tape mount management candidates. Take note of the
patterns in variables such as data set name or program name.
Table 30 shows data from a Top Report for the most tape-intensive users by
program name.
Table 30. First Five Entries from a Top Report: Program Names, LARGE = 600 MB

RANK  PROG     # DSNS  % TOT DSN   CUM  % TOT  > LARGE  AVG SIZE  # MNTS  % TOT MNTS     CUM  % TOT
   1  ADRDSSU     556       14.8   556   14.8      489     759.8  1532.0        24.1  1532.0   24.1
   2  IDCAMS     1097       29.3  1653   44.1       22      99.9  1397.9        22.4  2929.9   46.1
   3  ARCCTL        4        0.1  1657   44.2        2     17573   728.0        11.5  3657.9   57.5
   4  COPY        672       17.9  2329   62.1        5      21.8   573.0         7.0  4230.9   66.6
   5  ICEMAN      116        3.1  2445   65.2        2      57.2   417.5         6.6  4648.4   73.1
This report identifies the five top-ranked programs, which account for 73.1% of all
tape mount activity and 65.2% of all tape data sets during the sample period. AVG
SIZE shows the average size in MB of the input and output data sets for the five
programs. The > LARGE column shows the number of data sets processed by the
program that exceeded the LARGE value, 600 MB, for the sample.
For the ADRDSSU program in Table 30 on page 189, the average size of all data
sets, 759.8 MB, exceeds the LARGE value of 600 MB, and 489 of the 556 data sets
are larger than that value. This indicates that the entire application (that is,
PGM=ADRDSSU) should be considered LARGE and sent directly to tape.
For this sample installation, use program name as a filter in ACS FILTLISTs to
exclude ADRDSSU and ARCCTL from tape mount management. However,
IDCAMS, COPY, and ICEMAN output are good tape mount management
candidates, based on data set size, representing a large portion of the mounts
(22.4%, 7.0%, and 6.6%, respectively). Therefore, you would include IDCAMS,
COPY, and ICEMAN in the ACS FILTLISTs so that these programs are included for
tape mount management.
Similarly, the Top Report for data set HLQ shows that several HLQs are linked
with large tape data sets. Filters based on data set name are discussed in
“Identifying and Excluding Large Data Sets” on page 191.
To achieve optimal tape performance, create these data sets using advanced
cartridge subsystems, such as the 3490E with Enhanced Capacity Cartridge System
Tape.
To identify exception tape data, perform your volume mount analyzer simulations
in the following sequence:
1. Identify and exclude large data sets
2. Develop filters for large data sets
3. Identify and exclude special jobs or data sets
4. Finalize exclusion filter list
For the sample, program name identifies some large data sets that account for
many mounts. Typically, program name can be used as a filter to identify mounts
for large data sets that can be excluded with ACS filtering techniques. Often, one
program is responsible for creating multiple data sets stacked on tape. Intercepting
mounts for multi-file data sets in this way simplifies your design problems linked
with breaking up these stacked data sets.
Use the Top Report to identify efficient filters for capturing many large data sets.
The >LARGE and AVG SIZE columns give a good indication of the tendency of
particular programs or applications to generate LARGE data.
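That tendency can drive a simple include/exclude decision per program. The following sketch is hypothetical: GFTAVMA does not make this decision for you, and the 50% cutoff is an assumed rule of thumb, not a documented value. ARCCTL is omitted because its tapes are DFSMShsm-owned and excluded for that reason, not by size:

```python
# Decide, per program, whether its output should bypass the DASD buffer
# (mostly LARGE data sets, written directly to tape) or be included for
# tape mount management. Rows taken from Table 30; the 50% cutoff is an
# assumed rule of thumb for illustration.

top_report = [
    # (program, total data sets, data sets over the LARGE value)
    ("ADRDSSU", 556, 489),
    ("IDCAMS", 1097, 22),
    ("COPY",    672,  5),
    ("ICEMAN",  116,  2),
]

def tmm_candidates(rows, large_fraction=0.5):
    include, exclude = [], []
    for prog, dsns, over in rows:
        (exclude if over / dsns > large_fraction else include).append(prog)
    return include, exclude

include, exclude = tmm_candidates(top_report)
print(exclude)   # ADRDSSU: mostly LARGE, send directly to tape
print(include)   # IDCAMS, COPY, ICEMAN: good TMM candidates
```

The resulting lists correspond to the FILTLIST include/exclude decisions described in the surrounding text.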
Use the following volume mount analyzer request to predict the effect of the 3490E
with Enhanced Capacity Cartridge System Tape on large tape data set activity:
REP(USE,EST)
PGM(INC(ADRDSSU,ARCCTL))
UNIT(EXC(3420))
LARGE(0)
TAPELEN(2)
The sample request for estimate shows the reduction in mounts and volumes when
all tape data created is written using 3490E technology. Table 31 on page 192 shows
the results of this simulation.
These data sets sometimes have a recognizable pattern, such as the following:
v “*.DR.**” can identify the disaster recovery data sets
Tip: If your tape management system provides support for the pre-ACS routine
exit and supplies the MSDEST parameter, you might be able to identify most of
these data sets by using an MSDEST parameter value other than 'NULL'.
The following keywords are required to assess the effect of 3490E hardware on
tape data sets that will continue to be written directly to tape:
REP(EST)
TAPEDEV(3490E)
EXPDT(INC(98000))
JOB(INC(VRJOB))
PGM(INC(ADRDSSU,ARCCTL))
DSN(INC(*.DR.**))
After you identify your tape data sets, develop a set of characteristics for them
and:
v Determine the management classes
v Determine the space values for data classes
v Quantify tape mount management benefits and costs
v Determine free space requirements
v Determine tape and cartridge configuration changes
Table 32 shows the five top-ranked management classes for the sample
installation's tape mount management data sets generated by IDCAMS.
Table 32. Management Class Report for IDCAMS Data. L0AGE is the number of days that the data set resides on
primary storage before migration. L1AGE shows the number of days that the data set resides on migration level 1.
ORDER  L0AGE  L1AGE  L2-MNTS  RECALLS  TOT TRKS  L0-TRKS  L1-TRKS  L2-MB
    1      1      1      451      451      3191     3191        0      3
    2      0      1      451      451         7        0        7      0
    3      0      0      451      451         0        0        0      8
    4      1      2       99      451      3192     3191        1      0
    5      0      2       99      451         8        0        8      0
Now run the Estimate Report using all of the assumptions. Because you chose
L0AGE(1) and L1AGE(3), you treat all data sets created by IDCAMS as active.
Specify this to the volume mount analyzer to force all of these data sets into the
active category. Also, because you are not excluding the 22 LARGE data sets, set
the LARGE value high enough that the volume mount analyzer does not separate
them into the LARGE category.
The following keywords request the storage sizing information for this subset of
data produced by IDCAMS.
PGM(INC(IDCAMS))
CLASSIFY(ACTIVE)
L0AGE(1)
L1AGE(3)
LARGE(99999)
TAPEDEV(3490E)
TAPELEN(2)
REP(EST)
Now decide if you are satisfied with the projected improvements. If not, go back to
“Identifying and Excluding Large Data Sets” on page 191, exclude the data sets
that you have already determined how to manage, and try to identify more
potential tape mount management candidates.
maximum GB required
free space % = __________________________
total size of DASD buffer
Use the free space percentage to set the low threshold for the DASD buffer storage
group. For example, suppose the final sum of all the estimates was 12 3390-3
volumes for level 0. The 3 GB maximum each hour is a little larger than one 3390-3
volume, or approximately 10% of the total estimate (12 volumes). If you choose a
high threshold of 95%, the low threshold would be 85%, or even 80% to allow for
some growth. All of this logic assumes that DFSMShsm processes the tape mount
management DASD buffer every hour.
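The threshold arithmetic above can be sketched in Python. The 3390-3 capacity of about 2.838 GB per volume and the rounding up to 10% are assumptions made for this illustration:

```python
# Free space percentage for the tape mount management buffer, per the
# formula: free space % = maximum GB required / total size of DASD buffer.
# The 3390-3 capacity (~2.838 GB per volume) is an assumed constant.
VOL_GB = 2.838
max_gb_per_hour = 3.0    # peak hourly allocation, from the GB report
buffer_volumes = 12      # level 0 estimate, in 3390-3 volumes

free_space_pct = 100.0 * max_gb_per_hour / (buffer_volumes * VOL_GB)

# The text rounds this up to about 10%, so a 95% high threshold gives
# an 85% low threshold (or 80% to allow for some growth).
rounded_pct = 10
high_threshold = 95
low_threshold = high_threshold - rounded_pct

print(round(free_space_pct, 1))   # just under 10% of the buffer
print(low_threshold)              # low threshold for the storage group
```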
IDRC helps to reduce the number of cartridges required to store data, and reduces
the elapsed time for batch jobs that depend on cartridge I/O. This makes IDRC
effective for both single volume and multivolume data sets. It optimizes the data
exchange between the controller and the device, increasing the number of devices
that can be used concurrently.
To implement IDRC with SMS data classes, perform the following steps:
v Define data classes for your offsite tape data and very large backup, active, and
temporary tape data sets. Set COMPACTION to Y for these data classes. Data
classes for these tape data set categories are TAPOSITE, TAPBKUP, TAPACTV,
and TAPTEMP.
v Allow your data class ACS routine to assign these classes during allocation. This
method writes the data in IDRC-compacted format. The same result occurs if
you specify DCB=TRTCH=COMP on the DD statement.
Remember that the data class ACS routine is driven for both system-managed
and non-system-managed data sets.
Using data class lets the system determine an optimal block size for the tape
data set if you do not specify one. z/OS DFSMS Using Data Sets describes how
the system determines block size for tape. Using system-determined block size
improves tape channel usage and buffer management.
v Define a data class, NONTMM, with COMPACTION set to N, so tape data sets
are directed to tape in a non-compacted form. Use the NONTMM data class on your
DD statements to tell your data class ACS routine that the data sets should not
be compacted or redirected to the DASD buffer. These might be data sets
shipped offsite to facilities without IDRC-capable drives or those used by
applications that call the READ BACKWARDS command. This command is
simulated for IDRC data sets; compacting them severely degrades performance.
Few data set types rely on the READ BACKWARDS command. For example, the
IMS log is read backwards if it is processed during recovery. Do not compact
DFSORT work files.
Data classes can also be used as artificial classes, or flags, in your ACS routines to
determine the data category that the tape data set belongs to, and whether to
system manage it. You can set these artificial classes in the data class ACS routine,
and check them in the storage class ACS routine when you determine if the data
set is to be system-managed.
In general, tape mount management data sets do not have special performance or
availability requirements, so new storage classes are not required. After you gain
experience with tape mount management implementation, evaluate whether some
candidates might benefit from sequential data striping.
The TMMBKUP and TMMACTV management classes support the two major
categories of tape mount management data. TMMBKUP data moves directly to
migration level 2.
Recommendation: Define at least one separate storage group for the DASD buffer,
because the threshold management policy for this set of volumes differs from
others. Set the low threshold based on the space requirements for new tape mount
management data allocations during periods of peak tape usage. Set the high
threshold so a full cartridge of data, at least, is written during interval migration.
During volume selection, volumes in an overflow storage group are less preferred
than those in an enabled storage group but more preferred than those in a
quiesced storage group.
Exception: When an overflow storage group contains more volumes than a buffer
storage group, specified volume counts might result in volumes in the overflow
storage group being preferred over volumes in the buffer storage group.
Table 35 summarizes the types of data sets, tape mount management technique,
and their corresponding SMS classes and groups.
Table 35. SMS Classes and Groups for Tape Data Sets

Data Type            Management Technique    Storage Class   Data Class   Management Class   Storage Group
Interchange          Advanced Cartridge HW   Null            TAPOSITE     —                  —
Disaster             Advanced Cartridge HW   Null            TAPOSITE     —                  —
Vital Record         Advanced Cartridge HW   Null            TAPOSITE     —                  —
DFSMShsm Backup      Advanced Cartridge HW   Null            —            —                  —
DFSMShsm Migration   Advanced Cartridge HW   Null            —            —                  —
Auto Dump            Advanced Cartridge HW   Null            —            —                  —
Volume Dump          Advanced Cartridge HW   Null            —            —                  —
Using the Partial Release management class attribute can reduce the number of
data classes needed to allocate space for tape mount management candidates. The
CI option, release-on-close, releases allocated but unused space when the data set is
closed or during the next space management cycle.
The following sample data classes are defined for use with tape mount
management data sets:
TAPOSITE
Assigned to data sets that are usually stored offsite, such as vital records,
disaster recovery backups, archives, and interchange data.
These tape data sets are allocated directly to tape.
TAPACTV
Assigned to active data sets larger than 600 MB. These data sets are
allocated directly to tape.
Table 36 shows the attributes of the management classes assigned by the ACS
routine, shown in Figure 101 on page 208.
Related Reading: For descriptions of the management class attributes, see z/OS
DFSMSdfp Storage Administration.
Table 36. Sample Management Classes for Tape Mount Management

ATTRIBUTE                  TMMBKUP   TMMACTV
EXPIRE NON-USAGE           15        200
EXPIRE DATE/DAYS           NOLIMIT   NOLIMIT
MAX RET PERIOD             20        NOLIMIT
PARTIAL RELEASE            YI        YI
PRIMARY DAYS               0         2
LEVEL 1 DAYS               0         10
CMD/AUTO MIGRATE           BOTH      BOTH
# GDS ON PRIMARY           _         1
ROLLED OFF GDS ACTION      _         EXPIRE
BACKUP FREQUENCY           _         1
# BACKUPS (DS EXISTS)      _         2
# BACKUPS (DS DELETED)     _         1
RETAIN DAYS ONLY BACKUP    _         60
RETAIN DAYS EXTRA BACKUP   _         30
ADM/USER BACKUP            NONE      ADMIN
AUTO BACKUP                NO        YES
BACKUP COPY TECHNIQUE      S         S
DFSMSrmm supports the SMS pre-ACS interface. The SMS subsystem calls
DFSMSrmm before the data class ACS routine obtains control. Then, DFSMSrmm
optionally sets the initial value for the ACS routine MSPOOL and MSPOLICY
read-only variables if the pre-ACS installation exit has not done so. DFSMSrmm,
however, does not use the installation exit.
Related Reading: For detailed information on the DFSMSrmm support for the SMS
pre-ACS interface, see z/OS DFSMSrmm Implementation and Customization Guide. Or,
check with your tape management vendors for information on support for the
variables.
This routine uses the data categories from the tape mount management analysis to
accomplish the following:
v Filter out data sets intended to be stored offsite.
v Filter out very large data sets intended to go directly to tape.
v Assign an appropriate data class name to data sets intended for tape mount
management. These data sets are directed to the tape mount management DASD
buffer.
Figure 98. FILTLIST Section of a Sample Data Class ACS Routine for Tape Mount
Management
Figure 99 on page 206 uses the filters previously defined in the data class routine
to identify the classifications of tape data, and sets the appropriate artificial class.
These classes are later used in the storage class, management class, and storage
group ACS routines to determine if and how the data should be managed, and
where it should be stored.
/**********************************************************************/
/* Start of Tape Data Set Mainline                                    */
/**********************************************************************/
WHEN (&UNIT = &TAPE OR &UNIT = &DS_STACK)
  DO
  SELECT                               /* Start of Tape SELECT   */
    WHEN (&GROUP = &SPECIAL_USERS &&
          &DATACLAS = 'NONTMM')        /* Permit system pgmrs.   */
      DO                               /* and storage admin. to  */
        SET &DATACLAS = &DATACLAS      /* write tape volumes     */
        EXIT
      END
    WHEN (&DSN = &OFFSITE)             /* Write data sets to be  */
      DO                               /* sent offsite to own    */
        SET &DATACLAS = 'TAPOSITE'     /* data class             */
        EXIT
      END
    WHEN (&DSN = &LARGE_BACKUP)        /* Write large data set   */
      DO                               /* backups to tape        */
        SET &DATACLAS = 'TAPBKUP'
        EXIT
      END
    WHEN (&DSN = &LARGE_ACTIVE)        /* Write other large,     */
      DO                               /* permanent data sets    */
        SET &DATACLAS = 'TAPACTV'      /* to tape                */
        EXIT
      END
    WHEN (&DSN = &LARGE_TEMP)          /* Write large, temporary */
      DO                               /* data sets to tape      */
        SET &DATACLAS = 'TAPTEMP'
        EXIT
      END
    WHEN (&DSN = &HSM)                 /* Write HSM ML2, dump,   */
      DO                               /* backup, and TAPECOPY   */
        SET &DATACLAS = 'HSMDC'        /* data sets to tape      */
        EXIT
      END
    WHEN (&DSTYPE = 'TEMP')            /* Identify temporary     */
      DO                               /* data sets that are     */
        SET &DATACLAS = 'TMMTEMP'      /* TMM candidates         */
        EXIT
      END
    WHEN (&PGM = &BACKUP)              /* Set TMM backup         */
      DO                               /* data class             */
        SET &DATACLAS = 'TMMBKUP'
        EXIT
      END
    OTHERWISE
      DO
        SET &DATACLAS = 'TMMACTV'      /* Set TMM active data    */
        EXIT                           /* class for all other    */
      END                              /* data sets              */
  END                                  /* End of Tape SELECT     */
  END                                  /* End of Tape DO         */
/**********************************************************************/
/* End of Tape Data Set Mainline                                      */
/**********************************************************************/
Figure 99. Sample Data Class ACS Routine for Tape Mount Management
/**********************************************************************/
/* Start of FILTLIST Statements */
/**********************************************************************/
FILTLIST VALID_DEVICE INCLUDE('3380','3390','3420','3480','3490',
                              'SYSDA','3480X','TAPE*','3494',
                              '3495','9345')
FILTLIST TMM_DATA_CLASS INCLUDE('TMMACTV','TMMBKUP')
FILTLIST TAPE_DATA_CLASS INCLUDE('TAPACTV','TAPBKUP','TAPTEMP',
                                 'TAPOSITE','NONTMM')
FILTLIST VALID_STORAGE_CLASS INCLUDE('BACKUP','CRITICAL','DBCRIT','FAST',
                                     'FASTREAD','FASTWRIT','GSPACE',
                                     'MEDIUM','NONVIO','STANDARD')
/**********************************************************************/
/* End of FILTLIST Statements                                         */
/**********************************************************************/
SELECT
  WHEN (&UNIT ^= &VALID_DEVICE && &UNIT ^= 'STK=SMSD')
                                       /* Unit must be valid DASD */
    DO                                 /* or tape device or not   */
      SET &STORCLAS = ''               /* externally specified    */
      EXIT
    END
  WHEN (&HLQ = &HSM_HLQ &&             /* Do not manage data sets */
        &DSN(2) = &HSM_2LQ)            /* on ML1, ML2             */
    DO
      SET &STORCLAS = ''
      EXIT
    END
  WHEN (&DATACLAS = &TAPE_DATA_CLASS)  /* Do not manage "large"   */
    DO                                 /* or offsite tape data    */
      SET &STORCLAS = ''               /* sets                    */
      EXIT
    END
  WHEN (&GROUP = &SPECIAL_USERS &&     /* Permit storage admin.   */
        &STORCLAS = 'NONSMS')          /* or data base admin.     */
    DO                                 /* to create               */
      SET &STORCLAS = ''               /* non-system-managed      */
      EXIT                             /* data sets               */
    END
  WHEN (&DATACLAS = &TMM_DATA_CLASS)   /* Manage active, backup,  */
    DO                                 /* temporary data sets     */
      SET &STORCLAS = 'STANDARD'       /* that are tape mount     */
      EXIT                             /* management candidates   */
    END
Figure 100. Sample Storage Class ACS Routine for Tape Mount Management
Figure 101. Sample Management Class ACS Routine for Tape Mount Management
Related Reading: To enable setting of management class names and storage group
names, DFSMSrmm calls the management class ACS routine for non-SMS tape data
sets. See z/OS DFSMSrmm Implementation and Customization Guide for further
information on the variables set for the RMMVRS environment.
Figure 102. Sample Storage Group ACS Routine for Tape Mount Management
If you choose, DFSMSrmm can call storage group ACS routines for
non-system-managed tapes to obtain a storage group name and use it as a scratch
pool ID. For information on how to choose this option, see z/OS DFSMSrmm
Implementation and Customization Guide.
To request data set stacking, you must have the following JCL options on the DD
statement:
v Data-set-sequence-number subparameter on the LABEL parameter is greater
than one.
This subparameter is used to identify the relative position of a data set on a tape
volume. For existing cataloged data sets, the system obtains the
data-set-sequence-number from the catalog.
v VOL=SER or VOL=REF parameter
Requirement: All data sets in the data set collection must be directed to the same
device category, which must be one of the following four device categories:
v System-managed DASD
v System-managed tape
v Non-system-managed DASD
v Non-system-managed tape
A mixture of these device categories is not allowed for the following reasons:
v There would be problems accessing the data sets later. For example, a data set
with a data-set-sequence-number of three could be placed as the first data set on
the tape if the first two data sets were redirected to DASD.
v There could be problems locating data sets later since some types of data sets
must be cataloged and others may be uncataloged.
If you have an allocation in a previous job or step that specifies VOL=SER and that
is rerouted to SMS DASD, it will be cataloged on a volume other than that
specified in the JCL.
For example, in the following statement, you must not specify volume VOL001 in
the JCL unless the volume is in a DUMMY storage group if you later want to
allocate the data set as OLD:
//DD1 DD DSN=A,VOL=SER=VOL001,DISP=(NEW,CATLG),LABEL=(1,SL)
Otherwise, it will cause the locate for the data set to be bypassed, probably
resulting in an OPEN abend since the data set was rerouted and is not on the
volume.
This is because the system does not include existing SMS-managed requests in any
data set collection.
Additional information is now passed to the ACS routines when VOL=REF is used.
The &ALLVOL and &ANYVOL ACS read-only variables contain one of the
following values when the reference is to a system-managed data set:
v 'REF=SD' - The reference is to an SMS-managed DASD or VIO data set
v 'REF=ST' - The reference is to an SMS-managed tape data set
v 'REF=NS' - The reference is to a non-SMS-managed data set
When a data set is referenced, the name of the referenced storage group is passed
to the storage group ACS routine in the &STORGRP read-write variable. The ACS
routines can allow the allocation in the storage group of the referenced data set or
select a different storage group or list of storage groups. For NEW to NEW
references, multiple storage groups might have been assigned to the referenced
data set. In this case, only the first storage group is passed as input to the ACS
routines for the referencing data set; this might not be the storage group in which
the referenced data set is actually located.
Figure 103 shows an example of an ACS routine fragment to assign the referencing
data set to the same storage group as the referenced data set.
PROC STORGRP
FILTLIST REF_SMS INCLUDE('REF=SD','REF=ST')
Figure 103. Sample ACS Routine to Assign Same Storage Group as Referenced Data Set
Rule: The assignment of &STORGRP = &STORGRP does not work if you have
entered private tapes into the library with a blank storage group name, since a
valid storage group name is unavailable.
Recommendation: Use VOL=REF for data set stacking across jobs or steps. If you
use VOL=SER to stack data sets across steps or jobs, the system cannot detect data
set stacking. In these cases, you can do one of the following:
v Change your JCL to specify VOL=REF instead of VOL=SER.
v Ensure that your ACS routines don't redirect these data set allocations, unless
you guarantee that they can be redirected to a consistent device category.
If the ACS routines initially directed the stacked allocations to different device
categories, the system detects this and re-invokes the ACS routines, passing them
additional information. The ACS routines can then take one of the following
actions:
v Correct the problem and route the allocations to consistent device categories
v Fail the stacked allocation (if the ACS routine exits with a non-zero return code)
v Fail to correct the inconsistency; SMS fails the allocation
Figure 104 and Figure 105 show examples of ACS routine fragments to assign the
referencing data set to a consistent device category as the referenced data set.
PROC STORCLAS
FILTLIST DS_STACK INCLUDE('STK=SMSD','STK=NSMS')
Figure 104. Storage Class ACS Routine Fragment to Assign Consistent Device Category
PROC STORGRP
FILTLIST DS_STACK INCLUDE('STK=SMSD','STK=NSMS')
Figure 105. Storage Group ACS Routine Fragment to Assign Consistent Device Category
Since data set stacking might cause a second or third invocation of the ACS
routines, you might want to take special care when using WRITE statements to
avoid duplicates in the job log.
Additionally, existing SMS-managed data sets are not checked by the system for
inclusion in a data set collection. For SMS-managed data sets that are cataloged,
the system cannot assume that the volume information in the catalog represents
the original intended volume for the data set. For example, the data set might have
been redirected; if the system then uses the volume information in the catalog to
allocate the data set, the data set might be placed in the wrong data set collection.
Unit Affinity
Data set stacking can be used in conjunction with unit affinity. In a tape
environment, unit affinity is a JCL keyword (UNIT=AFF) used to minimize the
number of tape drives used in a job step. The system attempts to use the same
tape drive for a request that specifies UNIT=AFF for both the referenced and
referencing DD statements.
Table 38 shows the values to which the &UNIT read-only variable is set when
UNIT=AFF is requested, as well as what each value means to the ACS routines:
Table 38. Values for &UNIT ACS Read-Only Variable

&UNIT Value  ACS Invocation  Data Set Stacking Indication          Device Category of Data Set on Which to Stack
AFF=         First           Unknown                               Not applicable
STK=SMSD     Second          Yes, and different device categories  System-managed DASD
z/OS MVS JCL User's Guide discusses data set stacking and unit affinity and
provides examples.
When unit affinity is specified on a DD statement, three new values are set
depending on the unit of the AFF'ed DD. The following example explains how
these values are set. In this example, DD1 is directed to SMS DASD, and DD2 is
directed to SMS tape.
//DD1 DD UNIT=SYSDA,DISP=NEW,..
//DD2 DD UNIT=AFF=DD1,DISP=NEW,..
//DD3 DD UNIT=AFF=DD2,DISP=NEW,..
//DD4 DD UNIT=AFF=DD1,DISP=NEW,...
With the exception of the JES3 environment, ACS routines are called multiple
times. When the ACS routines are invoked by the JES3 PRESCAN processing, the
&UNIT read-only variable is set to 'AFF=' for DD2, DD3, and DD4. The ACS
routines are invoked again later during the allocation process with the values
shown in the example.
You can designate an overflow storage group using the ISMF Pool Storage Group
Define panel.
Note: When an overflow storage group contains more volumes than a buffer
storage group, specified volume counts might result in volumes in the overflow
storage group being preferred over volumes in the buffer storage group during
volume selection.
For SMS mountable tape, the referencing data set must be assigned to the same
storage group as the referenced data set, if the referencing data set is also to be an
SMS mountable tape data set.
For example, consider two storage groups, BIG and SMALL, that are defined based
on data set size. If the referenced data set is assigned to storage group BIG, you
must also ensure that the referencing data set goes to storage group BIG, even if its
size would logically assign it to storage group SMALL. Conversely, if the
referenced data set is assigned to storage group SMALL, then the referencing data
set must also be assigned to storage group SMALL. If the referencing data set is
large, this can result in out-of-space abends for allocations in storage group
SMALL.
Restriction: If you use VOL=REF processing to refer to a temporary data set, you
might get different results in storage group assignments than expected. This is
because temporary data sets are assigned a storage group by the system, based on
a list of eligible storage groups, such as VIO, PRIME, and STANDARD. Data sets
that use VOL=REF are assigned a storage group based on this list of eligible
storage groups, not on the name of the storage group used to successfully allocate
the first data set being referenced. This might result in the data sets being allocated
in different storage groups.
Information on the referenced device type and the referenced storage group can be
passed to the ACS routines when VOL=REF is used. The &ALLVOL and
&ANYVOL ACS read-only variables contain one of the following values when the
reference is to a system-managed data set:
v 'REF=SD' - The reference is to an SMS-managed DASD or VIO data set
v 'REF=ST' - The reference is to an SMS-managed tape data set
The &STORGRP read-write variable contains the name of the referenced storage
group when the ACS routine is entered. You can then allow the allocation in the
storage group of the referenced data set or select a different storage group or list of
storage groups.
Figure 106 shows an example of how to code your ACS routine to assign the
referencing data set to a different storage group than the referenced data set.
PROC STORGRP
  SELECT (&ANYVOL)
    WHEN('REF=SD')
      IF &DSTYPE = 'TEMP' && &DSORG NE 'VS' THEN
        SET &STORGRP = 'VIOSG','MAIN3380','MAIN3390','SPIL3380','SPIL3390'
      ELSE
        SET &STORGRP = 'MAIN3380','MAIN3390','SPIL3380','SPIL3390'
    WHEN('REF=ST')
      IF &STORGRP = '' THEN
        SET &STORGRP = &STORGRP
      ELSE
        SELECT(&LIBNAME)
          WHEN('ATL1')
            SET &STORGRP = 'TAPESG1'
          WHEN('ATL2')
            SET &STORGRP = 'TAPESG2'
        END
    OTHERWISE
  END
END
Figure 106. Sample ACS Routine to Assign Different Storage Group than Referenced
You can code your ACS routines to take one of the following actions:
v Allow the allocation to proceed as a non-system-managed data set.
v Fail the allocation.
Figure 107 shows an example of how to code your ACS routine to allow
allocations that meet certain criteria to proceed, and to warn users that the
remaining allocations will be failed after a specific date. Figure 108 shows an
example that fails such allocations.
PROC 0 STORGRP
FILTLIST AUTH_USER INCLUDE('SYSPROG1','SYSPROG2','STGADMIN','SYSADMIN')
Figure 107. Sample ACS Routine to Allow Allocation of Non-System-Managed Data Set
PROC 1 STORGRP
FILTLIST AUTH_USER INCLUDE('SYSPROG1','SYSPROG2','STGADMIN','SYSADMIN')
Figure 108. Sample ACS Routine to Fail Allocation of Non-System-Managed Data Set
If the ACS routines attempt to make the referencing data set SMS-managed, SMS
fails the allocation. This is because allowing the data set to be allocated as an
SMS-managed data set could cause problems locating the data set, as well as
potential data integrity problems. These problems occur when the data set is
allocated with DISP=OLD or DISP=SHR, due to the tendency to copy old JCL. For
example, the following sample referenced data set is non-SMS-managed:
//DD1 DD DSN=A.B.C,DISP=OLD,VOL=REF=D.E.F,...
If the referenced data set is SMS-managed, a LOCATE is done for the referencing
data set, and the data set is accessed using the result of the LOCATE. If the
referenced data set is non-SMS-managed, then no LOCATE is done, and the
referencing data set is assumed to reside on the same volume as the referenced
data set.
In a tape mount management environment, you can determine that any of the data
sets is a good candidate to redirect to the DASD buffer.
You can establish up to 15 concurrent interval migration tasks to migrate data from
primary volumes to tape. You can improve the effective data rate up to three times
by increasing the number of tasks from one to seven. The SETSYS
MAXINTERVALTASKS setting controls the maximum number of these tasks that
can operate concurrently. Ensure that one cartridge drive per interval migration
task is available to support this multi-tasking.
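For example, the corresponding DFSMShsm setting might be specified in the
ARCCMDxx PARMLIB member as follows; the task count of 7 reflects the
data-rate improvement noted above:

```
SETSYS MAXINTERVALTASKS(7)  /* up to 7 concurrent interval migration tasks */
```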
In Chapter 11, “Optimizing Tape Usage,” on page 177, you used tape mount
management to direct some of your tape data sets to two storage groups,
TMMBUFxx and TMMBFSxx, to have the data sets managed by DFSMShsm. They
migrate through the storage hierarchy and eventually reside on migration level 2
volumes. Migration level 2 volumes, and the volumes containing data sets written
by DFSMShsm or your own applications directly to tape, can be system-managed.
The addition of system-managed tape enables you to manage all types of storage
media (DASD, optical disks, and tape volumes) in the DFSMS environment.
You need to define a tape library to support system-managed tape data. A set of
integrated catalog facility user catalogs, called the tape configuration database,
contains information about your tape libraries and volumes. You can use tape
storage groups to direct new allocation requests to tape libraries.
The tape library definitions are created using ISMF. This builds a library record in
the tape configuration database and in the specified SCDS. A tape library contains
a set of volumes and the tape subsystems that are used to support mount activity
for the library's volumes. A tape library supports both scratch and private
volumes.
The tape configuration database consists of the tape library and volume records
residing in one or more tape library volume catalogs. Volume catalogs are
integrated catalog facility user catalogs containing system-managed tape library
and volume entries. A general tape library catalog contains all library records. If
specialized catalogs do not exist, volume entries are placed in this catalog. You can
create specialized catalogs, selected based on the first character of the volume
serial number, to hold data about related tape volumes.
Unlike the ATLDS, the manual tape library (MTL) does not use the library
manager. With the manual tape library, a human operator responds to mount
messages generated by the host and displayed on a console. There are no robotics
associated with an MTL; it is a purely logical grouping of standalone, non-ATLDS
drives, and operators see and respond to its mount messages just as they would
for standalone drives outside any library.
Volumes can be associated with manual tape libraries so that only those volumes
defined for a specific manual tape library can be mounted on drives in that MTL.
See “Setting Up a Manual Tape Library” on page 229 for information about
defining the manual tape library.
Exception: Manual tape libraries are not intended to operate within competitive
robotic tape libraries. If you need to try such a definition, contact the manufacturer
of the specific robotic library system for assistance.
Because the media associated with any new tape devices will likely be
incompatible with the real devices that are being emulated, this management
needs to be taken out of the hands of the user and placed under system control. The
manual tape library provides this ability by recognizing the real underlying device
type rather than the device type that is being emulated. By defining these libraries,
associating the media with these libraries and properly defining the SMS
constructs, the allocation of drives and the mounting of the appropriate media can
be accomplished through system control.
No JCL changes are required to use MTL. The SMS storage group ACS routines
can be updated to influence the placement of new tape data sets to an MTL.
However, you must use HCD to identify tape drives as being MTL resident.
Related Reading:
v For specific RMM support, see z/OS DFSMSrmm Implementation and
Customization Guide.
v For information on the use of the Initial Access Response Time (IART) value in
the Virtual Tape Server Environment, see z/OS DFSMS OAM Planning,
Installation, and Storage Administration Guide for Tape Libraries.
For non-system-managed tape, you can use the SMS ACS routines to determine the
scratch pooling on tape storage group names. See z/OS DFSMSrmm Implementation
and Customization Guide for more information.
Related Reading: For a list of the tape exits available for DFSMS, see z/OS DFSMS
OAM Planning, Installation, and Storage Administration Guide for Tape Libraries.
The storage class ACS routine must assign a storage class for the request to be SMS
managed. For information about the use of storage class values, see z/OS DFSMS
OAM Planning, Installation, and Storage Administration Guide for Tape Libraries.
During the migration, if existing multivolume data sets are entered in libraries,
ensure that all volumes for a multivolume tape data set reside in the same tape
library.
OAM automatically updates your tape configuration database as you enter the
cartridges into the library. OAM uses the information passed by DFSMSrmm (such
as private or scratch, 18-track or 36-track recording). Make sure that the following
DFSMSrmm storage location names are not used as tape library names: REMOTE,
LOCAL, DISTANT, BOTH, CURRENT.
Defining OAM
OAM plays a central role in the SMS tape library support. OAM manages,
maintains, and verifies the tape volumes and tape libraries within a tape storage
environment.
The tape storage group definition links a storage group with tape libraries.
Figure 111 on page 225 shows the Storage Group Application definition for a tape
storage group.
After you define the storage group, you set the status for the storage group to each
system that uses the tape library dataserver. You can temporarily prevent a storage
group from being assigned by your storage group ACS routine by assigning its
SMS Storage Group Status to DISNEW or DISABLE. The default for this attribute
is ENABLE.
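You can make the same temporary status change from the operator console with
the VARY SMS command; the storage group name used here is illustrative:

```
VARY SMS,STORGRP(TAPESG1),DISNEW
VARY SMS,STORGRP(TAPESG1),ENABLE
```

The first command prevents new allocations in the storage group on the issuing
system; the second restores the default status.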
Figure 112. Storage Group ACS Routine Fragment to Assign Tape Storage Groups
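The kind of fragment that Figure 112 refers to might look like the following
sketch; the data class and storage group names are illustrative, drawn from the
samples elsewhere in this document, and this is not the actual figure:

```
SELECT
  WHEN (&DATACLAS = 'TAPACTV')   /* active tape data  */
    SET &STORGRP = 'TAPESG1'
  WHEN (&DATACLAS = 'TAPBKUP')   /* backup tape data  */
    SET &STORGRP = 'TAPESG2'
END
```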
DFSMShsm uses the compaction data class attribute to determine whether to create
the DFSMShsm volume using IDRC. If you do not assign a data class for
DFSMShsm tape volumes in your data class ACS routine, then the options of
TAPEHARDWARECOMPACTION, TAPEHWC and NOTAPEHWC, are used to
make this determination for 3480X and 3490 devices, as before. For 3490E devices,
data class must be assigned for DFSMShsm to inhibit the use of IDRC compaction.
If it is not, the tape is written using IDRC compaction.
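For example, a sketch of the corresponding ARCCMDxx specification, using the
abbreviation named above:

```
SETSYS TAPEHWC   /* request tape hardware (IDRC) compaction */
```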
Naming conventions for volume serial numbers can help you balance the volume
catalog update activity.
Before defining your tape libraries, ensure that only storage administrators can
update the tape configuration database. Add STGADMIN.IGG.LIBRARY to the set
of SMS facilities that are protected by RACF.
Figure 113 shows how to define a specific volume catalog. The name of the general
catalog is SYS1.VOLCAT.VGENERAL, and SYS1.VOLCAT.VH is an example of the
name of a specific volume catalog for tapes having serial numbers beginning with
H. The HLQ, SYS1, can be replaced by another one if the LOADxx member of the
PARMLIB is changed appropriately.
DEFINE UCAT -
(NAME(SYS1.VOLCAT.VH) -
VOLCAT -
VOLUME(D65DM4) -
CYL(1 1))
Defining the Tape Console: You should identify a console to receive critical
messages about 3494 or 3495 tape processing. Standard mount messages handled
by the 3494 or 3495 accessor are not routed to the console, but are directed to a
console log. Enter the name of this console as defined in your PARMLIB member,
CONSOLxx, in the Console Name attribute.
Defining Tape Library Connectivity: You enable z/OS systems to use the tape
library by defining system names in the Initial Online Status attribute. These
system names must also reside in the base configuration of your active SMS
configuration. A tape library that is defined to z/OS and physically connected to a
system can be online or offline. If a tape library is offline, you can use the VARY
SMS,LIBRARY command to bring the tape library online. If you do not set a status,
SMS assumes that the tape library is not connected. Ensure that the tape
configuration database is available to every system that uses the tape library.
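For example, assuming a library named ATL1, the operator command to bring it
online would be:

```
VARY SMS,LIBRARY(ATL1),ONLINE
```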
Initially, the volumes in your tape library might be scratch volumes, private
volumes, or a combination of both. Enter the predominant type of use, Private or
Scratch, in the Entry Default Use attribute.
When you or DFSMShsm eject volumes from the tape library dataserver, the
volume entry in the tape configuration database can be retained or purged. Use the
Eject Default to set this attribute to Keep or Purge based on your requirements. If
you expect volumes to be reused in the library, use the default value, Keep, for this
attribute.
Related Reading: For more information about using DFSMSrmm, see z/OS
DFSMSrmm Implementation and Customization Guide.
DFSMSdfp works together with the library manager to keep the tape configuration
database synchronized with the library manager database.
The ISMF Mountable Tape Application, accessed from the ISMF Volume
Application, lets you change information about system-managed tape volumes.
You can ensure that the modification is reflected both in the tape configuration
database and the library manager database. For example, you can change the use
attribute of a system-managed tape volume from private to scratch status or from
scratch to private, change the owner or storage group, eject cartridges, or change
their shelf location.
Recommendation: Changes that are made using the ISMF Mountable Tape
Application can be automatically synchronized with the tape management system
if it fully supports the OAM tape management exits. Although you can make the
same changes with access method services, access method services updates only
the host's volume catalogs. Use the ISMF application instead to ensure consistency
between the library manager database and the tape configuration database.
You can also use DFSMSrmm to do this with the same level of integrity.
You can produce tailored lists of volumes and their usage characteristics using the
ISMF Mountable Tape Volume Selection Entry panel. For a list of tape volumes,
you can use the AUDIT list command to assess the accuracy of the contents of the
tape configuration database. Issue the AUDIT function to schedule the audit. The
AUDIT causes the automatic tape library dataserver's accessor to go to the location
referenced in the tape configuration database and verify the contents. If the results
of the physical examination conflict with the volume catalog information, the error
status field for the volume is updated with a code, indicating the type of error
found. When the audit is complete, an acknowledgment is sent to the TSO session,
and the storage administrator can view any audit errors by refreshing the tape
volume list. You can also audit a tape library from the Tape Library List panel.
Before you define the tape library to SMS, consider the type of tape subsystems in
your installation. The following tape subsystems are supported in an MTL:
v 3480
v 3480X
v 3490
v 3590-Bxx
v 3590-Exx
Configuration ID . : MVS1
Device number . . : 0960 Number of devices : 1
Device type . . . : 3590
Parameter/
Feature Value P Req. Description
OFFLINE No Device considered online or offline at IPL
DYNAMIC Yes Device supports dynamic configuration
LOCANY Yes UCB can reside in 31 bit storage
LIBRARY No Device supports auto tape library
MTL Yes MTL resident device
AUTOSWITCH No Device is automatically switchable
LIBRARY-ID 12345 5 digit library serial number
LIBPORT-ID 01 2 digit library string ID (port number)
SHARABLE No Device is Sharable between systems
F1=Help F2=Split F3=Exit F4=Prompt F5=Reset
F7=Backward F8=Forward F9=Swap F12=Cancel F22=Command
If the IODF resulting from this definition is shared with systems that have no
MTL support installed, or that have the full-function code installed but are
running in coexistence mode (MTLSHARE has been specified in the LOADxx
member), then the drives on MTL-defined UCBs are defined as standalone
(non-ATLDS) drives.
Defining the Manual Tape Library to SMS: You use the LIBRARY-ID to link the
tape library definition with a tape library. The installation creates the LIBRARY-ID
for MTL libraries. You enter this ID in the LIBRARY-ID attribute.
Another ID field called the LIBPORT-ID field links the tape library definition to the
particular control unit within the library. You enter this ID in the LIBPORT-ID
attribute.
Defining the Tape Console for MTL: You should identify a console to receive
critical messages about MTL processing. Enter the name of this console as defined
in your PARMLIB member, CONSOLxx, in the Console Name attribute. For
maximum visibility, MTL mount and demount messages are then issued to the
named console and to the specified routing codes.
Defining Manual Tape Library Connectivity: You enable z/OS systems to use
the tape library by defining system names in the Initial Online Status attribute.
These system names must also reside in the base configuration of your active SMS
configuration. A tape library that is defined to z/OS and physically connected to a
system can be online or offline. If a tape library is offline, you can use the VARY
SMS,LIBRARY command to bring the tape library online. If you do not set a status,
SMS assumes that the tape library is not connected. Ensure that the tape
configuration database is available to every system that uses the tape library.
Supporting Devices and Device Mixtures within an MTL: Devices that emulate
3490s and use media that is incompatible with real 3490s are not supported in an
MTL.
In an MTL system, the SETCL default for MTL devices is NONE. This means that
indexing is not to be done on this system. The reason for this default is that the
cartridge loader status for an MTL device is not maintained across an IPL, so it is
safest to default the cartridge loader to NONE during IPL processing. This requires
the installation to explicitly state, through use of the LIBRARY SETCL command,
the intended use of the device. In this way, if devices are being shared across
systems, or are being dynamically set using the LIBRARY SETCL command, the
ACL does not inadvertently get indexed with the wrong system's volumes or with
the wrong media type. Otherwise, the ACL could be indexed when not
appropriate, exhausting the ACL.
Note: The meaning of NONE on an MTL device is different from its meaning for
drives in an automated tape library environment, in which it means that the
cartridge loader is to be emptied.
The existing LIBRARY SETCL command can be used to set the cartridge loader
scratch media type for library-resident devices including MTL resident devices. See
z/OS DFSMS OAM Planning, Installation, and Storage Administration Guide for Tape
Libraries for full details of this command.
For devices in a manual tape library, a new media type, ANY, can be specified.
This indicates that media type preferencing through data class is not being used
and that the ACL should be indexed for SCRTCH or PRIVAT mounts. This enables
you to load any valid media type for the device.
The following rules apply for indexing an ACL on a full-function MTL system:
v If a preferenced allocation is made, the ACL will not be indexed.
v The LIBRARY SETCL command, when issued in a manual tape library
environment, takes effect only on the system in which the command was issued
(unlike the automated tape library environment). If multiple systems are sharing
the scratch volumes in the cartridge loader, the same command should be issued
on each sharing system with the non-sharing systems being set to NONE.
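For example, using the MTL device 0960 from the earlier HCD panel (the media
type operands shown are an assumption):

```
LIBRARY SETCL,0960,ANY
LIBRARY SETCL,0960,NONE
```

The first form allows any valid media type to be loaded in the cartridge loader;
the second disables indexing on the issuing system.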
Related Reading: For more information about media types and recording
technology, see z/OS DFSMS OAM Planning, Installation, and Storage Administration
Guide for Tape Libraries.
Initially, the volumes in your tape library might be scratch volumes, private
volumes, or a combination of both. Enter the predominant type of use, Private or
Scratch, in the Entry Default Use attribute.
Estimate the number of scratch volumes by media type needed on average at all
times to ensure that allocations proceed without interruption. When the count of
available scratch volumes falls below the scratch threshold, DFSMSdfp sends a
message to your console (designated in your tape library definition). The message
stays on the console until the available scratch volumes exceed twice the specified
threshold.
When you use DFSMSrmm to eject volumes from the tape library, the entry in the
tape configuration database is optionally purged. DFSMSrmm has all the
information needed to recreate the entries when the volumes are returned for
reuse.
Tape drives can be shared between systems. For more information, see “Sharing an
IODF” on page 235.
You can also use the DFSMShsm LIST TTOC SELECT with the LIB or NOLIB
option to check which DFSMShsm migration level 2 and backup volumes are
system-managed or non-system-managed. The LIST DUMPVOLUME SELECT with
the LIB or NOLIB options do the same for dump volumes.
You can audit the data status of both migration and backup volumes, using the
LIST TTOC SELECT command with the FULL, NOTFULL, or EMPTY options.
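For example, issued from an authorized TSO user with the HSENDCMD (HSEND)
command, these forms are a sketch built from the options named above:

```
HSEND LIST TTOC SELECT(LIB)
HSEND LIST TTOC SELECT(NOLIB)
HSEND LIST DUMPVOLUME SELECT(LIB)
HSEND LIST TTOC SELECT(EMPTY)
```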
Because there is no vision system associated with an MTL, there are no barcode
strips on MTL volumes. Therefore, this restriction does not apply to MTL volsers,
and the full range of valid volser characters is allowed. However, because there
might be a future need for you to move MTL volumes to an ATLDS, ensure that all
volsers on MTL volumes are alphanumeric. All other rules that apply to tape
volumes in an ATLDS also apply to those in an MTL. Specifically:
v Both scratch and private tapes can be entered into an MTL
v A scratch volume cannot be requested using a specific volume serial number
v All volumes of a multivolume data set should reside in the same library, or all
should reside outside a library. However, if they do not, you can enter the
volumes through the Volume Not In Library installation exit (CBRUXVNL).
v All volumes of a multivolume data set must belong to the same tape storage
group
v All volumes of a multivolume data set must be recorded using the same tape
recording technology
v Volumes of a multivolume data set may be of media types that are consistent
with the recording technology. For example, MEDIA9, MEDIA11, and MEDIA13
volumes can be used with enterprise format 4 (EFMT4) and enterprise encrypted
format 4 (EEFMT4) recording technology.
Sharing an IODF
A system that has no MTL support and that uses an IODF containing MTL
definitions displays error messages such as CBDA384I during IPL or ACTIVATE
processing. Otherwise, the IPL is successful, the UCBs are built correctly, and the
devices are treated as standalone devices. If you have a sysplex, you might not
want to enable MTL on all your systems at the same time. This might be because
of IPL schedules for the various machines (enablement of the full-function code
requires an IPL), or because the work required to update SMS constructs, ACS
routines, and so on, on all systems is more than you can handle at one time.
Tape drives can be shared between systems. A device defined as MTL resident on
one system can be used as a standalone device on a sharing system. This happens
by default if the IODF on the non-MTL system contains no references to MTL.
To support the sharing of IODFs that contain MTL definitions on systems without
MTL support, coexistence is provided in the form of the full-function Tape UIM.
This avoids warning messages otherwise issued during IPL or Activate when the
MTL feature is encountered in the IODF on the coexisting system. The result is a
tape device recognized and initialized as a standalone, non-ATLDS, drive.
Build on these models to create your own plans. List the tasks in detail, and
include:
v Interdependencies between tasks
v External dependencies
v The person responsible for each task
v A project schedule, including completion dates for each task
v Checkpoints to evaluate progress of tasks
v A written agreement of ownership and support of all areas involved
v Regular reviews of status and progress to date
v A way to modify the plan if required
v Management commitment to the plan
These sample plans cover items collected from numerous installations to provide a
complete list.
Table 40. Enabling the system-managed software base
ENABLING THE SYSTEM-MANAGED SOFTWARE BASE   Dependencies   Start Date   End Date   Evaluation Dates   Responsible Person
Install DFSMS
Install DFSORT
Install RACF
Cache user catalogs in the catalog address
space
Cache VSAM buffers in hiperspace
Set up ISMF Storage Administrator options
Use ISMF cache support
Use ISMF media support
Use ISMF to identify data sets that cannot be
system-managed
Use system determined block size
The DFSMS product tape contains a set of sample ACS routines. This appendix
contains sample definitions of the SMS classes and groups that are used in the
sample ACS routines.
You can base your SMS configuration on these routines, modifying them as
needed.
Table 45 summarizes the attributes assigned to each data class for the sample SMS
ACS routines.
Table 45. Sample Data Classes for Data Sets
Attributes VSAM Data Classes
NAME DIRECT ENTRY KEYED LINEAR
RECORG RR ES KS LS
SPACE AVGREC U U U U
SPACE AVG VALUE 4096 4096 4096 4096
SPACE PRIMARY 100 100 100 100
SPACE SECONDARY 100 100 100 100
VOLUME COUNT 1 1 1 1
Attributes Tape Mount Management Data Classes
NAME NONTMM TMMACTV TMMBKUP TMMTEMP
SPACE AVGREC — M M M
SPACE PRIMARY — 200 200 200
SPACE SECONDARY — 20 20 50
VOLUME COUNT — 10 10 10
Attributes Tape Data Classes
NAME TAPACTV TAPBKUP TAPOSITE TAPTEMP
COMPACTION Y Y Y Y
MEDIA TYPE MEDIA1 MEDIA1 MEDIA1 MEDIA1
RECORDING TECHNOLOGY 36TRACK 36TRACK 36TRACK 36TRACK
Attributes Generation Data Set and Sequential Data Classes
NAME GDGF80 GDGV104 DATAF DATAV
RECFM F V FB VB
LRECL 80 104 80 255
SPACE AVGREC K M U U
SPACE AVG VALUE 80 104 80 255
SPACE PRIMARY 10 5 5000 5000
See SAMPLIB for the sample data class ACS routine. This routine handles data set
allocations on DASD and tape.
The routine first handles data allocations on DASD volumes. It allows users to
specify any valid data class when allocating data sets on DASD.
If a user has not specified the data class, and is allocating a VSAM data set, the
routine assigns a data class according to the record organization of the data set.
Non-VSAM data sets are assigned a data class according to the LLQ of the data set
name. Separate classes are assigned for load libraries, source code libraries with
fixed-length records, source code libraries with variable-length records, listing data
sets, sequential data sets with fixed-length records, and sequential data sets with
variable-length records. If the LLQ does not match those defined in the filter lists
at the beginning of the routine, no data class is assigned to the data set.
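A minimal sketch of that logic follows. The FILTLIST patterns are illustrative
assumptions, not the actual SAMPLIB routine; the VSAM data class names come
from Table 45:

```
PROC DATACLAS
  FILTLIST LOADLIB INCLUDE('LOAD','LOADLIB')
  FILTLIST LISTING INCLUDE('LIST','LISTING','OUTLIST')
  IF &DATACLAS NE '' THEN            /* honor a user-specified class */
    SET &DATACLAS = &DATACLAS
  ELSE
    SELECT
      WHEN (&DSORG = 'VS')           /* VSAM: choose by record org   */
        SELECT (&RECORG)
          WHEN ('KS') SET &DATACLAS = 'KEYED'
          WHEN ('ES') SET &DATACLAS = 'ENTRY'
          WHEN ('RR') SET &DATACLAS = 'DIRECT'
          WHEN ('LS') SET &DATACLAS = 'LINEAR'
        END
      WHEN (&LLQ = &LOADLIB) SET &DATACLAS = 'LOADLIB'
      WHEN (&LLQ = &LISTING) SET &DATACLAS = 'LISTING'
      OTHERWISE                      /* no match: assign no class    */
    END
END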
Next, the routine handles tape data set allocations. Storage administrators and
system programmers are allowed to use the NONTMM data class, which is tested
for in the storage class routine so that these allocations are not redirected to the
tape mount management buffers.
All remaining tape allocations are assigned data classes that the subsequent
routines direct to appropriate pool storage groups. These data sets are categorized
as temporary based on data set type, backup based on program name, or active.
SAMPLIB also contains the sample data class ACS routine for the permanent
milestone.
Table 46 summarizes the attributes assigned to each storage class for the sample
SMS ACS routines.
Table 46. Sample Storage Classes for Data Sets
Attributes General Storage Classes
NAME STANDARD GSPACE NONVIO NONSMS
AVAILABILITY STANDARD STANDARD STANDARD STANDARD
ACCESSIBILITY STANDARD STANDARD STANDARD STANDARD
GUARANTEED SPACE NO YES NO NO
GUARANTEED SYNCHRONOUS WRITE NO NO NO NO
Attributes High Performance Storage Classes
NAME MEDIUM FAST FASTREAD FASTWRIT
DIRECT MILLISECOND RESPONSE 10 5 5 5
DIRECT BIAS — — R W
SEQUENTIAL MILLISECOND RESPONSE 10 5 5 5
SEQUENTIAL BIAS — — R W
See SAMPLIB for the sample storage class ACS routine. This routine handles data
set allocations on DASD and tape.
The routine first ensures that no storage class is assigned for data on devices not
considered valid, for data on migration level 1 or 2 storage, for tape data, and for
system data sets. Storage administrators and system programmers are also allowed
to specify the NONSMS storage class during allocation, and this routine ensures
that no storage class is assigned. These data sets are not system-managed.
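A sketch of that test follows; the user IDs echo the AUTH_USER filter list used in
the earlier samples, and this is not the actual SAMPLIB routine:

```
PROC STORCLAS
  FILTLIST AUTH_USER INCLUDE('SYSPROG1','SYSPROG2','STGADMIN','SYSADMIN')
  /* Authorized users can request a non-system-managed allocation */
  IF &STORCLAS = 'NONSMS' && &USER = &AUTH_USER THEN
    SET &STORCLAS = ''               /* data set is not SMS-managed */
  ELSE
    SET &STORCLAS = 'STANDARD'
END
```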
Next, the routine assigns storage classes to CICS, DB2, and IMS database data.
v The DBCRIT storage class, used for dual copy, is assigned to selected CICS, DB2,
and IMS data.
v The FAST storage class, used for must-cache data, is assigned to selected CICS
and IMS data.
v The FASTWRIT storage class, used for DASD fast write, is assigned to selected
CICS, DB2, and IMS data.
v The MEDIUM storage class, used to provide better than average performance, is
assigned to some CICS data.
All other CICS, DB2, IMS, and miscellaneous data sets are assigned the
STANDARD storage class.
SAMPLIB also contains the sample storage class ACS routines for the temporary
and permanent milestones.
Table 47 summarizes the attributes assigned to each management class for the
sample ACS routines.
Exception: This table does not include the management class attributes for objects.
All object attributes are allowed to default to blanks.
Table 47. Sample Management Classes for Data Sets
Attributes General Management Classes
NAME STANDARD STANDEF INTERIM EXTBAK
EXPIRE AFTER DAYS NON-USAGE NOLIMIT NOLIMIT 3 NOLIMIT
EXPIRE AFTER DATE/DAYS NOLIMIT NOLIMIT 3 NOLIMIT
RETENTION LIMIT NOLIMIT NOLIMIT 3 NOLIMIT
PARTIAL RELEASE YES NO YES IMMED COND IMMED
MIGRATE PRIMARY DAYS NON-USAGE 15 2 3 15
LEVEL 1 DAYS NON-USAGE 60 15 60 60
COMMAND OR AUTO MIGRATE BOTH BOTH BOTH BOTH
# GDG ELEMENTS ON PRIMARY — — 1 —
ROLLED-OFF GDS ACTION — — EXPIRE —
BACKUP FREQUENCY 0 1 1 0
NUMBER BACKUP VERSIONS, DATA EXISTS 2 2 2 5
NUMBER BACKUP VERSIONS, DATA DELETED 1 1 1 1
SAMPLIB contains the sample management class ACS routine. This routine
handles data set allocations on DASD and tape.
The routine first handles tape data sets. Data sets that are recalled or recovered by
DFSMShsm, that originally were written to the tape mount management buffer, are
assigned the STANDARD or GDGBKUP management class, as appropriate. The
storage group routine ensures that these data sets are not recalled to the buffer, but
are placed in one of the standard pool storage groups.
Storage administrators and system programmers are allowed to assign any valid
management class.
The remainder of this routine handles standard new DASD data set allocations.
Database data sets and generation data sets are assigned separate management
classes to ensure special treatment.
All other data sets are assigned the STANDARD management class. These data
sets are backed up and migrated by DFSMShsm using standard management
criteria.
SAMPLIB also contains a sample management class ACS routine for the permanent
milestone.
Table 48 summarizes the attributes assigned to each storage group for the sample
SMS ACS routines.
Table 48. Sample DASD Storage Groups
Attributes Primary and Large Storage Groups
NAME PRIME80 PRIME90 LARGE80 LARGE90
TYPE POOL POOL POOL POOL
AUTO MIGRATE YES YES YES YES
AUTO BACKUP YES YES YES YES
AUTO DUMP YES YES YES YES
DUMP CLASS ONSITE, OFFSITE ONSITE, OFFSITE ONSITE, OFFSITE ONSITE, OFFSITE
HIGH THRESHOLD 95 95 75 75
LOW THRESHOLD 80 80 60 60
GUARANTEED BACKUP FREQUENCY 15 15 15 15
SMS VOLUME OR STORAGE GROUP STATUS ENABLE ENABLE ENABLE ENABLE
Attributes Tape Mount Management Storage Groups
NAME TMMBUF80 TMMBUF90 TMMBFS80 TMMBFS90
TYPE POOL POOL POOL POOL
AUTO MIGRATE INTERVAL INTERVAL INTERVAL INTERVAL
AUTO BACKUP YES YES YES YES
AUTO DUMP YES YES YES YES
DUMP CLASS ONSITE, OFFSITE ONSITE, OFFSITE ONSITE, OFFSITE ONSITE, OFFSITE
HIGH THRESHOLD 95 95 75 75
LOW THRESHOLD 0 0 0 0
GUARANTEED BACKUP FREQUENCY NOLIMIT NOLIMIT NOLIMIT NOLIMIT
SMS VOLUME OR STORAGE GROUP STATUS ENABLE ENABLE ENABLE ENABLE
Attributes Database and VIO Storage Groups
NAME CICS DB2 IMS VIO
TYPE POOL POOL POOL VIO
VIOMAXSIZE — — — 20MB
VIO UNIT — — — 3380
SAMPLIB contains the sample storage group ACS routine. This routine handles
DASD data and tape allocations that are redirected to DASD using tape mount
management techniques. It does not assign tape storage groups.
Filter lists are used to identify production databases for CICS, DB2, and IMS, and
the storage classes assigned to them.
The routine then handles temporary DASD data set allocations. Temporary data
sets smaller than 285MB are eligible for VIO; those that exceed the maximum size
allowed by the VIO storage group are allocated in the primary storage groups
PRIME90 or PRIME80. The routine also checks the assigned storage class, so that
only data sets with the STANDARD storage class are eligible for VIO. Because the
storage class routine assigns the NONVIO storage class to temporary VSAM and
DFSORT work data sets, those data sets are never placed in the VIO storage
group; instead, the OTHERWISE statement at the end of the routine assigns them
to the primary storage groups.
Next, the routine places CICS, DB2, and IMS production databases in the
corresponding storage group.
Most data allocations are handled by the last two steps. Data sets 285MB or larger,
including temporary data sets, are placed in the large storage groups. All other
data sets are placed in the primary storage groups.
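The selection order described above might look like the following in the ACS language. This is a sketch, not the SAMPLIB routine: the filter-list masks identifying production databases are hypothetical, and the 20MB VIO threshold is assumed from the VIOMAXSIZE attribute in Table 48.

```
PROC STORGRP
  /* Hypothetical masks identifying production databases */
  FILTLIST CICSPROD INCLUDE(CICSP.**)
  FILTLIST DB2PROD  INCLUDE(DB2P.**)
  FILTLIST IMSPROD  INCLUDE(IMSP.**)
  SELECT
    /* Small temporary data sets with the STANDARD storage class
       are eligible for VIO; NONVIO data sets fall through       */
    WHEN (&DSTYPE = 'TEMP' && &STORCLAS = 'STANDARD' &&
          &MAXSIZE <= 20MB)
      SET &STORGRP = 'VIO','PRIME90','PRIME80'
    /* Production databases go to their own storage groups */
    WHEN (&DSN = &CICSPROD) SET &STORGRP = 'CICS'
    WHEN (&DSN = &DB2PROD)  SET &STORGRP = 'DB2'
    WHEN (&DSN = &IMSPROD)  SET &STORGRP = 'IMS'
    /* Data sets 285MB or larger, including temporary ones */
    WHEN (&MAXSIZE >= 285MB)
      SET &STORGRP = 'LARGE90','LARGE80'
    /* All other data sets, including temporary VSAM and DFSORT
       work data sets carrying the NONVIO storage class          */
    OTHERWISE
      SET &STORGRP = 'PRIME90','PRIME80'
  END
END
```

Listing several storage groups on one SET statement lets SMS try each group in turn if the preceding ones cannot satisfy the allocation.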
SAMPLIB also contains the sample storage group ACS routines for the activating,
temporary, and permanent milestones.
In DFSMS, you can use ACS routines instead of some installation exits. For
example, rather than writing an installation exit to standardize JCL or to enforce
naming standards, consider performing those checks in your ACS routines.
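As one illustration, a storage class routine can reject allocations that violate a naming standard, work that might otherwise be done in a JCL installation exit. The list of approved high-level qualifiers and the message text here are hypothetical.

```
PROC STORCLAS
  /* Hypothetical list of approved high-level qualifiers */
  FILTLIST VALID_HLQ INCLUDE('PAYROLL','ENGR','TSO')
  SELECT
    /* Fail allocations that violate the naming standard,
       instead of rejecting them in an installation exit */
    WHEN (&HLQ NE &VALID_HLQ)
      DO
        WRITE 'DATA SET NAME VIOLATES INSTALLATION STANDARDS'
        EXIT CODE(16)
      END
    OTHERWISE
      SET &STORCLAS = 'STANDARD'
  END
END
```

A nonzero return code from the EXIT statement in the storage class routine causes the allocation to fail, and the WRITE statement returns the message to the user.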
Before you implement DFSMS, review your existing exits. If you continue to use
existing installation and user exits, review their function and order of execution
before designing the ACS routines. This ensures that the decisions made in the
ACS routines are not unintentionally overridden.
The following tables list and describe the DFSMSdfp, DFSMShsm, DFSMSdss, and
MVS installation exits to review. They also indicate which exits are used for
system-managed and non-system-managed data sets.
Related Reading: For detailed information on these exits, see the following
publications:
v z/OS DFSMS Installation Exits
v z/OS DFSMSdfp Storage Administration
v z/OS DFSMShsm Implementation and Customization Guide
v z/OS MVS Installation Exits
If you experience difficulty with the accessibility of any z/OS information, send a
detailed message to the Contact z/OS web page (www.ibm.com/systems/z/os/
zos/webqs.html) or use the following mailing address.
IBM Corporation
Attention: MHVRCFS Reader Comments
Department H6MA, Building 707
2455 South Road
Poughkeepsie, NY 12601-5400
United States
Accessibility features
Accessibility features help users who have physical disabilities such as restricted
mobility or limited vision use software products successfully. The accessibility
features in z/OS can help users do the following tasks:
v Run assistive technology such as screen readers and screen magnifier software.
v Operate specific or equivalent features by using the keyboard.
v Customize display attributes such as color, contrast, and font size.
Each line starts with a dotted decimal number; for example, 3 or 3.1 or 3.1.1. To
hear these numbers correctly, make sure that the screen reader is set to read out
punctuation.
The dotted decimal numbering level denotes the level of nesting. For example, if a
syntax element with dotted decimal number 3 is followed by a series of syntax
elements with dotted decimal number 3.1, all the syntax elements numbered 3.1
are subordinate to the syntax element numbered 3.
Certain words and symbols are used next to the dotted decimal numbers to add
information about the syntax elements. Occasionally, these words and symbols
might occur at the beginning of the element itself. For ease of identification, if the
word or symbol is a part of the syntax element, it is preceded by the backslash (\)
character. The * symbol is placed next to a dotted decimal number to indicate that
the syntax element repeats. For example, syntax element *FILE with dotted decimal
number 3 is given the format 3 \* FILE. Format 3* FILE indicates that syntax
element FILE repeats. Format 3* \* FILE indicates that syntax element * FILE
repeats.
The following symbols are used next to the dotted decimal numbers.
? indicates an optional syntax element
The question mark (?) symbol indicates an optional syntax element. A dotted
decimal number followed by the question mark symbol (?) indicates that all
the syntax elements with a corresponding dotted decimal number, and any
subordinate syntax elements, are optional. If there is only one syntax element
with a dotted decimal number, the ? symbol is displayed on the same line as
the syntax element, (for example 5? NOTIFY). If there is more than one syntax
element with a dotted decimal number, the ? symbol is displayed on a line by
itself, followed by the syntax elements that are optional. For example, if you
hear the lines 5 ?, 5 NOTIFY, and 5 UPDATE, you know that the syntax elements
NOTIFY and UPDATE are optional. That is, you can choose one or none of them.
The ? symbol is equivalent to a bypass line in a railroad diagram.
! indicates a default syntax element
The exclamation mark (!) symbol indicates a default syntax element. A dotted
decimal number followed by the ! symbol and a syntax element indicate that
the syntax element is the default option for all syntax elements that share the
same dotted decimal number. Only one of the syntax elements that share the
dotted decimal number can specify the ! symbol. For example, if you hear the
lines 2? FILE, 2.1! (KEEP), and 2.1 (DELETE), you know that (KEEP) is the
default option for the FILE keyword.
* indicates a syntax element that can be repeated
The asterisk (*) symbol indicates that a syntax element can be repeated. For
example, if you hear the line 5.1* data area, you know that you can include
more than one data area or you can include none. If you hear the lines 3*,
3 HOST, and 3 STATE, you know that you can include HOST, STATE, or both.
Notes:
1. If a dotted decimal number has an asterisk (*) next to it and there is only
one item with that dotted decimal number, you can repeat that same item
more than once.
2. If a dotted decimal number has an asterisk next to it and several items
have that dotted decimal number, you can use more than one item from the
list, but you cannot use the items more than once each. In the previous
example, you can write HOST STATE, but you cannot write HOST HOST.
3. The * symbol is equivalent to a loopback line in a railroad syntax diagram.
+ indicates a syntax element that must be included
The plus (+) symbol indicates a syntax element that must be included at least
once. A dotted decimal number followed by the + symbol indicates that the
syntax element must be included one or more times. That is, it must be
included at least once and can be repeated. For example, if you hear the line
6.1+ data area, you must include at least one data area. If you hear the lines
2+, 2 HOST, and 2 STATE, you know that you must include HOST, STATE, or
both. Similar to the * symbol, the + symbol can repeat a particular item if it is
the only item with that dotted decimal number. The + symbol, like the *
symbol, is equivalent to a loopback line in a railroad syntax diagram.
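Taken together, the conventions above can be combined, as in this hypothetical fragment:

```
3? FILE
3.1! (KEEP)
3.1 (DELETE)
4+ data-set-name
```

Reading it aloud: the FILE keyword is optional (the ? on line 3); if FILE is coded, (KEEP) is the default of the two alternatives (KEEP) and (DELETE) (the ! on line 3.1); and at least one data-set-name is required and can be repeated (the + on line 4).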
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
Site Counsel
2455 South Road
Poughkeepsie, NY 12601-5400
USA
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
Applicability
These terms and conditions are in addition to any terms of use for the IBM
website.
Personal use
You may reproduce these publications for your personal, noncommercial use
provided that all proprietary notices are preserved. You may not distribute, display
or make derivative work of these publications, or any portion thereof, without the
express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make
derivative works of these publications, or reproduce, distribute or display these
publications or any portion thereof outside your enterprise, without the express
consent of IBM.
Rights
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE
PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF
MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A
PARTICULAR PURPOSE.
Depending upon the configurations deployed, this Software Offering may use
session cookies that collect each user’s name, email address, phone number, or
other personally identifiable information for purposes of enhanced user usability
and single sign-on configuration. These cookies can be disabled, but disabling
them will also eliminate the functionality they enable.
If the configurations deployed for this Software Offering provide you as customer
the ability to collect personally identifiable information from end users via cookies
and other technologies, you should seek your own legal advice about any laws
applicable to such data collection, including any requirements for notice and
consent.
For more information about the use of various technologies, including cookies, for
these purposes, see IBM’s Privacy Policy at ibm.com/privacy and IBM’s Online
Privacy Statement at ibm.com/privacy/details in the section entitled “Cookies,
Web Beacons and Other Technologies,” and the “IBM Software Products and
Software-as-a-Service Privacy Statement” at ibm.com/software/info/product-
privacy.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available at Copyright and Trademark
information (www.ibm.com/legal/copytrade.shtml).