Using The VTS To Optimize Tape in z/OS
by Gerard Nicol on November 10, 2014
As a dyslexic trying to navigate my way through high school, I had no hope of completing the textbooks that we
were expected to read. When it came to exam time, my only option was to ask my classmates what the book was
about, and based upon their answers sit down for two hours and write an essay on a book I had never read.
I had no idea at the time that I was dyslexic. In fact, I only discovered this in my 30s when I watched a short news
clip that showed how a printed page looks to a person with dyslexia: The page shimmers and makes it all but
impossible to read anything more than a sentence without losing concentration.
I had also long forgotten that I had to interview my 15-year-old classmates, but was reminded of this when I was
reading Malcolm Gladwell’s book, David and Goliath, something that I can read in my 40s presumably because
my adult brain doesn't have the additional challenge of awkwardly interacting with girls.
According to Gladwell, the majority of successful entrepreneurs are dyslexic, including the entrepreneurial
poster-boy, Richard Branson. Apparently, we dyslexics (Sir Richard and I) have been inoculated by ridicule, learn
to harass others for critical information (that we can't read for ourselves), and have to be always on the
lookout for people who are also making it up as they go along, because we can't risk writing a Merchant of Venice
essay based upon misinformation.
Anyway, there is a point to my discussing my dyslexia here, and it relates to my highly tuned ability to sniff out
when people are making stuff up to skirt around the fact that they don't have a good answer to the questions you’re
asking them.
An example of this was the last time I worked with IBM to implement their GDPS (Geographically Dispersed
Parallel Sysplex) high availability replication platform.
IBM had sold my employer three sets of disks, Primary DASD for production workload, Secondary DASD for
high availability replication and Tertiary DASD for testing disaster recovery IPLs without interference to high
availability.
Each time we met with IBM, they encouraged management to disable replication to the Secondary DASD so it
was not necessary to copy the Secondary data to Tertiary disk before IPLing the DR system.
Each time I pointed out to IBM that z/OS didn't come with special Karmic protection, which ensured that while
performing a DR test you were completely indemnified from a real disaster, the argument would be countered with
the assertion that outages are uncommon and besides, several other customers (who were named, undoubtedly
without their knowledge or consent) were also testing off Secondary disk.
I'm not completely sure why IBM was so adamant that we not use the Tertiary DASD they had sold us. My
suspicion is that there was some anomaly in GDPS at the time that made Tertiary IPL a problem, or that perhaps it
was a level of complexity too far, given that GDPS, in keeping with most IBM software, wasn't all that intuitive or
forgiving.
At the time, my employer, in addition to replication, was also backing up to tape and there was no provision for a
one-pack system. The art of IPLing from tape had been long forgotten.
The reality of the situation, like at way too many z/OS shops around the world, was that, if all z/OS images were
down (as happened from time to time) and each image failed to IPL due to something as simple as a missing
comma in a PARMLIB member, the enterprise would have been up the proverbial creek without a paddle.
I'm not proud to say it, but trying to explain that exposure to my workmates or to management, who lacked the
reasoning skills to make anything but a consensus decision, really was beyond the limits of my personal interest in
the enterprise's welfare.
Today, after several years, I see many z/OS sites rolling out Virtual Tape Servers (VTS). The rationale behind
these units is to move backup tapes, which are an integral part of the z/OS fabric, onto disk, where problems such
as inefficient tape usage can be managed without the need to look at jobs that nobody has touched for 20 years.
Historically, we used tape on the mainframe for archival (DF/HSM), data sets that stood next to no chance of ever
being read, or data sets too big (or too unpredictable in size) to fit on disk.
In a virtualized tape environment, hypothetically, this list of candidate data sets could also be extended to almost
any data set that didn't require non-sequential access, such as a VSAM file or a PDS.
If you listen to the folks pushing out VTS units, you would think they are the magic bullet that can rid the
enterprise of the scourge of native tape once and for all; the VTS is the “smoke and mirrors” that allows tape to
remain an integral part of the z/OS fabric without having any tape. It is the aspartame that allows you to drink
Coca-Cola all day without having to run for 10 miles after work to burn off the sugar calories.
But like the brain tumor that aspartame will ultimately give you, the VTS is not without its downsides.
If you have only one VTS, you need to back it up to native tape and while you're at it, you probably should send
those tapes offsite, too.
If your VTS ever crashes, you will be without your entire tape library. Your DF/HSM recalls won't work and
every job that needs a tape will fail until you restore your entire VTS, from the very first tape to the last.
If your VTS is in the same data center as your Primary DASD and you have to IPL at a secondary site, rather than
being able to IPL your system and then throw a single tape at any tape mount that is issued, you will have to wait
until your entire VTS is restored before you can mount even the first tape.
To make things a little less dicey, those who have extra millions can replicate their VTS, but replicating a VTS
isn't backing up a VTS.
I'm not sure what protections the various vendors have in their VTS subsystems, but one would hope there is
something to stop a malicious employee (like the FAA guy who lost it because he was being transferred from
Chicago to Hawaii) from uncataloging all of the tape data sets to delete every virtualized tape backup before
anybody can stop the disk from being overwritten.
Even if these safeguards are in place, sometimes disk replication screws up. Google discovered this when they
upgraded their proprietary Gmail Ghetto Disk solution and had to restore Gmail from tape.
At least Google had the good sense to back up people's pointless Gmail to tape. One would hope that the same
common sense is also applied to all corporate z/OS data that IBM loves to tell us is the most critical information in
the world.
Either way, before you unplug that last native tape drive, find a dyslexic to ask a few critical questions of the guy
selling you the VTS. If we can write a two-hour essay on Shakespeare without reading the book, we can tell you
when the devil is citing scripture for his own purpose.
Virtual tape systems can provide significantly more functionality for tape data sets than physical tape systems.
Virtual tape disk arrays provide levels of scalability, performance and flexibility for mainframe tape operations
that are simply unavailable with physical tape. In many cases, a single virtual tape array can take the place of
multiple tape subsystems to meet functionality requirements for batch processing, data protection, Hierarchical
Storage Management (HSM) migrations and archiving. In addition, they can extend Disaster Recovery (DR)
capability to tape workloads with array-based replication.
Data Consistency
One area that can be particularly problematic for mainframe users is keeping data consistent throughout the data
protection process. Data must be replicated to protect it from corruption or deletion, and administrators must be
confident that it’s available, accurate and recoverable. In addition, they must know the relationship of the
replicated data with current transactions and changes. To put it another way, the data itself is important, but
equally important is the administrator’s comfort level that the recovery data can be matched up with current
activities. Re-syncing disk and tape data sets is a difficult job.
With different functions operating on mainframe disk and tape or virtual tape, it’s up to the administrator to
configure replication processes to accommodate their specific needs. Disk and tape replication processes are
separate, and when disk and tape are out of sync, there can be significant consequences. In fact, some mainframe
users are required to declare a disaster if tape data sets fall behind disk by even a few minutes.
Imagine the following scenario based on a main data center (DC1) housing both disk and tape data sets and a
remote data center (DC2) for offsite replication. DASD is doing synchronous replication and tape is doing
asynchronous replication:
Disk ahead of tape: Disk is replicated from DC1 to DC2 on every I/O, while tape is replicated from DC1 to DC2
every 15 minutes. As a result, disk data is ahead of tape data. The disk catalogs are current, while the most recent
tape is 15 minutes old.
State of this environment: Your catalogs have entries pointing to tapes that aren’t yet replicated; in addition, your
Migration Level 2 (ML2) resident data sets are gone. In this scenario, you have both data loss and data integrity
problems.
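The timing gap in this scenario can be sketched in a few lines of Python. This is a hypothetical simulation, not any vendor's replication API: catalog updates ride the synchronous disk mirror and arrive at DC2 immediately, while the tapes they point to wait for the next 15-minute cycle.

```python
# Hypothetical sketch: synchronous disk replication vs. 15-minute
# asynchronous tape replication. All names are illustrative.

disk_catalog_dc1 = {}   # data set name -> tape VOLSER; mirrored per I/O
tapes_dc1 = set()       # tape volumes written at the primary site
tapes_dc2 = set()       # tape volumes that have reached the remote site

def write_tape(dsn, volser):
    """A job writes a data set to tape; the catalog update mirrors at once."""
    tapes_dc1.add(volser)
    disk_catalog_dc1[dsn] = volser

def tape_replication_cycle():
    """Tape content moves to DC2 only when the periodic cycle runs."""
    tapes_dc2.update(tapes_dc1)

for minute in range(15):
    if minute % 15 == 0:
        tape_replication_cycle()            # last cycle ran at minute 0
    if minute == 1:  write_tape("PROD.BACKUP.G0001", "VT0001")
    if minute == 5:  write_tape("HSM.ML2.G0042", "VT0042")
    if minute == 14: write_tape("PROD.BACKUP.G0002", "VT0002")

# Disaster at minute 14: DC2's (synchronously mirrored) catalog points at
# tapes DC2 never received -- data loss plus a data integrity problem.
orphans = {d: v for d, v in disk_catalog_dc1.items() if v not in tapes_dc2}
print(sorted(orphans))
# → ['HSM.ML2.G0042', 'PROD.BACKUP.G0001', 'PROD.BACKUP.G0002']
```

Every entry in `orphans` is a data set you cannot restore at DC2, even though DC2's catalog says it exists.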
There are applications that keep disk volumes in sync; of great value would be a consistency capability that
encompassed both disk and tape. This would make all data universally consistent, and enable disk and tape to be
managed in the same consistency group, greatly improving recoverability. A key benefit would be the
deterministic characteristic, providing assurance to the administrator that all data was in sync.
The method for achieving this would require a synchronous replication solution to a highly available remote site.
With data sets in the same consistency group, an automated replication function would send the writes to both disk
and tape targets. For additional protection, asynchronous replication to a third site would be helpful, enabling
organizations to implement the “star” replication configuration for an out-of-region data center. Having a single
replication method and combined consistency grouping for both disk and tape would improve data availability and
simplify recovery operations.
To enhance ease-of-use, an automated failover application could be applied to the universally consistent disk and
tape data, pulling all the pieces together to get business back in operation quickly. With this consistent approach to
data recovery, mainframe administrators could be assured of no loss of data such as ML2 and DBMS logs, while
ensuring the fastest possible business resumption.
Knowing that replicated disk and tape data sets are consistent would take a big weight off the administrator’s
mind. If a disaster occurred, it would be much simpler to find the correct tape volumes and begin restore, and
administrators would have peace of mind knowing they could recover faster because of the universal data
consistency. In addition, it would provide assurance of recovery for regulatory compliance, lifting another burden.
As you evaluate disk-based mainframe virtual tape solutions, be sure to check on other attributes. For example,
will the solution work seamlessly and transparently with your existing applications and business processes, or does
it require code changes, or alteration of production operations or JCL? If it interrupts current applications and
processes, the disruption may not be worth the effort. Also, a virtual tape platform with multiple storage options
can handle various tape processes while easing management. Performance is always important, along with
scalability, security and encryption.
Enterprise Insights: Disk-Based Virtual Tape Speeds Tape Recycles
by Jim O’Connor in Enterprise Tech Journal on September 10, 2012
The most common physical tape operations are backup, Hierarchical Storage Management (HSM) migration, data
archiving, batch processing and work tapes that are used for temporary staging, Syncsort work files, transaction
log files, and other tasks. Media management is one challenge of the tape environment; it requires planning, and
tape management tasks consume IT staff time.
To make maximum use of the tape media, many data sets are stacked onto a single physical tape. When tape space
utilization is low, a recycle operation (similar to a disk defragmentation program) is performed. Once a certain
percentage of the files on a tape are scratched, HSM can perform a tape recycle. This involves purging the
scratched data and combining current data with other tapes involved in the recycle to create a new physical tape.
This process is designed to minimize the wasted sections of tape and optimize media utilization.
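The recycle decision described above can be sketched as follows. The threshold and tape contents are hypothetical, not real HSM parameters: once the scratched fraction of a stacked tape passes the threshold, the tape becomes a recycle candidate and its live data sets are consolidated onto new media.

```python
# Illustrative recycle-selection sketch; figures are hypothetical.

RECYCLE_THRESHOLD = 0.60  # recycle when 60% of a tape's data sets are scratched

tapes = {
    "A00001": {"live": 12, "scratched": 28},
    "A00002": {"live": 35, "scratched": 5},
    "A00003": {"live": 4,  "scratched": 36},
}

def scratched_fraction(tape):
    total = tape["live"] + tape["scratched"]
    return tape["scratched"] / total

candidates = [v for v, t in tapes.items()
              if scratched_fraction(t) >= RECYCLE_THRESHOLD]

# Consolidation: the surviving data sets from all candidates are re-stacked
# onto fewer, fuller tapes, reclaiming the wasted sections.
live_to_move = sum(tapes[v]["live"] for v in candidates)
print(candidates, live_to_move)  # ['A00001', 'A00003'] 16
```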
With physical tape, the recycle process involves multiple tape drives; if some of the drives are in use in other
processes, the recycle task must wait until those drives are free. This tape drive contention slows down the recycle
process. In addition, reading the tape can take a long time. Together, these issues can result in recycles taking
longer than the window allotted, creating delays and contention problems across the environment.
When using virtual tape servers, HSM recycles involve additional complexity, since both the physical and the
logical tape files must be included. With virtual tape servers, HSM data sets are written to cache (the logical
volume), and multiple data sets are then stacked and written out to tape (the physical volume). In this scenario, the
recycle process must be executed twice—once to the physical tape infrastructure and once to the logical HSM file
—which potentially doubles the tape drive contention as well as the delays associated with reading from tapes.
On the other hand, a disk-based virtual tape solution can be much more efficient and effective. First, because no
physical tape is used, there are no actual tape mounts, robotic activity, or tape rewinds to deal with. Data is all
stored on disk, so no tape drive contention issues arise, and there are no constraints for the number of tape drives
available to mount the VOLSERs during the recycle process. Disk is significantly more reliable and can provide
extremely fast throughput—
gigabytes per second—speeding the recycle and enabling completion within the allotted time. It offers greater
flexibility as well; the virtual cartridge size is configurable, so you can set a size based on your specific needs. In
addition, there’s no “double duty” recycle process as is required with virtual tape servers (and outlined
previously).
While physical tape has long been a staple of mainframe data centers, today’s disk-based “tape” technologies offer
significant improvements for tape tasks such as tape recycles. The speed and ease-of-use of a disk-based virtual
tape solution can greatly improve productivity and reduce the number, time, and complexity of IT tasks.
Benefits of Virtual Tape for HSM Migration
by Jim O’Connor in Enterprise Tech Journal on August 15, 2012
In banking, healthcare, aviation, communications and other industries around the globe, companies are focused on
IT efficiency and reducing their Total Cost of Ownership (TCO). Confronted with budget constraints, pressure to
increase service levels, compliance initiatives, concerns about energy consumption and costs, and the ever-present
growth of data volumes, efficient operations are an ongoing challenge.
Data growth is usually a result of multiple factors: an onslaught of content creation and consumption; increasing
electronic transactions; fat media files; large databases; and a more stringent regulatory environment that requires
data to be retained and kept readily accessible for longer periods. This data growth is prompting additional storage
purchases for capacity and to improve performance by increasing the number of spindles. This results in storage
arrays becoming significantly underutilized, which means wasted capacity and bloated acquisition costs.
Hierarchical Storage Management (HSM) migration is an antidote: It can help IT administrators save money on
expensive storage and still keep data highly accessible.
HSM Migration
Recently, the IT universe has entertained a constant stream of chatter about the exciting “new” technique of
automated storage tiering that virtualization seems to have enabled. However, in the mainframe world, IT users
have been leveraging this type of technology since the early ’90s with HSM, which is a policy-based process for
moving data from the most expensive, highest-performance storage pool to less-expensive pools over time, while
keeping data quickly accessible. HSM originated before the advent of Storage Area Networks (SANs), when both
the cost and speed differentials between disk and optical/tape were far greater than today, and when fewer
efficiency technologies existed.
HSM migrates data according to IT-defined policies; generally, more active data is kept on the highest tier and less
active data is moved to lower tiers. The entry of secondary disk into the data center creates an intermediate disk
pool option, and enables a combined pool of disk and tape data that serves as an active archive; data can be backed
up but still available online.
HSM is often configured so that as disks reach a certain capacity threshold, files are automatically migrated to
another storage pool, but leave behind a “stub” file that identifies file attributes and points to the new location.
This stub file ensures quick recall so data access isn’t compromised by the migration. While backup requires a
restore request to bring data back, HSM migration keeps data more available. When a user needs a file, the HSM
application simply locates it and transparently returns it to the user. Migrations can be initiated on demand, too; for
example, to move data that might interfere with backup performance. Other commonly used migration policies are
based on data age, most recent usage, size, user, application, etc.
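The threshold-driven migration just described can be sketched in miniature. The pool sizes, file names, and policy are illustrative only: when the primary pool exceeds its capacity threshold, the least recently used files move down a tier, each leaving a stub that records its attributes and new location for fast recall.

```python
# Minimal sketch of policy-based HSM-style migration; all names hypothetical.

THRESHOLD = 0.80      # migrate when the primary pool is more than 80% full
POOL_CAPACITY = 100   # arbitrary capacity units

files = [  # (name, size, days since last use)
    ("PAYROLL.2023", 40, 10),
    ("SALES.Q1", 30, 200),
    ("LOGS.OLD", 25, 400),
]

primary = {name: (size, age) for name, size, age in files}
stubs = {}    # stub files left behind on the primary tier
ml1 = {}      # the next storage pool down

def used():
    return sum(size for size, _ in primary.values())

# Migrate least-recently-used first until back under the threshold.
for name, (size, age) in sorted(primary.items(), key=lambda kv: -kv[1][1]):
    if used() <= THRESHOLD * POOL_CAPACITY:
        break
    ml1[name] = primary.pop(name)
    stubs[name] = {"size": size, "location": "ML1"}  # pointer for recall

print(sorted(stubs))  # → ['LOGS.OLD']
```

A recall would simply reverse the move: look up the stub, fetch the file from `ml1`, and return it transparently to the user.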
Because mainframes support mission-critical business processes, data availability is paramount; if data managed
by HSM is unavailable, multiple corporate divisions or processes can come to a screeching halt. This can impact
customer services, revenue, and compliance. For example, if an airline’s HSM migration slows data access, it may
affect maintenance tasks, crew scheduling, and any number of processes that impact passenger services. Tape
systems restrict flexibility, as tape drives are often directly connected to individual mainframes, requiring complex
manual intervention to alter data set access. These tasks create delays that impact productivity.
With tape data stored on disk, you can leverage disk-based performance and compression and bypass the first
HSM step (L0 to ML1) and go directly from active data to ML2. With recall times typically less than 2 seconds,
this saves productive time. This method of HSM migration can reclaim both mainframe host CPU cycles and Tier
1 storage space used for the L0 to ML1 step; this keeps all the processing power available to applications, reduces
costs, and frees up computing and storage resources for other tasks. The time savings are significant: 2,000
recalls at 2 seconds each would take just over one hour, instead of the roughly 50 hours typical of physical tape.
That type of difference can have a huge impact, helping users get answers faster and improving both
customer-facing and internal business processes. In addition, tape recycles are faster since stacking is eliminated
and all data is on disk.
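The back-of-the-envelope arithmetic behind that comparison is simple, assuming roughly 2 seconds per disk-based recall versus about 90 seconds for a physical tape mount, locate, and read (both figures are assumptions for illustration):

```python
# Recall-time arithmetic; per-recall times are assumed, not measured.

RECALLS = 2000
DISK_RECALL_S = 2     # assumed virtual tape (disk) recall time
TAPE_RECALL_S = 90    # assumed physical tape recall time

disk_hours = RECALLS * DISK_RECALL_S / 3600
tape_hours = RECALLS * TAPE_RECALL_S / 3600
print(f"{disk_hours:.1f} h vs {tape_hours:.1f} h")  # 1.1 h vs 50.0 h
```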
A Single Solution
A virtual tape disk array can eliminate the challenges of tape handling, such as manual management, physical
movement of cartridges, problems with robotics, questionable reliability, and greater risk of tapes being lost,
stolen, or destroyed for HSM migration and other tape processes such as backup, work tapes, and data archiving.
Optimally, a virtual tape solution that leverages both standard and de-duplication storage in the same device would
minimize management, data center footprint, and energy costs. This type of system could direct tape processes to
the most appropriate storage; for instance, sending backup to de-duplication storage and HSM migration to
standard disk.
As you evaluate disk-based virtual tape solutions, be sure to check on other attributes. For example, will the
solution:
• Work seamlessly and transparently with your existing applications and business processes, or does it require
code changes, altering production operations or Job Control Language (JCL)?
• Improve performance and shrink batch windows?
• Extend Disaster Recovery (DR) capability to tape workloads with array-based replication?
• Improve data protection with advanced security and encryption features?
• Scale easily without complex re-configuration as workloads grow?
Summary
HSM migration provides a method of reducing storage costs by automatically migrating data to less-expensive
storage tiers based on policies. The beauty of a disk-based virtual tape solution for HSM migration is that the right
solution can do the job without consuming mainframe-processing cycles, using less Tier 1 storage, and keeping
data quickly recallable. In addition, virtual tape can bypass the intermediate pool and send data directly from
active Tier 1 storage to tape. Disk-based solutions are also faster, more reliable, and improve data protection.
The delivery of IT services is more difficult today than ever due to several factors. First is the constant growth of
data volumes that not only stress primary storage but also make backup and recovery extraordinarily difficult. This
is exacerbated by users’ much higher expectations for data availability that in turn severely restrict backup
windows and restore times. In addition, the expanding use of virtualization and cloud-based services give users a
taste of immediacy and location-independence they lap up like hungry dogs; virtual machines can be created in
minutes, and data can be accessed via the cloud rapidly from any location, with multiple devices. Mainframe
systems aren’t immune to these new developments and are increasingly being asked to provide unprecedented
services and service levels, particularly in terms of data availability and protection. As the business becomes
accustomed to secure, scalable, efficient backup procedures and instant recovery of applications, mainframe
systems must adjust.
A key challenge for mainframe deployments in delivering these kinds of service levels rests with the extensive use
of tape for backup and recovery, batch processing, Hierarchical Storage Management (HSM), fixed content
archiving, etc. While tape does an excellent job of providing cost-effective processing abilities, enabling
economically attractive, long-term storage and supporting high throughput rates, it can be a slow process due to
the amount of setup required. If you want to recover last year’s payroll data from offsite tape (a fairly common
scenario even these days), you must first locate the right tapes, transport them to your data center, mount them, and
read them back sequentially. This can mean days of delay before the actual recovery can even begin. If you need
more tapes mounted than you have drives available, additional delays occur.
Tape can also be unreliable with its many mechanical parts; tape libraries, tape media, robotic arms, and the like
offer multiple opportunities for errors or failure. The fact that the majority of tape issues these days are human
handling or software errors doesn’t mitigate the point. Tapes aren’t RAID-protected and can easily be lost,
corrupted, or stolen; unfortunately, Disaster Recovery (DR) is an expensive, time-consuming, laborious process
that usually involves a third party. Tape volumes can be replicated remotely, but only with upgraded products and
expensive channel extensions; IP-based replication is impossible for many organizations because of the massive
(and costly) bandwidth needs, not to mention additional software. In addition, the aforementioned data growth
impedes performance of all processes and exacerbates the problems.
Disk-to-disk-to-tape solutions can minimize the mechanical processes, but are faster than standard tape only if the
data resides in the cache. In addition, they’re often proprietary, take a significant amount of management and floor
space, and can’t handle certain applications. In the end, to accommodate the many use cases of mainframe tape,
organizations often find themselves obligated to buy and operate two or three different solutions in parallel,
increasing both equipment and operational costs.
Data volumes that tend to have duplicate blocks can benefit from deduplication, but backup, with its multiple
redundant files, is a prime target. When files are backed up, the same data is copied over and over again,
consuming storage space and clogging network bandwidth. By deduplicating those volumes, companies can save
on storage purchases and free up bandwidth for replication to a DR site. Deduplication can shrink data volumes by
vast amounts—30 times isn’t uncommon—and reduce bandwidth needs by up to 99 percent. These are huge
savings, enabling companies to not only keep backup data onsite (and quickly available) longer, but also enable
remote replication of many data sets instead of just the most critical.
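The savings cited above follow directly from the reduction ratio. A quick sketch, using a hypothetical backup volume and the commonly cited 30:1 ratio for repetitive backup data:

```python
# Deduplication arithmetic; the backup size is a hypothetical example.

backup_tb = 60       # hypothetical nightly backup volume, in TB
dedup_ratio = 30     # 30:1, a commonly cited ratio for redundant backups

stored_tb = backup_tb / dedup_ratio
bandwidth_saved = 1 - stored_tb / backup_tb
print(f"{stored_tb:.0f} TB stored, {bandwidth_saved:.1%} less to replicate")
# → 2 TB stored, 96.7% less to replicate
```

Higher ratios on highly repetitive data push the bandwidth reduction toward the 99 percent figure mentioned above.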
This combined system would be able to handle all mainframe tape use cases in a single solution, all managed from
the same location. It would enable companies to deploy advanced tape replacement and leverage storage tiers
while enjoying the benefits of unified management. Other benefits include:
• Faster, less costly backup and restore. Deduplication of backup data dramatically shrinks the amount of
data, making backup much faster, saving on both storage and transmission costs, getting data to offsite locations
faster, and enabling faster recovery.
• Recovery objectives defined by business need. By using disk and deduplicated storage, companies would no
longer have to restrict their Recovery Point Objectives and Recovery Time Objectives (RPOs/RTOs) to tape’s
limitations; instead, they could be defined according to business need. The time required to write backups to
sequential tape has a direct impact on RPO, since it forces a significant period of time to elapse between backups.
If it takes 24 hours to back up to tape, then a failure or outage will result in at least that much data loss. Similarly,
RTO is dependent on how long the tape recovery process takes, regardless of what the business needs.
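The RPO reasoning in that bullet can be sketched as a bound: data written just after a backup begins is unprotected until the next backup completes, so the worst-case loss is the backup interval plus the backup duration. The figures below are illustrative.

```python
# Worst-case RPO bound for interval-based backup; figures are illustrative.

def worst_case_rpo_hours(backup_duration_h, interval_h):
    """Data written just after a backup starts is unprotected until the
    next backup completes."""
    return interval_h + backup_duration_h

print(worst_case_rpo_hours(24, 24))  # 48: a tape cycle that takes a full day
print(worst_case_rpo_hours(0.5, 1))  # 1.5: disk-class replication cadence
```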
• Improved Total Cost of Ownership (TCO). By combining these tasks, companies would be able to buy and
manage a single system instead of buying three systems and running them in parallel and still provide the most
efficient environment for each task. Backup could go to deduplication storage while highly interactive data could
be directed to high-performance storage. This would save on management costs as well as data center floor space,
power, and cooling. Other key savings would ensue from reducing the costs of tape vaulting, tape purchases, and
software licensing for multiple solutions.
Reputation-damaging corporate scandals, such as the Enron case in 2001, have highlighted the need for stronger
compliance and regulations for publicly listed companies. Compliance regulations—such as Sarbanes-Oxley, the
Health Insurance Portability and Accountability Act (HIPAA), and the Gramm-Leach-Bliley Act—were created
to ensure that financial, customer, and patient information is safeguarded and available for long periods of time.
With the continuing exponential increase of information, regulatory compliance is a big challenge for companies
already seeking to reduce costs. These companies need reliable and affordable methods.
IT staff are challenged to manage the expanding tape storage hierarchy with existing resources and head count,
and ensure efficient utilization and media optimization while showing consistent value to the business. Tape
storage management operations have evolved by automating critical monitoring and analysis encompassing tapes,
robotics, virtual tape systems, and tape management systems.
Unifying and optimizing tape management practices can reduce costs and complexity, improve support, and yield
a powerful, easy-to-use solution. Users should then be able to select the “best” tape technology for their
environment—without restrictions or vendor lock-in.
Tape technology remains the least expensive, most reliable option available for storing large amounts of data for
long periods, and it is an integral part of any compliance strategy, but there are several factors to consider first
for compliance initiatives, such as reducing security threats and controlling costs.
Here are a few points to consider to help you achieve compliance objectives and protect your information while
reducing the demand for management resources.
Cost-effective data storage for long-term archiving. Using tape reduces the cost of data storage by
approximately 30 to 40 percent over physical disk. Even with the newest disk technologies, the cost per gigabyte
on a tape cartridge is still significantly less than keeping the information on disk. Since 2002, vast improvements
in tape technology have appeared, including:
• A much longer media life (the standard lifecycle of tape media is between 15 and 30 years)
• Improved drive reliability
• Higher Mean Time Between Failure (MTBF)—up to 400,000 hours
• Faster data rates
• A more rugged design for transportability.
These advancements have helped ensure information is available and secure in case an audit is necessary for
compliance purposes. Compliance regulations such as HIPAA dictate that information must be kept for at least 10
years, so tape will continue to be relevant to keep information secure and safe in case of an audit.
Storage media savings via tape media virtualization. In a typical tape environment, tape space utilization is well
under 100 percent, which means you aren’t making full use of available space on tape and will need more media as
data increases over time. More tapes require more resources to manage and maintain. Virtual tape technology
dramatically reduces the number of tapes needed by letting you send backups to existing DASD devices as virtual
tape volumes and then stack multiple virtual volumes to a single physical tape drive. This process achieves nearly
100 percent utilization of your physical tapes and significantly reduces the number of tapes being managed,
transported and stored, delivering a big reduction in Total Cost of Ownership (TCO) and increasing the Return on
Investment (ROI).
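The cartridge-count arithmetic behind the stacking claim above can be sketched as follows; the capacities and utilization figures are hypothetical, not drawn from any particular product:

```python
# Stacking arithmetic: low per-tape utilization multiplies cartridge count.
# All figures are hypothetical examples.
import math

data_tb = 50                 # total tape data to store
cartridge_tb = 0.5           # hypothetical native cartridge capacity
utilization_native = 0.30    # typical fill level without stacking
utilization_stacked = 0.98   # near-full tapes after virtual volume stacking

native_tapes = math.ceil(data_tb / (cartridge_tb * utilization_native))
stacked_tapes = math.ceil(data_tb / (cartridge_tb * utilization_stacked))
print(f"{native_tapes} vs {stacked_tapes} cartridges")  # 334 vs 103 cartridges
```

Fewer cartridges means fewer tapes to manage, transport, and store, which is where the TCO reduction comes from.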
Virtual tape technology can be hardware or software-based, but software virtual tape solutions provide all the
benefits of a virtual tape hardware solution while letting you minimize your energy footprint to save space,
cooling, and electrical expenses.
4.11 Summary
The VTS is an excellent choice for storing most tape data, with the ability to handle offsite backups and
interchange data that need to be taken out of the library using the Advanced Function feature. The VTS
utilizes cartridge capacity fully, is easy to use, enables you to run more concurrent tape applications, and in many
cases offers very fast mount times by eliminating physical mounts entirely.
Native tape drives, particularly Magstar 3590 drives, or a TMM implementation may in some cases offer
advantages over the VTS. Remember, however, that directing tape data to different media on a selective basis
requires more tuning effort from the storage administrator than an "all-in-VTS" solution. If you have native
tapes drives or have implemented TMM and want to keep using them for storing part of your tape
data, consider the following points when you decide when to use each:
° The VTS can be used for any sequential data. Evaluate carefully
  before placing data that needs to be taken out of the library, such
  as offsite backups and interchange data, on the VTS, and make sure
  that the time requirements meet your needs. A second VTS at the
  offsite location may also be used for offsite copies. The maximum
  ESCON distance of the VTS is 43 km.
° The VTS offers better response times for scratch mounts than native
tapes. Write-intensive functions, such as backup and dump, benefit
particularly from fast VTS scratch mounts.
° The VTS offers better response times for private mounts than native
tapes, provided the referenced volume resides in the VTS TVC.
Intermediate data sets that are used by several jobs or job steps in
succession benefit from this and are good candidates for the VTS.
° With VTS, we recommend that you write only one data set per volume if
you are not going to use all the data sets on the volume at once.
When you have more than one data set on a logical volume, the VTS
recalls the whole volume in the tape volume cache even if you need
only one of the data sets. This lengthens mount times and fills the
tape volume cache unnecessarily.
° Small data sets, when stored one per logical volume, quickly increase
the number of logical volumes you have to insert in the VTS. This
increases the number of entries in the Library Manager database, in
the VTS database, in your tape management system catalog, and in the
SMS volume catalog. If you have already implemented TMM, you will
need to evaluate the costs of managing these small data sets in the
VTS. The maximum of 150,000 logical volumes must also be taken
into consideration.
° To protect your data against media failures, do not store a data set
and its backup in the same VTS because you have no control over which
logical volumes end up on which stacked volumes. For example, a data
set on a DFSMShsm ML2 volume and its backup on a DFSMShsm backup
volume may end up on the same stacked volume.
° If you need to append to a data set daily, native tape drives
may offer better performance, because a logical volume has to be
recalled into the TVC before it can be appended. This will, of
course, depend on the size of your TVC and the activity that has taken
place since the last append.
° Some functions restrict the media type you can use. For example,
DFSMShsm TAPECOPY requires that the input and output volumes are of
the same media type. Thus your physical cartridges may dictate the
VTS logical volume type you can use.
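Two of the considerations above interact: the 150,000 logical volume limit argues for packing more data sets per logical volume, while whole-volume recalls argue for fewer. The sketch below, using purely hypothetical workload numbers, shows that trade-off:

```python
import math

MAX_LOGICAL_VOLUMES = 150_000   # VTS limit noted above
DATA_SETS = 400_000             # hypothetical count of small data sets
AVG_DATA_SET_MB = 50            # hypothetical average data set size

volumes_needed = {}
for per_volume in (1, 2, 4, 8):
    volumes = math.ceil(DATA_SETS / per_volume)
    volumes_needed[per_volume] = volumes
    # The VTS recalls the whole logical volume into the TVC, even if
    # only one of the data sets on it is actually needed.
    recall_mb = per_volume * AVG_DATA_SET_MB
    status = "fits" if volumes <= MAX_LOGICAL_VOLUMES else "exceeds limit"
    print(f"{per_volume} data set(s)/volume: {volumes:>7,} volumes "
          f"({status}), {recall_mb} MB staged per recall")
```

Under these assumptions, one data set per volume blows past the 150,000-volume limit, while four or eight per volume fit but make every recall proportionally larger and fill the TVC faster.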
The following summarizes our recommendations for different types of tape data sets. When the VTS is not an obvious first choice, we indicate an alternative.
Small data sets (1. VTS; 2. TMM):
Prefer VTS for small data sets as long as the number of logical volumes is not a constraint. Each Magstar 3494 Tape Library can have a maximum of 150,000 logical volumes. TMM is also a possibility, especially if you are already using it.

Medium and large data sets (1. VTS):
These go well in the VTS. VTS does not waste cartridge capacity like native 3590 and requires less cartridge shelf space in the library than native 3490.

Very large data sets (1. 3590; 2. VTS):
Native 3590 offers better performance for very large batch data sets. Use 3590 if you can do so without wasting too much cartridge capacity.

Intermediate data sets (1. VTS):
The VTS is well suited for data sets that pass information from one job or job step to the next. With the VTS, you avoid mount delays if the data set is referenced before it is deleted from the VTS TVC.

DFSMSdss and DFDSS

Physical dumps (1. 3590; 2. VTS):
Prefer native Magstar drives for performance reasons. Concurrent full-volume physical dumps with OPT(4) quickly use more bandwidth than a VTS can offer. VTS may be an option for physical dumps without OPT(4). You need to design your jobs so as to stack multiple dumps on one volume to utilize 3590 cartridge capacity.

Logical dumps (1. VTS):
The VTS is good for logical dumps, which require less bandwidth than physical dumps. With VTS it is easy to utilize cartridge capacity. You also benefit from fast virtual scratch mounts.

DFSMShsm and DFHSM

ML2 (1. 3590; 2. VTS):
HSM can use the full capacity of native 3590. Recalls from 3590 are generally faster.

Backup (1. VTS):
Although recovery from native 3590 is generally faster, it is less of a concern because you should only seldom need to recover backups. By using the VTS, you can benefit from fast scratch mounts during backup.

CDS backup (1. VTS; 2. TMM):
With the VTS you can benefit from fast scratch mounts. If you are already utilizing TMM, you may want to keep these backups in TMM.

Volume dumps (1. 3590; 2. VTS):
With the prerequisite DFSMS/MVS maintenance, HSM can stack multiple dumps on one volume; APAR OW27973 adds this functionality. With the VTS you can also stack many dumps on one physical cartridge, and you benefit from fast scratch mounts.

ABARS (1. Offsite):
Typically needs to go offsite.

Tape copy (1. Offsite):
Typically needs to go offsite.

ADSM

Tape storage pools (1. 3590; 2. VTS):
ADSM can use the full capacity of native 3590. Restores from 3590 are generally faster.

Storage pool backups, incremental (1. VTS):
Good for VTS use. You benefit from fast scratch mounts and can fully use cartridge capacity. If the primary storage pool is on native 3590, the backup cannot end up on the same volume as the primary data.

Storage pool backups, full (1. Offsite; 2. VTS):
Often need to go offsite.

Database backups, incremental (1. VTS; 2. TMM):
These are typically small, so they should work fine in the VTS. They could stay in TMM if you are already utilizing it.

Database backups, full (1. Offsite; 2. VTS):
Good for the VTS; if the backup must go offsite, use the Advanced Function feature. The VTS does not waste cartridge capacity as native tapes do and gives you fast scratch mounts.

OAM Objects

OBJECT storage groups (1. 3590; 2. VTS):
OAM can use the full capacity of native 3590. Retrieval from 3590 is generally faster.

OBJECT BACKUP storage groups (1. Offsite; 2. VTS):
Often need to go offsite. When the primary object is on native 3590, there is no danger that it and its backup will end up on the same physical volume.

Database Systems

Archive logs and journals (1. VTS):
These go well in the VTS. You benefit from fast scratch mounts.

Dual archive logs and journals (1. Offsite):
These typically go offsite.

Local image copies, incremental (1. VTS; 2. TMM):
The VTS should be fine for these data sets. TMM should work if you are already utilizing it.

Local image copies, full (1. VTS):
The VTS works well here.

Note: For each data type, the first device listed is the primary recommendation; a second device, where shown, is an alternative.
Another method for optimizing your tape media is the Virtual Tape Server (VTS) tape library hardware. You can use VTS with or without tape mount management; it requires no ACS routine or other software changes.
VTS lets you define up to 32 virtual tape drives to the host. Not visible to the host are up to 6 physical tape
devices. When the host writes to one of the virtual devices, it actually writes to a virtual volume residing on the
VTS DASD buffer. The VTS, transparent to the host, copies the entire virtual volume onto a logical volume that is
then mapped to physical stacked volumes known only to the VTS.
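As a rough illustration of the write path just described, the toy model below is a sketch under stated assumptions, not IBM code; the class, method, and volser names are all invented. It shows a host write landing in the TVC and the VTS later copying cached logical volumes onto one stacked physical cartridge:

```python
class ToyVTS:
    """Toy model of the VTS write path; not a real device interface."""

    def __init__(self, virtual_drives: int = 32):
        self.virtual_drives = virtual_drives  # drives visible to the host
        self.tvc = {}       # volser -> data held in the DASD buffer (TVC)
        self.stacked = {}   # physical cartridge -> list of logical volsers

    def host_write(self, volser: str, data: bytes) -> None:
        # The host believes it wrote to tape; the data lands in the TVC.
        self.tvc[volser] = data

    def copy_to_stacked(self, cartridge: str) -> None:
        # Transparent to the host: cached logical volumes are copied
        # onto one stacked physical volume known only to the VTS.
        self.stacked.setdefault(cartridge, []).extend(self.tvc)
        self.tvc.clear()

vts = ToyVTS()
vts.host_write("L00001", b"backup A")
vts.host_write("L00002", b"backup B")
vts.copy_to_stacked("PHY001")
print(vts.stacked)  # both logical volumes share one physical cartridge
```

The design point the model captures is that the host only ever sees the virtual drives and volumes; which stacked cartridge a logical volume lands on is entirely the VTS's decision.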
These logical and physical volumes cannot be ejected directly. The VTS does, however, offer other advantages. For example, it avoids the extensive analysis required to use tape mount management. You can still use VMA (Volume Mount Analyzer) studies to use VTS more effectively, since these studies identify useful information, such as data sets that need to be stored offsite, or temporary data sets that can be written to DASD and expired.