
A COMPREHENSIVE JUSTIFICATION FOR

MIGRATING FROM ESCON TO FICON

by

Stephen R. Guendert

A dissertation submitted in partial fulfillment of the
requirements for the degree of

Doctor of Philosophy, Management Information Systems

Warren National University

2007

Approved by ___________________________________________________
Chairperson of Supervisory Committee

Program Authorized
to Offer Degree _________________________________________________

Date _________________________________________________________
WARREN NATIONAL UNIVERSITY

ABSTRACT

A COMPREHENSIVE JUSTIFICATION FOR
MIGRATING FROM ESCON TO FICON

by Stephen R. Guendert

Chairperson of the Supervisory Committee: Professor Alan Proffitt


Department of MIS

Numerous articles and white papers have been written on FICON technology
since its late-1990s marketplace debut. These papers discuss basic
technology concepts, the relief of bandwidth constraints between processors
and storage devices, the performance and distance advantages of FICON
over ESCON, and how to measure those advantages. In 2007 and beyond,
enterprises investing in new mainframe, mainframe disk, tape, or virtual tape
storage equipment are most likely evaluating FICON equipment.
Most modern disk, tape, and virtual tape subsystems are fast enough that they
need FICON channels to exploit their performance potential, or, more
importantly, the enterprise needs FICON channels to realize the
performance capabilities it is paying for. Purchasing a new mainframe,
or new mainframe storage, in 2007 and running it on ESCON does not let
the end user fully tap the DASD array's and/or tape library's performance
potential, and hence the enterprise does not realize the full benefit of its investment.
Much like buying an imported sports car, moving to FICON can be an
expensive undertaking, but a proper analysis that covers the
entire mainframe environment, including hosts, disk, tape, virtual tape,
switching, and physical infrastructure (cabling), makes it apparent that
significant total cost of ownership (TCO) savings can be realized by
migrating from ESCON to FICON.
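The TCO argument above can be made concrete with a simple net present value (NPV) comparison of the two cost streams. The sketch below is a hypothetical illustration only: the discount rate and the ESCON and FICON cost figures are invented for demonstration and are not taken from this dissertation's case studies.

```python
# Hypothetical NPV comparison of staying on ESCON vs. migrating to FICON.
# All dollar figures and the discount rate are invented for illustration.

def npv(rate, cash_flows):
    """Discount a list of annual cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

discount_rate = 0.10  # assumed cost of capital

# Negative values are costs: a year-0 outlay followed by annual run costs.
escon_costs = [0, -500_000, -500_000, -500_000]         # keep ESCON as-is
ficon_costs = [-400_000, -250_000, -250_000, -250_000]  # migrate to FICON

savings = npv(discount_rate, ficon_costs) - npv(discount_rate, escon_costs)
print(f"NPV of migrating to FICON: ${savings:,.0f}")
```

With these assumed figures the migration's up-front cost is recovered by lower annual operating costs, yielding a positive NPV; the dissertation's ROI tool (Appendix A) applies the same discounting logic to real assessment data.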
A Comprehensive Justification For Migrating From ESCON to FICON

Table of Contents

CHAPTER 1.................................................................................................................................. 1
Introduction................................................................................................................................... 1
CHAPTER 2.................................................................................................................................. 6
Introduction................................................................................................................................... 6
2.1 Evolution of the channel: 43 years of mainframe I/O history ............................................ 6
2.1.1 Three fundamental concepts ............................................................................................ 7
2.1.2 System/360......................................................................................................................... 8
2.1.3 System/370....................................................................................................................... 11
2.1.4 System 370/XA (MVS/XA)............................................................................................. 15
2.1.5 System/390, zSeries, System z and beyond .................................................................... 19
2.1.7 ESCON directors ............................................................................................................. 22
2.1.8 Fiber or Fibre?.................................................................................................................. 25
2.1.9 The evolved fibre channel protocol and the mainframe .................................................. 25
2.1.10 The FICON bridge card ................................................................................................. 27
2.1.11 Native FICON directors................................................................................................. 30
2.2 ESCON................................................................................................................................... 32
2.2.1 ESCON technology ............................................................................................................ 33
2.2.2 ESCON architecture......................................................................................................... 37
2.2.3 ESCON channels ............................................................................................................. 43
2.2.4 ESCON topology and ESCON directors ......................................................................... 45
2.2.5 ESCON summary............................................................................................................. 55
2.3 FICON.................................................................................................................................... 56
2.3.1 FICON’s introduction and beginnings............................................................................. 56
2.3.2 FICON and fibre channel architecture basics .................................................................. 68
2.3.3.1 Native FICON data pipelining .................................................................................. 83
2.3.3.2 FICON Native connectivity enhancements 2002-2007 ............................................ 85
2.3.4 ESCON vs. FICON and migration planning considerations ........................................... 87
2.3.5 FICON channel to channel (CTC) .................................................................................. 95
2.3.5.1 ESCON CTC technology.......................................................................................... 98
2.3.5.2 FICON CTC technology ......................................................................................... 100
2.3.5.3 System z9 and FICON Express4 specific enhancements ......................... 103
2.3.5.4 Considerations for Migration (ESCON CTC to FICON CTC) .............................. 103
2.3.5.5 FICON CTC operational/functional characteristics................................................ 105
2.3.5.6 Recommendations for a FCTC device numbering scheme..................................... 106
2.3.6 Cascaded FICON basics ................................................................................................ 108
2.3.6.1 Technical basics of FICON cascading.................................................................... 112
2.3.6.2 High integrity enterprise fabrics ............................................................................. 116
2.3.7 Cascaded FICON and HA/DR/BC ................................................................................ 118
2.3.7.1 IT Resilience ........................................................................................................... 119
2.3.7.2 Regional Disasters .................................................................................................. 119
2.3.7.3 Non-data center planning issues ............................................................................. 120
2.3.7.4 The best laid plans…….. ........................................................................................ 121
2.3.7.5 Summary of 9/11 lessons learned ........................................................................... 123
2.3.7.6 Business continuity/disaster recovery and IT resilience......................................... 126

Copyright © Stephen R. Guendert 2007
All Rights Reserved

2.3.7.7 The seven tiers of disaster recovery........................................................................ 129


2.3.8 FICON cascading benefits ............................................................................................. 133
2.3.8.1 Optimizing use of storage resources ....................................................................... 136
2.3.9 Cascaded FICON performance factors .......................................................................... 137
2.3.9.1 Buffer to buffer credit management: an oxymoron?.............................................. 138
2.3.9.2 Packet Flow and Credits ......................................................................................... 139
2.3.9.3 Buffer-to-Buffer Flow Control ............................................................................... 140
2.3.9.4 Implications to Asset Deployment.......................................................................... 142
2.3.9.5 Configuring BB credit allocations on FICON directors ......................................... 143
2.3.9.6 Exhaustion of BB credits and frame pacing delay.................................................. 144
2.3.9.7 What is the difference between frame pacing and frame latency? ......................... 146
2.3.9.8 How to prevent frame pacing delay? ...................................................................... 147
2.3.9.9 How can things be improved?................................................................................. 147
2.3.9.10 Dynamic Allocation of BB_credits....................................................................... 149
2.3.9.11 Closing thoughts on buffer to buffer credits ......................................................... 150
2.3.10 Quality of Service (QoS) and cascaded FICON .......................................................... 150
2.3.10.1 Defining quality and service ................................................................................. 151
2.3.10.2 Storage and QoS ................................................................................................... 153
2.3.10.3 Fibre channel class 4 class of service (CoS)......................................................... 154
2.3.10.4 Infiniband and QoS ............................................................................................... 158
2.3.10.5 Brocade and FICON QoS ..................................................................................... 159
2.3.10.6 QoS and the mainframe: Workload Manager and Intelligent Resource Director 163
2.3.10.7 Dynamic channel path management (DCPM) ...................................................... 164
2.3.10.8 Channel subsystem priority queuing..................................................................... 165
2.3.11 FICON cascading summary......................................................................................... 165
2.4 Latest and greatest mainframe I/O and mainframe storage enhancements ................. 167
2.4.1 Mainframe I/O enhancements...................................................................................... 167
2.4.2 Mainframe storage enhancements.................................................................................. 169
2.4.2.1 FICON/FCP intermix basic concepts...................................................................... 172
2.4.3 System z9 specific enhancements.................................................................................. 175
2.4.3.1 N_Port ID Virtualization (NPIV)............................................................... 176
2.4.3.2 MIDAW facility...................................................................................................... 179
2.4.3.3 System z9 multiple subchannel sets........................................................................ 181
2.4.3.4 System z9 self-timed interconnect (STI) enhancements......................................... 183
2.4.3.5 Open exchanges ...................................................................................................... 187
2.5 Conclusion ........................................................................................................................... 187
CHAPTER 3.............................................................................................................................. 188
Introduction............................................................................................................................... 188
3.1 Systems Management Facilities and Resource Measurement Facility .......................... 188
3.2 Performance analysis basics ............................................................................................. 191
3.3 Response and service time basics ...................................................................................... 192
3.4 Understanding ESCON and FICON channel path metrics ............................................ 195
3.4.1 ESCON .......................................................................................................................... 195
3.4.1.1 Classic ESCON definitions.................................................................................... 198
3.4.2 FICON............................................................................................................................ 198
3.4.2.1 FICON director measurements ............................................................................... 201


3.4.2.2 Graphical analysis of a FICON I/O ........................................................................ 202


3.4.2.3 Formal FICON definitions..................................................................................... 204
3.4.3 Using RMF/CMF and RMF Magic to understand Disk Subsystem Performance........ 205
3.4.3.1 Visibility gap........................................................................................................... 205
3.4.3.2 RMF Data Collection on Disk Subsystems ............................................................ 207
3.4.3.3 Reviewing the Disk response time components ..................................................... 208
3.4.3.4 RAID....................................................................................................................... 208
3.4.3.5 Measuring back-end performance........................................................................... 209
3.4.3.6 Understanding Disconnect time.............................................................................. 212
3.4.3.7 RMF Magic and FICON performance.................................................................... 214
3.5 Making the business case for a FICON migration.......................................................... 217
3.5.1 Background on why this tool was developed ................................................................ 220
3.5.2 Brief Overview of the tool ............................................................................................. 224
3.5.3 Understanding the key financial metrics: ROI methodology overview......................... 227
Financial Metrics .................................................................................................................... 228
Discounting: The Time Value of Money ................................................................................ 228
Cost of Capital ........................................................................................................................ 231
Net Present Value (NPV)........................................................................................................ 233
Return on Investment.............................................................................................................. 235
Internal Rate of Return (IRR) ................................................................................................. 237
Payback Period........................................................................................................................ 238
Chapter 4 ................................................................................................................................... 241
4.1 Introduction......................................................................................................................... 241
4.2 ESCON to FICON assessment at the “ABC” company .................................................. 241
4.2.1 Pre-assessment DASD environment .............................................................................. 242
4.2.2 DASD environment findings .................................................................................. 243
4.2.2.1 DASD capacity under-utilized............................................................................ 243
4.2.2.3 Some subsystems are partitioned ............................................................................ 243
4.2.2.4 Many ESCON channels with low utilization.......................................................... 243
4.2.3 DASD recommendations at “ABC” company............................................................... 243
4.2.3.1 DASD consolidation ............................................................................................... 244
4.2.3.2 Cost estimates ......................................................................................................... 245
4.2.4 “ABC” Company virtual tape environment................................................................... 246
4.2.4.1 Current virtual tape environment ............................................................................ 246
4.2.4.2 Findings................................................................................................................... 246
4.2.4.3 Recommendations................................................................................................... 249
4.2.5 Native tape environment ................................................................................................ 249
4.2.5.1 Findings................................................................................................................... 250
4.2.5.3 Recommendations................................................................................................... 250
4.2.6 Disaster recovery ........................................................................................................... 252
4.2.6.1 Findings................................................................................................................... 252
4.2.6.2 Recommendations................................................................................................... 252
4.2.7 Mainframe processors.................................................................................................... 253
4.2.8 Conclusions.................................................................................................................... 253
4.3 ESCON to FICON migration assessment at the “XYZ” Company ............................... 256
4.3.1 DASD subsystems ......................................................................................................... 256


4.3.1.1 DASD findings........................................................................................................ 258


4.3.1.2 DASD analysis and recommendations.................................................................... 258
4.3.1.3 Workload growth .................................................................................................... 259
4.3.2 Virtual tape environment ............................................................................................... 260
4.3.2.1 Current ESCON virtual tape environment.............................................................. 261
4.3.2.2 Findings................................................................................................................... 261
The primary finding was that the existing VTS subsystems are overloaded with high recall rates ..... 261
4.3.2.3 Analysis and recommendations .............................................................................. 262
4.3.3 Native tape ..................................................................................................................... 263
4.3.3.1 Current environment ............................................................................................... 264
4.3.3.2 Findings................................................................................................................... 264
4.3.3.3 Recommendations................................................................................................... 267
4.3.4 Conclusions.................................................................................................................... 267
4.3.4.1 DASD subsystems .................................................................................................. 267
4.3.4.2 Virtual Tape ............................................................................................................ 268
4.3.4.3 Native Tape............................................................................................................. 269
4.4 FICON DASD assessment for the ACSH Company........................................................ 269
4.4.1 Current ESCON attached DASD environment.............................................................. 270
4.4.2 Findings.......................................................................................................................... 272
4.4.3 Analysis and recommendations ..................................................................................... 272
4.4.4 Switched FICON or direct attached (P2P) FICON? ...................................................... 275
4.4.4.1 Cost considerations ................................................................................................. 276
4.4.4.2 Reliability and Availability Considerations............................................................ 277
4.4.4.3 Performance considerations .................................................................................... 278
4.4.5 Final recommendations and conclusions ....................................................................... 280
4.5 BBB Company FICON financial analysis ........................................................................ 281
4.5.1 ESCON environment ..................................................................................................... 282
4.5.2 FICON at BBB Company .............................................................................................. 288
4.5.3 Cost savings summary ..................................................................................................... 289
Chapter 5 ................................................................................................................................... 294
5.1 Business Benefits and costs of migrating to FICON........................................................ 295
Cost justifying FICON DASD.................................................................................................. 298
Cost justifying FICON for native and virtual tape................................................................ 300
Final thoughts............................................................................................................................ 303
References.................................................................................................................................. 304
Appendix A................................................................................................................................ 312
Step-by-Step Instructions for use of the McDATA-Brocade FICON ROI tool .................. 312
Step 2: Launching the Tool with Macros Enabled.................................................................. 314
Module 1: Home ..................................................................................................................... 314
Module 2: Input....................................................................................................................... 316
Module 3: Investment ............................................................................................................. 318
Module 4: ROI Summary........................................................................................................ 318
Module 5: Benefit Detail and Impact Assumptions................................................................ 319
Module 6: Assumptions .......................................................................................................... 321
Module 7: Printed Reports ...................................................................................................... 322


Saving Your Scenario ............................................................................................................. 324


ROI Methodology: Overview ................................................................................................. 325
Financial Metrics .................................................................................................................... 326
Discounting: The Time Value of Money ................................................................................ 326
Cost of Capital ........................................................................................................................ 329
Net Present Value (NPV)........................................................................................................ 331
Return on Investment.............................................................................................................. 333
Internal Rate of Return (IRR) ................................................................................................. 335
Payback Period........................................................................................................................ 336
Appendix B ................................................................................................................................ 340
ABC Company ESCON VTS activity ..................................................................................... 340
Appendix C................................................................................................................................ 343
ABC Company ESCON/FICON model results...................................................................... 343
Appendix D................................................................................................................................ 347
ABC Company Native Tape Statistics .................................................................................... 347
Appendix E ................................................................................................................................ 351
XYZ Company Storage Subsystem Growth........................................................................... 351

List of Tables

Table 1. FICON vs. ESCON summary........................................................................................ 71


Table 2. FICON channel card evolution ....................................................................................... 87
Table 3. A comparison of channel technologies for CTC ......................................................... 102
Table 4. Sample of well known regional disasters 2001-2006 ................................................. 120
Table 5. Summary of seven DR tiers ......................................................................................... 133
Table 6. Virtual channels allocation table................................................................................... 161
Table 7. Virtual channels assignment for Brocade Condor ASIC based switches/directors ..... 162
Table 8. Example of discounting future cash flows................................................................... 230
Table 9. Calculating net present value....................................................................................... 234
Table 10 ...................................................................................................................................... 236
Table 11 ...................................................................................................................................... 239
Table 12. DASD ESCON environment at "ABC" company ....................................................... 242
Table 13. FICON environment model ....................................................................................... 244
Table 14. Consolidated FICON environment model ................................................................. 245
Table 15. DASD list prices for "ABC" company ...................................................................... 245
Table 16. Distribution of Required Subsystems at "ABC" company ..................................... 247
Table 17. VTS activity comparison ........................................................................................... 248
Table 18. ABC Company native tape statistics ......................................................................... 250
Table 19. XYZ Company ESCON DASD performance ........................................................... 258


Table 20. FICON option 1 for XYZ Company.......................................................................... 258


Table 21. FICON option 2 for XYZ Company.......................................................................... 259
Table 22. FICON projected growth at XYZ Company.............................................................. 260
Table 23. CopyCross sizing metrics .......................................................................................... 263
Table 24. CopyCross recommended configuration ................................................................... 263
Table 25. 3590B tape drive statistics ......................................................................................... 264
Table 26. 3592 tape drive model comparison............................................................................ 265
Table 27. ESCON attached DASD performance at ASCH Company....................................... 271
Table 28. Existing ESS 800 configuration................................................................................. 273
Table 29. ESS 800 modeled with four FICON channels ........................................................... 273
Table 30. ESS 800 FICON, additional 16GB cache.................................................................. 273
Table 31. ESS 800 FICON, 32 GB cache, 15K RPM drives.................................................... 274
Table 32. Current ESS F20, ESCON ........................................................................................ 274
Table 33. Replace ESS F20 with 2nd FICON ESS 800 ............................................................ 274
Table 34. DS8300 performance .................................................................................................. 275
Table 35. Teaneck data center ESCON environment ................................................................. 282
Table 36. Sterling Forest ESCON port count ............................................................................. 287
Table 37. FICON consolidation savings Teaneck ....................................................................... 289
Table 38. FICON consolidation savings Sterling Forest ........................................................... 290
Table 39. Total FICON consolidation cost savings ................................................................... 291

List of Figures

Figure 1. ESCON frame structure................................................................................................. 39


Figure 2. Dynamic and static connections ................................................................................... 51
Figure 3. Multiple hosts connecting to a control unit .................................................................. 52
Figure 4. Multiple hosts connecting to multiple control units .................................................... 54
Figure 5. S/390 Evolution ............................................................................................................ 58
Figure 6. IBM options for replacing ESCON .............................................................................. 60
Figure 7. ESCON to FICON migration channel equivalence...................................................... 62
Figure 8. ESCON to FICON bridge aggregation......................................................................... 63
Figure 9. FICON bridge sample topology ................................................................................... 63
Figure 10. FICON bridge card frame process.............................................................................. 64
Figure 11. ESCON to FICON native configuration example ...................................................... 65
Figure 12. Three FICON connectivity examples......................................................................... 66
Figure 13. ESCON channel command and data transfer ............................................................. 67
Figure 14. Fibre channel architecture ........................................................................................... 72
Figure 15. FICON operating modes............................................................................................. 77
Figure 16. Cascaded FICON and terminology ............................................................................ 79
Figure 17. ESCON command and data transfer “Mother-May-I?” ............................................. 83
Figure 18. FICON command and data transfer-“Assumed Completion” ..................................... 84


Figure 19. ESCON CTC connections .......................................................................................... 98


Figure 20. Fabric addressing support (a) ................................................................................... 114
Figure 21. Fabric addressing support (b) ................................................................................... 115
Figure 22. Sample IOCP coding for FICON cascaded director configuration .......................... 116
Figure 23. Cost of business continuity solution versus cost of outage ...................................... 129
Figure 24. Two site non cascaded FICON environment............................................................ 134
Figure 25. Two site cascaded FICON environment................................................................... 135
Figure 26. Sample FICON director activity report (RMF 74-7)................................................ 145
Figure 27. Frame pacing delay indications in RMF 74-7 record............................................... 146
Figure 28. Frame size, link speed and distance determine buffer credit requirements.............. 147
Figure 29. Virtual channels in Brocade 2 Gb/Sec switches....................................................... 161
Figure 30. Virtual channels in Brocade 4 Gb/sec switches/directors ........................................ 163
Figure 31. Fibre channel standard.............................................................................................. 172
Figure 32. Fibre channel addressing .......................................................................................... 178
Figure 33. Multiple subchannel sets .......................................................................................... 183
Figure 34. System z9 logical channel configuration.................................................................. 184
Figure 35. z990 vs. z9-109 fanout arrangement ........................................................................ 185
Figure 36. Redundant I/O interconnect..................................................................................... 186
Figure 37. Redundant I/O interconnect (2) ................................................................................ 186
Figure 38. ESCON’s circuit switched data transfer mechanism................................................ 195
Figure 39. ESCON’s “Mother-May-I?” protocol ...................................................................... 197
Figure 40. FICON packet switched data transmission............................................................... 199
Figure 41. FICON “Assumed Completion”............................................................................... 200
Figure 42. Response time as a function of utilization for old and new subsystems .................. 206
Figure 43. RMF record types for various system components .................................................. 207
Figure 44. Sample RAID rank (array group) read response time reporting for one subsystem.
Each line represents one RANK or array group ......................................................................... 211
Figure 45. Sample RAID rank (array group) HDD utilization reporting for one subsystem. Each
line represents one RANK or array group. ................................................................................. 212
Figure 46. Breakdown of disconnect time. Note that during the night batch window, the ‘other’
category is larger, as some internal subsystem components operate near saturation. ................ 213
Figure 47. Estimated average back-end service time for EMC subsystem. Note that this is the
average value, individual disks may have much higher response time values ........................... 214
Figure 48. Scatter chart showing throughput and service time: as the throughput for the whole
subsystem increases, so does the service time for individual I/O operations ............................. 215
Figure 49. Effective data rate for three disk subsystems. Note that during periods of very high
activity, individual I/O operations are running at very low data rates........................................ 216
Figure 50. XYZ Company ESCON DASD environment .......................................................... 257
Figure 51. Current Storage Environment................................................................................... 271
Figure 52. Switched FICON Fan-in/Fan-out ............................................................................. 277
Figure 53. Point to Point (P2P) FICON ...................................................................................... 277
Figure 54. Switched FICON Availability .................................................................................... 278
Figure 55. P2P FICON Balanced Workloads ............................................................................. 279
Figure 56. Switched FICON balanced workloads ...................................................................... 279
Figure 57. Recommended FICON DASD Infrastructure ............................................................ 281
Figure 58. Teaneck ESCON directors 1-4 .................................................................................. 283


Figure 59. Teaneck ESCON directors 5-8 ................................................................................. 284


Figure 60. Teaneck ESCON directors 5-10 ............................................................................... 285
Figure 61. Sterling Forest ESCON directors (1).......................................................................... 286
Figure 62. Sterling Forest ESCON directors (2)........................................................................ 286
Figure 63. Teaneck FICON.......................................................................................................... 288
Figure 64. Sterling Forest FICON .............................................................................................. 288


CHAPTER 1

Introduction

In 1990, IBM introduced Enterprise Systems Connection (ESCON) to the world with its ES/9000

mainframe processors as the way to address the limitations of the parallel (bus and tag) channel

architecture. As such, ESCON provided noticeable, measurable improvements in distance

capabilities, switching topologies, and most importantly, response time and service time

performance. ESCON was, and still is a very successful storage network protocol for mainframe

systems and is the father of the modern storage area network (SAN). ESCON supported

significant expansion in mainframe computing throughout the 1990s, and is still in widespread

use today. By the early 2000s, ESCON’s advantages over parallel channels had become its own

weakness. IBM initially developed Fibre Connection (FICON) technology to address the

limitations of the ESCON architecture. FICON evolved in the late 1990s to address the technical

limitations of ESCON in bandwidth, supported distances, and channel/device addressing. The

greater bandwidth of FICON and its enhanced distance capabilities compared with ESCON are

starting to make FICON an essential component in high availability/disaster recovery/business

continuity (HA/DR/BC) solutions (Artis, Guendert, 2006). HA/DR/BC implementations include

IBM’s Geographically Dispersed Parallel Sysplex (GDPS), remote direct access storage device

(DASD) mirroring, electronic tape/virtual tape vaulting and remote DR sites (hot sites).

Compared with ESCON, FICON offers:

• Increased number of concurrent connections.

• Increased distance support.

• Increased link bandwidth.

• Increased channel device addressing support.


• Greater exploitation of priority I/O queuing (quality of service).

• Increased distance before the onset of the data-rate droop effect (better performance over long distances).
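The distance-related advantages above come down to buffer-to-buffer credits: a sender may transmit a frame only while it holds a credit, so a long link needs enough credits to cover its round-trip delay or throughput droops. The back-of-the-envelope sketch below is illustrative only; the 2 Gb/s link speed, full 2112-byte Fibre Channel frames, and ~5 µs/km fiber propagation delay are assumed round numbers, not figures from this study:

```python
import math

# Illustrative estimate of the buffer-to-buffer credits needed to keep a
# FICON link fully utilized over distance. Link speed, frame size, and
# propagation delay are assumed values, not measurements from this study.

def buffer_credits_needed(distance_km, link_gbps=2.0, frame_bytes=2112,
                          fiber_us_per_km=5.0):
    serialization_s = frame_bytes * 8 / (link_gbps * 1e9)    # time to emit one frame
    round_trip_s = 2 * distance_km * fiber_us_per_km * 1e-6  # frame out, R_RDY back
    # Each credit allows one unacknowledged frame in flight; round up.
    return max(1, math.ceil(round_trip_s / serialization_s))

for km in (10, 50, 100):
    print(f"{km:>3} km -> {buffer_credits_needed(km)} credits")
```

Under these assumptions the credit requirement grows roughly linearly with distance, which is why switched FICON with deep credit pools sustains throughput at distances where ESCON's fixed per-transfer handshakes have long since caused droop.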

Numerous articles and papers exist on FICON technology dating back to its late 1990s debut

in the marketplace. The performance and other technical advantages FICON has over ESCON

are well documented and demonstrated. This study will further document several cases. In

general, data centers that have migrated from ESCON infrastructure connectivity to FICON have

seen overall improved response times and overall performance, which has led to an increased

ability to meet service level agreements (SLAs) (Guendert, Seitz, 2004). Even so, only an

estimated one third of existing ESCON customers worldwide have migrated from ESCON to

FICON (Guendert, Houtekamer, 2005). As of June 2006, according to the IBM Corporation,

1383 IBM mainframe customers are still using ESCON directors in their mainframe storage

infrastructure. As of April 2007, according to Bill Rooney and John Turner, IBM Mainframe I/O

Program managers at IBM’s Poughkeepsie facility, new IBM System z9 mainframes are still

shipping to customers with more ESCON channels installed than FICON channels. IBM’s stated

direction is to attempt to move the vast majority of mainframe customers to FICON prior to the

next generation mainframe being released. When the technology advantages are so readily

apparent, one must ask why so few mainframe end users have migrated to FICON.

There are several factors that explain why. First, most of the concepts associated with FICON,

particularly those associated with the actual planning of the storage architecture, are

diametrically opposed to what the mainframers have learned in the preceding forty years of

parallel and ESCON channel connectivity. Second, the mainframe user base is traditionally very

risk-conscious and conservative. They like to plan everything and have complete

control over the process. Couple that with the lack of a clear, concise, and statistically scientific


planning methodology emphasizing storage performance and not simple channel utilization, and

the hesitancy becomes more understandable (Guendert, Houtekamer, 2005). A third major factor

delaying migrations to FICON is the initial cost of entry, including the purchase of new hardware

and infrastructure (Guendert, Seitz, 2004). In 2007 and beyond, when investing in new

mainframes, mainframe disk, tape, or virtual tape storage equipment, companies will most likely

be evaluating FICON equipment. Most of these modern storage subsystems, along with the

latest IBM mainframes, are so fast in terms of I/O performance that FICON channels are

necessary to exploit the performance potential they offer. More importantly to the end user, they

need FICON channels to realize the performance capabilities they are paying for. Purchasing a

new System z9 mainframe or new mainframe storage in 2007 and running it on ESCON

connectivity does not fully tap that DASD array’s and/or tape library’s performance potential,

and hence, the full benefits of the hardware investment cannot be realized (Guendert,

Houtekamer, 2005). Yet it has been difficult for mainframe end users to recognize and quantify

the cost savings associated with the elimination of some of their existing ESCON infrastructure,

as well as their ability to leverage other existing ESCON hardware by attaching it to the FICON

storage network. The fourth major factor in the slow rate of FICON adoption was the lack

of FICON-attachable storage devices at the time of IBM’s initial FICON announcements. IBM originally

announced FICON as generally available on the IBM 9672 S/390 G5 and G6 mainframe models.

As a result, many prospective FICON customers had to do “rolling upgrades” of storage devices,

which further increased the initial costs (Guendert, Seitz, 2004). Finally, until this document,

nobody in the mainframe community has written a detailed, comprehensive technical and

financial case for moving to FICON.


The purpose of this research study is to explore the potential performance gains, enhanced IT

resilience, and long term total cost of ownership (TCO) benefits realized by an IBM mainframe

installation migrating from an ESCON to a FICON channel infrastructure. This study will utilize

statistical performance data collected from several mainframe installations between March 2004

and October 2007, as well as structured interviews from consulting engagements at these

installations during the same time period. Collection of statistical performance data will be via

IBM’s System Management Facility (SMF) and Resource Measurement Facility (RMF), and the

data analyzed using Intellimagic’s suite of storage performance analysis/capacity planning

software. Some of these clients were looking to understand the long-term TCO benefits inherent

in migrating from ESCON to FICON. When necessary, the author calculated these benefits

using a Microsoft Excel based tool developed by the author for Brocade Communications

Systems and its predecessor McDATA Corporation. The results of this study will assist

mainframe end users, IBM, and other storage vendors to better understand how to justify

migrating from the older ESCON technology to the newer FICON technology. This justification

will not only be the technical justification, but also the business justification that today’s CIO

requires.
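Since the business justification ultimately rests on discounted cash flows, a minimal sketch of the arithmetic behind such a TCO comparison may help. The discount rate and cash-flow figures below are purely hypothetical and are not taken from the Excel tool or from any case in this study:

```python
# Minimal sketch of the discounted-cash-flow arithmetic behind a TCO
# comparison. All rates and cash flows here are hypothetical examples.

def npv(rate, cash_flows):
    """Net present value: cash_flows[0] occurs now (year 0), and each
    later entry occurs one year further in the future."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: hypothetical FICON hardware outlay; years 1-3: annual savings
# versus retaining the ESCON infrastructure.
flows = [-500_000, 220_000, 220_000, 220_000]
print(round(npv(0.08, flows), 2))
```

A positive NPV at the organization's discount rate indicates the migration pays for itself in present-value terms, which is the form of argument today's procurement processes expect.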

The IBM Corporation has been perplexed by the slow adoption rate of FICON technology.

This slow adoption rate has contributed to the perceived “death of the mainframe” that

has been rumored for the past ten years. The purpose of this study is to help prove FICON’s

technical advantages over ESCON, and translate those into real TCO savings required by today’s

procurement processes.

It is useful to define some basic terms and concepts associated with this study.

The best way to do this is with the brief historical discussion of mainframe I/O and channel


architecture which follows in Chapter two. Chapter two will also review the current literature

available on ESCON and FICON by looking in-depth at the details of the ESCON and FICON

channel technologies, and making comparisons of the two. It will go into much more depth on

FICON in an effort to completely explain the uses and technical value of FICON technology.

Chapter three will describe the techniques and systems used to perform the analysis

at several customer and client sites. This will include a discussion of mainframe I/O performance

metrics, and a description of how to use the Excel based financial justification tool the author

developed for making the financial justification for customers. Chapter four is the detailed

analysis of the data collected, and chapter five is the conclusion to this study.


CHAPTER 2

Introduction

Chapter two will review literature written over the previous twenty years on the subject of

mainframe I/O. Particular focus will be on ESCON and FICON; although the chapter starts with

a brief historical perspective on mainframe I/O dating back to the IBM S/360. This historical

perspective is important, as IBM has continued to build and enhance the older mainframe

architectures over the past forty-three years. An understanding of this history is also necessary in

order to appreciate the developments made in the technology during that time.

The chapter starts with the historical perspective dating back to the first IBM S/360 in 1964,

and continues to the modern System z9 of 2007. This historical perspective includes a more

detailed discussion on the development of ESCON and FICON. Next is a detailed section on the

ESCON technology, followed by a similarly detailed section on FICON technology, including

FICON channel to channel (CTC) technology. Throughout the FICON technology section,

comparisons are made with ESCON to highlight the technical advantages FICON has over

ESCON. The chapter concludes with a discussion of new technologies from the past four years

that are unique to FICON, including a detailed discussion on FICON cascading and its impact on

disaster recovery.

2.1 Evolution of the channel: 43 years of mainframe I/O history

Contemporary mainframe z/OS computer systems are the product of the evolution of the

hardware and software of the IBM System/360 architecture originally introduced in 1964. The

System/360 introduced three fundamental concepts to computing that have remained to this day.


2.1.1 Three fundamental concepts

The most fundamental concept that the System/360 introduced was the concept of system

architecture. Prior to the IBM System/360, each new computer introduced to the market

presented a different instruction set and programming interface. These changes often required an

end user to extensively redesign or rewrite programs with each change in computer. The

System/360 architecture defined a common set of programming interfaces for a broad set of

machines. This allowed software (SW) to be completely independent of a specific computer

system (Artis, Houtekamer, 1993). This holds true 43 years after the System/360 introduction:

Mainframe installations in 2007 are often running programs initially coded back in the late

1960’s.

The second fundamental concept the IBM System/360 introduced was device independence.

Prior to the System/360, each application program was required to directly support each device it

managed. The System/360 specified a logical boundary between the end user program and the

device support facilities provided by the computer’s operating system. This innovation allowed

for easy introduction of new device technologies into the architecture.

The third and possibly most important was the concept of a channel. Prior computing

architectures had the central processor itself manage the actual interfaces with attached devices.

The channel concept changed this. A channel is a specialized microprocessor that manages the

interface between the processor and devices. The channel provides a boundary between the

central processor’s request for an I/O, and the physical implementation of the process that

actually performs the I/O operation. The channel executes its functions under the control of a

channel program, and connects the processor complex with the input and output control units.

Over the past 43 years, the fundamental philosophy of I/O control has transitioned from a


strategy of centralized control to one of distributed operation that incorporates more intelligent

devices. Simultaneously, the evolution of the System/360 architecture into the modern System Z

has significantly increased the power of the channel subsystem and has transferred more of the

responsibility for managing the I/O operation to the channel’s side of the I/O boundary. It is

important to understand this evolution.

2.1.2 System/360

IBM announced the System/360 architecture in the summer of 1964. The System/360 unified

machine architectures and united the IBM computing industry to a single set of instructions.

Converting from the IBM 7090 to the System/360 would be just as difficult as it was converting

from to the 7090 from its predecessor, but IBM promised the industry that this was the last time

for this difficult of a conversion. IBM went on to promise its customers that larger and faster

iterations of the System/360 would allow them to grow their computers without rewriting their

code: IBM accomplished speed improvements in the System/360 by changing the hardware and

not the end user’s programs. A program written for one model of the System/360 would run on

the next version of the hardware, or on future machines. For example, a program designed to run

on the System/360 model 50 would run on the System/370 model 3090 (Johnson, 1989).

IBM’s goal with the System/360 was to allow its sales and marketing divisions to sell the

customer the latest and greatest System/360 machine, and be able to guarantee to the customer

that no software changes would be required to use the new hardware. If the customer’s

application ran on one System/360 processor, it would run on the next higher processor offering.

IBM thus introduced the concept of upward compatibility to the computing industry, and the

concept of investment protection (Johnson, 1989).


There were many factors that influenced the original System/360 design. However, IBM’s

experience with the IBM 709 and IBM 7090 processors in the early 1960’s was the critical factor

in the design of the IBM System/360 I/O subsystem (Artis, Houtekamer, 1993). While the IBM

704 and other contemporary processors of the early 1960’s had specific processor operation

codes for reading, writing, backspacing, or rewinding a tape, the IBM 7090 relied on a channel to

provide these instructions for executing the processors’ I/O requests.

The function of a channel is to perform all control and data transfer operations that arise

during an I/O operation. The channel interacts with the CPU, main storage, and central units for

devices. The CPU initiates an I/O operation via an I/O instruction, and the channel functions

independently from that point onward. An I/O operation involves control operations such as

positioning the read/write head of a disk unit on a specific track, as well as data transfer

operations such as reading a record from a disk (IBM, 1966).
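The channel program mentioned above is a chain of Channel Command Words (CCWs) that the channel fetches and executes on the CPU's behalf. As a rough illustration (the field layout below is a simplified rendering of the classic format-0 CCW, and the sample storage address and flag value are invented for the example), a single CCW can be packed like this:

```python
import struct

# Simplified format-0 CCW layout assumed here: 1-byte command code,
# 3-byte data address, 1-byte flags, 1 zero byte, 2-byte byte count.

CMD_READ = 0x02   # classic Read command code
FLAG_SLI = 0x20   # suppress-incorrect-length-indication flag (assumed value)

def pack_ccw(command, data_address, flags, count):
    if data_address >= 1 << 24:
        raise ValueError("a format-0 CCW data address is only 24 bits")
    word = (command << 56) | (data_address << 32) | (flags << 24) | count
    return struct.pack(">Q", word)  # big-endian doubleword, as on the mainframe

# Hypothetical CCW: read an 80-byte card image into storage at 0x001000.
print(pack_ccw(CMD_READ, 0x001000, FLAG_SLI, 80).hex())  # 0200100020000050
```

The point of the sketch is the division of labor: the CPU issues one I/O instruction pointing at such a chain, and the channel then drives the control unit and device independently.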

A device is a peripheral unit that transmits, stores, or receives data and connects to the

channel via a control unit. Some examples of devices are disks, terminals, tape drives, printers,

and similar peripheral equipment used in data storage and transfer. Typically, several devices

connect to a control unit that in turn connects to a channel. A control unit functions as an

intermediary between device and channel. Usually, several devices connect to a control unit and

several control units connect to a channel. The interaction between channel and control unit and

between control unit and device varies greatly according to the type of device (Prasad, 1989).

The System/360 architecture defined two types of channels: the selector and byte multiplexer

channels. While contemporary System Z systems still include byte multiplexer channels, selector

channels had fundamental performance limitations. A selector channel performed only one I/O

operation at a time and was dedicated to the device for the duration of the I/O (Prasad, 1989).


The System/360 architecture used selector channels for high-speed data transfer from devices

such as tapes and disk drives. Several devices can connect to a selector channel. However, the

selector channel dedicates itself to a device from start to end of the channel program. This is true

even during the execution of channel commands that do not result in data transfer. While this

design was adequate for the relatively small processors comprising the System/360 series, it

imposed a fundamental limit on achievable I/O throughput. The block multiplexer channel

introduced in the System/370 architecture replaced the selector channel (Johnson, 1989).

Byte multiplexer channels connect low-speed devices such as printers, card punches, and card

readers. Unlike the selector channel, byte multiplexer channels do not dedicate themselves to a single device

while performing an I/O operation (Artis, Houtekamer, 1993). The byte multiplexer channel

adequately supported multiple low speed devices at one time by interleaving bytes from different

devices during data transfer.
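The contrast between the two channel types can be shown with a toy simulation (illustrative only; the device names and data are invented). A selector channel dedicates itself to one device per operation, while a byte multiplexer channel interleaves bytes from several still-active devices:

```python
# Toy contrast between selector and byte multiplexer channel behavior.

def selector_channel(streams):
    # One complete transfer at a time, in device order.
    out = []
    for stream in streams:
        out.extend(stream)
    return out

def byte_multiplexer_channel(streams):
    # Round-robin: one byte from each still-active device per pass.
    out, cursors = [], [0] * len(streams)
    while any(c < len(s) for c, s in zip(cursors, streams)):
        for i, stream in enumerate(streams):
            if cursors[i] < len(stream):
                out.append(stream[cursors[i]])
                cursors[i] += 1
    return out

printer, punch = list("PPPP"), list("cc")
print("".join(selector_channel([printer, punch])))          # PPPPcc
print("".join(byte_multiplexer_channel([printer, punch])))  # PcPcPP
```

The interleaved output shows why a byte multiplexer could adequately serve many slow devices at once: no single slow device monopolizes the channel for the duration of its transfer.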

The I/O generation process describes the topology of the System/360 I/O configuration to the

System/360 operating system. This system generation process was commonly referred to (and

still is to this day) as an I/O Gen by the systems programmers responsible for it (Artis,

Houtekamer, 1993). Utilizing the information provided in the I/O Gen, the System/360 operating

system selected the path for each I/O and also selected the alternate path in the event the primary

path was busy servicing another device or control unit.
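The path-selection behavior just described reduces to trying the I/O Gen's defined paths in order and falling back when the primary is busy. The sketch below is a deliberately minimal illustration; the channel-path names are invented:

```python
# Minimal sketch of primary/alternate path selection as described above.

def select_path(defined_paths, busy):
    """Return the first non-busy path from the I/O Gen's ordered list,
    or None if every defined path is currently busy."""
    for path in defined_paths:
        if path not in busy:
            return path
    return None

defined_paths = ["CH1", "CH2"]  # primary first, then alternate
print(select_path(defined_paths, busy=set()))           # CH1
print(select_path(defined_paths, busy={"CH1"}))         # CH2
print(select_path(defined_paths, busy={"CH1", "CH2"}))  # None
```

The "every path busy" case is exactly the contention that grew worse as System/360 configurations added shared components, motivating the System/370 changes described next.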

In addition to path selection, the System/360 operating system was responsible for some

additional tasks in the I/O process. One of these tasks was supervising I/O operation during its

execution and dealing with any exceptional conditions through I/O interrupt handling. A

restriction of the System/360 was that the I/O interrupt that notified the processor of the

completion of an I/O had to return over the same channel path used by the initial start I/O to


reach the device (Artis, Houtekamer, 1993). With the increasing complexity of I/O

configurations and more shared components, the probability of finding some part of the

originating path busy when the I/O attempted to complete substantially increased. This was one

of several things that led to the System/370 architecture.

2.1.3 System/370

The System/370 architecture was a natural extension of the System/360 architecture. The

System/370 design addressed many of the architectural limitations encountered with end user

implementations of larger System/360 class systems. The architecture of the System/370 is

evolutionary in the sense that it was based on IBM’s experience of a decade with the System/360

architecture. IBM’s experience demonstrated that the storage architecture of the System/360

needed significant changes in order to function in an on-line multiprogramming environment. Its

architects viewed the System/370 architecture as an extension of the System/360 architecture

necessitated by technological growth and operating system demands (Case, Padegs, 1978). The

System/370 architecture introduced the concepts of virtual storage and multiprocessing. The

redesigned I/O subsystem enhanced parallelism and throughput, and IBM further enhanced the

I/O addressing scheme to support virtual storage, while the architecture was extended to

encompass the multiprocessing capability (Lorin, 1971).

The System/370 included the IBM 370/xxx, 303x and the initial 308x series of processors and

supported several major operating systems including the first versions of IBM’s multiple virtual

storage (MVS) operating system. MVS, in the form of z/OS, is the primary IBM mainframe

operating system in use around the world today (Artis, Houtekamer, 1993).

The System/370 architecture did not change the System/360’s 24-bit addressing scheme. This was because the operating system and compiler-produced application programs used the extra bits in address words for control purposes, and changing this would have required extensive modifications (Case, Padegs, 1978). It was not until the advent of the System/370XA architecture that IBM changed 24-bit addressing to 31-bit addressing. The late-1960’s evolution in multiprogramming operating systems that had occurred since the System/360’s 1964 debut also required virtual storage for its implementation. The System/370 provided this virtual storage, and “it probably is the most significant distinction between the System/360 and System/370 architectures” (Prasad, 1989, p. 112).

The System/370 was different from the System/360 in five primary areas (Johnson, 1989):

1. Multiprocessing – IBM designed the MVS operating system of the System/370 for

multiprocessing systems. In multiprocessing, the conceptual machine is composed of one

or more central processing units (CPUs) running under a single operating system and

sharing a common memory (Prasad, 1989).

2. Dynamic Address Translation (DAT) – freed the application programmer from concerns about where in real storage the application is running (virtual storage).

3. Extended Real Addressing – IBM designed System/370 mainframes to remove central

storage considerations from the applications.

4. Protection facilities – the System/370 extensions to the System/360 protection facilities

benefited applications.

5. Channel Indirect Addressing. The System/370 has vastly different channel capabilities

necessitated by the concept of virtual storage being applicable to I/O operations. IBM

introduced a Channel Indirect Data Addressing (CIDA) facility to translate the virtual

addresses in channel programs to absolute addresses (Prasad, 1989). Channel Indirect


Addressing was the first in a series of hardware architecture changes that enabled the

System/370 to grow into the 1990’s and 21st century (Johnson, 1989).

The System/370 architecture defined a new type of channel, the block multiplexer channel.

Block multiplexer channels connect high-speed devices. The block multiplexer channel is “part

of a processor complex channel subsystem that can keep track of more than one device

transferring blocks, and interleave blocks on the channel to different devices” (Johnson, 1989 p

548). Similar to the System/360 selector channels, block multiplexer channels have a high data

transfer rate. Unlike selector channels, block multiplexer channels are capable of disconnecting

from a device during portions of an operation when no data transfer is occurring (for example,

when the device is performing a mechanical operation to position the read/write head). During such

periods a block multiplexer channel is free to execute another channel program pertaining to a

different device connected to it. During actual data transfer, the device stays connected to the

channel until data transfer is complete (Prasad, 1989). Block multiplexer channels therefore

could perform overlapping I/O operations to multiple high-speed devices. The block multiplexer

channel is dedicated to the device for the duration of data transfer, and does not interleave bytes

from several devices like a byte multiplexer channel (Artis, Houtekamer, 1993).

Block multiplexer channels are ideally suited for use with Direct Access Storage Devices

(DASD) with rotational position sensing (RPS). RPS is the ability of a DASD device to

disconnect from the channel while certain channel command words (CCWs) in a channel

program execute. IBM originally introduced RPS for the 3330 set sector command. The 3330

released the channel that started the I/O request until the disk reached the sector/angular position

specified in the CCW. The device then attempted to reconnect to the channel so it could execute

the next CCW (which was usually a search) in the channel program under execution. “All


DASD architectures subsequent to the 3330 support RPS (Johnson, 1989 p 574).” In simpler

terms, the channel disconnects from the device during the rotational delay necessary before an

addressed sector appears under the read/write head. During this time it can service another

device. When the addressed sector on the first device is approaching, it reconnects with the

channel and a data transfer takes place. If the channel is busy with the second device, the first

device will try to reconnect after another rotation of the platter (disk) (Prasad, 1989).
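The disconnect/reconnect timing described above can be made concrete with a short illustrative sketch. This is not IBM code; the function name and the 16.7 ms rotation time (a nominal 3,600 RPM drive) are assumptions chosen only to show the cost of a missed reconnect.

```python
# Illustrative sketch of RPS reconnect timing (hypothetical names/numbers).
FULL_ROTATION_MS = 16.7  # one revolution of a nominal 3,600 RPM drive

def reconnect_delay(latency_ms, channel_free_at):
    """Time from SET SECTOR until data transfer can begin.

    latency_ms      -- rotational delay until the addressed sector arrives
    channel_free_at -- earliest time (ms) at which the channel is free again
    The device tries to reconnect when the sector arrives under the head;
    if the channel is still busy, it must wait one full revolution and
    try again at the next pass of the sector.
    """
    t = latency_ms
    while t < channel_free_at:      # channel busy at reconnect: RPS miss
        t += FULL_ROTATION_MS       # retry after one more rotation
    return t

# Channel free immediately: transfer starts after the 5 ms latency.
print(reconnect_delay(5.0, 0.0))    # → 5.0
# Channel busy for 8 ms: one missed revolution, 5 + 16.7 ms.
print(reconnect_delay(5.0, 8.0))    # → 21.7
```

The sketch shows why a busy channel at reconnect time costs a whole extra revolution rather than a small incremental delay.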

To facilitate the parallelism brought about by the block multiplexer channel, IBM logically

divided the channel itself into a number of subchannels. A subchannel is the memory within a

channel for recording byte count, addresses, and status and control information for each unique

I/O operation that the channel is executing at any given time (Artis, Houtekamer, 1993). A single

device could exclusively use each subchannel, or multiple devices could share the subchannel.

However, only one of the devices on a shared subchannel could transfer data at any one time. A

selector channel has only one subchannel since it could only store data elements and execute

operations for one I/O at a time, and had to remain busy for the entire duration of the I/O

operation. Both byte multiplexer and block multiplexer channels can have several subchannels

because they can execute several I/O operations concurrently. While selector channels were still

present in the System/370 architecture, they were included simply for the purpose of providing

transitional support for existing devices incapable of attachment to the newer block multiplexer

channels. The System/370 architecture completely retained the byte-multiplexer channels for

low speed devices (Prasad, 1989). The block multiplexer channels on the System/370 resulted in

the realization of a substantial increase in I/O throughput for a given level of channel utilization.

The utilization of a block multiplexer channel was less than one-sixth that of a System/360

selector channel for the same level of I/O traffic (Artis, Houtekamer, 1993).
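The subchannel concept above can be illustrated with a rough model of the per-operation state a subchannel records (byte count, addresses, status) and the resulting difference in concurrency between a selector channel and a multiplexer channel. The class and field names are hypothetical, not IBM’s actual control block formats.

```python
# Hypothetical model of subchannel state; illustrative only.
from dataclasses import dataclass, field

@dataclass
class Subchannel:
    device: str
    data_address: int = 0      # next storage address for the transfer
    byte_count: int = 0        # bytes remaining in the current operation
    status: str = "idle"       # e.g. idle / active

@dataclass
class Channel:
    kind: str                  # "selector" or "block multiplexer"
    subchannels: list = field(default_factory=list)

    def concurrent_ops(self):
        # A selector channel has a single subchannel, so one I/O at a
        # time; a multiplexer channel has a subchannel per operation.
        return len([s for s in self.subchannels if s.status == "active"])

selector = Channel("selector", [Subchannel("tape0", status="active")])
blockmux = Channel("block multiplexer",
                   [Subchannel("dasd%d" % i, status="active") for i in range(4)])
print(selector.concurrent_ops(), blockmux.concurrent_ops())  # → 1 4
```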


The System/370 architecture was not flawless, and it too eventually ran into problems and

limitations just as had occurred with the System/360. Chief amongst these were device-

addressing problems in multiprocessor (MP) environments. The other processor could not easily

access channels owned by one processor in a MP configuration. This resulted in having to

mirror the I/O configurations of both sides of a multiprocessor so that each processor could issue

I/O requests using its own channels. This resulted in complex and redundant cabling, and also

consumed extra channel attach points to the configurations shared storage control units. This in

turn resulted in the limitation of the overall size of large multi-system configurations due to the

cabling requirements of multiprocessor systems. This subsequently led to the System/370

extended architecture (System/370XA).

2.1.4 System 370/XA (MVS/XA)

The System 370 extended architecture (XA) and the MVS/XA operating system introduced the

ability to address two gigabytes (via 31-bit addressing) of virtual and central storage, improved

I/O processing and improved the reliability, availability, and serviceability (RAS) of the

operating system (Johnson, 1989). These changes evolved from IBM’s experience with the

System/370 architecture. These changes addressed several major end-user issues with the System/370 architecture, including the extension of real and virtual storage and the expansion of the system address space from 16 megabytes to 2 gigabytes. They also included the relocation

of the device control blocks from low storage, the addition of substantial new hardware elements

to facilitate the management of multi-processes, and streamlining multiprocessing operations.

Finally, I/O performance was improved and better operating efficiency realized on I/O

operations with the introduction of the concept of the channel subsystem (Padegs, 1983),

(Cormier, 1983).


IBM and its customers found the System/370 I/O architecture inadequate in large installations

for four primary reasons:

1. The number of supported devices was limited to 4K.

2. IBM designed the System/370 architecture for a uniprocessor environment. In a

multiprocessing (MP) environment, each CPU was assigned to a dedicated channel set.

(Each CPU had to work with its own channel set). If a device was connected only to a

single CPU via a channel, that specific CPU had to be running all programs requiring

data transfer from the device.

3. The I/O module of the operating system performed many functions. This resulted in

CPU overhead. Some of these functions included handling control unit or device busy

indications as well as handling interruptions arising from device end and control unit end

conditions.

4. The System/370 channel operation was inefficient in a shared environment where

multiple devices connected to multiple CPUs via multiple channels. The architecture

required the physical channel used for initiation of an I/O operation be used for

continuation of the operation. This would cause delay, particularly when a device could

not reconnect because the original channel was busy (Prasad, 1989).

The I/O architecture of the 370/XA was different from that of the System/370 in several

respects. First, while the System/370 supported 4K devices, the 370/XA supported up to 64K.

Secondly, an I/O processor called the channel subsystem performed many of the functions

previously performed by the CPU (i.e., by the operating system) in the System/360 and System/370. Third, the most substantial change was the assignment of all of the channels to a new

element called the External Data Controller (EXDC). Unlike the System/370 architecture where


subchannels could be dedicated or shared, every device in a System/370XA configuration was

assigned its own subchannel by the EXDC. Every device now had an associated subchannel, and

there was a one-to-one correspondence between device and subchannel. Fourth, rather than each

processor owning a set of channels which it used almost exclusively, the processors shared all of

the channels. This completely eliminated the redundant channel assignment and

cabling problems previously discussed for MP processors (Prasad, 1989), (Artis, Houtekamer,

1993).

At the time, many of these changes were radically new concepts. A set of rules for

optimization of CPU time in I/O processing, operational ease, reduction of delays, and

maximization of data transfer rates led to these new concepts. These rules resulted from many

years of observation of the System/360 and System/370 I/O architectures in end-user

environments. Basically, the System/370XA architecture addressed the problems that had

developed with I/O throughput with a variety of hardware and software innovations. In

particular, IBM enhanced the channel subsystem and operating system with the System/370XA

and MVS/XA to support the dynamic reconnection of channel programs. Previously, the

System/370 architecture required each I/O completion to return to the processor via the same

path on which it originated. As configurations grew larger and more complex, the combinatorial

probability of finding all path elements used to originate an I/O free at the time of its completion

became increasingly lower (Artis, Houtekamer,1993). This resulted in poor I/O performance.

Dynamic reconnections can only fail if all possible paths are busy. New disk technologies had

also evolved to exploit the dynamic reconnection feature. Examples of such technologies were

Device Level Selection (DLS) and Device Level Selection Enhanced (DLSE). The IBM 3380

disk drive was an example of a device having dynamic reconnection capability. The 3380 could


connect to a channel other than the one that initiated the I/O operation when it is ready to go

ahead with data transfer. A block multiplexer channel works with a disk drive that has rotational

position-sensing (RPS) capability in the following manner: the channel disconnects from the

device once it has issued a search ID equal command prior to the read command. When the

addressed sector is approaching the read/write head, the device attempts to reconnect with the

channel. If the channel is not ready, it attempts to reconnect at the end of the next revolution

(hence a delay). With the capability for dynamic reconnect, the disk can reconnect to any

available channel defined as a path, thus minimizing delays in the completion of I/O (Cormier,

1983).
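A back-of-the-envelope calculation illustrates why dynamic reconnection improved matters. If each candidate channel path is busy with some (assumed independent) probability, a System/370-style reconnect to the single originating path fails with that probability, while an XA-style dynamic reconnect fails only if every defined path is busy. The utilization figure below is hypothetical, chosen only to show the scale of the difference.

```python
# Illustrative probability sketch; assumes independent path utilizations.
def reconnect_miss_probability(p_busy, n_paths):
    """Probability that every one of n_paths is busy at reconnect time."""
    return p_busy ** n_paths

p = 0.30                                   # assumed 30% utilization per path
print(reconnect_miss_probability(p, 1))    # same-path rule: ≈ 0.3
print(reconnect_miss_probability(p, 4))    # any of 4 defined paths: ≈ 0.0081
```

Under these assumptions, allowing reconnection over any of four defined paths cuts the chance of a missed reconnect (and hence an extra disk revolution) from roughly 30% to under 1%.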

Under the System/370 architecture, the CPU (operating system) was responsible for channel

path management. Channel path management consisted of testing the availability of channel

paths associated with the device. Obviously this involved substantial overhead for the CPU. “In

the System/370XA architecture, path selection and the burden of handling the majority of the interaction with the I/O subsystem was removed from the CPU (operating system) and transferred to the external data controller” (Artis, Houtekamer, 1993, p. 114). Also, under the

System/370 architecture, the CPU was interrupted at the end of various phases of execution of

the I/O operation. In contrast, IBM enhanced the System/370XA operating system (MVS/XA)

to allow any processor to service an I/O interrupt, rather than the I/O being serviced only by the

issuing processor. Although the channel passed an interrupt to notify the processor of an I/O

completion, the EXDC handled most of the interactions with the devices and other control

functions (Johnson, 1989).

Since the MVS/XA operating system no longer required knowledge of the configuration

topology for path selections, the majority of information in the I/O Gen in prior architectures


such as System/360 and System/370 was migrated to a new procedure in System/370XA. This

new procedure was the I/O Configuration Program (IOCP) generation. The IOCP Gen produces

a data set processed by the EXDC at initialization time to provide the EXDC with knowledge of

the configuration. Since it was now the EXDC selecting the path and dealing with intermediate

results and/or interrupts, the processor’s view of I/O is far more logical than physical. To initiate

an I/O under System 370/XA, the processor merely executes a start subchannel (SSCH)

command to pass the I/O to the EXDC. The EXDC is then responsible for device path selection

and I/O completion. “The assignment of this responsibility makes the overall concept of I/O far

more distributed than centralized (Artis, Houtekamer, 1993).”

While it is certainly true that the System/370/XA architecture addressed most of the

performance issues that had developed in the I/O subsystem with System/360 and System/370,

the complexity and size of the I/O configuration continued to grow. Some of these issues

included cable length restrictions, bandwidth capacity of the I/O subsystem, and the continued

requirement for non-disruptive installation to facilitate end-users’ response to the ever-increasing availability demands of critical online systems.

2.1.5 System/390, zSeries, System Z and beyond

IBM introduced the System/390 architecture in September 1990. The primary I/O objective of

the System/390 architecture was addressing the restrictions encountered during the end of life of

the System/370/XA architecture. Most of these restrictions had to do with the old parallel

channel implementation. From the System/360 through System 370 and onto

System/370XA/EXA, IBM implemented selector, byte multiplexer, and block multiplexer

channels with what were known as parallel channels (also known as bus and tag). The parallel

channel structure was originally introduced with the System/360 architecture, and has remained


in use with some modifications ever since. Many mainframe users today are still running some

devices on parallel channels (and they tend to hide these away in some dark corner of their data

center). Parallel (bus and tag) channels are also known as copper channels since they consist of

two cables with multiple copper coaxial carriers. These copper cables connect to both the

channel (mainframe) and I/O devices using 132-pin connectors. The cables themselves weigh

approximately one pound/linear foot (1.5 kg/meter) and due to their size/bulk they require a

considerable amount of under-raised-floor space. The nickname “bus and tag” arose because one of the two cables, the bus cable, contains the signal lines for transporting the data. Separate lines (i.e., copper wires) are provided for eight inbound signals and eight outbound signals (1 byte in,

1 byte out). The second cable is the tag cable, which controls the data traffic on the bus cable.

Despite all of the historical improvements in the implementation of parallel channels, three

fundamental problems remained:

1. The maximum data rate could not exceed 6Mbyte/sec without a major interface and

cabling redesign.

2. Parallel cables were extremely heavy and bulky, and required a large amount of space

below the computer room floor.

3. The maximum cable length of 400 ft (122m) presented physical constraints to large

installations. Although channel extenders allowed for longer connections, due to

protocol conversion they typically lowered the maximum data rate below 6 Mbyte/sec.

Also, at distances greater than 400 ft, the signal skew of parallel cables became

intolerable (Artis, Houtekamer, 1993).


These three fundamental issues were the driving force behind the introduction of fiber optic

serial channels in the S/390 architecture. These serial channels are formally called the ESA/390

Enterprise Systems Connection (ESCON) architecture I/O interface by IBM (IBM, 1990).

IBM decided that serial transmission cables were to be the next generation of interconnection

links for mainframe channel connectivity. Serial transmission cables could provide much

improved data throughput (12-20 Mbyte/sec sustained data rates) with other benefits such as a

significant reduction in cabling bulk. The System/390 architecture introduced a new, serial

channel architecture called Enterprise System Connection (ESCON). The ESCON architecture

was based on a 200 Mbit/sec (20 Mbyte/sec) fiber optic channel technology that addressed both the

cable length and bandwidth restrictions that were hampering large MF installations (IBM, 1990).
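The relationship between the 200 Mbit/sec line rate and the 20 Mbyte/sec data rate follows from the serial encoding: each data byte is sent as a 10-bit transmission character, so the peak payload rate is one tenth of the line rate. The following one-line sketch shows the arithmetic; it is a simplification that ignores frame overhead, so sustained rates are somewhat lower.

```python
# Rough arithmetic behind "200 Mbit/sec = 20 Mbyte/sec"; ignores framing.
def peak_payload_mbytes_per_sec(line_rate_mbit, bits_per_byte_on_wire=10):
    # Each 8-bit data byte occupies a 10-bit transmission character on
    # the fiber, so payload throughput is line rate / 10.
    return line_rate_mbit / bits_per_byte_on_wire

print(peak_payload_mbytes_per_sec(200))   # → 20.0
```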

As a pioneering technology, ESCON was architected differently than today’s modern Fiber

Connection (FICON) or open systems’ Fibre Channel Protocol (FCP). With ESCON, the fiber

optic link consisted of a pair of single fibers, with each fiber of the pair supporting unidirectional transmission: one fiber carried data in each direction. The

two fibers comprising an optical link were packaged in a single multimode, 62.5 micron cable

using newly developed connectors on each end. This functionally replaced two parallel cables

that were each about one inch in diameter resulting in a significant saving of both weight and

physical bulk.

All of the ESCON system elements were provided with the newly developed ESCON

connectors or sockets (very different than the small form factor plugs (SFPs) used today) for link

cable attachment to the mainframe, storage or a director. For the very first time, I/O link cables

could be connected and disconnected while mainframe and storage elements remained in an

operational state. This innovation was the first implementation of non-disruptive “hot


pluggability” of I/O and storage components. Other advantages afforded by fiber optic

technology included improved security, because the cable did not radiate signals, and insensitivity to external electrical noise, which provided enhanced data integrity.

Another “First” for ESCON was its innovative use of a point-to-point switched topology. For

ESCON in 1991, this P-to-P switched topology resulted in the creation of a “storage network”,

the world’s first deployment of storage area networking. With a switched point-to-point topology

all elements were connected by point-to-point links, in a star fashion, to a central switching

element (ESCON Director). By way of contrast, the older IBM parallel data streaming I/O

connectivity used a multi-drop topology of “daisy-chained” cables which had no “switch”

capability (Guendert, Lytle, 2007).

2.1.7 ESCON directors

At the beginning of the ESCON phase, IBM developed its own central dynamic crossbar

switching technology referred to as an “ESCON Director”. These first storage switching devices,

announced in late 1990 and enhanced over the years, provided many benefits including:

• ESCON I/O channel ports that could dynamically connect to 15 storage control units

compared to a maximum of 8 static storage control units with the older data streaming

protocol.

• Allowing devices on a control unit to be shared by any system’s ESCON I/O channels

that are also attached to that director.

• Customizing the any-to-any connectivity characteristics of the ESCON director at an

installation by means of an operator interface provided by a standard PS/2 workstation

or by means of an in-band connection to a host running the ESCON Manager

application program.


• The use of HA redundant directors to create an HA storage network (five-9’s) by

providing alternate pathing with automatic failover.

• Common pathing for channel-to-channel (CTC) I/O operations between mainframe

systems.

• Non-disruptive addition or removal of connections while the installed equipment

continued to be operational.

• An internal port (the CUP or control unit port) that could connect to a host channel via

internal ESCON director switching, and a physical director port genned for that

connectivity. When the mainframe and CUP were connected in this manner, the host

channel communicated with the ESCON director as it would with any other control unit.

• An ability to “fence” and/or block/unblock ports to isolate problems, perform

maintenance and control the flow of traffic across the director ports.

In essence, mainframes have from the beginning led the way in making fiber channel media a

commonplace connectivity solution in the enterprise data center. Both McDATA’s and

InRange’s enterprise mainframe products became widely known and respected for their high

availability characteristics. Today, many of the characteristics that are taken for granted in both

the z/OS server and the open systems server world were initially developed specifically to meet

the demanding requirements of the MVS and z/OS enterprise computing platforms. The strict

requirements of providing a high availability, 7x24 enterprise computing data center drove many

of the technical initiatives to which McDATA responded with vision and innovation.

Interestingly, among the many accomplishments in the mainframe arena, it was McDATA

and not IBM that engineered and produced the first high availability ESCON director. IBM

OEM’d it from McDATA and then deployed it as their 9032 model 3 (28-124 ports) announced


in September 1994. Previously, IBM had produced their two original, non-HA versions of the

ESCON Director, the 9032 model 2 (28-60 ports) and 9033 model 1 (8-16 ports). After the

release of the 9032-3 ESCON Director, McDATA then went on to engineer, on behalf of IBM,

two additional HA versions of this product line – the 9033 model 4 (4-16 ports) and 9032-5 (24-

248 ports). InRange also directly sold their high availability CD/9000 (256 port) ESCON

Director to customers. Between 2003 and 2007, numerous mergers and acquisitions occurred in

the storage networking industry. In 2003 Computer Network Technology (CNT) acquired

Inrange. In 2005 McDATA acquired CNT, and in 2007 Brocade acquired McDATA. In essence

Brocade, as it is currently organized, has provided the world with 100% of the switched ESCON

capability in use today (Guendert, Lytle, 2007).

There were a number of features that allowed the 9032 model 3 and model 5 to be highly

available. High on the list of those HA features were the ability to activate new microcode non-

disruptively. McDATA first developed this capability in 1990 when it engineered the 9037

model 1 Sysplex Timer for IBM. The Sysplex Timer was a mandatory hardware requirement for

a Parallel Sysplex consisting of more than one mainframe server. It provides the synchronization

for the time-of-day (TOD) clocks of multiple servers, and thereby allows events started by

different servers to be properly sequenced in time. If the Sysplex Timer goes down then all of the

servers in the Parallel Sysplex are at risk for going into a hard wait state. So the Sysplex Timer

that was built by McDATA to do non-disruptive firmware code loads and this technology was

then available to be used by McDATA when it created its ESCON (and later FICON) Directors.

Time passed and ESCON deployments grew rapidly and along with it optical fiber

deployments. As pointed out above, ESCON was by necessity developed as a proprietary

protocol by IBM as there were no fibre channel (FC) standards to fall back on at that time. Very


little FC storage connectivity development was underway until IBM took on that challenge and

created ESCON. So over time as ESCON was pushed towards its limits, it began to show its

weaknesses and disadvantages, chief among them being performance limitations, addressing

limitations and scalability limitations. And while IBM mainframe customers deployed ESCON

in data centers worldwide, fiber channel was also maturing and standards and best practices were

being developed and published.

2.1.8 Fiber or Fibre?

The question about using the spelling “fibre” or “fiber” occurs frequently. There are a couple of

good explanations of proper use. The English typically use the “fibre” spelling which has French

origins, while the alternative American spelling of “fiber” has Germanic origins and is still used

in many European countries where the origin of their native language is more Germanic than

Latin.

A second explanation, and one better suited to the industry, is that the standards committee

decided that “fiber” means the physical elements (cable, switch, etc.) and “fibre” is used when

discussing the architecture of a fabric or the protocol itself (e.g. Fibre Channel Protocol).

2.1.9 The evolved fibre channel protocol and the mainframe

All of the infrastructure vendors noted above played a pivotal role in various standards

organizations, especially regarding fibre channel standards. Their premier position in many of

the standards committees allowed them to voice customer concerns and requirements and then

help tailor standards that influenced the deployment of FC in data centers. Among their many

successes in this area were the open systems vendor interoperability initiatives and the 2Gbps

and 4Gbps link speed initiatives.


Brocade was among the first vendors to provide fibre channel switching devices. InRange and

McDATA became best known for their director-class switching devices and CNT for its

distance extension devices. But it is the director-class switch that is at the heart of switch

architectures in the MVS and z/OS data centers. As a result of the visionary work done by

McDATA with the ESCON director, and because of their track record of working closely and

successfully with them, IBM asked McDATA to help create the follow-on mainframe protocol to

ESCON – the FIber CONnection or FICON.

IBM wanted FICON to be its new high-performance I/O interface for “Big Iron” that would

support the characteristics of existing and evolving higher speed access and storage devices. To

that end, IBM wanted the design of its new FC-based protocol to be based on currently existing

fibre channel standards so that it would provide mobility and investment protection for

customers in years to come. To be concise, IBM wanted the new FICON protocol to use the

protocol mapping layer based on the ANSI standard Fibre Channel-Physical and Signaling

Interface (FC-PH). FC-PH specifies the physical signaling, cabling and transmission speeds for

fiber channel. In pursuit of that goal, IBM and McDATA engineers wrote the initial FICON

specifications, co-authored the FICON program interface specifications, and pushed the FICON

protocol through the standards committees until it became an accepted part of the FC standards.

In the course of this work McDATA became the only infrastructure vendor to hold FICON co-

patents with IBM (Guendert, Lytle, 2007).

McDATA and InRange then turned their attention to providing FICON connectivity solutions.

This actually occurred in two parts. Part one was providing a bridge for ESCON storage users to

test and use the new FICON protocol and part two was creating native FICON switching devices.


2.1.10 The FICON bridge card

IBM announced FICON channels for use with IBM’s 9672-G5 line of S/390 mainframe servers

on May 7th, 1998. Native point-to-point FICON attachment was supported as well as a FICON-

to-ESCON bridging function. With a bridge, the customer could connect up to 8 ESCON storage

control units, via a FICON bridge feature card in the ESCON director, which then multiplexed

back to the S/390 mainframe server using a FICON CHPID link. In bridge mode, the ESCON

protocols limited activities to one operation per storage control unit, but multiple storage control

units could be active across the FICON channel. Similarly, there is no increase in bandwidth for

any individual storage control unit because the control units are still on ESCON links; however,

there is an increase in device addressability (for example, the number of storage devices).
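The bridge-mode rule described above (one active operation per ESCON storage control unit, but up to eight control units concurrently active behind a single FICON channel) can be sketched as follows. This is an illustrative model, not IBM’s implementation; the class and method names are hypothetical.

```python
# Illustrative model of the FICON bridge concurrency rule (hypothetical names).
MAX_BRIDGED_CUS = 8     # control units multiplexed behind one FICON CHPID

class FiconBridge:
    def __init__(self):
        self.active = set()            # control units with an op in flight

    def start_io(self, cu):
        if cu in self.active:          # ESCON rule: one op per CU at a time
            return False
        if len(self.active) >= MAX_BRIDGED_CUS:
            return False               # bridge feature card limit reached
        self.active.add(cu)
        return True

    def end_io(self, cu):
        self.active.discard(cu)

bridge = FiconBridge()
assert bridge.start_io("CU1")          # first operation proceeds
assert not bridge.start_io("CU1")      # same CU must wait for completion
assert bridge.start_io("CU2")          # a different CU can run concurrently
```

The model shows why bridging raised addressability and aggregate concurrency across the FICON channel without raising the bandwidth of any individual ESCON-attached control unit.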

There were several good reasons for creating a FICON bridged environment. First of all, this

was a great way for customers to begin testing FICON without having to upgrade very much of

their existing ESCON infrastructure. A FICON bridge card feature allowed for the early

deployment of FICON while keeping the I/O configuration simple, so that users could exploit the

greater bandwidth and distance capabilities of a FICON mainframe channel in conjunction with

their existing ESCON storage devices. Also giving a boost to the FICON bridge feature was the

fact that IBM was late to deliver FICON capable DASD storage products. Only its Magstar 3590

A60 tape control unit supported FICON and that was not until December 2000, over 2 years after

FICON was introduced on the mainframe.

There is an interesting history behind FICON’s development on the mainframe and the delay

in getting FICON capable storage from IBM. Bear in mind that the fundamental force driving

many of the FICON decisions was that at this point in history, IBM was shifting from a hardware

company that also sold software and services to a services and software company that sells

hardware. This shift in personality, of course, created some roadmap discrepancies between the

various entities within IBM. The IBM mainframe team was well aware of the various limitations

of ESCON and wanted FICON offered on storage as soon as possible. However, the IBM storage

team had embraced fibre channel in the form of FCP and had prioritized FCP development over

FICON development. Undaunted, the IBM mainframe team needed to provide customers with a

migration path to FICON without causing the customer to upgrade both sides of the ESCON

fabric simultaneously. Their innovative solution was the FICON Bridge feature for the 9032-5,

which gave the mainframe team a method of introducing the FICON channel without causing a

customer to replace their storage at the same time. Once the bridge feature was brought to market,

the pressure to produce FICON-attachable storage increased. Even though customer interest in

FICON DASD had increased significantly, the IBM storage team had already prioritized FCP-

attached storage higher and could not immediately begin the FICON-attached storage

development. The importance of the hardware pull model was too great to limit the production of

FICON-attached storage devices and IBM ultimately provided the FICON specification to

competitive storage vendors. As one would expect, this spurred on the IBM storage group and

they were the first to deliver FICON DASD at the end of July 2001 (Guendert, Houtekamer,

2005).

From 1998 until 2001 there simply was not very much native FICON storage. This was

finally resolved in July 2001 with the IBM announcement of FICON onto its TotalStorage

Enterprise Storage Server (code named Shark) as well as to two models of its TotalStorage

Virtual Tape Server (VTS). These were the very first DASD and virtual tape systems natively

compatible with FICON. After those announcements other storage vendors began releasing their

own versions of FICON capable storage.

From May 1998 through July 2001 and beyond, FICON bridging was in wide use in data

centers that wanted to start deploying FICON. In fact, manufacturing of the FICON bridge card

feature continued until December 31, 2004. It had a very long life and continues to be supported

and used in some datacenters even today.

But where did this FICON bridging card feature originate? IBM once again approached

McDATA when IBM needed a 9032 ESCON Director internal bridge feature (a card) built in

order to translate between FICON and ESCON protocols. The new FICON bridge feature, engineered and manufactured only by McDATA, would manage the optoelectronics, framing,

and format conversion from the newer 1Gbps FICON channel paths (CHPIDs) back to ESCON

storage devices (Guendert, Lytle, 2007).

In regard to the functions of the bridge, there were two pieces of the puzzle - one resided in the

mainframe channel and the other on the bridge card. IBM actually developed both of these pieces,

which required the IBM developers to split the necessary code between the mainframe channel

and the FICON bridge card. The actual conversion between ESCON and FICON could then

remain IBM's intellectual property and McDATA's development effort could remain in the fibre

channel realm. This was an important aspect of the joint effort because it allowed the two

companies to develop a customer solution without the problems of intellectual property rights

exchange getting in the way.

All of this developmental effort on the FICON-to-ESCON bridge feature allowed the 9032

ESCON Director to become the first ever IBM switch to support FICON. Each of the bridge

features could perform the work of up to eight ESCON channels. Up to 16 FICON Bridge

features (equivalent to 128 ESCON ports or one half of the Model 5’s capacity) would be

supported in a single 9032 Model 5 ESCON Director. The FICON bridge feature handled

multiplexing (1 FICON port fed up to 8 ESCON ports) and protocol format conversion. IBM

took responsibility to perform protocol conversion in the mainframe. The 1Gbps optical input

was steered to an optoelectronic converter, and from there to a 32-bit, 26-MHz parallel bus. Next

was a framer, then a fibre channel high-level protocol converter, followed by eight IBM-supplied

ESCON engines the output of which connected to the 9032 Director’s backplane and thence

outbound to ESCON storage devices. Except for the costs associated with the bridge feature

itself, upgrading the ESCON Director was nearly painless and helped to conserve the customer’s

investment in their 9032 Model 5 ESCON Director. This very first implementation of FICON on

the mainframe allowed customers to protect their investment in ESCON control units,

ESCON Directors, and other infrastructure while migrating to the new FICON channels and

storage control units.

The 9032-5 ESCON Director has the distinction of being the first switch that was able to

connect up to both FICON and ESCON protocols in a single switching unit chassis – and it was

the McDATA exclusive FICON bridge card that allowed that connectivity to take place.

InRange announced, in December 2000, the ability for fibre channel and FICON storage

networks to coexist with legacy ESCON storage through a single storage area network (SAN)

director. Their unique storage networking capability was provided through the release of a Fibre

Channel Port Adapter (FCPA) for their CD/9000 ESCON Director. For the first time, ESCON,

FC and FICON could all coexist within the same Director-class switch footprint (Guendert,

2005).

2.1.11 Native FICON directors

The FICON Director solves the same problem for the customer as the ESCON Director solved. It

provides any to any connectivity between hosts and devices that use FICON connections. The

backbone of an open systems SAN or mainframe FICON infrastructure consists of one or more

fibre channel switching devices. Over the past decade, which saw the rise of fibre channel-based storage networking, two entirely different classes of FC switching devices have evolved in today's market: traditional motherboard-based switches (switches)

and redundant component-based director-class switches (directors). The category of switching

device selected to be deployed in any specific situation has a significant impact on availability,

scalability, security, performance and manageability of the networked storage architecture.

Both InRange and McDATA had developed FC-based Directors to support the high end of the

expanding and important open systems server market. Brocade was developing and shipping

motherboard-based switches into the small-medium size customer segment of the open systems

server market. But InRange and McDATA were the early pioneers in switched-FICON

technology.

Because of their many years of experience in the mainframe data center both InRange and

McDATA knew that only a director-class switch would be accepted for deployment as the

backbone in these demanding customer infrastructures. A director-class switch is populated with

redundant components, provided with non-disruptive firmware activation and engineered to be

fault-tolerant to 99.999% availability if deployed in redundant fabrics. These switches typically

have a high port count and can serve as a central switch to other fabrics. These high-end switches

and features are also what IBM wanted to certify for its initial switched-FICON implementations.

Therefore, each of these rival vendors engineered, certified and sold a series of products to meet

the growing needs of this segment of the storage interconnect marketplace (Guendert, Lytle,

2007).

A FICON director is simply a fibre channel director that also supports the FICON protocol

and some FICON Management software. Customers usually demand that the backbone of a

FICON infrastructure be based on director-class switches that can provide five-nines of

availability (Guendert, 2005a).

Beyond what is available for open system SAN, the most significant enhancements to a

FICON switching device are the FICON Management Server—an ability to enable Control Unit

Port (CUP) and have management views with hexadecimal port numbers—and an ability to have

switch-wide connection control by manipulating the Prohibit Dynamic Connectivity Mask

(PDCM – a port connectivity matrix table). PDCM is a hardware feature of a FICON switching

device that provides extra security in FICON networks. Essentially, PDCM is hardware zoning that allows the blocking and unblocking of ports, providing switch-wide port-to-port connection control.

FICON Management Server (FMS), an optional feature, integrates a FICON switching device

with IBM’s System Automation OS/390 (SA OS/390) or SA z/OS management software and

RMF. It enables Control Unit Port (CUP) on the FICON switching device and provides an in-

band management capability for mainframe users. It means that customers with IBM’s SA

OS/390 or SA z/OS software on their mainframe host can manage a FICON switch over a

FICON connection in the same way they always managed their 9032 ESCON Directors. It also

allows RMF to draw performance statistics out of the FICON switching device directly.

2.2 ESCON

From its introduction in 1964 until the late 1980s, IBM made only incremental

enhancements to the System/360 I/O interface. Faster processors and tightly coupled

multiprocessing eventually outstripped I/O capability. Performance projections indicated that

the number of channels could exceed practical limits. By the mid to late 1980s it became clear to

IBM that there was a need for a new basis for I/O interconnection. The Enterprise Systems Connection (ESCON) architecture was a comprehensive interconnection system that embodied a

synergy of technology, architecture, channels, and topology using fiber optics.

2.2.1 ESCON technology

ESCON made it possible to structure what was at the time a high speed backbone network in

a data center. The primary elements in an ESCON network are the fiber optic links, ESCON

channels, ESCON control units, and ESCON directors. Software functions supporting these

elements include the ESCON Manager program and the ESCON Dynamic Reconfiguration

Management Program.

The primary mainframe customer concerns with respect to the interconnection of systems,

control units, and channels are the need for increased data rates, system disruption, cable bulk,

cable distance, and complexity. IBM developed ESCON with the following 10 objectives (Calta,

deVeer, 1992):

1) Improved interconnection capability within data processing centers for both increased

intersystem connections and increased device sharing between systems.

2) Allowing an increased number of devices to be accessible by channels.

3) Extension of the distance for direct attachment of control units and direct system-to-

system interconnection in a campus environment.

4) Allowing operating systems and applications to run unchanged on computers

installing ESCON.

5) Allowing additional control units and systems to be inserted into running configurations

without having to turn off the power. This avoids scheduled and unscheduled outages for

installation and maintenance (typically referred to as “hot pluggability”).

6) Provision of significantly higher instantaneous data rates than possible with parallel

channels. Also, development of more efficient control and data transfer protocols. Both

of these would provide an order of magnitude increase in I/O channel throughput.

7) Development of availability approaches for systems configured with ESCON attachment

so that these systems would support continuous operations.

8) Significant reduction in the sheer bulk and the number of cables required for

interconnection of system elements in the data center.

9) Support for orderly migration from the parallel channel connected environment to new

ESCON configured systems.

10) Providing a base for the total systems solution of interconnection. Such a system “would

be suitable for both current and future generation systems and their I/O” (Calta, deVeer,

1992, p 536-37).

The scope of the objectives and the original scope of the ESCON system were limited to

serving as the hub/backbone interconnection network within a data processing complex. In other

words, the extended machine room and campus environment. IBM traditionally defined the

machine room/extended machine room as “an area containing one or more systems of processors

and channel attached I/O, where I/O devices are possibly shared between systems” (Calta,

deVeer, 1992, p 537). The machine room/extended machine room could be spread out over

different physical locations. It excluded equipment attached via common carrier facilities

(Telco). IBM distinguished the machine room/extended machine room from the campus

environment only by the assumption of distances under one kilometer (km) between the systems

(processors, ESCON directors) and control units. This included data centers where equipment

was installed on multiple levels of a multi story building. On the other hand, a campus is “an

area, including multiple buildings, which is wholly under the control of an enterprise” (Calta,

deVeer, p 537). The interconnections between buildings may extend to 2-3 km or more. A

campus is an area where dedicated cables are installed without interfering with a Telco right of

way. ESCON’s stated objectives included interconnecting systems in data centers occupying

widely dispersed buildings. The objectives also included supporting database backups in secure,

remote buildings as well as supporting the sharing of databases between systems in separate

buildings. Finally, the objectives included supporting local area network (LAN) connections

within and between campus buildings (Coleman, Meltzer, Weiner, 1992).

IBM decided that a new technology, architecture, and interconnection topology would be

required to meet the stated objectives for ESCON. After studying and analyzing the potential

and capabilities of a wide variety of transmission media, the first decision made by IBM was to

use serial fiber optic links. Next, the technology and transmission techniques needed to be

determined. IBM spent a painstaking effort analyzing and selecting an interconnection topology

that would provide the most suitable characteristics for both the machine room/extended machine room and campus data processing environments. Lastly, the campus backbone network IBM envisioned had

to have its architecture defined.

From the outset, IBM decided that ESCON links would use serial transmission. At distances

greater than 400 ft (122m) the inherent signal skew of parallel cables became intolerable. Serial

transmission cables would also result in a significant reduction in cable bulk. It also meant that a

much higher rate of transmission would be required to achieve desired levels of performance.

Copper cables/coaxial lines were too limited in bandwidth. Fiber optic cable showed the

promise of much higher rates of sustained bandwidth capable over greater distances. A

drawback of fiber optics was, at the time, fiber optic technology suitable for ESCON was still

highly experimental. It required the development of reliable devices and connections, in addition

to suitable jacket material and fiber.

The initial link technology selected for ESCON was serial fiber optic technology using a

longwave (LW) LED. LW LEDs were more reliable, less costly, and safer than the LW laser

technology available at the time. IBM established a target transmission rate of 200 Mbit/sec (20

MByte/sec). The current IBM implementation of ESCON uses 200 Mbit/sec (20 MByte/sec) fiber

optic technology. Actual implementations offer a maximum data transfer rate of 17 MByte/sec

(Artis, Houtekamer, 1993). Based on fiber loss information available from testing data, IBM

concluded that transmission distances of 2-3 km were achievable.

IBM defined ESCON’s fiber optic link to be a pair of single fibers capable of supporting the

full duplex transmission of data. Each ESCON link physically consists of two fibers. One fiber

is for inbound signals while the second is for outbound signals. This ensures that the fiber itself will not restrict two-way data transmission, provided the storage controller can exploit it. IBM employs two types of fiber in ESCON's implementation. The initial

1990 announcement of ESCON incorporated multimode fibers and LED light sources. The

maximum link length that IBM supported for these components was 3 Km. IBM made a second

ESCON announcement in 1991. This second announcement provided the Extended Distance

Feature (XDF) single mode fibers with laser light sources. XDF components provided a

maximum link length of 20 km. Disk (DASD) was still restricted to a maximum distance of 9 km

(Artis, Houtekamer, 1993). The two fibers in the links are packaged in a multilayer cable jacket

with a 4.8mm diameter. This is in stark contrast to the 1 inch diameter of parallel I/O interface

cables. IBM also developed cable terminations that are duplex connectors specifically designed

to ensure proper fiber alignment and to prevent misinsertion. All of the elements in an ESCON

system are provided with matching connectors for attachment of the link cables. The link cables

were designed by IBM to allow for connection and disconnection while the elements remain in

an operational state. Two other previously unrealized advantages that were afforded by the fiber

optic include insensitivity to external electrical noise and improved security (Calta, deVeer,

1992).

2.2.2 ESCON architecture

The basic rationale behind the ESCON protocol architecture was maximization of cable

distance. To maximize cable distance, IBM needed to minimize the number of handshakes

(message exchanges). IBM designed data transfer pacing to allow full data rate (20 MByte/sec)

transmission and streamline the control protocol. To address streamlining the control protocol

and control exchange, IBM designed the ESCON control frames to contain an entire channel

command word (CCW). The ESCON receiver also implemented sufficient buffers for the round

trip distance so as to keep the sender supplied with enough data requests to support continuous

data transfer.
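The pacing scheme above can be put in rough numbers: to sustain continuous transfer, the receiver must buffer at least one round trip's worth of in-flight data. The sketch below uses the document's figures (20 MByte/sec, 200,000 km/sec propagation); the 3 km link length and 1024-byte DIB are illustrative choices, not values from a specification.

```python
# Rough buffer-sizing arithmetic for ESCON data-transfer pacing.
# DATA_RATE and PROPAGATION_KM_S come from the text; LINK_KM and
# DIB_BYTES are illustrative assumptions.

DATA_RATE = 20_000_000        # bytes/sec (20 MByte/sec)
PROPAGATION_KM_S = 200_000    # signal speed in the fiber, km/sec
LINK_KM = 3                   # a maximum-length multimode LED link
DIB_BYTES = 1024              # largest device information block

round_trip_s = 2 * LINK_KM / PROPAGATION_KM_S           # 30 microseconds
bytes_in_flight = DATA_RATE * round_trip_s              # ~600 bytes
buffers_needed = -(-int(bytes_in_flight) // DIB_BYTES)  # ceiling division

print(int(bytes_in_flight), buffers_needed)
```

At 3 km a single maximum-size DIB buffer already covers the round trip; at XDF distances (20 km) the in-flight data grows proportionally, which is why the receiver buffering had to scale with distance.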

The use of fiber optics as the interconnection medium between processors and I/O device

provided two primary benefits (Elliott, Sachs, 1992):

1) Substantially higher data rates.

2) Larger transmission distances compared to the parallel (copper) buses traditionally used

for I/O interconnection. This is due in large part to fiber optics having very high noise

immunity as well as lower error rates. Error rates of less than 1 error in 10^13

bits are achievable with fiber optic transmission systems (Aulet, Doerstler, 1992).

Cable bulk and connector reliability are significant concerns in a computer system, with large

numbers of channels and I/O devices. Since fiber optics work with serial transmission on one

fiber in each direction, cable bulk is reduced and connector reliability is enhanced. Cable bulk is

reduced because a bi-directional fiber optic interconnection (link) requires only two fibers. On

the other hand, the System/390 with parallel channel interface, for a similar link, required 48

coaxial cables with 96 connector contacts at each end of the transmission link.

An I/O interface architecture is the set of rules that govern how information specified by the

I/O instructions of the processor is communicated on the transmission medium. These rules also

govern how the channel and I/O device cooperate to exchange this information. Enterprise

System Connectivity architecture (ESCON) transmits data over fiber optic links at a rate of 200

Mbit/sec. ESCON employs a 10 bit representation for each byte of data transferred, and

propagates signals at 200,000 km/sec (Artis, 1998).
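The figures above combine into two quantities that recur throughout this chapter: the peak payload rate implied by the 10-bit encoding, and the distance-dependent handshake delay. This is a back-of-the-envelope sketch using only the numbers quoted in the text (200 Mbit/sec, 10 bits per byte, 200,000 km/sec, and the 20 km XDF maximum from the previous section).

```python
# Back-of-the-envelope ESCON link figures; all constants are taken
# from the passage above.

LINE_RATE_BITS = 200_000_000   # bits/sec on the fiber
BITS_PER_BYTE = 10             # 8b/10b: each data byte is sent as 10 bits
PROPAGATION_KM_S = 200_000     # signal speed in the fiber, km/sec
XDF_LINK_KM = 20               # maximum XDF link length

# Peak payload rate: 200 Mbit/sec over 10-bit characters = 20 MByte/sec
peak_mbyte_sec = LINE_RATE_BITS / BITS_PER_BYTE / 1_000_000

# One round trip (handshake) on a full-length XDF link
round_trip_ms = (2 * XDF_LINK_KM / PROPAGATION_KM_S) * 1000

print(peak_mbyte_sec)   # 20.0
print(round_trip_ms)    # 0.2
```

The 0.2 ms round trip is why the architecture minimizes handshakes: every avoidable message exchange on a long link costs a fixed, distance-proportional delay regardless of payload size.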

The character representation employed by ESCON is known as the 8b/10b character set. The

8b/10b character representation is also employed by the fibre channel architecture as well as by

IBM’s proprietary serial storage architecture. The 8b/10b character set has two distinct benefits.

First, each character has two representations. These are RD+ and RD-, which are ones and zeros

rich respectively. Which representation to employ is dynamically determined by the sender. The

sender’s decision is based on the recent balance of 1s and 0s in the data stream to avoid burning

in the optical receiver. An optical receiver can exhibit signal memory if exposed to a prolonged

stream of 1s during a data transmission (this is similar to how a human would continue to see the

bright blur for several seconds after staring at a bright light source). The second benefit of the

8b/10b character representation is the presence of unique command characters. These characters

are 10 bit combinations not employed for either the RD+ or RD- representation of any character.

Twelve of these characters are referred to as special K characters. They are employed to denote

a variety of things, including the start and end of ESCON frames, error recovery, facilitation,

dynamic reconfiguration, and hot device plugging. The special K characters are also referred to

as signal byte command codes (SBCC).
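The disparity-balancing behavior described above can be illustrated with one real codeword. The two 10-bit encodings of the special character K28.5 below are the standard 8b/10b values; the selection logic is a simplified toy sketch, not an implementation of the full ANSI code tables.

```python
# Toy sketch of 8b/10b running-disparity selection using the K28.5
# special character. Codewords are the standard values; the tracking
# logic is simplified for illustration.

K28_5 = {
    -1: "0011111010",  # encoding sent when running disparity is negative (ones-rich)
    +1: "1100000101",  # encoding sent when running disparity is positive (zeros-rich)
}

def send(codewords, running_disparity):
    """Pick the encoding that rebalances the 1s/0s on the line."""
    word = codewords[running_disparity]
    ones = word.count("1")
    zeros = len(word) - ones
    # A non-neutral codeword flips the running disparity
    if ones != zeros:
        running_disparity = -running_disparity
    return word, running_disparity

rd = -1                  # start with negative running disparity
word, rd = send(K28_5, rd)
print(word, rd)          # the ones-rich codeword goes out; disparity flips to +1
```

Alternating the two representations this way keeps the long-term balance of 1s and 0s on the fiber near zero, which is exactly the receiver-protection property the text describes.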

Figure 1. ESCON frame structure

Data, status, and control information does not simply travel as long streams of bits. In ESCON,

the data, status, and control information travels over the ESCON links in a frame structure. The

structure of the frame consists of four primary data areas (Artis, 1998):

1) Link header-The link header contains a special K character start of frame delimiter,

destination port address, and finally a link control field. The link control field defines the

function of the frame.

2) Device header-The device header contains the device header flag, an information field

identifier, and a device address ranging from 0 to 1023.

3) Device information block (DIB)-The DIB contains either control, status, or user data.

DIBs may be 8, 16, 32, 64, 128, 256, 512, or 1024 bytes long. The DIB’s maximum size to

be employed is defined by the control unit’s ESCON adapters.

4) Link trailer-The link trailer contains a cyclic redundancy check (CRC) field for error detection, as well as a special K character end of frame delimiter.
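The four frame areas above can be sketched as a data structure. This is an illustrative Python rendering of the fields named in the text; the field widths, names, and validation are simplified assumptions, not the actual ESCON frame layout from IBM's specifications.

```python
# Minimal sketch of the ESCON frame areas described above. Field names
# and types are illustrative simplifications.

from dataclasses import dataclass

VALID_DIB_SIZES = {8, 16, 32, 64, 128, 256, 512, 1024}

@dataclass
class EsconFrame:
    # Link header: start-of-frame K character, destination port, link control
    dest_port: int        # destination port address
    link_control: int     # defines the function of the frame
    # Device header: flag, information-field identifier, device address 0-1023
    device_address: int
    # Device information block: control, status, or user data
    dib: bytes
    # Link trailer (CRC + end-of-frame K character) would be computed on send

    def __post_init__(self):
        if not 0 <= self.device_address <= 1023:
            raise ValueError("device address must be 0-1023")
        if len(self.dib) not in VALID_DIB_SIZES:
            raise ValueError("DIB must be 8..1024 bytes (powers of two)")

frame = EsconFrame(dest_port=0x2A, link_control=0x01,
                   device_address=512, dib=bytes(64))
print(len(frame.dib))   # 64
```

Note how the addressing splits: the link header routes the frame to a port, while the device header selects one of up to 1024 devices behind that port's control unit.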

The first frame in an ESCON communication sequence establishes the ESCON logical link

over the physical connectivity provided by ESCON’s architecture. ESCON protocol time is the

overhead associated with the establishment of this link (Artis, 1998).

Since some aspects of this architecture are determined by the nature of the I/O architecture’s

transmission medium and its interconnection topology, it was necessary to replace the parallel

interface architecture with a newly designed I/O interface architecture that described how

information is transferred on the serial fiber optic transmission medium. Fiber optics was chosen

as the ESCON transmission medium in order to meet the requirements for increased distances

and bandwidth compared to the predecessor parallel channels (IBM, 1990).

The large increases in processor speed in recent decades have led to large increases in

aggregate system I/O bandwidth and in the data transfer rates required for individual I/O devices.

The requirement for increased distances was driven by an increasing need to permit the high

speed interconnection of multiple computer systems within a geographic radius of a few

kilometers. It was also driven by the need to enable critical data storage devices to be placed in

secure locations.

At an early stage in the development process, IBM decided that fiber optics should be

introduced into System/390 by changing only the I/O interface architecture. IBM decided to not

change the system structure, I/O architecture of the processor, or existing I/O application

software. Taking this route enabled the obtainment of most of the performance benefits of fiber

optics, while simultaneously limiting the costs of change and development time.

There was a much simpler way to introduce fiber optics that also would have preserved

system compatibility. IBM could have chosen to preserve as much as possible of the parallel

interface’s architecture and simply replaced the physical transmission medium. This approach

would have preserved the existing parallel-interface protocol; however, the information that is

normally placed on the parallel interface links is converted to a serial format for transmission on

the fiber optic link. None of the methods IBM explored for doing this made efficient use of the

fiber optic link, nor did they provide an opportunity to introduce new I/O functionality. Even

more importantly, the parallel interface protocol does not provide the function needed for

establishing dynamic connections through the switch. This led to IBM deciding to design a

completely new, message based interface architecture that directly maps the semantics of the I/O

architecture of the processor onto messages on the fiber optic links, and provides opportunities

for introducing future functionality enhancements. The resulting architecture maximized the number of data bytes transferred per byte of control information and minimized the number of message exchanges between I/O devices and channels. This made for much more efficient use of the link bandwidth. Each message exchange (or handshake) results in additional message

processing. It also results in a distance dependent delay equal to the time for the optical signal to

travel from one end of the link to the other and return (a round trip).

Elliott and Sachs defined three architectural entities (Elliott, Sachs, 1992):

1) Channel-The channel directs the transfer of information between I/O devices and main

storage. It provides the common controls for the attachment of different types of I/O

devices. Each channel had one ESCON interface. Processors may be divided into

several logical partitions (LPARs), each of which functions as a separate processor. A

single ESCON channel may be shared by multiple partitions (EMIF). In terms of the

architecture, each sharing partition has a separate “image” which represents the shared

channel.

2) Control unit-The control unit provides the logical capability necessary to operate and

control one or more I/O devices and adopts the characteristics of each I/O device to the

requirements of the channel. A control unit has one or more ESCON interfaces. A single

control unit may consist of multiple logical entities called control unit images. An

ESCON control unit refers to the common control unit functions that provide/maintain an

interface between the I/O device and the channel subsystem. An ESCON control unit

image refers to the control unit facilities that are accessible from a single serial interface

or channel path (Artis, Houtekamer, 1993). Therefore, for example, a control unit with

four serial I/O interfaces will have four control unit images. Although there may be some

shared control unit facilities, most operations can be performed independently by these

images. In terms of the architecture, each control unit image is treated as an independent

control unit, with its own complement of I/O devices. Each ESCON interface on a

control unit provides communication with multiple images.

3) ESCON director-The ESCON director provides the capability to interconnect any two

links attached to it. The link attachment point on the ESCON director is a dynamic

switch port (or simply, a port). A port consists of an ESCON interface and the

electronics that implement the port function defined by the architecture. The basic

function of the ESCON director is to create a connection between two ports, thereby

enabling the channel and control unit attached to these ports to communicate with each

other. All ports can be simultaneously participating in connections, and each connection

can be transferring data at the maximum data rate of the individual links. Although a

failure of an ESCON director affects all attached channels and control units, it has

optional fault tolerance capabilities. Because each channel or control unit is directly

connected to the ESCON director by means of a separate link, the ESCON director

provides a degree of fault isolation among the channels and control units so that failures

or maintenance operations of one do not affect the others. Redundant paths between

processors and devices are provided through separate ESCON directors. This protects

the overall system against individual link failures as well as against ESCON director link

failures (Elliott, Sachs, 1992).

2.2.3 ESCON channels

The ESCON architecture required the design of a channel that could attach to the entire line of

Enterprise System/9000 processors and deliver the required performance. Other functions such

as the ESCON channel-to-channel (CTC) adapter were required, and these functions had to be

implemented using the same channel hardware. To meet these requirements there were some

key elements that went into the design of the IBM ESCON channel. This proved to be a

challenge at times for IBM, since these requirements were often conflicting (Casper, Flanagan,

Gregg, 1992). The ESCON channel had to provide a high enough level of performance to sustain

the maximum data transfer rate allowed by the ESCON architecture (Elliott, Sachs, 1992). It

also had to perform chaining and block recognition in a timely manner (IBM, 1990).

The basic function performed by an ES/9000 channel subsystem is to manage the transfer of

data and control unit information between attached I/O devices and system storage in order to

free the central processors (CPs) of this burden (Casper, Flanagan, Gregg, 1992). The

application requiring I/O must do the following: 1) build a channel program (in system storage)

consisting of one or more channel command words (CCWs) that describes the data areas and


provides the I/O device commands to be used. 2) build an operation request block (ORB)

specifying several parameters including the channel program address. Finally, 3) issue a start

sub-channel instruction that specifies the I/O device. The channel subsystem proceeds to queue

and execute the requested I/O operation and informs the program of final status of the operation

via an I/O interruption (Tucker, 1986).
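The three steps above amount to building small fixed-format data structures in storage. The sketch below packs a format-0 CCW (command code, 24-bit data address, flags, count) following the classic S/370 layout; the command code and flag values are illustrative assumptions, not taken from this dissertation.

```python
import struct

# Sketch of step 1: packing format-0 CCWs (8 bytes each). Layout is the
# classic S/370 format-0 CCW: command code, 24-bit data address, flags,
# a reserved byte, and a 16-bit count. Command/flag values are assumed.
def build_ccw(command, data_address, flags, count):
    return (struct.pack(">B", command)
            + data_address.to_bytes(3, "big")
            + struct.pack(">BBH", flags, 0, count))

READ = 0x02      # assumed: a basic read command code
CC_FLAG = 0x40   # assumed: command-chaining flag bit

# A two-CCW channel program: read 4 KB, chained to a second read.
program = build_ccw(READ, 0x010000, CC_FLAG, 4096) \
        + build_ccw(READ, 0x011000, 0x00, 4096)
assert len(program) == 16  # each CCW occupies one 8-byte doubleword
```

Steps 2 and 3 would then place the address of `program` into the ORB and issue the start subchannel instruction against the target device.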

The general structure of the ES/9000 channel system consisted of four primary elements:

1) Channels that are responsible for executing channel programs. Channels initiate channel

programs, perform data transfer and chaining operations, and provide final status

information to the integrated off load processors (IOPs). They also continue disconnected

operations when requested by an I/O device.

2) Integrated off load processors (IOPs): The IOPs perform all communication with the

central processors (CPs) and they also maintain the work queues for the channel

subsystem. They also perform initialization functions as well as aiding in recovery from

catastrophic channel errors. Finally, the IOPs perform path selection and they retry when

busy conditions occur (IBM, 1990).

3) Staging hardware: Staging hardware provides the communication paths between the

channels, the IOPs and the remainder of the system. Each channel is attached to the

staging hardware via its own channel attachment bus.

4) The I/O interface: The ESCON channel connects processors to ESCON control units,

converters, and directors by means of a full duplex fiber optic serial link that operates at

200 Mbit/s (20 Mbyte/s). This link is the I/O interface. The I/O interface utilizes the

character oriented 8B/10B data encoding scheme (Franaszek, Widmer 1983).


Communication on the I/O interface is performed with sequences of characters called

frames.

A microprocessor which controls all of the channel elements is at the heart of the ESCON

channel. All IBM mainframe channels since the early 1970s use some form of microprocessor.

The requirements for the ESCON channel’s microprocessor were more demanding than for the

previous channels. The ESCON architecture required the implementation of two channel types,

and the channel hardware had to be able to act as a control unit (the ESCON CTC adapter). This

in turn required the microprocessor to have complete control of the frame transmission and

reception hardware in order to allow different types of frames to be generated and received.

Unlike previous channel microprocessors that were only capable of handling one task at a time,

the ESCON channel microprocessor had to be able to control the data transfer to/from system

storage and to/from the I/O device simultaneously and had to switch tasks very rapidly between

the two.

The ESCON architecture also uses far more complex recovery algorithms than the

System/370 parallel I/O interface. The ESCON channel microprocessor was the key to the

ESCON channel design. Its programmability provided the flexibility to implement multiple

architectures, while the combination of rapid task switching, and multiple functions per cycle

provided performance equaling or exceeding that required by the ESCON architecture (Casper,

Flanagan, Gregg, 1992).

2.2.4 ESCON topology and ESCON directors

In discussing ESCON's topology, the term topology refers to the structure of the network

used to interconnect the elements of a data processing complex (both campus and

machine/extended machine room). In contrast to the multi-drop daisy chained topology used in


the IBM parallel channel system, ESCON utilizes a switched point-to-point topology: all

elements connect by point to point links in a star fashion to a central switching element (Calta,

deVeer, 1992). Many alternative topologies such as collision bus, token ring, and dual insertion

ring were considered, but in the end IBM decided to go with the switched point-to-point

topology for the ESCON system. It would utilize a central dynamic crossbar switch (an ESCON

director), to which the elements to be interconnected would attach via point-to-point links. This

would allow an I/O channel connected to the switch to connect to any other element also

connected to the switch (Calta, deVeer, 1992).

One of the challenges facing IBM was to determine if a suitable switch could be designed and

built at a reasonable cost. A properly designed switch would allow the switched point-to-point

topology to form a peer network. In other words, any attached element could connect to any

other attached element, and if the switch were fully dynamic, with N ports, N/2 concurrent

connections could be made. A dynamic switch means that connections are automatically

established and broken for each transmission sequence. This would also result in an economical

network. With the ESCON director, IBM came up with a switching device capable of (at the

time) extremely high bandwidth, while offering an economical form of interconnection network.
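The non-blocking property can be expressed as a simple capacity calculation. The 248-port figure is the 9032 Model 5 maximum cited later in this chapter, and the per-link rate is ESCON's 20 Mbyte/s payload rate:

```python
# A fully dynamic, non-blocking switch with N ports supports N // 2
# simultaneous full-duplex connections, each at the full link rate.
def director_capacity(ports, link_mbyte_per_sec=20):
    connections = ports // 2
    aggregate_mbyte = connections * link_mbyte_per_sec
    return connections, aggregate_mbyte

conns, agg = director_capacity(248)  # 9032 Model 5 port count
print(conns, agg)  # 124 concurrent connections, 2480 Mbyte/s aggregate
```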

One very important point: Even though ESCON transferred data in the form of packets called

frames, a switched point-to-point ESCON network is not a packet switching network. The lead

frame of an ESCON transmission includes a request for a connection. Following this, the

ESCON director establishes and maintains that connection for the entire duration of the

conversation between the two connected elements. Therefore, ESCON directors function in the

manner of a circuit switch. Once the ESCON connection is established, data does not experience


a store and forward delay while passing through the ESCON director. The connected path is full

duplex.

There were ten functional requirements IBM developed, and the switched point-to-point

topology satisfied all ten:

1) Install function-The switched point-to-point topology simplified the addition of

equipment to, or deletion from a given configuration.

2) Alternate pathing function-Two (or more) available paths could be provided from an

ESCON I/O channel to control unit devices. These paths could be used interchangeably

as an alternative under busy or failure conditions.

3) Channel-to-channel (CTC) function-The switched point-to-point topology allowed any

ESCON I/O channel of one system to connect to an ESCON I/O channel of any other

system attached to the same director. For CTC operation, the ESCON I/O channels on

one of the communicating systems had to be initialized with the ESCON CTC microcode.

4) Fencing function-Fencing is the isolation or disabling of an ESCON director port. It is

primarily used for isolation of an element during maintenance operations.

5) Multi-drop function-The switched point-to-point topology gave I/O channels the ability

to connect to many control units.

6) Multiple/shared control unit function-Once a control unit has been attached to an ESCON

director, any system’s ESCON I/O channels that are attached to that ESCON director

may share its devices.

7) Partitioning function-ESCON directors have the ability to partition director ports, i.e., to

restrict the connectivity of a given port to a subset of all other ports. This function is

primarily used to isolate portions of the system for test configurations.


8) Dedicated Path Function-used for establishing a dedicated connection that is insensitive

to controls contained in frame link headers.

9) Point-to-point operation: The ESCON architecture does support direct point-to-point

connections that do not traverse an ESCON director. Such configurations are typically

for small installations, and are extremely rare in large enterprise level installations. Each

ESCON I/O channel of a system can cable directly to one ESCON control unit without

the use of an ESCON director, using only point-to-point connections.

10) Cascaded director operation: Even though the ESCON director design supports only one

level of dynamic director path selection, two ESCON directors may be installed in a

series configuration if the path through one of the directors is a dedicated connection.

This is known as cascading directors. Greater distances (up to 9km) may be covered by a

cascaded configuration due to the link repeater function of an ESCON director port.

The ESCON switched point-to-point topology objectives were directed towards improving

system and subsystem interconnectivity in a data center. The topology supported better

connectivity, simplified installation and isolation. Switched point-to-point offered the potential

to reduce the number of cables required, and allowed a greater number of devices per channel.

The ESCON director provides the primary interconnection mechanism of the IBM Enterprise

Systems Connection (ESCON) architecture for System/390 mainframes. The ESCON Director

was the implementation of the new switched point-to-point topology interconnecting serial fiber

optic System/390 I/O channels and control units. It provides dynamic, non-blocking, any-to-any

connectivity for up to 248 (9032 Mod 5 director) channels and control units that are attached to

its ports. With the introduction of the ESCON Extended Distance Feature (XDF), distances of

up to 20 km are supported, thus permitting channel to control unit or channel to channel (CTC)


distances of up to 40 km with a single ESCON Director, or up to 60 km with two chained

ESCON Directors. ESCON XDF was introduced in Fall 1991, and was a laser link product as

opposed to the earlier LED technology (Artis, Houtekamer, 1993).
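The 40 km and 60 km figures follow from summing 20 km XDF link segments, since each ESCON Director port traversed repeats the signal onto the next link:

```python
# Channel-to-control-unit distance is the sum of the XDF link
# segments along the path; each ESCON Director traversed adds
# one more 20 km segment.
XDF_LINK_KM = 20

def max_distance_km(directors_in_path):
    segments = directors_in_path + 1  # links = directors + 1
    return segments * XDF_LINK_KM

print(max_distance_km(1))  # 40 km with a single ESCON Director
print(max_distance_km(2))  # 60 km with two chained ESCON Directors
```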

The switched point-to-point topology is central to the design of both the ESCON architecture

and the ESCON director. This topology overcame the disadvantages of multi-drop and direct

attached point-to-point. With the ability it provides to switch connections via a switching unit,

only one link is needed from each control unit or channel to the switching unit to realize any

channel to channel or channel to control unit connection. When a channel or control unit is

added, it needs a path only to the switching unit rather than to all the nodes with which it

communicates. This makes it possible to potentially have a large number of point-to-point

connections among a group of nodes. This can significantly reduce the number of channels and

control unit interfaces required for an implementation, and also reduces the total amount of

cabling required.

The ESCON director was designed to operate as a non-blocking circuit switch. Therefore, in

a system with N ports, N/2 simultaneous full duplex connections can be established between any

combination of port pairs. Once a connection is established, a direct path between the two

attached end points remains in place until a command to disconnect is received. The switched

point-to-point topology not only offered superior performance characteristics, it also simplified

the physical planning and reconfiguration normally associated with data processing installations.

The ports of an ESCON director are very versatile in allowing the attachment of either control

units or channels. Such flexibility allowed the ESCON director to adapt to changing

characteristics inherent in any data processing installation. Link attachment changes were far

less disruptive in a switched point-to-point topology. The addition/deletion of a link did not


affect the connectivity of the other links in the network. The ESCON architecture permits an

installation to be modified while it remains operational. ESCON channels and control units can

be added or disconnected from the ESCON Director without affecting the operation of the rest of

the system. This ability was very important with 24 hours per day/7 days per week (24x7)

operations becoming the rule. System disruptions could no longer be tolerated. Finally, through

the use of two cascaded ESCON Directors, I/O devices may be located up to 60 km away from

the host processor complex. Such distances provided increased flexibility in the physical

planning of data processing installations as well as new alternatives for disaster recovery and

backup planning (Georgiou, Larsen, 1992).

An ESCON link connects a channel (host-mainframe) and control unit (storage), but up to

two ESCON Directors may be configured between the channel and control unit. These ESCON

directors may be used to provide static or dynamic routing capabilities. They could also be used

as simple repeaters to increase the maximum total distance between channel and control unit up

to 9 km for LED or 60 km for single mode lasers.

An ESCON director is an implementation of a dynamic switch. An ESCON director accepts

serial links, and then creates either dynamic or static connections between these links. This

provides a flexible way to connect multiple channels from one or more systems to one or more

control units. The actual links through an ESCON director are switched point-to-point

connections. These connections through an ESCON director may be either static or dynamic.

Static connections, also called dedicated connections, are fixed during normal usage. Dynamic

connections can be created and terminated based on information in the data frames being

transmitted over links.


In current usage, dynamic connections are managed on a per-I/O-operation basis. Before a

transfer starts, the channel subsystem establishes the connection through the director. After the

transfer operation is completed, either the channel or the storage director may terminate the

connection. Only one dynamic connection is allowed in a channel path. See Figure 2 below.

Figure 2. Dynamic and static connections

In the parallel channel architecture, each channel path is used to connect a single channel on a

host (mainframe) to a port on the control unit (storage). A specific control unit can have multiple

channel ports through channel switching (in the parallel implementation a control unit may be

comprised of two or more storage directors). With the ESCON serial channel architecture,

ESCON directors can be used in such a way as to allow multiple hosts to access a storage

director through a serial channel.


Figure 3. Multiple hosts connecting to a control unit

While the control unit shown has only one physical serial channel path, it must be able to

communicate with all hosts shown. Each system that wishes to communicate with the control

unit must establish a logical path. These logical paths are allocated on a first-come-first-served

basis to channels during initial machine load (IML) or after an initial program load (IPL), another

word for “reboot” (Artis, Houtekamer 1993). The ESCON architecture imposes a limit on the


number of logical paths that may be active for a control unit. The architecture defines 16 logical

paths per physical path, and minor extensions allow the use of up to 256 logical paths.
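The logical-path budget can be checked with simple arithmetic. The 16 and 256 limits come from the text; the number of requesting hosts below is a hypothetical example:

```python
# First-come-first-served logical path allocation against the
# per-physical-path limits described in the text.
BASE_PATHS = 16        # base ESCON architecture limit
EXTENDED_PATHS = 256   # with the minor extensions

requesting_hosts = 20  # assumed: LPARs/hosts wanting a path at IML/IPL

granted_base = min(requesting_hosts, BASE_PATHS)
print(granted_base)                        # 16: four hosts are refused
print(requesting_hosts <= EXTENDED_PATHS)  # True under the extension
```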

ESCON directors allow for the definition of a large number of potential logical paths. This

typically would be used for either disaster recovery or other workload reconfiguration reasons.

During normal operations, these logical paths may (should) be blocked using the configuration

management facilities provided by the ESCON Manager (ESCM) software. The ESCM

software runs on the mainframe and can control and synchronize all ESCD connections, and

issue the appropriate commands to vary paths on and off line. However, such functionality

requires that a virtual telecommunication access method (VTAM) link exists between the hosts.

The simplest set up for ESCON links is direct (point-to-point) connectivity between an ESCON

channel on a host mainframe and an ESCON control unit using multimode fibers. Such direct

attached configurations are appropriate for small systems/installations. Similar to parallel

channels in a shared environment, direct attached connectivity can result in many links. ESCON

directors can be used to help alleviate the problem and provide additional logical connections

using fewer physical links in a switched point-to-point environment. Each port on an ESCON

director accepts one link to a channel, control unit, or another ESCON director. Connecting to

another director can be used to provide both dynamic and static (dedicated) connections.

ESCON directors may also be used as repeaters in the event that links longer than 3 km are

required (multi-mode fiber) or longer than 20 km (single mode fiber). This is further subject to

the absolute physical distance limitations for the particular type of device being addressed. “A

repeater is an amplifier that corrects for signal loss over long distances” (Artis, Houtekamer,

1993, p40).


The implementation of the ESCON director and ESCON architectures restrict links between a

processor channel and a control unit to a maximum of two ESCON directors (ESCDs). At most,

one of the ESCDs may provide a dynamic connection.

Figure 4. Multiple hosts connecting to multiple control units

Control unit A is connected via a direct point-to-point ESCON link to host A, and with a

switched point-to-point link to hosts A, B, and C using ESCD 1. Control unit B is connected with

switched links to all hosts using ESCD 1 and to host B using ESCD 2. Multiple links to host B

provide link protection in the event that either ESCD 1 or ESCD 2 fails. Control unit C is connected to


host A through two ESCDs, with one dedicated (static) and one dynamic connection. Control

unit C is connected to host C with a dynamic connection. “Depending on the fiber optic

technology employed, the processor-to-director and director-to-control unit links may be as long

as 20 km” (Artis, Houtekamer, 1993 p41).

The ESCON architecture restriction of any link to a maximum of one dynamic switch could

lead to a great increase in the number of ESCON links that are required for a particular

configuration. Practical configurations would employ multiple ESCDs to guarantee availability

if one ESCD fails. This simplified example is intended to demonstrate the potential savings in

fiber links if the ESCON architecture had been expanded to include multiple dynamic

connections. It should be noted that such functionality did not occur for nearly twelve years,

when support of this was announced as cascaded FICON.

2.2.5 ESCON summary

ESCON offered a substantially new approach to the attachment of control units to large systems

(mainframe) channels. The first truly major architectural change in I/O attachment since the

1964 announcement of System/360, ESCON widened the scope of local channel attachment to

campus distances, provided dramatically higher data rates, and simplified the management of the

data center complex. It provided the basis for supporting multiple systems with much greater

processing power. Since the introduction of ESCON in 1990, IBM introduced a number of

enhancements. The Extended Distance Feature (XDF) increased the maximum distance of an

individual fiber optic link to 20 km using single mode fiber optics and laser technologies.

IBM also introduced several new ESCON attachable storage control units in the last sixteen

years for both DASD (disk) and tape. IBM also introduced the ESCON Multiple Image Facility


(EMIF) to allow the sharing of physical channels by any number of logical partitions (LPARs)

on a host mainframe.

ESCON supports the creation of campus-wide, high-speed fiber optic backbone

interconnection networks. ESCON did not simply accommodate attaching I/O control units to

mainframes. ESCON allowed sysplexes (complexes of multiple mainframe systems) in different

buildings to be interconnected via switching technology to form a cohesive data processing

utility (Calta, deVeer, 1992).

2.3 FICON

The term FICON represents the architecture as defined by the International Committee of

Information Technology Standards (INCITS) and published as ANSI standards. FICON also

represents the names of the System z9, zSeries, and S/390 server features FICON Express 4,

FICON Express 2, FICON Express, and FICON (Neville, White, 2006).

2.3.1 FICON’s introduction and beginnings

At its introduction in September, 1990, the ESCON channel architecture provided end users

with many benefits. Some of these benefits included increased distance capability, improved

performance, and a switching topology. Another benefit was that moving from parallel channels

to ESCON channels did not require application software changes. However, the ESCON

channel communication protocols were constrained because of the limits on the technology and

the degree of allowable risk. Since the introduction of the first System/360 mainframe in 1964,

the hardware and software components changed significantly up through the System/390. In

some areas, this technological growth has been far more explosive than in others. This has made

it even more important to realize that all system components must provide a balanced system as


new technologies become available. By the late 1990s, some IBM S/390 customers started to

experience limitations in their ESCON environments (Beal, Trowell, 1999):

1) ESCON channels’ 1024 device address limitation per channel: The ESCON channel

implementation limited the device support to 1024 devices (subchannel/device numbers)

per channel. A FICON channel could support 16K devices.

2) The S/390 256 ESCON channel connectivity constraint: this occurs when a customer

wants/needs to install additional ESCON channels to support the attachment of additional

I/O devices; however, the S/390 processor in question is already at/near the S/390 256

channel architecture limit. Implementation of FICON channels would provide additional

I/O connectivity while still keeping within the 256 channel S/390 architectural limit.

3) High fiber cabling costs between local and remote sites. Running multiple ESCON

channel fibers between a local and its remote site can become cost prohibitive. Using

FICON channels between such sites can provide up to an 8:1 reduction in the number of

fiber links required for cross-site connectivity. In most cases, this provides an end user a

significant cost reduction for their overall fiber infrastructure.

4) Dark fiber distance limitations: Many customers will prefer to use dark fiber with no

retransmission of the fiber signal. If the customer is using ESCON LED channels, the

dark fiber distance can only be 3 km. If the customer is using FICON single mode 9

micron fiber, the dark fiber distance can be extended to 10 km (or 20 km with IBM

approval). Note: the 10 km distance applied at FICON's inception; it has since been

extended to 35 km.

5) Data rate performance droop: As the distance between an ESCON channel and ESCON

control unit (CU) increases, the performance slightly decreases up to a distance of 9 km

(by decrease in performance, what is implied is a reduced data transfer rate). When a

distance beyond 9 km is reached, the performance decreases far more rapidly. This point

of sudden performance decrease is known as the distance data rate droop point. The

distance data rate droop point for FICON channels does not occur until 100 km.
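The cross-site fiber saving in item 3 above can be quantified with the 8:1 aggregation ratio; the cross-site ESCON channel count below is a hypothetical example:

```python
import math

# FICON aggregates up to 8 ESCON channels onto one fiber link
# (the 8:1 ratio in the text), so cross-site fiber requirements
# shrink accordingly.
ESCON_PER_FICON = 8
escon_channels_cross_site = 32  # assumed channel count between sites

ficon_links = math.ceil(escon_channels_cross_site / ESCON_PER_FICON)
print(ficon_links)  # 4 fiber links instead of 32
```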

By the late 1990s IBM came to the realization that there were both ESCON architecture and

processor implementation limitations inhibiting growth in S/390 I/O subsystems. The ESCON

architectural limitations included:

1) 256 CHPIDs per CPC

2) 16 control unit images per control unit link

3) 4096 device numbers per control unit link

The limitations of the S/390 processor implementation include:

1) 1024 device numbers per CHPID

2) 80k subchannels per CPC

3) Channel buffer size

4) Channel local storage space

The primary reasons for developing a new architecture included:

1) Constraint relief for S/390 I/O

2) Improve the S/390 cost competitiveness with open systems.

The requirements of a new architecture included providing improved link bandwidth,

improved connectivity, larger data sizes, and all the while protecting the customers’ investment

in existing technology.

Figure 5. S/390 Evolution


Figure 5, from a late 1990s IBM presentation, which is not drawn to scale, shows growth in

the areas of CPU, storage, I/O and DASD from the System/360 to the time of FICON’s inception

in the late 1990s. The current architectural limit of 256 channels per processor was originally

defined with the introduction of the System 370/XA architecture. At the time this was

introduced, the maximum processor capacity was approximately five million instructions per

second (MIPS). By the late 1990s, the System/390 capacity was slightly higher than 1600 MIPS.

High availability systems, such as Parallel Sysplex, helped compound the constraint. The

channel configuration options needed to keep better pace with the capacity enhancements to the

processor. Customer I/O requirements were also evolving, and by 1998 IBM had defined the

following list as the top 10 mainframe customer I/O requirements (Beal, Trowell, 1999):

1) A choice of hardware technology, and a toleration of mixed hardware technology.

2) Compatibility with the existing S/390 software.

3) Good response times.

4) Increased distance capability.

5) Unlimited capacity for data storage.


6) Greater channel bandwidth.

7) Greater number of device addresses.

8) Simple migration.

9) Cost-effectiveness.

10) Easy operational control/manageability.

In order to achieve these, IBM had several implementation options to consider, including

more channels, faster channels, or brand new channels. Figure 6 below serves as a summary of

the discussion which follows.

Figure 6. IBM options for replacing ESCON

Simply changing the architecture to allow for the use of more channels would provide access

to a greater number of I/O subsystems (and therefore more data). However, this would do

nothing in terms of improving bandwidth, performance, and attachable distance. More channels

would also lead to a costlier, more complex environment to manage.

Making the ESCON channels faster would have made minimal improvements in effective

performance and throughput. Due to the nature of the ESCON protocol, a five fold increase in


bit rate transfer would only have resulted in an approximately 10% improvement in response

times.
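The small payoff from faster links can be illustrated with an Amdahl's-law-style model. The overhead/transfer split below is an assumed illustration of protocol-dominated I/O, not an IBM measurement:

```python
# If protocol handshaking dominates each I/O's elapsed time, raising
# the bit rate only shrinks the small transfer component. Both time
# components below are assumed for illustration.
overhead_ms = 0.90  # assumed per-I/O protocol/connect time
transfer_ms = 0.10  # assumed transfer time at the base bit rate

t_base = overhead_ms + transfer_ms
t_fast = overhead_ms + transfer_ms / 5  # five-fold faster link

improvement = 1 - t_fast / t_base
print(f"{improvement:.0%}")  # 8% with these numbers, near the ~10% cited
```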

By adopting a new channel architecture, the inhibitors identified when looking at more

and/or faster channels could be removed. A new channel architecture

allowed the opportunity to eliminate existing constraints such as the maximum of 256 concurrent

channels per processor. A new channel architecture would also allow IBM to greatly improve

link performance and enhance distance connectivity. It also would allow for closely balancing

CPU and channel growth so that I/O connectivity would not limit LPAR processing capacity.

The 256 channel limit is still in effect with the new (FICON) architecture. However, the new

FICON channels perform at an effective rate of anywhere between five and eight ESCON

channels' connectivity equivalence. When compared with ESCON, FICON reduces protocol

overhead. This reduction in protocol overhead, coupled with some other technology advances

enabled the initial FICON channel links to perform at up to 100 MB/sec full duplex. This is the

equivalent of up to 5 concurrent ESCON bulk-data (large block size traffic) I/O operations such

as tape streaming, or up to eight concurrent ESCON transaction type I/O operations such as

DASD traffic.

IBM introduced FICON to be the foundation for the future high-performance I/O channels.

The FICON architecture and its implementation allows for:

1) Initial implementation to ESCON devices via a FICON bridge card in the

9032-5 ESCON director (this is FICON bridge mode, or FCV mode).

2) Direct attached point-to-point to I/O systems with S/390 FICON interfaces.

3) Switched point-to-point via a FICON director to I/O subsystems with S/390 FICON

interfaces.


Figure 7 illustrates the potential of “expanding” beyond the 256 channel limit when using

FICON. Without FICON, an S/390 processor can accommodate up to 256 ESCON channels.

When FICON channels first arrived in the marketplace on the S/390 G5 and G6 processors, a

maximum of 24 FICON channels could be implemented on a processor. Such an

S/390 processor could then accommodate the equivalent connectivity of 360 ESCON channels:

168 ESCON channels and 24 FICON channels providing the equivalence of 192 ESCON

channels (168+192=360) (Beal, Trowell, 1999).
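The arithmetic behind this equivalence can be checked with a short sketch; the 8:1 ratio is the transaction-workload equivalence figure given above (bulk-data workloads would use 5:1):

```python
# ESCON-equivalent connectivity of a mixed channel configuration.
# The ratio of 8 assumes transaction (DASD-type) workloads, per the
# text; bulk-data (tape-type) workloads would use a ratio of 5.
def escon_equivalent(escon_channels: int, ficon_channels: int, ratio: int = 8) -> int:
    return escon_channels + ficon_channels * ratio

# S/390 G5/G6 example from Figure 7: 168 ESCON + 24 FICON channels.
print(escon_equivalent(168, 24))  # 360
```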

Figure 7. ESCON to FICON migration channel equivalence

Figure 8 (Guendert, Lytle 2006) serves as a good example for illustrating the result of using a

FICON channel for consolidating/aggregating a number of ESCON channel connections. The

diagram is an example of a configuration using a FICON channel in FCV mode via a FICON

bridge card in the 9032-5 ESCON director. The primary benefit of the FICON bridge card is that

FICON channels in FCV mode can support channel paths to control units with standard ESCON


channel adapters. In other words, investment protection is provided. As shown on the right side

of the diagram, the aggregation of eight ESCON channels onto one FICON channel also

provides the ability to support eight concurrent I/O operations.

Figure 8. ESCON to FICON bridge aggregation

Figure 9. FICON bridge sample topology


Figure 10. FICON bridge card frame process

Figures 9 and 10 (Guendert, Lytle, 2006) show a sample topology using a FICON bridge

card implementation from a different perspective. The S/390 FICON channel connects the

processor to the bridge card adapter port in the ESCON director. The channel operates using the

new FICON channel protocols. The bridge card receives commands and data over the FICON


channel, and its internal link controllers transfer this information to the attached

ESCON control units. Data and status conditions received by the link controller from the

attached ESCON control units are transferred to the channel using the FICON protocol.

Figure 11 (Guendert, Lytle 2006) illustrates another FICON topology known as FICON

native, or FC mode. The FICON native topology supports the connection of a FICON channel to

a FICON capable control unit that has a fibre channel adapter supporting FC-SB2 protocols. The

left half of the diagram shows a typical ESCON director topology. The right side of the diagram

shows an equivalent FICON native configuration and lists the major benefits realized.

Figure 11. ESCON to FICON native configuration example

Figure 12 (Guendert, Lytle 2006) is three examples of direct attached FICON native and

FICON switched connectivity, similar to the previous example of the FICON bridge topology.


Figure 12. Three FICON connectivity examples


Figure 13. ESCON channel command and data transfer


[Figure 13 diagram: for each channel command word (CCW 1, CCW 2, CCW 3), the ESCON channel sends the command (CMD) to the control unit (CU), the device executes it and signals END, and CE/DE status returns to the channel before the next CCW is sent.]

Figure 13 (Guendert, 2004) illustrates the channel command and data transfer in an ESCON

environment. Each channel command sent from the ESCON channel to the control unit has to

complete before the channel will send the next channel command. This requires an exchange of

interlocking protocols for each command and there are a number of these protocol exchanges

during the transfer of the command and data transfer for one channel command word (CCW).

This “hand shaking” protocol increases the elapsed time for each CCW and along with the way

data frames are managed, it has a significant effect on data transfer droop at greater channel to

control unit distances.
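The distance sensitivity of this hand-shaking protocol can be illustrated with a rough model; the per-CCW round-trip count and transfer time used here are illustrative assumptions, not measured ESCON values:

```python
# Rough model of ESCON CCW interlock cost versus link distance.
# Assumption (illustrative): light propagates through fiber at roughly
# 5 microseconds per km, and each CCW needs several interlocked
# round trips between channel and control unit.
PROP_US_PER_KM = 5.0

def ccw_elapsed_us(distance_km: float, round_trips_per_ccw: int = 4,
                   transfer_us: float = 50.0) -> float:
    """Elapsed time for one CCW: data transfer plus handshake round trips."""
    rtt = 2 * distance_km * PROP_US_PER_KM
    return transfer_us + round_trips_per_ccw * rtt

# Elapsed time per CCW grows linearly with distance, which is why
# effective throughput "droops" at greater channel-to-CU distances.
for km in (1, 9, 20):
    print(km, ccw_elapsed_us(km))
```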

2.3.2 FICON and fibre channel architecture basics

The S/390 FICON architecture is more formally known as IBM’s FC-SB2 architecture. It

introduced the FICON bridge, FICON native, and FICON director configurations to the S/390

world. As originally conceived, the S/390 FICON architecture was intended to be an


enhancement of, rather than a replacement for, the existing ESCON architecture. To this day,

IBM still sells more ESCON channels than FICON channels on each new mainframe sold (the

current model is the System z9). However, over the past three years IBM has steadily reduced the

number of available ESCON channels and increased the number of available FICON channels on

each new mainframe model. The FICON channel architecture is compatible with:

1) Fibre channel framing and signaling standard (FC-FS)

2) Fibre channel switched fabric and switch control requirements (FC-SW)

3) Fibre channel single byte 2 (FC-SB2) and fibre channel single byte 3 (FC-SB3) standards.

FICON native configurations provide for the direct attachment of a FICON channel to a

control unit with an S/390 FICON interface. This type of connectivity is known as channel type

FC. The FICON director configuration is a variant of FICON native that provides for switched

connections between FICON channels and control units with FICON interfaces. The

transmission medium for the FICON interface is a fiber optic cable. Physically, it is a pair of

optical fibers that provide the two dedicated unidirectional, serial-bit transmission lines.

Information in a single optical fiber flows, bit by bit, in one direction. At any link interface, one

optical fiber is used to receive data while the other is used to transmit data. Full duplex

capabilities are exploited for data transfer. The fibre channel standard (FCS) protocol specifies

that for normal I/O operations frames flow serially in both directions, allowing for several

concurrent read and write I/O operations on the same link. Currently, with FICON Express 4

channels on the System z9, 4 Gbps (400 MB/sec) is the theoretical maximum unidirectional

bandwidth capability of the fiber link. The actual data rate of the 4 Gbps link (whether it is

measured in IOPS or MBps) depends on the type of workload, fiber infrastructure, and storage

devices in the configuration (Neville, White, 2006).
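The relationship between the 4 Gbps line rate and the 400 MB/sec figure can be sketched as follows; the divide-by-ten shortcut reflects the standard fibre channel 8b/10b encoding overhead and ignores frame headers and inter-frame gaps, so it is an approximation:

```python
# Convert a fibre channel line rate to approximate usable payload
# bandwidth. 8b/10b encoding transmits 10 line bits for every 8 data
# bits, so usable bytes/sec is roughly line_rate_bits / 10 (this
# ignores framing overhead and inter-frame gaps).
def usable_mb_per_sec(line_rate_gbps: float) -> float:
    return line_rate_gbps * 1000.0 / 10.0

for gbps in (1, 2, 4):
    print(gbps, usable_mb_per_sec(gbps))  # 100.0, 200.0, 400.0 MB/sec
```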


The FICON bridge configuration (known as FCV mode for FICON ConVerted) is a modified

version of the FC-SB2 architecture. The “bridge” is actually a special card that plugs into the

chassis of the IBM 9032-5 ESCON director. It provides a bridge function by accepting one

FICON channel link (from a FICON channel configured as FCV mode) and provides multiple,

internal ESCON switch matrix connections to the ESCON director ports. The bridge card

converts the information received from the FICON channel to ESCON protocol, and also

converts some of the command-response ESCON frames received back from the ESCON control

unit back to FICON protocols.

The FICON architecture eliminated two key limitations of the ESCON architecture that were

inhibitors to growth in the S/390 I/O subsystems of the late 1990s. The limits were raised to a

maximum of 64K devices per FICON link, and a maximum of 256 control unit images per

FICON link. The S/390 FICON architecture was designed to be fully compatible with all

existing S/390 software and this has been carried through to the current predominant mainframe

operating system (z/OS). The FICON channels execute all standard S/390 channel commands

(CCWs) and programs. However, some system software changes were required in order to

exploit all of the features of the S/390 FICON architecture, as well as for managing performance

tuning and capacity planning.


Table 1. FICON vs. ESCON summary

Function                           FICON (Native and Bridge)                    ESCON

Switching                          Packet                                       Circuit
Command execution (channel/CU)     Asynchronous                                 Synchronous
Data transfer                      Full-duplex; simultaneous read and write     Half-duplex; read OR write
Connection                         Connectionless; packets individually routed  Connection-oriented; dedicated,
                                                                                pre-established path
Link data rate                     400 MB/sec (4 Gbps)                          17 MB/sec (200 Mbps)
Maximum I/O rate                   3600 I/Os per sec (G5/G6);                   1200 I/Os per sec
                                   13000 I/Os per sec (FICON Express4)
Distance (no repeaters)            10 Km, 20 Km                                 3 Km
Repeated distance w/o significant  100 Km                                       9 Km
  data rate degradation
Frame transfer buffer              128 KB                                       1 KB
CU images per CU                   256                                          16
UAs per channel                    16K devices                                  1K devices
UAs per control unit               4K (FICON Bridge); 16K (Native)              1K


Figure 14. Fibre channel architecture

The fibre channel architecture was developed by the National Committee for Information

Technology Standards (NCITS). Figure 14 (Guendert, 2004) illustrates the different

levels/layers in the fibre channel architecture (FC-0 to FC-4) described by the various fibre

channel protocols (FC-PI, FC-FS and the ULPs).

1) FC-0 level: (physical interface and media). The fibre channel physical interface,

specified in the FC-PI protocol, consists of the transmission media, receivers, transmitters,

and their associated interfaces. The physical interface specifies a variety of media,

drivers, and receivers capable of operating at the various supported speeds.


2) FC-1 level (transmission protocol): The FC-1 level is a link control protocol that

performs a conversion from each 8-bit data byte into a 10-bit transmission character

(8b/10b encoding) by assigning a unique bit pattern to each byte value. The transmitting

N_port performs the encoding function when sending the character stream over the fiber.

The receiving N_port performs decoding back to the original 8-bit data.

3) FC-2 level (signaling protocol): the fibre channel physical framing and signaling

interface, transmission protocol, and signaling protocol of high performance serial links

for support of higher level protocols such as FC-SB2 (FICON) and others. FC-1 and FC-

2 define all of the functions required to transfer information from one N_port to another

N_port.

4) FC-3 level (common services): reserved for future functionality.

5) FC-4 level (mapping): Describes the data payload. FC-SB2 is an FC-4 protocol and is

based on the single-byte command code set (SBCCS).
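The layering just described can be sketched with a toy model in which an FC-SB2 information unit (the FC-4 level payload) is carried inside an FC-2 level frame; the field names and values here are simplified illustrations, not the actual FC-FS frame layout:

```python
# Toy model of fibre channel layering: an FC-4 payload (a FICON/FC-SB2
# information unit) is encapsulated in an FC-2 frame for transmission
# between N_ports. Field names are illustrative simplifications.
from dataclasses import dataclass

@dataclass
class SB2InformationUnit:        # FC-4 level payload (FICON / FC-SB2)
    kind: str                    # e.g. "command", "data", "status"
    payload: bytes

@dataclass
class FC2Frame:                  # FC-2 level framing around the FC-4 payload
    source_port: str             # sending N_port
    dest_port: str               # receiving N_port
    payload: SB2InformationUnit

# A channel command travelling from the channel N_port to a CU N_port:
iu = SB2InformationUnit(kind="command", payload=b"\x02")  # hypothetical opcode
frame = FC2Frame(source_port="channel", dest_port="cu", payload=iu)
print(frame.dest_port, frame.payload.kind)
```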

2.3.3 FICON native mode

When initially introduced in the S/390 9672 G5/G6 processors and with the initial zSeries

processors, FICON channel support could operate in 1 of 3 modes (Trowell, White, 2002):

1) FICON channels in FICON bridge (FCV) mode allowed access to S/390 control

units with ESCON interfaces when the FICON channel in FCV mode is connected to a

FICON bridge adapter in a 9032-5 ESCON director.

2) A FICON channel in FICON native (FC) mode allows access to FICON native interface

control units. This access can be either directly from a FICON channel in FC mode (point-

to-point), or it can be from a FICON channel in FC mode connected in series through one

or two FICON directors.


3) A FICON channel can also have a different microcode load applied to it so that it will

operate in open systems fibre channel protocol (FCP) mode. A FICON channel in FCP

mode can operate in one of two ways:

a. Via a FICON channel in FCP Mode through a single fibre channel switch or

multiple switches to a FCP device.

b. Via a FICON channel in FCP mode through a single fibre channel switch or

multiple switches to a fibre channel to SCSI bridge.

A couple of items of note:

1) The S/390 9672 G5/G6 processors only support single-switch point-to-point FICON. The

64-bit architecture of the zSeries and System z processors supports both single and dual switched

topologies. A dual switch configuration is known as cascaded FICON directors.

2) The S/390 9672 G5/G6 processors do not support FICON channels in FCP mode. Only the

zSeries and System z processors (Guendert, 2004) support FCP mode.

The protocol that describes the operation of a FICON channel when operating in FICON

native (FC) mode is the FC-SB2 protocol. FICON native mode uses the FC-4 layer rules as

described in the FC-SB2 and Fibre Channel Framing and Signaling Protocol (FC-FS) documents.

There are a number of benefits introduced by a FICON channel operating in FICON native

(FC) mode (Trowell, White, 2002):

1) Increased distance: FICON increases the allowed (supported) distances from channel to

control unit, channel to switch, or switch to control unit links. The ESCON distance of 3

km was increased to up to 20 km at 1 Gbps and 12 km at 2 Gbps for FICON channels

using long wavelength lasers. Cascaded FICON directors provide an additional increased


distance benefit by even further extending the maximum unrepeated distance between

data centers.

2) Increased number of concurrent connections: ESCON supports one I/O connection at any

one time while FICON native mode channels support up to 32 concurrent I/O

connections (64 on the System z9).

3) Increased distance to data droop effect: The end to end distance before data droop effect

occurs increases from 9 km for ESCON to up to 100 km for FICON.

4) Increased channel device address support: The ESCON limitation of 1024 devices per

channel increases to 16,384 devices per channel for FICON.

5) Increased link bandwidth: ESCON’s link bandwidth of 20 MBps increases to 100 MBps,

200 MBps, and 400 MBps for FICON/FICON Express, FICON Express2 and FICON

Express4 channels respectively.

6) Common use of fibre channel communication and topology: FICON is an FC-4 layer

protocol and uses the fibre channel standard framing and signaling protocol (FC-FS) for

communication using the same topology as open systems storage area networks (SANs).

7) Greater exploitation of parallel access volumes (PAVs): FICON allows for greater

exploitation of new DASD features such as PAVs and HyperPAVs because of its greater

addressing capabilities, and because more I/O operations can be started for a group of

FICON channel paths than with ESCON.

8) Greater exploitation of priority I/O queuing: FICON channels use frame and information

unit (IU) multiplexing control. This provides greater exploitation of the priority I/O

queuing mechanisms within the FICON capable control units.

9) Greater exploitation of channel to channel (CTC) capabilities:


a. A pair of ESCON channels provides ESCON CTC. One of these is defined as

CTC, the other as CNC. At least two ESCON channels are required. FICON

CTC connectivity can be implemented using 1 or 2 FICON native (FC) channels.

b. An ESCON CTC channel supports only 1 active I/O operation at a time. A

FICON CTC channel supports up to 32 concurrent I/O operations.

c. An ESCON CTC channel can only operate in half-duplex mode (transferring data

in only one direction at a time). A FICON CTC channel operates in full-duplex

mode, sending and receiving data at the same time.

d. An ESCON channel defined as CTC can only support the CTC function. Only an

SCTC control unit may be defined on an ESCON channel defined as CTC. A

FICON native channel supporting an FCTC control unit may communicate with

an FCTC control unit on the host mainframe, and simultaneously support

operations to other I/O control unit types such as tape, virtual tape, or DASD.

10) Better utilization of links: The concurrent I/O operations and frame multiplexing and

link pacing in the FICON protocol provide for better utilization of the channel links.

Link pacing is possible through the use of buffer to buffer credits. Buffer to buffer

credits prevent the over-running of the port capabilities at either end of the link. While

ESCON directors use circuit switching, FICON directors introduced frame packet

switching and frame multiplexing. FICON also uses CCW and data pre-

fetching/pipelining to reduce the communication interlock hand-shaking present in

ESCON.


FICON native (FC) mode channels can operate in one of 3 topologies.

1) Point-to-point: direct attachment of a FICON channel on a mainframe to a FICON

capable control unit.

2) Switched point-to-point: Attachment of a FICON channel on a mainframe to a FICON

switch/director and through the switch/director to a FICON capable control unit.

3) Cascaded FICON directors: same as switched point-to-point but through 2 FICON

switches/directors and to a FICON capable control unit.

FICON switch is a generic term for a switch that supports the transferring of frames

containing FC-SB2 architecture payloads and also supports the FC-FS extended link services

(ELSs). A FICON director requires both items, plus it has an internal node port that supports the

mainframe’s in band management functionality known as control unit port (CUP).

Figure 15. FICON operating modes


A direct attached or point-to-point configuration forms when a channel path consists of a

single link connecting a FICON channel to one or more FICON control unit images (logical

control units). Such a configuration is allowed only when a single control unit is defined on the

channel path or multiple control unit images (logical control units) share the same node port

(N_port) in the control unit. In such direct attached configurations, a maximum of one link

attaches to the channel. Since the maximum number of control unit images that are supported by

the FICON architecture over the FC link to a control unit is 256, the maximum number of

devices that are addressable over a channel path configured point-to-point is 65,536 (256 x 256).
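The 65,536-device figure follows directly from the two architectural limits just cited:

```python
# Maximum addressable devices over one point-to-point FICON channel
# path: up to 256 control unit images per link, each supporting up to
# 256 unit addresses (devices).
CU_IMAGES_PER_LINK = 256
DEVICES_PER_CU_IMAGE = 256

max_devices = CU_IMAGES_PER_LINK * DEVICES_PER_CU_IMAGE
print(max_devices)  # 65536
```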

The FICON channel itself determines whether the link that connects to it is a direct

attached/point-to-point topology, or if it is a switched topology. The FICON channel does this

by logging into the fabric via the fabric login process (FLOGI ELS), and checking the accept

response to the fabric login (ACC ELS). The accept response will indicate what type of

topology exists. An example is shown in Figure 15 (Guendert, 2004).
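This topology-discovery decision can be sketched as follows; the dictionary field used here is a simplified stand-in for the actual FC-FS ACC payload contents, not the real frame format:

```python
# Simplified model of FICON link initialization: the channel performs a
# fabric login (FLOGI) and inspects the accept (ACC) response to learn
# whether it is attached to a switched fabric or directly to an N_port.
def discover_topology(acc_response: dict) -> str:
    # In FC-FS terms the ACC indicates whether a fabric (F_port)
    # answered the login; a boolean field stands in for that here.
    if acc_response.get("fabric_present"):
        return "switched point-to-point"
    return "direct attached point-to-point"

print(discover_topology({"fabric_present": True}))
print(discover_topology({"fabric_present": False}))
```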

A switched point-to-point configuration forms when a FICON channel in FC mode connects

one or more processor images to a fibre channel link connected to a FICON switch/director. The

FICON switch/director then dynamically connects to one or more other ports on itself (internally

within the switch), and on to FICON CU ports. The links to the CU ports interconnect with one

or more control units and/or images (logical control units). Channels and control units may

attach to the FICON switch/director in any combination, the exact combination depending on

configuration requirements and on available resources in the switch/director. The sharing of a

control unit through a fibre channel switch means that communication from a number of

channels to a control unit may occur either over one switch to CU link (for cases where a CU


only has one link to the switch/director) or over multiple link interfaces for cases where a CU has

more than 1 link to the switch/director.

Although only one fibre channel link attaches to the FICON channel in a FICON switched

point-to-point configuration, from the switch the FICON channel can address/communicate with

many FICON CUs on different switch ports. At the control unit, the same addressing

capabilities exist as for direct attached configurations. Switched point-to-point increases the

communication and addressing capabilities for the channel giving it the capability to access

multiple control units.

The communication path between a channel and control unit in a switched point-to-point

configuration is composed of two different parts: the physical channel path, and the logical path.

The physical paths are the links, or interconnection of two links connected by a switch that

provides the physical transmission path between a channel and control unit. A FICON logical

path is a specific relationship established between a channel image and a control unit image for

communication during execution of an I/O operation and presentation of status.

A cascaded FICON director configuration is only supported by the zSeries and System z

processors. A cascaded configuration is different from a switched point-to-point configuration in

that from the first switch, the path connects to a second FICON switch/director (typically at a

remote site as part of a disaster recovery/business continuity (DR/BC) implementation). From the

second switch/director there are additional links which interconnect with one or more control

units and/or images (LCUs).

Figure 16. Cascaded FICON and terminology


It is a good point here to introduce some basic FICON terminology that is used throughout the

remainder of this chapter. Please refer to Figure 16 (Guendert, 2004).

1) Buffer credits: Buffer to buffer credits are a flow control mechanism used by the fibre

channel protocol FC-2 level to pace the transfer of frames between optically adjacent

ports. They will be discussed in more detail in the comprehensive cascaded FICON

section later in this chapter.

2) IU pacing: IU Pacing is for end-to-end flow control and is a FICON (FC-SB2) status

parameter sent by the control unit to the channel to indicate the maximum number of IUs

a channel can send before a command response is expected.

3) Fibre optic sub assembly (FOSA): A FOSA consists of a transmitter and a receiver. The

transmitter converts electrical signals to optical signals for propagation over a fibre

channel link. The receiver converts optical signals back to electrical signals. There are

two types of FOSA: long wavelength and short wavelength.



4) Optical link: The optical link is the physical link between two FOSAs of the same

wavelength and type. It is important to note that the wavelength of each FOSA on either

end of an optical link must match.

5) Fibre channel link (FC link): The fibre channel link is the communication link between a

FICON channel card in the mainframe I/O cage and a fibre channel port on a control unit

or switch/director. It is also true for the link between control unit and switch, or between

switches. The FC links also includes any optical fiber extenders that may be in the FC

link.

6) Interswitch link (ISL): In a switched fabric, the link providing connectivity between two

cascaded FICON switches/directors is an ISL. Multiple ISLs can be used between

cascaded FICON switches/directors. The switch/director ports that the ISLS are

connected to are referred to as E_ports.

7) Logical paths: The FICON (FC-SB2) Establish Logical Path (ELP) function is performed

from a channel image (processor LPAR) to a control unit image (LCU) to request the

establishment of a logical path. A channel attempts to establish a logical path to the

control unit images that are described in its I/O configuration

definitions in HCD (Trowell, White, 2002).

8) Fabric login: All node ports (N_ports) which are FICON channel and FICON CU ports

connected by FC link to a FC switch must execute a fabric login (FLOGI) to the fabric

(F_port) as described in the FC-FS architecture, before the N_ports can send any FC-4

protocol frames for I/O traffic. The FLOGI process determines the presence/absence of a

switched fabric. If a fabric is present, it assigns/confirms the N_Port identifier of the

N_Port which initiated the FLOGI.


9) Port Login (PLOGI): If the FLOGI process determines a fabric is present, an N_Port

proceeds with destination N_Port Logins (PLOGI).

10) Entry switch: the FICON director that is directly connected to the processor’s FICON

channel and to the CU (destination) and/or other FICON director.

11) Cascaded switch: the FICON director that connects to the CU (destination) and to the

entry switch.

12) Switched fabric: one or more switches are interconnected to create a fabric. A switched

fabric takes advantage of aggregated bandwidth via switched connections between node

ports (N_ports).

13) Open exchange: part of fibre channel terminology brought over to FICON. An open

exchange represents an I/O operation in progress over the channel. Many I/O operations

can be in progress over FICON channels at any one time. For example, a disk I/O

operation might temporarily disconnect from the channel while performing a seek

operation or while waiting for a disk rotation. During this disconnect time other I/O

operations can be managed. In addition, FICON channels can multiplex data transfer for

several devices at the same time. This also allows workloads with low to moderate

control unit cache hit ratios to achieve higher levels of activity rates per channel. S/390

and zSeries mainframes (z990/890 and z900/800) can sustain up to 32 such open

exchanges per FICON channel. System z9 FICON Express 2 and FICON Express 4

channels can sustain up to 64 concurrent open exchanges.
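As a rough illustration of the buffer-to-buffer credit mechanism defined in item 1 above, the sketch below paces a sender so that it never has more unacknowledged frames in flight than it holds credits; the credit and frame counts are illustrative, not vendor values:

```python
# Toy buffer-to-buffer credit pacing: a sender may have at most
# `credits` unacknowledged frames in flight; each R_RDY primitive
# returned by the receiver restores one credit.
def frames_sendable(credits: int, frames: int) -> list:
    """Return a log of send/wait events for transmitting `frames` frames."""
    in_flight, log = 0, []
    for n in range(frames):
        if in_flight == credits:
            log.append("wait R_RDY")   # sender stalls until a credit returns
            in_flight -= 1             # one R_RDY received
        log.append(f"send {n}")
        in_flight += 1
    return log

log = frames_sendable(credits=2, frames=4)
print(log)  # ['send 0', 'send 1', 'wait R_RDY', 'send 2', 'wait R_RDY', 'send 3']
```

With too few credits for the link's round-trip time, the sender spends its time waiting rather than transmitting, which is the flow-control view of data droop on long links.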


2.3.3.1 Native FICON data pipelining


While ESCON channel program operations require a channel end/device end (CE/DE) to be sent

to the channel after execution of each channel command word (CCW), the FICON native (FC)

mode channel supports CCW and data pipelining.

Figure 17. ESCON command and data transfer “Mother-May-I?”

CCW operation (command and data transfer) on an ESCON channel is shown in Figure 17

(Trowell, White, 2002). The channel transfers each CCW and its data to the control unit and

waits for channel end/device end (CE/DE) to be presented by the control unit after execution of

the CCW by the device. This is known as CCW interlock. Only after receiving the CE/DE for

the previous CCW does the channel transfer the next CCW to the control unit for execution.


Figure 18. FICON command and data transfer-“Assumed Completion”

[Figure 18 diagram: the FICON channel sends CCW 1, CCW 2, and CCW 3 to the control unit without waiting between commands; the device signals END to the CU after each command, and a single CE/DE returns to the channel after the last CCW in the chain.]

The FICON architecture provides the protocol for CCW and data prefetching/pipelining, which

eliminates the CCW interlock of ESCON. With a FICON channel, all CCWs (up to the FICON

channel IU pacing limit) are transferred to the control unit without waiting for CE/DE after each

I/O operation. The device presents a device end to the control unit after each CCW execution.

Once the last CCW in the chain has been executed by the CU/device, the control unit presents

CE/DE to the channel (Guendert, 2004). This elimination of CCW interlock enhances error

recovery, allows for CCW synchronization, allows for prefetching of CCWs and their associated

data, and reduces the number of "mother-may-I?" protocol handshakes.


This process is what is known as CCW command data prefetching/pipelining. (McGavin,

Mungal, 2004)
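The difference between the two protocols can be reduced to a count of channel-to-CU round trips per CCW chain; this is a simplified model (real channels overlap other work during these waits), and the default IU pacing limit of 16 is an illustrative value:

```python
# Round trips needed to execute a chain of N CCWs, simplified:
# ESCON waits for CE/DE after every CCW (CCW interlock), while FICON
# sends the whole chain (up to the IU pacing limit) and receives one
# CE/DE after the last CCW of each pacing window.
def escon_round_trips(n_ccws: int) -> int:
    return n_ccws                      # one interlocked round trip per CCW

def ficon_round_trips(n_ccws: int, iu_pacing_limit: int = 16) -> int:
    return -(-n_ccws // iu_pacing_limit)   # ceil(n / limit) pacing windows

print(escon_round_trips(32), ficon_round_trips(32))  # 32 2
```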

2.3.3.2 FICON Native connectivity enhancements 2002-2007


ESCON channels do not support concurrent I/O connections. A FICON channel in FICON

native (FC) mode supports multiple concurrent I/O connections to FICON control units. The

number of concurrent I/O operation connections on a FICON channel for S/390, zSeries, or

System Z environment depends on the following (Trowell, White 2002):

1) Processor model

2) FICON topology and FICON director model

3) I/O configuration definition

4) Performance characteristics of the control unit

The FICON bridge (FCV) mode channel provided eight concurrent I/O operation connections

(compared to one for an ESCON channel). The FICON native mode channel can provide

16 or more concurrent I/O operation connections by exploiting the FICON channels' Information

Unit (IU) operation multiplexing and FC frame multiplexing capability. The capability for

concurrent I/O operations must not be confused with bandwidth capacity. A FICON channel’s

bandwidth utilization depends entirely on the number and type of connected control units. The

total available bandwidth on all installed FICON channels is also a function of the capacity of the

channel subsystem.

FICON Express, initially available on the z800 and z900 mainframe models, offered

performance improvements over the earlier S/390 9672 G5/G6 FICON technology. I/O

operations per second (IOPS) can be as high as 7200 with an effective bandwidth of 200 MB/sec.

Both of these numbers assume 4 KB block sizes and mostly sequential operations. The FICON


Express channel card uses an improved internal bus (64 bits wide with a 66 MHz clock)

compared to the previous FICON channel cards’ 32 bit wide bus with a 33 MHz clock. As

FICON technology has continued to evolve, similar improvements have been made with each

subsequent iteration of the FICON channel card (see Table 2 below). FICON Express channels

auto negotiate their data rate at 1 or 2 Gbps (FICON Express4 adds 4 Gbps). With z/OS you can

check the data rate at which the FICON Express connection operates using RMF (McGavin,

Mungal, 2002).

Each port on a FICON Express card can be initialized with one of three microcode loads:

1) A microcode load that provides operation in FICON bridge (FCV) mode and assumes the

FICON express channel card port is connected to a FICON bridge card on a 9032-5

ESCON director. The FICON bridge card converts the FICON connection into a

maximum of eight ESCON channels going to control units.

2) A microcode load that provides operation in native FICON (FC) mode and works with

native FICON control units and a single FICON director. FICON CTC control units and

cascaded FICON directors are also supported assuming the appropriate/current driver

levels are implemented on the mainframe.

3) A microcode load to support SCSI over fiber with Linux. This is known as an FCP

channel type. The microcode loading is done at power-on-reset (POR) and can be

reloaded by dynamic I/O reconfiguration changes.


Table 2. FICON channel card evolution

Server        Channel Type     Microprocessor                   Internal Bus to STI   Data Rate   STI
G5/G6         FICON            166 MHz Power PC 750             32-bit, 33 MHz        1 Gbit      1
z900          FICON            333 MHz Power PC 750, L2 cache   32-bit, 33 MHz        1 Gbit      1
z800/z900     FICON Express    333 MHz Power PC 750, L2 cache   64-bit, 66 MHz        2 Gbit      1
z890/z990/z9  FICON Express 2  448 MHz Power PC 750, L2 cache   64-bit, 112 MHz       2 Gbit      1 (2 on z9)
z9            FICON Express 4  448 MHz Power PC 750, L2 cache   64-bit, 112 MHz       4 Gbit      2

2.3.4 ESCON vs. FICON and migration planning considerations

FICON enables multiple I/O operations to proceed concurrently to multiple control

units. This is one of the fundamental differences from ESCON, with its strictly

sequential I/O operations. FICON channels also permit intermixing of large and small I/O

operations on the same link, which is in itself a significant enhancement compared with


ESCON channels. The data center I/O configuration now has increased connectivity flexibility

because of the increased I/O rate, increased bandwidth, and multiplexing of mixed workloads.

There are eight fundamental differences in how FICON channels operate when compared

with ESCON channels:

1) FICON channels may have multiple concurrent I/O operations for each control unit (CU)

port. This is even true when these operations are to the same logical control unit (LCU).

ESCON channels allow only one actively communicating I/O operation at a time.

2) FICON channels can receive multiplexed I/O frames from different control units in

switched point-to-point and cascaded FICON director topologies.

3) Intermixing control unit types with different characteristics such as tape and DASD on

the same FICON channel will not cause the same communications lockout impact as

occurs with ESCON.

4) ESCON allows for static and dynamic connections in a switch. FICON switch

connections are strictly dynamic.

5) FICON channels support greater unrepeated link distances than ESCON channels (10-20

km vs. 3 km). The FICON channel data droop effect occurs at much greater link distances

than it does for ESCON (roughly 100 km vs. 9 km).

6) The channel to channel (CTC) function is fully implemented in FICON channels.

7) Addressing limitations (1024 device addresses for ESCON, 16,384 for FICON).

8) Reduced number of required channels and required fibers due to the increased bandwidth

and I/O rate of FICON channels. Switched point-to-point topology allows even further

consolidation.


For maximum I/O concurrency to multiple control units, the number of paths from a

processor image (LPAR) defined and configured to a control unit and device should

be equal to the maximum number of concurrent I/O operations that the control unit can sustain.

The S/390 and z/Architecture designs provide for a maximum of eight paths from a processor image to a

control unit image. When designing an S/390, zSeries, or System z configuration it is necessary

to be aware of some architecture and processor rules/recommendations/best practices:

1) A logical CU and device cannot be accessed more than once from the same channel path

(CHPID). This applies to both ESCON and FICON channels.

2) A physical control unit that has multiple logical control units may be accessed more than

once from the same FICON channel path, but to different logical control units within the

physical control unit subsystem.

3) zSeries, System z, and S/390 processor resources and packaging.

4) FICON switch/director resources and packaging.

5) Fiber cabling requirements.

6) Connection recommendations: what is the number of paths needed to support the

required number of concurrent I/O operations from a processor to a control unit?

7) What are the characteristics of the control units?

a. Addressing: LCUs and devices

b. Logical paths?

c. Support for concurrent I/O?

The most important constraints of the ESCON architecture relieved by implementing FICON

are:


1) The maximum number of control units per channel goes from 120 on ESCON

implementations to 256 on FICON bridge (FCV) and FICON native (FC)

implementations.

2) The maximum number of control unit images (CUADD) per channel and per control unit

link goes from 16 on ESCON and FICON bridge (FCV) implementations to 256 on

FICON native (FC) implementations.

3) The maximum number of device addresses per channel goes from 1024 on ESCON

implementations to 16,384 on FICON native (FC) implementations.

4) The maximum number of device addresses per control unit link goes from 1024 on

ESCON implementations to 16,384 on FICON native implementations.
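The addressing relief described in these four points can be summarized in a short Python sketch for quick comparison. The figures are exactly those listed above; the data layout itself is purely illustrative.

```python
# Per-channel addressing limits quoted in the text. The dictionary layout is
# illustrative only; the figures come straight from the four points above.
limits = {
    "ESCON":        {"cus_per_channel": 120, "cu_images_per_link": 16,  "devices_per_channel": 1024},
    "FICON native": {"cus_per_channel": 256, "cu_images_per_link": 256, "devices_per_channel": 16384},
}

for name, lim in limits.items():
    print(f"{name:13s} {lim['cus_per_channel']:4d} CUs, "
          f"{lim['cu_images_per_link']:4d} CU images/link, "
          f"{lim['devices_per_channel']:6d} device addresses")

# The device-address and CU-image ceilings are each relieved by a factor of 16.
print(limits["FICON native"]["devices_per_channel"] // limits["ESCON"]["devices_per_channel"])  # -> 16
```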

End users planning on migrating from ESCON to FICON need to have a firm understanding

of the entire control unit addressing and connectivity characteristics in their environment as these

characteristics will determine how the control unit can be configured in a FICON environment.

Understanding these characteristics will allow for the full exploitation of the control unit’s

capabilities. The following factors are dependent on the control unit characteristics:

1) The number of installed FICON adapters at the control unit.

2) The number of logical paths supported by the control unit at each FICON adapter.

3) The number of logical control units (LCUs) supported and the LCU address for each

supported LCU.

4) The number of logical paths supported by each logical control unit within a physical

control unit.

5) The number of logical paths supported by the control unit when there is only one control

unit function within the physical control unit.


6) Number of concurrent I/O transfers per physical control unit.

7) Number of concurrent I/O transfers per logical control unit.

8) The number of devices and device unit addresses (UAs) supported per logical control unit.

a. Some devices may be supported by more than one UA, with each device unit

address being supported by a different device number. This is known as base and

alias device addressing and is used in DASD parallel access volumes (PAVs).

Channel to control unit connectivity to the same control unit differs among ESCON

channels, FICON bridge channels, and native FICON channels. The three primary

differences are:

1) One ESCON channel path can be used for only one I/O transfer at a time.

2) One FICON bridge channel can be used for up to eight concurrent I/O transfers from the

same physical control unit, where each transfer is from a device in a different logical

control unit.

3) One FICON channel can be used for multiple concurrent I/O transfers from the same

physical control unit and even for the same logical control unit.

It is possible to install and define more than eight paths to any physical CU (from the same

LPAR) when the physical control unit has two or more logical control units. A maximum of

only eight channel paths may be defined to any one logical control unit. End users may use this

approach for physical control units supporting more than eight concurrent I/O transfers and that

have a customer requirement for a high I/O rate when using ESCON channels.

Operating a FICON environment provides the end user with the advantage of having the

capacity for multiple concurrent I/O operations on their I/O channels and adapters, allowing for


much higher I/O concurrency than could be achieved with ESCON. There are two primary

reasons for this:

1) A single FICON channel can have I/O operations to multiple logical control units at the

same time via the FICON protocol’s frame multiplexing capabilities.

2) FICON’s frame multiplexing and CCW data prefetching/pipelining also allow for

multiple I/O operations to the same logical control unit. This allows multiple, concurrent

I/O operations to the same logical control unit, even within the same physical control unit.

By also using IBM’s parallel access volume (PAV) functionality, it is even possible to do

multiple concurrent I/O operations to the same volume.

IBM states in multiple documentation sources that intermixing ESCON channels and FICON

channels to the same CU from the same mainframe operating system image (LPAR) is only

supported as a transitional step for migration. Access to any ESCON interface control unit from

a processor image may be from ESCON or FICON bridge channels. Intermixing ESCON

channels, FICON bridge channels, and FICON native channels to the same control unit from the

same LPAR is also supported. IBM recommends that FICON native channel paths only be

mixed with ESCON and FICON bridge channel paths to ease migration from ESCON channels

to FICON channels using dynamic I/O configuration. The coexistence is very useful during the

transition period from ESCON to FICON channels. The mixture allows you to dynamically add

FICON native channel paths to a control unit while keeping its devices operational. A second

dynamic I/O configuration change can then remove the CNC and FCV channels while keeping

devices operational. The mixing of FC channel paths with CNC and FCV channel paths should

only be for the duration of the migration to FICON (Trowell, White, 2002).


When FICON was introduced not all of the required control unit performance information

was readily available, particularly for non-IBM DASD vendors such as EMC, HDS, and STK.

Therefore, IBM came up with a general rule of thumb (ROT) for ESCON to FICON migration

planning, and recommended a 4:1 channel consolidation ratio (4 ESCON channels:1 FICON

channel). In other words, IBM recommended four times fewer FICON native channels than

ESCON channels to a control unit. Even with four times fewer channels, a FICON native

configuration is capable of more I/O concurrency than the ESCON configuration. It also

removed the ESCON addressing limitations, thus increasing the I/O workload of a control unit.

It is extremely important to realize that relying on rules of thumb rather than taking the time

to do a proper architecture performance analysis and model has created far too many

performance problems for far too many end users. The optimum number of FICON channels

and adapters is related to the control unit characteristics and implementation. Control unit

characteristic data is now readily available from vendors and has been incorporated into the

leading performance management and modeling software. It is best to use such software when

planning an ESCON to FICON migration rather than relying on simplistic rules of thumb that are

now outdated.
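As a concrete illustration of why such arithmetic is only a first pass, the sketch below estimates a FICON channel count from aggregate bandwidth alone. The per-channel data rates and the target utilization are assumptions chosen for illustration; a real migration study should instead feed measured control unit data into modeling software.

```python
import math

# A first-pass, bandwidth-only estimate of FICON channel consolidation.
# The per-channel data rates and target utilization are assumptions chosen
# for illustration; they deliberately ignore concurrency, addressing, and
# control unit characteristics -- the very factors that make rules of thumb
# unreliable.

def ficon_channels_needed(escon_channels, escon_util,
                          escon_mb_s=17.0,     # ESCON effective ceiling (assumed)
                          ficon_mb_s=200.0,    # FICON Express effective rate (assumed)
                          target_util=0.5):    # planning headroom (assumed)
    demand = escon_channels * escon_util * escon_mb_s   # MB/sec actually moved
    needed = math.ceil(demand / (ficon_mb_s * target_util))
    return max(needed, 2)   # never fewer than two paths, for availability

# Sixteen ESCON channels running at 50% utilization:
print(ficon_channels_needed(16, 0.50))   # -> 2, an 8:1 consolidation
```

Even this simple calculation shows how a fixed 4:1 ratio can be wrong in either direction, which is why measured control unit data and proper modeling remain the correct approach.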

Another situation in which it is important to have a solid understanding of the control unit’s

operational characteristics and FICON channel characteristics is when the end user is

determining whether to intermix different CU types on the same FICON channel. Historically

IBM has recommended against intermixing control units that the end user does not want locked

out for periods of time with other control units that can cause lockout for certain types of channel

operations. Tape devices and their control units are an example of a possible lockout cause.

IBM does allow (support) intermixing of control unit types with different channel usage


characteristics such as DASD and tape on the same FICON channel. FICON channels do not

have the interlock problems inherent with ESCON channels, but performance characteristics still

need to be taken into account. Tape drives and their control units normally transfer large data

blocks. Such large block data transfers may interfere with the response time of some control

units requiring the best possible response times such as disk control units. Therefore, for control

units sensitive to response times, it is not recommended to configure them to use the same

FICON channel as tape control units.

IBM software and the overwhelming majority of independent software vendor (ISV) software

have been modified to operate optimally in a FICON environment. These modifications are to

allow the software to take full advantage of the performance benefits of command and data

prefetching/pipelining with the FICON channels. The prefetching/pipelining of CCW

commands and data provided for by the fibre channel architecture allows for more efficient I/O

operations. FICON channels also synchronize on the transition from an input to an output

operation within the channel program. In other words, to guarantee data integrity, whenever the

current command being executed in the FICON channel is an input operation (read) and a

subsequently fetched CCW describes an output operation (write), the FICON channel indicates to

the control unit that the status must be presented at completion of the input operation by the

device. Avoiding this transition from read CCWs to write CCWs, where possible, further improves I/O

operations and allows the maximum benefit to be gained from the CCW and data

prefetching/pipelining capabilities of the FICON channel.

When migrating from ESCON to FICON, the following rules must always be considered:


1) Configure a minimum of two FICON channel paths to an LCU for high availability. The

exact number of paths depends on throughput requirements, but should always be a

minimum of two.

2) An LCU or device cannot be defined to be accessed more than once from the same

FICON channel path.

3) Configure the channel paths according to the quantity of resources available in the

FICON channel and control unit.

4) A physical control unit that has multiple logical control units may be accessed more than

once from the same FICON channel path. However, this access must be from different

logical control units within the physical control unit.
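The rules above lend themselves to a simple automated check. The following Python sketch validates a hypothetical path configuration; the mapping of LCU names to CHPID lists is invented purely for illustration.

```python
# A minimal sketch of the rules above: at least two paths per logical control
# unit (LCU) for availability, at most eight, and no CHPID defined twice to
# the same LCU. The mapping of LCU names to CHPID lists is hypothetical.

MAX_PATHS_PER_LCU = 8

def check_paths(lcu_paths):
    """Return a list of rule violations for a {lcu: [chpid, ...]} mapping."""
    problems = []
    for lcu, chpids in lcu_paths.items():
        if len(chpids) < 2:
            problems.append(f"{lcu}: fewer than two paths (no redundancy)")
        if len(chpids) > MAX_PATHS_PER_LCU:
            problems.append(f"{lcu}: more than {MAX_PATHS_PER_LCU} paths defined")
        if len(set(chpids)) != len(chpids):
            problems.append(f"{lcu}: same CHPID defined more than once")
    return problems

# CHPID 50 may legitimately appear under two *different* LCUs of one physical
# control unit; only the duplicate within LCU1 below violates the rules.
config = {"LCU0": ["50", "51"], "LCU1": ["50", "50", "52"]}
for problem in check_paths(config):
    print(problem)
```

Note that the same CHPID appearing under different LCUs is deliberately allowed, mirroring rule 4.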

2.3.5 FICON channel to channel (CTC)

CTC stands for channel to channel (connectivity). It is used for server to server connectivity in

the mainframe environment. Server to server communications support data movement and

processor-to-processor communications. Other common terminology used for server to server

connectivity includes clustering, grid computing, and server area networks. In the open systems

world we typically see server to server connectivity done via Gb Ethernet, Fibre Channel

(Virtual Interface over Fibre Channel), or Infiniband. For today’s mainframe environments,

server to server access would be either CTC over ESCON or CTC over FICON.

CTC was originally developed by IBM prior to the development of the Systems Network

Architecture (SNA), and it uses a sparse instruction set that simplifies mainframe access for

certain applications such as file and print distribution. Furthermore, CTC bypasses the Virtual

Telecommunications Access Method (VTAM), which is how SNA traffic gets processed. By

eliminating VTAM from the equation we realize faster data transfer rates with decreased


response times for the systems that have been developed to exploit the inherent advantages in

using CTC (Beal, Trowell, 2001).

The CTC function instruction set mentioned above actually simulates an I/O device that will

then be used by one system control program to communicate directly with another system

control program. In the course of doing so, it provides the synchronization and the data path

used for the data transfer between two channels. When used to connect two channels associated

with different systems, we have what is called loosely coupled multiprocessing systems. The

two channels connected by the CTC connection view this connection as an unshared I/O device.

The CTC is selected by the IOS and responds in exactly the same manner as any other I/O device.

It differs from these other I/O devices in that it uses commands to open a path between

the two channels it connects and then synchronizes the operations that are to be performed

between the two channels. For an analogy, think of this as resembling the transporter technology

used in Star Trek but without the shimmering lights, sound effects, and the need to say

“energize” (Guendert, 2005b).

IBM currently supports CTC for:

• Parallel channels via IBM’s 3088 MCCU

• ESCON channels

• FICON channels via a 9032-5 ESCON Director with the FICON Bridge (FCV) feature

• FICON Native CTC (FCTC)

The primary z/OS, OS/390 and z/VM exploiters of CTC communication include:

• IMS read/write devices


• XCF (Cross-System Coupling Facility) for both path-in and path-out devices used for

sysplex intersystem communication (z/OS and OS/390 only). For small messages, CTC

can often be more efficient than passing messages via the coupling facility (CF).

• VTAM read/write devices

• TCP/IP read/write devices.

• VM pass through facility

• Remote Spooling Communications Subsystem (RSCS)

CTC can be used in the following configurations (Beal, Trowell, 2001):

• Base Sysplex. In this configuration the CPCs are connected by CTC channels, with a

shared data set used to support communications. If more than one CPC is involved, a

Sysplex Timer is used to synchronize the time on all involved systems.

• Parallel Sysplex. Multi-system data sharing at high levels of performance is enabled

by the Coupling Facility. In a parallel sysplex, the CTCs are used to support

Cross-System Coupling Facility (XCF) signaling paths between systems in the sysplex.

• Loosely Coupled. This configuration has more than one Central Processing Complex

(CPC), shares DASD, but does NOT share central storage. The CPCs are connected by

CTC channels and are managed by multiple z/OS images.

Prior to the general availability of ESCON in the early 1990s, the IBM 3088 Multisystem

Channel Communication Unit (a control unit) provided parallel channel-attached CTC

support. The 3088 control unit is no longer marketed by IBM. As of Dec 31, 2004 the IBM

9032-5 ESCON director and FCV feature are also no longer marketed by IBM (Guendert,

2005b). Today’s options for CTC are ESCON and FICON Native CTC. Many of the principles

of ESCON CTC are carried over to FICON CTC. Therefore, a review of ESCON CTC is in


order, as is a discussion of the improvements made with FICON. This section will wrap up

with a brief discussion of System z9 specifics and their impact on FICON CTC.

2.3.5.1 ESCON CTC technology


ESCON CTC enables direct communication between multiple CPCs and/or CPCs with LPARs,

as illustrated in Figure 19 below. A stand-alone CTC control unit is no longer used to provide CTC adapter

and switching functions. ESCON CTC communication requires a pair of ESCON channels. One

channel is defined as an ESCON CTC channel on one side of the connection. The ESCON CTC

Control Unit Function is provided in the microcode (LIC) of the ESCON CTC channel. The

second channel is defined as an ESCON CNC channel to operate on the other side of the

connection. The CTC channel path will communicate with a CNC channel and vice-versa.

Connections can be either point to point with no intervening ESCON director, or switched point

to point via an ESCON director as seen in the figure below.

Figure 19. ESCON CTC connections


Three additional key points:

1. The ESCON channel that is defined as CTC cannot be used for other I/O device support.

It can only support the ESCON CTC Control Unit Function and CTC devices. In other

words, it cannot also support disk, tape, etc. on the same defined channel. This ESCON

channel is solely for CTC functions.

2. The ESCON channel defined as CNC can support ESCON CTC control unit definitions

and other I/O control unit type definitions such as disk and tape. However, this can only

be done if the channel is connected in a switched point to point (i.e. via an intervening

ESCON director) topology.

3. ESCON CTC channels support a maximum of 512 unit addresses (devices) and up to 120

logical control units (LCUs). This limitation means that multiple pairs of CTC and CNC

channels must be used in an installation with a large number of interconnected LPARs on

the same physical processor or to LPARS/images on other processors. In other words,

for an environment with large System z9 Enterprise Class servers that have high LPAR

counts, ESCON CTC can use up many of your available channels in short order

(Guendert, 2005b).

Recall that ESCON channels can be defined as shared, dedicated, or reconfigurable channel

types. Channels that reside in a CEC without Multiple Image Facility (MIF) cannot be shared by

the CEC’s LPARs (i.e., these unshared channels are dedicated to a single partition). If such an

unshared channel is further defined as reconfigurable, it can be reconfigured from one

partition to another. MIF allows for channel sharing: LPARs can share the channel

paths, and consequently they can also share any control units and associated I/O devices that are

also configured to the shared channels. So, if this same CEC is running in LPAR mode, the


logical partitions can share channel paths. This reduces the number of physical connections

between processor complexes. Both ESCON CTC and CNC channels can be defined as shared

channels (Beal, Trowell, 2001).

The System z9 and the z890/z990 servers support the concept of multiple Logical Channel

Subsystems (LCSSs) on a single server. For example, the z9 Enterprise Class (EC) supports four

LCSSs per server, with each LCSS having from one to 256 channels, and each LCSS can be

configured with 1 to 15 LPARs. The I/O subsystem nevertheless continues to be

viewed as a single entity, and each operating system image

continues to support a maximum of 256 channels.

2.3.5.2 FICON CTC technology


A FICON CTC connection consists of two FICON (FC) channels, each with an FCTC control unit defined. This method of CTC

connectivity, which has been available since October 2001, increases bandwidth available for

CTC by up to a factor of 20 with the advent of FICON Express 4 on the System z9 servers

(Guendert, 2005). At least one of the two FCTC control units needs to be defined on a zSeries

server at Licensed Internal Code (LIC) driver level 3C or later. There are multiple options for

implementing a FICON CTC configuration. There are four commonalities that apply to all

FICON CTC configurations (Beal, Trowell, 2001):

• The FICON channel at each end of the FICON CTC connection, supporting the FCTC

control units, can also communicate with other FICON control units (disk, tape, virtual

tape). This is very different from ESCON.

• The System z/zSeries server at each end of a FICON CTC connection will use a FICON

native channel (defined as CHPID type FC).


• The FC channel at each end of the CTC connection has a FICON CTC control unit

defined. An FCTC control unit can be defined in IOCP on any FICON native channel;

however, the FCTC control unit function will only be provided by a zSeries server at LIC

driver level 3C or later. The FCTC control function on a zSeries server at LIC driver

level 3C or later can communicate with an FCTC control unit defined on a FC channel on

any of the following:

o S/390 G5 or G6 server

o Another zSeries server

o A System z9 server

• FICON CTC communication does not require a pair of channels because it can

communicate with any FICON (FC) channel that has a corresponding FCTC control unit

defined. This means that, unlike ESCON CTC, FICON CTC communications can be

provided using only a single FICON channel per processor. However, even though a

single FC channel per processor can provide CTC connections across multiple processors,

for large FICON configurations it is best practice to use at least one pair of FC channels.

This will allow you to maintain the same CTC definition methodology that you used with

ESCON, but with an added benefit: the FICON channels can simultaneously support the

definition of FCTC control units and disk/tape/virtual tape control units (Guendert,

2005b).

Even though FICON CTC control units are defined at each end, only one end provides the

FICON CTC control unit function. A fairly complex algorithm is used to determine which

channel will provide the FICON CTC control unit function. The algorithm is run during the

initialization of the logical connection between the two ends of the FICON CTC connection.


Ideally, this algorithm results in balancing the number of FCTC CU functions provided at each

end of the logical connection.

Table 3. A comparison of channel technologies for CTC

Characteristic                   ESCON              FICON
Data transfer mode               half duplex        full duplex
Data transfer bandwidth          12-17 MB/sec       60-320+ MB/sec*
# of concurrent I/O operations   1                  up to 32 (64)**
# of supported unit addresses    up to 512          up to 16,384
Channel dedicated to CTC?        yes                no
# of required channels           minimum 2          1 or 2
Shared channels                  yes (EMIF)         yes (MIF)
Protocol                         circuit switched   packet switched

* 320 MB/sec assumes FICON Express 4 on a System z9 server. For FICON Express 2 on a

non-System z9 server, the expected bandwidth is 60-170 MB/sec.

** 32 concurrent I/O operations for non-System z9 servers. The System z9 will support 64

concurrent I/O operations (open exchanges).

Table 3 above summarizes the differences between the two technologies for CTC

communication. The multifold gains in bandwidth, number of supported concurrent I/O

operations, and addressability all result in much higher performance for a CTC environment.

FICON also is a much more efficient protocol in terms of CCW (Channel Command Word)

management, allowing it to span greater unrepeated distances and maintain high levels

of performance without experiencing the phenomenon known as data droop (which is quite

prevalent in the ESCON world). This performance gain allows for a reduction in the number

of channels used for CTC, which in turn results in a lower total cost of ownership attributable to

CTC needs.


2.3.5.3 System z9 and FICON Express 4 specific enhancements


The primary change in FICON Express4 channels when compared with FICON Express 2

channels is the speed of the link. FE4 is designed to support 4 Gbps link speeds as opposed to

the 2 Gbps link speeds of FE2. FE4 will auto-negotiate link speeds. The maximum number of

IOPS that have been measured on a FE4 channel running an I/O driver benchmarking program

with 4k bytes/IO is approximately 13000. This is the same as had been previously measured

with FE2 channels. Changing the link speed has no effect on the number of small-block (4 KB)

I/Os that can be processed per second. Therefore, improvements in response time measurements

(channel to disk) are only expected to be noticeable for large data transfers.

The same holds true for CTC performance. The biggest improvements with FE4 CTC

measurements will be for large message sizes. Testing done by IBM Poughkeepsie demonstrated

that there is no significant difference in response time measurements for 1k byte size XCF

messages between FE4 and FE2 channels. When the message size was increased to 8k bytes and

again to 32k bytes, IBM’s testing discovered a noticeable difference in response times between

FE4 CTC channels, and FE2 CTC channels (Cronin, 2007).

The System z9 also increased the maximum number of supported concurrent I/Os (open

exchanges) from 32 to 64. While this was done primarily to help DASD workloads with low to

moderate control unit cache hit ratios, it also has benefits for System z9 FICON CTC: twice as

many CTC devices can be simultaneously transmitting short messages with reasonable response

times between System z9 servers.
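A hedged sense of how many open exchanges a CTC workload occupies can be obtained from Little's law (concurrency = message rate × response time). The message rate and response time in the sketch below are invented illustrative values, not IBM measurements.

```python
# Little's law: in-flight operations = message rate x response time. The
# message rate and response time below are invented illustrative values,
# not IBM measurements.

def open_exchanges_in_use(msgs_per_sec, resp_time_ms):
    return msgs_per_sec * (resp_time_ms / 1000.0)

rate, rt_ms = 40000, 1.2     # 40,000 short messages/sec at 1.2 ms each
in_flight = open_exchanges_in_use(rate, rt_ms)
print(f"{in_flight:.0f} exchanges in flight "
      f"({'within' if in_flight <= 64 else 'beyond'} the z9 limit of 64)")
```

A workload of this assumed intensity would exceed the 32-exchange limit of earlier servers but fits comfortably within the System z9's 64 open exchanges.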

2.3.5.4 Considerations for Migration (ESCON CTC to FICON CTC)


In general, migration from ESCON CTC (SCTC) to FICON CTC (FCTC) is achievable with a

minimum amount of disruption. This is true as long as different control unit numbers and device

numbers are used in the process. One key thing to make certain of is that hardware driver levels


are in compliance with IBM requirements for your machine type(s). Also, a FCTC configuration

can co-exist with an existing ESCON CTC configuration, which is why it is important to use

different control unit and device numbers. Apart from the device type (FCTC) and the channel

type (FC) that users will see in the z/OS and OS/390 command displays, there are no operational

differences in the management of FICON CTC devices versus ESCON CTC devices (Beal,

Trowell, 2001).

The recommendations for configuring a FICON CTC device for high availability are similar

to what was best practice for ESCON CTC high availability (Guendert, 2005b):

• Spread the receiving CTC devices across different physical hardware CTC connections.

• Spread the sending CTC devices across different physical hardware CTC connections.

• Provide redundant FCTC connections with as few common hardware elements as

possible. In other words, configure redundant connections:

o On different channels

o Through different directors/switches/fabrics

o On channels in different channel groups

Mixing redundant CTC connections across ESCON and FICON channels is possible while

providing similar functionality. XCF, VTAM, and other exploiters of a CTC infrastructure for

intersystem communication have no reliance whatsoever on whether the CTC device is FCTC or

SCTC. It should be noted that FICON CTC provides many benefits over ESCON CTC, most

notably increased performance and connectivity.


2.3.5.5 FICON CTC operational/functional characteristics


FICON Native (FC) channels that support a FICON CTC (FCTC) control unit have operational and functional characteristics identical to those of any other FC channel supporting other types of control units, such as DASD. Here are some highlights to remember (Guendert, 2005b):

• The link bandwidth for a FICON native (FC) channel (FICON Express2) is 200 MBps.

For a FICON Express4 channel on System z9 this is 400 MBps.

• The FC channel will support up to 16,384 devices.

• The FC channel will support a maximum of 32 concurrent I/O connections (open

exchanges). For System z9 this number is 64.

• The channel to CU end-to-end distance before data droop is up to 100 km for FICON.

• Frame multiplexing support by the FICON channels, FICON directors/switches, and

FICON control units offers optimum utilization of the FC links without the protocol

overhead of ESCON.

• The distance from the channel to control unit, channel to director/switch, or

director/switch to control unit link is up to 10 km (20 km with an RPQ from IBM) for

FICON native channels using single mode fibre with long wavelength lasers.

• FICON channels use frame and Information Unit (IU) multiplexing control to provide

greater exploitation of the priority I/O queuing mechanisms within FICON capable

control units.

Recall the earlier discussion that in a FICON CTC configuration, even though a FCTC control

unit will be defined at each end of the FICON channel, only one of the two ends of a logical

connection will actually provide the FICON FCTC control unit function. The decision as to

which end of the logical connection will actually provide the FCTC control unit function is made


using a fairly complicated algorithm. Below is a summary of the factors taken into consideration

by this algorithm (Beal, Trowell, 2001):

• FICON CTC control unit support. FICON CTC control unit support is only provided in

the FICON native (FC) channel on a z900/800/990/890 at driver level 3C or later or on a

System z9 EC/BC server. If only one end meets this condition (for example, the other end is a z900 GA1 or 9672 G5/G6 processor), the end meeting the condition provides the control unit function.

• FICON CTC CU function. If both ends of the connection meet the above condition, then

the count of FCTC CU functions already being supported by each end is taken into

consideration. The FC channel with fewer FCTC CU functions being supported is

selected to provide the FICON CTC control unit function.

• Fibre Channel Node/Port WWN. If, after evaluating the above two conditions, each end has an equal CTC CU function count, then the FC channel with the lower Fibre Channel Worldwide Name (WWN) provides the FICON CTC control unit function. The WWN is exchanged during the port login (PLOGI) process.

Keep in mind that the FC channels supporting FCTC control units can also be used for

supporting I/O to other control units (disk, tape, etc.). The establishment of logical paths to disk and tape control units other than FCTC control units is not taken into consideration in the automatic FCTC CU function establishment balancing algorithm.
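The three factors above can be condensed into a small decision function. The following Python sketch is illustrative only, not IBM's channel microcode; the field names (`supports_fctc_cu`, `cu_count`, `wwn`) are invented for the example.

```python
def select_cu_end(end_a, end_b):
    """Return which end ('a' or 'b') provides the FCTC CU function.

    Each end is a dict with:
      supports_fctc_cu : bool - driver level 3C+ on z800/z900/z990/z890,
                                or a System z9 EC/BC server
      cu_count         : int  - FCTC CU functions already supported
      wwn              : int  - Worldwide Name learned during PLOGI
    """
    # Factor 1: only one end supports the FCTC CU function
    # (e.g., the other end is a z900 GA1 or 9672 G5/G6 processor).
    if end_a["supports_fctc_cu"] and not end_b["supports_fctc_cu"]:
        return "a"
    if end_b["supports_fctc_cu"] and not end_a["supports_fctc_cu"]:
        return "b"

    # Factor 2: both ends support it - the end already carrying fewer
    # FCTC CU functions takes on the new one (load balancing).
    if end_a["cu_count"] != end_b["cu_count"]:
        return "a" if end_a["cu_count"] < end_b["cu_count"] else "b"

    # Factor 3: counts are equal - the lower WWN provides the function.
    return "a" if end_a["wwn"] < end_b["wwn"] else "b"
```

Note that, as the text states, logical paths to non-FCTC control units on the same channel play no role in this decision.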

2.3.5.6 Recommendations for a FCTC device numbering scheme


What follows is a summary of one method that has been implemented in numerous installations

worldwide in a wide range of shop sizes. It also was used extensively for ESCON CTC. This

method is simple to follow and implement, and it is highly scalable. Defining an environment’s


FICON CTC configuration requires an understanding of FICON, an understanding of the System

z/zSeries CTC rules, operations and the different methods available for defining the connections.

It should be noted that as the number of images in a complex grows, the CTC process becomes

increasingly complex.

The most frequent approach to a FICON CTC numbering scheme is called the FICON

sender/receiver device numbering scheme. This scheme allows for ease of migration from

ESCON CTC. It also allows operations personnel and systems programmers to be able to easily

identify the use (send or receive) and target system for any given CTC device in the complex’s

configuration.

This method makes use of the 4 digit z/Architecture device number where we have:

1. First digit uses an even hexadecimal number for the send CTC control unit and device.

An odd hexadecimal number is used for the receive CTC control unit and device.

2. Second/Third digits represent an assigned CTC image ID (assigned on paper) for the

LPAR image. This can be any unique value within the CTC complex. This CTC image

ID then serves as a target identifier for the image when defining how all other images access it.

3. Fourth digit: indicates whether the CTC connection is the primary or the alternate. For the primary device, use a value of 0 to 7; for the alternate CTC devices, use a value of 8 to F.

The recommendation is to use two FICON channels per processor, defining all send FCTC

control units and devices (with numbers of the form 4xxx) on one FICON (FC) channel, and

define all receive FCTC control units and devices (with numbers of the form 5xxx) on a second

FC channel (Guendert, 2005).
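The digit conventions above lend themselves to a small encoder/decoder. The Python sketch below is illustrative only; the 4xxx/5xxx first digits follow the recommendation in the text, while the function names and argument layout are assumptions.

```python
def fctc_device_number(send, image_id, alternate, index=0):
    """Compose a 4-hex-digit FCTC device number per the scheme above.

    send      : True for a send device (even first digit, here 4),
                False for a receive device (odd first digit, here 5)
    image_id  : assigned CTC image ID of the target LPAR (0x00-0xFF),
                occupying the second and third digits
    alternate : False -> primary (fourth digit 0-7),
                True  -> alternate (fourth digit 8-F)
    index     : 0-7, distinguishes multiple devices to the same image
    """
    assert 0 <= image_id <= 0xFF and 0 <= index <= 7
    first = 0x4 if send else 0x5            # even = send, odd = receive
    fourth = index + (8 if alternate else 0)
    return (first << 12) | (image_id << 4) | fourth

def decode(devno):
    """Break a device number back into its scheme fields."""
    return {
        "send": (devno >> 12) % 2 == 0,     # even first digit = send
        "image_id": (devno >> 4) & 0xFF,    # target CTC image ID
        "alternate": (devno & 0xF) >= 8,    # fourth digit 8-F = alternate
    }
```

For example, the primary send device targeting image ID 2A would be device number 42A0, and an alternate receive device to the same image would fall in the 52A8-52AF range, which is what lets operators read the role and target straight from the number.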


As described by the discussion on FICON CTC control unit function balancing, if the FICON

channel at each end of a FCTC connection is on a processor that supports the FICON CTC CU

function, the channel that initiates the establishment of the FCTC CU function indicates the

number of FCTC CUs that it currently supports. The receiving FC channel uses this number to

determine which channel provides the FICON CTC control unit function. The channel with the

fewest number of CTC CU functions supported will provide the FCTC CU function for the

connection in question. This is an attempt to balance the load that FCTC connections place on

each channel. The FICON channels supporting the FCTC control units can also be used to

support I/O operations to other control units such as disk and tape.

*Note: The establishment of logical paths to control units other than FCTC control units is not taken into consideration in the automatic FCTC CU function establishment balancing algorithm.

2.3.6 Cascaded FICON basics

FICON native channels on zSeries and System z9 mainframes support cascaded FICON directors.

This support is for a two director configuration only. With cascading, a FICON native channel

or a FICON native channel with CTC function can connect a mainframe to a device or other

mainframe via two connected FICON directors. FICON support of cascaded directors is

commonly referred to as cascaded FICON, FICON cascading, cascaded switching, or a two-switch cascaded fabric (Guendert, 2005c).

Cascaded FICON is important for disaster recovery and business continuity solutions. It

provides high availability connectivity and the potential for fiber infrastructure cost savings for

extended storage networks. Cascaded FICON allows for shared links, and therefore, for

improved utilization of intersite connected resources and infrastructure. Disaster

recovery/business continuity (DR/BC) solutions such as IBM’s Geographically Dispersed


Parallel Sysplex (GDPS) can benefit from the reduced inter-site configuration complexity that

FICON cascading provides.

The specifics of cost savings vary depending upon the infrastructure details, workloads, and

size of data transfers. In general, end users who have data centers geographically separated at

two sites may reduce the number of cross site connections by using cascaded FICON directors.

Further cost savings may be realized in the reduction in the number of required FICON channels

and switch ports. A FICON Express4 channel lists for approximately $4,000 and FICON director ports currently list for approximately $1,500, so these savings alone can be fairly substantial in a large configuration (Artis, Guendert, 2006).

Another important aspect of cascaded FICON is the ability to provide high integrity data

paths. When configuring FICON channel paths through a cascaded fabric, the high integrity

function is an integral component of the FICON architecture. To support cascaded FICON, IBM

and FICON director vendors worked together to help ensure that the robustness in the channel to

control unit path is maintained to the same exacting standards for error detection, data integrity,

and recovery that has existed for many years with both ESCON and the initial implementation of

FICON.

Data integrity helps ensure that any changes to the end users’ data streams are always

detected and that the data frames are delivered to the correct FICON channel port or FICON

control unit port (to a correct end point). End-to-end data integrity is maintained throughout the

cascaded director fabric. Cascaded FICON introduced new integrity features within the FICON

channel and cascaded switch fabric to help ensure the detection and subsequent reporting of any

miscabling actions occurring within the fabric during operational use that may cause a frame to

be delivered to the wrong end point.


Prior to the January 2003 introduction of support for cascaded FICON director connectivity on

IBM zSeries mainframes, only a single level of FICON directors for connectivity between a

processor and peripheral devices could be used. Cascaded FICON introduces the open

systems SAN concept of the interswitch link (ISL). IBM now supports the flow of traffic from

the processor through two FICON directors connected via ISL and on to the peripheral devices

such as disk and tape. FICON support of cascaded directors is generally available (GA), has

been supported on the IBM zSeries since 31 January 2003, and is supported on the System z

processors as well.

Cascaded FICON allows the end user to have a FICON Native (FC) channel, or a FICON

CTC channel, connect a zSeries/System z server to another zSeries/System z server or peripheral

device such as disk, tape library, or printer via two FICON directors/switches between the

connected devices and/or servers. A FICON channel in FICON Native (FC) mode connects one

or more processor images to a Fibre Channel link, which connects to the first FICON director,

then dynamically/internally through the first director to one or more ports, and from there to a

second cascaded FICON director. From the second director there are fibre channel links to

FICON control unit (CU) ports on attached devices. These FICON directors may be

geographically separated, providing greater flexibility and fibre cost savings. All FICON

directors connected together in a cascaded FICON architecture must be from the same vendor.

Initial support by IBM is limited to a single hop between cascaded FICON directors; however,

the directors can be configured in a hub/star architecture with up to 24 directors in the fabric.

Cascaded FICON allows customers tremendous flexibility and the potential for fabric cost

savings in their FICON architectures. It is extremely important for business continuity/disaster

recovery implementations. Customers looking at these types of implementations can realize


significant potential savings in their fiber infrastructure costs and channel adapters by reducing

the number of channels for interconnecting 2 geographically separated sites with high

availability FICON connectivity at increased distances.

Initially, the FICON (FC-SB-2) architecture did not allow the connection of multiple FICON directors. Neither did ESCON, except when static connections of “chained” ESCON directors were used to extend ESCON distances. Both ESCON and FICON defined a single byte for the link address, the link address being the port attached to “this” director. As of 31 January 2003, this changed: two-director configurations are now possible, including configurations spanning separate geographic sites. This is done by adding the domain field of the Fibre Channel destination ID to the link address in order to specify the exit director and the link address on that director.

There are some hardware and software requirements specific to cascaded FICON:

1) The FICON directors themselves must be from the same vendor.

2) The mainframes must be zSeries or System z processors: z800, z890, z900, z990, z9 BC, or z9 EC. Cascaded FICON requires 64-bit architecture to support the two-byte addressing scheme and is, therefore, not supported on 9672 G5/G6 mainframes.

3) z/OS version 1.4 or greater, and/or z/OS version 1.3 with required PTFs/MCLs to support

2-byte link addressing (DRV3g and MCL (J11206) or later).

4) The high integrity fabric feature for the FICON director/switch must be installed on all

directors/switches involved in the cascaded architecture. To prevent an FC port from

transferring more frames than the receiving FC port can handle, both ports of the FC link


must request each other’s buffer credit quantity. This request is performed during the

initialization of the link. Each time a frame is transferred from a port onto the link,

that port will reduce (by 1) its current buffer credit counter value for the partner port.

When the port at the other end of the link receives the frame and moves it out of its

buffer, it responds with an R_RDY (ready response). When the R_RDY is received by

the transmitting port, it increments the counter by 1.
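The buffer credit accounting just described can be sketched as a simple counter. The following Python fragment is a simplification (real ports track credits per link in hardware); the class and method names are invented for illustration.

```python
class BBCreditPort:
    """Minimal sketch of buffer-to-buffer credit flow control."""

    def __init__(self, partner_credits):
        # Credit quantity learned from the partner port during link
        # initialization, as described in requirement 4.
        self.credits = partner_credits

    def send_frame(self):
        """Transmit one frame; decrement the credit counter by 1.

        Returns False when no credits remain - the port must hold the
        frame until the partner returns an R_RDY.
        """
        if self.credits == 0:
            return False
        self.credits -= 1
        return True

    def receive_r_rdy(self):
        """Partner moved a frame out of its buffer and sent R_RDY;
        restore one credit."""
        self.credits += 1
```

With, say, two credits granted at initialization, a port can send two frames back-to-back but must then wait for an R_RDY before sending a third, which is exactly the throttling behavior the requirement describes.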

The greater bandwidth and distance capabilities of FICON over ESCON are making it an essential and cost-effective component in high availability/disaster recovery/business continuity (HA/DR/BC) solutions (Guendert, 2005c). These solutions are the primary reason mainframe installations are adopting cascaded FICON architectures. Since September 11, 2001, more and more companies are insourcing DR/BC, and those that do are building the mainframe piece of their new DR/BC datacenters using FICON rather than ESCON. Until IBM announced cascaded FICON as Generally Available (GA), the FICON architecture was limited to a single domain due to the single-byte addressing limitations inherited from ESCON. FICON cascading allows the end user a greater maximum distance between sites (i.e., up to an unrepeated distance of

36 km at 2 Gb/sec bandwidth). HA/DR/BC implementations including GDPS, remote DASD

mirroring, electronic tape/virtual tape vaulting and remote DR sites are all facilitated by

cascaded FICON. To better understand cascaded FICON, it is worth looking at recent trends

in high availability/disaster recovery/business continuity (HA/DR/BC).

2.3.6.1 Technical basics of FICON cascading


First, as stated earlier, cascaded FICON is limited to zSeries and System z processors.

Second, a cascaded FICON director configuration involves at least three fibre channel links.


The first link is between the FICON channel card on the mainframe (known as an N_Port) and

the FICON director’s fibre channel adapter card (which is considered an F_Port). The second

link is between the two FICON directors via what are known as E_Ports. The link between

E_Ports on the directors is known as an inter-switch link, or ISL. The final link is from the

F_Port to a FICON adapter card in the control unit port (N_Port) of the storage device. The

physical paths are the actual fibre channel links connected by the FICON directors providing the

physical transmission path between a channel and a control unit. Please also note that the links

between the cascaded FICON directors may be multiple ISLs, both for redundancy and to ensure

adequate I/O bandwidth.

Single byte addressing refers to the link address definition in the Input-Output Configuration

Program (IOCP). Two-byte addressing (cascading) allows IOCP to specify link addresses for

any number of domains by including the domain address with the link address. This allows the

FICON configuration the capability of creating definitions in IOCP that span more than one

director.

Figure 20 below shows that the FC-FS 24 bit FC port address identifier is divided into three

fields:

1) Domain

2) Area

3) AL Port

In a cascaded FICON environment, 16 bits of the 24-bit address must be defined for the zSeries

server to access a FICON control unit. The FICON directors provide the remaining byte used to

make up the full 3-byte FC port address of the CU being accessed. The AL_Port (arbitrated


loop) value is not used in FICON and is set to a constant value. The zSeries “domain” and “area” fields together are referred to as the F_Port’s “port address” field. It is a 2-byte value, and when

defining access to a CU that is attached to this port using the zSeries Hardware Configuration

Definition (HCD) or IOCP, the port address is referred to as the link address. Figures 20 and 21

further illustrate this. Figure 22 is an example of a cascaded FICON IOCP gen (Artis, Guendert,

2006).
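The field layout described above can be expressed as simple bit arithmetic. The following Python sketch assumes the 8-bit domain / 8-bit area / 8-bit AL_Port split described in the text; the function names are hypothetical.

```python
def split_port_id(port_id):
    """Split a 24-bit FC-FS port address into its three fields."""
    return {
        "domain": (port_id >> 16) & 0xFF,   # identifies the director
        "area": (port_id >> 8) & 0xFF,      # identifies the F_Port
        "al_port": port_id & 0xFF,          # constant in FICON (no loop)
    }

def link_address(port_id):
    """The 2-byte link address coded in HCD/IOCP for cascaded FICON:
    the domain byte plus the area byte. The director supplies the
    remaining AL_Port byte to form the full 3-byte address."""
    return (port_id >> 8) & 0xFFFF
```

For example, a control unit behind port 1C on the director with domain ID 6B would be coded with the 2-byte link address 6B1C.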

Figure 20. Fabric addressing support (a)


Figure 21. Fabric addressing support (b)

The connections between the two directors are established through the exchange of link

parameters (ELP). The directors then pause for a device fabric login (FLOGI), and then,

assuming the device is another switch, they initiate an ELP exchange. This results in the

formation of the ISL connection(s).

In a cascaded FICON configuration, three additional steps occur beyond the “normal” FICON

switched point-to-point communication initialization. The 3 basic steps are:

1) If a 2-byte link address is found in the CU macro in IOCDS, a “Query Security Attribute”

(QSA) command is sent by the host to ask the fabric controller on the directors whether the high integrity fabric features are installed.

2) The director responds to the QSA.


3) If the response is affirmative, indicating that a high integrity fabric is present (fabric binding and insistent domain IDs), the login continues. If not, login stops and the ISLs are treated as invalid (not a good thing).

Figure 22. Sample IOCP coding for FICON cascaded director configuration

2.3.6.2 High integrity enterprise fabrics


Data integrity is paramount in a mainframe environment. End to end data integrity absolutely

must be maintained throughout a cascaded FICON director environment. Why? We must

ensure that any changes made to the customer’s data stream are always detected and that the data

is always delivered to the correct end point. FICON directors in a cascaded director environment

use a software feature to achieve this. The feature key must be installed and operational. What

does high integrity fabric architecture and support entail?

1) Support of Insistent Domain IDs. This means that a FICON director will not be allowed

to automatically change its address when a duplicate switch address is added to the


enterprise fabric. Intentional manual operator action is required to change a FICON

director’s address. Insistent Domain IDs prohibit the use of dynamic Domain IDs,

thereby ensuring that predictable Domain IDs are being enforced within the fabric. For

example, suppose a FICON director has this feature enabled, and a new FICON director

is connected to it via an ISL in an effort to build a cascaded FICON fabric. If this new

FICON director attempts to join the fabric with a domain ID that is already in use, the

new director is segmented into a separate fabric. It also makes certain that duplicate

Domain IDs are not used within the fabric (Guendert, 2005c).

2) Fabric Binding. Fabric binding enables companies to allow only FICON

directors/switches that are configured to support high-integrity fabrics to be added to the

storage/FICON network. For example, a Brocade Mseries FICON director without an

activated SANtegrity feature key is prohibited from connecting to an Mseries FICON fabric/director with an activated SANtegrity feature key. The FICON directors that you

wish to connect to the fabric must be added to the fabric membership list of the directors

already in the fabric. This membership list is composed of the “acceptable” FICON

director’s Worldwide Name (WWN) and Domain ID. Using the Domain ID ensures that

there will be no address conflicts (i.e., duplicate Domain IDs) when the fabrics are merged.

The two connected FICON directors then exchange their membership list. This

membership list is a Switch Fabric Internal Link Service (SW_ILS) function, which

ensures a consistent and unified behavior across all potential fabric access points

(Guendert, 2005c).
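Taken together, insistent Domain IDs and fabric binding amount to a membership check at fabric-join time. The following Python sketch illustrates that combined logic only; the data structures and reason strings are assumptions, not any vendor's implementation.

```python
def can_join_fabric(new_switch, fabric):
    """Decide whether a director may join a high-integrity fabric.

    new_switch : {'wwn': str, 'domain_id': int, 'high_integrity': bool}
    fabric     : {'domain_ids_in_use': set of int,
                  'membership': set of (wwn, domain_id) tuples}
    Returns (allowed, reason).
    """
    # Fabric binding: only directors with the high-integrity feature
    # active and an entry in the membership list may connect.
    if not new_switch["high_integrity"]:
        return (False, "high-integrity feature not active")
    if (new_switch["wwn"], new_switch["domain_id"]) not in fabric["membership"]:
        return (False, "not in fabric membership list")
    # Insistent Domain IDs: a duplicate ID is never renumbered
    # dynamically; the joining director is segmented instead.
    if new_switch["domain_id"] in fabric["domain_ids_in_use"]:
        return (False, "duplicate Domain ID - director segmented")
    return (True, "joined")
```

The membership list pairing of WWN and Domain ID mirrors the text: the WWN identifies the acceptable director, while the Domain ID guards against address conflicts when fabrics merge.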


2.3.7 Cascaded FICON and HA/DR/BC

The importance of High Availability, Business Continuity and Disaster Recovery

(HA/BC/DR) for information technology professionals and the corporations they work for has

undergone considerable change in the past twenty years. This change has accelerated dramatically in the 5½ years since Sept. 11, 2001. The events of Sept. 11, 2001 (9/11) in the

United States served as a wake up call to those who viewed HA/BC/DR as a mere afterthought

or necessary “check in the block”. The day’s events underscored how critical it is for businesses

to be ready for disasters on a far larger scale than previously contemplated. The watershed events

of 9/11 and the resulting experiences served to diametrically change IT professionals’

expectations, and these events now act as the benchmark for assessing the requirements of a

corporation having a thorough, tested BC/DR plan (Boyd, Guendert, 2006).

Following September 11, 2001, industry participants met with multiple government agencies,

including the United States Securities and Exchange Commission (SEC), the Federal Reserve,

the New York State Banking Department, and the Office of the Comptroller of the Currency.

These meetings were held specifically to formulate and analyze the lessons learned from the

events of September 11, 2001. These agencies released an interagency white paper, and the SEC

released their own paper on best practices to strengthen the IT resilience of the US Financial

System (United States Securities and Exchange Commission, 2002). September 11, 2001

hammered home how critical it is for an enterprise to be prepared for disaster. This was even

more so for large enterprise mainframe customers. A complete paradigm shift has occurred

since 9/11/01 when discussing the topic of DR/BC. Disaster recovery is no longer limited to

problems such as fires or a small flood. Companies now need to consider and plan for the

possibility of the destruction of their entire data center and, possibly, the people that work in it.


2.3.7.1 IT Resilience
While this term is starting to gain more prominence in the HA/BC/DR community, what is

the formal definition of IT resilience? IBM has developed a definition that sums it up very well.

IBM defines IT resilience as “the ability to rapidly adapt and respond to any internal or external

opportunity, threat, demand, or disruption and continue business operations without significant

impact” (Dhondy, Petersen, 2006). This concept is related to disaster recovery, but it is broader

in scope in that disaster recovery concentrates on recovering only from an unplanned event.

2.3.7.2 Regional Disasters


One of the things that was not well appreciated prior to 9/11 was the concept of a regional

disaster. What is a regional disaster? A good working definition of a regional disaster is: a

disaster scenario, man made or natural that impacts multiple organizations and/or multiple users

in a defined geographic area. In other words, it is not a disaster impacting only one organization

or one data center. Table 4 below lists some well known regional disasters that have occurred since 2001 (Guendert, 2007).

Organizations whose BC/DR plans focused on recovering from a local disaster such as fire or

power failure within a data center faced the realization on 9/11 that their plans were inadequate

for coping with and recovering from a regional disaster. A regional disaster was precipitated by

the WTC attacks in New York City on 9/11. Hundreds of companies and an entire section of a

large city, including the financial capital of the world, were affected. Power was cut off, travel

restrictions were imposed, and the major telephone (land line and wireless) switching centers

were destroyed. Access to the buildings and data centers was, at a minimum, restricted and, at worst, completely impossible for several days following 9/11. The paradigm for planning

mainframe-centric BC/DR changed overnight. Organizations quickly realized that they now needed to plan for the possibility of a building being eliminated or rendered useless. Not to be


Chicken Little and proclaim “the sky is falling”, but what about the threat of terrorists using

biological, chemical, or nuclear weapons? What about another regional or super-regional natural

disaster such as Hurricane Katrina or the 2004 Southeast Asia earthquake/tsunami? Or, man-

made disasters such as the 2003 North American power blackout?

Table 4. Sample of well known regional disasters 2001-2006

2.3.7.3 Non-data center planning issues


The first primary issue beyond the data center is erosion in confidence in a business based upon

the company’s reactions to the regional disaster. This typically would be based on how the press

and internal communications to employees reported on disaster related events. In other words,

what is the perception of the company’s ability to react to the crisis, minimize the ensuing chaos,

provide timely (and accurate) updates on the status of company operations, and (internally)

discuss how Human Resources (HR) issues are being addressed? In effect, organizations need to

include in their BC/DR plans a communications plan, as well as appoint a BC/DR


Communications Director/Coordinator. Perception is reality, and the more control an

organization is perceived to have over its destiny, and the more upfront an organization is, the

less likely is an erosion of confidence in the business (United States Federal Reserve System,

2003).

The second primary issue is that an organization needs to be able to restart or execute

restoration of business operations in a timely manner. Failure to do so can result in supplies

being unable to reach the business to resupply necessary materials. Also, if a business is not available to customers, they have no access to its products and services and may look elsewhere, i.e., to competitors. This loss of revenue and customer loyalty has a direct

relationship with the effects of a loss/erosion in business confidence.

2.3.7.4 The best laid plans…


Most pre 9/11 installations that actually had BC/DR plans in place (sadly, some did not) felt they

had a sound, well thought out BC/DR plan. Most had tested their plan and felt prepared for what

was, until 9/11, the typical disaster. On 9/11, when it came time to execute these plans for a

catastrophic regional disaster, many if not most of these organizations found that in the immortal

words of Robert Burns, “the best laid plans of mice and men often go astray.” In other words,

what had been considered great plans for a typical disaster were incomplete or simply inadequate

to handle the situation. Some examples found in the interagency report (United States Federal

Reserve System, 2003):

1) Many plans only covered replacing IT equipment inside the data center and did not cover replacing IT equipment outside of it. Key elements of the IT infrastructure beyond the data center are also essential to restoring and sustaining business operations.


2) The BC/DR plan was not comprehensive in that it only addressed recovering the IT

installation. The plan did not address everything required to accomplish a complete end

to end business recovery.

3) Documentation describing how to recover mission critical applications and business

processes was inadequate or sometimes completely missing.

4) The education and training of personnel was not adequate, nor were sufficient disaster

drills conducted to practice executing the plan. The obvious question is: why not? Was this a lack of control over these procedures and processes due to outsourcing

BC/DR? For those who have served in the U.S. Navy, remember how frequently

casualty drills were practiced? And these casualty drills were practiced as if the ship’s

survival was at stake each time. When your organization’s future survival is at stake,

shouldn’t you practice/test your BC/DR plans in the same devoted fashion the U.S. Navy

practices casualty drills? Practice like you play the real game (Guendert, 2007).

5) Another planning inadequacy that was frequently cited was a lack of addressing

organizational issues directly related to mission critical business functions. Examples of

such items found lacking included but are not limited to:

a. Documentation of a clear chain-of-command with associated decision control

matrix.

b. An effective internal communications plan for employees and an external

communications plan for suppliers, customers, and the media.

c. A documented succession plan to follow in the event key personnel are

incapacitated/unavailable. On 9/11 organizations were woefully unprepared for

the loss of key personnel and their skills.


d. Definition of a priority sequence for recovering business processes and associated

IT applications.

2.3.7.5 Summary of 9/11 lessons learned


The following is a summary of the key lessons learned from the regional disaster on 9/11/2001 in

NYC that resulted from the attacks on the World Trade Center (Guendert, 2007).

1. A regional disaster could very well cause multiple companies/organizations to declare

disasters and initiate recovery actions simultaneously, or near-simultaneously. This is

highly likely to severely stress the capacity of business recovery services (outsourcers) in

the vicinity of the regional disaster. Business continuity service companies typically

work on a “first come, first served” basis. So when a regional disaster similar to 9/11

occurs, these outsourcing facilities can fill up quickly and be overwhelmed. Also, a

company’s contract with the BC/DR outsourcer may stipulate that the customer only has

the use of the facility for a limited time (for example-45 days). This may spur companies

with BC/DR outsourcing contracts to a)consider changing outsourcing firms b)re-

negotiate their existing contract or c) study the requirements and feasibility for insourcing

their BC/DR and creating their own DR site. Depending on what an organization’s

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are, option c may

be the best alternative.

2. The recovery site must have adequate hardware and the hardware at the facility must be

compatible with the hardware at the primary site. Organizations must plan for their

recovery site to have a) sufficient server processing capacity, b) sufficient storage capacity,

and c) sufficient networking and storage networking capacity to enable all business

critical applications to be run from the recovery site. The installed server capacity at the


recovery site may be used to meet normal (day-to-day) needs (assuming BC/DR is

insourced). Fallback capacity may be provided via several means including workload

prioritization (test, development, production, and data warehouse). Fallback capacity

may also be provided via implementing a capacity upgrade scheme that is based on

changing a license agreement as opposed to installing additional capacity. IBM System z

and zSeries servers have the “Capacity Backup Option” (CBU). Unfortunately for the

open systems world, such a feature is not prevalent. Many organizations will take a

calculated risk with open systems and not purchase two duplicate servers (one for

production at the primary data center, and a second for the DR data center). Therefore,

open systems DR planning must take this possibility into account, and answer the

question “what can I lose”?

Also on the subject of compatible hardware is the example posed by encryption, which has

been overlooked before and will be overlooked again. If an organization does not plan for the recovery

site to have the necessary encryption hardware, the recovery site may not be able to

process the data that was encrypted at the primary site. So, when you buy new tape

drives with onboard encryption, before you plan on reallocating those previous

generation tape drives (that did not have encryption capabilities) to the recovery site to

save money, you may need to re-evaluate the decision. Saving money is nice, but in this case,

the savings are very short run and put the business at risk of not being able to recover

in time to meet its RTO.

3. A robust BC/DR solution must be based on as much automation as possible. The types

of catastrophes inherent in 9/11 style regional disasters make it too risky to assume that

key personnel/critical skills will be available to restore IT services. Regional disasters


impact personal lives as well. Personal crises and the need to take care of families,

friends, and loved ones will take a priority for IT workers. Also, key personnel may not

be able to travel and will be unable to get to the recovery site. Mainframe installations

are increasingly looking to automate switching resources from one site to another. One

way to do this in a mainframe environment is with Geographically Dispersed Parallel

Sysplex (GDPS).

4. If an organization is to maintain business continuity, it is critical to maintain sufficient

geographical separation of facilities, resources, and personnel. If a resource cannot be

replaced from external sources within the RTO, it needs to be available internally and in

multiple locations. This statement holds true for hardware resources, employees, data,

and even buildings. An organization also needs to have a secondary disaster recovery

plan. On 9/11, companies that successfully recovered to their designated secondary site

after completely losing their primary data center quickly came to the realization that all of

their data was now in one location. The uncertainty caused by the terrorist actions soon

had many such organizations realizing that if the events continued or if they did not have

sufficient geographic separation and their recovery site was also incapacitated they had

no further recourse (no secondary plan) and would be out of business.

What about the companies that initially recovered at a third party site with contractual

agreements calling for them to vacate the facility within a specified time period? What

happens when you do not have a primary site to go back to? The threat of terrorism and

the prospect of further regional disasters necessitate asking the question “What is our

secondary disaster recovery plan?”


This has led many companies to seriously consider implementing a three-site BC/DR

strategy. In a nutshell, this strategy entails having two sites within the same

geographic vicinity to facilitate high availability and a third, remote site for disaster

recovery. The major inhibitor to implementing a three-site strategy is telecommunication

costs. As with any major decision, a proper risk vs. cost analysis should be performed.

5. Asynchronous remote mirroring becomes a more attractive option to organizations

insourcing BC/DR and/or increasing the distance between sites. While synchronous

remote mirroring is popular, many organizations are starting to give more serious

consideration to greater distances between sites and, with that, to a strategy

of asynchronous remote mirroring to allow further separation between their primary and

secondary sites.

2.3.7.6 Business continuity/disaster recovery and IT resilience


Business continuity is no longer simply IT Disaster Recovery. Business continuity has evolved

into a management process that relies on each component in the business chain to sustain

operation at all times. Effective business continuity depends on the ability to accomplish five

things. First, the risk of business interruption must be reduced. Second, when an interruption

does occur, a business must be able to stay in business. Third, businesses that want to stay in

business must be able to respond to customers. Fourth, as described earlier, businesses need to

maintain the confidence of the public. Finally, businesses must comply with requirements such

as audits, insurance, health/safety, and regulatory/legislative requirements. In some nations,

government legislation and regulations lay down very specific rules for how organizations must

handle their business processes and data. Some examples are the Basel II rules for the European

banking sector and the United States’ Sarbanes-Oxley Act. These both stipulate that banks must


have a resilient back office infrastructure by this year (2007). Another example is the Health

Insurance Portability and Accountability Act (HIPAA) in the United States. This legislation

determines how the U.S. health care industry must account for and handle patient related data

(IBM Systems Group, 2003).

This ever increasing need for “365x24x7xforever” availability really means that many

businesses are now looking for a greater level of availability covering a wider range of events

and scenarios beyond the ability to recover from a disaster. This broader requirement is called

IT resilience. As stated earlier, IBM has developed a definition for IT Resilience: “the ability to

rapidly adapt and respond to any internal or external opportunity, threat, disruption, or demand

and continue business operations without significant impact.” The two aspects of IT Resilience

that will be focused on in the remainder of this paper are disaster recovery and continuous/high

availability. Two familiar terms that need to be at the forefront of the discussion are (Raften,

Ratte, 2005):

1) Recovery Time Objective (RTO). RTO traditionally refers to the question “How long

can you afford to be without your systems?” In other words, how long can your business

afford to wait for IT services to be resumed following a disaster? How much time is

available to recover the applications and have all critical operations up and running

again?

2) Recovery Point Objective (RPO). RPO is how much data your company is willing to

have to recreate following a disaster. How much data can be lost? What is the

acceptable time difference between the data in your production system and the data at the

recovery site, i.e., what is the actual point in time to which all data is

current? If you have an RPO of less than 24 hours, expect to need some form of


onsite real time mirroring. If your DR plan is dependent upon daily full volume dumps

you probably have an RPO of 24 hours or more.

Some other related terms that have evolved in the past 6 years include degraded operations

objective (DOO), which answers the question “what will be the impact on operations with fewer

data centers?” Network Recovery Objective (NRO) refers to how long it takes to switch over the

network. Recovery distance objective (RDO) refers to how far away the copies of data need to

be located (Raften, Ratte, 2005).

There are several factors involved in determining your RTO and RPO requirements.

Organizations need to consider the cost of some data loss while still maintaining cross-

subsystem/cross-volume data consistency. Maintaining data consistency enables the ability to

perform a database restart, which typically has a duration of seconds to minutes. This cost needs

to be weighed against the cost of no data loss, which will either a) impact production on all

operational errors in addition to disaster recovery failure situations, or b) require a full database

recovery (typically hours to days in duration), since cross-subsystem/cross-volume data consistency

is not maintained during the failure period. The solution that will be chosen is based on a

particular cost curve slope: if I spend a little more, how much faster is disaster recovery? If I

spend a little less, how much slower is disaster recovery (Dhondy, Petersen, 2006)?

In other words, the cost of your business continuity solution is realized by balancing the

equation of how quickly you need to recover your organization’s data versus how much it will

cost the company in terms of lost revenue due to being unable to continue business operations.

The shorter the time period decided on to recover the data to continue business operations, the

higher the costs. It should be obvious that the longer a company is down and unable to process

transactions, the more expensive the outage is going to be for the company, and if the outage is


long enough, survival of the company is doubtful. Figure 23 (Guendert, 2007) takes us back to

some basic economics cost curves. Much like deciding on the price of widgets, and the optimal

quantity to produce, the optimal solution is the intersection point of the two cost curves.

Figure 23. Cost of business continuity solution versus cost of outage
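The trade-off that Figure 23 depicts can be sketched numerically. The following Python fragment is a purely illustrative model; the cost functions, dollar figures, and 72-hour search range are invented for illustration and are not drawn from this dissertation:

```python
# Hypothetical cost model for Figure 23's two curves. The cost of a BC/DR
# solution falls as the target recovery time (RTO) is relaxed, while the
# cost of the outage itself rises the longer the business is down. The
# optimal RTO sits where total cost is lowest, near the curves' intersection.

def solution_cost(rto_hours):
    # Faster recovery demands more expensive technology (mirroring, hot sites).
    return 1_000_000 / (rto_hours + 1)

def outage_cost(rto_hours):
    # Lost revenue grows with every hour of downtime.
    return 50_000 * rto_hours

def total_cost(rto_hours):
    return solution_cost(rto_hours) + outage_cost(rto_hours)

# Evaluate candidate RTOs from 1 to 72 hours and pick the cheapest.
best_rto = min(range(1, 73), key=total_cost)
print(best_rto, round(total_cost(best_rto)))
```

With these made-up curves the cheapest total cost falls at an RTO of roughly three to four hours; real curves would come from an organization's own outage-cost and solution-cost data.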

2.3.7.7 The seven tiers of disaster recovery


Disaster recovery planning as a discipline has existed in the mainframe arena for many years.

Back in 1992, the Automatic Remote Site Recovery project at the SHARE user group, in

conjunction with IBM defined a set of business continuity (at the time they called them disaster

recovery) tier levels. These tiers were defined to quantify, categorize and describe all of the

various methodologies that were in use for successful mainframe business continuity

implementations. The original SHARE/IBM white paper defined six tiers of disaster recovery.

The seventh tier was subsequently added when the technology improvements that resulted in

GDPS came to market (Dhondy, Petersen, 2006). This tier concept continues to be used in the

industry and it serves as a very useful, standardized methodology for describing an


organization’s business continuity capabilities. The seven tiers range from the least expensive to

the most expensive. They make it easy for an organization to define their current service level,

target service level, risk, and target environment. Please note as you go through the seven tiers

that each tier builds upon the foundation of the previous tier. Let’s go through each of the seven

tiers in more detail (Guendert, 2007).

1) Tier 0: No disaster recovery plan and no offsite data. There is no contingency plan, no

backup hardware, no documentation, no saved information and in the event of a disaster, no

more business.

2) Tier 1: Data backup, no hot site. Tier 1 allows the business to back up their data (typically

with tape). The system, application data, and subsystem are dumped to tape and transported to a

secure facility. Depending on how often backups are made, tier 1 businesses are prepared to

accept several days to weeks of data loss. Their backups will be secure off-site (assuming the

tapes made it safe and sound to the site!). All backup data such as archived logs and image

copies that are still onsite will be lost in the event of a disaster. This typically will be 24-48

hours worth of data. Tier 1 is often referred to as the pickup truck access method (PTAM). Tier

1 businesses recover from a disaster by securing a DR site, installing equipment, transporting the

backup tapes from the secure site to the DR site, restoring the system, application infrastructure

and related data, subsystem, and restarting the systems and workload. This typically takes

several days. The cost factors involved include creating backup tapes, transporting the tapes, and

storing the tapes.

3) Tier 2: Data backup/PTAM with a hot site. Tier 2 is essentially the same as Tier 1 except the

organization has secured a DR facility in advance. This will be somewhat faster and have a more

consistent response time when compared with Tier 1. Data loss will be 24-48 hours, but


recovery now will be 24-48 hours as opposed to several days. The cost factors of a Tier 2

solution include ownership of a second IT facility or a DR facility subscription fee. These costs

are on top of the previously mentioned Tier 1 costs.

4) Tier 3: Electronic vaulting. Tier 3 solutions are the same as Tier 2 except with a Tier 3

solution, the organization dumps the backup data electronically to a remotely attached tape

library subsystem. Depending on when the last backup was created, data loss with a Tier 3

solution will be up to 24 hours or less. Also, the electronically vaulted data will typically be

more current than that which is shipped via PTAM. Also, since Tier 3 solutions add higher levels

of automation, there is less data recreation or loss. In addition to the Tier 2 cost factors, Tier 3

cost factors include the telecommunication lines to transmit the backup data, and a dedicated

automated tape library (ATL) subsystem at the remote site.

5) Tier 4: Active secondary site/electronic remote journaling/point in time copies. Tier 4

solutions are used by organizations requiring both greater data currency and faster recovery than

the three lower tiers. Rather than relying exclusively on tape, Tier 4 solutions incorporate disk-based

solutions such as point-in-time copy. This leads to the amount of data loss diminishing to minutes

to hours, and the recovery time being 24 hours or less. Database management system (DBMS)

and transaction manager updates are remotely journaled to the DR site. Cost factors in addition

to the Tier 3 costs include a staffed, running system in the DR site to receive the updates and disk

to store the updates. Examples include peer to peer virtual tape, flash copy functionality, and

IBM Metro/Global copy.

6) Tier 5: Two-site Two-phase commit/transaction integrity. Tier 5 is the same as Tier 4 with

the addition of applications now performing two-phase commit processing between two sites.

This will be used by organizations that have a requirement for consistency of data between


production and recovery data centers. Data loss will be seconds and the recovery time will be

under two hours. Cost factors inherent in Tier 5 solutions include modifying and maintaining the

application to add the two phase commit logic in addition to the Tier 4 cost factors. An example

of a Tier 5 solution is Oracle Data Guard.

7) Tier 6: Zero data loss/remote copy. Tier 6 solutions maintain the highest levels of data

currency. The system, subsystem, and application infrastructure/application data are

mirrored/copied from the production site to a DR site. Tier 6 solutions are used by businesses

with little to no tolerance for data loss and that need to rapidly restore data to applications. To do

this, Tier 6 solutions do not depend on the applications themselves to provide data consistency.

Instead, they will use real time server and storage mirroring. There will be small to zero data

loss if using synchronous remote copy. There will be seconds to minutes of data loss if using

asynchronous remote copy. The recovery window will be the time required to restart the

environment using the secondary DASD if they are truly data consistent. Cost factors in addition

to the Tier 5 factors include the cost of telecommunications lines. Examples of Tier 6 solutions

include Metro Mirror, Global Mirror and SRDF.

8) Tier 7: Zero/little data loss (Geographically Dispersed Parallel Sysplex). GDPS is beyond the

original SHARE-defined DR tiers. GDPS provides total IT business recovery through the

management of systems, processors, and storage across multiple sites (Fries, Jorna, 2006). A

Tier 7 solution includes all of the major components used in a Tier 6 solution with the addition of

integration of complete automation for storage, zSeries servers, software, network and

applications. Tier 7 solutions have a data consistency level above and beyond Tier 6 due to this

automation. GDPS manages more than the physical resources. GDPS also manages the

application environment and the consistency of the data. GDPS provides full data integrity


across volumes, subsystems, OS platforms, and sites while still providing the ability to perform a

normal restart should a site switch occur. This ensures that the duration of the recovery window

is minimized. Since application recovery is also automated, restoration of systems and

applications is also much faster and more reliable.

Table 5. Summary of seven DR tiers

Tier  Description                  Data loss (hours)  Recovery time (hours)
0     No DR plan                   All                N/A
1     PTAM                         24-48              >48
2     PTAM and hot site            24-48              24-48
3     Electronic tape vaulting     <24                <24
4     Active 2nd site              minutes to hours   <24/<2
5     2 site 2 phase commit        seconds            <2
6     Zero data loss/remote copy   none/seconds       <2
7     GDPS                         none/seconds       1-2
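Because each tier builds on the one below it, tier selection can be framed as a lookup against Table 5. The helper below is a hypothetical sketch; the numeric data-loss and recovery figures are rough translations of the table's ranges into single upper bounds, not official SHARE/IBM values:

```python
# Hypothetical tier-selection helper based on Table 5. Each entry is
# (tier, typical max data loss in hours, typical max recovery time in hours);
# the ranged values from the table are collapsed into single upper bounds.
from math import inf

TIERS = [
    (0, inf, inf),   # no DR plan: data loss is unbounded
    (1, 48, 96),     # PTAM: 24-48 hr loss, >48 hr recovery
    (2, 48, 48),     # PTAM + hot site
    (3, 24, 24),     # electronic tape vaulting
    (4, 12, 24),     # active 2nd site: minutes-to-hours loss
    (5, 0.1, 2),     # two-site two-phase commit: seconds of loss
    (6, 0.01, 2),    # zero data loss / remote copy
    (7, 0.01, 2),    # GDPS
]

def minimum_tier(rpo_hours, rto_hours):
    # Walk from the least capable (cheapest) tier upward and return the
    # first one whose typical loss and recovery meet the requirements.
    for tier, loss, recovery in TIERS:
        if loss <= rpo_hours and recovery <= rto_hours:
            return tier
    return None  # no tier in this simplified table satisfies the requirement

print(minimum_tier(24, 24))
```

For example, a 24-hour RPO and 24-hour RTO lands at Tier 3 (electronic vaulting), while seconds of tolerable loss with a two-hour RTO pushes the organization up to Tier 6 or 7.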

2.3.8 FICON cascading benefits

Cascaded FICON delivers many of the same benefits of open systems storage area networks

(SANs) to the mainframe space. Cascaded FICON allows for simpler infrastructure

management, lowered infrastructure cost of ownership, and higher data availability. This higher

data availability is important in delivering a more robust enterprise disaster recovery strategy.

Further benefits are realized when the ISLs connect directors/switches in two or more locations

and/or are extended over long distances. Figure 24 shows a non-cascaded two-site environment

(Artis, Guendert, 2006).


Figure 24. Two site non cascaded FICON environment

In the configuration above, all hosts have access to all of the disk and tape subsystems at both

locations. The host channels at one location are extended to the FICON directors at the other

location to allow for cross-site storage access. If each line represents two FICON channels, then

the above configuration would need a total of sixteen (16) extended links. These links would

only be utilized to the extent that the host has activity to the remote devices.

The most obvious benefit when comparing the Figure 24 configuration with one that is cascaded

is the reduction in the number of links across the WAN. The figure below shows a cascaded,

two-site FICON environment.


Figure 25. Two site cascaded FICON environment

In this configuration, if each line represents two channels, only four (4) extended links are

required. Since FICON is a packet-switched protocol (as opposed to the circuit-switched ESCON

protocol), multiple devices can share the ISLs, and multiple I/Os can be processed across the

ISLs at the same time. This allows for the reduction of links between sites and allows for more

efficient utilization of the links in place. In addition, ISLs can be added as the environment

grows and traffic patterns dictate.
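The link arithmetic behind the two figures can be generalized in a few lines. This is an illustrative sketch; the function names and the path/ISL counts are assumptions matching the example figures, not anything mandated by the FICON protocol:

```python
# Extended-link counts for the two topologies. In the non-cascaded design
# every channel group crossing sites needs its own extended link; in the
# cascaded design only the director-to-director ISLs cross the WAN, since
# packet switching lets many concurrent I/Os share each ISL.

def non_cascaded_links(cross_site_paths, channels_per_path):
    # One extended link per channel in every cross-site path.
    return cross_site_paths * channels_per_path

def cascaded_links(isl_groups, channels_per_group):
    # Only the shared ISLs span the WAN.
    return isl_groups * channels_per_group

# The figures' example: eight cross-site paths of two channels each,
# versus two ISL groups of two channels each.
print(non_cascaded_links(8, 2), cascaded_links(2, 2))
```

This reproduces the sixteen extended links of the non-cascaded environment against the four of the cascaded one.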

This is the key way in which a cascaded FICON implementation can reduce the cost of the

enterprise architecture. As can be seen in both figures, the cabling schema for both intersite and

intrasite has been simplified. Fewer intrasite cables translate into lower cabling hardware and

management costs. It also reduces the number of FICON adapters, director ports, and host

channel card ports required, thus lowering the connectivity cost for mainframes and storage

devices as well. In the second figure, the sharing of links between the two sites reduces the


number of physical channels going between sites, thereby lowering the cost by

consolidating channels and the number of director ports. Moreover, the faster the channel

speeds between sites, the greater the intersite cost savings from this consolidation. With

4 Gbps FICON and 10 Gbps FICON available, this becomes even more attractive (Guendert,

2005c).

Another benefit to this approach, especially over long distances, is that the FICON director

typically has many more buffer-credits per port than do the processor and the disk/tape

subsystem cards. More buffer-credits allow for a link to be extended farther distances without

significantly impacting response times to the host.

2.3.8.1 Optimizing use of storage resources


ESCON limits the amount of terabytes (TB) that a customer can realistically have in a single

DASD array because of device addressing limitations. Rather than filling a frame to capacity,

additional frames need to be purchased, wasting capacity. For example, running Mod 3 volumes

in an ESCON environment typically will lead to running out of available addresses between 3.3

and 3.5 TB. This is significant because it requires more disk array footprints at each site, and:

a. The technology of DASD arrays places a limit on the number of control unit (CU) ports

inside, and there is a limit of 8 links per LCU. These 8 links can only perform so fast.

b. This also limits the I/O density (I/Os/GB/sec) into and out of the frame, placing a cap on the

amount of disk space the frame can support while still supplying reasonable I/O response times.

Cascaded FICON lets customers fully utilize their old disk arrays, preventing them from having

to “throttle back” I/O loads, and allowing them to make the most efficient use of technologies such as Parallel

Access Volumes (PAVs). Additionally, a cascaded FICON environment requires fewer fibre

adapters on storage devices and mainframes (Artis, Guendert, 2006).


Cascaded FICON also allows for TCO savings in an installation’s mainframe tape/virtual tape

environment. FICON directors are “5 nines” devices. The typical enterprise class tape drive is

only 2, maybe 3 nines at best due to all of the moving mechanical parts. A FICON port on a

FICON director typically costs twice as much as a FICON port on a FICON switch. Granted, the

FICON switch is not a “5 nines” device, while the FICON director is. However, does it make

sense to connect “3 nines” tape drives to “5 nines” directors, when the best reliability achieved

will be that of the lowest common denominator (the tape drive)? So, depending on your exact

configuration, it can make more financial sense to connect tape drives to FICON switches

cascaded to FICON directors, thus saving the more expensive director ports for host and/or

DASD connectivity.

2.3.9 Cascaded FICON performance factors

In their March 2003 IBM white paper on Cascaded FICON director performance considerations,

Basener and Cronin listed 7 main factors affecting the performance of a cascaded FICON

director configuration (Basener, Cronin, 2003):

1. The number of ISLs between the two cascaded FICON directors and the routing of traffic

across ISLs.

2. The number of FICON/FICON Express channels whose traffic is being routed across the

ISLs.

3. The ISL link speed.

4. Contention for director ports associated with the ISLs.

5. The nature of the I/O workload (I/O rates, block sizes, use of data chaining, and

read/write ratio).


6. The distances of the paths between the components of the configuration (the FICON

channel links from processor(s) to the first director, the ISLs between directors, and the

links from the second director to the storage control unit ports).

7. The number of switch port buffer to buffer credits.

The last point, the number of buffer to buffer credits and their management, is

typically the one examined most carefully, as well as the one that is most often

misunderstood. As such, it deserves more detailed attention.

2.3.9.1 Buffer to buffer credit management: an oxymoron?


The introduction of the FICON I/O protocol to the mainframe I/O subsystem ushered in a new

era in our ability to process data rapidly and efficiently. The FICON protocol is significantly

different than its predecessor ESCON protocol. And as a result of two main changes that FICON

made to the mainframe channel I/O infrastructure, the requirements for a new RMF record came

into being. The first significant change was that unlike FICON, ESCON did not use buffer

credits to account for packet delivery. The second significant change was the introduction of

“FICON cascading” which was not possible with ESCON.

While a fair amount of information is readily available, buffer to buffer credits (BB credits)

and their management in a FICON environment still appears to be one of the most commonly

misunderstood concepts today. And truth be told, the phrase “buffer to buffer credit

management” sometimes appears to be an oxymoron. Given their impact on performance over

distances in cascaded FICON environments, this is something that needs to be better addressed.

At present, there is no real way to manage/track BB credits being used. At initial configuration a

rule of thumb is used for allocating them and for management we keep our fingers

crossed. Similar to how the end user manages dynamic PAVs by completely overkilling the


number of aliases assigned to a base address, the typical FICON shop completely overkills the

number of BB credits assigned for long distance traffic. Just as PAV overkill can lead to

configuration issues due to addressing constraints, BB credit overkill can lead to director

configuration issues which oftentimes require outages to resolve. Mechanisms for detecting BB

credit starvation in a FICON environment are extremely limited at best.

To get a good basic understanding of BB credits, a brief review of the concept of flow control

is in order.

2.3.9.2 Packet Flow and Credits


The fundamental objective of flow control is to prevent a transmitter from over-running a

receiver by allowing the receiver to pace the transmitter, managing each I/O as a unique instance.

At extended distances, pacing signal delays can result in degraded performance. Buffer-to-buffer

credit flow control is employed to transmit frames from the transmitter to the receiver and pacing

signals back from the receiver to the transmitter. The basic information carrier in the fibre

channel protocol is the frame. Other than ordered sets, which are used for communication of

low-level link conditions, all information is contained within the frames. When discussing the

concept of frames, a good analogy to use is that of an envelope: When sending a letter via the

United States Postal Service, the letter is encapsulated within an envelope. When sending data

via a FICON network, the data is encapsulated within a frame (Kembel, 2003).

To prevent a target device (either host or storage) from being sent more frames than it has

buffer memory to store (overrun), the fibre channel architecture provides a flow control

mechanism based on a system of credits. Each credit represents the ability of the receiver to

accept a frame. Simply stated, a transmitter cannot send more frames to a receiver than the

receiver can store in its buffer memory. Once the transmitter exhausts the frame count of the


receiver, it must wait for the receiver to credit-back frames to the transmitter (Artis, Guendert,

2006). A good analogy would be a pre-paid calling card: there is a certain number of minutes

to use, and one can talk until there is no more time (minutes) on the card.
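The credit mechanism described above can be sketched as a toy model. This is not FICON hardware behavior, just an illustration of the pacing rule: a transmitter holding zero credits must stop until the receiver returns one.

```python
# Toy model of buffer-to-buffer credit pacing: the transmitter may only send
# while it holds credits, and each acknowledgment from the receiver (an
# R_RDY primitive in fibre channel) returns exactly one credit.
from collections import deque

class Link:
    def __init__(self, bb_credits):
        self.credits = bb_credits   # negotiated at login (FLOGI/PLOGI)
        self.in_flight = deque()    # frames sent but not yet acknowledged

    def send(self, frame):
        if self.credits == 0:
            return False            # out of credits: transmitter must wait
        self.credits -= 1
        self.in_flight.append(frame)
        return True

    def receiver_ack(self):
        # Receiver drained one frame from its buffer; a credit flows back.
        self.in_flight.popleft()
        self.credits += 1

link = Link(bb_credits=2)
print(link.send("f1"), link.send("f2"), link.send("f3"))  # third send is refused
link.receiver_ack()
print(link.send("f3"))
```

The refused third send is exactly the "no more minutes on the card" situation; the returned credit is the receiver buying the transmitter more time.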

Flow control exists at both the physical and logical level. The physical level is called buffer-

to-buffer flow control and manages the flow of frames between transmitters and receivers. The

logical level is called end-to-end flow control and it manages the flow of a logical operation

between two end nodes. It is important to note that a single end-to-end operation may have made

multiple transmitter-to-receiver pair hops (buffer-to-buffer frame transmissions) to reach its

destination. However, the presence of intervening directors and/or ISLs is transparent to end-to-

end flow control. Since buffer-to-buffer flow control is the more crucial subject in a cascaded

FICON environment, the following section provides a more in-depth discussion.

2.3.9.3 Buffer-to-Buffer Flow Control


Buffer-to-buffer flow control is flow control between two optically adjacent ports in the I/O path

(i.e., transmission control over individual network links). Each fibre channel port has dedicated

sets of hardware buffers for send and receive operations. These buffers are more commonly

known as buffer-to-buffer credits (bb_credits) (Kembel, 2003). The number of available

bb_credits defines the maximum amount of data that can be transmitted prior to an

acknowledgment from the receiver (Guendert, 2005d). BB_credits are physical memory

resources incorporated in the Application Specific Integrated Circuit (ASIC) that manages the

port. It is important to note that these memory resources are limited. Moreover, the cost of the

ASICs increases as a function of the size of the memory resource. One important aspect of fibre

channel is that adjacent nodes do not have to have the same number of credits. Rather, adjacent


ports communicate with each other during fabric login (FLOGI) and port login (PLOGI) to

determine the number of credits available for the send and receive ports on each node.

A BB_credit can transport a 2,112 byte frame of data. The FICON FC-SB-2 and FC-SB-3

ULPs employ 64 bytes of this frame for addressing and control, leaving 2K available for z/OS

data. In the event that a 2 Gbit transmitter is sending full 2,112 byte frames, one credit is

required for every 1 KM of fibre between the sender and receiver. Unfortunately, z/OS disk

workloads rarely produce full frames. For a 4K transfer, the average frame size is 819 bytes;

because of this decreased average frame size, five credits would be required per KM of distance.

It is important to note that increasing the fibre speed also increases the number of credits

required to support a given distance: every time the link speed doubles, the number of required

bb_credits doubles to avoid transmission delays over a specified distance.
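As a rough illustration of these relationships, the following Python sketch estimates the credits needed to keep a link streaming from the round-trip propagation delay and the serialization time of one frame. It is a first-order model under stated assumptions (roughly 5 microseconds of propagation delay per kilometer of fibre, 8b/10b encoding); published rules of thumb, such as the five credits per KM cited above for 819-byte frames, build in additional safety margin.

```python
import math

def bb_credits_needed(distance_km, link_gbps, avg_frame_bytes, margin=1.0):
    """First-order estimate of the bb_credits needed to keep a link streaming.

    To avoid starvation, the transmitter needs enough credits to cover the
    round-trip time of the link divided by the time it takes to serialize
    one frame.  `margin` is a hypothetical safety factor; real planning
    tables add allowance for R_RDY turnaround and workload burstiness.
    """
    fibre_delay_s_per_km = 5e-6                 # ~5 microseconds per km in glass
    bytes_per_second = link_gbps * 1e9 / 10     # 8b/10b encoding: 10 bits per byte
    serialization_s = avg_frame_bytes / bytes_per_second
    round_trip_s = 2 * distance_km * fibre_delay_s_per_km
    return math.ceil(margin * round_trip_s / serialization_s)

# A 50 KM cascaded link at 2 Gbps with full 2,112-byte frames,
# then with 819-byte average frames, then at 4 Gbps:
print(bb_credits_needed(50, 2, 2112))   # full frames need the fewest credits
print(bb_credits_needed(50, 2, 819))    # smaller frames need more credits
print(bb_credits_needed(50, 4, 819))    # doubling link speed doubles the need
```

Note how shrinking the average frame size or doubling the link speed each raises the credit requirement, which is the behavior described above.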

BB_credits are used by Class 2 and Class 3 service and rely on the receiver sending back

receiver-readies (R_RDY) to the transmitter. As was previously discussed, node pairs

communicate their number of credits available during FLOGI/PLOGI. This value is used by the

transmitter to track the consumption of receive buffers and pace transmissions if necessary.

FICON directors track the available bb_credits in the following manner:

• before any data frames are sent, the transmitter sets a counter equal to the BB_credit

value communicated by its receiver during FLOGI,

• for each data frame sent by the transmitter, the counter is decremented by one,

• upon receipt of a data frame, the receiver sends a status frame (R_RDY) to the transmitter

indicating that the data frame was received and the buffer is ready to receive another data

frame, and

• for each R_RDY received by the transmitter, the counter is incremented by one.


As long as the transmitter count is a non-zero value, the transmitter is free to continue sending

data. This mechanism allows the transmitter to have a maximum number of data frames in

transit equal to the value of BB_Credit, with an inspection of the transmitter counter indicating

the number of available receive buffers. The flow of frame transmission between adjacent ports

is regulated by the receiving port's presentation of R_RDYs: the rate of transmission is paced by

the receiving port based on the availability of buffers to hold received frames, and buffer-to-buffer

credit flow control has no end-to-end component. In the counting scheme above, the sender

replenishes its credit count by 1 for each R_RDY received; the initial value of BB_Credit must be

non-zero. It should be noted that the FC-FS specification allows the transmitter's counter to be

initialized either at zero or at the value BB_Credit, and to count either up or down on frame

transmission. Different switch/director vendors may handle this with either method, and the

counting is handled accordingly.
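The counting scheme just described can be modeled in a few lines of Python. This is an illustrative toy using the count-down convention from the list above; the class and its names are not vendor code.

```python
class BBCreditTransmitter:
    """Toy model of buffer-to-buffer credit pacing: the counter starts at
    the BB_Credit value advertised by the receiver at FLOGI/PLOGI, drops
    by one per frame sent, and is replenished by one per R_RDY received."""

    def __init__(self, advertised_credits):
        self.credits = advertised_credits   # learned during FLOGI/PLOGI

    def can_send(self):
        return self.credits > 0

    def send_frame(self):
        if not self.can_send():
            # In real hardware the port simply waits; this models the stall.
            raise RuntimeError("credits exhausted: wait for an R_RDY")
        self.credits -= 1

    def receive_r_rdy(self):
        self.credits += 1   # receiver freed a buffer

tx = BBCreditTransmitter(advertised_credits=2)
tx.send_frame()
tx.send_frame()
print(tx.can_send())    # credits exhausted, transmitter must pause
tx.receive_r_rdy()
print(tx.can_send())    # credit returned, transmission may resume
```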

2.3.9.4 Implications to Asset Deployment


There are four implications to asset deployment to consider when planning BB-credit allocations

(Guendert, 2005c):

1. For write intensive applications across an ISL (tape and disk replication) the BB_Credit

value advertised by the E_Port on the target side gates performance. In other words, the

number of BB Credits on the target side cascaded FICON director is the major factor.

2. For read intensive applications across an ISL (regular transactions) the BB_Credit value

advertised by the E_Port on the host side gates performance. In other words, the number

of BB Credits at the local location is the major factor.

3. Two ports do not negotiate BB_Credit down to the lowest common value. A receiver

simply “advertises” BB_credits to a linked transmitter.


4. The depletion of BB_credits at any point between an initiator and a target will gate

overall throughput.

2.3.9.5 Configuring BB credit allocations on FICON directors


There have been two FICON director/switch architectures when it comes to BB credit allocation.

The first, which was prevalent on early FICON directors such as the Inrange/CNT FC9000 and

McDATA 6064 had a range of BB credits that could be assigned to each individual port. Each

port on a port card had a range of BB credits (for example 4-120) that could be assigned to it

during the director configuration process. Simple rules of thumb on a table/matrix were used to

determine the number of BB_credits to use (Guendert, 2005c). Unfortunately, these tables did

not consider workload characteristics, or z/OS particulars. Since changing the BB credit

allocation was an off-line operation, most installations would figure out what they needed, set it

and (assuming it was correct) be done with it. Best practice was typically to max out BB credits

used on ports being used for distance traffic since each port could theoretically be set to the

maximum available BB credits without penalizing other ports on the port card. Some

installations would even max out the BB credit allocation on short distance ports “so they would

not have to worry about it”. However, this could cause other kinds of problems in recovery

scenarios.

The second FICON director/switch architecture has a pool of available BB credits for each

port card in the director. This is the architecture that is on the market today in products available

from Brocade and Cisco. Each port on the port card will have a maximum setting. However,

since there is a large pool of BB credits that must be shared amongst all ports on a port card,

better allocation planning must take place than what an installation could do in the past. It is no

longer enough to simply use distance rules of thumb. Workload characteristics of traffic need to


be better understood. Also, as 4 Gbps FICON Express4 becomes prevalent, and 8 Gbps FICON

Express8 follows, intra-data center distances become something that must be looked at when

deciding how to allocate the pool of available BB credits. It no longer is enough to simply say

that a port is internal to the data center or campus and assign it the minimum number of credits.

This pooled architecture, and the careful capacity planning it necessitates, make it more critical

than ever to have a way to track actual BB credit usage in a cascaded FICON environment.

Simply employing a “fire and forget” approach to BB credit allocation is no longer optimal for

ensuring performance over distance (Artis, Guendert, 2006).
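To make the planning problem concrete, the following Python sketch divides a hypothetical port card's shared credit pool across its ports in proportion to each port's estimated demand, with a small guaranteed floor per port. The policy, pool size, and demand figures are invented for illustration; real directors enforce per-port maximums and vendor-specific allocation rules.

```python
def allocate_pool(pool, demands, floor=4):
    """Split a port card's shared BB_credit pool across ports in proportion
    to estimated per-port demand (e.g. derived from link distance, speed,
    and average frame size), guaranteeing every port a minimum floor."""
    total_demand = sum(demands.values())
    allocation = {port: floor for port in demands}
    spare = pool - floor * len(demands)
    for port, demand in demands.items():
        allocation[port] += int(spare * demand / total_demand)
    return allocation

# Hypothetical card: two local ports and one long-distance cascaded ISL port.
demands = {"port0": 5, "port1": 5, "isl_port": 245}
print(allocate_pool(pool=300, demands=demands))
```

The point of the sketch is that a distance rule of thumb alone no longer suffices: the ISL port's demand dominates the pool, so the local ports cannot simply be maxed out "so nobody has to worry about it."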

2.3.9.6 Exhaustion of BB credits and frame pacing delay


Similar to the ESCON directors that preceded them, FICON directors and switches have a

feature called Control Unit Port (CUP). Among the many functions of the CUP feature is an

ability to provide host control functions such as blocking and unblocking ports, safe switching,

and in-band host communication functions such as port monitoring and error reporting. Enabling

CUP on FICON directors while also enabling RMF 74 subtype 7 (RMF 74-7) records for your

z/OS system, yields a new RMF report called the “FICON Director Activity Report”. Data is

collected for each RMF interval if FCD is specified in your ERBRMFnn parmlib member (Artis,

Guendert, 2006). RMF will format one of these reports per interval per each FICON director that

has CUP enabled and the parmlib specified. This RMF report is often overlooked but contains

very meaningful data concerning FICON I/O performance—in particular, frame pacing delay. It

is extremely important to note that frame pacing delay is the only available indication of a

BB_credit starvation issue on a given port.

Frame pacing delay has been around since fibre channel SAN was first implemented in the

late 1990s. But until the increased use of cascaded FICON, its relevance in the mainframe space


has been completely overlooked. If frame pacing delay is occurring then the buffer credits have

reached zero on a port for an interval of 2.5 microseconds and no more data can be transmitted

until a credit has been added back to the buffer credit pool for that port. Frame pacing delay

causes unpredictable performance delays. These delays generally result in elongated FICON

connect time and/or elongated PEND times that show up on the volumes attached to these links.

It is important to note that only when using switched FICON and only when CUP is enabled on

the FICON switching device(s) can RMF provide the report that provides frame pacing delay

information. Only the RMF 74-7 FICON Director Activity Report provides FICON frame pacing

delay information. This information is not available from any other source today (Artis,

Guendert, 2006).

Figure 26. Sample FICON director activity report (RMF 74-7)

The fourth column from the left in figure 26 is the column where frame pacing delay is

reported. Any number other than zero in this column is an indication of frame pacing delay

occurring. If there is a non-zero number it reflects the number of times that I/O was delayed for

2.5 microseconds or longer due to buffer credits falling to zero. The figure above shows the ideal

situation: zeros down the entire column indicating that enough buffer credits are always available

to transfer FICON frames.



Figure 27. Frame pacing delay indications in RMF 74-7 record

In figure 27, on the FICON Director Activity Report for switch ID 6E, an M6140 director,

there were at least three instances when port 4, a cascaded link, suffered frame pacing delays

during this RMF reporting interval. This would have resulted in unpredictable performance

across this cascaded link during this period of time (Artis, Guendert, 2006).
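Since any non-zero value in the frame pacing delay column is an indication of credit starvation, scanning the report reduces to a simple filter. The Python sketch below assumes the column values have already been extracted into a mapping of port address to pacing count; the port addresses shown are hypothetical.

```python
def ports_with_pacing_delay(pacing_counts):
    """Return the ports whose RMF 74-7 frame pacing delay count is non-zero.

    Each non-zero count is the number of times I/O on that port was delayed
    for 2.5 microseconds or longer because buffer credits fell to zero."""
    return {port: count for port, count in pacing_counts.items() if count > 0}

# Hypothetical interval for a director: port 04 (a cascaded ISL) paced 3 times.
sample = {"04": 3, "05": 0, "06": 0}
print(ports_with_pacing_delay(sample))   # ports needing BB_credit attention
```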

2.3.9.7 What is the difference between frame pacing and frame latency?
Frame pacing is an FC4 application data exchange level measurement and/or throttling

mechanism. It uses buffer credits to provide a flow control mechanism for FICON to assure

delivery of data across the FICON fabric. When all buffer credits for a port are exhausted a

frame pacing delay can occur. Frame latency, on the other hand, is a frame delivery level

measurement. It is somewhat akin to measuring frame friction. Each element that handles the

frame contributes to this latency measurement (CHPID port, switch/Director, storage port

adapter, link distance, etc.). Frame latency is the average amount of time it takes to deliver a

frame from the source port to the destination port (Guendert, Lytle, 2007).


2.3.9.8 How to prevent frame pacing delay?


If it is a long distance link that is running out of buffer credits, then it might be possible to enable

additional buffer credits for that link in an attempt to provide an adequate pool of buffer credits

for the frames being delivered over that link. See figure 28 below (Guendert, Lytle, 2007).

Figure 28. Frame size, link speed and distance determine buffer credit requirements

Keep in mind that tape workloads will generally have larger payloads in a FICON frame

while DASD workloads might have much smaller frame payloads. The average payload size for

DASD is often about 800-1500 bytes. By using the FICON Director Activity reports, an

enterprise gains an understanding of typical average read and write frame sizes on a port by port

basis.

The average read frame size and the average write frame size for the frame traffic on each

port are very useful for determining how many buffer credits are needed for a long distance link

or possibly to solve a local frame pacing delay issue.

2.3.9.9 How can things be improved?


It would appear that even with the new FICON directors and the ability to assign BB_credits to

each port from a pool of available credits on each port card, the end user is still stuck. The best

the end user can do is hope they allocate correctly, then monitor their RMF 74-7 report for

indications of frame pacing delay showing they are out of BB_credits. They can then go ahead


and make the necessary adjustments to their BB_credit allocations to crucial ports such as the

ISL ports on either end of a cascaded link. However, any adjustments made will merely be a

better guesstimate, since the exact number being used is not indicated. Imagine driving a car

without a fuel gauge, and having to rely on EPA miles per gallon estimates so you could

calculate how many miles you could drive on a full tank of gas. Of course, this estimate would

not reflect driving characteristics. And in the end, the only accurate indication you get that you

are out of gas is a coughing engine that stops running.

Why do we not yet have the capability, either in RMF, or in the FICON director management

software, to have BB credit usage actively monitored and counted? What we have then is a

situation similar to what we have with monitoring open exchanges. In 2004 Dr. H. Pat Artis

wrote a paper that discussed open exchanges and made a sound case for why open exchange

management is crucial in a FICON environment. Dr. Artis proved the correlation between

response/service time skyrocketing and open exchange saturation, demonstrated how channel

busy and bus busy metrics are not correlated to response/service time, and recommended a range

of open exchanges to use for managing a FICON environment. Since RMF does not report open

exchange counts, Dr. Artis derived a formula using z/OS response time metrics to calculate open

exchanges. Commercial software such as MXG and RMF Magic use this to help users better

manage their FICON environments (Artis, Ross, 2004).

Similar to open exchanges, the data needed to calculate BB_credit usage is currently available

in RMF. All that would be needed is for some mathematical calculations to be performed. As an area

of future exploration, the RMF 74-7 record (FICON Director Activity Report) should be

updated with two additional BB_credit usage fields, and the appropriate interfaces added between the


FICON directors and CUP code. Director management software could also be enhanced to

include these two valuable metrics.

2.3.9.10 Dynamic Allocation of BB_credits


The techniques used in BB_credit allocation are very similar in concept to the early technique

used in managing parallel access volume (PAV) aliases. The simple approach used was called

static assignment. With static assignment, the storage subsystem utility was used to statically

assign alias addresses to base addresses. While a generous static assignment policy could help to

ensure sufficient performance for a base address, it resulted in ineffective utilization of the alias

addresses (since nobody knew the optimal number of aliases for a given base) and put pressure

on the 64K device address limit. Users would tend to assign an equal number of

addresses to each base, often taking a very conservative approach resulting in PAV alias overkill.

This sounds a lot like what we currently have with BB_credit allocation, in particular with older

generation FICON directors.

An effort to address this was undertaken by IBM, leading to IBM providing workload

manager (WLM) support for dynamic alias assignment. Here, WLM was allowed to

dynamically reassign aliases from a pool to base addresses to meet workload goals. This could

be somewhat lethargic, so users of dynamic PAVs still tend to over-configure aliases and are

pushing the 64K device address limitation. Users face what Dr. Artis refers to as the PAV

performance paradox (Artis, 2006a): they need the performance of PAVs, tend to over-configure

alias addresses, and are close to exhausting the z/OS device addressing limit.

Perhaps a similar dynamic allocation of BB_credits, in particular for new FICON director

architectures having pools of assignable credits on each port card, would be a very beneficial

enhancement for end users. Perhaps an interface between the FICON directors and WLM could


be developed to allow WLM to dynamically assign BB_credits. At the same time, since quality

of service (QoS) is an emerging topic of importance for FICON, perhaps an interface could be

developed between the FICON directors and WLM for functionality with dynamic channel path

management and priority I/O queuing to enable true end-to-end QoS.

In October 2006, IBM announced HyperPAVs for the DS8000 storage subsystem family to

address the PAV performance paradox. HyperPAVs increase the agility of the alias assignment

algorithm. In a nutshell, the primary difference from traditional PAV alias management

is that aliases are dynamically assigned to individual I/Os by the z/OS I/O supervisor (IOS)

rather than being statically or dynamically assigned to a base address by WLM. The RMF 78-3

(I/O queuing) record has also been expanded. If a similar feature/functionality and interface

could be developed between FICON directors and the z/OS IOS, we then would have the

ultimate in BB_credit allocation: true dynamic allocation of BB_credits on an individual I/O

basis.

2.3.9.11 Closing thoughts on buffer to buffer credits


This section has reviewed flow control, basics of buffer to buffer credit theory, basics of frame

pacing delay, current buffer to buffer credit allocations methods and presented some proposals

for a) counting BB_credit usage and b) enhancing how BB_credits are allocated and managed.

Current methods of blindly allocating credits, and of finding out after the fact via an obscure

report that you do not have enough, are not sufficient. “BB_credit management” can be an

oxymoron in 2007. It does not have to be, nor should it be, that way.

2.3.10 Quality of Service (QoS) and cascaded FICON

Not all data is created equal, nor does all data require equal treatment. Some applications and

their associated data require the absolute best performance, while others do not. A FICON


storage network has limited resources; therefore, providing the highest performance to all

applications may not be possible. This in turn leads to decisions about Quality of Service (QoS)

levels, QoS monitoring, and QoS enforcement. Quality of Service (QoS), particularly for

cascaded FICON ISLs is growing more and more important to FICON customers. There have

been different thoughts on how to go about achieving FICON storage network QoS (actually

storage network in general, not just limited to FICON and mainframe). These different thought

camps range from implementing a strictly fabric-based QoS using Fibre Channel Class 4 Class of

Service, to implementations of QoS based on the Infiniband QoS concepts of Virtual Lanes

(VLs), to true end-to-end (host-to-device) QoS based on techniques originally used in IBM’s

Workload Manager and Intelligent Resource Director back with ESCON. The remainder of this

section will discuss each of these ideas in turn. First, a working definition and background on

QoS is in order.

2.3.10.1 Defining quality and service


Quality and service have different meanings taken in different contexts. A good, general

definition of service is: the expected behavior or outcome from a system. Therefore, Quality

of Service (QoS) is the degree to which the expected outcome is realized. Quantifying and

measuring QoS also becomes a context-dependent task, as it means different things to different

people. For example, to the casual internet user browsing a news site, QoS may simply mean the

responsiveness of the web server to his/her page accesses. On the other hand, to a systems

administrator, QoS may mean the throughput and availability of the web server, the network

connection, storage subsystem, or some combination of one or more of the above. To achieve a

desired level of service, all the components on the end-to-end path must be able to deliver that


level of service. Simitci defined some additional key concepts concerning QoS in storage

network architectures (Simitci, 2003):

1) QoS Architecture-The system must include the structures and interfaces to request,

configure, and measure QoS. Furthermore, if the system’s peak performance is below the

desired level, no amount of management will be able to provide QoS.

2) Admissions Policy-This is probably one of the most critical aspects of a QoS system.

When a system accepts (admits) a request for service, it must make certain that resources are

available to achieve the requested QoS level. If there are not enough resources, or if

using the existing resources will hamper the QoS guarantees of previously admitted

requests, the new arrivals should be rejected.

3) Resource Reservation-After a request is admitted to the system, sufficient system

resources must be reserved to provide QoS to that request.

4) Class of Service (CoS)-Even though CoS is sometimes used interchangeably (and thus

incorrectly) with QoS, technically it has a different meaning: CoS defines the type of

service and does not indicate how well the service is performed. Simitci uses the

example of a Fibre Channel (FC) class of service defining message delivery guarantees-

far different than any QoS guarantees of throughput, response time, etc.

In their Proceedings paper for the 2001 ACM Conference on E-Commerce, Menasce, Barbara,

and Dodge developed an equation to compute the ratio of the QoS deficiency to the desired level:

QoS Deviation = (Achieved QoS - Desired QoS) / Desired QoS.


In this equation, if the desired QoS level is greater than the achieved QoS level, obviously you

have a negative ratio. Likewise, a positive deviation denotes a QoS better than the one desired

(Barbara, Dodge, Menasce, 2001).
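The deviation ratio can be computed directly; a minimal sketch:

```python
def qos_deviation(achieved_qos, desired_qos):
    """QoS deviation per Menasce, Barbara, and Dodge (2001): negative when
    service falls short of the desired level, positive when it exceeds it."""
    return (achieved_qos - desired_qos) / desired_qos

print(qos_deviation(80.0, 100.0))    # achieved below desired: negative ratio
print(qos_deviation(120.0, 100.0))   # achieved above desired: positive ratio
```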

2.3.10.2 Storage and QoS


The reality is that different data and different applications have varying performance needs, so

trade-offs in performance are allowed. Some applications will have high QoS priorities, and others

can be delayed to make way for higher priority jobs. If all applications and data required the

absolute best performance all the time, QoS would not be achievable!

QoS in storage and storage networks is simply an optimization problem. You achieve

optimization by trading one performance metric for another. A classic example of such a

tradeoff is between throughput and response time. Increasing queue length (number of active

jobs) increases both the response time and the throughput. If an application requires low,

bounded response times, it must accept low, bounded throughput values.

In general, storage subsystems cannot make QoS guarantees. They are constructed to accept

and queue all arriving I/O commands. Certain performance tuning techniques, such as queue

prioritization, assigning more buffers/cache space and/or DASD spindles to jobs with higher

priority, can help you indirectly achieve partial QoS. But a partial QoS optimization is merely a

best effort service level that does not give any explicit QoS guarantees.

While IP networks and Infiniband have extensive QoS mechanisms and standards, fibre

channel fabrics do not yet have the same level of detail for QoS mechanisms. The Storage

Networking Industry Association (SNIA) has a formal definition for QoS in their SNIA

Dictionary of Storage Networking Terminology: “QoS is a technique for managing computer

system resources such as bandwidth by specifying user visible parameters such as message


delivery time. Policy rules are used to describe the operation of network elements to make these

guarantees. Relevant standards for QoS in the IETF are the RSVP (Resource Reservation

Protocol) and COPS (Common Open Policy Service) protocol. RSVP allows for the reservation

of bandwidth in advance, while COPS allows routers and switches to obtain policy rules from a

server.” In a nutshell, in a storage network, QoS is a set of metrics that predict the behavior,

reliability, speed, and latency of a path.

Many existing claims of QoS amount to monitoring and configuration for best-effort

performance, or they are not true end-to-end implementations of QoS. The next section will

describe some standards work that was started in the Fibre Channel T11 committee and was not

completed. Following that, the Infiniband mechanism for QoS will be discussed, followed by a

discussion of how Brocade has implemented a version of QoS similar to Infiniband’s

methodology.

2.3.10.3 Fibre channel class 4 class of service (CoS)


Some initial QoS efforts were made in the T11 Standards group to develop a QoS standard for

fibre channel. It essentially was written as a Class of Service. It was very complex. A team of

consultants worked with the major switch vendors to develop a series of proposals that impacted

several different standards. A summary of Class 4 follows below. It was never formally adopted

or implemented. The discussion of Class 4 is included to reinforce the point that QoS is a

complex topic, and not just a marketing buzzword.

A fibre channel class of service can be defined as a frame delivery scheme exhibiting a

specified set of delivery characteristics and attributes (Kembel, 2003). ESCON and FICON are

both part of the fibre channel standard and class of service specifications.


1) Class 1: A class of service providing a dedicated connection between two ports with

confirmed delivery or notification of non-delivery.

2) Class 2: A class of service providing a frame switching service between two ports with

confirmed delivery or notification of non-deliverability.

3) Class 3: A class of service providing a frame switching datagram service between two

ports or a multicast service between a multicast originator and one or more multicast

recipients.

4) Class 4: A class of service providing a fractional bandwidth virtual circuit between two

ports with confirmed delivery or notification of non-deliverability.

Class 4 is frequently referred to as a “virtual circuit” class of service. It works to provide

better quality of service guarantees for bandwidth and latency than Class 2 or Class 3 allow,

while providing more flexibility than Class 1 allows. Similar to Class 1, it is a type of dedicated

connection service. Class 4 is a connection-oriented class of service with confirmation of

delivery (acknowledgement) or notification that a frame could not be processed (reject). Class 4

provides for the allocation of a fraction of the bandwidth on a path between two node ports and

guarantees latency within negotiated quality-of-service bounds. It provides a virtual circuit

between a pair of node ports with guaranteed bandwidth and latency in addition to the

confirmation of delivery or notification of non-deliverability of frames. For the duration of the

Class-4 virtual circuit, all resources necessary to provide that bandwidth are reserved for that

virtual circuit.

Unlike Class-1 that reserves the entire bandwidth of the path, Class-4 supports the allocation

of a requested amount of bandwidth. The bandwidth in each direction is divided up among up to

254 Virtual Circuit (VC) connections to other N_Ports on the fabric. When the virtual circuit(s)


are established, resources are reserved for the subsequent delivery of Class-4 frames. Like

Class-1, Class-4 provides in-order delivery of frames. A Class-4 circuit includes at least one VC

in each direction with a set of Quality of Service parameters for each VC. These QoS parameters

include guaranteed transmission and reception bandwidths and/or guaranteed maximum latencies

in each direction across the fabric. When the request is made to establish the virtual circuit, the

request specifies the bandwidth requested, as well as the amount of latency or frame jitter

acceptable.

Bandwidth and latency guarantees for Class-4 virtual circuits are managed by the Quality of

Service Facilitator (QoSF), a server within the fabric. The QoSF is at the well-known address

x’FF FFF9’ and is used to negotiate, manage, and maintain the QoS for each VC and to assure

consistency among all the VCs set up across the full fabric to all ports (Kembel, 2003). The

QoSF is an optional service defined by the Fibre Channel Standards to specifically support

Class-4 service. Because the QoSF manages bandwidth through the fabric, it must be provided

by a Class-4 capable switch/director.

At the time the virtual circuit is established, the route is chosen and a circuit created. All

frames associated with the Class-4 virtual circuit will be routed via that circuit, ensuring in-order

frame delivery within a Class-4 virtual circuit. In addition, because the route is fixed for the

duration of the circuit, the delivery latency is deterministic. Class-4 has the concept that the VCs

can be in a “dormant” state, with the VC set up at the N-ports and through the fabric, but with no

data flowing, or a “live” state where data is actively flowing.

To set up a Class-4 virtual circuit, the circuit initiator (CTI) sends a Quality of Service

Request (QoSR) extended link service command to the QoSF. The QoSF makes certain that the

fabric has the available transmission resources to satisfy the requested QoS parameters, then


forwards the request to the circuit recipient (CTR). If the fabric and the recipient can both

provide the requested QoS, the QoS request is accepted, and the transmission can start in both

directions. If the requested QoS parameters cannot be met, the request is rejected.
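The admission decision described above can be illustrated with a short Python sketch. This is a simplified model only: the Fibre Channel standards define the QoSR exchange, not a programming API, so every name here (QoSRequest, FabricPath, admit_class4) is hypothetical.

```python
# Hypothetical sketch of the Class-4 admission decision; all names are
# illustrative, not taken from the Fibre Channel standards.
from dataclasses import dataclass

@dataclass
class QoSRequest:
    """QoS parameters carried in a hypothetical QoSR."""
    tx_bandwidth_mbps: float   # requested guaranteed transmit bandwidth
    rx_bandwidth_mbps: float   # requested guaranteed receive bandwidth
    max_latency_ms: float      # maximum acceptable latency across the fabric

@dataclass
class FabricPath:
    """The fixed route the QoSF would reserve for the circuit."""
    free_bandwidth_mbps: float  # bandwidth not yet reserved on this route
    latency_ms: float           # deterministic latency of the fixed route

def admit_class4(req: QoSRequest, path: FabricPath) -> bool:
    """Accept only if the route can honor both the bandwidth reservation
    and the latency bound; otherwise the QoSR is rejected."""
    needed = max(req.tx_bandwidth_mbps, req.rx_bandwidth_mbps)
    return path.free_bandwidth_mbps >= needed and path.latency_ms <= req.max_latency_ms
```

The essential point the sketch captures is that both the fabric and the recipient must be able to satisfy every requested parameter before any Class-4 frame flows.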

In Class-4, the fabric manages the flow of frames between node ports and the fabric by using

the virtual-circuit flow control mechanism. This is a buffer-to-buffer flow control mechanism

similar to the R_RDY fibre channel flow control mechanism (Kembel, 2003). Virtual-circuit

flow control uses the virtual circuit ready (VC_RDY) ordered set. VC_RDY resembles an

R_RDY, but it contains a virtual circuit identifier byte in the primitive signal, indicating which

VC is being given the buffer-to-buffer credit. Inter-switch links (ISLs) must also support

virtual-circuit flow control to manage the flow of Class-4 frames between switches.

Each VC_RDY indicates to the N_Port that a single Class-4 frame is needed from the N_Port if

it wishes to maintain the requested bandwidth. Each VC_RDY also identifies which virtual

circuit is given credit to send another frame. The fabric controls the bandwidth available to each

virtual circuit by the frequency of VC_RDY transmission for that circuit. One VC_RDY per

second is permission to send 1 frame per second (2 kilobytes/second if 2k frame payloads are

being used). One thousand VC_RDYs per second is permission to send 1,000 frames per second

(2 megabytes per second if 2k frame payloads are being used). The fabric is expected to make

any unused bandwidth available for other live Class-4 circuits, and for Class-2 or Class 3 frames,

so the VC_RDY does allow other frames to be sent from the N_Port.
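The pacing arithmetic described above reduces to a single multiplication, sketched here in Python (the function name is illustrative, not part of any standard):

```python
# Illustrative only: each VC_RDY is credit for exactly one frame, so the
# fabric paces a Class-4 circuit purely by how often it issues VC_RDYs
# for that circuit.
def granted_bandwidth_bytes_per_s(vc_rdy_rate_per_s: float,
                                  frame_payload_bytes: int = 2048) -> float:
    """Bandwidth granted to one virtual circuit at a given VC_RDY rate."""
    return vc_rdy_rate_per_s * frame_payload_bytes

# 1 VC_RDY/s with 2 KB payloads grants 2 KB/s; 1,000 VC_RDY/s grants
# 2,048,000 bytes/s, i.e. roughly the 2 MB/s cited in the text.
```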

There are some potential scalability difficulties associated with Class-4 service since the fabric

must negotiate resource allocation across each of the 254 possible VCs on each N-port. Also,


fabric busy (F_BSY) is not allowed in Class-4. Resources for delivery of Class-4 frames are

reserved when the VC is established and therefore the fabric must be able to deliver the frames.

Class-4 is a very complex issue. Further details on its functionality are beyond the scope of

this document. In addition, because of the complexity, Class-4 was never fully adopted as a

standard. Further work on it was stopped, and much of the language has been removed from the

fibre channel standard. FC-FS-2 letter ballot comment Editor-Late-002 reflected the results of

surveying the community for interest in using and maintaining the specification for Class 4

service. Almost no interest was discovered. It was agreed to resolve the comment by obsolescing

all specifications for Class 4 service except the VC_RDY primitive, which is used by the

FC-SW-x standard in a way unrelated to Class 4.

Therefore, other mechanisms/models for QoS in FICON (fibre channel) were examined. One

of these was the method used by Infiniband.

2.3.10.4 Infiniband and QoS


Infiniband addresses quality of service through the concept of virtual lanes. Infiniband’s Virtual

Lanes (VLs) enable different quality of service guarantees across the fabric (e.g., priority,

latency guarantees, bandwidth guarantees, etc.) by logically dividing a physical link into multiple

virtual links. Each VL has its own independent resources (i.e., send and receive buffers),

dedicated to traffic with specific service levels.

Infiniband’s Virtual Lanes as illustrated in figure 15 are based on independent data streams

for each VL level. Each port can support up to 16 VLs numbered 0 to 15. VL15 is reserved

exclusively for subnet management and is called the management VL. The others (VL0-VL14)

are called data VLs. Each port must support the management VL and at least one data VL

Copyright © Stephen R. Guendert 2007 158


All Rights Reserved
A Comprehensive Justification For Migrating From ESCON to FICON

starting with VL0. Flow control is on a per VL basis. One VL not having an input buffer

available does not prevent data from flowing on the other VLs (Futral, 2001).

Infiniband virtual lanes enable the fabric to support different quality of service over the same

physical links, depending on how the subnet manager takes advantage of them. Not all ports

have to support the same number of VLs for management to take advantage of it. The subnet

manager assigns service levels (SLs) to end nodes and configures each port with its own SL to

VL mapping. For instance, the subnet manager can assign SLs based on priority, bandwidth

negotiation, etc., and the end node uses that value. As the packet traverses the fabric, each port

determines which VL the packet uses based on the SL in the packet and the port’s SL-to-VL

mapping table (Futral, 2001).

Another possible use for VLs is for separation of traffic and fairness when multiple systems

share the same subnet. In this case, the subnet manager uses a different set of SLs for each

system and each set of SLs maps to different VLs at each port. Thus, heavy traffic on one VL

does not impact the other systems.
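The SL-to-VL mapping mechanism described above can be sketched as follows. This is an illustrative model only (the helper names are hypothetical); the actual table format is defined by the Infiniband Architecture specification.

```python
# Illustrative model of Infiniband's per-port SL-to-VL mapping; the real
# table format is defined by the Infiniband Architecture specification,
# and these helper names are hypothetical.
MANAGEMENT_VL = 15  # VL15 is reserved exclusively for subnet management

def make_sl_to_vl_table(data_vls: int) -> list:
    """Spread the 16 service levels across this port's data VLs
    (ports need not all support the same number of data VLs)."""
    assert 1 <= data_vls <= 15
    return [sl % data_vls for sl in range(16)]

def vl_for_packet(sl: int, table: list) -> int:
    """On arrival, the packet's service level indexes the port's table."""
    return table[sl]
```

Because each port carries its own table, two ports along the same route may legitimately map the same SL to different VLs, which is how ports with differing VL counts interoperate.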

2.3.10.5 Brocade and FICON QoS


Brocade has actively been pursuing QoS for both the open systems and FICON environment.

Due to the complexity, and lack of progress made by standards bodies on Class-4 class of service,

Brocade has not implemented Class 4 service on its switches and directors. Brocade has

implemented several solutions. These include Virtual Channels, ingress rate limiting, and

SID/DID prioritization.

Brocade’s Virtual Channel (VC) technology represented an important breakthrough in the

design of large storage networks. VCs are unique to Brocade and are currently available on

the Brocade 4900, 5000, 24000, and 48000 FICON switching products. Brocade’s VC


technology is very similar to the Infiniband Virtual Lane concept. To ensure reliable ISL

communications, VC technology logically partitions bandwidth within each ISL into many

different virtual channels and prioritizes traffic to optimize performance and prevent head of line

blocking. Brocade’s Fabric Operating System (FOS) automatically manages VC configurations,

eliminating the need to manually fine tune for maximum performance. This technology also

works in conjunction with Brocade ISL trunking to improve the efficiency of switch-to-switch

communications, simplify FICON storage network design, and reduce the total cost of ownership

(Guendert, Lytle, 2007c).

History: In 2 Gb/sec Brocade products, a total of 8 VCs were assigned to any link, whether an

internal link, an ISL, or a trunk group. The VCs are numbered 0-7 and are automatically

assigned the data types shown in the table below. Each VC has its own independent flow control

mechanism and credit scheme; while these features are provided by Brocade’s ASICs, they are

controlled by software. Today, Brocade does not allow customers to change the SID/DID

assignment, priority, and allocation to the VCs. However, a future software release will enable

this and provide QoS.


Figure 29. Virtual channels in Brocade 2 Gb/Sec switches

Table 6. Virtual channels allocation table

All class F traffic for the entire fabric automatically receives its own queue and the highest

priority. This ensures that the important control frames such as name server updates, zoning

distribution, RSCNs, etc. are never waiting behind “normal” payload traffic (also referred to as

Class-2 or Class-3 traffic). For Class 2/3 traffic (host and storage devices), individual SID/DID

pairs are automatically assigned in a round-robin fashion based on DID (Destination ID) across

the four data lanes. This prevents Head of Line Blocking (HoLB) throughout the fabric, and

since each VC has its own credit mechanism and flow control, slower devices will not “starve”


faster ones. Since Brocade supports IP over FC (IP-FC), multicast and broadcast traffic are

assigned to VCs 6 and 7 to avoid any congestion on the data VCs from broadcast storms or other

unwanted IP behavior (Guendert, Lytle, 2007c).

In Brocade 4 Gb/sec products the virtual channel infrastructure has been greatly expanded and

some of the automatic assignment has been improved. There are now 17 VCs assigned to any

given internal link, where VC 0, as in 2 Gb/sec switches/directors, always has the highest

priority and will always only carry Class F traffic. The remaining 16 VCs are dedicated to Class

2 and/or Class 3 traffic, but they can be modified and/or dedicated to certain traffic types such as

broadcast or multicast traffic. They still have their own credit mechanism and independent flow

control. In addition to the 16 data VCs, each of them now has 8 sub-lists or sub-Virtual

Channels, each of which again has its own credit mechanism and independent flow control.

SID/DID pairs are still assigned in a round-robin fashion across all the VCs. However, with

these new enhancements, a far better distribution is achieved, making HoLB more of a

theoretical issue than an actual one.
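As a rough model of the round-robin assignment described above (the actual Condor ASIC mapping is Brocade-proprietary and may differ in detail), the behavior can be sketched as:

```python
# Rough, hypothetical model of Brocade's VC assignment policy; the real
# ASIC mapping is proprietary and may differ in detail.
CLASS_F_VC = 0                 # always highest priority, Class F traffic only
DATA_VCS = list(range(1, 17))  # the 16 data VCs on 4 Gb/sec ASICs

def data_vc_for_did(did: int) -> int:
    """Pick a data VC for a Class 2/3 frame by round-robin on the DID,
    so one congested destination cannot block traffic to the others."""
    return DATA_VCS[did % len(DATA_VCS)]
```

Because every data VC carries its own credit mechanism, a stalled flow on one VC in this model leaves the other fifteen free to move traffic, which is the HoLB-avoidance property the text describes.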

Table 7. Virtual channels assignment for Brocade Condor ASIC based switches/directors


Figure 30. Virtual channels in Brocade 4 Gb/sec switches/directors

Because of these vast enhancements, Brocade’s 4 Gb/sec switches are even more optimized

for performance and HoLB avoidance compared to both their predecessors and the competition.

Note: When connecting 4 Gb/sec switches/directors together the ISLs and trunk groups still use 8

VCs. This is done to avoid potential backwards compatibility issues in fabrics where 2 Gb/sec

and 4 Gb/sec switches co-exist. Enhancements to the virtual channel capabilities are expected

and will be provided in future versions of FOS.

2.3.10.6 QoS and the mainframe: Workload Manager and Intelligent Resource Director


For years, the IBM mainframe architecture has allowed a mainframe to be divided into separate

logical partitions (LPARS) so that different types of work can run in their own unique

environment. Inside a partition, Workload Manager (WLM) prioritizes all the work depending

on its importance. Logical partitions are assigned LPAR weights, meaning the percentage of

overall processing power that is assigned to all the work in that partition. If a workload shifted

so that more processing power was needed in a particular partition, processing power was shifted

to the partition that needed it, as long as CPU cycles were available. If all the partitions

were at peak utilization, the operator had to change the LPAR weights manually. If the demand


was unpredictable and irregular (as in a web server environment), and the system was highly

utilized, the operator had to monitor the system at all times, day and night, to ensure high priority

workloads got the resources they needed.

In addition, the connection between channel path and I/O control units is statically defined.

In the event of a significant shift in workload, those channel path assignments had to be changed

by an operator. Once an I/O request made it to the channel subsystem, it was serviced on a

first-in-first-out basis. This could cause the highest priority work to be delayed due to significant

I/O contention from lower priority work.

The Intelligent Resource Director is made up of three parts. Two of the three are in place for

QoS functionality in a channel environment. However, the features that are enabled for ESCON

currently do not function in FICON environments. At the time, the interleaving capabilities of

FICON, coupled with its bandwidth, led to the belief that QoS functionality was not needed or

desired in FICON environments. That is changing.

2.3.10.7 Dynamic channel path management (DCPM)


Dynamic Channel Path Management (DCPM) is the first such channel QoS functionality. The

I/O configuration definition process is complex and requires significant skill. The process

involves determining how many channels are required by a control unit, and how many other

control units, if any, can share that set of channels. For availability, even if only a single channel

is ever required by a control unit, two or more are normally defined to it in case of a failure

somewhere along the path. Even when the configuration seems perfect, workload changes can

produce a situation where an I/O configuration that allowed meeting a response time goal last

week is inadequate this week. There may be sufficient I/O resources; they just are not where

they are needed. DCPM is designed to let Workload Manager (WLM) dynamically move


channel paths through the ESCON director from one I/O control unit to another in response to

changes in the workload requirements. By defining a number of channel paths as “managed”

they become eligible for this dynamic assignment. Moving bandwidth to the important

workloads uses DASD I/O resources much more efficiently. This may help reduce the number

of channel paths needed and could improve availability: in the event of a hardware failure

another channel can be dynamically moved over to handle the work requirements. If the nature

of the workload is such that most subsystems have their peak channel requirements at the same

time, DCPM will be of little help since its job is to reassign existing channels. DCPM works

best when there are variations over time between the channel requirements of different DASD

subsystems.

2.3.10.8 Channel subsystem priority queuing


Channel subsystem priority queuing is an extension of I/O priority queuing, a concept that has

been evolving in MVS, OS/390 and z/OS over the past several years. In an LPAR cluster, if

important work is missing its goals due to I/O contention on channels shared with other work, it

will be given a higher channel subsystem I/O priority than the less important work. This

function works together with DCPM: as additional channels are moved to the partition running

the important work, channel subsystem priority queuing is designed so that the important work

that really needs it receives the additional I/O resource, not necessarily the other work that just so

happens to be running in the same LPAR cluster.

2.3.11 FICON cascading summary

For the mainframe customer, FICON cascading offers new capabilities to help meet many

requirements of the modern data center. FICON channels are far superior to ESCON channels at

increased distances. ESCON channels experience higher response time elongations than FICON


as distances increase. ESCON channels also experience a much more significant data rate droop

at much shorter distances than FICON channels. There are two primary reasons for these

differences:

1) Increased buffer sizes on the FICON channel cards.

2) FICON is an improved protocol with fewer round trip protocol handshakes across the

link.

For ESCON links, significant data rate droop occurs at extended distances over 9 km. For

FICON channels, the channel to control unit end-to-end distance may be increased up to 100km

without data rate performance droop occurring. This assumes that a sufficient quantity of

FICON director buffer-to-buffer credits is present. The following rules of thumb may be used

as an approximate guide for planning the number of buffer credits (for a given link) required to

minimize the effects of data rate droop. These rules of thumb assume full frame size traffic;

therefore, they do not substitute for more in depth analysis that takes frame size and workload

characteristics into account.

1) At 1 Gbps up to 100 km: plan for a minimum requirement of ½ buffer credit per km. For

example, 50 buffer credits for a 100km link. Depending on application, frame size and

workload characteristics, more buffer credits may be required.

2) At 2 Gbps up to 100km: Plan for a minimum requirement of 1 buffer credit per km. For

example, 100 buffer credits for a 100 km link. Depending on application, frame size and

workload characteristics, more buffer credits may be required.

3) At 4 Gbps up to 100km: Plan for a minimum requirement of 2 buffer credits per km. For

example, 200 buffer credits for a 100 km link. Depending on application, frame size and

workload characteristics, more buffer credits may be required.


Cascaded FICON provides the means to build a more resilient, HA/DR/BC mainframe

storage infrastructure. The challenge is to ensure performance across the FICON fabric’s

cascaded links to ensure the highest possible level of data availability and application

performance at the lowest possible cost. Rules of thumb suffice but end users must undertake

proper analysis of requirements and performance.

2.4 Latest and greatest mainframe I/O and mainframe storage enhancements

Over the past three years, IBM has made several enhancements to the mainframe and its I/O

capabilities, and also to mainframe storage. These enhancements make migrating from ESCON

to FICON even more attractive and therefore deserve consideration. Several of these

enhancements are specific to the System z9 (the latest IBM mainframe).

2.4.1 Mainframe I/O enhancements

The first enhancement made to the mainframe that makes migrating to FICON more compelling

is purge path extended functionality. FICON purge path extended functionality is a feature

introduced for the z990/890 mainframes, and it is also available for the System z9 mainframes.

Purge path extended functionality provides for enhanced capability for FICON problem

determination compared with ESCON (Neville, White, 2006). The purge path error recovery

function is now extended to transfer error related data and statistics between the channel and

entry switch, and the control unit and its entry switch to the host operating system. FICON’s

purge path extended functionality gives FICON an enhanced error recovery capability compared

with ESCON.

The second enhancement or set of enhancements made to the mainframe deals with the

channel subsystem. Each mainframe has its own channel subsystem (CSS). The CSS enables

communication from server memory to peripheral devices via channel connections. The


channels in the CSS permit transfer of data between main storage and I/O devices or other

mainframes under the control of a channel program. The CSS allows channel I/O operations to

continue independently of other operations within the mainframe. This allows other functions to

resume after an I/O operation has been initiated. The CSS also provides communication between

logical partitions (LPARs) within a physical server using internal channels (Chambers, Hatfield,

2006).

The z990 mainframe introduced the concept of the multiple logical channel subsystem

(LCSS) and this is also implemented on the z890 and System z9 mainframes. The z990 and z9

support up to four LCSSs while the z890 supports up to two LCSSs. The LCSS concept was

designed to support the considerable increase in memory size, processing power, and I/O

connectivity present in these mainframes compared to older technology machines (Almeida,

Hatfield, 2003). Each LCSS may have from 1 to 256 channels and may in turn be configured

with 1 to 15 LPARs (maximum of 60 LPARs per z9 and 30 LPARs per z990/890).

A subchannel represents an addressable device. For example, a disk control unit with 24

drives uses 24 subchannels. An addressable device is associated with a device number. Multiple

subchannel set (MSS) functionality was introduced with the system z9 (Chambers, Hatfield,

2006). Subchannel numbers including their implied path information to a device are limited to

four hexadecimal digits by hardware and software architectures. Four hexadecimal digits

provides 64K addresses, known as a set. For the z900/800 and z990/890 mainframes, IBM

reserved 1024 subchannels, leaving 63K for general use by an end user. For the System z9,

IBM decreased this reserved number to 256 subchannels, leaving 63.75K for general use by end

users.
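The subchannel arithmetic above follows directly from the four-hexadecimal-digit address field, as this short sketch confirms:

```python
SUBCHANNELS_PER_SET = 64 * 1024  # four hex digits address 65,536 devices

def general_use_subchannels(reserved_for_ibm: int) -> int:
    """Subchannels left for end-user devices after IBM's reservation."""
    return SUBCHANNELS_PER_SET - reserved_for_ibm

# z900/800 and z990/890: 1,024 reserved leaves 64,512 (63K) usable.
# System z9: 256 reserved leaves 65,280 (63.75K) usable.
```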


While ESCON had the ESCON multiple image facility (EMIF), FICON has the same feature

now known simply as multiple image facility (MIF). MIF enables the sharing of channels across

LPARs within a single LCSS, or across LCSSs (z9, z990/890). Channel spanning extends the

MIF concept further to include sharing channels across LPARs and LCSSs. Any or all of the

configured LPARS regardless of the LCSS to which the LPAR is configured can transparently

share FICON channels (Almeida, Hatfield, 2003). These FICON enhancements significantly

increase the scale and flexibility for installations beyond the original addressing limitation

advantages enjoyed by FICON over ESCON.

2.4.2 Mainframe storage enhancements

There are several recent mainframe storage enhancements that also make moving to FICON

from ESCON more attractive to the end users. These are DASD specific, as well as storage

network specific. The following features enhance the performance and usability of the DASD in

a FICON environment.

1) Multiple allegiance (MA): The S/390 and zSeries device architectures require a state of

implicit allegiance between a device and the group of channel paths accessing it. This

allegiance causes the control unit to guarantee access to the device for the duration of the

channel program over the set of paths associated with it. FICON capable DASD arrays’

capability to do concurrent I/O operations facilitates multiple accesses to/from the same

volume with multiple channel groups and system images. Multiple host mainframes may

establish concurrent implicit allegiances provided there is no possibility that any of the

channel programs can alter any data that another may read or write. Since “device busy”

is not presented to the channel, PEND time may be reduced. Specific resources that may

substantially benefit from MA include data sets having a high read/write ratio, data sets


having multiple extents on one volume, or data sets that are concurrently shared by

multiple users.

2) Parallel Access Volumes (PAVs): The PAV feature of modern DASD subsystems allows

for the definition of alias devices to base devices. This increases the parallelism of I/O

requests to the base device and in turn reduces UCB queuing (measured by IOSQ time) in

the I/O subsystem. There are two types of PAV implementation in widespread use by

FICON end users:

a. Static PAV: Aliases are manually assigned to the busiest base devices and to base

devices handling the most important work. Reassignment of aliases (such as may

be necessary in the event of workload changes) may be performed manually using

the DASD array’s management software. However, this manual reassignment

process can become very labor intensive, particularly since it must be repeated

whenever there are workload changes if optimal performance is to be achieved.

b. Dynamic PAV: The z/OS workload manager (WLM) and IOS components allow

for the automatic management of aliases. This is known as dynamic alias

management. When dynamic PAVs are enabled, WLM automatically performs

alias address reassignments to help meet its goals and to minimize IOS queuing.

Regardless of which method of PAV implementation an end user chooses, IBM recommends

careful consideration and analysis of an end user’s environment to determine the ratio of base

to alias devices. For modern DASD subsystems’ FICON control units, a good rule of thumb to

start with is 3-6x the number of FICON attached channels to each logical control unit (Neville,

White, 2006).
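The 3-6x rule of thumb above can be expressed as a trivial planning helper (illustrative only; actual alias counts must be validated against measured IOSQ time in the end user's environment):

```python
def pav_alias_starting_range(ficon_channels: int) -> tuple:
    """Starting range of aliases per logical control unit, per the 3-6x
    rule of thumb; validate against measured IOSQ time before settling."""
    return (3 * ficon_channels, 6 * ficon_channels)

# A logical control unit reached over 4 FICON channels would start with
# roughly 12 to 24 alias devices.
```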


Parallel access volumes (PAVs) made the 63K limitation of subchannels a problem for larger

installations. A single disk drive (device) with PAVs often consumed at least four subchannels:

the base address and three alias addresses (base plus 3 aliases was and still is a widely used rule

of thumb for PAV). The use of four-hexadecimal-digit subchannels (and corresponding device numbers)

is architected in a number of places, making it very difficult to remove this constraint. Simply

extending the field would break too many programs. IBM’s solution to the problem was to allow

sets of subchannels (addresses) starting with an implementation of two sets (this may be

increased in the future). Each set provides 64K addresses. The first set (subchannel set zero)

contains all the subchannels reserved for IBM use. Subchannel set one provides the full range of

64K subchannels on a z9. The basic principle of subchannels would allow subchannels in either

set to be used for any device addressing purposes. However, the current z/OS implementation of

MSS restricts subchannel set one to disk alias subchannels only. Subchannel set zero may be

used for both base and alias addresses. Each logical channel subsystem (LCSS) can have

multiple subchannel sets (Chambers, Hatfield, 2006).

The primary focus of mainframe storage networking has been on the features associated with

FICON/FCP intermix and z/Linux. It has been possible to combine mainframe FICON storage

networks and open systems fibre channel Storage Area Networks (SAN) onto a common storage

network since early 2003 (Guendert, 2005). This is commonly known as FICON/FCP Intermix

and/or protocol intermix mode (PIM). IBM has also been one of the industry’s biggest

proponents of Linux, particularly Linux on the mainframe. It has become possible to consolidate

many stand-alone open systems servers onto a mainframe running multiple Linux images on a

single footprint. In July 2005 IBM announced System z9, and along with it Node Port ID

Virtualization, more commonly known as NPIV.


2.4.2.1 FICON/FCP intermix basic concepts


There are several key concepts to understand when discussing FICON/FCP Intermix. First, the

open systems SAN Fibre Channel Protocol (FCP-SCSI-3) and FICON (FC-SB-2/3) are merely

different Upper Layer Protocols (ULP) in the overall Fibre Channel Standard (see figure 31).

As the figure shows, the difference between open systems SAN and FICON is at the FC-4 layer

(the data payload packet). So, essentially, open systems SAN directors/switches and mainframe

FICON directors/switches are identical hardware.

Figure 31. Fibre channel standard

Open systems and mainframe environments have not traditionally shared system and/or staff

resources. They have completely different cultures, almost to the point of sometimes resembling

the current two-party system in the United States political arena. So, why would anyone

consider moving to an intermix environment? There are four primary reasons.

First, less than 40% of ESCON end users have migrated to FICON since native FICON

became available in 2001. Many of these same end users already have a well established open


systems SAN(s) that has (have) been delivering reliable, high performance connectivity for

several years. These organizations, particularly the larger ones, will often have open ports

available on some of their directors/switches. By looking at FICON/FCP intermix as an option;

they might consider a short-term allocation of some of their unused SAN ports to test FICON, or

to complete an initial FICON deployment once testing is complete. This could be beneficial to

an organization looking to delay the purchase of a dedicated FICON infrastructure until the

connectivity requirements are sufficient enough to justify purchasing separate FICON directors

(Guendert, Seitz, 2004).

Secondly, the consolidation in switching infrastructure made possible by FICON/FCP

intermix implementation can reduce total cost of ownership. With some exceptions, the latest

generation of high port count directors from all vendors are designed with non-blocking

architectures and full throughput to all ports, even when the director port count is fully populated.

Many large IT organizations that currently have separate open systems and FICON storage

networks are large enough to fully utilize their director ports, even when running segregated

storage networks. However, there are many organizations that segregate open systems and

mainframe environments that are not as large. Oftentimes these organizations are not fully

utilizing their directors and could realize a significant cost savings by consolidating these

separate, under-utilized directors onto a common storage network. This most likely would result

in fewer “serial numbers” on the floor, leading to lower maintenance, electrical, cooling, and

other operational costs. Also, it allows an organization to have one common cable plant/fiber

infrastructure rather than separate ones for mainframe and open systems. This would likely lead

to lower physical infrastructure costs, as well as reduced management costs for the cable

plant/infrastructure.


Third, intermix makes perfectly good sense for specialized environments requiring both

flexibility and resource sharing. For example, quality assurance, test, development and disaster

recovery environments/data centers would all fit this description. Already established FICON

implementations will consider intermix with the likely evolution of storage subsystem based

DASD mirroring applications from ESCON via channel extension technology to FCP for

increased performance between sites.

Finally, end users who are running, or are considering running, Linux on the mainframe will

essentially be running an intermix environment because channels for supporting Linux

images will need to be defined as FCP. The IBM Systems z9 FICON Express2/Express 4

channel cards provide support for Fibre Channel and Small Computer System Interface (SCSI)

devices in Linux environments (Adlung, Bahnzhaf, 2002). The channel card microcode will be

for FCP to support the SCSI data payload of Linux. The z9 FCP support allows Linux running

on the host to access industry-standard SCSI devices. For disk applications, these FCP storage

devices utilize Fixed Block (512 byte) sectors rather than the Extended Count Key Data (ECKD)

format. The SCSI and/or FCP controllers and devices can be accessed by Linux on the System

z9 as long as the appropriate I/O driver support is there (Fries, Kordman, 2005). There are

currently two supported methods of running Linux on the System z9. It can be run as a guest

operating system under z/VM (version 4 release 3 and later releases). It can also be run natively

in a logical partition.

The International Committee of Information Technology Standards (INCITS) developed the

Fibre Channel Protocol Standard (FC-FCP) and subsequently it has been published as an ANSI

standard. The z9 (and zSeries) FCP I/O architecture fully conforms to the Fibre Channel

standards specified by INCITS. FCP is an upper layer fibre channel mapping of SCSI on a


common stack of Fibre Channel physical and logical communication layers. FC-FCP and SCSI

are supported by a wide range of controllers and devices which complement the storage

attachment capability through FICON and ESCON channels (Guendert, 2005a).

FICON channels in FCP mode use the Queued Direct Input/Output (QDIO) architecture to

communicate with the operating system. This architecture is derived from the same QDIO

architecture defined for Hipersockets communications and for OSA Express. Rather than using

control devices, FCP channels use/define data devices that represent QDIO queue pairs. Each pair

consists of a request queue and a response queue. Each of these queue pairs represents a

communication path between the FCP channel and the operating system. An operating system

can send FCP requests to the FCP channel via the request queue, and the response queue can be

used by the FCP channel for passing completion indications and unsolicited status indications

back to the operating system (Fries, Kordmann, 2005).
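The request/response flow just described can be sketched as a toy model (all class and method names here are invented for illustration; the real QDIO queues are hardware/firmware structures, not a programming API):

```python
from collections import deque

class QdioQueuePair:
    """Toy model of an FCP channel's QDIO data device: a request queue
    (operating system -> channel) paired with a response queue
    (channel -> operating system)."""

    def __init__(self):
        self.request_queue = deque()   # FCP requests from the OS
        self.response_queue = deque()  # completions / unsolicited status

    def os_submit(self, fcp_request):
        self.request_queue.append(fcp_request)

    def channel_process(self):
        # The channel drains requests and posts a completion for each.
        while self.request_queue:
            req = self.request_queue.popleft()
            self.response_queue.append(("completed", req))

    def os_poll(self):
        return list(self.response_queue)

pair = QdioQueuePair()
pair.os_submit("read LUN 0x0001")
pair.channel_process()
print(pair.os_poll())  # [('completed', 'read LUN 0x0001')]
```

The point of the sketch is only the pairing: the operating system writes to one queue, and the channel answers on the other.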

The FCP channel type still needs to be defined using HCD/IOCP, as do the QDIO data

devices. However, there is no requirement for defining the fibre channel storage

devices/controllers, or directors and switches. All of these devices will be configured on an

operating system level using the parameters outlined in the industry standard fibre channel

architecture. They will be addressed using World Wide Names (WWNs), Fibre Channel

Identifiers (IDs), and Logical Unit Numbers (LUNs) (Ogden, White 2005). After the addresses

are configured on the operating system; they are passed to the FCP channel along with the

corresponding fibre channel I/O request via a queue.

2.4.3 System z9 specific enhancements

Just when end users thought that IBM achieved the peak of mainframe I/O performance with the

z990, IBM announced the new System z9 on July 25, 2005. The IBM System z9 brought about


tremendous improvements in overall I/O performance over the already formidable z990. These

enhancements make migrating to FICON very attractive, as to get the benefits, the end user

needs to be running a FICON infrastructure. These enhancements include Node Port ID

Virtualization (NPIV), the MIDAW facility, multiple subchannel sets, and hardware

improvements made to the z9, including STI improvements and redundant I/O reconnect.

2.4.3.1 Node Port ID Virtualization (NPIV)


The FCP industry standard architecture does not exploit the security and data access control

functions of Multiple Image Facility (MIF). The z990 and z890 have a feature known as FCP

LUN access control (Guendert, 2005a). This has been carried forward to the System z9. LUN

access control provides host-based control of access to storage controllers and their devices as

identified by LUNs. LUN access control also allows read-only sharing of FCP SCSI devices

among multiple operating system images. When a host channel is shared among multiple

operating system images, the access control mechanism is capable of providing for either none,

or all images to have access to a particular logical unit (device) or storage controller. FCP LUN

access control gives the end user the ability to define individual access rights to storage

controller ports as well as devices for each operating system image. LUN access control can

significantly reduce the number of FCP channels that are needed to provide controlled access to

data on FCP SCSI devices (Chambers, Hatfield, 2006). Without LUN access control, FCP

channels prevent logical units from being opened by multiple Linux images at the same time. In

other words, access is granted on a first-come, first-served basis. This prevents problems with

concurrent access from Linux images that are sharing the same FCP channel (which is also

sharing the same worldwide port name). In effect, this means that one Linux image can block

other Linux images from accessing the data on one or more logical units. If the FCP channel is


used by multiple independent operating systems (under z/VM or in multiple LPARs), the SAN is

not aware of this fact, and it cannot distinguish among the multiple independent users. The end

user can partially avoid this by requiring separate FCP channels for each LPAR. However, this

does not solve the problem with z/VM.
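The first-come, first-served behavior described above can be sketched as a toy model (names and structure are invented for the sketch, not taken from any real driver):

```python
class LogicalUnit:
    """Toy model: without LUN access control, a logical unit behind a
    shared FCP channel is granted first-come, first-served, so one
    Linux image can block the others from the data on that unit."""

    def __init__(self):
        self.holder = None

    def open(self, image):
        if self.holder is None:
            self.holder = image
            return True
        return self.holder == image  # only the holder may re-open

lun = LogicalUnit()
print(lun.open("linux1"))  # True  - the first image gets the LUN
print(lun.open("linux2"))  # False - blocked while linux1 holds it
```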

Another method to control access to devices is the implementation of Node Port (N_Port) ID

Virtualization (NPIV). NPIV is unique to the System z9 and is a major improvement. NPIV

allows each operating system that is sharing an FCP channel to be assigned a unique virtual

world-wide port name (WWPN). The virtual WWPN can be used for both device-level access

control in a storage controller (LUN masking), and for switch-level access control on a fibre

channel switch and/or director (zoning). What NPIV is doing is allowing a single physical FCP

channel to be assigned multiple WWPNs and appear as multiple channels to the external SAN

environment. These virtualized FC N_Port IDs allow a physical fibre channel port to appear as

multiple, distinct ports, providing separate port identification, and security within the fabric for

each operating system image. The I/O transactions of each operating system image are

separately identified, managed, transmitted, and are processed the same as if each operating

system image had its own unique physical N_port (Amann, Banzhaf, 2007).

NPIV is based on a recent extension to the Fibre Channel standards that allows a host bus

adapter (HBA) to perform multiple logins to the SAN fabric via a single physical port. The

switch/director to which the FCP channel is directly connected (i.e. the “entry switch”) must

support multiple N_Port logins. No changes are required for the downstream switches, devices,

or control units. The switch and not the FCP channel itself provides the multiple WWPNs used

for the virtualization (Banzhaff, Friedrich, 2005). Therefore, NPIV is not supported with FCP

point-to-point attachments. NPIV requires switched FCP attachment.
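The role of the entry switch can be sketched as follows (a minimal model, assuming invented class names and example WWPN strings; the switch-capacity error mirrors the condition discussed for z/VM 5.1 below):

```python
import itertools

class EntrySwitch:
    """Toy model of NPIV login handling: the entry switch, not the FCP
    channel, assigns an N_Port ID per fabric login, so one physical
    port can appear as many distinct ports in the fabric."""

    def __init__(self, capacity):
        self.capacity = capacity           # N_Port IDs the switch can assign
        self._next_id = itertools.count(1)
        self.logins = {}                   # virtual WWPN -> N_Port ID

    def fabric_login(self, virtual_wwpn):
        if len(self.logins) >= self.capacity:
            # The switch cannot assign another N_Port ID.
            raise RuntimeError("switch N_Port ID capacity exceeded")
        self.logins[virtual_wwpn] = next(self._next_id)
        return self.logins[virtual_wwpn]

switch = EntrySwitch(capacity=2)
# Two OS images share one physical FCP channel, each under its own WWPN:
print(switch.fabric_login("c05076ffe5000001"))  # 1
print(switch.fabric_login("c05076ffe5000002"))  # 2
```

Each virtual WWPN can then be zoned and LUN-masked independently, which is the whole point of NPIV.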


NPIV is available with Linux on System z9 in a logical partition or as a guest of z/VM V 4.4,

5.1 and later for SCSI disks accessed via dedicated subchannels and for guest IPL. For guest use

of NPIV, z/VM 4.4, z/VM 5.1, and z/VM 5.2 provide support transparently; i.e., no PTF is

required. z/VM 5.1 and later provides NPIV support for VM-system use of SCSI disks

(including emulated-FBA minidisks for guests). z/VM 5.1 requires a PTF to properly handle the

case of a fibre channel switch/director not being able to assign a new N_Port ID when one is

requested (due to the switch’s capacity being exceeded). z/VM 5.2 will provide support at a later

date allowing VM users and VM guest operating systems to obtain the worldwide port name(s)

(WWPNs) being used in a virtual machine. The QUERY command will be enhanced for VM

users, and virtualization of a machine function will be enhanced for VM guests (Amann, Banzhaf,

2007). Figure 32 illustrates the addressing for NPIV.

Figure 32. Fibre channel addressing

The NPIV enhancements made to the IBM System z9 have made FICON/FCP intermix a

more attractive, viable, and realistic option for connectivity in the world’s largest data centers.

NPIV technology finally makes it realistic to run open systems and z/OS on a mainframe


connecting everything via FICON Express channels into a common storage network. End users

need to evaluate the cost of doing so, compared with the costs of operating large Windows

environments.

2.4.3.2 MIDAW facility


MIDAW stands for modified IDAW. (An IDAW is an indirect data address word that is used to

specify data addresses for I/O operations in a virtual environment such as z/OS provides.)

The MIDAW facility is a new CCW indirect-data-address-word facility being

added to z/Architecture to coexist with the current IDAW facility. It is a new piece of system

architecture, with accompanying software exploitation, designed to improve FICON performance on the System

z9. Both MIDAW and IDAW facilities offer, for FICON and ESCON channels, alternatives to

using CCW data chaining in channel programs (Artis, 2006b). Both facilities are designed to

reduce channel, director, and control unit overhead by reducing the number of CCWs and frames

processed. The MIDAW facility is usable in certain cases where the IDAW facility is not because

it does not have IDAW boundary and data length restrictions. The MIDAW facility is supported

on z/OS 1.6 and higher.

The MIDAW facility is designed to (Guendert, 2006):

1. Be compatible with existing IBM and non IBM disk control units (Note: non IBM storage

devices will require support from their vendors and they should be contacted as part of

the installation systems assurance process).

2. Decrease response time for exploiting I/O.

3. Increase the number of I/O operations per second that can be processed and thus move

more data per second, especially on faster FICON channels.


Applications that may benefit include: DB2, VSAM, Partitioned Data Set Extended (PDSE),

Hierarchical File System (HFS), z/OS File System (zFS), and other datasets exploiting striping

and compression. Primary benefits include improved channel utilization and significantly

improved I/O response times (Neville, White, 2006). Internal IBM DB2 Table Scan tests

comparing MIDAW facility configurations to pre-MIDAW configurations showed (Berger,

Bruni, 2005):

36% to 58% reduction in response times.

35% to 56% reduction in channel busy.

56% to 126% improvement in I/O throughput.

The MIDAW facility provides a CCW/IDAW structure that is much more efficient for

certain categories of data-chaining I/O operations. It coexists with the current IDAW facility.

Pre-MIDAW, end users were limited in the usability of IDAWs to straightforward buffering of

a sequential record. This was because the existing IDAW design allowed the first

IDAW in a list to point to any address within a page, while subsequent IDAWs (in the same list)

had to point to the first byte of a page. All but the first and last IDAW in a list were

forced to deal with complete 2K or 4K units of data (Artis, 2006b).
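The alignment rule just described can be illustrated with a small check (function names and addresses are invented for the sketch; only the page-boundary rule comes from the text):

```python
PAGE = 4096  # 4K unit; IDAWs may also operate on 2K units

def idaw_list_valid(addresses):
    """IDAW rule sketched in the text: the first IDAW may point anywhere
    within a page, but every subsequent IDAW must point to the first
    byte of a page."""
    return all(addr % PAGE == 0 for addr in addresses[1:])

def midaw_list_valid(midaws):
    """MIDAWs carry (address, count) pairs with no page-boundary
    restriction, which is what enables scattered reads and writes."""
    return all(count > 0 for _addr, count in midaws)

# Scattered buffers at arbitrary addresses: illegal as an IDAW list,
# fine as a MIDAW list.
addrs = [0x10010, 0x20040, 0x30100]
print(idaw_list_valid(addrs))                       # False
print(midaw_list_valid([(a, 512) for a in addrs]))  # True
```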

I/O data blocks that have a more complex internal structure (meaning portions of the data

block are directed into separate buffer areas, i.e. scattered read/write) may be processed using

CCWs with data chaining. Unfortunately, data chaining CCWs is inefficient in a modern z/OS

I/O environment for a variety of reasons. These reasons include control unit processing and

exchanges, use of switched fabric FICON networks, and the number of channel frames required.

z/OS extended format data sets use internal structures that typically are not visible to the

application program. These extended format data sets require scatter read/write operations.


Therefore, CCW chaining is required, producing less than optimal I/O performance. The rapid

growth in the use of extended format data sets is what prompted the development of the MIDAW

facility (Artis, 2006b). The scatter read/write effect of MIDAWs makes it possible to efficiently

send small control blocks embedded in a disk record to buffers that are separately used for larger

data areas within the record. MIDAW operations work on a single block, in the manner of data

chaining (not to be confused with CCW command chaining). The data is therefore sent to the

control unit by the channel in one block of data. For a FICON channel this means all the data is

associated with the same command information unit (IU). Also, please note that there is no

change to ESCON, FICON or control unit implementation. However, with MIDAW there is a

greater chance that more information units (up to 16), may be sent to the control units in a single

burst (Berger, Bruni, 2005).

2.4.3.3 System z9 multiple subchannel sets


This functionality tends to get confused with multiple Logical Channel Subsystems (LCS). It is

a new z9 feature and is not the same thing as LCS. Typically a subchannel represents an

addressable device. For example, a disk control unit with 20 drives uses 20 subchannels for base

addresses. The addressable device is then associated with a device number and the device

number is commonly (in error) known as the device address. Current architecture limits

subchannel numbers to four hexadecimal digits. These four hexadecimal digits provide 64K

addresses. This is known as a set. To date, IBM has reserved 1024 subchannels, which leaves

63K subchannels for general use. Starting with the z9-109 server the number of reserved

subchannels has been decreased from 1024 to 256 (Chambers, Hatfield, 2006). The rapid

development and release of new technologies such as global/metro mirror, GDPS-Hyperswap,

and Parallel Access Volumes (PAV) has made the limitation of 63K subchannels an issue for


larger zSeries installations. PAV provides a great example: a single disk drive with PAV often

consumes at least 4 subchannels on its own (representing a base address and 3 aliases).

What IBM has done with the z9, in an effort to solve this problem, is to allow sets of

subchannels (addresses) with the first iteration to be to include two such sets. Each of the two

sets provides 64K addresses. The first set is referred to as Subchannel set 0. This set still

reserves 256 subchannels for IBM use (an improvement from reserving 1024). So, an additional 768

subchannels are now available to end users. Subchannel set 1 allows all 64K addresses to be

used by the end user. The current z/OS implementation restricts Subchannel set 1 to disk alias

subchannels. Subchannel set 0 may be used for both base and alias addresses.

What multiple subchannel sets really do is provide more room for growth in I/O device

configuration. See Figure 33 below (Guendert, 2006). Moving the alias devices into the second

subchannel set has created additional space for device number growth. The storage attachment

capability of the z9 has been dramatically improved over its predecessors. For example, in the

largest case, using 3390 volumes with 54 GB/volume and 768 additional volumes, you could

have 41 Terabytes of additional disk storage addressability (i.e., 54 GB/volume * 768 volumes =

41 TB) (Wyman, Yudenfriend, 2004). Finally, please keep in mind that the appropriate

subchannel set number must be included in the IOCP/HCD definitions that produce an

IOCDS. The subchannel set number defaults to zero, and IOCP/HCD changes are only needed

when using subchannel set one.
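The arithmetic behind these figures can be laid out directly (only the computation is added here; the values are the ones cited in the text):

```python
SET_SIZE = 64 * 1024               # four hex digits -> 64K subchannels per set

reserved_pre_z9 = 1024             # earlier reservation ("63K" left for general use)
reserved_z9 = 256                  # the z9-109 reduces the IBM reservation
extra_subchannels = reserved_pre_z9 - reserved_z9
print(extra_subchannels)           # 768 additional addresses in set 0

usable_set0 = SET_SIZE - reserved_z9
print(usable_set0)                 # 65280 subchannels usable in set 0

# Largest-volume example: 54 GB 3390 volumes on the 768 extra addresses
# (decimal terabytes, matching the text's "41 TB").
print(round(54 * extra_subchannels / 1000))  # 41
```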


Figure 33. Multiple subchannel sets

2.4.3.4 System z9 self-timed interconnect (STI) enhancements


According to IBM’s formal definition, a self-timed interconnect (STI) is an interconnect path

cable that has one or more conductors that transmit information serially between two

interconnected units without requiring any clock signals to recover that data (Neville, White,

2006). The interface performs clock recovery independently on each serial data stream and uses

information in the data stream to determine character boundaries and inter-conductor

synchronization. On the System z9, an STI is an interface to the Memory Bus Adapter (MBA),

used to gather and send data.

Similar to the z990, the System z9 uses a book design, a centrally integrated crosspoint switch,

and a large shared L2 cache. See Figure 34 below (Guendert, 2006). The number of books on a

System z9 is currently 1-4 depending on the model. A book will contain the processors on a

multichip module (MCM), memory, L2 cache, ring connections to the L2 cache on other books,


and memory buffer interfaces for I/O connections. L1 cache is still part of every processor unit

(PU). PUs are on the MCM in a book, therefore L1 cache is also in every book. The System z9

book design does differ in some respects from the z990 book design. The L2 cache is a book-

level function (each book can contain up to 40MB of L2 cache memory). The L2 cache

interleaving access of the System z9 has been doubled over what was provided with the z990.

The L2 cache in the books is connected together via a ring design consisting of two rings going

in opposite directions. This design provides improved performance and redundancy. What this

design has resulted in is a “single” unified L2 cache extending across all (installed) books that

functions as a high performance interface between PUs and memory, both of which can now be

in any book (Chambers, Hatfield, 2006).

Figure 34. System z9 logical channel configuration

Each book in a System z9 contains up to eight I/O “fanout” cards. Each of these “fanout”

cards contains a Memory Bus Adapter (MBA) module. Each “fanout” card also contains two STI

ports. Therefore, there is a maximum of 16 STI ports per book. Each STI operates at 2.7 GB/sec.


You may recall that the z990 had a slightly different arrangement. The z990 had 3 internal

MBAs and 12 fixed STI ports per book. The z990 STIs also operated at 2.0 GB/sec. Figure 35

below illustrates the “fanout” arrangement (Guendert, 2006).

Figure 35. z990 vs. z9-109 fanout arrangement

The STIs are installed in pairs. IBM determines the correct number of STIs during the

configuration/ordering process to ensure that the needs of the installed I/O cages will be met. An

STI drives a domain in the I/O cage. A domain consists of four I/O feature slots (for example, 4

FICON Express2 channel cards). Two domains will have their STI links tied together for

failover purposes. Take note of the connection between the two multiplex

cards. This is the new redundant I/O interconnect functionality and is

illustrated in Figures 36 and 37 (Guendert, 2006).


Figure 36. Redundant I/O interconnect

Figure 37. Redundant I/O interconnect (2)

I/O cage domains are now paired together. The two STI connections to both domains in a

pair will provide automatic backup for each other. If one of the STI connections fails, the STI

connection belonging to the other member of the pair will handle all the I/O adapters in both

domains. Should this occur, there is the potential that the data rate of all the involved adapters

may overrun the bandwidth capabilities of the single STI connection. Obviously, this can delay

some I/O operations. The redundant I/O interconnect potential is best if the two STIs are


connected to different books. Redundant I/O interconnect also allows for STI elements to be

removed and replaced concurrently with server operation (Becht, Easton, 2007).
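The bandwidth concern after a failover can be sketched numerically (a minimal sketch; the function name and the per-adapter data rates are invented, while the 2.7 GB/sec STI rate comes from the text):

```python
STI_GBPS = 2.7  # each System z9 STI operates at 2.7 GB/sec

def failover_overcommit(domain_a_gbps, domain_b_gbps):
    """After an STI failure, the surviving STI of a domain pair carries
    the adapters of both domains; returns how far combined demand
    exceeds the single 2.7 GB/sec link (0.0 means no overrun)."""
    demand = sum(domain_a_gbps) + sum(domain_b_gbps)
    return max(0.0, demand - STI_GBPS)

# Hypothetical per-adapter data rates (GB/sec) for two paired domains:
over = failover_overcommit([0.8, 0.9], [0.7, 0.6])
print(round(over, 1))  # 0.3 -> some I/O operations may be delayed
```

This is why the redundant I/O interconnect works best when the two STIs of a pair are connected to different books: a failure then costs bandwidth, not access.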

2.4.3.5 Open exchanges


An open exchange is fibre channel terminology. Since FICON is part of the broad Fibre Channel

standards, open exchanges are an important topic. An open exchange represents an I/O operation

in progress over the channel. Unlike ESCON, which was limited to one I/O operation traversing

a channel at any given time, many I/O operations can be in progress across a FICON channel

simultaneously. FICON channels can simultaneously multiplex multiple data transfers from

multiple devices. Prior to the System z9, 32 simultaneous open exchanges were supported per

FICON channel. The z9 has increased this to 64 (Neville, White, 2006). This increases

performance. However, it has resulted in rethinking the planning for channel requirements.
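The effect of the open-exchange limit on channel planning can be sketched as a toy model (class and method names are invented; the 32 and 64 exchange limits come from the text):

```python
class FiconChannel:
    """Toy model of the open-exchange limit: pre-z9 FICON channels
    supported 32 concurrent open exchanges per channel; the System z9
    raised the limit to 64."""

    def __init__(self, max_exchanges=64):
        self.max_exchanges = max_exchanges
        self.open_exchanges = set()

    def start_io(self, exchange_id):
        if len(self.open_exchanges) >= self.max_exchanges:
            return False  # the I/O must wait for a free exchange
        self.open_exchanges.add(exchange_id)
        return True

    def end_io(self, exchange_id):
        self.open_exchanges.discard(exchange_id)

ch = FiconChannel(max_exchanges=64)
started = sum(ch.start_io(i) for i in range(80))
print(started)  # 64 -> the remaining 16 operations must wait
```

Contrast this with ESCON, where the equivalent model would allow exactly one open operation per channel at a time.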

2.5 Conclusion

It is quite clear in the literature reviewed for this study that ESCON was an outstanding

technology. IBM learned from its customers and made staged/phased enhancements to the

mainframe and mainframe I/O subsystem between 1964 and 1990 when ESCON was introduced.

IBM has done the same thing with FICON. The available literature suggests that FICON is a

better technology in terms of performance and IT resilience when compared with ESCON. This

literature also suggests that the most recent enhancements to the mainframe, mainframe I/O, and

mainframe storage are all much more beneficial to end users if they are running FICON instead

of ESCON.


CHAPTER 3

Introduction

This chapter will describe the techniques and systems used for doing the statistical performance

and financial analysis work done at several customers and clients between late 2004 and 2007.

The customers and clients are from the U.S. financial sector. The clients and customers will not

allow their firm’s or employee names to be used in any external documentation of these

engagements, due to concerns over their competition using the information against them.

This will include a discussion of mainframe I/O performance metrics, the systems used for

measuring them, and a discussion of how to use Intellimagic’s RMF Magic product for doing

analysis work. It should be noted that this will not be a comprehensive discussion of mainframe

I/O performance. Hundreds of pages of text and countless papers have been written on this

subject over the past 43 years; therefore this will be a brief summary intended to provide the

background knowledge necessary to understand how to technically justify a migration from

ESCON to FICON. The systems that were used to analyze client environments are the de facto

standard in the mainframe industry for performance analysis, and the Intellimagic RMF Magic

analysis tool is used by all major DASD vendors, as well as several consulting practices.

The chapter will conclude with a description of how to use the Excel based financial

justification tool the author developed for making the financial justification for customers. A

detailed guide to using this tool is included in Appendix A.

3.1 Systems Management Facilities and Resource Measurement Facility

System Management Facilities (SMF) and Resource Measurement Facility (RMF) are two of the

most often employed measurement facilities for analyzing z/OS I/O subsystems. System

Management Facility (SMF) is an integral part of the IBM OS/360, OS/VS1, OS/VS2, MVS/370,


MVS/XA, OS/390, and z/OS mainframe operating systems. Originally called System

Measurement Facility, SMF was created as a result of the need for computer system accounting

caused by OS/360. A joint committee of SHARE attendees and IBM employees specified the

requirements. IBM implemented the requirements with release 18 of OS/360 (Merrill, 1984).

The primary intent of SMF was to provide a standard source of measurement data for job

accounting (charging and costing). SMF records information for each performed task and

collects its records based on the various start and stop times associated with the task itself. SMF

provides a workload element’s perspective of measurement. SMF produces a variety of records

that summarize the resource consumption of steps, jobs, output spooling and other system

activities.

IBM introduced Resource Measurement Facility (RMF) in 1977 as selectable unit 20 of

MVS 3.7 to assist installations in the tuning and management of MVS systems. RMF is an

interval recording log based on both sampled and event related measurement data collected by

the operating system and the external data collector. RMF produces SMF record types 70

through 79. RMF was an evolution of the facilities provided by Measurement Facility One

(MF/1) in the first release of MVS (OS/VS2) in August 1974. RMF records information on the

overall state of the system at fixed intervals specified by the user. RMF provides the system’s

perspective of measurement (Artis, Houtekamer, 1993).

There are three types of time intervals used by RMF monitors for data gathering on z/OS

systems (Cassier, Korhonen, 2005):

1) Short term data collection with Monitor III.

2) Snapshot monitoring with Monitor II.

3) Long term data gathering with Monitor I and Monitor III.


Short term data gathering and reporting are defined in seconds or minutes. The default time

interval for Monitor III is 100 seconds. However, it can be changed to 60 seconds so each

reporting interval consists of 1 minute. Long term data gathering and reporting via Monitor I

and III is performed in 15 or 30 minute intervals (end-user specified). End users typically use

these records for long term reporting (days, months, years) for long term capacity planning.

Monitor II is defined as snapshot monitoring because during the reporting session you get a

report each time you press Enter (IBM, 2006).

Monitor I and Monitor III provide long term data collection information about system

workload and resource utilization. Monitor I and Monitor III also handle information regarding

all the HW and SW components of your system: processor, I/O devices, storage activities, and

utilization in addition to resource consumption activity and the performance of multiple groups

of address spaces. Data is gathered for a specific cycle time, and consolidated data records are

written at a specific interval time. The default value for data gathering is 1 second. The default

value for data recording is 30 minutes (Cassier, Korhonen, 2005). End users can set these

values according to their own requirements and change them as needed.

The analysis and interpretation of the data contained in the SMF and RMF records is a

complex activity. It is further complicated by the ambiguity of many of the provided

measurements. To further complicate things, the data contained in the records are often encoded,

contained in relocatable segments of the records, and/or simply represented by flags in bit strings.

As a result, the level of expertise required to decode the SMF and RMF records is often as great

as the expertise required to interpret the data they contain. This complexity is the primary reason

that users of SMF and RMF use a file reduction tool such as MXG, MICS, or Intellimagic’s

software.


The I/O activity of a z/OS system is described by RMF type 73, 74, and 78-3 records. These

records are the channel path activity (73), device activity (74), and I/O queuing (78-3) record

types. The primary record used to describe the activities of the individual devices is the RMF 74

device activity record. There is one type 74 record created for each device during each RMF

interval. RMF measures four response time components for each DASD device. These values

are called IOS queue time (IOSQ), pending time (PEND), disconnect time (DISC) and connect

time (CONN) (Artis, Houtekamer, 1993).

3.2 Performance analysis basics

The human end user view of a system’s performance is often highly subjective and difficult to

manage. It can even be emotional. However, what needs to be kept at the forefront of

performance discussions is that the system exists to meet the business needs of the end user.

Nothing more, nothing less. To match these business needs with the subjective perceptions, the

concept of the service level agreement (SLA) was introduced. The SLA is a contract that

objectively describes and enforces measurables (Cassier, Korhonen, 2005):

1) System availability metrics

2) Average transaction response time for CPU, networks, I/O, or total

3) The distribution of these response times

Performance analysis is the technique used to enforce the performance goals defined in SLAs.

There are two components involved in performance analysis (Jain, 1991).

1) Performance administration: Executed by the Service Level Administrator. The objective

is to define the rank of goals by the importance of the transactions. Following this

definition/rank process, the service level administrator analyzes the RMF reports to verify

the performance results. The service level administrator is responsible for defining the


installation’s performance goals based on the business needs. The explicit definition of

workloads and performance goals is called a service definition.

2) Performance management: Performance Management involves the following two tasks:

a. Allocating data processing resources (CPU, I/O, storage) to transactions

according to their SLA. Allocating resources implies determining the transaction

priority in the queues accessing such resources. This priority determines how

long the transaction spends in queues.

b. Monitoring is the task of using RMF to verify if the management objectives are

reached and then reacting accordingly.

In general, there are three ways to solve a performance management problem (a conflict

between SLA goals and reality) (Jain, 1991).

1) Buy: buy more resources. This is done far too often; the old "throw more hardware

at the problem" approach.

2) Steal: steal resources from less critical transactions by modifying priorities.

3) Tune: tuning your system makes for more effective and efficient use of existing

resources.

3.3 Response and service time basics

As CPU speed increases, I/O response time (I/O Tr) becomes increasingly the determinant factor

in the average transaction response time. The formula for I/O response time is (Cassier,

Kornhonen, 2005):

I/O Tr = I/O Ts + I/O Tw

where Tw = wait time and Ts = service time

Average I/O response time is one of the fields shown in the RMF shared device activity report.


I/O Tw = IOSQ + PEND

I/O Ts = CONN + DISC

I/O Tr = IOSQ + PEND + CONN + DISC

• IOSQ: the time spent waiting for the device availability on the z/OS operating system.

Implementing dynamic PAVs has proven to drastically reduce IOSQ.

• PEND (PENDing): Measures time from issuance of the SSCH instruction by z/OS to the

start of dialog between channel and the I/O controller. There are sub-components to

PEND:

1) CMR time measures how long the channel waits for the controller at the start of

the I/O dialogue. It is an indication of how busy the controller is.

2) Device busy time is an indication of how long a device is busy due to a hardware

reserve being issued by another system. Device busy time can be an explanation

for missing interrupt handling (MIH) messages.

3) SAP overhead is an indication of time the SAP needs to handle the I/O request.

• DISC (DISConnect) is the time that the I/O operation already started, but the channel and

controller are not in a dialog. Reasons for this include:

1. Read cache miss: the I/O block is not in the controller’s cache

2. Reconnect miss: the time to dynamically reconnect an ESCON channel.

3. Synchronous remote copy (PPRC for example): mirroring is done within

DISC time.

4. Sequential writes arriving faster than the controller can accept them into the cache.

5. Multiple allegiance or PAV with extent conflicts: previously reported as

device busy time (in PEND). Modern controllers have moved this to DISC.


6. CU busy: also previously reported in PEND. Also moved by modern

controllers to DISC time.

• CONN (CONNect) is the real productive time of an I/O operation. CONN is the time

when the channel is transferring data to/from the controller cache or exchanging control

information with the controller about one I/O operation. There is also still the overhead of the dialog protocol. For example, in sequential processing, if the number of SSCHs is decreased by better buffering, CONN time will consistently decrease due to the lower overhead, even though the same amount of data is transferred.

• Other I/O metrics

1) I/O traffic = I/O Ts * I/O rate. This measures (in ms/sec) the activity in a device excluding queue time; numerically it equals the utilization of the device in question.

2) I/O density = I/O rate per DASD space capacity. Measured in I/O/per

sec/Gbyte.

3) I/O intensity = I/O Tr * I/O rate. This measures (in ms/sec) the activity in the device including queue time.
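As a minimal illustration of these formulas, the following Python sketch computes the response time and the derived metrics. All numbers are hypothetical rather than taken from any RMF report, and I/O traffic is computed from service time (Ts), since it is defined to exclude queue time:

```python
# Illustrative numbers only; real values come from RMF device activity reports.

def io_response_time(iosq, pend, conn, disc):
    """I/O Tr = IOSQ + PEND + CONN + DISC (all times in milliseconds)."""
    return iosq + pend + conn + disc

def io_traffic(service_ms, rate_per_sec):
    """I/O traffic = I/O Ts * I/O rate (ms/sec); excludes queue time."""
    return service_ms * rate_per_sec

def io_density(rate_per_sec, capacity_gb):
    """I/O density = I/O rate per GB of DASD capacity (I/O per sec per GB)."""
    return rate_per_sec / capacity_gb

def io_intensity(response_ms, rate_per_sec):
    """I/O intensity = I/O Tr * I/O rate (ms/sec); includes queue time."""
    return response_ms * rate_per_sec

iosq, pend, conn, disc = 0.5, 0.3, 1.2, 0.8    # hypothetical component times (ms)
ts = conn + disc                               # service time: CONN + DISC
tr = io_response_time(iosq, pend, conn, disc)  # response time: adds IOSQ + PEND
print(tr, io_traffic(ts, 500), io_intensity(tr, 500), io_density(1000, 500))
```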

There are several techniques, both hardware and software for reducing I/O Tr. They include

(Cassier, Kornhonen, 2005):

Software:

1) Buffering

2) Data compression

3) Data striping

Hardware:


1) Faster channels

2) Faster device paths (adapters) to controllers

3) Larger, more efficient cache controllers

4) Faster (higher RPM) and smaller disks

5) Channel subsystem I/O priority from IRD

6) More DASD subsystem concurrency (PAVs).

3.4 Understanding ESCON and FICON channel path metrics

There are some unique aspects to ESCON and FICON channel path metrics and performance

measurement beyond the basics mentioned previously. It is also worth recalling two definitions

(Artis, Ross, 2004):

1) Channel: a physical entity incorporated in the processor. It is a common mistake to refer

to the entire path between the processor and storage subsystem as the channel.

2) Channel path: the fibre between the channel and the storage subsystem as well as the

interface adapter on the subsystem and any intervening FICON directors.

3.4.1 ESCON

ESCON provides a circuit switched unidirectional data transfer mechanism, i.e., once a data

transfer (CCW, status, or data) for an I/O from the channel to the subsystem (or subsystem to

channel) has begun, no other I/O operations can employ the channel until that transfer has

completed.

Figure 38. ESCON’s circuit switched data transfer mechanism


In figure 38 (Guendert, 2004), the channel sends data to the subsystem across an ESCON

channel path. Each ESCON link is comprised of a send/receive pair of fibers. Only one of the

fibers can be transmitting data at one time. During the data transfer process the second fiber is

used to transmit frames that acknowledge and request the transfer of data frames.

The figure shows that ESCON data frames do not necessarily employ all of the available data

transfer time (which is reported as CONN time). ESCON data frames are defined by the

ESCON protocol as device information blocks (DIBs). The maximum DIB size supported by the

ESCON architecture is 1024 bytes. Typically, about 75% of the reported CONN time represents

time periods during which the channel was occupied but not transferring data. Delays

experienced by transmissions from the processor to the subsystem are reported as PEND time.

Channel path delays experienced by an I/O attempting to reconnect from the subsystem to the

channel are aggregated into DISC time for ESCON channels. As discussed in Chapter 2, ESCON

channels retain the “mother-may-I” protocol employed by parallel (bus and tag) channels for

transferring channel programs from the channel to the storage subsystem. ESCON channels

transmit only a single CCW at a time to the storage subsystem. Once the subsystem has

completed the CCW it notifies the channel via a channel-end/device end (CE/DE) to present the

status of the prior CCW. As long as the status was normal, the channel transmits the next CCW

of the channel program to the storage subsystem (McGavin, Mungal, 2004).
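To make the DIB arithmetic concrete, this small sketch counts the DIBs an ESCON transfer is split into and the fraction of reported CONN time that actually moves data. The 1024-byte maximum DIB size and the ~75% overhead figure come from the text; the 27 KB example transfer size is a hypothetical value:

```python
import math

MAX_DIB_BYTES = 1024          # maximum ESCON DIB size, per the text
OVERHEAD_FRACTION = 0.75      # approx. share of CONN time occupied but not moving data

def dibs_needed(transfer_bytes):
    """How many DIBs (each individually paced) an ESCON transfer is split into."""
    return math.ceil(transfer_bytes / MAX_DIB_BYTES)

def effective_fraction():
    """Fraction of reported CONN time that actually transfers data."""
    return 1.0 - OVERHEAD_FRACTION

# A hypothetical 27 KB transfer is split into 27 separate 1 KB DIBs,
# and only about a quarter of the CONN time moves data:
print(dibs_needed(27 * 1024), effective_fraction())  # -> 27 0.25
```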


Figure 39. ESCON’s “Mother-May-I?” protocol

[Diagram: for each CCW, the ESCON channel sends the CCW to the control unit, the control unit issues the command to the device, and on device end the control unit presents CE/DE to the channel before the next CCW is sent.]

This “Mother May I?” protocol was initially employed for managing the “dumb” storage

subsystems (no memory to store CCWs) connected to S/360 era processors. This same protocol

was later employed by ESCON to support both dumb and intelligent storage subsystems. The

"mother-may-I" implementation presented two distance-related performance issues (Artis, Ross,

2004):

1) Each CCW experiences distance related delays as it is transmitted to the storage

subsystem.

2) ESCON's small DIB and data buffer sizes result in a substantial droop in effective data rate as the distance between the channel and storage subsystem increases. This phenomenon is known as ESCON droop (Cronin, 2002).


3.4.1.1 Classic ESCON definitions


1. PEND time-the time between the acceptance of the SSCH command by the channel

subsystem and the presentation of initial status for the first CCW. PEND time will be

non-zero if the channel, ESCON director, storage director, or device is busy.

2. DISC time-The portion of the remainder of the elapsed execution time of an I/O request, after PEND, when the channel is waiting on data transfer. Zero DISC time can be expected for I/O requests that do not require physical back-end device activity.

3. CONN time-The portion of the remainder of the elapsed execution time of an I/O request

after PEND when the channel is transferring data.

3.4.2 FICON

FICON is based on a 100 MB/sec bidirectional packet switched data transfer mechanism. IBM

has adopted faster data rates as the fibre channel specifications have continued to evolve. The

current technology is 400 MB/sec and is known as FICON Express4. 800 MB/sec FICON channel technology, known as FICON Express8, is expected to be announced by IBM in 2008.

FICON is an upper level protocol of the broader fibre channel specification. It allows for

multiple, bi-directional transfers (data, status, or channel programs) to be simultaneously active

over a single FICON link. Figure 40 illustrates this process (Guendert, 2004).


Figure 40. FICON packet switched data transmission


Idle Packets Other data packets Packets for this transfer

On
e Read 200 MBps
FI CON Write
Cha
nne

CONN time for an I/O operation begins when the processor channel receives an initial status

command response for the first CCW of the channel program and ends when the last packet for

the I/O arrives at the processor channel. Unlike ESCON, DISC time is measured by the storage

subsystem in a FICON architecture. The DISC time is eventually subtracted from the measured

CONN time (because to the processor, the entire time being measured is CONN time) for RMF

reporting purposes.
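A sketch of how these reported numbers fit together, assuming illustrative channel-side timestamps (these are not RMF field names): the channel sees everything after the initial status CMR as connected, and RMF subtracts the subsystem-reported DISC time to arrive at CONN.

```python
def rmf_reported_times(t_ssch, t_cmr_arrival, t_last_packet, subsystem_disc_ms):
    """
    Derive RMF-style PEND and CONN for one FICON I/O from channel-side
    timestamps (ms) plus the DISC time reported by the storage subsystem.
    The timestamp names are illustrative, not RMF field names.
    """
    pend = t_cmr_arrival - t_ssch             # SSCH accepted -> initial status CMR arrives
    raw_conn = t_last_packet - t_cmr_arrival  # the channel sees this whole span as connected
    conn = raw_conn - subsystem_disc_ms       # RMF subtracts the subsystem-measured DISC
    return pend, conn, subsystem_disc_ms

print(rmf_reported_times(0.0, 1.0, 5.0, 2.0))  # -> (1.0, 2.0, 2.0)
```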

The CONN time of a FICON I/O is a function of the number of packets to be transmitted, the

utilization of the path (the number of available idle packets), the ability of the subsystem to send

the packets, and the distance between the subsystem and the channel. CONN time therefore can

be thought of as a function of the number of concurrent I/Os that a storage subsystem is

processing. As the multi-programming level of the system increases, its service time can be

expected to elongate (Artis, Ross, 2004) In a FICON environment this is known as CONN time

elongation.

As discussed in Chapter 2, FICON utilizes the notion of “assumed completion” rather than the

“mother-may-I?” protocol mechanism for the transfer of channel programs to the storage

subsystem. All FICON compatible storage subsystems are considered to be “intelligent”.

Unlike ESCON, in FICON the entire channel program is transmitted to the subsystem at the start

of an I/O operation. After the storage subsystem completes the entire channel program, it

notifies the channel with a CE/DE transmission. Therefore, a typical channel program would


require two presentations of status from the subsystem to the channel. The first is when the

storage subsystem acknowledges the first CCW to end PEND time. The second is when the

subsystem has completed the entire channel program. FICON eliminates the countless “turn

arounds” of the ESCON protocol, and therefore can support distances of up to 100 km between

the channel and storage subsystem without droop.

Figure 41. FICON “Assumed Completion”

[Diagram: the channel sends CCW 1, CCW 2, and CCW 3 to the control unit back to back; the control unit executes each command against the device and presents a single CE/DE after the entire channel program completes.]

For more than two decades, I/O subsystem analysis has been based on device level PEND, DISC,

and CONN time as well as channel utilization metrics. FICON introduced a variety of new

measurements that describe the utilization of the channel as well as the read and write data

transfer rates. Three examples (Ross, 2001):

1. Read and write byte counts are not available for ESCON channels, they are a

FICON specific metric.


2. ∑ channel busy ≠ ∑ CONN time. This is a significant difference from ESCON. FICON CONN times can be overlapped, while ESCON CONN times are simply a measure of occupancy; FICON CONN times therefore cannot be summed to determine the average utilization of the channel connected to the subsystem.

3. Even when there is no I/O activity present, FICON channels will report a

utilization of 10-13%. This channel utilization percentage represents polling

activities which are always present regardless of I/O activity.

3.4.2.1 FICON director measurements


Coincident with the introduction of native FICON and FICON switching technology, a new

RMF record type was added to the RMF portfolio. This is the RMF 74-7 record, commonly

known as the FICON Director Activity Report. The RMF 74-7 record documents read/write

data rates, frame sizes, as well as buffer to buffer credit starvation induced frame pacing delay.

These measurements are facilitated via an API known as the FICON CUP (Control Unit Port).

There are four primary categories of data elements reported on for every FICON director (Artis,

Guendert, 2006):

1. Global data: this category consists of configuration change flags, the number of FICON

switches in the configuration, and descriptive data for the active I/O definition file

(IODF).

2. Connector data: this category describes elements such as channel and storage subsystems

which are connected to the FICON director.

3. Switch data: this describes the physical configuration of the FICON director.


4. Port data: this reports the average read/write frame sizes, aggregate bandwidth passing

through each port on the director, error counts, and frame pacing delay occurrences for

each port.

3.4.2.2 Graphical analysis of a FICON I/O


FICON measurement is based on both the channel’s and the subsystem’s perception of events

(Guendert, 2004). The following is a seven-step analysis of an individual FICON I/O.

1. The operating system passes a physical I/O request to the channel subsystem using a SSCH

command.

2. The channel transmits one or more packets to the storage subsystem. At this point an open

exchange number is assigned to the first packet of an I/O. This number is employed by all

subsequent packets for the I/O operation. The unique open exchange number allows the packets

for multiple simultaneous I/Os to be interleaved over a FICON path.

3. After the internal delays associated with the subsystem’s receipt of the channel program and

device number verification in the prefix CCW, the storage subsystem transmits a packet

containing an initial status CMR back to the channel.


4. When the CMR leaves the subsystem, the subsystem begins the accumulation of DISC time

as it processes the remainder of the channel program. The storage subsystem accumulates DISC

at any time when it is waiting on back end devices to promote/demote data to/from cache.

5. PEND time ends and CONN time starts when initial status CMR arrives at the channel.

PEND time includes one round-trip time from the channel to the storage subsystem.

6. CONN time starts when PEND time ends since the storage subsystem is responsible for

measuring DISC time in FICON.


7. When the storage subsystem completes the execution of the channel program it reports the

measured DISC time along with the status for the I/O operation in the last packet transmitted

from the subsystem to the channel for the I/O.
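The interleaving enabled by unique open exchange numbers (step 2 above) can be illustrated with a toy demultiplexer. The packet representation here is invented purely for illustration and does not reflect actual FICON frame layout:

```python
from collections import defaultdict

# Toy illustration of open-exchange interleaving on a FICON link.
# The packet format here is invented purely for illustration.
packets = [
    {"exchange": 17, "payload": "CCW chain"},
    {"exchange": 42, "payload": "CCW chain"},
    {"exchange": 17, "payload": "data 1/2"},
    {"exchange": 42, "payload": "status"},
    {"exchange": 17, "payload": "data 2/2"},
]

def demultiplex(packet_stream):
    """Group interleaved packets back into per-I/O exchanges by their number."""
    exchanges = defaultdict(list)
    for packet in packet_stream:
        exchanges[packet["exchange"]].append(packet["payload"])
    return dict(exchanges)

print(demultiplex(packets))
# -> {17: ['CCW chain', 'data 1/2', 'data 2/2'], 42: ['CCW chain', 'status']}
```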

3.4.2.3 Formal FICON definitions


“FICON PEND time is the time between the acceptance of the SSCH command by the channel

subsystem and the receipt of the initial status CMR from the storage subsystem. At a minimum

this time includes one round-trip turn around time over the link and the internal response time

required for the storage subsystem to present initial status for the prefix CCW” (Ross, 2001, p9).

“FICON DISC time is the portion of the service time that is not formally defined as PEND or

CONN time. At a minimum it includes the subsystem measured back end device delays

experienced for the I/O request” (Ross, 2001, p9).

“FICON CONN time is the data transfer time plus the propagation time from the subsystem to

the channel” (Ross, 2001, p9).

For FICON channels, the responsibility for measuring DISC time is assigned to the storage

subsystem rather than being a function performed by the channel. Zero DISC time can be

expected for I/O requests that do not require physical back end device activity.


FICON measurement of CONN time differs in three ways from the ESCON measurement of

CONN time. First, FICON eliminates the elongation of CONN time which was a direct result of

the “mother-may-I?” protocol employed by ESCON which caused ESCON droop. Second, at

high fibre utilizations a new type of elongation can result from interleaving delays. Third, if the

channel path incorporates a FICON director, frame pacing delays can make significant

contributions to the CONN times of channel data transfers that require a large number of FICON

frames (McGavin, Mungal, 2004).

3.4.3. Using RMF/CMF and RMF Magic to understand Disk Subsystem Performance

Disk Subsystems are very powerful computers themselves, which can process tremendous

amounts of information. Yet many installations struggle with disk performance management, as

their I/O workload grows just as fast as or even faster than the capabilities of hardware. This

section will review current disk subsystem architecture along with the performance management

issues, and show how new approaches to disk reporting using RMF Magic can help to avoid

unpleasant surprises in disk subsystem performance.

3.4.3.1 Visibility gap


Most installations do not have a daily understanding of their disk subsystem performance; they suffer from a visibility gap in this area. The visibility gap is not from a lack of data, but rather it

is due to the difficulty of obtaining meaningful visibility into all of the data. One very important

reason visibility into I/O performance is limited, is that the reporting methods commonly used

do not provide full views of both the front-end (MVS) and back-end (hard disk) performance. In addition, there is a lot of background activity. Some tasks are 'real-time', like read-miss processing, but most I/Os handled on the back end are actually more or less asynchronous:

read ahead and write destaging. Asynchronous copy services (XRC, Global Mirror, SRDF/A,


TrueCopy Asynchronous) add even more activity and potential delays. Also FlashCopy adds

overhead whenever a consistent copy needs to be made.

All this asynchronous activity is of course a very good thing, as it means that applications do

not have to wait for the housekeeping-type activities. The biggest risk from an operational point of view, however, is that the resources involved are not monitored with standard performance tools: the back-end drives or subsystem internal processors can be quite busy with hardly any impact on pending, disconnect, or connect time. Only when they saturate does the service time quickly explode. This is very different from older subsystems, where queuing on disks would show up in IOSQ and disconnect time. Figure 42 below shows the old and new performance curves schematically.

Figure 42. Response time as a function of utilization for old and new subsystems

This figure shows clearly why it is important to study the utilization of internal components of

disk subsystems. Lack of visibility into what happens inside means that you may suddenly see

unexpected performance degradation with minor increases in the workload.


3.4.3.2 RMF Data Collection on Disk Subsystems


Before discussing disk subsystem implementations in more detail, a review of the information

that RMF provides on I/O is in order. As RMF runs on each z/OS image, it provides first and

foremost the processor perspective on disk I/O in the record types 73 (channels), 74.1 (logical

devices), and 78.3 (logical control units). These records describe the view of one z/OS image on

the I/O subsystem; in order to get the cumulative numbers you need to add them across all

systems in the sysplex. As more and more of the work in I/O operations is moved down from

the z/OS systems to the disk subsystems, new record types were introduced to provide more

information from the hardware perspective: 74.5 records give caching statistics at the logical

device level, and 74.8 provide array group (rank) and link statistics at the physical device and

link level. Figure 43 shows three z/OS images with FICON directors and disk subsystems schematically, with the record types discussed.

Figure 43. RMF record types for various system components

[Diagram: three z/OS images connected through FICON directors to disk subsystems, annotated with the RMF record types: 72 (workload), 73 (channel), 74.1 (device), and 78.3 (I/O queuing) on the z/OS side; 74.7 (director) at the FICON directors; and 74.5 (cache) and 74.8 (link and HDD) at the disk subsystems.]


Unfortunately there is no common key between all these records, so consolidating them requires

a significant amount of coding and processing. For example, by combining information from the 74.1, 74.5, and 78.3 LCU records, one can allocate the activity from the channels across the

various disk subsystems, thus giving a good estimate for the read and write activity to each disk

subsystem.

The remainder of this section will show how intelligent interpretation of the information from

RMF records can be used to gain significant visibility into disk performance. It will show both

results from RMF printed reports and from the RMF Magic product developed by IntelliMagic.

3.4.3.3 Reviewing the Disk response time components


The RMF 74.1 records are still the most commonly used records to monitor disk performance.

They provide the well-known IOSQ, pending, disconnect, and connect times. Clearly, there have been

significant changes since these metrics were introduced. It is also clear that in particular

disconnect and connect time are hard to evaluate by themselves; it is very important to

understand what components contribute to disconnect time, and how efficient FICON channels

are. In the sections below we will discuss this in more detail.

A very important difference is also that while the RMF response time components originally

corresponded to certain operations of the physical drives (3390-3 images at the time), they are

now derived metrics for logical volumes.

3.4.3.4 RAID
Another complication in the analysis of disk subsystems is the use of RAID back ends, which provide very little instrumentation. Therefore, it is impossible to know exactly how busy each of the drives in the back end is, even though this is very important for the maximum throughput capability

of the disk subsystem. The general perception is that modern subsystems use large amounts of


cache, and therefore can avoid most I/Os. While it is true that cache does help to improve read

response times, caching is not nearly as effective in reducing back-end I/Os. This is because

sequential read I/O typically does result in pre-loading of the information from disk into cache; this will give a high read hit ratio, but the sequential reads do generate (asynchronous) disk I/O. Another reason is that while all writes are cached, the information written has to be destaged to disk. And with a RAID-5 architecture, every random-write update results in 4 back-end I/O operations! Sequential writes can be destaged more efficiently, but just as for reads, little optimization is possible. The result is that for many installations the maximum number of physical disk I/Os is very close to the number of logical disk I/Os.

Clearly it is very important to have a good understanding of the back-end I/O rate, as this will

determine the maximum throughput that your configuration can handle.
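Under the simplifying assumptions just described (one back-end read per read miss, the classic four-operation RAID-5 random-write penalty, and sequential effects ignored), a rough back-end I/O estimate might look like:

```python
def backend_io_rate(read_rate, read_hit_ratio, write_rate, raid5_penalty=4):
    """
    Rough back-end I/O estimate (front-end rates in I/O per second).
    Assumes one back-end read per read miss and the classic RAID-5 penalty
    for every random write: read old data + old parity, write new data +
    new parity = 4 operations. Sequential effects are ignored in this sketch.
    """
    read_misses = read_rate * (1.0 - read_hit_ratio)
    random_write_ios = write_rate * raid5_penalty
    return read_misses + random_write_ios

# Hypothetical workload: 3000 reads/s at a 75% hit ratio plus 500 random writes/s
print(backend_io_rate(3000, 0.75, 500))  # -> 2750.0
```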

3.4.3.5 Measuring back-end performance


IBM introduced limited measurements from the back-end of their subsystems starting with the

ESS architecture. Hitachi also supports these measurements for their XP1024/9980V and

XP12000/USP architectures.

In the ESS implementation, the counters are maintained only for z/OS ranks, and reported

through a new subtype of 74.5. Six counters are maintained:

• Read Response Time

• Write Response Time

• Read Operations / Second

• Write Operations / Second

• Read MByte/second

• Write MByte/second


It is not entirely clear from the documentation how these counters are normalized, in particular

whether they are ‘before’ or ‘after’ RAID. And this might even be different between the IBM

and HDS implementations of the counters! The counters provide not only information on

‘normal’ I/O operations and destaging, but also about PPRC secondaries and FlashCopy activity.

For these operations, the throughput numbers are most useful. Please note that this does require

that at least one device in the rank is online.

For performance analysis, and to find bottlenecks in the array groups, the Read Response

Time is most useful. As this includes queuing for the HDDs on the back end, it is a good metric for contention on these drives. As the HDD utilization is not reported, this is in fact the only hint at physical device utilization available through RMF. In a lightly loaded disk subsystem, the read

response time might be as low as 6 ms for a 15 k RPM drive, or 10-12 ms for a 10 k RPM drive.

In a heavily loaded subsystem, the read response time will be in the 50-200 ms range, indicating

that these devices are saturated (near 100% busy).

Figure 44 below shows a chart with the rank performance data. As each line represents the performance of all HDDs in an array group, this information is very useful for determining problem spots in the back end of the subsystem.


Figure 44. Sample RAID rank (array group) read response time reporting for one subsystem. Each line
represents one RANK or array group


When we use the back-end rate of the drives along with the response time, it is even possible to

estimate the HDD utilization of the individual array groups. Figure 45 shows this information for the same arrays as in Figure 44. Expressed as a utilization, it is even easier to understand the

information.
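One way to turn the observed read response times into such a utilization estimate is to invert a simple single-server queueing relation, R = S / (1 - ρ). The single-server (M/M/1-like) shape is an assumption, and the 6 ms unloaded service time is taken from the 15k RPM figure quoted above:

```python
def hdd_utilization_estimate(read_response_ms, base_service_ms=6.0):
    """
    Crude single-server estimate: if R = S / (1 - rho) then rho = 1 - S / R.
    base_service_ms is the unloaded read service time (about 6 ms for a
    15k RPM drive, per the text); the queueing model shape is an assumption.
    Returns an estimated drive utilization percentage, clamped to 0..100.
    """
    rho = 1.0 - base_service_ms / read_response_ms
    return max(0.0, min(rho, 1.0)) * 100.0

for response in (6.0, 12.0, 60.0):
    print(f"{response:5.1f} ms -> ~{hdd_utilization_estimate(response):.0f}% busy")
```

Note how a 50+ ms read response time maps to a utilization estimate near saturation, matching the observation above.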


Figure 45. Sample RAID rank (array group) HDD utilization reporting for one subsystem. Each line
represents one RANK or array group.


3.4.3.6 Understanding Disconnect time


Disconnect time is the portion of the I/O service time allocated to tasks other than transferring data across the channel. We have seen that the RMF manual still describes it in terms of physical disk access, but disconnect time is also caused by synchronous copy delays, cache/NVS-full conditions, and internal subsystem queuing.

A very good way to assess the health of your disk subsystems is to combine all information available from the various RMF records, and to see how disconnect time is made up.

The three main components are disk access for read-miss, synchronous copy delays for writes,

and ‘other’. The ‘other’ category may include cache/NVS bypass, pacing and other internal

subsystem delays. For lightly loaded subsystems it should be small.

• The read-miss component is estimated as

t_disc_read_miss = (%read) * (%read miss) * (HDD service time).


The read and read miss probabilities are available from the RMF 74.5 cache records, and

the service time from the 74.5 rank section.

• The synchronous copy component is estimated as

t_disc_synccopy = (%write)*(Sync setup overhead + sync data transfer)

The setup overhead is mostly a function of the microcode implementation, and the data

transfer time depends on the volume of the transfer and the speed of the synchronous

copy links.

• Whatever is left, will be put in the ‘other’ category. Typically, we find that this number

remains very small until some internal processing limit is hit.
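The breakdown above can be sketched as follows. All input values are hypothetical, and the component formulas follow the estimates given in the bullets, with 'other' as the remainder:

```python
def disconnect_breakdown(total_disc, pct_read, pct_read_miss, hdd_service,
                         pct_write, sync_setup, sync_xfer):
    """
    Split a measured DISC time (ms) into the three components described
    above. Fractions are 0..1; overheads and service times are in ms.
    Whatever remains after the two estimated components goes into 'other'.
    """
    read_miss = pct_read * pct_read_miss * hdd_service
    sync_copy = pct_write * (sync_setup + sync_xfer)
    other = max(0.0, total_disc - read_miss - sync_copy)
    return {"read_miss": read_miss, "sync_copy": sync_copy, "other": other}

# 3.0 ms total DISC; 70% reads with a 20% miss ratio on 8 ms drives;
# 30% writes with 0.5 ms setup + 2.5 ms transfer on synchronous copy links:
print(disconnect_breakdown(3.0, 0.70, 0.20, 8.0, 0.30, 0.5, 2.5))
```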

Figure 46. Breakdown of disconnect time. Note that during the night batch window, the ‘other’
category is larger, as some internal subsystem components operate near saturation.


For EMC subsystems, which do not report the HDD service time, an estimate based on the drive technology and expected drive utilization can be used to determine the read-miss component, or the


logic can be used the other way around, estimating the HDD service time from the disconnect time. Figure 47 below shows an example of that approach.

Figure 47. Estimated average back-end service time for EMC subsystem. Note that this is the
average value, individual disks may have much higher response time values


3.4.3.7 RMF Magic and FICON performance


Host based channel reporting is not easily used to find out how heavily disk subsystem channels

are used. One reason is that the disks are usually shared between systems, and that each host

channel connects to multiple disk subsystems through FICON directors. This mapping can be

identified from the LCU information (78.3 records), such that channel activity can be mapped to

disk subsystems. Alternatively, FICON director reporting (74.7) could be used to determine the

traffic on each disk subsystem channel.

The most significant problem from an analysis point of view is that no disk subsystem can

fully utilize the capabilities offered by FICON channels. For example, when a disk subsystem

like an ESS800 or HDS 9980V is connected with 8 channels to directors, the maximum data rate


from a theoretical FICON perspective would be 8 * (200 Mbyte/s reads + 200 Mbyte/s writes),

or 3200 Mbyte/s. In reality, these subsystems will be saturated around 400 Mbytes/second with

most workloads, especially when copy services are also deployed on them. They simply do not have the horsepower to fully drive all channels. When more work is added, the only result is increased connect or pending time. Figure 48 below shows a typical XY scatter chart with

throughput and service time.

Figure 48. Scatter chart showing throughput and service time: as the throughput for the whole subsystem
increases, so does the service time for individual I/O operations

[Chart: XY scatter of service time, PEND+DISC+CONN in ms (y-axis 0 to 10), versus total subsystem throughput in MByte/sec (x-axis 0 to 500).]
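A minimal sketch of this point, using the ~400 Mbyte/s ESS800-class ceiling quoted above and illustrative traffic numbers, shows why link-based utilization is misleading:

```python
# Sketch: per-link FICON utilization versus the realistic aggregate ceiling
# of the disk subsystem. The 400 MB/s ceiling and 350 MB/s sample rate are
# illustrative assumptions, not measured values.

LINK_MBPS_EACH_WAY = 200   # 2 Gbps FICON: 200 MB/s reads plus 200 MB/s writes

def utilizations(n_links: int, aggregate_mbps: float, ceiling_mbps: float):
    theoretical = n_links * 2 * LINK_MBPS_EACH_WAY   # full-duplex theoretical max
    return aggregate_mbps / theoretical, aggregate_mbps / ceiling_mbps

link_util, box_util = utilizations(n_links=8, aggregate_mbps=350, ceiling_mbps=400)
print(f"link-based: {link_util:.1%}, subsystem-based: {box_util:.1%}")
# Link-based utilization (~11%) looks healthy while the subsystem itself
# is close to saturation (~88%), which is where service time climbs.
```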

Later generations of these subsystems are faster, but still not nearly capable of driving the FICON channels at 200 Mbyte/s each way; neither, for that matter, are the FICON interface processors. The only useful approach is therefore to monitor the aggregate data rates to the disk subsystems and to relate those to the number of FICON links on the disk subsystem and the subsystem type.


Another approach found to be very effective for monitoring the health of a FICON disk subsystem is to plot the “effective data rate,” that is, the average data rate in Mbyte/s during the connected period. Because of protocol overhead and channel program decoding time, which are included in connect time, this metric is always less than 200 Mbyte/s, although for long transfers with very fast channel interfaces it can come close. Most workloads, however, show values in the 30 to 50 Mbyte per second range.

Figure 49. Effective data rate for three disk subsystems. Note that during periods of very high activity,
individual I/O operations are running at very low data rates

[Chart: effective channel data rate in MB/s (y-axis 0 to 70) over time for three disk subsystems: DSS-TWO, DSS-THREE, and DSS-FOUR.]

The interesting property of this metric is that when the channels, host adapters, or disk subsystems are overloaded, the effective data rate drops, indicating “stress” on the channels.
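The metric itself is simple to compute from per-I/O averages; the transfer size and connect time below are illustrative:

```python
# Sketch: the "effective data rate" described above -- average MB/s moved
# during the connected period. A falling value under rising load signals
# stress on the channel, host adapter, or subsystem.

def effective_rate_mbps(bytes_per_io: float, connect_ms: float) -> float:
    """MB/s while actually connected: bytes transferred over connect time."""
    return (bytes_per_io / 1e6) / (connect_ms / 1e3)

# A 32 KB transfer taking 0.8 ms of connect time -> ~41 MB/s,
# squarely in the 30-50 MB/s range typical of most workloads.
print(round(effective_rate_mbps(32768, 0.8), 1))
```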

Finally, in rare cases your workloads may hit the open exchange limit on FICON channels. It

is our experience that this only occurs when subsystems have extremely poor cache hit ratios, or

when other components are already saturated, causing elongated pending, connect or disconnect

time for other reasons.


3.5 Making the business case for a FICON migration

The business case is one of the most important, yet misunderstood and underutilized resources

available to ensure successful selling of large scale IT infrastructure projects such as an ESCON

to FICON migration or FICON technology refresh.

A business case is an analysis describing the business reasons why (or why not) specific

investment options should be selected. A business case identifies the most relevant decision

factors associated with a proposed investment, assesses their likelihood, and computes their

value. Value includes both quantifiable and non-quantifiable considerations. The findings of the

business case are then presented to the decision makers for their selection or rejection of the

recommendations developed by the business case authors. The premise of the business case is

that the investment option with the best cost-benefit payoff, related to all alternatives, should be

selected.

Use of business cases for justifying large scale IT projects has become increasingly common

during this decade. In an era of ever increasing demands for CIO budget dollars, but flat CIO

budgets, projects compete with each other internally for those budget dollars. The projects that

have the best business cases are the projects a company will undertake for a given budget year.

Some aspects of an information system may produce hard or tangible benefits which will

directly improve the performance of the firm, such as reducing costs, and will therefore be seen

in the accounts of the organization as an improvement in profit and perhaps in return on

investment (ROI). These benefits are of course relatively easy to identify and to quantify both in

physical terms (number of people employed or widgets used) and in financial terms (money

saved or earned). But other aspects of an information system may only create soft or intangible

benefits. These intangible benefits, while they might improve the general circumstances of the staff and thus make life easier in the organization, will not directly lead to identifiable performance improvements, and as such will not be easily seen in the company accounts.

Although it is difficult to be precise about their actual value, especially in financial terms,

intangible benefits can make a critical contribution to the success of an organization and success

of an IT investment.

This section is intended to serve two purposes. The first purpose is to briefly introduce some

concepts on making a total business case to help you sell FICON solutions to mainframe

customers. The second purpose is to explain to you how to use a tool that was developed to help

you make the financial justification component (tangible benefits) in your ROI based business

case.

ROI based business cases naturally complement and reinforce several powerful business

practices in widespread use today. These include (Gardner, 2000):

• Balanced Scorecard (Kaplan and Norton’s globally popular performance measure

methodology for aligning organizational effort to enterprise strategies): Business cases

can benefit from usage of the Balanced Scorecard value categories of “financial”,

“customer”, “internal processes,” and “employee learning and growth.” Similarly,

Balanced Scorecard users can benefit from a more reliable and tightly integrated business

case process for identifying the true business value of strategic options which the

Balanced Scorecard is helping to identify and measure.

• IT Portfolio Management (a methodology for optimizing IT investment value by

managing multiple IT projects for the greater good of the entire enterprise). Business

cases are more effective when they recognize and account for portfolio management

concepts. In addition, IT portfolio management proponents will find that having a



stronger business case-grounded IT investment selection process will further enhance

portfolio management benefits.

• IT Value Management (the process of identifying key value drivers that impact the

business payoff from IT investments, then managing those to assure each forecasted

payoff actually occurs). Using business cases as the official conveyor of value

expectations throughout a project’s life cycle provides the strong foundation upon which

good IT value management depends.

• IT Project Management (the process of assuring that an IT project implementation

achieves its planned goals, via such activities as organizing, planning, monitoring,

controlling, and correcting implementation resources and tasks). Business case methods

are a key enabler of effective project management, provide the discovery of IT value

potential and then provide the baseline for tracking value realization.

The Brocade FICON Migration ROI scenario modeling tool is designed to assist Brocade

personnel in providing an economic and financial justification to their clients/customers for two

scenarios. The first scenario is for clients who are considering migrating from a predominantly

ESCON environment to a FICON environment. The second scenario is for clients/customers

who already run a predominantly FICON environment but are looking at a technology refresh of older FICON hardware to newer-generation hardware.

The tool provides a financial justification for all aspects of the migration, i.e., the financial

justification is not solely for the Brocade hardware, software, and services inherent in the

migration. The justification is comprehensive and also includes the hardware, software, and

services from the OEM partner you are working with. For example, if part of the FICON

migration in a technology refresh scenario includes new DASD arrays or tape drives in addition

to new FICON directors, the ROI model takes into account the DASD arrays and tape drives. If

the client/customer needs to do a cable plant upgrade as part of a migration from ESCON to

FICON, the tool includes that as well. In other words, you can help your OEM partners justify

the sale of their disk, tape, virtual tape, and mainframe upgrades in addition to justifying

Brocade’s directors, switches, and extension technology.

The tool produces a customized cost savings and revenue improvement analysis achievable

through the implementation of a new FICON environment. In addition, the tool calculates the

four key financial metrics that CXOs would consider when making an investment decision—

Return on Investment, Net Present Value, Payback Period, and Internal Rate of Return.
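As a sketch of how those four metrics are computed (with made-up cash flows; the tool itself implements them in Excel):

```python
# Toy project: $3,000 spent today, monthly benefits over six months,
# 10%/year cost of capital. Only the formulas are the point here.

def npv(rate: float, flows: list) -> float:
    """flows[0] is the initial (negative) investment in period 0."""
    return sum(cf / (1 + rate) ** i for i, cf in enumerate(flows))

def roi(flows: list) -> float:
    """Simple (undiscounted) return on the initial investment."""
    invest = -flows[0]
    return (sum(flows[1:]) - invest) / invest

def payback_months(flows: list) -> float:
    """Months until cumulative (undiscounted) cash flow turns positive."""
    cum = flows[0]
    for month, cf in enumerate(flows[1:], start=1):
        if cum + cf >= 0:
            return month - 1 + (-cum) / cf   # fractional month
        cum += cf
    return float("inf")

def irr(flows: list, lo=0.0, hi=1.0, tol=1e-9) -> float:
    """Bisection on the per-period rate at which NPV = 0."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    return lo

flows = [-3000, 1000, 500, 450, 500, 450, 1000]
print(round(npv(0.10 / 12, flows), 2))   # NPV in today's dollars
print(round(roi(flows), 3))              # 30% simple ROI
print(round(payback_months(flows), 1))   # 5.1 months
print(round(irr(flows), 4))              # monthly IRR, roughly 8% per month
```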

This ROI scenario modeling tool has been developed in Excel and is very customer friendly and

easy to use. This user’s guide will show you how to successfully use and understand each section

of the ROI scenario modeling tool and should help you feel comfortable while conducting an

ROI consultation with your client. The last section of this guide walks you through a complete

example scenario of using the tool, including screen shots from the ROI example.

3.5.1 Background on why this tool was developed

This tool initially was developed to use with clients/customers on justifying their migration from

ESCON to FICON. A major factor in delaying a migration from ESCON to FICON is the initial

cost of entry, including the purchase of new hardware and infrastructure. One thing that was

difficult for mainframe end users to recognize and quantify was the cost savings associated with

the elimination of some ESCON infrastructure, as well as their ability to leverage other existing

ESCON hardware by attaching it to the FICON network.

When performing a comprehensive technical evaluation of each protocol, FICON proves to

be a major technological improvement over ESCON. This was true in 1998/1999 at its inception,


and is even truer in 2007 with the advent of 4 Gbps technology, cascaded FICON, and

FICON/FCP intermix. The key to the successful sale of an ESCON to FICON migration is

translating this technical improvement into TCO savings. The technical advantages that

contribute directly to business and/or cost advantages include: FICON’s greater addressing limits,

improved bandwidth and I/O capabilities, improved distance, and improvements made to the

protocol itself allowing for better response times.

There are six primary business benefits realized when migrating from ESCON to FICON

(Guendert, Seitz, 2004):

1) The technical advantages of FICON over ESCON enhance overall performance in the

mainframe environment, meaning more work can be performed in less time. More

transactions can be processed in a given amount of time, or looking at it another way, less

time goes by between processed transactions. This has been extremely important in the

financial sector and is one of the primary reasons why that industry moved to FICON en

masse. In the financial sector, the speed with which transactions can be executed is directly tied to a firm’s ability to generate revenue, so moving to FICON can translate directly to

additional revenue and profit gains.

2) FICON enhances overall enterprise resiliency and disaster recovery planning based on

the extended distances that the protocol supports. The bandwidth capability of FICON

enables faster recovery over those distances. The faster performance of FICON also

allows an enterprise to better meet its Recovery Point and Recovery Time Objectives

(RPO and RTO respectively). FICON’s improved performance compared with ESCON

enables DR site disk volumes to be addressed more rapidly, and an environment to be


brought on line from a cold start more rapidly. When recovering from tape, FICON

significantly speeds up the recovery process.

3) FICON enhances a business’ accessibility to data with its higher addressing limits,

which translates to more disk volumes being accessible to a given channel path. FICON

also allows a business to take full advantage of storage technologies such as dynamic

parallel access volumes (PAVs), the z9’s MIDAW facility, and HyperPAVs.

4) FICON protocol has room for growth (16,000 addresses supported today with available

growth to 65,000 addresses), and as such, allows a business to be better prepared for

internal growth, mergers and acquisitions, or consolidation.

5) The cumulative advantages of FICON present businesses with an opportunity to

consolidate the IT infrastructure in terms of the overall footprint of the mainframe and

mainframe storage environments, as well as perhaps the total number of data centers. For

those businesses looking to build new data centers for disaster recovery, the cost savings

of building out these data centers with a FICON storage network infrastructure for the

mainframe is significantly lower than building it with ESCON. The primary reason for

this is the storage consolidation FICON enables.

6) FICON and FCP intermix allows a business to better utilize IT budget dollars targeted for

storage networking infrastructure. z/OS and Linux can now be supported on the same

mainframe footprint. FICON and open systems SAN networks can leverage the same

directors. Using a common infrastructure for all storage connectivity provides significant

opportunity for cost savings to be realized.

These business benefits for your client/customer need to be quantified and balanced against

the following costs associated with a migration from ESCON to FICON (Guendert, 2005e):


1) FICON cabling: The end user will need to either a) put in a new cabling infrastructure

(9/125 micron long wave single mode fiber is recommended) or b) leverage their

existing ESCON cabling via the addition of mode conditioning patch (MCP) cables. Use

of MCP cables will restrict you to a 1 Gbps FICON network; therefore, investing in single

mode infrastructure is strongly recommended. Either alternative will be a cost to

consider.

2) FICON DASD: Most businesses that migrate to FICON do not do so just for the

connectivity advantages - generally, there is a more significant driving factor. Often, the

key driver is that older ESCON DASD is coming off of lease, or a maintenance contract

is nearing expiration. These businesses typically will not invest in new DASD array(s)

and then continue to use ESCON for attachment; they will want to take advantage of their

new DASD technology and use FICON for connectivity.

3) FICON tape drives/libraries/virtual tape: Similar to DASD above.

4) FICON directors: Larger mainframe environments moving to FICON will want to

purchase FICON directors to replace their ESCON directors. Direct attachment of

FICON only makes sense for the smallest of FICON environments (i.e. 1 host, 1-2 DASD

frames). FICON directors, however, can be a significant cost in the equation.

5) FICON processors (possible): Pre-9672 S/390 G5 mainframes are not FICON capable.

One of the primary drivers for an ESCON to FICON migration is replacement of older

processors with newer machines, such as the System z9.

6) FICON controllers: If you have existing ESCON DASD arrays or tape drives and the

leases have not yet expired, the following options are available. 1) You can leave them

as ESCON. 2) If the customer is not using the System z9 processors, you can implement


FICON bridge cards (FCV) into the model 5 ESCON directors, and run your FICON at 1

Gbps. 3) You can upgrade their controllers to FICON. Or, 4) you can implement protocol

converter technology that converts native FICON to ESCON for attachment of legacy

devices to FICON mainframe channels. This technology will allow a customer to

migrate their mainframes to FICON, run native FICON channels into the converter (via a

FICON director if desired), and run ESCON channels out of the converter to their

existing ESCON storage.

For clients/customers who are already running a predominantly FICON mainframe

infrastructure, most of these same principles apply. As we proceed with this guide, and as you go through the spreadsheet and follow the example scenario, you will see how easily this tool can be used for FICON technology refreshes in addition to ESCON to FICON migration scenarios.

3.5.2 Brief Overview of the tool

The ROI Tool consists of seven modules that can be easily accessed by clicking on a navigation

bar, quite similar to one that you would find on any website. The navigation bar at the top of

each screen will tell you where you are at any given time by showing a modified tab. Each module

of the tool performs a specific function and a detailed description of each one is summarized

below:

- Module 1: Home: The ROI Modeling tool starts up on this screen, and it provides a quick

overview of the specific offer and a place for you to enter the customer’s name and your

name and contact information. You should fill in these fields before proceeding, as this

information is used throughout the tool and on the printable reports page as well. When


you’ve entered this information, you should click “START” which will automatically

move you to the Input screen.

- Module 2: Input: This screen is where you will enter your customer’s specific operational

information that is critical for estimating the benefits and return on investment they could

expect from upgrading to a FICON solution. You will usually enter this information as

part of a conversation or input gathering session with a prospective customer. There is a

significant amount of guidance that is provided on this screen as well as sample data for

you to use as a benchmark for your own Input process.

- Module 3: Investment: The third section allows you to input both projected startup, and

ongoing costs for your customer, as well as any capitalization of the cost that will be

needed.

- Module 4: ROI Summary: This is the fourth section of the ROI scenario modeling tool,

and here you see a summary of your customer’s expected return as measured by four key

financial metrics, estimated benefits year-by-year as well as the projected costs for

deploying a FICON solution.

- Module 5: Benefit Detail: The fifth section of the tool is a worksheet where all the

benefits are calculated. This screen allows you to adjust the assumption factors for each

year (see Section 3 for more information about what these mean) and see how each

benefit is calculated.

- Module 6: Assumptions: This is the sixth section of the ROI scenario modeling tool.

Here you can modify each impact assumption. It also includes a “risk adjustment” section

where you can adjust the benefits upward or downward depending on your own intuition,


customer expectations, and unforeseen risks that might impact the benefits each year of

the proposed contract. Of particular note is a productivity adjustment factor that you and

the user can use to modify the gross savings from freeing up labor hours to do other

activities.

- Module 7: Printed Report: This screen provides a summary of the ROI analysis and a

number of different “views” of the results that have been developed in conjunction with

conversations with prospective customers and internal reviews to provide your prospects

with the perspective they need to see the benefits. You can print reports from this screen,

and you can select from a variety of report options.


3.5.3 Understanding the key financial metrics: ROI methodology overview

The objective of any information technology systems development in business is to increase the

wealth of shareholders by adding to the growth premium of their stock. Ideally, the increase

achieved should be the maximum obtainable. Maximizing shareholder wealth consists of

maximizing the value of the cash flow stream generated by operations, specifically those cash

flows that are generated by future investment in an information technology system.

Underlying the tool is a basic, fundamental economic concept called “opportunity cost.” This

concept essentially states that the real economic cost of any activity, whether it’s buying shoes at

the department store or automating the budgeting and forecasting processes of the organization,

is the cost of the next best “opportunity” that provides the same or “next best” function of the

product or service in question. Estimating the economic value of moving to a FICON

environment therefore requires not only the price the customer will pay for the new FICON hardware, but also the cost of continuing to operate the existing ESCON hardware and/or the older existing FICON hardware.

This may seem like common sense at first, but one of the first questions many of your customers

might ask is, “how much does it cost?” Using the methodology demonstrated in the Brocade ROI

scenario modeling tool, this question should really be phrased, “how much is it worth?” By

turning the conversation into one about value creation for your customers, instead of a

conversation about pricing and costs, you can be much more effective in today’s hyper-

competitive marketplace as well as prove to your customers just how valuable their relationship

with you, and Brocade, really is.


Financial Metrics

The financial metrics used in the ROI scenario modeling tool are commonplace in financial

organizations in any modern corporation. They are time-tested and well understood by CFOs,

financial and business analysts, and are basic methods for calculating the value of business

activities and investment projects (Money, Remenyi, 2000).

Discounting: The Time Value of Money

The financial metrics shown on the ROI Summary page of the Brocade tool utilize the same

concepts as those used by investment banks and consulting firms, valuation experts and financial

decision makers. Central to these metrics is the concept of the “time value of money.” This

concept essentially means that “a dollar tomorrow is less valuable than a dollar today,” or, as popular folk wisdom has it, “a bird in the hand is worth two in the bush.”

To equate the value of dollars or economic benefits received tomorrow to the value of dollars

today, financial professionals use a process called “discounting.” Future benefits are

“discounted” to make them equivalent to dollars spent or received today. Once discounted, the

“present day” equivalent value of a future cash flow or financial benefit is called “present value.”

It is not important that you fully understand the mechanics of discounting or how to calculate

present values, since the ROI scenario modeling tool will do that for you. You should be able to

interpret these metrics for a customer in terms that they can easily understand and communicate

to their organization. However, below is a description of how to calculate present values that is

central to understanding three of the four ROI key performance metrics in the Brocade FICON

Migration ROI scenario modeling tool. The better your understanding of these metrics, the more convincing a case you will make.


The mathematical formula for generating the present value of a stream of “benefits” or cash flows is as follows:

    Present Value = Σ (i = 1 to m) [ Cash Flow_i / (1 + Cost of Capital per period)^i ]

where m is the number of periods over the forecast horizon. If the cash flow in each period is factored out, the formula becomes:

    Present Value = Σ (i = 1 to m) [ Cash Flow_i × 1/(1 + Cost of Capital per period)^i ]

The second term, 1/(1 + Cost of Capital per period)^i, is called the discount factor, and is the number that converts a cash flow or benefit in period i to today’s dollars. This number is always less than one, so mathematically this fits with our description of future dollars as being less valuable than today’s dollars. Using these formulas in a simple spreadsheet, Table 8 shows how to calculate the present value of a series of future cash flows over a six-month period:


Table 8. Example of discounting future cash flows

Cost of Capital per Year: 10.00%
Cost of Capital per Month: 0.83% (10.00% divided by 12)

Date of Cash Flow     Month #   Cash Flow   Discount Formula   Discount Factor   Present Value
January 31, 2003         1        $1,000     1/(1+.83%)^1           0.9917           $991.74
February 28, 2003        2          $500     1/(1+.83%)^2           0.9835           $491.77
March 31, 2003           3          $450     1/(1+.83%)^3           0.9754           $438.93
April 30, 2003           4          $500     1/(1+.83%)^4           0.9673           $483.67
May 31, 2003             5          $450     1/(1+.83%)^5           0.9594           $431.71
June 30, 2003            6        $1,000     1/(1+.83%)^6           0.9514           $951.43

Total Value of All Cash Flows: $3,900     Total Present Value on December 31, 2002: $3,789.25

As you can see, even though the total of all the cash flows is $3,900 over the six-month period, the present value on December 31, 2002 is only $3,789.25. If we raise the cost of capital to, say, 12%, the present value drops to $3,767.71. You can also see that a cash flow received sooner, such as the $450 benefit received on March 31, has a higher present value ($438.93) than an equivalent cash flow received two months later on May 31 ($431.71).
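The same discounting arithmetic can be reproduced in a few lines:

```python
# Discounting six monthly cash flows at a 10% annual (0.83% monthly)
# cost of capital, as in the worked spreadsheet example above.

monthly_rate = 0.10 / 12
cash_flows = [1000, 500, 450, 500, 450, 1000]   # January through June 2003

present_values = [cf / (1 + monthly_rate) ** m
                  for m, cf in enumerate(cash_flows, start=1)]

print(round(present_values[0], 2))    # January: 991.74
print(round(sum(present_values), 2))  # total present value: 3789.25
```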


The concept of present value is central to understanding the key ROI metrics as outlined below.

First, however, it is crucial that you understand where the cost of capital comes from, and how to

estimate a value for it.

Cost of Capital

The cost of capital is an input on the Input screen, and it is the “discount rate” at which all future

benefits are discounted to convert them into today’s dollars. In many basic business applications,

the “weighted average cost of capital” is used, and this is the weighted average of all the rates

that investors and creditors expect to get from supplying capital (funds supplied in the form of

stock purchases, loans, bond purchases, etc.) to the company (Money, Remenyi, 2000).

The formula for the weighted average cost of capital for a company with only debt and equity (common stock) on its balance sheet is as follows:

    Weighted Average Cost of Capital = (D / V) × kD + (E / V) × kE

where:

    V  = Total market value of the company
    D  = Market value of the company’s debt
    kD = Expected rate of return on the firm’s debt
    E  = Market value of the company’s equity (common stock)
    kE = Expected rate of return on the firm’s equity

Remember that for this example, the total market value of the company (V) is equal to the value

of its debt (D) plus the value of its equity (E). For a company with more than these two types of


capital, such as preferred stock (which in many ways is more like debt than stock, and in other

ways is more like stock than debt, depending on the company’s legal and ownership structure),

then the weighted average cost of capital should also include the weighted rates of return for

these sources of funds as well.
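A minimal sketch of the two-source formula, with made-up market values and expected returns:

```python
# Weighted average cost of capital for a firm financed only with debt and
# equity, per the formula above. The inputs are illustrative assumptions.

def wacc(debt: float, equity: float, k_debt: float, k_equity: float) -> float:
    total_value = debt + equity                  # V = D + E
    return (debt / total_value) * k_debt + (equity / total_value) * k_equity

# $400M of debt expected to return 6%, $600M of equity expected to return 12%:
print(f"{wacc(400, 600, 0.06, 0.12):.1%}")   # 9.6%
```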

The issues associated with estimating the true cost of capital for a company or a specific project are very complex (not to mention analyzing most large companies’ capital structures), and there is more than one way to determine an appropriate cost of capital. If an estimate can’t be supplied directly by your customer (via his or her finance department), then one of the following rules of thumb will be sufficient for use in the ROI modeling tool.

1. Most large, publicly traded companies with relatively stable earnings and business

operations have a cost of capital that ranges from 8% to 15%, depending on how much of

the business is financed with debt (more debt usually means a lower cost of capital).

2. This number is typically higher for firms like biotech companies or high-tech startup

ventures that have riskier business models or uncertain cash flows, and can be as low as

15% and as high as 50% or higher for these companies.

3. Most customers will have an internally estimated cost of capital, but if this isn’t available

then an estimate of between 10% and 15% should be sufficient for use with most

customers. A popular finance professor once said (somewhat jokingly) that if all else fails,

use 10%.


Net Present Value (NPV)

NPV is a specific method of calculation related to the net financial impact of a set of costs and

benefits. “Present value” is the value today of a future amount of cash invested at a specific

interest rate. For example, the present value of $110 received a year from now, assuming a 10%

interest rate, is $100. In other words, $100 today is equivalent to $110 a year from now,

provided the money can be invested successfully at 10 percent interest per year. Net present

value is defined as the present value of all future cash flows, at a given interest rate.

In other words, NPV is simply the present value of all the benefits and costs including the cost

of an initial investment or cash outlay at the beginning of a project. A positive number means the

project is a good investment—and the larger the number the more value the project creates (in

“today’s” dollars).

A mathematical formula for NPV that is typical in most college-level finance texts looks like this

(Money, Remenyi, 2000):

Net Present Value (NPV) = Initial Investment (in Period "0") +
    Σ (i = 1 to m) [ Cash Flow_i / (1 + Cost of Capital per period)^i ]

In this formula the initial investment (and any other net cash outflows) must be depicted as a negative number, and positive numbers are considered cash inflows. A “positive

NPV” project is one in which the present value of future net cash inflows is greater than the

initial investment. A project with an initial investment of $10,000 and a present value of future

benefits of $11,000 will have an NPV of $1,000.


Note that we do not discount the initial investment (in the second part of the equation above), but

only discount the net cash flows for each period over the future life of the project. See Table 9 for an example of how to calculate Net Present Value, using the same cash flows as the earlier example, with a project requiring a $3,000 initial investment.

Table 9. Calculating net present value

Cost of Capital per Year:  10.00%
Cost of Capital per Month:  0.83% (10.00% divided by 12)

Date of Cash Flow    Month #   Cash Flow   Discount Formula   Discount Factor   Present Value
December 31, 2002       0      ($3,000)    1/(1+.83%)^0           1.0000         ($3,000.00)
January 31, 2003        1       $1,000     1/(1+.83%)^1           0.9917            $991.74
February 28, 2003       2         $500     1/(1+.83%)^2           0.9835            $491.77
March 31, 2003          3         $450     1/(1+.83%)^3           0.9754            $438.93
April 30, 2003          4         $500     1/(1+.83%)^4           0.9673            $483.67
May 31, 2003            5         $450     1/(1+.83%)^5           0.9594            $431.71
June 30, 2003           6       $1,000     1/(1+.83%)^6           0.9514            $951.43

Total Value of All Cash Flows: $900        Net Present Value on December 31, 2002: $789.25
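The calculation in Table 9 can be sketched in a few lines of Python. The cash flows and the 10% annual cost of capital are taken from the table; the function name `npv` is simply an illustrative label:

```python
def npv(annual_rate: float, monthly_cash_flows: list[float]) -> float:
    """Discount each monthly cash flow back to period 0 and sum.

    monthly_cash_flows[0] is the initial (undiscounted) investment,
    entered as a negative number, matching the NPV formula above.
    """
    r = annual_rate / 12  # cost of capital per month
    return sum(cf / (1 + r) ** i for i, cf in enumerate(monthly_cash_flows))

# Cash flows from Table 9: -$3,000 up front, then six monthly inflows
flows = [-3000, 1000, 500, 450, 500, 450, 1000]
print(round(npv(0.10, flows), 2))  # -> 789.25
```

Note that at a rate of zero the function simply returns the undiscounted total of $900, matching the "Total Value of All Cash Flows" row in Table 9.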


The most important benefit of using NPV as a measure of the value of a project is that every time a firm undertakes a positive NPV project, the value of the firm is increased by the amount of the NPV. In other words, NPV is a measure of the total value created for a firm by a given project.

Nearly all respectable financial professionals, from a CFO in a large company to Wall Street

equity analysts use the NPV formula to measure the value of a firm as well as the projects it

undertakes; most of the individual differences in the professional application of this concept lie

in the details of their financial models and the assumptions underlying their forecasts.

Many customers with Project Management Offices (PMOs) will use NPV to decide which

proposed projects to undertake. A cutoff value will be assigned and if a proposed project does

not meet the cutoff, that project is not undertaken.

Return on Investment

Return on Investment is the present value of all the cost-adjusted (or “net”) benefits divided by

the present value of all the costs of the offer. It is a ratio measure, so a value of 100% means that

the customer is “doubling” their money on the investment, in today’s dollars (Money, Remenyi,

2000).

ROI = [ Σ (i = 0 to m) Net Cash Inflows (Total Benefits − Total Costs)_i / (1 + Cost of Capital per period)^i ]
      / [ Σ (i = 0 to m) Cash Outflows (or Costs)_i / (1 + Cost of Capital per period)^i ] × 100%

In Table 10, the net cash inflows also have costs associated with them (much like the model that underlies the Brocade FICON Migration ROI scenario modeling tool):


Table 10. Calculating return on investment

Cost of Capital per Year:  10.00%
Cost of Capital per Month:  0.83%

Date of Cash Flow   Month   Net Cash Inflow   Cash Outflow   Discount Formula   Discount Factor   PV of Benefits   PV of Costs
                              (Benefit)         (Cost)
December 31, 2002     0           $0             $1,000      1/(1+.83%)^0           1.0000                             $1,000
January 31, 2003      1        $1,000              $150      1/(1+.83%)^1           0.9917            $992               $149
February 28, 2003     2          $500              $150      1/(1+.83%)^2           0.9835            $492               $148
March 31, 2003        3          $450              $150      1/(1+.83%)^3           0.9754            $439               $146
April 30, 2003        4          $500              $150      1/(1+.83%)^4           0.9673            $484               $145
May 31, 2003          5          $450              $150      1/(1+.83%)^5           0.9594            $432               $144
June 30, 2003         6        $1,000              $150      1/(1+.83%)^6           0.9514            $951               $143

Total Value of All Cash Flows: $3,900 (benefits), $1,900 (costs)
Total Present Value on December 31, 2002: $3,789 (benefits), $1,874 (costs)

In this example ROI = ($3,789 − $1,874) / $1,874 × 100% = 102.2%.
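A sketch of the same computation in Python, using the cash flows from Table 10 (the function name `present_value` is an illustrative label, not part of any modeling tool):

```python
def present_value(annual_rate: float, monthly_amounts: list[float]) -> float:
    """Sum of each monthly amount discounted back to period 0."""
    r = annual_rate / 12
    return sum(a / (1 + r) ** i for i, a in enumerate(monthly_amounts))

# From Table 10: period 0 has no benefit; costs are $1,000 up front + $150/month
benefits = [0, 1000, 500, 450, 500, 450, 1000]
costs = [1000, 150, 150, 150, 150, 150, 150]

pv_benefits = present_value(0.10, benefits)   # ~ $3,789
pv_costs = present_value(0.10, costs)         # ~ $1,874
roi = (pv_benefits - pv_costs) / pv_costs * 100
print(f"ROI = {roi:.1f}%")  # -> ROI = 102.2%
```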


If you choose not to multiply this ratio by 100%, you would have an analogous measure that

some finance professionals call the “profitability index” or the “return ratio”. Note also that in

this formula, the initial period i = 0, which indicates that we also take into account the initial

period investment.

Internal Rate of Return (IRR)

Internal Rate of Return is a bit more complex to explain than the other metrics. IRR is defined as

the rate of return that equates the positive cash flows from savings with the negative cash flow created by the investment itself. Stated another way, IRR is the discount rate at which

the cash inflows are exactly equal to the cash outflows. Stated in financial terms, IRR is the

interest rate at which the present value of cash inflows equals the present value of cash outflows,

i.e., the combined discounted cash flow (DCF) equals zero.

Essentially, IRR is the discount rate that would make the future benefits equal to the initial

investment in today’s dollars. An example of IRR in practice is the “Yield to Maturity” for

publicly traded bonds. This “yield” is the internal rate of return that sets the interest and final

principal payment on the bond equal to its current market price. For an investment project, it is

the solution for Cost of Capital per period in the following equation (Money, Remenyi, 2000):

Initial Investment (in Period "0") = Σ (i = 1 to m) [ Cash Flow_i / (1 + Cost of Capital per period)^i ]

For more than one period in the future, there can be as many solutions to this equation as there are periods (the equation is a polynomial of degree m in the discount rate, so it can have up to m roots), so professionals almost always use a financial calculator or a computer to solve for the IRR in the above formula. For this reason, IRR is at best a measure that should be used in context with other measures like NPV, and at worst it can be unreliable as a decision rule.
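Because there is no closed-form solution, IRR is found numerically. The sketch below solves for a monthly IRR by bisection, reusing the cash flows from Table 9; the function names and the bracketing interval are illustrative assumptions, not part of any particular tool:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """NPV of monthly cash flows at a given per-period rate."""
    return sum(cf / (1 + rate) ** i for i, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = 0.0, hi: float = 1.0) -> float:
    """Bisect on the per-period rate until NPV is (approximately) zero.

    Assumes exactly one sign change of NPV in [lo, hi]; with mixed-sign
    cash flows there can be multiple roots, which is why IRR should be
    read alongside NPV rather than on its own.
    """
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # rate too low: future inflows still outweigh the outlay
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-3000, 1000, 500, 450, 500, 450, 1000]
monthly_irr = irr(flows)
print(f"monthly IRR ≈ {monthly_irr:.2%}")  # roughly 8% per month
```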


Since the initial investment needed to implement a FICON solution is quite low relative to the

net benefits that can be realized, this number, expressed as a percentage, will usually be very

high. This is a standard way some firms measure the return on a project, so it is included, but you

should be careful to explain to your customers that this number can be misleading if taken out of

context from the other key financial metrics.

Payback Period

Payback period is different from the other financial metrics in that it is not a measure of benefits

and costs, but is expressed in the ROI scenario modeling tool in months. Payback period is

defined as the period of time needed to recover the investment for the option being evaluated.

The tools reports payback period as the number of months that it takes for your customers to

recoup their initial setup fees and any other initial costs associated with each FICON project.

You may have heard the term “breakeven point”, and this is analogous to that measure of the

return on a project. There are various ways to calculate this, but for the following example,

where each monthly cash inflow/benefit is the same ($12,500), for a $50,000 investment the

payback period is:


Table 11. Calculating payback period

Month/Period   Investment   Net Benefit per Month   Cumulative Net Benefit
     0           $50,000           $0                        $0
     1                          $12,500                  $12,500
     2                          $12,500                  $25,000
     3                          $12,500                  $37,500
     4                          $12,500                  $50,000
     5                          $12,500                  $62,500
     6                          $12,500                  $75,000

In this example, payback period equals the initial investment divided by the monthly net benefit,

or 4 months. For cash flows that are the same each period, this simple formula will give you the

payback period:

Payback Period = Initial Investment / Net Cash Flow per Period

For uneven cash flows or benefits, we must iteratively solve for the payback period until we find

the point in time (in this case in months) where the total cumulative net benefits are equal to the

initial investment. The weakness of payback period is that it doesn’t tell you how much value a

project or an investment creates in today’s dollar terms—for these purposes NPV and ROI are

much better measures of a project’s economic value.
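The iterative search for uneven cash flows can be sketched as a simple cumulative-sum loop in Python (the function name is an illustrative label; it returns the first whole month at which cumulative benefits cover the investment, or None if they never do):

```python
def payback_period(initial_investment: float, monthly_benefits: list[float]):
    """Return the first month where cumulative net benefits
    reach the initial investment, or None if never reached."""
    cumulative = 0.0
    for month, benefit in enumerate(monthly_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_investment:
            return month
    return None

# Even cash flows, as in Table 11: $50,000 recovered after 4 months
print(payback_period(50_000, [12_500] * 6))                      # -> 4

# Uneven cash flows: cumulative benefits first reach $3,000 in month 6
print(payback_period(3_000, [1000, 500, 450, 500, 450, 1000]))   # -> 6
```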


Conclusion

This chapter has described the systems used for statistical analysis of mainframe environments.

It has included a discussion of SMF, RMF, and performance measurements and metrics for

ESCON and FICON environments. The chapter concluded with a “user’s guide” to an Excel-based ESCON to FICON migration ROI tool developed by the author while at McDATA Corporation and at Brocade following Brocade’s acquisition of McDATA in 2007.


Chapter 4

4.1 Introduction

This chapter contains a summary of the analysis work done during consulting engagements with

four clients that occurred between August 2005 and October 2007. Three of these engagements

were complete ESCON to FICON migration assessments performed at three U.S. financial

institutions. The assessment work was done using Intellimagic’s RMF Magic software, in

conjunction with Intellimagic’s Disk Magic and Batch Magic (mainframe tape/virtual tape

analysis) software. These assessments were done prior to the development of the McDATA/Brocade Excel-based FICON ROI tool, but financial cost savings were nonetheless identified.

A fourth consulting engagement was done at a very large U.S. financial institution in which

the work done was not technical analysis, but was strictly financial justification. This

engagement was the precursor to the development of the Excel tool. The clients are not referred to by their actual names in this document. Due to the competitive nature of their industries and government security requirements, they do not allow their names to be used in external documentation.

4.2 ESCON to FICON assessment at the “ABC” company

A consulting engagement took place with ABC Company in summer and fall 2005 to assess the

current mainframe ESCON infrastructure, with the goal of determining whether or not migrating to FICON would benefit ABC Company, and if so, provide details on what those benefits are and

how the environment should be configured. This section is a summary of that assessment and

contains an analysis of the existing ESCON infrastructure and recommendations for migrating

that infrastructure to FICON.


4.2.1 Pre-assessment DASD environment

ABC Company has a good-sized ESCON environment and would like to know if FICON

would make sense for them. At the time of the assessment, the mainframe environment had 12

DASD subsystems, 10 manufactured by HDS and two (2) manufactured by IBM. All are

ESCON attached, with a total of 40 TB of usable DASD space. All storage in the above

subsystems is allocated to mainframe hosts. The subsystems in this assessment are connected to

six (6) mainframe processors and are used by nine (9) MVS images, or LPARs. Table 12 is a

summary of the DASD environment, from a performance perspective.

Table 12. DASD ESCON environment at "ABC" company

Subsys ID   I/O Rate (avg)   Usable GB   I/O Intensity   ESCON Channels (17 MByte/s)   ESCON Resp (ms)   ESCON Chan Util
HDS7000         2628           8878.2        0.296                   24                     4.2               26%
HDS8000         2538           8866.1        0.286                   32                     4.1               17%
HDSE000         2270           4953.1        0.458                   16                     3.4               27%
HDS6000         1602           4283.8        0.374                   16                     4.1               24%
HDS1000         1537           1571.9        0.978                   16                     2.8               14%
HDSF000         1019           3301.0        0.309                    8                     3.8               27%
IBMC000          731           2749.4        0.266                   16                     3.4               12%
HDS9000          424           1618.1        0.262                    8                     3.4               11%
HDS4000          359           1876.3        0.191                    8                     3.4               10%
HDSA000          264            826.2        0.320                    4                     4.6               17%
IBMB000          176            964.7        0.182                   16                     6.0                6%
HDSD000           17            289.4        0.059                    8                    13.4                2%
Overall       1130.4          40178.2        0.028                  172                     4.7               16%


4.2.2 DASD environment findings

Below are issues that existed in the ABC Company DASD environment, based on information

gathered in interviews of key staff members and through analysis of data supplied by ABC

Company.

4.2.2.1 DASD capacity under-utilized

The larger DASD subsystems (9900 series) installed at ABC Company have at most 8 TB of

usable DASD space installed, although the subsystem is capable of growing to at least 50 TB.

Because of the limitations of ESCON, adding more DASD capacity would require the volumes

to be reconfigured into a smaller number of larger volumes.

4.2.2.2 Some subsystems are partitioned


Some DASD subsystems installed at ABC Company were partitioned into multiple control units,

to address some of the limitations of ESCON. This adds complexity to the overall storage

environment.

4.2.2.3 Many ESCON channels with low utilization


As shown in Table 12, the overall utilization for the 172 ESCON channels is 16%, with the highest utilization being 27% on three (3) subsystems. Considering the rated bandwidth of ESCON at 17 MB/sec, this equates to an average of 2.72 MB/sec per ESCON channel.
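The per-channel average above is simply rated bandwidth times average utilization; a trivial check (the variable names are illustrative):

```python
ESCON_RATED_MB_S = 17    # rated ESCON channel bandwidth, MB/sec
avg_utilization = 0.16   # overall channel utilization from Table 12

avg_throughput_per_channel = ESCON_RATED_MB_S * avg_utilization
print(f"{avg_throughput_per_channel:.2f} MB/sec per ESCON channel")  # -> 2.72 MB/sec per ESCON channel
```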

4.2.3 DASD recommendations at “ABC” company

The following sections contain the results of the analysis of the mainframe DASD environment

and recommendations for a possible storage environment based on FICON. Intellimagic’s

software tools were used to estimate the effects of changing over the existing ESCON

environment to a FICON environment. Table 13 contains those estimates.


Table 13. FICON environment model

Subsys ID   I/O Rate (avg)   Usable GB   I/O Intensity   ESCON Chans (17 MByte/s)   FICON Chans (Express 2Gb)   ESCON Resp (ms)   FICON Resp (ms)   ESCON Chan Util   FICON Chan Util
HDS7000         2628           8878.2        0.296                24                          4                     4.2               2.3               26%               13%
HDS8000         2538           8866.1        0.286                32                          4                     4.1               2.2               17%               12%
HDSE000         2270           4953.1        0.458                16                          2                     3.4               1.8               27%               21%
HDS6000         1602           4283.8        0.374                16                          2                     4.1               2.0               24%               16%
HDS1000         1537           1571.9        0.978                16                          2                     2.8               1.7               14%               12%
HDSF000         1019           3301.0        0.309                 8                          2                     3.8               2.1               27%               10%
IBMC000          731           2749.4        0.266                16                          2                     3.4               2.4               12%                9%
HDS9000          424           1618.1        0.262                 8                          2                     3.4               1.7               11%                4%
HDS4000          359           1876.3        0.191                 8                          2                     3.4               1.6               10%                3%
HDSA000          264            826.2        0.320                 4                          2                     4.6               2.5               17%                3%
IBMB000          176            964.7        0.182                16                          2                     6.0               3.8                6%                3%
HDSD000           17            289.4        0.059                 8                          2                    13.4               5.5                2%                0%
Overall       1130.4          40178.2        0.028               172                         28                     4.7               2.5               16%                9%

The table shows that ABC Company can replace 172 ESCON channels with 28 FICON channels

across their 12 DASD subsystems and at the same time cut their overall average response times

in half.

4.2.3.1 DASD consolidation


Today’s enterprise class DASD subsystems are capable of 8000-10000+ I/Os per second and can

scale to over 50 TB of usable DASD. With this in mind while reviewing the above table,

Intellimagic’s software tools generated a potential configuration that would consolidate the

number of subsystems down to four (4). Table 14 shows the estimated performance

characteristics of that new configuration.


Table 14. Consolidated FICON environment model

Subsys ID     I/O Rate (avg)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
subsys 1e6         5350         10800.8        0.495                    4                       2.2               14%
subsys 89          2962         10484.2        0.283                    4                       2.2               14%
subsys a7d         2909          9993.7        0.291                    4                       2.4               15%
subsys bcf4        2264          8875.3        0.255                    4                       2.1               12%
Overall            3371         40154.0        0.331                   16                       2.2               14%

By consolidating down to four (4) subsystems, ABC Company would save on maintenance costs while improving their overall response times. It is possible to consolidate down to two subsystems; however, this would not provide much room for future growth, and would require some consolidation of smaller volumes into larger volumes. FICON allows for more volumes on a connection, but does not increase the number of volumes supported by the storage subsystem.

4.2.3.2 Cost estimates


The difference in initial cost for 12 subsystems versus 4 subsystems is significant. ABC

Company received list prices from their DASD vendor for three different 44 TB configurations: 12 subsystems, 4 subsystems, and 2 subsystems. The list prices were

as follows:

Table 15. DASD list prices for "ABC" company

Subsystem Pricing   Total TB   Channels Per Subsystem   List Price
12 Subsystems          44           16 ESCON              $4.3M
4 Subsystems           44            4 FICON              $3.3M
2 Subsystems           44            8 FICON              $2.8M


4.2.4 “ABC” Company virtual tape environment

When discussing FICON, the first thing that comes to mind is DASD. However, FICON can provide great benefits in the tape environment as well, by allowing today’s tape drives to run at their rated

speeds. Virtual tape systems can benefit from FICON, allowing them to accept and deliver more

tape workload from the host while using fewer channels. This section will discuss the analysis of

the virtual tape environment at ABC Company.

4.2.4.1 Current virtual tape environment


ABC Company has three (3) IBM VTS model B20 subsystems connected to two (2) IBM 3494

tape libraries. Each VTS subsystem has 12 IBM 3590E tape drives and 1.7 TB of DASD cache.

Each VTS emulates 64 3490E tape drives. Below are some statistics about the overall virtual

tape environment at ABC Company.

• 41% of all physical mounts are for recall – host requests for volumes not in DASD cache
• Average virtual mount time = 7 seconds
• Average physical mount time = 80 seconds
• 3.5% of all virtual mounts are not satisfied by cache and require physical mounts
• 7% of all read mounts not satisfied by cache (write mounts are always in cache)
• ABC Company writes an average of 4TB/day to the 3 VTS subsystems

Appendix B contains the statistics on each of the three (3) VTS subsystems analyzed at ABC

Company.

4.2.4.2 Findings
ABC Company has three VTS subsystems and believes that they have reached full capacity from

a workload point of view. The Intellimagic tools-based analysis estimates that 97% of the

workload can be handled with only two VTS subsystems. That number goes up to 98.1% if the


ESCON channels are replaced with FICON channels. The following table shows the breakdown

of the number of hours and the number of VTS subsystems required.

Table 16. Distribution of required VTS subsystems at "ABC" company

Required VTSs   ESCON VTS (# Hrs)    +%      %     FICON VTS (# Hrs)    +%      %
    0.0                 0             0      0             0             0      0
    0.1                 0             0      0             0             0      0
    0.2                 0             0      0             0             0      0
    0.3                 0             0      0             0             0      0
    0.4                 0             0      0             0             0      0
    0.5                 0             0      0             0             0      0
    0.6                 0             0      0             0             0      0
    0.7                 0             0      0             0             0      0
    0.8               440          72.6   72.6           463          73.6   73.6
    0.9                19          75.7    3.1            21          76.9    3.3
    1.0                17          78.5    2.8            21          80.3    3.3
    1.1                19          81.7    3.1            38          86.3    6.0
    1.2                18          84.7    3.0            19          89.3    3.0
    1.3                13          86.8    2.1            12          91.3    1.9
    1.4                11          88.6    1.8             4          91.9    0.6
    1.5                15          91.1    2.5             8          93.2    1.3
    1.6                 8          92.4    1.3            10          94.8    1.6
    1.7                 6          93.4    1.0             5          95.5    0.8
    1.8                10          95.0    1.7             9          97.0    1.4
    1.9                 9          96.5    1.5             3          97.5    0.5
    2.0                 3          97.0    0.5             4          98.1    0.6
    2.1                 3          97.5    0.5             1          98.3    0.2
    2.2                 5          98.3    0.8             3          98.7    0.5
    2.3                 5          99.2    0.8             1          98.9    0.2
    2.4                 1          99.3    0.2             2          99.2    0.3
    2.5                 3          99.8    0.5             1          99.4    0.2
    2.6                 0          99.8    0.0             0          99.4    0.0
    2.7                 0          99.8    0.0             1          99.5    0.2
    2.8                 1         100.0    0.2             0          99.5    0.0
    2.9                 0         100.0    0.0             0          99.5    0.0
    3.0                 0         100.0    0.0             0          99.5    0.0
    3.1                 0         100.0    0.0             1          99.7    0.2
    3.2                 0         100.0    0.0             0          99.7    0.0
    3.3                 0         100.0    0.0             0          99.7    0.0
    3.4                 0         100.0    0.0             2         100.0    0.3

The analysis indicated that if ABC Company were to upgrade the existing VTS subsystems to

FICON, the throughput bottleneck would shift from the front-end host connections to the back

end tape drives. The required number of back-end drives would drive the number of VTS

subsystems required to four (4), suggesting that ABC Company purchase an additional VTS

subsystem if they were to upgrade to FICON. This requirement applies to only 0.5% of the measured workload. The reason for the bottleneck shift is that the FICON upgrade would triple the throughput capabilities of the front-end host connections, but the rated speed of the back-end 3590E drives would remain the same. Upgrading to 3590H drives on the back end would only

increase the capacity per cartridge, not the data transfer speed. Appendix C contains a table that

compares results of all existing VTS workload being directed to ESCON VTS and FICON VTS.

The table in Appendix B shows that the VTS workload is not evenly distributed between

subsystems. Looking at the average time to mount physical drives, total physical mounts,

average virtual volumes ready and the total virtual mounts, one subsystem seems to have more or

less activity than the others in each category. Table 17 contains these statistics for each VTS.

Table 17. VTS activity comparison

Daily Physical Drive Activity                VTS B2157   VTS B2263   VTS B2293    Total
Average time to mount physical drive (sec)      66.8        88.5        81.9       79.1
Total physical mounts for VTS                   8861        8079        8720      25660
Average virtual volumes ready                      8           8          11         27
Total virtual mounts                           95526      104939      111373     311838


4.2.4.3 Recommendations
The first recommendation that was made to ABC Company was to leave the VTS systems

ESCON attached until they had fully tested the IBM 3592 tape drives in their test environment.

The IBM 3592 tape drive is 3 times faster and 5 times higher in capacity than the IBM 3590H

tape drive. Upgrading the VTS to 3592 drives and FICON connections at the same time would

maintain the balance of throughput between the front-end and the back-end of the VTS. The

second recommendation made to ABC Company was to better balance and distribute workloads

across all of their VTS subsystems. By doing this, ABC Company will be able to more fully

utilize all of the VTS resources and accommodate some workload growth within the VTS

environment without the addition of another VTS subsystem.

4.2.5 Native tape environment

The native tape environment at ABC Company contains two (2) IBM 3494 tape libraries. All

tape drives within those libraries are attached to the hosts with ESCON connections. Also, there

are six (6) STK 9310 tape libraries with ESCON-attached tape drives. The following is a

breakdown of the tape drive types:

• 6 IBM 3480 drives – used to send and receive tapes from outside of ABC Company

• 56 IBM 3490 STK Silo drives – used for general MVS tape processing, including some

application backups

• 24 IBM 3490 non-automated drives – used for mounting ABC Company-owned tapes

residing outside of the robotic libraries.

• 48 IBM 3590 drives – used for HSM and some infrastructure backups.

Table 18 shows the native tape drive statistics by workload type, and then by technology type.


Table 18. ABC Company native tape statistics

Workload Summary        All    HSMML2   HSMBAK   INFRABKP   DEVBKUP   DB2BKUP   CORPBKUP   OTHER
Peak Mounts            2314      365       10       107        548       259        3       2232
Peak Throughput MB/s   402.5     79.7      9.6       54       229.1      67.7      1.5      269.2
Peak Avg. Drives        140      32.7      9         14        52.3      26.3      0.3       92.4
Peak Concur. Drives      .        41      11         27         56        38        2        132

Hourly Information      All     3480    3490E    3590B    3590E    3590H    3590 all
Peak Mounts            2314      65      2299       8       20       349       377
Peak Throughput MB/s   402.5     2.4     381.3     6.8      9.8     100.3     116.9
Peak Avg. Drives        140      3.2     124.7      0       2.2      34.4      36.6
Peak Concur. Drives      .        6       159       0        7        42        49

Appendix C contains the complete reports from which the above tables were created.

4.2.5.1 Findings
The first finding was that the DB2 application was causing backup problems. ABC Company

has one particular tape workload for which they would like to improve the efficiency. The DB2

backups are currently written directly to 3590E tapes and sent offsite. ABC Company would like

to stack these backups onto the 3590E tapes to improve the utilization of these tapes and send

fewer tapes offsite. The second finding was consolidating onto 3590H tape drives was not

enough of an improvement to solve the problem. The analysis showed that if ABC Company

were to consolidate all of their native tape workload onto 3590H drives, they would need 55

33590H drives to handle 100% of the native tape workload. For 97% of the workload, only 40

3590H drives are needed. Appendix D contains the complete distribution of hourly concurrent

drives required by recording technology.

4.2.5.2 Recommendations
The first recommendation was to expedite testing and implementation of the IBM 3592 tape

drive for native tape workloads. The IBM 3592 tape drive provides three (3) times the speed and

five (5) times the capacity of the IBM 3590H tape drive. These increases would reduce the


amount of time needed to move data to tape, and will allow ABC Company to use fewer tape

cartridges and reduce the number of 3494 slots required.

Based on the estimated 3590H requirements for ABC Company (55 drives for 100% of workload,

40 drives for 97%), the following shows how this would directly translate into 3592 tape drives:

• 55 3590H -> 18 3592 drives

• 40 3590H -> 14 3592 drives

However, simple mathematics do not account for all of the nuances of a real tape environment and the demands its workloads place on it. Considering this, the analysis added extra 3592 drives to the calculations and recommended that ABC Company replace the existing native tape environment with 24 3592 tape drives. The second recommendation was to consider

using TMM for solving stacking problems. The precursor to VTS was Tape Mount Management

(TMM), a storage management methodology that directed tape allocation to primary DASD, and

then used HSM to move the data to tape. VTS is simply a host independent hardware

implementation of this methodology. Unfortunately, one of the shortcomings of VTS is that data

is not easily placed on physical tapes and sent offsite. One option is to use TMM to direct tape

data to DASD, then use HSM or another copy tool to stack the data onto physical tape. This

independent tape can then be removed and sent offsite. The third recommendation was to

expand the HSM ML1 pool to hold short-term data. By increasing the ML1 DASD pool for HSM,

ABC Company can store data with relatively short retention periods (up to 30 days) without

having to copy it to tape. Data expires on DASD, avoiding the overhead of tape mounts for migrates, recalls, and recycles.


4.2.6 Disaster recovery

4.2.6.1 Findings
The current disaster recovery plan for the mainframe is entirely dependent on tape. Full backups

are taken weekly and sent offsite. The current Recovery Time Objective (RTO) is 72 hours.

There is no replication of disk or tape taking place today. The tape hardware is located in a different bay than the disk and CPU, 80 feet away in the same building. The datacenter bays are

designed to prevent a physical disaster in one bay from affecting the equipment in other

datacenter bays. For the mainframe, the current recovery site is located at the IBM facility in

Gaithersburg, MD. ABC Company has a second location in Scranton, PA, but it is not used for

mainframe recovery purposes. The distance between the sites is too great to provide a disaster

recovery plan that can satisfy the specified RTO of 24 hours without excessive additional costs.

ABC Company has an objective to reduce their RTO from 72 hours to 24 hours. Using a tape-

only approach to recovering the datacenter, this is not possible with the existing tape

environment and processes.

4.2.6.2 Recommendations
The first recommendation made to ABC Company in terms of disaster recovery was that ABC

Company conduct a business continuity and disaster recovery assessment to help them make the

strategic decisions on their DR infrastructure that will allow them to meet their objectives. The

second recommendation made to ABC Company was to upgrade to IBM 3592 tape drive

technology. The increases in capacity and performance of the IBM 3592 tape drive over the

IBM 3590H tape drive will allow ABC Company to store more data per cartridge and take less

time writing their data to those cartridges. This in turn will allow for faster recovery of data from

those cartridges and will lower the tape transport and storage costs by shipping fewer tapes. The


third recommendation was that ABC Company should take advantage of existing bandwidth

between their own sites to replicate their mainframe data. Migrating from ESCON to FICON

will reduce the number of replication connections required and extend the allowable distance for

replication. In the interim, ABC Company should replicate the mainframe data between bays at

the production site, then take full backups to tape and send them offsite.

4.2.7 Mainframe processors

At the time of the assessment, ABC Company had six (6) z/OS processors. Only four

(4) of them were included in this assessment, as the other two are not part of the main processing

environment. IBM offered ABC Company a proposal to replace the four (4) processors in this

assessment with two (2) new z/OS processors, with FICON capabilities. ABC Company had not

yet made the decision to accept that offer, pending the outcome of this assessment. It was

recommended that ABC Company upgrade their processors and begin migrating to FICON as

soon as possible. Estimates showed that by upgrading to FICON, ABC Company can reduce

their MIPS requirements by 10% and finish their workload in the same amount of time. A 10%

reduction in mainframe MIPS requirements translates into several millions of dollars of

mainframe software licensing costs each year.

4.2.8 Conclusions

The following represent elements of a new storage management paradigm at ABC Company:

• ABC Company should develop a plan to consolidate their DASD subsystems in their

mainframe environment. The use of FICON will make this consolidation possible.

• ABC Company could reduce the environment down to two (2) large (21TB) DASD

subsystems.


• The savings over time is estimated at $700K in initial purchase price for these 2 subsystems when compared with continuing to use the 12 smaller subsystems.

The actual recommendation is to migrate to four (4) DASD subsystems of 11TB each, which will

allow for at least 3 years of storage growth in the same footprint.

ABC Company currently is running three (3) virtual tape (VTS) subsystems. These VTSs

represent, for the most part, ABC Company’s batch tape processing.

• The 3 VTSs are currently bottlenecked at the backend, by the number of tape drives to

move data to and from the DASD cache. This is unusual and is probably caused by too

many restores.

• The front-end throughput requirements could be satisfied by 2 VTSs, if the back end is

upgraded to IBM 3592 tape technology.

• ABC Company should study their VTS usage and see what tape traffic could be easily

moved to DASD. If many restores occur from recently created tapes, then

keeping that data on DASD will be more cost-effective.

• Eventually removing the third VTS subsystem could save ABC Company $300K and

simplify operations.

ABC Company sends about 5000 tape cartridges offsite each day. Trying to recover this many

tapes in a disaster will not be quick. ABC Company should perform a DR study to determine if

the current tape backup process is robust enough to meet ABC Company’s RPOs and RTOs.

• There are over 60 3590 tape drives at ABC Company. This workload could be satisfied

by about 24 FICON attached 3592 tape drives.


• ABC Company should develop a plan to migrate to IBM 3592s to save money. The

estimate of cost savings of migrating to 28 3592s is about $600K.

• For ABC Company tape usage it was determined that 3TB of DASD space (about

$150K) could replace 85% of the HSM recall tape mounts. This would likely cut down

on the number of drives by about 8 (about $200K) while speeding up the recall process.

ABC Company’s current disaster recovery plan for the mainframe is totally tape based and

has an RTO of 72 hours. A new requirement at ABC Company forces this to a lower RTO of 24

hours.

• It will be difficult for tape alone to meet this 24-hour RTO.

• The use of full volume dumps and 3592s might make this possible.

• ABC Company should conduct a strategic DR/BC mini-assessment to help ABC

Company make strategic decisions on their DR infrastructure.

• As a stopgap measure, ABC Company should plan for short-distance synchronous replication of

most of the mainframe DASD, taking daily “snapshots” of the locally replicated data, and

then taking full tape backups of those “snapshots”.

• The strategic direction for ABC Company to gain shorter RTO (likely to be under 4

hours) will be to migrate to DASD replication between sites.

Currently ABC Company has six (6) z/OS processors. Two of them are not part of the main

processing complex and were not included in this study. IBM had proposed to replace the other 4

processors with 2 larger, newer processors.

• Migrating to FICON can save ABC Company about $1.8M in peripheral devices – tape,

VTS and DASD.


• Migrating to FICON will reduce I/O response time by about ½ for the DASD and will

allow the tape devices to run at native speeds.

• FICON will reduce MIPS requirements by about 10%.

• ABC Company should complete the processor upgrades with IBM and begin the

migration to FICON peripherals as soon as practical.

4.3 ESCON to FICON migration assessment at the “XYZ” Company

XYZ Company is a large U.S. based financial conglomerate. A consulting engagement was

performed at the XYZ Company in spring 2006 to assess the current mainframe ESCON

infrastructure with the goal of determining whether or not migrating to FICON will benefit the

XYZ Company, and if so, provide details on what those benefits are and how the environment

should be configured. This section is a summary of that assessment and contains an analysis of

the existing ESCON infrastructure and recommendations for migrating that infrastructure to

FICON.

4.3.1 DASD subsystems

At the time of the assessment the mainframe environment had (4) DASD subsystems, three (3)

EMC 8830s and one (1) EMC 8530. All were ESCON attached, with a total of 26TB of usable

DASD space. All storage in the above subsystems was allocated to mainframe hosts. The

subsystems in this assessment connect to five (5) mainframe processors and are used by six (6)

MVS images, or LPARs. Each processor has (76) ESCON connections to a total of four (4)

ESCON directors. There are another four (4) ESCON directors in the environment, but they are

used for tape and non-storage devices. There are a total of (76) ESCON connections to the

storage subsystems.

The following diagram shows this disk environment.



Figure 50. XYZ Company ESCON DASD environment

[Diagram: five processors (Y, V, W, X, R), each with 76 ESCON connections, for 380 total host ports into four ESCON Directors (456 total director ports); 76 total disk ports connect the directors to EMC 8530 SN 0273 (8 connections), EMC 8830 SN 1608 (24 connections), EMC 8830 SN 1711 (24 connections), and EMC 8830 SN 0002 (20 connections).]

Table 19 is a summary of the DASD environment, from a performance perspective.


Table 19. XYZ Company ESCON DASD performance

Subsys ID   I/O Rate (Sum)   Usable GB   I/O Intensity   ESCON Channels (17 Mbyte/s)   ESCON Resp (ms)   ESCON Chan Util
EM0273 573 2074.1 0.276 8 3.2 18%
EM0002 5202 9564.5 0.544 20 3.3 38%
EM1608 4898 7167.9 0.683 24 3.8 35%
EM1711 5826 7176.9 0.812 24 3.3 33%
Overall 16499.0 25983.4 0.635 76 3.4 31%

4.3.1.1 DASD findings


In table 19, the overall utilization for the 76 ESCON channels is 31%, with the highest utilization

being 38% on one (1) subsystem. Considering the rated bandwidth of ESCON at 17MB/sec, this

equates to an average of 5.27 MB/sec per ESCON channel. If XYZ’s throughput needs grow,

they will need to add more ESCON channels to this environment.
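The per-channel throughput figure above follows directly from the utilization data; a minimal sketch of the calculation:

```python
# Sketch of the per-channel throughput arithmetic behind the finding above.
ESCON_RATED_MBPS = 17.0  # rated ESCON bandwidth in MB/sec

def avg_channel_throughput(utilization: float) -> float:
    """Average MB/sec actually moved per channel at a given utilization."""
    return ESCON_RATED_MBPS * utilization

# 31% overall utilization across the 76 channels, as reported in table 19
print(f"{avg_channel_throughput(0.31):.2f} MB/sec")  # -> 5.27 MB/sec
```

The same arithmetic applied per subsystem (e.g. 38% on EM0002) shows no channel group approaching the ESCON rated bandwidth.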

4.3.1.2 DASD analysis and recommendations


Intellimagic analysis software tools were used to estimate the effects of changing over the

existing ESCON environment to a FICON environment. Two configurations were suggested to

XYZ Company. The first configuration involves moving all of the volumes to be SRDF

protected and all of the volumes in subsystem 0002 to the EMC DMX subsystem with serial

number 0126. The rest of the volumes are to be moved into a new EMC DMX 3000 subsystem.

This configuration model is contained in table 20.

Table 20. FICON option 1 for XYZ Company

Subsys ID   I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
DMX3000 6862 13080.6 0.525 16 2.1 18%
EM0126 9061 10780.4 0.841 16 2.3 19%
Overall 15923 23861.0 0.683 32 2.2 19%


The data in EMC subsystem 0273 is not being migrated to the new DMX subsystems. Instead

this workload will be moved to the CopyCross environment. The figures above indicate that

XYZ Company can consolidate their data into two (2) FICON connected subsystems and achieve

faster response times over the existing ESCON environment.

Table 21 contains the estimates for the second configuration option. In this case, the non-

SRDF volumes are split between two (2) EMC DMX 3000 subsystems.

Table 21. FICON option 2 for XYZ Company

Subsys ID    I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
DMX3000 #1   3493             6541.8      0.534           8                              1.8               17%
DMX3000 #2   3369             6538.8      0.515           8                              2.0               20%
EM0126       9061             10780.4     0.841           16                             2.3               19%
Overall      15923.0          23861.0     0.630           32                             2.0               19%

These figures indicate that by splitting data between two subsystems, the added backplane

capacity slightly improves response time, while keeping the required number of channels the

same. Also, this option provides more room for growth than the first option.

4.3.1.3 Workload growth


While the above tables provide configuration statistics for existing workload, growth in

workload has not yet been considered. Using the XYZ Company-supplied growth estimates of

40% per year in I/O and 20% per year in disk capacity, the following table shows how much

capacity XYZ COMPANY would need over the next 3 years.


Table 22. FICON projected growth at XYZ Company

Year   I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
Current 22785 36942 0.617 48 2.1 19
Mid 2005 31899 44330 0.720 48 2.2 26
Mid 2006 44659 53196 0.840 48 2.3 36
Mid 2007 62522 63835 0.979 48 2.5 51

The figures in table 22 were generated using the second configuration option specified by XYZ

Company, which involves three (3) EMC DMX subsystems. The above table indicates that

overall, the initial configuration will be able to support the workload growth expected by XYZ

Company without much performance impact; however, more FICON connections should be

added in 2006 and 2007 to reduce the overall channel utilization. Appendix E contains the

growth tables for the individual DMX subsystems at XYZ Company.
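The projection in table 22 follows from compounding the supplied growth rates; a short sketch (values match the table to within rounding):

```python
# Reproduces the growth projection in table 22 by compounding the
# XYZ Company supplied growth rates: 40% per year in I/O rate and
# 20% per year in disk capacity.

io_rate, usable_gb = 22785.0, 36942.0  # "Current" row of table 22

for year in ("Mid 2005", "Mid 2006", "Mid 2007"):
    io_rate *= 1.40    # I/O grows 40% per year
    usable_gb *= 1.20  # capacity grows 20% per year
    print(f"{year}: I/O rate {io_rate:,.0f}, usable GB {usable_gb:,.0f}")
```

Note that I/O intensity (I/O rate per GB) rises each year because I/O grows faster than capacity, which is why channel utilization climbs from 19% to 51% in the table while the channel count stays at 48.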

4.3.2 Virtual tape environment

When discussing FICON, the first thing that comes to mind is DASD. FICON can provide great

benefits in the tape environment as well, by allowing today’s tape drives to run at their rated

speeds. Virtual tape systems can benefit from FICON, allowing them to accept and deliver more

tape workload from the host while using fewer channels.

At the time of the assessment, XYZ Company was planning to move their virtual tape workload

to EMC’s CopyCross. XYZ Company asked for assistance with planning and sizing the

CopyCross configuration. This section will discuss the analysis of the virtual tape environment at

XYZ Company. The analysis was performed using Intellimagic’s Batch Magic software tool.


4.3.2.1 Current ESCON virtual tape environment


XYZ Company has four (4) IBM VTS model B18 subsystems connected to four (4) IBM 3494

tape libraries. Each VTS subsystem has (6) IBM 3590E tape drives and (288) GB of DASD

cache. Each VTS emulates (64) 3490E tape drives. Below are some statistics about the overall

virtual tape environment at XYZ Company.

• 62% of all physical mounts are for recall – host requests for volumes not in DASD cache

• Average virtual mount time = 17 seconds.

• Average physical mount time = 62 seconds.

• 7% of all virtual mounts are not satisfied by cache and require physical mounts.

• 16% of all read mounts are not satisfied by cache (write mounts are always in cache).

• XYZ COMPANY writes an average of 8TB/day to each of the 4 VTS subsystems.

Appendix F contains the statistics on each of the four (4) VTS subsystems analyzed at XYZ

Company.

4.3.2.2 Findings

The primary finding was that the existing VTS subsystems are overloaded with high recall rates.

The VTS statistics table in Appendix F indicates that each of the VTS subsystems is suffering

from recall throttling. Throttling is a measure used by the VTS subsystem to prevent a single

function from consuming all of the resources within the VTS. The VTS does this by adding

delays to its responses to the host, slowing down throughput and turnaround times to the host.


4.3.2.3 Analysis and recommendations


This section discusses the requirements of a CopyCross environment that will support the

workload XYZ Company has identified as CopyCross candidates. XYZ Company specified that

the existing VTS workload was to be moved to CopyCross. Also, the backup workload

currently written to remote native 3480 and 3490 tape drives is to be re-directed to the

CopyCross environment. When sizing a CopyCross environment, there are certain factors that

must be considered that do not necessarily apply to native tape or disk-tape virtual tape

subsystems. The key data points needed to configure a CopyCross environment are:

• Max data throughput – The amount of data both read and written to the

CopyCross environment will determine the number of front-end channels required

in the CopyCross environment.

• Total amount of data written – This number indicates how much disk is required

to store all of the data created during the time period examined. Implementing an

offload to physical tape process will reduce this requirement, but increase

physical tape resources.

• Total amount of data written in one day – This number is used to determine how

much free disk should be available. EMC recommends having 2-3 times this

amount on hand to account for anomalies in processing and for

problems that prevent the disk buffer from being maintained properly.

• Max number of concurrent tape drives in use – This number determines the

number of virtual tape drive addresses required. Depending on the maximum

supported number for any one CopyCross instance, multiple instances may be

required.


• Size of cartridge to be emulated – In a CopyCross environment, the logical

volume size is user-defined at the CopyCross instance level. It is important to

select a size that closely matches the workload. Each logical volume occupies the

entire defined size of the volume, no matter how much data is actually written.

• Number of volumes written in one day – This, along with the defined volume size,

will determine how much disk space is required to hold one day’s worth of tape

activity.
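The sizing rules above can be combined into a simple calculator. The function name and the free-space factor of 2.5 are illustrative assumptions (EMC's guidance is 2-3 times the daily write volume); the sample inputs mirror the metrics in table 23.

```python
# Hedged sketch of the CopyCross sizing rules listed above. The helper
# name and the free-space factor of 2.5 are illustrative assumptions
# (EMC's guidance is 2-3x the daily write volume); the sample inputs
# mirror the metrics in table 23.

def copycross_sizing(total_tb_written: float, avg_tb_per_day: float,
                     volumes_per_day: int, defined_vol_mb: float,
                     free_buffer_factor: float = 2.5) -> dict:
    """Apply the sizing rules: total disk to hold all data, a free-space
    buffer for daily writes, and the disk consumed by one day of
    fully-allocated logical volumes."""
    free_tb = avg_tb_per_day * free_buffer_factor
    daily_vol_tb = volumes_per_day * defined_vol_mb / 1_000_000
    return {
        "total_disk_tb": total_tb_written,
        "free_buffer_tb": round(free_tb, 1),
        "daily_volume_tb": round(daily_vol_tb, 1),
    }

print(copycross_sizing(52, 1.8, 5019, 500))
```

Because each logical volume occupies its full defined size, the daily-volume figure is driven by the defined cartridge size, which is why the recommended 500 MB value should sit close to the observed average volume size.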

Table 23. CopyCross sizing metrics

CopyCross Sizing Metrics


Total Disk TB required 52
Avg TB written per day 1.8
Peak day TB written 3.9
Avg. Vol Size MB(maxda) 427.4
Avg. Scratch Mounts per day 5019
Peak MB/Sec throughput required 108
MB/Sec throughput - 95% of workload 89
Peak Concurrent drives in use 202

Based on the above table, EMC recommended that XYZ Company configure the CopyCross

environment with the following capabilities:

Table 24. CopyCross recommended configuration

CopyCross Configuration
Emulated volume size (MB) 500
TB disk each day activity 4
Total TB disk 52
# of emulated tape devices 216
# of FICON channels 4
# of ESCON channels 10

4.3.3 Native tape


4.3.3.1 Current environment


At the time of the assessment, the native tape environment at XYZ Company contained four (4)

IBM 3494 tape libraries. All tape drives within those libraries attached to the hosts with ESCON

connections. Within these libraries there are a total of (50) IBM 3590B tape drives, used for

HSM, full volume dumps and other large dataset application processing. XYZ Company also has

standalone 3480 and 3490E tape drives, but the workload on these drives will be re-directed to

the CopyCross environment. The 3590B drives are expected to be the only native tape drives left

in the environment, excluding the few older technology drives retained for compatibility reasons.

For this reason, this section focuses on the 3590B workload only.

4.3.3.2 Findings
Table 25 shows the native tape drive statistics for the different 3590B workloads.

Table 25. 3590B tape drive statistics

3590B Statistics   Local Dump   Vault Dump   Other 3590B   Total Non-HSM   All HSM
Total Specific Mounts 60 22 2826 2908 8260
Total Scratch Mounts 970 5724 358 7052 475
Total Mounts 1030 5746 3184 9960 8735
Total GB Read 633 2 22374 23010 3009
Total GB Written 18027 4125 5479 27631 15085
Total GB 18660 4127 27853 50641 18094
Specific Mounts % 5.8 0.4 88.8 29.2 94.6
Scratch Mounts % 94.2 99.6 11.2 70.8 5.4
GB Read % 3.4 0 80.3 45.4 16.6
GB Written % 96.6 100 19.7 54.6 83.4
Avg. Vol Size MB(maxda) 13125.8 599.9 17262.5 7213.6 32824.7
Total # of Cartridges 1000 2094 2557 5651 3383
Total # of Carts(maxda) 968 2093 808 3869 1576
DAILY INFORMATION
Avg. Specific Mounts 2 1 100 103 293
Avg. Scratch Mounts 34 203 13 250 17
Avg. Mounts 36 204 112 352 309
Avg. GB Read 22 0 790 812 107
Avg. GB Written 639 146 193 978 534
Avg. GB 661 146 983 1790 641
Peak Mounts 113 579 234 841 702
Peak Mounts Day 6/28/2004 6/28/2004 7/1/2004 6/28/2004 7/1/2004
Peak GB 1677 414 2712 3739 1235
Peak GB Day 6/28/2004 6/28/2004 6/22/2004 6/22/2004 6/29/2004
Peak Read GB 147 0 2104 2104 283
Peak Read GB Day 6/29/2004 6/28/2004 6/22/2004 6/22/2004 6/21/2004
HOURLY INFORMATION
Peak Mounts 20 71 38 84 84
Peak Throughput MB/s 59.3 14.5 77.2 112.4 36.3
Peak Avg. Drives 14.4 13.2 31.9 40.4 11
Peak Concur. Drives 17 15 34 44 15

Table 26 compares the above statistics with the same workload simulated on IBM 3592 tape

drives.

Table 26. 3592 tape drive model comparison

3592 Statistics   Non-HSM current   Non-HSM 3592   All HSM current   All HSM 3592
Total Specific Mounts 2908 2908 8260 8260
Total Scratch Mounts 7052 53 475 16
Total Mounts 9960 2961 8735 8276
Total GB Read 23010 23010 3009 3009
Total GB Written 27631 27631 15085 15085
Total GB 50641 50641 18094 18094
Specific Mounts % 29.2 98.2 94.6 99.8
Scratch Mounts % 70.8 1.8 5.4 0.2
GB Read % 45.4 45.4 16.6 16.6
GB Written % 54.6 54.6 83.4 83.4
Avg. Vol Size MB(maxda) 7213.6 27436 32824.7 16582
Total # of Cartridges 5651 113 3383 115
Total # of Carts(maxda) 3869 1576
DAILY INFORMATION
Avg. Specific Mounts 103 103 293 293
Avg. Scratch Mounts 250 2 17 1


Avg. Mounts 352 104 309 293


Avg. GB Read 812 812 107 107
Avg. GB Written 978 978 534 534
Avg. GB 1790 1790 641 641
Peak Mounts 841 214 702 680
Peak Mounts Day 6/28/2004 7/1/2004 7/1/2004 7/1/2004
Peak GB 3739 3739 1235 1235
Peak GB Day 6/22/2004 6/22/2004 6/29/2004 6/29/2004
Peak Read GB 2104 2104 283 283
Peak Read GB Day 6/22/2004 6/22/2004 6/21/2004 6/21/2004
HOURLY INFORMATION
Peak Mounts 84 38 84 80
Peak Throughput MB/s 112.4 77.2 36.3 36.3
Peak Avg. Drives 40.4 13 11 11
Peak Concur. Drives 44 15 15 15

In the above table, most of the statistics are the same for both the 3590B and the 3592 tape

environments. The difference lies with the number of scratch mounts and the total number of

cartridges used. Since the 3592 cartridges can hold five times the amount of data that a 3590B

can, this dramatically reduces the required number of cartridges to store the same data. Also, this

reduces the number of scratch mounts required, since fewer tapes are being written. The

disadvantage of having a larger tape volume size is that the potential for cartridge contention –

two or more operations request the same tape for dataset recall – is higher. These advantages are

automatically realized with the HSM workload, but for the other workloads there is some

planning and code/JCL changes required to utilize the full capacity of the 3592 tape cartridges.
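The cartridge-count effect of the larger media can be illustrated with a small sketch. The 20 GB 3590B capacity used here is an assumed figure for demonstration; the source states only the 5:1 capacity ratio between 3592 and 3590B cartridges.

```python
import math

# Hedged illustration of why larger cartridges cut the cartridge count.
# The 20 GB 3590B capacity below is an assumed figure for demonstration;
# the source states only that a 3592 holds five times a 3590B's data.

def cartridges_needed(total_gb: float, cartridge_gb: float) -> int:
    """Cartridges required to hold total_gb, filling each to capacity."""
    return math.ceil(total_gb / cartridge_gb)

total_gb = 50641  # total GB for the combined non-HSM workload (table 25)
print(cartridges_needed(total_gb, 20))      # assumed 3590B capacity
print(cartridges_needed(total_gb, 20 * 5))  # 3592 at 5x the capacity
```

In practice the counts in tables 25 and 26 are higher than this idealized ceiling-division suggests, because real volumes are rarely filled to capacity; that gap is exactly what the planning and JCL changes mentioned above are meant to close.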

The number of required tape drives went down for the non–HSM workload, but remained the

same for the HSM workload. This is because HSM is not able to drive the tape drive at native

speeds. HSM has many tasks it performs during backup and migration processes, such as control

dataset updates, that prevent it from driving higher data rates. Other programs and applications,


such as FDR, can drive tape drives at native speeds and therefore would be able to take

advantage of the faster 3592 tape drive.

4.3.3.3 Recommendations
The IBM 3592 tape drive provides three (3) times the speed and fifteen (15) times the capacity of

the IBM 3590B tape drive. These increases will reduce the amount of time needed to move data

to tape, and will allow XYZ Company to use fewer tape cartridges and reduce the number of

3494 slots required. However, simple mathematics do not account for all of the nuances of a real

tape environment and the demands that its workloads require of it. Considering this, we

recommended adding extra 3592 drives to the values in the table above, and recommended that XYZ

Company replace the existing native tape environment with (30) 3592 tape drives.

4.3.4 Conclusions

4.3.4.1 DASD subsystems


• XYZ Company has a plan to move to FICON on their existing DMX subsystem and will

add more DMX subsystems with FICON channels.

• XYZ Company could reduce the environment down to two (2) EMC DMX subsystems,

each with 16 FICON channels for host connectivity, and still see some performance

improvement over the existing configuration. By going with three (3) DMX subsystems,

XYZ Company will have increased growth capability and slightly better performance for

non-SRDF data because of the added backplane capacity.

• By Mid 2008, XYZ Company will need to add FICON channels to the DMX subsystems

and host processors to accommodate workload growth (20% in disk, 40% in I/O) without

sacrificing performance. Based on these growth estimates, XYZ Company should pursue

the three (3) DMX option.



• It was estimated that this would yield a long term cost savings to XYZ Company of

approximately $650,000 per year.

4.3.4.2 Virtual Tape


XYZ Company is currently running four (4) VTS subsystems. The workload in these VTS

subsystems is planned to move into a new EMC CopyCross environment.

• The (4) VTS subsystems are overloaded with recall processing: specific host

mounts that require a physical tape to be mounted on the back-end of the VTS. The

existing VTS subsystems are an older generation, which are slower and smaller than new

models available today.

• The CopyCross environment should have at least 52TB of total disk capacity, with at

least 4TB free for daily write activity. There should be (10) ESCON channels from host

to CopyCross disk, or 4 FICON channels from host to CopyCross. Because CopyCross

allocates the entire defined volume size, the defined size should be close to average

volume size. CNT recommends 500MB.

• XYZ Company could use the existing 8830 subsystems as the target disk for CopyCross.

The internal architecture of the 8830 does not allow it to gain much benefit from FICON

channels over ESCON channels. It was recommended that these devices be kept on ESCON

and used for the CopyCross environment.

• Estimates of the cost savings to XYZ Company for this strategy were $925,000 per year,

primarily from footprint reduction and maintenance savings.


4.3.4.3 Native Tape


XYZ Company currently has four (4) IBM 3494 Tape libraries with a total of (50) 3590B tape

drives installed. All of the tape drives are ESCON connected to the hosts. These drives are used

for HSM, local full volume backups and certain large file applications. These workloads are

expected to stay on native 3590 technology.

• XYZ Company can expect performance improvement along with the increased per

cartridge capacity by upgrading the 3590B tape drives to 3592 drives.

• The performance improvements will likely not be seen with the HSM workload, only

with the non-HSM workloads.

• The HSM workload would automatically utilize the extra cartridge capacity, while the

other workloads will require planning and JCL changes to fully utilize the new cartridges.

• Moving to fewer, higher performance drives will save XYZ Company $130,000 per year

long term. This cost savings is realized through lower maintenance costs, environmental

savings realized by running fewer drives, and needing less tape media due to the

increased capacity of the 3592 cartridges.

4.4 FICON DASD assessment for the ACSH Company

ACSH Company is a mid-size U.S. based financial institution and had a good-sized ESCON

attached disk (DASD) environment. ACSH Company made the decision to move to FICON.

This section describes the analysis of the existing ESCON attached DASD environment and

proposes a FICON DASD configuration. The analysis and modeling was done in the winter of

2007 using Intellimagic’s RMF Magic software.


4.4.1 Current ESCON attached DASD environment

At the time of the assessment, the ACSH Company mainframe environment had two (2) DASD

subsystems: one (1) IBM ESS F-20 (SN 19756) and one (1) IBM ESS-800 (SN 24213). SN

19756 has a total of 9.8 TB of usable space on a mix of 36 GB (24 8 packs) and 73 GB (10 8

packs) 10K RPM drives. This array is configured with 16 GB of cache, has 32 ESCON

connections to ESCON directors, and does not have parallel access volumes (PAV) active.

SN19756 is running (1868) 3390 Mod 3 and (480) 3390 Mod 9 volumes.

SN 24213 has a total of 6 TB usable space, exclusively on 73 GB (10 8 packs) 10K RPM drives.

This array is currently configured with 16 GB of cache and has 32 ESCON connections to

ESCON directors. This array does have parallel access volumes (PAV) active. SN 24213 is

running (28) 3390 Mod 3 and (672) 3390 Mod 9 volumes.

There is a total of 15.8 TB usable DASD space. All storage in the above subsystems is allocated

to mainframe hosts. The subsystems in this assessment are connected to two (2) IBM 2064

mainframe processors (105 and 107) and are used by five (5) MVS images, or LPARs. Each

processor has sixty-four (64) ESCON connections to a total of four (4) ESCON directors. There

are a total of sixty four (64) ESCON connections to the storage subsystems. Each processor

currently has 8 open ESCON channels, processor 105 has 4 FICON channels, and processor 107

has no FICON channels in its currently configured state. The current disk connectivity is at the

1024 addresses/channel limitation.

Figure 51 shows the current disk environment at ACSH Company.


Figure 51. Current Storage Environment

Below is a summary of the DASD environment, from a performance perspective.

Table 27 ESCON attached DASD performance at ASCH Company

Subsys ID   I/O Rate (Sum)   Usable GB   I/O Intensity   ESCON Channels (17 Mbyte/s)   ESCON Resp (ms)   ESCON Chan Util
I24213 2527 6082.7 0.415 32 3.9 8%
I19756 6698 9844.1 0.680 32 2.8 23%
Overall 9225.0 15926.8 0.579 64 3.4 16%

ESS F20   I/O Rate   IOSQ   Pend   Conn   Disc   Tot   R/W ratio   Read Hit   Write Hit   %Cache enable   Record %Read   Record %Write   Destage Perc.   Seeks per IO   Chan Util
Average 6698 1 0.4 1.1 0.3 2.8 7.9 96 99 100 0 100 2 1.01 23%
SHC1 2025 0.2 0.3 1.1 0.3 1.9 7.9 96 99 100 0 100 2 1.01 7%
SHN1 113 0.1 1.3 1 0.1 2.5 7.9 96 99 100 0 100 2 1.01 0%
CTP1 2004 1.1 0.4 1 0.2 2.7 7.9 96 99 100 0 100 2 1.01 6%
CTT1 110 0.1 1.3 1 0.1 2.5 7.9 96 99 100 0 100 2 1.01 0%
SHA1 2446 1.8 0.3 1.3 0.3 3.8 7.9 96 99 100 0 100 2 1.01 10%


4.4.2 Findings

Below are issues found in the ACSH Company DASD environment, based on information

gathered in interviews of key staff members and through analysis of data supplied by ACSH

Company.

ACSH has many ESCON channels, with low to moderate utilization. In table 27, the overall

utilization for the 64 ESCON channels is 16%, with the highest utilization being 23% on one (1)

subsystem. Considering the rated bandwidth of ESCON at 17MB/sec, this equates to an average

of 2.72 MB/sec per ESCON channel. This is very low.

4.4.3 Analysis and recommendations

This section contains the results of the analysis of the mainframe ESCON attached DASD

environment, and modeling of a recommended FICON DASD environment based on ACSH’s

stated plans, as well as information from a high availability/storage assessment. The analysis

was performed using Intellimagic’s RMF Magic and Disk Magic software tools to estimate the

effects of changing over the existing ESCON environment to a FICON environment. For the

high availability storage planning purposes, the assumption was made in the modeling that the

existing ESS F20 (SN 19756) would be replaced by an ESS 800; the modeling reflects this. A

second modeled configuration is based on migrating the existing ESS F20 to a DS 8300

configured with two (2) storage LPARs: one storage LPAR used for the 9.8 TB of data


from the ESS F20, and the other storage LPAR used as the 6 TB XRC target for the

existing ESS 800.

Table 28. Existing ESS 800 configuration

ESS 800   I/O Rate   IOSQ   Pend   Conn   Disc   Tot   R/W ratio   Read Hit   Write Hit   %Cache enable   Record %Read   Record %Write   Destage Perc.   Seeks per IO   Chan Util
Average 2527 0.1 0.2 0.9 2.7 3.9 4.2 55 89 100 32 100 6 1.38 8%
SHC1 1481 0.1 0.2 1.1 4.4 5.9 4.2 55 89 100 32 100 6 1.38 5%
SHN1 39 0 0.2 0.5 0 0.7 4.2 55 89 100 32 100 6 1.38 0%
CTP1 176 0 0.2 1.1 0.5 1.7 4.2 55 89 100 32 100 6 1.38 1%
CTT1 40 0 0.2 0.5 0 0.7 4.2 55 89 100 32 100 6 1.38 0%
SHA1 791 0 0.2 0.6 0.1 0.9 4.2 55 89 100 32 100 6 1.38 1%

Table 29. ESS 800 modeled with four FICON channels


I/O Rate   IOSQ   Pend   Conn   Disc   Tot   R/W ratio   Read Hit   Write Hit   %Cache enable   Record %Read   Record %Write   Destage Perc.   Seeks per IO   Chan Util
Average 2527 0.1 0.2 0.4 2.6 3.3 4.2 55 89 100 32 100 6 1.38 8%
SHC1 1481 0.1 0.3 0.5 4.4 5.2 4.2 55 89 100 32 100 6 1.38 5%
SHN1 39 0 0.2 0.2 0 0.4 4.2 55 89 100 32 100 6 1.38 0%
CTP1 176 0 0.2 0.5 0.5 1.2 4.2 55 89 100 32 100 6 1.38 1%
CTT1 40 0 0.2 0.2 0 0.4 4.2 55 89 100 32 100 6 1.38 0%
SHA1 791 0 0.2 0.3 0.1 0.6 4.2 55 89 100 32 100 6 1.38 2%

A variety of channel consolidation options were considered and modeled. Given the relatively

low ESCON channel utilization percentages found on the ESS 800, it was determined that an 8:1

channel consolidation (ESCON:FICON) was ideal for performance, cost, and growth. Table 29

above shows the performance improvements that would be realized by converting the ESS800 to

FICON. Please note that the improvements primarily are achieved via reductions in CONN time.
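The component times in these tables follow the standard RMF decomposition of device response time; a quick check of the "Average" rows confirms where the FICON gain comes from:

```python
# The response-time decomposition used in tables 28 through 31:
# total response = IOSQ + PEND + CONN + DISC (all in milliseconds).
# Figures below are the "Average" rows for the ESS 800 on ESCON
# (table 28) and as modeled with four FICON channels (table 29).

def total_resp(iosq: float, pend: float, conn: float, disc: float) -> float:
    return iosq + pend + conn + disc

escon = total_resp(0.1, 0.2, 0.9, 2.7)  # table 28 average
ficon = total_resp(0.1, 0.2, 0.4, 2.6)  # table 29 average; CONN halved
print(f"ESCON {escon:.1f} ms -> FICON {ficon:.1f} ms")
```

IOSQ, PEND, and DISC are essentially unchanged between the two tables, so the 0.6 ms improvement is almost entirely the reduction in CONN time from the faster channel.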

Table 30. ESS 800 FICON, additional 16GB cache


I/O Rate   IOSQ   Pend   Conn   Disc   Tot   R/W ratio   Read Hit   Write Hit   %Cache enable   Record %Read   Record %Write   Destage Perc.   Seeks per IO   Chan Util
Average 2527 0 0.2 0.4 1.9 2.6 4.2 64 91 100 32 100 6 1.21 8%
SHC1 1481 0 0.2 0.5 3.2 3.9 4.2 64 91 100 32 100 6 1.21 5%
SHN1 39 0 0.2 0.2 0 0.4 4.2 64 91 100 32 100 6 1.21 0%
CTP1 176 0 0.2 0.5 0.3 1 4.2 64 91 100 32 100 6 1.21 1%
CTT1 40 0 0.2 0.2 0 0.4 4.2 64 91 100 32 100 6 1.21 0%
SHA1 791 0 0.2 0.3 0.1 0.6 4.2 64 91 100 32 100 6 1.21 2%

Some additional scenarios were modeled for the existing ESS 800 as a FICON array. One such scenario analyzed how converting the ESS 800 to FICON and adding additional cache to

Copyright © Stephen R. Guendert 2007 273


All Rights Reserved
A Comprehensive Justification For Migrating From ESCON to FICON

the array would enhance performance. It was determined that the optimal cache addition, considering the cost involved, would be an additional 16 GB, for a total of 32 GB on the existing ESS 800. Table 30 above shows the performance improvements realized (exclusively in DISC time), as well as the improvements in cache hit ratios and in seeks per I/O.
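The DISC-time improvement can be understood with a simplified model in which disconnect time is dominated by back-end access on read misses. The 55% and 64% read hit ratios are the modeled values from Tables 29 and 30; the 8 ms miss penalty is a hypothetical back-end service time introduced here for illustration, not a measured figure:

```python
# Simplified model of how a higher read cache hit ratio shrinks DISC time:
# disconnect time is dominated by back-end access on read misses, so
# disc ~= (1 - read_hit) * miss_penalty. Numbers are illustrative only.

def avg_disc(read_hit: float, miss_penalty_ms: float) -> float:
    """Expected disconnect time per I/O, in milliseconds."""
    return (1.0 - read_hit) * miss_penalty_ms

penalty = 8.0  # hypothetical back-end read miss service time (ms)
before = avg_disc(0.55, penalty)   # 55% read hit (16 GB cache)
after = avg_disc(0.64, penalty)    # 64% read hit (32 GB cache)
print(f"disc before={before:.1f} ms, after={after:.1f} ms, "
      f"saved {before - after:.1f} ms/IO")
```

The point of the sketch is qualitative: every percentage point of read hit ratio recovered by the larger cache removes a slice of back-end service time from the average I/O.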

Table 31. ESS 800 FICON, 32 GB cache, 15K RPM drives


I/O R/W Read Write %Cache Record Record Destage Seeks Chan
Rate IOSQ Pend Conn Disc Tot ratio Hit Hit enable %Read %Write Perc. per IO Util
Average 2527 0 0.2 0.4 1.2 1.9 4.2 64 91 100 32 100 6 1.38 8%
SHC1 1481 0 0.2 0.5 2 2.8 4.2 64 91 100 32 100 6 1.38 5%
SHN1 39 0 0.2 0.2 0 0.4 4.2 64 91 100 32 100 6 1.38 0%
CTP1 176 0 0.2 0.5 0.2 0.9 4.2 64 91 100 32 100 6 1.38 1%
CTT1 40 0 0.2 0.2 0 0.4 4.2 64 91 100 32 100 6 1.38 0%
SHA1 791 0 0.2 0.3 0.1 0.5 4.2 64 91 100 32 100 6 1.38 2%

A final scenario modeled for the existing ESS 800 was replacing the existing 10K RPM 73 GB drives with 15K RPM 73 GB drives. This scenario, while impractical to carry out in practice, showed additional performance improvements that could be realized in reducing DISC time.

Table 32. Current ESS F20, ESCON

I/O R/W Read Write %Cache Record Record Destage Seeks Chan
ESS F20 Rate IOSQ Pend Conn Disc Tot ratio Hit Hit enable %Read %Write Perc. per IO Util
Average 6698 1 0.4 1.1 0.3 2.8 7.9 96 99 100 0 100 2 1.01 23%
SHC1 2025 0.2 0.3 1.1 0.3 1.9 7.9 96 99 100 0 100 2 1.01 7%
SHN1 113 0.1 1.3 1 0.1 2.5 7.9 96 99 100 0 100 2 1.01 0%
CTP1 2004 1.1 0.4 1 0.2 2.7 7.9 96 99 100 0 100 2 1.01 6%
CTT1 110 0.1 1.3 1 0.1 2.5 7.9 96 99 100 0 100 2 1.01 0%
SHA1 2446 1.8 0.3 1.3 0.3 3.8 7.9 96 99 100 0 100 2 1.01 10%

Table 33. Replace ESS F20 with 2nd FICON ESS 800
I/O R/W Read Write %Cache Record Record Destage Seeks Chan
Rate IOSQ Pend Conn Disc Tot ratio Hit Hit enable %Read %Write Perc. per IO Util
Average 6698 0 0.3 0.5 0.3 1.1 7.9 96 99 100 0 100 2 1.01 24%
SHC1 2025 0 0.3 0.5 0.3 1.1 7.9 96 99 100 0 100 2 1.01 7%
SHN1 113 0 0.8 0.5 0.1 1.3 7.9 96 99 100 0 100 2 1.01 0%
CTP1 2004 0 0.3 0.5 0.2 1 7.9 96 99 100 0 100 2 1.01 7%
CTT1 110 0 0.8 0.5 0 1.3 7.9 96 99 100 0 100 2 1.01 0%
SHA1 2446 0 0.3 0.6 0.3 1.2 7.9 96 99 100 0 100 2 1.01 9%


Table 32 above is a review/summary of the current ESS F20 ESCON performance. Table 33 summarizes the results of modeling under the assumption that the ESS F20 is replaced with a second ESS 800. The modeling led to the conclusion that 16 GB of cache, four FICON channels, and PAV enabled would be the ideal configuration for cost/performance trade-offs. (The ESS F20's 16 GB cache was already producing cache hit ratios > 96%.) Four FICON channels on the new array result in an IOSQ time of 0 and 50-55% reductions in CONN time for all LPARs attached to the new subsystem.

Table 34. DS8300 performance


Subsys ID   I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (2Gb Express)   FICON Resp (ms)   Chan Util
DS8300      9225             15926.8     0.579           8                              1                 18%

Table 34 summarizes the results of modeling a 15.8 TB FICON-attached DS8300 with two storage LPARs and 32 GB of cache: LPAR 1 as the replacement for the ESS F20, and LPAR 2 serving as the XRC target for the current ESS 800. For licensing purposes the DS8300 would appear as two subsystems due to the storage LPARs.

4.4.4 Switched FICON or direct attached (P2P) FICON?

There are clearly performance improvements to be realized when ACSH moves to FICON for DASD connectivity. Moving to FICON will also alleviate the addressing constraints ACSH is currently experiencing with its ESCON-connected DASD. The question remains whether direct-attached point-to-point (P2P) FICON or switched FICON is the better alternative when costs, performance, reliability, availability, and scalability are considered. Direct-attached P2P FICON is a short-term, lower-cost solution for an environment the size of


ACSH. In the long term, however, switched FICON is the better solution overall, in terms of cost as well as performance, reliability, availability, and scalability.

4.4.4.1 Cost considerations


Switched FICON has the upfront costs associated with the purchase of FICON switches and/or FICON directors, plus the costs of FICON channel cards for the mainframes and FICON adapters on the DASD and/or tape drives involved in the migration. With P2P FICON, each storage port requires its own physical port connection on the mainframe. FICON channel cards are relatively expensive, and the cost of a FICON channel "port" on a mainframe has historically been significantly higher than the cost of a FICON switch/director port. As a FICON environment scales, running P2P FICON results in a higher long-term TCO. Large enterprises moving to FICON will therefore almost always choose switched FICON; running point-to-point direct-attached FICON would simply be cost prohibitive for them (conceivably they would need to buy additional mainframe and/or DASD footprints just to add connectivity).

In a switched FICON environment, fan-in/fan-out ratios solve this problem, just as they solve other connectivity and scalability problems. Director/switch ports are less expensive than the FICON ports on mainframe channel cards. ACSH Company can use fan-in/fan-out to minimize the CHPID ports required as long as the bandwidth requirements are satisfied; in effect, the FICON switch/director functions as a CHPID multiplier. This will become even more important should ACSH Company decide to move its tape infrastructure to FICON: using a 200 MB/sec CHPID to drive a 30 MB/sec tape drive is not cost effective.
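The CHPID-multiplier effect reduces to simple division. The sketch below uses the nominal rates cited above (~200 MB/sec FICON channel, ~30 MB/sec tape drive):

```python
import math

# How many ~30 MB/s tape drives one ~200 MB/s FICON CHPID can feed
# through a director before the channel becomes the bottleneck.
CHPID_MBPS = 200.0
DRIVE_MBPS = 30.0

drives_per_chpid = math.floor(CHPID_MBPS / DRIVE_MBPS)
print(f"{drives_per_chpid} drives per FICON CHPID")

# With P2P, one drive per CHPID leaves most of the channel idle:
wasted_p2p = 1.0 - DRIVE_MBPS / CHPID_MBPS
print(f"P2P leaves {wasted_p2p:.0%} of the channel unused")
```

Fan-in through a director lets those otherwise idle MB/sec be shared among several drives rather than stranded behind a single point-to-point connection.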


Figure 52. Switched FICON Fan-in/Fan-out
[Diagram: only eight CHPIDs, fanned out through a FICON director, reach the same sixteen storage ports, containing channel-card cost.]

4.4.4.2 Reliability and Availability Considerations

Figure 53. Point to Point (P2P) FICON
[Diagram: a cable or optic failure on a point-to-point FICON link takes down both sides, the mainframe CHPID and the storage port.]

Referring to Figure 53, a failure of a channel card, mainframe channel port, P2P cable, storage port optic, or storage HBA causes two things, both of them bad. First, the mainframe port/CHPID becomes unavailable. Second, the storage port becomes unavailable to everyone. A failure anywhere therefore affects both the mainframe connection and the storage connection; there is no redundancy in the path. A P2P FICON topology provides the worst reliability and availability of any connectivity option.


In a switched FICON environment similar to Figure 54, a comparable failure makes only a segment of the FICON connectivity unavailable. The non-failing side remains available, and if the storage connectivity has not failed, its port is still available to other host CHPIDs via the FICON switch.

Figure 54. Switched FICON Availability
[Diagram: with a FICON director in the path, a single link failure isolates only one segment; the other side remains available.]

4.4.4.3 Performance considerations


In Figure 55, if CHPIDs 1E and 21 are consistently pushing low amounts of data, there is no opportunity to make better use of either the channel or the storage port. FICON does not do channel disconnect, P2P FICON does not allow ports to be shared, and the capability of the storage device may be underutilized as a result of the P2P connections.


Figure 55. P2P FICON Balanced Workloads
[Diagram: four point-to-point CHPID paths with FICON path busy percentages of 5% (CHPID 1E), 80%, 37%, and 15% (CHPID 20).]

In a switched FICON environment such as the one illustrated in Figure 56, fan-in/fan-out ratios help distribute the storage workload evenly across all ports. Fewer channel ports can often push more bandwidth across fewer storage ports. A storage manager can also typically put more capacity inside a DASD array when using switched FICON and fan-in/fan-out rather than P2P FICON, while using the same number of storage connections or fewer. Finally, a CHPID can drive a channel beyond the rate at which a storage port can operate; P2P FICON does not allow you to realize the full potential of the mainframe FICON channel cards you pay for.
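The benefit of sharing can be made concrete with the path busy figures from Figure 55 (5%, 80%, 37%, and 15%); the choice of three switched storage ports is an illustrative assumption:

```python
# With P2P, each storage port carries only its own CHPID's load; a
# director lets the same aggregate load be spread evenly over fewer
# ports. Busy fractions are the per-path figures from Figure 55.
p2p_busy = [0.05, 0.80, 0.37, 0.15]

aggregate = sum(p2p_busy)        # total offered load, in channel-equivalents
peak_p2p = max(p2p_busy)         # hottest P2P port

ports = 3                        # hypothetical switched storage port count
balanced = aggregate / ports     # even load per switched port
print(f"P2P peak {peak_p2p:.0%} vs {balanced:.0%} on each of {ports} ports")
```

The same total work is served with one fewer storage port, and no port runs anywhere near the 80% hot spot that the P2P layout produces.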

Figure 56. Switched FICON balanced workloads
[Diagram: eight CHPIDs through a FICON director may need to buy only 10 of 16 storage ports, containing cost on both sides.]


4.4.5 Final recommendations and conclusions

1) ACSH Company should upgrade the existing ESS F20 to a DS8300 with storage LPARs, PAV, and FICON connectivity.

2) For optimal performance, ACSH should consider upgrading the existing ESS 800 with FICON, an additional 16 GB of cache, and faster back-end disk drives (15K RPM 73 GB drives).

3) The existing ESCON directors should be replaced with two 32-port FICON 3232 switches, each configured with 16 ports.

4) ACSH Company should also explore converting its mainframe tape infrastructure to FICON, either by upgrading to FICON tape drives, using a FICON-to-ESCON conversion device, or a combination of the two. An assessment of the current mainframe tape environment to determine the best course of action is recommended.

5) If migrating the tape environment to FICON is planned for the near future, it is recommended, based on our understanding of the current tape environment, that ACSH Company plan on an additional two 32-port FICON switches for tape connectivity.


Figure 57. Recommended FICON DASD Infrastructure
[Diagram: two mainframes (2064-107 and 2064-105, 16 FICON channels each) attach through two 32-port FICON switches to a DS8000 (32 GB cache; LPAR0 = 9.8 TB of F20 data, LPAR1 = 6 TB XRC target; PAV active) and to the ESS 800 (16 GB cache, 6 TB usable disk, PAV active, XRC source).]

4.5 BBB Company FICON financial analysis

In October 2006, the BBB Company sought guidance on the long-term TCO savings that would be realized by migrating from ESCON to FICON. The BBB Company is a very large U.S. financial institution headquartered in New York City, with two data centers. The study done for BBB Company served as the catalyst for developing a dedicated, Excel-based FICON ROI tool. This section summarizes the financial case that was made for BBB Company.


4.5.1 ESCON environment

Table 35. Teaneck data center ESCON environment

Teaneck
P Ports Q Ports DASD Ports Tape Ports
Director 1 24 24 16 6
Director 2 10 10 8 6
Director 3 24 24 16 7
Director 4 10 10 10 7
Director 5 28 28 16 0
Director 6 26 27 16 2
Director 7 24 24 15 2
Director 8 23 22 17 0
Director 9 3 3 0 6
Director 10 3 3 0 6
Sub-Totals: 175 175 114 42
Total: 506


Figure 58. Teaneck ESCON directors 1-4
[Diagram: hosts P (2064-2C7, LPARs A580, SYST, B090, SYSW, SYSP) and Q (2064-1C8, LPARs C090, SYSD, SYSQ) connect through 9032 ESCON directors SW#1-SW#4 (E4C-E4F) to 8730, 7700, and 8830 DASD subsystems and to 9840 tape drives (F70-F76, F80-F85) in a SILO.]

Figure 59. Teaneck ESCON directors 5-8
[Diagram: hosts P and Q connect through 9032-3 ESCON directors SW#5-SW#8 (E50-E53), including CTC connections, to 8730, 8830, and 9393 DASD subsystems and a 7700 SRDF array.]


Figure 60. Teaneck ESCON directors 5-10
[Diagram: hosts P and Q connect through 9032-2 directors SW#9 (AC9) and SW#10 (ACA), alongside SW#5-SW#8, to 9490 tape drives (F57-F5C), a 3494/VTS, and two 8430 DASD subsystems.]


Figure 61. Sterling Forest ESCON directors (1)
[Diagram: hosts Y (2064-1C6, LPARs GSYS, TSYS, HSYS, SSYS, MSYS) and Z (2064-1C7, LPARs KSYS, FSYS, NSYS) connect through 9032 ESCON directors SW#9-SW#12 (AC9-ACC) to three 8830 DASD subsystems.]

Figure 62. Sterling Forest ESCON directors (2)


[Diagram: hosts Y and Z connect through 9032 ESCON directors SW#9-SW#12 to a 2105 DASD subsystem, 9490 tape drives (F70-F77), and 9840 drives (F30-F39) in a SILO.]

Table 36. Sterling Forest ESCON port count


Sterling Forest
Y Ports Z Ports DASD Ports Tape Ports
Director 9 40 39 14 2
Director 10 44 44 7 7
Director 11 39 40 14 2
Director 12 37 37 9 7
Sub-Totals: 160 160 44 18
Total: 382
4.5.2 FICON at BBB Company

The storage OEM and IBM performed the technical analysis for BBB Company, and the diagrams below illustrate the consolidated FICON environment. Considerable storage consolidation was achieved by moving to FICON and to new mainframes with FICON channel cards.

Figure 63. Teaneck FICON
[Diagram: hosts P and Q run FICON channels to two 2 Gb FICON-capable EMC DMX arrays, retain ESCON attachments to the existing 8730, 8830, 8430, 9393, and 7700 subsystems, and reserve channels for future FICON-capable VTS devices and tape drives.]

Figure 64. Sterling Forest FICON


[Diagram: hosts Y and Z run FICON channels to two FICON-capable EMC DMX arrays, retain ESCON attachments to the 8830 subsystems and the 2105, and reserve channels for future FICON-capable VTS devices and tape drives.]

4.5.3 Cost savings summary

The tables that follow summarize the cost savings calculated for the BBB Company from migrating to FICON.

Table 37. FICON consolidation savings Teaneck


Table 38. FICON consolidation savings Sterling Forest


Table 39. Total FICON consolidation cost savings


BBB Company was able to forecast an annual cost savings of $2,069,260 from migrating from ESCON to FICON, along with a capital cost savings of $6,971,200. The vast majority of those savings came from storage consolidation, mainframe consolidation, and floorspace and environmental cost savings.
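The two figures combine straightforwardly over a planning horizon. The sketch below uses the savings reported above; the three-year window is an illustrative assumption, and any migration costs (not given here) would be netted against the total:

```python
# Combine the savings reported above over an illustrative 3-year window.
# (Any migration costs, not given here, would be netted against this.)
capital_savings = 6_971_200   # one-time capital savings, USD
annual_savings = 2_069_260    # recurring annual savings, USD/year
years = 3

total = capital_savings + annual_savings * years
print(f"{years}-year savings: ${total:,}")
```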


Chapter 5

FICON has been available for more than seven years, and all storage vendors are in at least their second iteration of the hardware, software, and firmware; with FICON Express4, IBM is in its fourth iteration of the channel technology. It is quite clear from the documentation and literature, as well as from real-world performance studies, that FICON is a major technological improvement over ESCON. Data centers that have migrated from ESCON to FICON have seen improved response times and performance. The FICON protocol also has many inherent advantages over the ESCON protocol: it supports distances well over 10 km with hardly any speed degradation or data droop, and because it supports higher effective data rates per link, fewer links are needed, making a FICON solution more cost-effective to buy. Yet only an estimated one third of existing ESCON customers worldwide have migrated to FICON.

There have been two major factors behind the seemingly slow adoption of FICON technology. First, many of the concepts associated with FICON, particularly those associated with planning the environment, are diametrically opposed to what mainframers learned in the preceding 43 years of parallel and ESCON connectivity. Couple that with the lack of a clear, concise, and statistically rigorous planning methodology that emphasizes storage device performance rather than just channel utilization statistics, and the hesitancy among a traditionally risk-conscious, conservative user base is understandable. The second major factor delaying migration to FICON is the initial cost of entry, including the purchase of new hardware and infrastructure. One thing that is difficult for mainframe end users to recognize and quantify is the cost savings associated with eliminating some ESCON infrastructure, along with the ability to leverage other existing ESCON hardware by attaching it to the FICON network.


This dissertation reviewed ESCON and FICON as technologies and made technical comparisons between the two. The technical advantages that contribute directly to business and/or cost advantages include FICON's greater addressing limits, improved bandwidth and I/O capabilities, improved distance support, and protocol improvements allowing for better response times. Four case studies were examined in which clients of the author engaged the author's company for a technical/financial assessment of migrating from ESCON to FICON. The assessments described in detail how these advantages translate into cost savings with respect to DASD environments, tape/virtual tape environments, disaster recovery applications, the host, and the physical infrastructure in the respective client environments. At a high level, the costs and benefits can be summarized as follows:

5.1 Business Benefits and costs of migrating to FICON

There are six primary business benefits realized when migrating from ESCON to FICON.

1) The technical advantages of FICON over ESCON enhance overall performance in the mainframe environment, meaning more work can be performed in less time. More transactions can be processed in a given amount of time, or, looked at another way, less time passes between processed transactions. This has been extremely important in the financial sector, where the speed at which transactions can be executed is directly tied to a firm's ability to generate revenue, so moving to FICON can translate directly into additional revenue and profit gains.

2) FICON enhances overall enterprise resiliency and disaster recovery planning through the extended distances the protocol supports. The bandwidth capability of FICON enables faster recovery over those distances. The faster performance of FICON also allows an enterprise to better meet its Recovery Point and Recovery Time Objectives (RPO and RTO, respectively).


FICON's improved performance compared with ESCON enables DR-site disk volumes to be addressed more rapidly, and an environment to be brought online from a cold start more quickly. When recovering from tape, FICON significantly speeds up the recovery process.

3) FICON enhances a business's access to data with its higher addressing limits, which translate into more disk volumes being accessible to a given channel path.

4) The FICON protocol has room for growth (16,000 addresses supported today, with available growth to 65,000 addresses) and thus allows a business to be better prepared for internal growth, mergers and acquisitions, or consolidation.

5) The cumulative advantages of FICON present businesses with an opportunity to consolidate the IT infrastructure, in terms of the overall footprint of the mainframe and mainframe storage environments and perhaps the total number of data centers.

6) FICON and FCP intermix allows a business to make better use of IT budget dollars targeted for storage networking infrastructure. z/OS and Linux are supported on the same mainframe footprint, and FICON and open systems SAN networks can leverage the same directors. Using a common infrastructure for all storage connectivity provides significant opportunity for cost savings.

These business benefits need to be quantified and balanced against the following costs associated with a migration from ESCON to FICON:

1) FICON cabling: Enterprises will need either to a) put in a new cabling infrastructure (9/125 micron long wave single mode fiber is recommended) or b) leverage their existing ESCON cabling via the addition of mode conditioning patch (MCP) cables. Use of MCP cables will restrict the network to 1 Gbps FICON, so investing in single mode infrastructure is strongly recommended. Either alternative is a cost to consider, although long wave single mode better positions an enterprise for the future as speed and bandwidth continue to improve.

2) FICON DASD: Most businesses that migrate to FICON do not do so just for the connectivity advantages; generally there is a more significant driving factor. Often the key driver is that older ESCON DASD is coming off lease, or a maintenance contract is nearing expiration. These businesses typically will not invest in new DASD array(s) only to continue using ESCON for attachment.

3) FICON tape drives/libraries/virtual tape: Similar to DASD above.

4) FICON directors/switches: Larger mainframe environments moving to FICON will want to purchase FICON directors to replace their ESCON directors. Direct attachment of FICON only makes sense for the smallest FICON environments (i.e., one host and one or two DASD frames). FICON directors, however, can be a significant cost in the equation.

5) FICON processors: Mainframes older than the 9672 S/390 G5 are not FICON capable. Should an enterprise contemplating a move to FICON have one of these older mainframes on its data center floor, this deserves serious consideration. Regardless, unless it buys all new mainframes, the enterprise will need to swap ESCON channel cards for FICON channel cards. An enterprise about to purchase a mainframe with ESCON channel cards is strongly advised to negotiate, in writing, the right to change those cards out at a later date at minimal cost.

6) FICON controllers: If an enterprise has existing ESCON DASD arrays or tape drives whose leases have not yet expired, the following options are available: a) leave them on ESCON; b) implement FICON bridge (FCV) cards in the model 5 ESCON directors and run FICON at 1 Gbps; c) upgrade their controllers to FICON; or d) implement protocol converter technology that converts native FICON to ESCON for attachment of legacy devices to FICON mainframe channels. The converter approach allows a customer to migrate the mainframes to FICON, run native FICON channels into the converter (via a FICON director if desired), and run ESCON channels out of the converter to the existing ESCON storage.

Cost justifying FICON DASD

FICON has been available for more than seven years, yet it has not been widely implemented. The organizations that have implemented FICON did so primarily because they had a real need to overcome the device addressing and performance limitations associated with ESCON. To fully realize the benefits of FICON, an organization must have a CPU, DASD, and directors that all support FICON. In many cases this can mean a wholesale replacement of the existing environment, a proposition that can be very expensive. Further, most IT departments do not view their existing ESCON performance as a limiting issue, which makes the case for investment in FICON much more difficult.

The industry is reaching the point where older, ESCON-capable equipment is being phased out of the support and maintenance matrix, making that equipment very expensive to maintain. One way to reduce ongoing costs is to replace older equipment with new, FICON-capable equipment that comes with a warranty. Once this is done, the move to FICON is inevitable: the hardware is in place, and it simply requires some configuration changes.

Today's mainframe environment, in particular the zSeries, has strained the limits of ESCON, and FICON provides relief from those limitations. FICON allows organizations to use more storage per DASD subsystem without changing to larger volume sizes, while maintaining the same or better response times. This translates into fewer subsystems and channels being required to support a given mainframe DASD environment. In other words, migrating a DASD environment from ESCON to FICON is not just about channel consolidation; it is about a reduction in serial numbers. The ESCON limitation of 1024 addresses in a DASD environment running mod 3 disks has translated, on average, into requiring a new DASD array/frame/footprint every 3-4 TB, even though these same ESCON DASD frames had an average advertised capacity (in the 2000 timeframe) of 7-11 TB. An enterprise could not fully utilize the frame it purchased because of the limitations of ESCON. Today's FICON-capable DASD arrays have an average advertised capacity of 20-80 TB, and much more of each array can be utilized in a mainframe environment thanks to FICON's relief of the ESCON limitations, as well as the ability to use larger logical volume sizes and PAVs more effectively. Fewer serial numbers translate into real cost savings in hardware, floor space, power/cooling, management, and software such as advanced copy features. This software is typically licensed per array, with the cost varying with the usable storage capacity in the DASD array frame; in other words, DASD vendors tier such software costs on two factors, the number of arrays and the amount of storage in each array. In general, for "X" TB of total usable storage in an environment, it is more expensive in terms of software licensing to spread that "X" TB over more frames with fewer usable TB per frame than to consolidate it onto fewer total frames.
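The "new frame every 3-4 TB" figure follows directly from the addressing arithmetic. In the sketch below, the ~2.838 GB usable capacity of a 3390 model 3 volume is an approximation introduced here for illustration:

```python
# Why ESCON forced a new DASD frame every ~3 TB: 1024 device addresses
# times a 3390 model 3 volume of ~2.838 GB usable (approximate figure).
ESCON_ADDRESSES = 1024
MOD3_GB = 2.838

usable_tb = ESCON_ADDRESSES * MOD3_GB / 1000.0
print(f"ESCON-addressable capacity: {usable_tb:.1f} TB per frame")

# Fraction of a 7-11 TB (circa 2000) advertised frame actually reachable:
for advertised_tb in (7, 11):
    print(f"{advertised_tb} TB frame: {usable_tb / advertised_tb:.0%} usable")
```

Under these assumptions, well over half of an ESCON-era frame's advertised capacity was stranded behind the addressing limit, which is exactly the stranded capacity FICON's larger address space recovers.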

How significant can the consolidation and TCO savings be? Look back at the analysis and modeling tables from earlier. Obviously every enterprise has a unique environment; however, the author respectfully submits that the results will be similar: TCO savings that justify the expenditure. Being able to consolidate 12 older ESCON-attached frames on maintenance onto 4 newer FICON-attached frames resulted in significant TCO savings over a 3-year period.


Cost justifying FICON for native and virtual tape

FICON has allowed for tremendous performance improvements on the DASD side of mainframe storage, but DASD is not the only component of the mainframe infrastructure to benefit from FICON technology. While DASD is the usual focus, and the performance benefits of FICON over ESCON are frequently framed in terms of DASD, a compelling case can be made for tape as well.

Since FICON's inception and adoption in the late 1990s, rapid technological improvements have been made in native tape and virtual tape subsystems. The improvements made in DASD during the same period have enabled FICON DASD arrays to perform full dumps to disk much more rapidly than in the ESCON era of the early to mid 1990s. The improvements made in native and virtual tape technologies, however, can truly be realized only by migrating to FICON.

FICON Express channels allow an enterprise to run over 150-170 MB/sec for large data

transfers, typically seen with highly sequential tape jobs. Therefore, the metric most crucial to

designing the native tape component of your FICON infrastructure is that MB/sec number, which

can be correlated to the bus busy metric. Aggregating multiple ESCON tape channels onto a

single FICON channel can significantly reduce mainframe tape infrastructure by reducing the

number of channels, director ports, and control unit ports needed for mainframe tape. But

FICON and the new advancements made in tape technology allow realization of another gain:

consolidating the number of tape drives, as each drive handles its work more quickly.

The past two to three years have seen significant advances in enterprise tape technology, both

in head-to-tape transfer rates and in cartridge accessibility and capacity. The one most crucial


to our FICON discussion is head to tape transfer rates. These new tape drives allow an end user

to run at 30+ MB/sec for native head-to-tape transfer rates, with compressed data achieving even

higher effective rates. This is several times what the preceding generation of tape drives was

capable of achieving. Recall from earlier discussion that the theoretical bandwidth of ESCON is

17-18 MB/sec. That was more than enough to drive an ESCON IBM 3590 or STK 9840 drive at

its advertised native speed.

Obviously, FICON is more than capable of running these older 3590 or 9840 drives at their

advertised speeds as well. FICON presents several attractive options. One option is to

significantly reduce channels and ports dedicated to tape by putting several of these tape drives

onto one FICON Express channel via logical daisy chaining behind a director. Another option

that will work well with both IBM’s shared control unit type tape drives, and STK’s direct fibre

“1x1” tape drive is to replace several of the older, slower tape drives with the new drive

technology that can achieve 30+ MB/sec and still daisy chain multiple tape drives on a FICON

channel. An October 2003 STK white paper/study found that it was possible to put 6 of the new

9840B FICON tape drives on one FICON express channel. The same type of study from IBM

applies equally to the IBM 3592 tape drives. The assessments at ABC and XYZ companies

validated these assertions.
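The channel-consolidation arithmetic above can be sketched directly from the bandwidth figures quoted in this chapter. Note that simple division of sustained channel bandwidth by native drive rate gives a slightly conservative answer; the STK study's finding of six drives per channel reflects that drives rarely all stream at their full rate simultaneously.

```python
# Back-of-the-envelope channel-consolidation arithmetic using the
# approximate bandwidth figures quoted in the text.

FICON_EXPRESS_MBSEC = 150   # large sequential transfers, low end of the 150-170 range
ESCON_MBSEC = 17            # effective ESCON channel data rate
TAPE_DRIVE_MBSEC = 30       # native head-to-tape rate of a 9840B/3592-class drive

# How many ESCON tape channels' worth of bandwidth fits on one FICON channel?
escon_channels_per_ficon = FICON_EXPRESS_MBSEC // ESCON_MBSEC

# How many 30 MB/sec drives can share one FICON Express channel before the
# channel itself becomes the bottleneck, assuming all stream at full rate?
drives_per_ficon_channel = FICON_EXPRESS_MBSEC // TAPE_DRIVE_MBSEC

print(escon_channels_per_ficon, drives_per_ficon_channel)
```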

Finally, do not buy these high performance tape drives and run them on ESCON channels.

Enterprises doing so will not get what they are paying for. An appropriate analogy is the one

with the US driver in Ohio who decides to purchase that Lamborghini because it’s the fastest car

out there today, yet when he drives it in the US, he can only drive it at 65 MPH. He leaves a lot

of performance that he paid for untapped, as does the end user who purchases 9840B/C or 3592

tape technology and runs it on ESCON channels.


While FICON allows the end users of today’s tape drives to run them at their rated native

speeds, virtual tape systems can also benefit from FICON. FICON allows a virtual tape system

to accept and deliver more tape workload while using fewer channels. In general (specifics vary

by vendor, either IBM or STK), the virtual tape system will have a maximum number of ESCON

channels for input. Also, the same virtual tape system will have a maximum number of FICON

channels for input; however, the maximum number of FICON channels is smaller. Using FICON for virtual

tape rather than ESCON reduces the number of control units, back end tape drives, and channel

paths required. But when you look at the bandwidth coming in on those fewer FICON

channels, the MB/sec coming into the FICON virtual tape system is typically 3x what the

number was for ESCON. However, looking solely at this can put the end user in a quandary.

To deal with that bottleneck on the front end by upgrading to a FICON capable virtual tape

system, and doing nothing with the existing native tape drives on the back end of the virtual tape

system may merely shift the bottleneck to the back end. This depends on the hit ratios that users

will be able to achieve for virtual mounts. When the old ESCON tape drives on the back end of

your virtual tape system are the bottleneck, you can deal with this by adding more of the same

tape drives. However, a virtual tape system will have a maximum number of supported native

tape drives per system, and taking this route will lead to adding additional virtual tape systems.

Since the reason for migrating to FICON virtual tape is to reduce the number of virtual tape

systems, this is not the way to go.

How can this problem be solved? The answer is straightforward: when moving to FICON virtual tape,

upgrade to the new FICON native tape technology discussed earlier for the back end of your

virtual tape systems. In the study cited earlier for native tape, virtual tape was also analyzed


using Batch Magic. The study concluded that, for the environment in question, migrating from

ESCON IBM VTS subsystems made it possible to consolidate from 3 VTS subsystems running

12 IBM 3590E ESCON tape drives per VTS to 2 FICON IBM VTS subsystems running 8 IBM

3592 tape drives.
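The back-end consequence of that consolidation can be sketched in aggregate-bandwidth terms. The per-drive native rates below are assumed values for illustration (roughly 14 MB/sec for an ESCON 3590E and 40 MB/sec for a 3592), and the 8 drives are read as per FICON subsystem; the cited study did not publish these exact throughput numbers.

```python
# Aggregate back-end throughput sketch for the VTS consolidation described
# above. Per-drive native rates are assumptions for illustration only.

def backend_mbsec(subsystems, drives_per_subsystem, drive_mbsec):
    """Total native back-end tape bandwidth across all virtual tape subsystems."""
    return subsystems * drives_per_subsystem * drive_mbsec

before = backend_mbsec(subsystems=3, drives_per_subsystem=12, drive_mbsec=14)  # ESCON 3590E
after = backend_mbsec(subsystems=2, drives_per_subsystem=8, drive_mbsec=40)    # FICON 3592

# Fewer subsystems and fewer drives, yet more aggregate back-end bandwidth.
print(before, after)
```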

Final thoughts

FICON will continue to be a “hot topic” for some time to come. More and more enterprises are

already familiar with the underlying Fibre Channel technology through their experience with

open systems storage area networks (SAN). FICON and open systems SAN intermix

(FICON/FCP intermix), i.e. running one common storage network for both mainframe and open

systems, is becoming more and more attractive to enterprises. It makes a lot of financial sense in

today’s era of cash-strapped CIO budgets, and as the “kinks” are worked out by those on the

“bleeding edge”, it will make more and more technical sense as well. It is the author’s belief

that the world is becoming less and less operating systems centric, and more and more data

centric. We are seeing open source Linux on mainframes with ever-increasing frequency. We are

heading towards a world that is going to be run on a common storage area network spanning

open systems and mainframes. FICON is going to be around for awhile.


References

Adlung, I. & Banzhaf, G. (2002). FCP for the IBM eserver zSeries systems: Access to distributed

storage. IBM Journal of Research and Development, 46-4, Sept 2002, 487-502.

Amann, S. & Banzhaf, G. (2007). Sharing FCP adapters through virtualization. IBM Journal of

Research and Development, 51-1, Jan 2007, 103-118.

Artis, H.P. (1998). DIBS, data buffers, and distance: Understanding the performance

characteristics of ESCON links. Proceedings of the 1998 Computer Measurement Group.

Turnersville, NJ: Computer Measurement Group.

Artis, H.P. (2006a). Understanding the performance implications of HyperPAVs. Pagosa

Springs, CO: Performance Associates, Inc.

Artis, H.P. (2006b). Understanding the performance implications of MIDAWs. Pagosa Springs,

CO: Performance Associates, Inc.


Artis, H.P., & Guendert, S. (2006). Designing and managing FICON interswitch link

infrastructures. Proceedings of the 2006 Computer Measurement Group. Turnersville, NJ.

Computer Measurement Group.

Artis, H.P., & Houtekamer, G. (1993). MVS I/O subsystems: Configuration management and

performance analysis. New York, NY: McGraw Hill.

Artis, H. P. & Ross, R. (2004). Understanding FICON channel path metrics. Pagosa Springs,

CO: Performance Associates, Inc.

Aulet, N.R. & Doerstler, D.W. (1992). IBM enterprise systems multimode fiber optic

technology. IBM Journal of Research and Development, 36-4, July 1992, 553-576.

Banzhaf, G. & Friedrich, R. (2005). Host-based access control for zSeries FCP channels.

zJournal, 304, August 2005, 99-103.

Barbara, D., Dodge, R., & Menasce, D. (2001). Preserving QoS of E-commerce sites through

self tuning: A performance model approach. Proceedings of the 2001 ACM Conference on

E-Commerce, Tampa, FL.

Basener, R. & Cronin, C. (2003). Performance Considerations for a Cascaded FICON Director

Environment. IBM.

Beal, M. & Trowell, K. (1999). Introduction to IBM S/390 FICON. Poughkeepsie, NY: IBM

Redbooks.

Beal, M. & Trowell, K. (2001). FICON CTC implementation. Poughkeepsie, NY: IBM

Redbooks.

Becht, M. & Easton, J.R. (2007). Redundant I/O interconnect. IBM Journal of Research and

Development, 51-1, January 2007, 173-184.


Berger, J. & Bruni, P. (2006). How does the MIDAW facility improve the performance of

FICON channels using DB2 and other workloads? Poughkeepsie, NY: IBM Redbooks.

Boyd, R. & Guendert, S. (2006). FICON and mainframe disaster recovery insourcing. Disaster

Recovery Journal, 19-1, Winter 2006, 81-82.

Calta, S.A., & deVeer, J.A. (1992). Enterprise systems connection (ESCON) architecture-

system overview. IBM Journal of Research and Development. 36-4, July 1992, 535-550.

Case, R.P., & Padegs, A. (1978). Architecture of the IBM System/370. Communications of the

ACM, 21, January 1978.

Casper, D.F., Flanagan, J.R. & Gregg, T.A. (1992). The IBM enterprise systems connection

(ESCON) channel-a versatile building block. IBM Journal of Research and Development,

36-4, July 1992, 617-632.

Cassier, P. & Kornhonen, R. (2005). Effective zSeries performance monitoring using resource

measurement facility. Poughkeepsie, NY: IBM Redbooks.

Chambers, G. & Hatfield, B. (2006). IBM System z9 enterprise class technical guide.

Poughkeepsie, NY: IBM Redbooks.

Coleman, J.J., Meltzer, C.B., & Weiner, J.L. (1992). Fiber distributed data interface attachment

to System/390. IBM Journal of Research and Development. 36-4, July 1992, 647-654.

Cormier, R.L. (1983). System/370 extended architecture: The channel subsystem. IBM Journal

of Research and Development, 27-3, May 1983.

Cronin, C. (2002). FICON and FICON Express channel performance. Poughkeepsie, NY: IBM

Corporation.


Cronin, C. (2007). IBM System z9 I/O and FICON Express4 channel performance. Retrieved

August 28, 2007 from

ftp://ftp.software.ibm.com/common/ssi/rep_wh/n/ZSW03005USEN/ZSW03005USEN.PDF

Dhondy, N. & Petersen, D. (2006). GDPS: The e-business availability solution. Retrieved

online August 14, 2007 from

ftp://ftp.software.ibm.com/common/ssi/rep_wh/n/ZSW01920USEN/ZSW01920USEN.PDF

Elliott, J.C. & Sachs, M.W. (1992). The IBM enterprise systems connection (ESCON)

architecture. IBM Journal of Research and Development, 36-4, July 1992, 577-591.

Franaszek, P.A. & Widmer, A.X. (1983). A DC balanced, partitioned, block 8b/10b

transmission code. IBM Journal of Research and Development, 27-2, April 1983, 440-451.

Fries, W. & Kordmann, H. (2005). IBM System z9 and eserver zSeries connectivity handbook.

Poughkeepsie, NY: IBM Redbooks.

Futral, W. (2001). Infiniband architecture development and deployment: A strategic guide to

server I/O solutions. Hillsboro, OR: Intel Press.

Gardner, C. (2000). The valuation of information technology. New York, NY: Wiley.

Georgiou, C.J. & Larsen, T.A. (1992). The IBM enterprise systems connection (ESCON)

director: A dynamic switch for 200 MB/sec fiber optic links. IBM Journal of Research and

Development, 36-4, July 1992, 593-616.

Guendert, S. (2004). The CNT FICON Workshop. Plymouth, MN: Computer Network

Technology Corp.

Guendert, S. (2005a). Next generation directors, DASD arrays, and multi-service, multi-protocol

storage networks. zJournal, 301, February 2005, 26-29.


Guendert, S. (2005b). FICON CTC: A primer parts 1 and 2. Computer Measurement Group

Measure IT Online. Retrieved September 10, 2007 from

http://www.cmg.org/measureit/issues/mit22/m_22_9.html

Guendert, S. (2005c). Taking FICON to the next level: Cascaded high performance FICON.

Proceedings of the 2005 Computer Measurement Group. Turnersville, NJ: Computer

Measurement Group.

Guendert, S. (2005d). Buffer to buffer credits and their effect on FICON performance.

Computer Measurement Group Measure IT Online. Retrieved September 10, 2007 from

http://www.cmg.org/measureit/issues/mit20/m_20_8.html.

Guendert, S. (2005e). Proper sizing and modeling of ESCON to FICON Migrations.

Proceedings of the 2005 Computer Measurement Group. Turnersville, NJ: Computer

Measurement Group.

Guendert, S. (2006). IBM System z9 I/O improvements. Computer Measurement Group

Measure IT online. Retrieved Sept 29, 2007 from

http://www.cmg.org/measureit/issues/mit29/m_29_2.html

Guendert, S. (2007). Revisiting business continuity and disaster recovery planning and

performance for 21st century regional disasters: The case for GDPS. Journal of Computer

Resource Management, 120, Summer 2007, 33-41.

Guendert, S., & Houtekamer, G. (2005). Sizing your FICON conversion. zJournal, 303, June

2005, 22-30.

Guendert, S. & Lytle, D. (2006). McDATA FICON Seminar. Broomfield, CO: McDATA Corp.

Guendert, S. & Lytle, D. (2007a). MVS and z/OS servers were pioneers in the fibre channel

world. Journal of Computer Resource Management, 121, Fall 2007, 3-17.


Guendert, S. & Lytle, D. (2007b). FICON buffer to buffer credit management: An Oxymoron?

zJournal, 503, June 2007, 28-35.

Guendert, S. & Lytle, D. (2007c). Advanced Mainframe Infrastructure Seminar. San Jose, CA:

Brocade Communications.

Guendert, S., & Seitz, S. (2004). FICON migration: Save money and improve performance in

your mainframe infrastructure. zJournal, 205, Oct 2004, 43-47.

IBM (1966). IBM System/360 principles of operation. GA-22-6821. Poughkeepsie, NY: IBM

Corporation.

IBM (1990). IBM enterprise systems architecture/390 principles of operation. SA22-7200.

Poughkeepsie, NY: IBM Corporation.

IBM (2006). z/OS resource measurement facility performance management guide. Boeblingen,

Germany: IBM Corporation.

IBM Systems Group (2003). IBM storage infrastructure for business continuity. Retrieved

online August 18, 2007 from

http://www.ibm.com/support/docview.wss?uid=ssg1S7000806&aid=1

Jain, R. (1991). The art of computer systems performance analysis. New York, NY: Wiley.

Johnson, R. H. (1989). MVS concepts and facilities. New York, NY: McGraw Hill

Kembel, R. (2003). Fibre Channel: A comprehensive introduction. Tucson, AZ: Northwest

Learning Associates.

Lorin, H. (1971). Parallelism in hardware and software: Real and apparent concurrency.

Englewood Cliffs, NJ: Prentice Hall.

McGavin, T. & Mungal, A. (2004). ESCON to FICON migration planning. Proceedings of the

2004 Computer Measurement Group. Turnersville, NJ: Computer Measurement Group.


Merrill, H.W. (1984). Merrill’s expanded guide to computer performance using the SAS system.

Cary, NC: SAS Institute.

Money, A. & Remenyi, D. (2000). The effective measurement and management of IT costs and

benefits. Burlington, MA: Butterworth-Heinemann.

Neville, I. & White, B. (2006). FICON Implementation Guide. Poughkeepsie, NY: IBM

Redbooks.

Ogden, B. & White, B. (2005). IBM System z9 109 technical introduction. Poughkeepsie, NY:

IBM Redbooks.

Padegs, A. (1983). System/370 extended architecture: Design Considerations. IBM Journal of

Research and Development, 27-3, May 1983.

Prasad, N. S. (1989). IBM Mainframes: Architecture and design. New York, NY: McGraw Hill.

Raften, D. & Ratte, M. (2005). GDPS family-An introduction to concepts and capabilities.

Poughkeepsie, NY: IBM Redbooks.

Ross, R. (2001). FICON I/O measurements: Everything you know is wrong! Pagosa Springs,

CO: Performance Associates, Inc.

Simitci, H. (2003). Storage network performance analysis. Indianapolis, IN: Wiley.

Tucker, S.G. (1986). The IBM 3090 system: An overview. IBM Systems Journal, 25-1, Jan

1986, 15-16.

Trowell, K. & White, B. (2002). FICON native implementation and reference guide.

Poughkeepsie, NY: IBM Redbooks.

United States Federal Reserve System (2003). Interagency paper on sound practices to

strengthen the resilience of the U.S. financial system. Retrieved online Sept 14, 2007 from

http://sec.gov/news/studies/34-47638.htm.


United States Securities and Exchange Commission. (2002). Summary of “lessons learned”

from events of September 11 and implications for business continuity. Retrieved online

September 14, 2007 from http://sec.gov/divisions/marketreg/lessonslearned.htm.

Wyman, L. & Yudenfriend, H. M. (2004). Multiple logical channel subsystems: Increasing

zSeries I/O scalability and connectivity. IBM Journal of Research and Development, 48-3,

May 2004. 489-505.


Appendix A

Step-by-Step Instructions for use of the McDATA-Brocade FICON ROI tool

Step 1: Preparing to Use the Tool

You must have Microsoft Excel 97 or a later version installed on your computer in order to use

the ROI scenario modeling tool. Note: The screenshots in this user guide may appear somewhat

different from what you see on your screen, depending on your version of Excel. All figures shown in this

User’s guide are from Microsoft Excel Version 10 (as shipped with Office XP Professional). The

ROI scenario modeling tool is designed to work with all versions of Excel back to Excel 97, and

the instructions are identical for all versions.

Prior to launching the tool for the first time, your macro security settings need to be set to Medium. This

process only needs to be completed once, and the tool will then run successfully. To

accomplish this, please follow these steps.

1. Click on TOOLS

2. Scroll down to MACRO (it is usually close to the bottom of the list)

3. Scroll over to SECURITY, and the Macro Security Dialog Box will appear.

4. Then select MEDIUM in the Security Level tab

5. Now click OK and this process is complete.

Figure 2-1 shows the menu that you will use to perform this operation, and Figure 2-2 shows the

macro security dialog box.


Figure 2-1

Figure 2-2: Macro Security Dialog Box


Step 2: Launching the Tool with Macros Enabled

Once you have launched the ROI scenario modeling tool, you will be prompted to Disable or

Enable Macros. For this tool you will always select ENABLE MACROS. Once that has been

selected you will be taken directly to the home page.

With your security settings set to “Medium”, you will see the following dialog box when you

first open the tool:

Figure 2-3: Macro enable/disable dialog box appearing when you open the ROI scenario

modeling tool.

For the tool to function properly, you need to click “Enable Macros”. The spreadsheet will open;

the screen will flicker as a few macros run that reset your toolbars for simplicity and ease of use.

When the macros are finished running, the tool will settle on the Get Started page.

Module 1: Home

Figure 2-4 shows the “Home” screen that should appear when you successfully open the tool

using Microsoft Excel. The figure callout reads: enter your customer’s name, your name and

contact information, and the version of the scenario generated here.


Figure 2-4 callout: When you’ve entered your contact information and reviewed the Brocade

FICON ROI tool description with your customer, click “Get Started” to open the Input section of

the tool.

On the “Home” screen there are five fields for you to enter information. At the top of the screen

you should enter:

1. Your customer’s company name

2. The name of your direct contact at that company

3. Your Name

4. Your telephone number

5. Version of the model (for multiple iterations of a scenario)

(These fields are highlighted in yellow in Figure 2-4)

The Home page also includes a currency converter where you can convert the model to the

currency of another country, should it be necessary. Available choices for the currency include:

U.S., Canadian, and Australian dollars, Euros, British Pounds, Brazilian Real, Chinese Yuan,


Indian Rupees, South African Rands, and Japanese Yen. The address of a reputable foreign

exchange website is also provided.
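A conversion of the model's figures of the kind the Home page performs can be sketched as follows. The rates below are placeholder assumptions (the tool expects you to look up current rates on a foreign exchange website), and the helper name is illustrative, not part of the actual spreadsheet.

```python
# Hypothetical currency-conversion helper mirroring the Home page feature.
# Rates are placeholder assumptions, not live exchange rates.

RATES_PER_USD = {"USD": 1.0, "CAD": 1.35, "AUD": 1.50, "EUR": 0.92,
                 "GBP": 0.79, "BRL": 5.0, "CNY": 7.2, "INR": 83.0,
                 "ZAR": 18.5, "JPY": 150.0}

def convert(amount_usd, currency):
    """Convert a USD figure in the model to the selected currency."""
    return round(amount_usd * RATES_PER_USD[currency], 2)

print(convert(100_000, "EUR"))
```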

When you’re ready to start modeling, click on the “Start” button (circled in red in Figure 2-4)

and you will be taken to the next module—Input

Module 2: Input

In this module you will populate the tool with inputs that you will gather directly from your

client/customer. It is best to populate the analysis with what you know or your best assumptions

prior to meeting with your client for an ROI consultation. This will allow you to prepare for a

more interactive conversation with your client and help ensure that the results are presented in

the best light of the offer. It also gives you the chance to demonstrate that you have done your

homework and may reduce the amount of work you will have to do in the conversation with your

client.

When you sit down with your customer, walk through each input section so they understand

what is needed from them. This process will also be beneficial to you to get a better

understanding of their budget process. It is critical that you get the most accurate information

from your customer, and take the time to thoroughly understand their current situation and future

direction. For each input, get the customer’s best estimate and type the number into the white

input field. If you aren’t sure about the best number to enter into an input field, you can use the

“sample” field as a guide.

You will notice as you proceed down the spreadsheet that some of the cell descriptions have

comments added to them. This will be indicated by a small red half triangle in the upper right

corner of the cell. These comments are intended to clarify/amplify the field in question. For


example, in the Labor section, there is a cell for “total number of people managing cable

plant.” When you move the cursor over the small red half triangle, the comment “please enter the

number of people responsible for managing the cable plant” appears.

As you can see by looking through the fields, effective use of this tool will require consultative

selling in which you will need to ask many questions to arrive at the answers needed to use the

tool. While the tool does provide default values based on industry research, the most realistic

end result will be achieved when you ask the right questions to get the answers directly from

your client/customer.

Figure 2-5

The gray fields are “sample” inputs to help guide you. The white fields are for you to enter the

inputs from your specific customer.

Make sure that you verify all of the inputs with the appropriate people within your client’s

organization. At any time during the entry process, you can press the “Save” button to save your

work. If you wish to create different scenario models for a single client, you can save each

scenario model to a different file.


Module 3: Investment

This module is where you will enter the expenditures for the investment in the proposed FICON

solution being made by the client/customer. Most of the fields are self-explanatory. Please note

the comment in the FICON hardware cell: this cell is intended to reflect all new non-mainframe

hardware expenditures for the proposed FICON solution. This includes new DASD, tape, virtual

tape, FICON switches/directors, extension gear, and FICON converters. It does not include

mainframe hardware upgrades such as new processors and/or new channel cards as that is taken

into account in the “Mainframe Hardware upgrades” cell.

Costs for software (such as Intellimagic software) may be placed into the professional services

cells, or the FICON hardware cell. If the client/customer intends on using their own personnel

for the migration work, either to do the implementation work on their own, or in conjunction

with a Brocade professional services engagement, the fields exist for you to include their

manpower/personnel costs.

Module 4: ROI Summary

This page is a summary view of the four key ROI metrics: Return on Investment (ROI), Net

Present Value (NPV), Payback Period (the number of months it takes to recoup the initial

setup costs and licensing fees), and Internal Rate of Return (IRR). Section 3 of this user’s guide

describes each of these metrics, how they are calculated in this tool, and how to interpret them

for your customer.
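These four metrics can be sketched with the standard textbook formulas; the functions below are a minimal illustration, not the tool's exact internal calculations, and the 3-year cash-flow scenario at the end is an invented example.

```python
# Minimal sketches of the four financial metrics the ROI Summary reports.
# Standard textbook formulas; not the tool's exact internals.

def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def roi(total_benefits, total_costs):
    """Simple ROI expressed as a percentage of the investment."""
    return 100.0 * (total_benefits - total_costs) / total_costs

def payback_months(initial_cost, monthly_benefit):
    """Months needed for cumulative benefits to recoup the initial outlay."""
    months, recovered = 0, 0.0
    while recovered < initial_cost:
        recovered += monthly_benefit
        months += 1
    return months

def irr(cash_flows, lo=-0.99, hi=10.0, iterations=100):
    """Internal Rate of Return by bisection: the rate at which NPV is zero."""
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative 3-year scenario: $500K investment, $300K net benefit per year.
flows = [-500_000, 300_000, 300_000, 300_000]
print(round(npv(0.10, flows)), round(irr(flows), 3), payback_months(500_000, 25_000))
```

A positive NPV at the customer's discount rate, an IRR above that rate, and a payback period shorter than the planning horizon are the signals that the FICON investment clears the financial hurdle.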

In addition to the four ROI metrics, there is a year-by-year summary of each cost reduction

and/or increase in revenues or profits the customer could expect from implementing a FICON

environment. Both the cost reductions and revenue enhancements include a “Custom benefit”


cell that allows you to directly add a potential cost reducing benefit or revenue enhancement

specific to the client/customer you are working with.

Figure 2-6

This section summarizes the Return on Investment for the current set of inputs and

assumptions. It also summarizes the year-over-year benefits and costs associated with moving

to a FICON environment.

Module 5: Benefit Detail and Impact Assumptions

In the Benefit Detail Module of the ROI scenario modeling tool you will find detailed

explanations of how each benefit is calculated. The module outlines all of the inputs that were

part of the calculations, in addition to any other assumptions that were also factored into the

calculation. See Figure 2-7 for an example of a benefit detail calculation.

Figure 2-7: Using the Benefit Detail and Impact Assumptions


Each benefit detail calculation shows specifically which inputs are included and how the

calculations are derived, with pop-up comments that show the formula for each calculation.

The Benefit Detail page is divided into four sections, corresponding to the four benefit

dimensions: Labor, IT Operating Costs, Capital Costs, and Revenue. Selecting the appropriate

benefit dimension button in the second row of buttons will show the detailed benefit calculations

associated with that benefit dimension.

To adjust the steady state impact assumption, click on the assumption number in the blue box on

the left side of the table. That will take you to the assumptions page, where you can modify the


assumption. Once you are done making the adjustment to the assumption value, you can click on

the assumption number on the assumption page to return to the benefit detail page.

In addition to adjusting the steady state impact assumption, you can adjust the amount of the

benefit that will be achieved in each of the first three years. This is accomplished by changing

the annual adjustment number for the desired year using the spinner controls to the left of the

factor. These adjustment factors scale the amount of benefit to be achieved in each year and can

be adjusted from 0% to 100%.
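The way a steady-state impact assumption and the three annual adjustment factors might combine can be sketched as follows; the field names and figures are illustrative assumptions, not the spreadsheet's actual cell logic.

```python
# Sketch of combining a steady-state impact assumption with the per-year
# adjustment factors (0%-100%). Names and numbers are illustrative only.

def adjusted_benefit(current_cost, impact_pct, annual_adjustment_pcts):
    """Scale the steady-state benefit by each year's ramp-up factor."""
    steady_state = current_cost * impact_pct / 100.0
    return [round(steady_state * adj / 100.0, 2) for adj in annual_adjustment_pcts]

# e.g. a $200K annual cost pool, a 40% impact assumption, phased in 50/75/100%.
print(adjusted_benefit(200_000, 40, [50, 75, 100]))
```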

These calculations are all summarized on the ROI Summary page and will automatically be

updated in that sheet as they are adjusted in this module. Figure 2-7 shows an example: you can

see in this diagram how each benefit totals up the current costs and adjusts them by the impact

assumption to calculate the benefit for each year.

On the Benefit Detail screen, each benefit has a detailed description of how it is

calculated.

When you’ve adjusted each of the impact assumptions to match your customer’s expectation of

the magnitude of each benefit to be achieved you can go back to the ROI summary page and

review the impact of any changes in the total benefits on the key financial metrics.

Module 6: Assumptions

The Impact Assumptions help you to fine-tune the realized savings according to your customer’s

business constraints and comfort level and can be adjusted for each year of the three year

analysis. In this section, you will want to have a very interactive conversation with your client to

tailor each benefit to their expectations. All of these numbers are directly tied to the Benefit

Detail page and will automatically be updated in that sheet as they are adjusted in this module.


At the bottom of the Assumptions page is a section entitled “Global Adjustment Factors.” Figure

2-8 shows this section and how to adjust them. Please note the productivity adjustment factor.

You and the client/customer can adjust this downwards if you think the labor hours freed up

through implementation of a FICON environment won’t completely go to another productive use

for the organization or will not be eliminated from a headcount budget.

Figure 2-8

Determine the percentage (you could think of this as a probability as well) of the currently

estimated benefits you and your customer expect, given factors that you may not be able to

foresee or control at this time. Use the up or down controls on the spinner button to adjust the

percentage.

Module 7: Printed Reports

In the Reports module you will find a series of detailed

reports that summarize the cost savings and adjusted revenue gains projected from migrating to

the new FICON environment. These reports can be left behind with your clients and should help

them provide support in discussing the benefits of a Brocade FICON environment with their

executive team.

By clicking on the Print Preview button it is possible to print the reports out by section.

When you click on “Print Preview” on the Reports screen (see Figure 2-9) you will need to select

the sections of the report page you wish to print. When you’ve selected the appropriate section

and click “OK”, the Print Preview pane will open, as shown in Figure 2-10. When you’ve

previewed exactly how the report will look when printed on your default printer, select Print. If


you wish to cancel, click Close. And, if you need to make adjustments to your print job, select

Setup.

Figure 2-9: Select which report option you'd like to print and press "OK" to go to the "Print
Preview" screen.

If you need to make adjustments to the default printer, contact your network administrator or an
experienced Excel user to help you do so using Excel's main menu functions.


Figure 2-10: From the Print Preview pane, click Print to send your print job to the printer. If
you wish to cancel, click Close.

Saving Your Scenario

If you wish to save your spreadsheet, you can perform the following functions:

• From the report page you can press the “Save” button to save your work. If you wish

to create different scenario models for a single client, you can save each scenario

model to a different file.

• You can use the standard File menu command to save your file with a name

corresponding to your client and the specific scenario modeled by clicking on the

“Save As” command.


Section 3: Understanding the Key Financial Metrics

ROI Methodology: Overview

The objective of any information technology systems development in business is to increase the

wealth of shareholders by adding to the growth premium of their stock. Ideally, the increase

achieved should be the maximum obtainable. Maximizing shareholder wealth consists of

maximizing the value of the cash flow stream generated by operations, specifically those cash

flows that are generated by future investment in an information technology system.

Underlying the tool is a basic, fundamental economic concept called “opportunity cost.” This

concept essentially states that the real economic cost of any activity, whether it’s buying shoes at

the department store or automating the budgeting and forecasting processes of the organization,

is the cost of the next best “opportunity” that provides the same or “next best” function of the

product or service in question. Estimating the economic value of moving to a FICON

environment requires taking into account not only the price the customer will pay for the new

FICON hardware, but also the cost of continuing to operate using the existing ESCON hardware

and/or the older existing FICON hardware.

This may seem like common sense at first, but one of the first questions many of your customers

might ask is, “how much does it cost?” Using the methodology demonstrated in the Brocade ROI

scenario modeling tool, this question should really be phrased, “how much is it worth?” By

turning the conversation into one about value creation for your customers, instead of a

conversation about pricing and costs, you can be much more effective in today’s hyper-

competitive marketplace as well as prove to your customers just how valuable their relationship

with you, and Brocade, really is.



Financial Metrics

The financial metrics used in the ROI scenario modeling tool are commonplace in financial

organizations in any modern corporation. They are time-tested and well understood by CFO’s,

financial and business analysts, and are basic methods for calculating the value of business

activities and investment projects.

Discounting: The Time Value of Money

The financial metrics shown on the ROI Summary page of the Brocade tool utilize the same

concepts as those used by investment banks and consulting firms, valuation experts and financial

decision makers. Central to these metrics is the concept of the “time value of money.” This

concept essentially means that "a dollar tomorrow is less valuable than a dollar today," or, to

borrow a bit of popular folk wisdom, "a bird in the hand is worth two in the bush."

To equate the value of dollars or economic benefits received tomorrow to the value of dollars

today, financial professionals use a process called “discounting.” Future benefits are

“discounted” to make them equivalent to dollars spent or received today. Once discounted, the

“present day” equivalent value of a future cash flow or financial benefit is called “present value.”

It is not important that you fully understand the mechanics of discounting or how to calculate

present values, since the ROI scenario modeling tool will do that for you. You should be able to

interpret these metrics for a customer in terms that they can easily understand and communicate

to their organization. However, below is a description of how to calculate present values that is

central to understanding three of the four ROI key performance metrics in the Brocade FICON

Migration ROI scenario modeling tool. The better an understanding you have of these

metrics, the more convincing a case you will make.


The mathematical formula for generating the present value of a stream of "benefits" or cash

flows is as follows:

    Present Value = Σ (i = 1 to m) [ Cash Flow_i / (1 + Cost of Capital per period)^i ]

where m is the number of periods over the forecast horizon. If the cash flow in each period is

factored out, the formula becomes:

    Present Value = Σ (i = 1 to m) [ Cash Flow_i × 1 / (1 + Cost of Capital per period)^i ]

The second term, 1 / (1 + Cost of Capital per period)^i, is called the discount factor, and is the

number that converts a cash flow or benefit in period i to today's dollars. This number is always

less than one, so mathematically this fits with our description of future dollars as being less

valuable than today's dollars. Using these formulas in a simple spreadsheet, Table 3-1 shows

how to calculate the present value of a series of future cash flows over a six-month period:


Table 3-1: Example of Discounting Future Cash Flows

Cost of Capital per Year:   10.00%
Cost of Capital per Month:   0.83% (10.00% divided by 12)

Date of Cash Flow    Month #    Cash Flow    Discount Formula    Discount Factor    Present Value
January 31, 2003        1         $1,000     1/(1+.83%)^1            0.9917            $991.74
February 28, 2003       2           $500     1/(1+.83%)^2            0.9835            $491.77
March 31, 2003          3           $450     1/(1+.83%)^3            0.9754            $438.93
April 30, 2003          4           $500     1/(1+.83%)^4            0.9673            $483.67
May 31, 2003            5           $450     1/(1+.83%)^5            0.9594            $431.71
June 30, 2003           6         $1,000     1/(1+.83%)^6            0.9514            $951.43

Total Value of All Cash Flows: $3,900     Total Present Value on December 31, 2002: $3,789.25

As you can see, even though the total of all the cash flows is $3,900 over the six-month period,

the present value on December 31, 2002 is only $3,789.25. If we raise the cost of capital to, say,

12%, the present value drops to $3,767.71. You can also see that a cash flow received sooner,

such as the $450 benefit received on March 31, has a higher present value ($438.93) than an

equivalent cash flow received two months later on May 31 ($431.71).
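The discounting arithmetic behind Table 3-1 can be sketched in a few lines of Python; `present_value` is a hypothetical helper written for illustration, not part of the Brocade tool:

```python
def present_value(cash_flows, annual_rate, periods_per_year=12):
    """Discount a stream of future cash flows (one per period) back to
    today, using the per-period cost of capital, as in Table 3-1."""
    rate = annual_rate / periods_per_year  # cost of capital per period
    return sum(cf / (1 + rate) ** i
               for i, cf in enumerate(cash_flows, start=1))

flows = [1000, 500, 450, 500, 450, 1000]  # Jan-Jun cash flows from Table 3-1
pv = present_value(flows, 0.10)           # approximately $3,789.25
```

Raising `annual_rate` to 0.12 reproduces the $3,767.71 figure quoted above, confirming that a higher cost of capital shrinks the present value.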


The concept of present value is central to understanding the key ROI metrics as outlined below.

First, however, it is crucial that you understand where the cost of capital comes from, and how to

estimate a value for it.

Cost of Capital

The cost of capital is an input on the Input screen, and it is the “discount rate” at which all future

benefits are discounted to convert them into today’s dollars. In many basic business applications,

the “weighted average cost of capital” is used, and this is the weighted average of all the rates

that investors and creditors expect to get from supplying capital (funds supplied in the form of

stock purchases, loans, bond purchases, etc.) to the company.

The formula for weighted average cost of capital for a company with only debt and equity

(common stock) on its balance sheet is calculated as follows:

    Weighted Average Cost of Capital = (D / V) × kD + (E / V) × kE

Where:

V = Total market value of the company

D = Market value of the company’s debt

kD = Expected rate of return on the firm’s debt

E = Market value of the company’s equity (common stock)

kE= Expected rate of return on the firm’s equity

Remember that for this example, the total market value of the company (V) is equal to the value

of its debt (D) plus the value of its equity (E). For a company with more than these two types of


capital, such as preferred stock (which in many ways is more like debt than stock, and in other

ways is more like stock than debt, depending on the company’s legal and ownership structure),

then the weighted average cost of capital should also include the weighted rates of return for

these sources of funds as well.
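For a two-source capital structure, the formula above reduces to a one-line weighted average. A minimal Python sketch, using made-up capital-structure figures purely for illustration:

```python
def wacc(debt_value, equity_value, cost_of_debt, cost_of_equity):
    """Weighted average cost of capital for a firm financed only by
    debt (D) and common equity (E), where V = D + E."""
    total = debt_value + equity_value  # V = D + E
    return ((debt_value / total) * cost_of_debt
            + (equity_value / total) * cost_of_equity)

# Hypothetical firm: $40M of debt yielding 6%, $60M of equity expected to return 12%
rate = wacc(40e6, 60e6, 0.06, 0.12)  # 0.4 * 6% + 0.6 * 12% = 9.6%
```

Note how the heavier equity weighting pulls the result toward the (higher) cost of equity, consistent with the rule of thumb below that more debt usually means a lower cost of capital.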

The issues associated with estimating the true cost of capital for a company or a specific project

are very complex (not to mention analyzing most large company’s capital structures), and there

is more than one way to determine an appropriate cost of capital. If an estimate can’t be supplied

directly from your customer (via his or her finance department) then one of the following rules of

thumb will be sufficient for use in the ROI modeling tool.

1. Most large, publicly traded companies with relatively stable earnings and business

operations have a cost of capital that ranges from 8% to 15%, depending on how much of

the business is financed with debt (more debt usually means a lower cost of capital).

2. This number is typically higher for firms like biotech companies or high-tech startup

ventures that have riskier business models or uncertain cash flows; for such companies it

rarely falls below 15% and can reach 50% or higher.

3. Most customers will have an internally estimated cost of capital, but if this isn't available

then an estimate of between 10% and 15% should be sufficient for use with most

customers. A popular finance professor once said (somewhat jokingly) that if all else fails,

use 10%.


Net Present Value (NPV)

NPV is a specific method of calculation related to the net financial impact of a set of costs and

benefits. “Present value” is the value today of a future amount of cash invested at a specific

interest rate. For example, the present value of $110 received a year from now, assuming a 10%

interest rate, is $100. In other words, $100 today is equivalent to $110 a year from now,

provided the money can be invested successfully at 10 percent interest per year. Net present

value is defined as the present value of all future cash flows, at a given interest rate.

In other words, NPV is simply the present value of all the benefits and costs including the cost

of an initial investment or cash outlay at the beginning of a project. A positive number means the

project is a good investment—and the larger the number the more value the project creates (in

“today’s” dollars).

A mathematical formula for NPV that is typical in most college-level finance texts looks like

this:

    Net Present Value (NPV) = Initial Investment (in Period "0") +
                              Σ (i = 1 to m) [ Cash Flow_i / (1 + Cost of Capital per period)^i ]

In this formula the initial investment (and any other net cash outflows) must be depicted as a

negative number and positive numbers are considered cash inflows in this formula. A “positive

NPV” project is one in which the present value of future net cash inflows is greater than the

initial investment. A project with an initial investment of $10,000 and a present value of future

benefits of $11,000 will have an NPV of $1,000.


Note that we do not discount the initial investment (in the second part of the equation above), but

only discount the net cash flows for each period over the future life of the project. See Table 3-2

for an example of how to calculate Net Present Value, using the same example as in Table 3-1,

with a project requiring a $3,000 initial investment.

Table 3-2: Calculating Net Present Value

Cost of Capital per Year:   10.00%
Cost of Capital per Month:   0.83% (10.00% divided by 12)

Date of Cash Flow    Month #    Cash Flow    Discount Formula    Discount Factor    Present Value
December 31, 2002       0       ($3,000)     1/(1+.83%)^0            1.0000          ($3,000.00)
January 31, 2003        1         $1,000     1/(1+.83%)^1            0.9917            $991.74
February 28, 2003       2           $500     1/(1+.83%)^2            0.9835            $491.77
March 31, 2003          3           $450     1/(1+.83%)^3            0.9754            $438.93
April 30, 2003          4           $500     1/(1+.83%)^4            0.9673            $483.67
May 31, 2003            5           $450     1/(1+.83%)^5            0.9594            $431.71
June 30, 2003           6         $1,000     1/(1+.83%)^6            0.9514            $951.43

Total Value of All Cash Flows: $900       Net Present Value on December 31, 2002: $789.25


The most important benefit of using NPV as a measure of the value of a project is that every

time a firm undertakes a positive-NPV project, the value of the firm is increased by the amount

of NPV. In other words, NPV is a measure of the total value created for a firm by a given project.

Nearly all respectable financial professionals, from CFOs in large companies to Wall Street

equity analysts, use the NPV formula to measure the value of a firm as well as the projects it

undertakes; most of the individual differences in the professional application of this concept lie

in the details of their financial models and the assumptions underlying their forecasts.

Many customers with Project Management Offices (PMOs) will use NPV to decide on which

proposed projects to undertake. A cutoff value will be assigned and if a proposed project does

not meet the cutoff, that project is not undertaken.
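As an illustrative sketch (not the tool's actual implementation), the NPV formula can be computed directly in Python; the cash flows below reproduce the Table 3-2 project:

```python
def npv(rate_per_period, cash_flows):
    """Net present value of a cash flow stream; cash_flows[0] is the
    period-0 flow (the initial investment, entered as a negative number)."""
    return sum(cf / (1 + rate_per_period) ** i
               for i, cf in enumerate(cash_flows))

# Table 3-2's project: $3,000 out today, then the Jan-Jun inflows
project = [-3000, 1000, 500, 450, 500, 450, 1000]
value = npv(0.10 / 12, project)  # approximately $789.25, matching the table
```

A PMO-style cutoff rule is then a one-line comparison, e.g. `value >= cutoff`, against whatever threshold the customer's finance organization has assigned.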

Return on Investment

Return on Investment is the present value of all the cost-adjusted (or “net”) benefits divided by

the present value of all the costs of the offer. It is a ratio measure, so a value of 100% means that

the customer is “doubling” their money on the investment, in today’s dollars.

    ROI = { Σ (i = 0 to m) [ Net Cash Inflows (Total Benefits - Total Costs)_i / (1 + Cost of Capital per period)^i ]
          / Σ (i = 0 to m) [ Cash Outflows (or Costs)_i / (1 + Cost of Capital per period)^i ] } × 100%

In Table 3-3, the net cash inflows also have costs associated with them (much like the model that

underlies the Brocade FICON Migration ROI scenario modeling tool):

Table 3-3

Cost of Capital per Year:   10.00%
Cost of Capital per Month:   0.83%

                              Net Cash     Cash                                      Present     Present
Date of Cash        Month     Inflow       Outflow     Discount        Discount     Value of    Value of
Flow                          (Benefit)    (Cost)      Formula         Factor       Benefits    Costs
December 31, 2002     0           $0       $1,000      1/(1+.83%)^0    1.0000           $0      $1,000
January 31, 2003      1       $1,000         $150      1/(1+.83%)^1    0.9917         $992        $149
February 28, 2003     2         $500         $150      1/(1+.83%)^2    0.9835         $492        $148
March 31, 2003        3         $450         $150      1/(1+.83%)^3    0.9754         $439        $146
April 30, 2003        4         $500         $150      1/(1+.83%)^4    0.9673         $484        $145
May 31, 2003          5         $450         $150      1/(1+.83%)^5    0.9594         $432        $144
June 30, 2003         6       $1,000         $150      1/(1+.83%)^6    0.9514         $951        $143

Total Value of All Cash Flows:  $3,900     $1,900
Total Present Value on December 31, 2002:                                           $3,789      $1,874

In this example, ROI = ($3,789 - $1,874) / $1,874 = 1.022, or 1.022 x 100% = 102.2%.


If you choose not to multiply this ratio by 100%, you would have an analogous measure that

some finance professionals call the “profitability index” or the “return ratio”. Note also that in

this formula, the initial period i = 0, which indicates that we also take into account the initial

period investment.
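The ROI arithmetic above can be sketched in Python; the cash flows below are Table 3-3's, and the helper is hypothetical rather than the Brocade tool's actual code:

```python
def roi_percent(rate, benefits, costs):
    """ROI as defined above: the present value of net benefits divided by
    the present value of costs, with period 0 included in both sums."""
    pv = lambda flows: sum(f / (1 + rate) ** i for i, f in enumerate(flows))
    pv_benefits, pv_costs = pv(benefits), pv(costs)
    return (pv_benefits - pv_costs) / pv_costs * 100

benefits = [0, 1000, 500, 450, 500, 450, 1000]   # net cash inflows, Table 3-3
costs    = [1000, 150, 150, 150, 150, 150, 150]  # initial outlay + monthly costs
result = roi_percent(0.10 / 12, benefits, costs)  # approximately 102.2%
```

Dropping the `* 100` yields the "profitability index" (return ratio) form mentioned above.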

Internal Rate of Return (IRR)

Internal Rate of Return is a bit more complex to explain than the other metrics. IRR is defined as

the rate of return that equates the positive cash flows from savings with the negative

cash flow created by the investment itself. Stated another way, IRR is the discount rate at which

the cash inflows are exactly equal to the cash outflows. Stated in financial terms, IRR is the

interest rate at which the present value of cash inflows equals the present value of cash outflows,

i.e., the combined discounted cash flow (DCF) equals zero.

Essentially, IRR is the discount rate that would make the future benefits equal to the initial

investment in today’s dollars. An example of IRR in practice is the “Yield to Maturity” for

publicly traded bonds. This “yield” is the internal rate of return that sets the interest and final

principal payment on the bond equal to its current market price. For an investment project, it is

the solution for Cost of Capital per period in the following equation:

    Initial Investment (in Period "0") = Σ (i = 1 to m) [ Cash Flow_i / (1 + Cost of Capital per period)^i ]

For more than one period in the future, there can be as many solutions to this equation as there

are periods (the equation is a polynomial of degree m in the discount factor, so it can have up to m roots),

so professionals almost always use a financial calculator or a computer to solve for the IRR in

the above formula. For this reason, IRR is at best a measure that should be used in context with

other measures like NPV, and at worst it can be unreliable as a decision rule.


Since the initial investment needed to implement a FICON solution is quite low relative to the

net benefits that can be realized, this number, expressed as a percentage, will usually be very

high. This is a standard way some firms measure the return on a project, so it is included, but you

should be careful to explain to your customers that this number can be misleading if taken out of

context from the other key financial metrics.
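Because the equation above generally has no closed-form solution, IRR is found numerically. A simple bisection-search sketch in Python (an illustrative method under the assumption of a single sign change; not necessarily what financial calculators or the Brocade tool use):

```python
def irr(cash_flows, lo=-0.99, hi=10.0, iterations=200):
    """Per-period IRR found by bisection: the rate at which the NPV of
    cash_flows (period-0 flow first) equals zero. Assumes the NPV
    changes sign exactly once between lo and hi."""
    npv = lambda r: sum(cf / (1 + r) ** i for i, cf in enumerate(cash_flows))
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:  # root lies in [lo, mid]
            hi = mid
        else:                        # root lies in [mid, hi]
            lo = mid
    return (lo + hi) / 2

# Table 3-2's project again: the monthly rate that drives its NPV to zero
monthly_rate = irr([-3000, 1000, 500, 450, 500, 450, 1000])
```

When the cash flow stream changes sign more than once, several rates can satisfy the equation, which is exactly why IRR should be read alongside NPV rather than on its own.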

Payback Period

Payback period is different from the other financial metrics in that it is not a measure of benefits

and costs, but is expressed in the ROI scenario modeling tool in months. Payback period is

defined as the period of time needed to recover the investment for the option being evaluated.

The tool reports payback period as the number of months that it takes for your customers to

recoup their initial setup fees and any other initial costs associated with each FICON project.

You may have heard the term "breakeven point"; payback period is analogous to that measure of the

return on a project. There are various ways to calculate it, but for the following example,

where each monthly cash inflow/benefit is the same ($12,500), for a $50,000 investment the

payback period is:

Month/Period    Investment    Net Benefit per Month    Cumulative Net Benefit
     0           $50,000               $0                       $0
     1                             $12,500                  $12,500
     2                             $12,500                  $25,000
     3                             $12,500                  $37,500
     4                             $12,500                  $50,000
     5                             $12,500                  $62,500
     6                             $12,500                  $75,000

In this example, payback period equals the initial investment divided by the monthly net benefit,

or 4 months. For cash flows that are the same each period, this simple formula will give you the

payback period:

    Payback Period = Initial Investment / Net Cash Flow per Period

For uneven cash flows or benefits, we must iteratively solve for the payback period until we find

the point in time (in this case in months) where the total cumulative net benefits are equal to the

initial investment. The weakness of payback period is that it doesn’t tell you how much value a

project or an investment creates in today’s dollar terms—for these purposes NPV and ROI are

much better measures of a project’s economic value.
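The iterative search for uneven benefit streams can be sketched as follows (a hypothetical helper, not the tool's code):

```python
def payback_period_months(initial_investment, monthly_net_benefits):
    """Number of whole months until cumulative net benefits first cover
    the initial investment; None if never recovered over the horizon.
    Works for uneven benefit streams as well as level ones."""
    cumulative = 0.0
    for month, benefit in enumerate(monthly_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_investment:
            return month
    return None

months = payback_period_months(50_000, [12_500] * 6)  # returns 4, as in the example
```

Note that the function deliberately ignores discounting, which is exactly the weakness described above: it says when the money comes back, not what the project is worth.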

Section 4: Example Scenario

We will now go through a complete example scenario for the fictional Dunder-Mifflin paper

company. Our customer contact is Michael Scott, and our Brocade rep. is Dwight Schrute. This

complete example has been done with the tool as the “Example Scenario”. A screenshot showing

this listed on the ROI tool home page is shown as Figure 4-1 below.

Figure 4-1

For our scenario, let’s assume Dunder-Mifflin is planning a major technology refresh of its

primary data center, and is also building a disaster recovery data center 40 miles away. Dunder


Mifflin is undertaking the refresh of its production data center first. Dunder Mifflin has 5 older

mainframes consisting of 4 z900s and 1 z990. They have been running all of their processors

with predominantly ESCON channels, although they have done some testing of FICON via

direct attachment from the z990 to a DASD array. They have 6 older ESCON-attached DASD

arrays consisting of two EMC 8830 Symmetrix arrays and four IBM ESS F20 arrays. They also

have 2 ESCON-attached IBM VTS subsystems as well as two stand-alone ESCON-attached 3494

automated tape libraries. They have eight 9032-5 ESCON directors on the floor today.

Altogether, there are about 1000 ESCON channels coming off the 5 host processors connected

to the ESCON directors, 180 ESCON director ports connected to DASD, and another 120

ESCON connections from the ESCON directors to tape/virtual tape and CTC.

This data, along with the associated data for labor, maintenance, etc., has been added to the

input page in the sample scenario in the tool and is shown in Figure 2-12.

They engaged Brocade Professional Services for an ESCON to FICON migration assessment

and have decided to migrate off of the older technology devices and to implement a 4 Gbps

FICON infrastructure consisting of 2 System z9 processors, 2 IBM DS8300 DASD arrays, a

single VTS, and 4 Brocade 48000 FICON directors. They decided to lease the new technology.

They also engaged Brocade PS for a cabling engagement, as well as implementation. This data

is shown in Figures 2-13 and 2-14. Since the new hardware is not under a maintenance

contract, its maintenance cost is assumed to be zero.

In our example, we assume that the time to start fully realizing the benefits of this project is 6

months; in other words, it would take 6 months to complete and implement the new production

FICON environment. This can be seen in the example’s assumptions page, at the bottom under

“Global adjustment factors”.


The example scenario demonstrated a 17-month payback period.


Appendix B

ABC Company ESCON VTS activity

SMF
94 VTS VTS VTS
DAILY PHYSICAL DRIVE ACTIVITY FIELD B2157 B2263 B2293 TOTAL
Maximum physical drives mounted.............. vtx 12 12 12 36
Average physical drives mounted.............. vtv 6.1 6.3 6.6 19
Maximum time to mount physical drive (sec)... vmx 1359 1933 1820 1704
Average time to mount physical drive (sec)... vmv 66.8 88.5 81.9 79.1
Physical mounts for recall.................. vps 3710 2994 3903 10607
Physical mounts for copy.................... vpm 4144 4102 3923 12169
Physical mounts for reclaim................. vpr 1007 983 894 2884
Total physical mounts for vts................ ... 8861 8079 8720 25660
Total physical mounts for native............. ... na na na
Virtual volumes premigrated.................. vmp 61990 70224 68572 200786
DAILY VIRTUAL DRIVE ACTIVITY
Maximum virtual volumes ready............... vdx 64 58 64 186
Average virtual volumes ready............... vda 8 8 11 27
Maximum virtual mount time (sec)............. vrx 1484 2073 1978 1845
Average virtual mount time (sec)............. vra 7 5.9 7.2 6.7
Total virtual mounts......................... ... 95526 104939 111373 311838
Fast ready - Number of mounts................ vfr 48653 54932 54004 157589
- Maximum mount time (sec)........ maxfr 58 94 137 96.3
- Minimum mount time (sec)........ minfr 1 1 1 1
- Average mount time (sec)........ avgfr 1 0 0 0.3
Cache hits - Number of mounts................ vmh 43002 47124 53675 143801
- Maximum mount time (sec)........ maxch 315 641 539 498.3
- Minimum mount time (sec)........ minch 1 1 1 1
- Average mount time (sec)........ avgch 0 0 0 0
Recalls - Number of mounts................ vms 3871 2883 3694 10448
- Maximum mount time (sec)........ maxrm 1484 2073 1978 1845
- Minimum mount time (sec)........ minrm 5 2 6 4.3
- Average mount time (sec)........ avgrm 142 155 168 155
Percentage of mounts not satisfied in cache.. ... 4 2.7 3.3 3.3
Virtual volume residency (sec) - Maximum..... vvx 56091 65535 65535 62387
Virtual volume residency (sec) - Minimum..... vvn 1 1 1 1
Virtual volume residency (sec) - Average..... vva 305.3 297.1 366.7 323.0
DAILY TAPE VOLUME CACHE ACTIVITY
Megabytes written to cache................... vbw 36583739 40693300 40250914 117527953
Megabytes read from cache.................... vbr 23223433 22577083 21668723 67469239
Maximum channel megabytes per second......... ... 134.96 100.47 98.56 333.99
Average channel megabytes per second......... ... 23.66 25.03 24.6 73.29


Ratio: max to avg channel megabytes per sec.. ... 5.7 4.01 4 4.57
Megabytes written to physical 3590.......... vtw 13694769 15815770 15627113 45137652
Megabytes read from physical 3590........... vtr 1444777 1043581 1597191 4085549
Maximum hh:mm virt volume in cache........... vca 72:59:00 2.722916667 2.536111111 2.8
Maximum hh:mm of virt vol in cache .......... mtvca 762:04:00 17.13611111 17.55625 22.1
Average virt vol size in TVC (mb)............ vcz 263 267 266 265.3
Average size all virt vols (mb)............ vba/vla 257 264 263 261.3
Maximum number of virt vols in cache ....... vnm 6121 5713 5582 17416
Virtual volumes premigrated.................. vmp 61990 70224 68572 200786
Maximum # of virt vols managed by vts........ vla 129144 104338 101937 335419
Maximum private stacked volume count......... prict 2463 1746 1592 5801
Minimum scratch stacked volume count......... srtct 175 55 30 260
Maximum active data on stacked cartridges(gb) vba 33242 27585 26880 87707
Minimum available stacked capacity (gb)...... vec 3773 1863 1041 6677
DAILY PERFORMANCE INDICATORS (*=hourly maximums)
Percent of <= 2k blocks written.............. 0kb 100* 100* 98*
Percent of <= 4k blocks written.............. 2kb 99* 67* 28*
Percent of <= 8k blocks written.............. 4kb 100* 98* 92*
Percent of <=16k blocks written.............. 8kb 96* 76* 85*
Percent of <=32k blocks written.............. 16kb 100* 100* 100*
Percent of <=64k blocks written.............. 32kb 98* 100* 96*
Percent of > 64k blocks written.............. 64kb 0* 0* 0*
Total number of blocks written............... ... 83410K 79826K 67189K
Average blocksize written.................... ... 47000* 48000* 46000*
Compression ratio into cache................. harat 7.75* 7.93* 7.78*
Compression ratio to 3590.................... bsrat 1.07* 1.21* 1.28*
............................................. ... * * *
Maximum recall throttling percentage......... rcprt 0* 0* 0*
Maximum write throttling percentage.......... wrovt 0* 0* 0*
Average recall throttling value (msec)....... avrct 0* 0* 0*
Average write throttling value (msec)........ avwot 500* 0* 0*
Overall throttling value (msec).............. totat 4* 0* 0*
DAILY ACTIVE DATA & RECLAMATION (*=last hour of day)
Cartridges containing <= 05% of active data.. adv05 0* 1* 0*
Cartridges containing <= 10% of active data.. adv10 0* 0* 0*
Cartridges containing <= 15% of active data.. adv15 0* 0* 1*
Cartridges containing <= 20% of active data.. adv20 0* 0* 0*
Cartridges containing <= 25% of active data.. adv25 0* 0* 0*
Cartridges containing <= 30% of active data.. adv30 0* 1* 0*
Cartridges containing <= 35% of active data.. adv35 0* 2* 0*
Cartridges containing <= 40% of active data.. adv40 0* 10* 8*


Cartridges containing <= 45% of active data.. adv45 380* 232* 199*
Cartridges containing <= 50% of active data.. adv50 327* 237* 216*
Cartridges containing <= 55% of active data.. adv55 291* 208* 192*
Cartridges containing <= 60% of active data.. adv60 287* 158* 165*
Cartridges containing <= 65% of active data.. adv65 195* 120* 117*
Cartridges containing <= 70% of active data.. adv70 166* 119* 82*
Cartridges containing <= 75% of active data.. adv75 121* 103* 83*
Cartridges containing <= 80% of active data.. adv80 122* 85* 95*
Cartridges containing <= 85% of active data.. adv85 111* 86* 75*
Cartridges containing <= 90% of active data.. adv90 93* 88* 65*
Cartridges containing <= 95% of active data.. adv95 107* 77* 98*
Cartridges containing <=100% of active data.. adv00 214* 186* 167*
Total active cartridges...................... ... 2414* 1713* 1563*
Percentage of active data.................... ... 63* 64* 65*
Reclaim threshold percentage................. thres 40* 45* 45*
Number of cartridges that can be reclaimed... ... 0* 136* 115*


Appendix C

ABC Company ESCON/FICON model results

TARGET DEFINITION ESCON FICON


VTS type B20 B20
Initial VTS Estimate 2 1
(based on thruput)
TVC Size 1728 1728
Drive Type 3590E 3590E
Media J J
Max. Physical Drives 12 12
Max. Virtual Drives 128 128
Dual Copy No No
Volume Pooling No No
Pref group 0 list list
Channel Count 16 8
Channel Compressed Yes Yes
Channel Fast Yes Yes
GPFS Yes Yes
ESCON Channel Count 16 0
FICON Channel Count 0 8
FICON Accelerator No Yes
Virtual Controllers . .
Virtual Compression . .
Operations Mode . .
Copy Mode . .
Logical Volume Size 800 800
Logical Volume Max. 250000 250000
BUFNO parameter 5 5
Compression Ratio 2.5 2.5
Skew Factor 1.1 1.1
Sustain. Hit Thr. 109.26 297.51
Sustain. Miss Thr. 68.8 68.8
Sustain. Write Thr. 115.85 259.86
Sus Wrt Thr/100% Cpy . .
Peak Write Thr. 142.19 259.86
Used Write Thr. 115.85 259.86

MODELING RESULTS
Hit % 91.5 88.7
Read Hit % 83.3 77.6
Min. TVC Res. (hr) 72.6 32.8
Avg. TVC Res. (hr) 98.3 54

Max. TVC Res. (hr) 118.6 89
Logical Volume Count 183394 183394
Max. Physical Drives 16 14
Max. Virtual Drives 79 79
Peak Phys Mnt Rte 169 169
Peak Log Mnt Rte 709 709
Peak Thr. (% of VTS) 150 59.9
Req. VTS systems 2 2
Min. 3494s required 1 1
Avg. Block Size 28660 28660

TARGET DEFINITION
VTS type B20 B20
Initial VTS Estimate 2 1
(based on thruput)
TVC Size 1728 1728
Drive Type 3590E 3590E
Media J J
Max. Physical Drives 12 12
Max. Virtual Drives 128 128
Dual Copy No No
Volume Pooling No No
Pref group 0 list list
Channel Count 16 8
Channel Compressed Yes Yes
Channel Fast Yes Yes
GPFS Yes Yes
ESCON Channel Count 16 0
FICON Channel Count 0 8
FICON Accelerator No Yes
Virtual Controllers . .
Virtual Compression . .
Operations Mode . .
Copy Mode . .
Logical Volume Size 800 800
Logical Volume Max. 250000 250000
BUFNO parameter 5 5
Compression Ratio 2.5 2.5
Skew Factor 1.1 1.1
Sustain. Hit Thr. 109.26 297.51
Sustain. Miss Thr. 68.8 68.8
Sustain. Write Thr. 115.85 259.86

Sus Wrt Thr/100% Cpy . .
Peak Write Thr. 142.19 259.86
Used Write Thr. 115.85 259.86

VTS MODELING RESULTS


Performance
Hit % 91.5 88.7
Read Hit % 83.3 77.6
Min. TVC Res. (hr) 72.6 32.8
Avg. TVC Res. (hr) 98.3 54
Max. TVC Res. (hr) 118.6 89
Avg. Virtual Mount Time 10.5 14
Avg. Spec Virt Mount Time 18.9 25.8
Mounts
Projected Logical Scratch 48277 48277
Projected Physical Scratch 879 879
Projected Logical Specific 49238 49238
Projected Physical Specific 7050 9940
Projected Read Miss Mounts 7050 9940
Projected Read Hit Mounts 42188 39298
Max. Hourly Read Miss Mounts 167 167
Back-end Drives
Max. Read Drives 10.1 10.1
Max. Write Drives 13 13
Max. Copy Drives . .
Max. Physical Drives 16 14
Capacity
Logical Volume Count 183394 183394
Physical Volume Count 5367 5367
Storage Capacity in GB 134175 134175
Max. Virtual Drives 79 79
Throughput
Peak Total Throughput MB/sec 162.8 162.8
Peak Write Throughput MB/sec 148.2 148.2
Peak Read Throughput MB/sec 92.1 92.1
Peak Thr. (% of VTS) 150 59.9
Req. VTS systems 2 2
Min. 3494s required 1 1
Avg. Block Size 28660 28660

GLOBAL INFORMATION
Total Specific Mounts 49256 49256

Total Scratch Mounts 48277 48277
Total Mounts 97532 97532
Total GB Read 22016 22016
Total GB Written 43489 43489
Total GB 65505 65505
Specific Mounts % 50.5 50.5
Scratch Mounts % 49.5 49.5
GB Read % 33.6 33.6
GB Written % 66.4 66.4
Avg. Vol Size MB(maxda) 773.4 773.4
Total # of Cartridges 183589 183589
Total # of Carts(maxda) 103891 103891

DAILY INFORMATION
Avg. Specific Mounts 1649 1649
Avg. Scratch Mounts 1614 1614
Avg. Mounts 3263 3263
Avg. GB Read 735 735
Avg. GB Written 1456 1456
Avg. GB 2191 2191
Peak Mounts 4679 4679
Peak Mounts Day 37881 37881
Peak GB 3684 3684
Peak GB Day 37868 37868
Peak Read GB 1593 1593
Peak Read GB Day 37875 37875

HOURLY INFORMATION
Peak Mounts 709 709
Peak Throughput MB/s 162.8 162.8
Peak Avg. Drives 41.2 41.2
Peak Concur. Drives 79 79


Appendix D

ABC Company Native Tape Statistics

Statistics by recording technology

Recording Technology -> All 3420 3480 3490E 3590B 3590E 3590H

GLOBAL INFORMATION
Total Specific Mounts 207923 0 456 148185 32 543 58708
Total Scratch Mounts 269262 0 276 267156 0 0 1830
Total Mounts 477186 0 732 415341 32 543 60538
Total GB Read 76602 0 85 56603 24 1490 18399
Total GB Written 292282 0 35 238118 53 0 54077
Total GB 368883 0 119 294721 78 1490 72476
Specific Mounts % 43.6 0 62.3 35.7 100 100 97
Scratch Mounts % 56.4 0 37.7 64.3 0 0 3
GB Read % 20.8 0 71 19.2 31.5 100 25.4
GB Written % 79.2 0 29 80.8 68.5 0 74.6
Avg. Vol Size MB(maxda) 1834.8 . 238.8 767.1 . 90602.3 99354.3
Total # of Cartridges 628496 0 73510 552400 13 365 2208
Total # of Carts(maxda) 201971 0 1307 198459 0 118 2087

DAILY INFORMATION
Avg. Specific Mounts 2285 . 5 1628 0 6 645
Avg. Scratch Mounts 2959 . 3 2936 0 0 20
Avg. Mounts 5243 . 8 4564 0 6 665
Avg. GB Read 842 . 1 622 0 16 202
Avg. GB Written 3212 . 0 2616 1 0 594
Avg. GB 4053 . 1 3238 1 16 796
Peak Mounts 18110 . 250 17044 10 82 3933
Peak Mounts Day 9/9/2003 . 9/5/2003 9/13/2003 9/3/2003 9/9/2003 9/2/2003
Peak GB 17148 . 41 14897 24 287 3562
Peak GB Day 9/20/2003 . 9/5/2003 9/20/2003 9/11/2003 9/15/2003 9/26/2003
Peak Read GB 3697 . 40 2945 24 287 1822
Peak Read GB Day 9/20/2003 . 9/5/2003 9/20/2003 9/11/2003 9/15/2003 9/9/2003

HOURLY INFORMATION
Peak Mounts 2314 . 65 2299 8 20 349
Peak Throughput MB/s 402.5 . 2.4 381.3 6.8 9.8 100.3
Peak Avg. Drives 140 . 3.2 124.7 0 2.2 34.4
Peak Concur. Drives . 0 6 159 0 7 42


Statistics by workload

Workload -> All CURR3480 HSMCPY HSMML2 HSMBAK INFRABKP DEVBKUP DB2BKUP CORPBKUP OTHER

GLOBAL INFORMATION
Total Specific Mounts 207922 456 0 58620 311 1649 782 52 0 146053
Total Scratch Mounts 269262 276 0 419 25 11689 78937 26619 6 151292
Total Mounts 477186 732 0 59039 336 13338 79719 26671 6 297345
Total GB Read 76602 85 0 11419 474 1370 121 0 0 63133
Total GB Written 292283 35 0 27887 897 32484 99678 24915 7 106380
Total GB 368884 119 0 39306 1370 33854 99800 24915 7 169512
Specific Mounts % 43.6 62.3 0 99.3 92.6 12.4 1 0.2 0 49.1
Scratch Mounts % 56.4 37.7 0 0.7 7.4 87.6 99 99.8 100 50.9
GB Read % 20.8 71 0 29.1 34.6 4 0.1 0 0 37.2
GB Written % 79.2 29 0 70.9 65.4 96 99.9 100 100 62.8
Avg. Vol Size MB(maxda) 1834.8 238.8 . 129579.1 41929.2 3394 723.2 978.7 157.5 772.7
Total # of Cartridges 628496 73510 3 1732 96 2890 186771 20819 23 342652
Total # of Carts(maxda) 201971 1307 0 1582 64 2760 49564 19585 5 127104

DAILY INFORMATION
Avg. Specific Mounts 2285 5 . 644 3 18 9 1 0 1605
Avg. Scratch Mounts 2959 3 . 5 0 128 867 292 0 1662
Avg. Mounts 5243 8 . 649 4 147 876 293 0 3267
Avg. GB Read 842 1 . 125 5 15 1 0 0 694
Avg. GB Written 3212 0 . 306 10 357 1095 274 0 1169
Avg. GB 4053 1 . 432 15 372 1097 274 0 1863
Peak Mounts 18110 250 . 3871 21 598 4268 1337 4 11140
Peak Mounts Day 9/9/2003 9/5/2003 . 9/2/2003 9/23/2003 9/9/2003 9/13/2003 9/27/2003 7/1/2003 9/13/2003
Peak GB 17148 41 . 2106 150 1330 5528 1240 7 7973
Peak GB Day 9/20/2003 9/5/2003 . 9/26/2003 . 9/17/2003 9/6/2003 9/27/2003 7/1/2003 9/20/2003
Peak Read GB 3697 40 . 977 69 246 33 0 0 3356
Peak Read GB Day 9/20/2003 9/5/2003 . 9/24/2003 9/15/2003 8/31/2003 9/9/2003 8/26/2003 9/6/2003 9/20/2003

HOURLY INFORMATION
Peak Mounts 2314 65 . 365 10 107 548 259 3 2232
Peak Throughput MB/s 402.5 2.4 . 79.7 9.6 54 229.1 67.7 1.5 269.2
Peak Avg. Drives 140 3.2 . 32.7 9 14 52.3 26.3 0.3 92.4
Peak Concur. Drives . 6 0 41 11 27 56 38 2 132

Distribution of hourly required drives

Concurrent Drive Range   3480 +% %   3590H +% %   3490-V +% %
3 2174 99.5 99.5 1453 66.5 66.5 625 28.6 28.6
7 10 100 0.5 62 69.4 2.8 494 51.2 22.6
11 0 100 0 53 71.8 2.4 210 60.9 9.6
15 0 100 0 64 74.7 2.9 141 67.3 6.5
19 0 100 0 79 78.3 3.6 101 71.9 4.6
23 0 100 0 93 82.6 4.3 58 74.6 2.7
27 0 100 0 93 86.9 4.3 56 77.2 2.6
31 0 100 0 86 90.8 3.9 43 79.1 2
35 0 100 0 78 94.4 3.6 47 81.3 2.2
39 0 100 0 67 97.4 3.1 33 82.8 1.5
43 0 100 0 21 98.4 1 25 83.9 1.1
47 0 100 0 23 99.5 1.1 42 85.9 1.9
51 0 100 0 9 99.9 0.4 52 88.2 2.4
55 0 100 0 3 100 0.1 38 90 1.7
59 0 100 0 0 100 0 39 91.8 1.8
63 0 100 0 0 100 0 28 93 1.3
67 0 100 0 0 100 0 28 94.3 1.3
71 0 100 0 0 100 0 22 95.3 1
75 0 100 0 0 100 0 11 95.8 0.5
79 0 100 0 0 100 0 15 96.5 0.7
83 0 100 0 0 100 0 5 96.7 0.2
87 0 100 0 0 100 0 3 96.9 0.1
91 0 100 0 0 100 0 4 97.1 0.2
95 0 100 0 0 100 0 7 97.4 0.3
99 0 100 0 0 100 0 4 97.6 0.2
103 0 100 0 0 100 0 2 97.7 0.1
107 0 100 0 0 100 0 8 98 0.4
111 0 100 0 0 100 0 9 98.4 0.4
115 0 100 0 0 100 0 5 98.7 0.2
119 0 100 0 0 100 0 10 99.1 0.5

123 0 100 0 0 100 0 5 99.4 0.2
127 0 100 0 0 100 0 5 99.6 0.2
131 0 100 0 0 100 0 2 99.7 0.1
135 0 100 0 0 100 0 0 99.7 0
139 0 100 0 0 100 0 4 99.9 0.2
143 0 100 0 0 100 0 1 99.9 0
147 0 100 0 0 100 0 1 100 0
151 0 100 0 0 100 0 0 100 0
155 0 100 0 0 100 0 1 100 0
159 0 100 0 0 100 0 0 100 0
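The "%" and cumulative "+%" columns in the distribution above can be recomputed from the hourly counts. The sketch below is illustrative only: the counts are the first five 3490-V rows of the table, and the 2184-hour total is the sum of the full 3490-V count column. It reproduces the tabulated percentages to one decimal place.

```python
# Recompute the "%" and cumulative "+%" columns of the concurrent-drive
# distribution from hourly counts (3490-V column, first five ranges).
ranges = [3, 7, 11, 15, 19]          # upper bound of each concurrent-drive range
counts = [625, 494, 210, 141, 101]   # hours observed in each range (3490-V)
total_hours = 2184                   # sum of the full 3490-V count column

rows = []
cumulative = 0.0
for rng, hours in zip(ranges, counts):
    pct = 100.0 * hours / total_hours    # share of hours in this range
    cumulative += pct                    # running "+%" column
    rows.append((rng, round(pct, 1), round(cumulative, 1)))
```

The first row comes out as (3, 28.6, 28.6), matching the table.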


Appendix E

XYZ Company Storage Subsystem Growth

DMX 0126 Growth Table


The following table estimates the growth of the DMX subsystem that will contain all of the

SRDF replicated volumes.

EM0126 (SRDF)   I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
Current 9061 10780 0.841 16 2.3 19
Mid 2005 12685 12936 0.981 16 2.4 27
Mid 2006 17760 15524 1.144 16 2.6 38
Mid 2007 24863 18629 1.335 16 2.8 53

DMX 3000 Growth Table


The following table estimates the growth of the DMX subsystem referenced in configuration

option #1, containing all of the non-SRDF replicated volumes.

DMX3000   I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
Current 6862 13081 0.525 16 2.1 18
Mid 2005 9607 15697 0.612 16 2.2 25
Mid 2006 13450 18836 0.714 16 2.3 35
Mid 2007 18829 22603 0.833 16 2.4 50

DMX 3000 #1 Growth Table


The following table estimates the growth of one of the DMX subsystems referenced in

configuration option #2, containing half of the non-SRDF replicated volumes.


DMX3000 #1   I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
Current 3493 6542 0.534 8 1.8 17
Mid 2005 4890 7850 0.623 8 2 23
Mid 2006 6846 9420 0.727 8 2.1 33
Mid 2007 9585 11304 0.848 8 2.3 46

DMX 3000 #2 Growth Table


The following table estimates the growth of one of the DMX subsystems referenced in

configuration option #2, containing half of the non-SRDF replicated volumes.

DMX3000 #2   I/O Rate (Sum)   Usable GB   I/O Intensity   FICON Channels (Express 2Gb)   FICON Resp (ms)   FICON Chan Util
Current 3369 6539 0.515 8 2 20
Mid 2005 4717 7847 0.601 8 2.1 27
Mid 2006 6603 9416 0.701 8 2.2 38
Mid 2007 9245 11299 0.818 8 2.3 54
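The arithmetic behind these projections can be reproduced: in every row, I/O intensity equals the I/O rate divided by usable GB, and the successive rows are consistent with roughly 40% annual I/O-rate growth and 20% annual capacity growth. The sketch below is illustrative; the growth factors are inferred from the table rows, not stated in the source, and it reproduces the EM0126 (SRDF) table to within rounding.

```python
# Reproduce the EM0126 (SRDF) growth rows. Growth factors are assumptions
# inferred from the tables (~40%/yr I/O rate, ~20%/yr capacity).
io_rate, usable_gb = 9061.0, 10780.0   # "Current" row of the SRDF table
IO_GROWTH, GB_GROWTH = 1.40, 1.20      # assumed annual growth factors

projection = []
for _ in range(4):                     # Current, Mid 2005, Mid 2006, Mid 2007
    # I/O intensity = I/O rate per usable GB, as in the tables
    projection.append((round(io_rate), round(usable_gb),
                       round(io_rate / usable_gb, 3)))
    io_rate *= IO_GROWTH
    usable_gb *= GB_GROWTH
```

The projected Mid 2005 row comes out as (12685, 12936, 0.981), matching the table.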


Appendix F

XYZ Company VTS statistics

VTS VTS VTS VTS


Library Name LIB2 LIB3 LIB4 LIB5 Total/
VTS Serial Number 60275 60312 70371 60641 Average
VTS Model B18 B18 B18 B18
Number of physical drives 6 6 6 6 24
Number of logical drives 64 64 64 64 256
Amount of cache (GB) 288 288 288 288 1152
Number of channels 4 ESCON 4 ESCON 4 ESCON 4 ESCON 16 ESCON
DAILY PHYSICAL DRIVE ACTIVITY
Maximum physical drives mounted 6 6 6 5 23
Average physical drives mounted 4.5 4.5 4.7 3.8 17.5
Maximum time to mount physical drive (sec) 1564 795 449 854 915.5
Average time to mount physical drive (sec) 59.2 63.4 62.4 63.1 62.025
Physical mounts for recall 2750 2682 3056 3112 11600
Physical mounts for copy 1849 1899 1938 1534 7220
Physical mounts for reclaim 0 0 0 0 0
Total physical mounts for vts 4599 4581 4994 4646 18820
Total physical mounts for native 3210 3931 3375 2886 13402
Virtual volumes premigrated 51544 54306 55124 54052 215026
DAILY VIRTUAL DRIVE ACTIVITY
Maximum virtual volumes ready 57 61 57 57 58
Average virtual volumes ready 15 19 17 16 16.75
Maximum virtual mount time (sec) 1904 6277 1019 951 2537.75
Average virtual mount time (sec) 15.4 14 17.1 22.9 17.35
Total virtual mounts 36250 37290 37565 36498 147603
Fast ready - Number of mounts 19762 21217 21251 19895 82125
- Maximum mount time (sec) 125 164 223 457 242.25
- Minimum mount time (sec) 1 1 1 1 1
- Average mount time (sec) 1 1 2 1 1.25
Cache hits - Number of mounts 13888 13530 13528 13818 54764
- Maximum mount time (sec) 347 339 480 574 435
- Minimum mount time (sec) 1 1 1 1 1
- Average mount time (sec) 0 3 4 1 2
Recalls - Number of mounts 2600 2543 2786 2785 10714
- Maximum mount time (sec) 1904 6277 1019 951 2537.75
- Minimum mount time (sec) 1 1 145 1 37
- Average mount time (sec) 175 172 195 232 193.5
Percentage of mounts not satisfied in cache 7.1 6.8 7.4 7.6 7.225
Virtual volume residency (sec) - Maximum 65535 65535 65535 65535 65535
Virtual volume residency (sec) - Minimum 1 1 1 1 1
Virtual volume residency (sec) - Average 654.5 622.7 650.9 776.9 676.25
DAILY TAPE VOLUME CACHE ACTIVITY
Megabytes written to cache 7439498 8108749 8838331 7982035 8092153
Megabytes read from cache 10137406 10884679 10124724 9975316 10280531
Maximum channel megabytes per second 29.7 23.15 26.88 26.66 27
Avg Max channel megabytes per second 17.38 16.95 19.78 18.02 18
Average channel megabytes per second 11.34 12.17 12.16 11.59 12
Ratio: max to avg channel megabytes per sec 2.61 1.9 2.21 2.3 2
Megabytes written to physical 3590 11187128 12719898 13320868 12364944 12398210
Megabytes read from physical 3590 944459 805859 964500 948058 915719
Maximum hh:mm virt volume in cache 24:37:00 26:09:00 21:55 37:51:00
Maximum hh:mm of virt vol in cache 307:49:00 957:46:00 639:53:00 17564:28
Average virt vol size in TVC (mb) 307 333 380 329 337
Average size all virt vols (mb) 128 127 151 134 135
Maximum number of virt vols in cache 1538 1298 1207 1461 1376
Virtual volumes premigrated 51544 54306 55124 54052 53757
Maximum # of virt vols managed by vts 74047 75324 66875 67256 70876
Maximum private stacked volume count 712 735 721 709 719
Minimum scratch stacked volume count 22 18 20 13 18
Maximum active data on stacked cartridges(gb) 9527 9640 10119 9072 9590
Minimum available stacked capacity (gb) 382 320 380 223 326
DAILY IMPORT - EXPORT ACTIVITY
Physical volumes imported 0 0 0 0 0
Logical volumes imported 0 0 0 0 0
Total megabytes imported 0 0 0 0 0
Megabytes moved from stacked to stacked vol 0 0 0 0 0
Physical volumes exported 0 0 0 0 0
Logical volumes exported 0 0 0 0 0
Total megabytes exported 0 0 0 0 0
Megabytes moved from stacked to stacked vol 0 0 0 0 0
DAILY PERFORMANCE INDICATORS
Percent of <= 2k blocks written 62 54 86 69 68
Percent of <= 4k blocks written 80 37 14 56 47
Percent of <= 8k blocks written 73 68 70 80 73
Percent of <=16k blocks written 70 52 35 52 52
Percent of <=32k blocks written 100 100 100 100 100
Percent of <=64k blocks written 4 4 4 1 3
Percent of > 64k blocks written 0 0 0 0 0

Total number of blocks written 25703K 27144K 28364K 23153K
Average blocksize written 24000 24000 24000 24000 24000
Compression ratio into cache 5.41 4.32 6.2 5.12 5
Compression ratio to 3590 0.88 0.92 0.96 0.9 1
THROTTLING
Maximum recall throttling percentage 35 51 60 61 52
Maximum write throttling percentage 0 0 0 0 0
Average recall throttling value (msec) 1162 1200 1120 1150 1158
Average write throttling value (msec) 0 0 0 0 0
Overall throttling value (msec) 373 516 633 636 540
DAILY ACTIVE DATA & RECLAMATION
Cartridges containing <= 05% of active data 2 2 4 2 10
Cartridges containing <= 10% of active data 1 6 1 3 11
Cartridges containing <= 15% of active data 1 0 0 0 1
Cartridges containing <= 20% of active data 0 2 1 1 4
Cartridges containing <= 25% of active data 0 1 0 0 1
Cartridges containing <= 30% of active data 0 0 0 1 1
Cartridges containing <= 35% of active data 0 2 0 1 3
Cartridges containing <= 40% of active data 0 0 1 0 1
Cartridges containing <= 45% of active data 0 0 1 0 1
Cartridges containing <= 50% of active data 0 5 15 0 20
Cartridges containing <= 55% of active data 13 154 127 162 456
Cartridges containing <= 60% of active data 131 85 103 93 412
Cartridges containing <= 65% of active data 112 88 77 91 368
Cartridges containing <= 70% of active data 92 64 54 66 276
Cartridges containing <= 75% of active data 69 55 51 45 220
Cartridges containing <= 80% of active data 52 58 48 46 204
Cartridges containing <= 85% of active data 54 35 45 33 167
Cartridges containing <= 90% of active data 49 46 35 42 172
Cartridges containing <= 95% of active data 37 42 52 39 170
Cartridges containing <=100% of active data 71 79 73 81 304
Total active cartridges 684 724 688 706 2802
Percentage of active data 72 69 69 69 279
Reclaim threshold percentage 55 50 50 50 205
Number of cartridges that can be reclaimed 9 13 14 6 42
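One sanity check on the statistics above: "Percentage of mounts not satisfied in cache" equals, to within rounding, the recall mounts divided by total virtual mounts for each VTS. The sketch below is illustrative, using the mount counts taken directly from the table.

```python
# Check that cache-miss percentage = recall mounts / total virtual mounts,
# per VTS, using the mount counts from the table above.
vts = {  # library: (recall mounts, total virtual mounts)
    "LIB2": (2600, 36250),
    "LIB3": (2543, 37290),
    "LIB4": (2786, 37565),
    "LIB5": (2785, 36498),
}
reported = {"LIB2": 7.1, "LIB3": 6.8, "LIB4": 7.4, "LIB5": 7.6}

miss_pct = {lib: 100.0 * recalls / mounts
            for lib, (recalls, mounts) in vts.items()}
```

Each computed value agrees with the reported "mounts not satisfied in cache" figure to within 0.1 percentage points.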
