SSF01G08
• Read operations
– When a host sends a read request to the DS8000:
• A cache hit occurs if the requested data resides in the cache.
– The I/O operation does not disconnect from the channel/bus until the read is complete.
– A read hit provides the highest performance.
• A cache miss occurs if the data is not in the cache.
– The I/O operation is logically disconnected from the host:
> Allowing other I/Os to take place over the same interface
> While a stage operation from the disk subsystem takes place
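The hit/miss flow above can be sketched as a tiny shell function (a hypothetical simulation, not DS8000 code): a hit is served immediately over the channel, while a miss logically disconnects and stages the track from disk into cache before serving it.

```shell
#!/bin/sh
# Hypothetical sketch of the read path: hit -> serve from cache;
# miss -> disconnect, stage the track from disk into cache, then serve.
cache="trackA trackB"

read_track() {
  case " $cache " in
    *" $1 "*) echo "cache hit: $1 served without disconnecting" ;;
    *) echo "cache miss: $1 staged from disk while the channel is free"
       cache="$cache $1" ;;   # the staged track now resides in cache
  esac
}

read_track trackA   # hit
read_track trackC   # miss, then cached
read_track trackC   # hit on the re-read
```

A re-read of the same track after a miss becomes a hit, which is why sequential and re-reference patterns benefit most from cache.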
[Diagram: interface protocols across generations — FCP/FICON over the SAN to the RAID adapters at 2 Gbps, 4 Gbps, or 8 Gbps; earlier models without SARC, later models with SARC.]
• Logical track
– 112 sectors for CKD (56 KB)
– 128 sectors for fixed block (64 KB)
• Device adapter
– Maximum effective bandwidth: 1440 MBps (one DA pair)
– Maximum 4 KB RAID 5 read misses: 10 K IOPS per DA
• Disk drives:
– A combined set of eight disks (15 K RPM) can potentially sustain 1,440 IOPS.
• Reduce that number by 12.5% when you assume a spare drive in the eight-pack.
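A quick sanity check of the numbers above (the 180 IOPS per 15 K RPM drive figure is implied by 1,440 ÷ 8; the script itself is only a sketch):

```shell
#!/bin/sh
# Sketch: estimate sustained IOPS for an 8-disk 15K RPM eight-pack.
disks=8
iops_per_disk=180                               # implied by 1,440 IOPS / 8 disks
array_iops=$(( disks * iops_per_disk ))
spare_iops=$(( array_iops - array_iops / 8 ))   # minus 12.5% for a spare drive
echo "8 disks: ${array_iops} IOPS; with spare: ${spare_iops} IOPS"
# prints: 8 disks: 1440 IOPS; with spare: 1260 IOPS
```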
© Copyright IBM Corporation 2011
Size and number of disks
[Diagram: LUN creation from extents — the ranks in an extent pool are divided into 1 GB extents, and a 2 GB LUN is allocated from two of them; the figure also shows a 1 GB extent subdivided into 256 KB segments.]
• To avoid overflowing this queue, the operating system driver will not send more than a certain number of outstanding commands to any SCSI device.
• You must tune that value according to the number of physical disks a logical volume is using.
Warning: Setting the queue depth to a value larger than the disk can handle will result in I/Os being held off once a QUEUE FULL condition exists on the disk.
A small script to change the queue depth:

#!/bin/ksh
liste_disk=`lsdev -Ccdisk | grep 2107 | awk '{ print $1 }'`
for i in $liste_disk
do
  set -x
  chdev -l $i -a q_type=simple -a queue_depth=255
  set --
done
for i in 0 1 2 3
do
  set -x
  chdev -l fcs$i -a num_cmd_elems=512 -a max_xfer_size=0x1000000
  set --
done
• With SPS, the 32 volumes are striped across 32 ranks in an extent pool (single DS8000 server)
• All disks are 15 K rpm
• Volumes are short stroked (not a full seek test)
• This is an EXTREME test case, but it helps show the potential for SPS to deliver huge performance gains
[Chart: Random Read DBO (70/30/50), IO/sec — 32 Vols. No SPS: 2.5 and 2.9; 32 Vols. SPS: 53.3 and 68.5; percentage increase shown versus the DS8300 for DS8300, DS8700, and DS8800]
• So, should you try to spread your workloads or isolate your workloads?
– You must strike a balance.
• Spread workloads across components wherever possible, and isolate workloads when required to ensure the performance of critical workloads.
• You can combine the techniques by isolating workloads to a group of components and then spreading the workload evenly within that group.
rankgrp 0 / rankgrp 1 (DA 0):
  S9  RAID 5  - A8  format FB to R8
  S10 RAID 5  - A9  format FB to R9
  S11 RAID 5  - A10 format FB to R10
  S12 RAID 5  - A11 format FB to R11
  S13 RAID 5  - A12 format FB to R12
  S14 RAID 5  - A13 format FB to R13
  S15 RAID 10 - A14 format FB to R14
  S16 RAID 10 - A15 format FB to R15
  (ranks map to pools P8-P15)
RAID 5: P0, P1, P2, P3 — even-numbered pools on server 0, odd-numbered on server 1
RAID 10: P4, P5, P6, P7 — even-numbered pools on server 0, odd-numbered on server 1
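The even/odd pool placement above follows directly from the parity of the pool number; a throwaway sketch:

```shell
#!/bin/sh
# Sketch: even-numbered pools belong to server 0 (rankgrp 0),
# odd-numbered pools to server 1 (rankgrp 1).
for p in 0 1 2 3 4 5 6 7; do
  echo "P$p -> server $(( p % 2 ))"
done
```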
FB volume creation and addressing
rankgrp 0 / rankgrp 1:
P0:  LSS 10, LUNs 1000-101F
P1:  LSS 11, LUNs 1100-111F
P2:  LSS 12, LUNs 1200-121F
P3:  LSS 13, LUNs 1300-131F   (128 20 GB LUNs)
P4:  LSS 14, LUNs 1400-141F
P5:  LSS 15, LUNs 1500-151F
P6:  LSS 16, LUNs 1600-161F
P7:  LSS 17, LUNs 1700-171F   (128 10 GB LUNs)
P8:  LSS 18, LUNs 1800-181F
P9:  LSS 19, LUNs 1900-191F
P10: LSS 1A, LUNs 1A00-1A1F
P11: LSS 1B, LUNs 1B00-1B1F
P12: LSS 1C, LUNs 1C00-1C1F
P13: LSS 1D, LUNs 1D00-1D1F   (192 20 GB LUNs)
P14: LSS 1E, LUNs 1E00-1E1F
P15: LSS 1F, LUNs 1F00-1F1F   (64 10 GB LUNs)
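The four-digit LUN IDs above are simply the two-hex-digit LSS followed by a two-hex-digit volume number (00-1F); a small sketch composing them:

```shell
#!/bin/sh
# Sketch: compose FB volume IDs as <LSS in hex><volume number in hex>.
lss=16   # LSS 0x10
for vol in 0 1 31; do
  printf 'LSS %02X, volume %2d -> LUN ID %02X%02X\n' "$lss" "$vol" "$lss" "$vol"
done
# prints:
# LSS 10, volume  0 -> LUN ID 1000
# LSS 10, volume  1 -> LUN ID 1001
# LSS 10, volume 31 -> LUN ID 101F
```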
[Diagram: one rank per extent pool — Rank 2 forms Extent pool 2 with 1 GB extents, from which 2 GB LUN 2 is allocated; likewise Rank 3 / Extent pool 3 / LUN 3 and Rank 4 / Extent pool 4 / LUN 4.]
• Consider choosing your LUN size so that one LUN from every rank gives your host system the amount of storage it needs.
– If one database needs 256 GB spread over four ranks, add one 64 GB LUN per rank.
– If a database needs 1024 GB over four ranks, two 128 GB LUNs per rank is a good idea.
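The arithmetic behind these examples, as a sketch (the variable names are illustrative):

```shell
#!/bin/sh
# Sketch: derive the LUN size from capacity, rank count, and LUNs per rank.
total_gb=1024
ranks=4
luns_per_rank=2
lun_gb=$(( total_gb / ranks / luns_per_rank ))
echo "Create $(( ranks * luns_per_rank )) LUNs of ${lun_gb} GB each"
# prints: Create 8 LUNs of 128 GB each
```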
Reads
[Diagram: Server 0 and Server 1 processor complexes — processors with L1/L2 and L3 caches and memory — linked by the RIO-2 interconnect. Each device adapter has an affinity to one server (DAs on the left to server 0, on the right to server 1) and connects through 20-port switches to 16-DDM HDD arrays holding the LUNs.]
• Efficient use of SSD capacity: Easy Tier moves 1 GB data extents between storage tiers, which enables very efficient utilization of SSD resources. Other systems may operate at the full-logical-volume level. Logical volumes in modern storage systems are trending toward larger and larger capacities, which makes migration of data at a volume level of granularity all the more inefficient by:
– Potentially wasting precious SSD space on portions of logical volumes that are not really hot, and
– Creating more HDD contention when executing the data movement.
• Intelligence: Easy Tier learns about the workload over a period of time as it makes
decisions about which data extents to move to SSDs. As workload patterns change,
Easy Tier finds any new highly active (“hot”) extents and exchanges them with
extents residing on SSDs that may have become less active (“cooled off”).
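The exchange logic can be caricatured as sorting extents by recent activity and keeping only the hottest ones on the SSD tier (a hypothetical sketch; Easy Tier's real decision process is more sophisticated than a one-shot sort):

```shell
#!/bin/sh
# Sketch: given "extent_id access_count" pairs and an SSD tier with room for
# two extents, the two hottest extents should live on SSD; the rest stay on HDD.
printf 'e1 500\ne2 40\ne3 900\ne4 120\n' | sort -k2,2 -rn | head -2
# prints:
# e3 900
# e1 500
```

As counts change over time, a re-sort would "cool off" e1 and promote a newly hot extent in its place, which is the exchange behavior described above.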
Advisor Tool System Summary after the Easy Tier learning period: no extents were moved to SSD ranks yet.
IBM Storage Tier Advisor Tool (3 of 5)
Advisor Tool Volume Heat Distribution for ESS11 after Easy Tier learning period: No extents were
moved to SSD ranks yet.
Advisor Tool System Summary after Easy Tier migration fills SSD capacity
Advisor Tool Volume Heat Distribution for ESS11 after Easy Tier migration fills SSD capacity.
• Capture the current configuration to generate a new DS8000 configuration
– Linux or Windows ThinkPad with “Disk Storage Configuration Migrator”
• Provides a platform to automate routine storage administrative tasks
[Diagram: IBM, Brocade, and Cisco directors in the SAN; IBM and STK tape libraries]
• Themes
• Simplify
– Simplify storage deployment and management by reducing
implementation and operational complexities for administrators,
improving management productivity
• Optimize
– Optimize performance and storage utilization by improving the overall availability of the storage network and mission-critical applications, so that storage becomes invisible to the end user
• Centralize
– Centralize end-to-end storage management by greatly extending management capabilities, on a global level, across the industry's most popular storage environments
TPC components
TPC for Disk | TPC for Data | TPC for Fabric | TPC for Replication
Managed devices: DS8000, SVC, XIV, DS3000, DS4000, DS5000, TS3500, TS3310
TPC Basic Edition delivers:
• Discovery
• Topology view and Data Path Explorer
• Health/Status Monitoring
• Event Management
• Device Capacity Mgmt
• Policy-based Alerting
• Installed with IBM HW at time of purchase

TPC for Disk + Disk MRE*:
• Performance management for SAN and attached disk storage devices
• Near real-time data path performance statistics
• Historical performance reports to help diagnose problems
• Performance trend analysis
• Export report data for offline processing

TPC for Data:
• Enables visibility into detailed storage utilization
• Host, file system, and file-level capacity analytics
• Advanced analytics for actual disk usage by file types, attributes, and user
• Enables policy enforcement (enterprise-wide user quotas, data retention, inappropriate data)
*Note: Restricted to managing VDS, DS3000, DS4000, DS5000 as stand-alone devices or when attached to an
IBM SVC and IBM SVC Entry Edition.
IBM TPC Basic Edition
• Monitor entire storage
infrastructure
– Device discovery
– IBM and non-IBM device support
• LUN Provisioning Wizard
– LUN mapping/masking
– SAN fabric zoning
• SAN Topology Viewer
– Health/status monitoring
• SAN Fabric configuration
• Tape library reporting
• Basic asset and capacity reporting
• Policy-based alert monitoring
• Easily upgraded to advanced
functions
2. True or False: IBM still recommends that one rank be assigned to each extent pool.
The answer is false. It was the official IBM recommendation until R3.0, but the new approach is two extent pools with eight ranks each, with volumes allocated using rotate extents.
3. True or False: If you require LUNs larger than one rank, then it is OK to place multiple ranks into an extent pool.
The answer is true. This is one of the valid reasons to place more than one rank in a pool.
4. True or False: IBM recommends that a single path is the best connection approach.
The answer is false. IBM recommends using multipath access whenever possible.
5. True or False: The best way to achieve high performance is to manually select a rank for each volume.
The answer is false. Storage pool striping spreads volumes across the ranks of an extent pool automatically.