Unit 8: Performance and tuning

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit objectives
After completing this unit, you should be able to:
• Plan for DS8000 configuration
• Plan for performance
• Discuss rules of thumb
• Perform data collection and monitoring with TPC

© Copyright IBM Corporation 2011


Topic 1: Cache management

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Cache and I/O operations (1 of 2)
• Caching is a fundamental technique for hiding I/O latency.
– Cache is used to keep both the data read and written by the host servers.

• Read operations
– When a host sends a read request to the DS8000.
• A cache hit occurs if the requested data resides in the cache.
– The I/O operation will not disconnect from the channel/bus until the read is complete.
– A read hit provides the highest performance.
• A cache miss occurs if the data is not in the cache.
– The I/O operation is logically disconnected from the host:
> Allowing other I/Os to take place over the same interface
> And a stage operation from the disk subsystem takes place

• Write operations: Fast writes
– A fast write hit occurs when the write I/O operation completes as soon as the data
received from the host has been transferred to the cache and a copy has been made in the
persistent memory.
– Writes to the DS8000 are almost 100% fast write hits.

© Copyright IBM Corporation 2011


Cache and I/O operations (2 of 2)
• Cache is basically the DS8000 processor memory.
– DS8000 cache
– DS8000 persistent memory

• Processor memory can be 16 to 128 GB per controller.
– Cache is owned by each server: even-numbered extent pools and volumes are managed by
server 0, odd-numbered ones by server 1.
– The persistent memory (NVS) protecting the writes of one server resides in the other
controller.

• DS8000 uses Sequential prefetching in Adaptive Replacement Cache (SARC).
– Better cache usage than conventional LRU algorithms
– Better response time, higher hit ratios

© Copyright IBM Corporation 2011


Advanced cache techniques
• The DS8000 benefits from advanced caching techniques
– Sequential prefetching in adaptive replacement cache (SARC)
• This technology improves cache efficiency and enhances cache hit ratios
• The SARC provides:
– Sophisticated patented algorithms to determine what data to store in cache
> Based upon the recent access and frequency needs of the hosts
– Prefetching, which anticipates data prior to a host request and loads it into cache
– Self-learning algorithms to adapt and dynamically learn what data to store in cache
> Based upon the frequency needs of the hosts

– Adaptive multi-stream prefetching (AMP)


• Introduces an autonomic, workload-responsive, self-optimizing prefetching technology
– That adapts both the amount of prefetch and the timing of prefetch
> On a per-application basis in order to maximize the performance of the system
• Provides provably optimal sequential read performance
– Maximizing the aggregate sequential read throughput of the system

© Copyright IBM Corporation 2011


Cache and SARC (1 of 4)
• SARC is a self-tuning, self-optimizing solution for:
– A wide range of workloads with a mix of sequential and random I/O
streams
– This cache algorithm attempts to determine four things:
• When data is copied into the cache
• Which data is copied into the cache
• Which data is evicted when the cache becomes full
• How the algorithm dynamically adapts to different workloads

– The cache is organized in 4 KB pages called cache pages or slots.
• This unit of allocation ensures that small I/Os do not waste cache memory.

© Copyright IBM Corporation 2011


Cache and SARC (2 of 4)
• The decision to copy some data can be triggered from two policies:
– Demand paging: (cache management)
• Means that eight disk blocks (a 4 K cache page) are brought in only on a cache miss.
• Demand paging is always active for all volumes and ensures that I/O patterns with
some locality find at least some recently used data in the cache.
– Prefetching:
• Means that data is copied into the cache speculatively even before it is requested.
– To prefetch, a prediction of likely future data accesses is needed.
– For prefetching, the cache management uses tracks
• A track is a set of 128 disk blocks (16 cache pages = 64 KB)
• To detect a sequential access pattern, counters are maintained with every track to
record if a track has been accessed together with its predecessor.
• DS8000 monitors application read-I/O patterns and dynamically determines whether it
is optimal to stage into cache.
– Just the page requested
– The page requested plus remaining data on the disk track
– An entire disk track or multiple tracks that have not yet been requested

© Copyright IBM Corporation 2011


Cache and SARC (3 of 4)

Figure: SARC list management (MRU = most recently used, LRU = least recently used)

© Copyright IBM Corporation 2011


Cache and SARC (4 of 4)

Figure: SARC versus no SARC
– Effective cache space: 33% greater
– Cache miss rate: 11% lower
– Peak throughput: 12.5% higher
– Response time: 50% lower

© Copyright IBM Corporation 2011


Adaptive multi-stream prefetching
• SARC and AMP play complementary roles

• AMP algorithm solves two problems that plague prefetching algorithms:


– Prefetch wastage occurs
• When prefetched data is evicted from the cache before it can be used
– Cache pollution occurs
• When less useful data is prefetched instead of more useful data

• AMP provides optimal sequential read performance


– Maximizing the aggregate sequential read throughput of the system
– The amount prefetched for each stream is dynamically adapted according to the
application’s needs and the space available (SEQ list)
– The timing of the prefetches is also continuously adapted for each stream
• To avoid misses and, at the same time, any cache pollution

• AMP dramatically improves performance


– For common sequential and batch processing workloads
© Copyright IBM Corporation 2011
Determining the right amount of cache storage
• There are a number of factors that influence cache requirements:
– Is the workload sequential or random?
– Are the attached host servers System z or Open Systems?
– What is the mix of reads to writes?
– What is the probability that data will be needed again after its initial access?
– Is the workload cache friendly?
• A cache-friendly workload performs much better with relatively large amounts of cache

• The most common general rules are:


– For Open Systems: Each TB of capacity needs between 2 GB and 4 GB of
cache
– For System z: Each TB of capacity needs between 4 GB and 5 GB of cache

• Recommendations to consolidate your environment into DS8000 (a sizing sketch based on
the rules of thumb above follows this list):
– Choose a cache size for the DS8000 series that has a ratio between cache size and disk
storage similar to that of the configuration that you currently use.
– When you consolidate multiple disk storage servers, configure at least the sum of all
cache from the source disk storage servers, up to the largest DS8000 processor memory or
cache size.
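
The ksh fragment below is a minimal sketch, not part of the course material, that simply
applies the 2-4 GB and 4-5 GB per TB rules of thumb quoted above; the script name and
variable names are illustrative.

#!/bin/ksh
# cache_rule_of_thumb.ksh <usable capacity in TB>   (hypothetical helper)
capacity_tb=$1
echo "Open Systems: $((capacity_tb * 2)) GB to $((capacity_tb * 4)) GB of cache"
echo "System z:     $((capacity_tb * 4)) GB to $((capacity_tb * 5)) GB of cache"

For example, 50 TB of Open Systems capacity maps to roughly 100 GB to 200 GB of cache, so a
128 GB or 256 GB processor memory feature would be the closest standard choices.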
© Copyright IBM Corporation 2011
Topic 2: Physical planning

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Comparison between DS8000 models
(Columns: DS8100 2-way / DS8300 4-way / DS8700 2-way or 4-way / DS8800 2-way or 4-way)
• Disks: 16-384 / 16-1024 / 16-128 or 16-1024 / 16-144 or 16-1056
• Disk interface: FC-AL 2 Gb / FC-AL 2 Gb / FC-AL 2 Gb / 6 Gb SAS-2
• Device adapter ports: FC-AL (2 Gbps) x 4 / FC-AL (2 Gbps) x 4 / FC-AL (2 Gbps) x 4 / FC-AL (8 Gbps) x 4
• RAID types: RAID 5, 6, 10 on all models
• Cache // NVS: 16-128 GB // 1-4 GB / 32-256 GB // 1-8 GB / 32-128 GB // 1-4 GB (2-way) or 32-384 GB // 1-8 GB (4-way) / 32-128 GB // 1-4 GB (2-way) or 32-384 GB // 1-8 GB (4-way)
• Processor: POWER5+ 2-way / POWER5+ 4-way / POWER6 2-way or 4-way / POWER6+ 2-way or 4-way
• Host adapters: ESCON x 2 and FC (4 Gbps) x 4 / ESCON x 2 and FC (4 Gbps) x 4 / FC (4 Gbps) x 4 / FC (8 Gbps) x 4 or x 8
• Host adapter slots: 16 / 32 / 16 or 32 / 8 or 16
• Max host adapter ports: 64 / 128 / 64 or 128 / 64 or 128
• Interface protocols: SCSI 4 Gbps or 2 Gbps, FCP/FICON / SCSI 4 Gbps or 2 Gbps, FCP/FICON / SCSI 4 Gbps, FCP/FICON / SCSI 8 Gbps, FCP/FICON
• PPRC fabric: FCP on all models
• DA slots: 8 / 16 / 8 or 16 / 8 or 16

© Copyright IBM Corporation 2011


DS8000 topology: All models
(Figure: DS8000 topology, all models. Hosts 1, 2, and 3 attach through SANs to the host
adapters. A high-bandwidth, fault-tolerant interconnect links the host adapters to two N-way
SMP processor complexes, each with volatile memory and persistent memory, which reach the
disk arrays through the RAID (device) adapters.)

© Copyright IBM Corporation 2011


DS8700 hardware performance characteristics
(1 of 6)
• The DS8700 features IBM POWER6 server technology and a
PCI Express I/O infrastructure to help support high
performance.
• Compared to the POWER5+ processor in previous DS8100
and DS8300 models, the POWER6 processor can enable over
a 50% performance improvement in I/O operations per
second in transaction processing workload environments.
Additionally, sequential workloads can receive as much as
150% bandwidth improvement, which is an improvement
factor of 2.5 compared to the previous models.
• The DS8700 offers either a dual 2-way processor complex or a
dual 4-way processor complex.
• For the DS8700, the device adapters have been upgraded with
an adapter-card processor that is twice as fast as that of the
DS8100 and DS8300, providing much higher throughput on the
device adapter.
© Copyright IBM Corporation 2011
DS8700 hardware performance characteristics
(2 of 6)
• While the DS8100 and DS8300 used the RIO-G connection between the
clusters as a high bandwidth interconnection to the device adapters, the
DS8700 uses dedicated PCI Express connections to the I/O
enclosures and the device adapters.
– This increases the bandwidth to the storage subsystem backend by a factor of up
to 16 times to a theoretical bandwidth of 64 GBps.

© Copyright IBM Corporation 2011


DS8800 hardware performance characteristics
(3 of 6)
• The DS8800 features IBM POWER6+ server technology and a PCI
Express I/O infrastructure to help support high performance.
– The DS8800 model can be equipped with the 2-way processor feature
or the 4-way processor feature for highest performance requirements
• Compared to the POWER5+ processor in previous models, the
POWER6 and POWER6+ processors can enable a more than 50%
performance improvement in I/O operations per second in
transaction processing workload environments.
• Additionally, peak large-block sequential workloads can receive as
much as 200% bandwidth improvement, which is an improvement
factor of three compared to the DS8300 models.
• The DS8800 offers either a dual 2-way processor complex or a dual 4-
way processor complex.

© Copyright IBM Corporation 2011


DS8800 hardware performance characteristics
(4 of 6)
• While the DS8100 and DS8300 used the RIO-G connection between the
clusters as a high-bandwidth interconnection to the device adapters, the
DS8800 and DS8700 use dedicated PCI Express connections to the I/O
enclosures and the device adapters.
– This increases the bandwidth to the storage subsystem back-end by a factor of up
to 16 times to a theoretical bandwidth of 64 GBps.

© Copyright IBM Corporation 2011


DS8800 hardware performance characteristics
(5 of 6)
• The DS8800 uses SAS disks. Fibre Channel switching is still used in the
DS8800 back-end, with the FC-to-SAS conversion made just before the
drives.
• The DS8800 RAID device adapter (DA) has been modified to run at
8 Gbps.
• Additional enhancements to these new DA bring a major performance
improvement compared to DS8700:
– For DA limited workloads, the maximum I/Ops throughput (small blocks) per DA
has been increased by 40% to 80%
– DA sequential throughput in MB/s (large blocks) has increased by approximately
85% to 210% from DS8700 to DS8800
– For instance, a single DA under ideal workload conditions can process a
sequential large-block read throughput of up to 1600 MBps
• These improvements are of value in particular when using solid-state
drives (SSDs), but also give the DS8800 system very highly sustained
sequential throughput, for instance in high-performance computing
configurations.

© Copyright IBM Corporation 2011


DS8800 hardware performance characteristics
(6 of 6)
• The following improvements have been implemented in the
architecture of the DS8800 host adapter, leading to HA
throughputs that are more than double those of the DS8700:
– The architecture is fully on 8 Gbps.
– The single-core 1 GHz PowerPC processor (750 GX) has been
replaced by a dual-core 1.5 GHz (Freescale MPC8572).
– Adapter memory has increased four-fold.
• The 8 Gbps adapter ports can negotiate to 8, 4, or 2 Gbps
– 1 Gbps is not possible.
• These new HA adapters are available with either eight or four
Fibre Channel (FC) ports, which can be configured to support
either FCP or FICON.

© Copyright IBM Corporation 2011


DS8700 and DS8800 vertical growth and
scalability

DS8700 or DS8800 2-way system with four I/O enclosures

DS8700 or DS8800 4-way system with eight I/O enclosures


© Copyright IBM Corporation 2011
DS8000 logical track and data striping
• Sector size is 524 B
– 512 B customer data
– 8 B iSeries header (whether an iSeries LUN or not)
– 2 B sequence number and 2 B LRC
Note: For CKD, the LRC is added/stripped by the DA and the sequence number is not used.

• Logical track
– 112 sectors for CKD (56 KB)
– 128 sectors for fixed block (64 KB)

• Strip size is four logical tracks (both CKD and open)
– Data for a logical volume rotates to the next drive in the array after four tracks
(256 KB)

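As a quick cross-check of the figures above, the ksh fragment below (illustrative only, not
part of the course material) recomputes the sector, track, and strip sizes from the block
counts:

#!/bin/ksh
# Sector layout and striping arithmetic quoted above.
echo "Sector size:       $((512 + 8 + 2 + 2)) B"          # 524 B
echo "CKD logical track: $((112 * 512 / 1024)) KB"        # 56 KB
echo "FB logical track:  $((128 * 512 / 1024)) KB"        # 64 KB
echo "Strip size:        $((4 * 128 * 512 / 1024)) KB"    # 4 tracks = 256 KB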
© Copyright IBM Corporation 2011


DS8100 and DS8300 performance considerations
• HA (2 Gbps) = 550 MBps with all four ports active (two ports = 2 x 200 MBps)
– With 16 ports using eight HBAs, you can create a workload of 3,200 MBps
• 6,400 MBps with 32 ports, and 12,800 MBps with 64 ports

• HA (2 Gbps) = 100,000 IOps; one port is about 40,000 IOps
– 16 ports using eight HBAs = 640,000 IOps
• 1,280,000 IOps with 32 ports, 2,560,000 IOps with 64 ports, and up to 3,400,000
IOps with 128 ports
– All data in DS8300 cache for the measurement

• Internal fabric (RIO-G)
– 2 GBps per link (DS8100 has 1 link, and DS8300 has 2 links)

• Device adapter
– Max effective bandwidth: 1,440 MBps (1 DA pair)
– Max 4 KB RAID 5 read misses: 10K IOps per DA

• Disk drives:
– A combined number of 8 disks will potentially sustain 1,440 IOPS (15K rpm)
• Reduce the number by 12.5% when you assume a spare drive in the eight pack
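
The ksh fragment below is an illustrative sketch (not from the course material) that simply
scales the per-port figures quoted above; the 200 MBps and 40,000 IOps per-port values come
from this slide, and the variable name is hypothetical.

#!/bin/ksh
# Scale the quoted per-port figures to a given number of 2 Gbps HA ports.
ports=$1
echo "Approximate bandwidth:      $((ports * 200)) MBps"
echo "Approximate cache-hit IOps: $((ports * 40000))"

With ports=16 this reproduces the 3,200 MBps and 640,000 IOps figures on the slide.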
© Copyright IBM Corporation 2011
Size and number of disks

• 16 X 8 146 GB 10 K rpm DDMs versus 8 X 8 300 GB 10 K rpm DDMs


• Same capacity but twice as many DDMs using 146 GB.

© Copyright IBM Corporation 2011


Effects of 15K DDMs on performance

© Copyright IBM Corporation 2011


Planning array sites: Example 1

• Workload isolation considerations


– Because only one DA pair is available, no DA pair isolation is possible
– You can dedicate two ranks to a workload
• Resource-sharing workload considerations
– You can plan for three ranks assigned to each processor complex (0 and 1)
© Copyright IBM Corporation 2011
Planning array sites: Example 2

• Workload isolation considerations


– Because multiple DA pairs are available, DA pair isolation is possible.
• In addition to individual rank isolation
• Resource-sharing workload considerations
– There might be more array sites than the number of volumes required for any single workload.

© Copyright IBM Corporation 2011


Rank and extent pool assignments
• Each block in the figure represents one 8-DDM rank assigned to an extent pool
– A rank has no relationship to server 0 or server 1 until after it has been assigned to an
extent pool

• Rank ID (Rx) does not necessarily indicate the server
– Ranks on each DA should be balanced across server 0 and server 1

• Extent pools
– Options, for example:
• P0=R0 or P0=R0,R2,R4,R6
• P1=R1 or P1=R1,R3,R5,R7
• P2=R2 or P2=R8,R10,R12,R14
• P3=R3 or P3=R9,R11,R13,R15
• ... up to P79=R79 or P19=R73,R75,R77,R79

(Figure: ranks R0 through R79 laid out across the DA pairs, with a server 0 and a server 1
column shown for each DA pair.)

© Copyright IBM Corporation 2011


Storage pool striping: Rotate extents
• New function that stripes logical volumes across ranks within an extent
pool
– 1 GB granularity (wide stripe), so it generally benefits random workloads more than
sequential ones
– Potential to flatten hot spots and greatly simplify performance management
– Less dependency on OS, database, or application striping

mkextpool -rankgrp 1 -stgtype fb pool1

mkrank -array A1 -stgtype fb -extpool P1
mkrank -array A3 -stgtype fb -extpool P1
mkrank -array A5 -stgtype fb -extpool P1
mkrank -array A7 -stgtype fb -extpool P1

mkfbvol -extpool P1 -type ds -cap 20 -name 'DB' -sam standard -eam rotateexts 8700-870F
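
For a balanced configuration, the same pattern would be repeated for a rank-group-0 pool on
the other server. The commands below are a sketch only: the flags are copied from the example
above, while the array IDs (A0, A2, A4, A6), the pool nickname, and the volume IDs 8600-860F
are hypothetical.

mkextpool -rankgrp 0 -stgtype fb pool0

mkrank -array A0 -stgtype fb -extpool P0
mkrank -array A2 -stgtype fb -extpool P0
mkrank -array A4 -stgtype fb -extpool P0
mkrank -array A6 -stgtype fb -extpool P0

mkfbvol -extpool P0 -type ds -cap 20 -name 'DB' -sam standard -eam rotateexts 8600-860F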
© Copyright IBM Corporation 2011
Striping volume across ranks of the same pool P1
(Figure: A 2 GB LUN (LUN 1 = volume 8700) is built from 1 GB extents allocated in rotation
across ranks 1, 3, 5, and 7, which all belong to extent pool 1, so one volume spans 4 x 8
disks. The logical view is comparable to the balanced method of LVM striping an LV across
four LUNs taken from four single-rank extent pools.)

© Copyright IBM Corporation 2011


Stripe size

(Figure: A host write to volume ID 8700 passes through the DS8000 cache of server 0 or
server 1, which is organized in 4 KB pages and 64 KB tracks. Logical volume 8700 was created
with mkfbvol -extpool P1 -type ds -cap 20 -name 'DB' -sam standard -eam rotateexts 8700. The
data is destaged to extent pool 1 (Pool1) in 256 KB strips that rotate across ranks 1, 3, 5,
and 7, each rank contributing 1 GB extents to the volume.)

© Copyright IBM Corporation 2011


Host queue depth setting (1 of 2)
• This is the maximum number of I/Os that a single SCSI target will queue
up for execution.

• The parameter has a minimum value of 1, a default set value of 3, and a


maximum value of 255.

• To avoid overflowing this queue, the operating system driver will not send
more than a certain number of outstanding commands to any SCSI device.

• You must tune that value according to the number of physical disks a
logical volume is using.

Warning: Setting the queue depth to a value larger than the disk can
handle will result in I/Os being held off once a QUEUE FULL condition
exists on the disk.

© Copyright IBM Corporation 2011


Host queue depth setting (2 of 2)
• Volume 8700 in our example is built on the ranks of its extent pool:
– Each rank is based on seven disks (7+P)
– Or six disks (6+P+S), depending on whether the rank holds a spare disk

• So the queue depth for that volume can be set to:
– 7 disks x 4 ranks x 3 minimal queue = 84

• We must also take the number of paths to that volume into account.
– If you choose four paths, then 4 x 84 = 336

• The maximum for queue_depth is 255.

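The following ksh fragment is an illustrative sketch (not from the course material) of the
queue-depth estimate described above; the values 7, 4, and 3 are the ones used on this slide,
and the cap is the queue_depth maximum of 255.

#!/bin/ksh
# Estimate a per-volume queue depth: disks per rank x ranks x minimal queue, capped at 255.
disks_per_rank=7
ranks=4
minimal_queue=3
qd=$((disks_per_rank * ranks * minimal_queue))
[ $qd -gt 255 ] && qd=255
echo "Suggested queue_depth: $qd"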
© Copyright IBM Corporation 2011


Host queue depth setting in AIX
• To see the default values for unknown SCSI disks:
# lsattr -D -c Disk -s scsi -t osdisk
pvid          none    Physical volume identifier    False
clr_q         no      Device CLEARS its Queue on error
q_err         yes     Use QERR bit
q_type        simple  Queuing TYPE
queue_depth   3       Queue DEPTH
reassign_to   120     REASSIGN time out value
rw_timeout    30      READ/WRITE time out value
start_timeout 60      START unit time out value

A small script to change the values:
#!/bin/ksh
liste_disk=`lsdev -Ccdisk | grep 2107 | awk '{ print $1}'`
for i in $liste_disk
do
  set -x
  chdev -l $i -a q_type=simple -a queue_depth=255
  set --
done

for i in 0 1 2 3
do
  set -x
  chdev -l fcs$i -a num_cmd_elems=512 -a max_xfer_size=0x1000000
  set --
done

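After running the script, the new settings can be verified with lsattr -El; the device names
hdisk2 and fcs0 below are examples only.

# lsattr -El hdisk2 -a queue_depth -a q_type
# lsattr -El fcs0 -a num_cmd_elems -a max_xfer_size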
© Copyright IBM Corporation 2011


Storage pool striping performance benefits
Comparing random performance with 32 FB volumes, each 11 GB in size:
• Without SPS, all 32 volumes are placed on a single-rank extent pool.
• With SPS, the 32 volumes are striped across 32 ranks in one extent pool (single DS8000
server).
• All disks are 15K rpm; volumes are short-stroked (not a full-seek test).

(Chart: random throughput in thousands of IO/sec for a random-read and a DBO 70/30/50
workload. Without SPS the 32 volumes reach only about 2.5K to 2.9K IO/sec; with SPS they
reach about 53.3K to 68.5K IO/sec.)

This is an EXTREME test case, but it helps show the potential for SPS to deliver huge
performance gains.

© Copyright IBM Corporation 2011


FlashCopy and device adapters

© Copyright IBM Corporation 2011


Physical planning rules of thumb
• Place one rank in each extent pool, or use just two multi-rank extent pools for the full DS8000.
• Select multiple ranks for an extent pool using ranks from the same DA
pair.
• Use RAID 5 for most applications.
• Use RAID 10 for heavy write streams and recommended for System i
LUNs.
• Use 15K rpm DDMs for better response time if it is a requirement.
• Use 500 GB FATA drives for archival or long-term, less frequently used
storage.
• Model performance objectives using Disk Magic tool before ordering to
determine DDM, cache, and RAID requirements.
• Provide enough adapters for multi-pathing for host server or redundant
SANs.
• Load balance LUNs across both DS8000 servers.

© Copyright IBM Corporation 2011


FATA DDM performance considerations
• Nearline DDMs are slower than Enterprise DDMs.
– 7.2 K rpm versus 10 K or 15 K rpm
– Nearline DDMs will throttle I/O if overdriven
• Write performance can drop to 50% if DDM throttling kicks in
• DDM throttling based on temperature sensors in DDM
– Ambient temperature + DDM workload generated heat
• Intermix of Enterprise/nearline DDMs in PPRC/FlashCopy relationships
may affect performance of Enterprise DDMs

• Modified writes to NVS for nearline DDMs are limited to prevent Enterprise
DDM NVS starvation.

• The user is responsible for targeting appropriate workloads to nearline DDMs.
© Copyright IBM Corporation 2011
SATA, FATA, and Fibre Channel disk drive

© Copyright IBM Corporation 2011


Performance comparison across DS8000 models
(1 of 2)
Open results summary

DS8800 component results (4-port HA, RAID 5 arrays); % increase is DS8800 versus DS8700:
                               DS8700   DS8800   % increase
HA 4 KB read (K IOps)          78       190      144%
HA 4 KB write (K IOps)         42       110      162%
HA 64 KB read (MBps)           530      2530     377%
HA 64 KB write (MBps)          400      1420     255%
DA pair 4 KB read (K IOps)     53       77       45%
DA pair 4 KB write (K IOps)    19       32       68%
DA pair 64 KB read (GBps)      1.2      3.2      167%
DA pair 64 KB write (GBps)     0.77     1.42     84%

DS8800 full box results, 96x RAID 5 arrays
(768x 15K RPM HDDs, 16x SSDs, 8x DA pairs, 16x HAs with 32x 8 Gb ports):
                               DS8300   DS8700   DS8800   % increase (versus DS8300)
Seq read (GBps)                3.9      9.7      11.8     22% (203%)
Seq write (GBps)               2.2      4.7      6.7      43% (205%)
Database open 4 KB (K IOps)    165      191      196      3% (19%)
4K read miss (K IOps)          111      137      160      17% (44%)
4K read hits (K IOps)          425      523      530      1% (25%)
4K write hits (K IOps)         164      203      222      1% (35%)
© Copyright IBM Corporation 2011
Performance comparison across DS8000 models
(2 of 2)
CKD results summary

DS8800 full box results, RAID 5
(384x 15K RPM HDDs, 48x 10K RPM HDDs, 8x DA pairs, 16x HAs with 32x 8 Gb ports):
                               DS8300   DS8700   DS8800   % increase (versus DS8300)
FICON seq read (GBps)          4.1      9.4      10.0     6% (144%)
FICON seq write (GBps)         2.1      5.6      5.7      2% (171%)
zHPF 4K write hits (K IOps)    124      159      175      10% (41%)
zHPF 4K read hits (K IOps)     344      423      440      4% (28%)
zHPF DB z/OS (K IOps)          165      201      204      2% (24%)
FICON DB z/OS (K IOps)         124      174      181      4% (46%)

© Copyright IBM Corporation 2011


Topic 3: Logical planning

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Optimal performance: Configuration principles
• There are three major principles for achieving a logical configuration:
– Workload isolation
• Can provide a high priority workload with dedicated DS8000 hardware resources
– To reduce the impact of less important workloads
– Workload resource-sharing
• Means multiple workloads use a common set of DS8000 hardware resources
– Such as: Ranks, device adapters, I/O ports, and host adapters
– Workload spreading
• Means balancing and distributing workload evenly across all of the DS8000 hardware
resources available, including:
– Processor complex 0 and processor complex 1
– Device adapters
– Ranks
– I/O enclosures
– Host adapters

© Copyright IBM Corporation 2011


Workload isolation
• DS8000 disk capacity isolation
– Rank level
• Certain ranks are dedicated to a workload
– DA level
• All ranks on one or more device adapter pairs are dedicated to a workload
– Processor complex level
• All ranks assigned to extent pools managed by processor complex (0 or 1) are
dedicated to a workload
– Storage unit level
• All ranks in a physical DS8000 are dedicated to a workload

• DS8000 host connection isolation


– I/O port level
• Certain DS8000 I/O ports are dedicated to a workload. This subsetting is quite
common.
– Host adapter level
• Certain host adapters are dedicated to a workload
– I/O enclosure level
• Certain I/O enclosures are dedicated to a workload

© Copyright IBM Corporation 2011


Performance guidelines
• Allocate hardware components to workloads by one of two methods:
– Spreading the workloads across all components (HBAs, disks)
• Try to share the use of hardware components across all workloads.
• The more hardware components are shared among multiple workloads, the more effectively
they are utilized
> Which reduces total cost of ownership (TCO).
– Isolating workloads
• HBA and disk components are used for one workload.
• On the downside, this means that certain components are unused when their workload
is not demanding service.
• On the upside, it means that when that workload does demand service, the component
is available immediately.
– And the workload does not have to contend with other workloads for that resource.

• So, should you try to spread your workloads or isolate your workloads?
– You must strike a balance.
• Spreading workloads across components wherever possible and isolating workloads
when required to ensure performance of critical workloads.
• You can combine the techniques by isolating workloads to a group of components and
then spreading the workload evenly within the group.

© Copyright IBM Corporation 2011


Spreading resources
• Volumes are created from space allocated from a single extent pool,
therefore:
• A user must be aware of:
– What server manages the storage
– What DA pair the ranks in the pool are attached to
– How many ranks are in the pool
– Whether the ranks are in different disk enclosures
– What RAID level is on the rank
• So:
– How many ranks on each DA pair? _______
– Are there RAID 5 and RAID 10 ranks? _____
– Split the RAID 5 ranks evenly between server 0 and 1
– Split the RAID 10 ranks evenly between server 0 and 1
– One rank per pool or multiple ranks per pool

• Example: All DDMs 146 GB 15 K rpm


– Eight ranks on DA pair 2 and eight ranks on DA pair 0
– Four RAID 10 ranks and 12 RAID 5 ranks
• Create two RAID 10 ranks on each DA pair and six RAID 5 ranks
• Create four pools for RAID 10: Two for server 0 and two for server 1
• Create eight pools for RAID 5: Four for server 0 and four for server 1

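A minimal DSCLI sketch of the pool layout just described; the mkextpool flags are the ones
already used earlier in this unit, the device ID is omitted as in that example, and the pool
nicknames are hypothetical. Two RAID 10 pools per rank group:

mkextpool -rankgrp 0 -stgtype fb r10_p0
mkextpool -rankgrp 0 -stgtype fb r10_p2
mkextpool -rankgrp 1 -stgtype fb r10_p1
mkextpool -rankgrp 1 -stgtype fb r10_p3

The eight RAID 5 pools would be created the same way, four with -rankgrp 0 and four with
-rankgrp 1, and the arrays assigned to the pools with mkrank as shown earlier.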
© Copyright IBM Corporation 2011


Example layout for 16 array sites
rankgrp 0 rankgrp 1
S1 RAID 5 - A0 format FB to R0 P0 P1
S2 RAID 5 - A1 format FB to R1
S3 RAID 5 - A2 format FB to R2 P2 P3
S4 RAID 5 - A3 format FB to R3 DA 2
S5 RAID 10 - A4 format FB to R4
P4 P5
S6 RAID 10 - A5 format FB to R5
S7 RAID 10 - A6 format FB to R6
P6 P7
S8 RAID 10 - A7 format FB to R7

rankgrp 0 rankgrp 1
S9 RAID 5 - A8 format FB to R8
P8 P9
S10 RAID 5 - A9 format FB to R9
S11 RAID 5 - A10 format FB to R10
S12 RAID 5 - A11 format FB to R11 P10 P11
DA 0
S13 RAID 5 - A12 format FB to R12
S14 RAID 5 - A13 format FB to R13 P12 P13
S15 RAID 10 - A14 format FB to R14
S16 RAID 10 - A15 format FB to R15 P14 P15

RAID 5 P0, P1, P2, P3, even # pools on Server 0 odd on server 1
RAID 10 P4, P5, P6, P7, even # pools on Server 0 odd on server 1
© Copyright IBM Corporation 2011
FB volume creation and addressing
rankgrp 0 rankgrp 1
LSS 10 LUNs 1000-101F
P0
LSS 11 LUNs 1100-111F
P1
P2 LSS 12 LUNs 1200-121F
P3 LSS 13 LUNs 1300-131F 128 20 GB LUNs
P4
LSS 14 LUNs 1400-141F
P5
LSS 15 LUNs 1500-151F
P6
LSS 16 LUNs 1600-161F
P7 128 10 GB LUNs
LSS 17 LUNs 1700-171F
LSS 18 LUNs 1800-181F
P8
LSS 19 LUNs 1900-191F
P9
P10 LSS 1A LUNs 1A00-1A1F
P11 LSS 1B LUNs 1B00-1B1F
P12
LSS 1C LUNs 1C00-1C1F
P13
LSS 1D LUNs 1D00-1D1F 192 20 GB LUNs
P14
LSS 1E LUNs 1E00-1E1F
P15 64 10 GB LUNs
LSS 1F LUNs 1F00-1F1F

© Copyright IBM Corporation 2011


DS8000 OPEN: Use host striping for performance
Striping across ranks and pools

Balanced method: LVM striping


One rank per extent pool
Rank 1 Extent pool 1
2GB LUN 1

Extent
1 GB Rank 2 Extent pool 2
2GB LUN 2

Extent pool 3
2GB LUN 3

Rank 3
Extent pool 4
2GB LUN 4

Rank 4

LV striped across four LUNs

© Copyright IBM Corporation 2011


Open system: LUN size
• There is no performance difference for logical disk (LUN) sizes.
• For a DS8000, with the advantage of high capacity drives:
– Consider LUN sizes in the range of 8 GB to the size of one physical disk in an
array. 64 GB can be a good choice.
– It is not necessary to have all the LUNs in a DS8000 be the same size.
– What is important is that a host system’s LUNs be evenly balanced among ranks.

• The recommended method advocates assigning at minimum one LUN from each rank
to your server.

• Consider choosing your LUN size so that one LUN from every rank
gives your host system the amount of storage it needs.
– If a database needs 256 GB on four ranks, add one 64 GB LUN per rank.
– If a database needs 1024 GB on four ranks, two 128 GB LUNs per rank is a good
choice.

© Copyright IBM Corporation 2011


DS8000 OPEN dual port host attachment
(Figure: Read I/Os to LUN 1 arrive on two host adapter ports, FC0 and FC1. Host adapters do
not have DS8000 server affinity, so an I/O received on either port can be routed over the
RIO-2 interconnect to the server that owns the volume. Device adapters do have a server
affinity: extent pool 1, which holds LUN 1, is controlled by server 0, and extent pool 4 is
controlled by server 1; each DA reaches its 16-DDM enclosures through 20-port switches.)

© Copyright IBM Corporation 2011


Data placement
• Once you have determined the disk subsystem throughput, the disk
space, and the number of disks required by your different hosts and
applications, you have to make a decision regarding data placement.
• As is common for data placement, and to optimize the DS8000
resources utilization, you should:
– Equally spread the LUNs/volumes across the DS8000 servers. Spreading the
volumes equally on rank group 0 and 1 will balance the load across the DS8000
servers.
– Use as many disks as possible. Avoid idle disks, even if all storage capacity will
not be initially utilized.
– Distribute capacity and workload across DA pairs.
– Use multi-rank Extent Pools.
– Stripe your logical volume across several ranks.
– Consider placing specific database objects (such as logs) on different ranks.
– For an application, use volumes from both even- and odd-numbered Extent Pools
(even-numbered pools are managed by server 0, odd-numbered pools by server 1); see
the sketch after this list.
– For large, performance-sensitive applications, consider two dedicated Extent
Pools (one managed by server 0, the other managed by server 1).
– Consider different Extent Pools for 6+P+S arrays and 7+P arrays. If you use
Storage Pool Striping, this will ensure that your ranks are equally filled.
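As referenced above, here is a minimal DSCLI sketch of balancing an application's volumes
across both servers; the mkfbvol flags are the ones used earlier in this unit, while the pool
names P0 and P1 and the volume IDs are hypothetical.

mkfbvol -extpool P0 -type ds -cap 20 -name 'APP' -sam standard -eam rotateexts 8600-8607
mkfbvol -extpool P1 -type ds -cap 20 -name 'APP' -sam standard -eam rotateexts 8700-8707

Half of the application's LUNs come from an even-numbered pool (server 0) and half from an
odd-numbered pool (server 1), so both DS8000 servers share the load.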
© Copyright IBM Corporation 2011
Topic 4: Easy Tier

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
What are solid state disks?
• Not magnetic or optical
• Semiconductor
– Electronically erasable
persistent medium
– No mechanical read/
write interface
– No moving parts
• Comes in a variety of form factors (1.8”, 2.5”, 3.5” HDD, PCI-e, appliance)
and interfaces (SAS, FC, SATA, PCI-e)
(Figure: memory and storage hierarchy)

© Copyright IBM Corporation 2011


SSD constraints and performance

• SSDs utilize NAND technology
• NAND cells can only be written in one direction; flipping a bit back the other way
requires rewriting a full block:
– The block is copied, with the bit modified during the copy
– The old block is erased and prepared for a new write operation

FC and SAS 15K rpm disk performance:
• 130 MBps
• 180 IOps
• Response time: 3 ms

SSD disk performance:
• 220 MBps read, 150 MBps write
• 45,000 IOps read, 16,000 IOps write
• Response time: 0.1 ms

SSDs are very much higher performing, but a lot more expensive.

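A rough ratio, illustrative only and derived solely from the figures quoted above: at about
45,000 random read IOps per SSD versus about 180 IOps per 15K rpm HDD, one SSD delivers on
the order of 45,000 / 180 = 250 times the random read throughput of a single spinning drive,
which is why the tiering approach described in the next topic tries to keep only the hottest
extents on SSD.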
© Copyright IBM Corporation 2011


Performance comparison between SSD and HDD
disks

© Copyright IBM Corporation 2011


IBM Easy Tier
• As SSD disks remain considerably more expensive than traditional
disks (HDDs), it makes sense to limit SSD capacity to the portion of data
that really requires high performance.
• The IBM Easy Tier feature addresses this requirement by automatically and
dynamically migrating appropriate data between HDD and SSD disks
based on ongoing performance monitoring (automatic data relocation):
– Easy Tier automatic mode uses a 24-hour window of time to determine which
extents would most benefit from SSD
– It bases its decisions on both the disk access rates and the overall latency of disk
I/O operations at the sub-volume level
– Easy Tier algorithms look for high concentrations of random operations with small
transfer sizes that are currently on HDDs as candidates for movement to SSDs
(hot extents)
– It also monitors the extents currently residing on SSDs to ensure that they are still
active and “hot” enough compared to the extents that are on HDDs.
– When “hotter” extents are seen on the HDDs they may be swapped with cooler
SSD extents

© Copyright IBM Corporation 2011


IBM Easy Tier: Configuration
• The storage administrator need only configure the storage
pools with a mix of hard disk drives (HDDs) and SSDs and turn
it on.
– Easy Tier automatic mode is very simple to use and the primary
control is to turn it on or off through the DSCLI (DS8000 command line
interface) or the DSGUI (DS8000 graphical user interface).
• Easy Tier automatic mode will monitor continually and move
data as needed between the SSD ranks and HDD ranks within
the storage pool.
• Currently, Easy Tier automatic mode only manages movement
between SSDs and HDDs and does not distinguish classes of
HDD by rotational speed or RAID (redundant array of
independent disks) configuration.

© Copyright IBM Corporation 2011


Easy Tier: Automatic data relocation

(Figure: Easy Tier managed extent pool. A logical volume is virtualized as extents over the
pool; hot extents migrate up to the SSD arrays and cold extents migrate down to the HDD
arrays.)

© Copyright IBM Corporation 2011


IBM Easy Tier: Key advantages
• Designed to be easy: The user is not required to make a lot of decisions or go
through an extensive implementation process to start utilizing Easy Tier.

• Efficient use of SSD capacity: Easy Tier moves 1 GB data extents between
storage tiers. This enables very efficient utilization of SSD resources. Other systems
may operate on a full logical volume level. Logical volumes in modern storage
systems are trending towards larger and larger capacities. This makes migration of
data at a volume-level of granularity all the more inefficient by:
– Potentially wasting precious SSD space on portions of logical volumes that are not really hot, and
– Creating more HDD contention when executing the data movement.

• Intelligence: Easy Tier learns about the workload over a period of time as it makes
decisions about which data extents to move to SSDs. As workload patterns change,
Easy Tier finds any new highly active (“hot”) extents and exchanges them with
extents residing on SSDs that may have become less active (“cooled off”).

• Negligible performance impact: Easy Tier moves data gradually to avoid


contention with I/O activity associated with production workloads. It will not move
extents unless a measurable latency benefit would be realized by the move. The
overhead associated with Easy Tier management is so small that the effect on
overall system performance is nearly undetectable. This eliminates the need for
storage administrators to worry about scheduling when migrations occur.

© Copyright IBM Corporation 2011


Easy Tier: Manual mode
• Easy Tier can be used also on manual mode and allows a
logical volume to be migrated to the same or a different extent
pool without interruption to host I/O.
– Similar to FlashCopy, manual mode generates asynchronous
background activities.
– It is designed to move data at a rate that will have minimal impact to
host I/O performance.
• Easy Tier manual mode can improve performance in the following
cases:
– Volume migration within the same extent pool: especially when rank
capacity has been added to an extent pool that was fully populated,
Easy Tier manual mode can be used to re-stripe existing volumes
across all available ranks.
– Volume migration across different extent pools: to rebalance volumes
between ranks, DAs, and DS8000 servers.
© Copyright IBM Corporation 2011
IBM Storage Tier Advisor Tool (1 of 5)
• The Storage Tier Advisor is a free software tool that offers
guidance to users on how existing workloads can benefit by
moving certain data to SSDs.
• The Storage Tier Advisor Tool (Advisor Tool) provides a high-
level summary of workload characteristics and hot spots of
volumes that are monitored. It provides assistance for SSD
capacity planning with Easy Tier.
• The Advisor Tool is supported on Windows and can be installed
in a similar way to the DSCLI.
• Input data files for the Advisor Tool (Easy Tier summary data)
can be offloaded from the DS8700, and the output from the
Advisor Tool can be viewed using any web browser.

© Copyright IBM Corporation 2011


IBM Storage Tier Advisor Tool (2 of 5)

Advisor Tool System Summary after Easy Tier learning period, no extents were moved to
SSD ranks yet.
© Copyright IBM Corporation 2011
IBM Storage Tier Advisor Tool (3 of 5)

Advisor Tool Volume Heat Distribution for ESS11 after Easy Tier learning period: No extents were
moved to SSD ranks yet.

© Copyright IBM Corporation 2011


IBM Storage Tier Advisor Tool (4 of 5)

Advisor Tool System Summary after Easy Tier migration fills SSD capacity

© Copyright IBM Corporation 2011


IBM Storage Tier Advisor Tool (5 of 5)

Advisor Tool Volume Heat Distribution for ESS11 after Easy Tier migration fills SSD capacity.

© Copyright IBM Corporation 2011


Easy Tier: Best practices (1 of 2)
• One of the only decisions a storage administrator needs to make when
using Easy Tier automatic mode is how much SSD capacity is required
and how the DS8700 should be configured. Once the storage pools are
created with both SSD and HDDs, the rest of the storage management
is done by the Easy Tier algorithms
• There are a few options when setting up the storage pools that you can
choose from. The most obvious is two storage pools, one per DS8700
storage server, with ½ the SSD ranks and ½ the HDD ranks and
capacities in each pool, with storage pool striping used for volume
allocation. This configuration allows all volumes to benefit from SSD
performance if one or more of their extents are hot enough to be placed
on SSD. This configuration should suit the predominant number of
applications.
• It is also possible to create two pools per storage server. One pool with
SSD ranks and HDD ranks and the other with strictly HDD ranks. The
HDD-only pool could be used for test volumes, to archive data where
there is already in place archiving to disk, or production data that is not
performance sensitive. The HDD-only pool can be of a different RAID
type or HDD type (for example, 2TB SATA RAID-6 versus 600GB 15K
rpm RAID-5 in the SSD+HDD pools).

© Copyright IBM Corporation 2011


Easy Tier: Best practices (2 of 2)
• Not all workloads will benefit equally from Easy Tier. When
considering what workloads to place in an Easy Tier pool, the
most improvement will usually be seen on those extents with
low to modest cache read-hit-ratios, high de-stage rates, and
smaller transfer sizes that exhibit non-sequential (random)
access patterns.
• Workloads with higher read percentages tend to show more
disk response time improvement than write intensive
workloads. However, Easy Tier has the capability to improve
performance of any mix of reads and writes. Transactional
applications such as order entry, financial services, ERP, and
customer inquiry will tend to benefit most from Easy Tier.
Batch-oriented workloads will benefit less but mixtures of batch
and transactional workloads should be well managed by the
Easy Tier algorithms.
© Copyright IBM Corporation 2011
Topic 5: Tools and services offering

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Capacity Magic
• Designed as an easy-to-use tool with a single main dialog

• Offers a graphical interface that allows you to enter:


– The disk drive configuration of a DS8000, DS6000, or ESS 800
– The number and type of disk drive sets; and the RAID type

• With this input:


– Capacity Magic calculates the raw and net storage capacities.
• The tool can also display the number of extents that are produced per rank.

© Copyright IBM Corporation 2011


Capacity Magic: Configuration window

© Copyright IBM Corporation 2011


Capacity Magic: Configuration report

© Copyright IBM Corporation 2011


Disk Magic
• A Windows-based disk subsystem performance modeling tool

• Some examples of what Disk Magic can model:


– Move the current I/O load to a different disk subsystem model
– Merge the current I/O load of multiple disk subsystems into a single DS8000
– Insert a SAN volume controller in an existing disk configuration
– Increase the current I/O load
– Implement a storage consolidation
– Increase the disk subsystem cache size
– Change to larger capacity disk drives
– Change to higher disk rotational speed
– Upgrade from ESCON to FICON host adapters
– Upgrade from SCSI to Fibre Channel host adapters
– Increase the number of host adapters
– Use fewer or more logical unit numbers (LUNs)
– Activate Metro Mirror, z/OS Global Mirror, and Global Mirror
© Copyright IBM Corporation 2011
Disk Magic: Interfaces and open disk input
windows

© Copyright IBM Corporation 2011


Disk Storage Configuration Migrator

Capture the current configuration to generate new DS8000 configurations

Linux or Windows
ThinkPad with “Disk
Storage Configuration
Migrator”

Create logical configuration


on DS8000 and generate
copy services scripts from
the available config data

© Copyright IBM Corporation 2011


IBM Certified Secure Data Overwrite (1 of 2)
• STG Lab Services offers this service for the DS8000 and ESS800/750.

• This new offering is meant to overcome the following issues:


– Deleted data does not mean gone forever
• Usually deleted means that the pointers to the data are invalidated and the space can
be reused
• Until the space is reused, the data remains on the media and what remains can be
read with the right tools
– Regulations and business prudence
• Require that the data actually be removed when the media is no longer available

• The service executes a multi-pass overwrite of the data disks.
– It operates on the entire box
– It is a three-pass overwrite, compliant with the DoD 5220.22-M procedure for purging disks
– There is a fourth pass of zeros with InitSurf
– IBM also purges client data from the server and HMC disks

• A certificate of completion is delivered when this process is completed.

© Copyright IBM Corporation 2011


IBM Certified Secure Data Overwrite (2 of 2)

© Copyright IBM Corporation 2011


Topic 6: TotalStorage Productivity Center

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
TPC family portfolio

• Simplify storage deployment and management


– By reducing implementation time and operational
complexities for administrators, improving management
productivity

• Optimize performance and storage utilization
– By improving overall availability of the storage network and
mission-critical applications, so that storage becomes invisible to
the end user

• Centralize end-to-end storage management


– By greatly increasing management capabilities to a global
level of the industry's most popular storage environments

• Frees your IT staff to focus on mission critical tasks


© Copyright IBM Corporation 2011
IBM Tivoli Storage Productivity Center
• Centralized management tool that reduces complexity
– End-to-end storage management

• One tool to manage many devices (even multi-vendor)

• Central reporting repository for management

• Provides a platform to automate routine storage administrative tasks

(Figure: a Tivoli Storage Productivity Center server managing System p, System x, and
BladeCenter hosts, Brocade, IBM, and Cisco directors, IBM and STK tape libraries, and IBM
disk systems.)

© Copyright IBM Corporation 2011


TPC overview
• IBM Tivoli Storage Productivity Center (TPC) is a leader in
helping organizations manage complex heterogeneous storage
environments.
– Asset and capacity reporting
– SAN configuration guidance
– End-to-end topology views
– Advanced analytics

• Themes

• Simplify
– Simplify storage deployment and management by reducing
implementation and operational complexities for administrators,
improving management productivity

• Optimize
– Optimize performance and storage utilization by improving overall
availability of the storage network and mission-critical applications,
so that storage becomes invisible to the end user

• Centralize
– Centralize end-to-end storage management by greatly increasing
management capabilities to a global level of the industry's most popular
storage environments
© Copyright IBM Corporation 2011
TPC components

IBM Tivoli Storage Productivity Center is packaged as Basic Edition, Standard Edition
(Data + Fabric + Disk/MRE), and Replication (two-/three-site BC + System z):

• Productivity Center for Disk
– Administration, operations, and performance management for storage (disk, tape,
virtualization)

• Productivity Center for Data
– Asset and capacity reporting
– File systems and database management

• Productivity Center for Fabric
– Administration, operations, and performance management for fabric (switches and directors)

• Productivity Center for Replication
– Administration and operations management for advanced copy services (ESS, DS8000,
DS6000, SVC)

• Productivity Center for Replication: Two-/three-site BC + System z
– Disaster recovery management, including failback function
© Copyright IBM Corporation 2011
TPC architecture design overview

TPC for Disk, TPC for Data, TPC for Fabric, and TPC for Replication all run on top of the
Tivoli Storage Productivity Center server, which provides APIs for storage management
applications and a common graphical user interface for all functions (including a highly
scalable topology and status display and the display and configuration of SMI-S devices).

• Device server
– Performs discovery
– Real-time availability and performance monitoring
– Configures the SAN and storage devices

• Data server
– Time-based data collection engine
– Collects data on a periodic basis for capacity and performance analysis purposes
(performance data collection requires the TPC for Disk and/or Fabric features)

Both servers share a common schema and database and a common host agent framework, which
connects to the CIMOM infrastructure, the VMware virtual infrastructure interface,
subordinate TPC and TPC-RM servers, SNMP, the Storage Resource Agent (TPC 4.1), and the
managed systems.

© Copyright IBM Corporation 2011


System Storage Productivity Center
• IBM’s Unified Storage Console
• IBM System x3550 M2
– 1 Quad-Core Intel Xeon processor
– 8 GB memory
– 2 – 146 GB 15 K drives
IBM SSPC
• Pre-installed software:
– IBM TPC Basic Edition
– SVC Admin Console
– DS8000 Storage Manager linkage
– DS3K, DS4K and DS5K Storage Manager
– TS3500 Tape Specialist linkage
• Pre-installed priced features:
– IBM TPC Standard Edition
– IBM TPC for Replication
(Figure: SSPC managing IBM DS8000, DS6000, DS5000, DS4000, DS3000, SVC, TS3500, and TS3310.)

© Copyright IBM Corporation 2011


Storage Software Management value progression
(1 of 5)
SSPC and TPC Basic Edition

TPC Basic Edition delivers:
• Discovery
• Topology view and Data Path Explorer
• Health/status monitoring
• Event management
• Device capacity management
• Policy-based alerting

The administrator points a browser at the SSPC for an enterprise storage view of multiple
devices. Installed with IBM HW at time of purchase.

(Figure: SSPC managing DS8000, SVC, XIV, DS3000, DS4000, DS5000, TS3500, and TS3310.)

© Copyright IBM Corporation 2011


Storage Software Management value progression
(2 of 5)
SSPC and TPC Basic Edition

Building on SSPC and TPC Basic Edition (installed with IBM HW at time of purchase):

TPC for Disk + Disk MRE*:
• Performance management for SAN-attached disk storage devices
• Near real-time data path performance statistics
• Historical performance reports to help diagnose problems
• Performance trend analysis
• Export of report data for offline processing

TPC for Data:
• Enables visibility into detailed storage utilization
• Host, file system, and file-level capacity analytics
• Advanced analytics for actual disk usage by file types, attributes, and user
• Enables policy enforcement (enterprise-wide user quotas, data retention, inappropriate data)
© Copyright IBM Corporation 2011


Storage Software Management value progression
(3 of 5)
SSPC and TPC Basic Edition

TPC Standard Edition (building on SSPC and TPC Basic Edition, installed with IBM HW at time
of purchase):
• Includes TPC Basic Edition, TPC for Disk, TPC for Data, and more
• Enables end-to-end heterogeneous storage resource management with advanced analytics
• Provisioning planners provide a workflow-like process for end-to-end storage provisioning
• Configuration change management to enable proactive management
• Performance analytics for optimization and tuning
• Host, file system, and file-level capacity analytics
• Provides SAN fabric management

© Copyright IBM Corporation 2011


Storage Software Management value progression
(4 of 5)
SSPC and TPC Basic Edition

TPC for Replication


• Adds copy services management for IBM SAN-attached storage devices
• Supports FlashCopy, Metro Mirror, and Global Mirror configurations
• Automatic failover and failback in the event of a disaster
• Volume protection and practice volumes ensure the reliability of high priority storage
assets
• Optional three-site configuration for IBM DS8000 storage devices

© Copyright IBM Corporation 2011


Storage Software Management value progression
(5 of 5)
SSPC and TPC Basic Edition

This final view combines the layers shown on the preceding slides (TPC Basic Edition, TPC for
Disk + Disk MRE*, TPC for Data, and TPC Standard Edition) together with:

TPC for Replication:
• Adds copy services management for IBM SAN-attached disks
• Supports FlashCopy, Metro Mirror, and Global Mirror configurations
• Automatic failover and failback in the event of a disaster
• Volume protection and practice volumes ensure the reliability of high-priority storage
assets
• Optional three-site configuration for IBM DS8000 storage devices

*Note: Restricted to managing VDS, DS3000, DS4000, DS5000 as stand-alone devices or when
attached to an IBM SVC and IBM SVC Entry Edition.
© Copyright IBM Corporation 2011
IBM TPC Basic Edition
• Monitor entire storage
infrastructure
– Device discovery
– IBM and non-IBM device support
• LUN Provisioning Wizard
– LUN mapping/masking
– SAN fabric zoning
• SAN Topology Viewer
– Health/status monitoring
• SAN Fabric configuration
• Tape library reporting
• Basic asset and capacity reporting
• Policy based alert monitoring
• Easily upgraded to advanced
functions

© Copyright IBM Corporation 2011


IBM TPC for Disk (1 of 2): Manages disk systems
• IBM TotalStorage Productivity Center
– Is a standard software package for managing complex storage environments

• Productivity Center for Disk is designed to:


– Configure multiple storage devices from a single console
– Monitor and track the performance of SAN-attached SMI-S compliant storage
devices
– Enable proactive performance management by setting performance thresholds
• Based on performance metrics and the generation of alerts

• IBM TPC for Disk


– Centralizes the management of networked storage devices that implement the
SNIA SMI-S specification, which includes
• The IBM System Storage DS family
• SAN Volume Controller (SVC)
– Designed to reduce storage management complexity and costs while
• Improving data availability, centralizing management of storage devices through open
standards
• Enhancing storage administrator productivity, increasing storage resource utilization
• Offering proactive management of storage devices

© Copyright IBM Corporation 2011


IBM TPC for Disk (2 of 2): Manages disk systems
• Centralized point of control of disk
configuration
– Device grouping services
– Logging
• Automated management and
provisioning
– Capacity monitoring/reporting
– Scheduled actions
– Create and assign LUNs
– Integrated with fabric management
• Performance trending
• Performance thresholds and
notification
• Automated status and problem alerts
– Integrated with third-party system
management through SNMP

© Copyright IBM Corporation 2011


IBM TPC for Data: File-level reporting
• Comprehensive discovery
– Files, filesystems, databases
– Servers
– Storage
• Enterprise wide reporting
• Threshold monitoring
• Alerts
• Automated scripting facility
• Automatic policy-based storage
provisioning
• Tivoli Storage Manager
integration
• Chargeback capability
• Reports on all storage
– IBM, EMC, HDS, HP, NetApp, and
so on.
© Copyright IBM Corporation 2011
IBM TPC for Replication: Manages Copy Services

• Simplifies and automates


replication
– FlashCopy
– Metro Mirror
– Global Mirror
– Metro Global Mirror
(DS8000)
• Matches source and target
volume
• Manages mainframe and
distributed systems
• Supports CKD and FBA
• Runs natively on z/OS or distributed systems
• Provides basic HyperSwap for z/OS
• Supports DS8000, DS6000, ESS 800, and SVC
• Disaster recovery management
– Site awareness
– Stand-by server (two-site) option
– Disaster recovery testing
– Practice volumes
© Copyright IBM Corporation 2011
IBM TPC Standard Edition:
End-to-end storage resource management
• TPC Standard Edition enables end-
to-end heterogeneous storage
resource management with
advanced analytics and SAN Fabric
Management
• Includes disk and data functions
• Fabric performance monitoring and
zoning
• Provisioning planners provide
workflow-like process for end-to-end
storage provisioning
• Configuration Change Management
to enable proactive management
• Performance analytics for
optimization and tuning

© Copyright IBM Corporation 2011


Topology Viewer (1 of 2)

Display current performance metrics for


a DS8000 storage system

© Copyright IBM Corporation 2011


Topology Viewer: Focus on volumes
Drill down and display current performance
metrics for individual volumes

View volume performance “at-a-glance”

© Copyright IBM Corporation 2011


Topology Viewer (2 of 2)

© Copyright IBM Corporation 2011


Access to historical data

© Copyright IBM Corporation 2011


TPC: Measurement of DS8000 components
• To gather performance data:
– You first need to set up a job called a Subsystem Performance Monitor
• TPC for Disk can collect data for the following DS8000 components:

© Copyright IBM Corporation 2011


Performance monitor: Collection of information
• To obtain performance reporting, you need first to define and
start a performance monitoring job:

At regular intervals, this job collects and stores in the TPC database
all performance counters from the monitored storage subsystem.

© Copyright IBM Corporation 2011


Performance monitor: Reporting
• Once performance counters have been collected, you can
generate performance reports on different components of the
DS8000:

© Copyright IBM Corporation 2011


Performance monitor: Tabular view
Performance reports can be displayed in a tabular view.
To generate a graphical view, select the components in the table and click the chart button,
then specify the performance metrics to display.

© Copyright IBM Corporation 2011


Performance monitor: Graphical view
Example of graphical view displayed on one DS8000 array (I/O rates)

© Copyright IBM Corporation 2011


Performance monitor: Accessing historical data
You can modify the time limits of your graph to analyze data collected during a
specific period:

© Copyright IBM Corporation 2011


Performance monitor: Exporting reports
You can export any graphical or tabular report in different formats by using the print function:

© Copyright IBM Corporation 2011


TPC for Disk: Performance metrics
• DS6000/8000/ESS family
– Array level
– Number of writes, reads
– Total time to satisfy reads, writes
– Average data transfer rate for reads, writes
– Average subsystem I/O rate
– Average response time (milliseconds)
– Total I/Os issued to the volumes
– Total sequential I/Os issued to the volumes
• Volume level
– Number of writes, reads,
– Number of cache hits (reads, writes)
– Disk to cache transfers (sequential, non-sequential)
– Cache to disk transfers
– Cache hit ratio (reads, writes, overall)
– Fast reads, fast writes (number and percentage)
delayed due to NVS full
• Cluster level
– Cluster I/O rate
– Percent of total I/O requests delayed

© Copyright IBM Corporation 2011


TPC: Predefined performance reports
A set of predefined performance reports exists for disks and fabrics
Just click the report name to generate the corresponding tabular report

© Copyright IBM Corporation 2011


Checkpoint
1. True or False: The DS8000 provides dual servers and redundant
access to all disk DDMs for improved performance.

2. True or False: IBM still recommends that one rank be assigned to


each extent pool.

3. True or False: If you require LUNs larger than one rank, then it is OK
to place multiple ranks into an extent pool.

4. True or False: IBM recommends that a single path is the best


connection approach.

5. True or False: The best way to achieve high performance is to


manually select a rank for each volume.
© Copyright IBM Corporation 2011
Checkpoint solutions
1. True or False: The DS8000 provides dual servers and redundant access to all
disk DDMs for improved performance.
The answer is true. The DS8000 has dual p5 servers and redundant DA cards to provide up to
four paths to any disk.

2. True or False: IBM still recommends that one rank be assigned to each extent
pool.
The answer is false. It was the official IBM recommendation until R3.0, but the current
guidance is two pools with eight ranks each, with volumes created using rotate extents.

3. True or False: If you require LUNs larger than one rank, then it is OK to place
multiple ranks into an extent pool.
The answer is true. This is one of the valid reasons to place more than one rank in a pool.

4. True or False: IBM recommends that a single path is the best connection
approach.
The answer is false. IBM recommends to use multipath access whenever possible.

5. True or False: The best way to achieve high performance is to manually select a
rank for each volume.
The answer is false.

© Copyright IBM Corporation 2011


Unit summary
Having completed this unit, you should be able to:
• Plan for DS8000 configuration
• Plan for performance
• Discuss rules of thumb
• Perform data collection and monitoring with TPC

© Copyright IBM Corporation 2011
