DS8000 Architecture
The DS8000 consists of two processor complexes. Each complex has access to multiple host adapters
to connect to Fibre Channel, FICON, and ESCON hosts. The DS8000 can have up to 32 host adapters.
To access the disk subsystem, each complex uses several four-port Fibre Channel arbitrated loop
(FC-AL) device adapters, up to a maximum of 16 adapters arranged in 8 pairs.
Each adapter connects the complex to two separate switched Fibre Channel networks. Each switched
network attaches disk enclosures, each of which contains up to 16 disks.
Each enclosure contains two 20-port Fibre Channel switches. Of these 20 ports, 16 are used to attach
the 16 disks in the enclosure and the remaining four are used either to interconnect with other
enclosures or to attach to the device adapters. Each disk is attached to both switches.
Data Transfer Process
The attached hosts interact with software running on the complexes (servers) to access data on
logical volumes. The servers manage all read and write requests to the logical volumes on the
disk arrays.
During write requests, the servers use fast-write: the data is written to volatile memory on one
complex and to persistent memory on the other complex. The server then reports the write as
complete before it has been written to disk, which provides much faster write performance.
Persistent memory is also called NVS, or non-volatile storage.
During read operations, the servers fetch the data from the disk arrays via the high-performance
switched disk architecture. The data is then cached in volatile memory in case it is required again.
The servers attempt to anticipate future reads with an algorithm known as SARC (Sequential
prefetching in Adaptive Replacement Cache). Data is held in cache as long as possible using this
smart algorithm. If a cache hit occurs, where requested data is already in cache, the host does not
have to wait for it to be fetched from the disks.
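The fast-write and read-cache behaviour described above can be sketched in a small model. This is a hedged illustration only: the class and method names here are invented for the sketch and are not DS8000 interfaces.

```python
class Complex:
    """One processor complex with volatile cache and persistent memory (NVS)."""
    def __init__(self):
        self.cache = {}   # volatile read/write cache
        self.nvs = {}     # persistent (non-volatile) memory

class StorageFacility:
    """Illustrative two-complex model; names are assumptions, not DS8000 APIs."""
    def __init__(self):
        self.server0 = Complex()
        self.server1 = Complex()
        self.disk = {}    # stands in for the switched FC-AL disk arrays

    def fast_write(self, volume, block, data):
        # Data lands in volatile memory on one complex and in NVS on the
        # other; the write is reported complete before any destage to disk.
        self.server0.cache[(volume, block)] = data
        self.server1.nvs[(volume, block)] = data
        return "complete"          # acknowledged before reaching disk

    def destage(self):
        # Later, modified data is hardened to disk and the NVS copy released.
        for key, data in list(self.server1.nvs.items()):
            self.disk[key] = data
            del self.server1.nvs[key]

    def read(self, volume, block):
        key = (volume, block)
        if key in self.server0.cache:      # cache hit: no disk wait
            return self.server0.cache[key], "cache-hit"
        data = self.disk[key]              # cache miss: fetch from the arrays
        self.server0.cache[key] = data     # keep it in case it is needed again
        return data, "cache-miss"
```

The point of the sketch is the ordering: the write acknowledgement depends only on the two memory copies, never on the disk.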
Array Site:
An array site is a group of 8 DDMs. Which DDMs make up an array site is predetermined by the
DS8000, but note that there is no predetermined server affinity for array sites. The DDMs selected for an array
site are chosen from two disk enclosures on different loops. The DDMs in the array site are of the same DDM
type, which means the same capacity and the same speed (RPM).
An array is created from one array site. Forming an array means defining a specific RAID type; the
supported RAID types are RAID 5 and RAID 10.
According to the DS8000 sparing algorithm, from zero to two spares may be taken from the array site.
For example, creating a RAID-5 array with one spare yields what is also called a 6+P+S array (the capacity of
6 DDMs for data, the capacity of one DDM for parity, and a spare drive). According to the RAID-5 rules, parity
is distributed across all seven drives in this example.
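The 6+P+S arithmetic above can be written out as a short sketch. The function name and the capacity figure used in the test are illustrative assumptions, not DS8000 values.

```python
ARRAY_SITE_DDMS = 8   # every array site is a group of 8 DDMs

def raid5_layout(spares, ddm_capacity_gb):
    """Return (data_ddms, usable_gb) for a RAID-5 array built from one
    8-DDM array site, with the given number of spares (0 to 2) taken by
    the sparing algorithm. Illustrative sketch only."""
    if not 0 <= spares <= 2:
        raise ValueError("sparing algorithm takes zero to two spares")
    active = ARRAY_SITE_DDMS - spares   # drives actually in the array
    data = active - 1                   # one drive's worth of capacity is parity
    return data, data * ddm_capacity_gb
```

With one spare and a hypothetical 146 GB DDM, `raid5_layout(1, 146)` gives 6 data DDMs, matching the 6+P+S description.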
The DS Storage Manager GUI guides the user to use the same RAID types in an extent pool.
As such, when an extent pool is defined, it must be assigned the following attributes:
Server affinity
Extent type
RAID type
The minimum number of extent pools is one; however, you would normally want at least two,
one assigned to server 0 and the other assigned to server 1 so that both servers are active. In an
environment where FB and CKD are to go onto the DS8000 storage server, you might want to define
four extent pools, one FB pool for each server, and one CKD pool for each server, to balance the
capacity between the two servers. Of course you could also define just one FB extent pool and assign it
to one server, and define a CKD extent pool and assign it to the other server. Additional extent pools
may be desirable to segregate ranks with different DDM types.
Ranks are organized in two rank groups:
Rank group 0 is controlled by server 0.
Rank group 1 is controlled by server 1.
You can expand extent pools by adding more ranks to an extent pool.
Important: You should balance your capacity between the two servers for optimal performance.
Logical Volumes :
A logical volume is composed of a set of extents from one extent pool.
On the DS8000, up to 65,280 logical volumes can be created.
Fixed Block LUN: a fixed block LUN is composed of a set of FB extents, that is, one or more 1
GB (2^30 byte) extents from one FB extent pool. A LUN cannot span multiple extent pools, but a LUN can
have extents from different ranks within the same extent pool. You can construct LUNs up to a size of 2
TB (2^40 bytes). LUNs can be allocated in binary GB (2^30 bytes), decimal GB (10^9 bytes), or 512 or 520 byte
blocks. However, the physical capacity that is allocated for a LUN is always a multiple of 1 GB, so it is a
good idea to have LUN sizes that are a multiple of a gigabyte. If you define a LUN with a size that
is not a multiple of 1 GB, for example 25.5 GB, the LUN size is 25.5 GB, but 26 GB are physically
allocated and 0.5 GB of the physical storage is unusable.
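The extent-rounding rule above is simple enough to sketch. The function names are invented for this illustration; only the 1 GB (2^30 byte) extent size comes from the text.

```python
import math

EXTENT_BYTES = 2**30   # one fixed block extent = 1 binary GB

def allocated_extents(requested_bytes):
    """Number of whole 1 GB extents physically allocated for a LUN:
    the requested size rounded up to the next extent boundary."""
    return math.ceil(requested_bytes / EXTENT_BYTES)

def unusable_bytes(requested_bytes):
    """Physical storage allocated but unusable for a non-aligned LUN size."""
    return allocated_extents(requested_bytes) * EXTENT_BYTES - requested_bytes
```

For the 25.5 GB example in the text, 26 extents are allocated and half an extent (0.5 GB) is wasted.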
iSeries LUNs:
iSeries LUNs are also composed of fixed block 1 GB extents. There are, however, some special
aspects with iSeries LUNs. LUNs created on a DS8000 are always RAID protected. LUNs are based on
RAID-5 or RAID-10 arrays. However, you might want to deceive OS/400 and tell it that the LUN is not
RAID protected. This causes OS/400 to do its own mirroring. iSeries LUNs can have the attribute
unprotected, in which case the DS8000 will lie to an iSeries host and tell it that the LUN is not RAID
protected.
Logical Subsystem (LSS): a logical construct that groups logical volumes and LUNs into groups of
up to 256 logical volumes.
You can now define up to 255 LSSs for the DS8000. You can even have more LSSs than
arrays.
For each LUN or CKD volume you can now choose an LSS. You can put up to 256 volumes into
one LSS. There is, however, one restriction. We already have seen that volumes are formed from a
bunch of extents from an extent pool. Extent pools, however, belong to one server, server 0 or server 1,
respectively. LSSs also have an affinity to the servers. All even numbered LSSs (X'00', X'02', X'04', up
to X'FE') belong to server 0 and all odd numbered LSSs (X'01', X'03', X'05', up to X'FD') belong to
server 1. LSS X'FF' is reserved.
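The numbering rules above (even LSSs to server 0, odd to server 1, X'FF' reserved) can be captured in a few lines. This is a sketch with invented names; note that 255 usable LSSs of up to 256 volumes each is exactly the 65,280-volume limit quoted earlier.

```python
def lss_server(lss):
    """Owning server (0 or 1) for an LSS number 0x00-0xFE.
    Illustrative helper; LSS X'FF' is reserved on the DS8000."""
    if not 0x00 <= lss <= 0xFE:
        raise ValueError("LSS X'FF' is reserved")
    return lss % 2      # even -> server 0, odd -> server 1

MAX_VOLUMES = 255 * 256   # 255 usable LSSs x 256 volumes per LSS = 65280
```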
Address groups
Address groups are created automatically when the first LSS associated with the address group
is created, and deleted automatically when the last LSS in the address group is deleted.
LSSs are either CKD LSSs or FB LSSs. All devices in an LSS must be either CKD or FB. This
restriction goes even further. LSSs are grouped into address groups of 16 LSSs. LSSs are numbered
X'ab', where a is the address group and b denotes an LSS within the address group. So, for example
X'10' to X'1F' are LSSs in address group 1.
All LSSs within one address group have to be of the same type, CKD or FB. The first LSS
defined in an address group fixes the type of that address group.
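The two rules above, that the high hex digit of X'ab' selects the address group and that the first LSS defined fixes the group's type, can be sketched as follows. The class and method names are assumptions made for this illustration.

```python
class AddressGroups:
    """Illustrative model of address groups of 16 LSSs each."""
    def __init__(self):
        self.group_type = {}    # address group number -> "CKD" or "FB"

    def define_lss(self, lss, dev_type):
        group = lss >> 4        # 'a' in X'ab': e.g. X'10'-X'1F' -> group 1
        # The first LSS defined in a group fixes the group's type; any
        # later LSS of the other type is rejected.
        fixed = self.group_type.setdefault(group, dev_type)
        if fixed != dev_type:
            raise ValueError(f"address group {group:X} is already {fixed}")
        return group
```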
zSeries clients that still want to use ESCON to attach hosts to the DS8000 should be aware of
the fact that ESCON supports only the 16 LSSs of address group 0 (LSS X'00' to X'0F'). Therefore this
address group should be reserved for ESCON-attached CKD devices, in this case, and not used as FB
LSSs.
Volume Access:
Host attachment
HBAs are identified to the DS8000 in a host attachment construct that specifies the HBAs' World
Wide Port Names (WWPNs). A set of host ports can be associated through a port group attribute that
allows a set of HBAs to be managed collectively. This port group is referred to as host attachment
within the GUI.
A given host attachment can be associated with only one volume group. Each host attachment
can be associated with a volume group to define which LUNs that HBA is allowed to access. Multiple
host attachments can share the same volume group. The host attachment may also specify a port
mask that controls which DS8000 I/O ports the HBA is allowed to log in to. Whichever ports the HBA
logs in on, it sees the same volume group that is defined in the host attachment associated with this
HBA.
The maximum number of host attachments on a DS8000 is 8192.
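The relationships described above, a port group of WWPNs tied to at most one volume group plus a port mask, can be modelled in a short sketch. None of these class or method names are DS8000 interfaces; they are assumptions for the illustration.

```python
class HostAttachment:
    """Illustrative host attachment: a set of HBA WWPNs, one volume group,
    and a port mask limiting which I/O ports the HBAs may log in to."""
    def __init__(self, wwpns, volume_group, allowed_ports):
        self.wwpns = set(wwpns)                  # the port group of HBAs
        self.volume_group = volume_group         # at most one volume group
        self.allowed_ports = set(allowed_ports)  # the port mask

    def login(self, wwpn, io_port):
        """Return the volume group the HBA sees, or None if the login is
        blocked by the port mask or the WWPN is not in this attachment."""
        if wwpn in self.wwpns and io_port in self.allowed_ports:
            return self.volume_group   # same volume group on every allowed port
        return None
```

Whichever allowed port the HBA logs in on, `login` returns the same volume group, mirroring the behaviour described in the text.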
Volume Group:
A volume group is a named construct that defines a set of logical volumes. When used in
conjunction with CKD hosts, there is a default volume group that contains all CKD volumes and any
CKD host that logs into a FICON I/O port has access to the volumes in this volume group. CKD logical
volumes are automatically added to this volume group when they are created and automatically
removed from this volume group when they are deleted.
When used in conjunction with Open Systems hosts, a host attachment object that identifies the
HBA is linked to a specific volume group. The user must define the volume group by indicating which
fixed block logical volumes are to be placed in the volume group. Logical volumes may be added to or
removed from any volume group dynamically.
There are two types of volume groups used with Open Systems hosts and the type determines
how the logical volume number is converted to a host addressable LUN_ID on the Fibre Channel SCSI
interface. A map volume group type is used in conjunction with FC SCSI host types that poll for LUNs
by walking the address range on the SCSI interface. This type of volume group can map any FB logical
volume numbers to 256 LUN_IDs that have zeroes in the last six bytes and the first two bytes in the
range of X'0000' to X'00FF'.
A mask volume group type is used in conjunction with FC SCSI host types that use the Report
LUNs command to determine the LUN_IDs that are accessible. This type of volume group can allow
any and all FB logical volume numbers to be accessed by the host where the mask is a bitmap that
specifies which LUNs are accessible. For this volume group type, the logical volume number X'abcd' is
mapped to LUN_ID X'40ab40cd00000000'. The volume group type also controls whether 512 byte
block LUNs or 520 byte block LUNs can be configured in the volume group.
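The two LUN_ID conversions described above can be sketched directly from the text: the mask type maps volume number X'abcd' to LUN_ID X'40ab40cd00000000', while the map type places one of 256 volume numbers in the first two bytes with the last six bytes zero. The function names are invented for this sketch.

```python
def mask_lun_id(volume_number):
    """Report LUNs style LUN_ID for a 16-bit volume number X'abcd':
    X'abcd' -> X'40ab40cd00000000' (per the mask volume group type)."""
    ab = (volume_number >> 8) & 0xFF
    cd = volume_number & 0xFF
    return (0x40 << 56) | (ab << 48) | (0x40 << 40) | (cd << 32)

def map_lun_id(volume_number):
    """LUN-polling style LUN_ID (map volume group type): only 256 values,
    first two bytes X'0000'-X'00FF', last six bytes zero."""
    if not 0 <= volume_number <= 0xFF:
        raise ValueError("map type supports at most 256 LUN_IDs")
    return volume_number << 48
```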
When associating a host attachment with a volume group, the host attachment contains
attributes that define the logical block size and the Address Discovery Method (LUN Polling or Report
LUNs) that are used by the host HBA. These attributes must be consistent with the volume group type
of the volume group that is assigned to the host attachment so that HBAs that share a volume group
have a consistent interpretation of the volume group definition and have access to a consistent set of
logical volume types. The GUI typically sets these values appropriately for the HBA based on the user
specification of a host type. The user must consider what volume group type to create when setting up
a volume group for a particular HBA.
FB logical volumes may be defined in one or more volume groups. This allows a LUN to be
shared by host HBAs configured to different volume groups. An FB logical volume is automatically
removed from all volume groups when it is deleted.