Netapp Guide
In 2002, NetApp wanted to move beyond its NAS-only image and address the SAN market as well, so it renamed its product
line to FAS (Fabric-Attached Storage) to support both NAS and SAN. In the FAS product line, NetApp provides a unified storage solution that supports multiple protocols in a
single system. NetApp storage systems run the Data ONTAP operating system, which is based on Net/2 BSD Unix.
In the past, NetApp provided 7-Mode storage. 7-Mode storage provides dual-controller, cost-effective storage systems. In 2010, NetApp released a new operating
system called Data ONTAP 8, which includes both 7-Mode and Cluster-Mode; you simply choose the mode at controller start-up (similar to a dual-boot OS). In
NetApp Cluster-Mode, you can easily scale out the environment on demand.
From Data ONTAP 8.3 onwards, you no longer have the option to choose 7-Mode; the operating system is available only as clustered Data ONTAP.
1. Supported Protocols:
FC
NFS
FCoE
iSCSI
pNFS
CIFS
It supports:
De-duplication
Compression
Thin Provisioning
Cloning
Snapshot Copies
Asynchronous Mirroring
Disk-to-disk or disk-to-tape backup options
6. Management
Note: When you use both NAS and SAN on the same system, the maximum supported cluster size is eight nodes. A 24-node cluster is possible when the NetApp storage
is used only for NAS.
An HA pair consists of 2 identical controllers; each controller actively provides data services and has redundant cabled paths to the other controller’s disk storage. If either controller is
down for any planned or unplanned reason, its HA partner can take over its storage and maintain access to the data. When the downed system rejoins the cluster, the partner will give
back the storage resources.
Block-Based Protocol : Fibre Channel (FC), Fibre channel over Ethernet (FCoE), Internet SCSI (iSCSI)
6. FC (Fibre Channel)
Fibre Channel, or FC, is a high-speed network technology (commonly running at 2-, 4-, 8- and 16-gigabit per second rates) primarily used to connect computer data storage.
Fibre Channel is a widely used protocol for high-speed communication with storage devices. The Fibre Channel interface provides gigabit network speeds using serial data
transmission over copper wire or optical fiber. The latest version of the FC interface (16GFC) allows transmission of data at up to 16 Gb/s.
Supports SAN, NAS, FC, SATA, iSCSI, FCoE and Ethernet all on the same platform
Supports SATA, FC and SAS disk drives
Supports block protocols such as iSCSI, Fibre Channel and AoE
Supports file protocols such as NFS, CIFS , FTP, TFTP and HTTP
High availability
Easy Management
Scalable
The most common NetApp configuration consists of a filer (also known as a controller or head node) and disk enclosures (also known as shelves). The disk enclosures are
connected by Fibre Channel or parallel/serial ATA, and the filer is then accessed by Linux, Unix or Windows servers via a network (Ethernet or FC). First we need to describe the Data
ONTAP storage model architecture.
As per the above diagram, for a write operation, whenever a write request arrives at the NetApp D-Blade via the N-Blade (through either NAS or SAN protocols), it is cached in the
memory buffer cache and a copy is simultaneously journaled into NVRAM, which is divided into NVLOGs. One important thing to remember is how NetApp uses NVRAM:
NVRAM
NetApp storage systems use several types of memory for data caching. Non-volatile battery-backed memory (NVRAM) is used for write caching, whereas main memory and flash
memory (in the form of either a PCIe extension card or SSD drives) are used for read caching. Before going to the hard drives, all writes are cached in NVRAM. NVRAM is split in half, and
each time 50% of NVRAM fills up, writes are cached to the second half while the first half is written to disks. If NVRAM does not fill within a 10-second interval, a system
timer forces a flush.
To be more precise, when a data block comes into NetApp it's actually written to main memory and then journaled in NVRAM. NVRAM here serves as a backup in case the filer fails. The
active file system pointers on the disk are not updated to point to the new locations until a write is completed. Upon completion of a write to disk, the contents of NVRAM are cleared
and made ready for the next batch of incoming write data. This act of writing data to disk and updating active file system pointers is called a Consistency Point (CP). In FAS32xx series
NVRAM has been integrated into main memory and is now called NVMEM.
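A quick way to watch consistency points on a 7-Mode filer is the sysstat command (it also appears in the command reference later in this guide); the CP-related columns show when CPs fire and how busy CP processing is:
sysstat -x 1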
RAID
RAID (originally redundant array of inexpensive disks, now commonly redundant array of independent disks) is a data storage virtualization technology that combines multiple
physical disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. Using RAID increases performance or provides fault
tolerance or both. Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some of its components.
Note :
RAID uses two or more physical disk drives and a RAID controller. The RAID controller acts as an interface between the host and the disks.
RAID 0:
A popular disk subsystem that increases performance by interleaving data across two or more drives. Data are broken into blocks, called "stripes," and alternately written to two or
more drives simultaneously to increase speed. For example, stripe 1 is written to drive 1 at the same time stripe 2 is written to drive 2. Then stripes 3 and 4 are written to drives 3
and 4 simultaneously and so on. When reading, stripes 1 and 2 are read simultaneously; then stripes 3 and 4 and so on.
RAID 1:
A popular disk subsystem that increases safety by writing the same data on two drives. Called "mirroring," RAID 1 does not increase performance. However, if one drive fails, the
second drive is used, and the failed drive is manually replaced. After replacement, the RAID controller duplicates the contents of the working drive onto the new one.
RAID 10 and RAID 01:
A RAID subsystem that increases safety by writing the same data on two drives (mirroring), while increasing speed by interleaving data across two or more mirrored "virtual"
drives (striping). RAID 10 provides the most security and speed but uses more drives than the more common RAID 5 method.
RAID Parity:
Parity computations are used in RAID drive arrays for fault tolerance by calculating the data in two drives and storing the results on a third. The parity is computed by XORing a
bit from drive 1 with a bit from drive 2 and storing the result on drive 3. After a failed drive is replaced, the RAID controller rebuilds the lost data from the other two drives. RAID
systems often have a "hot" spare drive ready and waiting to replace a drive that fails.
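A minimal worked example: if the corresponding bits on drive 1 and drive 2 are 1 and 0, the parity drive stores 1 XOR 0 = 1; if drive 2 later fails, its bit is rebuilt as 1 XOR 1 = 0 from drive 1 and the parity drive.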
RAID 2:
RAID 2, which is rarely used in practice, stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. Hamming-code parity is calculated across
corresponding bits and stored on at least one parity drive.
RAID 3:
RAID 3 consists of byte-level striping with dedicated parity. RAID 3 stripes data for performance and uses parity for fault tolerance. Parity information is stored on a dedicated drive
so that the data can be reconstructed if a drive fails in a RAID set. For example, in a set of five disks, four are used for data and one for parity. Although implementations exist,
RAID 3 is not commonly used in practice. RAID 3 provides good performance for applications that involve large sequential data access, such as data backup or video streaming.
RAID 4:
RAID 4 is very similar to RAID 3. The main difference is the way data is laid out: data is divided into blocks (16, 32, 64 or 128 kB) and written across the disks, similar to RAID 0. For
each row of written data, a parity block is written to a dedicated parity disk. RAID 4 uses block-level striping.
RAID 5:
RAID 5 is a versatile RAID implementation. It is similar to RAID 4 because it uses striping. The drives (strips) are also independently accessible. The difference between RAID 4 and
RAID 5 is the parity location. In RAID 4, parity is written to a dedicated drive, creating a write bottleneck for the parity disk. In RAID 5, parity is distributed across all disks to
overcome the write bottleneck of a dedicated parity disk.
RAID 6:
RAID 6 works the same way as RAID 5, except that RAID 6 includes a second parity element to enable survival if two disk failures occur in a RAID set. Therefore, a RAID 6
implementation requires at least four disks. RAID 6 distributes the parity across all the disks. The write penalty in RAID 6 is more than that in RAID 5; therefore, RAID 5 writes
perform better than RAID 6. The rebuild operation in RAID 6 may take longer than that in RAID 5 due to the presence of two parity sets.
RAID DP:
RAID DP used as RAID 4 first a horizontal parity (P). As an extension of RAID 4, RAID-DP adds a diagonal parity (DP). The double parity up to two drives fail without resulting in
the RAID group to data loss. RAID-DP fulfills the requirements for a RAID 6 according SNIA definition. NetApp RAID-DP uses two parity disks per RAID group. One parity disk stores
parity calculated for horizontal stripes, as described earlier. The second parity disk stores parity calculated from diagonal stripes.
The diagonal parity stripe includes a block from the horizontal parity disk as part of its calculation. RAID-DP treats all disks in the original RAID 4 construct—including both data
and parity disks—the same. Note that one disk is omitted from the diagonal parity stripe.
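For reference, an aggregate like the one described below can be created with a command of the following form (the name aggr1 and the five-disk count are taken from the example that follows; RAID-DP is assumed as the default RAID type):
aggr create aggr1 5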
This caused Data ONTAP to create an aggregate named aggr1 with five disks in it. Let’s take a look at this with the following command:
sysconfig -r
If you notice aggr1, you can see that it contains 5 disks. Three disks are data disks and there are two parity disks, “parity” and “dparity”. The RAID group was created
automatically to support the aggregate. If I need more space, I can add disks to the aggregate and they will be inserted into the existing RAID group within the aggregate. I can add 3
disks with the following command:
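Assuming the aggregate is named aggr1 as above, a likely form of that command is:
aggr add aggr1 3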
Before the disks can be added, they must be zeroed. If they are not already zeroed, then Data ONTAP will zero them first. This may take a significant amount of time.
Raid groups
Before all the physical hard disk drives (HDDs) are pooled into a logical construct called an aggregate (which is what ONTAP's FlexVol is built on), the HDDs are grouped into a RAID
group. A RAID group is also a logical construct, combining HDDs into data and parity disks. The RAID group is the building block of the aggregate.
Raid groups are protected sets of disks, consisting of one or two parity disks and one or more data disks. We don't build raid groups ourselves; they are built automatically behind the scenes when you
build an aggregate. For example:
In a default configuration you are configured for RAID-DP and a 16-disk raid group (assuming FC/SAS disks). So, if I create a 16-disk aggregate I get one raid group; if I create a 32-disk
aggregate, I get two raid groups. Raid groups can be adjusted in size: for FC/SAS they can be anywhere from 3 to 28 disks, with 16 being the default. An aggregate is made of raid
groups. Let's do a few examples using the command to make an aggregate:
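For instance, a sketch of a 16-disk aggregate (the aggregate name is a placeholder):
aggr create aggr2 16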
If the default raid group size is 16, then the aggregate will have one raid group. But, if I use the command:
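A likely form of that command (the aggregate name is a placeholder):
aggr create aggr3 32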
Now I have two full raid groups, but still only one aggregate. So the aggregate gets the performance benefit of two raid groups' worth of disks. Notice we did not build a raid group;
Data ONTAP built the RGs based on the default RG size.
If I had created an aggregate with 24 disks, then Data ONTAP would have created two RAID groups. The first RAID group would be fully populated with 16 disks (14 data disks and
two parity disks) and the second RAID group would have contained 8 disks (6 data disks and two parity disks). This is a perfectly normal situation. For the most part, it is safe to
ignore RAID groups and simply let Data ONTAP take care of things.
Volumes
Volumes are data containers. A volume is analogous to a partition: it's where you put data. Think of the previous analogy: an aggregate is the raw space (the hard drive), and the
volume is the partition where you put the file system and data. Some other similarities include the ability to have multiple volumes per aggregate, just as you can have multiple
partitions per hard drive, and the ability to grow and shrink volumes, just as you can grow and shrink partitions.
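As a minimal 7-Mode sketch (the volume name, aggregate name, and size are placeholders), a 100 GB volume can be created in an aggregate with:
vol create vol1 aggr1 100g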
Qtrees
A qtree is analogous to a subdirectory. Continuing the analogy: the aggregate is the hard drive, the volume is the partition, and the qtree is a subdirectory. Why use them? To sort data,
for the same reason you use directories on your personal PC. There are five things you can do with a qtree that you can't do with a plain directory, which is why they aren't just called directories:
Oplocks
Security style
Quotas
Snapvault
Qtree SnapMirror
Quotas:
Quotas provide a way to restrict or track the disk space and number of files used by a user, group, or qtree. You specify quotas using the /etc/quotas file. A quota limits the
amount of disk space and the number of files that a particular user or group can consume. A quota can also restrict the total space and files used in a qtree, or the usage of users and
groups within a qtree. A request that would cause a user or group to exceed an applicable quota fails with a "disk quota exceeded" error. A request that would cause the number of
blocks or files in a qtree to exceed the qtree's limit fails with an "out of disk space" error.
User and group quotas do not apply to the root user or to the Windows Administrator account; tree quotas, however, do apply even to root and the Windows Administrator
account.
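A hedged sketch of an /etc/quotas entry (the qtree path and limits are placeholders): a tree quota limiting /vol/vol1/qtree1 to 10 GB of disk space and 50K files could look like this, followed by activating quotas on the volume:
/vol/vol1/qtree1    tree    10G    50K
quota on vol1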
Before going to SnapVault and SnapMirror, we have to know what snapshots are.
Snapshots
NetApp Snapshot is a patented NetApp technology that allows the storage admin to take backups of files locally on the storage box and to restore files very quickly in case
of file corruption or accidental deletion. To understand snapshots, we first have to go through the concept of the AFS (Active File System).
Active file system
NetApp writes data using a 4 KB block size, so whenever we write data into the NetApp file system it breaks the file up into 4 KB blocks. For example, if we want to write a file ABC,
it is written as shown below. In the above example, ABC uses four NetApp blocks. So at any point in time a file is represented by the active file system.
Definition of snapshot on the basis of active file system
Whenever we take a snapshot of a file, all the blocks constituting the file become frozen; after taking the snapshot, the blocks constituting the file cannot
be altered or deleted. When an end user makes changes to that file, the changes are written to new blocks.
For example, if an end user makes a change in file ABC that converts block C into C', the new file becomes ABC'. So, if the storage admin has taken a snapshot, the file is
written as shown below.
Definition-wise, a snapshot can be described as a read-only image of the active file system at a point in time.
As described above, a snapshot is a read-only image of the active file system at a point in time. So if, after the first snapshot (which froze the ABC blocks), the file ABC' gets
deleted or corrupted and the last image of that file needs to be retrieved, the original ABC blocks are still there because the snapshot froze them. Retrieval only changes the
pointers back to the previously frozen blocks, and within seconds the file is restored.
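A minimal 7-Mode sketch of taking and listing snapshots manually (the volume and snapshot names are placeholders):
snap create vol1 before_change
snap list vol1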
Some other patented NetApp technologies are fully based on the snapshot engine. Below is a description of those technologies.
1. Snaprestore.
2. Snapmirror.
3. Snapvault.
Above is the data backup and recovery spectrum in the NetApp FAS storage system. Using snapshots alone we can back up a whole volume or aggregate of files, but we can only restore
individual files, which is not practical in every case; so there are other technologies that use the snapshot engine but provide more features and granularity to the storage admin.
* Snap restore enables the storage admin to recover a whole qtree or volume using a single command (see the sketch after this list).
* Snapmirror that provide the feature of DR (Disaster recovery solution) works on snapshot engine technology.
* Snapvault is the technology that enable the storage admin to take backup of data on a remote storage.
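As referenced above, a minimal 7-Mode sketch of SnapRestore at the volume level (the volume and snapshot names are placeholders, and SnapRestore requires its own license):
snap restore -t vol -s nightly.0 vol1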
Both SnapVault and SnapMirror use the snapshot engine and seem similar in functionality, but there is a very basic difference between the two technologies.
Snapvault
SnapVault is a feature that provides a backup solution to a remote storage system independent of the remote storage type, meaning we can take backups to a non-NetApp storage system
as well; in NetApp, taking a backup to non-NetApp storage is achieved using OSSV (Open Systems SnapVault).
Snapmirror
SnapMirror is a DR solution provided by NetApp: we enable a SnapMirror relationship between two NetApp systems, and in case of disaster the storage admin can route users
to access the data from the replica (mirror) storage.
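A minimal 7-Mode sketch of establishing and checking such a relationship (the filer and volume names are placeholders; the destination volume must already exist and be restricted, and SnapMirror must be licensed on both filers). Run on the destination:
snapmirror initialize -S srcfiler:vol1 dstfiler:vol1_mirror
snapmirror status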
https://netappnotesark.blogspot.com/2015/08/index_12.html
To identify a failed disk, you can verify it physically or logically.
NetappFiler>vol status -f
NetappFiler>aggr status -f
If you are in front of the disk shelf, run the commands below to confirm the disk; either way, you will see its LED light up RED.
NetappFiler*> blink_on DiskName (disk LED starts blinking)
NetappFiler*> blink_off DiskName (disk LED stops blinking)
NetappFiler*> led_on DiskName (disk LED turns on)
NetappFiler*> led_off DiskName (disk LED turns off)
Use this step in extreme cases only. You can also bring a failed disk back online forcefully, but the same disk may fail again after
a few hours; if you do not have any spare drives, you can try this as a temporary solution.
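A hedged sketch of that forced recovery (the disk name is a placeholder; disk unfail is an advanced-privilege command, so use it with care):
NetappFiler> priv set advanced
NetappFiler*> disk unfail -s 1a.00.5
NetappFiler*> priv set admin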
Otherwise, replace the disk by accessing the disk shelf physically, then follow the steps to use the new disk.
Ans:- There is no single direct answer to this question, but it can be approached in several ways.
If the volume/LUN resides on an ATA/SATA disk aggregate, the volume can be migrated to an FC/SAS disk aggregate. Alternatively, Flash Cache can be used to
improve performance.
For NFS/CIFS instead of accessing from single interface, multi mode vif can be configured to get better bandwidth and fault tolerance.
Aggr/volume/lun reallocation can be done to redistribute the data across multiple disks for better striping performance (see the sketch after this list).
Create multiple loops and connect different types of shelves to each loop.
Avoid mixing disks of different speeds and different types in the same aggregate.
Always keep sufficient spare disks to replace failed disks, because reconstruction takes a long time and hurts performance.
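As mentioned in the reallocation point above, a minimal 7-Mode sketch of measuring and then starting reallocation on a volume (the path is a placeholder):
NetappFiler> reallocate measure /vol/vol1
NetappFiler> reallocate start -f /vol/vol1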
2. Unable to map a LUN to a Solaris server, but there is no issue on the Solaris server side. How do you resolve the issue?
Ans:-
Netapp>qtree create /vol/vol1/qtreename
Netapp>qtree security /vol/vol1/qtree unix|ntfs|mixed
5. How to copy volume filer to filer?
Ans:-
Netapp> aggr add AggName no.of.disk
Flexible Volume
8. What is qtree?
Ans:- 5%
Ans:-
A Snapshot copy is a read-only image of a traditional or FlexVol volume, or an aggregate, that captures the state of the file system at a point in time.
11. What are the raid groups Netapp supporting?, what is the difference between them?
Ans:-
Supported RAID types:
Raid-4
Raid-6
Raid-Dp
Ans:-
iSCSI sends blocks over TCP/IP. iSCSI does not require a dedicated network; it also works on the existing network.
FCP sends blocks over a fibre medium and requires a dedicated FC network. Performance is much higher compared to iSCSI.
15. What is the difference between ndmp copy and vol copy?
In ONTAP 7 an individual aggregate is limited to a maximum of 16 TB, whereas ONTAP 8 supports the new 64-bit aggregates and hence the size of an individual
aggregate extends to 100 TB.
For each source volume or qtree to replicate, perform an initial baseline transfer. For volume SnapMirror, first restrict the destination volume.
Then initialize the volume SnapMirror baseline, using the following syntax on the destination:
snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol
For a qtree SnapMirror baseline transfer, use the following syntax on the destination:
snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree
18. While doing baseline transfer you’re getting error message. What are the troubleshooting steps you’ll do?
Ans:-
Check that both hosts are reachable by running the "ping" command
Check whether TCP ports 10566 and 10565 are open on the firewall
Check whether the SnapMirror licenses are installed on both filers
The SnapMirror Async mode replicates Snapshot copies from a source volume or qtree to a destination volume or qtree. It supports replication over distances of more than 800 km.
Incremental updates are based on a schedule or are performed manually using the snapmirror update command. Async mode works with both
volume SnapMirror and qtree SnapMirror.
SnapMirror Sync mode replicates writes from a source volume to a destination volume at the same time they are written to the source volume. SnapMirror Sync is
used in environments that have zero tolerance for data loss. It does not support distances of more than 300 km.
SnapMirror Semi-Sync provides a middle-ground solution that keeps the source and destination systems more closely synchronized than Async mode, but with
less impact on performance.
In the context of disk storage, De-duplication refers to any algorithm that searches for duplicate data objects (for example, blocks, chunks, files) and discards
those duplicates. When duplicate data is detected, it is not retained, but instead a “data pointer” is modified so that the storage system references an exact
copy of the data object already stored on disk. This de-duplication feature works well with datasets that have lots of duplicated data (for example, full
backups).
22. What is the command used to see amount of space saved using De-duplication?
df –s <volume name>
sis status
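For context, a minimal 7-Mode sketch of enabling and running de-duplication on a volume before checking the savings (the volume name is a placeholder):
NetappFiler> sis on /vol/vol1
NetappFiler> sis start -s /vol/vol1
NetappFiler> sis status /vol/vol1
NetappFiler> df -s vol1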
Metadata is defined as data providing information about one or more aspects of the data, such as:
1. Inode file
2. Used block bitmap file
3. Free block bitmap file
27. After creating a LUN (iSCSI) and mapping the LUN to a particular igroup, the client is not able to access the LUN. What troubleshooting steps do you take?
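A hedged sketch of the usual filer-side checks (the LUN path is a placeholder): confirm the LUN is online and mapped, the igroup contains the client's initiator name (IQN) with the correct OS type, and the iSCSI service is running, then rescan from the host:
NetappFiler> lun show -m
NetappFiler> lun online /vol/vol1/lun1
NetappFiler> igroup show
NetappFiler> iscsi status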
cifs top
cifs stat
30. What do you do if a customer reports a particular CIFS share is responding slow?
31. What is degraded mode? What happens if you don't have a spare to replace the failed disk?
If a spare disk is not added within 24 hours, the filer will shut down automatically to avoid further disk failures and data loss.
32. Did you ever do ontap upgrade? From which version to which version and for what reason?
Yes, I have done an ONTAP upgrade from version 7.2.6.1 to 7.3.3, due to a lot of bugs in the old version.
You can monitor the filer using DFM (Data Fabric Manager), SNMP, or any monitoring system such as Nagios.
Shelf connections should be properly done for both controllers, with Path 1 and Path 2.
If the partner's shelf power is off and you try a takeover, it will not take over; if you force it using (-f), it will work.
A Vserver is defined as a logical container which holds the volumes. A 7-Mode vFiler is called a Vserver in Clustered mode.
NetApp Infinite Volume is a software abstraction hosted over clustered Data ONTAP
# NFS TROUBLESHOOTING
Error Explanation:
A “stale NFS file handle” error message is usually caused by the following sequence of events:
1. A certain file or directory that is on the NFS server is opened by the NFS client
2. That specific file or directory is deleted either on that server or on another system that has access to the same share
3. Then that file or directory is accessed on the client
A file handle usually becomes stale when a file or directory referenced by the file handle on the client is removed by another host, while your
client is still holding on to an active reference to that object.
Resolution Tips
Use ping to contact the hostname of the storage system (server) from client
Use ping to contact the client from the storage system
Check ifconfig from the storage system
Check that the correct NFS version is enabled
Check all nfs options on the storage system
Check /etc/rc file for nfs options
Check nfs license
Error Explanation: Trying to access an NFS share from a server that does not have permission to it.
Resolution Tips
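A hedged sketch of the usual checks for this case (the export path and client address are placeholders): review the export rules, re-export them, and check what access the client actually gets:
NetappFiler> rdfile /etc/exports
NetappFiler> exportfs -a
NetappFiler> exportfs -c 10.10.10.5 /vol/vol1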
Clear Screen
To clear a screen cluttered with commands and command output, just press CTRL+L.
If you get stuck in the middle of a task because you do not remember a command and need help, use the built-in man pages to get help from the command line.
Cluster::> rows 45
1. Privilege as Advanced
2. Privilege as Admin
3. Privilege as Diag
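These levels are switched with the set command (the same commands appear in the clustered ONTAP command list later in this guide):
Cluster::> set -privilege advanced
Cluster::> set -privilege admin
Cluster::> set -privilege diagnostic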
Note: Use the advanced mode shown above only when you have a compulsory requirement; it is strictly not required for normal usage.
The diag level can be used for diagnostic log collection or deep performance digging.
Setting the privilege back to admin brings us out of diag or advanced mode into admin mode.
TOP command
The top command is used to jump from any directory level in the command hierarchy straight back to the top, i.e. out of the directory path.
UP Command
Cluster:: aggregate> up
Cluster:: storage>
This post contains the list of commands that will be most used and will come in handy when managing, monitoring, or troubleshooting a Netapp
filer in 7-mode.
sysconfig -a : shows hardware configuration with more verbose information
sysconfig -d : shows information of the disk attached to the filer
version : shows the netapp Ontap OS version.
uptime : shows the filer uptime
dns info : this shows the dns resolvers, the no of hits and misses and other info
nis info : this shows the nis domain name, yp servers etc.
rdfile : Like “cat” in Linux, used to read contents of text files
wrfile : Creates/Overwrites a file.
aggr status : Shows the aggregate status
aggr status -r : Shows the raid configuration, reconstruction information of the disks in filer
aggr show_space : Shows the disk usage of the aggregate, WAFL reserve, overheads etc.
vol status : Shows the volume information
vol status -s : Displays the spare disks on the filer
vol status -f / aggr status -f : Displays the failed disks on the filer
vol status -r : Shows the raid configuration, reconstruction information of the disks
df -h : Displays volume disk usage
df -i : Shows the inode counts of all the volumes
df -Ah : Shows “df” information of the aggregate
license : Displays/add/removes license on a netapp filer
maxfiles : Displays and adds more inodes to a volume
aggr create <Aggr Name> <Disk Names> : Creates aggregate
vol create : Creates volume in an aggregate
vol offline : Offlines a volume
vol online : Onlines a volume
vol destroy : Destroys and removes a volume
vol size [+|-] : Resize a volume in netapp filer
vol options : Displays/Changes volume options in a netapp filer
qtree create : Creates qtree
qtree status : Displays the status of qtrees
quota on : Enables quota on a netapp filer
quota off : Disables quota
quota resize : Resizes quota
quota report : Reports the quota and usage
snap list : Displays all snapshots on a volume
snap create : Create snapshot
snap sched : Schedule snapshot creation
snap reserve : Display/set snapshot reserve space in volume
/etc/exports : File that manages the NFS exports
rdfile /etc/exports : Read the NFS exports file
wrfile /etc/exports : Write to NFS exports file
exportfs -a : Exports all the filesystems listed in /etc/exports
cifs setup : Setup cifs
cifs shares : Create/displays cifs shares
cifs access : Changes access of cifs shares
lun create : Creates iscsi or fcp luns on a netapp filer
lun map : Maps lun to an igroup
lun show : Show all the luns on a filer
igroup create : Creates netapp igroup
lun stats : Show lun I/O statistics
disk show : Shows all the disk on the filer
disk zero spares : Zeros the spare disks
disk_fw_update : Upgrades the disk firmware on all disks
options : Display/Set options on netapp filer
options nfs : Display/Set NFS options
options timed : Display/Set NTP options on netapp.
options autosupport : Display/Set autosupport options
options cifs : Display/Set cifs options
options tcp : Display/Set TCP options
options net : Display/Set network options
ndmpcopy : Initiates ndmpcopy
ndmpd status : Displays status of ndmpd
ndmpd killall : Terminates all the ndmpd processes.
ifconfig : Displays/Sets IP address on a network/vif interface
vif create : Creates a VIF (bonding/trunking/teaming)
vif status : Displays status of a vif
netstat : Displays network statistics
sysstat -us 1 : begins a 1 second sample of the filer’s current utilization (ctrl-c to end)
nfsstat : Shows nfs statistics
nfsstat -l : Displays nfs stats per client
nfs_hist : Displays nfs histogram
statit : begins/ends a performance workload sampling [-b starts / -e ends]
stats : Displays stats for every counter on netapp. Read stats man page for more info
ifstat : Displays Network interface stats
qtree stats : displays I/O stats of qtree
environment : display environment status on shelves and chassis of the filer
storage show <disk|shelf|adapter> : Shows storage component details
snapmirror initialize : Initializes a snapmirror relation
snapmirror update : Manually update a snapmirror relation
snapmirror resync : Resyncs a broken snapmirror
snapmirror quiesce : Quiesces a snapmirror bond
snapmirror break : Breaks a snapmirror relation
snapmirror abort : Abort a running snapmirror
snapmirror status : Shows snapmirror status
lock status -h : Displays locks held by filer
sm_mon : Manage the locks
storage download shelf : Installs the shelf firmware
software get : Download the Netapp OS software
software install : Installs OS
download : Updates the installed OS
cf status : Displays cluster status
cf takeover : Takes over the cluster partner
cf giveback : Gives back control to the cluster partner
reboot : Reboots a filer
MISC
set -privilege advanced (Enter into privilege mode)
set -privilege diagnostic (Enter into diagnostic mode)
set -privilege admin (Enter into admin mode)
system timeout modify 30 (Sets system timeout to 30 minutes)
system node run -node local sysconfig -a (Run sysconfig on the local node)
The symbol ! means other than in clustered ontap i.e. storage aggregate show -state !online (show all aggregates that are not online)
node run -node <node_name> -command sysstat -c 10 -x 3 (Running the sysstat performance tool with cluster mode)
system node image show (Show the running Data Ontap versions and which is the default boot)
dashboard performance show (Shows a summary of cluster performance including interconnect traffic)
node run * environment shelf (Shows information about the Shelves Connected including Model Number)
DIAGNOSTICS USER CLUSTERED ONTAP
security login unlock -username diag (Unlock the diag user)
security login password -username diag (Set a password for the diag user)
security login show -username diag (Show the diag user)
SNAPVAULT
snapmirror create -source-path vserver1:vol5 -destination-path vserver2:vol5_archive -type XDP -schedule 5min -policy backup-vspolicy
(Create snapvault relationship with 5 min schedule using backup-vspolicy)
NOTE: Type DP (asynchronous), LS (load-sharing mirror), XDP (backup vault, snapvault), TDP (transition), RST (transient restore)
NETWORK INTERFACE
network interface show (show network interfaces)
network port show (Shows the status and information on current network ports)
network port modify -node * -port <vif_name> -mtu 9000 (Enable Jumbo Frames on interface <vif_name>)
network port modify -node * -port <data_port_name> -flowcontrol-admin none (Disables Flow Control on port <data_port_name>)
network interface revert * (revert all network interfaces to their home port)
INTERFACE GROUPS
ifgrp create -node <node_name> -ifgrp <vif_name> -distr-func ip -mode multimode (Create an interface group called <vif_name> on <node_name>)
network port ifgrp add-port -node <node_name> -ifgrp <vif_name> -port <port_name> (Add a port to <vif_name>)
net int failover-groups create -failover-group data_<name>_fg -node <node_name> -port <port_name> (Create a failover group - complete on both nodes)
ifgrp show (Shows the status and information on current interface groups)
net int failover-groups show (Show Failover Group Status and information)
ROUTING GROUPS
network interface show-routing-group (show routing groups for all vservers)
network routing-groups show -vserver vserver1 (show routing groups for vserver1)
network routing-groups route create -vserver vserver1 -routing-group 10.1.1.0/24 -destination 0.0.0.0/0 -gateway 10.1.1.1 (Creates a
default route on vserver1)
ping -lif-owner vserver1 -lif data1 -destination www.google.com (ping www.google.com via vserver1 using the data1 port)
DNS
services dns show (show DNS)
UNIX
vserver services unix-user show
vserver services unix-user create -vserver vserver1 -user root -id 0 -primary-gid 0 (Create a unix user called root)
vserver name-mapping create -vserver vserver1 -direction win-unix -position 1 -pattern (.+) -replacement root (Create a name mapping from
windows to unix)
vserver name-mapping create -vserver vserver1 -direction unix-win -position 1 -pattern (.+) -replacement sysadmin011 (Create a name mapping
from unix to windows)
vserver name-mapping show (Show name-mappings)
NIS
vserver services nis-domain create -vserver vserver1 -domain vmlab.local -active true -servers 10.10.10.1 (Create nis-domain called
vmlab.local pointing to 10.10.10.1)
vserver modify -vserver vserver1 -ns-switch nis-file (Name Service Switch referencing a file)
vserver services nis-domain show
NTP
system services ntp server create -node <node_name> -server <ntp_server> (Adds an NTP server to <node_name>)
system services ntp config modify -enabled true (Enable ntp)
system node date modify -timezone <Area/Location> (Sets the timezone, e.g. Australia/Sydney)
node date show (Show date on all nodes)
DATE AND TIME
timezone -timezone Australia/Sydney (Sets the timezone for Sydney. Type ? after -timezone for a list)
date 201307090830 (Sets date for yyyymmddhhmm)
date -node <node_name> (Displays the date and time for the node)
CONVERGED NETWORK ADAPTERS (FAS 8000)
ucadmin show -node NODENAME (Show CNA ports on specific node)
ucadmin modify -node NODENAME -adapter 0e -mode cna (Change adapter 0e from FC to CNA. NOTE: A reboot of the node is required)
PERFORMANCE
statistics show-periodic -object volume -instance volumename -node node1 -vserver vserver1 -counter total_ops|avg_latency|read_ops|read_latency (Show
the specific counters for a volume)
statistics show-periodic -object nfsv3 -instance vserver1 -counter
nfsv3_ops|nfsv3_read_ops|nfsv3_write_ops|read_avg_latency|write_avg_latency (Shows the specific nfsv3 counters for a vserver)
sysstat -x 1 (Shows counters for CPU, NFS, CIFS, FCP, WAFL)
Direct-attached storage (DAS) is digital storage directly attached to the computer accessing it, as opposed to storage accessed over a
computer network. Examples of DAS include hard drives, optical disc drives, and storage on external drives.
The screen above shows that the storage device is directly attached to the server.
Example: if you attach an external USB drive to your server or desktop, that is the best example of DAS.
In the same way, we can attach SCSI and SAS storage devices such as HDDs and RAID controllers.
Disadvantages
An initial investment in a server with built in storage can meet the needs of a small organization for a period of time. But as data is added
and the need for storage capacity increases, the server has to be taken out of service to add additional drives.
DAS expansion generally requires the expertise of an IT professional, which means either staffing someone or taking on the expense of a
consultant.
A Host Bus Adapter can only support a limited number of drives. For environments with stringent up time requirements, or for
environments with rapidly increasing storage requirements, DAS may not be the right choice.
Advantages
SAN Architecture facilitates scalability - Any number of storage devices can be added to store hundreds of terabytes.
SAN reduces down time - We can upgrade our SAN, replace defective drives, and back up our data without taking any servers offline. A well-configured SAN with
mirroring and redundant servers can bring zero downtime.
Sharing SAN is possible - As a SAN is not directly attached to any particular server or network, it can be shared by all servers.
SAN provides long distance connectivity - With Fibre Channel capable of running up to 10 kilometers, we can keep our data in a remote,
physically secure location. Fibre channel switching also makes it very easy to establish private connections with other SANs for mirroring,
backup, or maintenance.
SAN is truly versatile - A SAN can be single entity, a master grouping of several SANs and can include SANs in remote locations.
Disadvantages
Leveraging existing technology investments tends to be much more difficult. Though a SAN facilitates making use of already existing legacy
storage, a lack of SAN-building skills has greatly diminished deployment of homegrown SANs.
Management of SAN systems has proved to be a real tough job for various reasons. Also, for some, having a SAN storage facility seems wasteful.
In addition, there are only a few SAN product vendors, due to its very high price, and only a few large enterprises need a SAN setup.
Network-attached storage (NAS) is a type of dedicated file storage device that provides local area network (LAN) nodes
with file-based shared storage through a standard Ethernet connection.
Network Attached Storage Example
NAS devices, which typically do not have a keyboard or display, are configured and managed with a Web-based utility program. Each NAS
resides on the LAN as an independent network node and has its own IP address.
Advantages
NAS systems store data as files and support both the CIFS and NFS protocols. They can be accessed easily over the commonly used TCP/IP
Ethernet-based networks and support multiple users connecting to them simultaneously.
Entry level NAS systems are quite inexpensive – they can be purchased for capacities as low as 1 or 2 TB with just two disks. This
enables them to be deployed with Small and Medium Business (SMB) networks easily.
A NAS device may support one or more RAID levels to make sure that individual disk failures do not result in loss of data.
A NAS appliance comes with a GUI based web based management console and hence can be centrally accessed and administered from
remote locations over the TCP/IP networks including Internet/ VPN / Leased Lines etc.
NAS appliances are connected to the Ethernet network. Hence servers accessing them can also be connected to the Ethernet network.
So, unlike SAN systems, there is no need for expensive HBA adapters or specialized switches for storage or specialized skills required to set
up and maintain the NAS systems. With NAS, it's simple and easy.
Ethernet networks are scaling up to support higher throughputs – Currently 1 GE and 10GE throughputs are possible. NAS systems are
also capable of supporting such high throughputs as they use the Ethernet based networks and TCP/IP protocol to transport data.
The management tools required to manage the Ethernet network are well established and hence no separate training is required for
setting up and maintaining a separate network for storage unlike SAN systems.
Disadvantages
Transaction-intensive databases, ERP, CRM systems and other high-performance-oriented data are better off stored in a SAN (Storage
Area Network) than NAS, as the former creates a network that has low latencies and is reliable, lossless and faster. Also, for large, heterogeneous
block data transfers, SAN might be more appropriate.
At the end of the day, NAS appliances are going to share the network with their computing counterparts and hence the NAS solution
consumes more bandwidth from the TCP/IP network. Also, the performance of the remotely hosted NAS will depend upon the amount of
bandwidth available for Wide Area Networks and again the bandwidth is shared with computing devices. So, WAN optimization needs to be
performed for deploying NAS solutions remotely in limited bandwidth scenarios.
Ethernet is a lossy environment, which means packet drops and network congestion are inevitable. So, the performance and architecture
of the IP networks are very important for effective high volume NAS solution implementation at least till the lossless Ethernet framework is
implemented.
For techniques like Continuous Data Protection, taking frequent Disk Snapshots for backup etc, block level storage with techniques like
Data De-duplication as available with SAN might be more efficient than NAS.
Sometimes, the IP network might get congested if operations like huge data back up is done during business hours.
UNIFIED STORAGE
Unified storage is a storage system that makes it possible to run and manage files and applications from a single device. To this end, a
unified storage system consolidates file-based and block-based access in a single storage platform and supports fibre channel SAN, IP-based
SAN (iSCSI), and NAS.
V-Series: Virtualization Series; this series is mostly used to virtualize SANs together with other vendors' SAN devices, to reduce cost in real terms.
E-Series: The E-Series is NetApp's name for new platforms resulting from the acquisition of Engenio. Aimed at the storage stress resulting
from high-performance computing (HPC) applications, NetApp offers full-motion video storage built on the E-Series Platform that enables, for
example, government agencies to take advantage of full-motion video and improve battlefield intelligence. Additionally, NetApp offers a
Hadoop Storage Solution on the E-Series that is designed to enable real-time or near-real-time data analysis of larger and more complex
datasets.
----------
WHAT IS A CONSISTENCY POINT, AND HOW DOES IT DIFFER FROM A SNAPSHOT?
Consistency Point: A CP is triggered whenever the file system reaches a point where it wants to update the physical data on the disks with
whatever has accumulated in cache (and was journaled in NVRAM).
Snapshot: A snapshot is created whenever the snap schedule is configured to trigger it, or whenever any other operation (SnapManager, SnapDrive,
SnapMirror, SnapVault, or an administrator) creates a new snapshot. Creating a snapshot also triggers a CP, because the snapshot is always a
consistent image of the file system at that point in time.
Copy the script below and paste it into a file. Save the file with a .sh extension and give it executable permissions.
Schedule this script using crontab based on your requirement, for example every 4 hours.
NetApp® FlexClone® technology instantly replicates data volumes and datasets as transparent, virtual copies—true clones—without
compromising performance or demanding additional storage space.
Using a recent snapshot we can create a FlexClone, and splitting the clone creates an actual independent copy of the volume.
If you need a temporary copy of your data that can be made quickly and without using a lot of disk space, you can create a FlexClone
volume. FlexClone volumes save data space because all unchanged data blocks are shared between the FlexClone volume and its parent.
Check whether a recent snapshot exists in your volume using the command below.
NetappFiler> snap list Volume1
Volume Volume1
working...
Estimate whether the aggregate has enough space to split your volume using the command below.
NetappFiler> vol clone split estimate CLONE
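A minimal 7-Mode sketch of creating the clone from such a snapshot and later splitting it off (the clone, parent volume, and snapshot names are placeholders):
NetappFiler> vol clone create CLONE -s none -b Volume1 snapshot_name
NetappFiler> vol clone split start CLONE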
THIN PROVISIONING:
Thin provisioning is used in large environments because we can assign more space than we actually have. For example, if you have 1 TB of storage
space in the SAN, you can still allocate 1.5 TB to the clients; with thin provisioning enabled, the SAN consumes only what has actually been used.
If you look closely at the above example, after allocating 1.5 TB of space to all the clients we still have space available, because thin
provisioning only takes used space into consideration.
Thin provisioning is enabled on NetApp storage by setting the appropriate option on a volume or LUN. You can thin provision a volume by
changing the "guarantee" option to "none." You can thin provision a LUN by changing the reservation on the LUN. These settings can be set
using NetApp management tools such as NetApp Operations Manager and NetApp Provisioning Manager or by entering the following
commands:
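A likely form of those commands (the volume and LUN names are placeholders):
vol options vol1 guarantee none
lun set reservation /vol/vol1/lun1 disable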
When not to use thin provisioning. There are some situations in which thin provisioning may not be appropriate. Keep these in mind when
deciding whether to implement it and on which volumes to apply it:
You can periodically initiate the space reclamation process on your LUNs. The GUI tool will first determine how much space can be reclaimed
and ask if you wish to continue. You can limit the amount of time the process will use so that it does not run during peak periods.
Here are a few things to keep in mind when you run space reclamation:
It's a good practice to run space reclamation before creating a Snapshot copy. Otherwise, blocks that should be available for freeing will
be locked in the Snapshot copy and not be able to be freed.
Because space reclamation initially consumes cycles on the host, it should be run during periods of low activity.
Normal data traffic to the LUN can continue while the process runs. However, certain operations cannot be performed during the space
reclamation process:
Creating or restoring a Snapshot copy stops space reclamation.
The LUN may not be deleted, disconnected, or expanded.
The mount point cannot be changed.
Running Windows® defragmentation is not recommended.
THICK PROVISIONING:
Thick provisioning is hard to use in large environments, because as soon as you allocate space to a client it is reserved for that client.
If you look closely at the above example, after allocating 1 TB of space to the clients there is no space left available in the SAN. Thick
provisioning does not consider used space; it only considers allocated space.
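For contrast, a minimal sketch of thick provisioning the same volume again (the volume name is a placeholder):
vol options vol1 guarantee volume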
This page is dedicated to a few examples I’ve put together in Visio to highlight the correct SAS and ACP cabling combinations for Netapp disk shelf cabling.
Two types of controllers and two types of disk shelves are used in these examples. They are:
Netapp FAS2040
Netapp FAS3240
Disk Shelf DS4243
Disk Shelf DS2246
This example shows a Netapp FAS2040 with 2 controllers connected to a single DS4243 or DS2246 Netapp disk shelf.
This example follows on from the previous example by adding an additional disk shelf for a total of 2 disk shelves.
FAS2040 – 3 Netapp Disk Shelves
This example follows on from the previous example by adding another additional disk shelf for a total of 3 disk shelves.
FAS3240 – 1 Netapp Disk Shelf
This example shows 2 Netapp FAS3240’s connected to a single DS4243 Netapp disk shelf.
FAS3240 – 2 Netapp Disk Shelves
This example follows on from the previous example by adding an additional disk shelf for a total of 2 disk shelves.
FAS3240 – 3 Netapp Disk Shelves
This example follows on from the previous example by adding an additional disk shelf for a total of 3 disk shelves.
FAS3240 – 6 Netapp Disk Shelves in 2 Separate Stacks
In this example we use 3 Netapp DS4243 disk shelves in Stack 1 and 3 Netapp DS2246 disk shelves in Stack 2. The requirement for this configuration is 4 SAS ports. I have
added the 4-Port SAS expansion card Netapp X2065A into each controller.
SAN zoning may be utilized to implement compartmentalization of data for security purposes.
Before doing the zoning in GUI mode, we have to connect the servers using FC cables, from the SAN switch ports to the server HBA ports.
#####################################################################################
After reinitializing and resetting a filer to factory defaults, there always comes a time when you want to re-use your precious 100k+ baby. Thinking this should be a piece of
cake, I encountered some unforeseen surprises, which led to this document. Here I'll show you how to set up a filer from start to end. Notice this is a filer with a
partner, so there is a lot of switching around.
Note that I copied a lot of output from the filers to this article for your convenience. Also note, that I mixed output from the two filers and that all steps need to be done on both filers.
Initial Configuration
When starting up the filer and connecting to the management console (serial cable, COM1 etc., all default settings if using a Windows machine with PuTTY) you'll see a
configuration setup. Simply answer the questions, and don't be shy if you're not sure; everything can be changed afterwards:
IP Addresses
Setup IP addresses:
filer01b> ifconfig -a
e0M: flags=0x2948867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 10.18.1.132 netmask-or-prefix 0xffff0000 broadcast 10.18.255.255
partner inet 10.18.1.131 (not in use)
ether 00:a0:98:29:16:32 (auto-100tx-fd-up) flowcontrol full
e0a: flags=0x2508866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:29:16:30 (auto-unknown-cfg_down) flowcontrol full
e0b: flags=0x2508866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:29:16:31 (auto-unknown-cfg_down) flowcontrol full
lo: flags=0x1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 9188
inet 127.0.0.1 netmask-or-prefix 0xff000000 broadcast 127.0.0.1
filer01b> ifconfig e0M 10.18.1.132 netmask 255.255.0.0 partner 10.18.1.131
Route
By setting these files correctly your settings will be persistent over reboots:
Node A:
Node B:
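A hedged sketch of the kind of lines that typically go into /etc/rc on each node to make the default route persistent (the gateway address is a placeholder):
route add default 10.18.1.1 1
routed on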
Setup a password for the root user. Notice that you'll get a warning if you use a password with less than 8 characters, but you're still allowed to set it:
filer01a> passwd
New password:
Retype new password:
Mon Sep 17 14:44:09 GMT [snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from this user. Reason: Password is
too short (SNMPv3 requires at least 8 characters).
Mon Sep 17 14:44:09 GMT [snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password is too
short (SNMPv3 requires at least 8 characters).
Mon Sep 17 14:44:09 GMT [passwd.changed:info]: passwd for user 'root' changed.
CIFS AND AUTHENTICATION
This is a slightly confusing part. Although we're not using CIFS, I have to do a cifs setup to configure the normal authentication for these filers:
Your filer does not have WINS configured and is visible only to
clients on the same subnet.
Do you want to make the system visible via WINS? [n]: ?
Answer 'y' if you would like to configure CIFS to register its names
with WINS servers, and to use WINS server queries to locate domain
controllers. You will be prompted to add the IPv4 addresses of up to 4
WINS servers. Answer 'n' if you are not using WINS servers in your
environment or do not want to use them.
Do you want to make the system visible via WINS? [n]: n
A filer can be configured for multiprotocol access, or as an NTFS-only
filer. Since multiple protocols are currently licensed on this filer,
we recommend that you configure this filer as a multiprotocol filer
SSH server needs two RSA keys to support ssh1.x protocol. The host key is
generated and saved to file /etc/sshd/ssh_host_key during setup. The server
key is re-generated every hour when SSH server is running.
SSH server needs a RSA host key and a DSA host key to support ssh2.0 protocol.
The host keys are generated and saved to /etc/sshd/ssh_host_rsa_key and
/etc/sshd/ssh_host_dsa_key files respectively during setup.
SSH Setup will now ask you for the sizes of the host and server keys.
For ssh1.0 protocol, key sizes must be between 384 and 2048 bits.
For ssh2.0 protocol, key sizes must be between 768 and 2048 bits.
The size of the host and server keys must differ by at least 128 bits.
Please enter the size of host key for ssh1.x protocol [768] :
Please enter the size of server key for ssh1.x protocol [512] :
Please enter the size of host keys for ssh2.0 protocol [768] :
Setup will now generate the host keys. It will take a minute.
After Setup is finished the SSH server will start automatically.
filer01a*> Thu Sep 20 08:14:06 GMT [filer01a: secureadmin.ssh.setup.success:info]: SSH setup is done and ssh2 should be enabled. Host keys
are stored in /etc/sshd/ssh_host_key, /etc/sshd/ssh_host_rsa_key, and /etc/sshd/ssh_host_dsa_key.
APPLY A SOFTWARE UPDATE
Notice that after reinitializing a netapp you don't only remove the data, you also remove the software of a filer, leaving you with a basic version. You need to get the software
update from NetApp / your reseller. You can't download it, so make sure you can get this before you proceed with the wiping.
There is a really easy way to get a new software version on your filer. If you have a local webserver you can download it from there, or use the miniweb webserver:
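A hedged sketch of pulling the image from such a webserver and installing it (the URL and image file name are placeholders):
filer01a> software get http://10.18.1.50/737_q_image.tgz
filer01a> software install 737_q_image.tgz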
filer01a> download
filer01a> version -b
1:/x86_64/kernel/primary.krn: OS 7.3.7
1:/backup/x86_64/kernel/primary.krn: OS 7.3.4
1:/x86_64/diag/diag.krn: 5.6.1
1:/x86_64/firmware/excelsio/firmware.img: Firmware 1.9.0
1:/x86_64/firmware/DrWho/firmware.img: Firmware 2.5.0
1:/x86_64/firmware/SB_XV/firmware.img: Firmware 4.4.0
1:/boot/loader: Loader 1.8
1:/common/firmware/zdi/zdi_fw.zpk: Flash Cache Firmware 2.2 (Build 0x201012201350)
1:/common/firmware/zdi/zdi_fw.zpk: PAM II Firmware 1.10 (Build 0x201012200653)
1:/common/firmware/zdi/zdi_fw.zpk: X1936A FPGA Configuration PROM 1.0 (Build 0x200706131558)
filer01a>
filer01a> reboot
Data ONTAP Release 7.3.7: Thu May 3 04:27:32 PDT 2012 (IBM)
Copyright (c) 1992-2012 NetApp.
Starting boot on Thu Sep 20 08:56:56 GMT 2012
Thu Sep 20 08:57:28 GMT [kern.version.change:notice]: Data ONTAP kernel version was changed from Data ONTAP Release 7.3.4 to Data ONTAP
Release 7.3.7.
Thu Sep 20 08:57:31 GMT [diskown.isEnabled:info]: software ownership has been enabled for this system
Thu Sep 20 08:57:34 GMT [sas.link.error:error]: Could not recover link on SAS adapter 1d after 10 seconds. Offlining the adapter.
Thu Sep 20 08:57:34 GMT [cf.nm.nicReset:warning]: Initiating soft reset on Cluster Interconnect card 0 due to rendezvous reset
Thu Sep 20 08:57:34 GMT [cf.rv.notConnected:error]: Connection for cfo_rv failed
Thu Sep 20 08:57:34 GMT [cf.nm.nicTransitionDown:warning]: Cluster Interconnect link 0 is DOWN
Thu Sep 20 08:57:34 GMT [cf.rv.notConnected:error]: Connection for cfo_rv failed
Thu Sep 20 08:57:34 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:57:34 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
Thu Sep 20 08:57:34 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.
Thu Sep 20 08:57:35 GMT [sas.link.error:error]: Could not recover link on SAS adapter 1b after 10 seconds. Offlining the adapter.
Thu Sep 20 08:57:35 GMT [shelf.config.spha:info]: System is using single path HA attached storage only.
Thu Sep 20 08:57:35 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.12 is a partner HA mailbox disk.
Thu Sep 20 08:57:35 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.13 is a partner HA mailbox disk.
Thu Sep 20 08:57:35 GMT [fmmb.instStat.change:info]: normal mailbox instance on partner side.
Thu Sep 20 08:57:35 GMT [cf.fm.partner:info]: Cluster monitor: partner 'filer01b'
Thu Sep 20 08:57:35 GMT [cf.fm.kernelMismatch:warning]: Cluster monitor: possible kernel mismatch detected local 'Data ONTAP/7.3.7',
partner 'Data ONTAP/7.3.4'
Thu Sep 20 08:57:35 GMT [cf.fm.timeMasterStatus:info]: Acting as cluster time slave
Thu Sep 20 08:57:36 GMT [cf.nm.nicTransitionUp:info]: Interconnect link 0 is UP
Thu Sep 20 08:57:36 GMT [ses.multipath.ReqError:CRITICAL]: SAS-Shelf24 detected without a multipath configuration.
Thu Sep 20 08:57:36 GMT [raid.cksum.replay.summary:info]: Replayed 0 checksum blocks.
Thu Sep 20 08:57:36 GMT [raid.stripe.replay.summary:info]: Replayed 0 stripes.
Thu Sep 20 08:57:37 GMT [localhost: cf.fm.launch:info]: Launching cluster monitor
Thu Sep 20 08:57:38 GMT [localhost: cf.fm.partner:info]: Cluster monitor: partner 'filer01b'
Thu Sep 20 08:57:38 GMT [localhost: cf.fm.notkoverClusterDisable:warning]: Cluster monitor: cluster takeover disabled (restart)
sparse volume upgrade done. num vol 0.
Thu Sep 20 08:57:38 GMT [localhost: cf.fsm.takeoverOfPartnerDisabled:notice]: Cluster monitor: takeover of filer01b disabled (cluster
takeover disabled)
add net 127.0.0.0: gateway 127.0.0.1
Thu Sep 20 08:57:40 GMT [localhost: cf.nm.nicReset:warning]: Initiating soft reset on Cluster Interconnect card 0 due to rendezvous reset
Vdisk Snap Table for host:0 is initialized
Thu Sep 20 08:57:40 GMT [localhost: rc:notice]: The system was down for 164 seconds
Thu Sep 20 08:57:41 GMT [localhost: rc:info]: Registry is being upgraded to improve storing of local changes.
Thu Sep 20 08:57:41 GMT [filer01a: rc:info]: Registry upgrade successful.
Thu Sep 20 08:57:41 GMT [filer01a: cf.partner.short_uptime:warning]: Partner up for 2 seconds only
Set vol0 to 20 GB and configure some additional settings (see NetApp Data Planning for more information on these settings):
DNS
Configure DNS:
Note: wrfile needs an empty line at the end of the file; save with Ctrl+C.
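A minimal sketch of the vol0 resize and DNS configuration; the domain name and name-server addresses are placeholders:
filer01a> vol size vol0 20g
filer01a> wrfile /etc/resolv.conf
nameserver 10.18.1.10
nameserver 10.18.1.11
(end with an empty line, then Ctrl+C)
filer01a> options dns.domainname example.local
filer01a> options dns.enable on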
NTP
Configure NTP:
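A minimal NTP sketch; the timezone and server address are examples only:
filer01a> timezone Europe/Amsterdam
filer01a> options timed.proto ntp
filer01a> options timed.servers 10.18.1.5
filer01a> options timed.enable on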
First configure hardware-assisted takeover (hw_assist) via the RLM, since IP failover is already handled by the startup files (the full option set is sketched after the output below):
filer01a> options cf.hw_assist.partner.address 10.18.1.32
Validating the new hw-assist configuration. Please wait...
Thu Sep 20 12:45:19 GMT [filer01a: cf.hwassist.localMonitor:warning]: Cluster hw_assist: hw_assist functionality is inactive.
cf hw_assist Error: can not validate new config.
No response from partner(filer01b), timed out.
filer01a> Thu Sep 20 12:46:00 GMT [filer01a: cf.hwassist.hwasstActive:info]: Cluster hw_assist: hw_assist functionality is active on IP
address: 10.18.1.31 port: 4444
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.localMonitor:warning]: Cluster hw_assist: hw_assist functionality is inactive.
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.missedKeepAlive:warning]: Cluster hw_assist: missed keep alive alert from partner(filer01a).
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.hwasstActive:info]: Cluster hw_assist: hw_assist functionality is active on IP address:
10.18.1.32 port: 4444
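The full hw_assist configuration is only a few options; run the mirror image on filer01b with the addresses swapped and verify the state afterwards (4444 is the default port, and the option names are assumed to match the one shown above):
filer01a> options cf.hw_assist.enable on
filer01a> options cf.hw_assist.partner.address 10.18.1.32
filer01a> options cf.hw_assist.partner.port 4444
filer01a> cf hw_assist status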
filer01a> cf status
Cluster disabled.
filer01a> cf partner
filer01b
filer01a> cf monitor
current time: 20Sep2012 12:48:38
UP 03:19:42, partner 'filer01b', cluster monitor disabled
filer01a> cf enable
filer01a> Thu Sep 20 12:50:48 GMT [filer01a: cf.misc.operatorEnable:warning]: Cluster monitor: operator initiated enabling of cluster
Thu Sep 20 12:50:48 GMT [filer01a: cf.fsm.takeoverOfPartnerEnabled:notice]: Cluster monitor: takeover of filer01b enabled
Thu Sep 20 12:50:48 GMT [filer01a: cf.fsm.takeoverByPartnerEnabled:notice]: Cluster monitor: takeover of filer01a by filer01b enabled
filer01a> Thu Sep 20 12:51:01 GMT [filer01a: monitor.globalStatus.ok:info]: The system's global status is normal.
filer01a> cf status
Cluster enabled, filer01b is up.
filer01a> cf partner
filer01b
filer01a> cf monitor
current time: 20Sep2012 12:51:21
UP 03:22:22, partner 'filer01b', cluster monitor enabled
VIA Interconnect is up (link 0 up, link 1 up), takeover capability on-line
partner update TAKEOVER_ENABLED (20Sep2012 12:51:21)
SYSLOG
Configure syslog to send logging to a central syslog server; see NetApp Syslog for more information:
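A minimal sketch, assuming 10.18.1.20 is a placeholder syslog host and that forwarding everything at info level and above is acceptable:
filer01a> wrfile -a /etc/syslog.conf *.info @10.18.1.20
filer01a> rdfile /etc/syslog.conf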
This part is largely a matter of preference: you need to configure your disks and assign the correct disks to the correct heads. How you do this is up to you; I usually divide them equally:
See all disks and their ownership (to see only the unowned disks, run disk show -n):
Now assign the disks to an owner. There are 24 unowned disks: 12 for one head and 12 for the other (a sketch follows below):
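One possible way to do the assignment, letting Data ONTAP pick 12 unowned disks for each head (adjust the counts, or name the disks explicitly if you care which shelf goes where):
filer01a> disk show -n
filer01a> disk assign -n 12 -o filer01a
filer01a> disk assign -n 12 -o filer01b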
In my case some disks ended up assigned to the wrong head, so I had to remove their ownership again:
filer01a*> disk remove_ownership 1c.01.12 1c.01.13 1c.01.14 1c.01.15 1c.01.16 1c.01.17 1c.01.18 1c.01.19 1c.01.20 1c.01.21 1c.01.22
1c.01.23
Disk 1c.01.12 will have its ownership removed
Disk 1c.01.13 will have its ownership removed
Disk 1c.01.14 will have its ownership removed
Disk 1c.01.15 will have its ownership removed
Disk 1c.01.16 will have its ownership removed
Disk 1c.01.17 will have its ownership removed
Disk 1c.01.18 will have its ownership removed
Disk 1c.01.19 will have its ownership removed
Disk 1c.01.20 will have its ownership removed
Disk 1c.01.21 will have its ownership removed
Disk 1c.01.22 will have its ownership removed
Disk 1c.01.23 will have its ownership removed
Volumes must be taken offline. Are all impacted volumes offline(y/n)?? y
filer01a*> disk assign 1c.01.12 1c.01.13 1c.01.14 1c.01.15 1c.01.16 1c.01.17 1c.01.18 1c.01.19 1c.01.20 1c.01.21 1c.01.22 1c.01.23 -o
filer01b
filer01a*> disk show -v
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
1a.00.13 filer01b (151762803) Pool0 B9JGMTRT
1a.00.12 filer01b (151762803) Pool0 B9JB8DGF
1a.00.16 filer01b (151762803) Pool0 B9JMT0JF
1a.00.15 filer01b (151762803) Pool0 B9JM4N7T
1a.00.19 filer01b (151762803) Pool0 B9JMDT2T
1a.00.18 filer01b (151762803) Pool0 B9JHNPST
1a.00.14 filer01b (151762803) Pool0 B9JHNPLT
1a.00.17 filer01b (151762803) Pool0 B9JLT7UT
1c.01.9 filer01a (151762815) Pool0 6SL3T0EP0000N2401BQE
1c.01.10 filer01a (151762815) Pool0 6SL3QWQF0000N238L2U8
1c.01.4 filer01a (151762815) Pool0 6SL3SHRZ0000N240BGZE
1c.01.12 filer01b (151762803) Pool0 6SL3QHLG0000N238D6W8
1c.01.11 filer01a (151762815) Pool0 6SL3PNB30000N2392UAG
1c.01.1 filer01a (151762815) Pool0 6SL3PF180000N24007YD
1c.01.6 filer01a (151762815) Pool0 6SL3SEE80000N2404HQC
1c.01.8 filer01a (151762815) Pool0 6SL3PDBR0000N2407Y4X
1c.01.14 filer01b (151762803) Pool0 6SL3R1FP0000N239553G
1c.01.13 filer01b (151762803) Pool0 6SL3RBAS0000N2395038
1c.01.17 filer01b (151762803) Pool0 6SL3R3LJ0000N239575J
1c.01.22 filer01b (151762803) Pool0 6SL3PBX20000N237NBWY
1c.01.23 filer01b (151762803) Pool0 6SL3PR9T0000N237EL3Q
1c.01.20 filer01b (151762803) Pool0 6SL3PND40000N238H5GX
1c.01.5 filer01a (151762815) Pool0 6SL3S87F0000N239GPHT
1c.01.16 filer01b (151762803) Pool0 6SL3R6KL0000N23907K2
1c.01.2 filer01a (151762815) Pool0 6SL3SB6E0000N2402QZE
1c.01.19 filer01b (151762803) Pool0 6SL3P3NH0000N238608X
1c.01.21 filer01b (151762803) Pool0 6SL3QQQP0000N23903DJ
1c.01.15 filer01b (151762803) Pool0 6SL3RC5M0000M125NTE1
1c.01.7 filer01a (151762815) Pool0 6SL3SG830000N2404H3Z
1c.01.3 filer01a (151762815) Pool0 6SL3SWBE0000N24007QV
1c.01.18 filer01b (151762803) Pool0 6SL3PWDK0000N238L4QM
1a.00.2 filer01a (151762815) Pool0 B9JMDP0T
1a.00.0 filer01a (151762815) Pool0 B9JMZ14F
1a.00.7 filer01a (151762815) Pool0 B9JLS1MT
1a.00.1 filer01a (151762815) Pool0 B9JM4AUT
1a.00.5 filer01a (151762815) Pool0 B9JHNPJT
1a.00.4 filer01a (151762815) Pool0 B9JMYGNF
1a.00.3 filer01a (151762815) Pool0 B9JLT7ST
1a.00.6 filer01a (151762815) Pool0 B9JMDL9T
1c.01.0 filer01a (151762815) Pool0 6SL3SRHH0000N2409944
Now you can create the aggregates; consider reading this page before you continue. The current layout looks like this:
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1a.00.0 1a 0 0 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
parity 1a.00.1 1a 0 1 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.2 1a 0 2 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 1c.01.0 1c 1 0 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.1 1c 1 1 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.2 1c 1 2 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.3 1c 1 3 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.4 1c 1 4 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.5 1c 1 5 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.6 1c 1 6 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.7 1c 1 7 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.8 1c 1 8 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.9 1c 1 9 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.10 1c 1 10 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.11 1c 1 11 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1a.00.3 1a 0 3 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.4 1a 0 4 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.5 1a 0 5 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.6 1a 0 6 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
spare 1a.00.7 1a 0 7 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
Partner disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 1c.01.22 1c 1 22 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.15 1c 1 15 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.16 1c 1 16 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.21 1c 1 21 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.20 1c 1 20 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.17 1c 1 17 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.18 1c 1 18 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.13 1c 1 13 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.19 1c 1 19 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.12 1c 1 12 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.14 1c 1 14 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.23 1c 1 23 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1a.00.16 1a 0 16 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.12 1a 0 12 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.15 1a 0 15 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.13 1a 0 13 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.19 1a 0 19 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.17 1a 0 17 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.18 1a 0 18 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.14 1a 0 14 SA:A - BSAS 7200 0/0 1695759/3472914816
filer01a>
Now create the aggregates, keeping the speed and type of the disks in mind (do not mix them within one aggregate). A sketch of the commands follows, with the resulting layout after it:
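A hedged sketch of growing aggr0 with most of the remaining BSAS spares, which would produce a layout like the one below (disk names taken from the spare list above; 1a.00.7 is left as a spare):
filer01a> aggr add aggr0 -d 1a.00.3 1a.00.4 1a.00.5 1a.00.6
filer01a> aggr status -r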
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1a.00.0 1a 0 0 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
parity 1a.00.1 1a 0 1 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.2 1a 0 2 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.3 1a 0 3 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.4 1a 0 4 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.5 1a 0 5 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.6 1a 0 6 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 1c.01.0 1c 1 0 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.1 1c 1 1 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.2 1c 1 2 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.3 1c 1 3 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.4 1c 1 4 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.5 1c 1 5 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.6 1c 1 6 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.7 1c 1 7 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.8 1c 1 8 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.9 1c 1 9 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.10 1c 1 10 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1c.01.11 1c 1 11 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1a.00.7 1a 0 7 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
Partner disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 1c.01.22 1c 1 22 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.15 1c 1 15 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.16 1c 1 16 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.21 1c 1 21 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.20 1c 1 20 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.17 1c 1 17 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.18 1c 1 18 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.13 1c 1 13 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.19 1c 1 19 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.12 1c 1 12 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.14 1c 1 14 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.23 1c 1 23 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1a.00.16 1a 0 16 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.12 1a 0 12 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.15 1a 0 15 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.13 1a 0 13 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.19 1a 0 19 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.17 1a 0 17 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.18 1a 0 18 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.14 1a 0 14 SA:A - BSAS 7200 0/0 1695759/3472914816
filer01a>
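The SAS spares can then become an aggregate of their own, leaving 1c.01.11 as a spare. A hedged sketch (the aggregate name aggr1 and the raid group size are examples, not taken from the output):
filer01a> aggr create aggr1 -t raid_dp -r 16 -d 1c.01.0 1c.01.1 1c.01.2 1c.01.3 1c.01.4 1c.01.5 1c.01.6 1c.01.7 1c.01.8 1c.01.9 1c.01.10
filer01a> aggr status -r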
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1c.01.0 1c 1 0 SA:A - SAS 15000 560000/1146880000 560208/1147307688
parity 1c.01.1 1c 1 1 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.2 1c 1 2 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.3 1c 1 3 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.4 1c 1 4 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.5 1c 1 5 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.6 1c 1 6 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.7 1c 1 7 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.8 1c 1 8 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.9 1c 1 9 SA:A - SAS 15000 560000/1146880000 560208/1147307688
data 1c.01.10 1c 1 10 SA:A - SAS 15000 560000/1146880000 560208/1147307688
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 1a.00.0 1a 0 0 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
parity 1a.00.1 1a 0 1 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.2 1a 0 2 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.3 1a 0 3 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.4 1a 0 4 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.5 1a 0 5 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
data 1a.00.6 1a 0 6 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 1c.01.11 1c 1 11 SA:A - SAS 15000 560000/1146880000 560208/1147307688
spare 1a.00.7 1a 0 7 SA:A - BSAS 7200 1695466/3472315904 1695759/3472914816
Partner disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 1c.01.22 1c 1 22 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.15 1c 1 15 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.16 1c 1 16 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.21 1c 1 21 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.20 1c 1 20 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.17 1c 1 17 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.18 1c 1 18 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.13 1c 1 13 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.19 1c 1 19 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.12 1c 1 12 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.14 1c 1 14 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1c.01.23 1c 1 23 SA:A - SAS 15000 560000/1146880000 560208/1147307688
partner 1a.00.16 1a 0 16 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.12 1a 0 12 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.15 1a 0 15 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.13 1a 0 13 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.19 1a 0 19 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.17 1a 0 17 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.18 1a 0 18 SA:A - BSAS 7200 0/0 1695759/3472914816
partner 1a.00.14 1a 0 14 SA:A - BSAS 7200 0/0 1695759/3472914816
filer01a>
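With the aggregates in place, it is worth checking the available space before creating volumes; a minimal sketch:
filer01a> df -A
filer01a> aggr show_space aggr0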