Access Management
In my NetApp introduction section I spoke about two ways of accessing the NetApp filer: file-based access or
block-based access.
File-Based Protocol
NFS, CIFS, HTTP, FTP
Block-Based Protocol
Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), Internet SCSI (iSCSI)
In this section I will cover the following common protocols; if any others are not covered here, please check
the documentation:
iSCSI
FC
NFS
CIFS
HTTP
FTP
Data ONTAP 7.2 added support for the Asymmetric Logical Unit Access (ALUA) features of SCSI, also known as
SCSI Target Port Groups or Target Port Group Support. ALUA defines a standard set of SCSI commands for
discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA allows the initiator to
query the target about path attributes, such as primary path and secondary path. It also allows the target to communicate
events back to the initiator. As a result, multipathing software can be developed to support any array. Proprietary SCSI
commands are no longer required as long as the host supports the ALUA standard. For iSCSI SANs, ALUA is
supported only with Solaris hosts running the iSCSI Solaris Host Utilities 3.0 for Native OS.
iSCSI Introduction
The iSCSI protocol is a licensed service on the storage system that enables you to transfer block data to hosts using the
SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720. In an iSCSI network, storage
systems are targets that have storage target devices, which are referred to as LUNs (logical units). A host with an iSCSI
host bus adapter (HBA), or running iSCSI initiator software, uses the iSCSI protocol to access LUNs on a storage
system. The iSCSI protocol is implemented over the storage systems standard gigabit Ethernet interfaces using a
software driver. The connection between the initiator and target uses a standard TCP/IP network. No special network
configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP network, or it can be your
regular public network. The storage system listens for iSCSI connections on TCP port 3260.
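Because iSCSI is a licensed service, it has to be licensed and started before the storage system will accept connections on port 3260. A minimal sketch (the license code below is a placeholder):

# Add the iSCSI license (placeholder code)
license add XXXXXXX
# Start the iSCSI service and confirm it is running
iscsi start
iscsi status
# Display the target node name that initiators will connect to
iscsi nodename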
In an iSCSI network, there are two types of nodes: targets and initiators.
Targets - the storage systems that provide the LUNs
Initiators - the hosts that access the LUNs
Storage systems and hosts can be direct-attached or connected through Ethernet switches. Both direct-attached and
switched configurations use Ethernet cable and a TCP/IP network for connectivity. You can of course use existing
networks, but if possible make this a dedicated network for the storage system, as it will improve performance.
Every iSCSI node must have a node name. The two formats, or type designators, for iSCSI node names are iqn and eui.
The storage system always uses the iqn-type designator. The initiator can use either the iqn-type or eui-type designator.
The iqn-type node name used by the storage system has the format: iqn.1992-08.com.netapp:sn.serial-number
The following example shows the default node name for a storage system with the serial number 12345678:
iqn.1992-08.com.netapp:sn.12345678
The storage system checks the format of the initiator node name at session login time. If the initiator node name does
not comply with storage system node name requirements, the storage system rejects the session.
A target portal group is a set of network portals within an iSCSI node over which an iSCSI session is conducted. In a
target, a network portal is identified by its IP address and listening TCP port. For storage systems, each network
interface can have one or more IP addresses and therefore one or more network portals. A network interface can be an
Ethernet port, virtual local area network (VLAN), or virtual interface (vif).
The assignment of target portals to portal groups is important for two reasons:
The iSCSI protocol allows only one session between a specific iSCSI initiator port and a single portal group on
the target.
All connections within an iSCSI session must use target portals that belong to the same portal group.
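To see how the target's interfaces, portals, and portal groups are laid out, something like the following can be used (a sketch based on the 7-mode iscsi command set; the portal group and interface names are examples only):

# List the network portals (IP address, TCP port, interface) on the target
iscsi portal show
# List the target portal groups and the interfaces they contain
iscsi tpgroup show
# Create a user-defined portal group and move an interface into it (for multiconnection sessions)
iscsi tpgroup create mc_group1
iscsi tpgroup add mc_group1 e0b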
The Internet Storage Name Service (iSNS) is a protocol that enables automated discovery and management of iSCSI
devices on a TCP/IP storage network. An iSNS server maintains information about active iSCSI devices on the
network, including their IP addresses, iSCSI node names, and portal groups. You obtain an iSNS server from a third-party vendor. If you have an iSNS server on your network, and it is configured and enabled for use by both the initiator
and the storage system, the storage system automatically registers its IP address, node name, and portal groups with the
iSNS server when the iSNS service is started. The iSCSI initiator can query the iSNS server to discover the storage
system as a target device. If you do not have an iSNS server on your network, you must manually configure each target
to be visible to the host.
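Registration with an iSNS server is configured on the storage system roughly as follows (a sketch; the IP address is an example, and the exact isns subcommands should be checked against your Data ONTAP release):

# Point the storage system at the iSNS server
iscsi isns config 192.168.0.50
# Start the iSNS service so the target registers itself
iscsi isns start
# Force a re-registration after configuration changes
iscsi isns update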
The Challenge Handshake Authentication Protocol (CHAP) enables authenticated communication between iSCSI
initiators and targets. When you use CHAP authentication, you define CHAP user names and passwords on both the
initiator and the storage system. During the initial stage of an iSCSI session, the initiator sends a login request to the
storage system to begin the session. The login request includes the initiator's CHAP user name and CHAP algorithm.
The storage system responds with a CHAP challenge. The initiator provides a CHAP response. The storage system
verifies the response and authenticates the initiator. The CHAP password is used to compute the response.
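CHAP is configured per initiator (or as a default) with the iscsi security commands. A sketch, assuming an example initiator name, CHAP user name, and password:

# Define inbound CHAP credentials for a specific initiator (example names and password)
iscsi security add -i iqn.1991-05.com.microsoft:xblade -s CHAP -p secretpass -n chapuser
# Display the current iSCSI security settings
iscsi security show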
During an iSCSI session, the initiator and the target communicate over their standard Ethernet interfaces, unless the
host has an iSCSI HBA. The storage system appears as a single iSCSI target node with one iSCSI node name. For
storage systems with a MultiStore license enabled, each vFiler unit is a target with a different node name. On the
storage system, the interface can be an Ethernet port, virtual network interface (vif), or a virtual LAN (VLAN)
interface. Each interface on the target belongs to its own portal group by default. This enables an initiator port to
conduct simultaneous iSCSI sessions on the target, with one session for each portal group. The storage system supports
up to 1,024 simultaneous sessions, depending on its memory capacity. To determine whether your host's initiator
software or HBA can have multiple sessions with one storage system, see your host OS or initiator documentation. You
can change the assignment of target portals to portal groups as needed to support multiconnection sessions, multiple
sessions, and multipath I/O. Each session has an Initiator Session ID (ISID), a number that is determined by the
initiator.
FC Introduction
FC is a licensed service on the storage system that enables you to export LUNs and transfer block data to hosts using
the SCSI protocol over a Fibre Channel fabric. In a FC network, nodes include targets, initiators, and switches. Nodes
register with the Fabric Name Server when they are connected to a FC switch.
Targets - the storage systems that provide the LUNs
Initiators - the hosts (FC HBAs) that access the LUNs
Storage systems and hosts have adapters so they can be directly connected to each other or to FC switches with optical
cable. For switch or storage system management, they might be connected to each other or to TCP/IP switches with
Ethernet cable. When a node is connected to the FC SAN, it registers each of its ports with the switch's Fabric Name
Server service, using a unique identifier. Each FC node is identified by a worldwide node name (WWNN) and a
worldwide port name (WWPN). WWPNs identify each port on an adapter.
WWPNs are used for the following purposes:
Creating an initiator group - The WWPNs of the host's HBAs are used to create an initiator group (igroup). An
igroup is used to control host access to specific LUNs. You create an igroup by specifying a collection of
WWPNs of initiators in an FC network. When you map a LUN on a storage system to an igroup, you grant all
the initiators in that group access to that LUN. If a host's WWPN is not in an igroup that is mapped to a LUN,
that host does not have access to the LUN. This means that the LUNs do not appear as disks on that host. You
can also create port sets to make a LUN visible only on specific target ports. A port set consists of a group of FC
target ports. You bind a port set to an igroup. Any host in the igroup can access the LUNs only by connecting to
the target ports in the port set.
Uniquely identifying a storage system's HBA target ports - The storage system's WWPNs uniquely identify each
target port on the system. The host operating system uses the combination of the WWNN and WWPN to identify
storage system adapters and host target IDs. Some operating systems require persistent binding to ensure that the
LUN appears at the same target ID on the host.
When the FCP service is first initialized, it assigns a WWNN to a storage system based on the serial number of its
NVRAM adapter. The WWNN is stored on disk. Each target port on the HBAs installed in the storage system has a
unique WWPN. Both the WWNN and the WWPN are a 64-bit address represented in the following
format: nn:nn:nn:nn:nn:nn:nn:nn, where n represents a hexadecimal value. The storage system also has a unique
system serial number that you can view by using the sysconfig command. The system serial number is a unique seven-digit identifier that is assigned when the storage system is manufactured. You cannot modify this serial number. Some
multipathing software products use the system serial number together with the LUN serial number to identify a LUN.
You use the fcp show initiators command to see all of the WWPNs, and any associated aliases, of the FC initiators that
have logged on to the storage system. Data ONTAP displays the WWPN as Portname. To know which WWPNs are
associated with a specific host, see the FC Host Utilities documentation for your host. These documents describe
commands supplied by the Host Utilities or the vendor of the initiator, or methods that show the mapping between the
host and its WWPN. For example, for Windows hosts, use the lputilnt, HBAnywhere, or SANsurfer applications, and
for UNIX hosts, use the sanlun command.
Getting the Storage Ready
I have discussed in detail how to create the following in my disk administration section:
Aggregates
Plexes
FlexVol and Traditional Volumes
QTrees
Files
LUNs
A plex is a collection of one or more RAID groups that together provide the storage for one or more Write
Anywhere File Layout (WAFL) file system volumes. Data ONTAP uses plexes as the unit of RAID-level
mirroring when the SyncMirror software is enabled.
An aggregate is a collection of one or two plexes, depending on whether you want to take advantage of RAID-level
mirroring. If the aggregate is unmirrored, it contains a single plex. Aggregates provide the underlying
physical storage for traditional and FlexVol volumes.
A traditional volume is directly tied to the underlying aggregate and its properties. When you create a traditional
volume, Data ONTAP creates the underlying aggregate based on the properties you assign with the vol create
command, such as the disks assigned to the RAID group and RAID-level protection.
A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share
its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the
storage used by all the FlexVol volumes contained by that aggregate.
Once you set up the underlying aggregate, you can create, clone, or resize FlexVol volumes without regard to the
underlying physical storage. You do not have to manipulate the aggregate frequently. You use either traditional or
FlexVol volumes to organize and manage system and user data. A volume can hold qtrees and LUNs. A qtree is a
subdirectory of the root directory of a volume. You can use qtrees to subdivide a volume in order to group LUNs. You
create LUNs in the root of a volume (traditional or flexible) or in the root of a qtree, with the exception of the root
volume. Do not create LUNs in the root volume because it is used by Data ONTAP for system administration. The
default root volume is /vol/vol0.
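As a rough sketch of that layering (all names and sizes here are illustrative), an aggregate, FlexVol volume, qtree, and LUN could be created as follows:

# Create an aggregate from 14 spare disks
aggr create aggr1 14
# Create a 200 GB FlexVol volume inside the aggregate
vol create dbvol aggr1 200g
# Create a qtree to group the LUNs
qtree create /vol/dbvol/oracle
# Create a LUN in the root of the qtree (not in /vol/vol0)
lun create -s 50g -t linux /vol/dbvol/oracle/lun0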
Autodelete is a volume-level option that allows you to define a policy for automatically deleting Snapshot copies based
on a definable threshold. Using autodelete is recommended in most SAN configurations.
You can set that threshold, or trigger, so that Snapshot copies are automatically deleted when space in the volume or in the snapshot reserve runs low.
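A sketch of configuring autodelete on a volume (the volume name, trigger, and delete order are examples; check snap autodelete help for the options supported by your release):

# Turn autodelete on for the volume and delete the oldest copies first when the volume trigger fires
snap autodelete dbvol on
snap autodelete dbvol trigger volume
snap autodelete dbvol delete_order oldest_first
# Display the current autodelete settings
snap autodelete dbvol show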
Two other things that you need to be aware of are Space Reservation and Fractional Reserve
Space Reservation
When space reservation is enabled for one or more LUNs, Data ONTAP reserves enough space in the volume (traditional or
FlexVol) so that writes to those LUNs do not fail because of a lack of disk space.
Fractional Reserve
Fractional reserve is a volume option that enables you to determine how much space Data ONTAP reserves for Snapshot copy
overwrites for LUNs, as well as for space-reserved files when all other space in the volume is used.
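Both settings are adjusted as volume and LUN options; a sketch of how they might be changed (volume and LUN names are examples):

# Change the fractional reserve on a volume (100 is the default)
vol options dbvol fractional_reserve 0
# Disable (or enable) the space reservation on an existing LUN
lun set reservation /vol/dbvol/oracle/lun0 disable
# Create a LUN with space reservation turned off from the start
lun create -s 50g -t linux -o noreserve /vol/dbvol/oracle/lun1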
When provisioning storage in a SAN environment, there are several best practices to consider. Selecting and following
the best practice that is most appropriate for you is critical to ensuring your systems run smoothly.
There are generally two ways to provision storage in a SAN environment: using fractional reserve, or using snapshot autodelete (and autosize).
In Data ONTAP, fractional reserve is set to 100 percent and autodelete is disabled by default. However, in a SAN
environment, it usually makes more sense to use autodelete (and sometimes autosize).
When using fractional reserve, you need to reserve enough space for the data inside the LUN, fractional reserve, and
snapshot data, or: X + X + Delta. For example, you might need to reserve 50 GB for the LUN, 50 GB when fractional
reserve is set to 100%, and 50 GB for snapshot data, or a volume of 150 GB. If fractional reserve is set to a percentage
other than 100%, then the calculation becomes more complex.
In contrast, when using autodelete, you need only calculate the amount of space required for the LUN and snapshot
data, or X + Delta. Since you can configure the autodelete setting to automatically delete older snapshots when space is
required for data, you need not worry about running out of space for data.
For example, if you have a 100 GB volume, you might allocate 50 GB for a LUN, and the remaining 50 GB is used for
snapshot data. Or in that same 100 GB volume, you might reserve 30 GB for the LUN, and 70 GB is then allocated for
snapshots. In both cases, you can configure snapshots to be automatically deleted to free up space for data, so fractional
reserve is unnecessary.
LUNs, igroups, LUN maps
When you create a LUN, there are a number of items you need to know:
Path name
Name
Multiprotocol type
Size
Description
Identification number
Space reservation setting
The path name of a LUN must be at the root level of the qtree or volume in which the LUN is located, for example
/vol/database/lun1. Do not create LUNs in the root volume; the default root volume is /vol/vol0.
The name of the LUN is case-sensitive and can contain 1 to 256 characters. You cannot use spaces. LUN names must
use only specific letters and characters. LUN names can contain only the letters A through Z, a through z, numbers 0
through 9, hyphen (-), underscore (_), left brace ({), right brace (}), and period (.).
The LUN Multiprotocol Type, or operating system type, specifies the OS of the host accessing the LUN. It also
determines the layout of data on the LUN, the geometry used to access that data, and the minimum and maximum size
of the LUN. The LUN Multiprotocol Type values are solaris, solaris_efi, windows, windows_gpt, windows_2008,
hpux, aix, linux, netware, xen, hyper_v, and vmware. When you create a LUN, you must specify the LUN type. Once
the LUN is created, you cannot modify the LUN host operating system type.
You specify the size of a LUN in bytes or by using specific multiplier suffixes (k, m, g, t).
The LUN description is an optional attribute you use to specify additional information about the LUN.
A LUN must have a unique identification number (ID) so that the host can identify and access the LUN. You map the
LUN ID to an igroup so that all the hosts in that igroup can access the LUN. If you do not specify a LUN ID, Data
ONTAP automatically assigns one.
When you create a LUN by using the lun setup command or FilerView, you specify whether you want to enable space
reservations. When you create a LUN using the lun create command, space reservation is automatically turned on.
Initiator groups (igroups) are tables of FCP host WWPNs or iSCSI host nodenames. You define igroups and map them
to LUNs to control which initiators have access to LUNs. Typically, you want all of the host's HBAs or software
initiators to have access to a LUN. If you are using multipathing software or have clustered hosts, each HBA or
software initiator of each clustered host needs redundant paths to the same LUN. You can create igroups that specify
which initiators have access to the LUNs either before or after you create LUNs, but you must create igroups before
you can map a LUN to an igroup. Initiator groups can have multiple initiators, and multiple igroups can have the same
initiator. However, you cannot map a LUN to multiple igroups that have the same initiator.
Example mapping of host HBA WWPNs to igroups and LUNs:
igroup linux-group0 contains WWPN 10:00:00:00:c9:2b:7c:8f and is mapped to /vol/vol2/lun0
igroup linux-group1 contains WWPNs 10:00:00:00:c9:2b:3e:3c and 10:00:00:00:c9:2b:09:3c and is mapped to /vol/vol2/lun1
The igroup name is a case-sensitive name that must satisfy several requirements. Contains 1 to 96 characters. Spaces
are not allowed. Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_),
colon (:), and period (.). Must start with a letter or number.
The igroup type can be either -i for iSCSI or -f for FC.
The ostype indicates the type of host operating system used by all of the initiators in the igroup. All initiators in an
igroup must be of the same ostype. The ostypes of initiators are solaris, windows, hpux, aix, netware, xen, hyper_v,
vmware, and linux. You must select an ostype for the igroup.
Finally we get to LUN mapping which is the process of associating a LUN with an igroup. When you map the LUN to
the igroup, you grant the initiators in the igroup access to the LUN. You must map a LUN to an igroup to make the
LUN accessible to the host. Data ONTAP maintains a separate LUN map for each igroup to support a large number of
hosts and to enforce access control. Specify the path name of the LUN to be mapped. Specify the name of the igroup
that contains the hosts that will access the LUN.
Assign a number for the LUN ID, or accept the default LUN ID. Typically, the default LUN ID begins with 0 and
increments by 1 for each additional LUN as it is created. The host associates the LUN ID with the location and path
name of the LUN. The range of valid LUN ID numbers depends on the host.
There are two ways to set up a LUN:

LUN setup command
ontap1> lun setup
Note: "lun setup" displays prompts that lead you through the setup process

Good old-fashioned command line
# Create the LUN
lun create -s 100m -t windows /vol/tradvol1/lun1
# Create the igroup; you must obtain the node's identifier (my home PC is: iqn.1991-05.com.microsoft:xblade)
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
# Map the LUN to the igroup
lun map /vol/tradvol1/lun1 win_hosts_group1 0
The full set of commands for both lun and igroup is shown below.
LUN configuration

Display
lun show
lun show -m
lun show -v

Initialize/Configure LUNs, mapping
lun setup
Note: follow the prompts to create and configure LUNs

Create
lun create -s 100m -t windows /vol/tradvol1/lun1

Destroy
lun destroy [-f] /vol/tradvol1/lun1
Note: the "-f" will force the destroy

Resize
lun resize <lun path> <size>
lun resize /vol/tradvol1/lun1 75m

Map
lun map /vol/tradvol1/lun1 win_hosts_group1 0
lun show -m
Note: use "-f" to force the mapping

Unmap
lun offline /vol/tradvol1/lun1
lun unmap /vol/tradvol1/lun1 win_hosts_group1 0
igroup configuration

Display
igroup show
igroup show -v
igroup show iqn.1991-05.com.microsoft:xblade

Create (iSCSI)
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade

Create (FC), destroy, rename - see the sketch below
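For the remaining igroup operations, the following is a sketch based on the 7-mode igroup command set (the igroup names, ostype, and WWPNs are examples only):

# Create an FC igroup from a host WWPN
igroup create -f -t windows win_hosts_group2 10:00:00:00:c9:2b:7c:8f
# Add or remove initiators from an existing igroup
igroup add win_hosts_group2 10:00:00:00:c9:2b:3e:3c
igroup remove win_hosts_group2 10:00:00:00:c9:2b:3e:3c
# Destroy an igroup (unmap its LUNs first, or force with -f)
igroup destroy win_hosts_group2
# Rename an igroup
igroup rename win_hosts_group2 win_hosts_group3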
Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre
Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path
and secondary path. It also enables the target to communicate events back to the initiator. As long as the host supports
the ALUA standard, multipathing software can be developed to support any array. Proprietary SCSI commands are no
longer required.
Enabling ALUA
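ALUA is enabled per igroup; a sketch, assuming the igroup created earlier:

# Enable ALUA on the igroup and verify the setting
igroup set win_hosts_group1 alua yes
igroup show -v win_hosts_group1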
There are a number of iSCSI commands that you can use. I am not going to discuss iSCSI security (CHAP or
RADIUS); I will leave you to look at the documentation on this advanced topic.
display
iscsi initiator show
iscsi session show [-t]
iscsi connection show -v
iscsi security show

status
iscsi status

start
iscsi start

stop
iscsi stop

stats
iscsi stats

nodename
iscsi nodename

interfaces - see the sketch below

portals
iscsi portal show
Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The
storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol.

accesslists
iscsi interface accesslist show
Note: you can add or remove interfaces from the list (see the sketch below)
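For the interfaces row and for changing accesslists, a sketch based on the 7-mode iscsi interface commands (the interface and initiator names are examples):

# Display which interfaces are enabled for iSCSI, and disable/enable one
iscsi interface show
iscsi interface disable e0a
iscsi interface enable e0a
# Restrict an initiator to specific interfaces via an access list
iscsi interface accesslist add iqn.1991-05.com.microsoft:xblade e0b
iscsi interface accesslist remove iqn.1991-05.com.microsoft:xblade e0b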
We have discussed how to set up a server using iSCSI, but what if the server uses FC to connect to the NetApp?
A port set consists of a group of FC target ports. You bind a port set to an igroup, to make the LUN available only on a
subset of the storage system's target ports. Any host in the igroup can access the LUNs only by connecting to the target
ports in the port set. If an igroup is not bound to a port set, the LUNs mapped to the igroup are available on all of the
storage system's FC target ports. The igroup controls which initiators the LUNs are exported to. The port set limits the
target ports on which those initiators have access.
All ports on both systems in an HA pair are visible to the hosts. You use port sets to fine-tune which ports are
available to specific hosts and to limit the number of paths to the LUNs to comply with the limitations of your
multipathing software. When using port sets, make sure your port set definitions and igroup bindings align with the
cabling and zoning requirements of your configuration.
Port Sets

display
portset show
portset show portset1

create, destroy, add, remove, binding - see the sketch below
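The create, destroy, add, remove, and binding rows map onto the 7-mode portset and igroup commands; a sketch (the port set name, target port names, and igroup are examples, and igroup unbind syntax should be checked against your release):

# Create an FC port set containing two target ports
portset create -f portset1 filer1:4a filer1:4b
# Add or remove target ports
portset add portset1 filer2:4a
portset remove portset1 filer2:4a
# Bind the port set to an igroup, and unbind it again
igroup bind win_hosts_group1 portset1
igroup unbind win_hosts_group1
# Destroy the port set once it is unbound
portset destroy portset1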
FCP service

display - see the sketch below

start
fcp start

stop
fcp stop

stats
fcp stats -i interval [-c count] [-a | adapter]
fcp stats -i 1

target expansion adapters, target adapter speed - see the sketch below

set WWPN #
fcp portname set -f 1b 50:0a:09:85:87:09:68:ad

swap WWPN #
fcp portname swap [-f] adapter1 adapter2
fcp portname swap -f 1a 1b

change WWNN
# display nodename
fcp nodename
Note: the WWNN of a storage system is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is
possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this
unlikely event, you can change the WWNN of the storage system.
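For the display and adapter rows whose commands are not shown above, a sketch based on the 7-mode fcp command set (the adapter name, speed, and WWNN are examples; check the fcp config syntax for your release):

# Display the FCP service status, target adapters and logged-in initiators
fcp status
fcp show adapters
fcp show initiators
# Display or change a target adapter's configuration, including its speed
fcp config
fcp config 4a speed 4
# Change the WWNN of the storage system (rarely needed; example value only)
fcp nodename 50:0a:09:80:82:02:8d:ff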
WWPN Aliases - display
fcp wwpn-alias show
fcp wwpn-alias show -a my_alias_1
fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2f

WWPN Aliases - create
fcp wwpn-alias set [-f] alias wwpn
fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f

WWPN Aliases - remove
fcp wwpn-alias remove [-a alias ... | -w wwpn]
fcp wwpn-alias remove -a my_alias_1
fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2f
Data ONTAP also provides a number of features for backing up, restoring, and replicating the data held in LUNs:

SnapRestore - Restore a LUN or file system to an earlier preserved state in less than a minute without rebooting the storage system, regardless of the size of the LUN or file system being restored. Recover from a corrupted database or a damaged application, a file system, a LUN, or a volume by using an existing Snapshot copy.

SnapMirror - Replicate data or asynchronously mirror data from one storage system to another over local or wide area networks (LANs or WANs). Transfer Snapshot copies taken at specific points in time to other storage systems or near-line systems. These replication targets can be in the same data center connected through a LAN or distributed across the globe connected through metropolitan area networks (MANs) or WANs. Because SnapMirror operates at the changed block level instead of transferring entire files or file systems, it generally reduces bandwidth and transfer time requirements for replication.

SnapVault - Back up data by using Snapshot copies on the storage system and transferring them on a scheduled basis to a destination storage system. Store these Snapshot copies on the destination storage system for weeks or months, allowing recovery operations to occur nearly instantaneously from the destination storage system to the original storage system.

SnapDrive - Manage storage system Snapshot copies directly from a Windows or UNIX host. Configure access to storage directly from a host. SnapDrive for Windows supports Windows 2000 Server and Windows Server 2003. SnapDrive for UNIX supports a number of UNIX environments.

NDMP (Network Data Management Protocol) - Control native backup and recovery facilities in storage systems and other file servers. Backup application vendors provide a common interface between backup applications and file servers.
A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy. Changes made to the parent LUN after the
clone is created are not reflected in the Snapshot copy. A LUN clone shares space with the LUN in the backing
Snapshot copy. When you clone a LUN, and new data is written to the LUN, the LUN clone still depends on data in the
backing Snapshot copy. The clone does not require additional disk space until changes are made to it. You cannot
delete the backing Snapshot copy until you split the clone from it. When you split the clone from the backing Snapshot
copy, the data is copied from the Snapshot copy to the clone, thereby removing any dependence on the Snapshot copy.
After the splitting operation, both the backing Snapshot copy and the clone occupy their own space.
Use LUN clones to create multiple read/write copies of a LUN. You might want to do this for the following reasons:
You want to create a clone of a database for manipulation and projection operations, while preserving the
original data in unaltered form.
You want to access a specific subset of a LUN's data (a specific logical volume or file system in a volume group,
or a specific file or set of files in a file system) and copy it to the original LUN, without restoring the rest of the
data in the original LUN. This works on operating systems that support mounting a LUN and a clone of the LUN
at the same time. SnapDrive for UNIX allows this with the snap connect command.
Display clones
snap list

create clone
# Create a LUN by entering the following command
lun create -s 10g -t solaris /vol/tradvol1/lun1
# Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command
snap create tradvol1 tradvol1_snapshot_08122010
# Create the LUN clone by entering the following command
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010
# display the snapshot copies
lun snap usage tradvol1 tradvol1_snapshot_08122010

destroy clone
# Delete all the LUNs in the active file system that are displayed by the lun snap usage command by entering the following command
lun destroy /vol/tradvol1/clone_lun1
# Delete all the Snapshot copies that are displayed by the lun snap usage command in the order they appear
snap delete tradvol1 tradvol1_snapshot_08122010

clone dependency
vol options <vol_name> snapshot_clone_dependency on
vol options <vol_name> snapshot_clone_dependency off
Note: Prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones
were taken. Starting with Data ONTAP 7.3, you can enable the system to only lock backing Snapshot copies for the active LUN
clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete
all of the more recent backing Snapshot copies.
This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to
off, you will still be required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable
this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system
restores whichever value was present at the time the Snapshot copy was taken.
Restoring snapshot, stop clone splitting, delete snapshot copy - see the sketch below
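These rows map onto the following commands; a sketch using the clone and Snapshot copy created above:

# Split the clone from its backing Snapshot copy, then monitor or stop the split
lun clone split start /vol/tradvol1/clone_lun1
lun clone split status /vol/tradvol1/clone_lun1
lun clone split stop /vol/tradvol1/clone_lun1
# Delete the backing Snapshot copy once nothing depends on it
snap delete tradvol1 tradvol1_snapshot_08122010
# Restore a whole volume or a single LUN from a Snapshot copy (requires the SnapRestore license)
snap restore -t vol -s tradvol1_snapshot_08122010 tradvol1
snap restore -t file -s tradvol1_snapshot_08122010 /vol/tradvol1/lun1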
Displaying space usage
aggr show_space
df
snap delta
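A quick sketch of how these might be used to monitor space (the aggregate and volume names are examples):

# Show how space is used inside an aggregate
aggr show_space aggr1
# Show volume usage, including reserved space
df -r /vol/dbvol
# Show the rate of change of data between Snapshot copies on a volume
snap delta dbvol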