Isilon Administration and Management 2015
How the Job Engine works

The job engine consists of all the job daemons across the whole cluster. The job daemons elect a job coordinator; the election is won by the first daemon to respond when a job is started.

Jobs can have a number of phases. There might be only one phase for simpler jobs, but more complex ones can have multiple phases. Each phase is executed in turn, but the job is not finished until all the phases are complete.

Each phase is broken down into tasks. These tasks are distributed to the nodes by the coordinator, and the job is executed across the entire cluster.

Each task consists of a list of items. The result of each item's execution is logged, so that if there is an interruption the job can restart from where it stopped.

ISI Data Integrity (IDI) is the OneFS process that protects file system structures against corruption via 32-bit CRC checksums. All Isilon blocks, both for file and metadata, use checksum verification. Metadata checksums are housed in the metadata blocks themselves, whereas file data checksums are stored as metadata, thereby providing referential integrity. In the event that the recomputed checksum does not match the stored checksum, OneFS will generate a system event, log the event, retrieve and return the corresponding FEC block to the client, and attempt to repair the suspect data block.

Permanent internal structures
On-disk data structures
Lesson 1
The available N+Mn requested protection levels are +1n, +2n, +3n, and +4n. With N+Mn protection, only one stripe unit is located on a single node. Each stripe unit is written to a single drive on the node. Assuming the node pool is large enough, the maximum file stripe width is 16 data stripe units plus the protection stripe units for the requested protection level.

The isi job status command is used to view currently running, paused, or queued jobs, and the status of the most recent jobs. Use this command to view running and most recent jobs quickly. Failed jobs are clearly indicated with messages.

The isi job statistics command includes the options list and view. The verbose option provides detailed information about the job operations. To get the most information about all current jobs, use the isi job statistics list -v command.
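For example, a quick health check of the job engine from any node's shell might look like this (a sketch; output formats vary across OneFS releases):

    # Show running, paused, and queued jobs, plus recent job history;
    # failed jobs are flagged with messages.
    isi job status

    # Show verbose statistics (phases, tasks, items) for all current jobs.
    isi job statistics list -v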
An event is a notification that provides important information about the health or performance of the cluster. Some of the areas include the task state, threshold checks, hardware errors, file system errors, connectivity state, and a variety of other miscellaneous states and errors.

The purpose of the cluster events log (CELOG) is to monitor, log, and report important activities and error conditions on the nodes and cluster. Different processes that monitor cluster conditions, or that have a need to log important events during the course of their operation, communicate with the CELOG system. The CELOG system is designed to provide a single location for the logging of events. CELOG provides a single point from which event notifications are generated, including sending alert emails and SNMP traps.

The CELOG system receives event messages from other processes in the system. Multiple related or duplicate event occurrences are grouped, or coalesced, into one logical event by the OneFS system. The raw events are processed by the CELOG coalescers and are stored in log databases. Events are presented in a reporting format through SNMP polling, as CLI messages, or as web administration interface events. The events generate notifications, such as ESRS notifications, SMTP email alerts, and SNMP traps.

An event record contains the following fields:

Instance ID – The unique event identifier.

Start time – When the event began.

End time – When the event ended, if applicable.

Quieted time – When the event was quieted by the user.

Event type – The event database reference ID. Each event type references a table of information that populates the event details and provides the template for the messages displayed.

Category – Displays the category of the event: hardware, software, connectivity, node status, etc.

Message – More specific detail about the event.

Event hierarchy – Normal event or a coalescing event.

Severity – The level of the event severity, from informational (info) and warning (warn) to critical and emergency events.

Extreme severity – The highest severity level received for coalesced events, where the severity level may have changed based on the values received, especially for threshold violation events.

Value – A variable associated with a particular event. What is displayed varies according to the event generated. In the example displayed, the value 1 represents true, where 0 would represent false for the condition. In certain events it represents the actual value of the monitored event, and for some events the value field is not used.

Extreme value – Represents the threshold setting associated with the event. In the example displayed, the true indicator is the threshold for the event. This field could represent the threshold exceeded that triggered the event notification to occur.

To display the event details, on the Summary page, in the Actions column, click View details.

Multiple data stripe units and FEC stripe units are placed on separate drives on each node. This is referred to as N+M:B or N+Md:Bn protection. These protection schemes are represented as +Md:Bn in the OneFS web administration interface and the command line interface. The single protection stripe spans the nodes and each of the included drives on each node. The supported N+Md:Bn protections are N+2d:1n, N+3d:1n, and N+4d:1n.

N+Md:Bn utilizes multiple drives per node as part of the same data stripe, with multiple stripe units per node. M is the number of stripe units or drives per node, and the number of FEC stripe units per protection stripe. N+Md:Bn protection lowers the protection overhead by increasing the size of the protection stripe. The same maximum of 16 data stripe units per stripe is applied to each protection stripe. N+2d:1n is the default node pool requested protection level in OneFS.

Examples for the available N+Md:Bn requested protection levels: N+2d:1n contains 2 FEC stripe units and has 2 stripe units per node. N+3d:1n contains 3 FEC stripe units and has 3 stripe units per node. N+4d:1n contains 4 FEC stripe units and has 4 stripe units per node.

N+3d:1n and N+4d:1n are most effective with larger file sizes on smaller node pools. Smaller files are mirrored when these protection levels are requested.

In addition to the previous N+Md:Bn levels, there are two advanced forms of requested protection: N+3d:1n1d and N+4d:2n. N+3d:1n1d includes three FEC stripe units per protection stripe and provides protection for three simultaneous drive losses, or one node and one drive loss. The maximum number of data stripe units is 15, not 16, when using N+3d:1n1d requested protection. N+4d:2n includes four FEC stripe units per stripe and provides protection for four simultaneous drive losses, or two simultaneous node failures.
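To make the overhead benefit concrete, here is a small worked example (the pool sizes are hypothetical; the arithmetic follows from the stripe definitions above). The FEC overhead of a protection stripe is M divided by (N + M), where N is the number of data stripe units and M the number of FEC stripe units. On a 3-node pool at N+2d:1n, each node holds two stripe units, so a stripe has 6 units: 4 data plus 2 FEC, for an overhead of 2/6, or about 33%. On a pool wide enough to reach the 16-data-unit maximum, the same +2d:1n stripe is 16 data plus 2 FEC, and the overhead drops to 2/18, or about 11%. This is the sense in which widening the protection stripe lowers the protection overhead.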
If you configure the OneFS cluster for SNMP monitoring, you select events to send SNMP traps to one or more network monitoring stations, or trap receivers. When you configure event notification rules, you can choose from three methods to notify recipients: email, ESRS, or SNMP trap. Each event notification can be configured through the web administration interface or the command-line interface.

The isi events notifications command is used to manage the details for specific or all notification rules. The isi events settings command manages the values of global settings or the settings for a specific notification policy.
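A brief sketch of working with events from the CLI; the subcommands beyond the two named above are assumptions and should be verified with isi events --help on your release:

    # List current events with their instance IDs and severities.
    isi events list

    # Inspect notification rules and global notification settings.
    isi events notifications list
    isi events settings list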
InsightIQ Logs
Lesson 3
ESRS Principles
Hadoop, like many open source technologies such as UNIX and TCP/IP, was not created with security in mind. Hadoop evolved from other open-source Apache projects directed at building open source web search engines, and security was not a primary consideration. Kerberos is not a mandatory requirement for a Hadoop cluster, making it possible to run entire clusters without deploying any security.
Lesson 4
When a client requests that a file be written to the cluster, the node to which the client is connected is the node that receives and processes the file. That node creates a write plan for the file, including calculating FEC. Data blocks assigned to the node are written to the NVRAM of that node. Data blocks assigned to other nodes travel through the InfiniBand network to their L2 cache, and then to their NVRAM. Once all nodes have all the data and FEC blocks in NVRAM, a commit is returned to the client. Data blocks assigned to this node stay cached in L2 for future reads of that file. Data is then written onto the spindles.

The layout decisions are made by the BAM on the node that initiated a particular write operation. The BAM makes the decision on where best to write the data blocks to ensure the file is properly protected. To do this, the BSW generates a write plan, which comprises all the steps required to safely write the new data blocks across the protection group. Once complete, the BSW will then execute this write plan and guarantee its successful completion. OneFS will not write files at less than the desired protection level, although the BAM will attempt to use an equivalent mirrored layout if there is an insufficient stripe width to support a particular FEC protection level.
Endurant Cache, or EC, is only for synchronous writes, or writes that require a stable write acknowledgement be returned to the client. EC provides ingest and staging of stable synchronous writes. EC manages the incoming write blocks and stages them to stable battery-backed NVRAM, ensuring the integrity of the write. EC also provides stable synchronous write loss protection by creating multiple mirrored copies of the data, further guaranteeing protection from single node and often multiple node catastrophic failures.

Endurant Cache was specifically developed to improve NFS synchronous write performance and write performance to VMware VMFS and NFS datastores. The other major improvement in overall node efficiency with synchronous writes comes from utilizing the Write Coalescer's full capabilities to optimize writes to disk.

The Endurant Cache ingests and stages stable synchronous writes:

Ingests the write into the cluster – The client sends the data block or blocks to the node's Write Coalescer with a synchronous write acknowledgement, or ACK, request.

Stages and stabilizes the write – At the point the ACK request is made by the client protocol, the EC Logwriter process mirrors the data block or blocks in the Write Coalescer to the EC log files in NVRAM, where the write is now protected and considered stable. Once stable, the acknowledgement, or ACK, is returned to the client. At this point the client considers the write process complete. The latency, or delay time, is measured from the start of the process to the return of the acknowledgement to the client. From this point forward, our standard asynchronous write process is followed. We let the Write Coalescer manage the write in the most efficient and economical manner according to the Block Allocation Manager, or BAM, and the BAM Safe Write, or BSW, path processes.

The write is completed – Once the standard asynchronous write process is stable, with copies of the different blocks on each of the involved nodes' L2 cache and NVRAM, the EC Log File copies are de-allocated from NVRAM using the Fast Invalid Path process. The write is always secure throughout the process. Finally, the write to the hard disks is completed and the file copies in NVRAM are de-allocated. Copies of the writes in L2 cache will remain in L2 cache until flushed through one of the normal processes.

In more detail: a client sends a file to the cluster requesting a synchronous write acknowledgement. The client begins the write process by sending 4KB data blocks. The blocks are received into the node's Write Coalescer, which is a logical separation of the node's RAM similar to but distinct from L1 and L2 cache. Once the entire file has been received into the Write Coalescer, the Endurant Cache (EC) LogWriter process writes mirrored copies of the data blocks (with some log file–specific information added) in parallel to the EC Log Files, which reside in the NVRAM. The protection level of the mirrored EC Log Files is based on the Drive Loss Protection Level assigned to the data file to be written; the number of mirrored copies equals 2X, 3X, 4X, or 5X.

Module 6: Application Integration with OneFS

Lesson 1: Demystifying Hadoop

The NameNode now resides on the Isilon cluster, giving it a complete and automated failover process. In the event that the node running as the NameNode fails, another Isilon node will immediately pick up the function of the NameNode. No data or metadata would be lost, since the distributed nature of Isilon spreads the metadata across the cluster. There is no downtime when this occurs and, most importantly, there is no need for administrative intervention to fail over the NameNode.

Hadoop Advantage using EMC Isilon

Data Protection – Hadoop does 3X mirroring for data protection and has no replication capabilities. Isilon supports snapshots, clones, and replication using its Enterprise features.

No Data Migration – Hadoop requires a landing zone for data to come to before using tools to ingest data to the Hadoop cluster. Isilon allows data on the cluster to be analyzed by Hadoop. Imagine the time it would take to push 100TB across the WAN and wait for it to migrate before any analysis can start. Isilon does in-place analytics, so no data moves around the network.

Security – Hadoop does not support kerberized authentication; it assumes all members of the domain are trusted. Isilon supports integrating with AD or LDAP and gives you the ability to safely segment access.

Dedupe – Hadoop natively 3X mirrors files in a cluster, meaning 33% storage efficiency. Isilon is 80% efficient.

Compliance and security – Hadoop has no native encryption. Isilon supports Self-Encrypting Drives, ACLs and mode bits, access zones, RBAC, and is SEC compliant.

Multi-Distribution Support – Each physical HDFS cluster can only support one distribution of Hadoop. Isilon lets you co-mingle physical and virtual versions of any Apache standards-based distros you like.

Scale Compute and Storage Independently – Hadoop pairs the storage with the compute, so if you need more space, you have to pay for more CPU that may go unused; if you need more compute, you end up with lots of overhead space. Isilon lets you scale compute as needed and storage as needed, aligning your costs with your requirements.
Synchronous writes request an ACK after each piece of the file. The size of the piece is determined by the client and may not match the 8KB block size used by OneFS. If there is a synchronous write flag, the Endurant Cache process is used to accelerate having the write considered stable and protected in NVRAM, providing the ability to return the ACK to the client faster. After the synchronous write is secure, the file blocks follow the asynchronous write process.
If the write is asynchronous, the data blocks are processed from the write
coalescer using the Block Allocation Manager (BAM) Safe Write or BSW
process. This is where FEC is calculated, the node pool, sub pool, nodes,
drives and specific blocks to write the data to are determined and the
128KB stripe units are formed.
To view the cache statistics, use the isi_cache_stats -v command. Statistics for L1, L2, and L3 cache are provided, with separate statistics for L3 data and L3 metadata.
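For example (run from any node's shell; the output columns differ between releases):

    # Dump verbose cache statistics, including separate counters
    # for L3 data and L3 metadata.
    isi_cache_stats -v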
Each node pool must contain at least three nodes. If you have fewer than three nodes, the node pool is considered to be under-provisioned. If you submit a configuration for a node pool that contains fewer than three nodes, the web administration interface will notify you that the node pool is under-provisioned. The cluster will not store files on an under-provisioned node pool.
SmartPools Configuration

All node pools in a tier and all file pool policies targeting a tier should be removed before a tier is deleted. When a tier is deleted while still containing node pools, the node pools are removed from the tier and listed as node pools. Any file pool policies targeting the deleted tier will generate notifications and require modification by the administrator.

Source-based routing (SBR) ensures that outgoing client traffic (from the cluster) is directed through the gateway of the source subnet. SBR mitigates how previous versions of OneFS only used the highest priority gateway.
Node Compatibility
Create Compatibility
Domain names are managed under a hierarchy headed by the
Internet Assigned Numbers Authority (IANA), which manages the
top of the DNS tree by administrating the data in the root
nameservers.
An A-record maps a hostname to a specific IP address to which the user would be sent for each domain or subdomain. It is simple name-to-IP resolution. For example, a server by the name of server7 would have an A record that mapped the hostname server7 to the IP address assigned to it: Server7.support.emc.com A 192.168.15.12

In the CLI, use the command isi storagepool compatibilities active create with arguments for the old and new node types. The changes to be made are displayed in the CLI. You must accept the changes by entering yes, followed by ENTER, to initiate the node compatibility.
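A sketch of that workflow; the S200/S210 node types are illustrative only, and the argument form should be confirmed with the CLI help on your release:

    # Propose a compatibility between the old and new node types.
    # The CLI displays the changes to be made and waits for confirmation.
    isi storagepool compatibilities active create S200 S210
    # Enter yes and press ENTER to initiate the node compatibility.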
NS records indicate which name servers are authoritative for the zone or domain. NS records are primarily used by companies that wish to divide their domain into subdomains. Subdomains indicate that you are delegating a portion of your domain name to a different group of name servers. You create NS records to point the name of this delegated subdomain to different name servers.

For example, if you have a domain called Mycompany.com and you want all DNS lookups for Seattle.Mycompany.com to go to a server located in Seattle, you would create an NS record that maps Seattle.Mycompany.com to the name server in Seattle, so the mapping looks like: Seattle.Mycompany.com NS SrvNS.Mycompany.com

The Directory Protection setting is configured to protect directories of files at one level higher than the data.
GNA enables SSDs to be used for cluster-wide metadata acceleration, using SSDs in one part of the cluster to store metadata for nodes that have no SSDs. The result is that critical SSD resources are maximized to improve performance across a wide range of workflows. The Global namespace acceleration setting enables file metadata to be stored on node pool SSD drives. Global namespace acceleration can be enabled if 20% or more of the nodes in the cluster contain SSDs and 1.5% or more of the total cluster storage is SSD-based. The recommendation is that at least 2.0% of the total cluster storage is SSD-based before enabling global namespace acceleration. If you go below the 1.5% SSD total cluster space capacity requirement, GNA is automatically disabled. If you SmartFail a node containing SSDs, the SSD total size percentage or the percentage of nodes containing SSDs could drop below the minimum requirement, and GNA would be disabled.

SmartPools Settings

SmartConnect is a client load balancing feature that allows segmenting of the nodes by performance, department, or subnet. SmartConnect deals with getting the clients from their devices to the correct front-end interface on the cluster.

Once the client is at the front-end interface, the associated access zone then authenticates the client against the proper directory service, whether that is external like LDAP and AD, or internal to the cluster like the local or file providers. Access zones do not dictate which front-end interface the client connects to; an access zone only determines what directory will be queried to verify authentication and what shares the client will be able to view. Once authenticated to the cluster, mode bits and ACLs (access control lists) dictate the files, folders, and directories that can be accessed by this client. Remember, when the client is authenticated, Isilon generates an access token for that user. The access token contains all the permissions and rights that the user has. When a user attempts to access a directory, the access token will be checked to verify they have the necessary rights.

In OneFS 7.0.x the maximum number of supported Access Zones is five. As of OneFS 7.1.1 the maximum number of supported Access Zones is 20.

Lesson 1
SmartConnect is a client connection balancing management feature (module) that enables client connections to be balanced across all or selected nodes in an Isilon cluster. It does this by providing a single virtual host name for clients to connect to, which simplifies connection mapping.

It provides load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources. SmartConnect eliminates the need to install client-side drivers, enabling administrators to manage large numbers of clients in the event of a system failure.

SmartConnect simplifies client connection management. Based on user-configurable policies, SmartConnect Advanced applies intelligent algorithms (e.g., CPU utilization, aggregate throughput, connection count, or round robin) and distributes clients across the cluster to optimize client performance. SmartConnect can be configured into multiple zones that can be used to ensure different levels of service for different groups of clients. All of this is transparent to the end-user.

SmartConnect zones allow granular control of where a connection is directed. An administrator can segment the cluster by workflow, allowing specific interfaces within a node to support different groups of users. Perhaps a client with a 9-node cluster containing three S-nodes, three X-nodes, and three NL-nodes wants their Research team to connect directly to the S-nodes to utilize a variety of high I/O applications. The administrators can then have the Sales and Marketing users connect to the front-end of the X-nodes to access their files.

The Virtual hot spare (VHS) option reserves the free space needed to rebuild the data if a disk or node failure occurs. Up to four full drives can be reserved. If you choose the Reduce amount of available space option, free space calculations do not include the space reserved for the virtual hot spare. The reserved VHS free space can still be used for writes unless you select the Deny new data writes option. If these first two VHS options are enabled, it is possible for the file system use to report at over 100%. VHS reserved space allocation is defined using these options:

A minimum number of virtual drives in each node pool (1-4)

A minimum percentage of total disk space in each node pool (0-20 percent)

A combination of minimum virtual drives and total disk space. The larger of the two settings determines the space allocation, not the sum of the numbers. If you configure both settings, the enforced minimum value satisfies both requirements.

The Enable global spillover section controls whether the cluster can redirect write operations to another storage pool if the target storage pool is full; otherwise the write operation fails.

The first external IP subnet was configured during the initialization of the cluster. The initial default subnet, subnet0, is always an IPv4 subnet. Additional subnets can be configured as IPv4 or IPv6 subnets. The first external IP address pool is also configured during the initialization of the cluster. The initial default IP address pool, pool0, was created within subnet0. It holds an IP address range and a physical port association.

IP address pools partition a cluster's external network interfaces into groups, or pools, of IP address ranges in a subnet, enabling you to customize how users connect to your cluster. Pools control connectivity into the cluster by allowing different functional groups, such as sales, RND, marketing, etc., access into different nodes. This is very important in those clusters that have different node types.

SmartPools Action Settings give you a way to enable or disable managing requested protection settings and I/O optimization settings. If the box is unchecked (disabled), then SmartPools will not modify or manage settings on the files. The option to Apply to files with manually managed protection provides the ability to override any manually managed requested protection setting or I/O optimization. This option can be very useful if manually managed settings were made using File System Explorer or the isi set command.
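As an example of a manually managed setting that the Apply to files with manually managed protection option would override (the path is hypothetical, and isi set flag spellings should be verified on your release):

    # Manually request +2d:1n protection on a specific file.
    isi set -p +2d:1n /ifs/data/projects/report.dat

    # Confirm the requested protection and layout settings.
    isi get /ifs/data/projects/report.dat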
The file pool policies are listed and applied in the order of that list. Only
one file pool policy can apply to a file, so after a matching policy is
found, no other policy is evaluated for that file. The default file pool
policy is always last in the ordered list of enabled file pool policies.
The SmartPools File Pool Policies page displays currently configured file
pool policies and available template policies. You can add, modify, delete,
and copy file pool policies in this section. The Template Policies section
lists the available templates that you can use as a baseline to create new
file pool policies.
File pool policies are applied to the cluster by the SetProtectPlus job, or by the SmartPools job if SmartPools is licensed. By default, this job runs at 22:00 hours every day at a low priority.

SmartConnect is available in basic (unlicensed) and advanced (licensed) versions.
With licensed SmartPools multiple file pool policies can be created to manage file
and directory storage behavior. By applying file pool policies to the files and
directories, files can be moved automatically from one storage pool to another within
the same cluster. File pool policies provide a single point of management to meet
performance, requested protection level, space, cost, and other requirements.
SmartConnect Components

The SmartConnect service IP (SIP) answers queries from DNS. There can be multiple SIPs per cluster, and they will reside on the node with the lowest array ID for their node pool. If you know the IP address of the SIP and wish to know just the zone name, you can use isi_for_array ifconfig -a | grep <IP of SIP> and it will show you just the zone that the SIP is residing within.
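For instance, with a hypothetical SIP address of 192.168.15.100:

    # isi_for_array runs the command on every node; the grep match
    # reveals the node, interface, and zone where the SIP resides.
    isi_for_array ifconfig -a | grep 192.168.15.100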
To modify the default file pool policy, click File System, click Storage Pools, and then click the File Pool Policies tab. On the File Pool Policies page, next to the default policy, click View / Edit. After finishing the configuration changes, you need to submit and then confirm your changes.

The default file pool policy is defined under the default policy. The individual settings in the default file pool policy apply to all files that do not have that setting configured in another file pool policy that you create. You cannot reorder or remove the default file pool policy.

Under I/O Optimization Settings, the SmartCache setting is enabled by default. SmartCache can improve performance by prefetching data for read operations. In the Data access pattern section, you can choose between Random, Concurrency, or Streaming. Random is the recommended setting for VMDK files. Random access works best for small files (<128 KB) and large files with random access to small blocks. This access pattern turns off prefetching. Concurrency is the default setting. It is the middle ground with moderate prefetching. Use concurrency access for file sets that get a mix of both random and sequential access. Streaming access works best for medium to large files that have sequential reads. This access pattern uses aggressive prefetching to improve overall read throughput.

A pool for data and a pool for snapshots can be specified. For data, you can choose any node pool or tier, and the snapshots can either follow the data or be assigned to a different storage location. You can also apply the cluster's default protection level to the default file pool, or specify a different protection level for the files that are allocated by the default file pool policy.

Configure SmartConnect

In the Microsoft Windows DNS Management Console, an NS record is called a New Delegation. On a BIND server, the NS record must be added to the parent zone (in BIND 9, the "IN" is optional). The NS record must contain the FQDN that you want to create for the cluster and the name you want the client name resolution requests to point to. In addition to an NS record, an A record (for IPv4 subnets) or AAAA record (for IPv6 subnets) that contains the SIP of the cluster must also be created.

A single SmartConnect zone does not support both IP versions, but you can create a zone for each IP version and give them duplicate names. So, you can have an IPv4 subnet and IP address pool with the zone name test.mycompany.com, and you can also define an IPv6 subnet using the same zone name.

Cluster Name Resolution Process
File pool policies are a set of conditions that move data to specific targets, either a specific node pool or a specific tier. By default, all files in the cluster are written anywhere on the cluster as defined in the default file pool policy.

SmartConnect will load balance client connections across the front-end ports based on what the administrator has determined to be the best choice for their cluster. If the cluster is licensed, the administrator has four options to load balance: round robin, connection count, network throughput, and CPU usage. If the cluster does not have SmartConnect licensed, it will load balance by round robin only. Connection count data, network throughput data, and CPU statistics are each collected every 10 seconds.
File pool policies with path-based policy filters and storage pool location actions are executed during the write of a file matching the path criteria. Path-based policies are first executed when the SmartPools job runs; after that, they are executed during the matching file write. Files matching file pool policies with storage pool location actions and policy filters based on attributes other than path are written to the node pool with the highest available capacity, and then moved, if necessary to match a file pool policy, when the next SmartPools job runs. This ensures that write performance is not sacrificed for initial data placement.
File pool policies are used to filter files by attributes and values that you can specify. This feature, available with the licensed SmartPools module, helps to automate and simplify high file volume management. In addition to the storage pool location, the requested protection and I/O optimization settings for files that match certain criteria can be set. File pool policy creation can be divided into two parts: specifying the file filter and specifying the actions.

When configuring IP-address pools on the cluster, an administrator can choose either static pools or dynamic pools. Static pools are best used for SMB clients because of the stateful nature of the SMB protocol. When an SMB client establishes a connection with the cluster, the session or "state" information is negotiated and stored on the server or node. If the node goes offline, the state information goes with it, and the SMB client would have to reestablish a connection to the cluster. SmartConnect is intelligent enough to hand out the IP address of an active node when the SMB client reconnects.
Module 3: Networking

If a node with client connections established goes offline, the behavior is protocol-specific. NFSv3 automatically re-establishes an IP connection as part of NFS failover. In other words, if the IP address gets moved off an interface because that interface went down, the TCP connection is reset. NFSv3 re-establishes the connection with the IP on the new interface and retries the last NFS operation. However, the SMBv1 and v2 protocols are stateful, so when an IP is moved to an interface on a different node, the connection is broken because the state is lost. NFSv4 is stateful (just like SMB) and, like SMB, does not benefit from NFS failover.

Note: A best practice for all non-NFSv3 connections is to set the IP allocation method to static. Other protocols such as SMB and HTTP have built-in mechanisms to help the client recover gracefully after a connection is unexpectedly disconnected.
Review and analyze reports that can help identify storage usage patterns
The rebalance policy determines how IP addresses are redistributed when node interface members for a given IP address pool become available again after a period of unavailability. The rebalance policy can be:

Manual Failback – IP address rebalancing is done manually from the CLI using isi networks modify pool. This causes all dynamic IP addresses to rebalance within their respective subnet.

Automatic Failback – The policy automatically redistributes the IP addresses. This is triggered by a change to the cluster membership, the external network configuration, or a member network interface.

SmartConnect deals with getting the clients from their devices to the correct front-end interface on the cluster. Access zones do not dictate which front-end interface the client connects to; an access zone only determines what directory will be queried to verify authentication and what shares the client will be able to view.

Enforcement quotas support three subtypes and are based on administrator-defined thresholds:

Hard quotas limit disk usage to a specified amount. Writes are denied after the quota threshold is reached and are only allowed again if the usage falls below the threshold.

Soft quotas enable an administrator to configure a grace period that starts after the threshold is exceeded. After the grace period expires, the boundary becomes hard, and additional writes are denied. If the usage drops below the threshold, writes are again allowed.

Advisory quotas do not deny writes to the disk, but they can trigger alerts and notifications after the threshold is reached.
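A sketch of how these subtypes map onto the CLI (the path and thresholds are hypothetical, and the exact isi quota syntax varies by OneFS release, so confirm with isi quota --help):

    # Directory quota that warns at 80 GB (advisory) and denies
    # writes at 100 GB (hard).
    isi quota quotas create /ifs/data/marketing directory --advisory-threshold 80G --hard-threshold 100G

    # List configured quotas with their current usage.
    isi quota quotas list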
Default group quotas are applied to all groups, unless a group has an
explicitly defined quota for that directory. Default group quotas operate The default access zone within the cluster is
like default user quotas, except on a group basis. called the System access zone.
You should not configure any quotas on the root of the file system (/ifs), as it could result in significant performance degradation.

Each access zone has its own authentication providers (File, Local, Active Directory, or LDAP) configured. Multiple instances of the same provider can occur in different access zones.

The quota accounting options are:

1. Default: The default setting is to only track user data, which is just the data that is written by the user. It does not include any data that the user did not directly store on the cluster.

2. Snapshot Data: This option tracks both the user data and any associated snapshots. This setting cannot be changed after a quota is defined. To disable snapshot tracking, the quota must be deleted and recreated.

3. Data Protection Overhead: This option tracks both the user data and any associated FEC or mirroring overhead. This option can be changed after the quota is defined.

4. Snapshot Data and Data Protection Overhead: Tracks user data, snapshots, and overhead, with the same restrictions.

Most quota configurations do not need to include overhead calculations. If you configure overhead settings, do so carefully, because they can significantly affect the amount of disk space that is available to users.
Module 5: Storage Administration

Quotas can also be configured to include the space that is consumed by snapshots. A single path can have two quotas applied to it: one without snapshot usage (default) and one with snapshot usage. If snapshots are included in the quota, more files are included in the calculation.

When joining the Isilon cluster to an AD domain, the Isilon cluster is treated as a resource.
Local Provider – System

If the System access zone is set to its defaults, the Domain Admins and Domain Users groups from the AD domain are automatically added to the cluster's local Administrators and Users groups, respectively. It's important to note that, by default, the cluster's local Users group also contains the AD domain group Authenticated Users.

Thin provisioning is a tool that enables an administrator to define quotas that exceed the capacity of the cluster. Doing this accomplishes two things:

1. It allows a smaller initial purchase of capacity/nodes, and the ability to simply add more as needed, promoting a capacity-on-demand model.

2. It enables the administrator to set larger quotas initially, so that continual increases as users consume their allocated capacity are not needed.

However, thin provisioning requires that cluster capacity use be monitored carefully. With a quota that exceeds the cluster capacity, there is nothing to stop users from consuming all available space, which can result in service outages for all users and services on the cluster.

Nesting quotas refers to having multiple quotas within the same directory structure.

Lesson 3

Access Zone Architecture

Access Zones allow administrators to carve a large cluster into smaller clusters. In prior versions of OneFS, only SMB and HDFS were zone-aware. Now, with the release of OneFS 7.2, NFS is zone-aware, meaning that NFS exports and aliases can exist and be visible on a per-zone basis rather than existing only within the System zone. Each export is associated with only one zone, can only be mounted by clients in that zone, and can only expose paths below the zone root. By default, any export command applies to the client's current zone.
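A sketch of carving out a zone from the CLI; the zone name and path are hypothetical, and the flag spellings should be verified with the isi zone zones create help on your release:

    # Create an access zone rooted at its own zone base directory.
    isi zone zones create --name marketing --path /ifs/marketing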
The System access zone supports the protocols SMB, NFS, FTP, HTTP, and SSH. If only the System access zone is used, all joined or newly created authentication providers are automatically contained within the System access zone. All SMB shares and NFS exports are also available through the System access zone.

Quota events can generate notifications by email or through a cluster event. The email option sends messages using the default cluster settings. You can specify to send the email to the owner of the event, which is the user that triggered the event, or you can send email to an alternate contact, or both the owner and an alternate. You also have the option to use a customized email message template. If you need to send the email to multiple users, you need to use a distribution list.

If you are using LDAP or Active Directory to authenticate users, the Isilon cluster uses the email settings for the user stored within the directory. If no email information is stored in the directory, or authentication is performed by a Local or NIS provider, you must configure a mapping.
OneFS enables you to configure multiple authentication providers on a per-zone basis. In other words, more than one instance of LDAP, NIS, File, Local, and Active Directory providers per Isilon cluster is possible. Multiple access zones can be created to accommodate an enterprise environment. It is a best practice to ensure that each of these access zones has its own zone base directory to ensure a unique namespace per access zone.

The default access zone within the cluster is called the System access zone. By default, the built-in System access zone includes a local provider and a file provider, and can contain one of each of the other authentication providers.

An access zone becomes an independent point for authentication and access to the cluster. Only one Active Directory provider can be configured per access zone. If you connect the cluster to multiple (untrusted) AD environments, only one of these AD providers can exist in a zone at one time.

SMB shares that are bound to an access zone are only visible/accessible to users connecting to the SmartConnect zone/IP address pool to which the access zone is aligned. SMB authentication and access can be assigned to any specific access zone. NFS may be accessed through each zone, and NFS authentication can now occur in its own NFS zone because the NFS protocol is zone-aware in OneFS 7.2.

A default notification is enabled when SmartQuotas is enabled. You can specify different notification parameters for each type of quota (advisory, soft, and hard).

You can use snapshots to protect data against accidental deletion and modification. To use SnapshotIQ, you must activate a SnapshotIQ license on the cluster. However, some OneFS operations generate snapshots for internal system use without requiring a SnapshotIQ license. If an application generates a snapshot, and a SnapshotIQ license is not configured, you can still view the snapshot. However, all snapshots generated by OneFS operations are automatically deleted after they are no longer needed. You can disable or enable SnapshotIQ at any time.

A OneFS snapshot is a logical pointer to data stored on a cluster at a specific point in time. Snapshots target directories on the cluster, and include all data within that directory, including any subdirectories contained within. SnapshotIQ captures Copy on Write (CoW) images. You can configure basic functions for the SnapshotIQ application, including automatically creating or deleting snapshots, and setting the amount of space that is assigned exclusively to snapshot storage.

Authentication Sources and Access Zones

There are three things to know about joining multiple authentication sources through access zones.

First, the joined authentication sources do not belong to any zone; instead they are seen by zones, meaning that the zone does not own the authentication source. This allows other zones to also include an authentication source that may already be in use by an existing zone.

Second, when joining AD domains, only join those that are not in the same forest. Trusts within the same forest are managed by AD, and joining them could allow unwanted authentication between zones.

Finally, there is no built-in check for overlapping UIDs. When two users in the same zone, but from different authentication sources, share the same UID, this can cause access issues.

The default limit is 20,000 snapshots. Snapshots should be set up for separate, distinct, and unique directories. Do not snapshot the /ifs directory. Instead, you can create snapshots for the subdirectory structure under the /ifs directory. Snapshots only start to consume space when files in the current version of the directory are changed or deleted.
There are some best practices for configuring access zones. First, administrators should create a separate /ifs tree for each access zone. This enables overlapping directory structures to exist without conflict, and a level of autonomous behavior without the risk of unintentional conflict with other access zone structures. Second, administrators should consider the System access zone exclusively as an administration zone. To do this, they should remove all but the default shares from the System access zone, and limit authentication into the System access zone only to administrators. Each access zone then works with exclusive access to its own shares, providing another level of access control and data access isolation.

Isilon recommends joining the cluster to the LDAP environment before joining AD so that the AD users do not have their SIDs mapped to cluster 'generated' UIDs. If the cluster is a new configuration and no client access has taken place, the order LDAP/AD or AD/LDAP doesn't matter, as there have been no client SID-to-UID or UID-to-SID mappings.

Snapshots are created almost instantaneously regardless of the amount of data contained in the snapshot. A snapshot is not a copy of the original data, but only an additional set of pointers to the original data. So, at the time it is created, a snapshot consumes a negligible amount of storage space on the cluster. Snapshots reference or are referenced by the original file. If data is modified on the cluster, only one copy of the changed data is made. This allows the snapshot to maintain a pointer to the data that existed at the time that the snapshot was created, even after the data has changed. Because snapshots do not consume a set amount of storage space, there is no requirement to pre-allocate space for creating a snapshot. You can choose to store snapshots in the same or a different physical location on the cluster than the original files.

There are two paths through which to access snapshots. The first is through the /ifs/.snapshot directory. This is a virtual directory that allows you to see all the snapshots listed for the entire cluster. From here, users can only open the .snapshot directories for which they already have permissions; they would be unable to open or view any .snapshot file for any directory to which they did not already have access rights.

The second way to access your snapshots is through the .snapshot directory in the path in which the snapshot was taken: i.e., if we are snapping a directory located at /ifs/data/students/tina, we would be able to view, through the CLI or a Windows Explorer window (with the view hidden files attribute enabled), the hidden .snapshot directory. The path would look like: /ifs/data/students/tina/.snapshot.

Lesson 3

Clones can be created on the cluster using the cp command and do not require you to license the SnapshotIQ module.

The isi snapshot list | wc -l command will tell you how many snapshots you currently have on disk.

SMB time is enabled by default and is used to maintain time synchronization between the AD domain time source and the cluster. Nodes use NTP between themselves to maintain cluster time. When the cluster is joined to an AD domain, the cluster must stay in sync with the time on the domain controller; otherwise, authentication may fail if the AD time and cluster time have more than a five minute differential.

The Cluster Time property sets the cluster's date and time settings, either manually or by synchronizing with an NTP server. There may be multiple NTP servers defined. The first NTP server on the list is used first, with any additional servers used only if a failure occurs. After an NTP server is established, setting the date or time manually is not allowed. After a cluster is joined to an AD domain, adding a new NTP server can cause time synchronization issues. The NTP server will take precedence over the SMB time synchronization with AD and overrides the domain time settings on the cluster.

The best-case support recommendation is to not use SMB time and to use only NTP, if possible, on both the cluster and the AD domain controller. The NTP source on the cluster should be the same source as the AD domain controller's NTP source. If SMB time must be used, then NTP should be disabled on the cluster and only SMB time used.

Only one node on the cluster should be set up to coordinate NTP for the cluster. This NTP coordinator node is called the chimer node. The chimer node is configured by excluding all other nodes by their node number, using the isi_ntp_config add exclude node# node# node# command.
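For example, on a hypothetical 4-node cluster where node 1 should remain the only chimer:

    # Exclude nodes 2, 3, and 4 from chimer duty so that node 1
    # coordinates NTP for the cluster.
    isi_ntp_config add exclude 2 3 4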
You can take snapshots at any point in the directory tree. Each department or user can have their own snapshot schedule. All snapshots are accessible in the virtual directory /ifs/.snapshot. Snapshots are also available in any directory in the path where a snapshot was taken, such as /ifs/data/music/.snapshot. Snapshot remembers which .snapshot directory you entered through.

Permissions are preserved at the time of the snapshot. If the permissions or owner of the current file change, it does not affect the permissions or owner of the snapshot version.
The lsassd daemon mediates between the authentication protocols used by clients and the authentication providers in the third row, which check their data repositories, represented on the bottom row, to determine user identity and subsequent access to files.

Authentication Providers

The authentication providers handle communication with authentication sources. These sources can be external, such as Active Directory (AD), Lightweight Directory Access Protocol (LDAP), and Network Information Service (NIS). The authentication source can also be located locally on the cluster or in password files that are stored on the cluster. Authentication information for local users on the cluster is stored in /ifs/.ifsvar/sam.db.

Under FTP and HTTP, the Isilon cluster supports Anonymous mode, which allows users to access files without providing any credentials, and User mode, which requires users to authenticate to a configured authentication source.

You can manage snapshots by using the web administration interface or the command line. To manage SnapshotIQ in the web administration interface, browse to the Data Protection tab, click SnapshotIQ, and then click Settings. To manage SnapshotIQ at the command line, use the isi snapshot command:

isi snapshot settings view
isi snapshot settings modify

You can create snapshots either by configuring a snapshot schedule or manually generating an individual snapshot. Manual snapshots are useful if you want to create a snapshot immediately, or at a time that is not specified in a snapshot schedule. The most common method is to use schedules to generate the snapshots. A snapshot schedule generates snapshots of a directory according to a schedule. You can also assign an expiration period to the snapshots that are generated, automating the deletion of snapshots after the expiration period. The benefit of scheduled snapshots is not having to manually create a snapshot every time you would like one taken.
If data is accidentally erased, lost, or otherwise corrupted or compromised, any user with the Windows Shadow Copy Client installed locally on their computer can restore the data from the snapshot file. To recover an accidentally deleted file, right-click the folder that previously contained the file, click Restore Previous Version, and then identify the specific file you want to recover. To restore a corrupted or overwritten file, right-click the file itself, instead of the folder that contains the file, and then click Restore Previous Version. This functionality is enabled by default starting in OneFS 7.0.

LDAP can be used in mixed environments and is widely supported, but it does not offer advanced features that exist in other directory services such as Active Directory. Within LDAP, each entry has a set of attributes, and each attribute has a name and one or more values associated with it, similar to the directory structure in AD. Each entry consists of a distinguished name, or DN, which also contains a relative distinguished name (RDN). The base DN is also known as a search DN, since a given base DN is used as the starting point for any directory search. The top-level names almost always mimic DNS names; for example, the top-level Isilon domain would be dc=isilon,dc=com for Isilon.com.

The LDAP provider in an Isilon cluster supports the following features:

Users, groups, and netgroups

Configurable LDAP schemas. For example, the ldapsam schema allows NTLM authentication over the SMB protocol for users with Windows-like attributes.

Simple bind authentication (with or without SSL)

Redundancy and load balancing across servers with identical directory data

Multiple LDAP provider instances for accessing servers with different user data

Encrypted passwords

On the Join a Domain page, type the name of the domain you want the cluster to join. Type the user name of the account that has the right to add computer accounts to the domain, and then type the account password. The Enable Secure NFS check box enables users to log in using LDAP credentials, but to do this, Services for NFS must be configured in the AD environment.

NIS provides authentication and uniformity across local area networks. The NIS provider exposes the passwd, group, and netgroup maps from a NIS server. Hostname lookups are also supported. Multiple servers can be specified for redundancy and load balancing. NIS is different from NIS+, which Isilon clusters do not support.

Replication provides for making additional copies of data, and actively updating those copies as changes are made to the source. Isilon's replication feature is called SyncIQ. SyncIQ creates and references snapshots to replicate a consistent point-in-time image of a root directory, which will be the source of the replication. Metadata, such as access control lists (ACLs) and alternate data streams (ADS), is replicated along with data. SyncIQ enables you to maintain a consistent backup copy of your data on another Isilon cluster.

SyncIQ uses asynchronous replication. Asynchronous replication is similar to an asynchronous file write: the target system passively acknowledges receipt of the data and returns an ACK, and the data is then passively written to the target. SyncIQ enables you to replicate data from one Isilon cluster to another. SyncIQ offers automated failover and failback capabilities that enable you to continue operations on another Isilon cluster if a primary cluster becomes unavailable. You must activate a SyncIQ license on both the primary and the secondary Isilon clusters before you can replicate data between them.
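A sketch of creating a basic SyncIQ policy from the CLI; the policy name, paths, target host, and schedule are hypothetical, and the flag spellings should be checked against the isi sync policies create help for your release:

    # Replicate /ifs/data/projects to a partner cluster nightly.
    isi sync policies create --name nightly-projects --action sync --source-root-path /ifs/data/projects --target-host cluster2.example.com --target-path /ifs/backup/projects --schedule "every day at 22:00"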
Failover is the process of allowing clients to modify data on a target cluster. Failover changes the target directory from read-only to read-write status. Failover is managed per SyncIQ policy; only those policies failed over are modified. SyncIQ only changes the directory status and does not change other operations required for client access to the data. Network routing and DNS must be redirected to the target cluster. Any authentication resources such as AD or LDAP must be available to the target cluster. All shares and exports must be available on the target cluster or be created as part of the failover process.

If the offline source cluster later becomes accessible again, you can fail back to the original source cluster. Failback is the process of copying changes that occurred on the original target while failed over back to the original source. This allows clients to access data on the source cluster again, resuming the normal direction of replication from the source to the target. Each SyncIQ policy must be failed back. Like failover, failback must be selected for each policy. The same network changes must be made to restore access to direct clients to the source cluster.

Failover revert is a process useful for instances when the source becomes available sooner than expected. Failover revert allows administrators to quickly return access to the source cluster, and restore replication to the target. Failover revert may occur even if data modification has occurred to the target directories. If data has been modified on the original target cluster, then a failback operation must be performed to preserve those changes; otherwise, any changes to the target cluster data will be lost. A failover revert is not supported for SmartLock directories.

Access tokens form the basis of who you are when performing actions on the cluster and supply the primary owner and group identities to use during file creation. OneFS, during the authentication process, creates its own token for users that successfully authenticate to the cluster. Access tokens are also compared against permissions on an object during authorization checks.

OneFS supports three primary identity types, each of which can be stored directly on the file system. These identity types are used when creating files, checking file ownership or group membership, and performing file access checks. The identity types supported by OneFS are:

User identifier, or UID, is a 32-bit string that uniquely identifies users on the cluster. UIDs are used in UNIX-based systems for identity management.

Group identifier, or GID, for UNIX serves the same purpose for groups that UID does for users.

Security identifier, or SID, is a unique identifier that begins with the domain identifier and ends with a 32-bit relative identifier (RID). Most SIDs take the form S-1-5-21-<A>-<B>-<C>-<RID>, where <A>, <B>, and <C> are specific to a domain or computer, and <RID> denotes the object inside the domain. SID is the primary identifier for users and groups in Active Directory.
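To see what OneFS actually puts in a token, the auth mapping commands can display it; the username below is hypothetical, and the exact subcommand layout is an assumption to verify with isi auth mapping --help on your release:

    # Display the access token built for a user: UID/GID, SID,
    # and group memberships used in authorization checks.
    isi auth mapping token DOMAIN\\jsmith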
4. Allocate a UID/GID.
You can configure ID mappings on the Access page. To open this page, expand the Membership & Roles menu, and then click User Mapping. When you configure the settings on this page, the settings are persistent until changed. The settings here can, however, have complex implications, so if you are in any doubt about those implications, the safe option is to talk to Isilon support staff and establish what the likely outcome will be.

OneFS performs deduplication at the block level. Deduplication on Isilon is an asynchronous batch job that occurs transparently to the user. Stored data on the cluster is inspected, block by block, and one copy of duplicate blocks is saved. File records point to the shared blocks, but file metadata is not deduplicated. The user should not experience any difference except for greater efficiency in data storage on the cluster, because the user-visible metadata remains untouched; only internal metadata is altered.
Another limitation is that deduplication does not occur across the length and breadth of the entire cluster, but only on each disk pool individually.

Deduplication on Isilon is a relatively nonintrusive process. Rather than increasing the latency of write operations by deduplicating data on the fly, it is done after the fact. This means that the data starts out at the full literal size on the cluster's drives, and might only reduce to its deduplicated, more efficient representation hours or days later.

UIDs, GIDs, and SIDs are the primary identifiers of identity. Names, such as usernames, are classified as secondary identifiers. This is because different systems such as LDAP and Active Directory may not use the same naming convention to create object names, and there are many variations in the way a name can be entered or displayed. Some examples of this include the following:

UNIX assumes unique, case-sensitive namespaces for users and groups. For example, Name and name can represent different objects.

Windows provides a single namespace for all objects that is not case-sensitive, but specifies a prefix that targets a specific Active Directory domain; for example, domain\username.

Kerberos and NFSv4 define principals, which require that all names have a format similar to email addresses; for example, name@domain.

As an example, given the name support and the domain EXAMPLE.COM, then support, EXAMPLE\support, and support@EXAMPLE.COM are all names for a single object in Active Directory.
How Deduplication Works on OneFS

Deduplication on Isilon identifies identical blocks of storage duplicated across the pool. Instead of storing the blocks in multiple locations, deduplication stores them in one location. Deduplication reduces storage expenses by reducing storage needs: less duplicated data means fewer blocks are required to store it.
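The following is a minimal sketch of how a deduplication job might be scoped and run from the CLI, assuming OneFS 7.x SmartDedupe syntax; the path /ifs/data/projects is hypothetical, and flags may vary by release.

    # Limit deduplication to specific directory trees (hypothetical path)
    isi dedupe settings modify --paths=/ifs/data/projects

    # Start a deduplication job through the job engine
    isi job jobs start Dedupe

    # Review space savings after the job completes
    isi dedupe stats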
On-disk identities map identities at a global level for individual protocols. It is important to choose the preferred identity to store on disk because most protocols require some level of mapping to operate correctly. Only one set of permissions, POSIX-compatible or Microsoft, is authoritative. The on-disk identity helps the system decide which is the authoritative representation of an object's permissions; the authoritative representation preserves the file's original permissions.

OneFS uses an on-disk identity to transparently map identities for different protocols. Using on-disk identities, you can choose whether to store the UNIX or the Windows identity, or allow the system to determine the correct identity to store. The available on-disk identity types are UNIX, SID, and Native.

If the UNIX on-disk identity type is set, the system always stores the UNIX identifier, if available. During authentication, the system authentication daemon, lsassd, looks up any incoming SIDs in the configured authentication sources. If a UID/GID is found, the SID is converted to either a UID or GID. If a UID/GID does not exist on the cluster, whether it is local to the client or part of an untrusted AD domain, the SID is stored instead. This setting is recommended for NFSv2 and NFSv3, which use UIDs and GIDs exclusively.

If the SID on-disk identity type is set, the system always stores a SID, if available. During the authentication process, lsassd searches the configured authentication sources for SIDs to match to an incoming UID or GID. If no SID is found, the UNIX ID is stored on disk.

If the Native on-disk identity type is set, the lsassd daemon attempts to choose the correct identity to store on disk by running through each of the ID mapping methods. If a user or group does not have a real UNIX identifier (UID or GID), it stores the SID. This is the default setting in OneFS 6.5 and later.
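As an illustration, the on-disk identity can be viewed and changed from the CLI with commands along these lines; this is a sketch assuming the OneFS 7.x isi auth syntax, so verify the exact flags for your release.

    # View the global authentication settings, including the current on-disk identity
    isi auth settings global view

    # Set the on-disk identity type (unix, sid, or native)
    isi auth settings global modify --on-disk-identity=native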
Lesson 5
Use Cases

OneFS supports the standard UNIX tools for changing permissions: chmod and chown. The chown command is used to change ownership of a file. You must have root user access to change the owner of a file.
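For example, run from the cluster shell (the file path and user name here are hypothetical):

    # Change the owner of a file (requires root access)
    chown alice /ifs/data/projects/report.txt

    # Change the mode bits: rwx for the owner, r-x for the group, none for others
    chmod 750 /ifs/data/projects/report.txt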
In Windows environments, file and directory access rights are defined in a Windows Access Control List, or ACL. A Windows ACL is a list of access control entries, or ACEs. Each entry contains a user or group and a permission that allows or denies access to a file or folder. Windows includes many rights that you can assign individually, or you can assign a set of rights bundled together as a permission.

When working with Windows, you should remember a few important rules that dictate the behavior of Windows permissions. First, if a user has no permission assigned in an ACL, then the user has no access to that file or folder. Second, permissions can be explicitly assigned to a file or folder, and they can also be inherited from the parent folder. By default, when a file or folder is created, it inherits the permissions of the parent folder. If a file or folder is moved, it retains the original permissions. You can view security permissions in the properties of the file or folder in Windows Explorer. If the check boxes in the Permissions box are not available (grayed out), those permissions are inherited. You can explicitly assign a permission, and it is important to remember that explicit permissions override inherited permissions. The last rule to remember is that Deny permissions take precedence over Allow permissions. However, an inherited Deny permission is overridden by an explicit Allow permission.

After enabling the Deduplication license, you can find Deduplication under the File System tab. From this screen you can start a deduplication job and view any reports that have been generated. You can also make alterations to settings in terms of which paths are deduplicated.

Lesson 6: SmartLock
ACLs are more complex than mode bits and are also capable of expressing much richer sets of access rules. However, not all POSIX mode bits can be represented by Windows ACLs, any more than POSIX mode bits can represent all Windows ACL values. Instead of the standard three permissions available for mode bits, ACLs have 32 bits of fine-grained access rights. Of these, the upper 16 bits are general and apply to all object types. The lower 16 bits vary between files and directories, but are defined in a compatible way that allows most applications to use the same bits for files and directories.

In OneFS, an ACL can contain ACEs with a UID, GID, or SID as the trustee. On a Windows computer, you can configure ACLs in Windows Explorer. For OneFS, in the web administration interface, you can change ACLs on the ACL policies page.
Lesson 2

Mixed Environments

NFS exports and SMB shares on the cluster can be configured for the same data. When you assign UNIX permissions to a file, no ACLs are stored for that file. However, a Windows system processes only ACLs; Windows does not process UNIX permissions. Therefore, when you view a file's permissions on a Windows system, the Isilon cluster must translate the UNIX permissions into an ACL. Synthetic ACLs are the cluster's translation of UNIX permissions so that they can be understood by a Windows client. If a file also has Windows-based ACLs (and not only UNIX permissions), it is considered by OneFS to have advanced ACLs.

If a file has UNIX permissions, you may notice synthetic ACLs when you run the ls -le command on the cluster in order to view a file's ACLs. Advanced ACLs display a plus (+) sign when listed using an ls -l command.
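For instance, on the cluster (the file name is hypothetical):

    # Mode bits view; a + after the mode string indicates an advanced (stored) ACL
    ls -l /ifs/data/projects/report.txt

    # Full ACL view, whether synthetic or advanced
    ls -le /ifs/data/projects/report.txt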
Module 4: Authentication
Permissions Overview

OneFS stores an internal representation of the permissions of a file system object, such as a directory or a file. The internal representation, which can contain information from either the POSIX mode bits or the ACLs, is based on RFC 3530, which states that a file's permissions must not make it appear more secure than it really is. The internal representation can be used to generate a synthetic ACL, which approximates the mode bits of a UNIX file for an SMB client. Since OneFS derives the synthetic ACL from mode bits, it can express only as much permission information as mode bits can, and no more.

Since the ACL model is richer than the POSIX model, no permissions information is lost when POSIX mode bits are mapped to ACLs. When ACLs are mapped to mode bits, however, ACLs must be approximated as mode bits and some information may be lost.
Authorization Process

OneFS compares the access token presented during the connection with the authorization data found on the file. All user and identity mapping occurs during token generation, so no mapping is performed when evaluating permissions. OneFS supports two types of authorization data on a file: access control lists (ACLs) and UNIX permissions.
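To inspect the token that OneFS generates for a user, a command along the following lines can be used. This sketch assumes the OneFS 7.x isi auth mapping token command, reusing the document's example account; the flag spelling may differ between releases.

    # Display the access token generated for a user (example account from above)
    isi auth mapping token --user='EXAMPLE\support'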
On the ACL policies page, the following environment options are available:

Click UNIX only for cluster permissions to operate with UNIX semantics, as opposed to Windows semantics. This option prevents ACL creation on the system.
Click Balanced for cluster permissions to operate in a mixed UNIX and Windows
environment. This setting is recommended for most cluster deployments.
Click Windows only for the cluster permissions to operate with Windows semantics,
as opposed to UNIX semantics. If you enable this option, the system returns an error
on UNIX chmod requests.
Click Configure permission policies manually to configure individual permission-
policy settings.
In the command-line interface, you can create shares using the isi smb shares create command. You can also use the isi smb shares modify command to edit a share, and isi smb shares list to view the current Windows shares on a cluster.
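A minimal sketch, assuming OneFS 7.x syntax and a hypothetical share name and path:

    # Create a share
    isi smb shares create ProjectShare --path=/ifs/data/projects --description="Project data"

    # Edit the share and list all shares
    isi smb shares modify ProjectShare --description="Active project data"
    isi smb shares list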
OneFS supports the automatic creation of SMB home directory paths for
users. Using variable expansion, user home directories are automatically
provisioned. Home directory provisioning enables you to create a single
home share that redirects users to their SMB home directories. A new
directory is automatically created if one does not already exist.
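As a sketch of home directory provisioning with expansion variables, where %U expands to the user name; the flags shown assume OneFS 7.x and should be verified for your release.

    # A single home share that redirects each user to /ifs/home/<username>,
    # creating the directory on first connection if it does not exist
    isi smb shares create HOME --path=/ifs/home/%U --allow-variable-expansion=yes --auto-create-directory=yes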
Network File System (NFS) is a protocol that allows a client computer to access files over a network. It is an open standard that is used by UNIX clients. You can configure NFS to allow UNIX clients to address content stored on Isilon clusters. NFS is enabled by default in the cluster; however, you can disable it if it isn't needed.

Isilon supports NFS protocol versions 3 and 4. Kerberos authentication is supported. You can apply individual host rules to each export, or you can specify all hosts, which eliminates the need to create multiple rules for the same host. When multiple exports are created for the same path, the more specific rule takes precedence.
Lesson 4

Enable NFS

In the web administration interface, click PROTOCOLS > UNIX Sharing (NFS), and then select Global Settings. The NFS service settings are the global settings that determine how the NFS file sharing service operates.

Support for NFS version 3 is enabled by default; NFSv4 is disabled by default. If NFSv4 is enabled, the name for the NFSv4 domain needs to be specified in the NFSv4 domain box.

The Lock Protection Level setting allows the NFS lock state to be preserved when a node fails in the cluster. The number set is the number of nodes that can fail simultaneously and still preserve the lock state.

Other configuration steps on the NFS Settings page include reloading the cached NFS exports configuration to ensure that any DNS or NIS changes take effect immediately, customizing the user/group mappings and the security types (UNIX and/or Kerberos), and other advanced NFS settings.
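As an illustrative sketch, an export can also be created from the CLI; the path and client subnet below are hypothetical, and option names should be checked against isi nfs exports create --help for your OneFS release.

    # Create an export for a hypothetical path, restricted to one client subnet
    isi nfs exports create /ifs/data/projects --clients=10.10.10.0/24 --read-only=false

    # List the exports configured on the cluster
    isi nfs exports list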
NFSv3 and NFSv4 Compared

NFSv3 does not track state. A client can be redirected to another node, if configured, without interruption to the client. NFSv4 tracks state, including file locks. Automatic failover is not an option in NFSv4.

NFSv4 can use Windows Access Control Lists (ACLs). NFSv4 mandates strong authentication. It can be used with or without Kerberos, but NFSv4 drops support for UDP communications and only uses TCP, because of the need for larger packet payloads than UDP will support.
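For example, from a Linux client the protocol version is chosen at mount time; the cluster name below is hypothetical.

    # Mount with NFSv3 (stateless; UDP or TCP)
    mount -t nfs -o vers=3 cluster.example.com:/ifs/data /mnt/data

    # Mount with NFSv4 (stateful; TCP only)
    mount -t nfs -o vers=4 cluster.example.com:/ifs/data /mnt/data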