Securing NFS in AIX
Chris Almond
Lutz Denefleh
Sridhar Murthy
Aniket Patel
John Trindle
ibm.com/redbooks
International Technical Support Organization
November 2004
SG24-7204-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page ix.
Notices
Trademarks
Preface
The team that wrote this redbook
Acknowledgements
Become a published author
Comments welcome
5.6 Setting up the NAS with a legacy database
5.6.1 Setup of a KDC server
5.6.2 Installing the IBM NAS file sets
5.6.3 Initial basic KDC functions test
5.6.4 Create user principals on the KDC server
5.6.5 Create the NFS server principals on the KDC server
5.7 Setting up an NFS V4 server with NAS on a different KDC server
5.7.1 Create the NFS server keytab file entry
5.7.2 Check the NFS V4 server before client access
5.7.3 Set up the NFS registry daemon
5.7.4 Set up the gssd daemon on the NFS V4 server
5.8 Setting up an NFS V4 client with NAS
5.8.1 General steps for all types of clients
5.8.2 Install the NAS client code
5.8.3 Set up the NFS domain
5.8.4 Set up the NFS domain-to-realm map
5.8.5 Full client installation steps
5.8.6 Slim client installation steps
5.8.7 Configuring RPCSEC_GSS on the clients
5.9 Preparing the system for Tivoli Directory Server and Kerberos V5
5.9.1 Set up procedure
5.9.2 Configure IBM Tivoli Directory Server
5.9.3 Configure the KDC server with LDAP backend
5.9.4 Configure the NFS V4 client for integrated login services
5.10 Integrating NFS V4 with a Linux client
5.10.1 NFS server and client setup
5.10.2 Read-only NFS V4 mount
5.10.3 Read/write NFS V4 mounts on Linux
5.10.4 Pseudo-file system in NFS V4 Linux client
5.11 Windows KDC and NFS V4 AIX 5.3
5.12 Setting up Kerberos cross-realm access
5.12.1 Add the krbtgt service principal to every KDC server
5.12.2 Kerberos configuration file changes on the KDC server, NFS V4 client and server
5.12.3 Add NFS domain-to-realm map on NFS V4 client and server
5.12.4 Client access verification
5.12.5 Client access mount using cross-realms
IBM Redbooks
Other IBM publications
Non-IBM publications
Other information sources
Glossary
Index
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, and service names may be trademarks or service marks of others.
Network File System Version 4 (NFS V4) is the latest defined client-to-server
protocol for NFS. A significant upgrade of NFS V3, it was defined under the IETF
framework by many contributors. NFS V4 introduces a major changes to the way
NFS has been implemented and used up until now, including stronger security,
wide area network sharing, and broader platform adaptability.
In the initial implementation of NFS V4 in AIX 5.3, the most important functional
differences are related to security. Chapter 3 and parts of the planning and
implementation chapters in Part 2 cover this topic in detail.
Part 3 Appendices
Appendix A. Kerberos
This appendix provides detailed background information on the inner
workings of the Kerberos authentication system.
Lutz Denefleh is a team leader at the PLM Technical Support Eastern Region
Support Center in Mainz, Germany. He holds a Graduate Engineer degree in
Fluid Dynamics. He has 16 years of experience in the Information Technology
field and has worked at IBM for 15 years. His areas of expertise include solution
implementations on RS/6000®, such as CATIA, Lotus Notes®/Domino, Tivoli®,
DCE/DFS™, and Internet technologies. He is now responsible for the IBM
internal infrastructure used by the PLM World Wide Technical Support.
Aniket Patel is a Team Leader at the IBM UK UNIX Support Centre. He has
eight years of experience in UNIX and has worked at IBM for six years. Aniket
graduated from Kingston University, UK, with a Bachelor of Science (Honours
Degree) in Computer Science. His areas of expertise include UNIX Support on
AIX, DYNIX/ptx®, and Linux, NFS, TCP/IP, SNA, X.25, DCE, Sendmail,
MQSeries®, and the Microsoft® Windows® operating systems. Prior to recently
taking on the role of Team Leader, Aniket was responsible for a team of Network
Specialists within the IBM UK UNIX Support Centre.
John Trindle works for Boeing in Seattle, Washington, as a systems design and
integration specialist. He has worked with NFS on various UNIX systems
(including SunOS/Solaris, DEC Ultrix, HP-UX, and most recently AIX) for 18
years. He holds a Bachelor of Science degree in General Engineering from the
United States Military Academy, West Point, NY. His current areas of expertise
include the AIX operating system and storage integration with AIX. He also
designs and oversees several hierarchical storage management implementations
using IBM Tivoli Storage Manager for Space Management.
Acknowledgements
The following people provided significant support for this project:
Carl Burnett, IBM AIX Kernel Architecture Team: for his thoughtful insights and
overall guidance in helping us develop a content strategy for this book.
Ufuk Celikka, IBM AIX Security Development Team: for technical support during
NFS V4 testing.
Bill McAllister, Team Leader IBM UK UNIX Support Centre: for his support in
reviewing draft content and providing feedback during the project.
Brian L. McCorkle, IBM AIX NFS Development Team: for his continued patience
and assistance while we tried to understand the implementation of NFS V4 on
AIX. Brian’s support enabled the team to meet its content goals for the book.
Dave Sheffield, Team Leader IBM AIX NFS Development Team: for facilitating
access to Development Team resources whenever we needed it.
Steve Sipocz, IBM BCS Systems Integration Services: for technical support and
review of our integration scenarios.
Betsy Thaggard
ITSO Editor, Austin Center
And other members of the AIX NFS Development Team and the AIX NAS
Development Team, for their support on various technical issues.
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you’ll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us because we want our Redbooks™ to be as
helpful as possible. Send us your comments about this or other Redbooks in one
of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. JN9B Building 905 3D004
11501 Burnet Road
Austin, Texas 78758-3493
Part 1. NFS V4 fundamentals
We give you an introduction to the NFS protocol, beginning with a look at how
NFS works. We then move on to a brief history of NFS followed by NFS V4. We
also introduce you to some of the AIX 5.3 implementation specifics of NFS V4.
Over the past few years there have been developments in multiple types of
network capable file systems. These include the Apollo Domain, the Andrew File
System (AFS®), the AT&T Remote File System (RFS), IBM DFS, and Sun
Microsystems’ Network File System (NFS). Each of these has had beneficial
features and limiting drawbacks.
Of all the network file systems, NFS is probably the most widely used. NFS is
available on almost all versions of UNIX, as well as on Apple Macintosh systems,
MS-DOS, Windows, OS/2®, and VMS. It has continued to mature, and the latest
revision, Version 4, will help to advance and expand its reach.
So, what is NFS? NFS is a distributed file system that enables users to access
files and directories on remote servers as if they were local. For example, the
user can use operating system commands to create, remove, read, write, and set
file attributes for remote files and directories. NFS is independent of machine
types, operating systems, and network architectures through the use of Remote
Procedure Calls (RPCs).
NFS operates on a client/server basis. An NFS server has files on a local disk,
which are accessed through NFS on a client machine. To handle these
operations, NFS consists of:
Networking protocols
Client and server daemons
Kernel extensions
The kernel extensions are outside the scope of this book, but the protocols,
daemons, planning, and implementation of NFS V4 will be discussed.
NFS V1 was Sun’s prototype version and was never released for public use.
NFS V2 was released in 1985 with the SunOS V2 operating system. Many UNIX
vendors licensed this version of NFS from Sun. NFS V2 suffered many
undocumented and subtle changes throughout its 10-year life.
The NFS V3 specification was developed during July 1992. Working code for
NFS V3 was introduced by some vendors in 1995 and was made widely available
in 1996. Version 3 incorporated many performance improvements over Version 2
but did not significantly change the way that NFS worked or the security model
used by the network file system.
For more detailed information about the new features of the protocol, there are
many references available. Refer to “Related publications” on page 301. One
particularly useful reference for a more detailed discussion of Version 4 protocol
functional design goals is a white paper titled The NFS Version 4 Protocol by
Brian Pawlowski, et al., which can be found at:
http://www.nluug.nl/events/sane2000/papers/pawlowski.pdf
These chapters provide technical background to prepare the reader for planning
an NFS V4 deployment. Before you can proceed with any implementation, you
have to plan it, so Chapter 4, “Planning for NFS V4” on page 93 is the key
chapter in this book. The chapter walks you through a design decision process
for planning your NFS V4 infrastructure.
Continuing from the NFS design discussion, we talk about the differences
between the AIX v5.3 NFS V4 implementation and what the NFS V4 RFC
RFC3530 describes. The reader is advised to refer to the NFS V4 RFC for a
detailed explanation of the protocol. Detailed scenarios and implementation
options will be discussed in the planning and implementation chapters.
TCP is stateful, and NFS is stateless. A stateless server treats each request as
an independent transaction, unrelated to any previous request. This simplifies
the server design because it does not need to allocate storage to deal with
conversations in progress or worry about freeing it if a client dies in
mid-transaction. A disadvantage is that it may be necessary to include more
information in each request, and this extra information will have to be interpreted
by the server each time. UDP neither guarantees delivery nor does it require a
connection. As a result, it is lightweight and efficient, but all error processing and
retransmission must be taken care of by the application program.
So, the statement that NFS is stateless would seem to be a contradiction, and
you would think it impossible for NFS to work over TCP. However, the
contradiction is only apparent: statelessness describes the NFS protocol
itself, while connection state is confined to the TCP transport layer beneath it.
Note: The RPC described here is the Sun RPC and is not to be confused with
the RPC used by products such as DCE. RPC services use TCP or UDP to
transport data packets. NFS V4 only uses TCP as a transport protocol.
The rpc.mountd and nfsd daemons run only on the server, and the rpc.lockd and
rpc.statd daemons run on the server and the client. The biod daemon runs only
on the client.
A client will only consult the portmap daemon once for each program the client
tries to call. The portmap daemon tells the client which port to send the call to.
The client stores this information for future reference. As standard RPC servers
are normally started by the inetd daemon, the portmap daemon must be started
before the inetd daemon is invoked.
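For example, you can ask a host's portmap daemon which ports its RPC services have registered. The following sketch assumes an NFS server named nfs404; the port numbers shown are illustrative:

# rpcinfo -p nfs404
   program vers proto   port
    100000    2   tcp    111  portmapper
    100005    1   udp  32772  mountd
    100003    2   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   udp  32773  nlockmgr
    100024    1   udp  32774  status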
Important: The rpc.statd daemon should always be started before the lockd
daemon.
The NFS daemons are inactive if there are no NFS requests to handle. When the
NFS server receives RPC calls on the nfsd daemon’s receive socket, the
daemon is awakened to pick up the packet off the socket and invoke the
requested operations.
2.4 NFS V3
Even though NFS V2 (discussed briefly in the introduction) was widely accepted,
the protocol was not without its problems. This led to the introduction of NFS V3
RFC1813. The Version 2 protocol had the following major problems:
A file size limit of 4 GB. Only files of up to 4 GB in size could be accessed. As
computer environments began to grow and the need for larger data
repositories became apparent, this limitation became a problem.
Writes had to be synchronous. This led to write-intensive applications
suffering performance problems. There were quite a few workarounds for this
problem, but most violated the NFS V2 standard.
The main goals for the NFS V3 design were to solve these two problems. In
1994, Brian Pawlowski published a paper (referenced at end of this section) that
provided an overview of the process the designers of the Version 3 protocol went
through. This paper identified the following major areas where the protocol was
enhanced:
The 4 GB file restriction. All arguments within the protocol (such as file sizes)
were changed from 32-bit to 64-bit.
The write model was changed to introduce a write/commit phase. This
enabled asynchronous writes.
A new ACCESS procedure was introduced. This resolved
permission-checking problems when mapping the ID of the superuser. This
procedure is also important for correct behavior when ACLs exist on files.
The 8 KB limit on data per write procedure call was relaxed.
The READDIRPLUS procedure was introduced. This returns both directory
entries and per-entry file handles and attributes in a single call.
File locking was not something that could be overlooked easily and was therefore
implemented in SunOS as the Network Lock Manager (NLM). NLM went
through various iterations, with Version 3 being most widely used with NFS V2.
With the introduction of NFS V3, the definition of NLM (Version 4) was included in
the NFS specifications, but was still left as a separate protocol. The NLM also
relied on the Network Status Monitor protocol. This was required so the clients
and servers could be notified of a crash so that a lock state could be recovered.
As stated earlier, the NLM was not widely adopted. The NFS protocol in
Version 4 has been extended to include a file locking mechanism. You can find
more information in 2.6.7, “File locking” on page 29.
NFS V4 goes a long way toward overcoming the shortcomings of NFS V2 and V3,
and adds features that were left out of the NFS V3 implementation. NFS V4 is
described in detail in RFC3530. The main changes that are discussed in this
section are:
Attribute classes
User name to UID mapping
Better namespace handling (pseudo-file systems)
Built-in security
Client-side caching and delegation
Compound procedures
File locking
Internationalization
Volatile file handles
Change indicator: A value created by the server that the client can use to
determine whether file data, directory contents, or attributes of a given
object have been modified. The server may return the object's modify time for
this attribute's value, but only if the file system object cannot be updated
more frequently than the resolution of the modify time.
Lease duration: The duration, in seconds, of the leases granted by the server.
UNIX LINK support: Determines whether UNIX hard links are supported.
Recommended attributes
The recommended attributes contain information such as the type of ACLs that
the file system supports, the ACLs themselves, information about the owner and
group, access timestamps, and quota attributes. They also contain information
about the file system such as free space, total number of files, files available for
use, and the file system limits such as maximum filename length and maximum
number of links. These are summarized in Table 2-2 on page 22.
Case preservation: Indicates whether the file name case in a given file
system is preserved.
Change owner restricted: If this attribute is set to TRUE, the server will
reject any request to change the owner or group associated with a given file
if the caller is not a privileged user (for example, the root user in UNIX
operating systems).
No file name truncation beyond maximum: Indicates that either an error is
returned or the name is truncated when the maximum file name size supported
for a given object is exceeded.
Maximum file size: The maximum file size supported for a file system for a
given object.
Maximum number of links: The maximum number of links for a given object.
Maximum file name size: The maximum file name size supported for a given
object.
Maximum read size: The maximum read size supported for a given object.
Maximum write size: The maximum write size supported for a given object. It
should be supported if the file is writable. Lack of this attribute can lead
to the client either wasting bandwidth or receiving poor performance.
Owner string: The string name of the owner for a given object.
File system free space: The free disk space, in bytes, on the file system
containing a given object.
File system total space: The total disk space, in bytes, on the file system
containing a given object.
Space used by object: The number of file system bytes used by a given object.
Named attributes
Named attributes provide a mechanism to associate additional properties with a
filesystem object (such as file or directory). A named attribute is an uninterpreted
opaque stream of bytes with a name. Applications can use named attributes to
place auxiliary application specific data on files. Multiple named attributes can
exist on an object. The OPENATTR procedure is used to access named
attributes for an object. NFS V4 organizes named attributes as a directory of
attribute names. The READDIR and LOOKUP operations are used to obtain the
attribute names. The READ, WRITE, and CREATE operations are then used to
operate on the individual named attributes.
user@nfs_domain
or
group@nfs_domain
String-based identities require NFS clients and servers to translate between the
protocol string attributes and the internal formats (UID and GID) that are used
within the operating system.
Representing users and groups as strings provides added flexibility and removes
the requirement that all systems utilize the same numeric ID space. It is expected
that all systems share a common view of the user and group name space within
a given NFS domain. The use of strings also opens the potential capability for
interdomain NFS sharing when the capability exists to map an identity from a
foreign domain into the receiver’s local domain.
The contents of the newly mounted file system appear in Example 2-1 and
Figure 2-5 on page 27.
The server has provided the client with a single view of the exported file systems.
In NFS V4, a server's name space is a single hierarchy. In the example above,
the export list hierarchy is not connected. When a server chooses to export a
disconnected portion of its name space, the server creates a pseudo-file system
to bridge the unexported portions of the name space, allowing a client to reach
the export points from the single common root. A pseudo-file system is a
structure containing only directories, created by the server having a unique fsid,
that enables a client to browse the hierarchy of the exported file systems.
The client view of the pseudo-file system is limited to paths that lead to the
exported file systems. Because /home/joe and /dept are not exported in the
example, they do not appear on the client during browsing operations
(Figure 2-5).
The basic NFS security mechanisms are extended in NFS V4 through the
mandated support of the RPCSEC_GSS RPC security flavor. RPCSEC_GSS is
implemented at the RPC layer. It is capable of supporting different security
mechanisms. Examples include Kerberos Version 5, and public-key-based
mechanisms such as SPKM. NFS V4 requires that RPCSEC_GSS be provided as a
supported security flavor.
All versions of NFS are capable of using RPCSEC_GSS. The difference is that
while an implementation can claim to conform to NFS V2 and NFS V3 without
implementing support for RPCSEC_GSS, a conforming implementation (one that
claims to be based on RFC3530) of NFS V4 must implement security based on
Kerberos Version 5 (as done in AIX 5.3) and LIPKEY. Kerberos V5 (KRB5) and
LIPKEY are GSS-API conforming mechanisms.
Kerberos divides user communities into realms. Each realm has an administrator
responsible for maintaining a database of principals or users, one master Key
Distribution Center (KDC), and one or more slave KDCs that give user tickets to
access services on specific hosts in a realm. Users in one realm can access
services in another realm via trust relationships between the KDCs. Kerberos is a
very good choice for enterprise work groups operating within an intranet. It
provides centralized control, as well as single sign-on to the network. We will
discuss Kerberos further in upcoming chapters.
NFS V4, like its predecessors, has a weak cache consistency model. Clients are
not guaranteed to see the most recent changes to data at a server. Delegations
are optional and are granted at the NFS server’s discretion. Without a delegation,
the NFS V4 client operates similar to previous versions of NFS.
A stateid is a unique 128-bit object that defines the locking state of a specific file.
When a client requests a lock, it presents a clientid and unique-per-client lock
owner identification to identify the lock owner.
NFS V4 divides file handles into two types: persistent and volatile. Persistent file
handles describe the traditional file handle. Volatile file handles are a
concept introduced in NFS V4, in which a client must cache the mapping
between path name and file handle and regenerate the file handle when it
expires.
Note: In AIX 5.3, the default for exports is still NFS V3, not Version 4. You
must explicitly declare an export for NFS V4 using the vers option. See
Example 2-1 on page 26.
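For instance, a hypothetical /etc/exports entry that declares an NFS V4 export explicitly might look like this:

/exports/data -vers=4,ro

Without the vers option, the export defaults to NFS V3.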
Note: The AIX Enhanced Journaled File System is a JFS2 file system with the
Extended Attributes Version 2 capability enabled to use NFS V4 ACLs.
When you first look at NFS on AIX V5.3, you will not see anything different. NFS
on AIX V5.3 supports NFS V2, NFS V3, and NFS V4. For this reason, all of the
daemons that you would see in previous versions of AIX are included in AIX 5.3.
One side effect of incorporating NFS V4 into AIX 5.3 is that NFS V3 security
controls can be extended by using some of the security features that are
required to create a V4 conforming implementation.
Note: Only JFS2 with extended attribute Version 2 (J2) supports NFS V4 ACLs.
See Chapter 3, “Enhanced security in NFS V4” on page 45 for more information
about NFS V4 ACL support.
On an NFS V4 server, AIXC ACL is supported when the underlying file system
instance supports AIXC ACL. All instances of JFS and JFS2 support the AIXC
ACL.
An NFS V4 or NFS V3 client has a mount option that enables or disables support
for AIXC ACL. The default is to not support AIXC ACL. A user on an NFS V4
client file system can read and write AIXC ACL when both the client and the
server are running AIX, the underlying physical file system instance on the server
supports AIXC ACL, and the AIX client mounts the file system instance with
AIXC ACL enabled. AIXC ACL support in NFS V4 is similar to the AIXC ACL
support in AIX NFS V2 and NFS V3 implementations.
All instances of a JFS2 file system with extended attribute Version 2 support both
AIXC ACL and NFS V4 ACL. A file in this type of file system may have mode bits
only (no ACL), an NFS4 ACL, or an AIXC ACL. But it cannot have NFS4 ACL and
AIXC ACL at the same time.
The aclgettypes command can be used to determine the ACL types that can be
read and written on a file system instance. This command may return different
output when it runs against a physical file system on an NFS V4 server locally
than when it runs against the same file system on an NFS V4 client. For
example, an NFS V4 file system instance on an NFS V4 server may support NFS
V4 ACL and AIXC ACL, but the client is only configured to send and receive NFS
V4 ACL (mounted with the -noacl option). In this case, when aclgettypes is
executed from an NFS V4 client file system, only NFS V4 is returned. Also, if a
user on the client requests an AIXC ACL, an error is returned.
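As a sketch (the path is hypothetical and the exact output format may differ), running the command against a local JFS2 EAv2 file system on the server might show both ACL types:

# aclgettypes /exports/home
Supported ACL type(s):
AIXC
NFS4

Running the same command against the NFS V4 client mount described above would list only NFS4.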
The authoritative source for access checking lies in the underlying file system
exported by the NFS server. The file system takes into consideration the file’s
access controls (ACLs or permission bits), the caller’s credentials, and other local
system restrictions that might apply.
The aclget, aclput, and acledit commands can be used on the client to
manipulate either NFS4 or AIXC ACLs. For more information, see Access
Control Lists in AIX 5L Version 5.3 Security Guide, SC23-4907.
Figure 2-6 Representation of how the server builds the exname pseudo FS view
We also want to make sure that we do not expose our server’s file system tree to
the client. How can we achieve this?
Attention: We must make sure that we set the pseudo-root on the server to
/exports. When the server renders the pseudo FS view for the client, the
directory or file under the /exports directory will be hidden. So, if you want to
have directories and files under /exports available to the clients, you should
either move them to another directory and export that, or choose a different
directory to be the anchor for the pseudo-root.
We can see from this example that the exname option does not have the full path
to the individual exports under /exports. In fact, you could specify the full path for
all other exports. However, by following our example, the client is never shown
the server’s directory tree. This hides visibility of the actual server file system
layout from the client.
If you intend to export a large number of directories under /local allowing the local
component to be seen under /exports, then the exname option could also be
used as shown:
/local/trans -vers=4,rw,exname=/exports/local/trans
/local/dept -vers=4,rw,exname=/exports/local/dept
/local/home -vers=4,rw,exname=/exports/local/home
/usr/codeshare/ThirdPartyProgs
-vers=4,ro,exname=/exports/local/ThirdPartyProgs
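After editing /etc/exports, the entries can be activated and checked with the exportfs command:

# exportfs -va
# exportfs

exportfs -a exports every entry in /etc/exports (-v prints each entry as it is processed), and exportfs with no arguments lists the directories that are currently exported.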
There are some differences among NFS V2, NFS V3, and NFS V4 in how
mounts are handled. In NFS V2 and NFS V3, the server exported the directories
that it wanted to make available for mounting. The NFS V2 or NFS V3 client then
had to explicitly mount each export to which it wanted access.
With NFS V4, the server still specifies export controls for each server directory or
file system to be exported for NFS access. From these export controls, the server
renders a single directory tree of all the exported directories. This tree, a
pseudo-file system, starts at the NFS V4 server’s pseudo-root. The NFS V4
pseudo-file system model enables an NFS V4 client, depending on its
implementation, to perform a single mount of the server’s pseudo-root in order to
access all of the server’s exported data. The AIX NFS V4 client supports this
feature. The actual content seen by the client is dependent on the server’s export
controls. (See 2.6.3, “Better namespace handling” on page 25.)
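For example, an AIX client can mount the server's pseudo-root with a single command (the server name nfs402 is hypothetical):

# mount -o vers=4 nfs402:/ /mnt

From /mnt, the client can then browse down to every directory the server exports, subject to the server's export controls.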
Client communication with the rpc.mountd daemon does not occur with NFS V4
mount processing. Operations in the core NFS V4 protocol are used to service
client side mount operations. The NFS V4 server implementation does utilize the
rpc.mountd daemon as part of handling NFS V4 access.
Important: NFS V4 does not support file exporting. If you need to export a
specific file, export it as Version 2 or 3 (using the vers=2 or vers=3 options).
The directory is the full path name of the directory. Options can designate a
simple flag such as ro (read only) or a list of host names. See the specific
documentation of the /etc/exports file and the exportfs command for a complete
list of options and their descriptions.
Note: The /etc/rc.nfs script will not start the nfsd daemons or the rpc.mountd
daemon if the /etc/exports file does not exist.
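The entries discussed in the next paragraphs might look like the following sketch of an /etc/exports file (the first entry and all host names are illustrative assumptions):

/exports/projects -vers=4,sec=krb5,ro,access=nfs402
/exports/home -vers=4,rw,access=nfs404,root=nfs404
/var/tmp -vers=4,rw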
The second entry in this example specifies that the /exports/home directory can
be mounted by the system nfs404 for read/write access. Additionally, nfs404 may
access data on the server as the root user.
The third entry in this example specifies that any client can mount the /var/tmp
directory (with read/write access).
Attention: You will notice there is no access list specified for the /var/tmp
entry. This means that all clients can mount this directory, and this is very
insecure. However, you now have the ability to use the sec=krb5 option, and
this would take away the worry of openly exporting the directory.
Tip: Note the introduction of two new options: vers and sec.
/etc/xtab file
The /etc/xtab file has a format similar to the /etc/exports file and lists the currently
exported directories. Whenever the exportfs command is run, the /etc/xtab file
changes. This enables you to export a directory temporarily without having to
change the /etc/exports file. If the temporarily exported directory is unexported,
the directory is removed from the /etc/xtab file.
Important: The /etc/xtab file is updated automatically. You should not edit this
file manually.
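For example, to export a directory temporarily without touching /etc/exports, and to unexport it again later (the directory name is hypothetical):

# exportfs -i -o ro /tmp/testshare
# exportfs -u /tmp/testshare

The -i flag tells exportfs to ignore the options in /etc/exports; both operations are reflected in /etc/xtab automatically.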
/etc/nfs/hostkey file
This file is used by the NFS server to specify the Kerberos host principal and the
location of the keytab file. For instructions on how to configure and administer
this file, see the nfshostkey command description in AIX 5L Version 5.3
Commands Reference, Volume 4, SC23-4891, or the command’s man pages.
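As a sketch, assuming the server is nfs402.itsc.austin.ibm.com and the keytab was created in the IBM NAS default location, the host principal might be registered and then verified like this:

# nfshostkey -p nfs/nfs402.itsc.austin.ibm.com -f /etc/krb5/krb5.keytab
# nfshostkey -l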
/etc/nfs/local_domain file
This file contains the local NFS domain of the system. It is implied that systems
that share the same NFS local domain also share the same user and group
registries. The chnfsdom command is used to administer this file:
# chnfsdom itsc.austin.ibm.com
#
# chnfsdom
Current local domain: itsc.austin.ibm.com
#
Important: You should not edit this file manually. Always use the chnfsdom
command. The command automatically tells the nfsrgyd daemon that you
have changed the local NFS domain. Also, a given NFS server or client
machine can only belong to one NFS V4 domain.
For further information about the chnfsdom command, see the AIX 5L Version 5.3
Commands Reference, Volume 1, SC23-4888, and the command’s man pages.
If the Kerberos realm name is always the same as the server’s NFS domain, this
file is not needed.
Important:
Multiple Kerberos realms can map to a single nfs-domain. The previous
example demonstrates two realms mapping to one domain. However, a
single realm cannot map to multiple nfs domains.
We recommend that you do not edit the /etc/nfs/realm.map file manually.
Always use the chnfsrtd command.
When the foreign identity mapping features of AIX NFS V4 support are used to
facilitate inter-nfs-domain access, the mapping rules managed by the chnfsim
utility allow mapping of realms into domains. In this case, chnfsim should be
used instead of chnfsrtd.
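For example, to map a Kerberos realm to the local NFS domain (the realm name is hypothetical):

# chnfsrtd -a REALM1.IBM.COM itsc.austin.ibm.com

The resulting realm-to-domain pair can be inspected by viewing /etc/nfs/realm.map.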
/etc/nfs/princmap
This file maps host names to Kerberos principals when the principal is not the
fully qualified domain name of the server. It consists of any number of lines of the
following format:
<host part of principal> <alias1> <alias2> ...
To add, edit, or remove entries from this file, use the nfshostmap command. For
more information, see the nfshostmap command description in AIX 5L Version
5.3 Commands Reference, Volume 4, SC23-4891, or the command’s man(1)
pages.
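For example, if the server's Kerberos principal is built from the host name nfs402.itsc.austin.ibm.com but clients also reach the machine under an alias (the alias is hypothetical):

# nfshostmap -a nfs402.itsc.austin.ibm.com nfs402srv.itsc.austin.ibm.com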
In the following example, UDP packets sent by the client will have a source port
in the range 3000 to 4000, and TCP connections will have a source port in the
range 5000 to 6000. This means that you can now restrict traffic on all other
higher range ports apart from the ones listed above.
NFS_PORT_RANGE=udp[3000-4000]:tcp[5000-6000]
/usr/sbin/biod: Sends the client's read and write requests to the server. The
biod daemon is SRC-controlled.
/usr/sbin/rpc.mountd: Answers requests from clients for file system mounts.
The rpc.mountd daemon is SRC-controlled.
/usr/sbin/nfsd: Starts the daemons that handle a client's request for file
system operations. The nfsd daemon is SRC-controlled.
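Because these daemons are SRC-controlled, they can be inspected and started as a group. A brief sketch (PIDs are illustrative):

# lssrc -g nfs
Subsystem         Group            PID          Status
 biod             nfs              123456       active
 nfsd             nfs              123458       active
 rpc.mountd       nfs              123460       active
 rpc.statd        nfs              123462       active
 rpc.lockd        nfs              123464       active
# startsrc -g nfs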
Table 2-4 gives an overview of all of the NFS files in AIX 5.3. The following files
are new to AIX 5.3 and were introduced specifically for NFS V4:
/etc/nfs/hostkey
/etc/nfs/local_domain
/etc/nfs/realm.map
/etc/nfs/princmap
/etc/nfs/security_default
The /etc/filesystems file also has new options for NFS V4.
Table 2-4 List of NFS files and those new to AIX 5.3
/etc/nfs/security_default: Contains the list of security flavors that may be
used by the NFS client, in the order in which they should be used.
/etc/filesystems: Lists all file systems that can potentially be mounted and
their mounting configuration (persistent mounts).
/etc/bootparms: Lists the servers that diskless clients can use for booting.
Table 2-5 gives a list of all NFS commands in AIX 5.3. The following commands
are new in AIX 5.3 and were specifically included for NFS V4:
/usr/sbin/chnfsdom
/usr/sbin/chnfsrtd
/usr/sbin/chnfssec
/usr/sbin/nfshostkey
/usr/sbin/nfshostmap
/usr/sbin/nfs4cl
/usr/sbin/chnfsim (delivered in the bos.cim.rte fileset)
Table 2-5 List of NFS commands and those new to AIX 5.3
Keep in mind that each increasing level of protection comes with a performance
penalty. Choose the minimum level that meets your data protection requirements.
In AIX 5.3, IBM has implemented Kerberos V5, but not SPKM/LIPKEY at this
point in time.
Note: Triple DES encryption gives the best protection, but you may need to
use single DES to get better performance and interoperability.
People do not usually work directly with the numeric IDs; they work with text user
and group names that are normally easier to associate with an actual individual
or group. When presenting information about process and file ownership, the
system translates the numeric IDs into the names. The relationship between the
names and the IDs is maintained in a user registry, which can be standard UNIX,
NIS, or LDAP.
All but the smallest organizations will want to use a shared user registry, rather
than maintaining separate /etc/passwd and /etc/group files on all hosts. Here is
one reason why. All clients using data stored on an NFS server directory should
use the same identifier to represent the same user or group. This is necessary to
maintain consistent file ownership. For example, in an NFS V2 and NFS V3
environment, if UID 100 is Joe on one NFS client and Mary on another client,
NFS files created by Joe from the first client will show as being owned by Mary
on the second client, and vice versa. To avoid this, the separate /etc/passwd and
/etc/group files will need to be kept in sync on all NFS clients that access data on
a common NFS server. This can be a very expensive and error-prone task.
NIS has been widely used in conjunction with NFS, and it can work well for
medium-sized organizations. It becomes problematic, however, when a large
organization requires multiple NIS domains, or when two different organizations
merge. Consolidating two different NIS domains can be a very difficult task.
More information about NIS can be found on Sun Microsystems’ Web site:
http://www.sun.com
Notes:
1. When we say “NFS” throughout the rest of this section, we mean NFS V4.
2. The examples in this section are not intended to depict everything that
goes on in an NFS transaction. They are simplified to convey pertinent
concepts.
The ownership of the requesting user process is in the form of UID and GID. The
NFS client translates the UID into a user name string and translates the GID into
a group name string. These strings are then sent across to the server. The server
checks to make sure that the NFS domains in the request match its own NFS
domain. (If they didn’t match, extra steps would be required. See the multiple
domain discussion below.) The server then translates the strings back to
UID/GID using its user registry, which may not be the same as the client’s, and
stores the UID/GID as ownership attributes of the file being created.
Figure 3-2 shows how the user information is passed from the NFS server to the
client when the client has requested ownership information for an existing file.
The ownership information that is stored with the file is in the form of UID and
GID. The NFS server translates the UID into a user name string and the GID
into a group name string, and sends these strings to the client.
If an ls -l command was issued, the client would take the UID/GID information
and translate it back to names, resulting in output similar to:
-rw-r--r-- 1 sally hr 255 Aug 11 11:05 myfile
Figure 3-3 on page 54 shows how an NFS server determines ownership when an
NFS client requests to create a file in a directory that is under Kerberos security.
The server takes the requesting principal’s realm and maps it to an NFS domain
via the contents of the /etc/nfs/realm.map file. It then checks to see whether the
resulting NFS domain matches its own NFS domain. In this case, the domains
match, so the server then looks up the user name in its user registry to determine
the UID. The server also gets the user’s GID from its user registry, not from the
NFS request. It then places the UID and GID in the newly created file’s attributes.
Figure 3-3 illustrates this. Although the user sally’s primary group is eng on the
client, the created file is owned by the group hr because it is sally’s primary
group on the server. Using ls -l on the file on both the server and the client will
show that it is owned by the ‘hr’ group.
Figure 3-4 on page 56 shows what happens when an NFS client requests an
NFS server for ownership information for a file that is in a Kerberos-protected
directory.
The process is the same as the AUTH_SYS process in Figure 3-2 on page 52,
except (again) that the client user’s identity is determined from the Kerberos
principal accompanying the request. The NFS server translates the principal’s
realm to an NFS domain to check that the requester's domain is the same as its own.
If EIM is not in place, the NFS server will view the client identities as foreign, and
it will map them to the user/group nobody.
Figure 3-5 on page 58 shows the process of creating a file under AUTH_SYS
when EIM is used.
Note the extra EIM step after the server determines that the client’s NFS domain
does not match its own. The server requests from EIM which nfsdom1 user
matches sally@nfsdom2, and which nfsdom1 group matches hr@nfsdom2. The
figure shows the user and group names being the same between the two
domains, but it is possible for them to be different. The user name
sally@nfsdom2 might have mapped to the name sally2@nfsdom1, or it might
even have mapped to mary@nfsdom1.
The other scenarios presented earlier in this section also require the
additional EIM step if the client's NFS domain does not match the server's.
Attention: Because of this vulnerability, you should not use AUTH_SYS user
authentication if controlling access to your data is important.
What is Kerberos?
The AIX 5L Version 5.3 Security Guide describes Kerberos as a network
authentication service that provides a means of verifying the identities of
principals (users and hosts) on physically insecure networks. Kerberos provides
mutual authentication, data integrity, and privacy under the realistic assumption
that network traffic is vulnerable to capture, examination, and substitution.
Kerberos tickets are credentials that verify a principal’s identity. There are two
types of tickets: ticket-granting and service. The ticket-granting ticket is for the
initial identity request. When logging into a host system, a user needs something
that verifies his or her identity, such as a password or a token. This password or
token is used to obtain a ticket-granting ticket, which can then be used to request
service tickets for specific services. This two-ticket method is called the
Kerberos third-party authentication model.
The Kerberos database keeps a record of every principal; the record contains the
name, private key, expiration date of the principal, and some administrative
information about each principal. This database is maintained on the master
KDC, and it can be replicated to one or more replica KDCs.
See Appendix A, “Kerberos” on page 243 for more information about what
Kerberos is and how it works.
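In practice, with the IBM NAS client installed, a user obtains a ticket-granting ticket and then inspects the ticket cache as follows (the realm name is hypothetical):

# /usr/krb5/bin/kinit sally
Password for sally@REALM1.IBM.COM:
# /usr/krb5/bin/klist

Service tickets for the NFS server are then requested transparently when the user accesses a Kerberos-protected mount.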
Note: Although you can also use AIX ACLs (known in AIX 5.3 as the AIXC
ACL type), they are only supported on AIX systems, and they have not been
widely adopted. We do not discuss them in detail in this document. For more
about AIXC ACLs, see the AIX 5L Version 5.3 Security Guide, SC23-4907.
ACLs provide for more granular access control than standard UNIX file
permissions. One of the main differences is in group access. Whereas standard
UNIX permissions only provide access control for one group (the group that owns
the file), ACLs enable different access permissions to be specified for multiple
groups. ACLs also allow access permissions to be specified at a user level.
Here is an example of an AIXC ACL (in aclget format) that has extended
permissions disabled:
*
* ACL_type AIXC
*
attributes: SGID
base permissions
owner(root): rwx
group(system): r-x
others: r-x
extended permissions
disabled
There is one aspect in which an NFS V4 client handles AIXC ACLs differently
from an NFS V3 client. An NFS V4 client depicts an AIXC ACL as an NFS V4
ACL if the AIXC ACL has extended permissions disabled, or if extended
permissions are enabled but there are no entries in the extended permissions
list. If the AIXC ACL has extended permissions enabled and there are entries in
the extended permissions list, the NFS V4 client shows it as an AIXC ACL. An
NFS V3 client always shows an AIXC ACL as an AIXC ACL.
For example, assume a file has the following AIXC ACL on the server:
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): r--
others: r--
extended permissions
disabled
The AIX 5L Version 5.3 Security Guide, SC23-4907, has more about AIXC ACLs.
Note: In order to use NFS V4 ACLs, the server file system must support them.
As of this writing, AIX 5L Version 5.3 only supports NFS V4 ACLs in two file
system types: Enhanced Journaled File System (JFS2) with the extended
attribute format set to Version 2 (EAv2), and General Parallel File System
(GPFS). For more information about NFS V4 ACL support, see the AIX 5L
Version 5.3 Security Guide, SC23-4907, and the AIX 5L Differences Guide
Version 5.3 Edition, SG24-7463.
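For example, a new JFS2 file system can be created with the EAv2 format enabled, or an existing JFS2 file system converted (the logical volume and mount points are hypothetical; note that the conversion from EAv1 to EAv2 is one-way):

# crfs -v jfs2 -a ea=v2 -d datalv -m /exports/data -A yes
# chfs -a ea=v2 /exports/home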
The additional special who strings specified in RFC 3530 (shown in Table 3-3)
are not currently supported in AIX.
a: Allow access
d: Deny access
Per the RFC 3530 NFS V4 standard and the AIX 5L Version 5.3 Security Guide,
an AIX NFS V4 server evaluates the ACL list from the top down, applying the
following rules:
Only ACEs that have a who that matches the requester are considered. The
credentials of the requester are not checked while processing the ACE with
special who EVERYONE@.
Each ACE is processed until all of the bits of the requester’s access have
been allowed or at least one of the requested bits not previously allowed has
been denied.
When a permission bit has been allowed, it is no longer considered in the
processing of later ACEs.
If a deny ACE_TYPE is encountered where the ACE_MASK has bits in
common with not-yet-allowed bits in the request, access is denied, and the
remaining ACEs are not processed.
If the entire ACL has been processed and some of the requested access bits
still have not been allowed, access is denied.
If the user sally requests READ_DATA (r) and WRITE_DATA (w) access, the ACL
evaluation will proceed as follows:
The s:(OWNER@):a... ACE is processed because sally owns the file.
– READ_DATA is allowed because that bit is set in the ACE_MASK.
– WRITE_DATA is not yet allowed because it is not set in the ACE_MASK.
The s:(OWNER@):d... ACE is processed because sally owns the file.
– WRITE_DATA is denied because that bit is set in the ACE_MASK, and
WRITE_DATA has not yet been allowed by a previous ACE.
No further ACEs are processed, and the requested access is denied.
Notes:
1. In the previous example, even though the GROUP@ and EVERYONE@
ACEs allow WRITE_DATA access, Sally is denied WRITE_DATA access
because it is specifically denied by the owner ACE.
2. The ACE order is important. If the group allow ACE had appeared in the list
before the owner deny ACE, then Sally would be allowed write access to
the file.
If the user joe, who is a member of the group sales, requests READ_DATA (r)
and WRITE_DATA (w) access, the ACL evaluation will proceed as follows:
The g:sales:d... ACE is processed because joe is a member of the group
sales.
– READ_DATA is not denied because it is not set in the ACE_MASK.
– WRITE_DATA is denied because that bit is set in the ACE_MASK, and
WRITE_DATA has not yet been allowed by a previous ACE.
No further ACEs are processed, and the requested access is denied.
If joe requests just READ_DATA (r) access, the ACL evaluation will proceed as
follows:
The g:sales:d... ACE is processed because joe is a member of the group
sales.
– READ_DATA is not denied because it is not set in the ACE_MASK.
The s:(EVERYONE@):a... ACE is processed.
– READ_DATA is allowed because it is set in the ACE_MASK.
All requested permission bits have now been allowed. No further ACEs are
processed, and the requested access is granted.
The difference comes into play when mapping the rwx bits to user (owner),
group, and other. This mapping is unspecified in RFC 3530. Here is an example
of the mapping we observe in AIX.
Initially, you might think that the bits will map straight from OWNER@ to user,
GROUP@ to group, and EVERYONE@ to other. As you can see from the
example, this is not the case. Here you see that the user permissions show as
rwx, where the s:OWNER@:a ACE has none of those bits set. Furthermore, even
though the ls -l output makes it look like the user sally has write access to the
file, she actually does not. Evaluating the ACEs from top down, write access is
denied by the s:GROUP@:d entry.
Based on this, we must draw the conclusion that you cannot use the standard
UNIX permissions bits to reliably predict access when using NFS V4 ACLs.
Note: The aclconvert and aclgettypes commands are new in AIX 5L V5.3.
2. Choose Access Control List either from the main window or from the
Filesystems menu. This opens a window like Figure 3-8.
3. Either type the full path name of the file or directory whose ACL you would
like to modify, or click Browse to choose the file or directory from the GUI.
After you have entered the name, press Enter or click Next.
7. Make sure that the selected access mask entries are the ones you want and
click OK. You will be returned to the ACL edit screen (Figure 3-10 on
page 76).
8. Repeat steps 5 and 6 for other ACEs that you want to change. After you have
finished with all of the ACEs that you want to change, click OK on the ACL
edit screen. You will see a pop-up window that indicates the status of the
operation. When you are done viewing the status, click Close on that window.
Important: Using the chmod command to manipulate the rwx permission bits,
either in octal form (for example, 755) or in symbolic form (u+x) replaces the
NFS V4 ACL with an AIXC ACL, wiping out the original permissions that were
on the file or directory.
Never use the octal form of the chmod command if you are using NFS V4
ACLs. Even if you think that you are leaving the rwx bits alone, using the octal
form will replace the NFS V4 ACL with an AIXC ACL.
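If you need to adjust permissions on an object that carries an NFS V4 ACL, edit the ACL itself instead, for example (the file name is hypothetical, and acledit requires the EDITOR environment variable to be set):

# export EDITOR=/usr/bin/vi
# acledit /exports/data/report.txt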
Note: If you use chmod to manipulate rwx permission bits on an NFS client and
then (again on the client) run aclget on the file, the ACL will still appear to be
an NFS V4 ACL. However, it will be an AIXC ACL on the NFS server. The NFS
protocol translates AIXC ACLs that have extended permissions disabled to
look like NFS V4 ACLs at the client.
For example, if a directory has the following ACL, a file created in that directory
will have the same ACL, even if the umask is set to 777.
*
* ACL_type NFS4
*
*
* Owner: root
* Group: system
*
s:(OWNER@): a rwpRWxDaAdcCs fidi
s:(OWNER@): d o fidi
s:(GROUP@): a rwpRxadcs fidi
mv: The file retains the same ACL that it had in the original location if the
source and target file systems are the same. If not, ACL assignment occurs as
for cp below.
cp: The file inherits its ACL from the directory where it is being placed,
just as if it were a newly created file.
cp -p: The file retains the same ACL it had in the original location.
Choosing a method depends on your requirements. Each method has its own
characteristics, some of which are as follows.
Each department has its own directory for data, and each department creates
separate project directories under its directory. The resulting directory structure
is depicted in Figure 3-12 on page 81.
This structure has the project directories replicated under each department. If a
permissions change has to be made to one of the projects, those changes must
be made in three different places.
A directory structure that better lends itself to managing the project permissions
is depicted in Figure 3-13.
Figure 3-13 Directory structure that makes better use of ACL inheritance
This structure has a separate projects directory where the permissions for each
special project can be managed in one place. The departments still put their own
non-project-related data under the dept directory.
The first method is simpler to implement, but all files and directories being
changed will take on exactly the same permissions, eradicating any variation that
may have existed. This may be a good thing or a bad thing, depending on your
structure. The second method is much more complex to implement, but it does
allow for other differences to exist in the ACLs. This is another example where
carefully planning your directory structure around permissions requirements can
make administration easier.
The rest of this section illustrates possible ways to implement the ACL
propagation method #1 above.
You can use a different source and directory name, or you can specify the same
directory name for both source and destination to copy a directory’s ACL to all of
its descendants (including itself).
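For example, you can pipe the output of aclget into aclput. This minimal
sketch (assuming a directory named /exports/projects) copies the directory's
ACL to the directory itself and to everything beneath it:
# aclget /exports/projects | aclput -R /exports/projects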
Using the aclget | aclput combination is convenient, but there are drawbacks:
If you mistype the name of the source directory, you will completely wipe out
the permissions in the destination directory. This can be remedied quickly by
reissuing the command with the correct source name, but meanwhile you will
have blocked access to any user or application that tries to access the data.
The aclput -R command stops at the first error it encounters, leaving the rest
of the files untouched.
Example 3-1 Sample script for copying an ACL (with recursive option)
#!/usr/bin/ksh
#
# copy_acl.sh
#
# Copy the ACL for the given source file/directory to other files/directories
#
# Name of this script, used in the usage message (the original fragment
# references $scrname without setting it)
scrname=$(basename "$0")
#
# Functions
#
function usage {
    echo "Usage: $scrname [-R] <source> <dest>"
    echo " where"
    echo " -R indicates a recursive copy"
    echo " (copy ACL to all files and directories below and including"
    echo " the destination.)"
    echo " <source> = the name of the file or directory to copy the ACL from"
    echo " <dest> = the name of the file or directory to copy the ACL to"
    exit 1
}
if [[ $# -eq 0 ]]
then
    usage
fi
#
# Process input parameters
#
recursive=0
if [[ "$1" = "-R" ]]
then
    recursive=1
    shift
fi
[[ $# -ne 2 ]] && usage
src="$1"
dest="$2"
#
# Initialize other variables
#
NBERR=0
TMP_ACLFILE="/tmp/.AIXACL_$$"
#
# Copy the ACL. (The logic below is a minimal sketch reconstructing the
# elided body of the original sample: save the source ACL to a temporary
# file, then apply it to the destination, recursively if -R was given.)
#
aclget -o "${TMP_ACLFILE}" "${src}" || { rm -f "${TMP_ACLFILE}"; exit 1; }
if [[ ${recursive} -eq 1 ]]
then
    find "${dest}" -print | while read f
    do
        aclput -i "${TMP_ACLFILE}" "$f" || NBERR=$((NBERR + 1))
    done
else
    aclput -i "${TMP_ACLFILE}" "${dest}" || NBERR=1
fi
rm -f "${TMP_ACLFILE}"
exit ${NBERR}
This is difficult to do with standard UNIX permissions. There are two basic
options:
Make the user the owner of the directory and allow only owner access.
Because the user owns the directory, he or she can change its permissions,
which we do not want to allow.
Create a group for each user, where the user is the only member; make the
home directory owned by root and the user’s group; and allow only owner and
group access.
The user cannot change the directory permissions, but this option requires
maintaining a whole set of groups, one for each user.
This is easier to do with NFS V4 ACLs. Make root the owner of the directory and
add a user ACE to allow the user access to the directory. This is how that ACL
would look:
*
* ACL_type NFS4
*
*
* Owner: root
* Group: system
*
s:(OWNER@): a rwpRWxDaAdcCs fidi
s:(OWNER@): d o fidi
u:sally(sally@nfsdom1): a rwpRWxDaAdcs
u:sally(sally@nfsdom1): d Co
s:(GROUP@): d rwpRWxDaAdcCos fidi
s:(EVERYONE@): d rwpRWxDaAdcCos fidi
The user can open up permissions for files and subdirectories that he or she
creates in the directory because the user owns them, but the home directory
itself will still block access to those files.
Note: It might be possible to NFS mount a lower-level directory that has more
open permissions and gain access to those files, but normally the mount
operation is under system administrator control. If mounts are managed
correctly, users will not be able to get directly at lower directories underneath
the home directory.
If the groups are company1 and company2, the ACL on company1’s data would
look like this:
*
* ACL_type NFS4
*
*
* Owner: root
* Group: system
*
g:company2(company2@nfsdom1): d rwpRWxDaAdcCos fidi
s:(OWNER@): a rwpRWxDaAdcCs fidi
s:(OWNER@): d o fidi
s:(GROUP@): a rRxadcs fidi
s:(GROUP@): d wpWDACo fidi
s:(EVERYONE@): a rRxadcs fidi
s:(EVERYONE@): d wpWDACo fidi
No matter what the rest of the ACEs are, company2 will be denied access to
company1’s data.
The bottom line: You may have NFS V3 clients mount file systems that use NFS
V4 ACLs. ACL inheritance and evaluation will work normally on the server.
However, do not attempt to manipulate access permissions directly from the NFS
V3 client. Any permissions change at the NFS V3 client will overwrite the NFS V4
ACL with an AIXC ACL.
Unfortunately, there is no good way to block a user on the NFS V3 client from
running chmod or aclput on files or directories that he or she owns. You will have
to publish policy and rely on well-behaved users. (You could completely disable
the chmod and aclput commands on the client, but that would also disable them
for other client file systems where using those commands is perfectly valid.)
Also keep in mind when using NFS V3 clients that the UIDs and GIDs have to
match between server and client.
Another way that Kerberos indirectly identifies a host is through the NFS service
principal. (This is the identification of the NFS service running on the host.) The
service principal name is the fully qualified host name prefixed with nfs/ (as in
nfs/nfs402.itsc.austin.ibm.com). NFS clients using Kerberos authentication
identify NFS servers with this service principal.
Think of NFS V4 authentication of clients as being mostly at the user level rather
than at the host level. (See 3.3.2, “RPCSEC_GSS user authentication using
Kerberos” on page 59.)
Kerberos does authenticate NFS server identities to the clients via the NFS
service principal. (See the NFS service principal discussion in the previous
section.)
Exporting directories from an NFS server is still fundamentally the same in NFS
V4 as it was in NFS V3. The main difference is that NFS V4 has added the new
security-related options shown in Table 3-10 on page 89.
vers Controls which NFS protocol versions are allowed to mount. Possible
values are 2, 3, and 4. Versions 2 and 3 cannot be enforced separately:
specifying Version 2 or Version 3 allows access by clients using either NFS
protocol Version 2 or 3. Version 4 can be specified independently and must
be specified to allow access by clients using the Version 4 protocol. The
default is 2 and 3.
The sec option is unique in that it can appear more than once in the exports
definition for a directory. This allows different ro, rw, root, and access options to
be specified for the different security options. For example, hosts using the sys
security method might only be allowed read access, while hosts using the krb5
security method might be allowed read and write access.
Note: You cannot specify the same security option in more than one sec=
stanza in a single exports definition.
Here is a sample /etc/exports line with the new NFS V4 security options:
/exports -vers=3:4,sec=krb5:krb5i:krb5p,rw,sec=sys:none,ro
Part 2 Implementing NFS V4
In this part we introduce you to the planning and implementation methodologies
that were used while preparing this book.
Considerations that must be taken into account when beginning any planning
phase include:
Currently available and deployed hardware, software, and applications
Consider the following questions:
– Do you have an inventory of your current assets and their usage?
– Do you have a logical overview of your infrastructure?
The future IT strategies of your company
– Do you want to have centralized user management?
Business-driven design and implementation issues
– Do you have the need to exchange data with other customers or
departments?
– Do you have other (third-party) applications to be considered?
We also look at the necessary planning requirements if you want to use NFS V4
without some of the enhanced features, simply as a replacement to your current
NFS V3 environment.
Important: With NFS V4, the following two general design constraints should
be taken into account while planning the deployment. These two topics reach
a level of primary importance because of functional changes introduced by the
NFS V4 standard:
NFS domain and Identity Mapping
Authentication
For this redbook project, we did not have an existing environment to migrate
from, so we planned and built one from scratch using the flow chart. However,
most of you will be looking at this from an existing systems migration point of
view. Some considerations in using the flow chart:
The flow chart is designed to be modular. So, based on what you have
already implemented in your existing environment, you can skip to the next
step on the flow chart and continue with the planning.
It was impractical to cover all possible permutations when considering a
migration path, so in 4.9, “Migration considerations” on page 116, we walk
you through some of the scenarios we consider to be most common.
Certain parts of the flow chart may not easily plug into your environment, as
they are new concepts introduced by NFS V4. On the whole, the flow chart
should serve as a good building block for your planning considerations.
You may want to take RFC1537, Common DNS Data File Configuration Errors,
and RFC2181, Clarifications to the DNS Specification, into account while
planning your name resolution infrastructure.
Obviously, this method does not scale, so just as DNS now serves the purpose
of the old /etc/hosts file, DNS can also be used to provide Kerberos
configuration.
Kerberos can use DNS as a service location protocol by using the DNS SRV
record as defined in RFC2052. In addition, Kerberos can use a TXT record to
locate the appropriate realm for a given host or domain name. These DNS
entries are not required to run a Kerberos realm, but they do eliminate the need
for manual configuration of clients. With these DNS records, Kerberos clients can
find the appropriate KDCs without the use of a configuration file.
Windows will establish the necessary SRV records automatically when an Active
Directory domain is created. Those using UNIX for their KDCs can create these
DNS entries manually in their zone files as a convenience to the DNS clients.
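For illustration, here is a sketch of what such zone file entries might look like
for a realm REALM1.IBM.COM whose KDC is nfs407.itsc.austin.ibm.com (host
and realm names are from our sample environment; the record names follow the
usual Kerberos conventions):
_kerberos._udp.itsc.austin.ibm.com.     IN SRV 0 0 88  nfs407.itsc.austin.ibm.com.
_kerberos-adm._tcp.itsc.austin.ibm.com. IN SRV 0 0 749 nfs407.itsc.austin.ibm.com.
_kpasswd._udp.itsc.austin.ibm.com.      IN SRV 0 0 464 nfs407.itsc.austin.ibm.com.
_kerberos.itsc.austin.ibm.com.          IN TXT "REALM1.IBM.COM"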
Important: The AIX implementation does not require the NFS domain to
match your DNS domain. You can call your NFS domain pretty much anything
you like, but keeping a relationship to your DNS domain simplifies managing
the environment. It also helps make sure that the name is unique.
For simplified management, we logically linked our NFS domain to the DNS
domain. In our sample environment our DNS domain was:
itsc.austin.ibm.com
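Setting the NFS domain itself is a single command; for example, to link the
NFS domain to our DNS domain:
# chnfsdom itsc.austin.ibm.com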
User and group identities are maintained in some kind of repository. This
repository typically is implemented through one of the following methods:
Standard UNIX /etc/passwd and /etc/group files
Network Information Services (NIS)
Lightweight Directory Access Protocol (LDAP)
Use the following guidelines when choosing which method to use for your
user/group repository.
Using /etc/passwd and /etc/group might be desirable if each user uses only one
system. You will need to somehow maintain a central clearing house for user
identities to make sure that you choose names and IDs that are unique.
If users log into multiple systems using the same user name and password, you
will need to replicate your /etc/passwd and /etc/group information to all of
those systems.
If you are using Kerberos authentication with NFS, we recommend that you
choose LDAP, using the schema defined in RFC2307 as your user/group
repository. LDAP supports Kerberos integrated logon, but NIS does not.
Note: The RFC2307 schema enables NIS maps to be imported into an LDAP
directory. If your existing infrastructure uses NIS to manage user and group
information, you may want to consider migrating to LDAP. After the NIS maps
have been migrated, AIX 5L and other RFC2307 compliant platforms can use
LDAP instead of NIS to access this information.
For further information on how this can be achieved we refer you to the
following Technote: AIX - Migrating NIS Maps into LDAP, TIPS-0123, at:
http://www.redbooks.ibm.com/abstracts/tips0123.html?Open
You can also use a combination of methods for your user/group repository. For
example, if your security policy does not allow the same password to be used on
multiple systems, you could maintain the password locally and still use a central
repository such as NIS or LDAP for the rest of the user and group information.
For more information about EIM, refer to the AIX 5L version 5.3 Security Guide
and the IBM Redbook Windows-based Single Signon and the EIM Framework on
the IBM eServer iSeries Server, SG24-6975.
Under the AUTH_SYS security flavor, the user is authenticated at the client,
usually via a logon name and password. The NFS server trusts the user and
group identities presented by its clients.
Because of this vulnerability, you should not use AUTH_SYS user authentication
if controlling access to your data is important.
See Appendix A, “Kerberos” on page 243 for more information. Also, detailed
information can be found in IBM Network Authentication Service Version 1.4 for
AIX, Linux, and Solaris Administrator’s and User’s Guide. This document is
made available with fileset krb5.doc.en_US on the AIX 5.3 Expansion CD.
In addition to data encryption and integrity checking, you can use NFS V4 with
Kerberos authentication to help control which clients can access exported
directories on an NFS server. We demonstrate this in Figure 4-2.
This authentication checking was not possible with NFS V3, and to achieve a
similar control over mounts it is very common to use the exportfs option
-access=Client[:Client] within the file /etc/exports. Sometimes the list of clients
becomes very large and the server exports list must be changed every time a
client is added, deleted, or renamed.
With NFS V4 and Kerberos authentication, with the option sec=krb5, the client is
required to be authenticated before contacting the NFS V4 server. This means
that even if clientA resides in the same Kerberos realm as the NFS V4 Server, a
mount request will fail if clientA itself or the user is not authenticated.
KDC considerations
Authentication requests to the KDC can be handled easily by today's available
processors; therefore, a single- or dual-processor machine should suffice for
thousands of clients.
The KDC server will be one of the most important servers in your network, so:
The system should be running 24x7.
Planning for disaster recovery should be taken into account. Your KDC
database should be replicated if possible.
The KDC server should ideally not be used for any other purpose because if
the KDC is compromised, all Kerberos principals are compromised.
Note: The default value for maximum clock skew is 300 seconds (five
minutes). For security reasons, we recommend that you not change this value.
Multi-homed servers
Multi-homed servers use more than one physical network interface with different
IP addresses to serve clients. There are several reasons why you would want to
do this, including:
Subnetworks
Load balancing
The KDC server can only be bound to one IP address. This is defined during the
setup of the KDC by providing the system host name.
Therefore, mapping between the client and server network interfaces as well as
the KDC bound network interface has to be planned and managed.
Network infrastructure
You should take into consideration not only how many authentication clients you
will be serving, but also where these clients are located. While the bandwidth
requirements for Kerberos authentication are minuscule, the important metric for
Kerberos performance is the network latency between clients and the Kerberos
KDC. Each authentication exchange requires time for at least one full round trip
between client and KDC. Users’ authentication requests will become noticeably
slow if this latency is long, for reasons such as:
Going through a satellite uplink
Traversing a DSL-connected backbone
Consequently, you should position your KDCs so that they are as close to the
clients’ network as possible. To support geographically dispersed networks with
possibly different types of connections, Kerberos implementations such as IBM
Network Authentication Service are capable of using the replication mechanism
to set up a KDC server and propagate the KDC database as needed.
and REALM2.IBM.COM
Because the principal is a unique identifier, you should plan a strategy for
principal naming throughout your Kerberos environment. This is especially true
for the NFS service principal names, because NFS V4 must have proper name
resolution in place to compile the ticket request. See “NFS V4–related name
resolution” on page 97.
Tip: Always use the FQDN for the NFS service principal.
We decided that all NFS V4 Server and NFS V4 full clients in our environment
would have the following service principal name:
nfs/<hostname>.itsc.austin.ibm.com@REALM
However, the NFS V4 implementation on AIX 5.3 supports only the following
types of encryption:
des-cbc-crc, des-cbc-md4, des-cbc-md5, des3-cbc-sha1
For the best performance and interoperability, we recommend that you use
Single-DES as the standard encryption type on your installation. This may
depend, of course, on your overall security strategy. If protecting packet privacy
is critical, you might need to stay with Triple-DES encryption.
In addition to the chosen standard Kerberos encryption, you can select between
the following security flavors, which are options used during exporting file
systems and depend on your security needs:
krb5
Use Kerberos V5 protocol to authenticate users before granting access to the
shared file system.
Note: We only used krb5, and we recommend that you start with this.
krb5i
Use Kerberos V5 authentication with integrity checking (checksums) to verify
that the data has not been tampered with in transmission.
krb5p
Use Kerberos V5 authentication, integrity checksums, and privacy protection
(encryption) on the shared file data. This provides the most secure file
sharing, as all traffic is encrypted.
In the AIX 5.3’s NFS V4 implementation, two types of Kerberos clients are
defined: slim clients and full clients.
An NFS client without a dedicated NFS service principal is called a slim client.
An NFS client with a dedicated NFS service principal in the form
nfs/<hostname>@REALM is called a full client. The full client provides
stronger, Kerberos-based RPC security, but that stronger security requires
more administrative overhead:
You should run the config.krb5 command on each client with the Kerberos
Administrative ID.
If you trust all systems connected to your internal network and authentication at a
user level fulfills your needs, then using slim clients would suffice. The following
reasons are why you would want to deploy slim clients in your infrastructure:
Pre-installed new systems
Mass rollout of new systems
Unprompted upgrade of clients
All systems share the same Kerberos realm and NFS domain
In addition, there are several choices for installing the slim clients, and these are
based on the way your systems are installed:
Complete new installation (scratch installation)
Cloning by use of a full system backup image
AIX 5L supports load modules that are responsible for identification, for
authentication, or both. You can use either one load module, which supports
both, or you can specify one load module that is responsible for the identification
part, and another that is responsible for authentication. Such a combination of
two modules is called a compound module.
We recommend that you use a compound module comprised of NAS 1.4 for
authentication and LDAP RFC2307 for identification. With NAS and LDAP in
place and KRB5LDAP as the authentication module on the clients, you do not
have to deal with creating users on every client.
For more about configuring the system to use these authentication modules, see
5.9.4, “Configure the NFS V4 client for integrated login services” on page 170.
For more details about using AIX directory integration to support user
authentication and identification, refer to the Security chapter in the redbook AIX
5L Version 5.3 Differences Guide, SG24-7463.
There is only one way to control access to client hosts: via the /etc/exports file
and the exportfs command. This is described in 3.7, “NFS V4 host
authorization” on page 88.
You have three options when it comes to controlling client user access to files
and directories:
Standard UNIX permissions
AIXC ACLs
NFS V4 ACLs
Standard UNIX permissions enable you to control access to only three identities:
the owning user, the owning group, and everyone else. If that is not sufficient to
meet your access control requirements, then you should choose one of the ACL
options. For example, if you have data where you need one group to have write
access, one or more other groups to have read-only access, and everyone else
to have no access at all, you will not be able to accomplish this using standard
UNIX permissions.
If standard UNIX permissions do not meet your requirements, you can then
choose to use AIXC ACLs or NFS V4 ACLs. If you choose NFS V4 ACLs, then
make sure that you choose file system types on your server that support this.
See 4.6, “Choosing the appropriate file system types” on page 111 for more
information.
You should not use AIXC ACLs if your requirements include one of the following:
You have non-AIX NFS clients that must be able to manipulate ACLs for data
on your NFS server. AIXC ACLs are only supported in AIX.
You require finer granularity access control than AIXC ACLs support. For
example, AIXC ACLs do not provide a way for you to set up a directory where
users can create files but not delete them after they are created.
More details about permissions and ACLs can be found in 3.4, “NFS V4 user
authorization” on page 62.
Depending on how computer-savvy your users are, you will need to either
educate them on how to properly maintain their own ACLs or set up ACL
inheritance so that they do not have to worry about changing permissions. (See
“Directory structure and ACLs” on page 79 for more about setting up ACL
inheritance.)
What if you do not want to allow end users to change permissions, thus keeping
that task under the control of designated system or data administrators? The
system by default does not accommodate this. Files created by a user are owned
by that user, and a file’s owner can always change its permissions. You would
need to devise some way to make sure that all files are owned by an
administrator account. One way is to run a periodic cron job that changes
ownership of all files to that account. This leaves a window of time when a user
can change permissions on a newly created file, but this can be remedied by the
job that changes ownership. It can also make sure that the permissions are set
as they should be after it changes ownership.
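As a sketch (the administrative account dataadm and the path /exports/data
are hypothetical), such a crontab entry could look like this:
# At 02:00 each day, give dataadm ownership of any file under /exports/data
# that it does not already own
0 2 * * * find /exports/data ! -user dataadm -exec chown dataadm {} \;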
You can see from this discussion that you may need to implement additional
administrative controls on top of those that are provided by default.
The file system you choose is dictated by the authorization method you
chose above. If you want to use NFS V4 ACLs, then your choice should be either
JFS2 EAv2 or GPFS. This does not mean that your system cannot use the other
file system types. The restriction only applies to the file systems where you want
to use NFS V4 ACLs.
Figure 4-3 on page 113 shows one way the single namespace could be
implemented and how it could benefit you, using one NFS server and five NFS
clients.
You want to export five file systems from your server. If you use NFS V3, then
your server’s exports file will look something like Example 4-1.
The NFS V3 client would then have to mount each export individually, as shown
in Example 4-2.
Example 4-2 The mount commands you would need to run on each NFS V3 client
mount serverY:/home /mount1
mount serverY:/usr/codeshare/ThirdPartyProgs /mount2
mount serverY:/usr/local /mount3
The NFS V3 server exported five file systems and the client had to mount five file
systems to five different mount points.
Now we look at how a single namespace would simplify the export and mount
process. We continue to use the directory structure defined in Example 4-1 on
page 113 to carry this out. The server’s /etc/exports file will look something like
Example 4-3.
Example 4-3 Rendering a single view for the NFS V4 client on the NFS V4 server
/exports -nfsroot
/home -vers=4,rw,exname=/exports/home
/usr/codeshare/ThirdPartyProgs -vers=4,rw,exname=/exports/ThirdPartyProgs
/usr/local -vers=4,ro,exname=/exports/local
/var/db2/v81/DB -vers=4,rw,exname=/exports/DB
/exports/scratch -vers=4,rw,exname=/exports/scratch
What have we done to the /etc/exports file to make the single namespace
concept into a practical implementation?
1. We need to set the pseudo-root on the server: This acts as the glue for all
other file systems. We used the /exports directory.
2. We then export all the other file systems, as we did with the NFS V3 exports,
but we add two new options:
vers=4 This tells the NFS server that the export is of type NFS V4.
exname This is the AIX implementation extension of the single
namespace concept. We have taken the file systems that we
want to export and glued them to /exports (defined in the first
line of the /etc/exports file).
If you look at the exname option (for all exports), we have also chosen to not
show the parent directories of all the exported directories. This enables us to
keep the server’s directory structure private and unexposed to the clients.
We will look at this from a practical point after we have shown what the client
must do to see the logical view. The command that the client has to run to mount
the exported file systems is:
# mount -o vers=4 serverY:/ /nfs
The mount command tells serverY that the client is requesting a view of its single
namespace, and after the server allows the client access to the view, the client
will mount it onto the /nfs mount point locally.
We can see that all the file systems we exported on the server are represented
under the client’s mount point (/nfs). If we now decide that we want to move the
/exports/scratch file system, on the server, to a different location, say /scratch,
and we also want to move /home to /users/home, we only have to make the
appropriate changes to the server’s /etc/exports file, and the client’s mount
command will remain the same.
While the pseudo-root FS - alias tree model delivers a good solution for
building a global namespace, you have to think about the current logical and
physical layout of the file systems on the NFS V4 server. Nevertheless, you
can adapt the pseudo-root FS - alias tree model to work with your current
physical and logical layout.
All logical migration paths have a normal NFS V3 deployment on AIX as the entry
point, and end with deployment of Kerberos-based authentication. In addition,
the identification method can be chosen between classic (using local
/etc/passwd) or centralized, by the use of IBM Tivoli Directory Server or any
other backend system.
Figure 4-4 on page 117 shows three possible logical migration paths indicated by
the different numbers and arrows:
Dark arrows Probable migration paths expected in production
environments
Dashed arrows Alternative paths for crossover
Number 1 target Migration to NFS V4 without enhanced security
Number 2 target Migration to NFS V4 with enhanced security using new
authentication infrastructure
Number 3 target Migration to NFS V4 with enhanced security by extending
the available identification infrastructure
At this point we discuss only the migration paths marked by the dark arrows.
1. In this case, migration to NFS V4 has been considered using a two-step
approach by migrating all server systems first to AIX 5.3 without having NFS
V4 deployed. After migrating all client systems to AIX 5.3, general
deployment of NFS V4 can take place.
Important: It is important that you do not confuse the term NAS with Network
Attached Storage. In this context NAS stands for Network Authentication
Service.
We found that the NAS user commands, such as kinit, are not in the default
PATH. Furthermore, another version of kinit is installed with Java14.sdk.
Example 5-1 Sample output of different kinit versions with different PATH settings
# type kinit
kinit is /usr/java14/jre/bin/kinit
#
# type kinit
kinit is /usr/krb5/bin/kinit
#
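One remedy, and our assumption about the file distributed below, is to place
the NAS directories ahead of the Java directories in the PATH definition in
/etc/environment, for example:
PATH=/usr/krb5/bin:/usr/krb5/sbin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin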
We distributed this file to all of our AIX NFS V4 servers and clients.
To capture errors logged by the NFS daemons, we made the following changes
to our systems:
1. A local file system was created and mounted to the /var/nfs4log directory. (It
is always better to have a separate file system for log daemons, so that your
root file system is safeguarded from being filled up.)
2. The following stanza was added to the /etc/syslog.conf file:
*.debug /var/nfs4log/syslog.out rotate time 1d archive /var/nfs4log/archive/
3. We then refreshed the syslogd using the refresh -s syslog command to
activate these changes immediately on the running system.
Using the syslogd setting given in step 2 may log more information than is
needed while running a system in production mode. You can then set the log
level back to error in the /etc/syslog.conf file to limit the amount of data
written. The new stanza will then show:
*.error /var/nfs4log/syslog.out rotate time 1d archive /var/nfs4log/archive/
The changes will take effect only after you refresh the syslog daemon.
Example 5-2 shows the server view of NFS V3 exported file systems.
Example 5-2 Server output of NFS V3 exported file systems on AIX 5.3
# pg /etc/exports
/exports -rw
/exports/home -rw
/exports/project/projA -ro
/exports/project/projB -ro
/usr/ldap -vers=3,ro
#
# exportfs -va
Example 5-3 shows the client view for the NFS V3 mounted file systems from
server nfs404.
Example 5-3 Client output for NFS V3 mounted file system on AIX 5.3
# showmount -e nfs404
export list for nfs404:
/exports (everyone)
/exports/home (everyone)
/exports/project/projA (everyone)
/exports/project/projB (everyone)
/usr/ldap (everyone)
# mount nfs404:/exports /nfs
# mount nfs404:/exports/home /nfs/home
# mount nfs404:/exports/project/projA /nfs/project/projA
# mount nfs404:/exports/project/projB /nfs/project/projB
# cd /nfs/project/projB
# df .
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
nfs404:/exports/project/projB 65536 64856 2% 5 1%
/nfs/project/projB
If the NFS domain is not set, the output looks similar to Example 5-5.
In our sample environment, we already set the NFS domain so that the output
looks like Example 5-6.
Note: As this book is being written, changing the NFS domain does not
recycle or start nfsrgyd. You have to manually start or recycle the daemon.
Note: There are several ways to change the pseudo-root on the server. We
recommend using the -nfsroot option within the /etc/exports file; however, all
NFS exported server file systems have to be unexported or this will not work.
Example 5-9 shows all commands and system responses when changing the
pseudo-root FS using -nfsroot option.
Important: Although the nfsd process still shows the option -root as /,
the actual pseudo-root FS has been changed to /exports. This has been
verified by running the nfsd -getnodes command.
Example 5-10 shows all commands and system responses when changing the
pseudo-root FS using chnfs.
Note: The gssd daemon is not running because we are not using
RPCSEC_GSS at this point in our sample environment.
On the server, we created three file systems (Example 5-11), which were
mounted under the /exports directory using the following subdirectories:
/exports/home
/exports/project/projA
/exports/project/projB
To be able to export these file systems to the clients, we created the /etc/exports
file on the server using smitty mknfsexp with the entries in Example 5-12.
In Example 5-12, the entries are similar to the /etc/exports file used with NFS V3
except for the fact that option vers=4 indicates that the export is of type NFS V4
only. In addition, you still have to export every local file system so that the NFS
server is capable of internally rendering and crossing the file system borders.
The main changes are seen on the client side:
1. Only a single mount command is required to make all available file systems
reachable for retrieval and directory changes.
2. With NFS V3, the same functionality is not available; to mount all file
systems, a separate mount command for each exported file system is required.
In our example, only the following command is needed on the client to make the
pseudo-root FS and all exported file systems beneath it available to the client:
mount -o vers=4 nfs404:/ /nfs
Example 5-13 shows the output of an NFS V4 client accessing the pseudo-root
FS.
You may have noticed that the df command is not useful in this context. That
is why the nfs4cl command is recommended instead.
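For example, the following sketch lists the NFS V4 file systems known to the
client (see the nfs4cl documentation for its other subcommands):
# nfs4cl showfs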
Note: The described pseudo-root FS setup cannot coexist with the alias tree
model. You have to choose between the two models.
We created three new file systems on the server that do not share the same
root mount path:
/local/trans
/local/trans1
/usr/codeshare/ThirdPartyProgs
Example 5-14 Sample df output from NFS V4 server for alias tree model
/dev/exname_lv 65536 63376 4% 20 1% /local/trans
/dev/exname2_lv 65536 63392 4% 18 1% /local/trans1
/dev/exname3_lv 65536 63392 4% 18 1%
/usr/codeshare/ThirdPartyProgs
The second step is to generate a new /etc/exports file and include the exname
option as shown in Example 5-15.
Finally, we export the new file systems to the clients using the exportfs -va
command as shown in Example 5-16.
The global namespace is available using the pseudo-root FS - alias tree model.
In Example 5-17 we show the access to the alias tree on an NFS V4 client.
To do so we created the following /etc/exports file on the server and included the
exname option shown in Example 5-18.
Example 5-18 Sample alias tree file /etc/exports file on the server
/local/trans -vers=4,rw,exname=/exports/trans
/local/dept -vers=4,rw,exname=/exports/dept
/local/home -vers=4,rw,exname=/exports/home
/usr/codeshare/ThirdPartyProgs -vers=4,ro,exname=/exports/ThirdPartyProgs
The Global Namespace is made available using the pseudo-root FS - alias tree
model.
Example 5-19 shows the access to the alias tree on an NFS V4 client.
Example 5-19 Extended client view of the pseudo-root FS - alias tree model
#mount -o sec=krb5,vers=4 nfs404:/ /nfs
#
#cd /nfs
#
# find . -print
.
./ThirdPartyProgs
./ThirdPartyProgs/bin
./ThirdPartyProgs/src
./ThirdPartyProgs/contrib
./home
./home/sally
While running this command, the system asks for a master database password
and a password for the administrative principal called admin. Record the names
and chosen passwords in a secure place, as these principals are essential for
your NAS environment.
Example 5-20 shows the output from the mkkrb5srv command with the options
shown above.
Path: /etc/objrepos
krb5.server.rte 1.4.0.0 COMMITTED Network Authentication Service
Server
The -s option is not supported.
The administration server will be the local host.
Initializing configuration...
Creating /etc/krb5/krb5_cfg_type...
Creating /etc/krb5/krb5.conf...
Creating /var/krb5/krb5kdc/kdc.conf...
Creating database files...
Initializing database '/var/krb5/krb5kdc/principal' for realm 'REALM2.IBM.COM'
master key name 'K/[email protected]'
You are prompted for the database Master Password.
It is important that you DO NOT FORGET this password.
Enter database Master Password:
Re-enter database Master Password to verify:
WARNING: no policy specified for admin/[email protected];
defaulting to no policy. Note that policy may be overridden by
ACL restrictions.
Enter password for principal "admin/[email protected]":
Re-enter password for principal "admin/[email protected]":
Principal "admin/[email protected]" created.
Creating keytable...
Starting krb5kdc...
krb5kdc was started successfully.
Starting kadmind...
kadmind was started successfully.
The command completed successfully.
Restarting kadmind and krb5kdc
In addition, the two lines shown in Example 5-21 are added automatically to the
/etc/inittab file so that the KDC server is automatically started after system
reboot.
Attention: If you intend to install the KDC on an NFS V4 server, be aware that
after activating the KDC server your NFS V4 server will not work properly until
you have completed all further steps.
After successful installation and configuration of the KDC, on the server, the files
shown in Figure 5-1 are installed in the /etc/krb5 directory.
Example 5-22 on page 138 shows the contents of the /etc/krb5/krb5.conf file,
and Example 5-23 on page 138 shows the contents of the
/etc/krb5/krb5_cfg_type file.
[realms]
REALM2.IBM.COM = {
kdc = nfs404.itsc.austin.ibm.com:88
admin_server = nfs404.itsc.austin.ibm.com:749
default_domain = ibm.com
}
[domain_realm]
.ibm.com = REALM2.IBM.COM
nfs404.itsc.austin.ibm.com = REALM2.IBM.COM
[logging]
kdc = FILE:/var/krb5/log/krb5kdc.log
admin_server = FILE:/var/krb5/log/kadmin.log
default = FILE:/var/krb5/log/krb5lib.log
#
Note: kadmin.local can only be run on the master KDC, whereas kadmin can
be run on any machine that is part of the Kerberos realm. We use both
variations of the command in examples in this chapter.
In Example 5-25, the principal named sally is added to the KDC database via
the command line. This requires that you have already authenticated to Kerberos
using an administrative principal, for example, admin/admin.
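As a sketch, the kadmin dialog for adding such a principal typically looks like
this (the prompts are typical kadmin output; the realm is REALM2.IBM.COM from
our sample environment):
kadmin: addprinc sally
Enter password for principal "[email protected]":
Re-enter password for principal "[email protected]":
Principal "[email protected]" created.
kadmin: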
Note: We assume that the local AIX user sally already exists on the KDC
server. In addition, this user name must also exist on all NFS V4 clients.
Attributes:
REQUIRES_PRE_AUTH
Policy: [none]
Example 5-26 shows that the number of keys equals 1 and that the standard
encryption for this principal is set to Single-DES. You could have principals
with a different number of keys and different encryption types within the KDC.
In the next step we verify that the user can be authenticated using the newly
created principal sally, as shown in Example 5-27.
This procedure has to be executed for every AIX user in your environment. For a
large number of users you may want to create a script to automate this process.
Example 5-28 should give you an idea of how to achieve this.
Note: This script will run only if you are already authenticated as admin. Using
this flag in a shell script can be dangerous if unauthorized users gain read
access to this script. The script is only provided to give you an idea of how this
can be achieved.
You also have the option to run the utility mkseckrb5, delivered with the AIX Base
Operating System bos.rte.security file set, to import all existing users from the
local system into the KDC database. For further details, see the commands
reference chapter of the AIX 5.3 online documentation.
The setup for this principal is slightly different from the user principal setup, as we
use a random password instead of one that is user-defined.
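As a sketch, creating the principal with a randomly generated key (assuming
our sample NFS server nfs404 in realm REALM2.IBM.COM) typically looks like
this:
kadmin: addprinc -randkey nfs/nfs404.itsc.austin.ibm.com
Principal "nfs/[email protected]" created.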
Attributes:
Policy: [none]
The next step is to set up a keytab entry on the NFS V4 server using the kadmin
command as shown in Example 5-30.
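A sketch of that keytab step, under the same host and realm assumptions as
above:
kadmin: ktadd -k /etc/krb5/krb5.keytab nfs/nfs404.itsc.austin.ibm.com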
Can the server principal get valid tickets? Use the kinit command.
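A sketch of that check, using the keytab entry created above:
# /usr/krb5/bin/kinit -k -t /etc/krb5/krb5.keytab nfs/nfs404.itsc.austin.ibm.com
#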
In our sample environment, we already set the NFS domain so that the output
appears as:
# chnfsdom
Current local domain: nfs.ibm.com
Note: Changing the NFS domain does not recycle or start the nfsrgyd. You
have to start and recycle the daemon manually.
The next step is to create the file /etc/nfs/realm.map. Use the chnfsrtd
command. In our sample environment, the realm is REALM2.IBM.COM and the
NFS domain is itsc.austin.ibm.com.
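A sketch of the corresponding command under those assumptions (check the
chnfsrtd documentation for the exact flags):
# chnfsrtd -a REALM2.IBM.COM itsc.austin.ibm.com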
Example 5-35 shows the installation process and the file sets that are installed
as part of the above.
At this point, you should start the necessary NFS processes by running the
/etc/rc.nfs script. Before completing the client setup by enabling RPCSEC_GSS,
we have to set up the NAS environment by using a full or slim NFS V4 Kerberos
client, which we describe in the next two sections.
The same result can be achieved by using the config.krb5 command with the
following options:
config.krb5 -C -d itsc.austin.ibm.com -r REALM1.IBM.COM -c nfs407.itsc.austin.ibm.com -s nfs407.itsc.austin.ibm.com
The NFS server (or an NFS V4 Full Kerberos client), using RPCSEC_GSS
security, must be able to acquire credentials for its service principal
nfs/<host_name>@REALM to authenticate requests. This type of principal must
be created using the kadmin command on either a server or the client during
installation. We create this principal on the client. For this operation, you should
know the Kerberos administrative principal (default is admin/admin@REALM)
and the password for the administrative principal.
In addition, this operation verifies that the client can communicate with the KDC
server.
The next step is to add the NFS service principal using the kadmin command.
Example 5-40 on page 148 shows that we used the kadmin: listprincs
commands to verify that no service principal of type
Now we create the keytab file so that the machine can request a valid ticket after
reboot. This enables the machine to mount NFS V4 directories with the security
flavor sec=krb5.
This step created the local keytab file krb5.keytab in the /etc/krb5 directory. If this
keytab file is generated on another machine, such as a Windows KDC server, it
must be copied to the client and merged into the local keytab file.
We verify the created keytab file using the kinit command (Example 5-42).
We again verify that the nfshostkey map file can be used to get a valid Kerberos
ticket.
Restriction: While writing this book, we learned that a slim client is not
capable of automatically mounting file systems that require RPCSEC_GSS
authorization only. In this case, the security flavor on the server must be at
least sec=sys:krb5(x).
The following are reasons why you would want to deploy slim clients in your
infrastructure:
Pre-installed new systems
Mass rollout of new systems
Unprompted upgrade of clients
In addition, there are several choices for how to install the slim clients, based on
the way your systems are installed:
Complete new installation (scratch installation)
Cloning by use of a full system backup image
The provided examples show the configuration of a slim client. They can be used
if the requirement is met and all targeted systems are to be in the same NFS
domain and Kerberos realm. Otherwise, the slim client will not function properly
and you will have to manually change the Kerberos configuration, the NFS
domain configuration, or both.
Example 5-46 Creating a tar image of the basic NAS and NFS configuration files
# ls /etc/nfs/* > /tmp/SlimClientInList
#
# ls /etc/krb5/* >> /tmp/SlimClientInList
#
# tar -cvf /tmp/SlimClientImage.tar -L /tmp/SlimClientInList
a /etc/nfs/local_domain 1 blocks.
a /etc/nfs/realm.map 1 blocks.
a /etc/krb5/krb5.conf 2 blocks.
a /etc/krb5/krb5_cfg_type 1 blocks.
#
Example 5-47 Installing the contents of the tar image onto the target system
# tar -xvf /tmp/SlimClientImage.tar
x /etc/nfs/local_domain, 12 bytes, 1 media blocks.
x /etc/nfs/realm.map, 68 bytes, 1 media blocks.
x /etc/krb5/krb5.conf, 864 bytes, 2 media blocks.
x /etc/krb5/krb5_cfg_type, 7 bytes, 1 media blocks.
#
# kinit sally
Password for [email protected]:
#
# klist
Ticket cache: FILE:/var/krb5/security/creds/krb5cc_0
The slim client will not be able to mount a file system using security flavor
sec=krb5 without valid Kerberos tickets. To achieve an automatic mount of the
pseudo-root FS during startup, the security flavor sec=sys has to be used on
both the server and the client for the NFS pseudo-root FS.
The slim client is now rebooted, and as the root user we access the remotely
mounted NFS directory.
This system is now ready to be used as the master full system backup for
prompted installation of the clients.
At this point, the client setup is finished and can mount and access directories
with enhanced security.
options :
rw,intr,rsize=32768,wsize=32768,timeo=300,retrans=5,biods=0,numclust=2,maxgroups=0,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,sec=krb5
#
If the result is 32, then your system is a 32-bit machine and you will not be able to
proceed any further. If the result is as shown in the previous example, then
continue with the rest of the steps.
2. We confirm what mode the system is running in:
# bootinfo -K
32
#
Important: Make sure that user ldapdb2 has a password assigned and can
log on without any challenges before proceeding.
Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
ldap.client.rte 5.2.0.0 USR APPLY SUCCESS
ldap.client.adt 5.2.0.0 USR APPLY SUCCESS
ldap.client.rte 5.2.0.0 ROOT APPLY SUCCESS
ldap.server.java 5.2.0.0 USR APPLY SUCCESS
db2_08_01.client 8.1.1.16 USR APPLY SUCCESS
db2_08_01.cnvucs 8.1.1.16 USR APPLY SUCCESS
db2_08_01.conv 8.1.1.16 USR APPLY SUCCESS
db2_08_01.db2.rte 8.1.1.16 USR APPLY SUCCESS
db2_08_01.db2.samples 8.1.1.16 USR APPLY SUCCESS
db2_08_01.essg 8.1.1.16 USR APPLY SUCCESS
db2_08_01.icuc 8.1.1.16 USR APPLY SUCCESS
db2_08_01.icut 8.1.1.16 USR APPLY SUCCESS
db2_08_01.jdbc 8.1.1.16 USR APPLY SUCCESS
db2_08_01.jhlp.en_US.iso885 8.1.1.16 USR APPLY SUCCESS
db2_08_01.cj 8.1.1.16 USR APPLY SUCCESS
db2_08_01.ldap 8.1.1.16 USR APPLY SUCCESS
db2_08_01.msg.en_US.iso8859 8.1.1.16 USR APPLY SUCCESS
db2_08_01.pext 8.1.1.16 USR APPLY SUCCESS
db2_08_01.repl 8.1.1.16 USR APPLY SUCCESS
db2_08_01.sqlproc 8.1.1.16 USR APPLY SUCCESS
ldap.server.rte 5.2.0.0 USR APPLY SUCCESS
ldap.server.com 5.2.0.0 USR APPLY SUCCESS
ldap.server.cfg 5.2.0.0 USR APPLY SUCCESS
ldap.server.com 5.2.0.0 ROOT APPLY SUCCESS
ldap.server.cfg 5.2.0.0 ROOT APPLY SUCCESS
db2_08_01.conn 8.1.1.16 USR APPLY SUCCESS
db2_08_01.cs.rte 8.1.1.16 USR APPLY SUCCESS
db2_08_01.das 8.1.1.16 USR APPLY SUCCESS
db2_08_01.db2.engn 8.1.1.16 USR APPLY SUCCESS
#
14.Install the Kerberos V5 file sets as described in 5.6.2, “Installing the IBM NAS
file sets” on page 135.
We have now successfully installed all of the software that is required to support
KDC and RFC2307 with IBM Directory Server.
3. Although the DB2 database is now configured and running, we need to set
the database to autostart after reboot of the system.
5. We now stop the IBM Tivoli Directory Server to facilitate the addition of the
container object for Kerberos:
# ibmdirctl -D cn=admin -w succ3ss stop
Stop operation succeeded
#
6. To be able to provide an LDAP backend to the KDC, run this command:
# ldapcfg -q -s "o=IBM,c=US"
#
We do this so that we do not have to use a legacy or file-based backend. For
a detailed description of why this is done, refer to IBM Network
Authentication Service Version 1.4 for AIX, Linux, and Solaris
Administrator’s and User’s Guide, delivered with the krb5.doc.en_US file set.
7. We now start the IBM Directory Server to enable us to add the Kerberos
schema:
# ibmdirctl -D cn=admin -w succ3ss start
Start operation succeeded
#
The sample file contents are shown in “LDIF sample file for KDC” on
page 270.
10.We now modify the dn: o=IBM, c=US container by adding the schema.
add objectclass:
BINARY (11 bytes) KrbRealm-V2
BINARY (11 bytes) KrbRealmExt
add krbrealmName-V2:
BINARY (15 bytes) REALM1.IBM.COM
add krbprincSubtree:
BINARY (45 bytes) krbrealmName-V2=REALM1.IBM.COM, o=IBM, c=US
add krbDeleteType:
BINARY (1 bytes) 3
adding new entry krbrealmName-V2=REALM1.IBM.COM, o=IBM, c=US
add objectclass:
BINARY (9 bytes) container
add cn:
BINARY (9 bytes) principal
adding new entry cn=principal, krbrealmName-V2=REALM1.IBM.COM, o=IBM, c=US
add objectclass:
BINARY (9 bytes) container
add cn:
BINARY (6 bytes) policy
adding new entry cn=policy, krbrealmName-V2=REALM1.IBM.COM, o=IBM, c=US
#
11.At this point we can verify the IBM Directory Server using a simple LDAP
query that shows all available naming contexts in the newly created LDAP
directory.
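One way to issue such a query is with ldapsearch; a sketch (the host name is
from our sample environment):
# ldapsearch -h nfs407.itsc.austin.ibm.com -s base -b "" objectclass=*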
namingcontexts=CN=SCHEMA
namingcontexts=CN=LOCALHOST
namingcontexts=CN=AIXDATA
namingcontexts=CN=PWDPOLICY
namingcontexts=CN=IBMPOLICIES
namingcontexts=O=IBM,C=US
#
Path: /etc/objrepos
krb5.server.rte 1.4.0.0 COMMITTED Network Authentication Service
Server
The -s option is not supported.
The administration server will be the local host.
Initializing configuration...
Creating /etc/krb5/krb5_cfg_type...
Creating /etc/krb5/krb5.conf...
Creating /var/krb5/krb5kdc/kdc.conf...
Creating database files...
Initializing database 'LDAP' for realm 'REALM1.IBM.COM'
master key name 'K/[email protected]'
Attempting to bind to one or more LDAP servers. This may take a while...
You are prompted for the database Master Password.
Enter database Master Password:
Re-enter database Master Password to verify:
Attempting to bind to one or more LDAP servers. This may take a while...
WARNING: no policy specified for admin/[email protected];
defaulting to no policy. Note that policy may be overridden by
ACL restrictions.
Enter password for principal "admin/[email protected]":
Re-enter password for principal "admin/[email protected]":
Principal "admin/[email protected]" created.
Creating keytable...
Attempting to bind to one or more LDAP servers. This may take a while...
Creating /var/krb5/krb5kdc/kadm5.acl...
Starting krb5kdc...
Attempting to bind to one or more LDAP servers. This may take a while...
krb5kdc was started successfully.
Starting kadmind...
Attempting to bind to one or more LDAP servers. This may take a while...
kadmind was started successfully.
2. Check that the KDC server processes krb5kdc and kadmind have been started.
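For example:
# ps -ef | egrep "krb5kdc|kadmind"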
4. To be able to add, modify, and delete users and groups using the
mkgroup/mkuser -R KRB5LDAP commands, we have to add the following
authentication grammar to the /usr/lib/security/methods.cfg file by
appending these lines:
KRB5LDAP:
options = db=LDAP,auth=KRB5
5. In addition, the local LDAP security client daemon must be running on the
system. This can be accomplished with the following command.
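On AIX, the daemon is typically started as follows (a sketch based on standard
AIX secldapclntd administration):
# start-secldapclntd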
6. To verify the LDAP security client daemon, you can use the ls-secldapclntd
command.
At this point, the KDC and IBM Directory Server initial setup is complete.
In order to create users in the Kerberos realm and store the user/group
identification information in the RFC2307-compliant LDAP directory, we run the
following commands:
1. We added the root user to the realm to enable the AIX mkuser command to
access the LDAP backend database.
2. The next step is to verify the newly created principal. In addition, we need to
have valid tickets to perform all further user creation steps (shown in
Example 5-65).
Example 5-66 Creation of user and group in the KDC with LDAP backend
# mkgroup -R KRB5LDAP -a id=1400 eng
#
# mkuser -R KRB5LDAP id='6023' pgrp='staff' groups='eng' home='/home/sally' \
shell='/bin/ksh' gecos='NFS V4 KDC Test user sally' sally
#
# passwd -R KRB5LDAP sally
4. Verify that the user sally is created correctly by executing the lsuser sally
command.
5. To verify that the user ID is valid, try to log on as sally and display the
Kerberos ticket details.
telnet (nfs407)
AIX Version 5
(C) Copyrights by IBM and by others 1982, 2004.
login: sally
sally's Password:
$
$ /usr/krb5/bin/klist
Ticket cache: FILE:/var/krb5/security/creds/[email protected]_6023
Default principal: [email protected]
The user sally can log on to any machine in the Kerberos realm
REALM1.IBM.COM without being defined as a local user (if the client has been
set up with the previously mentioned authentication grammar). The user’s
identification will be stored in LDAP RFC2307.
Also, because user sally is granted a Kerberos ticket automatically during the
AIX logon process, she can access all NFS V4 file systems mounted with the
sec=krb5 option, provided the ACLs are set to allow access.
All further steps to set up an NFS V4 server were described in previous sections.
1. The plan is to use AIX integrated login with the KDC and IBM Directory Server
as the LDAP backend, so we install the LDAP client software from the AIX
5.3 base media.
2. Check that the system uses the correct PATH to locate the NAS binaries.
3. Now we can configure this system into our NAS infrastructure as a client
system using the mkkrb5clnt command, delivered by AIX BOS file set
bos.rte.security. (nfs407.itsc.austin.ibm.com is the KDC serving the Kerberos
realm REALM1.IBM.COM and also the IBM Tivoli Directory Server.)
4. We verify that the newly created principal can be used to authenticate with
Kerberos.
Note: This principal cannot be used for NFS V4 authentication. You still
must create the service principal of type nfs/<FQDN>@REALM.
If using the config.krb5 command, delivered with IBM NAS V1.4, the
configuration will be valid, but the principal nfs/<FQDN>@REALM will not be
created.
5. The next step is to add an NFS service (machine) principal and create a
keytab file for the NFS V4 client nfs402.itsc.austin.ibm.com. This is done by
carrying out the following process.
6. Verify that the machine principal can log on using the keytab file that we just
generated.
7. To create the hostkey map file, change the NFS domain and enable
RPCSEC_GSS by executing the commands shown in Example 5-76.
KRB5LDAP:
options = db=LDAP,auth=KRB5
11.Communication to the IBM Directory Server from the client must be enabled:
# mksecldap -c -h nfs407.itsc.austin.ibm.com -a cn=admin -p succ3ss
#
With these settings in place, any principal that is defined in the realm
REALM1.IBM.COM can log on to nfs402.itsc.austin.ibm.com. Before trying to log
on to the system, we have to verify that the local LDAP security client daemon is
running. Otherwise, the logon attempt would fail as the AIX login process will not
be able to contact the LDAP server.
The local LDAP security client daemon is running and we can try the integrated
logon for user sally.
The user sally can log on to any machine in the Kerberos realm
REALM1.IBM.COM without being defined as a local user if the client has been
set up with the previously mentioned authentication grammar. The user’s
identification will be stored in LDAP RFC2307.
Also, because user sally is granted a Kerberos ticket automatically during the
AIX logon process, she can access all NFS V4 file systems mounted with the
sec=krb5 option if ACLs are set to allow access.
Note: As we write this book, unmodified Fedora Core 2 Linux does not
contain a complete and working RPCSEC_GSS (Kerberos 5, LIPKEY,
SPKM-3) implementation. This will probably change with Fedora Core 3 Linux.
Therefore, we were unable to test this functionality for this book. You are
welcome to attempt to patch the 2.6.5.x kernel to a later level manually; this
should give you the missing functionality.
Linux was installed from the install media with the NFS services chosen at install
time. In this section, we look at setting up some basic NFS V4 scenarios using
the AUTH_SYS security mechanism.
There are many configuration files that make up a Linux NFS client and you
should make sure that they exist before you proceed:
/etc/fstab
/etc/auto.master
/etc/idmapd.conf
/etc/gssapi_mech.conf
/etc/init.d/portmap
/etc/init.d/rpcidmapd
/etc/init.d/rpcgssd (required on the client when RPCSEC_GSS is used)
2. Our NFS server has the directories exported as shown in Figure 5-5. You can
edit /etc/exports using either a text editor or smitty mknfsexp to create your
exports list.
Tip: Manually editing /etc/exports is easier if you need to export more than
one file system. Run exportfs -va after applying your changes to the
exports file. With smitty, add one file system at a time.
Figure 5-10 chkconfig to make sure all services start and stop automatically
6. Ensure that all of the right daemons are restarted or stopped on the NFS
client.
Figure 5-11 Confirm the correct daemons are restarted or stopped on the NFS client
Figure 5-12 Checking NFS daemons and what ports they are listening on
3. Mount the exported directory on the NFS client. We first mount it manually
and then show how the /etc/fstab file can be modified to allow mounts via
directory name.
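A sketch of such an /etc/fstab entry on the Linux client (the server name
nfs404 and the mount options are assumptions drawn from our environment):
nfs404:/   /nfs   nfs4   rw,hard,intr,proto=tcp,port=2049   0 0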
5. Unmount the NFS file system that was mounted in the previous steps.
As you can see, Linux can mount AIX NFS V4 exports without any problems. We
now unmount the file system as done in step 5 on page 184 to continue with the
next section.
The NFS server already has a user called sally, with UID 401, created on it. We
create the same user and UID on our NFS V4 client. For this example, we are not
using LDAP. If you use LDAP, you can use users created there for testing.
Figure 5-19 Adding a user on the Linux client to match the NFS server
Figure 5-20 Adding and exporting user sally’s home directory on the NFS V4 server
Figure 5-23 Test on user sally’s home directory as the root user
Important: Ensure that the NFS V4 client has no directories mounted from the
NFS V4 server before proceeding. Use the umount command to unmount any
that are.
2. Stop NFS.
The output shows that, although the server is exporting /exports/home/sally and
/exports/project/proja, the client sees these only as /nfs/home/sally and
/nfs/project/proja. This is a direct result of setting the nfsroot on the
server to /exports.
Note: The Kerberos realm name is derived from the Active Directory
domain name. The realm name is the domain name converted to
uppercase (For the example.com domain name, the realm name is
EXAMPLE.COM.) In our case this is KDC.ITSC.AUSTIN.IBM.COM on
server nfs409.kdc.austin.ibm.com.
2. Download and install the public Windows MIT Kerberos V5 utilities, which are
available at:
http://web.mit.edu/kerberos/www/
3. We created a sample user called sally within Windows Active directory. This
is achieved by carrying out the following steps:
a. Click Start.
b. Click All Programs.
c. Click Administrative Tools.
d. Click Active Directory Users and Computers.
Use the following guidelines to configure the AIX NFS V4 server hosts.
Unlike Kerberos principal names, Windows Server 2003 account names are not
multipart. Because of this, it is not possible to directly create an account named
nfs/[email protected]. Such a principal instance is instead
created through service principal name mappings.
3. Verify the created service principal name (SPN) using the setspn command
on Windows. The SPN must show the NFS service principal; otherwise, the
systems will not be able to authenticate with the KDC.
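A hypothetical verification; setspn -L lists the SPNs registered for the given account, and nfs403 is the account name from our environment:
C:\> setspn -L nfs403
Registered ServicePrincipalNames for CN=nfs403,CN=Users,DC=kdc,DC=itsc,DC=austin,DC=ibm,DC=com:
    nfs/nfs403.kdc.itsc.austin.ibm.com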
Attention: Be sure to have set all host names using the Fully Qualified
Domain Name as used within your Active Directory; otherwise, the gssd
daemon will not be able to authenticate with the Windows KDC. In our
example, all systems used <hostname>.kdc.itsc.austin.ibm.com.
Example 5-87 Sample Kerberos configuration file for the Windows Active Directory
[libdefaults]
        default_realm = KDC.ITSC.AUSTIN.IBM.COM
        default_keytab_name = FILE:/etc/krb5/krb5.keytab
        default_tkt_enctypes = des-cbc-md5 des-cbc-crc
        default_tgs_enctypes = des-cbc-md5 des-cbc-crc
[realms]
        KDC.ITSC.AUSTIN.IBM.COM = {
                kdc = nfs409.kdc.itsc.austin.ibm.com:88
                admin_server = nfs409.kdc.itsc.austin.ibm.com:749
                default_domain = kdc.itsc.austin.ibm.com
        }
[domain_realm]
        .kdc.itsc.austin.ibm.com = KDC.ITSC.AUSTIN.IBM.COM
        nfs409.kdc.itsc.austin.ibm.com = KDC.ITSC.AUSTIN.IBM.COM
[logging]
        kdc = FILE:/var/krb5/log/krb5kdc.log
        admin_server = FILE:/var/krb5/log/kadmin.log
        default = FILE:/var/krb5/log/krb5lib.log
5. Merge the keytab file with the /etc/krb5/krb5.keytab file on the AIX system by
copying the keytab file to the AIX system (for example, using binary FTP),
then running this command in the directory to which it was copied:
/usr/krb5/sbin/ktutil
Example 5-88 Adding the Windows keytab file to the AIX System
# /usr/krb5/sbin/ktutil
ktutil: rkt nfs403.keytab
ktutil: l
slot KVNO Principal
------ ------ ------------------------------------------------------
1 4 nfs/[email protected]
ktutil: wkt /etc/krb5/krb5.keytab
ktutil: quit
#
For further details about configuring AIX Integrated Login using Kerberos and
Windows Active Directory, see the AIX 5.3 online documentation chapter about
security and authenticating to AIX using Kerberos.
Although most principals in a realm are generally created with the requires_preauth
flag enabled, this flag is not desirable on cross-realm authentication keys,
because it makes it impossible to disable preauthentication on a
service-by-service basis. Disabling it, as in the example above, is recommended.
It is also very important that these principals have good passwords. MIT
recommends that the TGT principal passwords be at least 26 characters of
random ASCII text.
Example 5-92 shows the command and output as performed on our KDC server
in realm REALM1.IBM.COM.
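A hedged sketch of what creating such a cross-realm TGT principal might look like on the REALM1.IBM.COM KDC; the exact options used in Example 5-92 may differ, and the principal name follows the krbtgt/<remote-realm>@<local-realm> convention:
kadmin: addprinc -e des-cbc-crc:normal krbtgt/REALM2.IBM.COM@REALM1.IBM.COM
Enter password for principal "krbtgt/REALM2.IBM.COM@REALM1.IBM.COM":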
The same has to be done on the KDC server in the other realm. The second
realm must then be made known in the /etc/krb5/krb5.conf file on the
participating systems, for example:
REALM2.IBM.COM = {
        kdc = nfs403.itsc.austin.ibm.com:88
        admin_server = nfs403.itsc.austin.ibm.com:749
        default_domain = itsc.austin.ibm.com
}
3. Now we restart the gssd daemon using stopsrc -g gssd and startsrc -g
gssd.
Note: Due to internal kernel caching, it may take up to two minutes until
gssd recognizes the change.
Note: You have to recycle the nfsrgyd daemon for the new mapping to be
loaded.
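A minimal sketch of recycling the daemon with the SRC commands used elsewhere in this book:
# stopsrc -s nfsrgyd
# startsrc -s nfsrgyd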
We start by giving a brief introduction to the tools and aids available with AIX and
other third-party tools we have found useful. We then guide you through issues
that we encountered while writing the book and how we resolved them.
You may find additional useful information on this topic in Introduction to the IBM
Problem Determination Tools, SG24-6296.
Tip: We created a separate file system for the syslog output and mounted it at
/var/logs/. This is good practice, as logging daemons are notorious for
filling up file systems. Allowing the syslog daemon to write to the root file
system, and then having it fill up, can lead to a service interruption.
3. We tell the syslogd that we have made changes to its configuration file and
that it should reread it.
# refresh -s syslogd
0513-095 The request for subsystem refresh was completed successfully.
#
Example 6-21 on page 220 shows sample entries in the syslog output file.
4. When you have completed your problem determination steps, you may want
to disable syslog logging. Doing so is usually a personal preference. Some
systems administrators prefer to have all logging turned on while others
choose to do it on demand. To disable syslog logging, comment out the line
that was added to /etc/syslog.conf in step 1 on page 208 and refresh the
syslogd as shown in Example 6-2.
Using the setting for syslogd shown in Example 6-2 may log more information
than you need while running your system in production mode. You can set
syslog to log only errors in the /etc/syslog.conf file, which limits the amount of
data written to the log file. This is the new line in the /etc/syslog.conf file:
*.error /var/logs/syslog/syslog.out rotate time 1d archive /var/logs/syslog/archive/
We now decode the iptrace file into a readable format using the ipreport
command. You can also use Ethereal to decode the trace file (see 6.5.2, “Using
the Ethereal utility” on page 212). We recommend either of two ways to decode
the iptrace file:
The first method is to decode everything:
ipreport -v [LogFile] > [OutputFile]
The second method is to decode RPC packets only:
ipreport -nsrv [LogFile] > [OutputFile]
If you are trying to debug NFS V2 or NFS V3 problems, then the second method
would be the correct option to choose. For NFS V4, we recommend that you use
the first method. This enables you to decode the Kerberos packets as well as all
other relevant information.
If you want Tivoli Directory Server to log more information, you can change the
value of the ibm-slapdSysLogLevel variable in the /etc/ibmslapd.conf file.
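For example, a hypothetical entry; we understand the server accepts the levels l, m, and h (low, medium, and high). Restart the directory server (for example, with ibmdirctl) afterward so that the new level takes effect:
ibm-slapdSysLogLevel: h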
Restriction: Third-party tools are not supported by IBM. If you choose to use
these tools, you do so at your own risk.
lsof for AIX is available for download from the following sites:
http://aixpdslib.seas.ucla.edu
http://www.bullfreeware.com
Solution
The message is telling you that Enterprise Identity Mapping (EIM) is not
configured. If you are not using EIM, you can safely ignore the message. If you
are using EIM, consult your EIM documentation for further assistance.
Solution
This behavior is expected. You cannot map the same realm to a second NFS
domain.
After this, the /etc/rc.nfs command always fails with the message shown in
Example 6-7.
Solution
Check that the /exports directory exists. After we created the directory with the
mkdir /exports command, everything worked.
Example 6-9 Output of nfsd -getnodes command for invalid argument problem
# nfsd -getnodes
#root:public
/prootfs:/prootfs
#
The next step is to check whether the /etc/exports file contains the -nfsroot
option. Finally, we verify that the nfsd daemon shows the -root / option.
Solution
The problem is caused by the missing -nfsroot option within the /etc/exports file.
After adding the / -nfsroot option to the /etc/exports file as shown in the next
example, everything worked fine.
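For illustration, a minimal sketch of an /etc/exports file that sets the pseudo-root; the directory name matches our environment. Run exportfs -va after editing the file so that the change takes effect:
/ -nfsroot
/exports/home/sally -vers=4,rw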
After this, the exportfs -va command always fails with the message shown in
Example 6-14.
After removing all non-exname entries from the /etc/exports file, the exportfs
command ran without any further problems.
Solution
This is caused by the use of the vers=4 option on the client; by default, NFS V3 is used.
We see in Example 6-16 that in the Local Path column, nfs4cl showfs reports
the mounted file system incorrectly. Example 6-17 shows the correct output.
Solution
This is a problem with the nfs4cl command on AIX 5.3.
Note: As we wrote this book, this problem had been identified as a bug and
should be resolved in a future Maintenance Level for AIX 5.3.
Example 6-19 Confirming the NFS domain is set and nfsrgyd is running
# chnfsdom
Current local domain: itsc.austin.ibm.com
#
# lssrc -s nfsrgyd
Subsystem Group PID Status
nfsrgyd nfs inoperative
#
Solution
Start the nfsrgyd using /etc/rc.nfs.
Example 6-20 Mount command used with an incorrect NFS version number
# mount -o vers=5,sec=krb5 nfs402:/ /nfs
mount: 1831-010 server nfs402 not responding: RPC: Success
mount: retrying
Solution
Use a supported version number (Version 2, 3, or 4).
Solution
Restart the gssd subsystem or make sure it is running.
This problem can have multiple causes, even though the following command works:
mount -o vers=4 nfs404:/ /nfs
This entry is incorrect. We do not have to map this host, as nfs407 does not have
a second interface. Carry out the following steps to resolve the problem:
1. Run the following command:
nfshostmap -d nfs407
Restriction: As we write this book, we have found that the gssd takes
approximately two minutes after a stop-and-start operation to pick up all
local changes.
The local gssd must be able to resolve the NFS V4 server as well as the KDC
server, so we check whether the server names can be found using the host
command, as shown in Example 6-29.
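Such a check resembles the following (hypothetical output; nfs404 is the NFS V4 server in our environment):
# host nfs404.itsc.austin.ibm.com
nfs404.itsc.austin.ibm.com is 9.3.5.173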
This test passed, so the NFS server can be resolved. Next, we test whether the
KDC server is known and resolvable (Example 6-30).
Example 6-30 Test to see whether the KDC server is known and accessible
# grep ":88" /etc/krb5/krb5.conf
kdc = nfs409.kdc.itsc.austin.ibm.com:88
#
# host nfs409.kdc.itsc.austin.ibm.com
nfs409.kdc.itsc.austin.ibm.com is 9.3.4.71
This test passed as well. Finally, we check the local host name and loopback
interface using the hostname and rpcinfo commands (Example 6-31).
Solution
The system's loopback interface name resolution failed (see RFC 1537 for
details). This indicates an incorrect entry on the DNS server or in the /etc/hosts
setup. Correcting the system loopback entry resolves the problem, and the host
command returns the correct entry, shown in Example 6-32.
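The corrected entry would resemble the following hypothetical /etc/hosts line, where the loopback address also resolves to the name localhost, per the RFC 1537 guidance:
127.0.0.1   loopback localhost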
6.8.8 File and directory access: cd, ls, etc. return “permission denied”
An NFS client cannot access the mounted files and directories. Several
commands, such as cd or ls, return a permission-denied message
(Example 6-33 on page 227).
These errors indicate that no valid ticket has been issued to the user.
2. You can also check the ticket status by running the klist command.
Example 6-37 Error showing the gssd being unable to access a valid ticket cache
Aug 10 12:25:50 nfs404 unix: kgss_init_sec_context returned GSS_S_FAILURE
KRB5_FCC_NOFILE
The error indicates that the gssd process is unable to access a valid ticket cache
file for the principal.
Check to see whether the KRB5CCNAME environment variable has been set.
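For example:
# echo $KRB5CCNAME
If the variable is set, it should point to a valid credentials cache, such as FILE:/var/krb5/security/creds/krb5cc_0.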
Example 6-41 NFS V4 registry daemon entries in the syslog log file
Aug 11 11:09:22 nfs404 syslog: nfsrgyd: Unable to map local user (sally) to a
foreign user
Aug 11 11:09:22 nfs404 syslog: nfsrgyd: Unable to map local group (staff) to a
foreign group
Solution
The realm to NFS domain mapping in the /etc/nfs/realm.map file is incorrect.
Carry out the following steps to resolve the problem:
1. Change the mapping using the chnfsrtd command to make the change from:
realm1.ibm.com nfstest.itsc.austin.ibm.com
to
realm1.ibm.com itsc.austin.ibm.com
2. Stop and start the nfsrgyd on NFS server nfs404:
stopsrc -s nfsrgyd
startsrc -s nfsrgyd
Example 6-42 shows the test we carried out to prove that the issue had been
resolved.
Example 6-42 The resolved issue after steps 1 and 2 have been carried out
$ cd /nfs
$
$ touch thisisatest2
$
$ ls -l thisisatest2
-rw-r--r-- 1 sally staff 0 Aug 11 11:08 thisisatest2
$
# kinit admin/admin
Password for admin/[email protected]:
#
# klist
Ticket cache: FILE:/var/krb5/security/creds/krb5cc_0
Default principal: admin/[email protected]
Solution
The problem is caused by the fact that the principal root/admin is not known as
an administrative principal, so this message is expected. We recommend that
you use the kadmin command with the -p admin/admin option.
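For example (admin/admin is the administrative principal used throughout this book):
# /usr/krb5/sbin/kadmin -p admin/admin
Password for admin/[email protected]: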
Minor status codes are returned by the underlying security mechanisms that are
supported by a given implementation of GSS-API. Every GSS-API function takes
as the first argument a minor_status or minor_stat parameter. An application can
examine this parameter when the function returns, successfully or not, to see the
status that is returned by the underlying mechanism.
The following two sections provide a quick messages reference. A more detailed
list can be found in IBM Network Authentication Service Version 1.4 for AIX,
Linux, Solaris and Windows Application Development Reference, available in the
krb5.doc.en_US file set.
KRB5KDC_ERR_NONE No error.
Part 3 Appendixes
This part includes the following sections:
Appendix A, “Kerberos” on page 243
Appendix B, “Sample scripts, files, and output” on page 255
Appendix C, “AIX 5.3 NFS quick reference” on page 287
Appendix A. Kerberos
This appendix describes the Kerberos authentication method. It also provides a
list of references that contain more in-depth information about Kerberos.
The first four sections are reprinted from the IBM Redbook The RS/6000 SP
Inside Out, SG24-5374.
Kerberos: Also spelled Cerberus, the watchdog of Hades, whose duty was to
guard the entrance (against whom or what does not clearly appear)… It is
known to have had three heads.
- Ambrose Bierce, The Enlarged Devil's Dictionary
This section describes the protocol that Kerberos uses to provide these services,
independent of a specific implementation. A more detailed rationale for the
Kerberos design can be found in the MIT article Designing an Authentication
System: a Dialogue in Four Scenes, which is available from:
http://web.mit.edu/kerberos/www/dialogue.html
The request is processed by the authentication server. Using the client’s name, it
looks up the corresponding key in the Kerberos database. It also generates a
random session key to be shared by the client and the TGS, which will be used to
encrypt all future communication of the client with the TGS. With this information,
the AS constructs the ticket-granting ticket for the client, which (as with all
Kerberos tickets) contains six parts:
1. The service for which the ticket is good (here, the TGS)
2. The client’s (principal’s) name
3. The client machine’s IP address
4. A timestamp showing when the ticket was issued
5. The ticket lifetime (configurable in K5)
6. The session key for client/TGS communications
This ticket is encrypted with the secret key of the TGS, so only the TGS can
decrypt it. Because the client needs to know the session key, the AS sends back
a reply that contains both the TGT and the session key, all of which is encrypted
by the client’s secret key. This is shown in Figure A-2 on page 247.
Now the sign-on command prompts the user for the password and generates a
DES key from the password using the same algorithm as the Kerberos server. It
then attempts to decrypt the reply message with that key. If this succeeds, the
password has matched the one used to create the user’s key in the Kerberos
database, and the user has authenticated herself. If the decryption fails, the
sign-on is rejected and the reply message is useless. Assuming success, the
client now has the encrypted TGT and the session key for use with the TGS, and
stores them both in a safe place. Note that the authentication has been done
locally on the client machine, and the password has not been transferred over the
network.
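To make this concrete, a sign-on session might look like the following hypothetical example; the principal and realm names are illustrative. The kinit command performs the AS exchange described above, and klist displays the resulting TGT from the credentials cache:
# kinit [email protected]
Password for [email protected]:
# klist
Ticket cache: FILE:/var/krb5/security/creds/krb5cc_201
Default principal: [email protected]
Valid starting     Expires            Service principal
08/11/04 11:00:00  08/11/04 23:00:00  krbtgt/[email protected]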
The authenticator is encrypted with the session key that the client shares with the
TGS. The client then sends a request to the TGS consisting of the name of the
service for which a ticket is requested, the encrypted TGT, and the encrypted
authenticator. This is shown in Figure A-3.
The TGS can decrypt the TGT because it is encrypted with its own secret key. In
that ticket, it finds the session key to share with the client. It uses this session key
to decrypt the authenticator, and can then compare the client’s name and
address in the TGT and the authenticator.
If all checks pass, the TGS generates a service ticket for the service indicated in
the client’s request. The structure of this service ticket is identical to the TGT
described in “Authenticating to the Kerberos server” on page 246. The content
differs in the service field (which now indicates the application service rather than
the TGS), the timestamp, and the session key. The TGS generates a new,
random key that the client and application service will share to encrypt their
communications. One copy is put into the service ticket (for the server), and
another copy is added to the reply package for the client since the client cannot
decrypt the service ticket. The service ticket is encrypted with the secret key of
the service, and the whole package is encrypted with the session key that the
TGS and the client share. The resulting reply is shown in Figure A-4. Compare
this to Figure A-2 on page 247.
The client can decrypt this message using the session key it shares with the
TGS. It then stores the encrypted service ticket and the session key to share with
the application server, normally in the same ticket cache where it already has
stored the TGT and session key for the TGS.
The application server decrypts the service ticket with its secret key, uses the
enclosed session key to decrypt the authenticator, and checks the user’s identity
and the authenticator’s timestamp. Again, this processing is the same as for the
TGS processing the service ticket request. If all checks pass, the server performs
the requested service on behalf of the user.
Kerberos terminology
The following terms are used when discussing Kerberos authentication:
Realm A Kerberos domain that can consist of a number of
machines providing authentication services.
Principal A user or a service that uses authentication services and
is identified in the authentication database. For example,
[email protected], where root is the user
identity and admin the instance.
Instance In the case of a service instance, it represents the
occurrence of the server. Service example:
hardmon.sp21cw0, where hardmon represents the
service and sp21cw0 represents the node providing the
service.
In the case of a user, the instance represents the
Kerberos authority granted to the user. User example:
root.admin, where admin represents a Kerberos
authorization for administrative tasks in Kerberos.
Authentication database
A set of files containing the definitions of the Kerberos
authentication information. The authentication database is
maintained at the Kerberos server.
Ticket An encrypted message containing the identity of a user. A
ticket is passed from a client to a server as soon as a
Kerberos service is requested. Tickets have a
predetermined lifetime and have to be renewed
periodically.
Key An eight-byte form of a user or service password stored in
the authentication database. This password is associated
with a Kerberos user or service principal, not a user ID.
IBM Redbooks
(Redbooks are available for purchase or download at
http://www.redbooks.ibm.com.)
IBM Network Authentication Service Version 1.4 for AIX, Linux, and Solaris
Administrator’s and User’s Guide
This manual describes how to plan for, install, configure, and administer the
IBM Network Authentication Service Version 1.4.
Red Hat Enterprise Linux 3 Reference Guide, Red Hat, Inc., 2003
This book has a chapter dedicated to Kerberos, where it briefly describes
Kerberos, discusses advantages and disadvantages, explains how it works,
and defines terminology.
The book is available for download from http://www.redhat.com.
The following Requests for Comment (RFCs) specify standards for Kerberos
implementations. The RFCs are available in text format at
http://www.ietf.org/rfc.html. They are also available both in text and PDF
format at http://www.faqs.org/rfcs/index.html.
RFC 1508 Generic Security Service Application Program Interface
RFC 1510 The Kerberos Network Authentication Service (V5)
RFC 1964 The Kerberos Version 5 GSS-API Mechanism
Example: B-1 Sample script to change the pseudo-root file system
NEWNFSROOT="/exports"
#
# unexport everything and stop the NFS daemons
exportfs -ua
/etc/nfs.clean
#
# change the pseudo-root to the new location
chnfs -r ${NEWNFSROOT}
#
# verify that the nfsd subsystem was given the new root
lssrc -Ss nfsd | grep ${NEWNFSROOT}
#
nfsd -getnodes
#
# stop and restart NFS
/etc/nfs.clean
#
/etc/rc.nfs
#
# confirm that the daemons are running and check the exports
lssrc -g nfs
#
ps -ef | grep nfsd
#
exportfs
Example: B-2 Sample script to create a KDC server with legacy database
#!/bin/ksh
HOST=$(hostname)
Example: B-3 Sample script to create a full client with a legacy backend
#!/bin/ksh
HOST=$(hostname)
IREALM="REALM2.IBM.COM"
KDCSERV="nfs402.itsc.austin.ibm.com"
DNSDOM="itsc.austin.ibm.com"
NFSDOM="itsc.austin.ibm.com"
#
HOST="${HOST}.${DNSDOM}"
#
SECRET="succ3ss"
#
unset KRB5CCNAME
#
setclock ${KDCSERV}
#
#config.krb5 -C -d $DNSDOM -r $IREALM -c $KDCSERV -s $KDCSERV
mkkrb5clnt -c $KDCSERV -r $IREALM -s $KDCSERV -d $DNSDOM
#
kinit admin/admin
klist
#
/usr/krb5/sbin/kadmin -p admin/admin -w ${SECRET}<< EOF
add_principal -e des-cbc-crc:normal -randkey nfs/${HOST}
EOF
#
/usr/krb5/sbin/kadmin -p admin/admin -w ${SECRET}<< EOF
ktadd nfs/${HOST}
EOF
#
nfshostkey -p nfs/${HOST} -f /etc/krb5/krb5.keytab
nfshostkey -l
#
chnfsdom ${NFSDOM}
chnfsdom
#
chnfsrtd -a ${IREALM} ${NFSDOM}
chnfsrtd
#
mkdir /nfs
#
chnfs -S -B
#
/etc/rc.nfs
Example: B-4 Sample script to create a full client with KDC and LDAP backend
#!/bin/ksh
HOST=$(hostname)
IREALM="REALM1.IBM.COM"
KDCSERV="nfs407.itsc.austin.ibm.com"
LDAPSERV="nfs407.itsc.austin.ibm.com"
DNSDOM="itsc.austin.ibm.com"
NFSDOM="itsc.austin.ibm.com"
#
HOST="${HOST}.${DNSDOM}"
#
SECRET="succ3ss"
ADMINDN="cn=admin"
#
unset KRB5CCNAME
#
#synchronize the system time with the KDC Server
setclock ${KDCSERV}
#
mkkrb5clnt -c $KDCSERV -r $IREALM -s $KDCSERV -d $DNSDOM -l $LDAPSERV -i files \
    -A -K -T
#
mksecldap -c -h $LDAPSERV -a ${ADMINDN} -p ${SECRET}
#
/usr/sbin/ls-secldapclntd
#
echo "file /usr/lib/security/methods.cfg has to be edited"
if grep -p ^KRB5LDAP /usr/lib/security/methods.cfg
then
echo "file /usr/lib/security/methods.cfg contains KRB5LDAP; no changes will occur"
else
echo "file /usr/lib/security/methods.cfg needs to be changed"
cp /usr/lib/security/methods.cfg /usr/lib/security/methods.cfg.save
echo "\nKRB5LDAP:\n\toptions = db=LDAP,auth=KRB5" >> /usr/lib/security/methods.cfg
fi
#
chsec -f /etc/security/user -s default -a registry=KRB5LDAP
chsec -f /etc/security/user -s default -a "SYSTEM=\"KRB5LDAP OR compat\""
#
chuser registry=files root
chuser SYSTEM="compat" root
Example: B-5 Sample script for copying an ACL (with recursive option)
#!/usr/bin/ksh
#
# copy_acl.sh
#
# Copy the ACL for the given source file/directory to other files/directories
#
function usage {
echo "Usage: $scrname [-r] <source> <dest>"
echo " where"
echo " -r indicates a recursive copy"
echo " (copy ACL to all files and directories below and including"
echo " the destination.)"
echo " <source> = the name of the file or directory to copy the ACL from"
echo " <dest> = the name of the file or directory to copy the ACL to"
exit 1
}
if [[ $# -eq 0 ]]
then
usage
fi
#
# Process input parameters
#
#
# Initialize other variables
#
rm -f "${TMP_ACLFILE}"
exit ${NBERR}
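A hypothetical invocation of the script, recursively copying the ACL from one project directory to another (projb is an illustrative name):
# ./copy_acl.sh -r /exports/project/proja /exports/project/projb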
Example: B-7 nfs_pd.script to gather additional information for IBM AIX support
#! /bin/ksh
# ****************************README***********************************
# SPECIAL NOTICES
#
# Information in this document is correct to the best of our
# knowledge at the time of this writing.
# Please use this information with care. IBM will not be
# responsible for damages of any kind resulting from its use.
# The use of this information is the sole responsibility of
# the customer and depends on the customer's ability to eval-
# uate and integrate this information into the customer's
# operational environment.
#
# This script will gather information about your networking environment
# so that IBM may attempt to determine why your computer is experiencing
# problems. You will need to run this script as the root user. If you
# do not have enough space in the /tmp filesystem, you may need to increase
# the size of this filesystem.
# In order to run the script, you will need to give it execute permissions.
# In order to do this, make sure your working directory is the directory
# where the script is located. Then issue -
# chmod +x nfs_pd.script
#
# To run the script from this directory, type -
# ./nfs_pd.script
# *********************************************************************
#
# In addition to this script, you may be asked to supply an iptrace with
# your testcase. If you are asked to do so, the syntax is as follows -
# startsrc -s iptrace -a "-a /tmp/iptrace.bin"
#
# When you wish to stop the trace, issue -
# stopsrc -s iptrace
# You can tar this file along with the rest of the test case and upload it
# following the instructions that the script outlines.
#
# ******************************************************************
clear
echo ""
if [ -d /tmp/ibm ] ; then
echo ""
echo "Unable to create the /tmp/ibm directory because it already exists."
echo ""
echo "Exiting the script."
echo ""
exit
else
mkdir /tmp/ibm
fi
# copy any automounter configuration files
for f in /etc/auto*; do
    if [ -f "$f" ] ; then
        /usr/bin/cp "$f" /tmp/ibm/
    fi
done
if [ -f /etc/exports ] ; then
/usr/bin/cp /etc/exports /tmp/ibm/exports
fi
if [ -f /etc/xtab ] ; then
/usr/bin/cp /etc/xtab /tmp/ibm/xtab
fi
if [ -f /etc/rmtab ] ; then
/usr/bin/cp /etc/rmtab /tmp/ibm/rmtab
fi
if [ -f /etc/netsvc.conf ] ; then
/usr/bin/cp /etc/netsvc.conf /tmp/ibm/netsvc.conf
fi
if [ -f /etc/resolv.conf ] ; then
/usr/bin/cp /etc/resolv.conf /tmp/ibm/resolv.conf
fi
if [ -f /etc/irs.conf ] ; then
/usr/bin/cp /etc/irs.conf /tmp/ibm/irs.conf
fi
if [ -f /etc/rc.nfs ] ; then
/usr/bin/cp /etc/rc.nfs /tmp/ibm/rc.nfs
fi
if [ -f /etc/nfs/hostkey ] ; then
/usr/bin/cp /etc/nfs/hostkey /tmp/ibm/hostkey
fi
if [ -f /etc/nfs/local_domain ] ; then
/usr/bin/cp /etc/nfs/local_domain /tmp/ibm/local_domain
fi
if [ -f /etc/nfs/realm.map ] ; then
/usr/bin/cp /etc/nfs/realm.map /tmp/ibm/realm.map
fi
if [ -f /etc/nfs/princmap ] ; then
/usr/bin/cp /etc/nfs/princmap /tmp/ibm/princmap
fi
if [ -f /etc/nfs/security_default ] ; then
/usr/bin/cp /etc/nfs/security_default /tmp/ibm/security_default
fi
clear
echo ""
echo "The script has completed. "
echo "There should be a "$pmr.$branch.$country".tar.Z file in your "
[realms]
        REALM2.IBM.COM = {
                kdc = nfs402.itsc.austin.ibm.com:88
                admin_server = nfs402.itsc.austin.ibm.com:749
                default_domain = ibm.com
        }
[domain_realm]
        .ibm.com = REALM2.IBM.COM
        nfs402.itsc.austin.ibm.com = REALM2.IBM.COM
[logging]
        kdc = FILE:/var/krb5/log/krb5kdc.log
        admin_server = FILE:/var/krb5/log/kadmin.log
        default = FILE:/var/krb5/log/krb5lib.log
[realms]
        REALM1.IBM.COM = {
                kdc = nfs407.itsc.austin.ibm.com:88
                admin_server = nfs407.itsc.austin.ibm.com:749
                default_domain = itsc.austin.ibm.com
        }
[domain_realm]
        .itsc.austin.ibm.com = REALM1.IBM.COM
        nfs407.itsc.austin.ibm.com = REALM1.IBM.COM
[logging]
        kdc = FILE:/var/krb5/log/krb5kdc.log
        admin_server = FILE:/var/krb5/log/kadmin.log
        default = FILE:/var/krb5/log/krb5lib.log
[domain_realm]
        .kdc.itsc.austin.ibm.com = KDC.ITSC.AUSTIN.IBM.COM
        nfs409.kdc.itsc.austin.ibm.com = KDC.ITSC.AUSTIN.IBM.COM
[logging]
        kdc = FILE:/var/krb5/log/krb5kdc.log
        admin_server = FILE:/var/krb5/log/kadmin.log
        default = FILE:/var/krb5/log/krb5lib.log
Example: B-12 Sample iptrace output showing successful authentication during mount
No. Time Source Destination Protocol Info
167 20.373421 9.3.5.173 9.3.4.71 KRB5
TGS-REQ
Important: Extreme care must be taken before values are changed using the
nfso command. An incorrect change can render the system unusable.
“Network File System (NFS) Overview for System Management” and “TCP/IP
Overview for System Management,” AIX 5L Version 5.3 System User’s Guide:
Communications and Networks, SC23-4909
“Monitoring and Tuning NFS Use,” AIX 5L Version 5.3 Performance Management
Guide, SC23-4905
ACE. (Access Control Entry) One of the entries in an Access Control List (ACL).
ACL. (Access Control List) A list of permission entries that control user and group access to an object.
Authentication. Security method used to confirm the identity of a system user, host, or service.
Authorization. Security method used to control what shared information each system user or client machine can access.
GID. (Group Identifier) Number used to identify a UNIX group.
GSS-API. (Generic Security Services Application Programming Interface) A generic API for doing client-server authentication.
Identification. Security method used to uniquely establish the identity of information system users, hosts, and services.
Integrated login. System login configured to obtain user authentication and identification (optional) from an external source, such as Kerberos/LDAP.
KDC. (Key Distribution Center) The trusted third-party or intermediary in Kerberos that issues all the Kerberos tickets to the clients.
Kerberos realm. Comprises a set of managed hosts that share the same Kerberos database.
NFS. (Network File System) A protocol for sharing files over a computer network.
NFS client. A host that accesses, via a mount, one or more directories from an NFS server.
NFS domain. A name that identifies an operating context for NFS servers and clients. It is implied that systems that share the same NFS domain also share the same user and group registries.
NFS export. The operation that makes a directory on an NFS server available for access by NFS clients.
NFS mount. The operation that maps an exported directory from an NFS server into a client's directory structure, making it appear as if the served directory is local on the client.
NFS server. A host that makes available for NFS access one or more of its directories.
Opaque. Used in conjunction with bytes, data structures, tokens, and so on, to represent a collection of bytes whose internal structure is unknown at the time but will be interpreted later on in a processing sequence.
Pseudo-file system. The portion of an NFS server's name space that is exported to NFS clients.
Pseudo-root. The top level of an NFS server's pseudo-file system. By default, it corresponds to the root directory in the server's file tree, but it can be specified to be a lower-level directory, such as /exports.
RPC. (Remote Procedure Call) A protocol whereby one computer process (the client process) can direct another process (the server process) to run a procedure, appearing as if the client process had run the procedure in its own address space. The client and server processes are typically on two separate computers, although they can both be on the same computer.
RPCSEC_GSS. A security flavor that provides authentication, integrity, and privacy protection for remote procedure calls.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this Redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks”
on page 302. Note that some of the documents referenced here may be available
in softcopy only.
AIX - Migrating NIS Maps into LDAP, TIPS0123
AIX 4.3 Elements of Security, SG24-5962
AIX 5L Version 5.2 Security Supplement, SG24-6066
AIX 5L Differences Guide Version 5.3 Edition, SG24-7463
Exploiting RS/6000 SP Security: Keeping It Safe, SG24-5521
IBM eServer Certification Study Guide - AIX 5L Communications,
SG24-6186
IBM eServer pSeries Sizing and Capacity Planning: A Practical Guide,
SG24-7071
Introduction to the IBM Problem Determination Tools, SG24-6296
RS/6000 SP System Management: Easy, Lean and Mean, GG24-2563
Using LDAP for Directory Integration, SG24-6163
Windows-based Single Signon and the EIM Framework on the IBM
eServer iSeries Server, SG24-6975
Other publications
These publications are also relevant as further information sources:
AIX 5L Version 5.3 Commands Reference, Volume 1, SC23-4888
AIX 5L Version 5.3 Commands Reference, Volume 2, SC23-4889
AIX 5L Version 5.3 Commands Reference, Volume 4, SC23-4891
AIX 5L Version 5.3 Files Reference, SC23-4895
Online resources
These Web sites and URLs are also relevant as further information sources:
NFS Version 4 Open Source Reference Implementation
http://www.citi.umich.edu/projects/nfsv4/linux/
The NFS Version 4 Protocol, Brian Pawlowski, et al
http://www.nluug.nl/events/sane2000/papers/pawlowski.pdf
NFS Version 3 Design and Implementation, Brian Pawlowski, et al
http://citeseer.ist.psu.edu/pawlowski94nfs.html
IETF RFC page
http://www.ietf.org/rfc.html
Other RFC references for NFS version 4
http://www.nfsv4.org/nfsv4techinfo.html
NFS V4 Working Group
http://www.nfsv4.org/
Open Systems Interconnection (OSI) Reference Model
http://ourworld.compuserve.com/homepages/timothydevans/osi.htm