
Distributed File Systems

Objectives
to understand Unix network file sharing

Contents

Installing NFS
How To Get NFS Started
The /etc/exports File
Activating Modifications in the Exports File
NFS And DNS
Configuring The NFS Client
Other NFS Considerations

Practical
to share and mount NFS file systems

Summary

NFS/DFS: An Overview
Unix distributed filesystems are used to
centralise administration of disks
provide transparent file sharing across a network

Three main systems:

NFS: Network File System, developed by Sun Microsystems in 1984
RFS: Remote File Sharing, developed by AT&T
AFS: Andrew File System, developed at Carnegie Mellon University

Unix NFS packages usually include client and server components
A DFS server shares local files on the network
A DFS client mounts shared files locally
a Unix system can be a client, a server, or both, depending on which commands are executed

Can be fast in comparison to many other DFSs


Very little overhead
Simple and stable protocols
Based on RPC (The R family and S family)

General Overview of NFS


Developed by Sun Microsystems in 1984
Independent of operating system, network, and transport protocols
Available on many platforms including:
Linux, Windows, OS/2, MVS, VMS, AIX, HP-UX

Restrictions of NFS
stateless open architecture
Unix filesystem semantics not guaranteed
No access to remote special files (devices, etc.)

Restricted locking
file locking is implemented through a separate lock daemon

The industry standard is currently NFSv3, the default in
RedHat, SuSE, OpenBSD, FreeBSD, Slackware, Solaris, HP-UX, Gentoo

Kernel NFS or UserSpace NFS

Three versions of NFS available


Version 2:
  Supports files up to 4 GB (most commonly 2 GB)
  Requires the NFS server to successfully write data to its disks before the
  write request is considered successful
  Has a limit of 8 KB per read or write request (1 TCP window)

Version 3 is the industry standard:
  Supports extremely large files, up to 2^64 - 1 bytes (8 exabytes)
  Allows the NFS server to report data updates as successful as soon as the data
  is written to the server's cache
  Negotiates the data limit per read or write request between client and
  server to a mutually decided optimal value

Version 4 is coming:
  File locking and mounting are integrated in the NFS daemon and operate
  on a single, well-known TCP port, making network security easier
  Support for the bundling of requests from each client provides more
  efficient processing by the NFS server
  File locking is mandatory, whereas before it was optional
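
In practice the client chooses which protocol version to speak with the nfsvers mount option (described later in this material); a minimal sketch, with the server address and share reused from the client examples further on:

# mount -t nfs -o nfsvers=3 192.168.0.10:/home /mnt/nethome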

Important NFS Daemons


portmap - the primary daemon on which all RPC services rely
  Manages connections for applications that use the RPC specification
  Listens on TCP port 111 for the initial connection
  Negotiates a range of TCP ports, usually above port 1024, for further communication
  You need to run portmap on both the NFS server and the client.

nfs (rpc.nfsd)
  Starts the RPC processes needed to serve shared NFS file systems
  Listens on TCP or UDP port 2049 (the port can vary)
  The nfs daemon needs to run on the NFS server only.

nfslock (rpc.lockd / rpc.statd)
  Allows NFS clients to lock files on the server via RPC processes
  Uses a negotiated UDP/TCP port
  The nfslock daemon needs to run on both the NFS server and the client.

netfs
  Allows RPC processes run on NFS clients to mount NFS filesystems from the server
  The netfs service needs to run on the NFS client only.
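
A quick sanity check that these daemons are actually running (the exact process names can vary between distributions, so this is only a sketch) is to look at the process list:

# ps ax | egrep 'portmap|nfsd|rpc.mountd|rpc.statd|lockd'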

The NFS Protocol Stack (aka VFS)

[Diagram: the NFS protocol stack. On the client side biod, statd and lockd sit on top
of the NFS and MOUNT protocols; on the server side nfsd, mountd, statd and lockd do
the same. Both sides pass through XDR and RPC down to the transport, network, link
and physical layers.]

RPC depends on PORTMAP, which runs on both client and server.

Installing kernelNFS, Linux


Check if NFS is installed with rpm

suse93:~ # rpm -qa | grep nfs
nfs-utils-1.0.7-3
yast2-nfs-client-2.11.7-3
yast2-nfs-server-2.11.5-3

Check if the RPC portmap package is installed with rpm

# rpm -qa | grep portmap
portmap-5beta-733

If not, install them; always begin with portmap

# rpm -ivh http://ftp.sunet.se/pub/os/Linux/distributions/suse/suse/i386/9.3/suse/i586/portmap-5beta-733.i586.rpm
# rpm -ivh http://ftp.sunet.se/pub/os/Linux/distributions/suse/suse/i386/9.3/suse/i586/nfs-utils-1.0.7-3.i586.rpm

If you are not running SuSE


Install: portmap, nfs-utils, nfs-server (NFS server support should be implemented in the kernel)
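
On another RPM-based distribution the same check and install might look roughly like this (the package file names are illustrative, not taken from a specific release):

# rpm -qa | egrep 'nfs|portmap'
# rpm -ivh portmap-*.rpm nfs-utils-*.rpm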

How To Get kernelNFS server Started


Activate the 3 necessary services for NFS at boot:
NFS server daemon
NFS file locking
RPC portmap

Start the PORTMAPPER and the NFS server

This starts all dependent services
Whatever you do, always start PORTMAP first

Check that the services for NFS are running with rpcinfo

# insserv portmap
# insserv nfsserver
# rcportmap start
# rcnfsserver start

# rpcinfo -p localhost
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100024    1   udp   1034  status
    100021    1   udp   1034  nlockmgr
    100021    4   udp   1034  nlockmgr
    100024    1   tcp   1029  status
    100021    1   tcp   1029  nlockmgr
    100021    3   tcp   1029  nlockmgr
    100021    4   tcp   1029  nlockmgr
    100005    1   udp    835  mountd
    100005    1   tcp    838  mountd
    100005    2   udp    835  mountd
    100005    2   tcp    838  mountd
    100005    3   udp    835  mountd
    100005    3   tcp    838  mountd

In some Unixes you need to separately start

/etc/init.d/portmap start    or shortly portmap(d)
/etc/init.d/nfs start        or shortly nfs(d)
/etc/init.d/nfslock start    or shortly nfslock(d)

How To Get NFS client Started


Activate the 2 necessary services for NFS at boot:
NFS file locking (nfslock)
RPC portmap

# insserv portmap

Start the portmapper with rc

# rcportmap start

Check that the services for NFS are running with rpcinfo

# rpcinfo -p localhost
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper

Note! There can be more services running depending on your system setup

In some Unixes you need to separately start

/etc/init.d/netfs start      or shortly netfs(d)
/etc/init.d/nfslock start    or shortly nfslock(d)

Always start portmap first, then netfs, and last nfslock

NFS And DNS


Check REVERSE resolution

# host 192.168.0.1
1.0.168.192.in-addr.arpa domain name pointer a01.my-site.com.

Check FORWARD resolution

# host a01.my-site.com
a01.my-site.com has address 192.168.0.1

Forward and reverse resolution must match

If not, fix your DNS zone files (review netadmin chapter 3)

Synchronized /etc/hosts files on server and client will also do


Some common error messages

Lookup:          "forward lookup doesn't exist" - host resolution error
Timeout:         "RPC: Timeout" - firewall port setup
Not registered:  "RPC: Program not registered", "failed: server is down" - portmap is not running
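
If you rely on /etc/hosts instead of DNS, keep identical entries on the server and every client; a minimal sketch reusing a01 from above (a02 and its pairing with the 192.168.0.10 server address used in later examples are assumptions for the sketch):

# /etc/hosts - identical on the NFS server and the client
192.168.0.1     a01.my-site.com     a01
192.168.0.10    a02.my-site.com     a02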

The NFS Server sharing directories


The exportfs command is used to share directories on the network

any directory can be exported


subdirectories of an exported directory may not be exported unless they are on a different disk
parents of an exported directory may not be exported unless they are on a different disk
only local filesystems can be exported

Some exportfs -o sharing options

ro               read only access
rw               read and write access
sync             write when requested
wdelay           wait for sync
hide             don't show subdirectories that are exported by another export
no_all_squash    remote uids & gids stay equal to those on the client
root_squash      remote root uid becomes anonymous on the client
no_root_squash   remote root equals the local root user
squash_uids      the specified remote uids & gids are treated as the identity nobody
-v               verbose mode

We share the home directory:

# exportfs -v -o rw,squash_uids=0-499,squash_gids=0-499 rosies:/home
exporting rosies:/home

rw = read/write (default)
squash_uids, squash_gids = the specified user and group id ranges are squashed to the user nobody
the directory is shared to the host rosies only

More on Shared Directories


If someone is using the shared directory, you will not be able to unshare it.
Check whether someone is accessing RPC or using a share.
The first highlighted line shows that someone is using RPC against our server;
the second shows that someone has accessed /home.

# showmount -a localhost
All mount points on server:
*,192.168.1.0/24:/home
*:/home
*:/install/suse9.3
rosies:*
rosies:*,192.168.1.0/24

Unshare a share in -v verbose mode

# exportfs -v -u rosies:/home
unexporting rosies:/home

Check what the server is sharing

# exportfs -v
/home                                  192.168.1.0/24(rw,wdelay,root_squash)
/exports/network-install/SuSE/9.3      <world>(ro,wdelay,root_squash)
/install/suse9.3                       <world>(ro,wdelay,root_squash)

The /etc/exports File, static shares


Sample exports file

# cat /etc/exports
/data/files      *(ro,sync)
/home            192.168.0.0/24(rw,sync)
/data/test       *.my-site.com(rw,sync)
/data/database   192.168.0.203/32(rw,sync)

Some options in the exports file (same as exportfs)

ro               read only access
rw               read and write access
sync             write when requested
wdelay           wait for sync
hide             don't show subdirectories that are exported by another export
no_all_squash    remote uids & gids stay equal to those on the client
root_squash      remote root uid becomes anonymous on the client
no_root_squash   remote root equals the local root user
squash_uids      the specified remote uids & gids are treated as the identity nobody

Squash changes remote identity to selectable local identity


Linux uses a different format in /etc/exports than BSD systems do
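
After editing /etc/exports the entries still have to be activated (covered in more detail below); a minimal check sequence:

# exportfs -r              # re-export everything listed in /etc/exports
# showmount -e localhost   # list what the server now exports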

The /etc/exports File, Squashing


Sample exports file using map_static

# cat /etc/exports
/data/files      *(ro,sync)
/home            192.168.0.0/24(map_static=/etc/squash.map,rw,sync)
/data/test       *.my-site.com(rw,sync)
/data/database   192.168.0.203/32(rw,sync)

The map_static file = /etc/squash.map

# /etc/squash.map
# remote      local    comment
uid  0-100    -        # squash to user nobody
gid  0-100    -        # squash to group nobody
uid  1-200    1000     # map to uid 1000 - 1100
gid  1-200    500      # map to gid 500 - 600
uid  0-100    2001     # map individual user to uid 2001
gid  0-100    2001     # map individual user to gid 2001

Squash changes remote identity to selectable local identity

Activating Modifications in Exports File


Re-reading all entries in the /etc/exports file
When no directories have been exported to NFS yet, the "exportfs -a" command is used:

# exportfs -a

After adding share(s) to the /etc/exports file

When adding a share you can use the "exportfs -r" command to re-export only the new entries:

# exportfs -r

Deleting, moving or modifying a share

In this case it is best to temporarily unexport the NFS directories using the
"exportfs -ua" command, followed by the "exportfs -a" command:

# exportfs -ua
# exportfs -a

Temporarily export /usr/src to hosts on net 192.168.0.0

# exportfs -o rw 192.168.0.0/24:/usr/src

Exercise - Sharing Directories


Write down the commands to do the following:

With one command share /usr/share read-only for all clients in your net
#
Permanently share /etc read-only for rosies and tokyo and read/write for seoul
#
List the file containing the permanent shares
#
Two commands showing what your host has shared
#
#
Check who has mounted your shared directories
#
Check who has mounted directories on rosies
#
Check the server nfs status
#
From the server, with one command check that the nfs-client has the portmapper running
#

The nfsstat Command


Server statistics

# nfsstat -s

Client statistics

# nfsstat -c

A large table arrives after the command is issued, e.g. for the server:

Server nfs v3:
null         getattr      setattr      lookup       access       readlink
0     0%     15   31%     0     0%     0     0%     0     0%     0     0%
read         write        create       mkdir        symlink      mknod
0     0%     0     0%     0     0%     0     0%     0     0%     0     0%
remove       rmdir        rename       link         readdir      readdirplus
0     0%     0     0%     0     0%     0     0%     0     0%     0     0%
fsstat       fsinfo       pathconf     commit
17   35%     16   33%     0     0%     0     0%

Server number of file handles

Usage information on the server's file handle cache, including the total
number of lookups, and the number of hits and misses.

# nfsstat -o fh
Server file handle cache:
lookup   anon     ncachedir  ncachedir  stale
0        0        0          0          0

The server has a limited number of file handles that can be tuned

Error Thresholds For The "nfsstat" Command

The NFS Client side


Ensure portmap is running
Clients only need portmap to be running:
# rpcinfo -p localhost

Also check that the server is up:
# rpcinfo -p 192.168.0.10

If not, start portmap:
# rcportmap start

Show exported shares on a remote server:
# showmount -e 192.168.0.10
Export list for 192.168.0.10:
/home                              *
/exports/network-install/SuSE/9.3  *

Temporarily mount nfs shares on the client with default options

# mkdir /mnt/nethome
# mount -t nfs 192.168.0.10:/home /mnt/nethome

umount temporarily mounted nfs shares on the client

# umount /mnt/nethome

To see what is mounted on the client side

Using the df command to show disk usage:

# df -t nfs
Filesystem                     1k-blocks      Used Available Use% Mounted on
192.168.0.10:/install/suse9.3   79366688  58235488  21131200  74% /mnt/a

The mount command is the most detailed about mount options:

# mount | grep nfs
192.168.0.10:/install/suse9.3 on /mnt/a type nfs (rw,addr=192.168.0.10)

showmount shows all exported shares on a remote server plus all mounts from clients:

# showmount -a 192.168.1.60
All mount points on 192.168.1.60:
*,192.168.1.0/24:/home
*:/home
*:/install/suse9.3
192.168.0.2:*

Client nfsstat will show statistics:

# nfsstat -c
Client rpc stats:
calls      retrans    authrefrsh
129        0          0

mount -o <options> -t nfs

NFS clients access network shared directories using the mount command

NFS mount -o options:
rw / ro              read-write (default) or read-only
hard                 retry the mount operation until the server responds (default)
soft                 try the mount once and allow it to time out
retrans & timeout    retransmission and timeout parameters for soft-mounted operations
bg                   after the first mount failure, retry the mount in the background
intr                 allow operations on the filesystem to be interrupted with kill signals
nfsvers=n            the version of NFS the mount command should attempt to use
Use /etc/fstab to make NFS mounts permanent

a02:/tmp    /mnt/nethome    nfs    soft,intr,nfsvers=3

Manually mounting a02:/tmp as /mnt/nethome on the local host:

# hostname
a01
# mount -o rw,soft -t nfs a02:/tmp /mnt/nethome

Mount nfs-shares at boot in client


Make entries in /etc/fstab

#/etc/fstab
#Directory                  MountPoint   Type   Options           Dump   FSCK
192.168.0.10:/data/files    /mnt/nfs     nfs    soft,nfsvers=3    0      0

Some /etc/fstab mount options


auto            mount this filesystem when "mount -a" is used
defaults        (rw, suid, dev, exec, auto, nouser, async)
user            allow regular users to mount/umount
sync            use synchronous I/O, the safest
soft            skip the mount if the server is not responding
hard            retry until the server responds
retry=minutes   how long to keep retrying the mount
bg/fg           retry mounting in the background or foreground

Mount all unmounted


If you made changes to fstab on a live system, you can mount all unmounted filesystems with:

# mount -a
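
Since mount consults /etc/fstab, a single entry can also be mounted by naming just its mount point; a small sketch using the /mnt/nfs entry above:

# mount /mnt/nfs        # mount picks up the matching /etc/fstab entry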

Possible NFS Mount options

Exercise - Using mount with NFS


What command will mount /usr/share from mash4077 on the local mount point /usr/share?
#

How do I check what filesystems are mounted locally?
#

Make a static mount on a01 of the exported a02:/tmp as /mnt/nethome in /etc/fstab:
#

Manually mount the exported a02:/usr/share as read-only on a01:
#

How can I show what is NFS-exported on the server?
#

NFS security
NFS is inherently insecure
NFS can be run in encrypted mode which encrypts data over the network
AFS more appropriate for security conscious sites

User IDs must be co-ordinated across all platforms


UIDs, not user names, are used to control file access (use LDAP or NIS)
mismatched user IDs cause access and security problems

Fortunately root access is denied by default


over NFS, root is mapped to the user nobody

# mount | grep "/share"
mail:/share on /share
# id
uid=318(hawkeye) gid=318(hawkeye)
# touch /share/hawkeye
# ssh mail ls -l /share/hawkeye
-rwxr-xr-x 2 soonlee sonlee 0 Jan 11 11:21 /share/hawkeye
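
A quick way to spot such a UID mismatch is to compare the numeric IDs on client and server; a small sketch reusing the hosts above (getent also works when the accounts come from NIS or LDAP):

# getent passwd hawkeye            # on the client
# ssh mail getent passwd hawkeye   # on the server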

NFS Hanging
Run NFS on a reliable network
Avoid having NFS servers that NFS mount each other's
filesystems or directories
Always use the sync option whenever possible
Mission critical computers shouldn't rely on an NFS server
to operate
Don't have NFS shares in the search path

NFS Hanging continued


File Locking
Known issues exist, test your applications carefully

Nesting Exports
NFS doesn't allow you to export directories that are subdirectories of directories
that have already been exported unless they are on different partitions.

Limiting "root" Access


no_root_squash

Restricting Access to the NFS server


You can add a user named "nfsuser" on the NFS client and squash access for all other
users on that client to this user

Use nfsV3 if possible

NFS Firewall considerations


NFS uses many ports

RPC (portmap) uses TCP/UDP port 111
The NFS server itself uses port 2049
MOUNTD listens on negotiated UDP/TCP ports
NLOCKMGR listens on negotiated UDP/TCP ports
Expect that almost any TCP/UDP port over 1023 can be allocated for NFS (see the sketch below)
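
As an illustration only, the two fixed ports could be opened with iptables like this; the dynamically negotiated mountd and nlockmgr ports still have to be pinned to fixed ports or handled by a stateful rule (this sketch assumes iptables is the firewall in use):

# iptables -A INPUT -p tcp --dport 111  -j ACCEPT    # portmap
# iptables -A INPUT -p udp --dport 111  -j ACCEPT
# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT    # nfsd
# iptables -A INPUT -p udp --dport 2049 -j ACCEPT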

NFS needs a STATEFUL firewall

A stateful firewall is able to deal with traffic that originates from inside the
network and block traffic from outside

SPI can demolish NFS

Stateful packet inspection on cheaper routers/firewalls can misinterpret NFS
traffic as DoS attacks and start dropping packets

NFSSHELL
This is a hacker tool; it can attack some NFS setups
Written by Leendert van Doorn

Use VPN and IPSEC tunnels

With complex services like NFS, IPSEC or some kind of VPN should be
considered when used on untrusted networks.

Common NFS error messages

NFS Automounter for clients or servers


Automatically mount directories from a server when needed
To activate automount manually and at boot:

# rcautofs start
# insserv autofs

Management of shares is centralized on the server

Increases security and reduces lockup problems compared with static shares

The main configuration sits in /etc/auto.master

The simple format is:

# MOUNT-KEY   MOUNT-OPTIONS   LOCATION
/doc          -ro             server:/usr/doc
/-                            /etc/auto.direct
/home                         /etc/auto.home

MOUNT-KEY is the local mountpoint, here /doc, /- (from root) and /home
MOUNT-OPTIONS are the standard mount options previously described, here -ro
LOCATION can be a direct share on a server (server:/usr/doc), a direct map file
(/etc/auto.direct) or an indirect map file (/etc/auto.home).

Common configuration: /etc/auto.misc is for floppy/cd/dvd (see the sketch below).
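
A minimal /etc/auto.misc sketch; the device paths and filesystem types are assumptions about typical hardware:

# /etc/auto.misc
cd        -fstype=iso9660,ro,nosuid,nodev    :/dev/cdrom
floppy    -fstype=auto                       :/dev/fd0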


Centralized administration needs /etc/nsswitch.conf to be set:

automount:   files nis ldap

Direct And Indirect Map Files structure


File /etc/auto.master sets the mandatory automount config
map files always try to mount under their auto.master mount key

Direct map file /etc/auto.direct

/data/sales      -rw        server:/disk1/data/sales
/sql/database    -ro,soft   snail:/var/mysql/database

Direct maps are used to define NFS filesystems that are mounted from different
servers or that don't all start with the same prefix.

Indirect map file /etc/auto.home

peter    server:/home/peter
kalle    akvarius:/home/bob
walker   iss:/home/bunny

Indirect maps define directories that can be mounted under the same mount
point, like users' home directories.
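
Once autofs is running with these maps, simply accessing a key triggers the mount; a small sketch:

# ls /home/peter        # first access makes autofs mount server:/home/peter
# mount | grep peter    # the NFS mount now appears in the mount table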

Wildcards In Map Files


The asterisk (*), which means all
The ampersand (&), which instructs the automounter to substitute the value of the key
for the & character.

Using the ampersand wildcard in /etc/auto.home

peter    server:/home/&

The key is peter, so the ampersand wildcard is interpreted to mean peter too. This
means you'll be mounting the server:/home/peter directory.

Using the asterisk wildcard in /etc/auto.home

*    bigboy:/home/&

In this example the key is *, meaning that the automounter will attempt a mount on
any attempt to enter the /home directory. But what's the value of the ampersand?
It is assigned the value of the key that triggered the access to the /etc/auto.home
file. If the access was for /home/peter, the ampersand is interpreted to mean peter,
and bigboy:/home/peter is mounted. If the access was for /home/kalle, then
bigboy:/home/kalle would be mounted.

Other DFS Systems


RFS: Remote File Sharing

developed by AT&T to address problems with NFS


stateful system supporting Unix filesystem semantics
uses the same SVR4 commands as NFS; just use rfs as the file type
standard in SVR4 but not found in many other systems

AFS: Andrew Filesystem

developed as a research project at Carnegie-Mellon University


now distributed by a third party (Transarc Corporation)
available for most Unix platforms and PCs running DOS, OS/2, Windows
uses its own set of commands
remote systems are accessed through a common interface (the /afs directory)
supports local data caching and enhanced security using Kerberos
fast gaining popularity in the Unix community

Summary
Unix supports file sharing across a network
NFS is the most popular system and allows Unix to share files with other OSes
Servers share directories across the network using the exportfs command
Permanent shares go in /etc/exports; permanent client mounts in /etc/fstab
Clients use mount to access shared directories
Use mount and exportfs to look at distributed files/catalogs
