NFS: Sharing Directory Hierarchies - Using Clients and Setting Up Servers - A practical guide to Fedora and Red Hat Enterprise Linux, 7th Edition (2014)

Part IV: Using Clients and Setting Up Servers

Chapter 22. NFS: Sharing Directory Hierarchies

In This Chapter

Running an NFS Client

JumpStart I: Mounting a Remote Directory Hierarchy

Improving Performance

Setting Up an NFS Server

JumpStart II: Configuring an NFS Server Using system-config-nfs (Fedora)

Manually Exporting a Directory Hierarchy


automount: Mounts Directory Hierarchies on Demand


After reading this chapter you should be able to:

Describe the use, history, and architecture of NFS

Mount remote NFS shares

Configure an NFS server to share directories with specific clients

Troubleshoot mount failures

Set up automount to mount directories on demand

The NFS (Network Filesystem) protocol, a UNIX de facto standard developed by Sun Microsystems, allows a server to share selected local directory hierarchies with client systems on a heterogeneous network. NFS runs on UNIX, DOS, Windows, VMS, Linux, and more. Files on the remote computer (the server) appear as if they are present on the local system (the client). Most of the time, the physical location of a file is irrelevant to an NFS user; all standard Linux utilities work with NFS remote files the same way as they operate with local files.

NFS reduces storage needs and system administration workload. As an example, each system in a company traditionally holds its own copy of an application program. To upgrade the program, the administrator needs to upgrade it on each system. NFS allows you to store a copy of a program on a single system and give other users access to it over the network. This scenario minimizes storage requirements by reducing the number of locations that need to maintain the same data. In addition to boosting efficiency, NFS gives users on the network access to the same data (not just application programs), thereby improving data consistency and reliability. By consolidating data, it reduces administrative overhead and provides a convenience to users.

NFS has been pejoratively referred to as the Nightmare Filesystem, but NFSv4 has fixed many of the shortcomings of NFSv3. NFSv4

• Performs well over high-latency, low-bandwidth Internet connections.

• Uses a single port, making it much easier to set up with firewalls and NAT than NFSv3.

• Has strong, built-in security.

• Has lock and mount protocols integrated into the NFS protocol.

• Handles client and server crashes better than NFSv3 because it provides stateful operations.

• Improves performance by making extensive use of caching on the client.

• Supports ACLs (Access Control Lists).

AUTH_SYS authentication

NFSv4 supports AUTH_SYS authentication, the standard method of authentication under NFSv3. Because AUTH_SYS uses UIDs and GIDs to authenticate users, it is easy to crack. NFSv4 uses AUTH_SYS security by default. See the sec option (page 816) for more information.

RPCSEC_GSS authentication

NFSv4 also supports the newer RPCSEC_GSS (see the rpc.gssd man page), which provides secure authentication as well as verification and encryption. RPCSEC_GSS is based on GSS-API (Generic Security Services Application Programming Interface) and can use Kerberos 5 (best for enterprise or LAN use) and LIPKEY (based on SPKM-3; best for Internet use), among other authentication protocols.

This chapter describes setting up an NFSv4 client and server using AUTH_SYS authentication, the Fedora/RHEL default mode.

Introduction to NFS

Figure 22-1 shows the flow of data in a typical NFS client/server setup. An NFS directory hierarchy appears to users and application programs as just another directory hierarchy. By looking at it, you cannot tell whether a given directory holds a remotely mounted NFS directory hierarchy or a local filesystem. The NFS server translates commands from the client into operations on the server’s filesystem.


Figure 22-1 Flow of data in a typical NFS client/server setup

Diskless systems

In many computer facilities, user files are stored on a central fileserver equipped with many large-capacity disk drives and devices that quickly and easily make backup copies of the data. A diskless system boots from a fileserver (netboots—discussed next), a DVD, or a USB flash drive and loads system software from a fileserver. The Linux Terminal Server Project (LTSP) Web site says it all: “Linux makes a great platform for deploying diskless workstations that boot from a network server. The LTSP is all about running thin client computers in a Linux environment.” Because a diskless workstation does not require a lot of computing power, you can give older, retired computers a second life by using them as diskless systems.


You can netboot (page 1262) systems that are appropriately set up. Fedora/RHEL includes TFTP (Trivial File Transfer Protocol; tftp-server package) that you can use to set up a PXE (Preboot Execution Environment) boot server for Intel systems. Non-Intel architectures have historically included netboot capabilities, which Fedora/RHEL also supports. In addition, you can build the Linux kernel so it mounts root (/) using NFS. Given the many ways to set up a system, the one you choose depends on what you want to do. See the Remote-Boot mini-HOWTO for more information.

Dataless systems

Another type of Linux system is a dataless system, in which the client has a disk but stores no user data (only Linux and the applications are kept on the disk). Setting up this type of system is a matter of choosing which directory hierarchies are mounted remotely.

df: shows where directory hierarchies are mounted

The df utility displays a list of the directory hierarchies available on the system, along with the amount of disk space, free and used, on each. The –h (human) option makes the output more intelligible. Device names in the left column that are prepended with hostname: specify directory hierarchies that are available through NFS.

[zach@guava ~]$ cd; pwd
/home/zach

[zach@guava ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora-root 37G 3.8G 32G 11% /
/dev/sda1 477M 99M 354M 22% /boot
plum:/home/zach 37G 3.8G 32G 11% /home/zach

When Zach logs in on guava, his home directory, /home/zach, is on the remote system plum. Using NFS, the /home/zach directory hierarchy on plum is mounted on guava as /home/zach. Any files in the zach directory on guava are hidden while the zach directory from plum is mounted; they reappear when the directory is unmounted. The / filesystem is local to guava.

The df –T option adds a Type column to the display. The following output shows the directory mounted at /home/zach is an nfs4 directory hierarchy; / and /boot are local ext4 filesystems.

$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/fedora-root ext4 37G 3.8G 32G 11% /
/dev/sda1 ext4 477M 99M 354M 22% /boot
plum:/home/zach nfs4 37G 3.8G 32G 11% /home/zach

The df –t option displays a single filesystem type. With an argument of nfs4 it displays NFS directory hierarchies:

$ df -ht nfs4
Filesystem Size Used Avail Use% Mounted on
plum:/home/zach 37G 3.8G 32G 11% /home/zach


Security

By default, an NFS server uses AUTH_SYS (page 803) authentication, which is based on the trusted-host paradigm (page 302) and has all the security shortcomings that plague other services based on this paradigm. In addition, in this mode NFS file transfers are not encrypted. Because of these issues, you should implement NFS on a single LAN segment only, where you can be (reasonably) sure systems on the LAN segment are what they claim to be. Make sure a firewall blocks NFS traffic from outside the LAN and never use NFS over the Internet. To improve security, make sure UIDs and GIDs are the same on the server and clients (page 817).

Alternatively, you can use RPCSEC_GSS (page 803) authentication, one of the new security features available under NFSv4. RPCSEC_GSS provides authentication, verification, and encryption. This chapter does not cover setting up an NFS server that uses RPCSEC_GSS authentication.

More Information


NFSv4 secure installation tutorial:



PXE boot server tutorial (TFTP):


man pages: autofs (sections 5 and 8), automount, auto.master, exportfs, exports, mountstats, nfs (especially see SECURITY CONSIDERATIONS), rpc.idmapd, rpc.gssd, rpc.mountd, rpc.nfsd, and showmount




NFS Illustrated by Callaghan, Addison-Wesley (January 2000)

Running an NFS Client

This section describes how to set up an NFS client, mount remote directory hierarchies, and improve NFS performance. See page 819 for a list of error messages and troubleshooting tips.


Prerequisites

Install the following packages:

nfs-utils (installed by default)

rpcbind (installed by default; not needed with NFSv4)

Check rpcbind is running

A new Fedora/RHEL system boots with rpcbind installed and running. For NFSv3 only (not NFSv4), rpcbind must be running before you can start the nfs service. Use the systemctl status command to make sure rpcbind is running.

# systemctl status rpcbind.service

JumpStart I: Mounting a Remote Directory Hierarchy

To set up an NFS client, mount the remote directory hierarchy the same way you mount a local directory hierarchy (page 520).

The following examples show two ways to mount a remote directory hierarchy, assuming dog is on the same network as the local system and is sharing /home and /export with the local system. The /export directory on dog holds two directory hierarchies you want to mount: /export/progs and /export/oracle. The example mounts dog’s /home directory on /dog.home on the local system, /export/progs on /apps, and /export/oracle on /oracle.

First run mkdir on the local (client) system to create the directories that are the mount points for the remote directory hierarchies:

# mkdir /dog.home /apps /oracle

You can mount any directory hierarchy from an exported directory hierarchy. In this example, dog exports /export and the local system mounts /export/progs and /export/oracle. The following commands manually mount the directory hierarchies one time:

# mount dog:/home /dog.home
# mount -o ro,nosuid dog:/export/progs /apps
# mount -o ro dog:/export/oracle /oracle

By default, directory hierarchies are mounted read-write, assuming the NFS server is exporting them with read-write permissions. The first of the preceding commands mounts the /home directory hierarchy from dog on the local directory /dog.home. The second and third commands use the –o ro option to force a readonly mount. The second command adds the nosuid option, which forces setuid (page 196) executables in the mounted directory hierarchy to run with regular permissions on the local system.

nosuid option

If a user has the ability to run a setuid program, that user has the power of a user with root privileges. This ability should be limited. Unless you know a user will need to run a program with setuid permissions from a mounted directory hierarchy, always mount a directory hierarchy with the nosuid option. For example, you would need to mount a directory hierarchy with setuid privileges when the root partition of a diskless workstation is mounted using NFS.

nodev option

Mounting a device file creates another potential security hole. Although the best policy is not to mount untrustworthy directory hierarchies, it is not always possible to implement this policy. Unless a user needs to use a device on a mounted directory hierarchy, mount directory hierarchies with the nodev option, which prevents character and block special files (page 518) on the mounted directory hierarchy from being used as devices.
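Combining these options, a more cautious version of the earlier /apps mount might look like the following sketch (dog, /export/progs, and /apps are the example names used in this section):

# mount -o ro,nosuid,nodev dog:/export/progs /apps

With this command, files on the share cannot be written, setuid bits are ignored, and device files on the share cannot be used as devices on the local system.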

fstab file

If you mount directory hierarchies frequently, you can add entries for the directory hierarchies to the /etc/fstab file (page 811). (Alternately, you can use automount; see page 821.) The following /etc/fstab entries mount the same directory hierarchies as in the previous example at the same time that the system mounts the local filesystems:

$ cat /etc/fstab
dog:/home /dog.home nfs4 rw 0 0
dog:/export/progs /apps nfs4 ro,nosuid 0 0
dog:/export/oracle /oracle nfs4 ro 0 0

A directory hierarchy mounted using NFS is always of type nfs4 on the local system, regardless of its type on the remote system. Typically you do not run fsck on or back up an NFS directory hierarchy. The entries in the third, fifth, and sixth columns of fstab are usually nfs4 (filesystem type), 0 (do not back up this directory hierarchy with dump [page 606]), and 0 (do not run fsck [page 525] on this directory hierarchy). The options for mounting an NFS directory hierarchy differ from those for mounting an ext4 or other type of filesystem. See the section on mount (next) for details.

Unmounting directory hierarchies

Use umount to unmount a remote directory hierarchy the same way you unmount a local filesystem (page 523).
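For example, the following commands unmount the directory hierarchies mounted in JumpStart I (the mount points are the example names used there). The –l (lazy) option detaches a hierarchy that is in use immediately and completes the unmount once the hierarchy is no longer busy:

# umount /dog.home /apps
# umount -l /oracle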

mount: Mounts a Directory Hierarchy

The mount utility (page 520) associates a directory hierarchy with a mount point (a directory). You can use mount to mount an NFS (remote) directory hierarchy. This section describes some mount options. It lists default options first, followed by nondefault options (enclosed in parentheses). You can use these options on the command line or set them in /etc/fstab (page 811). For a complete list of options, refer to the mount and nfs man pages.

Following are two examples of mounting a remote directory hierarchy. Both mount the /home/public directory from the system plum on the /plum.public mount point on the local system. When called without arguments, mount displays a list of mounted filesystems, including the options the filesystem is mounted with.

The first mount command mounts the remote directory without specifying options. The output from the second mount command, which lists the mounted filesystems, is sent through a pipeline to grep, which displays the single (very long) logical line that provides information about the remotely mounted directory. All the options, starting with rw (read-write), are defaults or are specified by mount.

# mount plum:/home/public /plum.public
# mount | grep plum
plum:/home/public on /plum.public type nfs4
# umount /plum.public

The next mount command mounts the remote directory specifying the noac option (–o noac; next). The output from the second mount command shows the addition of the noac option. When you specify noac, mount adds the acregmin, acregmax, acdirmin, and acdirmax options (all next) and sets them to 0.

# mount -o noac plum:/home/public /plum.public
# mount | grep plum
plum:/home/public on /plum.public type nfs4 (rw,relatime,sync,vers=4.0,rsize=131072,wsize=131072,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,...)

Attribute Caching

A file’s inode (page 515) stores file attributes that provide information about a file, such as file modification time, size, links, and owner. File attributes do not include the data stored in a file. Typically file attributes do not change very often for an ordinary file; they change even less often for a directory file. Even the size attribute does not change with every write instruction: When a client is writing to an NFS-mounted file, several write instructions might be given before the data is transferred to the server. In addition, many file accesses, such as that performed by ls, are readonly operations and, therefore, do not change the file’s attributes or its contents. Thus a client can cache attributes and avoid costly network reads.

The kernel uses the modification time of the file to determine when its cache is out-of-date. If the time the attribute cache was saved is later than the modification time of the file itself, the data in the cache is current. The client must periodically refresh the attribute cache of an NFS-mounted file to determine whether another process has modified the file. This period is specified as a minimum and maximum number of seconds for ordinary and directory files. Following is a list of options that affect attribute caching:

ac (noac)

(attribute cache) Permits attribute caching. The noac option disables attribute caching. Although noac slows the server, it avoids stale attributes when two NFS clients actively write to a common directory hierarchy. See the example in the preceding section that shows that when you specify noac, mount sets each of the following four options to 0, which in effect specifies that attributes will not be cached. Default is ac.


acdirmax=n

(attribute cache directory file maximum) The n is the number of seconds, at a maximum, that NFS waits before refreshing directory file attributes. Default is 60 seconds.


acdirmin=n

(attribute cache directory file minimum) The n is the number of seconds, at a minimum, that NFS waits before refreshing directory file attributes. Default is 30 seconds.


acregmax=n

(attribute cache regular file maximum) The n is the number of seconds, at a maximum, that NFS waits before refreshing regular file attributes. Default is 60 seconds.


acregmin=n

(attribute cache regular file minimum) The n is the number of seconds, at a minimum, that NFS waits before refreshing regular file attributes. Default is 3 seconds.


actimeo=n

(attribute cache timeout) Sets acregmin, acregmax, acdirmin, and acdirmax to n seconds. Without this option, each individual option takes on its assigned or default value.
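As a sketch, the following command relaxes attribute caching for a share that rarely changes, caching all attributes for two minutes (plum and /plum.public are the example names used earlier in this chapter):

# mount -o actimeo=120 plum:/home/public /plum.public

Longer caching reduces network traffic at the cost of the client noticing changes made on other systems more slowly.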

Error Handling

The following options control what NFS does when the server does not respond or when an I/O error occurs. To allow for a mount point located on a mounted device, a missing mount point is treated as a timeout.

fg (bg)

(foreground) Exits with an error status if a foreground NFS mount fails. The bg (background) option retries failed NFS mount attempts in the background. Default is fg.

hard (soft)

With the hard option, NFS retries indefinitely when an NFS request times out. With the soft option, NFS retries the connection retrans times and then reports an I/O error to the calling program. In general, it is not advisable to use soft. As the mount man page says of soft, “Usually it just causes lots of trouble.” For more information refer to “Improving Performance” on page 810. Default is hard.


retrans=n

(retransmission value) NFS generates a server not responding message after n timeouts. NFS continues trying after n timeouts if hard is set and fails if soft is set. Default is 3.


retry=n

(retry value) The number of minutes that NFS retries a mount operation before giving up. Set to 0 to cause mount to exit immediately if it fails. Default is 2 minutes for foreground mounts (fg) and 10,000 minutes for background mounts (bg).


timeo=n

(timeout value) The n is the number of tenths of a second NFS waits for a response before retransmitting. For NFS over TCP, the default is 600 (60 seconds). See the nfs man page for information about the defaults used by NFS over UDP. On a busy network, in case of a slow server, or when the request passes through multiple routers, increasing this value might improve performance. See “Timeouts” on the next page for more information.
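The following command sketches these options in combination: it retries a failed mount in the background, waits 10 seconds (100 tenths of a second) before retransmitting, and generates a server not responding message after five timeouts; because hard is the default, NFS keeps trying after the message appears (dog and /oracle are the example names used earlier):

# mount -o bg,timeo=100,retrans=5 dog:/export/oracle /oracle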

Miscellaneous Options

Following are additional useful options:

lock (nolock)

Permits NFS locking. The nolock option disables NFS locking and is useful with older servers that do not support NFS locking. Default is lock.


nfsvers=n

The n specifies the NFS version number used to contact the NFS server (2, 3, or 4). The mount fails if the server does not support the specified version. By default, the client negotiates the version number, starting with 4, followed by 3 and then 2.


port=n

The n specifies the number of the port used to connect to the NFS server. A 0 causes NFS to query rpcbind on the server to determine the port number. When this option is not specified, NFS uses port 2049.


rsize=n

(read block size) The n specifies the maximum number of bytes read at one time from an NFS server. Refer to “Improving Performance.” Default is negotiated by the client and server and is the largest both can support.


wsize=n

(write block size) The n specifies the maximum number of bytes written at one time to an NFS server. Refer to “Improving Performance.” Default is negotiated by the client and server and is the largest both can support.


udp

Uses UDP for an NFS mount. Default is tcp.


tcp

Uses TCP for an NFS mount. Default is tcp.
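For example, the following command sketches a mount from an older server, forcing NFSv3 over UDP and disabling NFS locking (the server name and mount point are hypothetical):

# mount -o nfsvers=3,udp,nolock dog:/export /legacy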

Improving Performance


Several parameters can affect the performance of NFS, especially over slow connections such as a line with a lot of traffic or a line controlled by a modem. If you have a slow connection, make sure hard (default; page 809) is set so timeouts do not abort program execution.

Block size

One of the easiest ways to improve NFS performance is to increase the block size—that is, the number of bytes NFS transfers at a time. Try not specifying rsize and wsize (previous) so the client and server set these options to their maximum values. Experiment until you find the optimal block size. Unmount and mount the directory hierarchy each time you change an option. See the Linux NFS-HOWTO for more information on testing different block sizes.


Timeouts

NFS waits the amount of time specified by the timeo (timeout; page 809) option for a response to a transmission. If it does not receive a response in this amount of time, NFS sends another transmission. The second transmission uses bandwidth that, over a slow connection, might slow things down even more. You might be able to increase performance by increasing timeo.

You can test the speed of a connection with the size of packets you are sending (rsize and wsize; both above) by using ping with the –s (size) option. The –c (count) option specifies the number of transmissions ping sends.

$ ping -s 4096 -c 4 plum
PING plum ( 4096(4124) bytes of data.
4104 bytes from plum ( icmp_seq=1 ttl=64 time=0.460 ms
4104 bytes from plum ( icmp_seq=2 ttl=64 time=0.319 ms
4104 bytes from plum ( icmp_seq=3 ttl=64 time=0.372 ms
4104 bytes from plum ( icmp_seq=4 ttl=64 time=0.347 ms

--- plum ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.319/0.374/0.460/0.056 ms

The preceding example uses a packet size of 4096 bytes and shows a fast average packet round-trip time of about one-half of a millisecond. Over a modem line, you can expect times of several seconds. If the connection is dealing with other traffic, the time will be even longer. Run the test during a period of heavy traffic. Try increasing timeo to three or four times the average round-trip time (to allow for unusually bad network conditions, such as when the connection is made) and see whether performance improves. Remember that the timeo value is given in tenths of a second (100 milliseconds = one-tenth of a second).

/etc/fstab: Mounts Directory Hierarchies Automatically

The /etc/fstab file (page 524) lists directory hierarchies that the system might mount automatically as it comes up. You can use the options discussed in the preceding sections on the command line or in the fstab file.

The following line from the fstab file on guava mounts the /home/public directory from plum on the /plum.public mount point on guava:

plum:/home/public /plum.public nfs4 rsize=8192,wsize=8192 0 0

A mount point should be an empty, local directory. (Files in a mount point are hidden when a directory hierarchy is mounted on it.) The type of a filesystem mounted using NFS is always nfs4, regardless of its type on its local system. You can increase the rsize and wsize options to improve performance. Refer to “Improving Performance” on page 810.

The next example from fstab mounts a filesystem from dog:

dog:/export /dog.export nfs4 timeo=50,hard 0 0

Because the local system connects to dog over a slow connection, timeo is increased to 5 seconds (50 tenths of a second). Refer to “Timeouts” on page 810. In addition, hard is set to make sure NFS keeps trying to communicate with the server after a major timeout. Refer to hard/soft on page 810.

The final example from fstab shows a remote-mounted home directory. Because dog is a local server and is connected via a reliable, high-speed connection, timeo is decreased and rsize and wsize are increased substantially:

dog:/home /dog.home nfs4 timeo=4,rsize=16384,wsize=16384 0 0
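After adding or changing NFS entries in fstab, you can mount them without rebooting and then verify the result:

# mount -a -t nfs4
# df -ht nfs4

The mount –a –t nfs4 command mounts every nfs4 entry in fstab that is not already mounted.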

Setting Up an NFS Server


Prerequisites

Install the following packages:

nfs-utils (installed by default)

rpcbind (installed by default; not needed for NFSv4)

system-config-nfs (Fedora; optional)

Check rpcbind is running

A new Fedora/RHEL system boots with rpcbind installed and running. For NFSv3 only (not NFSv4), rpcbind must be running before you can start the nfs service. Use the systemctl status command to make sure rpcbind is running.

# systemctl status rpcbind.service

Enable and start nfs

Run systemctl to cause the nfs service to start each time the system enters multiuser mode and then start the nfs service. Use the systemctl status command to make sure the service is running.

# systemctl enable nfs.service
# systemctl start nfs.service

After modifying nfs configuration files, give the second command again, replacing start with restart to cause nfs to reread those files.



See page 819 for a list of error messages and troubleshooting tips.


SELinux

When SELinux is set to use a targeted policy, NFS is protected by SELinux. You can disable this protection if necessary. For more information refer to “Setting the Targeted Policy with system-config-selinux” on page 475.


Firewall

When NFSv4 is using AUTH_SYS authentication, as it does by default, it uses port 2049. When NFS is using rpcbind to assign a port, which NFSv4 does not do by default, it uses port 111. If the NFS server system is running a firewall or is behind a firewall, you must open one or both of these ports. Give the following commands to open the ports each time the system boots (permanently) and on the running system; see page 906 for information on firewall-cmd.

# firewall-cmd --add-port=2049/tcp
# firewall-cmd --permanent --add-port=2049/tcp
# firewall-cmd --add-port=2049/udp
# firewall-cmd --permanent --add-port=2049/udp
# firewall-cmd --add-port=111/tcp # not usually needed
# firewall-cmd --permanent --add-port=111/tcp # not usually needed
# firewall-cmd --add-port=111/udp # not usually needed
# firewall-cmd --permanent --add-port=111/udp # not usually needed


NFSv3 (and not NFSv4) uses TCP wrappers to control client access to the server. As explained on page 485, you can set up /etc/hosts.allow and /etc/hosts.deny files to specify which clients can contact rpc.mountd on the server and thereby use NFSv3. The name of the daemon to use in these files is mountd.
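As a sketch, the following hosts.allow and hosts.deny entries permit mountd connections from a single subnet and deny all others (the 192.168.0 network is a hypothetical local subnet):

$ cat /etc/hosts.allow
mountd: 192.168.0.0/255.255.255.0

$ cat /etc/hosts.deny
mountd: ALL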

JumpStart II: Configuring an NFS Server Using system-config-nfs (Fedora)

Open the NFS Server Configuration window (Figure 22-2) by giving the command system-config-nfs from an Enter a Command window (ALT-F2) or a terminal emulator. From this window you can generate an /etc/exports file, which is almost all there is to setting up an NFS server. If the system is running a firewall, see “Firewall” in the preceding section. The system-config-nfs utility allows you to specify which directory hierarchies you want to share and how they are shared using NFS. Each exported hierarchy is called a share.


Figure 22-2 NFS Server Configuration window

To add a share, click Add on the toolbar. To modify a share, highlight the share and click Properties on the toolbar. Clicking Add displays the Add NFS Share window; clicking Properties displays the Edit NFS Share window. These windows are identical except for their titles.

The Add/Edit NFS Share window has three tabs: Basic, General Options, and User Access. On the Basic tab (Figure 22-3) you can specify the pathname of the root of the shared directory hierarchy, the names or IP addresses of the systems (clients) the hierarchy will be shared with, and whether users from the specified systems will be able to write to the shared files.


Figure 22-3 Edit NFS Share window

The selections in the other two tabs correspond to options you can specify in the /etc/exports file. Following is a list of the check box descriptions in these tabs and the option each corresponds to:

General Options tab

Allow connections from ports 1024 and higher: insecure (page 816)

Allow insecure file locking: no_auth_nlm or insecure_locks (page 815)

Disable subtree checking: no_subtree_check (page 816)

Sync write operations on request: sync (page 816)

Force sync of write operations immediately: no_wdelay (page 816)

Hide filesystems beneath: nohide (page 816)

Export only if mounted: mountpoint (page 816)

User Access tab

Treat remote root user as local root: no_root_squash (page 817)

Treat all client users as anonymous users: all_squash (page 817)

Local user ID for anonymous users: anonuid (page 818)

Local group ID for anonymous users: anongid (page 818)

After making the changes you want, click OK to close the Add/Edit NFS Share window and click OK again to close the NFS Server Configuration window. There is no need to restart any daemons.

Manually Exporting a Directory Hierarchy

Exporting a directory hierarchy makes the directory hierarchy available for mounting by designated systems via a network. “Exported” does not mean “mounted”: When a directory hierarchy is exported, it is placed in the list of directory hierarchies that can be mounted by other systems. An exported directory hierarchy might be mounted (or not) at any given time.

Tip: Exporting symbolic links and device files

When you export a directory hierarchy that contains a symbolic link, make sure the object of the link is available on the client (remote) system. If the object of the link does not exist on a client system, you must export and mount it along with the exported link. Otherwise, the link will not point to the same file it points to on the server.

A device file refers to a Linux kernel interface. When you export a device file, you export that interface. If the client system does not have the same type of device available, the exported device will not work. To improve security on a client, you can use mount’s nodev option (page 806) to prevent device files on mounted directory hierarchies from being used as devices.

Exported partition holding a mount point

A mounted directory hierarchy whose mount point is within an exported partition is not exported with the exported partition. You need to explicitly export each mounted directory hierarchy you want exported, even if it resides within an already exported directory hierarchy. For example, assume two directory hierarchies, /opt/apps and /opt/apps/oracle, reside on two partitions. You must export each directory hierarchy explicitly, even though oracle is a subdirectory of apps. Most other subdirectories and files are exported automatically. See also mountpoint and nohide, both on page 816.
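For the /opt/apps example, the exports file would need a line for each partition; both lines are required even though oracle lies within apps (the client subnet is hypothetical):

/opt/apps 192.168.0.0/24(ro)
/opt/apps/oracle 192.168.0.0/24(ro)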

/etc/exports: Holds a List of Exported Directory Hierarchies

The /etc/exports file is the ACL (access control list) for exported directory hierarchies that NFS clients can mount; it is the only file you need to edit to set up an NFS server. The exportfs utility (page 818) reads this file when it updates the files in /var/lib/nfs (page 818), which the kernel uses to keep its mount table current. The exports file controls the following NFS characteristics:

• Which clients can access the server (see also “Security” on page 805)

• Which directory hierarchies on the server each client can access

• How each client can access each directory hierarchy

• How client usernames are mapped to server usernames

• Various NFS parameters

Each line in the exports file has the following format:

export-point client1(option-list) [client2(option-list) ... ]

where export-point is the absolute pathname of the root directory of the directory hierarchy to be exported. The client1 through clientn are the names or IP addresses of one or more clients, separated by SPACEs, that are allowed to mount the export-point. The option-list is a comma-separated list of options (next) that applies to the preceding client; it must not contain any SPACEs. There must not be any SPACE between each client name and the open parenthesis that starts the option-list.

You can either use system-config-nfs (page 812) to make changes to exports or edit this file manually. The following exports file gives one client system read-write access to /home/public and /home/zach:

$ cat /etc/exports

The specified directories are on the local server. In each case, access is implicitly granted for the directory hierarchy rooted at the exported directory. You can specify IP addresses or hostnames and you can specify more than one client system on a line. By default, directory hierarchies are exported in readonly mode.
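As a sketch of that format (the hostname guava is hypothetical, not taken from the original listing), an exports file granting one client read-write access to both hierarchies might look like this:

```
/home/public    guava(rw,sync)
/home/zach      guava(rw,sync)
```

Each line names the export point, then the client, with the comma-separated option list in parentheses and no SPACE between the client name and the open parenthesis.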

General Options

The left column of this section lists default options, followed by nondefault options enclosed in parentheses. Refer to the exports man page for more information.

auth_nlm (no_auth_nlm) or secure_locks (insecure_locks)

Setting the auth_nlm or secure_locks option (these two options are the same) causes the server to require authentication of lock requests. Use no_auth_nlm for older clients when you find that only files that anyone can read can be locked. Default is auth_nlm.


mountpoint[=path]

Allows a directory to be exported only if it has been mounted. This option prevents a mount point that does not have a directory hierarchy mounted on it from being exported and prevents the underlying mount point from being exported. Also mp.

nohide (hide)

When a server exports two directory hierarchies, one of which is mounted on the other, the hide option requires a client to mount both directory hierarchies explicitly to access both. When the second (child) directory hierarchy is not explicitly mounted, its mount point appears as an empty directory and the directory hierarchy is hidden. The nohide option causes the underlying second directory hierarchy to appear when it is not explicitly mounted, but this option does not work in all cases. See “Exported partition holding a mount point” on page 814. Default is nohide.

ro (rw)

(readonly) The ro option permits only read requests on an NFS directory hierarchy. Use rw to permit read and write requests. Default is ro.

secure (insecure)

The secure option requires NFS requests to originate on a privileged port (page 1267) so a program running without root privileges cannot mount a directory hierarchy. This option does not guarantee a secure connection. Default is secure.

no_subtree_check (subtree_check)

The subtree_check option checks subtrees for valid files. Assume you have an exported directory hierarchy that has its root below the root of the filesystem that holds it (that is, an exported subdirectory of a filesystem). When the NFS server receives a request for a file in that directory hierarchy, it performs a subtree check to confirm the file is in the exported directory hierarchy.

Subtree checking can cause problems with files that are renamed while opened and, when no_root_squash is used, files that only a process running with root privileges can access. The no_subtree_check option disables subtree checking and can improve reliability in some cases.

For example, you might need to disable subtree checking for home directories. Home directories are frequently subtrees (of /home), are written to often, and can have files within them frequently renamed. You would probably not need to disable subtree checking for directory hierarchies that contain files that are mostly read, such as /usr. Default is no_subtree_check.


sec=mode

The mode is the type of RPCGSS security to use to access files on this mount point and can be sys (AUTH_SYS), krb5, krb5i, krb5p, lkey, lkeyi, lkeyp, spkm, spkmi, or spkmp. Refer to SECURITY CONSIDERATIONS in the nfs man page for more information. Default is sys.

sync (async)

(synchronize) The sync option specifies that the server should reply to requests only after disk changes made by the request are written to disk. The async option specifies that the server does not have to wait for information to be written to disk and can improve performance, albeit at the cost of possible data corruption if the server crashes or the connection is interrupted. Default is sync.

wdelay (no_wdelay)

(write delay) The wdelay option causes the server to delay committing write requests when it anticipates that another, related request will follow, thereby improving performance by committing multiple write requests within a single operation. The no_wdelay option does not delay committing write requests and can improve performance when the server receives multiple, small, unrelated requests. Default is wdelay.

User ID Mapping Options

Each user has a UID number and a primary GID number on the server. The /etc/passwd and /etc/group files on the server might map these numbers to names. When a user on a client makes a request of an NFS server, the server uses these numbers to identify the user on the client, raising several issues:

• The user might not have the same ID numbers on both systems. As a consequence, the user might have owner access to files of another user and not have owner access to his own files (see “NIS and NFS” [below] for a solution).

• You might not want a user with root privileges on the client system to have owner access to root-owned files on the server.

• You might not want a remote user to have owner access to some important system files that are not owned by root (such as those owned by bin).

Owner access to a file means the remote user can execute or—worse—modify the file. NFS gives you two ways to deal with these cases:

• You can use the root_squash option to map the ID number of the root account on a client to the nfsnobody user (UID 65534) on the server.

• You can use the all_squash option to map all NFS users on the client to nfsnobody (UID 65534) on the server.

The /etc/passwd file shows that nfsnobody has a UID and GID of 65534. Use the anonuid and anongid options to override these values.


NIS and NFS

When you use NIS (page 769) for user authorization, users automatically have the same UIDs on both systems. If you are using NFS on a large network, it is a good idea to use a directory service such as NIS or LDAP (page 786) for authorization. Without such a service, you must synchronize the passwd files on all systems manually.

root_squash (no_root_squash)

The root_squash option maps requests from root on a client so they appear to come from the UID for nfsnobody (UID 65534), a nonprivileged user on the server, or as specified by anonuid. This option does not affect other sensitive UIDs such as bin. The no_root_squash option turns off this mapping so requests from root appear to come from root. Default is root_squash.

no_all_squash (all_squash)

The no_all_squash option does not change the mapping of users on clients making requests of the NFS server. The all_squash option maps requests from all users—not just root—on client systems to appear to come from the UID for nfsnobody (UID 65534), a nonprivileged user on the server, or as specified by anonuid. This option is useful for controlling access to exported public FTP, news, and other directories. Default is no_all_squash.

Security: Critical files in NFS-mounted directories should be owned by root

Despite the mapping done by the root_squash option, a user with root privileges on a client system can use sudo or su to assume the identity of any user on the system and then access that user’s files on the server. Thus, without resorting to all_squash, you can protect only files owned by root on an NFS server. Make sure that root—and not bin or another user—owns and is the only user who can modify or delete critical files within any NFS-mounted directory hierarchy.

Taking this precaution does not completely protect the system against an attacker with root privileges, but it can help thwart an attack from a less experienced malicious user.

anonuid=un and anongid=gn

Set the UID or the GID of the anonymous account to un or gn, respectively. NFS uses this account when it does not recognize an incoming UID or GID and when it is instructed to do so by root_squash or all_squash. These options default to the UID and GID of nfsnobody (65534).
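As a hedged illustration of these mapping options (the hostname guava and the ID value 1050 are hypothetical), an export for a public directory might squash every client user to a specific local account rather than to nfsnobody:

```
/home/public    guava(ro,all_squash,anonuid=1050,anongid=1050)
```

With this line in effect, requests from any user on guava appear on the server to come from UID 1050 and GID 1050.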

Where the System Keeps NFS Mount Information

A server holds several lists of directory hierarchies it can export. The list that you as a system administrator work with is /etc/exports. This section describes the important files and pseudofiles that NFS works with. The following discussion assumes that the server, plum, is exporting the /home/public and /home/zach directory hierarchies.


etab

(export table) On the server, /var/lib/nfs/etab holds a list of the directory hierarchies that are exported (can be mounted but are not necessarily mounted at the moment) and the options they are exported with:

$ cat /var/lib/nfs/etab


The preceding output shows the client that can mount /home/zach and /home/public. The etab file is initialized from /etc/exports when the system is brought up, read by mountd when a client asks to mount a directory hierarchy, and modified by exportfs (page 818) as the list of exported directory hierarchies changes. The /proc/fs/nfsd/exports pseudofile holds similar information.
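The etab listing itself is not reproduced above; its entries take roughly the following shape (the client name is hypothetical, and the exact option list varies because exportfs records every effective option, not just the ones you set):

```
/home/public    guava(ro,sync,wdelay,hide,secure,root_squash,no_all_squash,...)
/home/zach      guava(rw,sync,wdelay,hide,secure,root_squash,no_all_squash,...)
```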

exportfs: Maintains the List of Exported Directory Hierarchies

The exportfs utility maintains the /var/lib/nfs/etab file (above). When mountd is called, it checks this file to see if it is allowed to mount the requested directory hierarchy. Typically exportfs is called with simple options and modifies the etab file based on changes in /etc/exports. When called with client and directory arguments, it can add to or remove the directory hierarchies specified by those arguments from the list kept in etab, without reference to the exports file. An exportfs command has the following format:

exportfs [options] [client:dir ...]

where options is one or more options (next), client is the name of the system that dir is exported to, and dir is the absolute pathname of the directory at the root of the directory hierarchy being exported. Without any arguments, exportfs reports which directory hierarchies are exported to which systems:

# exportfs

The system executes the following command when it starts the nfsd daemon. This command reexports the entries in /etc/exports and removes invalid entries from /var/lib/nfs/etab so etab is synchronized with /etc/exports:

# exportfs -r



–a

(all) Exports directory hierarchies specified in /etc/exports. This option does not unexport entries you have removed from exports (that is, it does not remove invalid entries from /var/lib/nfs/etab); use –r to perform this task.


–f

(flush) Removes everything from the kernel’s export table.


–i

(ignore) Ignores /etc/exports; uses what is specified on the command line only.


–o options

(options) Specifies options. You can specify options following –o the same way you do in the exports file. For example, exportfs –i –o ro dog:/home/sam exports /home/sam on the local system to dog for readonly access.


–r

(reexport) Synchronizes /var/lib/nfs/etab with /etc/exports, removing invalid entries from /var/lib/nfs/etab.


–u

(unexport) Makes an exported directory hierarchy no longer exported. If a directory hierarchy is mounted when you unexport it, users see the message Stale NFS file handle when they try to access the directory hierarchy from a remote system.


–v

(verbose) Provides more information. Displays export options when you use exportfs to display export information.
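Putting these options together, a short root session might look like the following sketch (the dog and /home/sam names come from the example above; the commands must be run with root privileges on an NFS server):

```
# exportfs -i -o ro dog:/home/sam    # export one hierarchy, ignoring /etc/exports
# exportfs -v                        # list current exports with their options
# exportfs -u dog:/home/sam          # unexport that hierarchy again
# exportfs -r                        # resynchronize etab with /etc/exports
```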


Troubleshooting

This section describes NFS error messages and what you can do to fix the problems they report. It also suggests some ways to test an NFS server.

Error Messages

Sometimes a client might lose access to files on an NFS server. For example, a network problem or a remote system crash might make these files temporarily unavailable. If you try to access a remote file in these circumstances, you will get an error message, such as NFS server xxx not responding. When the local system can contact the server again, NFS will display another message, such as NFS server xxx OK. A stable network and server (or not using NFS) is the best defense against this problem.

The mount: RPC: Program not registered message might mean NFS is not running on the server. Start the nfsd and rpcbind daemons on the server.

The Stale NFS filehandle message appears when a file that is opened on a client is removed, renamed, or replaced. Try remounting the directory hierarchy.
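A minimal remount sketch, assuming the hierarchy is listed in /etc/fstab under the hypothetical mount point /mnt/public (run with root privileges):

```
# umount -l /mnt/public    # lazy unmount detaches the stale mount
# mount /mnt/public        # remount using the /etc/fstab entry
```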

Testing the Server

From the server, run systemctl. If all is well, the system displays something similar to the following:

# systemctl status nfs.service
nfs-server.service - NFS Server
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
Active: active (exited) since Tue 2013-06-25 19:22:31 PDT; 26s ago
Process: 1972 ExecStartPost=/usr/lib/nfs-utils/scripts/nfs-server.postconfig (code=exited,
Process: 1955 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT (code=exited, status=0/SUCCESS)
Process: 1947 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 1940 ExecStartPre=/usr/lib/nfs-utils/scripts/nfs-server.preconfig (code=exited,

Jun 25 19:22:31 plum systemd[1]: Starting NFS Server...
Jun 25 19:22:31 plum systemd[1]: Started NFS Server.

Also check that mountd is running:

$ ps -ef | grep mountd
root 2110 1 0 18:31 ? 00:00:00 /usr/sbin/rpc.mountd

Next, from the server, use rpcinfo to make sure NFS is registered with rpcbind:

$ rpcinfo -p localhost | grep nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049 nfs_acl
100227 3 tcp 2049 nfs_acl
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 2 udp 2049 nfs_acl
100227 3 udp 2049 nfs_acl

Repeat the preceding command from the client, replacing localhost with the name of the server. The results should be the same.

If you get a permission denied message, check that

• You have not specified rw access in /etc/exports when the client has only read permission to the directory hierarchy.

• You have specified the directory hierarchy and client correctly in /etc/exports.

• You have not edited /etc/exports since you last ran exportfs –r.

• You are not trying to export a directory hierarchy and a subdirectory of that hierarchy. From the client you can mount a directory hierarchy and a subdirectory of that hierarchy separately, but you cannot export both from the server. Export the higher-level directory only.

Finally, try mounting directory hierarchies from remote systems and verify access.
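Once the server checks out, a client can mount a share at boot time with an /etc/fstab line such as this sketch (the /mnt/public mount point and the option list are assumptions, not from the original text):

```
plum:/home/public    /mnt/public    nfs4    defaults    0 0
```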

automount: Mounts Directory Hierarchies on Demand

In a distributed computing environment, when you log in on any system on the network, all your files—including startup scripts—are available. All systems are also commonly able to mount all directory hierarchies on all servers: Whichever system you log in on, your home directory is waiting for you.

As an example, assume /home/zach is a remote directory hierarchy that is mounted on demand. When Zach logs in or when you issue the command ls /home/zach, autofs goes to work: It looks in the /etc/auto.misc map, finds /home/zach is a key that says to mount plum:/home/zach, and mounts the remote directory hierarchy. Once the directory hierarchy is mounted, ls displays the list of files in that directory.


The automount maps can be stored on NIS and LDAP servers in addition to the local filesystem. Unless you change the automount entry in the /etc/nsswitch.conf file (page 495), the automount daemon will look for maps as files on the local system.


Prerequisites

Install the following package:

• autofs


Enable and start autofs

Run systemctl to cause the autofs service (automount daemon) to start each time the system enters multiuser mode and then start the autofs service. Use the systemctl status command to make sure the service is running.

# systemctl enable autofs.service
# systemctl start autofs.service

After modifying autofs configuration files, give the second command again, replacing start with restart to cause autofs to reread those files.

autofs: Automatically Mounted Directory Hierarchies

An autofs directory hierarchy is like any other directory hierarchy but remains unmounted until it is needed, at which time the system mounts it automatically (demand mounting). The system unmounts an autofs directory hierarchy when it is no longer needed—by default, after five minutes of inactivity. Automatically mounted directory hierarchies are an important part of managing a large collection of systems in a consistent way. The automount daemon is particularly useful when an installation includes a large number of servers or a large number of directory hierarchies. It also helps to remove server-server dependencies (next).

Server-server dependency

When you boot a system that uses traditional fstab-based mounts and an NFS server is down, the system can take a long time to come up as it waits for the server to time out. Similarly, when you have two servers, each mounting directory hierarchies from the other, and both systems are down, both might hang as they are brought up while each tries to mount a directory hierarchy from the other. This situation is called a server–server dependency. The automount facility gets around these issues by mounting a directory hierarchy from another system only when a process tries to access it.

When a process attempts to access one of the files within an unmounted autofs directory hierarchy, the kernel notifies the automount daemon, which mounts the directory hierarchy. You must give a command, such as cd /home/zach, that accesses the autofs mount point (in this case /home/zach) to create the demand that causes automount to mount the autofs directory hierarchy; only then can the system display or use the autofs directory hierarchy. Before you issue this cd command, zach does not appear in /home.

The main file that controls the behavior of automount is /etc/auto.master. Each line in this file describes a mount point and refers to an autofs map that specifies the directory that is to be mounted on the mount point. A simple example follows:

$ cat /etc/auto.master
/- /etc/auto.misc --timeout=60
/plum /etc/auto.plum

Mount point

The auto.master file has three columns. The first column names the parent of the autofs mount point—the location where the autofs directory hierarchy is to be mounted. A /– in the first column means the mount points in the associated map file are absolute pathnames.

Map files

The second column names the file, called a map file, that stores supplemental configuration information. The optional third column holds autofs options for the map entry. In the preceding example, the first line sets the timeout (the length of time a directory stays mounted when it is not in use) to 60 seconds; the default timeout is 300 seconds. You can change autofs default values in /etc/sysconfig/autofs.
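For example, to change the global default timeout you can edit /etc/sysconfig/autofs; the TIMEOUT variable (value in seconds) is the setting Fedora uses for this purpose:

```
TIMEOUT=600
```

Per-map --timeout options in auto.master, as in the example above, override this global default.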

Although the map files can have any names, one is traditionally named auto.misc. Following are the two map files specified in the preceding auto.master file:

$ cat /etc/auto.misc
/music -fstype=ext4 :/dev/mapper/fedora-music
/home/zach -fstype=nfs4 plum:/home/zach

$ cat /etc/auto.plum
public -fstype=nfs4 plum:/home/public

Relative mount point

The first column of a map file holds the absolute or relative autofs mount point. A pathname other than /– appearing in the first column of auto.master specifies a relative autofs mount point. A relative mount point is appended to the corresponding autofs mount point from column 1 of the auto.master file. In this example, public (from auto.plum) is appended to /plum (from auto.master) to form /plum/public.

Absolute mount point

A /– appearing in the first column of auto.master specifies an absolute autofs mount point. In this example, auto.master specifies /– as the mount point for the auto.misc map file, so the mount points specified in this map file are absolute and are specified entirely in an auto.misc map file. The auto.misc file specifies the autofs mount points /music and /home/zach.

The second column holds autofs options, and the third column shows the server and directory hierarchy to be mounted.

The auto.misc file specifies a local filesystem (/dev/mapper/fedora-music, an LV; page 44) and a remote NFS directory hierarchy (plum:/home/zach). You can identify a local directory hierarchy by the absence of a system name before the colon and usually the ext4 filesystem type. A system name appears before the colon of a remote directory hierarchy and the filesystem type is always nfs4.

Before the new setup can work, you must restart the autofs service (page 821). This process creates the directories that hold the mount points if necessary.

The following example starts with the automount daemon (autofs service) not running. The first ls command shows that the /music and /plum directories are empty. The next command, run with root privileges, starts the autofs service. Now when you list /music, autofs mounts it and ls displays its contents. The /plum listing shows nothing: It is not an autofs mount point.

$ ls /music /plum


$ su -c 'systemctl start autofs.service'

$ ls /music /plum
lost+found mp3 ogg


When you give an ls command and specify the mount point (/plum/public), automount mounts it and ls displays its contents. Once public is mounted on /plum, ls /plum shows the public directory is in place.

$ ls /plum/public
memos personal
$ ls /plum
public
A df command shows the remotely mounted directory hierarchy and the locally mounted filesystem:

$ df -h
Filesystem Size Used Avail Use% Mounted on
plum:/home/public 37G 3.8G 32G 11% /plum/public
/dev/mapper/fedora-music 7G 2.4G 4G 34% /music

Chapter Summary

NFS allows a server to share selected local directory hierarchies with client systems on a heterogeneous network, thereby reducing storage needs and administrative overhead. NFS defines a client/server relationship in which a server provides directory hierarchies that clients can mount.

On the server, the /etc/exports file lists the directory hierarchies that the system exports. Each line in exports specifies a directory hierarchy and the client systems that are allowed to mount it, including options for each client (readonly, read-write, and so on). An exportfs –r command causes NFS to reread this file.

From a client, a mount command mounts an exported NFS directory hierarchy. Alternately, you can put an entry in /etc/fstab to have the system automatically mount the directory hierarchy when it mounts the local filesystems.

Automatically mounted directory hierarchies help manage large groups of systems containing many servers and directory hierarchies in a consistent way and can help remove server–server dependencies. The automount daemon automatically mounts autofs directory hierarchies when they are needed and unmounts them when they are no longer needed.


Exercises

1. What are three reasons to use NFS?

2. Which command would you give to mount on the local system the /home directory hierarchy that resides on the file server named plum? Assume the mounted directory hierarchy will appear as /plum.home on the local system. How would you mount the same directory hierarchy if the fileserver were specified by its IP address instead of its name? How would you unmount /home?

3. How would you list the mount points on the remote server named plum that the local client named guava can mount?

4. Which command line lists the currently mounted NFS directory hierarchies?

5. What does the /etc/fstab file do?

6. From a server, how would you allow readonly access to /opt for any system on a given subnet?

Advanced Exercises

7. When is it a good idea to disable attribute caching?

8. Describe the difference between the root_squash and all_squash options in /etc/exports.

9. Why does the secure option in /etc/exports not really provide any security?

10. Some diskless workstations use NFS as swap space. Why is this approach useful? What is the downside?

11. NFS maps users on the client to users on the server. Explain why this mapping is a security risk.

12. What does the mount nosuid option do? Why would you want to use this option?