NFS

From Wikipedia:

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed.
Note:
  • NFS is not encrypted. Tunnel NFS through an encrypted protocol like Kerberos or tinc when dealing with sensitive data.
  • Unlike Samba, NFS does not have any user authentication by default; client access is restricted by IP address/hostname.
  • NFS expects that the user and/or user group IDs are the same on both the client and the server. It is, however, possible to override the UID/GID by using anonuid/anongid with all_squash in /etc/exports.

Installation

Both client and server only require the installation of the nfs-utils package.
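
For example, on Arch Linux the package can be installed with pacman:

# pacman -S nfs-utils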

It is highly recommended to use a time synchronization daemon to keep client/server clocks in sync. Without accurate clocks on all nodes, NFS can introduce unwanted delays.
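
For example, a minimal approach (assuming no other NTP daemon is in use) is to enable systemd-timesyncd on both client and server:

# timedatectl set-ntp true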

Configuration

Server

Global configuration options are set in /etc/nfs.conf. Users of simple configurations should not need to edit this file.

The NFS server needs a list of exports (directories to share) which are defined in /etc/exports. These shares are relative to the so-called NFS root. A good security practice is to define an NFS root in a discrete directory tree, which will keep users limited to that mount point. Bind mounts are used to link the share mount point to the actual directory elsewhere on the filesystem.

Consider the following example, wherein:

  1. The NFS root is /srv/nfs.
  2. The export is /srv/nfs/music via a bind mount to the actual target /mnt/music.
# mkdir -p /srv/nfs/music /mnt/music
# mount --bind /mnt/music /srv/nfs/music
Note: ZFS filesystems require special handling of bind mounts, see ZFS#Bind mount.

To make the bind mount persistent across reboots, add it to fstab:

/etc/fstab
/mnt/music /srv/nfs/music  none   bind   0   0

In /etc/exports, add the directories to be shared and limit access to them by a CIDR address range or the hostname(s) of the client machines that will be allowed to mount them, e.g.:

Tip: Use an asterisk (*) to allow access from any host.
/etc/exports
/srv/nfs        192.168.1.0/24(rw,sync,crossmnt,fsid=0)
/srv/nfs/music  192.168.1.0/24(rw,sync)
/srv/nfs/home   192.168.1.0/24(rw,sync,nohide)
/srv/nfs/public 192.168.1.0/24(ro,all_squash,insecure) desktop(rw,sync,all_squash,anonuid=99,anongid=99) # map to user/group - in this case nobody

Note that modifying /etc/exports while the server is running requires a re-export for the changes to take effect:

# exportfs -rav

To view the current loaded exports state in more detail, use:

# exportfs -v

For more information about all available options see exports(5).

Tip: ip2cidr is a tool to convert IP ranges to correctly structured CIDR specifications.
Note: If the target export is a tmpfs filesystem, the fsid=1 option is required.
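
For example, a hypothetical export line for a tmpfs-backed share (the path and network below are placeholders) could look like:

/etc/exports
/srv/nfs/scratch 192.168.1.0/24(rw,sync,fsid=1)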

Starting the server

Start and enable nfs-server.service.
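
For example, with systemd this can be done in one command:

# systemctl enable --now nfs-server.service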

Warning: A hard dependency of serving NFS (rpc-gssd.service) will wait until the random number generator pool is sufficiently initialized, possibly delaying the boot process. This is particularly prevalent on headless servers. It is highly recommended to populate the entropy pool using a utility such as Rng-tools (if TPM is supported) or Haveged in these scenarios.
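
If opting for Haveged, for example, install the haveged package and enable its service:

# systemctl enable --now haveged.service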

If you are exporting ZFS shares, also start and enable zfs-share.service. Without this, ZFS shares will no longer be exported after a reboot.

Miscellaneous

Restricting NFS to interfaces/IPs

By default, starting nfs-server.service will listen for connections on all network interfaces, regardless of /etc/exports. This can be changed by defining which IPs and/or hostnames to listen on.

/etc/nfs.conf
[nfsd]
host=192.168.1.123
# Alternatively, you can use your hostname.
# host=myhostname

Restart nfs-server.service to apply the changes immediately.
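
For example:

# systemctl restart nfs-server.service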

Enable NFSv4 idmapping
Note: Another option is to make sure the UID's/GID's match on both the client and server.

The NFSv4 protocol represents the local system's UID and GID values on the wire as strings of the form user@domain. The process of translating from UID to string and string to UID is referred to as ID mapping [1].

Even though idmapd may be running, it may not be fully enabled. Verify that /sys/module/nfsd/parameters/nfs4_disable_idmapping returns N. If it does not, run:

# echo "N" | tee /sys/module/nfsd/parameters/nfs4_disable_idmapping

Set it as a module option to make this change permanent, e.g.:

/etc/modprobe.d/nfsd.conf
options nfsd nfs4_disable_idmapping=0

To fully use idmapping, make sure the domain is configured in /etc/idmapd.conf on both the server and the client:

/etc/idmapd.conf
# The following should be set to the local NFSv4 domain name
# The default is the host's DNS domain name.
Domain = domain.tld

On the client one should also enable NFSv4 idmapping:

/etc/modprobe.d/nfsd.conf
options nfs nfs4_disable_idmapping=0
options nfsd nfs4_disable_idmapping=0
Static ports for NFSv3

This article or section is out of date.

Reason: Configuration should be done in /etc/nfs.conf since nfs-utils 2.1.1.[2]

Users needing support for NFSv3 clients may wish to consider using static ports. By default, for NFSv3 operation rpc.statd and lockd use random ephemeral ports; in order to allow NFSv3 operations through a firewall, static ports need to be defined. Edit /etc/sysconfig/nfs to set STATDARGS:

/etc/sysconfig/nfs
STATDARGS="-p 32765 -o 32766 -T 32803"

rpc.mountd should consult /etc/services and bind to the same static port 20048 under normal operation; however, if it needs to be explicitly defined, edit /etc/sysconfig/nfs to set RPCMOUNTDARGS:

/etc/sysconfig/nfs
RPCMOUNTDARGS="-p 20048"

After making these changes, several services need to be restarted; the first writes the configuration options out to /run/sysconfig/nfs-utils (see /usr/lib/systemd/scripts/nfs-utils_env.sh), the second restarts rpc.statd with the new ports, the last reloads lockd (kernel module) with the new ports. Restart these services now: nfs-config, rpcbind, rpc-statd, and nfs-server.
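
For example, assuming the unit names match the service names listed above, they can be restarted in one command:

# systemctl restart nfs-config rpcbind rpc-statd nfs-server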

After the restarts, use rpcinfo -p on the server to verify that the static ports are as expected. Using rpcinfo -p <server IP> from the client should reveal the exact same static ports.
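
For example (using a placeholder server address), the statd and lockd registrations can be checked from the client with:

$ rpcinfo -p 192.168.1.123 | grep -E 'status|nlockmgr'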

NFSv2 compatibility

This article or section is out of date.

Reason: Configuration should be done in /etc/nfs.conf since nfs-utils 2.1.1.[3]

Users needing to support clients using NFSv2 (for example U-Boot) should set RPCNFSDARGS="-V 2" in /etc/sysconfig/nfs.
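
A minimal sketch of the corresponding entry, assuming the same /etc/sysconfig/nfs file as above:

/etc/sysconfig/nfs
RPCNFSDARGS="-V 2"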

Firewall configuration

To enable access through a firewall, TCP and UDP ports 111, 2049, and 20048 may need to be opened when using the default configuration; use rpcinfo -p to examine the exact ports in use on the server:

$ rpcinfo -p | grep nfs
100003    3   tcp   2049  nfs
100003    4   tcp   2049  nfs
100227    3   tcp   2049  nfs_acl

When using NFSv4, make sure TCP port 2049 is open. No other port opening should be required:

/etc/iptables/iptables.rules
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT

When using an older NFS version, make sure other ports are open:

# iptables -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 111 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 2049 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 20048 -j ACCEPT

To have this configuration load on every system start, edit /etc/iptables/iptables.rules to include the following lines:

/etc/iptables/iptables.rules
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -p udp -m udp --dport 20048 -j ACCEPT

The previous commands can be saved by executing:

# iptables-save > /etc/iptables/iptables.rules
Warning: This command will overwrite the saved iptables startup configuration with the currently loaded iptables rules!

If using NFSv3 and the static ports listed above for rpc.statd and lockd, the following ports may also need to be added to the configuration:

/etc/iptables/iptables.rules
-A INPUT -p tcp -m tcp --dport 32765 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m udp --dport 32765 -j ACCEPT
-A INPUT -p udp -m udp --dport 32803 -j ACCEPT

To apply the changes, restart iptables.service.

Client

Users intending to use NFS4 with Kerberos need to start and enable nfs-client.target.

Manual mounting

For NFSv3 use this command to show the server's exported file systems:

$ showmount -e servername

For NFSv4 mount the root NFS directory and look around for available mounts:

# mount server:/ /mountpoint/on/client

Then mount omitting the server's NFS export root:

# mount -t nfs -o vers=4 servername:/music /mountpoint/on/client

If the mount fails, try including the server's export root (required for Debian/RHEL/SLES; some distributions need -t nfs4 instead of -t nfs):

# mount -t nfs -o vers=4 servername:/srv/nfs/music /mountpoint/on/client
Note: The server name needs to be a valid hostname (not just an IP address), otherwise mounting of the remote share will hang.

Mount using /etc/fstab

Using fstab is useful for a server which is always on, making the NFS shares available whenever the client boots up. Edit the /etc/fstab file and add an appropriate line reflecting the setup. Again, the server's NFS export root is omitted.

/etc/fstab
servername:/music   /mountpoint/on/client   nfs   defaults,soft,rsize=32768,wsize=32768,timeo=900,retrans=5,_netdev	0 0
Note: Consult nfs(5) and mount(8) for more mount options.

Some additional mount options to consider:

rsize and wsize
The rsize value is the number of bytes used when reading from the server. The wsize value is the number of bytes used when writing to the server. The default for both is 1024, but using higher values such as 8192 can improve throughput. This is not universal. It is recommended to test after making this change, see #Performance tuning.
soft or hard
Determines the recovery behaviour of the NFS client after an NFS request times out. If neither option is specified (or if the hard option is specified), NFS requests are retried indefinitely. If the soft option is specified, then the NFS client fails a NFS request after retrans retransmissions have been sent, causing the NFS client to return an error to the calling application.
timeo
The timeo value is the amount of time, in tenths of a second, to wait before resending a transmission after an RPC timeout. The default value for NFS over TCP is 600 (60 seconds). After the first timeout, the timeout value is doubled for each retry for a maximum of 60 seconds or until a major timeout occurs. If connecting to a slow server or over a busy network, better stability can be achieved by increasing this timeout value.
retrans
The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times. The NFS client generates a "server not responding" message after retrans retries, then attempts further recovery (depending on whether the hard mount option is in effect).
_netdev
The _netdev option tells the system to wait until the network is up before trying to mount the share - systemd assumes this for NFS, although automount may be the preferred solution.
Note: Setting the sixth field (fs_passno) to a nonzero value may lead to unexpected behaviour, e.g. hangs when the systemd automount waits for a check which will never happen.

Mount using /etc/fstab with systemd

Another method is to use the systemd automount service. This is a better option than _netdev, because it remounts the network device quickly when the connection is broken and restored. It also addresses the same problem as autofs; see the example below:

/etc/fstab
servername:/home   /mountpoint/on/client  nfs  noauto,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0

One might have to reboot the client to make systemd aware of the changes to fstab. Alternatively, try reloading systemd and restarting mountpoint-on-client.automount to reload the /etc/fstab configuration.
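
For example (the automount unit name is derived from the example mount point above):

# systemctl daemon-reload
# systemctl restart mountpoint-on-client.automount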

Tip:
  • The noauto mount option will not mount the NFS share until it is accessed: use auto for it to be available immediately.
    If experiencing any issues with the mount failing due to the network not being up/available, enable NetworkManager-wait-online.service. It will ensure that network.target has all the links available prior to being active.
  • The users mount option would allow user mounts, but be aware that it implies further options, such as noexec.
  • The x-systemd.idle-timeout=1min option will unmount the NFS share automatically after 1 minute of non-use. Good for laptops which might suddenly disconnect from the network.
  • If shutdown/reboot holds too long because of NFS, enable NetworkManager-wait-online.service to ensure that NetworkManager is not exited before the NFS volumes are unmounted. You may also try to add the x-systemd.requires=network-online.target mount option if shutdown takes too long.
  • Mount options such as noatime, nodiratime, noac and nocto may be used to increase NFS performance.

Mount using autofs

Using autofs is useful when multiple machines want to connect via NFS; they could both be clients as well as servers. The reason this method is preferable over the earlier one is that if the server is switched off, the client will not throw errors about being unable to find NFS shares. See autofs#NFS network mounts for details.

Tips and tricks

Performance tuning

When using NFS on a network with a significant number of clients, one may increase the default number of NFS threads from 8 to 16 or even higher, depending on the server/network requirements:

/etc/nfs.conf
[nfsd]
threads=16

It may be necessary to tune the rsize and wsize mount options to meet the requirements of the network configuration.

In recent Linux kernels (>2.6.18) the size of I/O operations allowed by the NFS server (default max block size) varies depending on RAM size, with a maximum of 1M (1048576 bytes). The server's max block size will be used even if NFS clients request a bigger rsize and wsize. See https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/5.8_Technical_Notes/Known_Issues-kernel.html for details. It is possible to change the default max block size allowed by the server by writing to /proc/fs/nfsd/max_block_size before starting nfsd. For example, the following command restores the previous default iosize of 32k:

# echo 32768 > /proc/fs/nfsd/max_block_size

To make the change permanent, create a systemd-tmpfile:

/etc/tmpfiles.d/nfsd-block-size.conf
w /proc/fs/nfsd/max_block_size - - - - 32768

To mount with the increased rsize and wsize mount options:

# mount -t nfs -o rsize=32768,wsize=32768,vers=4 servername:/srv/nfs/music /mountpoint/on/client

Automatic mount handling

This trick is useful for NFS-shares on a wireless network and/or on a network that may be unreliable. If the NFS host becomes unreachable, the NFS share will be unmounted to hopefully prevent system hangs when using the hard mount option [4].

Make sure that the NFS mount points are correctly indicated in fstab:

/etc/fstab
lithium:/mnt/data           /mnt/data	        nfs noauto,noatime,rsize=32768,wsize=32768 0 0
lithium:/var/cache/pacman   /var/cache/pacman	nfs noauto,noatime,rsize=32768,wsize=32768 0 0
Note:
  • You must use hostnames in fstab for this to work, not IP addresses.
  • In order to mount NFS shares with non-root users, the users option has to be added.
  • The noauto mount option tells systemd not to automatically mount the shares at boot, otherwise this may cause the boot process to stall.

Create the auto_share script that will be run by cron or systemd/Timers, using ICMP ping to check whether the NFS host is reachable:

/usr/local/bin/auto_share
#!/bin/bash

function net_umount {
  umount -l -f "$1" &>/dev/null
}

function net_mount {
  mountpoint -q "$1" || mount "$1"
}

# Extract the NFS entries (lines containing a colon) from fstab.  The appended
# newline plus dummy "b" line ensures the last real entry is terminated by a
# newline and therefore processed by read below.
NET_MOUNTS=$(sed -e '/^.*#/d' -e '/^.*:/!d' -e 's/\t/ /g' /etc/fstab | tr -s " ")$'\n'b

printf %s "$NET_MOUNTS" | while IFS= read -r line
do
  SERVER=$(echo "$line" | cut -f1 -d":")
  MOUNT_POINT=$(echo "$line" | cut -f2 -d" ")

  # Check if the server has already been tested
  if [[ "${server_ok[@]}" =~ "${SERVER}" ]]; then
    # The server is up, make sure the share is mounted
    net_mount "$MOUNT_POINT"
  elif [[ "${server_notok[@]}" =~ "${SERVER}" ]]; then
    # The server could not be reached, unmount the share
    net_umount "$MOUNT_POINT"
  else
    # Check if the server is reachable
    ping -c 1 "${SERVER}" &>/dev/null

    if [ $? -ne 0 ]; then
      server_notok+=("$SERVER")
      # The server could not be reached, unmount the share
      net_umount "$MOUNT_POINT"
    else
      server_ok+=("$SERVER")
      # The server is up, make sure the share is mounted
      net_mount "$MOUNT_POINT"
    fi
  fi
done
Note: If you want to test using a TCP probe instead of ICMP ping (the default is TCP port 2049 in NFSv4), then replace the line:
 # Check if the server is reachable
 ping -c 1 "${SERVER}" &>/dev/null

with:

 # Check if the server is reachable
 timeout 1 bash -c ": < /dev/tcp/${SERVER}/2049"
in the auto_share script above.

Make sure the script is executable:

# chmod +x /usr/local/bin/auto_share

Next, configure the script to run at a regular interval; in the examples below, this is every minute.

Cron

# crontab -e
* * * * * /usr/local/bin/auto_share

systemd/Timers

/etc/systemd/system/auto_share.timer
[Unit]
Description=Automount NFS shares every minute

[Timer]
OnCalendar=*-*-* *:*:00

[Install]
WantedBy=timers.target
/etc/systemd/system/auto_share.service
[Unit]
Description=Automount NFS shares
After=syslog.target network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/auto_share

[Install]
WantedBy=multi-user.target

Finally, enable and start auto_share.timer.
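
For example, enabling and starting in one step:

# systemctl enable --now auto_share.timer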

Using a NetworkManager dispatcher

NetworkManager can also be configured to run a script on network status change.

The easiest method for mounting shares on network status change is to symlink the auto_share script:

# ln -s /usr/local/bin/auto_share /etc/NetworkManager/dispatcher.d/30-nfs.sh

However, in that particular case unmounting will happen only after the network connection has already been disabled, which is unclean and may result in effects like freezing of KDE Plasma applets.

The following script safely unmounts the NFS shares before the relevant network connection is disabled, by listening for the pre-down and vpn-pre-down events. Make the script executable:

/etc/NetworkManager/dispatcher.d/30-nfs.sh
#!/bin/bash

# Find the connection UUID with "nmcli con show" in terminal.
# All NetworkManager connection types are supported: wireless, VPN, wired...
WANTED_CON_UUID="CHANGE-ME-NOW-9c7eff15-010a-4b1c-a786-9b4efa218ba9"

if [[ "$CONNECTION_UUID" == "$WANTED_CON_UUID" ]]; then
    
    # Script parameter $1: NetworkManager connection name, not used
    # Script parameter $2: dispatched event
    
    case "$2" in
        "up")
            mount -a -t nfs4,nfs 
            ;;
        "pre-down");&
        "vpn-pre-down")
            umount -l -a -t nfs4,nfs >/dev/null
            ;;
    esac
fi
Note: This script ignores mounts with the noauto option; remove this mount option or use auto to allow the dispatcher to manage these mounts.

Create a symlink inside /etc/NetworkManager/dispatcher.d/pre-down to catch the pre-down events:

# ln -s /etc/NetworkManager/dispatcher.d/30-nfs.sh /etc/NetworkManager/dispatcher.d/pre-down.d/30-nfs.sh

Troubleshooting

There is a dedicated article NFS Troubleshooting.

See also