NFSv3
Revision as of 12:39, 7 December 2012
The goal of this article is to assist in setting up an NFS server for sharing files over a network.
- nfs-utils has been upgraded since 2009-06-23, and NFS4 support is now implemented. Refer to the news bulletin.
- portmap has been replaced by rpcbind.
- 1 Required packages
- 2 Setting up the server
- 3 Setting up the client
- 4 Troubleshooting
- 4.1 Unreliable performance, slow data transfer, and/or high load when using NFS and gigabit
- 4.2 Portmap daemon fails to start at boot
- 4.3 Nfsd fails to start with "nfssvc: No such device"
- 4.4 rpcbind fails to start with no error when attempting to start via console
- 4.5 Nfsd seems to work, but I cannot connect from MacOS X clients
- 4.6 mount.nfs: Operation not permitted
- 4.7 Ownership of mounted shares is 4294967294:4294967294
- 5 Tips and tricks
- 6 Links and references
Required packages for both the server and the client are minimal. You will only need to install the nfs-utils package from the official repositories. Optionally, install nfsidmap to use the keyring-based idmapper on the NFS client.
Setting up the server
You can now edit your configuration and then start the daemons.
This file, /etc/exports, defines the various shares on the NFS server and their permissions. A few examples:

/files   *(ro,sync)                # Read-only access for anyone
/files   192.168.0.100(rw,sync)    # Read-write access for the client at 192.168.0.100
/files   192.168.1.1/24(rw,sync)   # Read-write access for all clients from 192.168.1.1 to 192.168.1.255
/bsd     *(ro,sync,insecure)       # BSD clients require insecure, as otherwise the server will reject their connections
If you make changes to /etc/exports after starting the daemons, you can make them effective by issuing the following command:
# exportfs -r
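To double-check which shares are currently exported, and with which effective options, exportfs can also print the active export table:

```shell
# List the current exports together with their active options
exportfs -v
```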
If you decide to make your NFS share public and writable, you can use the all_squash option in combination with the anonuid and anongid options. For example, to map all requests to the user nobody in the group nobody, you can do the following:

# Read-write access for the client at 192.168.0.100, with all requests mapped to uid 99 / gid 99
/files 192.168.0.100(rw,sync,all_squash,anonuid=99,anongid=99)

This also means that if you want write access to this directory, nobody:nobody must own the share directory:

# chown -R nobody:nobody /files
Full details on the exports file are provided by the exports man page.
Edit this file to pass appropriate run-time options to nfsd, mountd, statd, and sm-notify. The default Arch NFS init scripts require the --no-notify option for statd, as follows:
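The example itself is missing here; assuming the STATD_OPTS variable used later in this article (see Configure NFS fixed ports), it would presumably look like:

```shell
# Hypothetical reconstruction: pass --no-notify to statd via its options variable
STATD_OPTS="--no-notify"
```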
Others may be left at the provided defaults, or changed according to your requirements. Please refer to the relevant man pages for full details.
You can now start the server with the following commands:
# rc.d start rpcbind      (or: rc.d start portmap)
# rc.d start nfs-common   (or: rc.d start nfslock)
# rc.d start nfs-server   (or: rc.d start nfsd)
Please note that they must be started in that order. To start the server at boot time, add these daemons to the DAEMONS array in /etc/rc.conf. It may be necessary to start the daemons as root, or via sudo, when starting them from the terminal.
# systemctl enable rpc-mountd.service rpc-statd.service
# echo nfsd > /etc/modules-load.d/nfsd.conf
Setting up the client
Edit this file to pass appropriate run-time options to statd - the remaining options are for server use only. Do not use the --no-notify option on the client side, unless you are fully aware of the consequences of doing so.
Please refer to the statd man page for full details.
Kernels after 2.6.37 support using request-key to find and cache idmapper entries. Using request-key allows multiple idmap requests to be placed at a single time, making it significantly more scalable than the legacy code.
Please refer to the nfsidmap page for full details.
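As a sketch, wiring nfsidmap into request-key is typically a single entry in /etc/request-key.conf (the path to the nfsidmap binary may differ on your system):

```shell
# /etc/request-key.conf -- hand id_resolver keys to nfsidmap
create	id_resolver	*	*	/usr/sbin/nfsidmap %k %d
```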
Start the rpcbind and nfs-common daemons:
# rc.d start rpcbind      (or: rc.d start portmap)
# rc.d start nfs-common   (or: rc.d start nfslock)
Please note that they must be started in that order.
To start the daemons at boot time, add them to the DAEMONS array in /etc/rc.conf.
# systemctl enable rpc-statd.service
Show the server's exported filesystems:
showmount -e server
Then just mount as normal:
mount server:/files /files
Unlike CIFS shares or rsync, NFS exports must be referenced by their full path on the server. For example, if /home/fred/music is defined in /etc/exports on the server ELROND, you must call:
mount ELROND:/home/fred/music /mnt/point
instead of just using:
mount ELROND:music /mnt/point
or you will get the error mount.nfs: access denied by server while mounting.

You may also see an error of the form:

mount: wrong fs type, bad option, bad superblock on 192.168.1.99:/media/raid5-4tb,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so
Auto-mount on boot
If you want to mount on boot, make sure network, rpcbind (portmap), nfs-common (nfslock), and netfs appear in the DAEMONS array in /etc/rc.conf, in that order. It is better not to put a '@' in front of them (although you could safely use @netfs); for instance:
DAEMONS=(... network rpcbind nfs-common @netfs ...)

or:

DAEMONS=(... network portmap nfslock @netfs ...)
Add an appropriate line in /etc/fstab, for example:
server:/files /files nfs defaults 0 0
If you wish to specify the read and write packet sizes, add them to your fstab entry. The values listed below are the defaults if none are specified:
server:/files /files nfs rsize=32768,wsize=32768 0 0
Read the nfs man page for further information, including all available mount options.
Unreliable performance, slow data transfer, and/or high load when using NFS and gigabit
This NFS Howto page has some useful information regarding performance. Here are some further tips:
If your workload involves lots of small reads and writes, there may not be enough threads running on the server to handle the quantity of queries. To check if this is the case, run the following command on one or more of the clients:
# nfsstat -rc
Client rpc stats:
calls      retrans    authrefrsh
113482     0          113484
If the retrans column contains a number larger than 0, the server is failing to respond to some NFS requests, and the number of threads should be increased.
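If you want to watch this value from a script, the retrans figure can be pulled out of the nfsstat output; a minimal sketch (a canned sample stands in for a live client here):

```shell
# Extract the retrans column from `nfsstat -rc`-style output.
# On a real client, replace the canned sample with live output from: nfsstat -rc
sample="Client rpc stats:
calls      retrans    authrefrsh
113482     0          113484"

# The line after the "calls ..." header holds the values; field 2 is retrans.
retrans=$(printf '%s\n' "$sample" | awk '/^calls/ {getline; print $2}')
echo "retrans: $retrans"
```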
To increase the number of threads on the server, edit /etc/conf.d/nfs-server.conf and change the value of the NFSD_COUNT variable. The default number of threads is 8. Try doubling this number until retrans remains consistently at zero. Don't be afraid of increasing the number quite substantially: 256 threads may be quite reasonable, depending on the workload. You will need to restart the NFS server daemon each time you modify the configuration file. Bear in mind that the client statistics will only be reset to zero when the client is rebooted.
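For example (a sketch; 16 is an arbitrary starting point, to be doubled again if retrans stays above zero):

```shell
# /etc/conf.d/nfs-server.conf -- raise the nfsd thread count from the default 8
NFSD_COUNT=16
```

Then restart the server daemon, e.g. with rc.d restart nfs-server.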
Eventually, the bottleneck will cease to be the number of threads and will likely become the CPU. It is clear when this is the case because the retrans values are non-zero, but you can see the nfsd threads on the server doing no work (use htop, and disable the hiding of kernel threads).
Verify that the async flag is used in /etc/exports:

/nfs4exports          192.168.0.0/24(ro,fsid=0,no_subtree_check,async)
/nfs4exports/data     192.168.0.0/24(rw,no_subtree_check,async,nohide)
/nfs4exports/backup   192.168.0.0/24(rw,no_subtree_check,async,nohide)
This is a result of the default packet size used by NFS, which causes significant fragmentation on gigabit networks. You can modify this behavior with the rsize and wsize mount parameters; rsize=32768,wsize=32768 should suffice. Please note that this problem does not occur on 100Mb networks, due to the lower packet transfer speed.
The default value for NFSv4 is 32768 and the maximum is 65536. Increase from the default in increments of 1024 until the maximum transfer rate is achieved.
Portmap daemon fails to start at boot
Make sure you place portmap before netfs in the DAEMONS array in /etc/rc.conf.
Nfsd fails to start with "nfssvc: No such device"
Make sure the nfs and nfsd modules are loaded in the kernel.
rpcbind fails to start with no error when attempting to start via console
Try starting the daemon as root, or via sudo:

$ sudo rc.d start rpcbind
Nfsd seems to work, but I cannot connect from MacOS X clients
When trying to connect from a MacOS X client, you will see that everything looks fine in the logs, but MacOS X still refuses to mount your NFS share. You have to add the insecure option to your share and re-run exportfs -r.
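For example (hypothetical share path; the insecure flag is the relevant addition):

```shell
# /etc/exports -- allow connections from non-privileged source ports,
# which MacOS X clients use
/files 192.168.0.0/24(rw,sync,insecure)
```

Afterwards, re-export the shares with exportfs -r.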
mount.nfs: Operation not permitted
After updating to nfs-utils 1.2.1-2, mounting NFS shares stopped working: nfs-utils now uses NFSv4 by default instead of NFSv3. The problem can be solved by using either the vers=3 or the nfsvers=3 mount option, on the command line:

# mount.nfs <remote target> <directory> -o ...,vers=3,...
# mount.nfs <remote target> <directory> -o ...,nfsvers=3,...

or in /etc/fstab:

<remote target> <directory> nfs ...,vers=3,... 0 0
<remote target> <directory> nfs ...,nfsvers=3,... 0 0
The following two values in /etc/conf.d/nfs-common.conf (on the client) need to be set:
Tips and tricks
Configure NFS fixed ports
If you have a port-based firewall, you might want to set up fixed ports. For rpc.statd and rpc.mountd, set the following options in /etc/conf.d/nfs-server.conf (the ports themselves can be chosen differently):

STATD_OPTS="-p 4000 -o 4003"
MOUNTD_OPTS="--no-nfs-version 2 -p 4002"
# Static ports for NFS lockd
options lockd nlm_udpport=4001 nlm_tcpport=4001
Then restart the NFS daemons and reload the lockd module:

# modprobe -r lockd
# modprobe lockd
# rc.d restart nfs-common nfs-server

After restarting the daemons and reloading the module, you can check the ports in use with the following command:
$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp   4000  status
    100024    1   tcp   4000  status
    100021    1   udp   4001  nlockmgr
    100021    3   udp   4001  nlockmgr
    100021    4   udp   4001  nlockmgr
    100021    1   tcp   4001  nlockmgr
    100021    3   tcp   4001  nlockmgr
    100021    4   tcp   4001  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100005    3   udp   4002  mountd
    100005    3   tcp   4002  mountd
Then, open ports 111, 2049, 4000, 4001, 4002, and 4003 in your firewall, for both TCP and UDP.
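With iptables, for example, the rules might look like this (a sketch only; adapt chains and policies to your own firewall setup, and keep the port list in sync with the fixed ports chosen above):

```shell
# Accept NFS-related traffic on the fixed ports (111 = portmapper,
# 2049 = nfs, 4000-4003 = status/lockd/mountd as configured above)
iptables -A INPUT -p tcp -m multiport --dports 111,2049,4000:4003 -j ACCEPT
iptables -A INPUT -p udp -m multiport --dports 111,2049,4000:4003 -j ACCEPT
```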
Links and references
- See also Avahi, a Zeroconf implementation which allows automatic discovery of NFS shares.
- HOWTO: Diskless network boot NFS root
- If you are setting up the Arch Linux NFS server for use by Windows clients through Microsoft's SFU, you will save a lot of time and hair-scratching by looking at this forum post first!
- Microsoft Services for Unix NFS Client info
- Unix interoperability and Windows Vista Prerequisites to connect to NFS with Vista