NFSv3

From ArchWiki
[[Category:File systems]]
The goal of this article is to assist in setting up an NFS server for sharing files over a network.
*For NFSv4, see: [[NFSv4]]
*nfs-utils was upgraded on 2009-06-23 and now implements NFS4 support. Refer to the [http://www.archlinux.org/news/452/ news bulletin].
*portmap has been replaced by rpcbind.
==Required packages==
Required packages for both the server and the client are minimal. You will only need to [[pacman|install]] the {{pkg|nfs-utils}} package from the [[Official Repositories|official repositories]].  Optionally, install the {{pkg|keyutils}} package to use the keyring based idmapper on the NFS client.
==Setting up the server==
You can now edit the configuration files below and then start the daemons.
The file {{ic|/etc/exports}} defines the shares on the NFS server and their permissions. A few examples:
/files *(ro,sync)                # Read-only access for all clients
/files 192.168.1.100(rw,sync)    # Read-write access for the client at 192.168.1.100 (example address)
/files 192.168.1.0/24(rw,sync)   # Read-write access for all clients in 192.168.1.0/24 (example range)
/bsd  *(ro,sync,insecure)        # BSD clients connect from non-privileged ports, so insecure is required or the server rejects the connection
If you make changes to /etc/exports after starting the daemons, you can make them effective by issuing the following command:
# exportfs -r
If you decide to make your NFS share public and writable, you can use the {{ic|all_squash}} option in combination with the {{ic|anonuid}} and {{ic|anongid}} options.
For example, to map all requests to the user ''nobody'' in the group ''nobody'', export the share as follows (example address range; uid and gid 99 correspond to ''nobody'' here):
/files 192.168.1.0/24(rw,sync,all_squash,anonuid=99,anongid=99)  # Read-write access, with all requests squashed to uid 99 / gid 99
This also means that if you want write access to this share, the exported directory must be owned by nobody:nobody:
# chown -R nobody:nobody /files
Full details on the exports file are provided by the exports man page.
{{Note|These options used to live in {{ic|/etc/conf.d/nfs}}, which has been replaced by {{ic|/etc/conf.d/nfs-common.conf}} and {{ic|/etc/conf.d/nfs-server.conf}}.}}
Edit {{ic|/etc/conf.d/nfs-common.conf}} to pass appropriate run-time options to nfsd, mountd, statd, and sm-notify. The default Arch NFS init scripts require the {{ic|--no-notify}} option for statd, as follows:
{{hc|/etc/conf.d/nfs-common.conf|2=STATD_OPTS="--no-notify"}}
Others may be left at the provided defaults, or changed according to your requirements. Please refer to the relevant man pages for full details.
You can now start the server with the following commands:
# rc.d start rpcbind (or: rc.d start portmap)
# rc.d start nfs-common (or: rc.d start nfslock)
# rc.d start nfs-server (or: rc.d start nfsd)
Please note that they must be started in that order. To start the server at boot time, add these daemons to the DAEMONS array in {{ic|/etc/rc.conf}}. When starting them from a terminal, you may need to run them as root or via sudo.
{{Note|One or more of the daemons may not start if they are backgrounded in your {{ic|/etc/rc.conf}}.}}
=== systemd services ===
# systemctl enable rpcbind.service rpc-mountd.service exportfs.service
# echo nfsd > /etc/modules-load.d/nfsd.conf
==Setting up the client==
On the client, edit {{ic|/etc/conf.d/nfs-common.conf}} to pass appropriate run-time options to statd; the remaining options in that file are for server use only. Do ''not'' use the {{ic|--no-notify}} option on the client side unless you are fully aware of the consequences of doing so.
Please refer to the statd man page for full details.
Kernels after 2.6.37 support using request-key to find and cache idmapper entries.  Using request-key allows multiple idmap requests to be placed at a single time, making it significantly more scalable than the legacy code.
Please refer to the nfsidmap page for full details.
Start the rpcbind and nfs-common daemons:
rc.d start rpcbind (or: rc.d start portmap)
rc.d start nfs-common (or: rc.d start nfslock)
Please note that they must be started in that order.
To start the daemons at boot time, add them to the DAEMONS array in /etc/rc.conf.
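For instance, the client's DAEMONS array might look like this (a sketch; the alternative names in parentheses above apply to older nfs-utils versions):

```
DAEMONS=(... network rpcbind nfs-common ...)
```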
===systemd services===
# systemctl enable rpcbind.service rpc-statd.service
Show the server's exported filesystems:
showmount -e server
Then just mount as normal:
mount server:/files /files
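The paths printed by showmount are exactly the paths to use on the mount command line. As a sketch, the export paths can be filtered out with awk; the server name and export list below are made up for illustration:

```shell
# Hypothetical output of `showmount -e server`; the first line is a header,
# each following line is "<export path> <allowed clients>"
sample='Export list for server:
/files           192.168.1.0/24
/home/fred/music *'

# Print just the export paths - these are what goes after "server:" in mount
paths=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }')
printf '%s\n' "$paths"
```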
Unlike CIFS shares or [[rsync]], NFS exports must be mounted by their full path on the server. For example, if {{ic|/home/fred/music}} is defined in {{ic|/etc/exports}} on the server ELROND, you must call:
mount ELROND:/home/fred/music /mnt/point
instead of just using:
mount ELROND:music /mnt/point
or you will get ''mount.nfs: access denied by server while mounting''
{{Note|If you see the following message then you probably did not start the daemons from the [[#Daemons|previous section]] or something went wrong while starting them.
mount: wrong fs type, bad option, bad superblock on,
      missing codepage or helper program, or other error
      (for several filesystems (e.g. nfs, cifs) you might
      need a /sbin/mount.<type> helper program)
      In some cases useful info is found in syslog - try
      dmesg | tail  or so}}
===Auto-mount on boot===
If you want shares mounted at boot, make sure network, rpcbind (portmap), nfs-common (nfslock) and netfs are in the DAEMONS array in /etc/rc.conf, in that order. It is better not to put '@' in front of them (although @netfs is safe); for instance:
DAEMONS=(... network rpcbind nfs-common @netfs ...)
DAEMONS=(... network portmap nfslock @netfs ...)
Add an appropriate  line in /etc/fstab, for example:
server:/files /files nfs defaults 0 0
If you wish to tune the size of NFS read and write requests, set the {{ic|rsize}} and {{ic|wsize}} options in your fstab entry, for example:
server:/files /files nfs defaults,rsize=32768,wsize=32768 0 0
Read the nfs man page for further information, including all available mount options.
===Unreliable performance, slow data transfer, and/or high load when using NFS and gigabit===
====Server Threads====
If your workload involves lots of small reads and writes, there may not be enough threads running on the server to handle the quantity of queries.  To check if this is the case, run the following command on one or more of the clients:
# nfsstat -rc
Client rpc stats:
calls      retrans    authrefrsh
113482    0          113484
If the {{ic|retrans}} column contains a number larger than 0, the server is failing to respond to some NFS requests, and the number of threads should be increased.
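As a sketch, checking the retrans column can be scripted; the nfsstat output below is fabricated for illustration:

```shell
# Fabricated `nfsstat -rc` output; line 3 holds the counters, column 2 is retrans
sample='Client rpc stats:
calls      retrans    authrefrsh
113482     5          113484'

# Flag the server if any retransmissions occurred
result=$(printf '%s\n' "$sample" | awk 'NR == 3 { if ($2 > 0) print "retrans: " $2; else print "ok" }')
printf '%s\n' "$result"
```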
To increase the number of threads on the server, edit the file {{ic|/etc/conf.d/nfs-server.conf}} and change the value of the {{ic|NFSD_COUNT}} variable.  The default number of threads is 8.  Try doubling this number until {{ic|retrans}} remains consistently at zero.  Don't be afraid of increasing the number quite substantially.  256 threads may be quite reasonable, depending on the workload.  You will need to restart the NFS server daemon each time you modify the configuration file.  Bear in mind that the client statistics will only be reset to zero when the client is rebooted.
Eventually, the bottleneck will cease to be the number of threads, and will likely become the CPU.  It's clear when this is the case because the {{ic|retrans}} values are non-zero, but you can see {{ic|nfsd}} threads on the server doing no work.  (Use ''htop'', and disable the hiding of kernel threads.)
Verify that the async flag is used in {{ic|/etc/exports}}
{{Warning|Bear in mind that this could cause data inconsistencies if the NFS server crashes while the client is writing to it.  The client may believe that the write succeeded when in fact it did not.  If data integrity is important to your setup, do not use the {{ic|async}} option.}}
This is a result of the default block size used by NFS, which causes significant packet fragmentation on gigabit networks. You can change this behavior with the {{ic|rsize}} and {{ic|wsize}} mount options; {{ic|1=rsize=32768,wsize=32768}} should suffice. Note that the problem does not occur on 100Mbit networks, due to the lower transfer speed.
The default value for NFS4 is 32768 and the maximum is 65536. Increase from the default in increments of 1024 until the maximum transfer rate is achieved.
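To see why the block size matters, a back-of-the-envelope calculation (the per-packet payload figure is an approximation for a 1500-byte MTU, not an exact value):

```shell
# Roughly 1448 bytes of payload fit in one packet on a 1500-byte MTU link
mtu_payload=1448   # approximate usable payload per packet (assumption)
rsize=32768        # NFS read block size from the fstab example above

# Ceiling division: number of packets a single read request spans
packets=$(( (rsize + mtu_payload - 1) / mtu_payload ))
echo "$packets"
```

Each read request is thus spread across more than twenty packets, which is why tuning rsize/wsize has a much larger effect on gigabit links than on slower ones.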
===Portmap daemon fails to start at boot===
Make sure you place portmap ''before'' netfs in the daemons array in /etc/rc.conf.
===Nfsd fails to start with "nfssvc: No such device"===
Make sure the nfs and nfsd modules are loaded in the kernel.
===rpcbind fails to start with no error when attempting to start via console===
Try starting the daemon as root, or with sudo:
sudo rc.d start rpcbind
===Nfsd seems to work, but I cannot connect from Mac OS X clients===
When connecting from a Mac OS X client, the logs show nothing wrong, yet Mac OS X refuses to mount your NFS share. Add the {{ic|insecure}} option to your share and re-run {{ic|exportfs -r}}.
===mount.nfs: Operation not permitted===
After updating to nfs-utils 1.2.1-2, mounting NFS shares stopped working: from that version on, nfs-utils defaults to NFSv4 instead of NFSv3. The problem can be solved by using either the mount option {{ic|1='vers=3'}} or {{ic|1='nfsvers=3'}} on the command line:
# mount.nfs <remote target> <directory> -o ...,vers=3,...
# mount.nfs <remote target> <directory> -o ...,nfsvers=3,...
or in {{ic|/etc/fstab}}:
<remote target> <directory> nfs ...,vers=3,... 0 0
<remote target> <directory> nfs ...,nfsvers=3,... 0 0
===Ownership of mounted shares is 4294967294:4294967294===
The following two values in /etc/conf.d/nfs-common.conf (on the client) need to be set:
== Tips and tricks ==
=== Configure NFS fixed ports ===
If you have a port-based firewall, you might want to set up fixed ports. For rpc.statd and rpc.mountd, set the following options in {{ic|/etc/conf.d/nfs-common}} and {{ic|/etc/conf.d/nfs-server}} (the port numbers may be chosen freely):
{{hc|/etc/conf.d/nfs-common|2=STATD_OPTS="-p 4000 -o 4003"}}
{{hc|/etc/conf.d/nfs-server|2=MOUNTD_OPTS="--no-nfs-version 2 -p 4002"}}
{{hc|/etc/modprobe.d/lockd.conf|2=# Static ports for NFS lockd
options lockd nlm_udpport=4001 nlm_tcpport=4001}}
Then restart the NFS daemons and reload the lockd module:
{{bc|<nowiki># modprobe -r lockd
# modprobe lockd
# rc.d restart nfs-common nfs-server</nowiki>}}
After restarting the daemons and reloading the module, you can check the ports in use with the following command:
{{hc|$ rpcinfo -p|rpcinfo -p
  program vers proto  port  service
    100000    4  tcp    111  portmapper
    100000    3  tcp    111  portmapper
    100000    2  tcp    111  portmapper
    100000    4  udp    111  portmapper
    100000    3  udp    111  portmapper
    100000    2  udp    111  portmapper
    100024    1  udp  4000  status
    100024    1  tcp  4000  status
    100021    1  udp  4001  nlockmgr
    100021    3  udp  4001  nlockmgr
    100021    4  udp  4001  nlockmgr
    100021    1  tcp  4001  nlockmgr
    100021    3  tcp  4001  nlockmgr
    100021    4  tcp  4001  nlockmgr
    100003    2  tcp  2049  nfs
    100003    3  tcp  2049  nfs
    100003    4  tcp  2049  nfs
    100003    2  udp  2049  nfs
    100003    3  udp  2049  nfs
    100003    4  udp  2049  nfs
    100005    3  udp  4002  mountd
    100005    3  tcp  4002  mountd}}
Then open ports 111, 2049, 4000, 4001, 4002 and 4003 for both TCP and UDP in your firewall.
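For example, with [[iptables]] the rules fragment below would admit those ports (a sketch assuming the fixed port choices above; adapt it to your own rule set):

```
# NFS with fixed ports: portmapper (111), nfs (2049), statd/lockd/mountd (4000-4003)
-A INPUT -p tcp -m multiport --dports 111,2049,4000:4003 -j ACCEPT
-A INPUT -p udp -m multiport --dports 111,2049,4000:4003 -j ACCEPT
```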
==Links and references==
* See also [[Avahi]], a Zeroconf implementation which allows automatic discovery of NFS shares.
* HOWTO: [[Diskless network boot NFS root]]
* [http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/nfs_perf.htm IBM AIX NFS performance tuning guide]
* If you are setting up the Arch Linux NFS server for use by Windows clients through Microsoft's SFU, you will save a lot of time and hair-scratching by looking at [http://bbs.archlinux.org/viewtopic.php?pid=523934#p523934 this forum post] first!
* [http://blogs.msdn.com/sfu/archive/2008/04/14/all-well-almost-about-client-for-nfs-configuration-and-performance.aspx Microsoft Services for Unix NFS Client info]
* [http://blogs.msdn.com/sfu/archive/2007/05/01/unix-interoperability-and-windows-vista.aspx Unix interoperability and Windows Vista] Prerequisites to connect to NFS with Vista

Latest revision as of 09:32, 14 September 2013
