
NFS client cannot perform hundreds of NFS mounts in one batch

This document (7007308) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 2
SUSE Linux Enterprise Server 11 Service Pack 1
SUSE Linux Enterprise Server 10 Service Pack 3
SUSE Linux Enterprise Server 10 Service Pack 4

Situation

A system running SLES 10 or 11 has been set up to attempt hundreds of NFS v3 client mounts at the same time.  This could be via the /etc/fstab file during boot, or by other methods after the machine has booted.  However, the system does not complete the task of mounting the directories.

Resolution

NOTE:  This Technical Information Document (TID) is somewhat outdated now, as newer kernels and other code in newer SLES 11 service packs have had changes implemented to streamline NFS client port usage and make this situation less likely to arise.  For example, systems on SLES 11 SP4 can typically perform hundreds or thousands of NFS mounts in a large batch without running into this problem.
 
However, this TID is being preserved for those on older service packs, who may still benefit from it.
 
NFS is implemented on top of the "sunrpc" specification.  There is a special range of client-side ports (665-1023) which are available for certain sunrpc operations.  Every time an NFS v3 mount is performed, several of these client ports can be used.  Any other process on the system can conceivably use these as well.  If too much NFS-related activity is being performed, then all of these ports can be in use, or (when used for TCP) they can be in the process of closing, waiting their normal 60 seconds before being usable by another process.  This type of situation is typically referred to as "port exhaustion".  While this port range can be modified, such changes are not recommended because of potential side effects (discussed in item 5 below).
 
In this scenario, the port exhaustion is happening because too many NFS client ports are being used to talk to an NFS Server's portmapper daemon and/or mount daemon.  There are several approaches to consider that can help resolve this issue:
 
1.  The simplest solution is usually to add some options to each NFS client mount request, in order to reduce port usage.  The additional mount options would be:
 
proto=tcp,mountproto=udp,port=2049
 
Those can be used in an fstab file, or directly on mount commands as some of the "-o" options.  Note that successful usage of these options may depend on having the system up to date.  SLES 10 SP3 and above, or SLES 11 SP1 and above, are recommended.
 
To explain them in more detail:
 
The option "proto=tcp" (for NFS transport protocol) insures that the NFS protocol (after a successful mount) will use TCP connections.  Adding this setting is not mandatory (and is actually the default) but is mentioned to differentiate it from the second option, "mountproto=udp".
 
The option "mountproto=udp" causes the initial mount request itself to use UDP.  By using UDP instead of TCP, the port used briefly for this operation can be immediately reused instead of being required to wait 60 seconds.  This does not effect which transport protocol (TCP or UDP) NFS itself will use.  It only effects some brief work done during the initial mount of the nfs share.  (In NFS v3, the mount protocol and daemon are separate from the nfs protocol and daemon.)
 
The option "port=2049" tells the procedure to expect the server's NFS daemon to be listening on port 2049.  Knowing this up front eliminates the usage of an additional TCP connection.  That connection would have been to the sunrpc portmapper, which would have been used to confirm where NFS is listening.  Usage of port 2049 for the NFS protocol is standard, so there is normally no need to confirm it through portmapper.
 
If many of the mounts point to the same NFS server, it may also help to allow one connection to an NFS server to be shared by several of the mounts.  This is automatic on SLES 11 SP1 and above, but on SLES 10 it is configured with the command:
sysctl -w sunrpc.max_shared=10
 
or, equivalently, by writing the value directly to the /proc interface:
echo 10 > /proc/sys/sunrpc/max_shared
 
NOTE:  This feature was introduced to SLES in November 2008, so a kernel update may be needed on some older systems.  Valid values are 1 to 65535.  The default is 1, which means no sharing takes place.  A value such as 10 means that 10 mounts can share the same connection.  While it could be set very high, 10 or 20 should be sufficient.  Going higher than necessary is not recommended, as too many mounts sharing the same connection can cause performance degradation.
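 
Note that "sysctl -w" only changes the running system.  To have the value applied again after a reboot, the usual approach is to add a line such as the following (the value 10 is only an example) to /etc/sysctl.conf:
 
sunrpc.max_shared = 10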
 
2.  Another option is to use automount (autofs) to mount NFS file systems when they are actually needed, rather than trying to mount everything at the same time.  However, even with automount, if an application is launched which suddenly starts accessing hundreds of paths at once, the same problem could come up.
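 
As a sketch of how this might be set up (the mount root, map file name, server, and export path below are examples only), an indirect automount map could be defined with an entry in /etc/auto.master:
 
/mnt/nfs   /etc/auto.nfs   --timeout=60
 
and a corresponding map file /etc/auto.nfs:
 
data   -rw,proto=tcp,mountproto=udp,port=2049   nfsserver1:/export/data
 
After restarting the autofs service (for example with "rcautofs restart"), /mnt/nfs/data would then be mounted on first access and unmounted again after the idle timeout.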
 
3.  Another option would be to switch to NFS v4.  This newer version of the NFS specification uses fewer ports, for two reasons: 
 
a.  It only connects to one port on the server (instead of the 3 to 5 ports that NFS v3 uses).
 
b.  It does a better job of using one connection for multiple activities.  NFS v4 can be requested by the Linux NFS client by specifying mount type "nfs4".  This can be placed in the NFS mount entry in the /etc/fstab file, or can be specified on a mount command with "-t nfs4".  Note, however, that there are significant differences in how NFS v3 and v4 are implemented and used, so this change is not trivial or transparent.
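 
For example (again using placeholder names and paths), an NFS v4 mount could be requested with:
 
mount -t nfs4 nfsserver1:/export/data /mnt/data
 
or through an /etc/fstab entry such as:
 
nfsserver1:/export/data   /mnt/data   nfs4   defaults   0 0
 
Depending on how the server's NFS v4 pseudo file system is configured, the path to specify may differ from the one used for NFS v3.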
 
4.  A customized script could also be used to mount the volumes after boot, at a reasonable pace.  For example, /etc/init.d/after.local could be created and designed to mount a certain number of NFS shares, then sleep for some time, then mount more, and so on.
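 
A minimal sketch of such a script follows, assuming the NFS entries are already present in /etc/fstab with the "noauto" option so that they are skipped during boot.  The batch size and pause are arbitrary example values:
 
#!/bin/bash
# /etc/init.d/after.local - mount NFS shares in small batches after boot.
# Assumes the NFS entries in /etc/fstab carry the "noauto" option.
BATCH=20    # number of mounts per batch (example value)
PAUSE=30    # seconds to wait between batches (example value)
 
count=0
awk '$3 ~ /^nfs/ {print $2}' /etc/fstab | while read mountpoint; do
    mount "$mountpoint"
    count=$((count + 1))
    if [ $((count % BATCH)) -eq 0 ]; then
        sleep "$PAUSE"
    fi
done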
 
5.  If none of the above options are suitable or help to the degree necessary, the last resort would be to change the range of ports in use.  This is controlled with:
 
sysctl -w sunrpc.min_resvport=<value>
sysctl -w sunrpc.max_resvport=<value>
 
and can be checked (and set) on the fly in the proc area:
 
/proc/sys/sunrpc/min_resvport
/proc/sys/sunrpc/max_resvport
 
On SUSE Linux, these normally default to 665 and 1023.  Either the min or the max (or both) can be changed, but each change can have consequences:
 
a.  Letting more ports be used for RPC (NFS and other things) increases the risk that a port is already taken when another service wants to acquire it.  Competing services may fail to launch.  To see what services can potentially use various ports, see the information in /etc/services.  Note:  That file is a list of possible services and their port usage, not necessarily currently active services.
 
b.  Ports above 1023 are considered "non-privileged" or "insecure" ports.  In other words, they can be used by non-root processes, and are therefore considered less trustworthy.  If an NFS client starts making requests from ports 1024 or above, some NFS servers may reject those requests.  On a Linux NFS server, you can tell it to accept requests from insecure ports by exporting the file system with the "insecure" option.
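 
As an illustration of this last-resort approach (all values and paths here are examples only):  widening the range downward, which keeps all ports in the privileged range, could be done with:
 
sysctl -w sunrpc.min_resvport=600
 
(adding "sunrpc.min_resvport = 600" to /etc/sysctl.conf would make this persistent across reboots).  If the range is instead extended above 1023, a Linux NFS server could be told to accept such requests with an /etc/exports entry along the lines of:
 
/export/data   *(rw,sync,insecure)
 
followed by running "exportfs -r" on the server to re-export the updated list.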

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7007308
  • Creation Date: 03-Dec-2010
  • Modified Date: 03-Mar-2020
  • SUSE Linux Enterprise Server
