An NFS client hangs on various operations, including "df". Hard vs Soft NFS mounts.
This document (000020830) is provided subject to the disclaimer at the end of this document.
Also, /var/log/messages shows warnings in either of the following formats, after a process has stalled for approximately 3 minutes:
kernel: nfs: server <servername> not responding, still trying
kernel: nfs: server <servername> not responding, timed out
These events are often associated with network outages or periods when the remote NFS Server (which holds the actual file system) is experiencing trouble or is down for maintenance.
Quickly fixing the source of the problem (bringing the NFS Server back up or repairing the network infrastructure problem) is the best solution. The client can then get on its way.
In cases where the source of the problem CANNOT be quickly fixed, the NFS client is going to malfunction, and (within NFS client code) there are two options for dealing with such a condition. The NFS mount can be done as a "hard" mount (the default and recommended method) or it can be a "soft" mount. Besides those two methods, other possibilities exist for the code of the application itself (or of the way it is executed) to mitigate the situation. Precautions can also be taken around scheduled outages. All four of these are discussed below.
(1) Hard mounts: The NFS client code will retry the stalled operations indefinitely, hoping that the NFS Server will become reachable again. This is considered the safest method overall for data integrity, but it does have the known side effect that anything relying on NFS activity, or waiting behind anything else that is waiting on NFS activity, is going to appear hung or stalled while operations are being retried. Even though such processes may stall indefinitely, they have the possibility to eventually continue without any side effect, once communication with the NFS Server is restored.
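As a sketch of what a hard mount looks like in practice: "hard" is already the default, but it can be listed explicitly to make the intent clear. The server name, export path, and mount point below are placeholders, not values from this document:

```shell
# Hypothetical server, export, and mount point -- substitute your own.
# "hard" is the default behavior; stating it makes the intent explicit.
mount -t nfs -o hard nfsserver:/export/data /mnt/data

# Roughly equivalent /etc/fstab entry:
# nfsserver:/export/data  /mnt/data  nfs  hard  0 0
```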
(2) Soft mounts: The NFS client layer will abort after a certain number of retries / timeouts have occurred. This allows the NFS client to return an error (possibly EIO) to the process(es) which are waiting. Those processes can then move on, but it is impossible for the OS or NFS client code to predict how well (or how badly) the application will react to the error. The application could move on with no consequences; or it could experience a fatal condition and crash. If the application was writing data when the abort happened, the application may be mistaken about how much data was successfully written to the permanent storage on the other end. This can lead to missing data or what is called "silent data corruption." Therefore, SUSE strongly recommends against using the "soft" mount option. However, for an administrator who understands the risks and has chosen to use "soft" mounts anyway, SUSE recommends mitigating the situation further by increasing the mount's "retrans" option.
"retrans" controls how many times a NFS client request will be retransmitted before a "major timeout" is considered to have occurred. When using "soft" mounts, a "major timeout" results in aborting the operation. Normally, the NFS client layer uses retrans=2, which means three attempts are made: The original attempt plus two retransmissions. In a default configuration, there will be 60 seconds between these retries, meaning the abort will happen after about three minutes of failure.
retrans=5 might be appropriate, to give six minutes of possible recovery before aborting. But this is just a matter of preference and judgement. How long should processes wait before an abort is appropriate? What will be the consequences of waiting versus the consequences of receiving an error? This all depends on the application code being run and how it is going to react to delays versus errors.
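For an administrator who has accepted the risks of "soft" mounts, the retrans adjustment described above might look like the following sketch. The server and paths are placeholders; note that "timeo" is expressed in tenths of a second, so timeo=600 corresponds to the 60-second retry interval mentioned above:

```shell
# Placeholder server, export, and mount point -- substitute your own.
# timeo=600 tenths of a second = 60 seconds per attempt;
# retrans=5 = five retransmissions after the original attempt,
# so the abort would occur after roughly six minutes of failure.
mount -t nfs -o soft,retrans=5,timeo=600 nfsserver:/export/data /mnt/data
```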
As a side note: Besides the NFS client retransmissions happening every 60 seconds, the TCP layer will also be doing retransmissions. These TCP level retransmissions initially happen after just a few milliseconds, and those timeouts grow in length if failures continue. Therefore, even though NFS waits 60 seconds before its first retry, TCP may bring about faster recovery in the meantime. Both recovery approaches are active at the same time. In contrast to the TCP transport layer, the UDP transport will not trigger any retransmissions on its own, so NFS over UDP would use ONLY the NFS layer recovery and retransmission methods. For this and other reasons, use of NFS over UDP is strongly discouraged.
(3) Application level timeouts: The applications themselves (or the way they are executed) can also mitigate such situations. If an application has its own timeout mechanism and can abort the effort, the stalled process can be ended automatically even if the NFS client mount is "hard" and will never abort on its own. This gives the application or users more control over such situations.
As one method, simple commands can be executed under the "timeout" command. A "df" command would usually complete within just a few seconds, so it could be executed this way:
timeout 15 df
Therefore, if df does not complete within 15 seconds, the timeout command will abort it rather than allow it to continue hanging. The user or script regains control automatically at that point, and can proceed with other tasks.
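When scripting this, it can be useful to know that the timeout utility (from GNU coreutils) exits with status 124 when the time limit is reached. The sketch below uses "sleep" as a stand-in for a hung NFS command, since a real hang cannot be reproduced on demand:

```shell
# 'sleep 5' stands in for an NFS command (such as df) that has hung.
timeout 2 sleep 5
status=$?

# GNU timeout exits with 124 when the limit was reached.
if [ "$status" -eq 124 ]; then
    echo "command timed out; taking fallback action"
fi
```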
For more complicated tasks and applications, the application could have timeout logic written into the code, whereby the application can watch for IO delays on individual operations and decide when to abort. That code could also implement intelligent precautions or recovery actions, surrounding that decision.
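On systems where the timeout utility is not available, similar logic can be sketched in plain shell with a background watchdog process. Here "sleep 30" stands in for an NFS-dependent command that may hang, and the 1-second limit is only for illustration:

```shell
# 'sleep 30' stands in for an NFS-dependent command that may hang.
sleep 30 &
pid=$!

# Watchdog: kill the command if it is still running after the limit.
( sleep 1 && kill "$pid" 2>/dev/null ) &
watchdog=$!

wait "$pid"
status=$?          # 143 (128 + SIGTERM) indicates the watchdog fired
kill "$watchdog" 2>/dev/null
```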
(4) Scheduled outages: If the trigger event happens on a schedule (such as bringing an NFS Server host down for maintenance), it is advisable to first stop any processes at the NFS client machines which rely on the NFS mounts, and even to umount the NFS mounts at those clients.
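A client-side preparation script for such a scheduled outage might look roughly like the following. The service name and mount point are hypothetical, and "fuser -km" forcibly kills any remaining processes using the mount, so it should be used with care:

```shell
# Hypothetical service and mount point -- substitute your own.
systemctl stop myapp.service   # stop processes relying on the mount
fuser -km /mnt/data            # kill anything still holding files open there
umount /mnt/data               # detach the NFS mount before the outage
```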
man nfs
Specifically, see the sections on retrans, timeo, hard/soft, proto, and Transport Methods. See that output for more technical details.
If the nature of the outage which is causing hanging or "nfs: server <name> not responding" messages is unknown, it is often helpful to gather network packet traces of the events in question, to learn what hosts or routers are not answering or delivering the requests that are being sent. On Linux systems, tcpdump is the most commonly used tool for packet capturing. It is usually necessary to perform these traces at both the NFS client host and the NFS Server host, and sometimes at routers in between, to isolate these delivery problems.
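A typical client-side capture might look like the following sketch. The interface name and server hostname are placeholders, and port 2049 is the standard NFS port:

```shell
# Capture NFS traffic to/from the server into a file for later analysis
# (e.g. in Wireshark). Interface and hostname are placeholders.
tcpdump -i eth0 -s 0 -w /tmp/nfs-trace.pcap host nfsserver and port 2049
```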
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000020830
- Creation Date: 26-Oct-2022
- Modified Date: 26-Oct-2022
- SUSE Linux Enterprise Server
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com