Partial record errors when writing over NFS to zOS

This document (7012647) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 1

Situation

SLES 11 SP1 is acting as an NFS client to an IBM zOS NFS Server.  Under heavy load, the application in use frequently warns that only partial records are being written.
 
Looking at a tcpdump capture, common errors returned from the zOS NFS server are "Jukebox error" (NFS3ERR_JUKEBOX) and "Stale File Handle" (NFS3ERR_STALE).  The former indicates that a resource is temporarily unavailable.  The latter indicates that a file or directory handle is no longer valid.
 
Later, after some updates to the zOS system, those errors were replaced with I/O errors (NFS3ERR_IO).
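 
One way to confirm the same symptoms is to capture the NFS traffic on the SLES client and inspect the replies for these error codes.  The following is only a sketch; the interface name and capture file are examples, and it assumes NFS v3 over TCP port 2049 with the wireshark/tshark tools available for analysis:
 
tcpdump -i eth0 -s 0 -w /tmp/nfs-trace.pcap port 2049
tshark -r /tmp/nfs-trace.pcap -Y nfs | grep NFS3ERR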

Resolution

The limitation appears to be in the IBM zOS system, but a workaround is possible on the SLES 11 SP1 side: the dirty cache size and dirty background cache size need to be severely limited.  The exact values needed may vary from case to case, but in the documented case, limiting the dirty cache to 100 MB and the dirty background level to 50 MB was successful.  (At double these values, the problem still occurred.)
 
This can be done on the fly with:
 
echo 104857600 > /proc/sys/vm/dirty_bytes
echo 52428800 > /proc/sys/vm/dirty_background_bytes
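 
To confirm the new values are active, they can simply be read back (the expected output is 104857600 and 52428800 respectively):
 
cat /proc/sys/vm/dirty_bytes
cat /proc/sys/vm/dirty_background_bytes
 
Note that the kernel treats the byte-based and ratio-based limits as alternatives: once vm.dirty_bytes and vm.dirty_background_bytes are set, vm.dirty_ratio and vm.dirty_background_ratio will read back as 0, which is expected.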
 
 
These settings can also be made to take effect at boot by editing /etc/sysctl.conf and adding or modifying the parameters:
 
vm.dirty_bytes = 104857600
vm.dirty_background_bytes = 52428800
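 
To make the /etc/sysctl.conf entries take effect immediately, without waiting for a reboot, the file can be reloaded, for example:
 
sysctl -p /etc/sysctl.conf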
 
The customer reported that at these levels, the NFS problem went away.  While these settings affect *all* writes (local disk as well as NFS) and may impact overall performance, the customer reported no noticeable loss of write performance, either over NFS or to local disk.
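 
If it is desired to observe the effect of the limits while the application is writing, the amount of dirty page cache currently outstanding can be watched, for example:
 
watch -n 1 'grep -i dirty /proc/meminfo'
 
With the settings above, the "Dirty:" value reported in /proc/meminfo would normally be expected to stay near or below roughly 102400 kB (100 MB).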
 

Cause

Upon consulting with an IBM technician, it was learned that the zOS receive buffer was filling up. The zOS side does not handle writing MVS datasets out of sequence, even though most Linux and NFS systems routinely write out of sequence. If the zOS receive buffer fills up while some piece of data needed for the writes to proceed sequentially has not yet arrived, the zOS server cannot proceed to write out more data.
 
It appears this can be worked around by limiting the dirty cache size on the SLES 11 SP1 NFS client.  This restriction keeps the amount of data going into the server's buffer at any given time, and/or the ordering of that data, within the tolerances of the zOS NFS server.

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7012647
  • Creation Date: 19-Jun-2013
  • Modified Date: 03-Mar-2020
  • Product: SUSE Linux Enterprise Server
