Configuring an NFSv4 Server and Client on SUSE Linux Enterprise Server 10



By: bpraveen1

July 27, 2006


This AppNote describes how to configure an NFSv4 server and client on a SLES 10 box.

Table of Contents

  1. Introduction
  2. Daemons for NFSv4
  3. About NFSv4 Daemons
  4. NFS Server Configuration
  5. Client Side Configuration
    5.1. Automount in NFSv4
    5.2. Using /etc/fstab to mount NFSv4 exported volume
  6. References

1. Introduction

NFS is a UNIX protocol for large-scale client/server file sharing. It is analogous to the Server Message Block (SMB) and Common Internet File System (CIFS) protocols on Microsoft Windows. The Network File System version 4 is a distributed file system protocol that traces its heritage to NFSv2 and NFSv3. Unlike previous versions of NFS, the present version (NFSv4) supports traditional file access while integrating support for file locking and the mount protocol. NFSv4 brings many additional features, such as strong security, compound operations, client caching, and internationalization.

NFSv4 is the successor of NFSv3. It has been designed to work on a LAN or over the Internet.

NFSv4 comes with several new features:

  • Advanced security management (Kerberos, SPKM, LIPKEY)
  • Firewall friendly
  • Advanced and aggressive cache management
  • Non-UNIX compatibility (e.g., Windows)
  • Easy to administer (replication, migration)
  • Crash recovery (client and server sides)

NFSv4 uses 32-KByte pages.

The NFSv3 and NFSv4 protocols are not compatible: an NFSv4 client cannot access an NFSv3 server, and vice versa. However, to simplify migrations from NFSv3 to NFSv4, both the NFSv3 and NFSv4 services are launched by the same command, rpc.nfsd.

When NFSv3 and NFSv4 clients access the same server simultaneously, be aware that two different file systems are used: the NFSv4 server provides no backward support for NFSv3.

To ensure better reliability over the Internet, NFSv4 uses only TCP. To simplify NFS setup for Internet use, NFSv4 uses a single, fixed network port; the default is port 2049.
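You can verify this on a running server; a quick check, assuming the net-tools and rpcbind utilities are installed:

    # confirm the NFS service is listening on TCP port 2049
    netstat -tln | grep 2049

    # or ask the portmapper which versions of the nfs program are registered
    rpcinfo -p | grep nfs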

2. Daemons for NFSv4

                 client side   both sides                    server side
  user commands: mount                                       exportfs
  user daemons:                portmap, idmapd               nfsd
  kernel parts:                NFSv4, RPC, XDR, TCP, IPv4

The following daemons should be running on an NFSv4 server:

  • rpc.idmapd
  • rpc.nfsd (typically started as rpc.nfsd 8, where 8 is the number of server threads)

The following daemon should be running on an NFSv4 client:

  • rpc.idmapd

3. About NFSv4 Daemons

An NFSv4 client communicates with the corresponding NFSv4 server via Remote Procedure Calls (RPCs). The client sends a request and gets a reply from the server.

An NFSv4 server can only provide/export a single, hierarchical file system tree. If a server has to share more than one logical file system tree, the individual trees are integrated into a new virtual root directory. This construction, called a pseudo file system, is what is provided/exported to clients.
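As an illustration of how such a pseudo file system is built (the paths here are hypothetical), two separate trees can be bound into a common virtual root, which is then exported as the NFSv4 root:

    # hypothetical example: merge two trees under one pseudo root
    mkdir -p /exports/home /exports/data
    mount --bind /home /exports/home
    mount --bind /data /exports/data
    # /exports is then exported with fsid=0 as the NFSv4 root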

rpc.mountd — This process receives mount requests from NFS clients and verifies that the requested file system is currently exported. It is started automatically by the nfs service and does not require user configuration. It is not used with NFSv4.

rpc.idmapd — rpc.idmapd is the NFSv4 ID <-> name mapping daemon. It provides functionality to the NFSv4 kernel client and server, to which it communicates via upcalls, by translating user and group IDs to names, and vice versa.
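For the ID mapping to work, server and client must agree on the NFSv4 domain, which is set in /etc/idmapd.conf. A minimal sketch (example.com is a placeholder for your own domain):

    # /etc/idmapd.conf (excerpt)
    [General]
    Domain = example.com

    [Mapping]
    Nobody-User = nobody
    Nobody-Group = nobody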

rpc.svcgssd — This process provides the server transport mechanism for the authentication process (Kerberos Version 5) with NFSv4. This service is required for use with NFSv4.

rpc.gssd — This process provides the client transport mechanism for the authentication process (Kerberos Version 5) with NFSv4. This service is required for use with NFSv4.

To start the NFS server, issue the following commands:

/etc/init.d/idmapd start
/etc/init.d/svcgssd start (only if kerberos support is enabled/required)
/etc/init.d/nfsserver start
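To have these services come up automatically at boot time, you can register them with chkconfig (the init script names match those used above):

    chkconfig idmapd on
    chkconfig nfsserver on
    chkconfig svcgssd on    # only if kerberos support is enabled/required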

On the NFS client, type the following commands:

/etc/init.d/idmapd start
/etc/init.d/gssd start (only if kerberos support is enabled/required)

To check the volumes exported by the server, type the following command:

showmount -e <NFS server name>
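For example, with a server named nfsserver exporting /nfs as the pseudo root, the output would look something like this (hypothetical names and output):

    showmount -e nfsserver
    Export list for nfsserver:
    /nfs *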

4. NFS Server Configuration

This document explains how to configure and use NFSv4 on a SLES 10 box, covering the basic NFSv4 configuration and the automount facility using autofs. This setup was made on SUSE Linux 10.1.

To enable NFSv4 on the machine, check that /etc/sysconfig/nfs contains:

NFS4_SUPPORT="yes"
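You can confirm the setting from the command line; after changing the file, restart the NFS server so the setting takes effect:

    grep NFS4_SUPPORT /etc/sysconfig/nfs    # verify the setting
    rcnfsserver restart                     # SUSE shorthand for /etc/init.d/nfsserver restart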

In /etc/exports, make an entry for your exported path with the desired export options. /etc/exports contains a list of all directories that are to be exported via NFS; the syntax is slightly different from NFSv3. Here are sample entries, one per security mode:

 
      /nfs  *(rw,fsid=0,insecure,no_subtree_check,sync,no_root_squash)
      /nfs  gss/krb5(rw,fsid=0,insecure,no_subtree_check,sync,no_root_squash)
      /nfs  gss/krb5i(rw,fsid=0,insecure,no_subtree_check,sync,no_root_squash)
      /nfs  gss/krb5p(rw,fsid=0,insecure,no_subtree_check,sync,no_root_squash)

Note: use a single-line entry for each security mode.

fsid – The value 0 has a special meaning when used with NFSv4. NFSv4 has the concept of a root for the overall exported file system (the pseudo file system). The export point exported with fsid=0 is used as this root.

no_subtree_check – If a subdirectory of a filesystem is exported but the whole filesystem isn't, then whenever an NFS request arrives, the server must check not only that the accessed file is in the appropriate filesystem (which is easy) but also that it is in the exported tree (which is harder). This check is called the subtree_check, and this option disables it.

insecure – This option allows clients whose NFS implementations don't use a reserved port for NFS.

A minimal entry therefore looks like this, where the first field is the exported path and the parenthesized list holds the NFSv4 export options:

/nfs     *(rw,fsid=0,no_subtree_check,no_root_squash,sync)
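After editing /etc/exports, re-export the file systems and verify the result:

    exportfs -ra    # re-read /etc/exports and re-export all entries
    exportfs -v     # list the active exports together with their options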

To export multiple volumes in NFSv4, follow the steps below:

If we want to export two directories, say /NFS1 and /NFS2, we export /NFS1 as explained above. But for /NFS2 we have to create a directory NFS2 inside /NFS1.

  1. mkdir /NFS1/NFS2
  2. Bind the directory /NFS2 to /NFS1/NFS2 by executing the following command:
    mount --bind /NFS2 /NFS1/NFS2
  3. Now configure /etc/exports with the sample entries shown below:
    /NFS1 *(rw,fsid=0,no_subtree_check,no_root_squash,sync)
    /NFS1/NFS2 *(rw,nohide,no_subtree_check,no_root_squash,sync)
    NOTE: note the fsid=0 option on the root export and the nohide option on the nested export.
  4. Mount the server from the client:
    mount -t nfs4 nfsserver:/ /mnt/

    You should now be able to access the files under /NFS1 and under /NFS1/NFS2.
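Note that a bind mount made on the command line does not survive a reboot. To make it permanent, you can add it to /etc/fstab on the server (a sketch using the example paths above):

    # /etc/fstab entry to recreate the bind mount at boot
    /NFS2   /NFS1/NFS2   none   bind   0 0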

Checklist to ensure NFSv4 is up and running (a small verification sketch follows the list):

  1. ps -ef | grep nfsd; ps -ef | grep idmapd; ps -ef | grep svcgssd — to check the server-side daemons
  2. ps -ef | grep idmapd; ps -ef | grep gssd — to check the client-side daemons
  3. rpcinfo -p — to check all registered RPC programs and versions
  4. Check whether the firewall is enabled on the server/client from YaST -> Security and Users -> Firewall, and make sure the NFS service is allowed through.
  5. showmount -e server — to check the mount information on the NFS server
  6. If you are using NFSv4, make sure that one and only one path is exported with fsid=0. Refer to the pseudo file system discussion above for more information.
  7. If users are not mapped properly, check whether idmapd is running on both server and client and whether the DNS domain name is properly configured.
  8. If you encounter problems using a kerberos security mode, check whether the rpc.svcgssd (server) and rpc.gssd (client) daemons are running and the keytab file has been extracted.
  9. If you are unable to mount, check the entries in the exports file.
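The checklist can be rolled into a small shell script; a sketch, assuming the server is reachable as nfsserver (adjust SERVER to your environment):

    #!/bin/sh
    # quick NFSv4 health check (hypothetical helper, not part of the distribution)
    SERVER=nfsserver
    for d in nfsd idmapd; do
        pgrep -l "$d" > /dev/null || echo "$d is not running"
    done
    # check that the nfs program is registered with the portmapper on the server
    rpcinfo -p "$SERVER" | grep -q " nfs" || echo "no nfs service registered on $SERVER"
    showmount -e "$SERVER"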

5. Client Side Configuration

5.1 Automount in NFSv4

To automount an NFSv4 exported volume using autofs, follow the steps below.

Two files are mainly responsible for making automount work with autofs. Both live in the /etc directory:

  1. auto.master
  2. auto.misc, auto.home, or auto.xxxxxx (where xxxxxx can be any name)

Here are the contents of auto.master:

#
# $Id: auto.master,v 1.4 2005/01/04 14:36:54 raven Exp $
#
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#/misc  /etc/auto.misc --timeout=60
#/smb   /etc/auto.smb
#/misc  /etc/auto.misc
#/net   /etc/auto.net

/export /etc/auto.misc 

In the above file, auto.misc can also be auto.home or auto.xxxxxx, and the corresponding map file should then contain an entry like the one below (I have used auto.misc).

#
# $Id: auto.misc,v 1.2 2003/09/29 08:22:35 raven Exp $
#
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# Details may be found in the autofs(5) manpage

cd              -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom

export          -fstype=nfs4,rw         NFSServer:/

# the following entries are samples to pique your imagination
#linux                 - ro,soft,intr          ftp.example.org:/pub/linux
#boot                  -fstype=ext2            :/dev/hda1
#floppy                -fstype=auto            :/dev/fd0
#floppy                -fstype=ext2            :/dev/fd0
#e2floppy              -fstype=ext2            :/dev/fd0
#jaz                   -fstype=ext2            :/dev/sdc1
#removable             -fstype=ext2            :/dev/hdd

After making these entries, restart autofs by typing:

/etc/init.d/autofs restart    (or the SUSE shorthand: rcautofs restart)

After this, you can check the status of autofs by issuing the command:

/etc/init.d/autofs status    (or: rcautofs status)

Now check the mounts by typing the command mount. It shows something like this:

$ mount
/dev/hda1 on / type reiserfs (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
securityfs on /sys/kernel/security type securityfs (rw)
automount(pid4176) on /export type autofs (rw,fd=4,pgrp=4176,minproto=2,maxproto=4)
nfsd on /proc/fs/nfsd type nfsd (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) 

Now access the automounted directory to trigger the mount, e.g.: ls /export/export

After executing the above command, type df -h:

Filesystem     1K-blocks    Used       Available    Use%    Mounted on
/dev/hda1      31462264     2810852    28651412     9%      /
udev           518296       88         518208       1%      /dev
Nfsserver:/    69575360     2268576    67306784     4%      /export/export

5.2 Using /etc/fstab to mount NFSv4 exported volume

The NFS exported volume can also be mounted on the client simply by making an entry in the /etc/fstab file. If your NFS server is named NFSserver and the mount point on the client is /mntpoint, the entry should look like the one below.

The following entry is made in /etc/fstab:

/dev/sda1      /                  reiserfs      defaults             1  1
/dev/sda2      swap               swap          defaults             0  0
proc           /proc              proc          defaults             0  0
sysfs          /sys               sysfs         noauto               0  0
usbfs          /proc/bus/usb      usbfs         noauto               0  0
devpts         /dev/pts           devpts        mode=0620,gid=5      0  0
/dev/fd0       /media/floppy      auto          noauto,user,sync     0  0

NFSserver:/    /mntpoint          nfs4          rw,user,noauto       0  0

After making this entry in the /etc/fstab file, just give the following command at the client's prompt:

mount /mntpoint

Some useful commands on the NFS server and clients:

To check the registered RPC services and NFS threads, run rpcinfo -p on the server:

 $bb:  rpcinfo -p
   program      vers  proto port
    100000      2     tcp    111  portmapper
    100000      2     udp    111  portmapper
    100024      1     udp  32770  status
    100021      1     udp  32770  nlockmgr
    100021      3     udp  32770  nlockmgr
    100021      4     udp  32770  nlockmgr
    100024      1     tcp  57017  status
    100021      1     tcp  57017  nlockmgr
    100021      3     tcp  57017  nlockmgr
    100021      4     tcp  57017  nlockmgr
    1073741824  1     tcp  33805
    100003      2     udp   2049  nfs
    100003      3     udp   2049  nfs
    100003      4     udp   2049  nfs
    100003      2     tcp   2049  nfs
    100003      3     tcp   2049  nfs
    100003      4     tcp   2049  nfs
    100005      1     udp    975  mountd
    100005      1     tcp    976  mountd
    100005      2     udp    975  mountd
    100005      2     tcp    976  mountd
    100005      3     udp    975  mountd
    100005      3     tcp    976  mountd

To check the mount points on the client, type: mount.

 $bb: mount
/dev/hda3 on / type reiserfs (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
nfsd on /proc/fs/nfsd type nfsd (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
xxx.xxx.xxx.xxx:/ on /mnt type nfs4 (rw,addr=xxx.xxx.xxx.xxx)

Here xxx.xxx.xxx.xxx is the IP address of the NFS server.

The last line of the output shows the file system type (nfs4), which tells you which version of NFS was used for the mount.

Enable debugging

Kernel NFS debugging can be enabled through the /proc file system. All debug messages are logged to /var/log/messages.

echo "65535"  > /proc/sys/sunrpc/nfsd_debug (debugging server)                                                 
echo "65535"   > /proc/sys/sunrpc/nfs_debug (debugging client)
echo "65535"   > /proc/sys/sunrpc/rpc_debug (RPC) 

Note: things tend to slow down on a production system if you enable all debugging. Make sure you revert it after collecting the debug output.
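To turn the debugging off again, write zero back to the same files:

    echo "0" > /proc/sys/sunrpc/nfsd_debug
    echo "0" > /proc/sys/sunrpc/nfs_debug
    echo "0" > /proc/sys/sunrpc/rpc_debug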

6. References

General Information and References for the NFSv4 protocol

Informational RFCs:

  • RFC1094 – NFS version 2
  • RFC1813 – NFS version 3
  • RFC2054 – WebNFS Client Specification
  • RFC2055 – WebNFS Server Specification
  • RFC2224 – NFS URL Scheme
  • RFC2624 – NFS Version 4 Design Considerations
  • RFC2755 – Security Negotiation for WebNFS

Standards Track RFCs of Interest:

  • RFC1831 – RPC: Remote Procedure Call Protocol Specification Version 2
  • RFC1832 – XDR: External Data Representation Standard
  • RFC1964 – The Kerberos Version 5 GSS-API Mechanism
  • RFC2025 – The Simple Public-Key GSS-API Mechanism (SPKM)
  • RFC2203 – RPCSEC_GSS Protocol Specification
  • RFC2581 – TCP Congestion Control
  • RFC2623 – NFS Version 2 and Version 3 Security Issues and the NFS Protocol’s Use of RPCSEC_GSS and Kerberos V5
  • RFC2743 – Generic Security Service Application Program Interface, Version 2, Update 1
  • RFC2847 – LIPKEY – A Low Infrastructure Public Key Mechanism Using SPKM
  • RFC3010 – NFS version 4 Protocol (Obsoleted by RFC3530)
  • RFC3530 – NFS version 4 Protocol


