Saturday, September 21, 2013

Tuning NFS Performance

On an Ubuntu system, the NFS daemon spawns only 8 server threads by default. That default may not be sufficient to handle multiple NFS connections from clients on a heavily loaded system. To check whether it is sufficient, look at the RPC statistics with the nfsstat command on an NFS client:

# nfsstat -rc
Client rpc stats:
calls      retrans    authrefrsh
236317426   2          236317430


In the example above, the retrans (retransmissions) value is greater than 0, which suggests that the NFS kernel threads available on the server are not keeping up with the requests from this client.
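
On the server itself, you can check how many nfsd threads are currently running: the first number on the th line of /proc/net/rpc/nfsd is the thread count (on a default Ubuntu install it starts with th 8; the remaining numbers are thread-utilization counters):

# grep th /proc/net/rpc/nfsd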

To increase the number of NFS threads on the server, change the RPCNFSDCOUNT setting in /etc/default/nfs-kernel-server and /etc/init.d/nfs-kernel-server. Increase it to 32 on a moderately busy server, or up to 128 on a more heavily used system. Restart the NFS service and then run nfsstat -rc on the client again to check whether the number of NFS threads is now sufficient. If the retrans value no longer increases, it is enough; otherwise, increase the number of threads further.
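
For example, to raise the thread count to 32, the change would look like the following line in /etc/default/nfs-kernel-server:
    RPCNFSDCOUNT=32
followed by a restart of the service:

# service nfs-kernel-server restart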

On the clients, we can tune the rsize and wsize mount options to optimize transfer speed. These two options specify the size of the data chunks that the client and server pass back and forth. By default, most clients mount remote NFS file systems with an 8 KB read/write block size. Simple tweaks to rsize and wsize can yield significant performance gains. It is suggested (see reference 1) to mount with the following options on the client for improved NFS performance:
    rsize=32768,wsize=32768,intr,noatime
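
For a one-off mount from the command line, the same options can be passed with -o (server:/path/to/shared and /shared are placeholders for the actual export and mount point):

# mount -t nfs -o rsize=32768,wsize=32768,intr,noatime server:/path/to/shared /shared
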
If the NFS filesystem is mounted via /etc/fstab, change the mount entry there accordingly, for example:
   server:/path/to/shared /shared nfs rsize=32768,wsize=32768,intr,noatime 0 0
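
After editing /etc/fstab, unmount and remount the share so the new rsize/wsize take effect; the values actually negotiated with the server can then be checked in /proc/mounts (or with nfsstat -m):

# umount /shared && mount /shared
# grep /shared /proc/mounts

Note that the server may negotiate the block size down if it does not support the requested values.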

Reference:
1. http://www.techrepublic.com/blog/linux-and-open-source/tuning-nfs-for-better-performance/
