NFS Exports on CentOS 7 to ESXi
I spent a couple of hours troubleshooting NFS today. The backup server I set up in 2010 is wearing out, so I upgraded to a shiny new T320 (on sale!) and set out to reconfigure my NFS backup datastore to point at the new backup server.
Since I haven't played around with NFS in ages, I had forgotten what a joy it could be /s
In the end everything worked once I got the configuration straightened up, so yay!
References
- Location of NFS logs on CentOS [serverfault.com]
- Setting Up an NFS Server [nfs.sourceforge.net]
- NFS storage traffic going out vmk0 instead of new VMkernel port [communities.vmware.com]
- How To Provide NFS Network Shares to Specific Clients [rootusers.com]
- Location of ESXi 5.1 and 5.5 log files [kb.vmware.com]
Troubleshooting
This ended up being a 3-phase process:
- Debug the ESXi box by running some commands:
- Try mounting the share via esxcli:
# esxcli storage nfs add -H 172.16.109.110 -s /mount/depth/location -v SANBACH
Sysinfo error on operation returned status : Unable to connect to NFS server. Please see the VMkernel log for detailed error information
- tail /var/log/vmkernel.log
NFS: 168: NFS mount 172.16.109.110:/mount/depth/location failed: Unable to connect to NFS server.
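When the plain tail is too noisy, the log can be filtered down to NFS messages — a sketch; the path is the ESXi default location, and `LOG` is just a convenience variable I'm introducing here:

```shell
# Filter recent VMkernel log entries for NFS-related messages
LOG=${LOG:-/var/log/vmkernel.log}   # default location on ESXi 5.x
tail -n 200 "$LOG" | grep -i nfs
```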
- Check the ESXi firewall (just in case?)
JACKPOT: Under Configuration -> Security Profile -> Firewall I found that the NFS client was restricted to talking only to certain NFS servers.
I added the SAN address to the list of 'approved' IP addresses and I was good to go!
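The same allowed-IP change can also be made from the ESXi shell — a sketch, assuming the standard nfsClient ruleset name (worth confirming with the list command first); the IP address is my SAN host:

```shell
# Inspect the NFS client ruleset and its currently allowed IPs
esxcli network firewall ruleset allowedip list --ruleset-id=nfsClient
# Restrict the ruleset to specific addresses and allow the SAN host
esxcli network firewall ruleset set --ruleset-id=nfsClient --allowed-all=false
esxcli network firewall ruleset allowedip add --ruleset-id=nfsClient --ip-address=172.16.109.110
```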
- Debug the CentOS 7 box
- Temporarily disable the firewall and try to connect from ESXi
The output from the ESXi CLI was different:
Sysinfo error on operation returned status : The NFS server denied the mount request. Please see the VMkernel log for detailed error information
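That test, in sketch form (service name is stock CentOS 7; don't leave the firewall down longer than the test takes):

```shell
# Temporarily stop firewalld to rule it out as the culprit (CentOS 7)
systemctl stop firewalld
# ...retry the mount from the ESXi side here, then restore the firewall:
systemctl start firewalld
```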
- Re-enable the firewall after allowing mountd through it:
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload
VMware connects now, yay!
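In my case only mountd was missing, but a from-scratch CentOS 7 NFS server typically needs the full NFSv3 trio of firewalld services (service names are the stock firewalld definitions):

```shell
# Open everything NFSv3 traffic typically needs (CentOS 7 firewalld)
firewall-cmd --permanent --add-service=nfs       # nfsd, TCP/UDP 2049
firewall-cmd --permanent --add-service=rpc-bind  # portmapper, port 111
firewall-cmd --permanent --add-service=mountd    # mount daemon
firewall-cmd --reload
```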
- Now there's a problem with my NFS share: 'everyone' can connect!
- There's not much security if *any* host on *any* interface can connect to the backup share! Especially since I took the time to set up a separate SAN exclusively for storage traffic...
- Examining my exports file, everything looked fine:
/mount/depth/location 172.16.109.110(rw,no_root_squash)
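For reference, exports(5) also accepts subnets and extra options — a hedged example; the subnet and the sync/no_subtree_check options are illustrative, not what I'm actually running:

```
# /etc/exports -- one entry per line; note there is NO space before the
# option list (a space would export world-readable, options applied to all!)
/mount/depth/location 172.16.109.110(rw,no_root_squash)
# A whole storage subnet could be allowed instead:
# /mount/depth/location 172.16.109.0/24(rw,sync,no_subtree_check)
```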
- Yet when I run showmount -e I get craziness:
showmount -e
Export list for BACKUPHOST:
/mount/depth/location (everyone)
Yeah, I have a *little* problem with the 'everyone' directive here...
- After some scrounging I found that I had to reload my NFS exports:
exportfs -arv
(-a exports everything in the 'exports' file, -r re-exports directories and clears outdated entries, while the -v flag gives verbose output)
Now it shows up correctly: the share is restricted by IP address and properly configured.