[OmniOS-discuss] NFS Datastore vmware esxi failover

David Wachter david.wachter at espros.ch
Sun Dec 8 07:01:45 UTC 2013


I would like to follow up on this matter, as I have been following Max's leads (thanks!) on this issue, having had the same idea of building an active/active cluster with OmniOS, but with Proxmox instead. 


I am not sure yet whether we are chasing ghosts here. But ESXi and NetApp handle such a "transparent" failover without any problems, so I suspect we are missing something on the ZFS side. 


I did some further digging and found a few things that I don't think have been discussed here yet, and I would like to throw them into the discussion. 


INODE-Number: 
By the way, I checked this, as it was mentioned somewhere: the inode number stayed the same across the failover. 
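For reference, I checked it roughly like this from an NFS client (just a sketch; the mount point and test file are placeholders):

    # before the failover: note the inode number
    ls -i /mnt/nfs/testfile

    # trigger the failover, then check again; the number was unchanged
    ls -i /mnt/nfs/testfile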


FSID: 
I found this article: http://ben.timby.com/?p=109 


"The first is that when you export a file system via NFS, a unique fsid is generated for that file system. The client machines that mount the exported file system use this id to generate handles to directories/files. This fsid is generated using the major/minor of the device being exported. This is a problem for me, as the device being exported is a DRBD volume with LVM on top of it. This means that when the LVM OCF RA fails over the LVM volgroup, the major/minor will change. In fact, the first device on my system had a minor of 4. This was true of both nodes. If a resource migrates, it receives the minor 4, as the existing volgroup already occupies 4. This means that the fsid will change for the exported file system and all client file handles are stale after failover.“ 



To make it short: I was not able to set an fsid via the sharenfs property. The attempt was rejected with "cannot be set to invalid options". 


So after further digging I found these articles: 
https://github.com/zfsonlinux/zfs/issues/570 
and this commit: 
https://github.com/zfsonlinux/zfs/commit/d2e032ca9cd62fd0e80cdce30c6d1c40421bf754 


They ask for that property to be enabled in the ZFS source code, which sounds promising. 


Can this be added to sharenfs? 


RMTAB: 
/etc/rmtab. 


from man: 

rmtab contains a table of filesystems that are remotely mounted by NFS clients. This file is maintained by mountd(1M), the mount daemon. The data in this file should be obtained only from mountd(1M) using the MOUNTPROC_DUMP remote procedure call. The file contains a line of information for each remotely mounted filesystem. There are a number of lines of the form: 
hostname: fsname 
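The supported way to read this data is showmount(1M), which issues exactly that MOUNTPROC_DUMP call:

    # showmount -a nfs-server
    client1.example.com:/tank/data

(example output; the host and dataset names are placeholders)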


I checked the file on my install, and the entries were there after the first switch. I have not tried to inject lines here from the failover script yet; the FSID issue needs to be solved first. 
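If injecting turns out to be necessary, I imagine the failover script would append entries in the documented hostname:fsname format before the NFS services come up, something like this (an untested sketch; host and dataset names are placeholders, and I am assuming mountd rereads the file on startup):

    # pre-seed rmtab with the clients known from the old head
    echo "client1.example.com:/tank/data" >> /etc/rmtab
    # then (re)start the NFS services so mountd picks it up
    svcadm restart svc:/network/nfs/server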


Summary: 
FSID: 
To make "transparent" failover with NFS work, we need an fsid option in sharenfs. (Well, I just assume that this will bring us forward.) 


A question for the OmniOS team: can this be added? 


/etc/rmtab: 
holds the clients and the mounted filesystems. This needs to be tested as the next issue. 


Thanks for reading and for posting feedback on this. 
—> Dave 









