[OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

Rune Tipsmark rt at steait.net
Mon Nov 3 07:09:06 UTC 2014


ConnectX-2 adapters and drivers are loaded, and both OmniOS servers have LUNs I can access from both ESX and Windows... it's just the connection between the two of them that I can't figure out.
Br,
Rune

From: Johan Kragsterman [mailto:johan.kragsterman at capvert.se]
Sent: Sunday, November 02, 2014 10:49 PM
To: Rune Tipsmark
Cc: omnios-discuss at lists.omniti.com
Subject: Re: RE: RE: RE: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol


Hi!

I can see I forgot the list in previous mail...

Anyway, are you using ConnectX adapters?

Have you checked if the driver is loaded in the host system?
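
For instance, something along these lines should tell you whether the srpt module and the SRP target service are actually in place (just a quick sketch):

modinfo | grep -i srpt       # SRP target kernel module loaded?
svcs -l ibsrp/target         # COMSTAR SRP target service online?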


Best regards,

Johan Kragsterman

Capvert


-----Rune Tipsmark <rt at steait.net> wrote: -----
To: Johan Kragsterman <johan.kragsterman at capvert.se>
From: Rune Tipsmark <rt at steait.net>
Date: 2014-11-03 00:07
Subject: RE: RE: RE: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol
It doesn't show up in stmfadm list-target... only my other servers, such as ESX and Windows, show up...

The Subnet Manager is OpenSM running on the ESX hosts; it has been working fine ever since I installed them. Prior to that I had a couple of Cisco (Topspin) DDR switches with a built-in SM.

Any other ideas? Everything I have configured so far on both ZFS boxes, Windows and ESX has worked as expected... it's just this darn thing that's not doing what it's supposed to...

It should be enough to just add the HCA from ZFS10 to the host group on ZFS00 for ZFS10 to see a LUN on ZFS00, provided that there is a view configured on ZFS00 as well... or am I missing a step?
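
For reference, what I ran on ZFS00 to set that up was essentially this (from memory, so treat it as a sketch; the eui is the ZFS10 HCA port GUID and the LU name is the one from the stmfadm output I pasted before):

stmfadm add-hg-member -g OmniOS eui.0002C903000923E6
stmfadm add-view -h OmniOS -n 0 600144F007780B7F00005455EDD50002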

Br
Rune

-----Original Message-----
From: Johan Kragsterman [mailto:johan.kragsterman at capvert.se]
Sent: Sunday, November 02, 2014 11:58 AM
To: Rune Tipsmark
Subject: Re: RE: RE: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol


-----Rune Tipsmark <rt at steait.net> wrote: -----
To: Johan Kragsterman <johan.kragsterman at capvert.se>
From: Rune Tipsmark <rt at steait.net>
Date: 2014-11-02 20:05
Cc: David Bomba <turbo124 at gmail.com>, "omnios-discuss at lists.omniti.com" <omnios-discuss at lists.omniti.com>
Subject: RE: RE: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

I know, but how do I initiate a session from ZFS10?



If a session doesn't show up in "stmfadm list-target -v" even though the view is right, then something is wrong in the IB fabric. Do you use a switch? Where do you run your IB subnet manager?
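
A quick sanity check on both boxes: the IPoIB links normally only show "up" once a subnet manager has brought the ports active, and SRP logins show up as sessions under the target, so roughly:

dladm show-link              # IB ports should show state up
stmfadm list-target -v       # sessions appear here once an initiator logs in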

Br,

Rune



From: Johan Kragsterman [mailto:johan.kragsterman at capvert.se]
Sent: Sunday, November 02, 2014 10:33 AM
To: Rune Tipsmark
Cc: David Bomba; omnios-discuss at lists.omniti.com
Subject: Re: RE: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

-----Rune Tipsmark <rt at steait.net> wrote: -----

To: Johan Kragsterman <johan.kragsterman at capvert.se>
From: Rune Tipsmark <rt at steait.net>
Date: 2014-11-02 19:11
Cc: David Bomba <turbo124 at gmail.com>, "omnios-discuss at lists.omniti.com" <omnios-discuss at lists.omniti.com>
Subject: RE: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

Hi Johan

I've got two ZFS boxes (ZFS00 receives, ZFS10 sends), both with IB, all configured, and views made for vSphere, which works just fine.

What I can't figure out is how to share a LUN with the other ZFS box... see the pasted info below...

ZFS00 (the box I want to receive my ZFS snapshots on): the initiators below are all ESX servers; the other ZFS box does not show up.

root at zfs00:/pool03# stmfadm list-target -v
Target: eui.0002C90300095E7C
    Operational Status: Online
    Provider Name     : srpt
    Alias             : -
    Protocol          : SRP
    Sessions          : 8
        Initiator: eui.0002C903000F397C
            Alias: 8102c90300095e7e:0002c903000f397c
            Logged in since: Sun Nov  2 02:09:41 2014
        Initiator: eui.0002C903000F397B
            Alias: 8102c90300095e7d:0002c903000f397b
            Logged in since: Sun Nov  2 02:09:40 2014
        Initiator: eui.0002C903000D3D04
            Alias: 8102c90300095e7e:0002c903000d3d04
            Logged in since: Sat Nov  1 21:14:47 2014
        Initiator: eui.0002C90300104F47
            Alias: 8102c90300095e7d:0002c90300104f47
            Logged in since: Sat Nov  1 21:12:54 2014
        Initiator: eui.0002C903000D3D03
            Alias: 8102c90300095e7d:0002c903000d3d03
            Logged in since: Sat Nov  1 21:12:32 2014
        Initiator: eui.0002C90300104F48
            Alias: 8102c90300095e7e:0002c90300104f48
            Logged in since: Sat Nov  1 21:10:45 2014
        Initiator: eui.0002C903000A48FA
            Alias: 8102c90300095e7e:0002c903000a48fa
            Logged in since: Sat Nov  1 21:10:40 2014
        Initiator: eui.0002C903000D3CA0
            Alias: 8102c90300095e7e:0002c903000d3ca0
            Logged in since: Sat Nov  1 21:10:39 2014
Target: iqn.2010-09.org.napp-it:1394106801
    Operational Status: Online
    Provider Name     : iscsit
    Alias             : 03.06.2014
    Protocol          : iSCSI
    Sessions          : 0

root at zfs00:/pool03# stmfadm list-lu -v
LU Name: 600144F007780B7F00005455EDD50002
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /pool03/LU11
    View Entry Count  : 1
    Data File         : /pool03/LU11
    Meta File         : not set
    Size              : 2199023255552
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Disabled
    Access State      : Active

root at zfs00:/pool03# stmfadm list-hg -v
Host Group: ESX
Host Group: Windows
Host Group: ESX-iSER
Host Group: OmniOS
        Member: eui.0002C903000923E6 <<--- The other ZFS box
        Member: iqn.2010-09.org.napp-it:1402013225
        Member: iqn.1986-03.com.sun:01:58cfb38a32ff.5390f58d

root at zfs00:/pool03# stmfadm list-view -l 600144F007780B7F00005455EDD50002
View Entry: 0
    Host group   : OmniOS
    Target group : All
    LUN          : 0

ZFS10 (the sending box, where I want to see the LUN from ZFS00): no disk from ZFS00 shows up...

 Hi!

You've got eui.0002C903000923E6 in host group OmniOS, but you don't have a session from that eui to the target.



Rgrds Johan

root at zfs10:/root# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c4t5000C50055FC9533d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50055fc9533
       1. c4t5000C50055FE6A63d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50055fe6a63
       2. c4t5000C500625B7EA7d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625b7ea7
       3. c4t5000C500625B86E3d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625b86e3
       4. c4t5000C500625B886Fd0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625b886f
       5. c4t5000C500625B8137d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625b8137
       6. c4t5000C500625B8427d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625b8427
       7. c4t5000C500625BB773d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625bb773
       8. c4t5000C500625BC2C3d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625bc2c3
       9. c4t5000C500625BD3EBd0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500625bd3eb
      10. c4t5000C50057085A6Bd0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50057085a6b
      11. c4t5000C50057086B67d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50057086b67
      12. c4t5000C50062878C0Bd0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50062878c0b
      13. c4t5000C50062878C43d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50062878c43
      14. c4t5000C500570858EFd0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500570858ef
      15. c4t5000C500570870D3d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c500570870d3
      16. c4t5000C5005708351Bd0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c5005708351b
      17. c4t5000C5005708296Fd0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c5005708296f
      18. c4t5000C50057089753d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50057089753
      19. c4t5000C50057086307d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50057086307
      20. c4t5000C50062879687d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50062879687
      21. c4t5000C50062879707d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50062879707
      22. c4t5000C50062879723d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50062879723
      23. c4t5000C50062879787d0 <SEAGATE-ST4000NM0023-0004-3.64TB>
          /scsi_vhci/disk at g5000c50062879787
      24. c7t0d0 <ATA-INTELSSDSC2BA10-0270 cyl 15571 alt 2 hd 224 sec 56>
          /pci at 0,0/pci15d9,704 at 1f,2/disk at 0,0
      25. c7t1d0 <ATA-INTELSSDSC2BA10-0270 cyl 15571 alt 2 hd 224 sec 56>
          /pci at 0,0/pci15d9,704 at 1f,2/disk at 1,0
      26. c7t4d0 <ATA-InnoLite SATADOM-19-14.91GB>
          /pci at 0,0/pci15d9,704 at 1f,2/disk at 4,0
      27. c7t5d0 <ATA-InnoLite SATADOM-19-14.91GB>
          /pci at 0,0/pci15d9,704 at 1f,2/disk at 5,0
      28. c10d0 <Unknown-Unknown-0001-149.00GB>
          /pci at 0,0/pci8086,3c08 at 3/pci103c,178b at 0
      29. c11d0 <Unknown-Unknown-0001-149.00GB>
          /pci at 0,0/pci8086,3c0a at 3,2/pci103c,178b at 0
      30. c12d0 <Unknown-Unknown-0001-298.02GB>
          /pci at 79,0/pci8086,3c04 at 2/pci10b5,8616 at 0/pci10b5,8616 at 5/pci103c,178e at 0
      31. c13d0 <Unknown-Unknown-0001-298.02GB>
          /pci at 79,0/pci8086,3c04 at 2/pci10b5,8616 at 0/pci10b5,8616 at 6/pci103c,178e at 0
      32. c14d0 <Unknown-Unknown-0001-298.02GB>
          /pci at 79,0/pci8086,3c08 at 3/pci10b5,8616 at 0/pci10b5,8616 at 5/pci103c,178e at 0
      33. c15d0 <Unknown-Unknown-0001-298.02GB>
          /pci at 79,0/pci8086,3c08 at 3/pci10b5,8616 at 0/pci10b5,8616 at 6/pci103c,178e at 0
Specify disk (enter its number):



-----Original Message-----
From: Johan Kragsterman [mailto:johan.kragsterman at capvert.se]
Sent: Sunday, November 02, 2014 9:56 AM
To: Rune Tipsmark
Cc: David Bomba; omnios-discuss at lists.omniti.com
Subject: Re: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol


-----"OmniOS-discuss" <omnios-discuss-bounces at lists.omniti.com<mailto:omnios-discuss-bounces at lists.omniti.com>> skrev: -----
Till: David Bomba <turbo124 at gmail.com<mailto:turbo124 at gmail.com>>
Från: Rune Tipsmark
Sänt av: "OmniOS-discuss"
Datum: 2014-11-02 07:55
Kopia: "omnios-discuss at lists.omniti.com<mailto:omnios-discuss at lists.omniti.com>" <omnios-discuss at lists.omniti.com<mailto:omnios-discuss at lists.omniti.com>>
Ärende: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

Sounds sensible, how do I do that?

I tried creating a view for a thin LU for my other ZFS box, but how do I detect it?

First of all, if you run IB, forget the iSCSI stuff; it only creates an unnecessary IP layer that you don't need and adds latency to your application.

Did you create SRP target service using COMSTAR?

# svcadm enable -r ibsrp/target

What's the output of "srptadm list-target" (on the storage box)? You can also use "stmfadm list-target -v".

Have you got all the necessary IB pieces in place, like a subnet manager (OpenSM)? Do the HCA ports show up in dladm show-link?

If so, your host system should discover it as a local disk, just with "format", provided you have created a view with the right eui.xxxxxxxxxxxx for the initiator HCA.
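
Put together, for a thin LU it looks roughly like this (names are just examples, and it assumes a host group for the initiator already exists):

svcadm enable -r ibsrp/target                     # SRP target service on the storage box
zfs create -s -V 2T tank/thinlu01                 # sparse zvol as backing store
stmfadm create-lu /dev/zvol/rdsk/tank/thinlu01    # prints the 600144F0... LU name
stmfadm add-view -h <host group> -n 0 <LU name from create-lu>
srptadm list-target                               # or stmfadm list-target -v

Then on the initiator box just run "format", and the LUN should show up as an ordinary disk.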


Rgrds Johan

I also stumbled across something else interesting; I'm wondering if it's possible to set up two identical boxes and create a pool with local/remote disks as per this article: http://www.ssec.wisc.edu/~scottn/Lustre_ZFS_notes/lustre_zfs_srp_mirror.html
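
If I read it right, the idea boils down to each box exporting a disk over SRP and the other side mirroring it against a local disk, so on one box you would end up with something like (device names purely illustrative):

# mirror a local disk with the LUN the other box exports over SRP
zpool create tank mirror c4t5000C50055FC9533d0 c0t600144F0xxxxxxxxxxxxxxxxxxxxxxxxd0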



Br,

Rune



From: David Bomba [mailto:turbo124 at gmail.com]
Sent: Saturday, November 01, 2014 6:01 PM
To: Rune Tipsmark
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol



I usually mount an iSER target and perform the ZFS send to that target. This was the best way to exploit the RDMA bandwidth to its full potential.
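
Roughly: once the iSER LUN from the receiving box shows up in format on the sender, build a pool on it and receive straight into that (pool and snapshot names are just examples):

zpool create backuppool c0t600144F0xxxxxxxxxxxxxxxxxxxxxxxxd0
zfs snapshot -r tank@repl1
zfs send -R tank@repl1 | zfs receive -F backuppool/tank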





On 2 Nov 2014, at 11:45 am, Rune Tipsmark <rt at steait.net> wrote:



Hi all,



Is it possible to do zfs send/recv via SRP or some other RDMA enabled protocol? Over IPoIB it is really slow, about 50 MB/sec between the two boxes, with no disk more than 10-15% busy.



If not, is there a way I can aggregate, say, 8 or 16 IPoIB partitions and push throughput to a more reasonable speed?
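
For reference, what I'm doing now is just a plain pipe over the IPoIB address, something along the lines of (dataset names just as an example):

zfs send -i tank/vm@snap1 tank/vm@snap2 | ssh zfs00 zfs receive pool03/vm

and that's where the ~50 MB/sec comes from.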



Br,

Rune

_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss at lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss