[OmniOS-discuss] Pliant/Sandisk SSD ZIL

Doug Hughes doug at will.to
Wed Feb 19 04:11:30 UTC 2014


On 2/18/2014 10:54 PM, Marion Hakanson wrote:
> Thanks for everybody's comments.  More data from folks doing similar stuff
> is always welcome.
>
>
> Marion Hakanson wrote:
>>> Out of curiosity, what HBA is being used for these drives (and slog)
>>> in this R720xd, H710 or H310?  Something else?
>>
> Derek Yarnell wrote:
>> This is an R720xd shipped with a pristine LSI card (mine even came
>> with IT firmware).  You will need to talk to your rep, but they can
>> do it.  My hope here was to get a fully supported OmniOS box from
>> Dell; the biggest problem before was that they made me take an
>> H710/H800 card.  While we have been successful running ZFS on an R510
>> with H700s, drives all in R0, and setting zfs_nocacheflush, it just
>> isn't right.  We have also never gone beyond 9 data drives with the
>> R0 configuration (and this has been mostly on Nexenta, not Omni).
>
> We have quite a few R710's with the PERC/6i internally, but for those used
> as ZFS file servers, the PERC's only apply to the two boot drives, where
> the "R0" approach is relatively painless.  BTW, with these RAID HBA's that
> have non-volatile caches, you shouldn't need "zfs_nocacheflush" -- the OS
> recognizes when a cache is NV, and won't send a flush.  That's been my
> experience since late 2008, anyway.
>
> We've had mixed results when asking Dell to send us LSI-branded HBA's.
> Sometimes they'll do it, sometimes not.  Lately, we've gotten them to
> sell us Dell branded SAS 6gbps HBA's for use with external MD1200's,
> and those seem to be working just like LSI 9200-8e's & 9207-8e's.
>
> Anyway, thanks for the suggestion, I'll see what our rep can do for the
> internal HBA.  That would be much nicer than having to deal with possible
> H310 glitches.
>
>
>> The R720xd flex bays are no longer internal but in the back of the
>> machine[1].
>
> Well, that's close enough to "internal" for my purposes (:-).  We have
> an Oracle X4270-M2 configured like that, now 3+ years old.  Very nice
> layout; it's a pity they stopped selling them to mere mortals.
>
>
>> I actually will test some spare DC S3700 drives that we have for a
>> Ceph cluster as the slog devices in this box in the next few days,
>> and will report back on this thread.
>
> Cool, I look forward to seeing what you find out.
>
> Thanks and regards,
>
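
Since zfs_nocacheflush came up above: for reference, this is roughly how 
that tunable gets set on illumos-based systems (a minimal sketch from 
memory; check it against your release before relying on it):

  # persist across reboots: stop ZFS from issuing cache-flush commands
  echo "set zfs:zfs_nocacheflush = 1" >> /etc/system

  # or flip it on a live kernel with mdb
  echo "zfs_nocacheflush/W0t1" | mdb -kw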

We've used the S3700 as a SLOG; it's a fine device for that.  I find 
the current price point of the Intel 320 series hard to beat, though, 
at just over $1/GB.  It also seems to perform better than the DC S3700 
on the tests I've run that seem to be reasonable predictors, by up to 
20%.
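
For anyone setting this up, attaching the SSD as a slog is just a 
"zpool add"; the pool and device names below are placeholders:

  # dedicate one SSD as the log (slog) device for pool "tank"
  zpool add tank log c4t2d0

  # a mirrored slog is safer for data still in flight
  zpool add tank log mirror c4t2d0 c4t3d0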

The worst case for NFS performance tends to be lots of synchronous 
metadata operations, so I have a test that creates 100 zero-byte files 
in each of 100 directories and then does an rm -rf on the lot (a rough 
sketch of the test is below).  The difference on the rm pass is 
insignificant.
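
Roughly, the test looks like this (run against an NFS mount; the mount 
point is a placeholder, and the counts are just what I settled on):

  #!/bin/sh
  # 100 directories x 100 zero-byte files, created over NFS, then removed.
  cd /mnt/nfstest || exit 1

  time sh -c '
    d=1
    while [ $d -le 100 ]; do
      mkdir d$d
      f=1
      while [ $f -le 100 ]; do
        : > d$d/f$f      # empty file; each create is a synchronous op
        f=$((f + 1))
      done
      d=$((d + 1))
    done
  '

  time rm -rf d*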

I didn't compare exactly the same physical hardware (machine or data 
disks) though, so in the end the performance difference may not be that 
significant.

