[OmniOS-discuss] hardware specs for illumos storage/virtualization server

Denis Cheong denis at denisandyuki.net
Sat Nov 17 07:56:48 EST 2012


That's a lot of issues and questions to raise in the one post!

Let me start off by saying that your intentions are very similar to mine.
My OmniOS box is the storage backend for my household, and this includes
MythTV as well as stackloads of photos, videos, music, etc.

My primary system's configuration is:
Intel S2600CO Motherboard (16 x DIMM slots, dual LGA2011)
Xeon E5-2603 (single CPU even though the mobo is dual capable)
8 x 16GB Hynix DDR ECC DIMMs (total 128GB)
2 x LSI 9201-16i 6G SAS cards
Norco RPC-4224 case, 24 x hotswap 6G SAS bays
OCZ ZX Series 850W power supply (importantly, this PSU has a 30A 5V rail)
13 x 3TB WD Red drives
plus miscellaneous other less interesting stuff.

You might like to consider some of these parts instead of the
configuration you were looking at, in particular the mobo, the case and the
SAS cards, all of which can be obtained at good prices.

With respect to the NAS drives - it's too early for me to give much
feedback on them, but they are definitely the ones to get, for the reasons
you gave.

Regarding running a MythTV backend under OmniOS:

1. It's not going to work natively; nobody I have been able to find has
actually got a backend build working on Solaris and kept it up to date.
2. It would have been a good option, but BrandZ went by the wayside long
ago.
3. KVM, as far as I have tested it, has some major limitations that would
make it impractical, i.e. both network bandwidth and disk throughput are
terrible, to the point that it will cause problems with MythTV.

The last point is the main problem with MythTV under OmniOS for me:
severely limited network throughput will cause major problems, since all
recordings have to come in over network TV tuners (HDHomeRuns in my case),
and playback goes back out over the network as well.  The network
throughput issues I saw may have been caused by the problems I had with the
aggregated network cards, but I still have not had the opportunity to
properly test KVM performance since the proposed fix was released into the
bloody build.  I hope to get the opportunity to try it in the next few
weeks though.
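
If you do get that far, a rough way to sanity-check guest network
throughput is an iperf run between the KVM guest and another box on the LAN
(the address below is just a placeholder); with roughly 18Mbps per ATSC
stream you want to see several hundred Mbit/s with headroom to spare:

  # on the OmniOS host, or any other machine on the LAN
  iperf -s

  # inside the KVM guest: 60 second run, 4 parallel streams
  iperf -c 192.168.1.10 -t 60 -P 4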

Please keep us up to date with how you go if you do deploy a MythTV backend
into a KVM instance, though; I'm interested to see your observations on
network throughput, since mine was so crippled.


On Sat, Nov 17, 2012 at 3:36 PM, Paul B. Henson <henson at acm.org> wrote:

> I think I'm finally going to get around to putting together the illumos
> home/hobby server I've been thinking about for the past few years :), and
> would appreciate a little feedback on parts/compatibility/design.
>
> The box is intended to be both a storage server (music/video/etc media,
> documents, whatever) with content available via both NFS and CIFS, as well
> as a virtualization server using kvm to run some number of linux instances
> (the most heavyweight of which will probably be the mythtv instance, but
> there will be a number of other miscellaneous things going on). I'm
> thinking of using two SSDs with a partition on each mirrored for rpool, a
> second separate partition on each as L2ARC, and possibly a third mirrored
> for slog (or potentially a separate SSD just for slog), and a storage pool
> consisting of two 6-disk raidz2 vdevs.
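>
> Roughly, with made-up device names, the data pool part of that would look
> something like:
>
>   # two 6-disk raidz2 vdevs in a single pool (device names are placeholders)
>   zpool create tank \
>       raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
>       raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0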
>
> For the case, I'm looking at the Supermicro 836BA-R920B rackmount chassis,
> which has 16 3.5" hot-swap bays on the front, and 2 2.5" hot-swap bays on
> the back, along with dual redundant 960w 80+ platinum certified power
> supplies. This particular model has all 16 front bays direct attached, with
> four SFF-8087 connectors. There are two other models available with either
> one or two SAS expanders; however, from what I understand hooking up SATA
> drives on the other side of a SAS expander is a bad idea. If I went with
> near-line SAS, I could get the model with the expanders, which would reduce
> my cost in terms of SAS controllers, but the pricing on near-line SAS is
> ridiculous compared to SATA, and the extra cost in SAS controller should be
> outweighed by reduced cost in drives (I'm already looking at a way higher
> budget than I'd like for a hobby project, but I have few vices, and
> electronics are one of them ;) ).
>
> For the motherboard, I'm looking at the Supermicro X9DRD-7LN4F-JBOD, which
> is a dual LGA 2011 socket board with 16 DIMM slots, 2 x SATA3, 4 x SATA2,
> and 8 x SAS (LSI 2308 controller onboard) along with 4 intel i350 based gig
> nics. My understanding is that illumos is perfectly happy with the LSI 2308
> in IT mode. The -JBOD version of this motherboard comes from the factory
> with IT firmware. It doesn't seem readily available, though; if I went
> with the regular version, the LSI controller comes with RAID firmware. It's
> possible to reflash it to IT, but from what I've read it's a bit of a pain
> (you need to do it from the EFI shell). It also looks like illumos works
> with the Intel i350 gig NICs, and I assume there should be no issue with
> the onboard Intel AHCI SATA controller?
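>
> From what I've read, the reflash from the EFI shell boils down to roughly
> the following (the firmware/BIOS file names depend on the LSI download, so
> treat them as placeholders and follow LSI's instructions):
>
>   sas2flash.efi -listall                      # confirm the 2308 is visible
>   sas2flash.efi -o -e 6                       # erase the existing IR/RAID flash
>   sas2flash.efi -f 2308it.bin -b mptsas2.rom  # write IT firmware and boot ROM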
>
> CPU: 2 x Intel Xeon E5-2620. The hex core is a bit pricier than the quads,
> but I've just got my heart set on 12 cores, and no one said a hobby had to
> be cost effective ;). These are Sandy Bridge Xeons; I know there were some
> Sandy Bridge issues in the past, but I think there were workarounds, and it
> looks like Joyent recently fixed them
> (https://github.com/joyent/illumos-joyent/commit/4d86fb7f59410be72e467483b74e2eebff6052b2),
> so I'm hoping they will work well.
>
> I haven't really spec'd specific RAM, although I'm partial to Crucial; the
> board takes 1333MHz registered ECC DDR3. I think I want at least 32GB for
> the storage server side, and I'm not sure yet how much more I'll add on top
> of that for virtualization.
>
> 8 of the 16 3.5" bays will be covered by the onboard LSI controller; I
> need to get an additional PCIe controller with 2 x SFF-8087 connectors to
> cover the rest. Seems there are a fair number of options, although I'm not
> sure if there's a clear winner among them. Any favorites?
>
> Hard drives are the parts I'm least confident in 8-/. I'd like to go 2TB
> or 3TB; that's cost prohibitive for near-line SAS, and pretty darn pricy
> for "enterprise" SATA. I don't really want to go with desktop class drives
> though.
>
> Is there any opinion yet on the new WD Red "NAS" drives? They're only $170
> for a 3TB drive, which is pretty cheap. On the plus side, they're
> engineered for 7x24 operation, have a three year warranty, and are supposed
> to be low power/low heat (both would be good; while I installed a 4.5kW
> solar power system a few years ago when I remodeled our house, and have
> been net negative power-wise since, I anticipate that will change when this
> beast starts running. I also set up a dedicated wiring closet with a
> separate 8,000 BTU wall air conditioner, but still, less heat = less
> cooling = less power utilization). They come out of the box with 7 second
> TLER, plus the ability to tune that however you'd like. On the downside,
> while WD doesn't specify it, they evidently run at 5400rpm (which I suppose
> is where the low power/low heat comes from), and they aren't exactly
> screamers (streaming isn't too bad, but random IO leaves a bit to be
> desired).
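>
> (For reference, the TLER knob is the standard SCT ERC setting, so assuming
> smartmontools and a placeholder device path, checking or adjusting it looks
> roughly like the following; whether a changed value survives a power cycle
> varies by drive.)
>
>   smartctl -l scterc /dev/rdsk/c1t2d0        # show current read/write ERC
>   smartctl -l scterc,70,70 /dev/rdsk/c1t2d0  # set both to 7.0s (units of 0.1s)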
>
> My mythtv VM will potentially be recording 4 HD ATSC streams (originating
> from network connected HDHomeRuns), reading all 4 back from disk at the
> same time (for commercial flagging), and potentially reading a different
> two streams for playback on the two front ends I currently have connected
> to TVs. Arguably the worst case for an ATSC transport stream is about
> 18Mbps, so it's not really that much. But then all of the VMs will be doing
> their thing, plus whatever the NFS/CIFS clients are up to. Sizing for IO is
> black magic to me <sigh>; on the one hand I want to maximize my storage for
> the cost, but on the other I don't want recordings that skip and stutter
> and VMs that lag and are unresponsive...
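>
> Back of the envelope, worst case with everything concurrent:
>
>   (4 record + 4 flagging reads + 2 playback) x 18 Mbps = 180 Mbps ~= 22 MB/s
>
> so the sequential streaming load itself is modest; it's the random IO from
> the other VMs and clients stacked on top that's harder to predict.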
>
> I also don't really have a good handle on what SSDs to go with. As I
> mentioned, I'm thinking of getting two for rpool/l2arc and hooking them up
> to the onboard SATA3 controller. If I can find ones that are appropriate,
> I'd carve out a third partition on them for a mirrored slog; otherwise I'd
> get a separate third one and stick it in a 3.5" bay as a dedicated slog. I
> don't think I'd bother to mirror the slog if it is on a separate SSD; I
> believe there are no longer any critical failure modes from slog failure,
> the worst case being that it fails while the pool is off-line and you need
> to import the pool manually. Any suggestions on good rpool/l2arc/slog SSDs,
> or rpool/l2arc SSDs with a different model slog SSD, would be greatly
> appreciated.
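>
> In zpool terms, with made-up slice names, the mirrored-partition variant of
> that would be roughly:
>
>   # mirrored slog across a partition on each SSD, L2ARC partitions unmirrored
>   zpool add tank log mirror c2t0d0s3 c2t1d0s3
>   zpool add tank cache c2t0d0s4 c2t1d0s4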
>
> Thanks much for reading so far :), I realize I've gone on for quite a
> bit... Any comments/feedback/suggestions on compatibility or design issues
> with what I've laid out would be very welcome. Thanks again...
>
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>

