[OmniOS-discuss] hardware specs for illumos storage/virtualization server

Denis Cheong denis at denisandyuki.net
Sun Nov 18 03:55:51 EST 2012


On Sun, Nov 18, 2012 at 5:33 PM, Paul B. Henson <henson at acm.org> wrote:

> > Intel S2600CO Motherboard (16 x DIMM slots, dual LGA2011)
>
> Interesting, although I don't like that "feature key". It sounds like it
> has on-board SAS, but unless you type a magic "key" into the BIOS it
> doesn't work? I hate that model. I'll have to take a look at the prices
> though, the supermicro board was like $500.
>

It has on-board 3G SAS (not 6G), exposed as 8 separate ports, so you need a
reverse-breakout cable to connect it to an SFF-8087 backplane ...
I don't like that configuration for a start.

To get it working, the "feature key" is actually a device you buy and plug
into a special header on the motherboard, not a serial number.

All that is moot anyway, since OmniOS / illumos does not support the SAS
chipset it uses ...

So why bother?  Just use one or two of the many expansion slots on the mobo
for a tried-and-true SAS card ...

> 8 x 16GB Hynix DDR ECC DIMMs (total 128GB)
>
> Ouch, 16GB sticks? They were pretty pricy compared to the 8GB.
>

I had a special opportunity to purchase them in an auction down under at a
great price :)  I could have got 16 of them to fully populate the system, but
running with only 1 CPU at the moment that would be a waste, and seriously,
do you need more than 128GB? :)


> > 2 x LSI 9201-16i 6G SAS cards
>
> Did you flash these with the IT firmware, or do they support JBOD out of
> the box?
>

These are non-RAID cards to begin with; they come with the IT firmware already.

These cards are really impressive: you can upgrade the firmware 'live' from
within Solaris while you are still accessing the disks, with only about 15
seconds of downtime.
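
For reference, the live flash is basically just the LSI sas2flash utility run
from within the OS - something along these lines (a sketch only; the firmware
image name here is hypothetical, so check LSI's release notes for your exact
card first):

    # list the HBAs sas2flash can see and their current firmware versions
    sas2flash -listall
    # flash controller 0 with the IT firmware image and the boot ROM
    sas2flash -c 0 -f 9201-16i_IT.bin -b mptsas2.rom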

> Norco RPC-4224 case, 24 x hotswap 6G SAS bays
>
> I looked at a couple Norco cases. They were OK, but the Supermicro one
> just seemed better built, and
>

In Australia at least, the Supermicro cases are crazy-expensive.  The Norco
are incredibly cheap.  They're not 'amazingly' well made, but the build
quality is perfectly fine.  There are some minor silly design decisions,
but hey, it's 24 hot-swappable 6G drive bays for $459 (AU$ / US$).


> > OCZ ZX Series 850W power supply (note importantly this PSU has a 30A 5V
> > rail)
>
> came with dual redundant 960w power supplies. With the Norco cases, by
> the time I added in the separately purchased power supplies, the cost
> differential wasn't that much.
>

Do your sums on each rail; you will probably not need 960W, and dual
power supplies are probably an unnecessary expense in a home situation.  You
are probably better off spending the money on a UPS ...

Note that the OCZ ZX power supply is only $175 (AU$ / US$).

How much is the Supermicro with only 16 drive bays?


> > 13 x 3TB WD Red drives
> [...]
> > With respect to the NAS drives - it's too early for me to give much
> > feedback on them, but they are definitely the ones to get, for the
> reasons
> > you gave.
>
> My only concern is performance, given they're only 5400 RPM. I just
> don't know how to qualify my I/O needs enough to tell if that'll be a
> problem.


These drives easily do 100MBytes/sec sequential each in a single-drive
config, so an 8-drive (+ parity) array gives roughly 8 x 100MBytes/sec =
800MBytes/sec of aggregate sequential throughput.  Don't forget that gigabit
ethernet is limited to about 100MBytes/sec in the best case.
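
If you want to check the single-drive figure yourself, a quick and dirty
sequential read test is just dd against the raw device (a sketch; the c2t0d0
name is hypothetical, substitute one of your own drives from 'format'):

    # read 4GB sequentially straight off one drive, bypassing the filesystem
    dd if=/dev/rdsk/c2t0d0p0 of=/dev/null bs=1024k count=4096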

Note that DVB video is at most 31.6Mbit/sec, i.e. 3.95MBytes/sec, so six
concurrent streams need 23.7MBytes/sec, which is still only about a quarter
of the bandwidth of a single drive or a single gigabit connection.

With that many drives, you should be more concerned with controlling power
consumption / heat output than with throughput (IMHO).


> How do you have them set up?


I have 10 of them in a raidz2 (i.e. 8 data drives).  Arguably with
zfs it's best to have the number of data drives be a power of 2, i.e. 2, 4,
8, 16, net of parity.  MythTV recordings, however, are on single drives -
three separate 3TB drives.  A friend of mine is trying to convince me
otherwise and to pool the recordings into the main 24TB 10-drive array ...
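
The pool itself is nothing exotic, just a single 10-disk raidz2 vdev, i.e.
something like this (a sketch with hypothetical pool and device names):

    # 10 drives, 2 of them parity, leaving 8 data drives
    zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
        c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0
    zpool status tank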


> The newegg reviews also have a lot
> of people complaining about DOA or early death, so reliability is a
> concern too.


I bought 6 of them about 2 months ago with no DOA, and have now bought 13 for
myself with no issues with any of them.  Another 2 of those are still not yet
installed.

Just bear in mind that:
1. For every 1 person complaining, there are probably 50 people who had no
issues.
2. There are effectively only 2 hard disk manufacturers left, WD and
Seagate.  Seagate has a 1-year warranty and no comparable product to the WD
Red drives.  'nuff said.


> With a 3 year warranty, it's not too big a deal if a couple
> go belly up, as long as it's not 3 in a row before the hot spare
> resilvers 8-/. Have you run bonnie++ on your setup?
>

No, but that's one of the reasons why I am a big advocate for 2-drive
parity in all arrays with a large number of drives.
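
If you do want numbers, bonnie++ is easy enough to point at a scratch dataset
on the pool (a sketch; the dataset path is hypothetical, and -s should be
well over your RAM size so the ARC doesn't hide the disks):

    # run as an unprivileged user against a scratch dataset
    bonnie++ -d /tank/bench -u nobody -s 256g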


> > 2. It would have been a good option, but BrandZ has gone by the wayside
> > long ago
>
> You mean the linux compatible zone? That would have been interesting, at
> least performance wise. I wonder how hard it would be to bring that back
> to illumos.
>

There were probably some significant compatibility issues, which I'm not
aware of, that justified terminating the project ...


> > 3. KVM as I have so far tested has some major limitations that would make
> > it impractical, i.e. both network bandwidth and disk throughput is
> terrible
> > to the point that it will cause problems with MythTV.
>
> Really?


Major limitations for me ... but it's been a while since I tested.  You can
probably find the results of my iperf tests in old posts on this mailing
list.  I've been told that these issues have been fixed in the build released
a couple of weeks ago - I have yet to verify that.
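
The test itself was nothing fancy, just iperf between the guest and the host
(a sketch; the hostname is hypothetical):

    # on the OmniOS host
    iperf -s
    # in the Solaris 11 guest: 30-second run with 4 parallel streams
    iperf -c omnios-host -t 30 -P 4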


> What kind of throughput are you seeing?


Sub-100Mbit speeds on a VM that was connected to the host's native dual
gigabit ports.


> Are you using the e1000 nic or the virtio one?


Tried both, no difference.


> Given joyent runs this stuff commercially, I can't imagine it has endemic
> performance issues.
>

Not necessarily ... Most Joyent cloud connectivity is 'internet facing', not
high-speed LAN.  If the internet-facing connectivity is limited to 100Mbit,
is anybody going to notice much?

I tried SmartOS without any happiness.  I'm really happy with the network
setup / control in Solaris: it's really flexible, and you can do almost
anything with it and Crossbow.  Suddenly with SmartOS you are dropped back to
a dumb configuration file that's worse than Linux of more than 15 years
ago ... no VLAN configuration, no bonding / link aggregation, no nothing ...
it was horrible.  You can configure all the network interfaces using the
standard Solaris commands, but it will lose the lot on the next reboot.  I
found this incredibly frustrating.
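
To give an idea of what I mean by flexible: under OmniOS that sort of setup
is a couple of dladm commands and it persists across reboots (a sketch with
hypothetical link names):

    # aggregate two physical ports, then put a tagged VLAN on top
    dladm create-aggr -l igb0 -l igb1 aggr0
    dladm create-vlan -l aggr0 -v 20 vlan20
    dladm show-link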

Anyway, in my experience with KVM under OmniOS I had two major issues with
network connectivity.  The first: I was trying to mount an 8-drive ZFS array
over iSCSI from the KVM guest (Solaris 11) to the host (OmniOS), with the
drives shared out from their raw rdsk devices.  The connectivity simply
wasn't there - it just kept timing out and dying, and would never properly
mount anything.  It seems likely that the root cause was the duplicate-packet
bug that was fixed a couple of weeks ago.
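
For what it's worth, the sharing side was plain COMSTAR, roughly one logical
unit per drive, along these lines (a sketch; device names and the discovery
address are hypothetical, and the stmf and iscsi/target services need to be
enabled):

    # on the OmniOS host: export a raw disk as an iSCSI LU
    stmfadm create-lu /dev/rdsk/c3t0d0p0
    stmfadm add-view 600144f0...        # the GUID printed by create-lu
    itadm create-target
    # in the Solaris 11 guest: discover and attach the LUs
    iscsiadm add discovery-address 10.0.0.1
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi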

The second issue was throughput: 100Mbit of real throughput for iSCSI is
really bad; gigabit is a bare minimum, and 10G is really what you want (but
who can afford that for home).

> I'm actually planning to have my mythtv storage native on zfs, and share
> via nfs from the host to the guest, so the vm disk won't really be an
> issue for that.
>

NFS still goes over the ethernet path, and even though host-to-VM traffic
stays in software and should never touch the hardware, it was still far below
gigabit throughput, when I would expect it to exceed gigabit ...
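
The NFS part of that plan is at least trivial on the host side, which is part
of the appeal (a sketch with hypothetical dataset and host names):

    # on the OmniOS host
    zfs set sharenfs=on tank/mythtv
    # in the guest
    mount -F nfs omnios-host:/tank/mythtv /mnt/recordings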

You could set up an entirely different subnet on an etherstub that is not
connected to real hardware, but that just seems like a horribly complex
workaround for a bug that should not exist in the first place ... and it may
not even help, because the VM ethernet throughput bug seemed to be more
complex than just that ...
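
For completeness, the etherstub workaround would look something like this,
with one vnic for the host side and one handed to the KVM guest (a sketch;
the link names are hypothetical):

    # a purely virtual switch that never touches the physical NICs
    dladm create-etherstub stub0
    dladm create-vnic -l stub0 hostnic0      # host side
    dladm create-vnic -l stub0 vmnic0        # attached to the KVM guest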

Denis