[OmniOS-discuss] hardware specs for illumos storage/virtualization server
Paul B. Henson
henson at acm.org
Sun Nov 18 21:20:36 EST 2012
On Sun, Nov 18, 2012 at 07:55:51PM +1100, Denis Cheong wrote:
> To get it working, the "feature key" is actually a device you buy and plug
> into a special header on the motherboard, not a serial number.
Meh, it still doesn't supply any actual functionality other than
"enabling" circuitry already on the board, right? I'm not sure if
they're still at it, but remember a couple of years ago when Intel was
selling "upgrade cards" with a magic code you could type into the BIOS
to unlock higher performance from the Pentium CPU already in the box?
I just don't like that model :(.
> So why bother, just use one / two of the many expansion slots on the mobo
> with a tried and true SAS card ...
Hmm, looks like that Intel board runs over $600 around here, more than
the Supermicro I was already looking at, which already has 8 LSI SAS
ports on board (with no magic "upgrade" required).
> Do your sums on each rail, you will probably not need a 960W, and dual
> power supplies is probably an unnecessary expense in a home situation.
Dude -- this *whole box* is an unnecessary expense in a home situation
;). I don't really want to have downtime if a power supply blows, and I
particularly don't want to have to open up the case and rip out /
reconnect a bunch of cables to replace it. The relatively low
incremental cost of dual redundants will be more than recouped in saved
frustration.
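For what it's worth, the sums-on-each-rail exercise fits on a napkin.
Here's a rough sketch; the per-item wattages are purely illustrative
assumptions on my part, not measured numbers:

    # very rough 12V budget: 12 drives spinning up at ~25W each, plus
    # ballpark figures for CPU and everything else (all assumed values)
    echo '12 * 25 + 150 + 50' | bc
    # -> 500W-ish at the worst moment, so you're right that a quality
    #    supply well under 960W would cover it; the redundancy is the
    #    part I'm actually paying extra for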
> You are probably better spending the money on a UPS ...
I already have 2 1500VA UPS's in my rack, one for each power supply :).
> Note that the OCZ ZX power supply is only $175 (AU$ / US$).
>
> How much is the Supermicro with only 16 drive bays?
About $1000. If you'd gotten dual redundant supplies for your Norco, I
bet it would have come close to that. More bays, yes, but I'm already
buying too many hard drives; I wouldn't want to be tempted to buy more...
> These drives, in a single-drive config, easily do 100MBytes/sec sequential
> each. So 8 x 100Mbytes/sec = 800Mbytes/sec transfer rate in an 8-drive (+
> parity) array.
That's streaming throughput; if you're running a lot of VMs doing
different things, you also have to worry about random I/O, and in that
scenario a given raidz vdev only delivers roughly the IOPS of a single
drive. If I go with 2 x 6-disk raidz2, I'll have 8 data drives and two
vdevs. No idea how that'll work out for my use case; guess I'll just
have to cross my fingers and see.
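For reference, that 2 x 6-disk raidz2 layout would look something like
this; the disk names are just placeholders for whatever the controller
enumerates, so treat it as a sketch rather than my actual command line:

    # two 6-disk raidz2 vdevs in one pool: 8 data disks, 4 parity disks,
    # and random I/O spread across two vdevs instead of one
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0
    zpool status tank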
> Note that DVB video = maximum of 31.6Mbit/sec, i.e. 3.95Mbytes/sec so if
In the US, an ATSC OTA stream maxes out at 19.39Mb/s, so my utilization
for a given number of streams will be less than yours; even eight
simultaneous recordings is only about 155Mbit/sec, under 20Mbytes/sec.
> With that many drives, you should be more concerned with power consumption
> / heat output control than throughput (IMHO).
Yah. That's why the Red drives seem appealing.
> I have 10 of them in a raidz2 (i.e. 8 x way data drives). Arguably with
That's big for a raidz2.
> net of parity. MythTV recordings are however on single drives - three
> separate 3tb drives. A friend of mine is trying to convince me otherwise,
> however ... and to pool the recordings into the main 24TB 10-drive array.
You don't care if you lose your recordings if a disk blows up? That's
really the deciding point. I don't really want to lose mine; I already
lost two years' worth once when a Linux software RAID blew up after a
SATA controller failure.
> I bought 6 of them about 2 months ago, no DOA, and have now bought 13 for
> myself, no issues with any of them. There are still another 2 not yet
> installed.
[...]
> 1. For every 1 person complaining, there are probably 50 people who had no
> issues.
I was going to say exactly that :). But still, there were even more DOA
complaints than usual in a hard drive review.
> No, but that's one of the reasons why I am a big advocate for 2-drive
> parity in all arrays with a large number of drives.
Assuming you have a hot spare, my understanding is that the cutover
point from raidz to raidz2 comes down to the likelihood of a fatal bit
error while resilvering. The bigger the drives, the more data has to be
read back cleanly, so now that we're talking 2, 3 or even 4TB drives I
think raidz2 is recommended even with a small number of drives.
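The back-of-the-envelope version of that argument, assuming the usual
consumer-drive spec of roughly one unrecoverable read error per 1e14
bits read (an assumption on my part, not something from this thread):

    # bits that must be read cleanly to resilver a 6-disk raidz of 4TB
    # drives after one failure: 5 surviving drives x 4TB x 8 bits/byte,
    # divided by the assumed 1e14 bits-per-URE rate
    echo '5 * 4 * 8 * 10^12 / 10^14' | bc -l
    # -> 1.6, i.e. you statistically expect at least one unrecoverable
    #    read during the rebuild, which is exactly when single parity
    #    has nothing left to cover it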
> Sub-100Mbit speeds on a VM that was connected to native dual gigabit ports
> in the host.
Hmm. I'm almost tempted to go into work on my vacation week to cut my
test box over to a gig port to see what happens :).
So what switch was on the other side of the aggregation? Port
aggregation can be pretty flaky sometimes :(. I had an OpenBSD box
recently where I was trying to bind two gig links together to a Cisco
switch; when both were hot there was a crapload of latency and packet
drop. Pull either one, and it worked fine. Never did figure that out;
ran out of time and just left it on one link.
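On the illumos side, at least, the host end of the aggregation is only a
couple of commands, so if this box ever gets two ports bonded it would
be something along these lines (link names are assumptions, and the
switch end still has to be configured to match, which is usually where
the flakiness lives):

    # create an LACP (active mode) aggregation over two gig links, then
    # check the extended state to see whether both ports are attached
    dladm create-aggr -L active -l igb0 -l igb1 aggr0
    dladm show-aggr -x aggr0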
> Not necessarily ... Most Joyent cloud connectivity is 'internet facing' not
> high speed LAN. If the internet-facing connectivity is limited to 100mbit
> is anybody going to notice much?
True. But I'd think for some of the internet-facing infrastructure
there'd be datacenter-local stuff going on: database replication,
whatever. Plus the Joyent crew seems pretty analytical; I think they'd
have tested throughput and looked into any problems they found.
> no nothing ... it was horrible. You can configure up all the network
> interfaces using the standard Solaris commands, but it will lose the lot on
> the next reboot. I found this really incredibly frustrating.
Yah, I looked at SmartOS too. I really want to customize the global zone
though, which is pretty much required for NFS/CIFS service, so it got
ruled out.
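By "customize the global zone" I mostly mean being able to run the
kernel NFS/CIFS servers there, which on OmniOS comes down to a couple of
services and ZFS properties; the dataset names here are made up for the
sake of the example:

    # enable the in-kernel NFS and CIFS servers (and their dependencies)
    svcadm enable -r nfs/server smb/server
    # share one dataset over NFS and another over CIFS
    zfs set sharenfs=on tank/vmstore
    zfs set sharesmb=name=media tank/media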
> properly mounted anything. It seems likely that the root cause of this
> issue was the duplicate packet bug that was fixed a couple of weeks ago.
I think I saw that. It wasn't actually "fixed", right, so much as worked
around? They're dropping the duplicate packets but still aren't sure
where/why they're happening.
> NFS still goes over the ethernet connectivity, and despite being host-VM
> native and should not go through the hardware, it was still far less than
> gigabit throughput, when I would expect it to exceed gigabit ...
Hmm, my quick guest-to-host test showed over 700Mb/s. With a virtual
e1000 it might be limited to gigabit no matter what, since it's
pretending to be a gig card. The vmxnet3 driver in ESX claims to be a
10G card; I'm not sure what virtio reports to the OS, I haven't tried it
yet.
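If anyone wants to repeat that kind of guest-to-host check, a plain
iperf run is all it takes; the address below is just a placeholder for
whatever your host answers on:

    # on the host (or global zone), start an iperf server
    iperf -s
    # in the guest, push TCP traffic at it for ten seconds
    iperf -c 192.168.1.10 -t 10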