[OmniOS-discuss] Slow zfs writes

Denis Cheong denis at denisandyuki.net
Mon Feb 11 17:50:35 EST 2013


You haven't enabled dedup on any zfs volumes, have you?  That will drop
performance by 30x to 300x, especially on an array that size, unless you
have an insane amount of memory.
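
If you're not sure, it's quick to check (a rough sketch; I'm assuming the
pool name 'test' from the fmadm output quoted below):

    # is dedup enabled on the pool or on any dataset under it?
    zfs get -r dedup test

    # a DEDUP ratio above 1.00x means the dedup table is in play
    zpool list test

    # dump the dedup table (DDT) histogram, if one exists
    zpool status -D test

If dedup is off everywhere, ignore me and chase the hardware errors instead.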




On Tue, Feb 12, 2013 at 3:52 AM, Ram Chander <ramquick at gmail.com> wrote:

> cp of a 1 GB file to the pool takes 20 min, whereas on a normal disk it
> takes 35 sec.
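>
> That works out to roughly 1024 MB / 1200 s = ~0.85 MB/s on the pool versus
> 1024 MB / 35 s = ~29 MB/s on the plain disk, i.e. about 34x slower.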
>
>
> On Mon, Feb 11, 2013 at 10:06 PM, Eric Sproul <esproul at omniti.com> wrote:
>
>> You still haven't said how you know that performance has dropped 30x.
>> Where are the numbers?
>>
>> On Mon, Feb 11, 2013 at 11:33 AM, Ram Chander <ramquick at gmail.com> wrote:
>> > I am not sure what happened on Jan 5. I haven't run a scrub or replaced
>> > devices for more than 3 months. The underlying hardware is a Dell
>> > MD1200, which has 48 disks.
>> > How do I recover from this? I have tried rebooting, but the issue comes
>> > back.
>> >
>> >
>> > On Mon, Feb 11, 2013 at 8:14 PM, Eric Sproul <esproul at omniti.com> wrote:
>> >>
>> >> On Mon, Feb 11, 2013 at 7:48 AM, Ram Chander <ramquick at gmail.com> wrote:
>> >> > Hi,
>> >> >
>> >> > My OI box is experiencing slow zfs writes (around 30 times slower).
>> >> > iostat reports the error below, though the pool is healthy. This has
>> >> > been happening for the past 4 days, though no change was made to the
>> >> > system. Are the hard disks faulty? Please help.
>> >>
>> >> How have you measured this 30x drop in performance?  You haven't
>> >> provided any data.
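>> >>
>> >> A simple way to get comparable numbers (a rough sketch; the paths are
>> >> assumptions, and note that /dev/zero is misleading if compression is
>> >> enabled, so use urandom or copy a real file):
>> >>
>> >>   # write 1 GB of incompressible data to the pool, then to a plain disk
>> >>   time dd if=/dev/urandom of=/test/ddtest bs=1024k count=1024
>> >>
>> >>   # watch per-device service times and %busy while the copy runs
>> >>   iostat -xn 5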
>> >>
>> >>
>> >> > c4t0d0           Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
>> >> > Vendor: iDRAC    Product: Virtual CD       Revision: 0323 Serial No:
>> >> > Size: 0.00GB <0 bytes>
>> >> > Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0
>> >> > Illegal Request: 1 Predictive Failure Analysis: 0
>> >>
>> >> I wouldn't worry about errors here.  This is a virtual device provided
>> >> by your server's lights-out management system.
>> >>
>> >>
>> >> > root at host:~# fmadm faulty
>> >> > --------------- ------------------------------------  --------------  ---------
>> >> > TIME            EVENT-ID                              MSG-ID          SEVERITY
>> >> > --------------- ------------------------------------  --------------  ---------
>> >> > Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a  ZFS-8000-HC     Major
>> >> >
>> >> > Host        : host
>> >> > Platform    : PowerEdge-R810
>> >> > Product_sn  :
>> >> >
>> >> > Fault class : fault.fs.zfs.io_failure_wait
>> >> > Affects     : zfs://pool=test
>> >> >                   faulted but still in service
>> >> > Problem in  : zfs://pool=test
>> >> >                   faulted but still in service
>> >> >
>> >> > Description : The ZFS pool has experienced currently unrecoverable I/O
>> >> >               failures.  Refer to http://illumos.org/msg/ZFS-8000-HC for
>> >> >               more information.
>> >> >
>> >> > Response    : No automated response will be taken.
>> >> >
>> >> > Impact      : Read and write I/Os cannot be serviced.
>> >> >
>> >> > Action      : Make sure the affected devices are connected, then run
>> >> >               'zpool clear'.
>> >>
>> >> What has happened since January 5?  The pool appears fine now.  Did
>> >> you run a scrub?  Replace devices?  Reboot?  It looks like ZFS
>> >> encountered an underlying problem with the hardware.
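>> >>
>> >> Once the cabling and devices check out, something along these lines
>> >> should clear the fault and show whether it recurs (a sketch; 'test' is
>> >> the pool name from your fmadm output):
>> >>
>> >>   # clear the logged I/O failure, per the suggested action above
>> >>   zpool clear test
>> >>
>> >>   # look for per-device read/write/checksum error counters
>> >>   zpool status -v test
>> >>
>> >>   # review the FMA error telemetry from around Jan 5
>> >>   fmdump -e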
>> >>
>> >> Eric

