[OmniOS-discuss] Slow zfs writes
Ram Chander
ramquick at gmail.com
Tue Feb 12 00:09:32 EST 2013
So it looks like a data-distribution issue. Initially there were two vdevs with
24 disks (disks 0-23) for close to a year, after which we added 24 more disks
and created additional vdevs. The initial vdevs have filled up, and so write
speed has declined. Now, how do I find the files that live on a particular vdev
or disk? That way I can remove them and copy them back to redistribute the data.
Is there any other way to solve this? (A rough rebalancing sketch follows the
iostat output below.)
Total capacity of pool - 98 TB
Used - 44 TB
Free - 54 TB
root@host:# zpool iostat -v
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
test         54.0T  62.7T     52  1.12K  2.16M  5.78M
  raidz1     11.2T  2.41T     13     30   176K   146K
    c2t0d0       -      -      5     18  42.1K  39.0K
    c2t1d0       -      -      5     18  42.2K  39.0K
    c2t2d0       -      -      5     18  42.5K  39.0K
    c2t3d0       -      -      5     18  42.9K  39.0K
    c2t4d0       -      -      5     18  42.6K  39.0K
  raidz1     13.3T   308G     13    100   213K   521K
    c2t5d0       -      -      5     94  50.8K   135K
    c2t6d0       -      -      5     94  51.0K   135K
    c2t7d0       -      -      5     94  50.8K   135K
    c2t8d0       -      -      5     94  51.1K   135K
    c2t9d0       -      -      5     94  51.1K   135K
  raidz1     13.4T  19.1T      9    455   743K  2.31M
    c2t12d0      -      -      3    137  69.6K   235K
    c2t13d0      -      -      3    129  69.4K   227K
    c2t14d0      -      -      3    139  69.6K   235K
    c2t15d0      -      -      3    131  69.6K   227K
    c2t16d0      -      -      3    141  69.6K   235K
    c2t17d0      -      -      3    132  69.5K   227K
    c2t18d0      -      -      3    142  69.6K   235K
    c2t19d0      -      -      3    133  69.6K   227K
    c2t20d0      -      -      3    143  69.6K   235K
    c2t21d0      -      -      3    133  69.5K   227K
    c2t22d0      -      -      3    143  69.6K   235K
    c2t23d0      -      -      3    133  69.5K   227K
  raidz1     2.44T  16.6T      5    103   327K   485K
    c2t24d0      -      -      2     48  50.8K  87.4K
    c2t25d0      -      -      2     49  50.7K  87.4K
    c2t26d0      -      -      2     49  50.8K  87.3K
    c2t27d0      -      -      2     49  50.8K  87.3K
    c2t28d0      -      -      2     49  50.8K  87.3K
    c2t29d0      -      -      2     49  50.8K  87.3K
    c2t30d0      -      -      2     49  50.8K  87.3K
  raidz1     8.18T  10.8T      5    295   374K  1.54M
    c2t31d0      -      -      2    131  58.2K   279K
    c2t32d0      -      -      2    131  58.1K   279K
    c2t33d0      -      -      2    131  58.2K   279K
    c2t34d0      -      -      2    132  58.2K   279K
    c2t35d0      -      -      2    132  58.1K   279K
    c2t36d0      -      -      2    133  58.3K   279K
    c2t37d0      -      -      2    133  58.2K   279K
  raidz1     5.42T  13.6T      5    163   383K   823K
    c2t38d0      -      -      2     61  59.4K   146K
    c2t39d0      -      -      2     61  59.3K   146K
    c2t40d0      -      -      2     61  59.4K   146K
    c2t41d0      -      -      2     61  59.4K   146K
    c2t42d0      -      -      2     61  59.3K   146K
    c2t43d0      -      -      2     62  59.2K   146K
    c2t44d0      -      -      2     62  59.3K   146K
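The imbalance is visible above: the vdevs holding the original disks are nearly full (308G and 2.41T free) while the newer vdevs have tens of terabytes free, which is why writes have slowed. There is no simple command that lists which files live on a given vdev; short of walking block pointers with zdb, the practical fix is to rewrite the data so fresh allocations spread across all vdevs. A minimal sketch, assuming the data sits under /test/data (a placeholder path), no snapshots are pinning the old blocks, and there is enough free space for a temporary copy of each file:

    # Rewrite each file in place; the copy allocates new blocks, which ZFS
    # spreads toward the emptier vdevs, and the rename drops the old ones.
    find /test/data -type f | while read -r f; do
        cp -p "$f" "$f.rebalance.tmp" && mv "$f.rebalance.tmp" "$f"
    done

    # Watch per-vdev allocation shift while the rewrite runs.
    zpool iostat -v test 30

A zfs send/receive of whole datasets into a new dataset (then renaming it into place) achieves the same effect with less per-file churn, but either way the old blocks are only freed once no snapshot still references them.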
On Tue, Feb 12, 2013 at 4:20 AM, Denis Cheong <denis at denisandyuki.net> wrote:
> You haven't enabled dedup on any ZFS volumes, have you? That will drop
> performance by 30x to 300x, especially on an array that size, unless you
> have an insane amount of memory.
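For reference, dedup is a per-dataset property, so a quick way to confirm it was never turned on (assuming the pool is named test, as in the iostat output above) is something like:

    # Show the dedup property for every dataset in the pool.
    zfs get -r dedup test

    # If dedup was ever enabled, dedup-table (DDT) statistics will appear here.
    zdb -DD test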
>
>
>
>
> On Tue, Feb 12, 2013 at 3:52 AM, Ram Chander <ramquick at gmail.com> wrote:
>
>> cp of a 1 GB file to the pool takes 20 minutes, whereas on a normal disk it
>> takes 35 seconds.
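A cp timing mixes read and write behaviour; a more direct check is a plain sequential write into the pool. A rough sketch (paths and sizes are placeholders, and writing zeros can look unrealistically fast if compression is enabled on the dataset):

    # Time a 1 GiB sequential write into the pool.
    time dd if=/dev/zero of=/test/dd.test bs=1024k count=1024

    # Compare against a directory on the system disk for reference.
    time dd if=/dev/zero of=/var/tmp/dd.test bs=1024k count=1024

Because ZFS buffers writes in memory, a small test file mostly measures the ARC; a file several times larger than RAM, or watching zpool iostat while the test runs, gives a better picture of sustained throughput.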
>>
>>
>> On Mon, Feb 11, 2013 at 10:06 PM, Eric Sproul <esproul at omniti.com> wrote:
>>
>>> You still haven't said how you know that performance has dropped 30x.
>>> Where are the numbers?
>>>
>>> On Mon, Feb 11, 2013 at 11:33 AM, Ram Chander <ramquick at gmail.com>
>>> wrote:
>>> > I am not sure what happened on Jan 5. I haven't run a scrub or replaced
>>> > devices for more than 3 months. The underlying hardware is a Dell MD1200,
>>> > which has 48 disks.
>>> > How do I recover from this? I tried rebooting, but the issue comes back.
>>> >
>>> >
>>> > On Mon, Feb 11, 2013 at 8:14 PM, Eric Sproul <esproul at omniti.com> wrote:
>>> >>
>>> >> On Mon, Feb 11, 2013 at 7:48 AM, Ram Chander <ramquick at gmail.com> wrote:
>>> >> > Hi,
>>> >> >
>>> >> > My OI box is experiencing slow ZFS writes (around 30 times slower).
>>> >> > iostat reports the error below, though the pool is healthy. This has
>>> >> > been happening for the past 4 days, though no change was made to the
>>> >> > system. Are the hard disks faulty?
>>> >> > Please help.
>>> >>
>>> >> How have you measured this 30x drop in performance? You haven't
>>> >> provided any data.
>>> >>
>>> >>
>>> >> > c4t0d0 Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
>>> >> > Vendor: iDRAC Product: Virtual CD Revision: 0323 Serial No:
>>> >> > Size: 0.00GB <0 bytes>
>>> >> > Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0
>>> >> > Illegal Request: 1 Predictive Failure Analysis: 0
>>> >>
>>> >> I wouldn't worry about errors here. This is a virtual device provided
>>> >> by your server's lights-out management system.
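As an aside, those per-device error counters come from iostat -En; a quick way to confirm that only the virtual CD device is accumulating errors, and not the c2t*d0 data disks, is a filter along these lines:

    # Print the summary line for each device; non-zero Hard/Transport error
    # counts on the c2t*d0 disks would matter, the iDRAC Virtual CD does not.
    iostat -En | grep 'Errors:'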
>>> >>
>>> >>
>>> >> > root@host:~# fmadm faulty
>>> >> > --------------- ------------------------------------ -------------- ---------
>>> >> > TIME            EVENT-ID                             MSG-ID         SEVERITY
>>> >> > --------------- ------------------------------------ -------------- ---------
>>> >> > Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a ZFS-8000-HC    Major
>>> >> >
>>> >> > Host        : host
>>> >> > Platform    : PowerEdge-R810
>>> >> > Product_sn  :
>>> >> >
>>> >> > Fault class : fault.fs.zfs.io_failure_wait
>>> >> > Affects     : zfs://pool=test
>>> >> >               faulted but still in service
>>> >> > Problem in  : zfs://pool=test
>>> >> >               faulted but still in service
>>> >> >
>>> >> > Description : The ZFS pool has experienced currently unrecoverable I/O
>>> >> >               failures. Refer to http://illumos.org/msg/ZFS-8000-HC for
>>> >> >               more information.
>>> >> >
>>> >> > Response    : No automated response will be taken.
>>> >> >
>>> >> > Impact      : Read and write I/Os cannot be serviced.
>>> >> >
>>> >> > Action      : Make sure the affected devices are connected, then run
>>> >> >               'zpool clear'.
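In case it helps, the recovery sequence the Action above refers to is roughly the following (pool name taken from the fault output; only worth running once the underlying device problem is fixed):

    # Clear the error counters and the suspended-I/O state on the pool.
    zpool clear test

    # Verify the pool reports healthy and list any remaining errors.
    zpool status -xv test

    # Confirm FMA no longer lists the fault.
    fmadm faulty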
>>> >>
>>> >> What has happened since January 5? The pool appears fine now. Did
>>> >> you run a scrub? Replace devices? Reboot? It looks like ZFS
>>> >> encountered an underlying problem with the hardware.
>>> >>
>>> >> Eric