[OmniOS-discuss] background snapshot destruction impact

Tobias Oetiker tobi at oetiker.ch
Mon Dec 23 13:35:40 UTC 2013


Today George-Cristian Bîrzan wrote:

> On Sat, Dec 21, 2013 at 3:15 PM, Tobias Oetiker <tobi at oetiker.ch> wrote:
>
> > a) would it be faster to send/receive the content of the
> >    deduplicated filesystem to a new non deduplicated
> >    and then destroy the entire filesystem (not the pool).
> >
> >
> Destroying the entire dataset directly won't help; if anything, it'll
> just prolong the agony. If you want a controlled 'deletion', you can
> zero out the current version of it (delete files individually and
> whatnot), then revert to the previous snapshot, and repeat until there
> are no snapshots left.
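
(For the record, the send/receive route I asked about in (a) above
would have looked roughly like this -- only a sketch, and the dataset
names are placeholders:)

  # copy the deduplicated dataset into a fresh one; received blocks
  # are written according to the destination's dedup setting, so make
  # sure the new dataset inherits dedup=off
  zfs set dedup=off slow
  zfs snapshot slow/data@migrate
  zfs send slow/data@migrate | zfs receive slow/data.nodedup
  zfs destroy -r slow/data   # finally drop the deduplicated original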

so I destroyed the dataset directly anyway ... :-) it took roughly
13 hours to free the 10 TB. Two things to note:

a) Monitoring the change in the values reported by running
   "zpool get freeing pool" repeatedly (a minimal loop for this is
   sketched at the end of this mail), the freeing looked like a more
   or less linear process, and I expected it to last 60 hours. But at
   least in my case the process sped up about 20-fold after roughly
   11 hours ... this may coincide with the point in time at which I
   had turned off deduplication on the dataset.

b) Also after 11 hours, the server crashed. Its last words were:

Dec 23 11:01:36 fugu genunix: [ID 843051 kern.info] NOTICE: SUNW-MSG-ID: SUNOS-8000-0G, TYPE: Error, VER: 1, SEVERITY: Major
Dec 23 11:01:36 fugu unix: [ID 836849 kern.notice]
Dec 23 11:01:36 fugu ^Mpanic[cpu0]/thread=ffffff00f40c5c40:
Dec 23 11:01:36 fugu genunix: [ID 918906 kern.notice] I/O to pool 'slow' appears to be hung.
Dec 23 11:01:36 fugu unix: [ID 100000 kern.notice]
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5a20 zfs:vdev_deadman+10b ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5a70 zfs:vdev_deadman+4a ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5ac0 zfs:vdev_deadman+4a ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5af0 zfs:spa_deadman+ad ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5b90 genunix:cyclic_softint+f3 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5ba0 unix:cbe_low_level+14 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5bf0 unix:av_dispatch_softvect+78 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f40c5c20 unix:dispatch_softint+39 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb27d0 unix:switch_sp_and_call+13 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2810 unix:dosoftint+44 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2870 unix:do_interrupt+ba ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2880 unix:cmnint+ba ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb29d0 unix:mutex_enter+10 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2a00 unix:page_destroy+70 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2a40 genunix:fs_dispose+32 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2ac0 genunix:swap_dispose+a9 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2b50 genunix:fop_dispose+91 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2b90 genunix:anon_decref+13f ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2c00 genunix:anon_free+74 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2c50 genunix:segvn_free+242 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2c80 genunix:seg_free+30 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2d60 genunix:segvn_unmap+cde ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2dc0 genunix:as_free+e7 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2df0 genunix:relvm+220 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2e80 genunix:proc_exit+454 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2ea0 genunix:exit+15 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2ec0 genunix:rexit+18 ()
Dec 23 11:01:36 fugu genunix: [ID 655072 kern.notice] ffffff00f7bb2f10 unix:brand_sys_sysenter+1c9 ()
Dec 23 11:01:36 fugu unix: [ID 100000 kern.notice]
Dec 23 11:01:36 fugu genunix: [ID 672855 kern.notice] syncing file systems...
Dec 23 11:01:38 fugu genunix: [ID 904073 kern.notice]  done
Dec 23 11:01:39 fugu genunix: [ID 111219 kern.notice] dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
Dec 23 11:05:35 fugu genunix: [ID 100000 kern.notice]
Dec 23 11:05:35 fugu genunix: [ID 665016 kern.notice] ^M 26% done: 3709201 pages dumped,
Dec 23 11:05:35 fugu genunix: [ID 495082 kern.notice] dump failed: error 28
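
(The panic itself is the ZFS deadman watchdog concluding that I/O to
the pool had hung, as the vdev_deadman/spa_deadman frames show. If you
would rather risk a wedged pool than a panic while such a destroy is
grinding along, the watchdog can be switched off via /etc/system --
a sketch, to be checked against your illumos release:)

  * /etc/system: disable the ZFS deadman watchdog (takes a reboot)
  set zfs:zfs_deadman_enabled = 0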

As you can see, the dump did not succeed:

savecore -vf vmdump.0
savecore: incomplete dump on dump device
savecore: System dump time: Mon Dec 23 11:01:39 2013

savecore: saving system crash dump in /var/crash/unknown/{unix,vmcore}.0
Constructing namelist /var/crash/unknown/unix.0
Constructing corefile /var/crash/unknown/vmcore.0
pfn 32542208 not found for as=fffffffffbc30ac0, va=ffffff0000000000
pfn 32542209 not found for as=fffffffffbc30ac0, va=ffffff0000001000
pfn 32542210 not found for as=fffffffffbc30ac0, va=ffffff0000002000
pfn 32542211 not found for as=fffffffffbc30ac0, va=ffffff0000003000
pfn 32542212 not found for as=fffffffffbc30ac0, va=ffffff0000004000
pfn 32542213 not found for as=fffffffffbc30ac0, va=ffffff0000005000
pfn 32542214 not found for as=fffffffffbc30ac0, va=ffffff0000006000
pfn 32542215 not found for as=fffffffffbc30ac0, va=ffffff0000007000
pfn 32542216 not found for as=fffffffffbc30ac0, va=ffffff0000008000
pfn 32542217 not found for as=fffffffffbc30ac0, va=ffffff0000009000
 1:36  99% donesavecore: stream tag 1862 not in range 1..1
savecore: bad summary magic 62063c2c
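
Error 28 is ENOSPC: the dump gave up at 26% because the dump zvol is
too small for this machine's memory footprint. Checking and growing it
would go roughly like this (the 64G figure is made up; size it to fit
your RAM and dump content setting):

  dumpadm                         # show current dump device and content
  zfs get volsize rpool/dump      # current size of the dump zvol
  zfs set volsize=64G rpool/dump  # grow it so a full dump fits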


The server then came back up and completed the destroy without a hitch,
picking up pace very quickly.
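
For completeness, the monitoring loop mentioned under (a) was nothing
fancier than this (interval and pool name are arbitrary):

  # watch the pool's remaining-to-free counter once a minute
  while true; do
      date
      zpool get freeing slow
      sleep 60
  done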

cheers
tobi

-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
http://it.oetiker.ch tobi at oetiker.ch ++41 62 775 9902 / sb: -9900

