[OmniOS-discuss] ZFS pool allocation remains after removing all files
Richard Elling
richard.elling at richardelling.com
Fri Oct 10 17:00:47 UTC 2014
On Oct 9, 2014, at 4:58 PM, Rune Tipsmark <rt at steait.net> wrote:
> Just updated to latest version r151012
>
> Still same... I checked for vdev settings, is there another place I can check?
It won't be a ZFS feature. On the initiator, use something like sg3_utils thusly:
[root@congo ~]# sg_opcodes /dev/rdsk/c0t5000C50030117C3Bd0
SEAGATE ST800FM0043 0005
Peripheral device type: disk
Opcode  Service    CDB   Name
 (hex)  action(h)  size
-----------------------------------------------
00 6 Test Unit Ready
01 6 Rezero Unit
03 6 Request Sense
04 6 Format Unit
07 6 Reassign Blocks
08 6 Read(6)
0a 6 Write(6)
0b 6 Seek(6)
12 6 Inquiry
15 6 Mode select(6)
16 6 Reserve(6)
17 6 Release(6)
1a 6 Mode sense(6)
1b 6 Start stop unit
1c 6 Receive diagnostic results
1d 6 Send diagnostic
25 10 Read capacity(10)
28 10 Read(10)
2a 10 Write(10)
2b 10 Seek(10)
2e 10 Write and verify(10)
2f 10 Verify(10)
35 10 Synchronize cache(10)
37 10 Read defect data(10)
3b 0 10 Write buffer, combined header and data [or multiple modes]
3b 2 10 Write buffer, data
3b 4 10 Write buffer, download microcode and activate
3b 5 10 Write buffer, download microcode, save, and activate
3b 6 10 Write buffer, download microcode with offsets and activate
3b 7 10 Write buffer, download microcode with offsets, save, and activate
3b a 10 Write buffer, write data to echo buffer
3b d 10 Write buffer, download microcode with offsets, select activation events, save and defer activate
3b e 10 Write buffer, download microcode with offsets, save and defer activate
3b f 10 Write buffer, activate deferred microcode
3b 1a 10 Write buffer, enable expander comms protocol and echo buffer
3b 1c 10 Write buffer, download application client error history
3c 0 10 Read buffer, combined header and data [or multiple modes]
3c 2 10 Read buffer, data
3c 3 10 Read buffer, descriptor
3c a 10 Read buffer, read data from echo buffer
3c b 10 Read buffer, echo buffer descriptor
3c 1c 10 Read buffer, error history
3e 10 Read long(10)
3f 10 Write long(10)
41 10 Write same(10)
42 10 Unmap
48 2 10 Sanitize, block erase
48 1f 10 Sanitize, exit failure mode
4c 10 Log select
4d 10 Log sense
55 10 Mode select(10)
56 10 Reserve(10)
57 10 Release(10)
5a 10 Mode sense(10)
5e 0 10 Persistent reserve in, read keys
5e 1 10 Persistent reserve in, read reservation
5e 2 10 Persistent reserve in, report capabilities
5e 3 10 Persistent reserve in, read full status
5f 0 10 Persistent reserve out, register
5f 1 10 Persistent reserve out, reserve
5f 2 10 Persistent reserve out, release
5f 3 10 Persistent reserve out, clear
5f 4 10 Persistent reserve out, preempt
5f 5 10 Persistent reserve out, preempt and abort
5f 6 10 Persistent reserve out, register and ignore existing key
5f 7 10 Persistent reserve out, register and move
7f 9 32 Read(32)
7f a 32 Verify(32)
7f b 32 Write(32)
7f c 32 Write and verify(32)
7f d 32 Write same(32)
88 16 Read(16)
8a 16 Write(16)
8e 16 Write and verify(16)
8f 16 Verify(16)
91 16 Synchronize cache(16)
93 16 Write same(16)
9e 10 16 Read capacity(16)
9e 11 16 Read long(16)
9f 11 16 Write long(16)
a0 12 Report luns
a3 5 12 Report identifying information
a3 c 12 Report supported operation codes
a3 d 12 Report supported task management functions
a4 6 12 Set identifying information
a4 f 12 Set timestamp
b7 12 Read defect data(12)
e0 10 Vendor specific [0xe0]
e1 10 Vendor specific [0xe1]
e2 10 Vendor specific [0xe2]
e6 10 Vendor specific [0xe6]
f7 10 Vendor specific [0xf7]
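What matters for space reclamation in that listing is whether the target advertises Unmap (opcode 0x42) or Write same (0x41/0x93). A minimal sketch for checking a saved sg_opcodes listing programmatically (the helper name is hypothetical; the sample lines are copied from the output above):

```python
# Sketch: scan an sg_opcodes listing for the opcodes relevant to
# space reclamation: UNMAP (0x42) and WRITE SAME (0x41/0x93).
def supports_reclaim(listing: str) -> bool:
    opcodes = set()
    for line in listing.splitlines():
        fields = line.split()
        if fields:
            opcodes.add(fields[0].lower())  # first column is the opcode in hex
    return bool(opcodes & {"42", "41", "93"})

sample = """\
41           10  Write same(10)
42           10  Unmap
93           16  Write same(16)
"""
print(supports_reclaim(sample))  # True for the Seagate listing above
```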
I'd try it for iSCSI, but since I no longer use iSCSI (yea AoE!) I can't test :-P
-- richard
>
> root@zfs10:/root# echo "::zfs_params" | mdb -k | grep vdev
> zfs_vdev_max_active = 0x3e8
> zfs_vdev_sync_read_min_active = 0xa
> zfs_vdev_sync_read_max_active = 0xa
> zfs_vdev_sync_write_min_active = 0xa
> zfs_vdev_sync_write_max_active = 0xa
> zfs_vdev_async_read_min_active = 0x1
> zfs_vdev_async_read_max_active = 0x3
> zfs_vdev_async_write_min_active = 0x1
> zfs_vdev_async_write_max_active = 0xa
> zfs_vdev_scrub_min_active = 0x1
> zfs_vdev_scrub_max_active = 0x2
> zfs_vdev_async_write_active_min_dirty_percent = 0x1e
> zfs_vdev_async_write_active_max_dirty_percent = 0x3c
> mdb: variable reference_tracking_enable not found: unknown symbol name
> mdb: variable reference_history not found: unknown symbol name
> zfs_vdev_cache_max = 0x4000
> zfs_vdev_cache_size = 0x0
> zfs_vdev_cache_bshift = 0x10
> vdev_mirror_shift = 0x15
> zfs_vdev_aggregation_limit = 0x20000
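For reference, mdb prints these values in hex; a quick conversion sketch (values copied from the listing above) makes the queue depths easier to read:

```python
# ::zfs_params values are printed in hex; convert a few from the
# listing above to decimal for readability.
params = {
    "zfs_vdev_max_active": 0x3e8,                           # 1000
    "zfs_vdev_sync_read_max_active": 0xa,                   # 10
    "zfs_vdev_async_write_active_min_dirty_percent": 0x1e,  # 30
    "zfs_vdev_async_write_active_max_dirty_percent": 0x3c,  # 60
    "zfs_vdev_aggregation_limit": 0x20000,                  # 131072 (128 KiB)
}
for name, value in params.items():
    print(f"{name} = {value}")
```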
>
> Rune
>
> -----Original Message-----
> From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On Behalf Of Rune Tipsmark
> Sent: Thursday, October 09, 2014 3:33 PM
> To: Dan McDonald
> Cc: omnios-discuss
> Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files
>
> Is there a command I can run to check?
>
> Rune
>
> -----Original Message-----
> From: Dan McDonald [mailto:danmcd at omniti.com]
> Sent: Thursday, October 09, 2014 11:51 AM
> To: Rune Tipsmark
> Cc: omnios-discuss
> Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files
>
>
> On Oct 9, 2014, at 2:38 PM, Rune Tipsmark <rt at steait.net> wrote:
>
>> So if I just upgrade to latest it should be supported?
>
> It should be available in r151010! That's why I'm surprised.
>
> Dan
>
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss