[OmniOS-discuss] A useful tidbit or two for ESX admins running OmniOS Fibre Targets

Richard Jahnel rjahnel at ellipseinc.com
Mon Dec 14 22:21:51 UTC 2015


All I can say for sure is that the problem is repeatable with sufficient time.

The test we use to see whether or not a storage volume is susceptible is to create and eager-zero a 4 TB VMDK on the volume, or as large a VMDK as the volume can hold.

Most of the time it panics within the first TB; so far it has always panicked before the third.

Volumes made with only the three flags previously listed do not panic and have been tested with eager zeros as large as 8 TB. This has been tested against r151014 and r151016.
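The repro described above can be sketched from the ESXi shell with vmkfstools. The datastore path and file name below are placeholders, not from the thread:

```shell
# Create an eager-zeroed-thick 4 TB VMDK on the suspect datastore.
# Eager zeroing writes zeros to every block up front; with VAAI enabled
# ESXi offloads that zeroing to the target as SCSI WRITE_SAME.
# <datastore> and <vm> are placeholders for the real paths.
vmkfstools -c 4096g -d eagerzeroedthick \
    /vmfs/volumes/<datastore>/<vm>/eagerzero-test.vmdk
```

If the target is susceptible, the panic typically occurs well before the zeroing completes.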

-----Original Message-----
From: Dan McDonald [mailto:danmcd at omniti.com]
Sent: Monday, December 14, 2015 4:14 PM
To: Richard Jahnel <rjahnel at ellipseinc.com>; Dan McDonald <danmcd at omniti.com>
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] A useful tidbit or two for ESX admins running OmniOS Fibre Targets


> On Dec 14, 2015, at 2:53 PM, Richard Jahnel <rjahnel at ellipseinc.com> wrote:
>
> Yes, you looked at it around Oct 12th or 13th of this year.

And it was... interesting:

panic[cpu11]/thread=ffffff15a1ea4780:
hati_pte_map: flags & HAT_LOAD_REMAP


ffffff009a23f850 unix:hati_pte_map+3ab ()
ffffff009a23f8e0 unix:hati_load_common+139 ()
ffffff009a23f960 unix:hat_memload+75 ()
ffffff009a23fa80 genunix:segvn_faultpage+730 ()
ffffff009a23fc50 genunix:segvn_fault+8e6 ()
ffffff009a23fd60 genunix:as_fault+31a ()
ffffff009a23fdf0 unix:pagefault+96 ()
ffffff009a23ff00 unix:trap+2c7 ()
ffffff009a23ff10 unix:cmntrap+e6 ()


Nothing to indicate ZFS or FC... that's a VM subsystem fault.  I do, however, see 78 threads all doing SCSI WRITE_SAME.
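For context on those WRITE_SAME threads: eager zeroing is issued by ESXi via the VAAI block-zero primitive, which can be toggled per host. As a hedged diagnostic sketch (not something suggested in the thread itself), disabling the offload makes ESXi fall back to plain WRITEs, which can help confirm whether WRITE_SAME is the trigger:

```shell
# Show the current state of the VAAI block-zero (WRITE_SAME) offload
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit

# Temporarily disable the offload; eager zeroing then uses ordinary WRITEs
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
```

Re-enable it afterward with `-i 1`; disabling VAAI offloads host-wide has a performance cost.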

Dan

