[OmniOS-discuss] kernel panic "kernel heap corruption detected" when creating zero eager disks

wuffers moo at wuffers.net
Wed Mar 25 18:17:28 UTC 2015

On Wed, Mar 25, 2015 at 12:09 PM, Dan McDonald <danmcd at omniti.com> wrote:

> > Can't I just set this dynamically like so (so I can potentially skip 2
> reboots)?
> >
> > echo kmem_flags/W0xf | mdb -kw
> No, because those are read at kmem cache creation time at the system's
> start.
Ahh, if I had RTFM'd the whole doc, I would have caught this excerpt:

" These are set in conjunction with the global kmem_flags variable at cache
creation time. Setting kmem_flags while the system is running has no effect
on the debugging behavior, except for subsequently created caches (which is
rare after boot-up)."
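For anyone following along: since the flags are latched at cache creation, the usual way (per the illumos/Solaris kmem debugging docs) is to set them in /etc/system so they take effect at the next boot, e.g.:

```
* /etc/system -- enable full kmem debugging at next boot
* (read at kmem cache creation time, so a reboot is required)
set kmem_flags=0xf
```

After the reboot you can confirm the live value with `echo kmem_flags/X | mdb -k`.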

> I can't comment myself on Fibre/STMF, as we do IB SRP here. I would say
> it's been "fairly" stable (can run for months before I see an issue), but
> have seen some weird hangups where I had to reboot the head unit (but no
> kernel panics).
> You reproduce this bug by configuring things a specific way, right?  I ask
> because you seem to have been running okay until you fell down this
> particular panic rabbit hole with a particular set of things, correct?
The panic happens when I try to create a 10+ TB eager-zeroed vmdk with the
vSphere fat client. I'm assuming it will happen a third time if I repeat the
same steps. Since I can't save myself the two reboots, I will most likely try
without the usual Hyper-V and VMware host loads and just create the vmdk to
see what happens. So I would say no, I'm not changing any settings or
configuration, just doing "normal" things like creating disks, although they
are much bigger than anything I've created before. I do have a 50TB LU for
the Hyper-V hosts, but I've never tried to create a disk that big on it. If I
have time I'll try it on a Hyper-V VM as well.
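(For reference, the same eager-zeroed allocation can also be driven from the ESXi shell with vmkfstools instead of the fat client, which may make the repro easier to script; the datastore path below is hypothetical:)

```
# Hypothetical datastore/path: create a 10 TB eager-zeroed thick VMDK,
# which forces the zeroing I/O against the backing LU at creation time.
vmkfstools -c 10T -d eagerzeroedthick /vmfs/volumes/datastore1/test/bigdisk.vmdk
```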
