From jimklimov at cos.ru Fri Nov 1 12:36:46 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Fri, 01 Nov 2013 13:36:46 +0100 Subject: [OmniOS-discuss] java : feca beba : endian problem? In-Reply-To: <20131031221118.GA26743@wolfman.devio.us> References: <20131031175433.GA6982@wolfman.devio.us> <20131031221118.GA26743@wolfman.devio.us> Message-ID: <5273A05E.1080609@cos.ru> On 2013-10-31 23:13, Mayuresh Kathe wrote: > i checked on openbsd running on x86-64, the header is still > the same, "fecabeba". > as per the material i've read on java, it should always be, > "cafebabe". > am i doing something wrong somewhere? As a wild guess, this magic number allows the system to set up its file-reading routines in such a way that the number would resolve to a predefined value? I.e. if it differs, the parser should try different endianness/byteswap filters? //Jim From jimklimov at cos.ru Fri Nov 1 13:04:52 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Fri, 01 Nov 2013 14:04:52 +0100 Subject: [OmniOS-discuss] Boot from mirrored USB? In-Reply-To: <5272933A.6050501@hfg-gmuend.de> References: <4EFE3929-FA6A-4FC7-934E-C9A5DA5DBC21@fluffy.co.uk> <20131031135339.GG2186@eschaton.local> <5272933A.6050501@hfg-gmuend.de> Message-ID: <5273A6F4.4050001@cos.ru> On 2013-10-31 18:28, G?nther Alka wrote: > Only problem: currently you can only boot from usb when you plugin the > sticks in those ports > that were used during setup. It would be perfect it you may boot from > any usb port maybe on similar > mainboards. But this seems a Grub restriction where the physical > bootdevice is hardcoded during setup. This is also a problem in other cases, such as attempts to switch from LegacyIDE to SATA on the motherboard. In my case on a laptop, booting from USB causes the SATA devices to be enumerated differently than if I boot from a SATA disk, so legacy-ide is the only practically working option :( My research into the problem led me to believe that it is possible to code a different solution into the bootloader and rpool-importer, but my attempts to complete this have stalled :( I may be wrong (some time has passed) but somewhere here: http://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libgrubmgmt/common/libgrub_fs.c ...grub locates the "physpath" strings from the rpool-component labels and passes these strings to the kernel (as well as the rootfs dataset by number, not name). Then in zfs_mountroot() at http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/zfs_vfsops.c the booted kernel tries to mount the provided device path - which fails due to renamed devices. I tried to make GRUB also detect and pass the GUID of the pool for the process to fall back in case of absent devices, and this required to implement appropriate ascii_to_uint64() and/or ascii_to_uint64_hex(), and the opposite functions in both GRUB and ZFS, as well as to figure out the proper use of nvlists and spa_import_rootpool() and stuff like that. In short, this seemed like an easy quest for someone who knows their way around the ZFS code, but I was blind there and did fail. 
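(For reference, the label fields involved can be inspected on a live system with zdb; the device path below is only an example - point it at one of your own rpool slices:

   zdb -l /dev/rdsk/c1t0d0s0

Among the nvlist fields printed for each label you should see 'path' and 'phys_path', which is the string GRUB passes along today, and 'pool_guid', which is the value I was trying to pass as a fallback.)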
I believe the GUID value passing from GRUB to kernel worked (but not completely certain of that though), and mounting of an rpool by GUID apparently did not succeed, and then I had other pressing quests :( If anyone is interested to pick up, I can share the diffs of what I did though :) HTH, //Jim Klimov From ikaufman at eng.ucsd.edu Fri Nov 1 16:28:09 2013 From: ikaufman at eng.ucsd.edu (Ian Kaufman) Date: Fri, 1 Nov 2013 09:28:09 -0700 Subject: [OmniOS-discuss] Physical slot based disk names for LSI SAS on OmniOS? In-Reply-To: References: <20131030202506.664471A0487@apps0.cs.toronto.edu> <52729B8B.4080104@hfg-gmuend.de> <3C6A3499-FE2C-4E95-9F58-E96A63089212@gmail.com> Message-ID: You can also use the sas2ircu directly via the command line. However, G?nther's efforts and integration into the napp-it GUI make it so much easier to use. Ian On Thu, Oct 31, 2013 at 2:37 PM, alka wrote: > hi Felix > > its part of the monitor extension > (free for less than 8 disks) > > see Menu disks >> SAS2 extension > > > Am 31.10.2013 um 19:26 schrieb Felix Nielsen: > > Hi G?nther, > > Is that feature included in napp-it? If so where and if not how do I get it > :) > > Thanks > Felix > > Btw. napp-it rocks :) > > Den 31/10/2013 kl. 19.03 skrev G?nther Alka : > > I have added a physical slot detection that displays physical > > Slot, WWN and serials together with the option to switch on the red > > alert led on supported backplanes within napp-it with the help of > > sas2ircu (a LSI tool that displays slot and disk serial). > > > > On 30.10.2013 21:25, Chris Siebenmann wrote: > > This is a long shot and I suspect that the answer is no, but: in > > OmniOS, is it possible somehow to have disk device names for disks > > behind LSI SAS controllers that are based on the physical slot ('phy') > > that the disk is found in instead of the disk's reported WNN or serial > > number? > > > (I understand that this is only easy if there are no SAS expanders > > involved, which is the situation that I care about.) > > > I know that I can recover this information from prtconf -v with > > appropriate mangling, but our contemplated environment would be much > > more manageable if we could directly use physical slot based device > > naming. > > > Thanks in advance. I'll post a summary if there are any private > > replies (for anything besides 'nope, can't do it'). > > > - cks > > _______________________________________________ > > OmniOS-discuss mailing list > > OmniOS-discuss at lists.omniti.com > > http://lists.omniti.com/mailman/listinfo/omnios-discuss > > > > _______________________________________________ > > OmniOS-discuss mailing list > > OmniOS-discuss at lists.omniti.com > > http://lists.omniti.com/mailman/listinfo/omnios-discuss > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > > > -- > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -- Ian Kaufman Research Systems Administrator UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu From alka at hfg-gmuend.de Fri Nov 1 18:03:56 2013 From: alka at hfg-gmuend.de (alka) Date: Fri, 1 Nov 2013 19:03:56 +0100 Subject: [OmniOS-discuss] Boot from mirrored USB? 
In-Reply-To: <5273A6F4.4050001@cos.ru> References: <4EFE3929-FA6A-4FC7-934E-C9A5DA5DBC21@fluffy.co.uk> <20131031135339.GG2186@eschaton.local> <5272933A.6050501@hfg-gmuend.de> <5273A6F4.4050001@cos.ru> Message-ID: <12120527-F031-4478-9561-C8DA6A04D40F@hfg-gmuend.de> Sadly I cannot help to overcome this behaviour. If one could plugin any bootable device in any slot (sata, ide or usb) and it would bootup just like plain old stupid DOS with the bios bootup order- that would be a huge improvement in handling - allowing " preconfigured deploy and emergency disks" I really hope someone with more Grub knowledge is willing to look at this item as well and help to fix it. Gea Am 01.11.2013 um 14:04 schrieb Jim Klimov: > On 2013-10-31 18:28, G?nther Alka wrote: >> Only problem: currently you can only boot from usb when you plugin the >> sticks in those ports >> that were used during setup. It would be perfect it you may boot from >> any usb port maybe on similar >> mainboards. But this seems a Grub restriction where the physical >> bootdevice is hardcoded during setup. > > This is also a problem in other cases, such as attempts to switch from > LegacyIDE to SATA on the motherboard. In my case on a laptop, booting > from USB causes the SATA devices to be enumerated differently than if > I boot from a SATA disk, so legacy-ide is the only practically working > option :( > > My research into the problem led me to believe that it is possible to > code a different solution into the bootloader and rpool-importer, but > my attempts to complete this have stalled :( > > I may be wrong (some time has passed) but somewhere here: > http://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libgrubmgmt/common/libgrub_fs.c > ...grub locates the "physpath" strings from the rpool-component labels > and passes these strings to the kernel (as well as the rootfs dataset > by number, not name). > > Then in zfs_mountroot() at > http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/zfs_vfsops.c > the booted kernel tries to mount the provided device path - which > fails due to renamed devices. > > I tried to make GRUB also detect and pass the GUID of the pool for the > process to fall back in case of absent devices, and this required to > implement appropriate ascii_to_uint64() and/or ascii_to_uint64_hex(), > and the opposite functions in both GRUB and ZFS, as well as to figure > out the proper use of nvlists and spa_import_rootpool() and stuff like > that. In short, this seemed like an easy quest for someone who knows > their way around the ZFS code, but I was blind there and did fail. > I believe the GUID value passing from GRUB to kernel worked (but not > completely certain of that though), and mounting of an rpool by GUID > apparently did not succeed, and then I had other pressing quests :( > If anyone is interested to pick up, I can share the diffs of what > I did though :) > > HTH, > //Jim Klimov > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimklimov at cos.ru Fri Nov 1 19:11:11 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Fri, 01 Nov 2013 20:11:11 +0100 Subject: [OmniOS-discuss] Boot from mirrored USB? 
In-Reply-To: <12120527-F031-4478-9561-C8DA6A04D40F@hfg-gmuend.de> References: <4EFE3929-FA6A-4FC7-934E-C9A5DA5DBC21@fluffy.co.uk> <20131031135339.GG2186@eschaton.local> <5272933A.6050501@hfg-gmuend.de> <5273A6F4.4050001@cos.ru> <12120527-F031-4478-9561-C8DA6A04D40F@hfg-gmuend.de> Message-ID: <5273FCCF.6020401@cos.ru> On 2013-11-01 19:03, alka wrote: > Sadly I cannot help to overcome this behaviour. > If one could plugin any bootable device in any slot > (sata, ide or usb) and it would bootup just like > plain old stupid DOS with the bios bootup order- > that would be a huge improvement in handling - > allowing " preconfigured deploy and emergency disks" > > I really hope someone with more Grub knowledge > is willing to look at this item as well and help to fix it. See if Alex Eremin's revival of Failsafe Boot helps for such cases? http://alexeremin.blogspot.ru/2013/05/firefly-failsafe-image-for-illumos.html AFAIK, this should use GRUB to load a failsafe miniroot into RAM from the provided filesystem (including from ZFS), and does not rely on the kernel rpool import at all. The project is distributed as ISO and USB images, but according to comments, the former can be unpacked and files from it can be used from grub: # Download ISO, unpack ISO and copy files to boot-url # The ramdisk is compressed with gzip, which must be unpacked # to work with iPXE # mv firefly firefly.gz # gunzip firefly.gz kernel ${base-url}/platform/i86pc/kernel/amd64/unix module ${base-url}/firefly Hmm... I don't think I've after all tried this with my laptop... And I don't think that the image building recipe was described or opensourced either, as this was a PoC release... //Jim From frederic.alix at fredalix.com Fri Nov 1 21:26:33 2013 From: frederic.alix at fredalix.com (=?ISO-8859-1?Q?Fr=E9d=E9ric_Alix?=) Date: Fri, 1 Nov 2013 22:26:33 +0100 Subject: [OmniOS-discuss] French traduction of OmniOS documentation Message-ID: Hi, Few weeks ago, i started to translate the OmniOS documentation in french. I asked to some sysadmin friends, what they are thinking about it. And conclusion, for people that know Solaris 10 or 11, it's not a problem. They know what is a zone, zfs,... but for others, it's more difficult for them to use OmniOS. I think the OmniOS's wiki is good for a syadmin that already know Solaris, but for newbie, it's not enought. Now, i don't know if it's a good idea to finish to translate the wiki. I would like to start a OmniOS survival guide in english and french. What are you thinking about that ? It can be good to add another translation, like russian, no ? See you, fred N.b: For peoples who can read french, you'll find here the OmniOS wiki's installation part in french. http://www.fredalix.com/txt/Omnios-%20Installation-TradFR.txt -------------- next part -------------- An HTML attachment was scrubbed... URL: From alain.odea at gmail.com Sun Nov 3 00:49:07 2013 From: alain.odea at gmail.com (Alain O'Dea) Date: Sun, 3 Nov 2013 00:49:07 +0000 Subject: [OmniOS-discuss] Net::SSH::AuthenticationFailed failed when using knife-ec2 with OmniOS AMI Message-ID: Hi Folks: I'm having trouble bootstrapping OmniOS instances in EC2. 
Here's what I run (sanitized): knife ec2 server create --ssh-gateway aws-bastion.example.com --region us-east-1 --subnet subnet-12345678 --security-group-ids sg-12345678 --flavor m1.small --image ami-35eb835c --distro omnios-smartmachine --node-name ${USER/_/.}-omnios-$(uuid) Here's the output (sanitized): Instance ID: i-12345678 Flavor: m1.small Image: ami-35eb835c Region: us-east-1 Availability Zone: us-east-1a Security Group Ids: sg-12345678 Tags: Name: alain.odea-omnios-12345678-abcd-ef12-1234-1234567890ab SSH Key: ssh.key.name Waiting for instance..................... Subnet ID: subnet-12345678 Private IP Address: 10.0.1.123 .ERROR: Net::SSH::AuthenticationFailed: Net::SSH::AuthenticationFailed Here's my hypothesis: The OmniOS EC2 AMI brings up the SSH server before provisioning SSH public keys. Is there a good workaround for this or should I bring up the machine with the EC2 API directly and use knife bootstrap after? Thanks, Alain From esproul at omniti.com Mon Nov 4 15:25:33 2013 From: esproul at omniti.com (Eric Sproul) Date: Mon, 4 Nov 2013 10:25:33 -0500 Subject: [OmniOS-discuss] Net::SSH::AuthenticationFailed failed when using knife-ec2 with OmniOS AMI In-Reply-To: References: Message-ID: On Sat, Nov 2, 2013 at 8:49 PM, Alain O'Dea wrote: > Here's my hypothesis: The OmniOS EC2 AMI brings up the SSH server > before provisioning SSH public keys. > > Is there a good workaround for this or should I bring up the machine > with the EC2 API directly and use knife bootstrap after? Your hypothesis is accurate. There is a transient SMF service called ec2-credential that fetches the pubkey during startup. That service is not dependent on the ssh service though-- SMF starts independent services in parallel, so this is likely a race. It would probably be best to use the EC2 API directly for now, and bootstrap after the instance comes up. The ec2-credential method script is here: https://github.com/omniti-labs/omnios-build/blob/r151006/build/ec2-credential/files/install-ec2-credential Eric From eugen at leitl.org Wed Nov 6 13:35:26 2013 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Nov 2013 14:35:26 +0100 Subject: [OmniOS-discuss] SMCI-H8DGi-F with 2x LSI-SAS-9211-8i-SGL Message-ID: <20131106133526.GQ5661@leitl.org> I'm looking to build an all-in-one OmniOS+nappit on VMWare ESXi with 2x 6234 Opterons on a SMCI-H8DGi-F with 2x LSI-SAS-9211-8i-SGL -- no SAS backplane. The hardware is 2x Intel SSDrive DC S3700 for a mirrored slog and 4x Micron Crucial M500 960 GByte for a pool (mirrored stripe, or somesuch -- there's probably no point in a raidz1 or raidz2). Is that a sensible setup? Any problems? How many cores/memory would you give to the zfs VMWare guest? Would 4x and 8 GBytes RAM enough? Should I go for 16 GBytes? (machine has 128 GBytes total, will be running Windows 2012 + Orkackle + cartridge). Thanks. From johan.kragsterman at capvert.se Wed Nov 6 18:07:06 2013 From: johan.kragsterman at capvert.se (Johan Kragsterman) Date: Wed, 6 Nov 2013 19:07:06 +0100 Subject: [OmniOS-discuss] snapshots on an exported pool Message-ID: Hi! I've been thinking of what happens to a snapshot that have been sent to a zfs pool that later have been exported, and the original filesystem, on another pool, still changes. What happens with the snapshot(and the relation between the snap and the original) when the pool is re-imported? 
Rgrds Johan From jimklimov at cos.ru Wed Nov 6 19:14:39 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Wed, 06 Nov 2013 20:14:39 +0100 Subject: [OmniOS-discuss] snapshots on an exported pool In-Reply-To: References: Message-ID: <527A951F.1060302@cos.ru> On 2013-11-06 19:07, Johan Kragsterman wrote: > Hi! > > I've been thinking of what happens to a snapshot that have been sent to a zfs pool that later have been exported, and the original filesystem, on another pool, still changes. > > What happens with the snapshot(and the relation between the snap and the original) when the pool is re-imported? When you send a snapshot from your original pool, you have (logically) the same data named as the same snapshot in the other pool. If both snapshots continue to exist, you can send increments (from this snap to a next one defined in the future) to update a remote target. This requires that only one of the "live" datasets changes: changes on the target are discarded (rolled back) and the new increment is added on. However if you remove the original snapshot, you can't send increments anymore with ZFS means - you would have to replicate starting with an earlier common snapshot, or even with a full-sync from scratch. Also you can sync in such case with rsync - but would no longer be able to use zfs send for incremental sync... //Jim From johan.kragsterman at capvert.se Wed Nov 6 19:43:43 2013 From: johan.kragsterman at capvert.se (Johan Kragsterman) Date: Wed, 6 Nov 2013 20:43:43 +0100 Subject: [OmniOS-discuss] snapshots on an exported pool In-Reply-To: <527A951F.1060302@cos.ru> References: <527A951F.1060302@cos.ru>, Message-ID: Thanks, Jim! I bet with myself that it would be you that answered this question! I won... I also got another question, but I send it in another mail with another subject, to make it easier to search the list. Rgrds Johan Till: omnios-discuss at lists.omniti.com Fr?n: Jim Klimov S?nt av: "OmniOS-discuss" Datum: 2013.11.06 20:18 ?rende: Re: [OmniOS-discuss] snapshots on an exported pool On 2013-11-06 19:07, Johan Kragsterman wrote: > Hi! > > ? I've been thinking of what happens to a snapshot that have been sent to a zfs pool that later have been exported, and the original filesystem, on another pool, still changes. > > What happens with the snapshot(and the relation between the snap and the original) when the pool is re-imported? When you send a snapshot from your original pool, you have (logically) the same data named as the same snapshot in the other pool. If both snapshots continue to exist, you can send increments (from this snap to a next one defined in the future) to update a remote target. This requires that only one of the "live" datasets changes: changes on the target are discarded (rolled back) and the new increment is added on. However if you remove the original snapshot, you can't send increments anymore with ZFS means - you would have to replicate starting with an earlier common snapshot, or even with a full-sync from scratch. Also you can sync in such case with rsync - but would no longer be able to use zfs send for incremental sync... //Jim _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From johan.kragsterman at capvert.se Wed Nov 6 19:51:28 2013 From: johan.kragsterman at capvert.se (Johan Kragsterman) Date: Wed, 6 Nov 2013 20:51:28 +0100 Subject: [OmniOS-discuss] growing a comstar LUN Message-ID: Hi! I need to grow some comstar lu's. 
The volumes I based the lu's on are sparse, so the size of the lu's is not specified in the stmf lu creation, but based on the sparse volume size. When I change the volume size (zfs set volsize=), the volume changes correctly, but when I use "stmfadm list-lu -v", there is no change in the lu's sizes. I could of course use stmfadm modify-lu -s, but I thought that stmf perhaps could pick up the changed size anyway...? But it seems that is not the case...? Rgrds Johan From jimklimov at cos.ru Thu Nov 7 18:43:45 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Thu, 07 Nov 2013 19:43:45 +0100 Subject: [OmniOS-discuss] growing a comstar LUN In-Reply-To: References: Message-ID: <527BDF61.7020207@cos.ru> On 2013-11-06 20:51, Johan Kragsterman wrote: > Hi! > > I need to grow some comstar lu's. > > The volumes I based the lu's on are sparse, so the size of the lu's is not specified in the stmf lu creation, but based on the sparse volume size. > > When I change the volume size (zfs set volsize=), the volume changes correctly, but when I use "stmfadm list-lu -v", there is no change in the lu's sizes. > > I could of course use stmfadm modify-lu -s, but I thought that stmf perhaps could pick up the changed size anyway...? But it seems that is not the case...? Did you try remounting the target (on the initiator side)? Also, did you inspect the formatting and partitioning layout of the LUNs (as "disks")? I believe that remounting the target should allow the full LUN size to be seen. However, the old partition table remains in place, so you'd have to enlarge that (fdisk, parted). Then, in the enlarged partition, you can grow the filesystem with its own tools - for ZFS this would mean increasing the Solaris slices first, then expanding the pool (possibly with an explicit "zpool online -e ..."). HTH, //Jim Klimov From rafibeyli at gmail.com Fri Nov 8 07:57:57 2013 From: rafibeyli at gmail.com (Hafiz Rafibeyli) Date: Fri, 8 Nov 2013 09:57:57 +0200 (EET) Subject: [OmniOS-discuss] system hangs randomly In-Reply-To: <24493439.2000821.1383895627197.JavaMail.zimbra@cantekstil.com.tr> Message-ID: <964772415.2002197.1383897477403.JavaMail.zimbra@cantekstil.com.tr> Hello, OmniOS version: SunOS 5.11 omnios-b281e50 Server: Supermicro X8DAH (24x storage chassis) We are using OmniOS as a production NFS server for ESXi hosts. Everything was OK, but in the last 20 days the system has hung 3 times. Nothing changed on the hardware side. For OS disks we are using two SSDSA2SH032G1GN (32 GB Intel X25-E SSD) in a ZFS mirror, attached to the onboard SATA ports of the motherboard. I captured a monitor screenshot when the system hangs, and am sending it as an attachment. My pools info: pool: rpool state: ONLINE scan: resilvered 20.0G in 0h3m with 0 errors on Sun Oct 20 14:01:01 2013 config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c4d0s0 ONLINE 0 0 0 c3d1s0 ONLINE 0 0 0 errors: No known data errors pool: zpool1 state: ONLINE status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable. action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(5) for details. 
scan: scrub repaired 0 in 5h0m with 0 errors on Sat Oct 12 19:00:53 2013 config: NAME STATE READ WRITE CKSUM zpool1 ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 c1t5000C50041E9D9A7d0 ONLINE 0 0 0 c1t5000C50041F1A5EFd0 ONLINE 0 0 0 c1t5000C5004253FF87d0 ONLINE 0 0 0 c1t5000C50055A607E3d0 ONLINE 0 0 0 c1t5000C50055A628EFd0 ONLINE 0 0 0 c1t5000C50055A62F57d0 ONLINE 0 0 0 logs mirror-1 ONLINE 0 0 0 c1t5001517959627219d0 ONLINE 0 0 0 c1t5001517BB2747BE7d0 ONLINE 0 0 0 cache c1t5001517803D007D8d0 ONLINE 0 0 0 c1t5001517BB2AFB592d0 ONLINE 0 0 0 spares c1t5000C5005600A6B3d0 AVAIL c1t5000C5005600B43Bd0 AVAIL errors: No known data errors -------------- next part -------------- A non-text attachment was scrubbed... Name: 20131107_021236.jpg Type: image/jpeg Size: 249100 bytes Desc: not available URL: From rafibeyli at gmail.com Fri Nov 8 12:35:56 2013 From: rafibeyli at gmail.com (Hafiz Rafibeyli) Date: Fri, 8 Nov 2013 14:35:56 +0200 (EET) Subject: [OmniOS-discuss] Fwd: system hangs randomly In-Reply-To: <964772415.2002197.1383897477403.JavaMail.zimbra@cantekstil.com.tr> References: <964772415.2002197.1383897477403.JavaMail.zimbra@cantekstil.com.tr> Message-ID: <917775435.2012613.1383914156801.JavaMail.zimbra@cantekstil.com.tr> log on monitor when system hangs was like this:(can send actuall taken screenshot to individual mail adresses) scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): timeout: reset bus, target=0 lun=0 scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): timeout: early timeout, target=0 lun=0 gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): Error for command 'read sector' Error Level: Informational gda: [ID 107833 kern.notice] Sense Key: aborted command gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): Error for command 'read sector' Error Level: Informational gda: [ID 107833 kern.notice] Sense Key: aborted command gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): timeout: abort request, target=0 lun=0 scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): timeout: abort device, target=0 lun=0 scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): timeout: reset target, target=0 lun=0 scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): timeout: reset bus, target=0 lun=0 scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): timeout: early timeout, target=0 lun=0 gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): Error for command 'read sector' Error Level: Informational gda: [ID 107833 kern.notice] Sense Key: aborted command gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): Hello, Omnios version:SunOS 5.11 omnios-b281e50 Server:Supermicro X8DAH (24x storage chassis) we are using omnios as a production nfs server for Esxi hosts. everything was ok,but last 20 days system hangs 3 times.Nothing changed on hardware side. for OS disks we are using two SSDSA2SH032G1GN(32 Gb Intel X25-E SSD) in zfs mirror attached onboard sata ports of motherboard. 
I captured monitor screenshot when system hangs,and sending as attachment. My pools info: pool: rpool state: ONLINE scan: resilvered 20.0G in 0h3m with 0 errors on Sun Oct 20 14:01:01 2013 config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c4d0s0 ONLINE 0 0 0 c3d1s0 ONLINE 0 0 0 errors: No known data errors pool: zpool1 state: ONLINE status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable. action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(5) for details. scan: scrub repaired 0 in 5h0m with 0 errors on Sat Oct 12 19:00:53 2013 config: NAME STATE READ WRITE CKSUM zpool1 ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 c1t5000C50041E9D9A7d0 ONLINE 0 0 0 c1t5000C50041F1A5EFd0 ONLINE 0 0 0 c1t5000C5004253FF87d0 ONLINE 0 0 0 c1t5000C50055A607E3d0 ONLINE 0 0 0 c1t5000C50055A628EFd0 ONLINE 0 0 0 c1t5000C50055A62F57d0 ONLINE 0 0 0 logs mirror-1 ONLINE 0 0 0 c1t5001517959627219d0 ONLINE 0 0 0 c1t5001517BB2747BE7d0 ONLINE 0 0 0 cache c1t5001517803D007D8d0 ONLINE 0 0 0 c1t5001517BB2AFB592d0 ONLINE 0 0 0 spares c1t5000C5005600A6B3d0 AVAIL c1t5000C5005600B43Bd0 AVAIL errors: No known data errors -------------- next part -------------- A non-text attachment was scrubbed... Name: 20131107_021236.jpg Type: image/jpeg Size: 249100 bytes Desc: not available URL: From fabio at fabiorabelo.wiki.br Fri Nov 8 13:18:57 2013 From: fabio at fabiorabelo.wiki.br (=?UTF-8?Q?F=C3=A1bio_Rabelo?=) Date: Fri, 8 Nov 2013 11:18:57 -0200 Subject: [OmniOS-discuss] From Stable to Bloody and vice-versa Message-ID: Hi to all Probably a basic question, but I did not find an answer on forum ... so : With this commands : pkg unset-publisher omnios pkg set-publisher -g http://pkg.omniti.com/omnios/release omnios pkg update reboot I am changing from Stable to Bloody, or vice-versa ? thanks in advance ... F?bio Rabelo From esproul at omniti.com Fri Nov 8 14:59:55 2013 From: esproul at omniti.com (Eric Sproul) Date: Fri, 8 Nov 2013 09:59:55 -0500 Subject: [OmniOS-discuss] From Stable to Bloody and vice-versa In-Reply-To: References: Message-ID: On Fri, Nov 8, 2013 at 8:18 AM, F?bio Rabelo wrote: > Hi to all > > Probably a basic question, but I did not find an answer on forum ... so : > > With this commands : > > pkg unset-publisher omnios > pkg set-publisher -g http://pkg.omniti.com/omnios/release omnios > pkg update > reboot "omnios/release" is the URL for stable. Note that if you're running bloody, you can't just 'pkg update' to stable because all the branch versions are lower. Eric From mweiss at cimlbr.com Fri Nov 8 16:17:42 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Fri, 08 Nov 2013 10:17:42 -0600 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover Message-ID: <527D0EA6.2030307@cimlbr.com> I am working on a failover script using OmniOS as a NFS server. According to VMware, if I mount and nfs datastore via its IP Address then I should be able to move the IP around and still mount it, however it is not working right. For example: On an ESXi instance (5.1U1) I mount the following NFS Datastore 172.16.50.100 /tank/vmrep which amounts to a UUID of 6c0c1d0d-928ef591 in /vmfs/volumes omni-rep1: 172.16.50.1 omni-rep2: 172.16.50.2 I am using zrep to failover my zfs dataset. 
http://www.bolthole.com/solaris/zrep/zrep.documentation.html Essential, it puts primary into read-only, does a zfs send/receive, then sets the secondary to rw. To expose my dataset (tank/vmrep) I am using sharenfs property of zfs. I have created a virtual ip to use for this purpose. #setnfsip.sh ipadm create-addr -T static -a 172.16.50.100/24 vmxnet3s0/nfs #removenfsip.sh ipadm delete-addr vmxnet3s0/nfs So, when I want to failover, I just do the following: #!/bin/sh #zfs unshare tank/vmrep #sleep 5 /scripts/removenfsip.sh sync sleep 5 #zrep sync tank/vmrep #sleep 5 #the following does the zfs snapshot/send/receive zrep failover tank/vmrep sleep 5 #ssh 172.16.50.2 /usr/sbin/zfs share tank/vmrep #sleep 5 ssh 172.16.50.2 /scripts/setnfsip.sh So, all goes well, omni-rep2 is now exporting tank/vmrep with NFS, it has the 172.16.50.100 ip address, it is the exact replica of omni-rep1. The problem is in ESXi the datastore goes inaccessable. I can fail back and the datastore comes back online like fine. I can mount the nfs datastore as a new one with the .100 ip on omni-rep2 so it is being exported properly. According to the last paragraph of this https://communities.netapp.com/community/netapp-blogs/getvirtical/blog/2011/09/28/nfs-datastore-uuids-how-they-work-and-what-changed-in-vsphere-5 It should work, I have merely changed which host is broadcasting my datastore's IP address. I know a guy named saso? did some iScsi failover recently and noted it worked with NFS. I am just wondering what I am missing here. From jimklimov at cos.ru Fri Nov 8 16:20:06 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Fri, 08 Nov 2013 17:20:06 +0100 Subject: [OmniOS-discuss] Fwd: system hangs randomly In-Reply-To: <917775435.2012613.1383914156801.JavaMail.zimbra@cantekstil.com.tr> References: <964772415.2002197.1383897477403.JavaMail.zimbra@cantekstil.com.tr> <917775435.2012613.1383914156801.JavaMail.zimbra@cantekstil.com.tr> Message-ID: <527D0F36.2060101@cos.ru> The logs specify that your IDE devices (I believe, these are the rpool SSDs in legacy mode) return errors on reads and timeout on retries or resets. This may mean a few things: 1) Imminent device death i.e. due to wear over lifetime, try to get these replaced with new units (especially if their age or some actual diagnostics results from "smartctl" or vendor tools also indicate the possibility of such scenario) 2) Bad diagnostics, perhaps due to IDE protocol limitations - try to switch the controller into SATA mode and use some illumos live media (OI LiveCD/LiveUSB or OmniOS equivalents) to boot the server with the rpool disks in SATA mode and run: zpool import -N -R /a -f rpool mount -F zfs rpool/ROOT/your_BE_name /a && \ touch /a/reconfigure zpool export rpool Depending on your OS setup, the BE mounting may require some other command (like "zfs mount rpool/ROOT/your_BE_name"). This routine mounts the pool, indicates to the BE that it should make new device nodes (so it runs "devfsadm" early in the boot), and exports the pool. In the process, the rpool ZFS labels begin referencing the new hard-disk device node names which is what the rootfs procedure relies on. In some more difficult cases it might help to also copy (rsync) the /dev/ and /devices/ from the live environment into the on-disk BE so that these device names saved into the pool labels would match those discovered by the kernel upon boot. 
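Roughly, from the live environment the whole dance might look something like the following sketch - the BE name is a placeholder for your own, and rsync can be substituted with cp -a or cpio if the live media lacks it:

   zpool import -N -R /a -f rpool
   mount -F zfs rpool/ROOT/your_BE_name /a
   rsync -a /dev/ /a/dev/
   rsync -a /devices/ /a/devices/
   touch /a/reconfigure
   umount /a
   zpool export rpool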
Do have backups; it might make sense to complete this experiment with one of the mirror halves removed, so that if nothing works (even rolling back to an IDE-only setup) you can destroy this half's content and boot in IDE mode from the other half and re-attach the mirrored part to it. As a variant, it might make sense (if you'd also refresh the hardware) to attach the new device(s) to the rpool as a 3/4-way mirror, and then completing the switcheroo to SATA with only the new couple plugged in - you'd be able to fall back on the old and tested set if all goes wrong somehow. Good luck, //Jim On 2013-11-08 13:35, Hafiz Rafibeyli wrote: > log on monitor when system hangs was like this:(can send actuall taken screenshot to individual mail adresses) > > scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): > timeout: reset bus, target=0 lun=0 > scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): > timeout: early timeout, target=0 lun=0 > gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): > Error for command 'read sector' Error Level: Informational > gda: [ID 107833 kern.notice] Sense Key: aborted command > gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 > gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): > Error for command 'read sector' Error Level: Informational > gda: [ID 107833 kern.notice] Sense Key: aborted command > gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 > scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): > timeout: abort request, target=0 lun=0 > scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): > timeout: abort device, target=0 lun=0 > scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): > timeout: reset target, target=0 lun=0 > scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): > timeout: reset bus, target=0 lun=0 > scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): > timeout: early timeout, target=0 lun=0 > gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): > Error for command 'read sector' Error Level: Informational > gda: [ID 107833 kern.notice] Sense Key: aborted command > gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 > gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): > > > Hello, > > Omnios version:SunOS 5.11 omnios-b281e50 > Server:Supermicro X8DAH (24x storage chassis) > > we are using omnios as a production nfs server for Esxi hosts. > > everything was ok,but last 20 days system hangs 3 times.Nothing changed on hardware side. > > for OS disks we are using two SSDSA2SH032G1GN(32 Gb Intel X25-E SSD) in zfs mirror attached onboard sata ports of motherboard. > > I captured monitor screenshot when system hangs,and sending as attachment. > > > My pools info: > > pool: rpool > state: ONLINE > scan: resilvered 20.0G in 0h3m with 0 errors on Sun Oct 20 14:01:01 2013 > config: > > NAME STATE READ WRITE CKSUM > rpool ONLINE 0 0 0 > mirror-0 ONLINE 0 0 0 > c4d0s0 ONLINE 0 0 0 > c3d1s0 ONLINE 0 0 0 > > errors: No known data errors > > > pool: zpool1 > state: ONLINE > status: Some supported features are not enabled on the pool. The pool can > still be used, but some features are unavailable. 
> action: Enable all features using 'zpool upgrade'. Once this is done, > the pool may no longer be accessible by software that does not support > the features. See zpool-features(5) for details. > scan: scrub repaired 0 in 5h0m with 0 errors on Sat Oct 12 19:00:53 2013 > config: > > NAME STATE READ WRITE CKSUM > zpool1 ONLINE 0 0 0 > raidz1-0 ONLINE 0 0 0 > c1t5000C50041E9D9A7d0 ONLINE 0 0 0 > c1t5000C50041F1A5EFd0 ONLINE 0 0 0 > c1t5000C5004253FF87d0 ONLINE 0 0 0 > c1t5000C50055A607E3d0 ONLINE 0 0 0 > c1t5000C50055A628EFd0 ONLINE 0 0 0 > c1t5000C50055A62F57d0 ONLINE 0 0 0 > logs > mirror-1 ONLINE 0 0 0 > c1t5001517959627219d0 ONLINE 0 0 0 > c1t5001517BB2747BE7d0 ONLINE 0 0 0 > cache > c1t5001517803D007D8d0 ONLINE 0 0 0 > c1t5001517BB2AFB592d0 ONLINE 0 0 0 > spares > c1t5000C5005600A6B3d0 AVAIL > c1t5000C5005600B43Bd0 AVAIL > > errors: No known data errors > > > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > From jimklimov at cos.ru Fri Nov 8 16:36:55 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Fri, 08 Nov 2013 17:36:55 +0100 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D0EA6.2030307@cimlbr.com> References: <527D0EA6.2030307@cimlbr.com> Message-ID: <527D1327.8030404@cos.ru> On 2013-11-08 17:17, Matt Weiss wrote: > I am working on a failover script using OmniOS as a NFS server. > > According to VMware, if I mount and nfs datastore via its IP Address > then I should be able to move the IP around and still mount it, however > it is not working right. As a random guess, can this be related to ARP address caching? I think that likely the VMWare hosts continue to connect to the same IP address for NFS transactions, which automatically uses the MAC addresses that they continue to remember as matching the storage IP address, until this knowledge times out or is removed by forceful actions. Did you have a chance to either wait for 5 minutes, or flush the old address out of VMWare hosts? Perhaps, if your new active NFS host would "ping" the VMWare hosts, this would update their ARP entries (unless the hosts or your network have anti-spoofing)... Alternately, you can try setting the same MAC address on the two VNICs of the two hosts, but then you'd have to similarly refresh the Ethernet switches' knowledge of address-to-port bindings. HTH, //Jim Klimov From mweiss at cimlbr.com Fri Nov 8 16:46:04 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Fri, 08 Nov 2013 10:46:04 -0600 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D1327.8030404@cos.ru> References: <527D0EA6.2030307@cimlbr.com> <527D1327.8030404@cos.ru> Message-ID: <527D154C.1030207@cimlbr.com> Well thought out, but I have accounted for that. I have even waited over night. I can run vmkping 172.16.50.100 while I run my ip setting scripts manually and merely drop a ping at most. I can also mount the datastore as a new one right afterwards and it mounts fine. I tried on another ESXi host and it creates the same UUID datastore name so I think I can rule that out as well. I am wondering if I need any sort of special setting on the sharenfs property? This is all being done with 10GbE dedicated switch with nothing else on it currently. -Matt On 11/8/2013 10:36 AM, Jim Klimov wrote: > On 2013-11-08 17:17, Matt Weiss wrote: >> I am working on a failover script using OmniOS as a NFS server. 
>> >> According to VMware, if I mount and nfs datastore via its IP Address >> then I should be able to move the IP around and still mount it, however >> it is not working right. > > As a random guess, can this be related to ARP address caching? > I think that likely the VMWare hosts continue to connect to the > same IP address for NFS transactions, which automatically uses > the MAC addresses that they continue to remember as matching the > storage IP address, until this knowledge times out or is removed > by forceful actions. > > Did you have a chance to either wait for 5 minutes, or flush the > old address out of VMWare hosts? Perhaps, if your new active NFS > host would "ping" the VMWare hosts, this would update their ARP > entries (unless the hosts or your network have anti-spoofing)... > > Alternately, you can try setting the same MAC address on the two > VNICs of the two hosts, but then you'd have to similarly refresh > the Ethernet switches' knowledge of address-to-port bindings. > > HTH, > //Jim Klimov > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From geoffn at gnaa.net Fri Nov 8 16:50:05 2013 From: geoffn at gnaa.net (Geoff Nordli) Date: Fri, 08 Nov 2013 08:50:05 -0800 Subject: [OmniOS-discuss] From Stable to Bloody and vice-versa In-Reply-To: References: Message-ID: <527D163D.4070807@gnaa.net> On 13-11-08 06:59 AM, Eric Sproul wrote: > On Fri, Nov 8, 2013 at 8:18 AM, F?bio Rabelo wrote: >> Hi to all >> >> Probably a basic question, but I did not find an answer on forum ... so : >> >> With this commands : >> >> pkg unset-publisher omnios >> pkg set-publisher -g http://pkg.omniti.com/omnios/release omnios >> pkg update >> reboot > "omnios/release" is the URL for stable. Note that if you're running > bloody, you can't just 'pkg update' to stable because all the branch > versions are lower. > > Eric, what if you are running bloody now and you want to switch to stable when the next stable is released? I need to install bloody to get some features, but I definitely don't want to stay there :) thanks, Geoff From mir at miras.org Fri Nov 8 17:02:12 2013 From: mir at miras.org (Michael Rasmussen) Date: Fri, 8 Nov 2013 18:02:12 +0100 Subject: [OmniOS-discuss] From Stable to Bloody and vice-versa In-Reply-To: <527D163D.4070807@gnaa.net> References: <527D163D.4070807@gnaa.net> Message-ID: <20131108180212.1540c5dc@sleipner.datanom.net> On Fri, 08 Nov 2013 08:50:05 -0800 Geoff Nordli wrote: > > Eric, what if you are running bloody now and you want to switch to stable when the next stable is released? > If you hold back further updates your bloody will be on 15007. The next stable is 15008 so when that time comes you should be able to change to stable. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- FORTUNE PROVIDES QUESTIONS FOR THE GREAT ANSWERS: #19 A: To be or not to be. Q: What is the square root of 4b^2? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From cks at cs.toronto.edu Fri Nov 8 17:15:11 2013 From: cks at cs.toronto.edu (Chris Siebenmann) Date: Fri, 08 Nov 2013 12:15:11 -0500 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: Your message of Fri, 08 Nov 2013 10:17:42 -0600. <527D0EA6.2030307@cimlbr.com> Message-ID: <20131108171511.B79D21A0421@apps0.cs.toronto.edu> | I am working on a failover script using OmniOS as a NFS server. [...] | I am using zrep to failover my zfs dataset. | http://www.bolthole.com/solaris/zrep/zrep.documentation.html | | Essential, it puts primary into read-only, does a zfs send/receive, | then sets the secondary to rw. Transparent NFS failover is extremely demanding about the two versions of the filesystems being absolutely identical down to extremely low level details. It's not clear to me if zfs send and zfs recv replicate all of these low-level details; they are certainly not documented to do so. (At the technical level, the two NFS servers must use identical NFS filehandles for all filesystem objects. One problem is that part of NFS filehandles for ZFS filesystems is an 'objset unique ID' and as far as I can tell the ultimate source of this objset unique ID is *not* set by zfs recv; it is set to a unique value by the kernel when the objset is created.) - cks From mir at miras.org Fri Nov 8 17:25:31 2013 From: mir at miras.org (Michael Rasmussen) Date: Fri, 8 Nov 2013 18:25:31 +0100 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <20131108171511.B79D21A0421@apps0.cs.toronto.edu> References: <527D0EA6.2030307@cimlbr.com> <20131108171511.B79D21A0421@apps0.cs.toronto.edu> Message-ID: <20131108182531.2f032f8e@sleipner.datanom.net> On Fri, 08 Nov 2013 12:15:11 -0500 Chris Siebenmann wrote: > > Transparent NFS failover is extremely demanding about the two versions > of the filesystems being absolutely identical down to extremely low > level details. It's not clear to me if zfs send and zfs recv replicate > all of these low-level details; they are certainly not documented to do > so. > There is also the problem of NFS file locks which cannot be transferred to another server. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- In America, it's not how much an item costs, it's how much you save. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From mweiss at cimlbr.com Fri Nov 8 17:29:20 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Fri, 08 Nov 2013 11:29:20 -0600 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <20131108182531.2f032f8e@sleipner.datanom.net> References: <527D0EA6.2030307@cimlbr.com> <20131108171511.B79D21A0421@apps0.cs.toronto.edu> <20131108182531.2f032f8e@sleipner.datanom.net> Message-ID: <527D1F70.1030000@cimlbr.com> I thought VMWare does not do NFS locking, but uses its own custom lock mechanism with .lck files? 
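At least the .lck-* files on the datastore itself point that way; from the ESXi shell, something like this should list them while a VM is running (the UUID being the datastore from my first mail):

   find /vmfs/volumes/6c0c1d0d-928ef591/ -name '.lck-*'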
On 11/8/2013 11:25 AM, Michael Rasmussen wrote: > On Fri, 08 Nov 2013 12:15:11 -0500 > Chris Siebenmann wrote: > >> Transparent NFS failover is extremely demanding about the two versions >> of the filesystems being absolutely identical down to extremely low >> level details. It's not clear to me if zfs send and zfs recv replicate >> all of these low-level details; they are certainly not documented to do >> so. >> > There is also the problem of NFS file locks which cannot be transferred > to another server. > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From skiselkov.ml at gmail.com Fri Nov 8 17:36:07 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Fri, 08 Nov 2013 17:36:07 +0000 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D0EA6.2030307@cimlbr.com> References: <527D0EA6.2030307@cimlbr.com> Message-ID: <527D2107.6000500@gmail.com> On 11/8/13, 4:17 PM, Matt Weiss wrote: > I am working on a failover script using OmniOS as a NFS server. > > According to VMware, if I mount and nfs datastore via its IP Address > then I should be able to move the IP around and still mount it, however > it is not working right. > > For example: > > On an ESXi instance (5.1U1) I mount the following NFS Datastore > 172.16.50.100 > /tank/vmrep > which amounts to a UUID of 6c0c1d0d-928ef591 in /vmfs/volumes > > > omni-rep1: 172.16.50.1 > omni-rep2: 172.16.50.2 > > I am using zrep to failover my zfs dataset. > http://www.bolthole.com/solaris/zrep/zrep.documentation.html > > Essential, it puts primary into read-only, does a zfs send/receive, then > sets the secondary to rw. > > > To expose my dataset (tank/vmrep) I am using sharenfs property of zfs. I > have created a virtual ip to use for this purpose. > > #setnfsip.sh > ipadm create-addr -T static -a 172.16.50.100/24 vmxnet3s0/nfs > > #removenfsip.sh > ipadm delete-addr vmxnet3s0/nfs > > > So, when I want to failover, I just do the following: > > #!/bin/sh > #zfs unshare tank/vmrep > #sleep 5 > /scripts/removenfsip.sh > sync > sleep 5 > #zrep sync tank/vmrep > #sleep 5 > #the following does the zfs snapshot/send/receive > zrep failover tank/vmrep > sleep 5 > #ssh 172.16.50.2 /usr/sbin/zfs share tank/vmrep > #sleep 5 > ssh 172.16.50.2 /scripts/setnfsip.sh > > > So, all goes well, omni-rep2 is now exporting tank/vmrep with NFS, it > has the 172.16.50.100 ip address, it is the exact replica of omni-rep1. > > The problem is in ESXi the datastore goes inaccessable. I can fail back > and the datastore comes back online like fine. I can mount the nfs > datastore as a new one with the .100 ip on omni-rep2 so it is being > exported properly. > > According to the last paragraph of this > > https://communities.netapp.com/community/netapp-blogs/getvirtical/blog/2011/09/28/nfs-datastore-uuids-how-they-work-and-what-changed-in-vsphere-5 > > > It should work, I have merely changed which host is broadcasting my > datastore's IP address. > > I know a guy named saso? did some iScsi failover recently and noted it > worked with NFS. I am just wondering what I am missing here. I haven't done NFS datastore failover from ESXi myself, but off the top of my head I guess what's going haywire here is that you're setting the dataset read-only before moving it over. Don't do that. 
Simply tear down the IP address, migrate the dataset, set up a new NFS share on the target machine and then reinstate the IP address at the target. ESXi aggressively monitors the health of its datastores and if it gets to a state it can't deal with (e.g. write a datastore that refuses to process it), it will offline the whole datastore, awaiting administrator intervention. Don't worry about the datastore being offline for a while, ESXi will hold VM writes and the VMs themselves won't usually complain for up to 1-2 minutes (defaults on Windows/Linux). -- Saso From mweiss at cimlbr.com Fri Nov 8 17:36:29 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Fri, 08 Nov 2013 11:36:29 -0600 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <20131108182531.2f032f8e@sleipner.datanom.net> References: <527D0EA6.2030307@cimlbr.com> <20131108171511.B79D21A0421@apps0.cs.toronto.edu> <20131108182531.2f032f8e@sleipner.datanom.net> Message-ID: <527D211D.5060304@cimlbr.com> I have been googling this for a while and couldn't find enough important info, but I think you finally gave me my google search term I needed: *transparent* nfs failover. Thanks, I will report any new findings, but I think I am starting to understand why this didn't work. Somehow heartbeat / pacemaker setups have a way to overcome this, but I was merely looking for a manual failover to dual All-In-One boxes. On 11/8/2013 11:25 AM, Michael Rasmussen wrote: > On Fri, 08 Nov 2013 12:15:11 -0500 > Chris Siebenmann wrote: > >> Transparent NFS failover is extremely demanding about the two versions >> of the filesystems being absolutely identical down to extremely low >> level details. It's not clear to me if zfs send and zfs recv replicate >> all of these low-level details; they are certainly not documented to do >> so. >> > There is also the problem of NFS file locks which cannot be transferred > to another server. > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mir at miras.org Fri Nov 8 17:54:59 2013 From: mir at miras.org (Michael Rasmussen) Date: Fri, 8 Nov 2013 18:54:59 +0100 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D1F70.1030000@cimlbr.com> References: <527D0EA6.2030307@cimlbr.com> <20131108171511.B79D21A0421@apps0.cs.toronto.edu> <20131108182531.2f032f8e@sleipner.datanom.net> <527D1F70.1030000@cimlbr.com> Message-ID: <20131108185459.460902fd@sleipner.datanom.net> On Fri, 08 Nov 2013 11:29:20 -0600 Matt Weiss wrote: > I thought VMWare does not do NFS locking, but uses its own custom lock mechanism with .lck files? > It is the NFS protocol itself which does this to guaranty atomic move, copy, and rename. And what is worse is that these locks resides in kernel space which means no user space application can release these locks. Releasing locks for a disappeared NFS process will in worst case require a reboot. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- Execute every act of thy life as though it were thy last. 
-- Marcus Aurelius -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From mweiss at cimlbr.com Fri Nov 8 18:08:42 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Fri, 08 Nov 2013 12:08:42 -0600 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D2107.6000500@gmail.com> References: <527D0EA6.2030307@cimlbr.com> <527D2107.6000500@gmail.com> Message-ID: <527D28AA.7000509@cimlbr.com> On 11/8/2013 11:36 AM, Saso Kiselkov wrote: > On 11/8/13, 4:17 PM, Matt Weiss wrote: >> I am working on a failover script using OmniOS as a NFS server. >> >> According to VMware, if I mount and nfs datastore via its IP Address >> then I should be able to move the IP around and still mount it, however >> it is not working right. >> >> For example: >> >> On an ESXi instance (5.1U1) I mount the following NFS Datastore >> 172.16.50.100 >> /tank/vmrep >> which amounts to a UUID of 6c0c1d0d-928ef591 in /vmfs/volumes >> >> >> omni-rep1: 172.16.50.1 >> omni-rep2: 172.16.50.2 >> >> I am using zrep to failover my zfs dataset. >> http://www.bolthole.com/solaris/zrep/zrep.documentation.html >> >> Essential, it puts primary into read-only, does a zfs send/receive, then >> sets the secondary to rw. >> >> >> To expose my dataset (tank/vmrep) I am using sharenfs property of zfs. I >> have created a virtual ip to use for this purpose. >> >> #setnfsip.sh >> ipadm create-addr -T static -a 172.16.50.100/24 vmxnet3s0/nfs >> >> #removenfsip.sh >> ipadm delete-addr vmxnet3s0/nfs >> >> >> So, when I want to failover, I just do the following: >> >> #!/bin/sh >> #zfs unshare tank/vmrep >> #sleep 5 >> /scripts/removenfsip.sh >> sync >> sleep 5 >> #zrep sync tank/vmrep >> #sleep 5 >> #the following does the zfs snapshot/send/receive >> zrep failover tank/vmrep >> sleep 5 >> #ssh 172.16.50.2 /usr/sbin/zfs share tank/vmrep >> #sleep 5 >> ssh 172.16.50.2 /scripts/setnfsip.sh >> >> >> So, all goes well, omni-rep2 is now exporting tank/vmrep with NFS, it >> has the 172.16.50.100 ip address, it is the exact replica of omni-rep1. >> >> The problem is in ESXi the datastore goes inaccessable. I can fail back >> and the datastore comes back online like fine. I can mount the nfs >> datastore as a new one with the .100 ip on omni-rep2 so it is being >> exported properly. >> >> According to the last paragraph of this >> >> https://communities.netapp.com/community/netapp-blogs/getvirtical/blog/2011/09/28/nfs-datastore-uuids-how-they-work-and-what-changed-in-vsphere-5 >> >> >> It should work, I have merely changed which host is broadcasting my >> datastore's IP address. >> >> I know a guy named saso? did some iScsi failover recently and noted it >> worked with NFS. I am just wondering what I am missing here. > I haven't done NFS datastore failover from ESXi myself, but off the top > of my head I guess what's going haywire here is that you're setting the > dataset read-only before moving it over. Don't do that. Simply tear down > the IP address, migrate the dataset, set up a new NFS share on the > target machine and then reinstate the IP address at the target. ESXi > aggressively monitors the health of its datastores and if it gets to a > state it can't deal with (e.g. write a datastore that refuses to process > it), it will offline the whole datastore, awaiting administrator > intervention. 
> > Don't worry about the datastore being offline for a while, ESXi will > hold VM writes and the VMs themselves won't usually complain for up to > 1-2 minutes (defaults on Windows/Linux). > I have experimented with sharing / unsharing leaving shared all the time setting RO, RW changing IP before/after etc adding sleeps, syncs So far all have the same result. I believe the issue now lies with the NFS state file. I may be able to get away with migrating that file with my failover script, but we will see. Does anyone know how to change OmniOS NFS server parameters? I would be wanting to put the state files into the zfs dataset somewhere to perhaps handle that. Somehow I believe pacemaker takes care of this, so surely, somehow NFS can be made to be a true failover. From richard.elling at richardelling.com Fri Nov 8 21:54:50 2013 From: richard.elling at richardelling.com (Richard Elling) Date: Fri, 8 Nov 2013 13:54:50 -0800 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D0EA6.2030307@cimlbr.com> References: <527D0EA6.2030307@cimlbr.com> Message-ID: <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> Comment below... On Nov 8, 2013, at 8:17 AM, Matt Weiss wrote: > I am working on a failover script using OmniOS as a NFS server. > > According to VMware, if I mount and nfs datastore via its IP Address then I should be able to move the IP around and still mount it, however it is not working right. > > For example: > > On an ESXi instance (5.1U1) I mount the following NFS Datastore > 172.16.50.100 > /tank/vmrep > which amounts to a UUID of 6c0c1d0d-928ef591 in /vmfs/volumes > > > omni-rep1: 172.16.50.1 > omni-rep2: 172.16.50.2 > > I am using zrep to failover my zfs dataset. > http://www.bolthole.com/solaris/zrep/zrep.documentation.html > > Essential, it puts primary into read-only, does a zfs send/receive, then sets the secondary to rw. > > > To expose my dataset (tank/vmrep) I am using sharenfs property of zfs. I have created a virtual ip to use for this purpose. > > #setnfsip.sh > ipadm create-addr -T static -a 172.16.50.100/24 vmxnet3s0/nfs > > #removenfsip.sh > ipadm delete-addr vmxnet3s0/nfs > > > So, when I want to failover, I just do the following: > > #!/bin/sh > #zfs unshare tank/vmrep > #sleep 5 > /scripts/removenfsip.sh > sync > sleep 5 > #zrep sync tank/vmrep > #sleep 5 > #the following does the zfs snapshot/send/receive > zrep failover tank/vmrep > sleep 5 > #ssh 172.16.50.2 /usr/sbin/zfs share tank/vmrep > #sleep 5 > ssh 172.16.50.2 /scripts/setnfsip.sh > > > So, all goes well, omni-rep2 is now exporting tank/vmrep with NFS, it has the 172.16.50.100 ip address, it is the exact replica of omni-rep1. It is a replica, but it isn't the same from an NFS perspective. The files may have the same contents, but the NFSv3 file handles are different because they are in two different file systems. For this to work, the clients would have to remount, which blows your requirement for transparency. > > The problem is in ESXi the datastore goes inaccessable. I can fail back and the datastore comes back online like fine. I can mount the nfs datastore as a new one with the .100 ip on omni-rep2 so it is being exported properly. Yes, works as designed. > > According to the last paragraph of this > > https://communities.netapp.com/community/netapp-blogs/getvirtical/blog/2011/09/28/nfs-datastore-uuids-how-they-work-and-what-changed-in-vsphere-5 > > It should work, I have merely changed which host is broadcasting my datastore's IP address. > > I know a guy named saso? 
did some iScsi failover recently and noted it worked with NFS. I am just wondering what I am missing here. Saso's solution, and indeed most failover cluster solutions, use shared storage and not ZFS replication between the nodes. One good reason to do this is so you can transparently failover NFS service. NB, the replication method is used often for disaster recovery, because in DR there often is no transparent failover requirement. -- richard -- Richard.Elling at RichardElling.com +1-760-896-4422 -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.elling at richardelling.com Fri Nov 8 22:02:26 2013 From: richard.elling at richardelling.com (Richard Elling) Date: Fri, 8 Nov 2013 14:02:26 -0800 Subject: [OmniOS-discuss] system hangs randomly In-Reply-To: <527D0F36.2060101@cos.ru> References: <964772415.2002197.1383897477403.JavaMail.zimbra@cantekstil.com.tr> <917775435.2012613.1383914156801.JavaMail.zimbra@cantekstil.com.tr> <527D0F36.2060101@cos.ru> Message-ID: On Nov 8, 2013, at 8:20 AM, Jim Klimov wrote: > The logs specify that your IDE devices (I believe, these are the rpool > SSDs in legacy mode) return errors on reads and timeout on retries or > resets. This may mean a few things: > > 1) Imminent device death i.e. due to wear over lifetime, try to get > these replaced with new units (especially if their age or some actual > diagnostics results from "smartctl" or vendor tools also indicate the > possibility of such scenario) I vote for this one. The X-25E are well-known for behaving this way as a failure mode. The only recourse is to replace the disk. > > 2) Bad diagnostics, perhaps due to IDE protocol limitations - try to > switch the controller into SATA mode and use some illumos live media > (OI LiveCD/LiveUSB or OmniOS equivalents) to boot the server with the > rpool disks in SATA mode and run: This isn't the cause or solution for the disk's woes, but I recommend going to AHCI mode at your convenience. You might be able to replace the disk without an outage, but this step will require an outage. -- richard > > zpool import -N -R /a -f rpool > mount -F zfs rpool/ROOT/your_BE_name /a && \ > touch /a/reconfigure > zpool export rpool > > Depending on your OS setup, the BE mounting may require some other > command (like "zfs mount rpool/ROOT/your_BE_name"). > > This routine mounts the pool, indicates to the BE that it should make > new device nodes (so it runs "devfsadm" early in the boot), and exports > the pool. In the process, the rpool ZFS labels begin referencing the new > hard-disk device node names which is what the rootfs procedure relies > on. In some more difficult cases it might help to also copy (rsync) the > /dev/ and /devices/ from the live environment into the on-disk BE so > that these device names saved into the pool labels would match those > discovered by the kernel upon boot. > > Do have backups; it might make sense to complete this experiment with > one of the mirror halves removed, so that if nothing works (even rolling > back to an IDE-only setup) you can destroy this half's content and boot > in IDE mode from the other half and re-attach the mirrored part to it. > > As a variant, it might make sense (if you'd also refresh the hardware) > to attach the new device(s) to the rpool as a 3/4-way mirror, and then > completing the switcheroo to SATA with only the new couple plugged in - > you'd be able to fall back on the old and tested set if all goes wrong > somehow. 
> > Good luck, > //Jim > > > On 2013-11-08 13:35, Hafiz Rafibeyli wrote: >> log on monitor when system hangs was like this:(can send actuall taken screenshot to individual mail adresses) >> >> scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): >> timeout: reset bus, target=0 lun=0 >> scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): >> timeout: early timeout, target=0 lun=0 >> gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): >> Error for command 'read sector' Error Level: Informational >> gda: [ID 107833 kern.notice] Sense Key: aborted command >> gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 >> gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): >> Error for command 'read sector' Error Level: Informational >> gda: [ID 107833 kern.notice] Sense Key: aborted command >> gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 >> scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): >> timeout: abort request, target=0 lun=0 >> scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): >> timeout: abort device, target=0 lun=0 >> scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): >> timeout: reset target, target=0 lun=0 >> scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): >> timeout: reset bus, target=0 lun=0 >> scsi: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0 (ata0): >> timeout: early timeout, target=0 lun=0 >> gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): >> Error for command 'read sector' Error Level: Informational >> gda: [ID 107833 kern.notice] Sense Key: aborted command >> gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3 >> gda: [ID 107833 kern.warning] WARNING: /pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0 (Disk0): >> >> >> Hello, >> >> Omnios version:SunOS 5.11 omnios-b281e50 >> Server:Supermicro X8DAH (24x storage chassis) >> >> we are using omnios as a production nfs server for Esxi hosts. >> >> everything was ok,but last 20 days system hangs 3 times.Nothing changed on hardware side. >> >> for OS disks we are using two SSDSA2SH032G1GN(32 Gb Intel X25-E SSD) in zfs mirror attached onboard sata ports of motherboard. >> >> I captured monitor screenshot when system hangs,and sending as attachment. >> >> >> My pools info: >> >> pool: rpool >> state: ONLINE >> scan: resilvered 20.0G in 0h3m with 0 errors on Sun Oct 20 14:01:01 2013 >> config: >> >> NAME STATE READ WRITE CKSUM >> rpool ONLINE 0 0 0 >> mirror-0 ONLINE 0 0 0 >> c4d0s0 ONLINE 0 0 0 >> c3d1s0 ONLINE 0 0 0 >> >> errors: No known data errors >> >> >> pool: zpool1 >> state: ONLINE >> status: Some supported features are not enabled on the pool. The pool can >> still be used, but some features are unavailable. >> action: Enable all features using 'zpool upgrade'. Once this is done, >> the pool may no longer be accessible by software that does not support >> the features. See zpool-features(5) for details. 
>> scan: scrub repaired 0 in 5h0m with 0 errors on Sat Oct 12 19:00:53 2013 >> config: >> >> NAME STATE READ WRITE CKSUM >> zpool1 ONLINE 0 0 0 >> raidz1-0 ONLINE 0 0 0 >> c1t5000C50041E9D9A7d0 ONLINE 0 0 0 >> c1t5000C50041F1A5EFd0 ONLINE 0 0 0 >> c1t5000C5004253FF87d0 ONLINE 0 0 0 >> c1t5000C50055A607E3d0 ONLINE 0 0 0 >> c1t5000C50055A628EFd0 ONLINE 0 0 0 >> c1t5000C50055A62F57d0 ONLINE 0 0 0 >> logs >> mirror-1 ONLINE 0 0 0 >> c1t5001517959627219d0 ONLINE 0 0 0 >> c1t5001517BB2747BE7d0 ONLINE 0 0 0 >> cache >> c1t5001517803D007D8d0 ONLINE 0 0 0 >> c1t5001517BB2AFB592d0 ONLINE 0 0 0 >> spares >> c1t5000C5005600A6B3d0 AVAIL >> c1t5000C5005600B43Bd0 AVAIL >> >> errors: No known data errors >> >> >> >> >> >> _______________________________________________ >> OmniOS-discuss mailing list >> OmniOS-discuss at lists.omniti.com >> http://lists.omniti.com/mailman/listinfo/omnios-discuss >> > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -- Richard.Elling at RichardElling.com +1-760-896-4422 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimklimov at cos.ru Fri Nov 8 22:07:48 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Fri, 08 Nov 2013 23:07:48 +0100 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> References: <527D0EA6.2030307@cimlbr.com> <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> Message-ID: <527D60B4.9030205@cos.ru> On 2013-11-08 22:54, Richard Elling wrote: > It is a replica, but it isn't the same from an NFS perspective. The > files may have the > same contents, but the NFSv3 file handles are different because they are > in two different > file systems. For this to work, the clients would have to remount, > which blows your > requirement for transparency. Just a "little quick question": the NFS clients have a means to specify several NFS servers as equivalents (described in "man mount_nfs")... I am not sure if this is what the OP is trying to do, and if not - whether this would help him? And if the VMWare NFS client has similar means to failover between servers - with explicitly different server IPs (i.e. one serves the share, then unshares it, replicates to another, and another one shares the master data)? //Jim From mweiss at cimlbr.com Fri Nov 8 22:22:09 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Fri, 08 Nov 2013 16:22:09 -0600 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> References: <527D0EA6.2030307@cimlbr.com> <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> Message-ID: <527D6411.1010105@cimlbr.com> I believe most of you are correct. Just because I failover the data, I have not actually failed over the NFS state and therefore my setup will not work as intended. That being said, what are other people using in production for transparent failover using NFS? I have heard of RSF-1, but I was looking to homebrew this. On 11/8/2013 3:54 PM, Richard Elling wrote: > Comment below... > > On Nov 8, 2013, at 8:17 AM, Matt Weiss > wrote: > >> I am working on a failover script using OmniOS as a NFS server. >> >> According to VMware, if I mount and nfs datastore via its IP Address >> then I should be able to move the IP around and still mount it, >> however it is not working right. 
>> >> For example: >> >> On an ESXi instance (5.1U1) I mount the following NFS Datastore >> 172.16.50.100 >> /tank/vmrep >> which amounts to a UUID of 6c0c1d0d-928ef591 in /vmfs/volumes >> >> >> omni-rep1: 172.16.50.1 >> omni-rep2: 172.16.50.2 >> >> I am using zrep to failover my zfs dataset. >> http://www.bolthole.com/solaris/zrep/zrep.documentation.html >> >> Essential, it puts primary into read-only, does a zfs send/receive, >> then sets the secondary to rw. >> >> >> To expose my dataset (tank/vmrep) I am using sharenfs property of >> zfs. I have created a virtual ip to use for this purpose. >> >> #setnfsip.sh >> ipadm create-addr -T static -a 172.16.50.100/24 vmxnet3s0/nfs >> >> #removenfsip.sh >> ipadm delete-addr vmxnet3s0/nfs >> >> >> So, when I want to failover, I just do the following: >> >> #!/bin/sh >> #zfs unshare tank/vmrep >> #sleep 5 >> /scripts/removenfsip.sh >> sync >> sleep 5 >> #zrep sync tank/vmrep >> #sleep 5 >> #the following does the zfs snapshot/send/receive >> zrep failover tank/vmrep >> sleep 5 >> #ssh 172.16.50.2 /usr/sbin/zfs share tank/vmrep >> #sleep 5 >> ssh 172.16.50.2 /scripts/setnfsip.sh >> >> >> So, all goes well, omni-rep2 is now exporting tank/vmrep with NFS, it >> has the 172.16.50.100 ip address, it is the exact replica of omni-rep1. > > It is a replica, but it isn't the same from an NFS perspective. The > files may have the > same contents, but the NFSv3 file handles are different because they > are in two different > file systems. For this to work, the clients would have to remount, > which blows your > requirement for transparency. > >> >> The problem is in ESXi the datastore goes inaccessable. I can fail >> back and the datastore comes back online like fine. I can mount the >> nfs datastore as a new one with the .100 ip on omni-rep2 so it is >> being exported properly. > > Yes, works as designed. > >> >> According to the last paragraph of this >> >> https://communities.netapp.com/community/netapp-blogs/getvirtical/blog/2011/09/28/nfs-datastore-uuids-how-they-work-and-what-changed-in-vsphere-5 >> >> It should work, I have merely changed which host is broadcasting my >> datastore's IP address. >> >> I know a guy named saso? did some iScsi failover recently and noted >> it worked with NFS. I am just wondering what I am missing here. > > Saso's solution, and indeed most failover cluster solutions, use > shared storage and not > ZFS replication between the nodes. One good reason to do this is so > you can transparently > failover NFS service. > > NB, the replication method is used often for disaster recovery, > because in DR there often is > no transparent failover requirement. > -- richard > > -- > > Richard.Elling at RichardElling.com > +1-760-896-4422 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.elling at richardelling.com Fri Nov 8 22:26:04 2013 From: richard.elling at richardelling.com (Richard Elling) Date: Fri, 8 Nov 2013 14:26:04 -0800 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D60B4.9030205@cos.ru> References: <527D0EA6.2030307@cimlbr.com> <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> <527D60B4.9030205@cos.ru> Message-ID: On Nov 8, 2013, at 2:07 PM, Jim Klimov wrote: > On 2013-11-08 22:54, Richard Elling wrote: >> It is a replica, but it isn't the same from an NFS perspective. The >> files may have the >> same contents, but the NFSv3 file handles are different because they are >> in two different >> file systems. 
For this to work, the clients would have to remount, >> which blows your >> requirement for transparency. > > Just a "little quick question": the NFS clients have a means to specify > several NFS servers as equivalents (described in "man mount_nfs")... You are correct. These are mount-time options or with the "read-only" requirement. In other words, not very useful for this use case. > I am not sure if this is what the OP is trying to do, and if not - > whether this would help him? And if the VMWare NFS client has similar > means to failover between servers - with explicitly different server IPs > (i.e. one serves the share, then unshares it, replicates to another, and > another one shares the master data)? AFAIK, VMware has its own NFS client implementation that is not at all similar to the Sun/Solaris NFS client. -- richard -- Richard.Elling at RichardElling.com +1-760-896-4422 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mweiss at cimlbr.com Fri Nov 8 22:44:41 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Fri, 08 Nov 2013 16:44:41 -0600 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: References: <527D0EA6.2030307@cimlbr.com> <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> <527D60B4.9030205@cos.ru> Message-ID: <527D6959.1050407@cimlbr.com> On 11/8/2013 4:26 PM, Richard Elling wrote: > On Nov 8, 2013, at 2:07 PM, Jim Klimov > wrote: > >> On 2013-11-08 22:54, Richard Elling wrote: >>> It is a replica, but it isn't the same from an NFS perspective. The >>> files may have the >>> same contents, but the NFSv3 file handles are different because they are >>> in two different >>> file systems. For this to work, the clients would have to remount, >>> which blows your >>> requirement for transparency. >> >> Just a "little quick question": the NFS clients have a means to specify >> several NFS servers as equivalents (described in "man mount_nfs")... > Does anyone know more about NFS and OmniOS specifically? I found an article http://www.howtoforge.com/high_availability_nfs_drbd_heartbeat_p3 that moves /var/lib/nfs to the datastore then uses ls to create a symbolic link to the new location. As far as I can tell OmniOS uses /var/nfs which contains V4_state Any other directories need to be moved? That one alone didn't get it done. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skiselkov.ml at gmail.com Fri Nov 8 23:07:53 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Fri, 08 Nov 2013 23:07:53 +0000 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D28AA.7000509@cimlbr.com> References: <527D0EA6.2030307@cimlbr.com> <527D2107.6000500@gmail.com> <527D28AA.7000509@cimlbr.com> Message-ID: <527D6EC9.7070801@gmail.com> On 11/8/13, 6:08 PM, Matt Weiss wrote: > > On 11/8/2013 11:36 AM, Saso Kiselkov wrote: >> On 11/8/13, 4:17 PM, Matt Weiss wrote: >>> I am working on a failover script using OmniOS as a NFS server. >>> >>> According to VMware, if I mount and nfs datastore via its IP Address >>> then I should be able to move the IP around and still mount it, however >>> it is not working right. >>> >>> For example: >>> >>> On an ESXi instance (5.1U1) I mount the following NFS Datastore >>> 172.16.50.100 >>> /tank/vmrep >>> which amounts to a UUID of 6c0c1d0d-928ef591 in /vmfs/volumes >>> >>> >>> omni-rep1: 172.16.50.1 >>> omni-rep2: 172.16.50.2 >>> >>> I am using zrep to failover my zfs dataset. 
>>> http://www.bolthole.com/solaris/zrep/zrep.documentation.html >>> >>> Essential, it puts primary into read-only, does a zfs send/receive, then >>> sets the secondary to rw. >>> >>> >>> To expose my dataset (tank/vmrep) I am using sharenfs property of zfs. I >>> have created a virtual ip to use for this purpose. >>> >>> #setnfsip.sh >>> ipadm create-addr -T static -a 172.16.50.100/24 vmxnet3s0/nfs >>> >>> #removenfsip.sh >>> ipadm delete-addr vmxnet3s0/nfs >>> >>> >>> So, when I want to failover, I just do the following: >>> >>> #!/bin/sh >>> #zfs unshare tank/vmrep >>> #sleep 5 >>> /scripts/removenfsip.sh >>> sync >>> sleep 5 >>> #zrep sync tank/vmrep >>> #sleep 5 >>> #the following does the zfs snapshot/send/receive >>> zrep failover tank/vmrep >>> sleep 5 >>> #ssh 172.16.50.2 /usr/sbin/zfs share tank/vmrep >>> #sleep 5 >>> ssh 172.16.50.2 /scripts/setnfsip.sh >>> >>> >>> So, all goes well, omni-rep2 is now exporting tank/vmrep with NFS, it >>> has the 172.16.50.100 ip address, it is the exact replica of omni-rep1. >>> >>> The problem is in ESXi the datastore goes inaccessable. I can fail back >>> and the datastore comes back online like fine. I can mount the nfs >>> datastore as a new one with the .100 ip on omni-rep2 so it is being >>> exported properly. >>> >>> According to the last paragraph of this >>> >>> https://communities.netapp.com/community/netapp-blogs/getvirtical/blog/2011/09/28/nfs-datastore-uuids-how-they-work-and-what-changed-in-vsphere-5 >>> >>> >>> >>> It should work, I have merely changed which host is broadcasting my >>> datastore's IP address. >>> >>> I know a guy named saso? did some iScsi failover recently and noted it >>> worked with NFS. I am just wondering what I am missing here. >> I haven't done NFS datastore failover from ESXi myself, but off the top >> of my head I guess what's going haywire here is that you're setting the >> dataset read-only before moving it over. Don't do that. Simply tear down >> the IP address, migrate the dataset, set up a new NFS share on the >> target machine and then reinstate the IP address at the target. ESXi >> aggressively monitors the health of its datastores and if it gets to a >> state it can't deal with (e.g. write a datastore that refuses to process >> it), it will offline the whole datastore, awaiting administrator >> intervention. >> >> Don't worry about the datastore being offline for a while, ESXi will >> hold VM writes and the VMs themselves won't usually complain for up to >> 1-2 minutes (defaults on Windows/Linux). >> > I have experimented with > sharing / unsharing > leaving shared all the time > setting RO, RW > changing IP before/after etc > adding sleeps, syncs > > So far all have the same result. I believe the issue now lies with the > NFS state file. I may be able to get away with migrating that file with > my failover script, but we will see. > > Does anyone know how to change OmniOS NFS server parameters? I would be > wanting to put the state files into the zfs dataset somewhere to perhaps > handle that. Somehow I believe pacemaker takes care of this, so surely, > somehow NFS can be made to be a true failover. Have you tried: 1) tear down IP address on machine A 2) migrate dataset to machine B 3) share dataset on machine B 4) set up IP address on machine B In this fashion you guarantee that at any time when the IP address is visible, the datastore is visible from it as well - this is crucial for ESX. 
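A minimal sketch of that ordering, reusing the interface, address, dataset and helper-script names from Matt's scripts above (untested, only meant to illustrate the sequence):

#!/bin/sh
# on the current primary: drop the shared IP first, so ESX sees the whole
# server disappear rather than a live server with a missing export
/scripts/removenfsip.sh
# replicate and flip the dataset to the standby (zrep snapshot/send/receive)
zrep failover tank/vmrep
# on the standby: make sure the dataset is exported before the IP returns...
ssh 172.16.50.2 /usr/sbin/zfs share tank/vmrep
# ...and only then bring the shared IP up, so the export is always present
# behind a reachable IP
ssh 172.16.50.2 /scripts/setnfsip.sh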
If at any point it connects to your NFS server, but the datastore is not available, then it will declare the datastore unavailable and back off. The same thing happens with iSCSI targets. -- Saso From skiselkov.ml at gmail.com Fri Nov 8 23:14:51 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Fri, 08 Nov 2013 23:14:51 +0000 Subject: [OmniOS-discuss] NFS Datastore vmware esxi failover In-Reply-To: <527D6959.1050407@cimlbr.com> References: <527D0EA6.2030307@cimlbr.com> <3BA1C366-1A6B-448B-B299-9264183B9997@RichardElling.com> <527D60B4.9030205@cos.ru> <527D6959.1050407@cimlbr.com> Message-ID: <527D706B.3050703@gmail.com> On 11/8/13, 10:44 PM, Matt Weiss wrote: > Does anyone know more about NFS and OmniOS specifically? Don't try to migrate NFS state between machines transparently, it won't work. Copying over files won't help you either, the NFS server is in-kernel and has practically no externally persistent state. In any case, I don't think this is what you're hitting here. I believe VMWare's NFS client implementation is robust enough to handle disconnection/remount transparently. But you need to make sure that at any given time your datastore appears in the NFS export list from the server, or that the entire NFS server is inaccessible - these are the two states handled by ESX. Specifically, having the IP address assigned *before* the NFS share is exported will cause ESX to declare the datastore inaccessible and stop retrying. -- Saso From emunch at utmi.in Sat Nov 9 08:46:26 2013 From: emunch at utmi.in (Sam M) Date: Sat, 9 Nov 2013 14:16:26 +0530 Subject: [OmniOS-discuss] From Stable to Bloody and vice-versa In-Reply-To: <20131108180212.1540c5dc@sleipner.datanom.net> References: <527D163D.4070807@gnaa.net> <20131108180212.1540c5dc@sleipner.datanom.net> Message-ID: I had problems with bloody. Please see my discussion thread. I'm still unable to get packages to update in bloody due to the issue mentioned previously. Waiting for the update to the stable branch so I can have my zpool back... On 8 November 2013 22:32, Michael Rasmussen wrote: > On Fri, 08 Nov 2013 08:50:05 -0800 > Geoff Nordli wrote: > > > > > Eric, what if you are running bloody now and you want to switch to > stable when the next stable is released? > > > If you hold back further updates your bloody will be on 15007. The next > stable is 15008 so when that time comes you should be able to change to > stable. > > -- > Hilsen/Regards > Michael Rasmussen > > Get my public GnuPG keys: > michael rasmussen cc > http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E > mir datanom net > http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C > mir miras org > http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 > -------------------------------------------------------------- > FORTUNE PROVIDES QUESTIONS FOR THE GREAT ANSWERS: #19 > A: To be or not to be. > Q: What is the square root of 4b^2? > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From seb.messier at gmail.com Sat Nov 9 23:21:49 2013 From: seb.messier at gmail.com (Sebastien Messier) Date: Sat, 9 Nov 2013 18:21:49 -0500 Subject: [OmniOS-discuss] need help with apcupsd setup Message-ID: Hi, So I have set-up almost everything for my apc ups to run on my server but now I face a problem. 
They say I need to modify my /sbin/rc0 script and I really dont want to fuck shit up as I'm aware this is quite a serious file. Does anyone have a apc ups set-up already that can help me? Is there anywhere I can find a rc0 file for omnios? Thank you! This is my rc0 file attached. -- *S?bastien Messier*?tudiant au baccalaur?at en g?nie m?canique ?cole de technologie Sup?rieure Directeur m?canique du club ?tudiant Walking Machine *Walking Machine * *| Robotique mobile autonome* *P*: (514) 396-8800 x7408*| **| seb.messier at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- #!/sbin/sh # # CDDL HEADER START # # The contents of this file are subject to the terms of the # Common Development and Distribution License (the "License"). # You may not use this file except in compliance with the License. # # You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE # or http://www.opensolaris.org/os/licensing. # See the License for the specific language governing permissions # and limitations under the License. # # When distributing Covered Code, include this CDDL HEADER in each # file and include the License file at usr/src/OPENSOLARIS.LICENSE. # If applicable, add the following below this CDDL HEADER, with the # fields enclosed by brackets "[]" replaced with your own identifying # information: Portions Copyright [yyyy] [name of copyright owner] # # CDDL HEADER END # # # Copyright (c) 1984, 1986, 1987, 1988, 1989 AT&T. # All rights reserved. # # # Copyright 2009 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # # "Run Commands" for init states 0, 5 and 6. PATH=/usr/sbin:/usr/bin if [ -z "$SMF_RESTARTER" ]; then echo "Cannot be run outside smf(5)" exit 1 fi # Export boot parameters to rc scripts set -- `/usr/bin/who -r` _INIT_RUN_LEVEL="$7" # Current run-level _INIT_RUN_NPREV="$8" # Number of times previously at current run-level _INIT_PREV_LEVEL="$9" # Previous run-level set -- `/usr/bin/uname -a` _INIT_UTS_SYSNAME="$1" # Operating system name (uname -s) _INIT_UTS_NODENAME="$2" # Node name (uname -n) _INIT_UTS_RELEASE="$3" # Operating system release (uname -r) _INIT_UTS_VERSION="$4" # Operating system version (uname -v) _INIT_UTS_MACHINE="$5" # Machine class (uname -m) _INIT_UTS_ISA="$6" # Instruction set architecture (uname -p) _INIT_UTS_PLATFORM="$7" # Platform string (uname -i) export _INIT_RUN_LEVEL _INIT_RUN_NPREV _INIT_PREV_LEVEL \ _INIT_UTS_SYSNAME _INIT_UTS_NODENAME _INIT_UTS_RELEASE _INIT_UTS_VERSION \ _INIT_UTS_MACHINE _INIT_UTS_ISA _INIT_UTS_PLATFORM if [ -d /etc/rc0.d ]; then for f in /etc/rc0.d/K*; do if [ -s $f ]; then case $f in *.sh) /lib/svc/bin/lsvcrun -s $f stop;; *) /lib/svc/bin/lsvcrun $f stop;; esac fi done # System cleanup functions ONLY (things that end fast!) for f in /etc/rc0.d/S*; do if [ -s $f ]; then case $f in *.sh) /lib/svc/bin/lsvcrun -s $f start;; *) /lib/svc/bin/lsvcrun $f start ;; esac fi done fi [ -f /etc/.dynamic_routing ] && /usr/bin/rm -f /etc/.dynamic_routing trap "" 15 From mailinglists at qutic.com Sun Nov 10 21:19:44 2013 From: mailinglists at qutic.com (qutic development) Date: Sun, 10 Nov 2013 22:19:44 +0100 Subject: [OmniOS-discuss] zone initialboot Message-ID: Hi, I like the concept of initialboot for OmniOS zones a lot. Huge step forward coming from sysidcfg :-) But I also like to have a log from executing the tmp file. Any way to get this in the next stable release? 
Something like: $SCRIPT > /var/log/initialboot (line 42 in /lib/svc/method/initial-boot) Best regards Stefan Husch
From rafibeyli at gmail.com Mon Nov 11 07:45:28 2013 From: rafibeyli at gmail.com (Hafiz Rafibeyli) Date: Mon, 11 Nov 2013 09:45:28 +0200 (EET) Subject: [OmniOS-discuss] System hangs randomly In-Reply-To: References: Message-ID: <601419762.2109615.1384155928272.JavaMail.zimbra@cantekstil.com.tr> Helllo, Jim and Richard thank you for your quick answers. I will control and change controller mode to AHCI,but I think Richard right. My OS disks are new,but I have some suspicions on Intel X-25E series. regards Hafiz.
From tobi at oetiker.ch Mon Nov 11 14:05:00 2013 From: tobi at oetiker.ch (Tobias Oetiker) Date: Mon, 11 Nov 2013 15:05:00 +0100 (CET) Subject: [OmniOS-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: <1384178047.f52D50cf5.12340@b-lb-www-quonix> References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> Message-ID: We are looking at purchasing a new box. According to https://github.com/joyent/manufacturing/blob/master/parts-database.ods it seems that Joyent is using Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe soon 7K4000) disks.
http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm We asked our system integrator for an offer on these boxes and he said, that he has several in the field and they wer cool, BUT he has seen problems with the on-board LSI 2308 and the backplane, and that he would recommend installing 5 extra LSI SAS controllers to directly connect each sas disk to a controller port ... Is this the reason that joyent lists all the firmware versions and one should make sure to use exactly these versions ? Any ideas ? cheers tobi -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland http://it.oetiker.ch tobi at oetiker.ch ++41 62 775 9902 / sb: -9900 From skiselkov.ml at gmail.com Mon Nov 11 14:10:27 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Mon, 11 Nov 2013 14:10:27 +0000 Subject: [OmniOS-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> Message-ID: <5280E553.702@gmail.com> On 11/11/13, 2:05 PM, Tobias Oetiker wrote: > We are looking at purchasing a new box. According to > https://github.com/joyent/manufacturing/blob/master/parts-database.ods > it seems that Joyent is using > > Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe > soon 7K4000) disks. > http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm > > We asked our system integrator for an offer on these boxes and he > said, that he has several in the field and they wer cool, BUT he > has seen problems with the on-board LSI 2308 and the backplane, and > that he would recommend installing 5 extra LSI SAS controllers to > directly connect each sas disk to a controller port ... > > Is this the reason that joyent lists all the firmware versions and > one should make sure to use exactly these versions ? > > Any ideas ? I may be completely off, but I see no reason why using LSI SAS chips installed on stand-alone extension cards is better than using the LSI SAS chip installed on the motherboard. I smell an attempt at upsell... Cheers -- Saso From skiselkov.ml at gmail.com Mon Nov 11 14:14:09 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Mon, 11 Nov 2013 14:14:09 +0000 Subject: [OmniOS-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: <5280E553.702@gmail.com> References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> <5280E553.702@gmail.com> Message-ID: <5280E631.7080402@gmail.com> On 11/11/13, 2:10 PM, Saso Kiselkov wrote: > On 11/11/13, 2:05 PM, Tobias Oetiker wrote: >> We are looking at purchasing a new box. According to >> https://github.com/joyent/manufacturing/blob/master/parts-database.ods >> it seems that Joyent is using >> >> Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe >> soon 7K4000) disks. >> http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm >> >> We asked our system integrator for an offer on these boxes and he >> said, that he has several in the field and they wer cool, BUT he >> has seen problems with the on-board LSI 2308 and the backplane, and >> that he would recommend installing 5 extra LSI SAS controllers to >> directly connect each sas disk to a controller port ... >> >> Is this the reason that joyent lists all the firmware versions and >> one should make sure to use exactly these versions ? >> >> Any ideas ? > > I may be completely off, but I see no reason why using LSI SAS chips > installed on stand-alone extension cards is better than using the LSI > SAS chip installed on the motherboard. 
I smell an attempt at upsell... Just as a quick addendum, as far as firmware goes, it's probably prudent to follow Joyent's advice. There's little to gain by experimenting with new untried firmwares and Joyent has probably got a fair number of these boxes. That having been said, I've never seen any trouble with LSI SAS chips and expanders that would force me to rather do direct-attach instead of expanders (assuming, of course, you're not trying to attach SATA disks to a SAS backplane or some other such nonsense). -- Saso From tobi at oetiker.ch Mon Nov 11 14:28:16 2013 From: tobi at oetiker.ch (Tobias Oetiker) Date: Mon, 11 Nov 2013 15:28:16 +0100 (CET) Subject: [OmniOS-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: <5280E553.702@gmail.com> References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> <5280E553.702@gmail.com> Message-ID: Hi Saso, Today Saso Kiselkov wrote: > On 11/11/13, 2:05 PM, Tobias Oetiker wrote: > > We are looking at purchasing a new box. According to > > https://github.com/joyent/manufacturing/blob/master/parts-database.ods > > it seems that Joyent is using > > > > Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe > > soon 7K4000) disks. > > http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm > > > > We asked our system integrator for an offer on these boxes and he > > said, that he has several in the field and they wer cool, BUT he > > has seen problems with the on-board LSI 2308 and the backplane, and > > that he would recommend installing 5 extra LSI SAS controllers to > > directly connect each sas disk to a controller port ... > > > > Is this the reason that joyent lists all the firmware versions and > > one should make sure to use exactly these versions ? > > > > Any ideas ? > > I may be completely off, but I see no reason why using LSI SAS chips > installed on stand-alone extension cards is better than using the LSI > SAS chip installed on the motherboard. I smell an attempt at upsell... this system has a sas expander on the backplane, so that all 36 disks can be controlled from the single on-board LSI 2308 our integrater maintains that he has seen issues with the LSI SAS2X28 expander on the backplane of the 6047R-E1R36L. I did find some messages from 2012 that there were issues but nothing recent. cheers tobi > Cheers > -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland http://it.oetiker.ch tobi at oetiker.ch ++41 62 775 9902 / sb: -9900 From skiselkov.ml at gmail.com Mon Nov 11 14:33:30 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Mon, 11 Nov 2013 14:33:30 +0000 Subject: [OmniOS-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> <5280E553.702@gmail.com> Message-ID: <5280EABA.7040706@gmail.com> On 11/11/13, 2:28 PM, Tobias Oetiker wrote: > Hi Saso, > > Today Saso Kiselkov wrote: > >> On 11/11/13, 2:05 PM, Tobias Oetiker wrote: >>> We are looking at purchasing a new box. According to >>> https://github.com/joyent/manufacturing/blob/master/parts-database.ods >>> it seems that Joyent is using >>> >>> Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe >>> soon 7K4000) disks. 
>>> http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm >>> >>> We asked our system integrator for an offer on these boxes and he >>> said, that he has several in the field and they wer cool, BUT he >>> has seen problems with the on-board LSI 2308 and the backplane, and >>> that he would recommend installing 5 extra LSI SAS controllers to >>> directly connect each sas disk to a controller port ... >>> >>> Is this the reason that joyent lists all the firmware versions and >>> one should make sure to use exactly these versions ? >>> >>> Any ideas ? >> >> I may be completely off, but I see no reason why using LSI SAS chips >> installed on stand-alone extension cards is better than using the LSI >> SAS chip installed on the motherboard. I smell an attempt at upsell... > > this system has a sas expander on the backplane, so that all 36 > disks can be controlled from the single on-board LSI 2308 > > our integrater maintains that he has seen issues with the > LSI SAS2X28 expander on the backplane of the 6047R-E1R36L. > > I did find some messages from 2012 that there were issues but > nothing recent. I've got plenty of LSI SAS2x36 expanders (i.e. the 36-port version) - never had any trouble with them. Unless there are some serious mechanical issues (i.e. soldering problems etc.) I don't see a way there could be problems. Your integrator probably has had bad experiences with customers stuffing SATA drives and/or SSDs into the boxes and so they may just want to cover their behinds. Cheers, -- Saso From jdg117 at elvis.arl.psu.edu Mon Nov 11 14:43:28 2013 From: jdg117 at elvis.arl.psu.edu (John D Groenveld) Date: Mon, 11 Nov 2013 09:43:28 -0500 Subject: [OmniOS-discuss] need help with apcupsd setup In-Reply-To: Your message of "Sat, 09 Nov 2013 18:21:49 EST." References: Message-ID: <201311111443.rABEhSmM009549@elvis.arl.psu.edu> In message , Sebastien Messier writes: >So I have set-up almost everything for my apc ups to run on my server but >now I face a problem. They say I need to modify my /sbin/rc0 script and I >really dont want to fuck shit up as I'm aware this is quite a serious file. I've never found it necessary to modify /sbin/rc0. apcupsd hits the BATTERYLEVEL or MINUTES thresholds in /etc/opt/apcupsd/apcupsd.conf and shutdown(1M)'s. What do your tests reveal? John groenveld at acm.org From esproul at omniti.com Mon Nov 11 14:51:58 2013 From: esproul at omniti.com (Eric Sproul) Date: Mon, 11 Nov 2013 09:51:58 -0500 Subject: [OmniOS-discuss] zone initialboot In-Reply-To: References: Message-ID: On Sun, Nov 10, 2013 at 4:19 PM, qutic development wrote: > Hi, > > I like the concept of initialboot for OmniOS zones a lot. Huge step forward coming from sysidcfg :-) > > But I also like to have a log from executing the tmp file. Any way to get this in the next stable release? > > Something like: > > $SCRIPT > /var/log/initialboot > > (line 42 in /lib/svc/method/initial-boot) That sounds like a great idea. The source file for the method script is here: https://github.com/omniti-labs/illumos-omnios/blob/master/usr/src/cmd/svc/milestone/initial-boot Feel free to submit a pull request for that feature. 
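One possible shape for that logging change, sketched here untested and reusing the $SCRIPT variable Stefan pointed at in the method script:

$SCRIPT 2>&1 | tee /var/log/initialboot

which would also capture stderr and still let the script's output reach the console; the plain redirect Stefan suggested works as well if a console copy is not needed.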
Thanks, Eric From paul.jochum at alcatel-lucent.com Mon Nov 11 15:11:27 2013 From: paul.jochum at alcatel-lucent.com (Paul Jochum) Date: Mon, 11 Nov 2013 09:11:27 -0600 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL Message-ID: <5280F39F.6000807@alcatel-lucent.com> Hi All: Does anyone know the magic settings (either/both in the SuperMicro BIOS or OmniOS), to get the SOL (serial over LAN) on the IPMI on SSH to display the console? I can see the BIOS boot messages through the following: ssh cd /system1/sol1 start But, once I hit "GRUB loading stage 2", the window goes blank and I can't see anything else. My motherboard is a SuperMicro Motherboard X9SRH-7TF thanks, Paul From jimklimov at cos.ru Mon Nov 11 17:15:08 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Mon, 11 Nov 2013 18:15:08 +0100 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <5280F39F.6000807@alcatel-lucent.com> References: <5280F39F.6000807@alcatel-lucent.com> Message-ID: <5281109C.9000604@cos.ru> On 2013-11-11 16:11, Paul Jochum wrote: > Hi All: > > Does anyone know the magic settings (either/both in the SuperMicro > BIOS or OmniOS), to get the SOL (serial over LAN) on the IPMI on SSH to > display the console? I can see the BIOS boot messages through the > following: > > ssh > cd /system1/sol1 > start > > But, once I hit "GRUB loading stage 2", the window goes blank and I > can't see anything else. May I presume you have serial console configured in GRUB itself and passed to the kernel command-line? Or do you need to add something like the below snippets to menu.lst? Also make sure that the speed (default 9600, 115200, whatever) is matched in SOL/BIOS/GRUB/OS, or maybe is autodetected in SOL/BIOS side and does not matter. # Primary GRUB console is physical; allow sercon too # (must press a key there to get grub menu) serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1 terminal --timeout=10 console serial title default_bootfs syscon findroot (pool_rpool,0,a) kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS module$ /platform/i86pc/$ISADIR/boot_archive title default_bootfs sercon findroot (pool_rpool,0,a) kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=ttya module$ /platform/i86pc/$ISADIR/boot_archive Actually the "default" system (OS) console is not necessarily keyboard and screen - this depends on "eeprom" (/grub/solaris/bootenv.rc) lines, i.e. one can have this: setprop console 'ttya' #setprop console 'text' setprop ttyb-rts-dtr-off false setprop ttyb-ignore-cd true setprop ttya-rts-dtr-off false setprop ttya-ignore-cd true setprop ttyb-mode 9600,8,n,1,- setprop ttya-mode 9600,8,n,1,- For alternative serial speeds (115200, etc.) you may need to update /etc/ttydefs, adding the option to "console" and/or "contty" loops, beside changing the other locations. HTH, //Jim Klimov From landman at scalableinformatics.com Mon Nov 11 16:31:37 2013 From: landman at scalableinformatics.com (Joe Landman) Date: Mon, 11 Nov 2013 11:31:37 -0500 Subject: [OmniOS-discuss] [smartos-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> Message-ID: <52810669.70003@scalableinformatics.com> On 11/11/2013 09:05 AM, Tobias Oetiker wrote: > We are looking at purchasing a new box. 
According to > https://github.com/joyent/manufacturing/blob/master/parts-database.ods > it seems that Joyent is using > > Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe > soon 7K4000) disks. > http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm > > We asked our system integrator for an offer on these boxes and he > said, that he has several in the field and they wer cool, BUT he > has seen problems with the on-board LSI 2308 and the backplane, and > that he would recommend installing 5 extra LSI SAS controllers to > directly connect each sas disk to a controller port ... > > Is this the reason that joyent lists all the firmware versions and > one should make sure to use exactly these versions ? > > Any ideas ? I'll freely admit we are biased in this regards. We've heard of significant cooling issues on the rear mount drives, problems with backplane revisions and firmware. In general, direct connected units are strongly recommended over expander connected units (but take this as our bias being front and center). The later kernels do better with the LSI 2xxx HBA series. The SAS2IRCU utility does work nicely under the kernel on SmartOS from a few months ago. What we've found is that up to date firmware is usually a good thing (modulo bugs it introduces). There are still a few issues with SES/SGPIO control in a number of cases, but we've traced some of these to expander chip/card firmware/drive firmware interactions. -- Visit us at SC13 in Denver Nov 17-22 in booth #1919 Joseph Landman, Ph.D Founder and CEO Scalable Informatics, Inc. email: landman at scalableinformatics.com web : http://scalableinformatics.com twtr : @scalablinfo phone: +1 734 786 8423 x121 cell : +1 734 612 4615 From keith.wesolowski at joyent.com Mon Nov 11 17:03:42 2013 From: keith.wesolowski at joyent.com (Keith Wesolowski) Date: Mon, 11 Nov 2013 17:03:42 +0000 Subject: [OmniOS-discuss] [smartos-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> Message-ID: <20131111170342.GB507@joyent.com> On Mon, Nov 11, 2013 at 03:05:00PM +0100, Tobias Oetiker wrote: > We are looking at purchasing a new box. According to > https://github.com/joyent/manufacturing/blob/master/parts-database.ods > it seems that Joyent is using > > Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe > soon 7K4000) disks. > http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm > > We asked our system integrator for an offer on these boxes and he > said, that he has several in the field and they wer cool, BUT he > has seen problems with the on-board LSI 2308 and the backplane, and > that he would recommend installing 5 extra LSI SAS controllers to > directly connect each sas disk to a controller port ... That's not how this system works. The backplanes have SAS expanders and as far as I know there is no way to get an alternate backplane. The only reason to want to bypass the expanders is if you are using SATA devices. We don't use them in this system, and do not recommend them. We've had no expander-related issues in our SAS systems. The only systems we use SATA devices in are the ones with expanderless 16-bay backplanes (Richmond-A/Richmond-C). Anything more useful will require actual data as to what "problems" have supposedly occurred and under what circumstances. A crash dump, if a panic occurred, is essential. Error messages, anything at all. 
Otherwise this feels a lot like an attempt to sell you 4 extra HBAs and a giant unmanageable wad of internal cables. > Is this the reason that joyent lists all the firmware versions and > one should make sure to use exactly these versions ? Yes. Never deviate from known-to-work firmware revisions unless you encounter a bug that is specific to this version and *known* to be fixed in a different one. From mir at miras.org Mon Nov 11 17:34:44 2013 From: mir at miras.org (Michael Rasmussen) Date: Mon, 11 Nov 2013 18:34:44 +0100 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <5281109C.9000604@cos.ru> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> Message-ID: <20131111183444.15253482@sleipner.datanom.net> On Mon, 11 Nov 2013 18:15:08 +0100 Jim Klimov wrote: > > May I presume you have serial console configured in GRUB itself and > passed to the kernel command-line? Or do you need to add something > like the below snippets to menu.lst? Also make sure that the speed > (default 9600, 115200, whatever) is matched in SOL/BIOS/GRUB/OS, or > maybe is autodetected in SOL/BIOS side and does not matter. > Another often seen problem with IPMI is that Grub is using a wrong resolution. Try specifying console or 640x480 as resolution. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- I own seven-eighths of all the artists in downtown Burbank! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From Rob at Logan.com Mon Nov 11 18:57:05 2013 From: Rob at Logan.com (Rob Logan) Date: Mon, 11 Nov 2013 13:57:05 -0500 Subject: [OmniOS-discuss] [SPAM] Re: OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <5281109C.9000604@cos.ru> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> Message-ID: <52812881.8030902@Logan.com> I've been moving away from SOL and using the BIOS "redirect console after boot" option. This lets me plug in a crash cart (keyboard/monitor) that always seems to be easier to find than a PC with a db9 on it or recalling the LOM/iLO's IP. Plus some LOM don't have a buffer, so you can't see the backtrace. Rob From seb.messier at gmail.com Mon Nov 11 19:07:11 2013 From: seb.messier at gmail.com (Sebastien Messier) Date: Mon, 11 Nov 2013 14:07:11 -0500 Subject: [OmniOS-discuss] need help with apcupsd setup In-Reply-To: <201311111443.rABEhSmM009549@elvis.arl.psu.edu> References: <201311111443.rABEhSmM009549@elvis.arl.psu.edu> Message-ID: Oh Well, I haven't done any tests yet. I thought I had to do this before testing. Ill run some tests tonight and we will see. I'll post the results here. Thank you for your input! On Mon, Nov 11, 2013 at 9:43 AM, John D Groenveld wrote: > In message < > CABEuEyoSx0yhBj+iMqKqQ1xximQDHnzfFckJ8rDiybqYyJTKzA at mail.gmail.com> > , Sebastien Messier writes: > >So I have set-up almost everything for my apc ups to run on my server but > >now I face a problem. They say I need to modify my /sbin/rc0 script and I > >really dont want to fuck shit up as I'm aware this is quite a serious > file. 
> > I've never found it necessary to modify /sbin/rc0. > apcupsd hits the BATTERYLEVEL or MINUTES thresholds in > /etc/opt/apcupsd/apcupsd.conf and shutdown(1M)'s. > What do your tests reveal? > > John > groenveld at acm.org > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -- *S?bastien Messier*?tudiant au baccalaur?at en g?nie m?canique ?cole de technologie Sup?rieure Directeur m?canique du club ?tudiant Walking Machine *Walking Machine * *| Robotique mobile autonome* *P*: (514) 396-8800 x7408*| **| seb.messier at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobi at oetiker.ch Mon Nov 11 21:09:48 2013 From: tobi at oetiker.ch (Tobias Oetiker) Date: Mon, 11 Nov 2013 22:09:48 +0100 (CET) Subject: [OmniOS-discuss] [smartos-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: <20131111170342.GB507@joyent.com> References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> <20131111170342.GB507@joyent.com> Message-ID: Hi Keith, Today Keith Wesolowski wrote: > On Mon, Nov 11, 2013 at 03:05:00PM +0100, Tobias Oetiker wrote: > > > We are looking at purchasing a new box. According to > > https://github.com/joyent/manufacturing/blob/master/parts-database.ods > > it seems that Joyent is using > > > > Supermicro SuperStorage Server 6047R-E1R36L with 7K3000 (and maybe > > soon 7K4000) disks. > > http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm > > > > We asked our system integrator for an offer on these boxes and he > > said, that he has several in the field and they wer cool, BUT he > > has seen problems with the on-board LSI 2308 and the backplane, and > > that he would recommend installing 5 extra LSI SAS controllers to > > directly connect each sas disk to a controller port ... > > That's not how this system works. The backplanes have SAS expanders and > as far as I know there is no way to get an alternate backplane. they would build the system based on http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400LP.cfm then. With the X9DRD-7LN4F-JBOD motherboard and some extra LSI controller cards. > The only reason to want to bypass the expanders is if you are using SATA > devices. We don't use them in this system, and do not recommend them. we use one sata ssd for l2arc and two for zil ... you seem to get around the zil part by using zeus (expensive, is it worth it?). and no l2arc (reason?). > We've had no expander-related issues in our SAS systems. The only > systems we use SATA devices in are the ones with expanderless 16-bay > backplanes (Richmond-A/Richmond-C). > > Anything more useful will require actual data as to what "problems" have > supposedly occurred and under what circumstances. A crash dump, if a > panic occurred, is essential. Error messages, anything at all. > Otherwise this feels a lot like an attempt to sell you 4 extra HBAs and > a giant unmanageable wad of internal cables. :) there is no data, and I hope there never will be :-) > > Is this the reason that joyent lists all the firmware versions and > > one should make sure to use exactly these versions ? > > Yes. Never deviate from known-to-work firmware revisions unless you > encounter a bug that is specific to this version and *known* to be fixed > in a different one. :-) thanks! 
tobi -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland http://it.oetiker.ch tobi at oetiker.ch ++41 62 775 9902 / sb: -9900 From paul.jochum at alcatel-lucent.com Mon Nov 11 22:34:01 2013 From: paul.jochum at alcatel-lucent.com (Paul Jochum) Date: Mon, 11 Nov 2013 16:34:01 -0600 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <5281109C.9000604@cos.ru> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> Message-ID: <52815B59.2060605@alcatel-lucent.com> On 11/11/2013 11:15 AM, Jim Klimov wrote: > On 2013-11-11 16:11, Paul Jochum wrote: >> Hi All: >> >> Does anyone know the magic settings (either/both in the SuperMicro >> BIOS or OmniOS), to get the SOL (serial over LAN) on the IPMI on SSH to >> display the console? I can see the BIOS boot messages through the >> following: >> >> ssh >> cd /system1/sol1 >> start >> >> But, once I hit "GRUB loading stage 2", the window goes blank and I >> can't see anything else. > > May I presume you have serial console configured in GRUB itself and > passed to the kernel command-line? Or do you need to add something > like the below snippets to menu.lst? Also make sure that the speed > (default 9600, 115200, whatever) is matched in SOL/BIOS/GRUB/OS, or > maybe is autodetected in SOL/BIOS side and does not matter. > > > # Primary GRUB console is physical; allow sercon too > # (must press a key there to get grub menu) > serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1 > terminal --timeout=10 console serial > > title default_bootfs syscon > findroot (pool_rpool,0,a) > kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS > module$ /platform/i86pc/$ISADIR/boot_archive > > title default_bootfs sercon > findroot (pool_rpool,0,a) > kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=ttya > module$ /platform/i86pc/$ISADIR/boot_archive > > > > Actually the "default" system (OS) console is not necessarily keyboard > and screen - this depends on "eeprom" (/grub/solaris/bootenv.rc) lines, > i.e. one can have this: > > setprop console 'ttya' > #setprop console 'text' > setprop ttyb-rts-dtr-off false > setprop ttyb-ignore-cd true > setprop ttya-rts-dtr-off false > setprop ttya-ignore-cd true > setprop ttyb-mode 9600,8,n,1,- > setprop ttya-mode 9600,8,n,1,- > > For alternative serial speeds (115200, etc.) you may need to update > /etc/ttydefs, adding the option to "console" and/or "contty" loops, > beside changing the other locations. > > HTH, > //Jim Klimov > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss Hi Jim: Thank you for responding. In playing with our system, I now can do the following: 1) on the SOL interface, I can see most of the POST messages 2) I see all of grub messages (and the menu) 3) Once OmniOS starts booting, I sometimes see about the first 30 characters: "SunOS Release 5.11 Version omn" while the real console displays the full message and the login prompt. Any suggestions on how to get the SOL to display the full boot message and give a login prompt? Since it got so far, I am assuming that I am on the correct port, and the proper baud rate, etc. I have been playing around with changing /boot/solaris/bootenv.rc and /etc/ttydefs, but neither seem to be helping. 
I believe that the SuperMicro SOL is basically taking the true console port (not a tty port) and putting it on the LAN, and so changes to bootenv.rc and ttydefs do not affect it (but this is just my hypothesis right now, and would love to be proven wrong on it) The changes I made are: BIOS: (I believe I turned these all back to the default settings, but am listing them here for completeness) Advanced -> Serial Port Console Redirection -> Com1 and Com2 Console Redirection are Disabled SOL Console Redirection is Enabled Console Redirection Settings (for SOL) Terminal Type = VT100+ Bits per second = 115200 (I have tried different rates (like 9600), but then I can't see the POST messages) Redirection after BIOS POST = Always Enable in OmniOS /rpool/boot/grub Commented out the line "splashimage /boot/grub/splash.xpm.gz" and changed timeout to 5 otherwise, everything else is as default (i.e. the serial and terminal lines are still commented out, and the kernel line is still the same: "kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS) /boot/solaris/bootenv.rc (tried a lot of changes, but returned it to default since none seem to make a difference) /etc/ttydefs (tried a lot of changes, but returned it to default since none seem to make a difference) From mir at miras.org Mon Nov 11 23:28:38 2013 From: mir at miras.org (Michael Rasmussen) Date: Tue, 12 Nov 2013 00:28:38 +0100 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <52815B59.2060605@alcatel-lucent.com> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> Message-ID: <20131112002838.5d687f4d@sleipner.datanom.net> On Mon, 11 Nov 2013 16:34:01 -0600 Paul Jochum wrote: > in OmniOS > /rpool/boot/grub > Commented out the line "splashimage /boot/grub/splash.xpm.gz" > and changed timeout to 5 > otherwise, everything else is as default > (i.e. the serial and terminal lines are still commented out, and the kernel line is still the same: "kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS) > > /boot/solaris/bootenv.rc > (tried a lot of changes, but returned it to default since none seem to make a difference) > > /etc/ttydefs > (tried a lot of changes, but returned it to default since none seem to make a difference) Have you tried playing with console=text, console=force-text? It seems the default to either console=graphics or console=text. Use console=force-text to enable standard vga mode. Last time I played with IPMI standard vga mode was the only way to see the entire boot process. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- This fortune is encrypted -- get your decoder rings ready! -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From paul.jochum at alcatel-lucent.com Mon Nov 11 23:35:56 2013 From: paul.jochum at alcatel-lucent.com (Paul Jochum) Date: Mon, 11 Nov 2013 17:35:56 -0600 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <20131112002838.5d687f4d@sleipner.datanom.net> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> <20131112002838.5d687f4d@sleipner.datanom.net> Message-ID: <528169DC.9000306@alcatel-lucent.com> On 11/11/2013 05:28 PM, Michael Rasmussen wrote: > On Mon, 11 Nov 2013 16:34:01 -0600 > Paul Jochum wrote: > >> in OmniOS >> /rpool/boot/grub >> Commented out the line "splashimage /boot/grub/splash.xpm.gz" >> and changed timeout to 5 >> otherwise, everything else is as default >> (i.e. the serial and terminal lines are still commented out, and the kernel line is still the same: "kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS) >> >> /boot/solaris/bootenv.rc >> (tried a lot of changes, but returned it to default since none seem to make a difference) >> >> /etc/ttydefs >> (tried a lot of changes, but returned it to default since none seem to make a difference) > Have you tried playing with console=text, console=force-text? > It seems the default to either console=graphics or console=text. Use > console=force-text to enable standard vga mode. Last time I played with > IPMI standard vga mode was the only way to see the entire boot process. > Hi Michael: Are you suggesting the console=force-text in the menu.lst, bootenv.rc, or ttydefs? thanks, Paul From mir at miras.org Mon Nov 11 23:42:03 2013 From: mir at miras.org (Michael Rasmussen) Date: Tue, 12 Nov 2013 00:42:03 +0100 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <528169DC.9000306@alcatel-lucent.com> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> <20131112002838.5d687f4d@sleipner.datanom.net> <528169DC.9000306@alcatel-lucent.com> Message-ID: <20131112004203.48d55060@sleipner.datanom.net> On Mon, 11 Nov 2013 17:35:56 -0600 Paul Jochum wrote: > > On 11/11/2013 05:28 PM, Michael Rasmussen wrote: > > On Mon, 11 Nov 2013 16:34:01 -0600 > > Paul Jochum wrote: > > > >> in OmniOS > >> /rpool/boot/grub > >> Commented out the line "splashimage /boot/grub/splash.xpm.gz" > >> and changed timeout to 5 > >> otherwise, everything else is as default > >> (i.e. the serial and terminal lines are still commented out, and the kernel line is still the same: "kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS) > >> > >> /boot/solaris/bootenv.rc > >> (tried a lot of changes, but returned it to default since none seem to make a difference) > >> > >> /etc/ttydefs > >> (tried a lot of changes, but returned it to default since none seem to make a difference) > > Have you tried playing with console=text, console=force-text? > > It seems the default to either console=graphics or console=text. Use > > console=force-text to enable standard vga mode. Last time I played with > > IPMI standard vga mode was the only way to see the entire boot process. > > > Hi Michael: > > Are you suggesting the console=force-text in the menu.lst, bootenv.rc, or ttydefs? > It has to be added to the boot line in menu.lst. 
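For example, taking the kernel$ line you already have, it would become something like this (a sketch only, adjust the entry you actually boot):

  kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS,console=force-text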
Read more here: http://docs.oracle.com/cd/E26502_01/html/E28983/glyas.html -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- If you lose your temper at a newspaper columnist, he'll get rich, or famous or both. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From jimklimov at cos.ru Tue Nov 12 09:35:48 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Tue, 12 Nov 2013 10:35:48 +0100 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <52815B59.2060605@alcatel-lucent.com> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> Message-ID: <5281F674.906@cos.ru> On 2013-11-11 23:34, Paul Jochum wrote: > Thank you for responding. In playing with our system, I now can do > the following: > 1) on the SOL interface, I can see most of the POST messages > 2) I see all of grub messages (and the menu) > 3) Once OmniOS starts booting, I sometimes see about the first 30 > characters: > "SunOS Release 5.11 Version omn" > while the real console displays the full message and the login prompt. This did happen to me on SOL with earlier Intel MFSYS servers and I've never worked around it well. However when we did similar setup recently on a newer MFSYS box (maybe v2, maybe just newer firmware) the "serial" console-over-LAN worked with no hiccups. I've tried the old box now, with newest OI "release" - it's the same: Press any key to continue. Press any key to continue. Press any key to continue. OpenIndiana Build oi_151a8 64-bit (illumos 7256a34efe) SunOS Release 5.11 - Copyright 1983-2010 Oracle and/or its affiliates. All rights reserved. Use is subject to license terms. Mo On a side note, I'd love to find a way to report to the "screen" console that the system is booted and login should be sought on the serial port. Because KVMing or otherwise looking at a blank screen makes the hand jerk involuntarily towards the reset button ;) But things like /dev/console are only routed to the current console (serial). I believe, virtual terminals (vt{1-6}) do have a way to hook up the physical console, I just did not find it back then :-\ Likewise, in this aspect I miss Linux approach to /dev/console (or some similar device?) being a sort of multiplexor that has one input but spews over many outputs like serial, screen and UDP-sink at once... //Jim From jimklimov at cos.ru Tue Nov 12 10:06:18 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Tue, 12 Nov 2013 11:06:18 +0100 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <5281F674.906@cos.ru> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> <5281F674.906@cos.ru> Message-ID: <5281FD9A.3090007@cos.ru> On 2013-11-12 10:35, Jim Klimov wrote: > On 2013-11-11 23:34, Paul Jochum wrote: >> Thank you for responding. 
In playing with our system, I now can do >> the following: >> 1) on the SOL interface, I can see most of the POST messages >> 2) I see all of grub messages (and the menu) >> 3) Once OmniOS starts booting, I sometimes see about the first 30 >> characters: >> "SunOS Release 5.11 Version omn" >> while the real console displays the full message and the login prompt. > > This did happen to me on SOL with earlier Intel MFSYS servers and > I've never worked around it well. However when we did similar setup > recently on a newer MFSYS box (maybe v2, maybe just newer firmware) > the "serial" console-over-LAN worked with no hiccups. I'd add that with Linux servers this old box was also buggy, in i.e. "vi" it often looped output or broke sessions, while the new box had no hiccups as I said. May be there is a problem with flow-control or something - apparently fixed in newer hardware or firmware - while there is lots of text output (i.e. the GRUB menu in dumb mode), this almost predictably breaks on certain output. So we are looking at scheduling some downtime to update this old MFSYS chassis's firmware, maybe you should also check with your server's vendor if there are new firmwares, whether they announce such a fix or not HTH, //Jim Klimov From keith.wesolowski at joyent.com Mon Nov 11 22:56:32 2013 From: keith.wesolowski at joyent.com (Keith Wesolowski) Date: Mon, 11 Nov 2013 22:56:32 +0000 Subject: [OmniOS-discuss] [smartos-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> <20131111170342.GB507@joyent.com> Message-ID: <20131111225632.GA685@joyent.com> On Mon, Nov 11, 2013 at 10:09:48PM +0100, Tobias Oetiker wrote: > they would build the system based on > http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400LP.cfm > then. With the X9DRD-7LN4F-JBOD motherboard and some extra LSI > controller cards. Ok. > > The only reason to want to bypass the expanders is if you are using SATA > > devices. We don't use them in this system, and do not recommend them. > > we use one sata ssd for l2arc and two for zil ... you seem to get around > the zil part by using zeus (expensive, is it worth it?). and no > l2arc (reason?). The ZeusRAM costs something like $2000. By the time you've purchased a SATA slog and 5 extra HBAs, you can't be far from that. And the ZeusRAM was designed and developed specifically for this function -- not true of 99.9% or more of the SATA SSDs on the market. It has all the desired attributes and is blazingly fast. Perhaps most importantly, I don't worry too much about the firmware in the thing because STEC has a good track record with us. I think we've had only 2 firmware bugs in 7 years or so. Put all that together, and yeah I'd say it's well worth it. There's no way I'd use a SATA slog and a huge mess of DA HBAs instead. We don't use the L2ARC because (a) it's not clear how to monetise that in our business, and (b) it's unclear that there'd be much demand for it. Most of the people who fall out of DRAM (or assume that they are because they don't bother to collect data) just insist on all-SSD pools. A solution that is intermediate in cost and working set size seems at this time to enjoy limited demand. Obviously, build whatever's best for you. But if you do go the mega-DA route, I'll offer you at least one extra piece of advice. You need to implement a topo map for that system if you want the ability to have indicators. With that number of disks, you surely do. 
That also means you need (a) consistent SMBIOS identity for your system and (b) consistent cabling of the 36 cables to the 5 HBAs. If you don't have all of this, you'll light the wrong LEDs when a fault is diagnosed; operational hilarity is sure to ensue! From skiselkov.ml at gmail.com Tue Nov 12 15:04:14 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Tue, 12 Nov 2013 15:04:14 +0000 Subject: [OmniOS-discuss] [smartos-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: <20131111225632.GA685@joyent.com> References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> <20131111170342.GB507@joyent.com> <20131111225632.GA685@joyent.com> Message-ID: <5282436E.9030600@gmail.com> On 11/11/13, 10:56 PM, Keith Wesolowski wrote: > The ZeusRAM costs something like $2000. By the time you've purchased a > SATA slog and 5 extra HBAs, you can't be far from that. And the ZeusRAM > was designed and developed specifically for this function -- not true of > 99.9% or more of the SATA SSDs on the market. It has all the desired > attributes and is blazingly fast. Perhaps most importantly, I don't > worry too much about the firmware in the thing because STEC has a good > track record with us. I think we've had only 2 firmware bugs in 7 years > or so. Put all that together, and yeah I'd say it's well worth it. > There's no way I'd use a SATA slog and a huge mess of DA HBAs instead. Been running SATA SSDs for L2ARC with SATA/SAS interposers for a while now without any issues. It's not ideal, but workable. One really has to think of it as a trade off. If you want performance, no questions asked, be prepared to drop the $2-4k for 1-2 ZeusRAMs per machine. For people who are willing to compromise a little bit, you can still get decent performance at about 1/4 of that price by using a pair of Intel DC S3700 200GB SSDs attached via interposers. YMMV. Cheers, -- Saso From hakansom at ohsu.edu Tue Nov 12 18:13:37 2013 From: hakansom at ohsu.edu (Marion Hakanson) Date: Tue, 12 Nov 2013 10:13:37 -0800 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: Message from Jim Klimov of "Tue, 12 Nov 2013 10:35:48 +0100." <5281F674.906@cos.ru> Message-ID: <201311121813.rACIDbHu005705@kyklops.ohsu.edu> jimklimov at cos.ru said: > On a side note, I'd love to find a way to report to the "screen" console that > the system is booted and login should be sought on the serial port. Because > KVMing or otherwise looking at a blank screen makes the hand jerk > involuntarily towards the reset button ;) But things like /dev/console are > only routed to the current console (serial). I believe, virtual terminals > (vt{1-6}) do have a way to hook up the physical console, I just did not find > it back then :-\ Likewise, in this aspect I miss Linux approach to /dev/ > console (or some similar device?) being a sort of multiplexor that has one > input but spews over many outputs like serial, screen and UDP-sink at once... I treat this as a hardware problem, which needs a hardware solution. In this case, I print up labels that say, "Serial Console Only" and attach them to the front and rear of the server. If you really need the VGA display to be active after boot, you can always enable an X11 desktop session, but it won't connect directly to the OS /dev/console. To elaborate, I've not found a reliable way to have GRUB activate both the serial console and the VGA console at the same time, on all the variety of hardware we have here. 
So, I think it is wise to believe this comment in the OmniOS "menu.lst" file: # WARNING: do not enable grub serial console when BIOS console serial # redirection is active. It's a little misleading, but I've found that on some hardware this means you must leave the "terminal ..." directive commented out, but you do still need the "serial ..." directive. BTW, I have found Solaris-10, OpenIndiana-151a7, and OmniOS all behave slightly differently with regard to setting up serial console (at the OS level). For example, Solaris-10 cannot have multiple "-B blah" args on the kernel line in menu.lst, while the Illumos-based systems can do so. That's the main difference. All of them benefit from "-B" args, such as "console=ttyb" (or ttya), and 'ttyb-mode="115200,8,n,1,-", and all of them need /etc/ttydefs tweaked to match the baud rate of the BIOS serial redirection settings. Some pay attention to "eeprom" settings and don't need the menu.lst "-B" settings, but it doesn't hurt to set both on all such systems. Regards, Marion From mweiss at cimlbr.com Tue Nov 12 19:42:34 2013 From: mweiss at cimlbr.com (Matt Weiss) Date: Tue, 12 Nov 2013 13:42:34 -0600 Subject: [OmniOS-discuss] [smartos-discuss] SuperStorage Server 6047R-E1R36L SAS Question In-Reply-To: <20131111225632.GA685@joyent.com> References: <1384178047.f52D50cf5.12340@b-lb-www-quonix> <20131111170342.GB507@joyent.com> <20131111225632.GA685@joyent.com> Message-ID: <528284AA.4010206@cimlbr.com> On 11/11/2013 4:56 PM, Keith Wesolowski wrote: > On Mon, Nov 11, 2013 at 10:09:48PM +0100, Tobias Oetiker wrote: > >> they would build the system based on >> http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400LP.cfm >> then. With the X9DRD-7LN4F-JBOD motherboard and some extra LSI >> controller cards. > Ok. > >>> The only reason to want to bypass the expanders is if you are using SATA >>> devices. We don't use them in this system, and do not recommend them. >> we use one sata ssd for l2arc and two for zil ... you seem to get around >> the zil part by using zeus (expensive, is it worth it?). and no >> l2arc (reason?). > The ZeusRAM costs something like $2000. By the time you've purchased a > SATA slog and 5 extra HBAs, you can't be far from that. And the ZeusRAM > was designed and developed specifically for this function -- not true of > 99.9% or more of the SATA SSDs on the market. It has all the desired > attributes and is blazingly fast. Perhaps most importantly, I don't > worry too much about the firmware in the thing because STEC has a good > track record with us. I think we've had only 2 firmware bugs in 7 years > or so. Put all that together, and yeah I'd say it's well worth it. > There's no way I'd use a SATA slog and a huge mess of DA HBAs instead. > > We don't use the L2ARC because (a) it's not clear how to monetise that > in our business, and (b) it's unclear that there'd be much demand for > it. Most of the people who fall out of DRAM (or assume that they are > because they don't bother to collect data) just insist on all-SSD pools. > A solution that is intermediate in cost and working set size seems at > this time to enjoy limited demand. > > Obviously, build whatever's best for you. But if you do go the mega-DA > route, I'll offer you at least one extra piece of advice. You need to > implement a topo map for that system if you want the ability to have > indicators. With that number of disks, you surely do. 
That also means > you need (a) consistent SMBIOS identity for your system and (b) > consistent cabling of the 36 cables to the 5 HBAs. If you don't have > all of this, you'll light the wrong LEDs when a fault is diagnosed; > operational hilarity is sure to ensue! > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss I am putting in production TODAY in fact this exact motherboard and a ZuesRAM drive. I have been testing / benchmarking this system for a couple of months using an Intel SSD for a zil. Bought the ZuesRAM before final production and benchmarked again. The system has been stable so far, I do not have any issues with it to date. I bought 1 lsi 9207-8i card for about $250 on newegg, just to have as a spare of the onboard happens to go out for some reason. With the power of ZFS and being able to just import your pool elsewhere if need be, I really don't understand why you would want 5 HBA's. It sounds like a load of bull to me. The ZuesRAM, so far I am a fan, it is getting significantly better performance than the Intel 520 I used for testing, which of course is no surprise. I also tested an Intel 335. I had no intention on those being permanent, just wanted something to test with. The Zues is very very stable, the IOPS and transfer rates are constant, whereas the SSD zil would fluctuate up and down a bit. 7x - 2 vdev mirrors, and a zuesram with an All-In-One Napp-it Omni OS box, I am getting 40k IOPS with vmware IO Analyzer. From henson at acm.org Tue Nov 12 22:52:17 2013 From: henson at acm.org (Paul B. Henson) Date: Tue, 12 Nov 2013 14:52:17 -0800 Subject: [OmniOS-discuss] illumos power management Message-ID: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> I think the storage server I'm working on at home (current omnios stable) is spinning down disks; whenever I get back to it after not touching it for a while, anything touching my export pool (dual 6 x 3TB WD Red RAIDZ2) takes a long time to respond, and then goes back to normal. My current power.conf is: ----- autopm enable autoS3 disable cpupm enable cpu_deep_idle enable cpu-threshold 1s ----- I had thought that when running on server class hardware the default was not to spin down drives, but perhaps I was mistaken? Hmm, or maybe the default is to just skip power management completely, and explicitly turning it on turned on everything. I'm thinking about adding "system-threshold always-on" to the configuration, which I guess will disable all power down functionality, other than CPU, which is controlled by the "cpu-threshold" parameter? What other power savings would this turnoff? I am in general in favor of saving energy :), other than for the disks, as the latency is too high and causes too much of a delay, and it's not clear if setting the system-threshold to stop spinning down disks will also disable other potentially desirable power savings. I guess the other option would be to explicitly configure device-thresholds for all of the disks individually. Is there any way to list what components of a system support power management? While on the subject, any thoughts on event-mode vs poll-mode for cpupm? It's not clear from the man page what the default is if you just set it to "enable". Thanks. From henson at acm.org Wed Nov 13 00:36:17 2013 From: henson at acm.org (Paul B. 
Henson) Date: Tue, 12 Nov 2013 16:36:17 -0800 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <52815B59.2060605@alcatel-lucent.com> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> Message-ID: <08f601cee008$5fd07420$1f715c60$@acm.org> > From: Paul Jochum > Sent: Monday, November 11, 2013 2:34 PM > > neither seem to be helping. I believe that the SuperMicro SOL is > basically taking the true console port (not a tty port) and putting it > on the LAN, and so changes to bootenv.rc and ttydefs do not affect it > (but this is just my hypothesis right now, and would love to be proven > wrong on it) Well, sort of, but not exactly. When you have console redirection enabled in the bios, it takes anything written to the standard VGA output and duplicates it to the serial port -- but *only* for strings written with the BIOS output functions. This covers POST, and typically boot loaders, but once the OS starts up, it drives the vga console directly, rather than through the BIOS, and the output only shows up on the VGA console. So, if you want the OS to use a serial console you need to configure it in the OS. I generally don't bother configuring grub explicitly for a serial console, as the bios redirection typically works fine for it. You just need to be sure to disable the splashscreen, as that switches the VGA console to graphics mode rather than text mode which the bios can't replicate. Looks like you already did that. As far as configuring the OS, I usually set the boot variables with eeprom: # eeprom console console=ttyc # eeprom ttyc-mode ttyc-mode=115200,8,n,1,- At least on my supermicro motherboard, there are two physical serial ports, and then the virtual SOL port shows up as a third. The last piece is to update the definition of the console in ttydefs: console:115200 hupcl opost onlcr:115200::console If you don't, as soon as the OS tries to start the login prompt on the console, there's the baud rate mismatch and it dies :(. Unless I'm misremembering/forgetting something, that's pretty much all I needed to do to get it going on my box... From paul.jochum at alcatel-lucent.com Wed Nov 13 01:53:12 2013 From: paul.jochum at alcatel-lucent.com (Paul Jochum) Date: Tue, 12 Nov 2013 19:53:12 -0600 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <08f601cee008$5fd07420$1f715c60$@acm.org> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> <08f601cee008$5fd07420$1f715c60$@acm.org> Message-ID: <5282DB88.6020707@alcatel-lucent.com> Thank you all, I got this to work. Here are my notes, in case anyone else runs into this. Motherboard: SuperMicro X9SRH-7TF BIOS FW version: 3.00 IPMI FW version: 2.40 1) When running the sol1 interface, do not run the Java version of the console at the same time. 2) When in doubt, reboot the IPMI. Looks like the IPMI has a tendency to hang, and a reboot (or a "Unit Reset" from the web interface) helps to clear things up. 
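(As a side note, when ipmitool is available on some host, the BMC can also be reset from a shell rather than the web interface, for instance over the network:

  ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> mc reset cold

Treat that as a sketch; the interface and credentials depend on the board and on how the BMC is configured.)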
1) In the BIOS (most of these are defaults, but here for completeness, and I can cut an paste from the console now :) ): Advanced -> Serial Port Console Redirection -> Com1, Com2 and EMS Console Redirection should be disabled SOL Console Redirection should be enabled SOL Console Redirection Settings -> Terminal Type [VT100+] Bits per second [115200] Data Bits [8] Parity [None] Stop Bits [1] Flow Control [None] VT-UTF8 Combo Key Support [Enabled] Recorder Mode [Disabled] Resolution 100x31 [Enabled] Legacy OS Redirection Resolution [80x25] Putty KeyPad [VT100] Redirection After BIOS POST [Always Enable] 2) Once the system is booted, change the following files: //boot/grub/menu.lst Comment out the line splashimage /boot/grub/splash.xpm.gz /etc/ttydefs replace the line "console:9600 hupcl opost onlcr:9600::console" with "console:115200 hupcl opost onlcr:115200::console" /boot/solaris/bootenv.rc replace the line "setprop ttyc-mode 9600,8,n,1,-" with "setprop ttyc-mode 115200,8,n,1,-" replace the line "setprop console text" with "setprop console ttyc" On 11/12/2013 06:36 PM, Paul B. Henson wrote: >> From: Paul Jochum >> Sent: Monday, November 11, 2013 2:34 PM >> >> neither seem to be helping. I believe that the SuperMicro SOL is >> basically taking the true console port (not a tty port) and putting it >> on the LAN, and so changes to bootenv.rc and ttydefs do not affect it >> (but this is just my hypothesis right now, and would love to be proven >> wrong on it) > Well, sort of, but not exactly. When you have console redirection enabled in > the bios, it takes anything written to the standard VGA output and > duplicates it to the serial port -- but *only* for strings written with the > BIOS output functions. This covers POST, and typically boot loaders, but > once the OS starts up, it drives the vga console directly, rather than > through the BIOS, and the output only shows up on the VGA console. So, if > you want the OS to use a serial console you need to configure it in the OS. > > I generally don't bother configuring grub explicitly for a serial console, > as the bios redirection typically works fine for it. You just need to be > sure to disable the splashscreen, as that switches the VGA console to > graphics mode rather than text mode which the bios can't replicate. Looks > like you already did that. As far as configuring the OS, I usually set the > boot variables with eeprom: > > # eeprom console > console=ttyc > # eeprom ttyc-mode > ttyc-mode=115200,8,n,1,- > > At least on my supermicro motherboard, there are two physical serial ports, > and then the virtual SOL port shows up as a third. > > The last piece is to update the definition of the console in ttydefs: > > console:115200 hupcl opost onlcr:115200::console > > If you don't, as soon as the OS tries to start the login prompt on the > console, there's the baud rate mismatch and it dies :(. > > Unless I'm misremembering/forgetting something, that's pretty much all I > needed to do to get it going on my box... > From turbo124 at gmail.com Wed Nov 13 05:02:04 2013 From: turbo124 at gmail.com (David Bomba) Date: Wed, 13 Nov 2013 16:02:04 +1100 Subject: [OmniOS-discuss] HP DL180 refuses to shutdown or reboot Message-ID: Hi Guys, I've installed OmniOS on many HP DL180 G6 boxes with great success, however the latest server refuses to either reboot or shutdown. There are two configuration changes which differentiate this server from others, 1 a LSI-9201-8i HBA and 2 this server uses usb media for booting. 
The cli goes through the usual processing of shutting down, killing user processes, and syncing the file system. The server otherwise operates flawlessly. I'm wondering if anyone can point me in the right direction to debug this situation? Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From henson at acm.org Wed Nov 13 07:23:33 2013 From: henson at acm.org (Paul B. Henson) Date: Tue, 12 Nov 2013 23:23:33 -0800 Subject: [OmniOS-discuss] HP DL180 refuses to shutdown or reboot In-Reply-To: References: Message-ID: Have you tried disabling fast reboot? Run 'reboot -p', if that works ok you can permanently disable it with svccfg: http://docs.oracle.com/cd/E26502_01/html/E28983/gktlr.html > On Nov 12, 2013, at 9:02 PM, David Bomba wrote: > > Hi Guys, > > I've installed OmniOS on many HP DL180 G6 boxes with great success, however the latest server refuses to either reboot or shutdown. > > There are two configuration changes which differentiate this server from others, 1 a LSI-9201-8i HBA and 2 this server uses usb media for booting. > > The cli goes through the usual processing of shutting down, killing user processes, and syncing the file system. > > The server otherwise operates flawlessly. > > I'm wondering if anyone can point me in the right direction to debug this situation? > > Dave > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From skiselkov.ml at gmail.com Wed Nov 13 09:36:27 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Wed, 13 Nov 2013 09:36:27 +0000 Subject: [OmniOS-discuss] OmniOS / SuperMicro motherboard and settings for IPMI SOL In-Reply-To: <5282DB88.6020707@alcatel-lucent.com> References: <5280F39F.6000807@alcatel-lucent.com> <5281109C.9000604@cos.ru> <52815B59.2060605@alcatel-lucent.com> <08f601cee008$5fd07420$1f715c60$@acm.org> <5282DB88.6020707@alcatel-lucent.com> Message-ID: <5283481B.2070406@gmail.com> On 11/13/13, 1:53 AM, Paul Jochum wrote: > Thank you all, I got this to work. > > Here are my notes, in case anyone else runs into this. > > Motherboard: SuperMicro X9SRH-7TF > BIOS FW version: 3.00 > IPMI FW version: 2.40 > > 1) When running the sol1 interface, do not run the Java version of the > console at the same time. > 2) When in doubt, reboot the IPMI. Looks like the IPMI has a tendency > to hang, and a reboot (or a "Unit Reset" from the web interface) helps > to clear things up. 
> > > 1) In the BIOS (most of these are defaults, but here for completeness, > and I can cut an paste from the console now :) ): > Advanced -> Serial Port Console Redirection -> > Com1, Com2 and EMS Console Redirection should be disabled > SOL Console Redirection should be enabled > SOL Console Redirection Settings -> > Terminal Type [VT100+] > Bits per second [115200] > Data Bits [8] > Parity [None] > Stop Bits [1] > Flow Control [None] > VT-UTF8 Combo Key Support [Enabled] > Recorder Mode [Disabled] > Resolution 100x31 [Enabled] > Legacy OS Redirection Resolution [80x25] > Putty KeyPad [VT100] > Redirection After BIOS POST [Always Enable] > > 2) Once the system is booted, change the following files: > //boot/grub/menu.lst > Comment out the line splashimage /boot/grub/splash.xpm.gz > > /etc/ttydefs > replace the line "console:9600 hupcl opost > onlcr:9600::console" > with "console:115200 hupcl opost onlcr:115200::console" > > /boot/solaris/bootenv.rc > replace the line "setprop ttyc-mode 9600,8,n,1,-" > with "setprop ttyc-mode 115200,8,n,1,-" > > replace the line "setprop console text" > with "setprop console ttyc" Thanks for following up on this Paul, appreciate it. -- Saso From thibault.vincent at smartjog.com Wed Nov 13 10:24:12 2013 From: thibault.vincent at smartjog.com (Thibault VINCENT) Date: Wed, 13 Nov 2013 11:24:12 +0100 Subject: [OmniOS-discuss] HP DL180 refuses to shutdown or reboot In-Reply-To: References: Message-ID: <5283534C.1010301@smartjog.com> On 11/13/2013 06:02 AM, David Bomba wrote: > The cli goes through the usual processing of shutting down, killing user > processes, and syncing the file system. This looks like the issue I had on Dell servers, 12th generation. There is an open bug but I'm not able to work with this hardware anymore, maybe this could be fixed for all at the same time. https://www.illumos.org/issues/4052 Cheers -- Thibault VINCENT - Infrastructure Engineer SmartJog | T: +33 1 5868 6238 27 Blvd Hippolyte Marqu?s, 94200 Ivry-sur-Seine, France www.smartjog.com | a TDF Group company -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 295 bytes Desc: OpenPGP digital signature URL: From tobi at oetiker.ch Wed Nov 13 13:59:30 2013 From: tobi at oetiker.ch (Tobias Oetiker) Date: Wed, 13 Nov 2013 14:59:30 +0100 (CET) Subject: [OmniOS-discuss] zfs and small files Message-ID: We have this customer who loves to put tons of files into single directories. The other day he found that removing a directory with 7k files from raidz2 pool (with ssd log device) took about 10 minutes ... is this to be expected, or should this be faster on a properly configured system ? cheers tobi -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland http://it.oetiker.ch tobi at oetiker.ch ++41 62 775 9902 / sb: -9900 From peter.tribble at gmail.com Wed Nov 13 14:37:55 2013 From: peter.tribble at gmail.com (Peter Tribble) Date: Wed, 13 Nov 2013 14:37:55 +0000 Subject: [OmniOS-discuss] zfs and small files In-Reply-To: References: Message-ID: On Wed, Nov 13, 2013 at 1:59 PM, Tobias Oetiker wrote: > We have this customer who loves to put tons of files into single > directories. The other day he found that removing a directory with > 7k files from raidz2 pool (with ssd log device) took about 10 > minutes ... > > is this to be expected, or should this be faster on a properly > configured system ? > That's a tiny number of files. Ought to be essentially instant. 
I routinely have up to a million files in a directory. That can take a few minutes to delete, but I don't have any SSDs at all on those systems. This isn't being done over NFS, is it? That can dramatically slow operations like this, even with a log. -- -Peter Tribble http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobi at oetiker.ch Wed Nov 13 14:46:02 2013 From: tobi at oetiker.ch (Tobias Oetiker) Date: Wed, 13 Nov 2013 15:46:02 +0100 (CET) Subject: [OmniOS-discuss] zfs and small files In-Reply-To: References: Message-ID: Hi Peter, Today Peter Tribble wrote: > On Wed, Nov 13, 2013 at 1:59 PM, Tobias Oetiker wrote: > > > We have this customer who loves to put tons of files into single > > directories. The other day he found that removing a directory with > > 7k files from raidz2 pool (with ssd log device) took about 10 > > minutes ... > > > > is this to be expected, or should this be faster on a properly > > configured system ? > > > > That's a tiny number of files. Ought to be essentially instant. > I routinely have up to a million files in a directory. That can take > a few minutes to delete, but I don't have any SSDs at all on > those systems. > > This isn't being done over NFS, is it? That can dramatically slow > operations like this, even with a log. nope it is local ... are you running this on raidz2 ? cheers tobi -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland http://it.oetiker.ch tobi at oetiker.ch ++41 62 775 9902 / sb: -9900 From peter.tribble at gmail.com Wed Nov 13 15:35:34 2013 From: peter.tribble at gmail.com (Peter Tribble) Date: Wed, 13 Nov 2013 15:35:34 +0000 Subject: [OmniOS-discuss] zfs and small files In-Reply-To: References: Message-ID: On Wed, Nov 13, 2013 at 2:46 PM, Tobias Oetiker wrote: > Hi Peter, > > Today Peter Tribble wrote: > > > On Wed, Nov 13, 2013 at 1:59 PM, Tobias Oetiker wrote: > > > > > We have this customer who loves to put tons of files into single > > > directories. The other day he found that removing a directory with > > > 7k files from raidz2 pool (with ssd log device) took about 10 > > > minutes ... > > > > > > is this to be expected, or should this be faster on a properly > > > configured system ? > > > > > > > That's a tiny number of files. Ought to be essentially instant. > > I routinely have up to a million files in a directory. That can take > > a few minutes to delete, but I don't have any SSDs at all on > > those systems. > > > > This isn't being done over NFS, is it? That can dramatically slow > > operations like this, even with a log. > > nope it is local ... > > are you running this on raidz2 ? > Yes. On old X4540s. Deletion of a directory with 7k files in it takes about 0.3s -- -Peter Tribble http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ci4 at outlook.com Wed Nov 13 16:14:34 2013 From: ci4 at outlook.com (Chavdar Ivanov) Date: Wed, 13 Nov 2013 16:14:34 +0000 Subject: [OmniOS-discuss] zfs and small files In-Reply-To: References: Message-ID: From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On Behalf Of Peter Tribble Sent: 13 November 2013 15:36 To: Tobias Oetiker Cc: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs and small files On Wed, Nov 13, 2013 at 2:46 PM, Tobias Oetiker > wrote: Hi Peter, Today Peter Tribble wrote: > On Wed, Nov 13, 2013 at 1:59 PM, Tobias Oetiker > wrote: > > > We have this customer who loves to put tons of files into single > > directories. The other day he found that removing a directory with > > 7k files from raidz2 pool (with ssd log device) took about 10 > > minutes ... > > > > is this to be expected, or should this be faster on a properly > > configured system ? > > > > That's a tiny number of files. Ought to be essentially instant. > I routinely have up to a million files in a directory. That can take > a few minutes to delete, but I don't have any SSDs at all on > those systems. > > This isn't being done over NFS, is it? That can dramatically slow > operations like this, even with a log. nope it is local ... are you running this on raidz2 ? Yes. On old X4540s. Deletion of a directory with 7k files in it takes about 0.3s [ci] It will take about 0.3s IIF you have snapshots of the ZFS. It will take looooong time if this is the only snapshot of the system. Chavdar -- -Peter Tribble http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ci4 at outlook.com Wed Nov 13 16:16:39 2013 From: ci4 at outlook.com (Chavdar Ivanov) Date: Wed, 13 Nov 2013 16:16:39 +0000 Subject: [OmniOS-discuss] zfs and small files In-Reply-To: References: Message-ID: From: Chavdar Ivanov [mailto:ci4 at outlook.com] Sent: 13 November 2013 16:15 To: 'Peter Tribble'; 'Tobias Oetiker' Cc: omnios-discuss at lists.omniti.com Subject: RE: [OmniOS-discuss] zfs and small files From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On Behalf Of Peter Tribble Sent: 13 November 2013 15:36 To: Tobias Oetiker Cc: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs and small files On Wed, Nov 13, 2013 at 2:46 PM, Tobias Oetiker > wrote: Hi Peter, Today Peter Tribble wrote: > On Wed, Nov 13, 2013 at 1:59 PM, Tobias Oetiker > wrote: > > > We have this customer who loves to put tons of files into single > > directories. The other day he found that removing a directory with > > 7k files from raidz2 pool (with ssd log device) took about 10 > > minutes ... > > > > is this to be expected, or should this be faster on a properly > > configured system ? > > > > That's a tiny number of files. Ought to be essentially instant. > I routinely have up to a million files in a directory. That can take > a few minutes to delete, but I don't have any SSDs at all on > those systems. > > This isn't being done over NFS, is it? That can dramatically slow > operations like this, even with a log. nope it is local ... are you running this on raidz2 ? Yes. On old X4540s. Deletion of a directory with 7k files in it takes about 0.3s [ci] It will take about 0.3s IIF you have snapshots of the ZFS. It will take looooong time if this is the only snapshot of the system. 
Chavdar [ci] (my usual "method" of deleting many files on a system without a snapshot is to create one, delete them as required, then remove the snapshot at a convenient time - at the end of the day or so. CI -- -Peter Tribble http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafael.vanoni at pluribusnetworks.com Wed Nov 13 17:10:53 2013 From: rafael.vanoni at pluribusnetworks.com (Rafael Vanoni) Date: Wed, 13 Nov 2013 09:10:53 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> Message-ID: > On Nov 12, 2013, at 5:52 PM, "Paul B. Henson" wrote: > >> I think the storage server I'm working on at home (current omnios stable) is >> spinning down disks; whenever I get back to it after not touching it for a >> while, anything touching my export pool (dual 6 x 3TB WD Red RAIDZ2) takes a >> long time to respond, and then goes back to normal. >> >> My current power.conf is: >> >> ----- >> autopm enable >> autoS3 disable >> >> cpupm enable >> cpu_deep_idle enable >> cpu-threshold 1s >> ----- >> >> I had thought that when running on server class hardware the default was not >> to spin down drives, but perhaps I was mistaken? Hmm, or maybe the default >> is to just skip power management completely, and explicitly turning it on >> turned on everything. At least for CPU power management, the default is to have it on. >> I'm thinking about adding "system-threshold always-on" to the configuration, >> which I guess will disable all power down functionality, other than CPU, >> which is controlled by the "cpu-threshold" parameter? What other power >> savings would this turnoff? I am in general in favor of saving energy :), >> other than for the disks, as the latency is too high and causes too much of >> a delay, and it's not clear if setting the system-threshold to stop spinning >> down disks will also disable other potentially desirable power savings. I >> guess the other option would be to explicitly configure device-thresholds >> for all of the disks individually. Is there any way to list what components >> of a system support power management? Yes, If you don't specify individual settings for devices, "system-threshold always-on" will leave everything at full power. I don't know of a specific way of querying PM capabilities for devices, but have a look at the man page for power.conf(4). It mentions a way of specifying policies based on generic capabilities that can apply to any device that supports it, instead of having to do it one by one. >> While on the subject, any thoughts on event-mode vs poll-mode for cpupm? >> It's not clear from the man page what the default is if you just set it to >> "enable". If the system supports it (essentially if we're able to create CPU power domains with information from ACPI tables), event mode is the default. Otherwise we fall back to polling. Hth Rafael >> Thanks. 
>> ------------------------------------------- >> illumos-discuss >> Archives: https://www.listbox.com/member/archive/182180/=now >> RSS Feed: https://www.listbox.com/member/archive/rss/182180/21175845-b97465aa >> Modify Your Subscription: https://www.listbox.com/member/?& >> Powered by Listbox: http://www.listbox.com > > > ------------------------------------------- > illumos-discuss > Archives: https://www.listbox.com/member/archive/182180/=now > RSS Feed: https://www.listbox.com/member/archive/rss/182180/24965146-fd3f1df4 > Modify Your Subscription: https://www.listbox.com/member/?member_id=24965146&id_secret=24965146-15bf8ee5 > Powered by Listbox: http://www.listbox.com From garrett at damore.org Wed Nov 13 19:24:02 2013 From: garrett at damore.org (Garrett D'Amore) Date: Wed, 13 Nov 2013 11:24:02 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> Message-ID: CPU power management is unlikely to do much for you. Its better to rely on C-states to save power on modern systems. That said, there are some known bugs with deep C states and some drivers; we don't ordinarily recommend setting it for servers. The autopm should probably just be set to disable for your configuration. Letting the system power down your hard drives is likely to be the root of your problems. You don't need or want to power the drives down, and probably you don't want to power management anything else. (Frankly, very very few components on typical illumos based systems even support power management apart from the disk subsystem. The display subsystem typically uses its own power management which doesn't participate with the rest of illumos' power management framework, IIRC.) Indeed, the illumos power management framework is kind of a shambles right now -- consisting of at least two separate generations of code, a high level of complexity, and almost no substantial support from the various things that *should* support it. At one point I was discussing with Randy Fishel about some ideas I had for simplifying things substantially. Most generally, these days involvement with a core power management is probably *not* what you want, except to indicate "policy" -- modern devices (disk drives notwithstanding) can generally raise and lower power so quickly that its best to just let the drivers do their own power management explicitly, with only a few monitoring and policy hooks. This would lead to a vastly simpler power framework, which would probably be more likely to be taken up driver authors than the current mess. On Tue, Nov 12, 2013 at 2:52 PM, Paul B. Henson wrote: > I think the storage server I'm working on at home (current omnios stable) > is > spinning down disks; whenever I get back to it after not touching it for a > while, anything touching my export pool (dual 6 x 3TB WD Red RAIDZ2) takes > a > long time to respond, and then goes back to normal. > > My current power.conf is: > > ----- > autopm enable > autoS3 disable > > cpupm enable > cpu_deep_idle enable > cpu-threshold 1s > ----- > > I had thought that when running on server class hardware the default was > not > to spin down drives, but perhaps I was mistaken? Hmm, or maybe the default > is to just skip power management completely, and explicitly turning it on > turned on everything. 
> > I'm thinking about adding "system-threshold always-on" to the > configuration, > which I guess will disable all power down functionality, other than CPU, > which is controlled by the "cpu-threshold" parameter? What other power > savings would this turnoff? I am in general in favor of saving energy :), > other than for the disks, as the latency is too high and causes too much of > a delay, and it's not clear if setting the system-threshold to stop > spinning > down disks will also disable other potentially desirable power savings. I > guess the other option would be to explicitly configure device-thresholds > for all of the disks individually. Is there any way to list what components > of a system support power management? > > While on the subject, any thoughts on event-mode vs poll-mode for cpupm? > It's not clear from the man page what the default is if you just set it to > "enable". > > Thanks. > > > > ------------------------------------------- > illumos-discuss > Archives: https://www.listbox.com/member/archive/182180/=now > RSS Feed: > https://www.listbox.com/member/archive/rss/182180/22003744-9012f59c > Modify Your Subscription: > https://www.listbox.com/member/?member_id=22003744&id_secret=22003744-e9cd8436 > Powered by Listbox: http://www.listbox.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafael.vanoni at pluribusnetworks.com Wed Nov 13 19:50:04 2013 From: rafael.vanoni at pluribusnetworks.com (Rafael Vanoni) Date: Wed, 13 Nov 2013 11:50:04 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> Message-ID: On Wed, Nov 13, 2013 at 11:24 AM, Garrett D'Amore wrote: > CPU power management is unlikely to do much for you. Its better to rely > on C-states to save power on modern systems. That said, > Strictly speaking, C-states are part of CPU power management. If you were referring to p-states, with the exception of apps that require very low latency, the race to idle policy implemented with the Power Aware Dispatcher project improved power efficiency without affecting performance across a variety of workloads. > there are some known bugs with deep C states and some drivers; we don't > ordinarily recommend setting it for servers. > Can you point me at these bugs ? Rafael -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.coopersmith at oracle.com Wed Nov 13 20:33:18 2013 From: alan.coopersmith at oracle.com (Alan Coopersmith) Date: Wed, 13 Nov 2013 12:33:18 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> Message-ID: <5283E20E.6010207@oracle.com> On 11/13/13 11:24 AM, Garrett D'Amore wrote: > The display subsystem typically uses its own power management which doesn't > participate with the rest of illumos' power management framework, IIRC. If you're running an X server, it manages power for the video card & monitors. If you're not running an X server, I don't believe there is any power management support for them (except possibly in nvidia's closed-source driver). -- -Alan Coopersmith- alan.coopersmith at oracle.com Oracle Solaris Engineering - http://blogs.oracle.com/alanc From henson at acm.org Thu Nov 14 01:04:31 2013 From: henson at acm.org (Paul B. 
Henson) Date: Wed, 13 Nov 2013 17:04:31 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> Message-ID: <0ac901cee0d5$7d73ccc0$785b6640$@acm.org> > From: DavidHalko [mailto:davidhalko at gmail.com] > Sent: Tuesday, November 12, 2013 9:20 PM > > Honestly, I really like how the Mac Mini wakes the entire machine up and > spins up the hard drives (2x 2TB mirror) automatically during an access when > everything is sleeping. I have one of these as an external storage server. Don't you need an Apple Airport to avail of wake on demand? Or an Apple TV it looks like... > With flash l2arc & mirrored logging flash, the impact of sleeping under ZFS > has the potential to be a lot less intrusive. Have you thought about adding > some SSD's to your system and keep your maximum power savings? Heh; when I picked this chassis: http://www.supermicro.com/products/chassis/3u/836/sc836ba-r920.cfm I think I pretty much already gave up on "maximum" power savings ;). I've actually got a 4.5 kW solar panel system I put in a few years ago when we remodeled, so I'm not drastically concerned about power use, but I wouldn't mind saving a few watts here and there on things that won't really result in a performance impact. 30 odd seconds of latency isn't really going to cut it on that regard, so I'm ruling out hard drive sleep. I'd probably be happy enough just with power savings on the CPUs. Thanks. From henson at acm.org Thu Nov 14 01:06:33 2013 From: henson at acm.org (Paul B. Henson) Date: Wed, 13 Nov 2013 17:06:33 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: <528338BC.709@cucumber.demon.co.uk> References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> <528338BC.709@cucumber.demon.co.uk> Message-ID: <0aca01cee0d5$c4a4e5c0$4deeb140$@acm.org> > From: Andrew Gabriel [mailto:andrew.gabriel at cucumber.demon.co.uk] > Sent: Wednesday, November 13, 2013 12:31 AM > > There's also a setting in ata.conf for programming IDE disks. > If standby is not set to >= 0, then I think the disks will keep their > previous setting (from another OS having set them, or from the BIOS > settings). Hmm, they are SATA disks, but they are hooked up to a SAS controller, so I don't think that setting would apply. Thanks. From henson at acm.org Thu Nov 14 03:38:29 2013 From: henson at acm.org (Paul B. Henson) Date: Wed, 13 Nov 2013 19:38:29 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> Message-ID: <528445B5.1080202@acm.org> On 11/13/2013 9:10 AM, Rafael Vanoni wrote: > Yes, If you don't specify individual settings for devices, > "system-threshold always-on" will leave everything at full power. Other than the CPU? IE, would the following: autopm enable system-threshold always-on cpupm enable result in a system that manages CPU power, but leaves everything else always hot? I think that's what I'll probably go with for now. > I don't know of a specific way of querying PM capabilities for > devices, but have a look at the man page for power.conf(4). It > mentions a way of specifying policies based on generic capabilities > that can apply to any device that supports it, instead of having to do > it one by one. I don't see a way to do that for specifying threshold/timeouts, only the device-thresholds parameter, which takes a single specific device seems to allow specifying those. 
There is the device-dependency-property, which allows you to use generic capabilities, but only to define dependencies, not configure thresholds, the example is: device-dependency-property removable-media /dev/fb which evidentally means never power down a removable media device if the frame buffer is powered on. I think most likely the only devices that are capable of being power managed in my server are the CPU and hard drives, but it would be nice to be able to confirm that. > If the system supports it (essentially if we're able to create CPU > power domains with information from ACPI tables), event mode is the > default. Otherwise we fall back to polling. Is there any way to tell what it ended up doing? I assume my box is new enough to support event mode, but you never know if the bios has broken ACPI data 8-/... The options for cpu_deep_idle seem a bit confusing, there is "default" which says "Advanced cpu idle power saving features are enabled on hardware which supports it", and "enable", which says "Enables the system to automatically use idle cpu power saving features" -- I'm not clear what the difference is here. You can't really enable a feature on hardware that doesn't support it ;). The third option "disable" makes sense, it won't do it regardless of whether or not the hardware could. Evidently if you don't specify it at all it defaults to "default", overall it seems it would be simpler for the only option for this parameter to be "disabled"; either it will do it if it can, or it won't do it at all if you tell it not to. Thanks? From henson at acm.org Thu Nov 14 04:00:29 2013 From: henson at acm.org (Paul B. Henson) Date: Wed, 13 Nov 2013 20:00:29 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> Message-ID: <52844ADD.30906@acm.org> On 11/13/2013 11:24 AM, Garrett D'Amore wrote: > CPU power management is unlikely to do much for you. Its better to rely > on C-states to save power on modern systems. I'm not sure how to interpret that; the power.conf man page says that if cpu_deep_idle is enabled "On X86 systems this can translate to the use of ACPI C-States beyond C1", so it seems illumos is already using C states for CPU power management? > Frankly, very very few components on typical illumos based > systems even support power management apart from the disk subsystem. Probably only disks and CPU I would think, although it would be nice to be able to list out what the system thinks it could manage. I definitely don't want to spin down the disks, but I wouldn't mind saving a few watts here and there on the CPU. So maybe something like: autopm disable autoS3 disable cpupm enable cpu_deep_idle enable > The display subsystem typically uses its own power management which > doesn't participate with the rest of illumos' power management > framework, IIRC.) Unlike good old SPARC boxes, my x86 "headless" server has a graphics adapter in it. As I'm using a serial console, all it ever displays after boot is a blank screen, can't imagine it takes much power... > want, except to indicate "policy" -- modern devices (disk drives > notwithstanding) can generally raise and lower power so quickly that its > best to just let the drivers do their own power management explicitly, > with only a few monitoring and policy hooks. This would lead to a > vastly simpler power framework, which would probably be more likely to > be taken up driver authors than the current mess. 
I think that's where Linux is heading, the new Intel P-state driver for CPU power management doesn't really have any options, just turn it on and let it do its thing... They determined the previous ACPI driver actually ended up using more power by waking up the CPU to decide whether or not it should decrease the frequency than just letting the CPU deal with it itself internally. Thanks? From turbo124 at gmail.com Thu Nov 14 07:18:41 2013 From: turbo124 at gmail.com (David Bomba) Date: Thu, 14 Nov 2013 18:18:41 +1100 Subject: [OmniOS-discuss] HP DL180 refuses to shutdown or reboot In-Reply-To: References: Message-ID: Hi Paul, Indeed you are correct: # *svccfg -s "system/boot-config:default" setprop config/fastreboot_default=false* # *svcadm refresh svc:/system/boot-config:default* Fixed the rebooting of the machine, however shutdown -y -i5 -g0 has the same problem... Daved On 13 November 2013 18:23, Paul B. Henson wrote: > Have you tried disabling fast reboot? Run 'reboot -p', if that works ok > you can permanently disable it with svccfg: > > http://docs.oracle.com/cd/E26502_01/html/E28983/gktlr.html > > > On Nov 12, 2013, at 9:02 PM, David Bomba wrote: > > > > Hi Guys, > > > > I've installed OmniOS on many HP DL180 G6 boxes with great success, > however the latest server refuses to either reboot or shutdown. > > > > There are two configuration changes which differentiate this server > from others, 1 a LSI-9201-8i HBA and 2 this server uses usb media for > booting. > > > > The cli goes through the usual processing of shutting down, killing user > processes, and syncing the file system. > > > > The server otherwise operates flawlessly. > > > > I'm wondering if anyone can point me in the right direction to debug > this situation? > > > > Dave > > > > _______________________________________________ > > OmniOS-discuss mailing list > > OmniOS-discuss at lists.omniti.com > > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From randyf at sibernet.com Thu Nov 14 06:44:26 2013 From: randyf at sibernet.com (randyf at sibernet.com) Date: Wed, 13 Nov 2013 22:44:26 -0800 (PST) Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: <52844ADD.30906@acm.org> References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> <52844ADD.30906@acm.org> Message-ID: On Wed, 13 Nov 2013, Paul B. Henson wrote: > On 11/13/2013 11:24 AM, Garrett D'Amore wrote: >> CPU power management is unlikely to do much for you. Its better to rely >> on C-states to save power on modern systems. > > I'm not sure how to interpret that; the power.conf man page says that if > cpu_deep_idle is enabled "On X86 systems this can translate to the use of > ACPI C-States beyond C1", so it seems illumos is already using C states for > CPU power management? Indeed. > >> Frankly, very very few components on typical illumos based >> systems even support power management apart from the disk subsystem. > > Probably only disks and CPU I would think, although it would be nice to be > able to list out what the system thinks it could manage. I definitely don't > want to spin down the disks, but I wouldn't mind saving a few watts here and > there on the CPU. So maybe something like: > > autopm disable > autoS3 disable > cpupm enable > cpu_deep_idle enable CPUPM can be independent of autopm, so if you desire CPUPM and not disk PM, setting autopm to 'disable' is correct (note, autoS3 is only relevant if S3-support is 'enable'). 
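Putting the advice in this thread together, an /etc/power.conf for a box that should power-manage only its CPUs might look roughly like the sketch below (the explicit event-mode argument is explained just below; the commented device-thresholds line is optional and its device path is purely illustrative):

-----
autopm disable
S3-support disable
cpupm enable event-mode
cpu_deep_idle enable
# example only: keep one specific disk always spun up
# device-thresholds /pci@0,0/pci15d9,628@1f,2/disk@0,0 always-on
-----

After editing the file, running pmconfig(1M) should apply the new settings without a reboot.
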
Also, "cpupm" can take 2 arguments, the second being one of 'event-mode' or 'poll-mode' (the default being 'event-mode', which is the mode that is separated from autopm). So you might wish to change 'cpupm enable' to 'cpupm enable event-mode' just so it is explicit (you might also wish to add 'S3-support disable' just to make sure the system won't ever try to suspend). Also, you *can* specify different PM capabilities for different disks. Multi-terabyte drives are cheap, and could serve as the backup media. There could well be value of having backup disks spindown, but keep the OS disks always-on (you would have to enable autopm for this action, though). Lastly, powertop(1m) can be used to see how well CPUPM is working for you. > >> The display subsystem typically uses its own power management which >> doesn't participate with the rest of illumos' power management >> framework, IIRC.) I am not aware of any x86 display drivers that have a power(9e) entry point, but if it does, it will partake in the PM framework operations. > > Unlike good old SPARC boxes, my x86 "headless" server has a graphics adapter > in it. As I'm using a serial console, all it ever displays after boot is a > blank screen, can't imagine it takes much power... It may be more than you might think. The backlight on flat panels is the biggest draw, and may not be trivial. A monitor rendering black will still have the gun scanning and the tube lit. You might prefer either removing the display adapter, unplugging the monitor, or starting X so that it can run display PM. Cheers! ---- Randy From henson at acm.org Fri Nov 15 00:27:47 2013 From: henson at acm.org (Paul B. Henson) Date: Thu, 14 Nov 2013 16:27:47 -0800 Subject: [OmniOS-discuss] HP DL180 refuses to shutdown or reboot In-Reply-To: References: Message-ID: <0be501cee199$8458efb0$8d0acf10$@acm.org> > From: David Bomba [mailto:turbo124 at gmail.com] > Sent: Wednesday, November 13, 2013 11:19 PM > > Fixed the rebooting of the machine, however shutdown -y -i5 -g0 has the > same problem... Power off? I don't think I ever have powered my box off from the OS level, so I'm not sure if I have a problem with that. My box has an LSI 9201-16i HBA, if I recall correctly the other couple of people I've seen having issues with fast reboot also had LSI HBA's... From henson at acm.org Fri Nov 15 01:42:19 2013 From: henson at acm.org (Paul B. Henson) Date: Thu, 14 Nov 2013 17:42:19 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> <528445B5.1080202@acm.org> Message-ID: <0c0901cee1a3$ee7e8170$cb7b8450$@acm.org> > From: Rafael Vanoni [mailto:rafael.vanoni at pluribusnetworks.com] > Sent: Thursday, November 14, 2013 10:33 AM > > # echo "cpupm::print" | mdb -k 2 (PM_CPUPM_EVENT) Cool, looks like I'm good, thanks :). > I think the default option refers to when you don't specify > cpu_deep_idle at all, which is equivalent to having it and 'enable'. > If you enable and the hardware doesn't support it, the OS falls back > to whatever it can. No, that's not what the man page reads as (excerpted below), at least to my eyes. 
In checking the source code, looks like "disable", "enable", and "default" are all valid to explicitly specify (https://github.com/illumos/illumos-gate/blob/7256a34efe9df75b638b9e812912ef 7c5c68e208/usr/src/cmd/power/handlers.c), and "default" is exactly the same as "enable" -- "Default policy is same as enable" (https://github.com/illumos/illumos-gate/blob/7256a34efe9df75b638b9e812912ef 7c5c68e208/usr/src/uts/i86pc/os/cpupm/cpu_idle.c). So if you don't mention it at all, it is the same as "default" which is the same as "enable". Seems it would be simpler to only have one option, disable, but at least now I understand what it's doing. Thanks much. --------- If supported by the platform, a cpu_deep_idle entry can be used to enable or disable automatic use of power saving cpu idle states. The format of the cpu_deep_idle entry is: cpu_deep_idle behavior Acceptable values for behavior are: default Advanced cpu idle power saving features are enabled on hardware which supports it. On X86 systems this can translate to the use of ACPI C- States beyond C1. enable Enables the system to automatically use idle cpu power saving features. disable The system does not automatically use idle cpu power saving features. This option can be used when maximum performance is required at the expense of power. Illumos Last change: Feb 27, 2009 7 File Formats POWER.CONF(4) absent It the cpu_deep_idle keyword is absent from power.conf the behavior is the same as the default case. From henson at acm.org Fri Nov 15 01:49:03 2013 From: henson at acm.org (Paul B. Henson) Date: Thu, 14 Nov 2013 17:49:03 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> <52844ADD.30906@acm.org> Message-ID: <0c0b01cee1a4$df0adbc0$9d209340$@acm.org> > From: randyf at sibernet.com [mailto:randyf at sibernet.com] > Sent: Wednesday, November 13, 2013 10:44 PM > > CPUPM can be independent of autopm, so if you desire CPUPM and not disk > PM, setting autopm to 'disable' is correct (note, autoS3 is only relevant Well, my new draft is: autopm disable S3-support disable cpupm enable event-mode cpu_deep_idle enable Which I think is doing what I want. Unless there is some other component that would be willing to go into power savings mode other than the disks. > Also, you *can* specify different PM capabilities for different disks. > Multi-terabyte drives are cheap, and could serve as the backup media. > There could well be value of having backup disks spindown, but keep the OS > disks always-on (you would have to enable autopm for this action, though). For this box, the only disks are either the production OS or the production storage, but I could see that being useful for some type of online backup box that is only accessed non-interactively once a day or so. > Lastly, powertop(1m) can be used to see how well CPUPM is working for > you. Oooh, cool, I did not realize illumos included powertop. Looks like with the above config on an idle box the majority of CPU time is in C3 and the P-state is exclusively the lowest frequency. I'll have to run that again once I get some load going. > It may be more than you might think. The backlight on flat panels is > the biggest draw, and may not be trivial. A monitor rendering black will Oh, it doesn't actually have a monitor attached. The box physically has a VGA port, but the only way I've ever looked at the actual "graphics console" is via the remote KVM app, which I only used when I was initially building it. 
Serial consoles are much nicer for servers :), the only annoying lack is that there is no way in the supermicro bios to select an alternate boot device via the serial console. You can get into setup, but they haven't mapped anything to pull up the boot menu :(. Thanks much. From henson at acm.org Fri Nov 15 06:07:31 2013 From: henson at acm.org (Paul B. Henson) Date: Thu, 14 Nov 2013 22:07:31 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> <52844ADD.30906@acm.org> <0c0b01cee1a4$df0adbc0$9d209340$@acm.org> Message-ID: <20131115060731.GA5476@bender.unx.csupomona.edu> On Thu, Nov 14, 2013 at 06:19:03PM -0800, Rafael Vanoni wrote: > Just out of curiosity, how big is this box (how many sockets/chips and > how much memory)? It's a Supermicro 836BA-R920B chassis with a X9SRI-F motherboard, a single Xeon E5-2620 (hex core) and 48G (6 x 8) of Crucial DDR3 PC3-12800 ECC memory... Along with 13 WD Red 3TB drives (2 x 6 raidz2 + spare), 2 crucial m4 256G SSDs (rpool/l2arc), and an Intel DC S3700 as slog. Someday I'll have to break out the ammeter and see how much juice it's actually sucking. All the pieces are (relatively) power efficient, but it's still a lot of box. Percentage wise, turning on CPU power management is probably a drop in the bucket, but it makes me feel better ;). From rafael.vanoni at pluribusnetworks.com Thu Nov 14 18:33:23 2013 From: rafael.vanoni at pluribusnetworks.com (Rafael Vanoni) Date: Thu, 14 Nov 2013 10:33:23 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: <528445B5.1080202@acm.org> References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> <528445B5.1080202@acm.org> Message-ID: On Wed, Nov 13, 2013 at 7:38 PM, Paul B. Henson wrote: [snip] > Is there any way to tell what it ended up doing? I assume my box is new > enough to support event mode, but you never know if the bios has broken ACPI > data 8-/... Try # echo "cpupm::print" | mdb -k > The options for cpu_deep_idle seem a bit confusing, there is "default" which > says "Advanced cpu idle power saving features are enabled on hardware > which supports it", and "enable", which says "Enables the system to > automatically use idle cpu power saving features" -- I'm not clear what the > difference is here. You can't really enable a feature on hardware that > doesn't support it ;). The third option "disable" makes sense, it won't do > it regardless of whether or not the hardware could. Evidently if you don't > specify it at all it defaults to "default", overall it seems it would be > simpler for the only option for this parameter to be "disabled"; either it > will do it if it can, or it won't do it at all if you tell it not to. > > Thanks? I think the default option refers to when you don't specify cpu_deep_idle at all, which is equivalent to having it and 'enable'. If you enable and the hardware doesn't support it, the OS falls back to whatever it can. Rafael From rafael.vanoni at pluribusnetworks.com Fri Nov 15 02:19:03 2013 From: rafael.vanoni at pluribusnetworks.com (Rafael Vanoni) Date: Thu, 14 Nov 2013 18:19:03 -0800 Subject: [OmniOS-discuss] [discuss] illumos power management In-Reply-To: <0c0b01cee1a4$df0adbc0$9d209340$@acm.org> References: <032d01cedff9$d9d1a910$8d74fb30$@acm.org> <52844ADD.30906@acm.org> <0c0b01cee1a4$df0adbc0$9d209340$@acm.org> Message-ID: On Nov 14, 2013, at 17:49, "Paul B. 
Henson" wrote: >> From: randyf at sibernet.com [mailto:randyf at sibernet.com] >> Sent: Wednesday, November 13, 2013 10:44 PM >> >> CPUPM can be independent of autopm, so if you desire CPUPM and not disk >> PM, setting autopm to 'disable' is correct (note, autoS3 is only relevant > > Well, my new draft is: > > autopm disable > S3-support disable > cpupm enable event-mode > cpu_deep_idle enable > > Which I think is doing what I want. Unless there is some other component > that would be willing to go into power savings mode other than the disks. > >> Also, you *can* specify different PM capabilities for different disks. >> Multi-terabyte drives are cheap, and could serve as the backup media. >> There could well be value of having backup disks spindown, but keep the OS >> disks always-on (you would have to enable autopm for this action, though). > > For this box, the only disks are either the production OS or the production > storage, but I could see that being useful for some type of online backup > box that is only accessed non-interactively once a day or so. > >> Lastly, powertop(1m) can be used to see how well CPUPM is working for >> you. > > Oooh, cool, I did not realize illumos included powertop. Looks like with the > above config on an idle box the majority of CPU time is in C3 and the > P-state is exclusively the lowest frequency. I'll have to run that again > once I get some load going. > > >> It may be more than you might think. The backlight on flat panels is >> the biggest draw, and may not be trivial. A monitor rendering black will > > Oh, it doesn't actually have a monitor attached. The box physically has a > VGA port, but the only way I've ever looked at the actual "graphics console" > is via the remote KVM app, which I only used when I was initially building > it. Serial consoles are much nicer for servers :), the only annoying lack is > that there is no way in the supermicro bios to select an alternate boot > device via the serial console. You can get into setup, but they haven't > mapped anything to pull up the boot menu :(. > > Thanks much. > Just out of curiosity, how big is this box (how many sockets/chips and how much memory)? Rafael From moo at wuffers.net Fri Nov 15 05:39:27 2013 From: moo at wuffers.net (wuffers) Date: Fri, 15 Nov 2013 00:39:27 -0500 Subject: [OmniOS-discuss] kernel panic - anon_decref Message-ID: So I'm adding VMware hosts (ESXi 5.5) to my OmniOS ZFS SAN, which are already hosting some volumes for our Windows 2012 Hyper-V infrastructure, running over SRP and Infiniband. In VMware, I had uninstalled the default Mellanox 1.9.7 drivers and installed the older 1.6.1 drivers along with OFED 1.8.2. I had no issues adding the new initiator to the target group, and creating a new host group and view for the host - after which the volume automagically showed up as expected. I formatted using VMFS5, and started creating a VM, attaching an ISO and loading up Windows Server 2012 R2. Somewhere during the install, I had my first kernel panic and I had to reboot the SAN as it was during business hours (couldn't wait for the dump to finish). Later that night I reproduced the issue (just loading up VMs, and trying out a VMware converter job) and was able to get a proper dump (which is now sitting in my /var/crash/unknown, ~7GB). 
Screenshots: http://i.imgur.com/nGakKyS.png?1 http://i.imgur.com/wIx0g6J.png?1 TIME UUID SUNW-MSG-ID Nov 14 2013 22:13:46.926077000 a4432472-983c-ca82-a231-d1b468a3a91a SUNOS-8000-KL TIME CLASS ENA Nov 14 22:13:46.8830 ireport.os.sunos.panic.dump_available 0x0000000000000000 Nov 14 22:12:33.1029 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000 nvlist version: 0 version = 0x0 class = list.suspect uuid = a4432472-983c-ca82-a231-d1b468a3a91a code = SUNOS-8000-KL diag-time = 1384485226 890408 de = fmd:///module/software-diagnosis fault-list-sz = 0x1 fault-list = (array of embedded nvlists) (start fault-list[0]) nvlist version: 0 version = 0x0 class = defect.sunos.kernel.panic certainty = 0x64 asru = sw:///:path=/var/crash/unknown/.a4432472-983c-ca82-a231-d1b468a3a91a resource = sw:///:path=/var/crash/unknown/.a4432472-983c-ca82-a231-d1b468a3a91a savecore-succcess = 1 dump-dir = /var/crash/unknown dump-files = vmdump.0 os-instance-uuid = a4432472-983c-ca82-a231-d1b468a3a91a panicstr = anon_decref: slot count 0 panicstack = fffffffffbb2fa18 () | genunix:anon_free+74 () | genunix:segvn_free+242 () | genunix:seg_free+30 () | genunix:segvn_unmap+cde () | genunix:as_free+e7 () | genunix:relvm+220 () | genunix:proc_exit+454 () | genunix:exit+15 () | genunix:rexit+18 () | unix:brand_sys_sysenter+1c9 () | crashtime = 1384482703 panic-time = Thu Nov 14 21:31:43 2013 EST (end fault-list[0]) fault-status = 0x1 severity = Major __ttl = 0x1 __tod = 0x5285916a 0x3732d048 While getting the Hyper-V hosts up on IB and SRP I had issues with the Windows hosts but never with the SAN box, and they have now been running stable for 3+ months until the kernel panic today. I saw some other anon_decref bugs, but those were in 2007-2008 and have already been rolled into OmniOS. I'm pretty sure I was on the original r151006, and now am on the latest r151006y, in hopes it's already taken care of. I'll try other things to see if I can reproduce on the latest build. In the meantime, does anyone want to take a look at the dump? -------------- next part -------------- An HTML attachment was scrubbed... URL: From skiselkov.ml at gmail.com Fri Nov 15 16:17:46 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Fri, 15 Nov 2013 16:17:46 +0000 Subject: [OmniOS-discuss] kernel panic - anon_decref In-Reply-To: References: Message-ID: <5286492A.5030801@gmail.com> On 11/15/13, 5:39 AM, wuffers wrote: > So I'm adding VMware hosts (ESXi 5.5) to my OmniOS ZFS SAN, which are > already hosting some volumes for our Windows 2012 Hyper-V > infrastructure, running over SRP and Infiniband. In VMware, I had > uninstalled the default Mellanox 1.9.7 drivers and installed the older > 1.6.1 drivers along with OFED 1.8.2. I had no issues adding the new > initiator to the target group, and creating a new host group and view > for the host - after which the volume automagically showed up as expected. > > I formatted using VMFS5, and started creating a VM, attaching an ISO and > loading up Windows Server 2012 R2. Somewhere during the install, I had > my first kernel panic and I had to reboot the SAN as it was during > business hours (couldn't wait for the dump to finish). Later that night > I reproduced the issue (just loading up VMs, and trying out a VMware > converter job) and was able to get a proper dump (which is now sitting > in my /var/crash/unknown, ~7GB). 
> > Screenshots: > http://i.imgur.com/nGakKyS.png?1 > http://i.imgur.com/wIx0g6J.png?1 > > > TIME UUID > SUNW-MSG-ID > Nov 14 2013 22:13:46.926077000 a4432472-983c-ca82-a231-d1b468a3a91a > SUNOS-8000-KL > > TIME CLASS ENA > Nov 14 22:13:46.8830 ireport.os.sunos.panic.dump_available > 0x0000000000000000 > Nov 14 22:12:33.1029 ireport.os.sunos.panic.dump_pending_on_device > 0x0000000000000000 > > nvlist version: 0 > version = 0x0 > class = list.suspect > uuid = a4432472-983c-ca82-a231-d1b468a3a91a > code = SUNOS-8000-KL > diag-time = 1384485226 890408 > de = fmd:///module/software-diagnosis > fault-list-sz = 0x1 > fault-list = (array of embedded nvlists) > (start fault-list[0]) > nvlist version: 0 > version = 0x0 > class = defect.sunos.kernel.panic > certainty = 0x64 > asru = > sw:///:path=/var/crash/unknown/.a4432472-983c-ca82-a231-d1b468a3a91a > resource = > sw:///:path=/var/crash/unknown/.a4432472-983c-ca82-a231-d1b468a3a91a > savecore-succcess = 1 > dump-dir = /var/crash/unknown > dump-files = vmdump.0 > os-instance-uuid = a4432472-983c-ca82-a231-d1b468a3a91a > panicstr = anon_decref: slot count 0 > panicstack = fffffffffbb2fa18 () | genunix:anon_free+74 > () | genunix:segvn_free+242 () | genunix:seg_free+30 () | > genunix:segvn_unmap+cde () | genunix:as_free+e7 () | genunix:relvm+220 > () | genunix:proc_exit+454 () | genunix:exit+15 () | genunix:rexit+18 () > | unix:brand_sys_sysenter+1c9 () | > crashtime = 1384482703 > panic-time = Thu Nov 14 21:31:43 2013 EST > (end fault-list[0]) > > fault-status = 0x1 > severity = Major > __ttl = 0x1 > __tod = 0x5285916a 0x3732d048 > > While getting the Hyper-V hosts up on IB and SRP I had issues with the > Windows hosts but never with the SAN box, and they have now been running > stable for 3+ months until the kernel panic today. I saw some other > anon_decref bugs, but those were in 2007-2008 and have already been > rolled into OmniOS. I'm pretty sure I was on the original r151006, and > now am on the latest r151006y, in hopes it's already taken care of. I'll > try other things to see if I can reproduce on the latest build. > > In the meantime, does anyone want to take a look at the dump? So, just to clear up, the SAN box is also running inside VMware? Or is it on real hardware? Cheers, -- Saso From moo at wuffers.net Fri Nov 15 21:17:34 2013 From: moo at wuffers.net (wuffers) Date: Fri, 15 Nov 2013 16:17:34 -0500 Subject: [OmniOS-discuss] kernel panic - anon_decref Message-ID: > So, just to clear up, the SAN box is also running inside VMware? Or is > it on real hardware? It is on bare metal. The SAN box is a Supermicro SuperChassis 826BE26-R920LPB, which has a X9DR3-LN4F+ mobo. # prtdiag System Configuration: Supermicro X9DRi-LN4+/X9DR3-LN4+ BIOS Configuration: American Megatrends Inc. 2.0 01/11/2013 BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style) ==== Processor Sockets ==================================== Version Location Tag -------------------------------- -------------------------- Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz CPU 1 Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz CPU 2 [truncated memory information - 256GB installed] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skiselkov.ml at gmail.com Fri Nov 15 22:55:56 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Fri, 15 Nov 2013 22:55:56 +0000 Subject: [OmniOS-discuss] kernel panic - anon_decref In-Reply-To: References: Message-ID: <5286A67C.80706@gmail.com> On 11/15/13, 9:17 PM, wuffers wrote: >> So, just to clear up, the SAN box is also running inside VMware? Or is >> it on real hardware? > > It is on bare metal. The SAN box is a Supermicro SuperChassis 826BE26-R920LPB, which has a X9DR3-LN4F+ mobo. > > # prtdiag > System Configuration: Supermicro X9DRi-LN4+/X9DR3-LN4+ > > BIOS Configuration: American Megatrends Inc. 2.0 01/11/2013 > BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style) > > ==== Processor Sockets ==================================== > > Version Location Tag > > -------------------------------- -------------------------- > Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz CPU 1 > Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz CPU 2 > > [truncated memory information - 256GB installed] Can you try and gather some info from the crash dump? See: http://wiki.illumos.org/display/illumos/How+To+Report+Problems The backtrace you posted provides too little info to make any headway, it appears as though the kernel virtual memory map got somehow corrupted (perhaps something DMA'ing in the wrong places?), but it's hard to know what exactly the box was doing at that time. Cheers, -- Saso From moo at wuffers.net Sat Nov 16 03:06:25 2013 From: moo at wuffers.net (wuffers) Date: Fri, 15 Nov 2013 22:06:25 -0500 Subject: [OmniOS-discuss] kernel panic - anon_decref In-Reply-To: <5286A67C.80706@gmail.com> References: <5286A67C.80706@gmail.com> Message-ID: On Fri, Nov 15, 2013 at 5:55 PM, Saso Kiselkov wrote: > > Can you try and gather some info from the crash dump? See: > http://wiki.illumos.org/display/illumos/How+To+Report+Problems > The backtrace you posted provides too little info to make any headway, > it appears as though the kernel virtual memory map got somehow corrupted > (perhaps something DMA'ing in the wrong places?), but it's hard to know > what exactly the box was doing at that time. > > Cheers, > -- > Saso > Hope this is what you're looking for (exceeded pastebin's 500k limit). https://drive.google.com/file/d/0B7mCJnZUzJPKX3FjY0xZby1EUFE Also attached. Thanks, Will -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- cpu 21 thread ffffff4383d08c60 message anon_decref: slot count 0 rdi fffffffffbb72da6 rsi ffffff01e8caaaf0 rdx ffffff4383d08c60 rcx 33 r8 c r9 1fffffe865a3f rax ffffff01e8caab10 rbx ffffff432d1fe4f0 rbp ffffff01e8caab50 r10 1 r11 3 r12 ffffff42e1bf6b00 r13 ffffff4558663658 r14 ffffff01e8caaba8 r15 0 fsbase 0 gsbase ffffff430fd28040 ds 4b es 4b fs 0 gs 1c3 trapno 0 err 0 rip fffffffffb85e800 cs 30 rflags 286 rsp ffffff01e8caaae8 ss 38 gdt_hi 0 gdt_lo e00001ef idt_hi 0 idt_lo d0000fff ldt 0 task 70 cr0 80050033 cr2 feed4ccb cr3 bc00000 cr4 406f8 ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 0 fffffffffbc30d20 1f 0 0 -1 no no t-0 ffffff01e8005c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 1 ffffff430f788580 1f 0 0 -1 no no t-2 ffffff01e8388c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 2 ffffff430f787080 1f 0 0 -1 no no t-1 ffffff01e8424c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 3 ffffff430f785a80 1f 1 0 -1 no no t-2 ffffff01e84b1c40 (idle) | | RUNNING <--+ +--> PRI THREAD PROC READY 60 ffffff01e8137c40 sched QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 4 ffffff430f77fac0 1f 1 0 -1 no no t-4 ffffff01e853dc40 (idle) | | RUNNING <--+ +--> PRI THREAD PROC READY 60 ffffff01e9f71c40 sched QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 5 ffffff430f779b00 1f 0 0 -1 no no t-4 ffffff01e840ec40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 6 ffffff430f770500 1f 0 0 -1 no no t-2 ffffff01e8aaac40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 7 ffffff430f76f000 1f 0 0 -1 no no t-1 ffffff01e8b15c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 8 ffffff430f908ac0 1f 0 0 -1 no no t-3 ffffff01e8b86c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 9 ffffff430f903580 1f 0 0 -1 no no t-6 ffffff01e8bf1c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 10 ffffff430f902080 1f 0 0 -1 no no t-0 ffffff01e8c5cc40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 11 ffffff430f8fea80 1f 0 0 -1 no no t-2 ffffff01e8cdfc40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 12 ffffff430f8f9540 1f 0 0 -1 no no t-0 ffffff01e8dd9c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 13 ffffff430f8f8040 1f 0 0 -1 no no t-3 ffffff01e908dc40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 14 ffffff430f9af500 1f 0 0 -1 no no t-0 ffffff01e9156c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 15 ffffff430f9ae000 1f 0 0 -1 no no t-3 ffffff01e91c1c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 16 ffffff430f9aaac0 1f 0 0 -1 no no t-4 ffffff01e922cc40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 17 ffffff430f9a5580 1f 0 0 
-1 no no t-2 ffffff01e92a9c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 18 ffffff430f9a4080 1f 0 0 -1 no no t-0 ffffff01e9314c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 19 ffffff430f9a0a80 1f 0 0 -1 no no t-1 ffffff01e937fc40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 20 ffffff430fd29540 1f 0 0 -1 no no t-1 ffffff01e93eac40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 21 fffffffffbc3b620 1b 0 0 29 no no t-0 ffffff4383d08c60 sleep | RUNNING <--+ READY EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 22 ffffff430fd24b00 1f 0 0 -1 no no t-1 ffffff01e94d8c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 23 ffffff430fd1d500 1f 0 0 -1 no no t-1 ffffff01e9561c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ADDR PROC LWP CLS PRI WCHAN fffffffffbc2fac0 fffffffffbc2eb80 fffffffffbc315e0 0 96 0 PC: _resume_from_idle+0xf1 CMD: sched stack pointer for thread fffffffffbc2fac0: fffffffffbc72210 [ fffffffffbc72210 _resume_from_idle+0xf1() ] swtch+0x141() sched+0x835() main+0x46c() _locore_start+0x90() ffffff01e8005c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8005c40: ffffff01e8005660 1() ffffff01e800bc40 fffffffffbc2eb80 0 0 60 fffffffffbcca9bc PC: _resume_from_idle+0xf1 THREAD: thread_reaper() stack pointer for thread ffffff01e800bc40: ffffff01e800bb60 [ ffffff01e800bb60 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcca9bc, fffffffffbcfb518) thread_reaper+0xb9() thread_start+8() ffffff01e8011c40 fffffffffbc2eb80 0 0 60 ffffff42e261be70 PC: _resume_from_idle+0xf1 TASKQ: kmem_move_taskq stack pointer for thread ffffff01e8011c40: ffffff01e8011a80 [ ffffff01e8011a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261be70, ffffff42e261be60) taskq_thread_wait+0xbe(ffffff42e261be40, ffffff42e261be60, ffffff42e261be70 , ffffff01e8011bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261be40) thread_start+8() ffffff01e8017c40 fffffffffbc2eb80 0 0 60 ffffff42e261bd58 PC: _resume_from_idle+0xf1 TASKQ: kmem_taskq stack pointer for thread ffffff01e8017c40: ffffff01e8017a80 [ ffffff01e8017a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261bd58, ffffff42e261bd48) taskq_thread_wait+0xbe(ffffff42e261bd28, ffffff42e261bd48, ffffff42e261bd58 , ffffff01e8017bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261bd28) thread_start+8() ffffff01e801dc40 fffffffffbc2eb80 0 0 60 ffffff42e261bc40 PC: _resume_from_idle+0xf1 TASKQ: pseudo_nexus_enum_tq stack pointer for thread ffffff01e801dc40: ffffff01e801da80 [ ffffff01e801da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261bc40, ffffff42e261bc30) taskq_thread_wait+0xbe(ffffff42e261bc10, ffffff42e261bc30, ffffff42e261bc40 , ffffff01e801dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261bc10) thread_start+8() ffffff01e8023c40 fffffffffbc2eb80 0 0 60 fffffffffbd17d00 PC: _resume_from_idle+0xf1 THREAD: scsi_hba_barrier_daemon() stack pointer for thread ffffff01e8023c40: ffffff01e8023b20 [ ffffff01e8023b20 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbd17d00, fffffffffbd17cf8) scsi_hba_barrier_daemon+0xd6(0) thread_start+8() ffffff01e8029c40 fffffffffbc2eb80 0 0 60 fffffffffbd17d18 PC: 
_resume_from_idle+0xf1 THREAD: scsi_lunchg1_daemon() stack pointer for thread ffffff01e8029c40: ffffff01e8029630 [ ffffff01e8029630 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbd17d18, fffffffffbd17d10) scsi_lunchg1_daemon+0x1de(0) thread_start+8() ffffff01e802fc40 fffffffffbc2eb80 0 0 60 fffffffffbd17d30 PC: _resume_from_idle+0xf1 THREAD: scsi_lunchg2_daemon() stack pointer for thread ffffff01e802fc40: ffffff01e802fb30 [ ffffff01e802fb30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbd17d30, fffffffffbd17d28) scsi_lunchg2_daemon+0x121(0) thread_start+8() ffffff01e8035c40 fffffffffbc2eb80 0 0 60 ffffff42e261bb28 PC: _resume_from_idle+0xf1 TASKQ: scsi_vhci_nexus_enum_tq stack pointer for thread ffffff01e8035c40: ffffff01e8035a80 [ ffffff01e8035a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261bb28, ffffff42e261bb18) taskq_thread_wait+0xbe(ffffff42e261baf8, ffffff42e261bb18, ffffff42e261bb28 , ffffff01e8035bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261baf8) thread_start+8() ffffff01e808fc40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e808fc40: ffffff01e808fa80 [ ffffff01e808fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e808fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e807dc40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e807dc40: ffffff01e807da80 [ ffffff01e807da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e807dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e806bc40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e806bc40: ffffff01e806ba80 [ ffffff01e806ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e806bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e805fc40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e805fc40: ffffff01e805fa80 [ ffffff01e805fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e805fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e8053c40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e8053c40: ffffff01e8053a80 [ ffffff01e8053a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e8053bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e8047c40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e8047c40: ffffff01e8047a80 [ ffffff01e8047a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) 
taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e8047bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e8041c40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e8041c40: ffffff01e8041a80 [ ffffff01e8041a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e8041bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e803bc40 fffffffffbc2eb80 0 0 60 ffffff42e261ba10 PC: _resume_from_idle+0xf1 TASKQ: mdi_taskq stack pointer for thread ffffff01e803bc40: ffffff01e803ba80 [ ffffff01e803ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261ba10, ffffff42e261ba00) taskq_thread_wait+0xbe(ffffff42e261b9e0, ffffff42e261ba00, ffffff42e261ba10 , ffffff01e803bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b9e0) thread_start+8() ffffff01e804dc40 fffffffffbc2eb80 0 0 60 ffffff42e261b8f8 PC: _resume_from_idle+0xf1 TASKQ: vhci_taskq stack pointer for thread ffffff01e804dc40: ffffff01e804da80 [ ffffff01e804da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b8f8, ffffff42e261b8e8) taskq_thread_wait+0xbe(ffffff42e261b8c8, ffffff42e261b8e8, ffffff42e261b8f8 , ffffff01e804dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b8c8) thread_start+8() ffffff01e8119c40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e8119c40: ffffff01e8119a80 [ ffffff01e8119a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e8119bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e810dc40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e810dc40: ffffff01e810da80 [ ffffff01e810da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e810dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e80fbc40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e80fbc40: ffffff01e80fba80 [ ffffff01e80fba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e80fbbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e8095c40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e8095c40: ffffff01e8095a80 [ ffffff01e8095a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e8095bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e8083c40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e8083c40: ffffff01e8083a80 [ ffffff01e8083a80 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e8083bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e8071c40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e8071c40: ffffff01e8071a80 [ ffffff01e8071a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e8071bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e8065c40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e8065c40: ffffff01e8065a80 [ ffffff01e8065a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e8065bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e8059c40 fffffffffbc2eb80 0 0 60 ffffff42e261b7e0 PC: _resume_from_idle+0xf1 TASKQ: vhci_update_pathstates stack pointer for thread ffffff01e8059c40: ffffff01e8059a80 [ ffffff01e8059a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b7e0, ffffff42e261b7d0) taskq_thread_wait+0xbe(ffffff42e261b7b0, ffffff42e261b7d0, ffffff42e261b7e0 , ffffff01e8059bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b7b0) thread_start+8() ffffff01e8077c40 fffffffffbc2eb80 0 0 60 ffffff42e261b6c8 PC: _resume_from_idle+0xf1 TASKQ: npe_nexus_enum_tq stack pointer for thread ffffff01e8077c40: ffffff01e8077a80 [ ffffff01e8077a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b6c8, ffffff42e261b6b8) taskq_thread_wait+0xbe(ffffff42e261b698, ffffff42e261b6b8, ffffff42e261b6c8 , ffffff01e8077bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b698) thread_start+8() ffffff01e8089c40 fffffffffbc2eb80 0 0 60 ffffff42e261b5b0 PC: _resume_from_idle+0xf1 TASKQ: isa_nexus_enum_tq stack pointer for thread ffffff01e8089c40: ffffff01e8089a80 [ ffffff01e8089a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b5b0, ffffff42e261b5a0) taskq_thread_wait+0xbe(ffffff42e261b580, ffffff42e261b5a0, ffffff42e261b5b0 , ffffff01e8089bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b580) thread_start+8() ffffff01e80d7c40 fffffffffbc2eb80 0 0 60 ffffff42e261b498 PC: _resume_from_idle+0xf1 TASKQ: ACPI0 stack pointer for thread ffffff01e80d7c40: ffffff01e80d7a80 [ ffffff01e80d7a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b498, ffffff42e261b488) taskq_thread_wait+0xbe(ffffff42e261b468, ffffff42e261b488, ffffff42e261b498 , ffffff01e80d7bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b468) thread_start+8() ffffff01e8101c40 fffffffffbc2eb80 0 0 62 ffffff42e261b268 PC: _resume_from_idle+0xf1 TASKQ: ACPI1 stack pointer for thread ffffff01e8101c40: ffffff01e8101a80 [ ffffff01e8101a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b268, ffffff42e261b258) taskq_thread_wait+0xbe(ffffff42e261b238, ffffff42e261b258, ffffff42e261b268 , ffffff01e8101bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b238) thread_start+8() ffffff01e8107c40 fffffffffbc2eb80 0 0 60 ffffff42e261b150 PC: _resume_from_idle+0xf1 TASKQ: ACPI2 stack pointer for thread ffffff01e8107c40: ffffff01e8107a80 [ ffffff01e8107a80 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait+0x70(ffffff42e261b150, ffffff42e261b140) taskq_thread_wait+0xbe(ffffff42e261b120, ffffff42e261b140, ffffff42e261b150 , ffffff01e8107bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b120) thread_start+8() ffffff01e8113c40 fffffffffbc2eb80 0 0 60 ffffff42e261b038 PC: _resume_from_idle+0xf1 TASKQ: ACPI3 stack pointer for thread ffffff01e8113c40: ffffff01e8113a80 [ ffffff01e8113a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b038, ffffff42e261b028) taskq_thread_wait+0xbe(ffffff42e261b008, ffffff42e261b028, ffffff42e261b038 , ffffff01e8113bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b008) thread_start+8() ffffff01e811fc40 fffffffffbc2eb80 0 0 60 ffffff42e2978e78 PC: _resume_from_idle+0xf1 TASKQ: ACPI4 stack pointer for thread ffffff01e811fc40: ffffff01e811fa80 [ ffffff01e811fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978e78, ffffff42e2978e68) taskq_thread_wait+0xbe(ffffff42e2978e48, ffffff42e2978e68, ffffff42e2978e78 , ffffff01e811fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978e48) thread_start+8() ffffff01e8125c40 fffffffffbc2eb80 0 0 60 ffffff42e2978d60 PC: _resume_from_idle+0xf1 TASKQ: ACPI5 stack pointer for thread ffffff01e8125c40: ffffff01e8125a80 [ ffffff01e8125a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978d60, ffffff42e2978d50) taskq_thread_wait+0xbe(ffffff42e2978d30, ffffff42e2978d50, ffffff42e2978d60 , ffffff01e8125bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978d30) thread_start+8() ffffff01e80efc40 fffffffffbc2eb80 0 0 99 ffffff42e261b380 PC: _resume_from_idle+0xf1 TASKQ: timeout_taskq stack pointer for thread ffffff01e80efc40: ffffff01e80efa80 [ ffffff01e80efa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b380, ffffff42e261b370) taskq_thread_wait+0xbe(ffffff42e261b350, ffffff42e261b370, ffffff42e261b380 , ffffff01e80efbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b350) thread_start+8() ffffff01e80e9c40 fffffffffbc2eb80 0 0 99 ffffff42e261b380 PC: _resume_from_idle+0xf1 TASKQ: timeout_taskq stack pointer for thread ffffff01e80e9c40: ffffff01e80e9a80 [ ffffff01e80e9a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b380, ffffff42e261b370) taskq_thread_wait+0xbe(ffffff42e261b350, ffffff42e261b370, ffffff42e261b380 , ffffff01e80e9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b350) thread_start+8() ffffff01e80e3c40 fffffffffbc2eb80 0 0 99 ffffff42e261b380 PC: _resume_from_idle+0xf1 TASKQ: timeout_taskq stack pointer for thread ffffff01e80e3c40: ffffff01e80e3a80 [ ffffff01e80e3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b380, ffffff42e261b370) taskq_thread_wait+0xbe(ffffff42e261b350, ffffff42e261b370, ffffff42e261b380 , ffffff01e80e3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b350) thread_start+8() ffffff01e80ddc40 fffffffffbc2eb80 0 0 99 ffffff42e261b380 PC: _resume_from_idle+0xf1 TASKQ: timeout_taskq stack pointer for thread ffffff01e80ddc40: ffffff01e80dda80 [ ffffff01e80dda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e261b380, ffffff42e261b370) taskq_thread_wait+0xbe(ffffff42e261b350, ffffff42e261b370, ffffff42e261b380 , ffffff01e80ddbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e261b350) thread_start+8() ffffff01e80f5c40 fffffffffbc2eb80 0 0 99 fffffffffbca5cc0 PC: _resume_from_idle+0xf1 THREAD: timeout_taskq_thread() stack pointer for thread ffffff01e80f5c40: ffffff01e80f5b50 [ ffffff01e80f5b50 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait+0x70(fffffffffbca5cc0, fffffffffbca5d08) timeout_taskq_thread+0x95(0) thread_start+8() ffffff01e8131c40 fffffffffbc2eb80 0 0 99 ffffff42e2978c48 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e8131c40: ffffff01e8131a80 [ ffffff01e8131a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978c48, ffffff42e2978c38) taskq_thread_wait+0xbe(ffffff42e2978c18, ffffff42e2978c38, ffffff42e2978c48 , ffffff01e8131bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978c18) thread_start+8() ffffff01e812bc40 fffffffffbc2eb80 0 0 99 ffffff42e2978c48 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e812bc40: ffffff01e812ba80 [ ffffff01e812ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978c48, ffffff42e2978c38) taskq_thread_wait+0xbe(ffffff42e2978c18, ffffff42e2978c38, ffffff42e2978c48 , ffffff01e812bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978c18) thread_start+8() ffffff01e9c87c40 fffffffffbc2eb80 0 0 60 ffffff432552f5c0 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00020003 stack pointer for thread ffffff01e9c87c40: ffffff01e9c87a80 [ ffffff01e9c87a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f5c0, ffffff432552f5b0) taskq_thread_wait+0xbe(ffffff432552f590, ffffff432552f5b0, ffffff432552f5c0 , ffffff01e9c87bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f590) thread_start+8() ffffff01e9871c40 fffffffffbc2eb80 0 0 60 ffffff432552f4a8 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00020003 stack pointer for thread ffffff01e9871c40: ffffff01e9871a80 [ ffffff01e9871a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f4a8, ffffff432552f498) taskq_thread_wait+0xbe(ffffff432552f478, ffffff432552f498, ffffff432552f4a8 , ffffff01e9871bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f478) thread_start+8() ffffff01eb4c2c40 fffffffffbc2eb80 0 0 60 ffffff4361b02d70 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00020003 stack pointer for thread ffffff01eb4c2c40: ffffff01eb4c2a80 [ ffffff01eb4c2a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02d70, ffffff4361b02d60) taskq_thread_wait+0xbe(ffffff4361b02d40, ffffff4361b02d60, ffffff4361b02d70 , ffffff01eb4c2bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b02d40) thread_start+8() ffffff01eb528c40 fffffffffbc2eb80 0 0 60 ffffff4361b02c58 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00020003 stack pointer for thread ffffff01eb528c40: ffffff01eb528a80 [ ffffff01eb528a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02c58, ffffff4361b02c48) taskq_thread_wait+0xbe(ffffff4361b02c28, ffffff4361b02c48, ffffff4361b02c58 , ffffff01eb528bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b02c28) thread_start+8() ffffff01e9ef5c40 fffffffffbc2eb80 0 0 60 ffffff432552f390 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00020003 stack pointer for thread ffffff01e9ef5c40: ffffff01e9ef5a80 [ ffffff01e9ef5a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f390, ffffff432552f380) taskq_thread_wait+0xbe(ffffff432552f360, ffffff432552f380, ffffff432552f390 , ffffff01e9ef5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f360) thread_start+8() ffffff01eb3c5c40 fffffffffbc2eb80 0 0 60 ffffff432552f278 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00020003 stack pointer for thread ffffff01eb3c5c40: ffffff01eb3c5a80 [ ffffff01eb3c5a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f278, ffffff432552f268) taskq_thread_wait+0xbe(ffffff432552f248, 
ffffff432552f268, ffffff432552f278 , ffffff01eb3c5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f248) thread_start+8() ffffff01e986bc40 fffffffffbc2eb80 0 0 60 ffffff42e2617050 PC: _resume_from_idle+0xf1 TASKQ: system_taskq stack pointer for thread ffffff01e986bc40: ffffff01e986ba30 [ ffffff01e986ba30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e2617050, ffffff42e1ac7e80, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff42e2617050, ffffff42e1ac7e80, 7530, 4) taskq_thread_wait+0x64(ffffff42e2978b00, ffffff42e1ac7e80, ffffff42e2617050 , ffffff01e986bbc0, 7530) taskq_d_thread+0x145(ffffff42e2617020) thread_start+8() ffffff01e8403c40 fffffffffbc2eb80 0 0 60 ffffff43181e3030 PC: _resume_from_idle+0xf1 TASKQ: system_taskq stack pointer for thread ffffff01e8403c40: ffffff01e8403a30 [ ffffff01e8403a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff43181e3030, ffffff42e1ac7a80, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff43181e3030, ffffff42e1ac7a80, 7530, 4) taskq_thread_wait+0x64(ffffff42e2978b00, ffffff42e1ac7a80, ffffff43181e3030 , ffffff01e8403bc0, 7530) taskq_d_thread+0x145(ffffff43181e3000) thread_start+8() ffffff01e913fc40 fffffffffbc2eb80 0 0 60 ffffff43422d4488 PC: _resume_from_idle+0xf1 TASKQ: system_taskq stack pointer for thread ffffff01e913fc40: ffffff01e913fa30 [ ffffff01e913fa30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff43422d4488, ffffff42e1ac7f80, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff43422d4488, ffffff42e1ac7f80, 7530, 4) taskq_thread_wait+0x64(ffffff42e2978b00, ffffff42e1ac7f80, ffffff43422d4488 , ffffff01e913fbc0, 7530) taskq_d_thread+0x145(ffffff43422d4458) thread_start+8() ffffff01e9b79c40 fffffffffbc2eb80 0 0 60 ffffff43422d4a70 PC: _resume_from_idle+0xf1 TASKQ: system_taskq stack pointer for thread ffffff01e9b79c40: ffffff01e9b79a30 [ ffffff01e9b79a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff43422d4a70, ffffff42e1ac7b00, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff43422d4a70, ffffff42e1ac7b00, 7530, 4) taskq_thread_wait+0x64(ffffff42e2978b00, ffffff42e1ac7b00, ffffff43422d4a70 , ffffff01e9b79bc0, 7530) taskq_d_thread+0x145(ffffff43422d4a40) thread_start+8() ffffff01ea32fc40 fffffffffbc2eb80 0 0 60 ffffff4384bf1080 PC: _resume_from_idle+0xf1 TASKQ: system_taskq stack pointer for thread ffffff01ea32fc40: ffffff01ea32fa30 [ ffffff01ea32fa30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff4384bf1080, ffffff42e1ac7f00, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff4384bf1080, ffffff42e1ac7f00, 7530, 4) taskq_thread_wait+0x64(ffffff42e2978b00, ffffff42e1ac7f00, ffffff4384bf1080 , ffffff01ea32fbc0, 7530) taskq_d_thread+0x145(ffffff4384bf1050) thread_start+8() ffffff01e9f71c40 fffffffffbc2eb80 0 0 60 0 PC: _resume_from_idle+0xf1 TASKQ: system_taskq stack pointer for thread ffffff01e9f71c40: ffffff01e9f71a30 [ ffffff01e9f71a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff431e7b4698, ffffff42e1ac7a00, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff431e7b4698, ffffff42e1ac7a00, 7530, 4) taskq_thread_wait+0x64(ffffff42e2978b00, ffffff42e1ac7a00, ffffff431e7b4698 , ffffff01e9f71bc0, 7530) taskq_d_thread+0x145(ffffff431e7b4668) thread_start+8() ffffff01e8137c40 fffffffffbc2eb80 0 0 60 0 PC: _resume_from_idle+0xf1 TASKQ: system_taskq stack pointer for thread ffffff01e8137c40: ffffff01e8137a80 [ ffffff01e8137a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978b30, 
ffffff42e2978b20) taskq_thread_wait+0xbe(ffffff42e2978b00, ffffff42e2978b20, ffffff42e2978b30 , ffffff01e8137bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978b00) thread_start+8() ffffff01e997ac40 fffffffffbc2eb80 0 0 99 ffffff431ea20d28 PC: _resume_from_idle+0xf1 THREAD: mac_srs_worker() stack pointer for thread ffffff01e997ac40: ffffff01e997ab30 [ ffffff01e997ab30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431ea20d28, ffffff431ea20d00) mac_srs_worker+0x141(ffffff431ea20d00) thread_start+8() ffffff01e9980c40 fffffffffbc2eb80 0 0 99 ffffff431ea219e8 PC: _resume_from_idle+0xf1 THREAD: mac_srs_worker() stack pointer for thread ffffff01e9980c40: ffffff01e9980b30 [ ffffff01e9980b30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431ea219e8, ffffff431ea219c0) mac_srs_worker+0x141(ffffff431ea219c0) thread_start+8() ffffff01e9986c40 fffffffffbc2eb80 0 0 99 ffffff431ea219ea PC: _resume_from_idle+0xf1 THREAD: mac_rx_srs_poll_ring() stack pointer for thread ffffff01e9986c40: ffffff01e9986b10 [ ffffff01e9986b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431ea219ea, ffffff431ea219c0) mac_rx_srs_poll_ring+0xad(ffffff431ea219c0) thread_start+8() ffffff01e998cc40 fffffffffbc2eb80 0 0 99 ffffff431ea226a8 PC: _resume_from_idle+0xf1 THREAD: mac_srs_worker() stack pointer for thread ffffff01e998cc40: ffffff01e998cb30 [ ffffff01e998cb30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431ea226a8, ffffff431ea22680) mac_srs_worker+0x141(ffffff431ea22680) thread_start+8() ffffff01e9992c40 fffffffffbc2eb80 0 0 99 ffffff431f8ad6e0 PC: _resume_from_idle+0xf1 THREAD: mac_soft_ring_worker() stack pointer for thread ffffff01e9992c40: ffffff01e9992b30 [ ffffff01e9992b30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f8ad6e0, ffffff431f8ad640) mac_soft_ring_worker+0xb1(ffffff431f8ad640) thread_start+8() ffffff01e9998c40 fffffffffbc2eb80 0 0 99 ffffff431f8ad860 PC: _resume_from_idle+0xf1 THREAD: mac_soft_ring_worker() stack pointer for thread ffffff01e9998c40: ffffff01e9998b30 [ ffffff01e9998b30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f8ad860, ffffff431f8ad7c0) mac_soft_ring_worker+0xb1(ffffff431f8ad7c0) thread_start+8() ffffff01e999ec40 fffffffffbc2eb80 0 0 99 ffffff431f8ad9e0 PC: _resume_from_idle+0xf1 THREAD: mac_soft_ring_worker() stack pointer for thread ffffff01e999ec40: ffffff01e999eb30 [ ffffff01e999eb30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f8ad9e0, ffffff431f8ad940) mac_soft_ring_worker+0xb1(ffffff431f8ad940) thread_start+8() ffffff01e99a4c40 fffffffffbc2eb80 0 0 99 ffffff431f8a8ba0 PC: _resume_from_idle+0xf1 THREAD: mac_soft_ring_worker() stack pointer for thread ffffff01e99a4c40: ffffff01e99a4b30 [ ffffff01e99a4b30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f8a8ba0, ffffff431f8a8b00) mac_soft_ring_worker+0xb1(ffffff431f8a8b00) thread_start+8() ffffff01e99aac40 fffffffffbc2eb80 0 0 99 ffffff431f8a8a20 PC: _resume_from_idle+0xf1 THREAD: mac_soft_ring_worker() stack pointer for thread ffffff01e99aac40: ffffff01e99aab30 [ ffffff01e99aab30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f8a8a20, ffffff431f8a8980) mac_soft_ring_worker+0xb1(ffffff431f8a8980) thread_start+8() ffffff01e99b0c40 fffffffffbc2eb80 0 0 99 ffffff431f8a88a0 PC: _resume_from_idle+0xf1 THREAD: mac_soft_ring_worker() stack pointer for thread ffffff01e99b0c40: ffffff01e99b0b30 [ ffffff01e99b0b30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f8a88a0, ffffff431f8a8800) 
mac_soft_ring_worker+0xb1(ffffff431f8a8800) thread_start+8() ffffff01e9532c40 fffffffffbc2eb80 0 0 60 ffffff430fc253b8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_4_ stack pointer for thread ffffff01e9532c40: ffffff01e9532a80 [ ffffff01e9532a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc253b8, ffffff430fc253a8) taskq_thread_wait+0xbe(ffffff430fc25388, ffffff430fc253a8, ffffff430fc253b8 , ffffff01e9532bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25388) thread_start+8() ffffff01e952cc40 fffffffffbc2eb80 0 0 60 ffffff430fc253b8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_4_ stack pointer for thread ffffff01e952cc40: ffffff01e952ca80 [ ffffff01e952ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc253b8, ffffff430fc253a8) taskq_thread_wait+0xbe(ffffff430fc25388, ffffff430fc253a8, ffffff430fc253b8 , ffffff01e952cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25388) thread_start+8() ffffff01e9526c40 fffffffffbc2eb80 0 0 60 ffffff430fc253b8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_4_ stack pointer for thread ffffff01e9526c40: ffffff01e9526a80 [ ffffff01e9526a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc253b8, ffffff430fc253a8) taskq_thread_wait+0xbe(ffffff430fc25388, ffffff430fc253a8, ffffff430fc253b8 , ffffff01e9526bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25388) thread_start+8() ffffff01e9520c40 fffffffffbc2eb80 0 0 60 ffffff430fc253b8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_4_ stack pointer for thread ffffff01e9520c40: ffffff01e9520a80 [ ffffff01e9520a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc253b8, ffffff430fc253a8) taskq_thread_wait+0xbe(ffffff430fc25388, ffffff430fc253a8, ffffff430fc253b8 , ffffff01e9520bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25388) thread_start+8() ffffff01e9538c40 fffffffffbc2eb80 0 0 60 ffffff430fc25188 PC: _resume_from_idle+0xf1 TASKQ: hubd_nexus_enum_tq stack pointer for thread ffffff01e9538c40: ffffff01e9538a80 [ ffffff01e9538a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25188, ffffff430fc25178) taskq_thread_wait+0xbe(ffffff430fc25158, ffffff430fc25178, ffffff430fc25188 , ffffff01e9538bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25158) thread_start+8() ffffff01e9b6dc40 fffffffffbc2eb80 0 0 60 ffffff430fdb12b0 PC: _resume_from_idle+0xf1 TASKQ: USB_hubd_81_pipehndl_tq_1 stack pointer for thread ffffff01e9b6dc40: ffffff01e9b6da80 [ ffffff01e9b6da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb12b0, ffffff430fdb12a0) taskq_thread_wait+0xbe(ffffff430fdb1280, ffffff430fdb12a0, ffffff430fdb12b0 , ffffff01e9b6dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1280) thread_start+8() ffffff01e9b67c40 fffffffffbc2eb80 0 0 60 ffffff430fdb12b0 PC: _resume_from_idle+0xf1 TASKQ: USB_hubd_81_pipehndl_tq_1 stack pointer for thread ffffff01e9b67c40: ffffff01e9b67a80 [ ffffff01e9b67a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb12b0, ffffff430fdb12a0) taskq_thread_wait+0xbe(ffffff430fdb1280, ffffff430fdb12a0, ffffff430fdb12b0 , ffffff01e9b67bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1280) thread_start+8() ffffff01e8d39c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7c70 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_2_ stack pointer for thread ffffff01e8d39c40: ffffff01e8d39a80 [ ffffff01e8d39a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7c70, ffffff430f8d7c60) 
taskq_thread_wait+0xbe(ffffff430f8d7c40, ffffff430f8d7c60, ffffff430f8d7c70 , ffffff01e8d39bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7c40) thread_start+8() ffffff01e8d33c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7c70 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_2_ stack pointer for thread ffffff01e8d33c40: ffffff01e8d33a80 [ ffffff01e8d33a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7c70, ffffff430f8d7c60) taskq_thread_wait+0xbe(ffffff430f8d7c40, ffffff430f8d7c60, ffffff430f8d7c70 , ffffff01e8d33bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7c40) thread_start+8() ffffff01e8d2dc40 fffffffffbc2eb80 0 0 60 ffffff430f8d7c70 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_2_ stack pointer for thread ffffff01e8d2dc40: ffffff01e8d2da80 [ ffffff01e8d2da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7c70, ffffff430f8d7c60) taskq_thread_wait+0xbe(ffffff430f8d7c40, ffffff430f8d7c60, ffffff430f8d7c70 , ffffff01e8d2dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7c40) thread_start+8() ffffff01e8d27c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7c70 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_2_ stack pointer for thread ffffff01e8d27c40: ffffff01e8d27a80 [ ffffff01e8d27a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7c70, ffffff430f8d7c60) taskq_thread_wait+0xbe(ffffff430f8d7c40, ffffff430f8d7c60, ffffff430f8d7c70 , ffffff01e8d27bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7c40) thread_start+8() ffffff01e8d8ec40 fffffffffbc2eb80 0 0 60 ffffff430f8d75e0 PC: _resume_from_idle+0xf1 TASKQ: hubd_nexus_enum_tq stack pointer for thread ffffff01e8d8ec40: ffffff01e8d8ea80 [ ffffff01e8d8ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d75e0, ffffff430f8d75d0) taskq_thread_wait+0xbe(ffffff430f8d75b0, ffffff430f8d75d0, ffffff430f8d75e0 , ffffff01e8d8ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d75b0) thread_start+8() ffffff01e927ac40 fffffffffbc2eb80 0 0 60 ffffff430fc25a48 PC: _resume_from_idle+0xf1 TASKQ: USB_hubd_81_pipehndl_tq_0 stack pointer for thread ffffff01e927ac40: ffffff01e927aa80 [ ffffff01e927aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25a48, ffffff430fc25a38) taskq_thread_wait+0xbe(ffffff430fc25a18, ffffff430fc25a38, ffffff430fc25a48 , ffffff01e927abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25a18) thread_start+8() ffffff01e9274c40 fffffffffbc2eb80 0 0 60 ffffff430fc25a48 PC: _resume_from_idle+0xf1 TASKQ: USB_hubd_81_pipehndl_tq_0 stack pointer for thread ffffff01e9274c40: ffffff01e9274a80 [ ffffff01e9274a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25a48, ffffff430fc25a38) taskq_thread_wait+0xbe(ffffff430fc25a18, ffffff430fc25a38, ffffff430fc25a48 , ffffff01e9274bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25a18) thread_start+8() ffffff01e9db6c40 fffffffffbc2eb80 0 0 60 ffffff4310550da8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_6_ stack pointer for thread ffffff01e9db6c40: ffffff01e9db6a80 [ ffffff01e9db6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550da8, ffffff4310550d98) taskq_thread_wait+0xbe(ffffff4310550d78, ffffff4310550d98, ffffff4310550da8 , ffffff01e9db6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550d78) thread_start+8() ffffff01e9db0c40 fffffffffbc2eb80 0 0 60 ffffff4310550da8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_6_ stack pointer for thread ffffff01e9db0c40: ffffff01e9db0a80 [ ffffff01e9db0a80 
_resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550da8, ffffff4310550d98) taskq_thread_wait+0xbe(ffffff4310550d78, ffffff4310550d98, ffffff4310550da8 , ffffff01e9db0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550d78) thread_start+8() ffffff01e9daac40 fffffffffbc2eb80 0 0 60 ffffff4310550da8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_6_ stack pointer for thread ffffff01e9daac40: ffffff01e9daaa80 [ ffffff01e9daaa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550da8, ffffff4310550d98) taskq_thread_wait+0xbe(ffffff4310550d78, ffffff4310550d98, ffffff4310550da8 , ffffff01e9daabc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550d78) thread_start+8() ffffff01e9da4c40 fffffffffbc2eb80 0 0 60 ffffff4310550da8 PC: _resume_from_idle+0xf1 TASKQ: USB_device_0_pipehndl_tq_6_ stack pointer for thread ffffff01e9da4c40: ffffff01e9da4a80 [ ffffff01e9da4a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550da8, ffffff4310550d98) taskq_thread_wait+0xbe(ffffff4310550d78, ffffff4310550d98, ffffff4310550da8 , ffffff01e9da4bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550d78) thread_start+8() ffffff01e9df3c40 fffffffffbc2eb80 0 0 60 ffffff4310550c90 PC: _resume_from_idle+0xf1 TASKQ: usb_mid_nexus_enum_tq stack pointer for thread ffffff01e9df3c40: ffffff01e9df3a80 [ ffffff01e9df3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550c90, ffffff4310550c80) taskq_thread_wait+0xbe(ffffff4310550c60, ffffff4310550c80, ffffff4310550c90 , ffffff01e9df3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550c60) thread_start+8() ffffff01e9e5dc40 fffffffffbc2eb80 0 0 60 ffffff4310550b78 PC: _resume_from_idle+0xf1 TASKQ: USB_hid_82_pipehndl_tq_0 stack pointer for thread ffffff01e9e5dc40: ffffff01e9e5da80 [ ffffff01e9e5da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550b78, ffffff4310550b68) taskq_thread_wait+0xbe(ffffff4310550b48, ffffff4310550b68, ffffff4310550b78 , ffffff01e9e5dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550b48) thread_start+8() ffffff01e9e57c40 fffffffffbc2eb80 0 0 60 ffffff4310550b78 PC: _resume_from_idle+0xf1 TASKQ: USB_hid_82_pipehndl_tq_0 stack pointer for thread ffffff01e9e57c40: ffffff01e9e57a80 [ ffffff01e9e57a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550b78, ffffff4310550b68) taskq_thread_wait+0xbe(ffffff4310550b48, ffffff4310550b68, ffffff4310550b78 , ffffff01e9e57bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550b48) thread_start+8() ffffff01e9ea6c40 fffffffffbc2eb80 0 0 60 ffffff4310550a60 PC: _resume_from_idle+0xf1 TASKQ: USB_hid_81_pipehndl_tq_1 stack pointer for thread ffffff01e9ea6c40: ffffff01e9ea6a80 [ ffffff01e9ea6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550a60, ffffff4310550a50) taskq_thread_wait+0xbe(ffffff4310550a30, ffffff4310550a50, ffffff4310550a60 , ffffff01e9ea6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550a30) thread_start+8() ffffff01e9ea0c40 fffffffffbc2eb80 0 0 60 ffffff4310550a60 PC: _resume_from_idle+0xf1 TASKQ: USB_hid_81_pipehndl_tq_1 stack pointer for thread ffffff01e9ea0c40: ffffff01e9ea0a80 [ ffffff01e9ea0a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550a60, ffffff4310550a50) taskq_thread_wait+0xbe(ffffff4310550a30, ffffff4310550a50, ffffff4310550a60 , ffffff01e9ea0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550a30) thread_start+8() ffffff01e99b6c40 fffffffffbc2eb80 0 0 99 ffffff430fd41c10 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack 
pointer for thread ffffff01e99b6c40: ffffff01e99b6b40 [ ffffff01e99b6b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41c10, ffffff430fd41bd0) squeue_worker+0x104(ffffff430fd41bc0) thread_start+8() ffffff01e99bcc40 fffffffffbc2eb80 0 0 99 ffffff430fd41c12 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e99bcc40: ffffff01e99bcb00 [ ffffff01e99bcb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41c12, ffffff430fd41bd0) squeue_polling_thread+0xa9(ffffff430fd41bc0) thread_start+8() ffffff01e99c2c40 fffffffffbc2eb80 0 0 99 ffffff430fd41b50 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e99c2c40: ffffff01e99c2b40 [ ffffff01e99c2b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41b50, ffffff430fd41b10) squeue_worker+0x104(ffffff430fd41b00) thread_start+8() ffffff01e99c8c40 fffffffffbc2eb80 0 0 99 ffffff430fd41b52 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e99c8c40: ffffff01e99c8b00 [ ffffff01e99c8b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41b52, ffffff430fd41b10) squeue_polling_thread+0xa9(ffffff430fd41b00) thread_start+8() ffffff01e9a40c40 fffffffffbc2eb80 0 0 60 ffffff4361b02b40 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00020003 stack pointer for thread ffffff01e9a40c40: ffffff01e9a40a80 [ ffffff01e9a40a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02b40, ffffff4361b02b30) taskq_thread_wait+0xbe(ffffff4361b02b10, ffffff4361b02b30, ffffff4361b02b40 , ffffff01e9a40bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b02b10) thread_start+8() ffffff01e9aa6c40 fffffffffbc2eb80 0 0 60 ffffff4361b02a28 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00020003 stack pointer for thread ffffff01e9aa6c40: ffffff01e9aa6a80 [ ffffff01e9aa6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02a28, ffffff4361b02a18) taskq_thread_wait+0xbe(ffffff4361b029f8, ffffff4361b02a18, ffffff4361b02a28 , ffffff01e9aa6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b029f8) thread_start+8() ffffff01e819dc40 fffffffffbc2eb80 0 0 60 fffffffffbcfaf4c PC: _resume_from_idle+0xf1 THREAD: streams_bufcall_service() stack pointer for thread ffffff01e819dc40: ffffff01e819db70 [ ffffff01e819db70 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcfaf4c, fffffffffbcfafd0) streams_bufcall_service+0x8d() thread_start+8() ffffff01e81a3c40 fffffffffbc2eb80 0 0 60 fffffffffbcca0f8 PC: _resume_from_idle+0xf1 THREAD: streams_qbkgrnd_service() stack pointer for thread ffffff01e81a3c40: ffffff01e81a3b50 [ ffffff01e81a3b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcca0f8, fffffffffbcca0f0) streams_qbkgrnd_service+0x151() thread_start+8() ffffff01e81a9c40 fffffffffbc2eb80 0 0 60 fffffffffbcca0fa PC: _resume_from_idle+0xf1 THREAD: streams_sqbkgrnd_service() stack pointer for thread ffffff01e81a9c40: ffffff01e81a9b60 [ ffffff01e81a9b60 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcca0fa, fffffffffbcca0f0) streams_sqbkgrnd_service+0xe5() thread_start+8() ffffff01e81afc40 fffffffffbc2eb80 0 0 60 fffffffffbc31ec0 PC: _resume_from_idle+0xf1 THREAD: page_capture_thread() stack pointer for thread ffffff01e81afc40: ffffff01e81afae0 [ ffffff01e81afae0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(fffffffffbc31ec0, fffffffffbc2da18, 117a12daa00, 989680, 0) cv_reltimedwait+0x51(fffffffffbc31ec0, fffffffffbc2da18, 1d524, 4) 
page_capture_thread+0xb1() thread_start+8() ffffff01eb818c40 ffffff42e2966008 ffffff4321d35400 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb818c40: ffffff01eb8189f0 [ ffffff01eb8189f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01ebe22c40 ffffff42e2966008 ffffff4321d28c80 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01ebe22c40: ffffff01ebe229f0 [ ffffff01ebe229f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01ea5f3c40 ffffff42e2966008 ffffff4321d32800 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01ea5f3c40: ffffff01ea5f39f0 [ ffffff01ea5f39f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e9f59c40 ffffff42e2966008 ffffff4384a033c0 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e9f59c40: ffffff01e9f599f0 [ ffffff01e9f599f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01ebe28c40 ffffff42e2966008 ffffff4384ac0200 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01ebe28c40: ffffff01ebe289f0 [ ffffff01ebe289f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e9f5fc40 ffffff42e2966008 ffffff4384a048c0 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e9f5fc40: ffffff01e9f5f9f0 [ ffffff01e9f5f9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e9f6bc40 ffffff42e2966008 ffffff4311e25580 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e9f6bc40: ffffff01e9f6b9f0 [ ffffff01e9f6b9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e9f65c40 ffffff42e2966008 ffffff4312042a40 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e9f65c40: ffffff01e9f659f0 [ ffffff01e9f659f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01ea323c40 ffffff42e2966008 ffffff4311a4fe00 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01ea323c40: 
ffffff01ea3239f0 [ ffffff01ea3239f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e8d73c40 ffffff42e2966008 ffffff430e71d840 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e8d73c40: ffffff01e8d739f0 [ ffffff01e8d739f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb61ac40 ffffff42e2966008 ffffff4312049600 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb61ac40: ffffff01eb61a9f0 [ ffffff01eb61a9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01ea117c40 ffffff42e2966008 ffffff42a9c48e80 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01ea117c40: ffffff01ea1179f0 [ ffffff01ea1179f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb97cc40 ffffff42e2966008 ffffff4312045500 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb97cc40: ffffff01eb97c9f0 [ ffffff01eb97c9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb602c40 ffffff42e2966008 ffffff4311f513c0 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb602c40: ffffff01eb6029f0 [ ffffff01eb6029f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb858c40 ffffff42e2966008 ffffff4311e27180 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb858c40: ffffff01eb8589f0 [ ffffff01eb8589f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e9865c40 ffffff42e2966008 ffffff4311a52100 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e9865c40: ffffff01e98659f0 [ ffffff01e98659f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb8ecc40 ffffff42e2966008 ffffff4311f4fec0 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb8ecc40: ffffff01eb8ec9f0 [ ffffff01eb8ec9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, 
ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01ebd7ac40 ffffff42e2966008 ffffff4312048100 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01ebd7ac40: ffffff01ebd7a9f0 [ ffffff01ebd7a9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01ea317c40 ffffff42e2966008 ffffff431204b900 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01ea317c40: ffffff01ea3179f0 [ ffffff01ea3179f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb952c40 ffffff42e2966008 ffffff431202dc00 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb952c40: ffffff01eb9529f0 [ ffffff01eb9529f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb98ec40 ffffff42e2966008 ffffff4311f51ac0 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb98ec40: ffffff01eb98e9f0 [ ffffff01eb98e9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb54ec40 ffffff42e2966008 ffffff430e744580 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb54ec40: ffffff01eb54e9f0 [ ffffff01eb54e9f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01eb846c40 ffffff42e2966008 ffffff4312044e00 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01eb846c40: ffffff01eb8469f0 [ ffffff01eb8469f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e8d85c40 ffffff42e2966008 ffffff4321c398c0 0 60 ffffff42e291d920 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e8d85c40: ffffff01e8d859f0 [ ffffff01e8d859f0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e291d920, ffffff42e291d928, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e291d920, ffffff42e291d928, 1770, 4) kcfpool_svc+0x8e(0) thread_start+8() ffffff01e81b5c40 ffffff42e2966008 ffffff42a9c4f840 0 60 ffffff42e1ac3c74 PC: _resume_from_idle+0xf1 CMD: kcfpoold stack pointer for thread ffffff01e81b5c40: ffffff01e81b59d0 [ ffffff01e81b59d0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e1ac3c74, ffffff42e1ac3c78, df8475800, 989680, 0) cv_reltimedwait+0x51(ffffff42e1ac3c74, ffffff42e1ac3c78, 1770, 4) kcfpoold+0xf6(0) thread_start+8() ffffff01e81bbc40 fffffffffbc2eb80 0 0 60 fffffffffbd1dba0 PC: _resume_from_idle+0xf1 THREAD: arc_reclaim_thread() stack pointer for thread 
ffffff01e81bbc40: ffffff01e81bbab0 [ ffffff01e81bbab0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(fffffffffbd1dba0, fffffffffbd1db98, 3b9aca00, 989680 , 0) cv_timedwait+0x5c(fffffffffbd1dba0, fffffffffbd1db98, 1bdb73) arc_reclaim_thread+0x11a() thread_start+8() ffffff01e81c1c40 fffffffffbc2eb80 0 0 60 fffffffffbd21fc0 PC: _resume_from_idle+0xf1 THREAD: l2arc_feed_thread() stack pointer for thread ffffff01e81c1c40: ffffff01e81c1aa0 [ ffffff01e81c1aa0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(fffffffffbd21fc0, fffffffffbd21fb8, 3b9aca00, 989680 , 0) cv_timedwait+0x5c(fffffffffbd21fc0, fffffffffbd21fb8, 1bdb85) l2arc_feed_thread+0xad() thread_start+8() ffffff01e81c7c40 fffffffffbc2eb80 0 0 60 fffffffffbcf07f0 PC: _resume_from_idle+0xf1 THREAD: pm_dep_thread() stack pointer for thread ffffff01e81c7c40: ffffff01e81c7b60 [ ffffff01e81c7b60 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcf07f0, fffffffffbcf1e78) pm_dep_thread+0xbd() thread_start+8() ffffff01e81cdc40 fffffffffbc2eb80 0 0 60 ffffff42e2978a18 PC: _resume_from_idle+0xf1 TASKQ: ppm_nexus_enum_tq stack pointer for thread ffffff01e81cdc40: ffffff01e81cda80 [ ffffff01e81cda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978a18, ffffff42e2978a08) taskq_thread_wait+0xbe(ffffff42e29789e8, ffffff42e2978a08, ffffff42e2978a18 , ffffff01e81cdbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e29789e8) thread_start+8() ffffff01e81d3c40 fffffffffbc2eb80 0 0 60 ffffff42e2978900 PC: _resume_from_idle+0xf1 TASKQ: ahci_nexus_enum_tq stack pointer for thread ffffff01e81d3c40: ffffff01e81d3a80 [ ffffff01e81d3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978900, ffffff42e29788f0) taskq_thread_wait+0xbe(ffffff42e29788d0, ffffff42e29788f0, ffffff42e2978900 , ffffff01e81d3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e29788d0) thread_start+8() ffffff01e81dfc40 fffffffffbc2eb80 0 0 60 ffffff42e29787e8 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port0 stack pointer for thread ffffff01e81dfc40: ffffff01e81dfa80 [ ffffff01e81dfa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e29787e8, ffffff42e29787d8) taskq_thread_wait+0xbe(ffffff42e29787b8, ffffff42e29787d8, ffffff42e29787e8 , ffffff01e81dfbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e29787b8) thread_start+8() ffffff01e81d9c40 fffffffffbc2eb80 0 0 60 ffffff42e29787e8 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port0 stack pointer for thread ffffff01e81d9c40: ffffff01e81d9a80 [ ffffff01e81d9a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e29787e8, ffffff42e29787d8) taskq_thread_wait+0xbe(ffffff42e29787b8, ffffff42e29787d8, ffffff42e29787e8 , ffffff01e81d9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e29787b8) thread_start+8() ffffff01e81ebc40 fffffffffbc2eb80 0 0 60 ffffff42e29786d0 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port1 stack pointer for thread ffffff01e81ebc40: ffffff01e81eba80 [ ffffff01e81eba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e29786d0, ffffff42e29786c0) taskq_thread_wait+0xbe(ffffff42e29786a0, ffffff42e29786c0, ffffff42e29786d0 , ffffff01e81ebbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e29786a0) thread_start+8() ffffff01e81e5c40 fffffffffbc2eb80 0 0 60 ffffff42e29786d0 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port1 stack pointer for thread ffffff01e81e5c40: ffffff01e81e5a80 [ ffffff01e81e5a80 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait+0x70(ffffff42e29786d0, ffffff42e29786c0) taskq_thread_wait+0xbe(ffffff42e29786a0, ffffff42e29786c0, ffffff42e29786d0 , ffffff01e81e5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e29786a0) thread_start+8() ffffff01e81f7c40 fffffffffbc2eb80 0 0 60 ffffff42e29785b8 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port2 stack pointer for thread ffffff01e81f7c40: ffffff01e81f7a80 [ ffffff01e81f7a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e29785b8, ffffff42e29785a8) taskq_thread_wait+0xbe(ffffff42e2978588, ffffff42e29785a8, ffffff42e29785b8 , ffffff01e81f7bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978588) thread_start+8() ffffff01e81f1c40 fffffffffbc2eb80 0 0 60 ffffff42e29785b8 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port2 stack pointer for thread ffffff01e81f1c40: ffffff01e81f1a80 [ ffffff01e81f1a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e29785b8, ffffff42e29785a8) taskq_thread_wait+0xbe(ffffff42e2978588, ffffff42e29785a8, ffffff42e29785b8 , ffffff01e81f1bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978588) thread_start+8() ffffff01e8203c40 fffffffffbc2eb80 0 0 60 ffffff42e29784a0 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port3 stack pointer for thread ffffff01e8203c40: ffffff01e8203a80 [ ffffff01e8203a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e29784a0, ffffff42e2978490) taskq_thread_wait+0xbe(ffffff42e2978470, ffffff42e2978490, ffffff42e29784a0 , ffffff01e8203bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978470) thread_start+8() ffffff01e81fdc40 fffffffffbc2eb80 0 0 60 ffffff42e29784a0 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port3 stack pointer for thread ffffff01e81fdc40: ffffff01e81fda80 [ ffffff01e81fda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e29784a0, ffffff42e2978490) taskq_thread_wait+0xbe(ffffff42e2978470, ffffff42e2978490, ffffff42e29784a0 , ffffff01e81fdbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978470) thread_start+8() ffffff01e820fc40 fffffffffbc2eb80 0 0 60 ffffff42e2978388 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port4 stack pointer for thread ffffff01e820fc40: ffffff01e820fa80 [ ffffff01e820fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978388, ffffff42e2978378) taskq_thread_wait+0xbe(ffffff42e2978358, ffffff42e2978378, ffffff42e2978388 , ffffff01e820fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978358) thread_start+8() ffffff01e8209c40 fffffffffbc2eb80 0 0 60 ffffff42e2978388 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port4 stack pointer for thread ffffff01e8209c40: ffffff01e8209a80 [ ffffff01e8209a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978388, ffffff42e2978378) taskq_thread_wait+0xbe(ffffff42e2978358, ffffff42e2978378, ffffff42e2978388 , ffffff01e8209bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978358) thread_start+8() ffffff01e821bc40 fffffffffbc2eb80 0 0 60 ffffff42e2978270 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port5 stack pointer for thread ffffff01e821bc40: ffffff01e821ba80 [ ffffff01e821ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978270, ffffff42e2978260) taskq_thread_wait+0xbe(ffffff42e2978240, ffffff42e2978260, ffffff42e2978270 , ffffff01e821bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978240) thread_start+8() ffffff01e8215c40 fffffffffbc2eb80 0 0 60 ffffff42e2978270 PC: _resume_from_idle+0xf1 TASKQ: ahci_event_handle_taskq_port5 
stack pointer for thread ffffff01e8215c40: ffffff01e8215a80 [ ffffff01e8215a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978270, ffffff42e2978260) taskq_thread_wait+0xbe(ffffff42e2978240, ffffff42e2978260, ffffff42e2978270 , ffffff01e8215bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978240) thread_start+8() ffffff01e8221c40 fffffffffbc2eb80 0 0 60 ffffff42e2978158 PC: _resume_from_idle+0xf1 TASKQ: pci15d9_626_0 stack pointer for thread ffffff01e8221c40: ffffff01e8221a80 [ ffffff01e8221a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978158, ffffff42e2978148) taskq_thread_wait+0xbe(ffffff42e2978128, ffffff42e2978148, ffffff42e2978158 , ffffff01e8221bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978128) thread_start+8() ffffff01e8227c40 fffffffffbc2eb80 0 0 60 fffffffffbd13e80 PC: _resume_from_idle+0xf1 THREAD: sata_event_daemon() stack pointer for thread ffffff01e8227c40: ffffff01e8227b00 [ ffffff01e8227b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(fffffffffbd13e80, fffffffffbd13e78, 2faf080, 989680 , 0) cv_reltimedwait+0x51(fffffffffbd13e80, fffffffffbd13e78, 5, 4) sata_event_daemon+0xff(fffffffffbd13e68) thread_start+8() ffffff01e8257c40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread ffffff01e8257c40: ffffff01e8257a80 [ ffffff01e8257a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e8257bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e8251c40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread ffffff01e8251c40: ffffff01e8251a80 [ ffffff01e8251a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e8251bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e824bc40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread ffffff01e824bc40: ffffff01e824ba80 [ ffffff01e824ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e824bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e8245c40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread ffffff01e8245c40: ffffff01e8245a80 [ ffffff01e8245a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e8245bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e823fc40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread ffffff01e823fc40: ffffff01e823fa80 [ ffffff01e823fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e823fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e8239c40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread 
ffffff01e8239c40: ffffff01e8239a80 [ ffffff01e8239a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e8239bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e8233c40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread ffffff01e8233c40: ffffff01e8233a80 [ ffffff01e8233a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e8233bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e822dc40 fffffffffbc2eb80 0 0 97 ffffff42e2978040 PC: _resume_from_idle+0xf1 TASKQ: sd_drv_taskq stack pointer for thread ffffff01e822dc40: ffffff01e822da80 [ ffffff01e822da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2978040, ffffff42e2978030) taskq_thread_wait+0xbe(ffffff42e2978010, ffffff42e2978030, ffffff42e2978040 , ffffff01e822dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff42e2978010) thread_start+8() ffffff01e825dc40 fffffffffbc2eb80 0 0 97 ffffff430e629e80 PC: _resume_from_idle+0xf1 TASKQ: sd_rmw_taskq stack pointer for thread ffffff01e825dc40: ffffff01e825da80 [ ffffff01e825da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e629e80, ffffff430e629e70) taskq_thread_wait+0xbe(ffffff430e629e50, ffffff430e629e70, ffffff430e629e80 , ffffff01e825dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e629e50) thread_start+8() ffffff01e8269c40 fffffffffbc2eb80 0 0 97 ffffff430e629d68 PC: _resume_from_idle+0xf1 TASKQ: xbuf_taskq stack pointer for thread ffffff01e8269c40: ffffff01e8269a80 [ ffffff01e8269a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e629d68, ffffff430e629d58) taskq_thread_wait+0xbe(ffffff430e629d38, ffffff430e629d58, ffffff430e629d68 , ffffff01e8269bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e629d38) thread_start+8() ffffff01e829ec40 ffffff430e6fb010 ffffff42a9c4e340 2 0 ffffff430e6ef5c8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e829ec40: ffffff01e829e990 [ ffffff01e829e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef5c8, ffffff430e6ef5b8) taskq_thread_wait+0xbe(ffffff430e6ef598, ffffff430e6ef5b8, ffffff430e6ef5c8 , ffffff01e829ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef598) thread_start+8() ffffff01e82a4c40 ffffff430e6fb010 ffffff42a9c4dc40 2 99 ffffff430e6ef6e0 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e82a4c40: ffffff01e82a4990 [ ffffff01e82a4990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef6e0, ffffff430e6ef6d0) taskq_thread_wait+0xbe(ffffff430e6ef6b0, ffffff430e6ef6d0, ffffff430e6ef6e0 , ffffff01e82a4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef6b0) thread_start+8() ffffff01e859ec40 ffffff430e6fb010 ffffff42a9c4a380 2 99 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e859ec40: ffffff01e859e990 [ ffffff01e859e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e859ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e8598c40 ffffff430e6fb010 ffffff42a9c4aa80 2 99 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: 
zpool-rpool stack pointer for thread ffffff01e8598c40: ffffff01e8598990 [ ffffff01e8598990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e8598ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e8592c40 ffffff430e6fb010 ffffff42a9c4b180 2 0 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e8592c40: ffffff01e8592990 [ ffffff01e8592990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e8592ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e82c2c40 ffffff430e6fb010 ffffff42a9c4b880 2 99 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e82c2c40: ffffff01e82c2990 [ ffffff01e82c2990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e82c2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e82bcc40 ffffff430e6fb010 ffffff42a9c4c040 2 99 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e82bcc40: ffffff01e82bc990 [ ffffff01e82bc990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e82bcad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e82b6c40 ffffff430e6fb010 ffffff42a9c4c740 2 99 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e82b6c40: ffffff01e82b6990 [ ffffff01e82b6990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e82b6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e82b0c40 ffffff430e6fb010 ffffff42a9c4ce40 2 99 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e82b0c40: ffffff01e82b0990 [ ffffff01e82b0990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e82b0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e82aac40 ffffff430e6fb010 ffffff42a9c4d540 2 0 ffffff430e6ef7f8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e82aac40: ffffff01e82aa990 [ ffffff01e82aa990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef7f8, ffffff430e6ef7e8) taskq_thread_wait+0xbe(ffffff430e6ef7c8, ffffff430e6ef7e8, ffffff430e6ef7f8 , ffffff01e82aaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef7c8) thread_start+8() ffffff01e96dac40 ffffff430e6fb010 ffffff430e7120c0 2 99 ffffff430e6ef910 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e96dac40: ffffff01e96da990 [ ffffff01e96da990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef910, ffffff430e6ef900) taskq_thread_wait+0xbe(ffffff430e6ef8e0, ffffff430e6ef900, ffffff430e6ef910 , ffffff01e96daad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef8e0) thread_start+8() ffffff01e96d4c40 
ffffff430e6fb010 ffffff430e70a000   2   0 ffffff430e6ef910
  PC: _resume_from_idle+0xf1    CMD: zpool-rpool
  stack pointer for thread ffffff01e96d4c40: ffffff01e96d4990
  [ ffffff01e96d4990 _resume_from_idle+0xf1() ]
    swtch+0x141()
    cv_wait+0x70(ffffff430e6ef910, ffffff430e6ef900)
    taskq_thread_wait+0xbe(ffffff430e6ef8e0, ffffff430e6ef900, ffffff430e6ef910, ffffff01e96d4ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff430e6ef8e0)
    thread_start+8()

[ ... further zpool-rpool taskq worker threads omitted: each shows the same idle
  stack (swtch -> cv_wait -> taskq_thread_wait -> taskq_thread -> thread_start),
  differing only in thread/stack addresses and in the taskq condition variable
  being waited on (ffffff430e6ef910, ffffff430e6efa28, ffffff430e6efb40,
  ffffff430e6efc58, ffffff430e6efd70, ffffff430e6efe88, ffffff430e629048,
  ffffff430e629160, ffffff430e629278, ffffff430e629390, ffffff430e6294a8,
  ffffff430e6295c0, ffffff430e6296d8). The listing continues with the entry for
  thread ffffff01e8808c40: ]

ffffff01e8808c40 ffffff430e6fb010 ffffff430eb10c80   2   0 ffffff430e6296d8
  PC: _resume_from_idle+0xf1    CMD: zpool-rpool
  stack pointer for thread ffffff01e8808c40:
ffffff01e8808990 [ ffffff01e8808990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6296d8, ffffff430e6296c8) taskq_thread_wait+0xbe(ffffff430e6296a8, ffffff430e6296c8, ffffff430e6296d8 , ffffff01e8808ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6296a8) thread_start+8() ffffff01e8802c40 ffffff430e6fb010 ffffff430eb11380 2 99 ffffff430e6296d8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e8802c40: ffffff01e8802990 [ ffffff01e8802990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6296d8, ffffff430e6296c8) taskq_thread_wait+0xbe(ffffff430e6296a8, ffffff430e6296c8, ffffff430e6296d8 , ffffff01e8802ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6296a8) thread_start+8() ffffff01e87fcc40 ffffff430e6fb010 ffffff430eb11a80 2 0 ffffff430e6296d8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e87fcc40: ffffff01e87fc990 [ ffffff01e87fc990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6296d8, ffffff430e6296c8) taskq_thread_wait+0xbe(ffffff430e6296a8, ffffff430e6296c8, ffffff430e6296d8 , ffffff01e87fcad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6296a8) thread_start+8() ffffff01e87f6c40 ffffff430e6fb010 ffffff430eb12180 2 0 ffffff430e6296d8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e87f6c40: ffffff01e87f6990 [ ffffff01e87f6990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6296d8, ffffff430e6296c8) taskq_thread_wait+0xbe(ffffff430e6296a8, ffffff430e6296c8, ffffff430e6296d8 , ffffff01e87f6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6296a8) thread_start+8() ffffff01e87f0c40 ffffff430e6fb010 ffffff430eb12880 2 0 ffffff430e6296d8 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e87f0c40: ffffff01e87f0990 [ ffffff01e87f0990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6296d8, ffffff430e6296c8) taskq_thread_wait+0xbe(ffffff430e6296a8, ffffff430e6296c8, ffffff430e6296d8 , ffffff01e87f0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6296a8) thread_start+8() ffffff01e881ac40 ffffff430e6fb010 ffffff430eb0f780 2 99 ffffff430e6297f0 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e881ac40: ffffff01e881a990 [ ffffff01e881a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6297f0, ffffff430e6297e0) taskq_thread_wait+0xbe(ffffff430e6297c0, ffffff430e6297e0, ffffff430e6297f0 , ffffff01e881aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6297c0) thread_start+8() ffffff01e884ac40 ffffff430e6fb010 ffffff430eb0bec0 2 99 ffffff430e629908 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e884ac40: ffffff01e884a990 [ ffffff01e884a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e629908, ffffff430e6298f8) taskq_thread_wait+0xbe(ffffff430e6298d8, ffffff430e6298f8, ffffff430e629908 , ffffff01e884aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6298d8) thread_start+8() ffffff01e8850c40 ffffff430e6fb010 ffffff430eb0b7c0 2 99 ffffff430e629a20 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e8850c40: ffffff01e8850990 [ ffffff01e8850990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e629a20, ffffff430e629a10) taskq_thread_wait+0xbe(ffffff430e6299f0, ffffff430e629a10, ffffff430e629a20 , ffffff01e8850ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6299f0) thread_start+8() ffffff01e8856c40 ffffff430e6fb010 ffffff430eb0b0c0 2 99 ffffff430e629b38 PC: 
_resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e8856c40: ffffff01e8856990 [ ffffff01e8856990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e629b38, ffffff430e629b28) taskq_thread_wait+0xbe(ffffff430e629b08, ffffff430e629b28, ffffff430e629b38 , ffffff01e8856ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e629b08) thread_start+8() ffffff01e885cc40 ffffff430e6fb010 ffffff430eb0a900 2 99 ffffff430e629c50 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e885cc40: ffffff01e885c990 [ ffffff01e885c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e629c50, ffffff430e629c40) taskq_thread_wait+0xbe(ffffff430e629c20, ffffff430e629c40, ffffff430e629c50 , ffffff01e885cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e629c20) thread_start+8() ffffff01e8298c40 ffffff430e6fb010 ffffff42a9c4ea40 2 99 ffffff42ec343380 PC: _resume_from_idle+0xf1 CMD: zpool-rpool stack pointer for thread ffffff01e8298c40: ffffff01e8298a50 [ ffffff01e8298a50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42ec343380, ffffff42ec343378) spa_thread+0x1db(ffffff42ec342ac0) thread_start+8() ffffff01e8874c40 fffffffffbc2eb80 0 0 60 ffffff430e6ef4b0 PC: _resume_from_idle+0xf1 TASKQ: zfs_vn_rele_taskq stack pointer for thread ffffff01e8874c40: ffffff01e8874a80 [ ffffff01e8874a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef4b0, ffffff430e6ef4a0) taskq_thread_wait+0xbe(ffffff430e6ef480, ffffff430e6ef4a0, ffffff430e6ef4b0 , ffffff01e8874bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef480) thread_start+8() ffffff01e887ac40 fffffffffbc2eb80 0 0 60 ffffff430eaf4ed4 PC: _resume_from_idle+0xf1 THREAD: txg_quiesce_thread() stack pointer for thread ffffff01e887ac40: ffffff01e887aad0 [ ffffff01e887aad0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430eaf4ed4, ffffff430eaf4e98) txg_thread_wait+0xaf(ffffff430eaf4e90, ffffff01e887abc0, ffffff430eaf4ed4, 0 ) txg_quiesce_thread+0x106(ffffff430eaf4d00) thread_start+8() ffffff01e8883c40 fffffffffbc2eb80 0 0 60 ffffff430eaf4ed0 PC: _resume_from_idle+0xf1 THREAD: txg_sync_thread() stack pointer for thread ffffff01e8883c40: ffffff01e8883a10 [ ffffff01e8883a10 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff430eaf4ed0, ffffff430eaf4e98, 1296d5b80, 989680, 0) cv_timedwait+0x5c(ffffff430eaf4ed0, ffffff430eaf4e98, 1bdc11) txg_thread_wait+0x5f(ffffff430eaf4e90, ffffff01e8883bc0, ffffff430eaf4ed0, 1f3) txg_sync_thread+0xfd(ffffff430eaf4d00) thread_start+8() ffffff01e8889c40 fffffffffbc2eb80 0 0 60 ffffff430e6ef398 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01e8889c40: ffffff01e8889a80 [ ffffff01e8889a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef398, ffffff430e6ef388) taskq_thread_wait+0xbe(ffffff430e6ef368, ffffff430e6ef388, ffffff430e6ef398 , ffffff01e8889bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef368) thread_start+8() ffffff01e888fc40 fffffffffbc2eb80 0 0 0 fffffffffbc13854 PC: _resume_from_idle+0xf1 THREAD: memscrubber() stack pointer for thread ffffff01e888fc40: ffffff01e888fb20 [ ffffff01e888fb20 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbc13854, fffffffffbc13858) memscrubber+0x146() thread_start+8() ffffff01e8895c40 fffffffffbc2eb80 0 0 60 ffffff430e6ef280 PC: _resume_from_idle+0xf1 TASKQ: acpinex_nexus_enum_tq stack pointer for thread ffffff01e8895c40: ffffff01e8895a80 [ ffffff01e8895a80 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait+0x70(ffffff430e6ef280, ffffff430e6ef270) taskq_thread_wait+0xbe(ffffff430e6ef250, ffffff430e6ef270, ffffff430e6ef280 , ffffff01e8895bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef250) thread_start+8() ffffff01e889bc40 fffffffffbc2eb80 0 0 60 ffffff430e6ef168 PC: _resume_from_idle+0xf1 TASKQ: acpinex_nexus_enum_tq stack pointer for thread ffffff01e889bc40: ffffff01e889ba80 [ ffffff01e889ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef168, ffffff430e6ef158) taskq_thread_wait+0xbe(ffffff430e6ef138, ffffff430e6ef158, ffffff430e6ef168 , ffffff01e889bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef138) thread_start+8() ffffff01e88a1c40 fffffffffbc2eb80 0 0 60 ffffff430e6ef050 PC: _resume_from_idle+0xf1 TASKQ: acpinex_nexus_enum_tq stack pointer for thread ffffff01e88a1c40: ffffff01e88a1a80 [ ffffff01e88a1a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e6ef050, ffffff430e6ef040) taskq_thread_wait+0xbe(ffffff430e6ef020, ffffff430e6ef040, ffffff430e6ef050 , ffffff01e88a1bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430e6ef020) thread_start+8() ffffff01e88a7c40 fffffffffbc2eb80 0 0 60 ffffff430ee8ce90 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e88a7c40: ffffff01e88a7a80 [ ffffff01e88a7a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8ce90, ffffff430ee8ce80) taskq_thread_wait+0xbe(ffffff430ee8ce60, ffffff430ee8ce80, ffffff430ee8ce90 , ffffff01e88a7bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8ce60) thread_start+8() ffffff01e88adc40 fffffffffbc2eb80 0 0 60 fffffffffbd489a0 PC: _resume_from_idle+0xf1 THREAD: dld_taskq_dispatch() stack pointer for thread ffffff01e88adc40: ffffff01e88adb60 [ ffffff01e88adb60 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbd489a0, fffffffffbd48998) dld_taskq_dispatch+0x115() thread_start+8() ffffff01e88b3c40 fffffffffbc2eb80 0 0 60 ffffff430ee8cd78 PC: _resume_from_idle+0xf1 TASKQ: IP_INJECT_QUEUE_OUT stack pointer for thread ffffff01e88b3c40: ffffff01e88b3a80 [ ffffff01e88b3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8cd78, ffffff430ee8cd68) taskq_thread_wait+0xbe(ffffff430ee8cd48, ffffff430ee8cd68, ffffff430ee8cd78 , ffffff01e88b3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8cd48) thread_start+8() ffffff01e88b9c40 fffffffffbc2eb80 0 0 60 ffffff430ee8cc60 PC: _resume_from_idle+0xf1 TASKQ: IP_INJECT_QUEUE_IN stack pointer for thread ffffff01e88b9c40: ffffff01e88b9a80 [ ffffff01e88b9a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8cc60, ffffff430ee8cc50) taskq_thread_wait+0xbe(ffffff430ee8cc30, ffffff430ee8cc50, ffffff430ee8cc60 , ffffff01e88b9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8cc30) thread_start+8() ffffff01e88bfc40 fffffffffbc2eb80 0 0 60 ffffff430ee8cb48 PC: _resume_from_idle+0xf1 TASKQ: IP_NIC_EVENT_QUEUE stack pointer for thread ffffff01e88bfc40: ffffff01e88bfa80 [ ffffff01e88bfa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8cb48, ffffff430ee8cb38) taskq_thread_wait+0xbe(ffffff430ee8cb18, ffffff430ee8cb38, ffffff430ee8cb48 , ffffff01e88bfbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8cb18) thread_start+8() ffffff01e88c5c40 fffffffffbc2eb80 0 0 99 ffffff430e482808 PC: _resume_from_idle+0xf1 THREAD: ipsec_loader() stack pointer for thread ffffff01e88c5c40: ffffff01e88c5b30 [ ffffff01e88c5b30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e482808, ffffff430e4827f0) 
ipsec_loader+0x149(ffffff430e481000) thread_start+8() ffffff01e88cbc40 fffffffffbc2eb80 0 0 99 ffffff430f449ed0 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e88cbc40: ffffff01e88cbb40 [ ffffff01e88cbb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449ed0, ffffff430f449e90) squeue_worker+0x104(ffffff430f449e80) thread_start+8() ffffff01e88d1c40 fffffffffbc2eb80 0 0 99 ffffff430f449ed2 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e88d1c40: ffffff01e88d1b00 [ ffffff01e88d1b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449ed2, ffffff430f449e90) squeue_polling_thread+0xa9(ffffff430f449e80) thread_start+8() ffffff01e88d7c40 fffffffffbc2eb80 0 0 60 ffffffffc000fff0 PC: _resume_from_idle+0xf1 THREAD: dce_reclaim_worker() stack pointer for thread ffffff01e88d7c40: ffffff01e88d7ab0 [ ffffff01e88d7ab0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffffffc000fff0, ffffffffc000ffe8, df8475800, 989680, 0) cv_timedwait+0x5c(ffffffffc000fff0, ffffffffc000ffe8, 1becfa) dce_reclaim_worker+0xab(0) thread_start+8() ffffff01e88ddc40 fffffffffbc2eb80 0 0 60 ffffff430e672e70 PC: _resume_from_idle+0xf1 THREAD: ill_taskq_dispatch() stack pointer for thread ffffff01e88ddc40: ffffff01e88ddaf0 [ ffffff01e88ddaf0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430e672e70, ffffff430e672e68) ill_taskq_dispatch+0x155(ffffff430e672000) thread_start+8() ffffff01e88e3c40 fffffffffbc2eb80 0 0 60 ffffff430ee8ca30 PC: _resume_from_idle+0xf1 TASKQ: ilb_rule_taskq_ffffff42a999ce0 stack pointer for thread ffffff01e88e3c40: ffffff01e88e3a80 [ ffffff01e88e3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8ca30, ffffff430ee8ca20) taskq_thread_wait+0xbe(ffffff430ee8ca00, ffffff430ee8ca20, ffffff430ee8ca30 , ffffff01e88e3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8ca00) thread_start+8() ffffff01e8949c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c918 PC: _resume_from_idle+0xf1 TASKQ: sof_close_deferred_taskq stack pointer for thread ffffff01e8949c40: ffffff01e8949a80 [ ffffff01e8949a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c918, ffffff430ee8c908) taskq_thread_wait+0xbe(ffffff430ee8c8e8, ffffff430ee8c908, ffffff430ee8c918 , ffffff01e8949bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c8e8) thread_start+8() ffffff01e894fc40 fffffffffbc2eb80 0 0 60 ffffff430ee8c800 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e894fc40: ffffff01e894fa80 [ ffffff01e894fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c800, ffffff430ee8c7f0) taskq_thread_wait+0xbe(ffffff430ee8c7d0, ffffff430ee8c7f0, ffffff430ee8c800 , ffffff01e894fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c7d0) thread_start+8() ffffff01e8955c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c6e8 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e8955c40: ffffff01e8955a80 [ ffffff01e8955a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c6e8, ffffff430ee8c6d8) taskq_thread_wait+0xbe(ffffff430ee8c6b8, ffffff430ee8c6d8, ffffff430ee8c6e8 , ffffff01e8955bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c6b8) thread_start+8() ffffff01e895bc40 fffffffffbc2eb80 0 0 60 ffffff430ee8c5d0 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e895bc40: ffffff01e895ba80 [ ffffff01e895ba80 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait+0x70(ffffff430ee8c5d0, ffffff430ee8c5c0) taskq_thread_wait+0xbe(ffffff430ee8c5a0, ffffff430ee8c5c0, ffffff430ee8c5d0 , ffffff01e895bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c5a0) thread_start+8() ffffff01e8961c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c4b8 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e8961c40: ffffff01e8961a80 [ ffffff01e8961a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c4b8, ffffff430ee8c4a8) taskq_thread_wait+0xbe(ffffff430ee8c488, ffffff430ee8c4a8, ffffff430ee8c4b8 , ffffff01e8961bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c488) thread_start+8() ffffff01e8967c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c3a0 PC: _resume_from_idle+0xf1 TASKQ: pci_pci_nexus_enum_tq stack pointer for thread ffffff01e8967c40: ffffff01e8967a80 [ ffffff01e8967a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c3a0, ffffff430ee8c390) taskq_thread_wait+0xbe(ffffff430ee8c370, ffffff430ee8c390, ffffff430ee8c3a0 , ffffff01e8967bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c370) thread_start+8() ffffff01e8973c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c288 PC: _resume_from_idle+0xf1 TASKQ: ehci_nexus_enum_tq stack pointer for thread ffffff01e8973c40: ffffff01e8973a80 [ ffffff01e8973a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c288, ffffff430ee8c278) taskq_thread_wait+0xbe(ffffff430ee8c258, ffffff430ee8c278, ffffff430ee8c288 , ffffff01e8973bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c258) thread_start+8() ffffff01e858cc40 fffffffffbc2eb80 0 0 60 ffffff430ee8c170 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_0 stack pointer for thread ffffff01e858cc40: ffffff01e858ca80 [ ffffff01e858ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c170, ffffff430ee8c160) taskq_thread_wait+0xbe(ffffff430ee8c140, ffffff430ee8c160, ffffff430ee8c170 , ffffff01e858cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c140) thread_start+8() ffffff01e8586c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c170 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_0 stack pointer for thread ffffff01e8586c40: ffffff01e8586a80 [ ffffff01e8586a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c170, ffffff430ee8c160) taskq_thread_wait+0xbe(ffffff430ee8c140, ffffff430ee8c160, ffffff430ee8c170 , ffffff01e8586bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c140) thread_start+8() ffffff01e8580c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c170 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_0 stack pointer for thread ffffff01e8580c40: ffffff01e8580a80 [ ffffff01e8580a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c170, ffffff430ee8c160) taskq_thread_wait+0xbe(ffffff430ee8c140, ffffff430ee8c160, ffffff430ee8c170 , ffffff01e8580bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c140) thread_start+8() ffffff01e857ac40 fffffffffbc2eb80 0 0 60 ffffff430ee8c170 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_0 stack pointer for thread ffffff01e857ac40: ffffff01e857aa80 [ ffffff01e857aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c170, ffffff430ee8c160) taskq_thread_wait+0xbe(ffffff430ee8c140, ffffff430ee8c160, ffffff430ee8c170 , ffffff01e857abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c140) thread_start+8() ffffff01e8868c40 fffffffffbc2eb80 0 0 60 ffffff430ee8c058 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_81_pipehndl_tq_0 stack pointer for thread ffffff01e8868c40: 
ffffff01e8868a80 [ ffffff01e8868a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c058, ffffff430ee8c048) taskq_thread_wait+0xbe(ffffff430ee8c028, ffffff430ee8c048, ffffff430ee8c058 , ffffff01e8868bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c028) thread_start+8() ffffff01e826fc40 fffffffffbc2eb80 0 0 60 ffffff430ee8c058 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_81_pipehndl_tq_0 stack pointer for thread ffffff01e826fc40: ffffff01e826fa80 [ ffffff01e826fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430ee8c058, ffffff430ee8c048) taskq_thread_wait+0xbe(ffffff430ee8c028, ffffff430ee8c048, ffffff430ee8c058 , ffffff01e826fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430ee8c028) thread_start+8() ffffff01e886ec40 fffffffffbc2eb80 0 0 60 ffffff430f6f6e98 PC: _resume_from_idle+0xf1 TASKQ: ehci_nexus_enum_tq stack pointer for thread ffffff01e886ec40: ffffff01e886ea80 [ ffffff01e886ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6e98, ffffff430f6f6e88) taskq_thread_wait+0xbe(ffffff430f6f6e68, ffffff430f6f6e88, ffffff430f6f6e98 , ffffff01e886ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6e68) thread_start+8() ffffff01e82d4c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6d80 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_1 stack pointer for thread ffffff01e82d4c40: ffffff01e82d4a80 [ ffffff01e82d4a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6d80, ffffff430f6f6d70) taskq_thread_wait+0xbe(ffffff430f6f6d50, ffffff430f6f6d70, ffffff430f6f6d80 , ffffff01e82d4bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6d50) thread_start+8() ffffff01e82cec40 fffffffffbc2eb80 0 0 60 ffffff430f6f6d80 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_1 stack pointer for thread ffffff01e82cec40: ffffff01e82cea80 [ ffffff01e82cea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6d80, ffffff430f6f6d70) taskq_thread_wait+0xbe(ffffff430f6f6d50, ffffff430f6f6d70, ffffff430f6f6d80 , ffffff01e82cebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6d50) thread_start+8() ffffff01e82c8c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6d80 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_1 stack pointer for thread ffffff01e82c8c40: ffffff01e82c8a80 [ ffffff01e82c8a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6d80, ffffff430f6f6d70) taskq_thread_wait+0xbe(ffffff430f6f6d50, ffffff430f6f6d70, ffffff430f6f6d80 , ffffff01e82c8bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6d50) thread_start+8() ffffff01e8862c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6d80 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_0_pipehndl_tq_1 stack pointer for thread ffffff01e8862c40: ffffff01e8862a80 [ ffffff01e8862a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6d80, ffffff430f6f6d70) taskq_thread_wait+0xbe(ffffff430f6f6d50, ffffff430f6f6d70, ffffff430f6f6d80 , ffffff01e8862bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6d50) thread_start+8() ffffff01e82e0c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6c68 PC: _resume_from_idle+0xf1 TASKQ: USB_ehci_81_pipehndl_tq_1 stack pointer for thread ffffff01e82e0c40: ffffff01e82e0a80 [ ffffff01e82e0a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6c68, ffffff430f6f6c58) taskq_thread_wait+0xbe(ffffff430f6f6c38, ffffff430f6f6c58, ffffff430f6f6c68 , ffffff01e82e0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6c38) thread_start+8() ffffff01e82dac40 fffffffffbc2eb80 0 0 60 ffffff430f6f6c68 PC: _resume_from_idle+0xf1 
TASKQ: USB_ehci_81_pipehndl_tq_1 stack pointer for thread ffffff01e82dac40: ffffff01e82daa80 [ ffffff01e82daa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6c68, ffffff430f6f6c58) taskq_thread_wait+0xbe(ffffff430f6f6c38, ffffff430f6f6c58, ffffff430f6f6c68 , ffffff01e82dabc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6c38) thread_start+8() ffffff01e82e6c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6b50 PC: _resume_from_idle+0xf1 TASKQ: ibmf_saa_event_taskq stack pointer for thread ffffff01e82e6c40: ffffff01e82e6a80 [ ffffff01e82e6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6b50, ffffff430f6f6b40) taskq_thread_wait+0xbe(ffffff430f6f6b20, ffffff430f6f6b40, ffffff430f6f6b50 , ffffff01e82e6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6b20) thread_start+8() ffffff01e834cc40 fffffffffbc2eb80 0 0 60 ffffff430f6f6a38 PC: _resume_from_idle+0xf1 TASKQ: ibmf_taskq stack pointer for thread ffffff01e834cc40: ffffff01e834ca80 [ ffffff01e834ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6a38, ffffff430f6f6a28) taskq_thread_wait+0xbe(ffffff430f6f6a08, ffffff430f6f6a28, ffffff430f6f6a38 , ffffff01e834cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6a08) thread_start+8() ffffff01e8358c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6920 PC: _resume_from_idle+0xf1 TASKQ: ib_nexus_enum_tq stack pointer for thread ffffff01e8358c40: ffffff01e8358a80 [ ffffff01e8358a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6920, ffffff430f6f6910) taskq_thread_wait+0xbe(ffffff430f6f68f0, ffffff430f6f6910, ffffff430f6f6920 , ffffff01e8358bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f68f0) thread_start+8() ffffff01e835ec40 fffffffffbc2eb80 0 0 60 ffffff430f6f6808 PC: _resume_from_idle+0xf1 TASKQ: ib_ibnex_enum_taskq stack pointer for thread ffffff01e835ec40: ffffff01e835ea80 [ ffffff01e835ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6808, ffffff430f6f67f8) taskq_thread_wait+0xbe(ffffff430f6f67d8, ffffff430f6f67f8, ffffff430f6f6808 , ffffff01e835ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f67d8) thread_start+8() ffffff01e8d55c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7a40 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e8d55c40: ffffff01e8d55a80 [ ffffff01e8d55a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7a40, ffffff430f8d7a30) taskq_thread_wait+0xbe(ffffff430f8d7a10, ffffff430f8d7a30, ffffff430f8d7a40 , ffffff01e8d55bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7a10) thread_start+8() ffffff01e8d5bc40 fffffffffbc2eb80 0 0 60 ffffff430f8d7928 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e8d5bc40: ffffff01e8d5ba80 [ ffffff01e8d5ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7928, ffffff430f8d7918) taskq_thread_wait+0xbe(ffffff430f8d78f8, ffffff430f8d7918, ffffff430f8d7928 , ffffff01e8d5bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d78f8) thread_start+8() ffffff01e8d61c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7810 PC: _resume_from_idle+0xf1 TASKQ: npe_nexus_enum_tq stack pointer for thread ffffff01e8d61c40: ffffff01e8d61a80 [ ffffff01e8d61a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7810, ffffff430f8d7800) taskq_thread_wait+0xbe(ffffff430f8d77e0, ffffff430f8d7800, ffffff430f8d7810 , ffffff01e8d61bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d77e0) thread_start+8() ffffff01e8d67c40 fffffffffbc2eb80 0 0 60 
ffffff430f8d76f8 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e8d67c40: ffffff01e8d67a80 [ ffffff01e8d67a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d76f8, ffffff430f8d76e8) taskq_thread_wait+0xbe(ffffff430f8d76c8, ffffff430f8d76e8, ffffff430f8d76f8 , ffffff01e8d67bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d76c8) thread_start+8() ffffff01e8d94c40 fffffffffbc2eb80 0 0 60 ffffff430f8d74c8 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e8d94c40: ffffff01e8d94a80 [ ffffff01e8d94a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d74c8, ffffff430f8d74b8) taskq_thread_wait+0xbe(ffffff430f8d7498, ffffff430f8d74b8, ffffff430f8d74c8 , ffffff01e8d94bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7498) thread_start+8() ffffff01e8d9ac40 fffffffffbc2eb80 0 0 60 ffffff430f8d73b0 PC: _resume_from_idle+0xf1 TASKQ: pcieb_nexus_enum_tq stack pointer for thread ffffff01e8d9ac40: ffffff01e8d9aa80 [ ffffff01e8d9aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d73b0, ffffff430f8d73a0) taskq_thread_wait+0xbe(ffffff430f8d7380, ffffff430f8d73a0, ffffff430f8d73b0 , ffffff01e8d9abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7380) thread_start+8() ffffff01ea599c40 fffffffffbc2eb80 0 0 60 ffffff432cdd7030 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01ea599c40: ffffff01ea599b10 [ ffffff01ea599b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7030, ffffff432cdd7028) stmf_worker_task+0x505(ffffff432cdd7000) thread_start+8() ffffff01ea59fc40 fffffffffbc2eb80 0 0 60 ffffff432cdd7080 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01ea59fc40: ffffff01ea59fb10 [ ffffff01ea59fb10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7080, ffffff432cdd7078) stmf_worker_task+0x505(ffffff432cdd7050) thread_start+8() ffffff01ea5a5c40 fffffffffbc2eb80 0 0 60 ffffff432cdd70d0 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01ea5a5c40: ffffff01ea5a5b10 [ ffffff01ea5a5b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd70d0, ffffff432cdd70c8) stmf_worker_task+0x505(ffffff432cdd70a0) thread_start+8() ffffff01ea5abc40 fffffffffbc2eb80 0 0 60 ffffff432cdd7120 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01ea5abc40: ffffff01ea5abb10 [ ffffff01ea5abb10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7120, ffffff432cdd7118) stmf_worker_task+0x505(ffffff432cdd70f0) thread_start+8() ffffff01eb75ec40 fffffffffbc2eb80 0 0 60 ffffff432cdd7170 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01eb75ec40: ffffff01eb75eb10 [ ffffff01eb75eb10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7170, ffffff432cdd7168) stmf_worker_task+0x505(ffffff432cdd7140) thread_start+8() ffffff01e85a4c40 fffffffffbc2eb80 0 0 60 ffffff432cdd71c0 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e85a4c40: ffffff01e85a4b10 [ ffffff01e85a4b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd71c0, ffffff432cdd71b8) stmf_worker_task+0x505(ffffff432cdd7190) thread_start+8() ffffff01e9e99c40 fffffffffbc2eb80 0 0 60 ffffff432cdd7210 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e9e99c40: ffffff01e9e99b10 [ ffffff01e9e99b10 
_resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7210, ffffff432cdd7208) stmf_worker_task+0x505(ffffff432cdd71e0) thread_start+8() ffffff01e8afec40 fffffffffbc2eb80 0 0 60 ffffff432cdd7260 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e8afec40: ffffff01e8afeb10 [ ffffff01e8afeb10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7260, ffffff432cdd7258) stmf_worker_task+0x505(ffffff432cdd7230) thread_start+8() ffffff01e84f9c40 fffffffffbc2eb80 0 0 60 ffffff432cdd72b0 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e84f9c40: ffffff01e84f9b10 [ ffffff01e84f9b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd72b0, ffffff432cdd72a8) stmf_worker_task+0x505(ffffff432cdd7280) thread_start+8() ffffff01e850bc40 fffffffffbc2eb80 0 0 60 ffffff432cdd7300 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e850bc40: ffffff01e850bb10 [ ffffff01e850bb10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7300, ffffff432cdd72f8) stmf_worker_task+0x505(ffffff432cdd72d0) thread_start+8() ffffff01eb6a4c40 fffffffffbc2eb80 0 0 60 ffffff432cdd7350 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01eb6a4c40: ffffff01eb6a4b10 [ ffffff01eb6a4b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7350, ffffff432cdd7348) stmf_worker_task+0x505(ffffff432cdd7320) thread_start+8() ffffff01e8dc8c40 fffffffffbc2eb80 0 0 60 ffffff432cdd73a0 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e8dc8c40: ffffff01e8dc8b10 [ ffffff01e8dc8b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd73a0, ffffff432cdd7398) stmf_worker_task+0x505(ffffff432cdd7370) thread_start+8() ffffff01e8526c40 fffffffffbc2eb80 0 0 60 ffffff432cdd73f0 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e8526c40: ffffff01e8526b10 [ ffffff01e8526b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd73f0, ffffff432cdd73e8) stmf_worker_task+0x505(ffffff432cdd73c0) thread_start+8() ffffff01e9b95c40 fffffffffbc2eb80 0 0 60 ffffff432cdd7440 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e9b95c40: ffffff01e9b95b10 [ ffffff01e9b95b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7440, ffffff432cdd7438) stmf_worker_task+0x505(ffffff432cdd7410) thread_start+8() ffffff01eb93cc40 fffffffffbc2eb80 0 0 60 ffffff432cdd7490 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01eb93cc40: ffffff01eb93cb10 [ ffffff01eb93cb10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd7490, ffffff432cdd7488) stmf_worker_task+0x505(ffffff432cdd7460) thread_start+8() ffffff01e9c8dc40 fffffffffbc2eb80 0 0 60 ffffff432cdd74e0 PC: _resume_from_idle+0xf1 THREAD: stmf_worker_task() stack pointer for thread ffffff01e9c8dc40: ffffff01e9c8db10 [ ffffff01e9c8db10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432cdd74e0, ffffff432cdd74d8) stmf_worker_task+0x505(ffffff432cdd74b0) thread_start+8() ffffff01e8ec1c40 fffffffffbc2eb80 0 0 60 ffffffffc00d4e08 PC: _resume_from_idle+0xf1 TASKQ: STMF_SVC_TASKQ stack pointer for thread ffffff01e8ec1c40: ffffff01e8ec19b0 [ ffffff01e8ec19b0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffffffc00d4e08, ffffffffc00d4e00, 1312d00, 989680 , 0) cv_reltimedwait+0x51(ffffffffc00d4e08, 
ffffffffc00d4e00, 2, 4) stmf_svc_timeout+0x11a(ffffff01e8ec1b10) stmf_svc+0x1b8(0) taskq_thread+0x2d0(ffffff430f8d7150) thread_start+8() ffffff01e8fd5c40 fffffffffbc2eb80 0 0 60 ffffffffc00f2fc2 PC: _resume_from_idle+0xf1 THREAD: ibcm_process_tlist() stack pointer for thread ffffff01e8fd5c40: ffffff01e8fd5b50 [ ffffff01e8fd5b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffffffc00f2fc2, ffffffffc00f32e8) ibcm_process_tlist+0x1e1() thread_start+8() ffffff01e912dc40 fffffffffbc2eb80 0 0 60 ffffff430fc25ea8 PC: _resume_from_idle+0xf1 TASKQ: hermon_nexus_enum_tq stack pointer for thread ffffff01e912dc40: ffffff01e912da80 [ ffffff01e912da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25ea8, ffffff430fc25e98) taskq_thread_wait+0xbe(ffffff430fc25e78, ffffff430fc25e98, ffffff430fc25ea8 , ffffff01e912dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25e78) thread_start+8() ffffff01e8f93c40 fffffffffbc2eb80 0 0 97 ffffffffc001b1a8 PC: _resume_from_idle+0xf1 THREAD: ibtl_async_thread() stack pointer for thread ffffff01e8f93c40: ffffff01e8f93b50 [ ffffff01e8f93b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffffffc001b1a8, ffffffffc001b1a0) ibtl_async_thread+0x1e1() thread_start+8() ffffff01e8f99c40 fffffffffbc2eb80 0 0 97 ffffffffc001b1a8 PC: _resume_from_idle+0xf1 THREAD: ibtl_async_thread() stack pointer for thread ffffff01e8f99c40: ffffff01e8f99b50 [ ffffff01e8f99b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffffffc001b1a8, ffffffffc001b1a0) ibtl_async_thread+0x1e1() thread_start+8() ffffff01e8f9fc40 fffffffffbc2eb80 0 0 97 ffffffffc001b1a8 PC: _resume_from_idle+0xf1 THREAD: ibtl_async_thread() stack pointer for thread ffffff01e8f9fc40: ffffff01e8f9fb50 [ ffffff01e8f9fb50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffffffc001b1a8, ffffffffc001b1a0) ibtl_async_thread+0x1e1() thread_start+8() ffffff01e8fa5c40 fffffffffbc2eb80 0 0 97 ffffffffc001b1a8 PC: _resume_from_idle+0xf1 THREAD: ibtl_async_thread() stack pointer for thread ffffff01e8fa5c40: ffffff01e8fa5b50 [ ffffff01e8fa5b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffffffc001b1a8, ffffffffc001b1a0) ibtl_async_thread+0x1e1() thread_start+8() ffffff01e8fabc40 fffffffffbc2eb80 0 0 60 ffffff4310550948 PC: _resume_from_idle+0xf1 TASKQ: hermon_hermon_taskq00000000 stack pointer for thread ffffff01e8fabc40: ffffff01e8faba80 [ ffffff01e8faba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550948, ffffff4310550938) taskq_thread_wait+0xbe(ffffff4310550918, ffffff4310550938, ffffff4310550948 , ffffff01e8fabbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550918) thread_start+8() ffffff01e8fb1c40 fffffffffbc2eb80 0 0 60 ffffff4310550830 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010001 stack pointer for thread ffffff01e8fb1c40: ffffff01e8fb1a80 [ ffffff01e8fb1a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550830, ffffff4310550820) taskq_thread_wait+0xbe(ffffff4310550800, ffffff4310550820, ffffff4310550830 , ffffff01e8fb1bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550800) thread_start+8() ffffff01e9eacc40 fffffffffbc2eb80 0 0 60 ffffff4310550718 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00010001 stack pointer for thread ffffff01e9eacc40: ffffff01e9eaca80 [ ffffff01e9eaca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550718, ffffff4310550708) taskq_thread_wait+0xbe(ffffff43105506e8, ffffff4310550708, ffffff4310550718 , ffffff01e9eacbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43105506e8) 
thread_start+8() ffffff01e90f9c40 fffffffffbc2eb80 0 0 60 ffffff4310550600 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010001 stack pointer for thread ffffff01e90f9c40: ffffff01e90f9a80 [ ffffff01e90f9a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550600, ffffff43105505f0) taskq_thread_wait+0xbe(ffffff43105505d0, ffffff43105505f0, ffffff4310550600 , ffffff01e90f9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43105505d0) thread_start+8() ffffff01e9e2fc40 fffffffffbc2eb80 0 0 60 ffffff43105504e8 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00010001 stack pointer for thread ffffff01e9e2fc40: ffffff01e9e2fa80 [ ffffff01e9e2fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43105504e8, ffffff43105504d8) taskq_thread_wait+0xbe(ffffff43105504b8, ffffff43105504d8, ffffff43105504e8 , ffffff01e9e2fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43105504b8) thread_start+8() ffffff01e901dc40 fffffffffbc2eb80 0 0 60 ffffff43105503d0 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010004 stack pointer for thread ffffff01e901dc40: ffffff01e901da80 [ ffffff01e901da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43105503d0, ffffff43105503c0) taskq_thread_wait+0xbe(ffffff43105503a0, ffffff43105503c0, ffffff43105503d0 , ffffff01e901dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43105503a0) thread_start+8() ffffff01e8e51c40 fffffffffbc2eb80 0 0 60 ffffff43105502b8 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00010004 stack pointer for thread ffffff01e8e51c40: ffffff01e8e51a80 [ ffffff01e8e51a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43105502b8, ffffff43105502a8) taskq_thread_wait+0xbe(ffffff4310550288, ffffff43105502a8, ffffff43105502b8 , ffffff01e8e51bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550288) thread_start+8() ffffff01e8eb7c40 fffffffffbc2eb80 0 0 60 ffffff43105501a0 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010004 stack pointer for thread ffffff01e8eb7c40: ffffff01e8eb7a80 [ ffffff01e8eb7a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43105501a0, ffffff4310550190) taskq_thread_wait+0xbe(ffffff4310550170, ffffff4310550190, ffffff43105501a0 , ffffff01e8eb7bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550170) thread_start+8() ffffff01e89e5c40 fffffffffbc2eb80 0 0 60 ffffff4310550088 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00010004 stack pointer for thread ffffff01e89e5c40: ffffff01e89e5a80 [ ffffff01e89e5a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550088, ffffff4310550078) taskq_thread_wait+0xbe(ffffff4310550058, ffffff4310550078, ffffff4310550088 , ffffff01e89e5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550058) thread_start+8() ffffff01e8ecdc40 fffffffffbc2eb80 0 0 60 ffffff43181deec8 PC: _resume_from_idle+0xf1 TASKQ: hermon_nexus_enum_tq stack pointer for thread ffffff01e8ecdc40: ffffff01e8ecda80 [ ffffff01e8ecda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181deec8, ffffff43181deeb8) taskq_thread_wait+0xbe(ffffff43181dee98, ffffff43181deeb8, ffffff43181deec8 , ffffff01e8ecdbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181dee98) thread_start+8() ffffff01e9cedc40 fffffffffbc2eb80 0 0 60 ffffff43181dedb0 PC: _resume_from_idle+0xf1 TASKQ: hermon_hermon_taskq00000001 stack pointer for thread ffffff01e9cedc40: ffffff01e9ceda80 [ ffffff01e9ceda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181dedb0, ffffff43181deda0) taskq_thread_wait+0xbe(ffffff43181ded80, ffffff43181deda0, ffffff43181dedb0 , ffffff01e9cedbc0, 
ffffffffffffffff) taskq_thread+0x37c(ffffff43181ded80) thread_start+8() ffffff01e8d6dc40 fffffffffbc2eb80 0 0 60 ffffff43181dec98 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010001 stack pointer for thread ffffff01e8d6dc40: ffffff01e8d6da80 [ ffffff01e8d6da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181dec98, ffffff43181dec88) taskq_thread_wait+0xbe(ffffff43181dec68, ffffff43181dec88, ffffff43181dec98 , ffffff01e8d6dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181dec68) thread_start+8() ffffff01e8cb6c40 fffffffffbc2eb80 0 0 60 ffffff43181deb80 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00010001 stack pointer for thread ffffff01e8cb6c40: ffffff01e8cb6a80 [ ffffff01e8cb6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181deb80, ffffff43181deb70) taskq_thread_wait+0xbe(ffffff43181deb50, ffffff43181deb70, ffffff43181deb80 , ffffff01e8cb6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181deb50) thread_start+8() ffffff01e8d3fc40 fffffffffbc2eb80 0 0 60 ffffff43181dea68 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010001 stack pointer for thread ffffff01e8d3fc40: ffffff01e8d3fa80 [ ffffff01e8d3fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181dea68, ffffff43181dea58) taskq_thread_wait+0xbe(ffffff43181dea38, ffffff43181dea58, ffffff43181dea68 , ffffff01e8d3fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181dea38) thread_start+8() ffffff01e9083c40 fffffffffbc2eb80 0 0 60 ffffff43181de950 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00010001 stack pointer for thread ffffff01e9083c40: ffffff01e9083a80 [ ffffff01e9083a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de950, ffffff43181de940) taskq_thread_wait+0xbe(ffffff43181de920, ffffff43181de940, ffffff43181de950 , ffffff01e9083bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de920) thread_start+8() ffffff01e9b9dc40 fffffffffbc2eb80 0 0 60 ffffff43181de838 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010004 stack pointer for thread ffffff01e9b9dc40: ffffff01e9b9da80 [ ffffff01e9b9da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de838, ffffff43181de828) taskq_thread_wait+0xbe(ffffff43181de808, ffffff43181de828, ffffff43181de838 , ffffff01e9b9dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de808) thread_start+8() ffffff01e8f0fc40 fffffffffbc2eb80 0 0 60 ffffff43181de720 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00010004 stack pointer for thread ffffff01e8f0fc40: ffffff01e8f0fa80 [ ffffff01e8f0fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de720, ffffff43181de710) taskq_thread_wait+0xbe(ffffff43181de6f0, ffffff43181de710, ffffff43181de720 , ffffff01e8f0fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de6f0) thread_start+8() ffffff01e8f75c40 fffffffffbc2eb80 0 0 60 ffffff43181de608 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010004 stack pointer for thread ffffff01e8f75c40: ffffff01e8f75a80 [ ffffff01e8f75a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de608, ffffff43181de5f8) taskq_thread_wait+0xbe(ffffff43181de5d8, ffffff43181de5f8, ffffff43181de608 , ffffff01e8f75bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de5d8) thread_start+8() ffffff01e9c15c40 fffffffffbc2eb80 0 0 60 ffffff43181de4f0 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00010004 stack pointer for thread ffffff01e9c15c40: ffffff01e9c15a80 [ ffffff01e9c15a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de4f0, ffffff43181de4e0) taskq_thread_wait+0xbe(ffffff43181de4c0, 
ffffff43181de4e0, ffffff43181de4f0 , ffffff01e9c15bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de4c0) thread_start+8() ffffff01e836ac40 fffffffffbc2eb80 0 0 60 fffffffffbcca8c0 PC: _resume_from_idle+0xf1 THREAD: task_commit() stack pointer for thread ffffff01e836ac40: ffffff01e836ab60 [ ffffff01e836ab60 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcca8c0, fffffffffbcca8b8) task_commit+0xd9() thread_start+8() ffffff01e8370c40 fffffffffbc2eb80 0 0 60 ffffff42a9da0dc0 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01e8370c40: ffffff01e8370a90 [ ffffff01e8370a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42a9da0dc0, ffffff42a9da0db8) evch_delivery_hold+0x70(ffffff42a9da0d98, ffffff01e8370bc0) evch_delivery_thr+0x29e(ffffff42a9da0d98) thread_start+8() ffffff01e8376c40 fffffffffbc2eb80 0 0 60 ffffff42a9da0c70 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01e8376c40: ffffff01e8376a90 [ ffffff01e8376a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42a9da0c70, ffffff42a9da0c68) evch_delivery_hold+0x70(ffffff42a9da0c48, ffffff01e8376bc0) evch_delivery_thr+0x29e(ffffff42a9da0c48) thread_start+8() ffffff01e837cc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e837cc40: ffffff01e837cbb0 [ ffffff01e837cbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(0) thread_start+8() ffffff01e83d0c40 fffffffffbc2eb80 0 0 99 ffffff430f449e10 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e83d0c40: ffffff01e83d0b40 [ ffffff01e83d0b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449e10, ffffff430f449dd0) squeue_worker+0x104(ffffff430f449dc0) thread_start+8() ffffff01e83d6c40 fffffffffbc2eb80 0 0 99 ffffff430f449e12 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e83d6c40: ffffff01e83d6b00 [ ffffff01e83d6b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449e12, ffffff430f449dd0) squeue_polling_thread+0xa9(ffffff430f449dc0) thread_start+8() ffffff01e84a5c40 fffffffffbc2eb80 0 0 60 ffffff430f6f66f0 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e84a5c40: ffffff01e84a5a80 [ ffffff01e84a5a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f66f0, ffffff430f6f66e0) taskq_thread_wait+0xbe(ffffff430f6f66c0, ffffff430f6f66e0, ffffff430f6f66f0 , ffffff01e84a5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f66c0) thread_start+8() ffffff01e8388c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8388c40: ffffff01e8388810 0xb() ffffff01e83cac40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e83cac40: ffffff01e83cabb0 [ ffffff01e83cabb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(1) thread_start+8() ffffff01e848ec40 fffffffffbc2eb80 0 0 99 ffffff430f449d50 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e848ec40: ffffff01e848eb40 [ ffffff01e848eb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449d50, ffffff430f449d10) squeue_worker+0x104(ffffff430f449d00) thread_start+8() ffffff01e8494c40 fffffffffbc2eb80 0 0 99 ffffff430f449d52 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8494c40: ffffff01e8494b00 [ ffffff01e8494b00 
_resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449d52, ffffff430f449d10) squeue_polling_thread+0xa9(ffffff430f449d00) thread_start+8() ffffff01e8531c40 fffffffffbc2eb80 0 0 60 ffffff430f6f65d8 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8531c40: ffffff01e8531a80 [ ffffff01e8531a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f65d8, ffffff430f6f65c8) taskq_thread_wait+0xbe(ffffff430f6f65a8, ffffff430f6f65c8, ffffff430f6f65d8 , ffffff01e8531bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f65a8) thread_start+8() ffffff01e8424c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8424c40: ffffff01e8424840 0xffffff430e6fc080() ffffff01e8466c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8466c40: ffffff01e8466bb0 [ ffffff01e8466bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(2) thread_start+8() ffffff01e851ac40 fffffffffbc2eb80 0 0 99 ffffff430f449c90 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e851ac40: ffffff01e851ab40 [ ffffff01e851ab40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449c90, ffffff430f449c50) squeue_worker+0x104(ffffff430f449c40) thread_start+8() ffffff01e8520c40 fffffffffbc2eb80 0 0 99 ffffff430f449c92 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8520c40: ffffff01e8520b00 [ ffffff01e8520b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449c92, ffffff430f449c50) squeue_polling_thread+0xa9(ffffff430f449c40) thread_start+8() ffffff01e8472c40 fffffffffbc2eb80 0 0 60 ffffff430f6f64c0 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8472c40: ffffff01e8472a80 [ ffffff01e8472a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f64c0, ffffff430f6f64b0) taskq_thread_wait+0xbe(ffffff430f6f6490, ffffff430f6f64b0, ffffff430f6f64c0 , ffffff01e8472bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6490) thread_start+8() ffffff01e84b1c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e84b1c40: ffffff01e84b1840 0xffffff430f83ae80() ffffff01e84f3c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e84f3c40: ffffff01e84f3bb0 [ ffffff01e84f3bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(3) thread_start+8() ffffff01e846cc40 fffffffffbc2eb80 0 0 99 ffffff430f449bd0 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e846cc40: ffffff01e846cb40 [ ffffff01e846cb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449bd0, ffffff430f449b90) squeue_worker+0x104(ffffff430f449b80) thread_start+8() ffffff01e8382c40 fffffffffbc2eb80 0 0 99 ffffff430f449bd2 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8382c40: ffffff01e8382b00 [ ffffff01e8382b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449bd2, ffffff430f449b90) squeue_polling_thread+0xa9(ffffff430f449b80) thread_start+8() ffffff01e8a9ec40 fffffffffbc2eb80 0 0 60 ffffff430f6f63a8 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8a9ec40: ffffff01e8a9ea80 [ ffffff01e8a9ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f63a8, ffffff430f6f6398) taskq_thread_wait+0xbe(ffffff430f6f6378, 
ffffff430f6f6398, ffffff430f6f63a8 , ffffff01e8a9ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6378) thread_start+8() ffffff01e853dc40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e853dc40: ffffff01e853d830 0xffffff430f83ac80() ffffff01e897fc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e897fc40: ffffff01e897fbb0 [ ffffff01e897fbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(4) thread_start+8() ffffff01e8a87c40 fffffffffbc2eb80 0 0 99 ffffff430f449b10 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e8a87c40: ffffff01e8a87b40 [ ffffff01e8a87b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449b10, ffffff430f449ad0) squeue_worker+0x104(ffffff430f449ac0) thread_start+8() ffffff01e8a8dc40 fffffffffbc2eb80 0 0 99 ffffff430f449b12 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8a8dc40: ffffff01e8a8db00 [ ffffff01e8a8db00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449b12, ffffff430f449ad0) squeue_polling_thread+0xa9(ffffff430f449ac0) thread_start+8() ffffff01e8b09c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6290 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8b09c40: ffffff01e8b09a80 [ ffffff01e8b09a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6290, ffffff430f6f6280) taskq_thread_wait+0xbe(ffffff430f6f6260, ffffff430f6f6280, ffffff430f6f6290 , ffffff01e8b09bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6260) thread_start+8() ffffff01e840ec40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e840ec40: ffffff01e840e840 0xffffff430f83aa00() ffffff01e8a81c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8a81c40: ffffff01e8a81bb0 [ ffffff01e8a81bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(5) thread_start+8() ffffff01e8af2c40 fffffffffbc2eb80 0 0 99 ffffff430f449a50 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e8af2c40: ffffff01e8af2b40 [ ffffff01e8af2b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449a50, ffffff430f449a10) squeue_worker+0x104(ffffff430f449a00) thread_start+8() ffffff01e8af8c40 fffffffffbc2eb80 0 0 99 ffffff430f449a52 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8af8c40: ffffff01e8af8b00 [ ffffff01e8af8b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449a52, ffffff430f449a10) squeue_polling_thread+0xa9(ffffff430f449a00) thread_start+8() ffffff01e8b27c40 fffffffffbc2eb80 0 0 60 ffffff430f6f6178 PC: _resume_from_idle+0xf1 TASKQ: acpinex_nexus_enum_tq stack pointer for thread ffffff01e8b27c40: ffffff01e8b27a80 [ ffffff01e8b27a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6178, ffffff430f6f6168) taskq_thread_wait+0xbe(ffffff430f6f6148, ffffff430f6f6168, ffffff430f6f6178 , ffffff01e8b27bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6148) thread_start+8() ffffff01e8b7ac40 fffffffffbc2eb80 0 0 60 ffffff430f6f6060 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8b7ac40: ffffff01e8b7aa80 [ ffffff01e8b7aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f6f6060, ffffff430f6f6050) taskq_thread_wait+0xbe(ffffff430f6f6030, ffffff430f6f6050, 
ffffff430f6f6060 , ffffff01e8b7abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f6f6030) thread_start+8() ffffff01e8aaac40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8aaac40: ffffff01e8aaa820 0xffffff430f83a880() ffffff01e8aecc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8aecc40: ffffff01e8aecbb0 [ ffffff01e8aecbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(6) thread_start+8() ffffff01e8b63c40 fffffffffbc2eb80 0 0 99 ffffff430f449990 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e8b63c40: ffffff01e8b63b40 [ ffffff01e8b63b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449990, ffffff430f449950) squeue_worker+0x104(ffffff430f449940) thread_start+8() ffffff01e8b69c40 fffffffffbc2eb80 0 0 99 ffffff430f449992 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8b69c40: ffffff01e8b69b00 [ ffffff01e8b69b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449992, ffffff430f449950) squeue_polling_thread+0xa9(ffffff430f449940) thread_start+8() ffffff01e8be5c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7ea0 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8be5c40: ffffff01e8be5a80 [ ffffff01e8be5a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7ea0, ffffff430f8d7e90) taskq_thread_wait+0xbe(ffffff430f8d7e70, ffffff430f8d7e90, ffffff430f8d7ea0 , ffffff01e8be5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7e70) thread_start+8() ffffff01e8b15c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8b15c40: ffffff01e8b15710 0xffffff01e8b157e0() av_set_softint_pending+0x17(1, ffffff42e231c258) siron+0x16() kpreempt+0x1b1(1) sys_rtt_common+0x1bf(ffffff01e8b157e0) _sys_rtt_ints_disabled+8() wrmsr+0xd() tsc_gethrtimeunscaled_delta+0x3c() gethrtime_unscaled+0xa() apix_do_softint_prolog+0x59(fffffffff78ff3e4, ffffff01e8b15990, ffffff01e8b4bc40, ffffff42e1ada380) 0xffffff430f76f000() acpi_cpu_cstate+0x11b(ffffff430f880a38) cpu_acpi_idle+0x8d() cpu_idle_adaptive+0x13() idle+0xa7() thread_start+8() ffffff01e8b5dc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8b5dc40: ffffff01e8b5dbb0 [ ffffff01e8b5dbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(7) thread_start+8() ffffff01e8bcec40 fffffffffbc2eb80 0 0 99 ffffff430f4498d0 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e8bcec40: ffffff01e8bceb40 [ ffffff01e8bceb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f4498d0, ffffff430f449890) squeue_worker+0x104(ffffff430f449880) thread_start+8() ffffff01e8bd4c40 fffffffffbc2eb80 0 0 99 ffffff430f4498d2 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8bd4c40: ffffff01e8bd4b00 [ ffffff01e8bd4b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f4498d2, ffffff430f449890) squeue_polling_thread+0xa9(ffffff430f449880) thread_start+8() ffffff01e8c50c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7d88 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8c50c40: ffffff01e8c50a80 [ ffffff01e8c50a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7d88, ffffff430f8d7d78) taskq_thread_wait+0xbe(ffffff430f8d7d58, ffffff430f8d7d78, ffffff430f8d7d88 , 
ffffff01e8c50bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7d58) thread_start+8() ffffff01e8b86c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8b86c40: ffffff01e8b86660 1() ffffff01e8bc8c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8bc8c40: ffffff01e8bc8bb0 [ ffffff01e8bc8bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(8) thread_start+8() ffffff01e8c39c40 fffffffffbc2eb80 0 0 99 ffffff430f449810 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e8c39c40: ffffff01e8c39b40 [ ffffff01e8c39b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449810, ffffff430f4497d0) squeue_worker+0x104(ffffff430f4497c0) thread_start+8() ffffff01e8c3fc40 fffffffffbc2eb80 0 0 99 ffffff430f449812 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8c3fc40: ffffff01e8c3fb00 [ ffffff01e8c3fb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449812, ffffff430f4497d0) squeue_polling_thread+0xa9(ffffff430f4497c0) thread_start+8() ffffff01e8cd3c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7b58 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8cd3c40: ffffff01e8cd3a80 [ ffffff01e8cd3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7b58, ffffff430f8d7b48) taskq_thread_wait+0xbe(ffffff430f8d7b28, ffffff430f8d7b48, ffffff430f8d7b58 , ffffff01e8cd3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7b28) thread_start+8() ffffff01e8bf1c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8bf1c40: ffffff01e8bf1840 0xffffff430f83a200() ffffff01e8c33c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8c33c40: ffffff01e8c33bb0 [ ffffff01e8c33bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(9) thread_start+8() ffffff01e8cbcc40 fffffffffbc2eb80 0 0 99 ffffff430f449750 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e8cbcc40: ffffff01e8cbcb40 [ ffffff01e8cbcb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449750, ffffff430f449710) squeue_worker+0x104(ffffff430f449700) thread_start+8() ffffff01e8cc2c40 fffffffffbc2eb80 0 0 99 ffffff430f449752 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8cc2c40: ffffff01e8cc2b00 [ ffffff01e8cc2b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449752, ffffff430f449710) squeue_polling_thread+0xa9(ffffff430f449700) thread_start+8() ffffff01e8dc2c40 fffffffffbc2eb80 0 0 60 ffffff430f8d7298 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e8dc2c40: ffffff01e8dc2a80 [ ffffff01e8dc2a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7298, ffffff430f8d7288) taskq_thread_wait+0xbe(ffffff430f8d7268, ffffff430f8d7288, ffffff430f8d7298 , ffffff01e8dc2bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7268) thread_start+8() ffffff01e8c5cc40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8c5cc40: ffffff01e8c5c740 apix_add_pending_hardint+0x17(f1) apix_do_interrupt+0x21d(ffffff01e8c5c7e0, 1c00) _sys_rtt_ints_disabled+8() wrmsr+0xd() do_splx+0x65(fffffffff7903539) 0xffffff01e8c92c40() apix_hilevel_intr_prolog+0x3e(fffffffffb80094a, ffffff01e8c5ca50, ffffff01e8c5ca50, 2) 
acpi_cpu_cstate+0x11b(ffffff430f880498) cpu_acpi_idle+0x8d() cpu_idle_adaptive+0x13() idle+0xa7() thread_start+8() ffffff01e8c9ec40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8c9ec40: ffffff01e8c9ebb0 [ ffffff01e8c9ebb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(a) thread_start+8() ffffff01e8da0c40 fffffffffbc2eb80 0 0 99 ffffff430f449690 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e8da0c40: ffffff01e8da0b40 [ ffffff01e8da0b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449690, ffffff430f449650) squeue_worker+0x104(ffffff430f449640) thread_start+8() ffffff01e8db1c40 fffffffffbc2eb80 0 0 99 ffffff430f449692 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e8db1c40: ffffff01e8db1b00 [ ffffff01e8db1b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449692, ffffff430f449650) squeue_polling_thread+0xa9(ffffff430f449640) thread_start+8() ffffff01e906bc40 fffffffffbc2eb80 0 0 60 ffffff430f8d7068 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e906bc40: ffffff01e906ba80 [ ffffff01e906ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f8d7068, ffffff430f8d7058) taskq_thread_wait+0xbe(ffffff430f8d7038, ffffff430f8d7058, ffffff430f8d7068 , ffffff01e906bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430f8d7038) thread_start+8() ffffff01e8cdfc40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8cdfc40: ffffff01e8cdf950 apix_intr_exit+0x24(ffffff01e8cdfa50, 0) 0xffffff430f8fea80() apix_do_interrupt+0xfe(ffffff01e8cdfa50, 2) _interrupt+0xba() acpi_cpu_cstate+0x11b(ffffff430f948980) cpu_acpi_idle+0x8d() cpu_idle_adaptive+0x13() idle+0xa7() thread_start+8() ffffff01e8d21c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8d21c40: ffffff01e8d21bb0 [ ffffff01e8d21bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(b) thread_start+8() ffffff01e9054c40 fffffffffbc2eb80 0 0 99 ffffff430f4495d0 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e9054c40: ffffff01e9054b40 [ ffffff01e9054b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f4495d0, ffffff430f449590) squeue_worker+0x104(ffffff430f449580) thread_start+8() ffffff01e905ac40 fffffffffbc2eb80 0 0 99 ffffff430f4495d2 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e905ac40: ffffff01e905ab00 [ ffffff01e905ab00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f4495d2, ffffff430f449590) squeue_polling_thread+0xa9(ffffff430f449580) thread_start+8() ffffff01e914ac40 fffffffffbc2eb80 0 0 60 ffffff430fc25d90 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e914ac40: ffffff01e914aa80 [ ffffff01e914aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25d90, ffffff430fc25d80) taskq_thread_wait+0xbe(ffffff430fc25d60, ffffff430fc25d80, ffffff430fc25d90 , ffffff01e914abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25d60) thread_start+8() ffffff01e8dd9c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e8dd9c40: ffffff01e8dd9860 0xffffff430f953b00() ffffff01e8e1bc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e8e1bc40: 
ffffff01e8e1bbb0 [ ffffff01e8e1bbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(c) thread_start+8() ffffff01e9133c40 fffffffffbc2eb80 0 0 99 ffffff430f449510 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e9133c40: ffffff01e9133b40 [ ffffff01e9133b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449510, ffffff430f4494d0) squeue_worker+0x104(ffffff430f4494c0) thread_start+8() ffffff01e9139c40 fffffffffbc2eb80 0 0 99 ffffff430f449512 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e9139c40: ffffff01e9139b00 [ ffffff01e9139b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449512, ffffff430f4494d0) squeue_polling_thread+0xa9(ffffff430f4494c0) thread_start+8() ffffff01e91b5c40 fffffffffbc2eb80 0 0 60 ffffff430fc25c78 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e91b5c40: ffffff01e91b5a80 [ ffffff01e91b5a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25c78, ffffff430fc25c68) taskq_thread_wait+0xbe(ffffff430fc25c48, ffffff430fc25c68, ffffff430fc25c78 , ffffff01e91b5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25c48) thread_start+8() ffffff01e908dc40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e908dc40: ffffff01e908d840 0xffffff430f953900() ffffff01e90cfc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e90cfc40: ffffff01e90cfbb0 [ ffffff01e90cfbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(d) thread_start+8() ffffff01e919ec40 fffffffffbc2eb80 0 0 99 ffffff430f449450 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e919ec40: ffffff01e919eb40 [ ffffff01e919eb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449450, ffffff430f449410) squeue_worker+0x104(ffffff430f449400) thread_start+8() ffffff01e91a4c40 fffffffffbc2eb80 0 0 99 ffffff430f449452 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e91a4c40: ffffff01e91a4b00 [ ffffff01e91a4b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449452, ffffff430f449410) squeue_polling_thread+0xa9(ffffff430f449400) thread_start+8() ffffff01e9220c40 fffffffffbc2eb80 0 0 60 ffffff430fc25b60 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e9220c40: ffffff01e9220a80 [ ffffff01e9220a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25b60, ffffff430fc25b50) taskq_thread_wait+0xbe(ffffff430fc25b30, ffffff430fc25b50, ffffff430fc25b60 , ffffff01e9220bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25b30) thread_start+8() ffffff01e9156c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e9156c40: ffffff01e9156830 0xffffff430f953780() ffffff01e9198c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e9198c40: ffffff01e9198bb0 [ ffffff01e9198bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(e) thread_start+8() ffffff01e9209c40 fffffffffbc2eb80 0 0 99 ffffff430f449390 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e9209c40: ffffff01e9209b40 [ ffffff01e9209b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449390, ffffff430f449350) squeue_worker+0x104(ffffff430f449340) thread_start+8() ffffff01e920fc40 fffffffffbc2eb80 0 
0 99 ffffff430f449392 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e920fc40: ffffff01e920fb00 [ ffffff01e920fb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449392, ffffff430f449350) squeue_polling_thread+0xa9(ffffff430f449340) thread_start+8() ffffff01e929dc40 fffffffffbc2eb80 0 0 60 ffffff430fc25930 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e929dc40: ffffff01e929da80 [ ffffff01e929da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25930, ffffff430fc25920) taskq_thread_wait+0xbe(ffffff430fc25900, ffffff430fc25920, ffffff430fc25930 , ffffff01e929dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25900) thread_start+8() ffffff01e91c1c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e91c1c40: ffffff01e91c1840 0xffffff430f953400() ffffff01e9203c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e9203c40: ffffff01e9203bb0 [ ffffff01e9203bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(f) thread_start+8() ffffff01e9286c40 fffffffffbc2eb80 0 0 99 ffffff430f4492d0 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e9286c40: ffffff01e9286b40 [ ffffff01e9286b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f4492d0, ffffff430f449290) squeue_worker+0x104(ffffff430f449280) thread_start+8() ffffff01e928cc40 fffffffffbc2eb80 0 0 99 ffffff430f4492d2 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e928cc40: ffffff01e928cb00 [ ffffff01e928cb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f4492d2, ffffff430f449290) squeue_polling_thread+0xa9(ffffff430f449280) thread_start+8() ffffff01e9308c40 fffffffffbc2eb80 0 0 60 ffffff430fc25818 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e9308c40: ffffff01e9308a80 [ ffffff01e9308a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25818, ffffff430fc25808) taskq_thread_wait+0xbe(ffffff430fc257e8, ffffff430fc25808, ffffff430fc25818 , ffffff01e9308bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc257e8) thread_start+8() ffffff01e922cc40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e922cc40: ffffff01e922c930 apix_do_softint_prolog+0x59(fffffffff78ff3e4, ffffff01e922c990, ffffff01e9262c40, ffffff01e922c970) 0xffffff430f9aaac0() acpi_cpu_cstate+0x11b(ffffff430fc1ec28) cpu_acpi_idle+0x8d() cpu_idle_adaptive+0x13() idle+0xa7() thread_start+8() ffffff01e926ec40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e926ec40: ffffff01e926ebb0 [ ffffff01e926ebb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(10) thread_start+8() ffffff01e92f1c40 fffffffffbc2eb80 0 0 99 ffffff430f449210 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e92f1c40: ffffff01e92f1b40 [ ffffff01e92f1b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449210, ffffff430f4491d0) squeue_worker+0x104(ffffff430f4491c0) thread_start+8() ffffff01e92f7c40 fffffffffbc2eb80 0 0 99 ffffff430f449212 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e92f7c40: ffffff01e92f7b00 [ ffffff01e92f7b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449212, ffffff430f4491d0) 
squeue_polling_thread+0xa9(ffffff430f4491c0) thread_start+8() ffffff01e9373c40 fffffffffbc2eb80 0 0 60 ffffff430fc25700 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e9373c40: ffffff01e9373a80 [ ffffff01e9373a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25700, ffffff430fc256f0) taskq_thread_wait+0xbe(ffffff430fc256d0, ffffff430fc256f0, ffffff430fc25700 , ffffff01e9373bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc256d0) thread_start+8() ffffff01e92a9c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e92a9c40: ffffff01e92a9840 0xffffff430f953180() ffffff01e92ebc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e92ebc40: ffffff01e92ebbb0 [ ffffff01e92ebbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(11) thread_start+8() ffffff01e935cc40 fffffffffbc2eb80 0 0 99 ffffff430f449150 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e935cc40: ffffff01e935cb40 [ ffffff01e935cb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449150, ffffff430f449110) squeue_worker+0x104(ffffff430f449100) thread_start+8() ffffff01e9362c40 fffffffffbc2eb80 0 0 99 ffffff430f449152 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e9362c40: ffffff01e9362b00 [ ffffff01e9362b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449152, ffffff430f449110) squeue_polling_thread+0xa9(ffffff430f449100) thread_start+8() ffffff01e93dec40 fffffffffbc2eb80 0 0 60 ffffff430fc255e8 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e93dec40: ffffff01e93dea80 [ ffffff01e93dea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc255e8, ffffff430fc255d8) taskq_thread_wait+0xbe(ffffff430fc255b8, ffffff430fc255d8, ffffff430fc255e8 , ffffff01e93debc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc255b8) thread_start+8() ffffff01e9314c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e9314c40: ffffff01e9314820 0xffffff430fca3f00() ffffff01e9356c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e9356c40: ffffff01e9356bb0 [ ffffff01e9356bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(12) thread_start+8() ffffff01e93c7c40 fffffffffbc2eb80 0 0 99 ffffff430f449090 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e93c7c40: ffffff01e93c7b40 [ ffffff01e93c7b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449090, ffffff430f449050) squeue_worker+0x104(ffffff430f449040) thread_start+8() ffffff01e93cdc40 fffffffffbc2eb80 0 0 99 ffffff430f449092 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e93cdc40: ffffff01e93cdb00 [ ffffff01e93cdb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f449092, ffffff430f449050) squeue_polling_thread+0xa9(ffffff430f449040) thread_start+8() ffffff01e9449c40 fffffffffbc2eb80 0 0 60 ffffff430fc254d0 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e9449c40: ffffff01e9449a80 [ ffffff01e9449a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc254d0, ffffff430fc254c0) taskq_thread_wait+0xbe(ffffff430fc254a0, ffffff430fc254c0, ffffff430fc254d0 , ffffff01e9449bc0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff430fc254a0) thread_start+8() ffffff01e937fc40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e937fc40: ffffff01e937f840 0xffffff430fca3d80() ffffff01e93c1c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e93c1c40: ffffff01e93c1bb0 [ ffffff01e93c1bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(13) thread_start+8() ffffff01e9432c40 fffffffffbc2eb80 0 0 99 ffffff430fd41f10 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e9432c40: ffffff01e9432b40 [ ffffff01e9432b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41f10, ffffff430fd41ed0) squeue_worker+0x104(ffffff430fd41ec0) thread_start+8() ffffff01e9438c40 fffffffffbc2eb80 0 0 99 ffffff430fd41f12 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e9438c40: ffffff01e9438b00 [ ffffff01e9438b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41f12, ffffff430fd41ed0) squeue_polling_thread+0xa9(ffffff430fd41ec0) thread_start+8() ffffff01e94ccc40 fffffffffbc2eb80 0 0 60 ffffff430fc252a0 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e94ccc40: ffffff01e94cca80 [ ffffff01e94cca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc252a0, ffffff430fc25290) taskq_thread_wait+0xbe(ffffff430fc25270, ffffff430fc25290, ffffff430fc252a0 , ffffff01e94ccbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25270) thread_start+8() ffffff01e93eac40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e93eac40: ffffff01e93ea860 1() ffffff01e942cc40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e942cc40: ffffff01e942cbb0 [ ffffff01e942cbb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(14) thread_start+8() ffffff01e94b5c40 fffffffffbc2eb80 0 0 99 ffffff430fd41e50 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e94b5c40: ffffff01e94b5b40 [ ffffff01e94b5b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41e50, ffffff430fd41e10) squeue_worker+0x104(ffffff430fd41e00) thread_start+8() ffffff01e94bbc40 fffffffffbc2eb80 0 0 99 ffffff430fd41e52 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e94bbc40: ffffff01e94bbb00 [ ffffff01e94bbb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41e52, ffffff430fd41e10) squeue_polling_thread+0xa9(ffffff430fd41e00) thread_start+8() ffffff01e9555c40 fffffffffbc2eb80 0 0 60 ffffff430fc25070 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e9555c40: ffffff01e9555a80 [ ffffff01e9555a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fc25070, ffffff430fc25060) taskq_thread_wait+0xbe(ffffff430fc25040, ffffff430fc25060, ffffff430fc25070 , ffffff01e9555bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fc25040) thread_start+8() ffffff01e9455c40 fffffffffbc2eb80 0 0 -1 0 PC: _resume_from_idle+0xf1 THREAD: idle() stack pointer for thread ffffff01e9455c40: ffffff01e9455bd0 [ ffffff01e9455bd0 _resume_from_idle+0xf1() ] swtch+0x141() idle+0xbc() thread_start+8() ffffff01e9497c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e9497c40: ffffff01e9497bb0 [ ffffff01e9497bb0 _resume_from_idle+0xf1() ] 
swtch+0x141() cpu_pause+0x80(15) thread_start+8() ffffff01e953ec40 fffffffffbc2eb80 0 0 99 ffffff430fd41d90 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e953ec40: ffffff01e953eb40 [ ffffff01e953eb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41d90, ffffff430fd41d50) squeue_worker+0x104(ffffff430fd41d40) thread_start+8() ffffff01e9544c40 fffffffffbc2eb80 0 0 99 ffffff430fd41d92 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e9544c40: ffffff01e9544b00 [ ffffff01e9544b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41d92, ffffff430fd41d50) squeue_polling_thread+0xa9(ffffff430fd41d40) thread_start+8() ffffff01e95c0c40 fffffffffbc2eb80 0 0 60 ffffff430fd8feb0 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e95c0c40: ffffff01e95c0a80 [ ffffff01e95c0a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8feb0, ffffff430fd8fea0) taskq_thread_wait+0xbe(ffffff430fd8fe80, ffffff430fd8fea0, ffffff430fd8feb0 , ffffff01e95c0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fe80) thread_start+8() ffffff01e94d8c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e94d8c40: ffffff01e94d8840 0xffffff430fca3600() ffffff01e951ac40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e951ac40: ffffff01e951abb0 [ ffffff01e951abb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(16) thread_start+8() ffffff01e95a9c40 fffffffffbc2eb80 0 0 99 ffffff430fd41cd0 PC: _resume_from_idle+0xf1 THREAD: squeue_worker() stack pointer for thread ffffff01e95a9c40: ffffff01e95a9b40 [ ffffff01e95a9b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41cd0, ffffff430fd41c90) squeue_worker+0x104(ffffff430fd41c80) thread_start+8() ffffff01e95afc40 fffffffffbc2eb80 0 0 99 ffffff430fd41cd2 PC: _resume_from_idle+0xf1 THREAD: squeue_polling_thread() stack pointer for thread ffffff01e95afc40: ffffff01e95afb00 [ ffffff01e95afb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd41cd2, ffffff430fd41c90) squeue_polling_thread+0xa9(ffffff430fd41c80) thread_start+8() ffffff01e95dec40 fffffffffbc2eb80 0 0 60 ffffff430fd8fd98 PC: _resume_from_idle+0xf1 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff01e95dec40: ffffff01e95dea80 [ ffffff01e95dea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8fd98, ffffff430fd8fd88) taskq_thread_wait+0xbe(ffffff430fd8fd68, ffffff430fd8fd88, ffffff430fd8fd98 , ffffff01e95debc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fd68) thread_start+8() ffffff01e9561c40 fffffffffbc2eb80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff01e9561c40: ffffff01e9561680 ffffff01e95a3c40 fffffffffbc2eb80 0 0 109 0 PC: _resume_from_idle+0xf1 THREAD: cpu_pause() stack pointer for thread ffffff01e95a3c40: ffffff01e95a3bb0 [ ffffff01e95a3bb0 _resume_from_idle+0xf1() ] swtch+0x141() cpu_pause+0x80(17) thread_start+8() ffffff01e96e6c40 fffffffffbc2eb80 0 0 99 ffffff430fd8fc80 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e96e6c40: ffffff01e96e6a80 [ ffffff01e96e6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8fc80, ffffff430fd8fc70) taskq_thread_wait+0xbe(ffffff430fd8fc50, ffffff430fd8fc70, ffffff430fd8fc80 , ffffff01e96e6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fc50) thread_start+8() 
ffffff01e96e0c40 fffffffffbc2eb80 0 0 99 ffffff430fd8fc80 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e96e0c40: ffffff01e96e0a80 [ ffffff01e96e0a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8fc80, ffffff430fd8fc70) taskq_thread_wait+0xbe(ffffff430fd8fc50, ffffff430fd8fc70, ffffff430fd8fc80 , ffffff01e96e0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fc50) thread_start+8() ffffff01e96f2c40 fffffffffbc2eb80 0 0 99 ffffff430fd8fb68 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e96f2c40: ffffff01e96f2a80 [ ffffff01e96f2a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8fb68, ffffff430fd8fb58) taskq_thread_wait+0xbe(ffffff430fd8fb38, ffffff430fd8fb58, ffffff430fd8fb68 , ffffff01e96f2bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fb38) thread_start+8() ffffff01e96ecc40 fffffffffbc2eb80 0 0 99 ffffff430fd8fb68 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e96ecc40: ffffff01e96eca80 [ ffffff01e96eca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8fb68, ffffff430fd8fb58) taskq_thread_wait+0xbe(ffffff430fd8fb38, ffffff430fd8fb58, ffffff430fd8fb68 , ffffff01e96ecbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fb38) thread_start+8() ffffff01e96fec40 fffffffffbc2eb80 0 0 99 ffffff430fd8fa50 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e96fec40: ffffff01e96fea80 [ ffffff01e96fea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8fa50, ffffff430fd8fa40) taskq_thread_wait+0xbe(ffffff430fd8fa20, ffffff430fd8fa40, ffffff430fd8fa50 , ffffff01e96febc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fa20) thread_start+8() ffffff01e96f8c40 fffffffffbc2eb80 0 0 99 ffffff430fd8fa50 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e96f8c40: ffffff01e96f8a80 [ ffffff01e96f8a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8fa50, ffffff430fd8fa40) taskq_thread_wait+0xbe(ffffff430fd8fa20, ffffff430fd8fa40, ffffff430fd8fa50 , ffffff01e96f8bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8fa20) thread_start+8() ffffff01e970ac40 fffffffffbc2eb80 0 0 99 ffffff430fd8f938 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e970ac40: ffffff01e970aa80 [ ffffff01e970aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f938, ffffff430fd8f928) taskq_thread_wait+0xbe(ffffff430fd8f908, ffffff430fd8f928, ffffff430fd8f938 , ffffff01e970abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f908) thread_start+8() ffffff01e9704c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f938 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9704c40: ffffff01e9704a80 [ ffffff01e9704a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f938, ffffff430fd8f928) taskq_thread_wait+0xbe(ffffff430fd8f908, ffffff430fd8f928, ffffff430fd8f938 , ffffff01e9704bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f908) thread_start+8() ffffff01e9716c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f820 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9716c40: ffffff01e9716a80 [ ffffff01e9716a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f820, ffffff430fd8f810) taskq_thread_wait+0xbe(ffffff430fd8f7f0, ffffff430fd8f810, ffffff430fd8f820 , ffffff01e9716bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f7f0) thread_start+8() 
ffffff01e9710c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f820 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9710c40: ffffff01e9710a80 [ ffffff01e9710a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f820, ffffff430fd8f810) taskq_thread_wait+0xbe(ffffff430fd8f7f0, ffffff430fd8f810, ffffff430fd8f820 , ffffff01e9710bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f7f0) thread_start+8() ffffff01e9722c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f708 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9722c40: ffffff01e9722a80 [ ffffff01e9722a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f708, ffffff430fd8f6f8) taskq_thread_wait+0xbe(ffffff430fd8f6d8, ffffff430fd8f6f8, ffffff430fd8f708 , ffffff01e9722bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f6d8) thread_start+8() ffffff01e971cc40 fffffffffbc2eb80 0 0 99 ffffff430fd8f708 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e971cc40: ffffff01e971ca80 [ ffffff01e971ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f708, ffffff430fd8f6f8) taskq_thread_wait+0xbe(ffffff430fd8f6d8, ffffff430fd8f6f8, ffffff430fd8f708 , ffffff01e971cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f6d8) thread_start+8() ffffff01e972ec40 fffffffffbc2eb80 0 0 99 ffffff430fd8f5f0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e972ec40: ffffff01e972ea80 [ ffffff01e972ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f5f0, ffffff430fd8f5e0) taskq_thread_wait+0xbe(ffffff430fd8f5c0, ffffff430fd8f5e0, ffffff430fd8f5f0 , ffffff01e972ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f5c0) thread_start+8() ffffff01e9728c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f5f0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9728c40: ffffff01e9728a80 [ ffffff01e9728a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f5f0, ffffff430fd8f5e0) taskq_thread_wait+0xbe(ffffff430fd8f5c0, ffffff430fd8f5e0, ffffff430fd8f5f0 , ffffff01e9728bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f5c0) thread_start+8() ffffff01e973ac40 fffffffffbc2eb80 0 0 99 ffffff430fd8f4d8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e973ac40: ffffff01e973aa80 [ ffffff01e973aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f4d8, ffffff430fd8f4c8) taskq_thread_wait+0xbe(ffffff430fd8f4a8, ffffff430fd8f4c8, ffffff430fd8f4d8 , ffffff01e973abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f4a8) thread_start+8() ffffff01e9734c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f4d8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9734c40: ffffff01e9734a80 [ ffffff01e9734a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f4d8, ffffff430fd8f4c8) taskq_thread_wait+0xbe(ffffff430fd8f4a8, ffffff430fd8f4c8, ffffff430fd8f4d8 , ffffff01e9734bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f4a8) thread_start+8() ffffff01e9746c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f3c0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9746c40: ffffff01e9746a80 [ ffffff01e9746a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f3c0, ffffff430fd8f3b0) taskq_thread_wait+0xbe(ffffff430fd8f390, ffffff430fd8f3b0, ffffff430fd8f3c0 , ffffff01e9746bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f390) thread_start+8() 
ffffff01e9740c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f3c0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9740c40: ffffff01e9740a80 [ ffffff01e9740a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f3c0, ffffff430fd8f3b0) taskq_thread_wait+0xbe(ffffff430fd8f390, ffffff430fd8f3b0, ffffff430fd8f3c0 , ffffff01e9740bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f390) thread_start+8() ffffff01e9752c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f2a8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9752c40: ffffff01e9752a80 [ ffffff01e9752a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f2a8, ffffff430fd8f298) taskq_thread_wait+0xbe(ffffff430fd8f278, ffffff430fd8f298, ffffff430fd8f2a8 , ffffff01e9752bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f278) thread_start+8() ffffff01e974cc40 fffffffffbc2eb80 0 0 99 ffffff430fd8f2a8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e974cc40: ffffff01e974ca80 [ ffffff01e974ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f2a8, ffffff430fd8f298) taskq_thread_wait+0xbe(ffffff430fd8f278, ffffff430fd8f298, ffffff430fd8f2a8 , ffffff01e974cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f278) thread_start+8() ffffff01e975ec40 fffffffffbc2eb80 0 0 99 ffffff430fd8f190 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e975ec40: ffffff01e975ea80 [ ffffff01e975ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f190, ffffff430fd8f180) taskq_thread_wait+0xbe(ffffff430fd8f160, ffffff430fd8f180, ffffff430fd8f190 , ffffff01e975ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f160) thread_start+8() ffffff01e9758c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f190 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9758c40: ffffff01e9758a80 [ ffffff01e9758a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f190, ffffff430fd8f180) taskq_thread_wait+0xbe(ffffff430fd8f160, ffffff430fd8f180, ffffff430fd8f190 , ffffff01e9758bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f160) thread_start+8() ffffff01e976ac40 fffffffffbc2eb80 0 0 99 ffffff430fd8f078 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e976ac40: ffffff01e976aa80 [ ffffff01e976aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f078, ffffff430fd8f068) taskq_thread_wait+0xbe(ffffff430fd8f048, ffffff430fd8f068, ffffff430fd8f078 , ffffff01e976abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f048) thread_start+8() ffffff01e9764c40 fffffffffbc2eb80 0 0 99 ffffff430fd8f078 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9764c40: ffffff01e9764a80 [ ffffff01e9764a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fd8f078, ffffff430fd8f068) taskq_thread_wait+0xbe(ffffff430fd8f048, ffffff430fd8f068, ffffff430fd8f078 , ffffff01e9764bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fd8f048) thread_start+8() ffffff01e9776c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1eb8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9776c40: ffffff01e9776a80 [ ffffff01e9776a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1eb8, ffffff430fdb1ea8) taskq_thread_wait+0xbe(ffffff430fdb1e88, ffffff430fdb1ea8, ffffff430fdb1eb8 , ffffff01e9776bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1e88) thread_start+8() 
ffffff01e9770c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1eb8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9770c40: ffffff01e9770a80 [ ffffff01e9770a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1eb8, ffffff430fdb1ea8) taskq_thread_wait+0xbe(ffffff430fdb1e88, ffffff430fdb1ea8, ffffff430fdb1eb8 , ffffff01e9770bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1e88) thread_start+8() ffffff01e9782c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1da0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9782c40: ffffff01e9782a80 [ ffffff01e9782a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1da0, ffffff430fdb1d90) taskq_thread_wait+0xbe(ffffff430fdb1d70, ffffff430fdb1d90, ffffff430fdb1da0 , ffffff01e9782bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1d70) thread_start+8() ffffff01e977cc40 fffffffffbc2eb80 0 0 99 ffffff430fdb1da0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e977cc40: ffffff01e977ca80 [ ffffff01e977ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1da0, ffffff430fdb1d90) taskq_thread_wait+0xbe(ffffff430fdb1d70, ffffff430fdb1d90, ffffff430fdb1da0 , ffffff01e977cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1d70) thread_start+8() ffffff01e978ec40 fffffffffbc2eb80 0 0 99 ffffff430fdb1c88 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e978ec40: ffffff01e978ea80 [ ffffff01e978ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1c88, ffffff430fdb1c78) taskq_thread_wait+0xbe(ffffff430fdb1c58, ffffff430fdb1c78, ffffff430fdb1c88 , ffffff01e978ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1c58) thread_start+8() ffffff01e9788c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1c88 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9788c40: ffffff01e9788a80 [ ffffff01e9788a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1c88, ffffff430fdb1c78) taskq_thread_wait+0xbe(ffffff430fdb1c58, ffffff430fdb1c78, ffffff430fdb1c88 , ffffff01e9788bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1c58) thread_start+8() ffffff01e979ac40 fffffffffbc2eb80 0 0 99 ffffff430fdb1b70 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e979ac40: ffffff01e979aa80 [ ffffff01e979aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1b70, ffffff430fdb1b60) taskq_thread_wait+0xbe(ffffff430fdb1b40, ffffff430fdb1b60, ffffff430fdb1b70 , ffffff01e979abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1b40) thread_start+8() ffffff01e9794c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1b70 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e9794c40: ffffff01e9794a80 [ ffffff01e9794a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1b70, ffffff430fdb1b60) taskq_thread_wait+0xbe(ffffff430fdb1b40, ffffff430fdb1b60, ffffff430fdb1b70 , ffffff01e9794bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1b40) thread_start+8() ffffff01e97a6c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1a58 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97a6c40: ffffff01e97a6a80 [ ffffff01e97a6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1a58, ffffff430fdb1a48) taskq_thread_wait+0xbe(ffffff430fdb1a28, ffffff430fdb1a48, ffffff430fdb1a58 , ffffff01e97a6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1a28) thread_start+8() 
ffffff01e97a0c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1a58 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97a0c40: ffffff01e97a0a80 [ ffffff01e97a0a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1a58, ffffff430fdb1a48) taskq_thread_wait+0xbe(ffffff430fdb1a28, ffffff430fdb1a48, ffffff430fdb1a58 , ffffff01e97a0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1a28) thread_start+8() ffffff01e97b2c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1940 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97b2c40: ffffff01e97b2a80 [ ffffff01e97b2a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1940, ffffff430fdb1930) taskq_thread_wait+0xbe(ffffff430fdb1910, ffffff430fdb1930, ffffff430fdb1940 , ffffff01e97b2bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1910) thread_start+8() ffffff01e97acc40 fffffffffbc2eb80 0 0 99 ffffff430fdb1940 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97acc40: ffffff01e97aca80 [ ffffff01e97aca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1940, ffffff430fdb1930) taskq_thread_wait+0xbe(ffffff430fdb1910, ffffff430fdb1930, ffffff430fdb1940 , ffffff01e97acbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1910) thread_start+8() ffffff01e97bec40 fffffffffbc2eb80 0 0 99 ffffff430fdb1828 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97bec40: ffffff01e97bea80 [ ffffff01e97bea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1828, ffffff430fdb1818) taskq_thread_wait+0xbe(ffffff430fdb17f8, ffffff430fdb1818, ffffff430fdb1828 , ffffff01e97bebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb17f8) thread_start+8() ffffff01e97b8c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1828 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97b8c40: ffffff01e97b8a80 [ ffffff01e97b8a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1828, ffffff430fdb1818) taskq_thread_wait+0xbe(ffffff430fdb17f8, ffffff430fdb1818, ffffff430fdb1828 , ffffff01e97b8bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb17f8) thread_start+8() ffffff01e97cac40 fffffffffbc2eb80 0 0 99 ffffff430fdb1710 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97cac40: ffffff01e97caa80 [ ffffff01e97caa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1710, ffffff430fdb1700) taskq_thread_wait+0xbe(ffffff430fdb16e0, ffffff430fdb1700, ffffff430fdb1710 , ffffff01e97cabc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb16e0) thread_start+8() ffffff01e97c4c40 fffffffffbc2eb80 0 0 99 ffffff430fdb1710 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97c4c40: ffffff01e97c4a80 [ ffffff01e97c4a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb1710, ffffff430fdb1700) taskq_thread_wait+0xbe(ffffff430fdb16e0, ffffff430fdb1700, ffffff430fdb1710 , ffffff01e97c4bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb16e0) thread_start+8() ffffff01e97d6c40 fffffffffbc2eb80 0 0 99 ffffff430fdb15f8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97d6c40: ffffff01e97d6a80 [ ffffff01e97d6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb15f8, ffffff430fdb15e8) taskq_thread_wait+0xbe(ffffff430fdb15c8, ffffff430fdb15e8, ffffff430fdb15f8 , ffffff01e97d6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb15c8) thread_start+8() 
ffffff01e97d0c40 fffffffffbc2eb80 0 0 99 ffffff430fdb15f8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97d0c40: ffffff01e97d0a80 [ ffffff01e97d0a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb15f8, ffffff430fdb15e8) taskq_thread_wait+0xbe(ffffff430fdb15c8, ffffff430fdb15e8, ffffff430fdb15f8 , ffffff01e97d0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb15c8) thread_start+8() ffffff01e97e2c40 fffffffffbc2eb80 0 0 99 ffffff430fdb14e0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97e2c40: ffffff01e97e2a80 [ ffffff01e97e2a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb14e0, ffffff430fdb14d0) taskq_thread_wait+0xbe(ffffff430fdb14b0, ffffff430fdb14d0, ffffff430fdb14e0 , ffffff01e97e2bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb14b0) thread_start+8() ffffff01e97dcc40 fffffffffbc2eb80 0 0 99 ffffff430fdb14e0 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97dcc40: ffffff01e97dca80 [ ffffff01e97dca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb14e0, ffffff430fdb14d0) taskq_thread_wait+0xbe(ffffff430fdb14b0, ffffff430fdb14d0, ffffff430fdb14e0 , ffffff01e97dcbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb14b0) thread_start+8() ffffff01e97eec40 fffffffffbc2eb80 0 0 99 ffffff430fdb13c8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97eec40: ffffff01e97eea80 [ ffffff01e97eea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb13c8, ffffff430fdb13b8) taskq_thread_wait+0xbe(ffffff430fdb1398, ffffff430fdb13b8, ffffff430fdb13c8 , ffffff01e97eebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1398) thread_start+8() ffffff01e97e8c40 fffffffffbc2eb80 0 0 99 ffffff430fdb13c8 PC: _resume_from_idle+0xf1 TASKQ: callout_taskq stack pointer for thread ffffff01e97e8c40: ffffff01e97e8a80 [ ffffff01e97e8a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fdb13c8, ffffff430fdb13b8) taskq_thread_wait+0xbe(ffffff430fdb1398, ffffff430fdb13b8, ffffff430fdb13c8 , ffffff01e97e8bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff430fdb1398) thread_start+8() ffffff01e9d06c40 fffffffffbc2eb80 0 0 98 ffffff4310550ec0 PC: _resume_from_idle+0xf1 TASKQ: console_taskq stack pointer for thread ffffff01e9d06c40: ffffff01e9d06a80 [ ffffff01e9d06a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310550ec0, ffffff4310550eb0) taskq_thread_wait+0xbe(ffffff4310550e90, ffffff4310550eb0, ffffff4310550ec0 , ffffff01e9d06bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4310550e90) thread_start+8() ffffff4310bb1b40 ffffff43107e8048 ffffff430e7143c0 1 59 ffffff43107e84c0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4310bb1b40: ffffff01e9ee9d80 [ ffffff01e9ee9d80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig+0x185(ffffff43107e84c0, fffffffffbd12ed0) door_unref+0x94() doorfs32+0xf5(0, 0, 0, 0, 0, 8) sys_syscall32+0xff() ffffff4311cbbc60 ffffff43107e8048 ffffff430e746880 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311cbbc60: ffffff01e8b80d20 [ ffffff01e8b80d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff43137e4be0) shuttle_resume+0x2af(ffffff43137e4be0, fffffffffbd12ed0) door_return+0x3e0(fe7afd50, 8, 0, 0, fe7afe00, f5f00) doorfs32+0x180(fe7afd50, 8, 0, fe7afe00, f5f00, a) sys_syscall32+0xff() ffffff43113f10a0 ffffff43107e8048 ffffff430e7421c0 1 59 0 PC: _resume_from_idle+0xf1 
CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff43113f10a0: ffffff01e8db7d20 [ ffffff01e8db7d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff430fdaa3c0) shuttle_resume+0x2af(ffffff430fdaa3c0, fffffffffbd12ed0) door_return+0x3e0(fe082db4, 4, 0, 0, fe082e00, f5f00) doorfs32+0x180(fe082db4, 4, 0, fe082e00, f5f00, a) sys_syscall32+0xff() ffffff43137ae8a0 ffffff43107e8048 ffffff431203aa80 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff43137ae8a0: ffffff01ea223d20 [ ffffff01ea223d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311cbb520) shuttle_resume+0x2af(ffffff4311cbb520, fffffffffbd12ed0) door_return+0x3e0(fda88db4, 4, 0, 0, fda88e00, f5f00) doorfs32+0x180(fda88db4, 4, 0, fda88e00, f5f00, a) sys_syscall32+0xff() ffffff4311b824a0 ffffff43107e8048 ffffff4312043840 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311b824a0: ffffff01eb5ccd20 [ ffffff01eb5ccd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311cc2c20) shuttle_resume+0x2af(ffffff4311cc2c20, fffffffffbd12ed0) door_return+0x3e0(fde84db0, 4, 0, 0, fde84e00, f5f00) doorfs32+0x180(fde84db0, 4, 0, fde84e00, f5f00, a) sys_syscall32+0xff() ffffff4310bb1060 ffffff43107e8048 ffffff4312040e40 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4310bb1060: ffffff01eb584d20 [ ffffff01eb584d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311cbb520) shuttle_resume+0x2af(ffffff4311cbb520, fffffffffbd12ed0) door_return+0x3e0(fdc86db0, 4, 0, 0, fdc86e00, f5f00) doorfs32+0x180(fdc86db0, 4, 0, fdc86e00, f5f00, a) sys_syscall32+0xff() ffffff4312e0a4e0 ffffff43107e8048 ffffff4312038080 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4312e0a4e0: ffffff01e9ee2d20 [ ffffff01e9ee2d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311d66400) shuttle_resume+0x2af(ffffff4311d66400, fffffffffbd12ed0) door_return+0x3e0(fdb87dac, 4, 0, 0, fdb87e00, f5f00) doorfs32+0x180(fdb87dac, 4, 0, fdb87e00, f5f00, a) sys_syscall32+0xff() ffffff4311d5bb80 ffffff43107e8048 ffffff430e73db00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311d5bb80: ffffff01e92fdd20 [ ffffff01e92fdd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383ca77c0) shuttle_resume+0x2af(ffffff4383ca77c0, fffffffffbd12ed0) door_return+0x3e0(fe47edb4, 4, 0, 0, fe47ee00, f5f00) doorfs32+0x180(fe47edb4, 4, 0, fe47ee00, f5f00, a) sys_syscall32+0xff() ffffff4311d5b7e0 ffffff43107e8048 ffffff430e722800 1 59 ffffff4311d5b9ce PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311d5b7e0: ffffff01e93d3c50 [ ffffff01e93d3c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311d5b9ce, ffffff4311d5b9d0, 0) cv_wait_sig_swap+0x17(ffffff4311d5b9ce, ffffff4311d5b9d0) cv_waituntil_sig+0xbd(ffffff4311d5b9ce, ffffff4311d5b9d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) sys_syscall32+0xff() ffffff4311d5b440 ffffff43107e8048 ffffff430eb07f00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311d5b440: ffffff01e896dd20 [ ffffff01e896dd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311d4b820) shuttle_resume+0x2af(ffffff4311d4b820, fffffffffbd12ed0) door_return+0x3e0(fdf83db8, 4, 0, 0, fdf83e00, f5f00) doorfs32+0x180(fdf83db8, 4, 0, fdf83e00, f5f00, a) sys_syscall32+0xff() ffffff4311f9a820 ffffff43107e8048 ffffff4311a4a140 1 59 0 PC: 
_resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311f9a820: ffffff01eb5aed20 [ ffffff01eb5aed20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff43137e4be0) shuttle_resume+0x2af(ffffff43137e4be0, fffffffffbd12ed0) door_return+0x3e0(fdd85db4, 4, 0, 0, fdd85e00, f5f00) doorfs32+0x180(fdd85db4, 4, 0, fdd85e00, f5f00, a) sys_syscall32+0xff() ffffff4310bb1400 ffffff43107e8048 ffffff430e707c40 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4310bb1400: ffffff01e9d49d20 [ ffffff01e9d49d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383ca77c0) shuttle_resume+0x2af(ffffff4383ca77c0, fffffffffbd12ed0) door_return+0x3e0(feaaedb8, 4, 0, 0, feaaee00, f5f00) doorfs32+0x180(feaaedb8, 4, 0, feaaee00, f5f00, a) sys_syscall32+0xff() ffffff4311cb73a0 ffffff43107e8048 ffffff42a9c48080 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311cb73a0: ffffff01e9226d20 [ ffffff01e9226d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383deb7a0) shuttle_resume+0x2af(ffffff4383deb7a0, fffffffffbd12ed0) door_return+0x3e0(fe59fd50, 8, 0, 0, fe59fe00, f5f00) doorfs32+0x180(fe59fd50, 8, 0, fe59fe00, f5f00, a) sys_syscall32+0xff() ffffff43137ae500 ffffff43107e8048 ffffff431203b180 1 59 ffffff43137ae6ee PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff43137ae500: ffffff01e9bc7c60 [ ffffff01e9bc7c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff43137ae6ee, ffffff43137ae6f0, 2540bd1f4, 1, 4) cv_waituntil_sig+0xfa(ffffff43137ae6ee, ffffff43137ae6f0, ffffff01e9bc7e10, 2) lwp_park+0x15e(fe181f18, 0) syslwp_park+0x63(0, fe181f18, 0) sys_syscall32+0xff() ffffff4311cbe160 ffffff43107e8048 ffffff430e708a40 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311cbe160: ffffff01e8bebd20 [ ffffff01e8bebd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383ddfb60) shuttle_resume+0x2af(ffffff4383ddfb60, fffffffffbd12ed0) door_return+0x3e0(fe8aedb8, 4, 0, 0, fe8aee00, f5f00) doorfs32+0x180(fe8aedb8, 4, 0, fe8aee00, f5f00, a) sys_syscall32+0xff() ffffff4311d61080 ffffff43107e8048 ffffff430e739a00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311d61080: ffffff01e9368d20 [ ffffff01e9368d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311b82100) shuttle_resume+0x2af(ffffff4311b82100, fffffffffbd12ed0) door_return+0x3e0(fe37fdb8, 4, 0, 0, fe37fe00, f5f00) doorfs32+0x180(fe37fdb8, 4, 0, fe37fe00, f5f00, a) sys_syscall32+0xff() ffffff4311cb7740 ffffff43107e8048 ffffff430e725200 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4311cb7740: ffffff01e91bbd20 [ ffffff01e91bbd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311e53400) shuttle_resume+0x2af(ffffff4311e53400, fffffffffbd12ed0) door_return+0x3e0(fe69edb8, 4, 0, 0, fe69ee00, f5f00) doorfs32+0x180(fe69edb8, 4, 0, fe69ee00, f5f00, a) sys_syscall32+0xff() ffffff4310bb17a0 ffffff43107e8048 ffffff430e713cc0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff4310bb17a0: ffffff01e9eefd20 [ ffffff01e9eefd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4312e0ac20) shuttle_resume+0x2af(ffffff4312e0ac20, fffffffffbd12ed0) door_return+0x3e0(fec8fdb4, 4, 0, 0, fec8fe00, f5f00) doorfs32+0x180(fec8fdb4, 4, 0, fec8fe00, f5f00, a) sys_syscall32+0xff() ffffff43106bb040 ffffff43107e8048 ffffff430e741ac0 1 59 
ffffff43106bb22e PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff43106bb040: ffffff01e9d86c40 [ ffffff01e9d86c40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43106bb22e, ffffff42e26ae200, 0) cv_wait_sig_swap+0x17(ffffff43106bb22e, ffffff42e26ae200) cv_waituntil_sig+0xbd(ffffff43106bb22e, ffffff42e26ae200, 0, 0) sigtimedwait+0x197(8047e4c, 8046d30, 0) sys_syscall32+0xff() ffffff43106bb780 ffffff43106bf038 ffffff430e70e100 1 59 ffffff4310baef74 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff43106bb780: ffffff01e985fae0 [ ffffff01e985fae0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig+0x185(ffffff4310baef74, ffffff430fdb8aa8) cte_get_event+0xb3(ffffff4310baef40, 0, 81f2338, 0, 0, 1) ctfs_endpoint_ioctl+0xf9(ffffff4310baef38, 63746502, 81f2338, ffffff42e2624cf8, fffffffffbcefc60, 0) ctfs_bu_ioctl+0x4b(ffffff43106f9700, 63746502, 81f2338, 102001, ffffff42e2624cf8, ffffff01e985fe68, 0) fop_ioctl+0x55(ffffff43106f9700, 63746502, 81f2338, 102001, ffffff42e2624cf8 , ffffff01e985fe68, 0) ioctl+0x9b(3, 63746502, 81f2338) sys_syscall32+0xff() ffffff4311cc24e0 ffffff43106bf038 ffffff430e7405c0 1 59 ffffff4311cc26ce PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311cc24e0: ffffff01e8352c50 [ ffffff01e8352c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311cc26ce, ffffff4311cc26d0, 0) cv_wait_sig_swap+0x17(ffffff4311cc26ce, ffffff4311cc26d0) cv_waituntil_sig+0xbd(ffffff4311cc26ce, ffffff4311cc26d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) sys_syscall32+0xff() ffffff4311e53060 ffffff4311faf0b0 ffffff4312046300 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/inet/ipmgmtd stack pointer for thread ffffff4311e53060: ffffff01e8d79d50 [ ffffff01e8d79d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fe96fe00, f5f00) doorfs32+0x180(0, 0, 0, fe96fe00, f5f00, a) sys_syscall32+0xff() ffffff4311fa1460 ffffff4311faf0b0 ffffff4311f528c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/inet/ipmgmtd stack pointer for thread ffffff4311fa1460: ffffff01e9d80d20 [ ffffff01e9d80d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383d098a0) shuttle_resume+0x2af(ffffff4383d098a0, fffffffffbd12ed0) door_return+0x3e0(fea6e720, 1bc, 0, 0, fea6ee00, f5f00) doorfs32+0x180(fea6e720, 1bc, 0, fea6ee00, f5f00, a) sys_syscall32+0xff() ffffff4311ef3440 ffffff4311faf0b0 ffffff4311e24e80 1 59 ffffff4311ef362e PC: _resume_from_idle+0xf1 CMD: /lib/inet/ipmgmtd stack pointer for thread ffffff4311ef3440: ffffff01e9d1edd0 [ ffffff01e9d1edd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311ef362e, ffffff4311ef3630, 0) cv_wait_sig_swap+0x17(ffffff4311ef362e, ffffff4311ef3630) pause+0x45() sys_syscall32+0xff() ffffff4311fa1800 ffffff4311fa20c0 ffffff430e711900 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/inet/netcfgd stack pointer for thread ffffff4311fa1800: ffffff01e9d18d50 [ ffffff01e9d18d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fed5ee00, f5f00) doorfs32+0x180(0, 0, 0, fed5ee00, f5f00, a) sys_syscall32+0xff() ffffff4311fa1ba0 ffffff4311fa20c0 ffffff4311f521c0 1 59 ffffff4311fa1d8e PC: _resume_from_idle+0xf1 CMD: /lib/inet/netcfgd stack pointer for thread ffffff4311fa1ba0: ffffff01e9d3bdd0 [ ffffff01e9d3bdd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311fa1d8e, 
ffffff4311fa1d90, 0) cv_wait_sig_swap+0x17(ffffff4311fa1d8e, ffffff4311fa1d90) pause+0x45() sys_syscall32+0xff() ffffff4383e1a480 ffffff4311e8d0a0 ffffff4321d33100 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4383e1a480: ffffff01eb958d50 [ ffffff01eb958d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fe2bee00, f5f00) doorfs32+0x180(0, 0, 0, fe2bee00, f5f00, a) sys_syscall32+0xff() ffffff431379a000 ffffff4311e8d0a0 ffffff431202d500 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff431379a000: ffffff01eb946d20 [ ffffff01eb946d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383e1a820) shuttle_resume+0x2af(ffffff4383e1a820, fffffffffbd12ed0) door_return+0x3e0(fe4bf960, 410, 0, 0, fe4bfe00, f5f00) doorfs32+0x180(fe4bf960, 410, 0, fe4bfe00, f5f00, a) sys_syscall32+0xff() ffffff4383e1a0e0 ffffff4311e8d0a0 ffffff430e7413c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4383e1a0e0: ffffff01eb95ed50 [ ffffff01eb95ed50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fe1aee00, f5f00) doorfs32+0x180(0, 0, 0, fe1aee00, f5f00, a) sys_syscall32+0xff() ffffff4311b7a120 ffffff4311e8d0a0 ffffff430e70bc00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4311b7a120: ffffff01e9c99d20 [ ffffff01e9c99d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311e9b080) shuttle_resume+0x2af(ffffff4311e9b080, fffffffffbd12ed0) door_return+0x3e0(fe7c0cf0, 78, 0, 0, fe7c0e00, f5f00) doorfs32+0x180(fe7c0cf0, 78, 0, fe7c0e00, f5f00, a) sys_syscall32+0xff() ffffff4311263b60 ffffff4311e8d0a0 ffffff430eb08d00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4311263b60: ffffff01e9c93d20 [ ffffff01e9c93d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311cb7000) shuttle_resume+0x2af(ffffff4311cb7000, fffffffffbd12ed0) door_return+0x3e0(fe9bed60, 18, 0, 0, fe9bee00, f5f00) doorfs32+0x180(fe9bed60, 18, 0, fe9bee00, f5f00, a) sys_syscall32+0xff() ffffff4383e59be0 ffffff4311e8d0a0 ffffff4321c375c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4383e59be0: ffffff01eb964d20 [ ffffff01eb964d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383e1a820) shuttle_resume+0x2af(ffffff4383e1a820, fffffffffbd12ed0) door_return+0x3e0(fe0afd60, 30, 0, 0, fe0afe00, f5f00) doorfs32+0x180(fe0afd60, 30, 0, fe0afe00, f5f00, a) sys_syscall32+0xff() ffffff4383e1abc0 ffffff4311e8d0a0 ffffff430e743080 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4383e1abc0: ffffff01eb662d20 [ ffffff01eb662d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311e9b7c0) shuttle_resume+0x2af(ffffff4311e9b7c0, fffffffffbd12ed0) door_return+0x3e0(fe5be960, 410, 0, 0, fe5bee00, f5f00) doorfs32+0x180(fe5be960, 410, 0, fe5bee00, f5f00, a) sys_syscall32+0xff() ffffff4383e59840 ffffff4311e8d0a0 ffffff4321d28580 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4383e59840: ffffff01eb96ad50 [ ffffff01eb96ad50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fdfb0e00, f5f00) doorfs32+0x180(0, 0, 0, fdfb0e00, f5f00, a) sys_syscall32+0xff() ffffff4311d4b0e0 ffffff4311e8d0a0 ffffff430eb07800 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4311d4b0e0: ffffff01e9ca5d20 [ 
ffffff01e9ca5d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311e9b7c0) shuttle_resume+0x2af(ffffff4311e9b7c0, fffffffffbd12ed0) door_return+0x3e0(fe8bfcf0, 78, 0, 0, fe8bfe00, f5f00) doorfs32+0x180(fe8bfcf0, 78, 0, fe8bfe00, f5f00, a) sys_syscall32+0xff() ffffff4383e1a820 ffffff4311e8d0a0 ffffff4321d2fe00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4383e1a820: ffffff01eb94cd20 [ ffffff01eb94cd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311263b60) shuttle_resume+0x2af(ffffff4311263b60, fffffffffbd12ed0) door_return+0x3e0(fe3c0cf0, 78, 0, 0, fe3c0e00, f5f00) doorfs32+0x180(fe3c0cf0, 78, 0, fe3c0e00, f5f00, a) sys_syscall32+0xff() ffffff4311d4b480 ffffff4311e8d0a0 ffffff4311e26380 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4311d4b480: ffffff01e9c9fd20 [ ffffff01e9c9fd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383e1a820) shuttle_resume+0x2af(ffffff4383e1a820, fffffffffbd12ed0) door_return+0x3e0(fe6c1cf0, 78, 0, 0, fe6c1e00, f5f00) doorfs32+0x180(fe6c1cf0, 78, 0, fe6c1e00, f5f00, a) sys_syscall32+0xff() ffffff4311e9b7c0 ffffff4311e8d0a0 ffffff4312045c00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4311e9b7c0: ffffff01e8d7fd20 [ ffffff01e8d7fd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311263b60) shuttle_resume+0x2af(ffffff4311263b60, fffffffffbd12ed0) door_return+0x3e0(feb9ecf0, 78, 0, 0, feb9ee00, f5f00) doorfs32+0x180(feb9ecf0, 78, 0, feb9ee00, f5f00, a) sys_syscall32+0xff() ffffff4311e9b080 ffffff4311e8d0a0 ffffff4311e24080 1 59 0 PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4311e9b080: ffffff01e9d2ad20 [ ffffff01e9d2ad20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff431192a820) shuttle_resume+0x2af(ffffff431192a820, fffffffffbd12ed0) door_return+0x3e0(fecfed60, 18, 0, 0, fecfee00, f5f00) doorfs32+0x180(fecfed60, 18, 0, fecfee00, f5f00, a) sys_syscall32+0xff() ffffff01e9d9ec40 fffffffffbc2eb80 0 0 60 ffffffffc0107d08 PC: _resume_from_idle+0xf1 THREAD: softmac_taskq_dispatch() stack pointer for thread ffffff01e9d9ec40: ffffff01e9d9eb60 [ ffffff01e9d9eb60 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffffffc0107d08, ffffffffc0107d00) softmac_taskq_dispatch+0x11d() thread_start+8() ffffff4311e9b420 ffffff4311e8d0a0 ffffff4311e24780 1 59 ffffff4311e9b60e PC: _resume_from_idle+0xf1 CMD: /sbin/dlmgmtd stack pointer for thread ffffff4311e9b420: ffffff01e9d24dd0 [ ffffff01e9d24dd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311e9b60e, ffffff4311e9b610, 0) cv_wait_sig_swap+0x17(ffffff4311e9b60e, ffffff4311e9b610) pause+0x45() sys_syscall32+0xff() ffffff01e990dc40 fffffffffbc2eb80 0 0 60 ffffff431e9aeb08 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01e990dc40: ffffff01e990db00 [ ffffff01e990db00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431e9aeb08, ffffff431e9aeaf8) i_mac_notify_thread+0xee(ffffff431e9aea08) thread_start+8() ffffff01e98adc40 fffffffffbc2eb80 0 0 60 ffffff4311e64240 PC: _resume_from_idle+0xf1 THREAD: ibd_async_work() stack pointer for thread ffffff01e98adc40: ffffff01e98adb30 [ ffffff01e98adb30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4311e64240, ffffff4311e64238) ibd_async_work+0x23a(ffffff4311e64000) thread_start+8() ffffff01e9f7fc40 fffffffffbc2eb80 0 0 60 ffffff431e9ab608 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01e9f7fc40: 
ffffff01e9f7fb00 [ ffffff01e9f7fb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431e9ab608, ffffff431e9ab5f8) i_mac_notify_thread+0xee(ffffff431e9ab508) thread_start+8() ffffff01e9f85c40 fffffffffbc2eb80 0 0 60 ffffff431e9a8108 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01e9f85c40: ffffff01e9f85b00 [ ffffff01e9f85b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431e9a8108, ffffff431e9a80f8) i_mac_notify_thread+0xee(ffffff431e9a8008) thread_start+8() ffffff01eb566c40 fffffffffbc2eb80 0 0 60 ffffff43181de2c0 PC: _resume_from_idle+0xf1 TASKQ: _C90300091340 stack pointer for thread ffffff01eb566c40: ffffff01eb566a80 [ ffffff01eb566a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de2c0, ffffff43181de2b0) taskq_thread_wait+0xbe(ffffff43181de290, ffffff43181de2b0, ffffff43181de2c0 , ffffff01eb566bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de290) thread_start+8() ffffff01eb56cc40 fffffffffbc2eb80 0 0 60 ffffff432552fd68 PC: _resume_from_idle+0xf1 TASKQ: _C903000BC0BC stack pointer for thread ffffff01eb56cc40: ffffff01eb56ca80 [ ffffff01eb56ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552fd68, ffffff432552fd58) taskq_thread_wait+0xbe(ffffff432552fd38, ffffff432552fd58, ffffff432552fd68 , ffffff01eb56cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552fd38) thread_start+8() ffffff01eb572c40 fffffffffbc2eb80 0 0 60 ffffff432552fc50 PC: _resume_from_idle+0xf1 TASKQ: eibnx_nexus_enum_tq stack pointer for thread ffffff01eb572c40: ffffff01eb572a80 [ ffffff01eb572a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552fc50, ffffff432552fc40) taskq_thread_wait+0xbe(ffffff432552fc20, ffffff432552fc40, ffffff432552fc50 , ffffff01eb572bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552fc20) thread_start+8() ffffff01eb578c40 fffffffffbc2eb80 0 0 60 ffffff431f8a4068 PC: _resume_from_idle+0xf1 THREAD: eibnx_create_eoib_node() stack pointer for thread ffffff01eb578c40: ffffff01eb578b30 [ ffffff01eb578b30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f8a4068, ffffff431f8a4060) eibnx_create_eoib_node+0xdb() thread_start+8() ffffff01eb57ec40 fffffffffbc2eb80 0 0 60 ffffff431e82d640 PC: _resume_from_idle+0xf1 THREAD: eibnx_port_monitor() stack pointer for thread ffffff01eb57ec40: ffffff01eb57ea10 [ ffffff01eb57ea10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431e82d640, ffffff431e82d638) eibnx_port_monitor+0x1f5(ffffff431e82c000) thread_start+8() ffffff01e8364c40 fffffffffbc2eb80 0 0 60 ffffff4310645640 PC: _resume_from_idle+0xf1 THREAD: eibnx_port_monitor() stack pointer for thread ffffff01e8364c40: ffffff01e8364a10 [ ffffff01e8364a10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310645640, ffffff4310645638) eibnx_port_monitor+0x1f5(ffffff4310644000) thread_start+8() ffffff01e8ec7c40 fffffffffbc2eb80 0 0 60 ffffff431f956640 PC: _resume_from_idle+0xf1 THREAD: eibnx_port_monitor() stack pointer for thread ffffff01e8ec7c40: ffffff01e8ec7a10 [ ffffff01e8ec7a10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff431f956640, ffffff431f956638) eibnx_port_monitor+0x1f5(ffffff431f955000) thread_start+8() ffffff01e9ed0c40 fffffffffbc2eb80 0 0 60 ffffff432ce23640 PC: _resume_from_idle+0xf1 THREAD: eibnx_port_monitor() stack pointer for thread ffffff01e9ed0c40: ffffff01e9ed0a10 [ ffffff01e9ed0a10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432ce23640, ffffff432ce23638) eibnx_port_monitor+0x1f5(ffffff432ce22000) 
thread_start+8() ffffff01eb454c40 fffffffffbc2eb80 0 0 99 ffffff4310fe6148 PC: _resume_from_idle+0xf1 THREAD: rdsv3_af_thr_worker() stack pointer for thread ffffff01eb454c40: ffffff01eb454b40 [ ffffff01eb454b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310fe6148, ffffff4310fe6140) rdsv3_af_thr_worker+0xd0(ffffff4310fe6140) thread_start+8() ffffff01eb498c40 fffffffffbc2eb80 0 0 99 ffffff43137b3008 PC: _resume_from_idle+0xf1 THREAD: rdsv3_af_thr_worker() stack pointer for thread ffffff01eb498c40: ffffff01eb498b40 [ ffffff01eb498b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43137b3008, ffffff43137b3000) rdsv3_af_thr_worker+0xd0(ffffff43137b3000) thread_start+8() ffffff01e9b18c40 fffffffffbc2eb80 0 0 99 ffffff4321cb9b48 PC: _resume_from_idle+0xf1 THREAD: rdsv3_af_thr_worker() stack pointer for thread ffffff01e9b18c40: ffffff01e9b18b40 [ ffffff01e9b18b40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cb9b48, ffffff4321cb9b40) rdsv3_af_thr_worker+0xd0(ffffff4321cb9b40) thread_start+8() ffffff01e9b1ec40 fffffffffbc2eb80 0 0 99 ffffff4321cb9b08 PC: _resume_from_idle+0xf1 THREAD: rdsv3_af_thr_worker() stack pointer for thread ffffff01e9b1ec40: ffffff01e9b1eb40 [ ffffff01e9b1eb40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cb9b08, ffffff4321cb9b00) rdsv3_af_thr_worker+0xd0(ffffff4321cb9b00) thread_start+8() ffffff01e98b3c40 fffffffffbc2eb80 0 0 60 ffffff432552f908 PC: _resume_from_idle+0xf1 TASKQ: rdsv3_krdsd stack pointer for thread ffffff01e98b3c40: ffffff01e98b3a80 [ ffffff01e98b3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f908, ffffff432552f8f8) taskq_thread_wait+0xbe(ffffff432552f8d8, ffffff432552f8f8, ffffff432552f908 , ffffff01e98b3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f8d8) thread_start+8() ffffff01e9f4dc40 fffffffffbc2eb80 0 0 60 ffffff432552f7f0 PC: _resume_from_idle+0xf1 TASKQ: idm_global_taskq stack pointer for thread ffffff01e9f4dc40: ffffff01e9f4da80 [ ffffff01e9f4da80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f7f0, ffffff432552f7e0) taskq_thread_wait+0xbe(ffffff432552f7c0, ffffff432552f7e0, ffffff432552f7f0 , ffffff01e9f4dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f7c0) thread_start+8() ffffff01e9f53c40 fffffffffbc2eb80 0 0 60 ffffffffc0134ee4 PC: _resume_from_idle+0xf1 THREAD: idm_wd_thread() stack pointer for thread ffffff01e9f53c40: ffffff01e9f53ae0 [ ffffff01e9f53ae0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffffffc0134ee4, ffffffffc0134ec0, 12a05f200, 989680, 0) cv_reltimedwait+0x51(ffffffffc0134ee4, ffffffffc0134ec0, 1f4, 4) idm_wd_thread+0x203(0) thread_start+8() ffffff01ebbe7c40 fffffffffbc2eb80 0 0 60 ffffff4321cd8a18 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01ebbe7c40: ffffff01ebbe7a80 [ ffffff01ebbe7a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8a18, ffffff4321cd8a08) taskq_thread_wait+0xbe(ffffff4321cd89e8, ffffff4321cd8a08, ffffff4321cd8a18 , ffffff01ebbe7bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd89e8) thread_start+8() ffffff01ebbf9c40 fffffffffbc2eb80 0 0 60 ffffff4321cd86d0 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_mptsas_event_taskq stack pointer for thread ffffff01ebbf9c40: ffffff01ebbf9a80 [ ffffff01ebbf9a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd86d0, ffffff4321cd86c0) taskq_thread_wait+0xbe(ffffff4321cd86a0, ffffff4321cd86c0, ffffff4321cd86d0 , ffffff01ebbf9bc0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff4321cd86a0) thread_start+8() ffffff01ebc0bc40 fffffffffbc2eb80 0 0 60 ffffff4321cd84a0 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_mptsas_dr_taskq stack pointer for thread ffffff01ebc0bc40: ffffff01ebc0ba80 [ ffffff01ebc0ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd84a0, ffffff4321cd8490) taskq_thread_wait+0xbe(ffffff4321cd8470, ffffff4321cd8490, ffffff4321cd84a0 , ffffff01ebc0bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8470) thread_start+8() ffffff01ebc1dc40 fffffffffbc2eb80 0 0 60 ffffff4310a13818 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc1dc40: ffffff01ebc1db50 [ ffffff01ebc1db50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a13818, ffffff4310a13820) mptsas_doneq_thread+0x10b(ffffff4310a13830) thread_start+8() ffffff01ebc29c40 fffffffffbc2eb80 0 0 60 ffffff4310a13858 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc29c40: ffffff01ebc29b50 [ ffffff01ebc29b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a13858, ffffff4310a13860) mptsas_doneq_thread+0x10b(ffffff4310a13870) thread_start+8() ffffff01ebc3bc40 fffffffffbc2eb80 0 0 60 ffffff4310a13898 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc3bc40: ffffff01ebc3bb50 [ ffffff01ebc3bb50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a13898, ffffff4310a138a0) mptsas_doneq_thread+0x10b(ffffff4310a138b0) thread_start+8() ffffff01ebc4dc40 fffffffffbc2eb80 0 0 60 ffffff4310a138d8 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc4dc40: ffffff01ebc4db50 [ ffffff01ebc4db50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a138d8, ffffff4310a138e0) mptsas_doneq_thread+0x10b(ffffff4310a138f0) thread_start+8() ffffff01ebc5fc40 fffffffffbc2eb80 0 0 60 ffffff4310a13918 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc5fc40: ffffff01ebc5fb50 [ ffffff01ebc5fb50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a13918, ffffff4310a13920) mptsas_doneq_thread+0x10b(ffffff4310a13930) thread_start+8() ffffff01ebc71c40 fffffffffbc2eb80 0 0 60 ffffff4310a13958 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc71c40: ffffff01ebc71b50 [ ffffff01ebc71b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a13958, ffffff4310a13960) mptsas_doneq_thread+0x10b(ffffff4310a13970) thread_start+8() ffffff01ebc83c40 fffffffffbc2eb80 0 0 60 ffffff4310a13998 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc83c40: ffffff01ebc83b50 [ ffffff01ebc83b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a13998, ffffff4310a139a0) mptsas_doneq_thread+0x10b(ffffff4310a139b0) thread_start+8() ffffff01ebc95c40 fffffffffbc2eb80 0 0 60 ffffff4310a139d8 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc95c40: ffffff01ebc95b50 [ ffffff01ebc95b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4310a139d8, ffffff4310a139e0) mptsas_doneq_thread+0x10b(ffffff4310a139f0) thread_start+8() ffffff01eb560c40 fffffffffbc2eb80 0 0 60 ffffff432552fe80 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01eb560c40: ffffff01eb560a80 [ ffffff01eb560a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552fe80, ffffff432552fe70) 
taskq_thread_wait+0xbe(ffffff432552fe50, ffffff432552fe70, ffffff432552fe80 , ffffff01eb560bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552fe50) thread_start+8() ffffff01eb4aac40 fffffffffbc2eb80 0 0 60 ffffff4361b02e88 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01eb4aac40: ffffff01eb4aaa80 [ ffffff01eb4aaa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02e88, ffffff4361b02e78) taskq_thread_wait+0xbe(ffffff4361b02e58, ffffff4361b02e78, ffffff4361b02e88 , ffffff01eb4aabc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b02e58) thread_start+8() ffffff01eb49ec40 fffffffffbc2eb80 0 0 60 ffffff432552f160 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01eb49ec40: ffffff01eb49ea80 [ ffffff01eb49ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f160, ffffff432552f150) taskq_thread_wait+0xbe(ffffff432552f130, ffffff432552f150, ffffff432552f160 , ffffff01eb49ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f130) thread_start+8() ffffff01eb4a4c40 fffffffffbc2eb80 0 0 60 ffffff432552f048 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01eb4a4c40: ffffff01eb4a4a80 [ ffffff01eb4a4a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff432552f048, ffffff432552f038) taskq_thread_wait+0xbe(ffffff432552f018, ffffff432552f038, ffffff432552f048 , ffffff01eb4a4bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff432552f018) thread_start+8() ffffff01ebbf3c40 fffffffffbc2eb80 0 0 60 ffffff4321cd87e8 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01ebbf3c40: ffffff01ebbf3a80 [ ffffff01ebbf3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd87e8, ffffff4321cd87d8) taskq_thread_wait+0xbe(ffffff4321cd87b8, ffffff4321cd87d8, ffffff4321cd87e8 , ffffff01ebbf3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd87b8) thread_start+8() ffffff01ebbffc40 fffffffffbc2eb80 0 0 60 ffffff4321cd85b8 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_mptsas_event_taskq stack pointer for thread ffffff01ebbffc40: ffffff01ebbffa80 [ ffffff01ebbffa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd85b8, ffffff4321cd85a8) taskq_thread_wait+0xbe(ffffff4321cd8588, ffffff4321cd85a8, ffffff4321cd85b8 , ffffff01ebbffbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8588) thread_start+8() ffffff01ebc11c40 fffffffffbc2eb80 0 0 60 ffffff4321cd8388 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_mptsas_dr_taskq stack pointer for thread ffffff01ebc11c40: ffffff01ebc11a80 [ ffffff01ebc11a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8388, ffffff4321cd8378) taskq_thread_wait+0xbe(ffffff4321cd8358, ffffff4321cd8378, ffffff4321cd8388 , ffffff01ebc11bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8358) thread_start+8() ffffff01ebc23c40 fffffffffbc2eb80 0 0 60 ffffff43113f9e18 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc23c40: ffffff01ebc23b50 [ ffffff01ebc23b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9e18, ffffff43113f9e20) mptsas_doneq_thread+0x10b(ffffff43113f9e30) thread_start+8() ffffff01ebc35c40 fffffffffbc2eb80 0 0 60 ffffff43113f9e58 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc35c40: ffffff01ebc35b50 [ ffffff01ebc35b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9e58, ffffff43113f9e60) mptsas_doneq_thread+0x10b(ffffff43113f9e70) 
thread_start+8() ffffff01ebc47c40 fffffffffbc2eb80 0 0 60 ffffff43113f9e98 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc47c40: ffffff01ebc47b50 [ ffffff01ebc47b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9e98, ffffff43113f9ea0) mptsas_doneq_thread+0x10b(ffffff43113f9eb0) thread_start+8() ffffff01ebc53c40 fffffffffbc2eb80 0 0 60 ffffff43113f9ed8 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc53c40: ffffff01ebc53b50 [ ffffff01ebc53b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9ed8, ffffff43113f9ee0) mptsas_doneq_thread+0x10b(ffffff43113f9ef0) thread_start+8() ffffff01ebc6bc40 fffffffffbc2eb80 0 0 60 ffffff43113f9f18 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc6bc40: ffffff01ebc6bb50 [ ffffff01ebc6bb50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9f18, ffffff43113f9f20) mptsas_doneq_thread+0x10b(ffffff43113f9f30) thread_start+8() ffffff01ebc7dc40 fffffffffbc2eb80 0 0 60 ffffff43113f9f58 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc7dc40: ffffff01ebc7db50 [ ffffff01ebc7db50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9f58, ffffff43113f9f60) mptsas_doneq_thread+0x10b(ffffff43113f9f70) thread_start+8() ffffff01ebc8fc40 fffffffffbc2eb80 0 0 60 ffffff43113f9f98 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc8fc40: ffffff01ebc8fb50 [ ffffff01ebc8fb50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9f98, ffffff43113f9fa0) mptsas_doneq_thread+0x10b(ffffff43113f9fb0) thread_start+8() ffffff01ebca1c40 fffffffffbc2eb80 0 0 60 ffffff43113f9fd8 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebca1c40: ffffff01ebca1b50 [ ffffff01ebca1b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43113f9fd8, ffffff43113f9fe0) mptsas_doneq_thread+0x10b(ffffff43113f9ff0) thread_start+8() ffffff01eb554c40 fffffffffbc2eb80 0 0 60 ffffff4321cd8158 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01eb554c40: ffffff01eb554a80 [ ffffff01eb554a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8158, ffffff4321cd8148) taskq_thread_wait+0xbe(ffffff4321cd8128, ffffff4321cd8148, ffffff4321cd8158 , ffffff01eb554bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8128) thread_start+8() ffffff01ebbedc40 fffffffffbc2eb80 0 0 60 ffffff4321cd8900 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01ebbedc40: ffffff01ebbeda80 [ ffffff01ebbeda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8900, ffffff4321cd88f0) taskq_thread_wait+0xbe(ffffff4321cd88d0, ffffff4321cd88f0, ffffff4321cd8900 , ffffff01ebbedbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd88d0) thread_start+8() ffffff01ebc05c40 fffffffffbc2eb80 0 0 60 ffffff43181de090 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_mptsas_event_taskq stack pointer for thread ffffff01ebc05c40: ffffff01ebc05a80 [ ffffff01ebc05a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de090, ffffff43181de080) taskq_thread_wait+0xbe(ffffff43181de060, ffffff43181de080, ffffff43181de090 , ffffff01ebc05bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de060) thread_start+8() ffffff01ebc17c40 fffffffffbc2eb80 0 0 60 ffffff4321cd8270 PC: _resume_from_idle+0xf1 TASKQ: 
mpt_sas_mptsas_dr_taskq stack pointer for thread ffffff01ebc17c40: ffffff01ebc17a80 [ ffffff01ebc17a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8270, ffffff4321cd8260) taskq_thread_wait+0xbe(ffffff4321cd8240, ffffff4321cd8260, ffffff4321cd8270 , ffffff01ebc17bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8240) thread_start+8() ffffff01ebc2fc40 fffffffffbc2eb80 0 0 60 ffffff430f833c18 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc2fc40: ffffff01ebc2fb50 [ ffffff01ebc2fb50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833c18, ffffff430f833c20) mptsas_doneq_thread+0x10b(ffffff430f833c30) thread_start+8() ffffff01ebc41c40 fffffffffbc2eb80 0 0 60 ffffff430f833c58 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc41c40: ffffff01ebc41b50 [ ffffff01ebc41b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833c58, ffffff430f833c60) mptsas_doneq_thread+0x10b(ffffff430f833c70) thread_start+8() ffffff01ebc59c40 fffffffffbc2eb80 0 0 60 ffffff430f833c98 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc59c40: ffffff01ebc59b50 [ ffffff01ebc59b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833c98, ffffff430f833ca0) mptsas_doneq_thread+0x10b(ffffff430f833cb0) thread_start+8() ffffff01ebc65c40 fffffffffbc2eb80 0 0 60 ffffff430f833cd8 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc65c40: ffffff01ebc65b50 [ ffffff01ebc65b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833cd8, ffffff430f833ce0) mptsas_doneq_thread+0x10b(ffffff430f833cf0) thread_start+8() ffffff01ebc77c40 fffffffffbc2eb80 0 0 60 ffffff430f833d18 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc77c40: ffffff01ebc77b50 [ ffffff01ebc77b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833d18, ffffff430f833d20) mptsas_doneq_thread+0x10b(ffffff430f833d30) thread_start+8() ffffff01ebc89c40 fffffffffbc2eb80 0 0 60 ffffff430f833d58 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc89c40: ffffff01ebc89b50 [ ffffff01ebc89b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833d58, ffffff430f833d60) mptsas_doneq_thread+0x10b(ffffff430f833d70) thread_start+8() ffffff01ebc9bc40 fffffffffbc2eb80 0 0 60 ffffff430f833d98 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebc9bc40: ffffff01ebc9bb50 [ ffffff01ebc9bb50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833d98, ffffff430f833da0) mptsas_doneq_thread+0x10b(ffffff430f833db0) thread_start+8() ffffff01ebca7c40 fffffffffbc2eb80 0 0 60 ffffff430f833dd8 PC: _resume_from_idle+0xf1 THREAD: mptsas_doneq_thread() stack pointer for thread ffffff01ebca7c40: ffffff01ebca7b50 [ ffffff01ebca7b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430f833dd8, ffffff430f833de0) mptsas_doneq_thread+0x10b(ffffff430f833df0) thread_start+8() ffffff01eb55ac40 fffffffffbc2eb80 0 0 60 ffffff4321cd8040 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01eb55ac40: ffffff01eb55aa80 [ ffffff01eb55aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8040, ffffff4321cd8030) taskq_thread_wait+0xbe(ffffff4321cd8010, ffffff4321cd8030, ffffff4321cd8040 , ffffff01eb55abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8010) 
thread_start+8() ffffff01ebb18c40 ffffff4375951038 ffffff4311a48540 2 0 ffffff4321ce1a10 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb18c40: ffffff01ebb18990 [ ffffff01ebb18990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1a10, ffffff4321ce1a00) taskq_thread_wait+0xbe(ffffff4321ce19e0, ffffff4321ce1a00, ffffff4321ce1a10 , ffffff01ebb18ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce19e0) thread_start+8() ffffff01eba3ac40 ffffff4375951038 ffffff430e70c300 2 99 ffffff4321cef378 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba3ac40: ffffff01eba3a990 [ ffffff01eba3a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef378, ffffff4321cef368) taskq_thread_wait+0xbe(ffffff4321cef348, ffffff4321cef368, ffffff4321cef378 , ffffff01eba3aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef348) thread_start+8() ffffff01ebaf4c40 ffffff4375951038 ffffff430eb07100 2 99 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebaf4c40: ffffff01ebaf4990 [ ffffff01ebaf4990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , ffffff01ebaf4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01eb758c40 ffffff4375951038 ffffff4321d2cc40 2 99 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb758c40: ffffff01eb758990 [ ffffff01eb758990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , ffffff01eb758ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01eb7eec40 ffffff4375951038 ffffff431380ac80 2 99 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb7eec40: ffffff01eb7ee990 [ ffffff01eb7ee990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , ffffff01eb7eead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01ebaeec40 ffffff4375951038 ffffff4312042340 2 99 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebaeec40: ffffff01ebaee990 [ ffffff01ebaee990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , ffffff01ebaeead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01ebafac40 ffffff4375951038 ffffff4311e27880 2 0 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebafac40: ffffff01ebafa990 [ ffffff01ebafa990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , ffffff01ebafaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01eba52c40 ffffff4375951038 ffffff4311f4f7c0 2 0 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba52c40: ffffff01eba52990 [ ffffff01eba52990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , 
ffffff01eba52ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01eba46c40 ffffff4375951038 ffffff430e743780 2 99 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba46c40: ffffff01eba46990 [ ffffff01eba46990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , ffffff01eba46ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01eba22c40 ffffff4375951038 ffffff4311a52800 2 99 ffffff4321ce1e70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba22c40: ffffff01eba22990 [ ffffff01eba22990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1e70, ffffff4321ce1e60) taskq_thread_wait+0xbe(ffffff4321ce1e40, ffffff4321ce1e60, ffffff4321ce1e70 , ffffff01eba22ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1e40) thread_start+8() ffffff01ebae8c40 ffffff4375951038 ffffff4375991f00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebae8c40: ffffff01ebae8990 [ ffffff01ebae8990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01ebae8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01ebad0c40 ffffff4375951038 ffffff4375993b00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebad0c40: ffffff01ebad0990 [ ffffff01ebad0990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01ebad0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eba94c40 ffffff4375951038 ffffff437598c140 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba94c40: ffffff01eba94990 [ ffffff01eba94990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eba94ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eba76c40 ffffff4375951038 ffffff437598ec00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba76c40: ffffff01eba76990 [ ffffff01eba76990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eba76ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eba58c40 ffffff4375951038 ffffff43759957c0 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba58c40: ffffff01eba58990 [ ffffff01eba58990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eba58ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eba0ac40 ffffff4375951038 ffffff43759973c0 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba0ac40: ffffff01eba0a990 [ ffffff01eba0a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) 
taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eba0aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb9d4c40 ffffff4375951038 ffffff4375979e80 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9d4c40: ffffff01eb9d4990 [ ffffff01eb9d4990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb9d4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01ebb78c40 ffffff4375951038 ffffff4375999040 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb78c40: ffffff01ebb78990 [ ffffff01ebb78990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01ebb78ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01ebb60c40 ffffff4375951038 ffffff4311a47e40 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb60c40: ffffff01ebb60990 [ ffffff01ebb60990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01ebb60ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01ebb48c40 ffffff4375951038 ffffff437599c140 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb48c40: ffffff01ebb48990 [ ffffff01ebb48990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01ebb48ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01ebb24c40 ffffff4375951038 ffffff437591e500 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb24c40: ffffff01ebb24990 [ ffffff01ebb24990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01ebb24ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01e9b54c40 ffffff4375951038 ffffff437591f300 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b54c40: ffffff01e9b54990 [ ffffff01e9b54990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01e9b54ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01e9b36c40 ffffff4375951038 ffffff4312046a00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b36c40: ffffff01e9b36990 [ ffffff01e9b36990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01e9b36ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb490c40 ffffff4375951038 ffffff43759265c0 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb490c40: ffffff01eb490990 [ ffffff01eb490990 _resume_from_idle+0xf1() ] 
swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb490ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb46cc40 ffffff4375951038 ffffff430eb09b00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb46cc40: ffffff01eb46c990 [ ffffff01eb46c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb46cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb590c40 ffffff4375951038 ffffff4375921800 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb590c40: ffffff01eb590990 [ ffffff01eb590990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb590ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01ebb0cc40 ffffff4375951038 ffffff4375922d00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb0cc40: ffffff01ebb0c990 [ ffffff01ebb0c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01ebb0cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eba34c40 ffffff4375951038 ffffff4375923b00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba34c40: ffffff01eba34990 [ ffffff01eba34990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eba34ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb5eac40 ffffff4375951038 ffffff43759288c0 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5eac40: ffffff01eb5ea990 [ ffffff01eb5ea990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb5eaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb449c40 ffffff4375951038 ffffff4321d2b740 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb449c40: ffffff01eb449990 [ ffffff01eb449990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb449ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb704c40 ffffff4375951038 ffffff430e706740 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb704c40: ffffff01eb704990 [ ffffff01eb704990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb704ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb716c40 ffffff4375951038 ffffff4312036ac0 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb716c40: 
ffffff01eb716990 [ ffffff01eb716990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb716ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb812c40 ffffff4375951038 ffffff431202ea00 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb812c40: ffffff01eb812990 [ ffffff01eb812990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb812ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01eb5d2c40 ffffff4375951038 ffffff4312043140 2 0 ffffff4321be3ed0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5d2c40: ffffff01eb5d2990 [ ffffff01eb5d2990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ed0, ffffff4321be3ec0) taskq_thread_wait+0xbe(ffffff4321be3ea0, ffffff4321be3ec0, ffffff4321be3ed0 , ffffff01eb5d2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3ea0) thread_start+8() ffffff01ebac4c40 ffffff4375951038 ffffff4311a51a00 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebac4c40: ffffff01ebac4990 [ ffffff01ebac4990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01ebac4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eba7cc40 ffffff4375951038 ffffff437598de00 2 99 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba7cc40: ffffff01eba7c990 [ ffffff01eba7c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eba7cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eba5ec40 ffffff4375951038 ffffff43759950c0 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba5ec40: ffffff01eba5e990 [ ffffff01eba5e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eba5ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eba16c40 ffffff4375951038 ffffff43759965c0 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba16c40: ffffff01eba16990 [ ffffff01eba16990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eba16ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb9ecc40 ffffff4375951038 ffffff4375990800 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9ecc40: ffffff01eb9ec990 [ ffffff01eb9ec990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb9ecad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb9cec40 ffffff4375951038 ffffff4375979780 2 0 ffffff43181de3d8 PC: 
_resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9cec40: ffffff01eb9ce990 [ ffffff01eb9ce990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb9cead0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb9b6c40 ffffff4375951038 ffffff437597b380 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9b6c40: ffffff01eb9b6990 [ ffffff01eb9b6990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb9b6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01ebb66c40 ffffff4375951038 ffffff437599a540 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb66c40: ffffff01ebb66990 [ ffffff01ebb66990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01ebb66ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01ebb4ec40 ffffff4375951038 ffffff437599ba40 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb4ec40: ffffff01ebb4e990 [ ffffff01ebb4e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01ebb4ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01ebb42c40 ffffff4375951038 ffffff430e7428c0 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb42c40: ffffff01ebb42990 [ ffffff01ebb42990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01ebb42ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01e9b60c40 ffffff4375951038 ffffff437591ec00 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b60c40: ffffff01e9b60990 [ ffffff01e9b60990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01e9b60ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01e9b42c40 ffffff4375951038 ffffff4375920100 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b42c40: ffffff01e9b42990 [ ffffff01e9b42990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01e9b42ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01e9b2ac40 ffffff4375951038 ffffff43759257c0 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b2ac40: ffffff01e9b2a990 [ ffffff01e9b2a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01e9b2aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() 
ffffff01eb48ac40 ffffff4375951038 ffffff430e717c80 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb48ac40: ffffff01eb48a990 [ ffffff01eb48a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb48aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb478c40 ffffff4375951038 ffffff4375927ac0 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb478c40: ffffff01eb478990 [ ffffff01eb478990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb478ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb5a2c40 ffffff4375951038 ffffff4375921100 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5a2c40: ffffff01eb5a2990 [ ffffff01eb5a2990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb5a2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb58ac40 ffffff4375951038 ffffff4375921f00 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb58ac40: ffffff01eb58a990 [ ffffff01eb58a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb58aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eba28c40 ffffff4375951038 ffffff4375924200 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba28c40: ffffff01eba28990 [ ffffff01eba28990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eba28ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb5b4c40 ffffff4375951038 ffffff4375924900 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5b4c40: ffffff01eb5b4990 [ ffffff01eb5b4990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb5b4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eba40c40 ffffff4375951038 ffffff4321d2be40 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba40c40: ffffff01eba40990 [ ffffff01eba40990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eba40ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb65cc40 ffffff4375951038 ffffff4311a4a840 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb65cc40: ffffff01eb65c990 [ ffffff01eb65c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb65cad0, 
ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb614c40 ffffff4375951038 ffffff4311a47740 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb614c40: ffffff01eb614990 [ ffffff01eb614990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb614ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb692c40 ffffff4375951038 ffffff4321d2c540 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb692c40: ffffff01eb692990 [ ffffff01eb692990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb692ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01eb6c8c40 ffffff4375951038 ffffff4311e26a80 2 0 ffffff43181de3d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb6c8c40: ffffff01eb6c8990 [ ffffff01eb6c8990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43181de3d8, ffffff43181de3c8) taskq_thread_wait+0xbe(ffffff43181de3a8, ffffff43181de3c8, ffffff43181de3d8 , ffffff01eb6c8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff43181de3a8) thread_start+8() ffffff01ebb12c40 ffffff4375951038 ffffff4375922600 2 99 ffffff4321cd8d60 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb12c40: ffffff01ebb12990 [ ffffff01ebb12990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8d60, ffffff4321cd8d50) taskq_thread_wait+0xbe(ffffff4321cd8d30, ffffff4321cd8d50, ffffff4321cd8d60 , ffffff01ebb12ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8d30) thread_start+8() ffffff01ebb00c40 ffffff4375951038 ffffff4375923400 2 99 ffffff4321cd8d60 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb00c40: ffffff01ebb00990 [ ffffff01ebb00990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8d60, ffffff4321cd8d50) taskq_thread_wait+0xbe(ffffff4321cd8d30, ffffff4321cd8d50, ffffff4321cd8d60 , ffffff01ebb00ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8d30) thread_start+8() ffffff01eb5bac40 ffffff4375951038 ffffff43120378c0 2 99 ffffff4321cd8d60 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5bac40: ffffff01eb5ba990 [ ffffff01eb5ba990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8d60, ffffff4321cd8d50) taskq_thread_wait+0xbe(ffffff4321cd8d30, ffffff4321cd8d50, ffffff4321cd8d60 , ffffff01eb5baad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8d30) thread_start+8() ffffff01eb5e4c40 ffffff4375951038 ffffff4321d2b040 2 0 ffffff4321cd8d60 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5e4c40: ffffff01eb5e4990 [ ffffff01eb5e4990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8d60, ffffff4321cd8d50) taskq_thread_wait+0xbe(ffffff4321cd8d30, ffffff4321cd8d50, ffffff4321cd8d60 , ffffff01eb5e4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8d30) thread_start+8() ffffff01eba4cc40 ffffff4375951038 ffffff430e744c80 2 99 ffffff4321cd8d60 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba4cc40: ffffff01eba4c990 [ ffffff01eba4c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8d60, ffffff4321cd8d50) 
taskq_thread_wait+0xbe(ffffff4321cd8d30, ffffff4321cd8d50, ffffff4321cd8d60 , ffffff01eba4cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8d30) thread_start+8() ffffff01ebb30c40 ffffff4375951038 ffffff437599c840 2 99 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb30c40: ffffff01ebb30990 [ ffffff01ebb30990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01ebb30ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01e9b5ac40 ffffff4375951038 ffffff430eb09400 2 0 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b5ac40: ffffff01e9b5a990 [ ffffff01e9b5a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01e9b5aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01e9b3cc40 ffffff4375951038 ffffff4375920800 2 0 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b3cc40: ffffff01e9b3c990 [ ffffff01e9b3c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01e9b3cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01eb472c40 ffffff4375951038 ffffff43759281c0 2 99 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb472c40: ffffff01eb472990 [ ffffff01eb472990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01eb472ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01eb596c40 ffffff4375951038 ffffff430e706040 2 0 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb596c40: ffffff01eb596990 [ ffffff01eb596990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01eb596ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01ebb06c40 ffffff4375951038 ffffff4311e25c80 2 0 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb06c40: ffffff01ebb06990 [ ffffff01ebb06990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01ebb06ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01eba2ec40 ffffff4375951038 ffffff430e71fc00 2 99 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba2ec40: ffffff01eba2e990 [ ffffff01eba2e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01eba2ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01eb5c0c40 ffffff4375951038 ffffff4311a50500 2 99 ffffff4321cef030 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5c0c40: ffffff01eb5c0990 [ ffffff01eb5c0990 _resume_from_idle+0xf1() 
] swtch+0x141() cv_wait+0x70(ffffff4321cef030, ffffff4321cef020) taskq_thread_wait+0xbe(ffffff4321cef000, ffffff4321cef020, ffffff4321cef030 , ffffff01eb5c0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef000) thread_start+8() ffffff01e9b4ec40 ffffff4375951038 ffffff43120355c0 2 99 ffffff4321cefe68 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b4ec40: ffffff01e9b4e990 [ ffffff01e9b4e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefe68, ffffff4321cefe58) taskq_thread_wait+0xbe(ffffff4321cefe38, ffffff4321cefe58, ffffff4321cefe68 , ffffff01e9b4ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefe38) thread_start+8() ffffff01e9b30c40 ffffff4375951038 ffffff43759250c0 2 99 ffffff4321cefe68 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b30c40: ffffff01e9b30990 [ ffffff01e9b30990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefe68, ffffff4321cefe58) taskq_thread_wait+0xbe(ffffff4321cefe38, ffffff4321cefe58, ffffff4321cefe68 , ffffff01e9b30ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefe38) thread_start+8() ffffff01eb47ec40 ffffff4375951038 ffffff43759273c0 2 99 ffffff4321cefe68 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb47ec40: ffffff01eb47e990 [ ffffff01eb47e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefe68, ffffff4321cefe58) taskq_thread_wait+0xbe(ffffff4321cefe38, ffffff4321cefe58, ffffff4321cefe68 , ffffff01eb47ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefe38) thread_start+8() ffffff01eb5a8c40 ffffff4375951038 ffffff430e70d800 2 0 ffffff4321cefe68 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5a8c40: ffffff01eb5a8990 [ ffffff01eb5a8990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefe68, ffffff4321cefe58) taskq_thread_wait+0xbe(ffffff4321cefe38, ffffff4321cefe58, ffffff4321cefe68 , ffffff01eb5a8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefe38) thread_start+8() ffffff01eb59cc40 ffffff4375951038 ffffff4311a50c00 2 99 ffffff4321cefe68 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb59cc40: ffffff01eb59c990 [ ffffff01eb59c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefe68, ffffff4321cefe58) taskq_thread_wait+0xbe(ffffff4321cefe38, ffffff4321cefe58, ffffff4321cefe68 , ffffff01eb59cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefe38) thread_start+8() ffffff01eba1cc40 ffffff4375951038 ffffff4375995ec0 2 0 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba1cc40: ffffff01eba1c990 [ ffffff01eba1c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01eba1cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01eb9f8c40 ffffff4375951038 ffffff437598c840 2 99 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9f8c40: ffffff01eb9f8990 [ ffffff01eb9f8990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01eb9f8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01eb9dac40 ffffff4375951038 ffffff4375979080 2 0 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread 
ffffff01eb9dac40: ffffff01eb9da990 [ ffffff01eb9da990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01eb9daad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01eb9c8c40 ffffff4375951038 ffffff437597ac80 2 99 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9c8c40: ffffff01eb9c8990 [ ffffff01eb9c8990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01eb9c8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01eb9b0c40 ffffff4375951038 ffffff437597ba80 2 0 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9b0c40: ffffff01eb9b0990 [ ffffff01eb9b0990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01eb9b0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01ebb72c40 ffffff4375951038 ffffff4375999e40 2 99 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb72c40: ffffff01ebb72990 [ ffffff01ebb72990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01ebb72ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01ebb54c40 ffffff4375951038 ffffff437599b340 2 99 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb54c40: ffffff01ebb54990 [ ffffff01ebb54990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01ebb54ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01ebb3cc40 ffffff4375951038 ffffff437597c880 2 99 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb3cc40: ffffff01ebb3c990 [ ffffff01ebb3c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01ebb3cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01ebb2ac40 ffffff4375951038 ffffff437591d700 2 0 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb2ac40: ffffff01ebb2a990 [ ffffff01ebb2a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01ebb2aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01e9b48c40 ffffff4375951038 ffffff437591fa00 2 99 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b48c40: ffffff01e9b48990 [ ffffff01e9b48990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01e9b48ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01e9b24c40 ffffff4375951038 ffffff4375925ec0 2 0 ffffff4321cef5a8 
PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9b24c40: ffffff01e9b24990 [ ffffff01e9b24990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01e9b24ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01eb484c40 ffffff4375951038 ffffff4375926cc0 2 0 ffffff4321cef5a8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb484c40: ffffff01eb484990 [ ffffff01eb484990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef5a8, ffffff4321cef598) taskq_thread_wait+0xbe(ffffff4321cef578, ffffff4321cef598, ffffff4321cef5a8 , ffffff01eb484ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef578) thread_start+8() ffffff01e9f91c40 ffffff4375951038 ffffff430e73e200 2 99 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9f91c40: ffffff01e9f91990 [ ffffff01e9f91990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01e9f91ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01ebab8c40 ffffff4375951038 ffffff4375989740 2 0 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebab8c40: ffffff01ebab8990 [ ffffff01ebab8990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01ebab8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01eba82c40 ffffff4375951038 ffffff437598e500 2 0 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba82c40: ffffff01eba82990 [ ffffff01eba82990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01eba82ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01eba64c40 ffffff4375951038 ffffff4375990100 2 0 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba64c40: ffffff01eba64990 [ ffffff01eba64990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01eba64ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01eba10c40 ffffff4375951038 ffffff4375996cc0 2 0 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba10c40: ffffff01eba10990 [ ffffff01eba10990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01eba10ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01eb9fec40 ffffff4375951038 ffffff43759981c0 2 99 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9fec40: ffffff01eb9fe990 [ ffffff01eb9fe990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01eb9fead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() 
ffffff01eb9e6c40 ffffff4375951038 ffffff4311a51300 2 99 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9e6c40: ffffff01eb9e6990 [ ffffff01eb9e6990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01eb9e6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01eb9c2c40 ffffff4375951038 ffffff4311a4f000 2 99 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9c2c40: ffffff01eb9c2990 [ ffffff01eb9c2990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01eb9c2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01ebb7ec40 ffffff4375951038 ffffff437597c180 2 99 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb7ec40: ffffff01ebb7e990 [ ffffff01ebb7e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01ebb7ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01ebb5ac40 ffffff4375951038 ffffff437599ac40 2 0 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb5ac40: ffffff01ebb5a990 [ ffffff01ebb5a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01ebb5aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01ebb36c40 ffffff4375951038 ffffff437591d000 2 0 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb36c40: ffffff01ebb36990 [ ffffff01ebb36990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01ebb36ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01ebb1ec40 ffffff4375951038 ffffff437591de00 2 99 ffffff4321cd8b30 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb1ec40: ffffff01ebb1e990 [ ffffff01ebb1e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cd8b30, ffffff4321cd8b20) taskq_thread_wait+0xbe(ffffff4321cd8b00, ffffff4321cd8b20, ffffff4321cd8b30 , ffffff01ebb1ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cd8b00) thread_start+8() ffffff01e9ff7c40 ffffff4375951038 ffffff4375982cc0 2 99 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9ff7c40: ffffff01e9ff7990 [ ffffff01e9ff7990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01e9ff7ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01e9fe5c40 ffffff4375951038 ffffff4375983ac0 2 99 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fe5c40: ffffff01e9fe5990 [ ffffff01e9fe5990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01e9fe5ad0, 
ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01e9fc7c40 ffffff4375951038 ffffff4375986580 2 99 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fc7c40: ffffff01e9fc7990 [ ffffff01e9fc7990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01e9fc7ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01e9fa9c40 ffffff4375951038 ffffff43759848c0 2 0 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fa9c40: ffffff01e9fa9990 [ ffffff01e9fa9990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01e9fa9ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01e9f9dc40 ffffff4375951038 ffffff4375991100 2 99 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9f9dc40: ffffff01e9f9d990 [ ffffff01e9f9d990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01e9f9dad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01ebae2c40 ffffff4375951038 ffffff4375992600 2 0 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebae2c40: ffffff01ebae2990 [ ffffff01ebae2990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01ebae2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01eba88c40 ffffff4375951038 ffffff437598d700 2 0 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba88c40: ffffff01eba88990 [ ffffff01eba88990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01eba88ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01eba70c40 ffffff4375951038 ffffff437598f300 2 99 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba70c40: ffffff01eba70990 [ ffffff01eba70990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01eba70ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01eba04c40 ffffff4375951038 ffffff4375997ac0 2 0 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba04c40: ffffff01eba04990 [ ffffff01eba04990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01eba04ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01eb9e0c40 ffffff4375951038 ffffff43759988c0 2 99 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9e0c40: ffffff01eb9e0990 [ ffffff01eb9e0990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) 
taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01eb9e0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01eb9bcc40 ffffff4375951038 ffffff437597a580 2 0 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9bcc40: ffffff01eb9bc990 [ ffffff01eb9bc990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01eb9bcad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01ebb6cc40 ffffff4375951038 ffffff4375999740 2 99 ffffff4321cef7d8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebb6cc40: ffffff01ebb6c990 [ ffffff01ebb6c990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef7d8, ffffff4321cef7c8) taskq_thread_wait+0xbe(ffffff4321cef7a8, ffffff4321cef7c8, ffffff4321cef7d8 , ffffff01ebb6cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef7a8) thread_start+8() ffffff01e9ff1c40 ffffff4375951038 ffffff43759833c0 2 99 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9ff1c40: ffffff01e9ff1990 [ ffffff01e9ff1990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01e9ff1ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01e9fd9c40 ffffff4375951038 ffffff4375985080 2 0 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fd9c40: ffffff01e9fd9990 [ ffffff01e9fd9990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01e9fd9ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01e9fbbc40 ffffff4375951038 ffffff4375987380 2 0 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fbbc40: ffffff01e9fbb990 [ ffffff01e9fbb990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01e9fbbad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01e9fafc40 ffffff4375951038 ffffff4375988180 2 0 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fafc40: ffffff01e9faf990 [ ffffff01e9faf990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01e9fafad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01e9f97c40 ffffff4375951038 ffffff430e738500 2 99 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9f97c40: ffffff01e9f97990 [ ffffff01e9f97990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01e9f97ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01ebadcc40 ffffff4375951038 ffffff4375992d00 2 0 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebadcc40: ffffff01ebadc990 [ ffffff01ebadc990 _resume_from_idle+0xf1() 
] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01ebadcad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01ebabec40 ffffff4375951038 ffffff4375989040 2 99 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebabec40: ffffff01ebabe990 [ ffffff01ebabe990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01ebabead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01ebaacc40 ffffff4375951038 ffffff437598a540 2 0 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebaacc40: ffffff01ebaac990 [ ffffff01ebaac990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01ebaacad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01ebaa0c40 ffffff4375951038 ffffff437598b340 2 99 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebaa0c40: ffffff01ebaa0990 [ ffffff01ebaa0990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01ebaa0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01eba8ec40 ffffff4375951038 ffffff437598d000 2 0 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba8ec40: ffffff01eba8e990 [ ffffff01eba8e990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01eba8ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01eba6ac40 ffffff4375951038 ffffff437598fa00 2 99 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba6ac40: ffffff01eba6a990 [ ffffff01eba6a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01eba6aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01eb9f2c40 ffffff4375951038 ffffff4375994900 2 99 ffffff4321cef8f0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb9f2c40: ffffff01eb9f2990 [ ffffff01eb9f2990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef8f0, ffffff4321cef8e0) taskq_thread_wait+0xbe(ffffff4321cef8c0, ffffff4321cef8e0, ffffff4321cef8f0 , ffffff01eb9f2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef8c0) thread_start+8() ffffff01ea03fc40 ffffff4375951038 ffffff437597d800 2 0 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea03fc40: ffffff01ea03f990 [ ffffff01ea03f990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01ea03fad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01ea027c40 ffffff4375951038 ffffff437597fb00 2 99 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread 
ffffff01ea027c40: ffffff01ea027990 [ ffffff01ea027990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01ea027ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01ea00fc40 ffffff4375951038 ffffff43759810c0 2 0 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea00fc40: ffffff01ea00f990 [ ffffff01ea00f990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01ea00fad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01e9ffdc40 ffffff4375951038 ffffff43759825c0 2 99 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9ffdc40: ffffff01e9ffd990 [ ffffff01e9ffd990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01e9ffdad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01e9fd3c40 ffffff4375951038 ffffff4375985780 2 99 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fd3c40: ffffff01e9fd3990 [ ffffff01e9fd3990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01e9fd3ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01e9fc1c40 ffffff4375951038 ffffff4375986c80 2 0 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fc1c40: ffffff01e9fc1990 [ ffffff01e9fc1990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01e9fc1ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01e9fa3c40 ffffff4375951038 ffffff4375988880 2 99 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fa3c40: ffffff01e9fa3990 [ ffffff01e9fa3990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01e9fa3ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01e9f8bc40 ffffff4375951038 ffffff4375991800 2 99 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9f8bc40: ffffff01e9f8b990 [ ffffff01e9f8b990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01e9f8bad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01ebacac40 ffffff4375951038 ffffff4375994200 2 99 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebacac40: ffffff01ebaca990 [ ffffff01ebaca990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01ebacaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01ebab2c40 ffffff4375951038 ffffff4375989e40 2 0 ffffff4321be32c8 
PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebab2c40: ffffff01ebab2990 [ ffffff01ebab2990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01ebab2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01ebaa6c40 ffffff4375951038 ffffff437598ac40 2 0 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebaa6c40: ffffff01ebaa6990 [ ffffff01ebaa6990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01ebaa6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01eba9ac40 ffffff4375951038 ffffff437598ba40 2 99 ffffff4321be32c8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eba9ac40: ffffff01eba9a990 [ ffffff01eba9a990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be32c8, ffffff4321be32b8) taskq_thread_wait+0xbe(ffffff4321be3298, ffffff4321be32b8, ffffff4321be32c8 , ffffff01eba9aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3298) thread_start+8() ffffff01ea0abc40 ffffff4375951038 ffffff437596de80 2 0 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0abc40: ffffff01ea0ab990 [ ffffff01ea0ab990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea0abad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ea093c40 ffffff4375951038 ffffff4375970180 2 0 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea093c40: ffffff01ea093990 [ ffffff01ea093990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea093ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ea07bc40 ffffff4375951038 ffffff4375975700 2 99 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea07bc40: ffffff01ea07b990 [ ffffff01ea07b990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea07bad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ea063c40 ffffff4375951038 ffffff430e739300 2 99 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea063c40: ffffff01ea063990 [ ffffff01ea063990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea063ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ea04bc40 ffffff4375951038 ffffff437597d100 2 99 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea04bc40: ffffff01ea04b990 [ ffffff01ea04b990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea04bad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() 
ffffff01ea033c40 ffffff4375951038 ffffff437597ed00 2 0 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea033c40: ffffff01ea033990 [ ffffff01ea033990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea033ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ea021c40 ffffff4375951038 ffffff4375980200 2 0 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea021c40: ffffff01ea021990 [ ffffff01ea021990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea021ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ea009c40 ffffff4375951038 ffffff43759817c0 2 0 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea009c40: ffffff01ea009990 [ ffffff01ea009990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ea009ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01e9febc40 ffffff4375951038 ffffff42a9c48780 2 99 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9febc40: ffffff01e9feb990 [ ffffff01e9feb990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01e9febad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01e9fcdc40 ffffff4375951038 ffffff4375985e80 2 99 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fcdc40: ffffff01e9fcd990 [ ffffff01e9fcd990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01e9fcdad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01e9fb5c40 ffffff4375951038 ffffff4375987a80 2 99 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fb5c40: ffffff01e9fb5990 [ ffffff01e9fb5990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01e9fb5ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ebad6c40 ffffff4375951038 ffffff4375993400 2 0 ffffff4321cefc38 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ebad6c40: ffffff01ebad6990 [ ffffff01ebad6990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cefc38, ffffff4321cefc28) taskq_thread_wait+0xbe(ffffff4321cefc08, ffffff4321cefc28, ffffff4321cefc38 , ffffff01ebad6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cefc08) thread_start+8() ffffff01ea0d5c40 ffffff4375951038 ffffff4375972c40 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0d5c40: ffffff01ea0d5990 [ ffffff01ea0d5990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea0d5ad0, 
ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea0c9c40 ffffff4375951038 ffffff4375973a40 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0c9c40: ffffff01ea0c9990 [ ffffff01ea0c9990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea0c9ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea0b7c40 ffffff4375951038 ffffff437596d780 2 0 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0b7c40: ffffff01ea0b7990 [ ffffff01ea0b7990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea0b7ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea09fc40 ffffff4375951038 ffffff437596f380 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea09fc40: ffffff01ea09f990 [ ffffff01ea09f990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea09fad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea087c40 ffffff4375951038 ffffff4375974840 2 0 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea087c40: ffffff01ea087990 [ ffffff01ea087990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea087ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea06fc40 ffffff4375951038 ffffff4375976500 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea06fc40: ffffff01ea06f990 [ ffffff01ea06f990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea06fad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea057c40 ffffff4375951038 ffffff4375977a00 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea057c40: ffffff01ea057990 [ ffffff01ea057990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea057ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea045c40 ffffff4375951038 ffffff437597df00 2 0 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea045c40: ffffff01ea045990 [ ffffff01ea045990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea045ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea02dc40 ffffff4375951038 ffffff437597f400 2 0 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea02dc40: ffffff01ea02d990 [ ffffff01ea02d990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) 
taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea02dad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea015c40 ffffff4375951038 ffffff4375980900 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea015c40: ffffff01ea015990 [ ffffff01ea015990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea015ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea003c40 ffffff4375951038 ffffff4375981ec0 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea003c40: ffffff01ea003990 [ ffffff01ea003990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01ea003ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01e9fdfc40 ffffff4375951038 ffffff43759841c0 2 99 ffffff4321be3610 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01e9fdfc40: ffffff01e9fdf990 [ ffffff01e9fdf990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3610, ffffff4321be3600) taskq_thread_wait+0xbe(ffffff4321be35e0, ffffff4321be3600, ffffff4321be3610 , ffffff01e9fdfad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be35e0) thread_start+8() ffffff01ea0e7c40 ffffff4375951038 ffffff4375971740 2 99 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0e7c40: ffffff01ea0e7990 [ ffffff01ea0e7990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea0e7ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea0e1c40 ffffff4375951038 ffffff4375971e40 2 99 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0e1c40: ffffff01ea0e1990 [ ffffff01ea0e1990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea0e1ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea0dbc40 ffffff4375951038 ffffff4375972540 2 99 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0dbc40: ffffff01ea0db990 [ ffffff01ea0db990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea0dbad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea0cfc40 ffffff4375951038 ffffff4375973340 2 0 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0cfc40: ffffff01ea0cf990 [ ffffff01ea0cf990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea0cfad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea0c3c40 ffffff4375951038 ffffff4375974140 2 0 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0c3c40: ffffff01ea0c3990 [ ffffff01ea0c3990 
_resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea0c3ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea0b1c40 ffffff4375951038 ffffff437596e580 2 0 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0b1c40: ffffff01ea0b1990 [ ffffff01ea0b1990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea0b1ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea099c40 ffffff4375951038 ffffff437596fa80 2 0 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea099c40: ffffff01ea099990 [ ffffff01ea099990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea099ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea081c40 ffffff4375951038 ffffff4375975000 2 0 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea081c40: ffffff01ea081990 [ ffffff01ea081990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea081ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea069c40 ffffff4375951038 ffffff4375976c00 2 0 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea069c40: ffffff01ea069990 [ ffffff01ea069990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea069ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea051c40 ffffff4375951038 ffffff4375978100 2 99 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea051c40: ffffff01ea051990 [ ffffff01ea051990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea051ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea039c40 ffffff4375951038 ffffff437597e600 2 99 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea039c40: ffffff01ea039990 [ ffffff01ea039990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea039ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea01bc40 ffffff4375951038 ffffff4375978800 2 99 ffffff4321be3728 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea01bc40: ffffff01ea01b990 [ ffffff01ea01b990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3728, ffffff4321be3718) taskq_thread_wait+0xbe(ffffff4321be36f8, ffffff4321be3718, ffffff4321be3728 , ffffff01ea01bad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be36f8) thread_start+8() ffffff01ea05dc40 ffffff4375951038 ffffff4375977300 2 99 ffffff4321be3958 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer 
for thread ffffff01ea05dc40: ffffff01ea05d990 [ ffffff01ea05d990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3958, ffffff4321be3948) taskq_thread_wait+0xbe(ffffff4321be3928, ffffff4321be3948, ffffff4321be3958 , ffffff01ea05dad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3928) thread_start+8() ffffff01ea075c40 ffffff4375951038 ffffff4375975e00 2 99 ffffff4321be3a70 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea075c40: ffffff01ea075990 [ ffffff01ea075990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3a70, ffffff4321be3a60) taskq_thread_wait+0xbe(ffffff4321be3a40, ffffff4321be3a60, ffffff4321be3a70 , ffffff01ea075ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3a40) thread_start+8() ffffff01ea08dc40 ffffff4375951038 ffffff4375970880 2 99 ffffff4321be3b88 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea08dc40: ffffff01ea08d990 [ ffffff01ea08d990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3b88, ffffff4321be3b78) taskq_thread_wait+0xbe(ffffff4321be3b58, ffffff4321be3b78, ffffff4321be3b88 , ffffff01ea08dad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3b58) thread_start+8() ffffff01ea0a5c40 ffffff4375951038 ffffff437596ec80 2 99 ffffff4321be3ca0 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0a5c40: ffffff01ea0a5990 [ ffffff01ea0a5990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3ca0, ffffff4321be3c90) taskq_thread_wait+0xbe(ffffff4321be3c70, ffffff4321be3c90, ffffff4321be3ca0 , ffffff01ea0a5ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3c70) thread_start+8() ffffff01ea0bdc40 ffffff4375951038 ffffff437596d080 2 99 ffffff4321be3db8 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01ea0bdc40: ffffff01ea0bd990 [ ffffff01ea0bd990 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be3db8, ffffff4321be3da8) taskq_thread_wait+0xbe(ffffff4321be3d88, ffffff4321be3da8, ffffff4321be3db8 , ffffff01ea0bdad0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be3d88) thread_start+8() ffffff01eb5c6c40 ffffff4375951038 ffffff430e73cd00 2 99 ffffff42ec342900 PC: _resume_from_idle+0xf1 CMD: zpool-tank stack pointer for thread ffffff01eb5c6c40: ffffff01eb5c6a50 [ ffffff01eb5c6a50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42ec342900, ffffff42ec3428f8) spa_thread+0x1db(ffffff42ec342040) thread_start+8() ffffff01ea30bc40 fffffffffbc2eb80 0 0 60 ffffff4361b02280 PC: _resume_from_idle+0xf1 TASKQ: zfs_vn_rele_taskq stack pointer for thread ffffff01ea30bc40: ffffff01ea30ba80 [ ffffff01ea30ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02280, ffffff4361b02270) taskq_thread_wait+0xbe(ffffff4361b02250, ffffff4361b02270, ffffff4361b02280 , ffffff01ea30bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b02250) thread_start+8() ffffff01ea207c40 fffffffffbc2eb80 0 0 60 ffffff4374fb47d4 PC: _resume_from_idle+0xf1 THREAD: txg_quiesce_thread() stack pointer for thread ffffff01ea207c40: ffffff01ea207ad0 [ ffffff01ea207ad0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4374fb47d4, ffffff4374fb4798) txg_thread_wait+0xaf(ffffff4374fb4790, ffffff01ea207bc0, ffffff4374fb47d4, 0 ) txg_quiesce_thread+0x106(ffffff4374fb4600) thread_start+8() ffffff01eb5dbc40 fffffffffbc2eb80 0 0 60 ffffff4374fb47d0 PC: _resume_from_idle+0xf1 THREAD: txg_sync_thread() stack pointer for thread ffffff01eb5dbc40: ffffff01eb5dba10 [ ffffff01eb5dba10 
_resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff4374fb47d0, ffffff4374fb4798, 115c1f500, 989680, 0) cv_timedwait+0x5c(ffffff4374fb47d0, ffffff4374fb4798, 1bdbef) txg_thread_wait+0x5f(ffffff4374fb4790, ffffff01eb5dbbc0, ffffff4374fb47d0, 1d2) txg_sync_thread+0xfd(ffffff4374fb4600) thread_start+8() ffffff01eb548c40 fffffffffbc2eb80 0 0 60 fffffffffbd26d18 PC: _resume_from_idle+0xf1 THREAD: crypto_bufcall_service() stack pointer for thread ffffff01eb548c40: ffffff01eb548b70 [ ffffff01eb548b70 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbd26d18, fffffffffbd26d00) crypto_bufcall_service+0x8d() thread_start+8() ffffff01ea12fc40 fffffffffbc2eb80 0 0 60 ffffff4321be33e0 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01ea12fc40: ffffff01ea12fa80 [ ffffff01ea12fa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321be33e0, ffffff4321be33d0) taskq_thread_wait+0xbe(ffffff4321be33b0, ffffff4321be33d0, ffffff4321be33e0 , ffffff01ea12fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321be33b0) thread_start+8() ffffff01ea299c40 fffffffffbc2eb80 0 0 60 ffffff4321cef148 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01ea299c40: ffffff01ea299a80 [ ffffff01ea299a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef148, ffffff4321cef138) taskq_thread_wait+0xbe(ffffff4321cef118, ffffff4321cef138, ffffff4321cef148 , ffffff01ea299bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef118) thread_start+8() ffffff43137c5060 ffffff431e905000 ffffff43120371c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/pfexecd stack pointer for thread ffffff43137c5060: ffffff01eb6d4d50 [ ffffff01eb6d4d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fecafe00, f5f00) doorfs32+0x180(0, 0, 0, fecafe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4311da4be0 ffffff431e905000 ffffff430e740cc0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/pfexecd stack pointer for thread ffffff4311da4be0: ffffff01e9280d20 [ ffffff01e9280d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383fe4500) shuttle_resume+0x2af(ffffff4383fe4500, fffffffffbd12ed0) door_return+0x3e0(fedae960, c, 0, 0, fedaee00, f5f00) doorfs32+0x180(fedae960, c, 0, fedaee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4312026100 ffffff431e905000 ffffff43120363c0 1 59 ffffff43120262ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/pfexecd stack pointer for thread ffffff4312026100: ffffff01e9836d90 [ ffffff01e9836d90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43120262ee, ffffff42e26ae440, 0) cv_wait_sig_swap+0x17(ffffff43120262ee, ffffff42e26ae440) sigsuspend+0xf9(8047db0) _sys_sysenter_post_swapgs+0x149() ffffff4383cbb740 ffffff431f92c018 ffffff4312032400 1 59 ffffff4383cbb92e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cbb740: ffffff01ea2edc50 [ ffffff01ea2edc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cbb92e, ffffff4383cbb930, 0) cv_wait_sig_swap+0x17(ffffff4383cbb92e, ffffff4383cbb930) cv_waituntil_sig+0xbd(ffffff4383cbb92e, ffffff4383cbb930, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cbb3a0 ffffff431f92c018 ffffff430e73a800 1 59 ffffff4383cbb58e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cbb3a0: ffffff01ea2f3c50 [ ffffff01ea2f3c50 _resume_from_idle+0xf1() ] 
swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cbb58e, ffffff4383cbb590, 0) cv_wait_sig_swap+0x17(ffffff4383cbb58e, ffffff4383cbb590) cv_waituntil_sig+0xbd(ffffff4383cbb58e, ffffff4383cbb590, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cbb000 ffffff431f92c018 ffffff430e709840 1 59 ffffff4383cbb1ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cbb000: ffffff01eb653c40 [ ffffff01eb653c40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cbb1ee, ffffff42e26ae340, 0) cv_wait_sig_swap+0x17(ffffff4383cbb1ee, ffffff42e26ae340) cv_waituntil_sig+0xbd(ffffff4383cbb1ee, ffffff42e26ae340, 0, 0) sigtimedwait+0x197(febaffac, febafea0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cb63c0 ffffff431f92c018 ffffff4311a49340 1 59 ffffff4383cb65ae PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cb63c0: ffffff01ea2e1c50 [ ffffff01ea2e1c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cb65ae, ffffff4383cb65b0, 0) cv_wait_sig_swap+0x17(ffffff4383cb65ae, ffffff4383cb65b0) cv_waituntil_sig+0xbd(ffffff4383cb65ae, ffffff4383cb65b0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cb6020 ffffff431f92c018 ffffff4312044000 1 59 ffffff4383cb620e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cb6020: ffffff01eb6e6c50 [ ffffff01eb6e6c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cb620e, ffffff4383cb6210, 0) cv_wait_sig_swap+0x17(ffffff4383cb620e, ffffff4383cb6210) cv_waituntil_sig+0xbd(ffffff4383cb620e, ffffff4383cb6210, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff01eb7e5c40 fffffffffbc2eb80 0 0 60 ffffff4383e37118 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01eb7e5c40: ffffff01eb7e5b00 [ ffffff01eb7e5b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4383e37118, ffffff4383e37108) i_mac_notify_thread+0xee(ffffff4383e37018) thread_start+8() ffffff01eb918c40 fffffffffbc2eb80 0 0 60 ffffff4383d36b10 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01eb918c40: ffffff01eb918b00 [ ffffff01eb918b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4383d36b10, ffffff4383d36b00) i_mac_notify_thread+0xee(ffffff4383d36a10) thread_start+8() ffffff01eb91ec40 fffffffffbc2eb80 0 0 60 ffffff4393eb4b20 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01eb91ec40: ffffff01eb91eb00 [ ffffff01eb91eb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4393eb4b20, ffffff4393eb4b10) i_mac_notify_thread+0xee(ffffff4393eb4a20) thread_start+8() ffffff4383cb0b20 ffffff431f92c018 ffffff430e73a100 1 59 ffffff4383cb0d0e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cb0b20: ffffff01eb6ecc50 [ ffffff01eb6ecc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cb0d0e, ffffff4383cb0d10, 0) cv_wait_sig_swap+0x17(ffffff4383cb0d0e, ffffff4383cb0d10) cv_waituntil_sig+0xbd(ffffff4383cb0d0e, ffffff4383cb0d10, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cb0780 ffffff431f92c018 ffffff430e70a700 1 59 ffffff4383cb096e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for 
thread ffffff4383cb0780: ffffff01ea201c50 [ ffffff01ea201c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cb096e, ffffff4383cb0970, 0) cv_wait_sig_swap+0x17(ffffff4383cb096e, ffffff4383cb0970) cv_waituntil_sig+0xbd(ffffff4383cb096e, ffffff4383cb0970, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cfb0c0 ffffff431f92c018 ffffff4312033900 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cfb0c0: ffffff01ea292d20 [ ffffff01ea292d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff01ea177c40) shuttle_resume+0x2af(ffffff01ea177c40, fffffffffbd12ed0) door_return+0x3e0(fd6cec3c, 4, 0, 0, fd6cee00, f5f00) doorfs32+0x180(fd6cec3c, 4, 0, fd6cee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4311263080 ffffff431f92c018 ffffff431380c180 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4311263080: ffffff01ea304d50 [ ffffff01ea304d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(fe52ed24, 4, 0, 0, fe52ee00, f5f00) doorfs32+0x180(fe52ed24, 4, 0, fe52ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383d140e0 ffffff4383ad70a8 ffffff430eb08600 1 59 ffffff4383d142ce PC: _resume_from_idle+0xf1 CMD: devfsadmd stack pointer for thread ffffff4383d140e0: ffffff01e9ccfc50 [ ffffff01e9ccfc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383d142ce, ffffff4383d142d0, 0) cv_wait_sig_swap+0x17(ffffff4383d142ce, ffffff4383d142d0) cv_waituntil_sig+0xbd(ffffff4383d142ce, ffffff4383d142d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383d11be0 ffffff4383ad70a8 ffffff430e743e80 1 59 ffffff4383d11dce PC: _resume_from_idle+0xf1 CMD: devfsadmd stack pointer for thread ffffff4383d11be0: ffffff01ea16fc60 [ ffffff01ea16fc60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383d11dce, ffffff4383d11dd0, 7734bead, 1 , 4) cv_waituntil_sig+0xfa(ffffff4383d11dce, ffffff4383d11dd0, ffffff01ea16fe10, 2) lwp_park+0x15e(fec4ef38, 0) syslwp_park+0x63(0, fec4ef38, 0) _sys_sysenter_post_swapgs+0x149() ffffff43846f1ae0 ffffff4383ad70a8 ffffff4312032b00 1 59 0 PC: _resume_from_idle+0xf1 CMD: devfsadmd stack pointer for thread ffffff43846f1ae0: ffffff01eb99ad50 [ ffffff01eb99ad50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fe7d0e00, f5f00) doorfs32+0x180(0, 0, 0, fe7d0e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383d11840 ffffff4383ad70a8 ffffff4321c36ec0 1 59 0 PC: _resume_from_idle+0xf1 CMD: devfsadmd stack pointer for thread ffffff4383d11840: ffffff01eb647d20 [ ffffff01eb647d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff01ebd74c40) shuttle_resume+0x2af(ffffff01ebd74c40, fffffffffbd12ed0) door_return+0x3e0(fe9ced8c, 4, 0, 0, fe9cee00, f5f00) doorfs32+0x180(fe9ced8c, 4, 0, fe9cee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383d114a0 ffffff4383ad70a8 ffffff4312040740 1 59 ffffff4383d1168e PC: _resume_from_idle+0xf1 CMD: devfsadmd stack pointer for thread ffffff4383d114a0: ffffff01e9cb1c50 [ ffffff01e9cb1c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383d1168e, ffffff4383d11690, 0) cv_wait_sig_swap+0x17(ffffff4383d1168e, ffffff4383d11690) cv_waituntil_sig+0xbd(ffffff4383d1168e, ffffff4383d11690, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) 
_sys_sysenter_post_swapgs+0x149() ffffff4383d14820 ffffff4383ad70a8 ffffff4312039c80 1 59 ffffff4383d14a0e PC: _resume_from_idle+0xf1 CMD: devfsadmd stack pointer for thread ffffff4383d14820: ffffff01e9cabdd0 [ ffffff01e9cabdd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383d14a0e, ffffff4383d14a10, 0) cv_wait_sig_swap+0x17(ffffff4383d14a0e, ffffff4383d14a10) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff43113f1b80 ffffff431f92c018 ffffff431380c880 1 59 ffffff43113f1d6e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff43113f1b80: ffffff01eb769c50 [ ffffff01eb769c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43113f1d6e, ffffff43113f1d70, 0) cv_wait_sig_swap+0x17(ffffff43113f1d6e, ffffff43113f1d70) cv_waituntil_sig+0xbd(ffffff43113f1d6e, ffffff43113f1d70, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cadb40 ffffff431f92c018 ffffff4321d33800 1 59 ffffff4383cadd2e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cadb40: ffffff01eb76fc50 [ ffffff01eb76fc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cadd2e, ffffff4383cadd30, 0) cv_wait_sig_swap+0x17(ffffff4383cadd2e, ffffff4383cadd30) cv_waituntil_sig+0xbd(ffffff4383cadd2e, ffffff4383cadd30, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cad7a0 ffffff431f92c018 ffffff4313813340 1 59 ffffff4383cad98e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cad7a0: ffffff01eb6e0c50 [ ffffff01eb6e0c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cad98e, ffffff4383cad990, 0) cv_wait_sig_swap+0x17(ffffff4383cad98e, ffffff4383cad990) cv_waituntil_sig+0xbd(ffffff4383cad98e, ffffff4383cad990, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383ca7b60 ffffff431f92c018 ffffff4312033200 1 59 ffffff4383ca7d4e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383ca7b60: ffffff01e9cbdc50 [ ffffff01e9cbdc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383ca7d4e, ffffff4383ca7d50, 0) cv_wait_sig_swap+0x17(ffffff4383ca7d4e, ffffff4383ca7d50) cv_waituntil_sig+0xbd(ffffff4383ca7d4e, ffffff4383ca7d50, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383ca7080 ffffff431f92c018 ffffff4312030800 1 59 ffffff4383ca726e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383ca7080: ffffff01eb77bc50 [ ffffff01eb77bc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383ca726e, ffffff4383ca7270, 0) cv_wait_sig_swap+0x17(ffffff4383ca726e, ffffff4383ca7270) cv_waituntil_sig+0xbd(ffffff4383ca726e, ffffff4383ca7270, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383ca3b80 ffffff431f92c018 ffffff42a9c471c0 1 59 ffffff4383ca3d6e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383ca3b80: ffffff01eb781c50 [ ffffff01eb781c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383ca3d6e, ffffff4383ca3d70, 0) cv_wait_sig_swap+0x17(ffffff4383ca3d6e, ffffff4383ca3d70) cv_waituntil_sig+0xbd(ffffff4383ca3d6e, ffffff4383ca3d70, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) 
_sys_sysenter_post_swapgs+0x149() ffffff4383ca30a0 ffffff431f92c018 ffffff431204a400 1 59 ffffff4383ca328e PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383ca30a0: ffffff01e9cd5c50 [ ffffff01e9cd5c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383ca328e, ffffff4383ca3290, 0) cv_wait_sig_swap+0x17(ffffff4383ca328e, ffffff4383ca3290) cv_waituntil_sig+0xbd(ffffff4383ca328e, ffffff4383ca3290, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff01ea153c40 fffffffffbc2eb80 0 0 60 ffffff42e2959a30 PC: _resume_from_idle+0xf1 THREAD: irm_balance_thread() stack pointer for thread ffffff01ea153c40: ffffff01ea153b70 [ ffffff01ea153b70 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff42e2959a30, ffffff42e2959a20) irm_balance_thread+0x50(ffffff42e2959a00) thread_start+8() ffffff01ea177c40 fffffffffbc2eb80 0 0 60 fffffffffbca9500 PC: _resume_from_idle+0xf1 THREAD: log_event_deliver() stack pointer for thread ffffff01ea177c40: ffffff01ea177b50 [ ffffff01ea177b50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbca9500, fffffffffbca94f0) log_event_deliver+0x1b3() thread_start+8() ffffff4383cbbae0 ffffff431f92c018 ffffff4313809780 1 59 ffffff4383cbbcce PC: _resume_from_idle+0xf1 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff4383cbbae0: ffffff01ea2e7dd0 [ ffffff01ea2e7dd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cbbcce, ffffff4383cbbcd0, 0) cv_wait_sig_swap+0x17(ffffff4383cbbcce, ffffff4383cbbcd0) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff4383cfbba0 ffffff431192f070 ffffff4312030f00 1 59 ffffff4321cd6222 PC: _resume_from_idle+0xf1 CMD: /usr/lib/power/powerd stack pointer for thread ffffff4383cfbba0: ffffff01eb6dac50 [ ffffff01eb6dac50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4321cd6222, ffffff4321cd61e8, 0) cv_wait_sig_swap+0x17(ffffff4321cd6222, ffffff4321cd61e8) cv_timedwait_sig_hrtime+0x35(ffffff4321cd6222, ffffff4321cd61e8, ffffffffffffffff) poll_common+0x50c(fedbefa8, 1, 0, 0) pollsys+0xe3(fedbefa8, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cfb800 ffffff431192f070 ffffff4313811040 1 59 ffffff4383cfb9ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/power/powerd stack pointer for thread ffffff4383cfb800: ffffff01eb775d90 [ ffffff01eb775d90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cfb9ee, ffffff42e26ae480, 0) cv_wait_sig_swap+0x17(ffffff4383cfb9ee, ffffff42e26ae480) sigsuspend+0xf9(fecaef90) _sys_sysenter_post_swapgs+0x149() ffffff4383cfb460 ffffff431192f070 ffffff430e712ec0 1 59 ffffff4321cf5d0a PC: _resume_from_idle+0xf1 CMD: /usr/lib/power/powerd stack pointer for thread ffffff4383cfb460: ffffff01e9060c50 [ ffffff01e9060c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4321cf5d0a, ffffff4321cf5cd0, 0) cv_wait_sig_swap+0x17(ffffff4321cf5d0a, ffffff4321cf5cd0) cv_timedwait_sig_hrtime+0x35(ffffff4321cf5d0a, ffffff4321cf5cd0, ffffffffffffffff) poll_common+0x50c(febaffa8, 1, 0, 0) pollsys+0xe3(febaffa8, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383ca37e0 ffffff431192f070 ffffff430e716780 1 59 ffffff4383ca39ce PC: _resume_from_idle+0xf1 CMD: /usr/lib/power/powerd stack pointer for thread ffffff4383ca37e0: ffffff01eb787d90 [ ffffff01eb787d90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383ca39ce, ffffff42e26ae480, 0) cv_wait_sig_swap+0x17(ffffff4383ca39ce, 
ffffff42e26ae480) sigsuspend+0xf9(8047ed0) _sys_sysenter_post_swapgs+0x149() ffffff43137b80a0 ffffff431e903008 ffffff4312035cc0 1 59 ffffff437594b8b2 PC: _resume_from_idle+0xf1 CMD: /usr/lib/dbus-daemon --system stack pointer for thread ffffff43137b80a0: ffffff01e9cc3c50 [ ffffff01e9cc3c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff437594b8b2, ffffff437594b878, 0) cv_wait_sig_swap+0x17(ffffff437594b8b2, ffffff437594b878) cv_timedwait_sig_hrtime+0x35(ffffff437594b8b2, ffffff437594b878, ffffffffffffffff) poll_common+0x50c(80d8638, 3, 0, 0) pollsys+0xe3(80d8638, 3, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137d33c0 ffffff430e81b048 ffffff430e73bf00 3 60 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/zones/zonestatd stack pointer for thread ffffff43137d33c0: ffffff01ea0f9d50 [ ffffff01ea0f9d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fea6ee00, f5f00) doorfs32+0x180(0, 0, 0, fea6ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff43137ddc00 ffffff430e81b048 ffffff431202ce00 3 60 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/zones/zonestatd stack pointer for thread ffffff43137ddc00: ffffff01eb63bd20 [ ffffff01eb63bd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff43137dd860) shuttle_resume+0x2af(ffffff43137dd860, fffffffffbd12ed0) door_return+0x3e0(0, 0, 0, 0, fedaee00, f5f00) doorfs32+0x180(0, 0, 0, fedaee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff43137dd860 ffffff430e81b048 ffffff4312031600 3 60 ffffff430e81b4c0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/zones/zonestatd stack pointer for thread ffffff43137dd860: ffffff01eb641d80 [ ffffff01eb641d80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig+0x185(ffffff430e81b4c0, fffffffffbd12ed0) door_unref+0x94() doorfs32+0xf5(0, 0, 0, 0, 0, 8) _sys_sysenter_post_swapgs+0x149() ffffff4311cac780 ffffff430e81b048 ffffff43120347c0 3 60 ffffff4311cac96e PC: _resume_from_idle+0xf1 CMD: /usr/lib/zones/zonestatd stack pointer for thread ffffff4311cac780: ffffff01eb64dc50 [ ffffff01eb64dc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311cac96e, ffffff4311cac970, 0) cv_wait_sig_swap+0x17(ffffff4311cac96e, ffffff4311cac970) cv_waituntil_sig+0xbd(ffffff4311cac96e, ffffff4311cac970, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137cc780 ffffff430e81b048 ffffff430e746180 3 60 ffffff43137cc96e PC: _resume_from_idle+0xf1 CMD: /usr/lib/zones/zonestatd stack pointer for thread ffffff43137cc780: ffffff01ea0ffdd0 [ ffffff01ea0ffdd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43137cc96e, ffffff43137cc970, 0) cv_wait_sig_swap+0x17(ffffff43137cc96e, ffffff43137cc970) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff4383d83780 ffffff430e853040 ffffff430e70d100 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/picl/picld stack pointer for thread ffffff4383d83780: ffffff01eb80ad20 [ ffffff01eb80ad20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383cb63c0) shuttle_resume+0x2af(ffffff4383cb63c0, fffffffffbd12ed0) door_return+0x3e0(fea4ed10, 0, 0, 0, fea4ee00, f5f00) doorfs32+0x180(fea4ed10, 0, 0, fea4ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383fbe880 ffffff430e853040 ffffff431202c000 1 59 ffffff4383fbea6e PC: _resume_from_idle+0xf1 CMD: /usr/lib/picl/picld stack pointer for thread ffffff4383fbe880: ffffff01eb982c50 [ ffffff01eb982c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383fbea6e, 
ffffff4383fbea70, 0) cv_wait_sig_swap+0x17(ffffff4383fbea6e, ffffff4383fbea70) cv_waituntil_sig+0xbd(ffffff4383fbea6e, ffffff4383fbea70, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383fbe4e0 ffffff430e853040 ffffff4312049d00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/picl/picld stack pointer for thread ffffff4383fbe4e0: ffffff01eb988d50 [ ffffff01eb988d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fe83fe00, f5f00) doorfs32+0x180(0, 0, 0, fe83fe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383d02020 ffffff430e853040 ffffff4313814840 1 59 ffffff4383d0220e PC: _resume_from_idle+0xf1 CMD: /usr/lib/picl/picld stack pointer for thread ffffff4383d02020: ffffff01eb876dd0 [ ffffff01eb876dd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383d0220e, ffffff4383d02210, 0) cv_wait_sig_swap+0x17(ffffff4383d0220e, ffffff4383d02210) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff4311e53b40 ffffff4321bd8030 ffffff4321d36200 1 59 ffffff4321516b42 PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald-addon-network-discovery stack pointer for thread ffffff4311e53b40: ffffff01ea31dc50 [ ffffff01ea31dc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4321516b42, ffffff4321516b08, 0) cv_wait_sig_swap+0x17(ffffff4321516b42, ffffff4321516b08) cv_timedwait_sig_hrtime+0x35(ffffff4321516b42, ffffff4321516b08, ffffffffffffffff) poll_common+0x50c(8069778, 2, 0, 0) pollsys+0xe3(8069778, 2, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137bcb60 ffffff4321006020 ffffff4312047100 1 59 ffffff4383d1577a PC: _resume_from_idle+0xf1 CMD: hald-runner stack pointer for thread ffffff43137bcb60: ffffff01ea2f9c50 [ ffffff01ea2f9c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383d1577a, ffffff4383d15740, 0) cv_wait_sig_swap+0x17(ffffff4383d1577a, ffffff4383d15740) cv_timedwait_sig_hrtime+0x35(ffffff4383d1577a, ffffff4383d15740, ffffffffffffffff) poll_common+0x50c(8064790, 1, 0, 0) pollsys+0xe3(8064790, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137f0800 ffffff4383fe2008 ffffff4321d2d340 1 59 ffffff4321ac21ca PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald-addon-cpufreq stack pointer for thread ffffff43137f0800: ffffff01ea145c50 [ ffffff01ea145c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4321ac21ca, ffffff4321ac2190, 0) cv_wait_sig_swap+0x17(ffffff4321ac21ca, ffffff4321ac2190) cv_timedwait_sig_hrtime+0x35(ffffff4321ac21ca, ffffff4321ac2190, ffffffffffffffff) poll_common+0x50c(80696f8, 2, 0, 0) pollsys+0xe3(80696f8, 2, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137b87e0 ffffff438484a030 ffffff42a9c478c0 1 59 ffffff43615a859a PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald-addon-acpi stack pointer for thread ffffff43137b87e0: ffffff01ea237c60 [ ffffff01ea237c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff43615a859a, ffffff43615a8560, 391f2e369d1f9f, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff43615a859a, ffffff43615a8560, 391f2e369d1f9f) poll_common+0x50c(8066fa8, 1, ffffff01ea237e40, 0) pollsys+0xe3(8066fa8, 1, 80477c8, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383d833e0 ffffff4321006020 ffffff430e73b100 1 59 ffffff4383c20ef2 PC: _resume_from_idle+0xf1 CMD: hald-runner stack pointer for thread ffffff4383d833e0: ffffff01eb87cc50 [ ffffff01eb87cc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383c20ef2, 
ffffff4383c20eb8, 0) cv_wait_sig_swap+0x17(ffffff4383c20ef2, ffffff4383c20eb8) cv_timedwait_sig_hrtime+0x35(ffffff4383c20ef2, ffffff4383c20eb8, ffffffffffffffff) poll_common+0x50c(8067558, 2, 0, 0) pollsys+0xe3(8067558, 2, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383debb40 ffffff4311fa90b8 ffffff4311f505c0 1 59 ffffff4321516642 PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald --daemon=yes stack pointer for thread ffffff4383debb40: ffffff01eb893c50 [ ffffff01eb893c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4321516642, ffffff4321516608, 0) cv_wait_sig_swap+0x17(ffffff4321516642, ffffff4321516608) cv_timedwait_sig_hrtime+0x35(ffffff4321516642, ffffff4321516608, ffffffffffffffff) poll_common+0x50c(808a968, 1, 0, 0) pollsys+0xe3(808a968, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43893a4140 ffffff4311fa90b8 ffffff4313811740 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald --daemon=yes stack pointer for thread ffffff43893a4140: ffffff01e9ce7d50 [ ffffff01e9ce7d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fe2aee00, f5f00) doorfs32+0x180(0, 0, 0, fe2aee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4311cb3020 ffffff4311fa90b8 ffffff4313809e80 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald --daemon=yes stack pointer for thread ffffff4311cb3020: ffffff01eb90cd20 [ ffffff01eb90cd20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383ca7080) shuttle_resume+0x2af(ffffff4383ca7080, fffffffffbd12ed0) door_return+0x3e0(fe60ecdc, 4, 0, 0, fe60ee00, f5f00) doorfs32+0x180(fe60ecdc, 4, 0, fe60ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383dd8800 ffffff4311fa90b8 ffffff4321c367c0 1 59 ffffff4383dd89ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald --daemon=yes stack pointer for thread ffffff4383dd8800: ffffff01eb912c50 [ ffffff01eb912c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383dd89ee, ffffff4383dd89f0, 0) cv_wait_sig_swap+0x17(ffffff4383dd89ee, ffffff4383dd89f0) cv_waituntil_sig+0xbd(ffffff4383dd89ee, ffffff4383dd89f0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383d02760 ffffff4311fa90b8 ffffff4312047800 1 59 ffffff4383ce753a PC: _resume_from_idle+0xf1 CMD: /usr/lib/hal/hald --daemon=yes stack pointer for thread ffffff4383d02760: ffffff01eb864c50 [ ffffff01eb864c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383ce753a, ffffff4383ce7500, 0) cv_wait_sig_swap+0x17(ffffff4383ce753a, ffffff4383ce7500) cv_timedwait_sig_hrtime+0x35(ffffff4383ce753a, ffffff4383ce7500, ffffffffffffffff) poll_common+0x50c(84494a0, b, 0, 0) pollsys+0xe3(84494a0, b, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383ddfb60 ffffff4383a93088 ffffff4321c38ac0 1 59 ffffff4383ddfd4e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddfb60: ffffff01eb8abcc0 [ ffffff01eb8abcc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383ddfd4e, ffffff4383ddfd50, 1bf08eae3b , 1, 4) cv_waituntil_sig+0xfa(ffffff4383ddfd4e, ffffff4383ddfd50, ffffff01eb8abe70, 2) nanosleep+0x19f(fe96ef88, fe96ef80) _sys_sysenter_post_swapgs+0x149() ffffff4383ddf7c0 ffffff4383a93088 ffffff431380ba80 1 59 ffffff4383a93500 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddf7c0: ffffff01eb870d80 [ ffffff01eb870d80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig+0x185(ffffff4383a93500, fffffffffbd12ed0) 
door_unref+0x94() doorfs32+0xf5(0, 0, 0, 0, 0, 8) _sys_sysenter_post_swapgs+0x149() ffffff43842a9420 ffffff4383a93088 ffffff4321d31a00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff43842a9420: ffffff01eb840d50 [ ffffff01eb840d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fde68e00, f5f00) doorfs32+0x180(0, 0, 0, fde68e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4311dad3e0 ffffff4383a93088 ffffff4321d2f000 1 59 ffffff4311dad5ce PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4311dad3e0: ffffff01eb970cc0 [ ffffff01eb970cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4311dad5ce, ffffff4311dad5d0, 22ecb25be2a , 1, 4) cv_waituntil_sig+0xfa(ffffff4311dad5ce, ffffff4311dad5d0, ffffff01eb970e70, 2) nanosleep+0x19f(fdd69f78, fdd69f70) _sys_sysenter_post_swapgs+0x149() ffffff4311cb3760 ffffff4383a93088 ffffff4321d35b00 1 59 ffffff4311cb394e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4311cb3760: ffffff01eb899cc0 [ ffffff01eb899cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4311cb394e, ffffff4311cb3950, 34630b89d15 , 1, 4) cv_waituntil_sig+0xfa(ffffff4311cb394e, ffffff4311cb3950, ffffff01eb899e70, 2) nanosleep+0x19f(fdc6af78, fdc6af70) _sys_sysenter_post_swapgs+0x149() ffffff43845508a0 ffffff4383a93088 ffffff4312041540 1 59 ffffff4384550a8e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff43845508a0: ffffff01eaa28cc0 [ ffffff01eaa28cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4384550a8e, ffffff4384550a90, 14f46b0241 , 1, 4) cv_waituntil_sig+0xfa(ffffff4384550a8e, ffffff4384550a90, ffffff01eaa28e70, 2) nanosleep+0x19f(fd373f38, fd373f30) _sys_sysenter_post_swapgs+0x149() ffffff4384550500 ffffff4383a93088 ffffff4312038e80 1 59 ffffff43845506ee PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4384550500: ffffff01eb056cc0 [ ffffff01eb056cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff43845506ee, ffffff43845506f0, 3466c5366bf , 1, 4) cv_waituntil_sig+0xfa(ffffff43845506ee, ffffff43845506f0, ffffff01eb056e70, 2) nanosleep+0x19f(fd274f78, fd274f70) _sys_sysenter_post_swapgs+0x149() ffffff4383d0a140 ffffff4383a93088 ffffff431202e300 1 59 ffffff4383d0a32e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383d0a140: ffffff01eb6aac60 [ ffffff01eb6aac60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383d0a32e, ffffff4383d0a330, 2540bd364, 1, 4) cv_waituntil_sig+0xfa(ffffff4383d0a32e, ffffff4383d0a330, ffffff01eb6aae10, 2) lwp_park+0x15e(fd175f18, 0) syslwp_park+0x63(0, fd175f18, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383ddbb80 ffffff4383a93088 ffffff4312048f00 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddbb80: ffffff01eb8c8d20 [ ffffff01eb8c8d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383d09160) shuttle_resume+0x2af(ffffff4383d09160, fffffffffbd12ed0) door_return+0x3e0(fe4ddc90, da, 0, 0, fe561e00, f5f00) doorfs32+0x180(fe4ddc90, da, 0, fe561e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383ddb7e0 ffffff4383a93088 ffffff4321c37cc0 1 59 ffffff4383ddb9ce PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddb7e0: ffffff01eb8facc0 [ ffffff01eb8facc0 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_timedwait_sig_hires+0x39d(ffffff4383ddb9ce, ffffff4383ddb9d0, 22ecb25bd30 , 1, 4) cv_waituntil_sig+0xfa(ffffff4383ddb9ce, ffffff4383ddb9d0, ffffff01eb8fae70, 2) nanosleep+0x19f(fe462f78, fe462f70) _sys_sysenter_post_swapgs+0x149() ffffff4383ddb440 ffffff4383a93088 ffffff4321d32100 1 59 ffffff4383ddb62e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddb440: ffffff01eb900cc0 [ ffffff01eb900cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383ddb62e, ffffff4383ddb630, 3466c536830 , 1, 4) cv_waituntil_sig+0xfa(ffffff4383ddb62e, ffffff4383ddb630, ffffff01eb900e70, 2) nanosleep+0x19f(fe363f78, fe363f70) _sys_sysenter_post_swapgs+0x149() ffffff4383ddb0a0 ffffff4383a93088 ffffff4312031d00 1 59 ffffff4383ddb28e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddb0a0: ffffff01eb86acc0 [ ffffff01eb86acc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383ddb28e, ffffff4383ddb290, 37e11d32a, 1, 4) cv_waituntil_sig+0xfa(ffffff4383ddb28e, ffffff4383ddb290, ffffff01eb86ae70, 2) nanosleep+0x19f(fe264f38, fe264f30) _sys_sysenter_post_swapgs+0x149() ffffff4383dd8ba0 ffffff4383a93088 ffffff431203b880 1 59 ffffff4383dd8d8e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383dd8ba0: ffffff01eb906cc0 [ ffffff01eb906cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383dd8d8e, ffffff4383dd8d90, 8bee6437d0 , 1, 4) cv_waituntil_sig+0xfa(ffffff4383dd8d8e, ffffff4383dd8d90, ffffff01eb906e70, 2) nanosleep+0x19f(fe165f78, fe165f70) _sys_sysenter_post_swapgs+0x149() ffffff438423c060 ffffff4383a93088 ffffff430e73c600 1 59 ffffff438423c24e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff438423c060: ffffff01eb668cc0 [ ffffff01eb668cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff438423c24e, ffffff438423c250, 14f46b01dd , 1, 4) cv_waituntil_sig+0xfa(ffffff438423c24e, ffffff438423c250, ffffff01eb668e70, 2) nanosleep+0x19f(fe066f38, fe066f30) _sys_sysenter_post_swapgs+0x149() ffffff43842a9b60 ffffff4383a93088 ffffff4321d34600 1 59 ffffff43842a9d4e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff43842a9b60: ffffff01eb89fcc0 [ ffffff01eb89fcc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff43842a9d4e, ffffff43842a9d50, 3466c5368b0 , 1, 4) cv_waituntil_sig+0xfa(ffffff43842a9d4e, ffffff43842a9d50, ffffff01eb89fe70, 2) nanosleep+0x19f(fdf67f78, fdf67f70) _sys_sysenter_post_swapgs+0x149() ffffff4383e594a0 ffffff4383a93088 ffffff4384abed00 1 59 ffffff4383e5968e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383e594a0: ffffff01eb6b0cc0 [ ffffff01eb6b0cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383e5968e, ffffff4383e59690, 14f46b01e7 , 1, 4) cv_waituntil_sig+0xfa(ffffff4383e5968e, ffffff4383e59690, ffffff01eb6b0e70, 2) nanosleep+0x19f(fdb6bf38, fdb6bf30) _sys_sysenter_post_swapgs+0x149() ffffff438413d780 ffffff4383a93088 ffffff4321d33f00 1 59 ffffff438413d96e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff438413d780: ffffff01eb69ecc0 [ ffffff01eb69ecc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff438413d96e, ffffff438413d970, 3466c536754 , 1, 4) cv_waituntil_sig+0xfa(ffffff438413d96e, ffffff438413d970, ffffff01eb69ee70, 2) nanosleep+0x19f(fda6cf78, fda6cf70) _sys_sysenter_post_swapgs+0x149() 
ffffff4384007ae0 ffffff4383a93088 ffffff4384bc3c40 1 59 ffffff4384007cce PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4384007ae0: ffffff01ea5e1cc0 [ ffffff01ea5e1cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4384007cce, ffffff4384007cd0, 22ecb25be2a , 1, 4) cv_waituntil_sig+0xfa(ffffff4384007cce, ffffff4384007cd0, ffffff01ea5e1e70, 2) nanosleep+0x19f(fd96df78, fd96df70) _sys_sysenter_post_swapgs+0x149() ffffff4311d8bc40 ffffff4383a93088 ffffff4384bc3540 1 59 ffffff4311d8be2e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4311d8bc40: ffffff01ea5e7cc0 [ ffffff01ea5e7cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4311d8be2e, ffffff4311d8be30, 34630b89e37 , 1, 4) cv_waituntil_sig+0xfa(ffffff4311d8be2e, ffffff4311d8be30, ffffff01ea5e7e70, 2) nanosleep+0x19f(fd86ef78, fd86ef70) _sys_sysenter_post_swapgs+0x149() ffffff4384550c40 ffffff4383a93088 ffffff4384bd0a00 1 59 ffffff4384550e2e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4384550c40: ffffff01ea219cc0 [ ffffff01ea219cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4384550e2e, ffffff4384550e30, 22ecb25bdd7 , 1, 4) cv_waituntil_sig+0xfa(ffffff4384550e2e, ffffff4384550e30, ffffff01ea219e70, 2) nanosleep+0x19f(fd76ff78, fd76ff70) _sys_sysenter_post_swapgs+0x149() ffffff4384550160 ffffff4383a93088 ffffff4384bc4a40 1 59 ffffff438455034e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4384550160: ffffff01ea5d5cc0 [ ffffff01ea5d5cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff438455034e, ffffff4384550350, 3466c5367a4 , 1, 4) cv_waituntil_sig+0xfa(ffffff438455034e, ffffff4384550350, ffffff01ea5d5e70, 2) nanosleep+0x19f(fd670f78, fd670f70) _sys_sysenter_post_swapgs+0x149() ffffff4384662c60 ffffff4383a93088 ffffff4384bc4340 1 59 ffffff4384662e4e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4384662c60: ffffff01ea5dbcc0 [ ffffff01ea5dbcc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4384662e4e, ffffff4384662e50, 22ecb25bd76 , 1, 4) cv_waituntil_sig+0xfa(ffffff4384662e4e, ffffff4384662e50, ffffff01ea5dbe70, 2) nanosleep+0x19f(fd571f78, fd571f70) _sys_sysenter_post_swapgs+0x149() ffffff43846628c0 ffffff4383a93088 ffffff4384bd1100 1 59 ffffff4384662aae PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff43846628c0: ffffff01ea213cc0 [ ffffff01ea213cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4384662aae, ffffff4384662ab0, 3466c5367d4 , 1, 4) cv_waituntil_sig+0xfa(ffffff4384662aae, ffffff4384662ab0, ffffff01ea213e70, 2) nanosleep+0x19f(fd472f78, fd472f70) _sys_sysenter_post_swapgs+0x149() ffffff4383ddf420 ffffff4383a93088 ffffff4313811e40 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddf420: ffffff01eb8b1d20 [ ffffff01eb8b1d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311f9a820) shuttle_resume+0x2af(ffffff4311f9a820, fffffffffbd12ed0) door_return+0x3e0(fe75bcb0, da, 0, 0, fe75fe00, f5f00) doorfs32+0x180(fe75bcb0, da, 0, fe75fe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383ddf080 ffffff4383a93088 ffffff4313812540 1 60 ffffff431e981b70 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383ddf080: ffffff01eb8b7930 [ ffffff01eb8b7930 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait_sig+0x185(ffffff431e981b70, ffffff431e981a38) so_dequeue_msg+0x2f7(ffffff431e981a18, ffffff01eb8b7b48, ffffff01eb8b7df0, ffffff01eb8b7b50, 20) so_recvmsg+0x249(ffffff431e981a18, ffffff01eb8b7c10, ffffff01eb8b7df0, ffffff4383a4c0d0) socket_recvmsg+0x33(ffffff431e981a18, ffffff01eb8b7c10, ffffff01eb8b7df0, ffffff4383a4c0d0) socket_vop_read+0x5f(ffffff4383a4da80, ffffff01eb8b7df0, 0, ffffff4383a4c0d0 , 0) fop_read+0x8b(ffffff4383a4da80, ffffff01eb8b7df0, 0, ffffff4383a4c0d0, 0) read+0x2a7(6, fe660664, 94c) read32+0x1e(6, fe660664, 94c) _sys_sysenter_post_swapgs+0x149() ffffff4383deb060 ffffff4383a93088 ffffff4311f50cc0 1 59 ffffff4383deb24e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/nscd stack pointer for thread ffffff4383deb060: ffffff01eb8a5dd0 [ ffffff01eb8a5dd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383deb24e, ffffff4383deb250, 0) cv_wait_sig_swap+0x17(ffffff4383deb24e, ffffff4383deb250) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff4311d55ba0 ffffff4383ff3020 ffffff4312039580 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/method/iscsid stack pointer for thread ffffff4311d55ba0: ffffff01ea231d50 [ ffffff01ea231d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fecbee00, f5f00) doorfs32+0x180(0, 0, 0, fecbee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff01ea22bc40 fffffffffbc2eb80 0 0 60 ffffff4321ce1150 PC: _resume_from_idle+0xf1 TASKQ: iscsi_nexus_enum_tq stack pointer for thread ffffff01ea22bc40: ffffff01ea22ba80 [ ffffff01ea22ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1150, ffffff4321ce1140) taskq_thread_wait+0xbe(ffffff4321ce1120, ffffff4321ce1140, ffffff4321ce1150 , ffffff01ea22bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1120) thread_start+8() ffffff01ea0f3c40 fffffffffbc2eb80 0 0 60 ffffff4361b02398 PC: _resume_from_idle+0xf1 TASKQ: isns_reg_query_taskq stack pointer for thread ffffff01ea0f3c40: ffffff01ea0f3a80 [ ffffff01ea0f3a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02398, ffffff4361b02388) taskq_thread_wait+0xbe(ffffff4361b02368, ffffff4361b02388, ffffff4361b02398 , ffffff01ea0f3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b02368) thread_start+8() ffffff01ea0edc40 fffffffffbc2eb80 0 0 60 ffffff43759f9a30 PC: _resume_from_idle+0xf1 TASKQ: isns_scn_taskq stack pointer for thread ffffff01ea0edc40: ffffff01ea0eda80 [ ffffff01ea0eda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9a30, ffffff43759f9a20) taskq_thread_wait+0xbe(ffffff43759f9a00, ffffff43759f9a20, ffffff43759f9a30 , ffffff01ea0edbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9a00) thread_start+8() ffffff01eb81ec40 fffffffffbc2eb80 0 0 60 ffffff43759f9918 PC: _resume_from_idle+0xf1 TASKQ: iscsi_Static stack pointer for thread ffffff01eb81ec40: ffffff01eb81ea80 [ ffffff01eb81ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9918, ffffff43759f9908) taskq_thread_wait+0xbe(ffffff43759f98e8, ffffff43759f9908, ffffff43759f9918 , ffffff01eb81ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f98e8) thread_start+8() ffffff01eb824c40 fffffffffbc2eb80 0 0 60 ffffff43759f9800 PC: _resume_from_idle+0xf1 TASKQ: iscsi_SendTarget stack pointer for thread ffffff01eb824c40: ffffff01eb824a80 [ ffffff01eb824a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9800, ffffff43759f97f0) taskq_thread_wait+0xbe(ffffff43759f97d0, ffffff43759f97f0, ffffff43759f9800 , ffffff01eb824bc0, 
ffffffffffffffff) taskq_thread+0x37c(ffffff43759f97d0) thread_start+8() ffffff01eb924c40 fffffffffbc2eb80 0 0 60 ffffff43759f96e8 PC: _resume_from_idle+0xf1 TASKQ: iscsi_SLP stack pointer for thread ffffff01eb924c40: ffffff01eb924a80 [ ffffff01eb924a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f96e8, ffffff43759f96d8) taskq_thread_wait+0xbe(ffffff43759f96b8, ffffff43759f96d8, ffffff43759f96e8 , ffffff01eb924bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f96b8) thread_start+8() ffffff01eb92ac40 fffffffffbc2eb80 0 0 60 ffffff43759f95d0 PC: _resume_from_idle+0xf1 TASKQ: iscsi_iSNS stack pointer for thread ffffff01eb92ac40: ffffff01eb92aa80 [ ffffff01eb92aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f95d0, ffffff43759f95c0) taskq_thread_wait+0xbe(ffffff43759f95a0, ffffff43759f95c0, ffffff43759f95d0 , ffffff01eb92abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f95a0) thread_start+8() ffffff43137f0460 ffffff4383ff3020 ffffff4321d2e140 1 59 ffffff43137f064e PC: _resume_from_idle+0xf1 CMD: /lib/svc/method/iscsid stack pointer for thread ffffff43137f0460: ffffff01eb710c40 [ ffffff01eb710c40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43137f064e, ffffff42e26ae5c0, 0) cv_wait_sig_swap+0x17(ffffff43137f064e, ffffff42e26ae5c0) cv_waituntil_sig+0xbd(ffffff43137f064e, ffffff42e26ae5c0, 0, 0) sigtimedwait+0x197(8047dd0, 8047cc0, 0) _sys_sysenter_post_swapgs+0x149() ffffff01eb632c40 fffffffffbc2eb80 0 0 60 ffffff4321cef260 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01eb632c40: ffffff01eb632a80 [ ffffff01eb632a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef260, ffffff4321cef250) taskq_thread_wait+0xbe(ffffff4321cef230, ffffff4321cef250, ffffff4321cef260 , ffffff01eb632bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef230) thread_start+8() ffffff01eb5f0c40 fffffffffbc2eb80 0 0 60 ffffff4321cef6c0 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01eb5f0c40: ffffff01eb5f0a80 [ ffffff01eb5f0a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321cef6c0, ffffff4321cef6b0) taskq_thread_wait+0xbe(ffffff4321cef690, ffffff4321cef6b0, ffffff4321cef6c0 , ffffff01eb5f0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321cef690) thread_start+8() ffffff01eb5f6c40 fffffffffbc2eb80 0 0 60 ffffff43759f94b8 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01eb5f6c40: ffffff01eb5f6a80 [ ffffff01eb5f6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f94b8, ffffff43759f94a8) taskq_thread_wait+0xbe(ffffff43759f9488, ffffff43759f94a8, ffffff43759f94b8 , ffffff01eb5f6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9488) thread_start+8() ffffff01eb5fcc40 fffffffffbc2eb80 0 0 60 ffffff43759f93a0 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01eb5fcc40: ffffff01eb5fca80 [ ffffff01eb5fca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f93a0, ffffff43759f9390) taskq_thread_wait+0xbe(ffffff43759f9370, ffffff43759f9390, ffffff43759f93a0 , ffffff01eb5fcbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9370) thread_start+8() ffffff43137d6880 ffffff4383a86098 ffffff431202c700 1 59 ffffff4383c20952 PC: _resume_from_idle+0xf1 CMD: /usr/lib/utmpd stack pointer for thread ffffff43137d6880: ffffff01eb620c60 [ ffffff01eb620c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383c20952, ffffff4383c20918, 391f2cb0d04a55, 1, 3) 
cv_timedwait_sig_hrtime+0x2a(ffffff4383c20952, ffffff4383c20918, 391f2cb0d04a55) poll_common+0x50c(806b5d0, 2, ffffff01eb620e40, 0) pollsys+0xe3(806b5d0, 2, 8047c48, 0) _sys_sysenter_post_swapgs+0x149() ffffff43106bb3e0 ffffff438484d028 ffffff4375971040 1 59 ffffff4374cb922a PC: _resume_from_idle+0xf1 CMD: /usr/lib/saf/ttymon -g -d /dev/console -l console -T sun-color -m ldterm,ttcom p stack pointer for thread ffffff43106bb3e0: ffffff01eb62cc50 [ ffffff01eb62cc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4374cb922a, ffffff4374cb91f0, 0) cv_wait_sig_swap+0x17(ffffff4374cb922a, ffffff4374cb91f0) cv_timedwait_sig_hrtime+0x35(ffffff4374cb922a, ffffff4374cb91f0, ffffffffffffffff) poll_common+0x50c(8047cb8, 1, 0, 0) pollsys+0xe3(8047cb8, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4384e2dba0 ffffff4383a4a080 ffffff4384a025c0 1 59 ffffff4383cf0052 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/in.routed stack pointer for thread ffffff4384e2dba0: ffffff01ea15fc60 [ ffffff01ea15fc60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383cf0052, ffffff4383cf0018, 391f2e434973dd, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff4383cf0052, ffffff4383cf0018, 391f2e434973dd) poll_common+0x50c(8047a40, 4, ffffff01ea15fe40, 0) pollsys+0xe3(8047a40, 4, 8047b18, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383fc1860 ffffff4383a65078 ffffff4384abfb00 1 59 ffffff4384944dfa PC: _resume_from_idle+0xf1 CMD: /usr/sbin/cron stack pointer for thread ffffff4383fc1860: ffffff01eb8f2c60 [ ffffff01eb8f2c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4384944dfa, ffffff4384944dc0, 3931a2a9bbcc3c, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff4384944dfa, ffffff4384944dc0, 3931a2a9bbcc3c) poll_common+0x50c(8047b80, 1, ffffff01eb8f2e40, ffffff01eb8f2e50) pollsys+0xe3(8047b80, 1, 8047c34, 806e018) _sys_sysenter_post_swapgs+0x149() ffffff4311d55460 ffffff4384acc058 ffffff4384a02cc0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/hotplugd stack pointer for thread ffffff4311d55460: ffffff01ea123d50 [ ffffff01ea123d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fedaee00, f5f00) doorfs32+0x180(0, 0, 0, fedaee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff43137a5180 ffffff4384acc058 ffffff4384a017c0 1 59 ffffff43137a536e PC: _resume_from_idle+0xf1 CMD: /usr/lib/hotplugd stack pointer for thread ffffff43137a5180: ffffff01eb8dac50 [ ffffff01eb8dac50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43137a536e, ffffff43137a5370, 0) cv_wait_sig_swap+0x17(ffffff43137a536e, ffffff43137a5370) cv_waituntil_sig+0xbd(ffffff43137a536e, ffffff43137a5370, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137dd4c0 ffffff4383ca50b8 ffffff4321d30c00 1 59 ffffff43137dd6ae PC: _resume_from_idle+0xf1 CMD: /usr/lib/saf/ttymon stack pointer for thread ffffff43137dd4c0: ffffff01eb698dd0 [ ffffff01eb698dd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43137dd6ae, ffffff43137dd6b0, 0) cv_wait_sig_swap+0x17(ffffff43137dd6ae, ffffff43137dd6b0) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff431379aae0 ffffff4311b7e080 ffffff4384a010c0 1 59 ffffff431055ac7c PC: _resume_from_idle+0xf1 CMD: /usr/lib/saf/sac -t 300 stack pointer for thread ffffff431379aae0: ffffff01eb8e0b10 [ ffffff01eb8e0b10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff431055ac7c, ffffff431055ac20, 0) 
cv_wait_sig_swap+0x17(ffffff431055ac7c, ffffff431055ac20) fifo_read+0xc9(ffffff4383acf140, ffffff01eb8e0df0, 0, ffffff4384a97d18, 0) fop_read+0x8b(ffffff4383acf140, ffffff01eb8e0df0, 0, ffffff4384a97d18, 0) read+0x2a7(4, 8047d98, 18) read32+0x1e(4, 8047d98, 18) _sys_sysenter_post_swapgs+0x149() ffffff01eb6b6c40 fffffffffbc2eb80 0 0 60 ffffff43759f9288 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01eb6b6c40: ffffff01eb6b6a80 [ ffffff01eb6b6a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9288, ffffff43759f9278) taskq_thread_wait+0xbe(ffffff43759f9258, ffffff43759f9278, ffffff43759f9288 , ffffff01eb6b6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9258) thread_start+8() ffffff01eb6bcc40 fffffffffbc2eb80 0 0 60 ffffff43759f9170 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01eb6bcc40: ffffff01eb6bca80 [ ffffff01eb6bca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9170, ffffff43759f9160) taskq_thread_wait+0xbe(ffffff43759f9140, ffffff43759f9160, ffffff43759f9170 , ffffff01eb6bcbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9140) thread_start+8() ffffff01eb71cc40 fffffffffbc2eb80 0 0 60 ffffff43759f9b48 PC: _resume_from_idle+0xf1 TASKQ: zil_clean stack pointer for thread ffffff01eb71cc40: ffffff01eb71ca80 [ ffffff01eb71ca80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9b48, ffffff43759f9b38) taskq_thread_wait+0xbe(ffffff43759f9b18, ffffff43759f9b38, ffffff43759f9b48 , ffffff01eb71cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9b18) thread_start+8() ffffff01e9edcc40 fffffffffbc2eb80 0 0 60 ffffff4384bf1278 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010006 stack pointer for thread ffffff01e9edcc40: ffffff01e9edca30 [ ffffff01e9edca30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff4384bf1278, ffffff4384a66580, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff4384bf1278, ffffff4384a66580, 7530, 4) taskq_thread_wait+0x64(ffffff43759f9028, ffffff4384a66580, ffffff4384bf1278 , ffffff01e9edcbc0, 7530) taskq_d_thread+0x145(ffffff4384bf1248) thread_start+8() ffffff01e9ed6c40 fffffffffbc2eb80 0 0 60 ffffff431e7b43f8 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010006 stack pointer for thread ffffff01e9ed6c40: ffffff01e9ed6a30 [ ffffff01e9ed6a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff431e7b43f8, ffffff4384a66000, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff431e7b43f8, ffffff4384a66000, 7530, 4) taskq_thread_wait+0x64(ffffff43759f9028, ffffff4384a66000, ffffff431e7b43f8 , ffffff01e9ed6bc0, 7530) taskq_d_thread+0x145(ffffff431e7b43c8) thread_start+8() ffffff01ea329c40 fffffffffbc2eb80 0 0 60 ffffff4375942120 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010006 stack pointer for thread ffffff01ea329c40: ffffff01ea329a30 [ ffffff01ea329a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff4375942120, ffffff4384a66500, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff4375942120, ffffff4384a66500, 7530, 4) taskq_thread_wait+0x64(ffffff43759f9028, ffffff4384a66500, ffffff4375942120 , ffffff01ea329bc0, 7530) taskq_d_thread+0x145(ffffff43759420f0) thread_start+8() ffffff01eb7a9c40 fffffffffbc2eb80 0 0 60 ffffff43759f9058 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010006 stack pointer for thread ffffff01eb7a9c40: ffffff01eb7a9a80 [ ffffff01eb7a9a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9058, ffffff43759f9048) 
taskq_thread_wait+0xbe(ffffff43759f9028, ffffff43759f9048, ffffff43759f9058 , ffffff01eb7a9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9028) thread_start+8() ffffff01ea1a7c40 fffffffffbc2eb80 0 0 60 ffffff4384bf3e98 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00010006 stack pointer for thread ffffff01ea1a7c40: ffffff01ea1a7a80 [ ffffff01ea1a7a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf3e98, ffffff4384bf3e88) taskq_thread_wait+0xbe(ffffff4384bf3e68, ffffff4384bf3e88, ffffff4384bf3e98 , ffffff01ea1a7bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4384bf3e68) thread_start+8() ffffff01ea1c5c40 fffffffffbc2eb80 0 0 60 ffffff43759427b0 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00010006 stack pointer for thread ffffff01ea1c5c40: ffffff01ea1c5a30 [ ffffff01ea1c5a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff43759427b0, ffffff4311db2a00, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff43759427b0, ffffff4311db2a00, 7530, 4) taskq_thread_wait+0x64(ffffff4384bf3e68, ffffff4311db2a00, ffffff43759427b0 , ffffff01ea1c5bc0, 7530) taskq_d_thread+0x145(ffffff4375942780) thread_start+8() ffffff01ea335c40 fffffffffbc2eb80 0 0 60 ffffff4384bf3d80 PC: _resume_from_idle+0xf1 TASKQ: s00091340_0x00010006 stack pointer for thread ffffff01ea335c40: ffffff01ea335a80 [ ffffff01ea335a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf3d80, ffffff4384bf3d70) taskq_thread_wait+0xbe(ffffff4384bf3d50, ffffff4384bf3d70, ffffff4384bf3d80 , ffffff01ea335bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4384bf3d50) thread_start+8() ffffff01ea39bc40 fffffffffbc2eb80 0 0 60 ffffff4384bf3c68 PC: _resume_from_idle+0xf1 TASKQ: r00091340_0x00010006 stack pointer for thread ffffff01ea39bc40: ffffff01ea39ba80 [ ffffff01ea39ba80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf3c68, ffffff4384bf3c58) taskq_thread_wait+0xbe(ffffff4384bf3c38, ffffff4384bf3c58, ffffff4384bf3c68 , ffffff01ea39bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4384bf3c38) thread_start+8() ffffff01e9d0cc40 fffffffffbc2eb80 0 0 60 ffffff432de6d1a8 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010006 stack pointer for thread ffffff01e9d0cc40: ffffff01e9d0ca30 [ ffffff01e9d0ca30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff432de6d1a8, ffffff4384a99500, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff432de6d1a8, ffffff4384a99500, 7530, 4) taskq_thread_wait+0x64(ffffff4384bf3b20, ffffff4384a99500, ffffff432de6d1a8 , ffffff01e9d0cbc0, 7530) taskq_d_thread+0x145(ffffff432de6d178) thread_start+8() ffffff01e9150c40 fffffffffbc2eb80 0 0 60 ffffff431e7b4120 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010006 stack pointer for thread ffffff01e9150c40: ffffff01e9150a30 [ ffffff01e9150a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff431e7b4120, ffffff4384a99180, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff431e7b4120, ffffff4384a99180, 7530, 4) taskq_thread_wait+0x64(ffffff4384bf3b20, ffffff4384a99180, ffffff431e7b4120 , ffffff01e9150bc0, 7530) taskq_d_thread+0x145(ffffff431e7b40f0) thread_start+8() ffffff01e9b8fc40 fffffffffbc2eb80 0 0 60 ffffff4375942970 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010006 stack pointer for thread ffffff01e9b8fc40: ffffff01e9b8fa30 [ ffffff01e9b8fa30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff4375942970, ffffff4384a99080, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff4375942970, ffffff4384a99080, 7530, 4) 
taskq_thread_wait+0x64(ffffff4384bf3b20, ffffff4384a99080, ffffff4375942970 , ffffff01e9b8fbc0, 7530) taskq_d_thread+0x145(ffffff4375942940) thread_start+8() ffffff01eb936c40 fffffffffbc2eb80 0 0 60 ffffff42e2616f70 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010006 stack pointer for thread ffffff01eb936c40: ffffff01eb936a30 [ ffffff01eb936a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e2616f70, ffffff4384a99400, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff42e2616f70, ffffff4384a99400, 7530, 4) taskq_thread_wait+0x64(ffffff4384bf3b20, ffffff4384a99400, ffffff42e2616f70 , ffffff01eb936bc0, 7530) taskq_d_thread+0x145(ffffff42e2616f40) thread_start+8() ffffff01e8263c40 fffffffffbc2eb80 0 0 60 ffffff42e2617328 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010006 stack pointer for thread ffffff01e8263c40: ffffff01e8263a30 [ ffffff01e8263a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff42e2617328, ffffff4384a99480, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff42e2617328, ffffff4384a99480, 7530, 4) taskq_thread_wait+0x64(ffffff4384bf3b20, ffffff4384a99480, ffffff42e2617328 , ffffff01e8263bc0, 7530) taskq_d_thread+0x145(ffffff42e26172f8) thread_start+8() ffffff01ea401c40 fffffffffbc2eb80 0 0 60 ffffff4384bf3b50 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010006 stack pointer for thread ffffff01ea401c40: ffffff01ea401a80 [ ffffff01ea401a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf3b50, ffffff4384bf3b40) taskq_thread_wait+0xbe(ffffff4384bf3b20, ffffff4384bf3b40, ffffff4384bf3b50 , ffffff01ea401bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4384bf3b20) thread_start+8() ffffff01e93e4c40 fffffffffbc2eb80 0 0 60 ffffff4365502ea8 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00010006 stack pointer for thread ffffff01e93e4c40: ffffff01e93e4a30 [ ffffff01e93e4a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(ffffff4365502ea8, ffffff4384a9ba80, 45d964b800, 989680, 0) cv_reltimedwait+0x51(ffffff4365502ea8, ffffff4384a9ba80, 7530, 4) taskq_thread_wait+0x64(ffffff4384bf3a08, ffffff4384a9ba80, ffffff4365502ea8 , ffffff01e93e4bc0, 7530) taskq_d_thread+0x145(ffffff4365502e78) thread_start+8() ffffff01ea467c40 fffffffffbc2eb80 0 0 60 ffffff4384bf3a38 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00010006 stack pointer for thread ffffff01ea467c40: ffffff01ea467a80 [ ffffff01ea467a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf3a38, ffffff4384bf3a28) taskq_thread_wait+0xbe(ffffff4384bf3a08, ffffff4384bf3a28, ffffff4384bf3a38 , ffffff01ea467bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4384bf3a08) thread_start+8() ffffff01ea4cdc40 fffffffffbc2eb80 0 0 60 ffffff4384bf3920 PC: _resume_from_idle+0xf1 TASKQ: s000BC0BC_0x00010006 stack pointer for thread ffffff01ea4cdc40: ffffff01ea4cda80 [ ffffff01ea4cda80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf3920, ffffff4384bf3910) taskq_thread_wait+0xbe(ffffff4384bf38f0, ffffff4384bf3910, ffffff4384bf3920 , ffffff01ea4cdbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4384bf38f0) thread_start+8() ffffff01ea533c40 fffffffffbc2eb80 0 0 60 ffffff4384bf3808 PC: _resume_from_idle+0xf1 TASKQ: r000BC0BC_0x00010006 stack pointer for thread ffffff01ea533c40: ffffff01ea533a80 [ ffffff01ea533a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf3808, ffffff4384bf37f8) taskq_thread_wait+0xbe(ffffff4384bf37d8, ffffff4384bf37f8, ffffff4384bf3808 , ffffff01ea533bc0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff4384bf37d8) thread_start+8() ffffff431379a3a0 ffffff4384b35060 ffffff4321d27e80 1 59 ffffff4384b0d04a PC: _resume_from_idle+0xf1 CMD: /usr/lib/inet/in.ndpd stack pointer for thread ffffff431379a3a0: ffffff01ea11dc50 [ ffffff01ea11dc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4384b0d04a, ffffff4384b0d010, 0) cv_wait_sig_swap+0x17(ffffff4384b0d04a, ffffff4384b0d010) cv_timedwait_sig_hrtime+0x35(ffffff4384b0d04a, ffffff4384b0d010, ffffffffffffffff) poll_common+0x50c(80a4170, 20, 0, 0) pollsys+0xe3(80a4170, 20, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff431192a480 ffffff4383fc0000 ffffff4384abd800 1 59 ffffff4384b3f94a PC: _resume_from_idle+0xf1 CMD: /usr/sbin/rpcbind stack pointer for thread ffffff431192a480: ffffff01ea165c50 [ ffffff01ea165c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4384b3f94a, ffffff4384b3f910, 0) cv_wait_sig_swap+0x17(ffffff4384b3f94a, ffffff4384b3f910) cv_timedwait_sig_hrtime+0x35(ffffff4384b3f94a, ffffff4384b3f910, ffffffffffffffff) poll_common+0x50c(8045dd0, 7, 0, 0) pollsys+0xe3(8045dd0, 7, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff438413d3e0 ffffff431192c078 ffffff4384bd0300 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff438413d3e0: ffffff01eb78dd50 [ ffffff01eb78dd50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fedaee00, f5f00) doorfs32+0x180(0, 0, 0, fedaee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff43137c5400 ffffff43848fb038 ffffff430e711200 1 59 ffffff43137c55ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff43137c5400: ffffff01ea159c60 [ ffffff01ea159c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff43137c55ee, ffffff43137c55f0, 22ecb2586e , 1, 4) cv_waituntil_sig+0xfa(ffffff43137c55ee, ffffff43137c55f0, ffffff01ea159e10, 2) lwp_park+0x15e(fea6ef48, 0) syslwp_park+0x63(0, fea6ef48, 0) _sys_sysenter_post_swapgs+0x149() ffffff43120214c0 ffffff43848fb038 ffffff4384dc00c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff43120214c0: ffffff01eb0cad20 [ ffffff01eb0cad20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff01e9c8dc40) shuttle_resume+0x2af(ffffff01e9c8dc40, fffffffffbd12ed0) door_return+0x3e0(0, 0, 0, 0, fe870e00, f5f00) doorfs32+0x180(0, 0, 0, fe870e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383cb0040 ffffff43848fb038 ffffff431380a580 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff4383cb0040: ffffff01eb722d20 [ ffffff01eb722d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff01eb75ec40) shuttle_resume+0x2af(ffffff01eb75ec40, fffffffffbd12ed0) door_return+0x3e0(0, 0, 0, 0, fe96fe00, f5f00) doorfs32+0x180(0, 0, 0, fe96fe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff01ea5b1c40 fffffffffbc2eb80 0 0 60 fffffffffbcefda8 PC: _resume_from_idle+0xf1 THREAD: auto_do_unmount() stack pointer for thread ffffff01ea5b1c40: ffffff01ea5b1a30 [ ffffff01ea5b1a30 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(fffffffffbcefda8, fffffffffbcef7d0, 1bf08eb000, 989680, 0) cv_timedwait+0x5c(fffffffffbcefda8, fffffffffbcef7d0, 1be97c) zone_status_timedwait+0x6b(fffffffffbcefc60, 1be97c, 5) auto_do_unmount+0xc7(ffffff43215297e0) thread_start+8() ffffff4383cb6b00 ffffff43848fb038 ffffff4384bcfc00 1 59 ffffff4383cb6cee PC: _resume_from_idle+0xf1 CMD: 
/usr/lib/autofs/automountd stack pointer for thread ffffff4383cb6b00: ffffff01eb793dd0 [ ffffff01eb793dd0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cb6cee, ffffff4383cb6cf0, 0) cv_wait_sig_swap+0x17(ffffff4383cb6cee, ffffff4383cb6cf0) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff43137aec40 ffffff431192c078 ffffff4313812c40 1 59 ffffff431192c138 PC: _resume_from_idle+0xf1 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff43137aec40: ffffff01eb852c30 [ ffffff01eb852c30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff431192c138, fffffffffbcf2a70, 0) cv_wait_sig_swap+0x17(ffffff431192c138, fffffffffbcf2a70) waitid+0x24d(0, 1c9, ffffff01eb852e40, 3) waitsys32+0x36(0, 1c9, 8047cf0, 3) _sys_sysenter_post_swapgs+0x149() ffffff430fdaa020 ffffff4383fdc010 ffffff4384bd1800 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/inet/inetd start stack pointer for thread ffffff430fdaa020: ffffff01ea20dd50 [ ffffff01ea20dd50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fe9eee00, f5f00) doorfs32+0x180(0, 0, 0, fe9eee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383d05740 ffffff4383fdc010 ffffff4321d2a880 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/inet/inetd start stack pointer for thread ffffff4383d05740: ffffff01ea129d20 [ ffffff01ea129d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff01ea2d6c40) shuttle_resume+0x2af(ffffff01ea2d6c40, fffffffffbd12ed0) door_return+0x3e0(fed4ed00, 4, 0, 0, fed4ee00, f5f00) doorfs32+0x180(fed4ed00, 4, 0, fed4ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff01ea2d6c40 fffffffffbc2eb80 0 0 60 ffffff438493cc10 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01ea2d6c40: ffffff01ea2d6a90 [ ffffff01ea2d6a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff438493cc10, ffffff438493cc08) evch_delivery_hold+0x70(ffffff438493cbe8, ffffff01ea2d6bc0) evch_delivery_thr+0x29e(ffffff438493cbe8) thread_start+8() ffffff43137e4be0 ffffff4383fdc010 ffffff4384bcf500 1 59 ffffff4321cf69aa PC: _resume_from_idle+0xf1 CMD: /usr/lib/inet/inetd start stack pointer for thread ffffff43137e4be0: ffffff01eb799c50 [ ffffff01eb799c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4321cf69aa, ffffff4321cf6970, 0) cv_wait_sig_swap+0x17(ffffff4321cf69aa, ffffff4321cf6970) cv_timedwait_sig_hrtime+0x35(ffffff4321cf69aa, ffffff4321cf6970, ffffffffffffffff) poll_common+0x50c(8320f30, 10, 0, 0) pollsys+0xe3(8320f30, 10, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311cac3e0 ffffff4384da80a0 ffffff4384dcf880 1 59 ffffff4311cac5ce PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff4311cac3e0: ffffff01ea5ffc50 [ ffffff01ea5ffc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311cac5ce, ffffff4311cac5d0, 0) cv_wait_sig_swap+0x17(ffffff4311cac5ce, ffffff4311cac5d0) cv_waituntil_sig+0xbd(ffffff4311cac5ce, ffffff4311cac5d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4384662180 ffffff4384da80a0 ffffff4384bc2e40 1 59 ffffff438466236e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff4384662180: ffffff01ea5edc50 [ ffffff01ea5edc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff438466236e, ffffff4384662370, 0) cv_wait_sig_swap+0x17(ffffff438466236e, ffffff4384662370) cv_waituntil_sig+0xbd(ffffff438466236e, 
ffffff4384662370, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43844ac4e0 ffffff4384da80a0 ffffff4384abdf00 1 59 ffffff43844ac6ce PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff43844ac4e0: ffffff01eb84cc50 [ ffffff01eb84cc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43844ac6ce, ffffff43844ac6d0, 0) cv_wait_sig_swap+0x17(ffffff43844ac6ce, ffffff43844ac6d0) cv_waituntil_sig+0xbd(ffffff43844ac6ce, ffffff43844ac6d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4384434be0 ffffff4384da80a0 ffffff4384bce000 1 59 ffffff4384434dce PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff4384434be0: ffffff01ea5c3c50 [ ffffff01ea5c3c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4384434dce, ffffff4384434dd0, 0) cv_wait_sig_swap+0x17(ffffff4384434dce, ffffff4384434dd0) cv_waituntil_sig+0xbd(ffffff4384434dce, ffffff4384434dd0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff438441f0e0 ffffff4384da80a0 ffffff4384dce380 1 59 ffffff438441f2ce PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff438441f0e0: ffffff01ea627c50 [ ffffff01ea627c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff438441f2ce, ffffff438441f2d0, 0) cv_wait_sig_swap+0x17(ffffff438441f2ce, ffffff438441f2d0) cv_waituntil_sig+0xbd(ffffff438441f2ce, ffffff438441f2d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff438473a7a0 ffffff4384da80a0 ffffff4384dcdc80 1 59 ffffff438473a98e PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff438473a7a0: ffffff01ea62dc50 [ ffffff01ea62dc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff438473a98e, ffffff438473a990, 0) cv_wait_sig_swap+0x17(ffffff438473a98e, ffffff438473a990) cv_waituntil_sig+0xbd(ffffff438473a98e, ffffff438473a990, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff438473a400 ffffff4384da80a0 ffffff4384dcea80 1 59 ffffff438473a5ee PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff438473a400: ffffff01ea621c50 [ ffffff01ea621c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff438473a5ee, ffffff438473a5f0, 0) cv_wait_sig_swap+0x17(ffffff438473a5ee, ffffff438473a5f0) cv_waituntil_sig+0xbd(ffffff438473a5ee, ffffff438473a5f0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4384e05080 ffffff4384da80a0 ffffff4384dcc780 1 59 ffffff4383cf0372 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff4384e05080: ffffff01ea63fc50 [ ffffff01ea63fc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cf0372, ffffff4383cf0338, 0) cv_wait_sig_swap+0x17(ffffff4383cf0372, ffffff4383cf0338) cv_timedwait_sig_hrtime+0x35(ffffff4383cf0372, ffffff4383cf0338, ffffffffffffffff) poll_common+0x50c(8075608, 1, 0, 0) pollsys+0xe3(8075608, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311f9a0e0 ffffff4384da80a0 ffffff4384a01ec0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff4311f9a0e0: ffffff01eb8d4d20 [ ffffff01eb8d4d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311dc3c60) shuttle_resume+0x2af(ffffff4311dc3c60, fffffffffbd12ed0) door_return+0x3e0(0, 0, 0, 0, 
fe474e00, f5f00) doorfs32+0x180(0, 0, 0, fe474e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4384e3db80 ffffff4384da80a0 ffffff4384dcce80 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff4384e3db80: ffffff01ea639d20 [ ffffff01ea639d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311ef30a0) shuttle_resume+0x2af(ffffff4311ef30a0, fffffffffbd12ed0) door_return+0x3e0(0, 0, 0, 0, fe573e00, f5f00) doorfs32+0x180(0, 0, 0, fe573e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383fc1c00 ffffff4384da80a0 ffffff4384bc5140 1 59 ffffff4383fc1dee PC: _resume_from_idle+0xf1 CMD: /usr/sbin/syslogd stack pointer for thread ffffff4383fc1c00: ffffff01ea5cfc40 [ ffffff01ea5cfc40 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383fc1dee, ffffff42e26aebc0, 0) cv_wait_sig_swap+0x17(ffffff4383fc1dee, ffffff42e26aebc0) cv_waituntil_sig+0xbd(ffffff4383fc1dee, ffffff42e26aebc0, 0, 0) sigtimedwait+0x197(8047dec, 8047bf0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137bc080 ffffff4394e90000 ffffff4384dc0ec0 1 59 ffffff432d06d0aa PC: _resume_from_idle+0xf1 CMD: -bash stack pointer for thread ffffff43137bc080: ffffff01e8c459d0 [ ffffff01e8c459d0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig+0x185(ffffff432d06d0aa, ffffff432d190588) str_cv_wait+0x27(ffffff432d06d0aa, ffffff432d190588, ffffffffffffffff, 0) strwaitq+0x2c3(ffffff432d190508, 2, 1, 2803, ffffffffffffffff, ffffff01e8c45b98) strread+0x144(ffffff4393cc9980, ffffff01e8c45df0, ffffff4321346180) spec_read+0x66(ffffff4393cc9980, ffffff01e8c45df0, 0, ffffff4321346180, 0) fop_read+0x8b(ffffff4393cc9980, ffffff01e8c45df0, 0, ffffff4321346180, 0) read+0x2a7(0, 804730b, 1) read32+0x1e(0, 804730b, 1) _sys_sysenter_post_swapgs+0x149() ffffff4311dc3c60 ffffff4384d13068 ffffff4384abf400 1 59 ffffff43a7186a42 PC: _resume_from_idle+0xf1 CMD: /usr/lib/ssh/sshd stack pointer for thread ffffff4311dc3c60: ffffff01e8cc8c50 [ ffffff01e8cc8c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43a7186a42, ffffff43a7186a08, 0) cv_wait_sig_swap+0x17(ffffff43a7186a42, ffffff43a7186a08) cv_timedwait_sig_hrtime+0x35(ffffff43a7186a42, ffffff43a7186a08, ffffffffffffffff) poll_common+0x50c(8047360, 4, 0, 0) pollsys+0xe3(8047360, 4, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311cb3b00 ffffff432d18c030 ffffff4321d29a80 1 59 ffffff431fa55c7a PC: _resume_from_idle+0xf1 CMD: /usr/lib/ssh/sshd stack pointer for thread ffffff4311cb3b00: ffffff01ea633c50 [ ffffff01ea633c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff431fa55c7a, ffffff431fa55c40, 0) cv_wait_sig_swap+0x17(ffffff431fa55c7a, ffffff431fa55c40) cv_timedwait_sig_hrtime+0x35(ffffff431fa55c7a, ffffff431fa55c40, ffffffffffffffff) poll_common+0x50c(8047380, 2, 0, 0) pollsys+0xe3(8047380, 2, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43844acc20 ffffff4383e4d0c8 ffffff4384dcc080 1 59 ffffff4383cf722a PC: _resume_from_idle+0xf1 CMD: /usr/lib/ssh/sshd stack pointer for thread ffffff43844acc20: ffffff01ea645c50 [ ffffff01ea645c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cf722a, ffffff4383cf71f0, 0) cv_wait_sig_swap+0x17(ffffff4383cf722a, ffffff4383cf71f0) cv_timedwait_sig_hrtime+0x35(ffffff4383cf722a, ffffff4383cf71f0, ffffffffffffffff) poll_common+0x50c(8047460, 1, 0, 0) pollsys+0xe3(8047460, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff01ea605c40 fffffffffbc2eb80 0 0 60 ffffff4384b02ec8 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack 
pointer for thread ffffff01ea605c40: ffffff01ea605a90 [ ffffff01ea605a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384b02ec8, ffffff4384b02ec0) evch_delivery_hold+0x70(ffffff4384b02ea0, ffffff01ea605bc0) evch_delivery_thr+0x29e(ffffff4384b02ea0) thread_start+8() ffffff01ea68ec40 fffffffffbc2eb80 0 0 60 ffffff4361b02910 PC: _resume_from_idle+0xf1 TASKQ: npe_nexus_enum_tq stack pointer for thread ffffff01ea68ec40: ffffff01ea68ea80 [ ffffff01ea68ea80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4361b02910, ffffff4361b02900) taskq_thread_wait+0xbe(ffffff4361b028e0, ffffff4361b02900, ffffff4361b02910 , ffffff01ea68ebc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4361b028e0) thread_start+8() ffffff01ea7e2c40 fffffffffbc2eb80 0 0 60 ffffff4321ce1d58 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01ea7e2c40: ffffff01ea7e2a80 [ ffffff01ea7e2a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce1d58, ffffff4321ce1d48) taskq_thread_wait+0xbe(ffffff4321ce1d28, ffffff4321ce1d48, ffffff4321ce1d58 , ffffff01ea7e2bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1d28) thread_start+8() ffffff01ea7f4c40 fffffffffbc2eb80 0 0 60 ffffff4384bf36f0 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01ea7f4c40: ffffff01ea7f4a80 [ ffffff01ea7f4a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4384bf36f0, ffffff4384bf36e0) taskq_thread_wait+0xbe(ffffff4384bf36c0, ffffff4384bf36e0, ffffff4384bf36f0 , ffffff01ea7f4bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4384bf36c0) thread_start+8() ffffff01ea7fac40 fffffffffbc2eb80 0 0 60 ffffff43759f9c60 PC: _resume_from_idle+0xf1 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff01ea7fac40: ffffff01ea7faa80 [ ffffff01ea7faa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9c60, ffffff43759f9c50) taskq_thread_wait+0xbe(ffffff43759f9c30, ffffff43759f9c50, ffffff43759f9c60 , ffffff01ea7fabc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9c30) thread_start+8() ffffff01ea731c40 fffffffffbc2eb80 0 0 60 ffffff4321ce16c8 PC: _resume_from_idle+0xf1 TASKQ: pseudo_nexus_enum_tq stack pointer for thread ffffff01ea731c40: ffffff01ea731a80 [ ffffff01ea731a80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4321ce16c8, ffffff4321ce16b8) taskq_thread_wait+0xbe(ffffff4321ce1698, ffffff4321ce16b8, ffffff4321ce16c8 , ffffff01ea731bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff4321ce1698) thread_start+8() ffffff01ea92ac40 fffffffffbc2eb80 0 0 99 ffffff43759f9e90 PC: _resume_from_idle+0xf1 TASKQ: dtrace_taskq stack pointer for thread ffffff01ea92ac40: ffffff01ea92aa80 [ ffffff01ea92aa80 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43759f9e90, ffffff43759f9e80) taskq_thread_wait+0xbe(ffffff43759f9e60, ffffff43759f9e80, ffffff43759f9e90 , ffffff01ea92abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff43759f9e60) thread_start+8() ffffff01eb2bbc40 fffffffffbc2eb80 0 0 60 ffffffffc01a5816 PC: _resume_from_idle+0xf1 THREAD: ufs_thread_idle() stack pointer for thread ffffff01eb2bbc40: ffffff01eb2bbb70 [ ffffff01eb2bbb70 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffffffc01a5816, ffffffffc01a5820) ufs_thread_idle+0x143() thread_start+8() ffffff01eb2c1c40 fffffffffbc2eb80 0 0 60 ffffffffc01a5df6 PC: _resume_from_idle+0xf1 THREAD: ufs_thread_hlock() stack pointer for thread ffffff01eb2c1c40: ffffff01eb2c1af0 [ ffffff01eb2c1af0 _resume_from_idle+0xf1() ] swtch+0x141() 
cv_wait+0x70(ffffffffc01a5df6, ffffffffc01a5e00) ufs_thread_run+0x80(ffffffffc01a5de0, ffffff01eb2c1bd0) ufs_thread_hlock+0x73(0) thread_start+8() ffffff4311d9e860 ffffff4384cfc090 ffffff4321d31300 1 59 ffffff439540e21a PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4311d9e860: ffffff01eb608c60 [ ffffff01eb608c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff439540e21a, ffffff439540e1e0, 391f2eedb54bcd, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff439540e21a, ffffff439540e1e0, 391f2eedb54bcd) poll_common+0x50c(fdd5eb50, 1, ffffff01eb608e40, 0) pollsys+0xe3(fdd5eb50, 1, fdd5eaf8, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311d9ec00 ffffff4384cfc090 ffffff4384a041c0 1 59 ffffff4311d9edee PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4311d9ec00: ffffff01eb60ec60 [ ffffff01eb60ec60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4311d9edee, ffffff4311d9edf0, 2540bd147, 1, 4) cv_waituntil_sig+0xfa(ffffff4311d9edee, ffffff4311d9edf0, ffffff01eb60ee10, 2) lwp_park+0x15e(fdbdcf18, 0) syslwp_park+0x63(0, fdbdcf18, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311b7a860 ffffff4384cfc090 ffffff4384abe600 1 59 ffffff4311b7aa4e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4311b7a860: ffffff01eb6c2cc0 [ ffffff01eb6c2cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4311b7aa4e, ffffff4311b7aa50, 97de94ad6, 1, 4) cv_waituntil_sig+0xfa(ffffff4311b7aa4e, ffffff4311b7aa50, ffffff01eb6c2e70, 2) nanosleep+0x19f(fda8ef28, 0) _sys_sysenter_post_swapgs+0x149() ffffff431193a460 ffffff4384cfc090 ffffff431204ab00 1 59 ffffff431193a64e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff431193a460: ffffff01eb85ec50 [ ffffff01eb85ec50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff431193a64e, ffffff431193a650, 0) cv_wait_sig_swap+0x17(ffffff431193a64e, ffffff431193a650) cv_waituntil_sig+0xbd(ffffff431193a64e, ffffff431193a650, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137e70e0 ffffff4384cfc090 ffffff430e73e900 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff43137e70e0: ffffff01ea287d20 [ ffffff01ea287d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff01ebdcec40) shuttle_resume+0x2af(ffffff01ebdcec40, fffffffffbd12ed0) door_return+0x3e0(fd76faf4, 4, 0, 0, fd76fe00, f5f00) doorfs32+0x180(fd76faf4, 4, 0, fd76fe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff01ebdcec40 fffffffffbc2eb80 0 0 60 ffffff4385082ab8 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01ebdcec40: ffffff01ebdcea90 [ ffffff01ebdcea90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4385082ab8, ffffff4385082ab0) evch_delivery_hold+0x70(ffffff4385082a90, ffffff01ebdcebc0) evch_delivery_thr+0x29e(ffffff4385082a90) thread_start+8() ffffff4312026840 ffffff4384cfc090 ffffff4384ac0900 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4312026840: ffffff01eb8e6d20 [ ffffff01eb8e6d20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4383ca7080) shuttle_resume+0x2af(ffffff4383ca7080, fffffffffbd12ed0) door_return+0x3e0(fd65ec04, 4, 0, 0, fd65ee00, f5f00) doorfs32+0x180(fd65ec04, 4, 0, fd65ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff43137d64e0 ffffff4384cfc090 ffffff430e70ae00 1 59 ffffff43137d66ce PC: 
_resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff43137d64e0: ffffff01ea682c50 [ ffffff01ea682c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43137d66ce, ffffff43137d66d0, 0) cv_wait_sig_swap+0x17(ffffff43137d66ce, ffffff43137d66d0) cv_waituntil_sig+0xbd(ffffff43137d66ce, ffffff43137d66d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311e9bb60 ffffff4384cfc090 ffffff4384dc1cc0 1 59 ffffff4311e9bd4e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4311e9bb60: ffffff01ea9cac50 [ ffffff01ea9cac50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311e9bd4e, ffffff4311e9bd50, 0) cv_wait_sig_swap+0x17(ffffff4311e9bd4e, ffffff4311e9bd50) cv_waituntil_sig+0xbd(ffffff4311e9bd4e, ffffff4311e9bd50, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383fe4160 ffffff4384cfc090 ffffff4321d30500 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4383fe4160: ffffff01ea688d50 [ ffffff01ea688d50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fd43ee00, f5f00) doorfs32+0x180(0, 0, 0, fd43ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff4383ff6b00 ffffff4384cfc090 ffffff43120340c0 1 59 ffffff4395036542 PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4383ff6b00: ffffff01e9ce1c50 [ ffffff01e9ce1c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4395036542, ffffff4395036508, 0) cv_wait_sig_swap+0x17(ffffff4395036542, ffffff4395036508) cv_timedwait_sig_hrtime+0x35(ffffff4395036542, ffffff4395036508, ffffffffffffffff) poll_common+0x50c(848f088, 4, 0, 0) pollsys+0xe3(848f088, 4, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311d55800 ffffff4384cfc090 ffffff4384bc5840 1 59 ffffff4311d559ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4311d55800: ffffff01ea5c9c50 [ ffffff01ea5c9c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311d559ee, ffffff4311d559f0, 0) cv_wait_sig_swap+0x17(ffffff4311d559ee, ffffff4311d559f0) cv_waituntil_sig+0xbd(ffffff4311d559ee, ffffff4311d559f0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff438946e160 ffffff4384cfc090 ffffff430e73b800 1 59 ffffff438946e34e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff438946e160: ffffff01eb930c50 [ ffffff01eb930c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff438946e34e, ffffff438946e350, 0) cv_wait_sig_swap+0x17(ffffff438946e34e, ffffff438946e350) cv_waituntil_sig+0xbd(ffffff438946e34e, ffffff438946e350, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43844ac140 ffffff4384cfc090 ffffff431202f800 1 59 ffffff43844ac32e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff43844ac140: ffffff01eb82ec50 [ ffffff01eb82ec50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43844ac32e, ffffff43844ac330, 0) cv_wait_sig_swap+0x17(ffffff43844ac32e, ffffff43844ac330) cv_waituntil_sig+0xbd(ffffff43844ac32e, ffffff43844ac330, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff43137d3760 ffffff4384cfc090 ffffff4384bcee00 1 59 ffffff43137d394e PC: _resume_from_idle+0xf1 CMD: 
/usr/lib/fm/fmd/fmd stack pointer for thread ffffff43137d3760: ffffff01ea5b7c50 [ ffffff01ea5b7c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43137d394e, ffffff43137d3950, 0) cv_wait_sig_swap+0x17(ffffff43137d394e, ffffff43137d3950) cv_waituntil_sig+0xbd(ffffff43137d394e, ffffff43137d3950, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383c23500 ffffff4384cfc090 ffffff4384abd100 1 59 ffffff4383c236ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4383c23500: ffffff01eb7f4c50 [ ffffff01eb7f4c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383c236ee, ffffff4383c236f0, 0) cv_wait_sig_swap+0x17(ffffff4383c236ee, ffffff4383c236f0) cv_waituntil_sig+0xbd(ffffff4383c236ee, ffffff4383c236f0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff438413d040 ffffff4384cfc090 ffffff430e709140 1 59 ffffff438413d22e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff438413d040: ffffff01eb8cec50 [ ffffff01eb8cec50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff438413d22e, ffffff438413d230, 0) cv_wait_sig_swap+0x17(ffffff438413d22e, ffffff438413d230) cv_waituntil_sig+0xbd(ffffff438413d22e, ffffff438413d230, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383cb6760 ffffff4384cfc090 ffffff430eb0a200 1 59 ffffff4383cb694e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4383cb6760: ffffff01eb800c50 [ ffffff01eb800c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383cb694e, ffffff4383cb6950, 0) cv_wait_sig_swap+0x17(ffffff4383cb694e, ffffff4383cb6950) cv_waituntil_sig+0xbd(ffffff4383cb694e, ffffff4383cb6950, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff01ebeb2c40 fffffffffbc2eb80 0 0 60 ffffff438470c348 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01ebeb2c40: ffffff01ebeb2a90 [ ffffff01ebeb2a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff438470c348, ffffff438470c340) evch_delivery_hold+0x70(ffffff438470c320, ffffff01ebeb2bc0) evch_delivery_thr+0x29e(ffffff438470c320) thread_start+8() ffffff4311da4100 ffffff4384cfc090 ffffff4321d36900 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4311da4100: ffffff01eb7fad20 [ ffffff01eb7fad20 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff01ebd74c40) shuttle_resume+0x2af(ffffff01ebd74c40, fffffffffbd12ed0) door_return+0x3e0(fc79eb88, 4, 0, 0, fc79ee00, f5f00) doorfs32+0x180(fc79eb88, 4, 0, fc79ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff01ebd74c40 fffffffffbc2eb80 0 0 60 ffffff4383a24358 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01ebd74c40: ffffff01ebd74a90 [ ffffff01ebd74a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4383a24358, ffffff4383a24350) evch_delivery_hold+0x70(ffffff4383a24330, ffffff01ebd74bc0) evch_delivery_thr+0x29e(ffffff4383a24330) thread_start+8() ffffff01ebd80c40 fffffffffbc2eb80 0 0 60 ffffff437593d330 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01ebd80c40: ffffff01ebd80a90 [ ffffff01ebd80a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff437593d330, ffffff437593d328) evch_delivery_hold+0x70(ffffff437593d308, ffffff01ebd80bc0) 
evch_delivery_thr+0x29e(ffffff437593d308) thread_start+8() ffffff4311b82100 ffffff4384cfc090 ffffff4321c360c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4311b82100: ffffff01eb74ed50 [ ffffff01eb74ed50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, fc69fe00, f5f00) doorfs32+0x180(0, 0, 0, fc69fe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff01ebe46c40 fffffffffbc2eb80 0 0 60 ffffff4395a604f8 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01ebe46c40: ffffff01ebe46a90 [ ffffff01ebe46a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4395a604f8, ffffff4395a604f0) evch_delivery_hold+0x70(ffffff4395a604d0, ffffff01ebe46bc0) evch_delivery_thr+0x29e(ffffff4395a604d0) thread_start+8() ffffff4384fc93c0 ffffff4384cfc090 ffffff4321d2da40 1 59 ffffff4384fc95ae PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4384fc93c0: ffffff01eb7a3c50 [ ffffff01eb7a3c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4384fc95ae, ffffff4384fc95b0, 0) cv_wait_sig_swap+0x17(ffffff4384fc95ae, ffffff4384fc95b0) cv_waituntil_sig+0xbd(ffffff4384fc95ae, ffffff4384fc95b0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4385084000 ffffff4384cfc090 ffffff4313813a40 1 59 ffffff43850841ee PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4385084000: ffffff01eb6cec50 [ ffffff01eb6cec50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43850841ee, ffffff43850841f0, 0) cv_wait_sig_swap+0x17(ffffff43850841ee, ffffff43850841f0) cv_waituntil_sig+0xbd(ffffff43850841ee, ffffff43850841f0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383ca3440 ffffff4384cfc090 ffffff4321d2a180 1 59 ffffff4383ca362e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4383ca3440: ffffff01ea111c50 [ ffffff01ea111c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383ca362e, ffffff4383ca3630, 0) cv_wait_sig_swap+0x17(ffffff4383ca362e, ffffff4383ca3630) cv_waituntil_sig+0xbd(ffffff4383ca362e, ffffff4383ca3630, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383d14bc0 ffffff4384cfc090 ffffff4321d27780 1 59 ffffff4383d14dae PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4383d14bc0: ffffff01eb8bdc50 [ ffffff01eb8bdc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383d14dae, ffffff4383d14db0, 0) cv_wait_sig_swap+0x17(ffffff4383d14dae, ffffff4383d14db0) cv_waituntil_sig+0xbd(ffffff4383d14dae, ffffff4383d14db0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4383d14480 ffffff4384cfc090 ffffff4384bc2040 1 59 ffffff4383d1466e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4383d14480: ffffff01ea5f9c50 [ ffffff01ea5f9c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4383d1466e, ffffff4383d14670, 0) cv_wait_sig_swap+0x17(ffffff4383d1466e, ffffff4383d14670) cv_waituntil_sig+0xbd(ffffff4383d1466e, ffffff4383d14670, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff438417cc60 ffffff4384cfc090 ffffff4384dc2ac0 1 59 ffffff438417ce4e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd 
stack pointer for thread ffffff438417cc60: ffffff01e9cdbc50 [ ffffff01e9cdbc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff438417ce4e, ffffff438417ce50, 0) cv_wait_sig_swap+0x17(ffffff438417ce4e, ffffff438417ce50) cv_waituntil_sig+0xbd(ffffff438417ce4e, ffffff438417ce50, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff4384e3d0a0 ffffff4384cfc090 ffffff4384dcf180 1 59 ffffff4384e3d28e PC: _resume_from_idle+0xf1 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff4384e3d0a0: ffffff01ea60bd90 [ ffffff01ea60bd90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4384e3d28e, ffffff42e26aeb80, 0) cv_wait_sig_swap+0x17(ffffff4384e3d28e, ffffff42e26aeb80) sigsuspend+0xf9(8047dc8) _sys_sysenter_post_swapgs+0x149() ffffff4385084740 ffffff4383a780a0 ffffff431380b380 1 59 ffffff438508492e PC: _resume_from_idle+0xf1 CMD: /usr/perl5/bin/perl /usr/lib/intrd stack pointer for thread ffffff4385084740: ffffff01eaa86cc0 [ ffffff01eaa86cc0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff438508492e, ffffff4385084930, a7a358088, 1, 4) cv_waituntil_sig+0xfa(ffffff438508492e, ffffff4385084930, ffffff01eaa86e70, 2) nanosleep+0x19f(8047b08, 8047b00) _sys_sysenter_post_swapgs+0x149() ffffff4311ef30a0 ffffff4384d960a8 ffffff4313814140 1 59 ffffff4384b0dc7a PC: _resume_from_idle+0xf1 CMD: /var/web-gui/data/tools/httpd/napp-it-mhttpd -c **.pl -u napp-it -d /var/web-g u stack pointer for thread ffffff4311ef30a0: ffffff01eb626c50 [ ffffff01eb626c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4384b0dc7a, ffffff4384b0dc40, 0) cv_wait_sig_swap+0x17(ffffff4384b0dc7a, ffffff4384b0dc40) cv_timedwait_sig_hrtime+0x35(ffffff4384b0dc7a, ffffff4384b0dc40, ffffffffffffffff) poll_common+0x50c(8045100, 2, 0, 0) pollsys+0xe3(8045100, 2, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff01e9d30c40 fffffffffbc2eb80 0 0 60 ffffff4393eb1620 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01e9d30c40: ffffff01e9d30b00 [ ffffff01e9d30b00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4393eb1620, ffffff4393eb1610) i_mac_notify_thread+0xee(ffffff4393eb1520) thread_start+8() ffffff01e849ac40 fffffffffbc2eb80 0 0 60 ffffff4393eae120 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01e849ac40: ffffff01e849ab00 [ ffffff01e849ab00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4393eae120, ffffff4393eae110) i_mac_notify_thread+0xee(ffffff4393eae020) thread_start+8() ffffff01e943ec40 fffffffffbc2eb80 0 0 60 ffffff4545586b28 PC: _resume_from_idle+0xf1 THREAD: i_mac_notify_thread() stack pointer for thread ffffff01e943ec40: ffffff01e943eb00 [ ffffff01e943eb00 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff4545586b28, ffffff4545586b18) i_mac_notify_thread+0xee(ffffff4545586a28) thread_start+8() ffffff4383d08c60 ffffff4321004028 ffffff4384dc07c0 1 29 0 PC: panicsys+0x109 CMD: sleep 2 stack pointer for thread ffffff4383d08c60: ffffff01e8caaae0 param_preset() 0xfffffffffbb2fa18() anon_free+0x74(ffffff4558663658, 0, 18000) segvn_free+0x242(ffffff43a7144818) seg_free+0x30(ffffff43a7144818) segvn_unmap+0xcde(ffffff43a7144818, 8064000, 18000) as_free+0xe7(ffffff4384be3690) relvm+0x220() proc_exit+0x454(1, 0) exit+0x15(1, 0) rexit+0x18(0) _sys_sysenter_post_swapgs+0x149() ffffff4383fbec20 ffffff4310177030 ffffff430e73f0c0 1 29 ffffff43101770f0 PC: _resume_from_idle+0xf1 CMD: /usr//bin/bash 
/var/web-gui/data/napp-it/zfsos/_lib/scripts/taskserver.sh stack pointer for thread ffffff4383fbec20: ffffff01eb70ac30 [ ffffff01eb70ac30 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43101770f0, fffffffffbcf2a70, 0) cv_wait_sig_swap+0x17(ffffff43101770f0, fffffffffbcf2a70) waitid+0x24d(7, 0, ffffff01eb70ae40, 3) waitsys32+0x36(7, 0, 8046960, 3) _sys_sysenter_post_swapgs+0x149() ffffff4311db3b00 ffffff4384d0f078 ffffff4384a03ac0 1 29 ffffff4383d15eaa PC: _resume_from_idle+0xf1 CMD: /usr/bin/perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/socketserver.pl dae m stack pointer for thread ffffff4311db3b00: ffffff01ea10bc60 [ ffffff01ea10bc60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4383d15eaa, ffffff4383d15e70, 391f2cabf10e02, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff4383d15eaa, ffffff4383d15e70, 391f2cabf10e02) poll_common+0x50c(8bde4a0, 1, ffffff01ea10be40, 0) pollsys+0xe3(8bde4a0, 1, 8047958, 0) _sys_sysenter_post_swapgs+0x149() ffffff4311cc2140 ffffff43106bf038 ffffff430e706e40 1 59 ffffff4311cc232e PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311cc2140: ffffff01e84abc50 [ ffffff01e84abc50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311cc232e, ffffff4311cc2330, 0) cv_wait_sig_swap+0x17(ffffff4311cc232e, ffffff4311cc2330) cv_waituntil_sig+0xbd(ffffff4311cc232e, ffffff4311cc2330, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) sys_syscall32+0xff() ffffff4311cbec40 ffffff43106bf038 ffffff430e70e800 1 59 ffffff4310bae114 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311cbec40: ffffff01e8b0fae0 [ ffffff01e8b0fae0 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig+0x185(ffffff4310bae114, ffffff430fdb8aa8) cte_get_event+0xb3(ffffff4310bae0e0, 0, 81f21e8, 0, 0, 1) ctfs_endpoint_ioctl+0xf9(ffffff4310bae0d8, 63746502, 81f21e8, ffffff42e2624cf8, fffffffffbcefc60, 0) ctfs_bu_ioctl+0x4b(ffffff4311d65100, 63746502, 81f21e8, 102001, ffffff42e2624cf8, ffffff01e8b0fe68, 0) fop_ioctl+0x55(ffffff4311d65100, 63746502, 81f21e8, 102001, ffffff42e2624cf8 , ffffff01e8b0fe68, 0) ioctl+0x9b(9f, 63746502, 81f21e8) sys_syscall32+0xff() ffffff4311cbe8a0 ffffff43106bf038 ffffff430e70ef00 1 59 ffffff4310ba5d94 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311cbe8a0: ffffff01e8537b20 [ ffffff01e8537b20 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4310ba5d94, ffffff4310a50ce0, 0) cv_wait_sig_swap+0x17(ffffff4310ba5d94, ffffff4310a50ce0) cv_waituntil_sig+0xbd(ffffff4310ba5d94, ffffff4310a50ce0, 0, ffffff43) port_getn+0x39f(ffffff4310a50c80, fe6f1fa0, 1, ffffff01e8537e1c, ffffff01e8537dd0) portfs+0x25d(5, 5, fe6f1fa0, 0, 0, 0) portfs32+0x78(5, 5, fe6f1fa0, 0, 0, 0) sys_syscall32+0xff() ffffff4311dc3180 ffffff43106bf038 ffffff4311a48c40 1 59 ffffff4311dc336e PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311dc3180: ffffff01e8505c60 [ ffffff01e8505c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff4311dc336e, ffffff4311dc3370, 2540bd1e5, 1, 4) cv_waituntil_sig+0xfa(ffffff4311dc336e, ffffff4311dc3370, ffffff01e8505e10, 2) lwp_park+0x15e(fb424f18, 0) syslwp_park+0x63(0, fb424f18, 0) sys_syscall32+0xff() ffffff01e944fc40 fffffffffbc2eb80 0 0 60 ffffff43104a3de8 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01e944fc40: ffffff01e944fa90 [ ffffff01e944fa90 
_resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43104a3de8, ffffff43104a3de0) evch_delivery_hold+0x70(ffffff43104a3dc0, ffffff01e944fbc0) evch_delivery_thr+0x29e(ffffff43104a3dc0) thread_start+8() ffffff01e955bc40 fffffffffbc2eb80 0 0 60 ffffff43104a3c28 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01e955bc40: ffffff01e955ba90 [ ffffff01e955ba90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43104a3c28, ffffff43104a3c20) evch_delivery_hold+0x70(ffffff43104a3c00, ffffff01e955bbc0) evch_delivery_thr+0x29e(ffffff43104a3c00) thread_start+8() ffffff438945cc60 ffffff43106bf038 ffffff4384dc23c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff438945cc60: ffffff01ea90ed50 [ ffffff01ea90ed50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(0, 0, 0, 0, f95aee00, f5f00) doorfs32+0x180(0, 0, 0, f95aee00, f5f00, a) sys_syscall32+0xff() ffffff4311d8b160 ffffff43106bf038 ffffff4384bce700 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311d8b160: ffffff01ea5bdd50 [ ffffff01ea5bdd50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(f98becc0, 4, 0, 0, f98bee00, f5f00) doorfs32+0x180(f98becc0, 4, 0, f98bee00, f5f00, a) sys_syscall32+0xff() ffffff4311dbb000 ffffff43106bf038 ffffff4311a47040 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311dbb000: ffffff01e83fdd50 [ ffffff01e83fdd50 _resume_from_idle+0xf1() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd12ed0) door_return+0x214(fb028cc8, 4, 0, 0, fb028e00, f5f00) doorfs32+0x180(fb028cc8, 4, 0, fb028e00, f5f00, a) sys_syscall32+0xff() ffffff01e9b73c40 fffffffffbc2eb80 0 0 60 ffffff43104a3a68 PC: _resume_from_idle+0xf1 THREAD: evch_delivery_thr() stack pointer for thread ffffff01e9b73c40: ffffff01e9b73a90 [ ffffff01e9b73a90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff43104a3a68, ffffff43104a3a60) evch_delivery_hold+0x70(ffffff43104a3a40, ffffff01e9b73bc0) evch_delivery_thr+0x29e(ffffff43104a3a40) thread_start+8() ffffff4311cbe500 ffffff43106bf038 ffffff430e73fec0 1 59 ffffff4311cbe6ee PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311cbe500: ffffff01e8aa4c50 [ ffffff01e8aa4c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311cbe6ee, ffffff4311cbe6f0, 0) cv_wait_sig_swap+0x17(ffffff4311cbe6ee, ffffff4311cbe6f0) cv_waituntil_sig+0xbd(ffffff4311cbe6ee, ffffff4311cbe6f0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) sys_syscall32+0xff() ffffff4311cbb8c0 ffffff43106bf038 ffffff430e73f7c0 1 59 0 PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311cbb8c0: ffffff01e8c56d00 [ ffffff01e8c56d00 _resume_from_idle+0xf1() ] swtch_to+0xb6(ffffff4311d5b7e0) shuttle_resume+0x2af(ffffff4311d5b7e0, fffffffffbd12ed0) door_call+0x336(a, fe4f3c28) doorfs32+0xa7(a, fe4f3c28, 0, 0, 0, 3) sys_syscall32+0xff() ffffff4311cbb520 ffffff43106bf038 ffffff430e745380 1 59 ffffff4311cbb70e PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff4311cbb520: ffffff01e8cd9c50 [ ffffff01e8cd9c50 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff4311cbb70e, ffffff4311cbb710, 0) cv_wait_sig_swap+0x17(ffffff4311cbb70e, ffffff4311cbb710) cv_waituntil_sig+0xbd(ffffff4311cbb70e, ffffff4311cbb710, 0, 0) lwp_park+0x15e(0, 0) 
syslwp_park+0x63(0, 0, 0) sys_syscall32+0xff() ffffff43106bbb20 ffffff43106bf038 ffffff430e707540 1 59 ffffff43106bbd0e PC: _resume_from_idle+0xf1 CMD: /lib/svc/bin/svc.startd stack pointer for thread ffffff43106bbb20: ffffff01e9d43d90 [ ffffff01e9d43d90 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff43106bbd0e, ffffff42e26ae1c0, 0) cv_wait_sig_swap+0x17(ffffff43106bbd0e, ffffff42e26ae1c0) sigsuspend+0xf9(8047e50) sys_syscall32+0xff() ffffff430fdaab00 ffffff430fdad018 ffffff430e710b00 1 59 ffffff43104ad62a PC: _resume_from_idle+0xf1 CMD: /sbin/init stack pointer for thread ffffff430fdaab00: ffffff01e97f4c60 [ ffffff01e97f4c60 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff43104ad62a, ffffff43104ad5f0, 391f6aca3025a0, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff43104ad62a, ffffff43104ad5f0, 391f6aca3025a0) poll_common+0x50c(806b784, 1, ffffff01e97f4e40, 0) pollsys+0xe3(806b784, 1, 80475d8, 0) sys_syscall32+0xff() ffffff01e980cc40 ffffff430fda9020 ffffff430e70f600 0 97 ffffff430fda90e0 PC: _resume_from_idle+0xf1 CMD: pageout stack pointer for thread ffffff01e980cc40: ffffff01e980ca10 [ ffffff01e980ca10 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(ffffff430fda90e0, fffffffffbcdf440) pageout_scanner+0x121() thread_start+8() ffffff01e97fac40 ffffff430fda9020 ffffff430e710400 0 98 fffffffffbcdf468 PC: _resume_from_idle+0xf1 CMD: pageout stack pointer for thread ffffff01e97fac40: ffffff01e97faa70 [ ffffff01e97faa70 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcdf468, fffffffffbcdf460) pageout+0x1e6() thread_start+8() ffffff01e9800c40 ffffff430fda7028 ffffff430e70fd00 0 60 fffffffffbcf0de4 PC: _resume_from_idle+0xf1 CMD: fsflush stack pointer for thread ffffff01e9800c40: ffffff01e9800a20 [ ffffff01e9800a20 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcf0de4, fffffffffbca74a0) fsflush+0x21d() thread_start+8() ffffff01e9806c40 fffffffffbc2eb80 0 0 60 fffffffffbcfb6a8 PC: _resume_from_idle+0xf1 THREAD: mod_uninstall_daemon() stack pointer for thread ffffff01e9806c40: ffffff01e9806b70 [ ffffff01e9806b70 _resume_from_idle+0xf1() ] swtch+0x141() cv_wait+0x70(fffffffffbcfb6a8, fffffffffbcf0820) mod_uninstall_daemon+0x123() thread_start+8() ffffff01e9812c40 fffffffffbc2eb80 0 0 60 fffffffffbcdf538 PC: _resume_from_idle+0xf1 THREAD: seg_pasync_thread() stack pointer for thread ffffff01e9812c40: ffffff01e9812af0 [ ffffff01e9812af0 _resume_from_idle+0xf1() ] swtch+0x141() cv_timedwait_hires+0xec(fffffffffbcdf538, fffffffffbcdf530, 3b9aca00, 989680 , 0) cv_reltimedwait+0x51(fffffffffbcdf538, fffffffffbcdf530, 64, 4) seg_pasync_thread+0xd1() thread_start+8() MESSAGE dcpc0 is /pseudo/dcpc at 0 pseudo-device: fasttrap0 fasttrap0 is /pseudo/fasttrap at 0 pseudo-device: fbt0 fbt0 is /pseudo/fbt at 0 pseudo-device: lockstat0 lockstat0 is /pseudo/lockstat at 0 pseudo-device: profile0 profile0 is /pseudo/profile at 0 pseudo-device: sdt0 sdt0 is /pseudo/sdt at 0 pseudo-device: systrace0 systrace0 is /pseudo/systrace at 0 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. 
scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. 
scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. 
scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. 
scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x3112010c WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x3112010c /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x3112010c received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 WARNING: /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 /pci at 0,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Log info 0x31120303 received for target 10. 
scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc
NOTICE: ibp2 registered
NOTICE: ibp2 link up, 32000 Mbps
NOTICE: ibp1 registered
NOTICE: ibp1 link up, 32000 Mbps
NOTICE: ibp3 registered
NOTICE: ibp3 link up, 32000 Mbps

panic[cpu21]/thread=ffffff4383d08c60: anon_decref: slot count 0

ffffff01e8caab90 fffffffffbb2fa18 ()
ffffff01e8caac00 genunix:anon_free+74 ()
ffffff01e8caac50 genunix:segvn_free+242 ()
ffffff01e8caac80 genunix:seg_free+30 ()
ffffff01e8caad60 genunix:segvn_unmap+cde ()
ffffff01e8caadc0 genunix:as_free+e7 ()
ffffff01e8caadf0 genunix:relvm+220 ()
ffffff01e8caae80 genunix:proc_exit+454 ()
ffffff01e8caaea0 genunix:exit+15 ()
ffffff01e8caaec0 genunix:rexit+18 ()
ffffff01e8caaf10 unix:brand_sys_sysenter+1c9 ()

syncing file systems... done
dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
NOTICE: ahci0: ahci_tran_reset_dport port 0 reset port
NOTICE: ahci0: ahci_tran_reset_dport port 1 reset port

stack pointer for thread ffffff4383d08c60: ffffff01e8caaae0
ffffff01e8caab50 param_preset()
ffffff01e8caab90 0xfffffffffbb2fa18()
ffffff01e8caac00 anon_free+0x74(ffffff4558663658, 0, 18000)
ffffff01e8caac50 segvn_free+0x242(ffffff43a7144818)
ffffff01e8caac80 seg_free+0x30(ffffff43a7144818)
ffffff01e8caad60 segvn_unmap+0xcde(ffffff43a7144818, 8064000, 18000)
ffffff01e8caadc0 as_free+0xe7(ffffff4384be3690)
ffffff01e8caadf0 relvm+0x220()
ffffff01e8caae80 proc_exit+0x454(1, 0)
ffffff01e8caaea0 exit+0x15(1, 0)
ffffff01e8caaec0 rexit+0x18(0)
ffffff01e8caaf10 _sys_sysenter_post_swapgs+0x149()

THREAD STATE SOBJ COUNT
ffffff01e8011c40 SLEEP CV 627 swtch+0x141 cv_wait+0x70 taskq_thread_wait+0xbe taskq_thread+0x37c thread_start+8 ffffff01e809bc40 FREE 172 ffffff4383cbb740 SLEEP CV 42 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd lwp_park+0x15e syslwp_park+0x63 _sys_sysenter_post_swapgs+0x149 ffffff01e80bfc40 FREE 26 av_dispatch_softvect+0x60 apix_dispatch_softint+0x41 ffffff01e99bcc40 SLEEP CV 26 swtch+0x141 cv_wait+0x70 squeue_polling_thread+0xa9 thread_start+8 ffffff01e99b6c40 SLEEP CV 26 swtch+0x141 cv_wait+0x70 squeue_worker+0x104 thread_start+8 ffffff4311cbbc60 SLEEP SHUTTLE 26 swtch_to+0xb6 shuttle_resume+0x2af door_return+0x3e0 doorfs32+0x180 sys_syscall32+0xff ffffff01eb818c40 SLEEP CV 24 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 kcfpool_svc+0x8e thread_start+8 ffffff01ebc1dc40 SLEEP CV 24 swtch+0x141 cv_wait+0x70 mptsas_doneq_thread+0x10b thread_start+8 ffffff01e837cc40 ONPROC 24 swtch+0x141 cpu_pause+0x80 thread_start+8 ffffff4383ddfb60 SLEEP CV 21 swtch+0x141 cv_timedwait_sig_hires+0x39d cv_waituntil_sig+0xfa nanosleep+0x19f _sys_sysenter_post_swapgs+0x149 ffffff4383cfbba0 SLEEP CV 19 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_timedwait_sig_hrtime+0x35 poll_common+0x50c pollsys+0xe3 _sys_sysenter_post_swapgs+0x149 ffffff01ea599c40 SLEEP CV 16 swtch+0x141 cv_wait+0x70 stmf_worker_task+0x505 thread_start+8 ffffff4311da4be0 SLEEP SHUTTLE 16 swtch_to+0xb6 shuttle_resume+0x2af door_return+0x3e0 doorfs32+0x180 _sys_sysenter_post_swapgs+0x149 ffffff01e986bc40 SLEEP CV 15 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 taskq_thread_wait+0x64 taskq_d_thread+0x145 thread_start+8 ffffff43137c5060 SLEEP SHUTTLE 13 swtch+0x141 shuttle_swtch+0x203 door_return+0x214 doorfs32+0x180 _sys_sysenter_post_swapgs+0x149 ffffff01e8370c40 SLEEP CV 12 swtch+0x141 cv_wait+0x70 evch_delivery_hold+0x70 evch_delivery_thr+0x29e thread_start+8 ffffff01e845ac40 FREE 10 apix_intr_exit+0x24 apix_intr_thread_epilog+0xcb
apix_dispatch_lowlevel+0x30 ffffff01e990dc40 SLEEP CV 9 swtch+0x141 cv_wait+0x70 i_mac_notify_thread+0xee thread_start+8 ffffff4311e53060 SLEEP SHUTTLE 8 swtch+0x141 shuttle_swtch+0x203 door_return+0x214 doorfs32+0x180 sys_syscall32+0xff ffffff4383d14820 SLEEP CV 7 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 pause+0x45 _sys_sysenter_post_swapgs+0x149 ffffff43137b87e0 SLEEP CV 6 swtch+0x141 cv_timedwait_sig_hires+0x39d cv_timedwait_sig_hrtime+0x2a poll_common+0x50c pollsys+0xe3 _sys_sysenter_post_swapgs+0x149 ffffff01e9992c40 SLEEP CV 6 swtch+0x141 cv_wait+0x70 mac_soft_ring_worker+0xb1 thread_start+8 ffffff4311d5b7e0 SLEEP CV 5 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd lwp_park+0x15e syslwp_park+0x63 sys_syscall32+0xff ffffff01e8454c40 FREE 4 apic_send_ipi+0x73 send_dirint+0x18 cbe_xcall+0xac cyclic_reprogram_here+0x46 cyclic_reprogram+0x68 lbolt_ev_to_cyclic+0xcf av_dispatch_softvect+0x60 apix_dispatch_softint+0x41 ffffff4383d11be0 SLEEP CV 4 swtch+0x141 cv_timedwait_sig_hires+0x39d cv_waituntil_sig+0xfa lwp_park+0x15e syslwp_park+0x63 _sys_sysenter_post_swapgs+0x149 ffffff01eb57ec40 SLEEP CV 4 swtch+0x141 cv_wait+0x70 eibnx_port_monitor+0x1f5 thread_start+8 ffffff01e8f93c40 SLEEP CV 4 swtch+0x141 cv_wait+0x70 ibtl_async_thread+0x1e1 thread_start+8 ffffff01eb454c40 SLEEP CV 4 swtch+0x141 cv_wait+0x70 rdsv3_af_thr_worker+0xd0 thread_start+8 ffffff4312026100 SLEEP CV 4 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 sigsuspend+0xf9 _sys_sysenter_post_swapgs+0x149 ffffff01e997ac40 SLEEP CV 3 swtch+0x141 cv_wait+0x70 mac_srs_worker+0x141 thread_start+8 ffffff4383cbb000 SLEEP CV 3 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd sigtimedwait+0x197 _sys_sysenter_post_swapgs+0x149 ffffff4311ef3440 SLEEP CV 3 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 pause+0x45 sys_syscall32+0xff ffffff01e8005c40 ONPROC 3 1 ffffff01e8883c40 SLEEP CV 2 swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c txg_thread_wait+0x5f txg_sync_thread+0xfd thread_start+8 ffffff43137ae500 SLEEP CV 2 swtch+0x141 cv_timedwait_sig_hires+0x39d cv_waituntil_sig+0xfa lwp_park+0x15e syslwp_park+0x63 sys_syscall32+0xff ffffff01e8298c40 SLEEP CV 2 swtch+0x141 cv_wait+0x70 spa_thread+0x1db thread_start+8 ffffff01e887ac40 SLEEP CV 2 swtch+0x141 cv_wait+0x70 txg_thread_wait+0xaf txg_quiesce_thread+0x106 thread_start+8 ffffff43106bb780 SLEEP CV 2 swtch+0x141 cv_wait_sig+0x185 cte_get_event+0xb3 ctfs_endpoint_ioctl+0xf9 ctfs_bu_ioctl+0x4b fop_ioctl+0x55 ioctl+0x9b sys_syscall32+0xff ffffff43137dd860 SLEEP CV 2 swtch+0x141 cv_wait_sig+0x185 door_unref+0x94 doorfs32+0xf5 _sys_sysenter_post_swapgs+0x149 ffffff43137aec40 SLEEP CV 2 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 waitid+0x24d waitsys32+0x36 _sys_sysenter_post_swapgs+0x149 ffffff01e8b51c40 FREE 1 apix_add_pending_hardint+0x17 apix_do_interrupt+0x21d _sys_rtt_ints_disabled+8 mutex_exit+9 cyclic_reprogram_here+0x46 cyclic_reprogram+0x68 lbolt_ev_to_cyclic+0xcf av_dispatch_softvect+0x60 apix_dispatch_softint+0x41 ffffff01e80cbc40 FREE 1 apix_setspl+0x17 do_splx+0x65 cbe_restore_level+0x17 cyclic_reprogram_cyclic+0xff cyclic_reprogram+0x68 lbolt_ev_to_cyclic+0xcf av_dispatch_softvect+0x60 apix_dispatch_softint+0x41 ffffff01e9597c40 FREE 1 i_ddi_vaddr_swap_put64+0x23 hermon_cq_poll+0x1a2 hermon_ci_poll_cq+0x2d 0xffffff438486b6f0 srpt_ch_scq_hdlr+0x184 ibtl_cq_handler_call+0x1b ibc_cq_handler+0x88 0x80872c36 hermon_eq_poll+0x20a 
hermon_isr+0xa6 0xffffff430fd1d500 apix_dispatch_pending_hardint+0x3f ffffff01e80c5c40 FREE 1 timeout_execute+0x139 timer_softintr+0x8d av_dispatch_softvect+0x78 apix_dispatch_softint+0x41 ffffff01e9f53c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 idm_wd_thread+0x203 thread_start+8 ffffff01e81b5c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 kcfpoold+0xf6 thread_start+8 ffffff01e81afc40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 page_capture_thread+0xb1 thread_start+8 ffffff01e8227c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 sata_event_daemon+0xff thread_start+8 ffffff01e9812c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 seg_pasync_thread+0xd1 thread_start+8 ffffff01e8ec1c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 stmf_svc_timeout+0x11a stmf_svc+0x1b8 taskq_thread+0x2d0 thread_start+8 ffffff01e81bbc40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c arc_reclaim_thread+0x11a thread_start+8 ffffff01e88d7c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c dce_reclaim_worker+0xab thread_start+8 ffffff01e81c1c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c l2arc_feed_thread+0xad thread_start+8 ffffff01ea5b1c40 SLEEP CV 1 swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c zone_status_timedwait+0x6b auto_do_unmount+0xc7 thread_start+8 ffffff430fdaab00 SLEEP CV 1 swtch+0x141 cv_timedwait_sig_hires+0x39d cv_timedwait_sig_hrtime+0x2a poll_common+0x50c pollsys+0xe3 sys_syscall32+0xff ffffff01eb548c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 crypto_bufcall_service+0x8d thread_start+8 ffffff01e88adc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 dld_taskq_dispatch+0x115 thread_start+8 ffffff01eb578c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 eibnx_create_eoib_node+0xdb thread_start+8 ffffff01e9800c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 fsflush+0x21d thread_start+8 ffffff01e8fd5c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 ibcm_process_tlist+0x1e1 thread_start+8 ffffff01e98adc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 ibd_async_work+0x23a thread_start+8 ffffff01e88ddc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 ill_taskq_dispatch+0x155 thread_start+8 ffffff01e88c5c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 ipsec_loader+0x149 thread_start+8 ffffff01ea153c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 irm_balance_thread+0x50 thread_start+8 ffffff01ea177c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 log_event_deliver+0x1b3 thread_start+8 ffffff01e9986c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 mac_rx_srs_poll_ring+0xad thread_start+8 ffffff01e888fc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 memscrubber+0x146 thread_start+8 ffffff01e9806c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 mod_uninstall_daemon+0x123 thread_start+8 ffffff01e97fac40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 pageout+0x1e6 thread_start+8 ffffff01e980cc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 pageout_scanner+0x121 thread_start+8 ffffff01e81c7c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 pm_dep_thread+0xbd thread_start+8 ffffff01e8023c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 scsi_hba_barrier_daemon+0xd6 thread_start+8 ffffff01e8029c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 scsi_lunchg1_daemon+0x1de thread_start+8 ffffff01e802fc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 scsi_lunchg2_daemon+0x121 thread_start+8 ffffff01e9d9ec40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 softmac_taskq_dispatch+0x11d thread_start+8 ffffff01e819dc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 streams_bufcall_service+0x8d thread_start+8 ffffff01e81a3c40 
SLEEP CV 1 swtch+0x141 cv_wait+0x70 streams_qbkgrnd_service+0x151 thread_start+8 ffffff01e81a9c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 streams_sqbkgrnd_service+0xe5 thread_start+8 ffffff01e836ac40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 task_commit+0xd9 thread_start+8 ffffff01e800bc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 thread_reaper+0xb9 thread_start+8 ffffff01e80f5c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 timeout_taskq_thread+0x95 thread_start+8 ffffff01eb2bbc40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 ufs_thread_idle+0x143 thread_start+8 ffffff01eb2c1c40 SLEEP CV 1 swtch+0x141 cv_wait+0x70 ufs_thread_run+0x80 ufs_thread_hlock+0x73 thread_start+8 ffffff4310bb1b40 SLEEP CV 1 swtch+0x141 cv_wait_sig+0x185 door_unref+0x94 doorfs32+0xf5 sys_syscall32+0xff ffffff4383ddf080 SLEEP CV 1 swtch+0x141 cv_wait_sig+0x185 so_dequeue_msg+0x2f7 so_recvmsg+0x249 socket_recvmsg+0x33 socket_vop_read+0x5f fop_read+0x8b read+0x2a7 read32+0x1e _sys_sysenter_post_swapgs+0x149 ffffff43137bc080 SLEEP CV 1 swtch+0x141 cv_wait_sig+0x185 str_cv_wait+0x27 strwaitq+0x2c3 strread+0x144 spec_read+0x66 fop_read+0x8b read+0x2a7 read32+0x1e _sys_sysenter_post_swapgs+0x149 ffffff4311cbe8a0 SLEEP CV 1 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd port_getn+0x39f portfs+0x25d portfs32+0x78 sys_syscall32+0xff ffffff43106bb040 SLEEP CV 1 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd sigtimedwait+0x197 sys_syscall32+0xff ffffff431379aae0 SLEEP CV 1 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 fifo_read+0xc9 fop_read+0x8b read+0x2a7 read32+0x1e _sys_sysenter_post_swapgs+0x149 ffffff43106bbb20 SLEEP CV 1 swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 sigsuspend+0xf9 sys_syscall32+0xff ffffff4311cbb8c0 SLEEP SHUTTLE 1 swtch_to+0xb6 shuttle_resume+0x2af door_call+0x336 doorfs32+0xa7 sys_syscall32+0xff ffffff01e9f71c40 RUN 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 taskq_thread_wait+0x64 taskq_d_thread+0x145 thread_start+8 ffffff01e8137c40 RUN 1 swtch+0x141 cv_wait+0x70 taskq_thread_wait+0xbe taskq_thread+0x37c thread_start+8 ffffff01e9561c40 ONPROC 1 0 ffffff01e8388c40 ONPROC 1 0xb ffffff01e8b15c40 ONPROC 1 0xffffff01e8b157e0 av_set_softint_pending+0x17 siron+0x16 kpreempt+0x1b1 sys_rtt_common+0x1bf _sys_rtt_ints_disabled+8 wrmsr+0xd tsc_gethrtimeunscaled_delta+0x3c gethrtime_unscaled+0xa apix_do_softint_prolog+0x59 0xffffff430f76f000 acpi_cpu_cstate+0x11b cpu_acpi_idle+0x8d cpu_idle_adaptive+0x13 idle+0xa7 thread_start+8 ffffff01e8424c40 ONPROC 1 0xffffff430e6fc080 ffffff01e8bf1c40 ONPROC 1 0xffffff430f83a200 ffffff01e8aaac40 ONPROC 1 0xffffff430f83a880 ffffff01e840ec40 ONPROC 1 0xffffff430f83aa00 ffffff01e853dc40 ONPROC 1 0xffffff430f83ac80 ffffff01e84b1c40 ONPROC 1 0xffffff430f83ae80 ffffff01e92a9c40 ONPROC 1 0xffffff430f953180 ffffff01e91c1c40 ONPROC 1 0xffffff430f953400 ffffff01e9156c40 ONPROC 1 0xffffff430f953780 ffffff01e908dc40 ONPROC 1 0xffffff430f953900 ffffff01e8dd9c40 ONPROC 1 0xffffff430f953b00 ffffff01e94d8c40 ONPROC 1 0xffffff430fca3600 ffffff01e937fc40 ONPROC 1 0xffffff430fca3d80 ffffff01e9314c40 ONPROC 1 0xffffff430fca3f00 ffffff01e8c5cc40 ONPROC 1 apix_add_pending_hardint+0x17 apix_do_interrupt+0x21d _sys_rtt_ints_disabled+8 wrmsr+0xd do_splx+0x65 0xffffff01e8c92c40 apix_hilevel_intr_prolog+0x3e acpi_cpu_cstate+0x11b cpu_acpi_idle+0x8d cpu_idle_adaptive+0x13 idle+0xa7 thread_start+8 ffffff01e922cc40 ONPROC 1 apix_do_softint_prolog+0x59 0xffffff430f9aaac0 acpi_cpu_cstate+0x11b cpu_acpi_idle+0x8d 
cpu_idle_adaptive+0x13 idle+0xa7 thread_start+8 ffffff01e8cdfc40 ONPROC 1 apix_intr_exit+0x24 0xffffff430f8fea80 apix_do_interrupt+0xfe _interrupt+0xba acpi_cpu_cstate+0x11b cpu_acpi_idle+0x8d cpu_idle_adaptive+0x13 idle+0xa7 thread_start+8 ffffff01e9455c40 ONPROC 1 swtch+0x141 idle+0xbc thread_start+8 fffffffffbc2fac0 STOPPED 1 swtch+0x141 sched+0x835 main+0x46c ffffff4383d08c60 PANIC 1 param_preset 0xfffffffffbb2fa18 anon_free+0x74 segvn_free+0x242 seg_free+0x30 segvn_unmap+0xcde as_free+0xe7 relvm+0x220 proc_exit+0x454 exit+0x15 rexit+0x18 _sys_sysenter_post_swapgs+0x149 From moo at wuffers.net Sat Nov 16 07:48:43 2013 From: moo at wuffers.net (wuffers) Date: Sat, 16 Nov 2013 02:48:43 -0500 Subject: [OmniOS-discuss] kernel panic - anon_decref In-Reply-To: References: <5286A67C.80706@gmail.com> Message-ID: When it pours, it rains. With r151006y, I had two kernel panics in quick succession while trying to create some zero thick eager disks (4 at the same time) in ESXi. They are now "kernel heap corruption detected" instead of anon_decref. Kernel panic 2 (dump info: https://drive.google.com/file/d/0B7mCJnZUzJPKMHhqZHJnaDEzYkk) http://i.imgur.com/eIssxmc.png?1 http://i.imgur.com/MXJy4zP.png?1 TIME UUID SUNW-MSG-ID Nov 16 2013 00:51:24.912170000 5998ba1e-3aa5-ccac-e885-be4897cfcfe8 SUNOS-8000-KL TIME CLASS ENA Nov 16 00:51:24.8638 ireport.os.sunos.panic.dump_available 0x0000000000000000 Nov 16 00:49:58.8671 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000 nvlist version: 0 version = 0x0 class = list.suspect uuid = 5998ba1e-3aa5-ccac-e885-be4897cfcfe8 code = SUNOS-8000-KL diag-time = 1384581084 866703 de = fmd:///module/software-diagnosis fault-list-sz = 0x1 fault-list = (array of embedded nvlists) (start fault-list[0]) nvlist version: 0 version = 0x0 class = defect.sunos.kernel.panic certainty = 0x64 asru = sw:///:path=/var/crash/unknown/.5998ba1e-3aa5-ccac-e885-be4897cfcfe8 resource = sw:///:path=/var/crash/unknown/.5998ba1e-3aa5-ccac-e885-be4897cfcfe8 savecore-succcess = 1 dump-dir = /var/crash/unknown dump-files = vmdump.1 os-instance-uuid = 5998ba1e-3aa5-ccac-e885-be4897cfcfe8 panicstr = kernel heap corruption detected panicstack = fffffffffba49c04 () | genunix:kmem_slab_free+c1 () | genunix:kmem_magazine_destroy+6e () | genunix:kmem_depot_ws_reap+5d () | genunix:kmem_cache_magazine_purge+118 () | genunix:kmem_cache_magazine_resize+40 () | genunix:taskq_thread+2d0 () | unix:thread_start+8 () | crashtime = 1384577735 panic-time = Fri Nov 15 23:55:35 2013 EST (end fault-list[0]) fault-status = 0x1 severity = Major __ttl = 0x1 __tod = 0x528707dc 0x365e9c10 kernel panic 3 (dump info: https://drive.google.com/file/d/0B7mCJnZUzJPKbnZIeWZzQjhUOTQ): (looked the same, no screenshots) TIME UUID SUNW-MSG-ID Nov 16 2013 01:44:43.327489000 a6592c60-199f-ead5-9586-ff013bf5ab2d SUNOS-8000-KL TIME CLASS ENA Nov 16 01:44:43.2941 ireport.os.sunos.panic.dump_available 0x0000000000000000 Nov 16 01:44:03.5356 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000 nvlist version: 0 version = 0x0 class = list.suspect uuid = a6592c60-199f-ead5-9586-ff013bf5ab2d code = SUNOS-8000-KL diag-time = 1384584283 296816 de = fmd:///module/software-diagnosis fault-list-sz = 0x1 fault-list = (array of embedded nvlists) (start fault-list[0]) nvlist version: 0 version = 0x0 class = defect.sunos.kernel.panic certainty = 0x64 asru = sw:///:path=/var/crash/unknown/.a6592c60-199f-ead5-9586-ff013bf5ab2d resource = sw:///:path=/var/crash/unknown/.a6592c60-199f-ead5-9586-ff013bf5ab2d savecore-succcess = 1 
dump-dir = /var/crash/unknown dump-files = vmdump.2 os-instance-uuid = a6592c60-199f-ead5-9586-ff013bf5ab2d panicstr = kernel heap corruption detected panicstack = fffffffffba49c04 () | genunix:kmem_slab_free+c1 () | genunix:kmem_magazine_destroy+6e () | genunix:kmem_cache_magazine_purge+dc () | genunix:kmem_cache_magazine_resize+40 () | genunix:taskq_thread+2d0 () | unix:thread_start+8 () | crashtime = 1384582658 panic-time = Sat Nov 16 01:17:38 2013 EST (end fault-list[0]) fault-status = 0x1 severity = Major __ttl = 0x1 __tod = 0x5287145b 0x138515e8 --- Now, having looked through all 3, I can see in the first two there were some warnings: WARNING: /pci at 0 ,0/pci8086,3c08 at 3 /pci1000,3030 at 0 (mpt_sas1): mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 The /var/adm/message also had a sprinkling of these: Nov 15 23:36:43 san1 scsi: [ID 243001 kern.warning] WARNING: /pci at 0 ,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): Nov 15 23:36:43 san1 mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303 Nov 15 23:36:43 san1 scsi: [ID 365881 kern.info] /pci at 0,0/pci8086,3c08 at 3 /pci1000,3030 at 0 (mpt_sas1): Nov 15 23:36:43 san1 Log info 0x31120303 received for target 10. Nov 15 23:36:43 san1 scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc Following this http://lists.omniti.com/pipermail/omnios-discuss/2013-March/000544.html to map the target disk, it's my Stec ZeusRAM ZIL drive that's configured as a mirror (if I've done it right). I didn't see these errors in the 3rd dump, so don't know if it's contributing. I may try to do a memtest tomorrow on the system just in case it's some hardware issues. My zpool status shows all my drives okay with no known data errors. Not sure how to proceed from here.. my Hyper-V hosts have been using the SAN with no issues for 2+ months since it's been up and configured, using SRP and IB. I'd expect the VM hosts to crash before my SAN does. Of course, I can make the vmdump.x files available to anyone who wants to look at them (7GB, 8GB, 4GB). -------------- next part -------------- An HTML attachment was scrubbed... URL: From moo at wuffers.net Mon Nov 18 22:42:31 2013 From: moo at wuffers.net (wuffers) Date: Mon, 18 Nov 2013 17:42:31 -0500 Subject: [OmniOS-discuss] kernel panic - anon_decref In-Reply-To: References: <5286A67C.80706@gmail.com> Message-ID: Just to add to this, I had a 4th kernel panic, and this was a 3rd different type. I did a memtest on the unit after this last panic, and it ran successfully (24+ hours). I'm skeptical that it's memory, or something to do with the IOCLogInfo=0x31120303 error (last 2 panics didn't have that - I may start another thread on that), as I've been running this config with Hyper-V hosts just fine. Adding an ESXi host (just one for now) into the mix seems to make things unstable. Should I be starting an issue in the Illumos issue report ( https://www.illumos.org/projects/illumos-gate/issues/new), and if so, just one report or one for each panic type? 
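(For anyone who ends up looking at the vmdump.x files, a rough sequence on illumos for turning one of them into something readable is to expand it with savecore and open it with mdb; the directory and file number below are the ones from this thread, everything else is generic:

# cd /var/crash/unknown
# savecore -vf vmdump.1 .
# mdb unix.1 vmcore.1
> ::status
> ::msgbuf
> ::stacks

::status prints the panic summary, ::msgbuf the console messages leading up to the panic, and ::stacks a per-thread stack summary similar to the listing pasted earlier in the thread.)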
List of kernel panics so far: Panic 1: anon_decref: slot count 0 Panic 2-3: kernel heap corruption detected Panic 4: BAD TRAP: type=e (#pf Page fault) rp=ffffff01e97d7a70 addr=1500010 occurred in module "genunix" due to an illegal access to a user address Latest crash file here: https://drive.google.com/file/d/0B7mCJnZUzJPKWW83TFBhVHpVajQ TIME UUID SUNW-MSG-ID Nov 17 2013 09:22:20.799446000 9d55f532-d39f-4dea-8f57-d3b24c8e9dff SUNOS-8000-KL TIME CLASS ENA Nov 17 09:22:20.7654 ireport.os.sunos.panic.dump_available 0x0000000000000000 Nov 17 09:21:14.0267 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000 nvlist version: 0 version = 0x0 class = list.suspect uuid = 9d55f532-d39f-4dea-8f57-d3b24c8e9dff code = SUNOS-8000-KL diag-time = 1384698140 767808 de = fmd:///module/software-diagnosis fault-list-sz = 0x1 fault-list = (array of embedded nvlists) (start fault-list[0]) nvlist version: 0 version = 0x0 class = defect.sunos.kernel.panic certainty = 0x64 asru = sw:///:path=/var/crash/unknown/.9d55f532-d39f-4dea-8f57-d3b24c8e9dff resource = sw:///:path=/var/crash/unknown/.9d55f532-d39f-4dea-8f57-d3b24c8e9dff savecore-succcess = 1 dump-dir = /var/crash/unknown dump-files = vmdump.3 os-instance-uuid = 9d55f532-d39f-4dea-8f57-d3b24c8e9dff panicstr = BAD TRAP: type=e (#pf Page fault) rp=ffffff01e97d7a70 addr=1500010 occurred in module "genunix" due to an illegal access to a user address panicstack = unix:die+df () | unix:trap+db3 () | unix:cmntrap+e6 () | genunix:anon_decref+35 () | genunix:anon_free+74 () | genunix:segvn_free+242 () | genunix:seg_free+30 () | genunix:segvn_unmap+cde () | genunix:as_free+e7 () | genunix:relvm+220 () | genunix:proc_exit+454 () | genunix:exit+15 () | genunix:rexit+18 () | unix:brand_sys_sysenter+1c9 () | crashtime = 1384592942 panic-time = Sat Nov 16 04:09:02 2013 EST (end fault-list[0]) fault-status = 0x1 severity = Major __ttl = 0x1 __tod = 0x5288d11c 0x2fa693f0 On Sat, Nov 16, 2013 at 2:48 AM, wuffers wrote: > When it pours, it rains. With r151006y, I had two kernel panics in quick > succession while trying to create some zero thick eager disks (4 at the > same time) in ESXi. They are now "kernel heap corruption detected" instead > of anon_decref. 
> > Kernel panic 2 (dump info: > https://drive.google.com/file/d/0B7mCJnZUzJPKMHhqZHJnaDEzYkk) > http://i.imgur.com/eIssxmc.png?1 > http://i.imgur.com/MXJy4zP.png?1 > > TIME UUID > SUNW-MSG-ID > Nov 16 2013 00:51:24.912170000 5998ba1e-3aa5-ccac-e885-be4897cfcfe8 > SUNOS-8000-KL > > TIME CLASS ENA > Nov 16 00:51:24.8638 ireport.os.sunos.panic.dump_available > 0x0000000000000000 > Nov 16 00:49:58.8671 ireport.os.sunos.panic.dump_pending_on_device > 0x0000000000000000 > > > nvlist version: 0 > version = 0x0 > class = list.suspect > uuid = 5998ba1e-3aa5-ccac-e885-be4897cfcfe8 > code = SUNOS-8000-KL > diag-time = 1384581084 866703 > > de = fmd:///module/software-diagnosis > fault-list-sz = 0x1 > fault-list = (array of embedded nvlists) > (start fault-list[0]) > nvlist version: 0 > version = 0x0 > class = defect.sunos.kernel.panic > certainty = 0x64 > asru = > sw:///:path=/var/crash/unknown/.5998ba1e-3aa5-ccac-e885-be4897cfcfe8 > resource = > sw:///:path=/var/crash/unknown/.5998ba1e-3aa5-ccac-e885-be4897cfcfe8 > > savecore-succcess = 1 > dump-dir = /var/crash/unknown > dump-files = vmdump.1 > os-instance-uuid = 5998ba1e-3aa5-ccac-e885-be4897cfcfe8 > panicstr = kernel heap corruption detected > panicstack = fffffffffba49c04 () | > genunix:kmem_slab_free+c1 () | genunix:kmem_magazine_destroy+6e () | > genunix:kmem_depot_ws_reap+5d () | genunix:kmem_cache_magazine_purge+118 () > | genunix:kmem_cache_magazine_resize+40 () | genunix:taskq_thread+2d0 () | > unix:thread_start+8 () | > crashtime = 1384577735 > panic-time = Fri Nov 15 23:55:35 2013 EST > > (end fault-list[0]) > > fault-status = 0x1 > severity = Major > __ttl = 0x1 > __tod = 0x528707dc 0x365e9c10 > > kernel panic 3 (dump info: > https://drive.google.com/file/d/0B7mCJnZUzJPKbnZIeWZzQjhUOTQ): > (looked the same, no screenshots) > > TIME UUID > SUNW-MSG-ID > Nov 16 2013 01:44:43.327489000 a6592c60-199f-ead5-9586-ff013bf5ab2d > SUNOS-8000-KL > > TIME CLASS ENA > Nov 16 01:44:43.2941 ireport.os.sunos.panic.dump_available > 0x0000000000000000 > Nov 16 01:44:03.5356 ireport.os.sunos.panic.dump_pending_on_device > 0x0000000000000000 > > > nvlist version: 0 > version = 0x0 > class = list.suspect > uuid = a6592c60-199f-ead5-9586-ff013bf5ab2d > code = SUNOS-8000-KL > diag-time = 1384584283 296816 > > de = fmd:///module/software-diagnosis > fault-list-sz = 0x1 > fault-list = (array of embedded nvlists) > (start fault-list[0]) > nvlist version: 0 > version = 0x0 > class = defect.sunos.kernel.panic > certainty = 0x64 > asru = > sw:///:path=/var/crash/unknown/.a6592c60-199f-ead5-9586-ff013bf5ab2d > resource = > sw:///:path=/var/crash/unknown/.a6592c60-199f-ead5-9586-ff013bf5ab2d > > savecore-succcess = 1 > dump-dir = /var/crash/unknown > dump-files = vmdump.2 > os-instance-uuid = a6592c60-199f-ead5-9586-ff013bf5ab2d > panicstr = kernel heap corruption detected > panicstack = fffffffffba49c04 () | > genunix:kmem_slab_free+c1 () | genunix:kmem_magazine_destroy+6e () | > genunix:kmem_cache_magazine_purge+dc () | > genunix:kmem_cache_magazine_resize+40 () | genunix:taskq_thread+2d0 () | > unix:thread_start+8 () | > crashtime = 1384582658 > panic-time = Sat Nov 16 01:17:38 2013 EST > > (end fault-list[0]) > > fault-status = 0x1 > severity = Major > __ttl = 0x1 > __tod = 0x5287145b 0x138515e8 > > > --- > Now, having looked through all 3, I can see in the first two there were > some warnings: > > WARNING: /pci at 0 ,0/pci8086,3c08 at 3 /pci1000,3030 at 0 (mpt_sas1): > mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303 > > The /var/adm/message 
also had a sprinkling of these: > Nov 15 23:36:43 san1 scsi: [ID 243001 kern.warning] WARNING: /pci at 0 > ,0/pci8086,3c08 at 3/pci1000,3030 at 0 (mpt_sas1): > Nov 15 23:36:43 san1 mptsas_handle_event: IOCStatus=0x8000, > IOCLogInfo=0x31120303 > Nov 15 23:36:43 san1 scsi: [ID 365881 kern.info] /pci at 0,0/pci8086,3c08 at 3 > /pci1000,3030 at 0 (mpt_sas1): > Nov 15 23:36:43 san1 Log info 0x31120303 received for target 10. > Nov 15 23:36:43 san1 scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc > > Following this > http://lists.omniti.com/pipermail/omnios-discuss/2013-March/000544.html to map the target disk, it's my Stec ZeusRAM ZIL drive that's configured as > a mirror (if I've done it right). I didn't see these errors in the 3rd > dump, so don't know if it's contributing. I may try to do a memtest > tomorrow on the system just in case it's some hardware issues. > > My zpool status shows all my drives okay with no known data errors. > > Not sure how to proceed from here.. my Hyper-V hosts have been using the > SAN with no issues for 2+ months since it's been up and configured, using > SRP and IB. I'd expect the VM hosts to crash before my SAN does. > > Of course, I can make the vmdump.x files available to anyone who wants to > look at them (7GB, 8GB, 4GB). > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From rafibeyli at gmail.com Tue Nov 19 15:40:53 2013 From: rafibeyli at gmail.com (Hafiz Rafibeyli) Date: Tue, 19 Nov 2013 17:40:53 +0200 (EET) Subject: [OmniOS-discuss] zpool upgrade In-Reply-To: References: Message-ID: <1694406958.258009.1384875653929.JavaMail.zimbra@cantekstil.com.tr>
Hello, any side effects upgrading a zfs pool to feature flags? I tested it on my test server, but is it ok for prod zfs pools?
# zpool upgrade ydkpool
This system supports ZFS pool feature flags.
Successfully upgraded 'ydkpool' from version 28 to feature flags. Enabled the following features on 'ydkpool': async_destroy empty_bpobj lz4_compress
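(A quick way to see what an upgrade like this actually changes, using the pool name from this message, is to compare the pool's feature properties before and after:

# zpool get version ydkpool
# zpool get all ydkpool | grep feature@
# zpool upgrade ydkpool
# zpool get all ydkpool | grep feature@

After the upgrade "zpool get version" reports "-", and the supported feature@ properties show up as enabled; a feature only flips to active once something uses it, e.g. lz4_compress after a "zfs set compression=lz4 ydkpool/somefs" (the dataset name is just a placeholder). Note that once the pool is on feature flags it can no longer be imported by a version-28-only implementation at all, whichever features are active.)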
From jimklimov at cos.ru Tue Nov 19 21:16:52 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Tue, 19 Nov 2013 22:16:52 +0100 Subject: [OmniOS-discuss] zpool upgrade In-Reply-To: <1694406958.258009.1384875653929.JavaMail.zimbra@cantekstil.com.tr> References: <1694406958.258009.1384875653929.JavaMail.zimbra@cantekstil.com.tr> Message-ID: <528BD544.5000304@cos.ru>
On 2013-11-19 16:40, Hafiz Rafibeyli wrote: > Hello, > > any side effects upgrading zfs pool for future flags? > > I tested it on my test server ,but is it ok for prod zfs pools? > > > > > # zpool upgrade ydkpool > This system supports ZFS pool feature flags. > > Successfully upgraded 'ydkpool' from version 28 to feature flags. > Enabled the following features on 'ydkpool': > async_destroy > empty_bpobj > lz4_compress
I guess these features are in the mass-testing long enough and did not cause known grief, so enabling should be okay - at a risk of being the first "strange" configuration where things don't work ;) Possibly, the main drawback would be the loss of compatibility with proprietary Solaris ZFS, i.e. you won't be able to import the pool with an Oracle Solaris installation. If that does not matter to you - go ahead... and do have backups prepared, as always ;) //Jim
From henson at acm.org Sat Nov 23 03:16:44 2013 From: henson at acm.org (Paul B. Henson) Date: Fri, 22 Nov 2013 19:16:44 -0800 Subject: [OmniOS-discuss] IPMI? Message-ID: <002401cee7fa$719bcfc0$54d36f40$@acm.org>
I'm trying to get openipmi working on a supermicro box. Based on https://www.illumos.org/issues/370, it seems there should be an ipmi driver in illumos, and that commit is old enough I'd think it would be in current stable. I don't see any ipmi devices though. Is the driver not in stable?
I also see some threads talking about having to patch ipmitool to use it, I'm using the vanilla pkgsrc ipmitool, not the joyent one, so I'm not sure what that's about. It's late on a Friday ;), so I'm going to give up for now, in the hopes some kind soul has a better clue on this than me and can save me some time digging into it :). Thanks much... From sk at kram.io Sat Nov 23 10:19:47 2013 From: sk at kram.io (Steffen Kram) Date: Sat, 23 Nov 2013 11:19:47 +0100 Subject: [OmniOS-discuss] IPMI? In-Reply-To: <002401cee7fa$719bcfc0$54d36f40$@acm.org> References: <002401cee7fa$719bcfc0$54d36f40$@acm.org> Message-ID: Hi Paul, the current stable does already include the patch described in some of the former threads. It also does work on the Supermicro X9? Boards that I?m currently using. I can read sensor data and set the ipmi preferences. What exactly are you trying to accomplish? Cheers, Steffen Am 23.11.2013 um 04:16 schrieb Paul B. Henson : > I'm trying to get openipmi working on a supermicro box. Based on > https://www.illumos.org/issues/370, it seems there should be an ipmi driver > in illumos, and that commit is old enough I'd think it would be in current > stable. > > I don't see any ipmi devices though. Is the driver not in stable? I also see > some threads talking about having to patch ipmitool to use it, I'm using the > vanilla pkgsrc ipmitool, not the joyent one, so I'm not sure what that's > about. It's late on a Friday ;), so I'm going to give up for now, in the > hopes some kind soul has a better clue on this than me and can save me some > time digging into it :). > > Thanks much... > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From esproul at omniti.com Sat Nov 23 17:28:50 2013 From: esproul at omniti.com (Eric Sproul) Date: Sat, 23 Nov 2013 12:28:50 -0500 Subject: [OmniOS-discuss] IPMI? In-Reply-To: <002401cee7fa$719bcfc0$54d36f40$@acm.org> References: <002401cee7fa$719bcfc0$54d36f40$@acm.org> Message-ID: On Fri, Nov 22, 2013 at 10:16 PM, Paul B. Henson wrote: > I'm trying to get openipmi working on a supermicro box. Based on > https://www.illumos.org/issues/370, it seems there should be an ipmi driver > in illumos, and that commit is old enough I'd think it would be in current > stable. It's pkg:/driver/ipmi, but it isn't installed by default. There's also pkg:/system/management/ipmitool if that's helpful to you. Regards, Eric From tobi at oetiker.ch Sun Nov 24 15:54:16 2013 From: tobi at oetiker.ch (Tobias Oetiker) Date: Sun, 24 Nov 2013 16:54:16 +0100 (CET) Subject: [OmniOS-discuss] how is r151008 coming along Message-ID: Wondering ... how is the r151008 release coming along ... would love to get my little hands on all the fixes and enhancements that went into illumos over the last few months ... -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland http://it.oetiker.ch tobi at oetiker.ch ++41 62 775 9902 / sb: -9900 From jimklimov at cos.ru Sun Nov 24 15:59:09 2013 From: jimklimov at cos.ru (Jim Klimov) Date: Sun, 24 Nov 2013 16:59:09 +0100 Subject: [OmniOS-discuss] how is r151008 coming along In-Reply-To: References: Message-ID: <5292224D.5060604@cos.ru> On 2013-11-24 16:54, Tobias Oetiker wrote: > Wondering ... 
how is the r151008 release coming along ... would > love to get my little hands on all the fixes and enhancements that > went into illumos over the last few months ... Wouldn't it suffice to take the previous stable, build your own illumos-gate, and (is it possible to) later upgrade to new stable? Or do you want not only the kernel improvements? //Jim From emunch at utmi.in Sun Nov 24 20:19:59 2013 From: emunch at utmi.in (Sam M) Date: Mon, 25 Nov 2013 01:49:59 +0530 Subject: [OmniOS-discuss] how is r151008 coming along In-Reply-To: <5292224D.5060604@cos.ru> References: <5292224D.5060604@cos.ru> Message-ID: Am also waiting for r151008. Was using bloody earlier, upgraded my zpool, now can't access, with either release or bloody. So I'm also waiting. @Jim - I don't want to "build" anything, I don't want to fiddle around with the OS, want to just keep in step with a standard. Why complicate things, build my own, and then be out of sync all over again. On 24 November 2013 21:29, Jim Klimov wrote: > On 2013-11-24 16:54, Tobias Oetiker wrote: > >> Wondering ... how is the r151008 release coming along ... would >> love to get my little hands on all the fixes and enhancements that >> went into illumos over the last few months ... >> > > Wouldn't it suffice to take the previous stable, build your own > illumos-gate, and (is it possible to) later upgrade to new stable? > Or do you want not only the kernel improvements? > > //Jim > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From henson at acm.org Sun Nov 24 20:48:48 2013 From: henson at acm.org (Paul B. Henson) Date: Sun, 24 Nov 2013 12:48:48 -0800 Subject: [OmniOS-discuss] IPMI? In-Reply-To: References: <002401cee7fa$719bcfc0$54d36f40$@acm.org> Message-ID: <20131124204848.GK5195@bender.unx.csupomona.edu> On Sat, Nov 23, 2013 at 12:28:50PM -0500, Eric Sproul wrote: > It's pkg:/driver/ipmi, but it isn't installed by default. There's > also pkg:/system/management/ipmitool if that's helpful to you. D'oh, I didn't even think of running a 'pkg list -a | grep ipmi', I blame the lateness on a Friday :). That's perfect, having ipmitool is icing on the cake. I installed both of those and can now access the bmc from the local os cli. Thanks much! From henson at acm.org Sun Nov 24 20:50:21 2013 From: henson at acm.org (Paul B. Henson) Date: Sun, 24 Nov 2013 12:50:21 -0800 Subject: [OmniOS-discuss] IPMI? In-Reply-To: References: <002401cee7fa$719bcfc0$54d36f40$@acm.org> Message-ID: <20131124205021.GL5195@bender.unx.csupomona.edu> On Sat, Nov 23, 2013 at 11:19:47AM +0100, Steffen Kram wrote: > the former threads. It also does work on the Supermicro X9? Boards > that I?m currently using. I can read sensor data and set the ipmi > preferences. What exactly are you trying to accomplish? Basically just accessing the bmc from the local os, dumping the sel, etc; Eric pointed out the ipmi driver isn't part of the base install, once I installed it everything worked great. Thanks... From scotty at jhu.edu Mon Nov 25 15:58:09 2013 From: scotty at jhu.edu (Scott Roberts) Date: Mon, 25 Nov 2013 15:58:09 +0000 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS Message-ID: All, Does anyone have experience installing Oracle under OmniOS? Unfortunately, even in this day and age Oracle does not provide a text-based installer and OmniOS does not provide any X packaging. 
A list of steps or a link to a howto would be fine. I am hooked on OmniOS and am not looking to switch to a different Illumos flavor simply because of X support. Cheers, --Scott -------------- next part -------------- An HTML attachment was scrubbed... URL:
From johan.kragsterman at capvert.se Mon Nov 25 16:33:12 2013 From: johan.kragsterman at capvert.se (Johan Kragsterman) Date: Mon, 25 Nov 2013 17:33:12 +0100 Subject: [OmniOS-discuss] Re: Oracle 11g/12c on OmniOS In-Reply-To: References: Message-ID:
-----"OmniOS-discuss" wrote: ----- To: "omnios-discuss at lists.omniti.com" From: Scott Roberts Sent by: "OmniOS-discuss" Date: 2013-11-25 16:59 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS
All, Does anyone have experience installing Oracle under OmniOS? Unfortunately, even in this day and age Oracle does not provide a text-based installer and OmniOS does not provide any X packaging. A list of steps or a link to a howto would be fine. I am hooked on OmniOS and am not looking to switch to a different Illumos flavor simply because of X support. Cheers, --Scott
Guess many people would like to have that. I've seen some poor instructions around, but I don't remember where, actually... If you solve it, pls notify list of the process, and preferably, put it in the wiki... Rgrds Johan _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss
From henson at acm.org Mon Nov 25 18:23:36 2013 From: henson at acm.org (Paul B. Henson) Date: Mon, 25 Nov 2013 10:23:36 -0800 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS In-Reply-To: References: Message-ID: <2BFD6C98-3D85-4382-AF6C-D280A6A1843C@acm.org>
> On Nov 25, 2013, at 7:58 AM, Scott Roberts wrote: > > Does anyone have experience installing Oracle under OmniOS? Out of curiosity, why are you trying to get oracle working under omnios? Obviously they're not going to support it (I have enough trouble getting oracle installed/getting support on officially validated operating systems ;) ). As long as you're going with the open source os, why not a nice open source database like Postgres :)? I can't think of a time we deployed oracle where no support wouldn't have been a show stopper for the people that owned the project. Good luck though. Have you tried installing X from pkgsrc? I've seen reports that works reasonably well.
From scotty at jhu.edu Mon Nov 25 18:58:49 2013 From: scotty at jhu.edu (Scott Roberts) Date: Mon, 25 Nov 2013 18:58:49 +0000 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS In-Reply-To: <2BFD6C98-3D85-4382-AF6C-D280A6A1843C@acm.org> References: <2BFD6C98-3D85-4382-AF6C-D280A6A1843C@acm.org> Message-ID:
It is an experiment, nothing more. If there was a way to do a text-based install I would have dispensed with X entirely. I tried installing X using pkgsrc using various instructions but never got it to actually start up an X server nor successfully pass the display back via SSH. On 11/25/13, 1:23 PM, "Paul B. Henson" wrote: > >> On Nov 25, 2013, at 7:58 AM, Scott Roberts wrote: >> >> Does anyone have experience installing Oracle under OmniOS? > >Out of curiosity, why are you trying to get oracle working under omnios? >Obviously they're not going to support it (I have enough trouble getting >oracle installed/getting support on officially validated operating >systems ;) ). As long as you're going with the open source os, why not a >nice open source database like Postgres :)?
> >I can't think of a time we deployed oracle where no support wouldn't have >been a show stopper for the people that owned the project. > >Good luck though. Have you tried installing X from pkgsrc? I've seen >reports that works reasonably well. From ae at elahi.ru Mon Nov 25 18:59:49 2013 From: ae at elahi.ru (Andrew Evdokimov) Date: Mon, 25 Nov 2013 22:59:49 +0400 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS In-Reply-To: References: Message-ID: <52939E25.8020507@elahi.ru> Scott Roberts wrote, On 25.11.2013 19:58: > Does anyone have experience installing Oracle under OmniOS? Unfortunately, even in this day and age Oracle does not > provide a text-based installer and OmniOS does not provide any X packaging. A list of steps or a link to a howto would > be fine. I am hooked on OmniOS and am not looking to switch to a different Illumos flavor simply because of X support. Hi Scott, I've recently successfully installed 12c on bloody. You'll need: 1. 64-bit JRE - one bundled with OUI requires both X and Solaris libc. I've used latest OpenJDK 1.7.0_40-b43. You need to modify OmniOS build script to include support for 64-bit build if you are on bloody. 2. developer/build/make and developer/object-file packages. Please check that /usr/ccs/bin/as, /usr/ccs/bin/ld, /usr/ccs/bin/make and /usr/ccs/bin/ar exist and working before continuing. 3. response file to be used by OUI in non-interactive mode. 4. user and 2 groups Then you can start OUI using following command-line options: ./runInstaller -responseFile /oracle/db_install.rsp -silent -jreLoc /usr/java After that check if everything is fine (if not - check installer logs and fix issues) and continue with DB creation. You will not be able to run dbca or any other GUI tool since they need X. -- Andrew Evdokimov +7 910 450 83 33 mail ae at elahi.ru xmpp ae at elahi.ru From henson at acm.org Tue Nov 26 02:27:03 2013 From: henson at acm.org (Paul B. Henson) Date: Mon, 25 Nov 2013 18:27:03 -0800 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS In-Reply-To: References: <2BFD6C98-3D85-4382-AF6C-D280A6A1843C@acm.org> Message-ID: <20131126022703.GU5195@bender.unx.csupomona.edu> On Mon, Nov 25, 2013 at 06:58:49PM +0000, Scott Roberts wrote: > I tried installing X using pkgsrc using various instructions but never got > it to actually start up an X server nor successfully pass the display back > via SSH. You don't need to get an X server running on the omnios box, that's what actually displays the gui and is only needed on your client box. On the omnios side you just need the X libraries. With ssh, you need to specify XAuthLocation in sshd_config so sshd can find the xauth binary from pkgsrc. I've never done it myself, but there have been successful accounts on the list of forwarding X apps from an omnios server to a local X display... From mmabis at vmware.com Thu Nov 28 17:25:47 2013 From: mmabis at vmware.com (Matthew Mabis) Date: Thu, 28 Nov 2013 09:25:47 -0800 (PST) Subject: [OmniOS-discuss] Anyone Using a A1SAI-2750F-O for ZFS? if so any experiences, if not recommended why not? In-Reply-To: <1071557391.48583843.1385659512156.JavaMail.root@vmware.com> Message-ID: <789927357.48583929.1385659547106.JavaMail.root@vmware.com> Hey All I am wondering if anyone has toyed with the idea of using a supermicro A1SAI-2750F-O ( http://www.supermicro.com/products/motherboard/Atom/X10/A1SAi-2750F.cfm ) with the Avaton Chips in it for a ZFS platform... its got Quad Marvell Nics, IPMI, ECC and decent amount of SATA onboard (Zil, SLOG).. 
(Grant you i would still probably use a LSI 2008 series card like a M1015 and put it all in a UNAS NSC-800 ( http://www.u-nas.com/product/nsc800.html ) Extremely low power solution with 8 cores... anyone got any performance benchmarks for speed/throughput? Any reason why not to use this? Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From geoffn at gnaa.net Thu Nov 28 20:32:44 2013 From: geoffn at gnaa.net (Geoff Nordli) Date: Thu, 28 Nov 2013 12:32:44 -0800 Subject: [OmniOS-discuss] updated install images containing i20 nic drivers Message-ID: <5297A86C.3080902@gnaa.net> Is there an updated install image that would contain the i210 NIC drivers? I am using OmniOS_Text_bloody_20130208. If not, any documentation on how to add the updated igb driver into the install image? thanks, Geoff -------------- next part -------------- An HTML attachment was scrubbed... URL: From esproul at omniti.com Thu Nov 28 20:43:02 2013 From: esproul at omniti.com (Eric Sproul) Date: Thu, 28 Nov 2013 15:43:02 -0500 Subject: [OmniOS-discuss] Anyone Using a A1SAI-2750F-O for ZFS? if so any experiences, if not recommended why not? In-Reply-To: <789927357.48583929.1385659547106.JavaMail.root@vmware.com> References: <1071557391.48583843.1385659512156.JavaMail.root@vmware.com> <789927357.48583929.1385659547106.JavaMail.root@vmware.com> Message-ID: A cursory search of http://illumos.org/hcl shows that neither the SATA nor the network controllers are supported out of the box. Historically, Marvell parts have had spotty support in Solaris/illumos. This is a nice looking config though, especially the ECC memory support. Eric On Nov 28, 2013 12:29 PM, "Matthew Mabis" wrote: > Hey All > > I am wondering if anyone has toyed with the idea of using a > supermicro A1SAI-2750F-O ( > http://www.supermicro.com/products/motherboard/Atom/X10/A1SAi-2750F.cfm) with > the Avaton Chips in it for a ZFS platform... its got Quad Marvell Nics, > IPMI, ECC and decent amount of SATA onboard (Zil, SLOG).. (Grant you i > would still probably use a LSI 2008 series card like a M1015 and put it all > in a UNAS NSC-800 ( http://www.u-nas.com/product/nsc800.html > ) > > Extremely low power solution with 8 cores... anyone got any performance > benchmarks for speed/throughput? Any reason why not to use this? > > > *Matt* > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From esproul at omniti.com Thu Nov 28 23:13:30 2013 From: esproul at omniti.com (Eric Sproul) Date: Thu, 28 Nov 2013 18:13:30 -0500 Subject: [OmniOS-discuss] Anyone Using a A1SAI-2750F-O for ZFS? if so any experiences, if not recommended why not? In-Reply-To: References: <1071557391.48583843.1385659512156.JavaMail.root@vmware.com> <789927357.48583929.1385659547106.JavaMail.root@vmware.com> Message-ID: A similar board might work: ASRock's C2750D4I, reviewed at Serve The Home: http://www.servethehome.com/Server-detail/asrock-c2750d4i-atom-c2750-storage-platform-review/ That has Intel i210 NICs which are supported in recent illumos (including the upcoming OmniOS r151008), and the first six SATA ports come from the C2750, so presumably they'll attach to ahci(7D). Also it will be easier to find ECC modules for the full-size DIMM slots than for the SODIMMs on the Supermicro. 
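(A rough way to double-check NIC support for a board like this is to see whether the shipped igb driver even claims the i210's PCI ID, which can be checked without the card, and then whether the link shows up once the card is in; pciex8086,1533 below is the usual ID of the copper i210 and is meant purely as an illustration:

# grep igb /etc/driver_aliases
# prtconf -D | grep -i igb
# dladm show-phys

If the ID is not listed in driver_aliases, that release's igb simply predates the i210, and the practical fix is the newer illumos bits, such as the upcoming r151008 mentioned above, rather than trying to graft an alias onto the old driver.)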
Eric From mmabis at vmware.com Fri Nov 29 01:58:24 2013 From: mmabis at vmware.com (Matthew Mabis) Date: Thu, 28 Nov 2013 17:58:24 -0800 (PST) Subject: [OmniOS-discuss] Anyone Using a A1SAI-2750F-O for ZFS? if so any experiences, if not recommended why not? In-Reply-To: References: <1071557391.48583843.1385659512156.JavaMail.root@vmware.com> <789927357.48583929.1385659547106.JavaMail.root@vmware.com> Message-ID: <2012302198.48772239.1385690304594.JavaMail.root@vmware.com> I really like the ASRock C2750D4I it just stinks theres really no release date in stone on that board yet... (been waiting for a while)... Starting to debate wether or not they will release it by the EOY.. I personally didnt want to go the Supermicro route but i am starting to run out of time to get my storage migrated from the server its on (in VM form) over to a more isolated box (VM'ed or Straight up) either way i am looking to start a new design with 4TB Drives b/c my 2TB Drive RaidZ2 config has about 20% capacity left so i know i am hitting performance issues with it and i want ZIL + SLOG in the new design... Thanks for that info that basically put the nail in the coffin for the Supermicro... either its gonna be an Low Power I3 Design or that ASRock Avaton model... Matt ----- Original Message ----- From: "Eric Sproul" To: "Matthew Mabis" Cc: "omnios-discuss" Sent: Thursday, November 28, 2013 4:13:30 PM Subject: Re: [OmniOS-discuss] Anyone Using a A1SAI-2750F-O for ZFS? if so any experiences, if not recommended why not? A similar board might work: ASRock's C2750D4I, reviewed at Serve The Home: https://urldefense.proofpoint.com/v1/url?u=http://www.servethehome.com/Server-detail/asrock-c2750d4i-atom-c2750-storage-platform-review/&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=yqgQ6LhGnfWMd79QvLrmWsnr%2FlpWj5c0oy4MpT8%2Bgik%3D%0A&m=f%2Fw1ICKfSVJ%2FsRHTKGZzAhhVZMF6MajriK1f9WFgUxg%3D%0A&s=af5c8522726c81ae1a99e95691ee34a44e502283de618a76f1a7a652bd7165e4 That has Intel i210 NICs which are supported in recent illumos (including the upcoming OmniOS r151008), and the first six SATA ports come from the C2750, so presumably they'll attach to ahci(7D). Also it will be easier to find ECC modules for the full-size DIMM slots than for the SODIMMs on the Supermicro. Eric From scotty at jhu.edu Fri Nov 29 19:08:48 2013 From: scotty at jhu.edu (Scott Roberts) Date: Fri, 29 Nov 2013 19:08:48 +0000 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS In-Reply-To: <20131126022703.GU5195@bender.unx.csupomona.edu> References: <2BFD6C98-3D85-4382-AF6C-D280A6A1843C@acm.org> <20131126022703.GU5195@bender.unx.csupomona.edu> Message-ID: Thank you all. I got it to work with the help from Andrew and Paul. Cheers, --Scott From skiselkov.ml at gmail.com Fri Nov 29 20:35:11 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Fri, 29 Nov 2013 20:35:11 +0000 Subject: [OmniOS-discuss] Oracle 11g/12c on OmniOS In-Reply-To: References: <2BFD6C98-3D85-4382-AF6C-D280A6A1843C@acm.org> <20131126022703.GU5195@bender.unx.csupomona.edu> Message-ID: <5298FA7F.7070101@gmail.com> On 11/29/13, 7:08 PM, Scott Roberts wrote: > Thank you all. I got it to work with the help from Andrew and Paul. Hi Scott, Can you share some of the insights needed to get it going? Just so that google's cache can store it and other people don't have to rediscover what you already know. Thanks! 
Cheers, -- Saso From rafibeyli at gmail.com Sat Nov 30 09:43:56 2013 From: rafibeyli at gmail.com (Hafiz Rafibeyli) Date: Sat, 30 Nov 2013 11:43:56 +0200 (EET) Subject: [OmniOS-discuss] 2x acctual disk quantity In-Reply-To: <371043671.686980.1385803804947.JavaMail.zimbra@cantekstil.com.tr> Message-ID: <1136222610.687657.1385804636256.JavaMail.zimbra@cantekstil.com.tr> Hello, After adding 4 SAS dual port disks to my omnios system(omnios-b281e50+napp-it) ,I see number of disks 2x acctual quantity. I have dual controller backplane(supermicro) and 2 LSI 9211-8i, There is no any quantity problem with another disks,only with new added HP 300GB SAS DP. c4t500000E1168BB382d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EUJ7104 c4t500000E11693F232d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EY64104 c4t500000E11696D5A2d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F12G104 c4t500000E116974FB2d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F1P5104 c6t500000E1168BB383d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EUJ7104 c6t500000E11693F233d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EY64104 c6t500000E11696D5A3d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F12G104 c6t500000E116974FB3d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F1P5104 regards Hafiz From skiselkov.ml at gmail.com Sat Nov 30 11:26:57 2013 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Sat, 30 Nov 2013 11:26:57 +0000 Subject: [OmniOS-discuss] 2x acctual disk quantity In-Reply-To: <1136222610.687657.1385804636256.JavaMail.zimbra@cantekstil.com.tr> References: <1136222610.687657.1385804636256.JavaMail.zimbra@cantekstil.com.tr> Message-ID: <5299CB81.7090101@gmail.com> On 11/30/13, 9:43 AM, Hafiz Rafibeyli wrote: > Hello, > > After adding 4 SAS dual port disks to my omnios system(omnios-b281e50+napp-it) ,I see number of disks 2x acctual quantity. > > I have dual controller backplane(supermicro) and 2 LSI 9211-8i, > > There is no any quantity problem with another disks,only with new added HP 300GB SAS DP. > > > c4t500000E1168BB382d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EUJ7104 > c4t500000E11693F232d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EY64104 > c4t500000E11696D5A2d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F12G104 > c4t500000E116974FB2d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F1P5104 > c6t500000E1168BB383d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EUJ7104 > c6t500000E11693F233d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0EY64104 > c6t500000E11696D5A3d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F12G104 > c6t500000E116974FB3d0 (!parted) via dd ok 300 GB S:0 H:0 T:0 HP EG0300FARTT D001PAB0F1P5104 Hey Hafiz, What you're seeing are the two ports of the disks. Using "mpathadm list lu" you should be able to determine which SAS addresses correspond to which disks. You can also use something like diskmap.py from https://github.com/swacquie/DiskMap (it requires the sas2ircu utility from LSI). Cheers, -- Saso
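(As an aside on the dual-port disks: if the goal is for each disk to appear once instead of once per path, MPxIO (scsi_vhci) can be enabled for the mpt_sas controllers, after which both ports collapse into a single device node and mpathadm shows the two paths underneath it. A minimal sequence, with a placeholder device name in the last step, would be something like:

# stmsboot -D mpt_sas -e
(reboot when prompted)
# mpathadm list lu
# mpathadm show lu /dev/rdsk/c0tXXXXXXXXXXXXXXXXd0s2

Use one of the WWNs from the listing above in place of the placeholder; "mpathadm show lu" prints the initiator and target ports per logical unit, which also makes it easy to confirm that the paired cXt...d0 entries really are the same physical disk.)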