[OmniOS-discuss] Slow Drive Detection and boot-archive

Richard Elling richard.elling at richardelling.com
Tue Jul 21 14:08:16 UTC 2015


> On Jul 20, 2015, at 7:56 PM, Michael Talbott <mtalbott at lji.org> wrote:
> 
> Thanks for the reply. The bios for the card is disabled already. The 8 second per drive scan happens after the kernel has already loaded and it is scanning for devices. I wonder if it's due to running newer firmware. I did update the cards to fw v.20.something before I moved to omnios. Is there a particular firmware version on the cards I should run to match OmniOS's drivers?

Google "LSI P20 firmware" for many tales of woe for many different OSes.
Be aware that getting the latest version of firmware from Avago might not be obvious...
the latest version is 20.00.04.00 for Windows.
 -- richard
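[Editor's note: if you want to confirm which firmware an HBA is actually running before chasing P20 issues, LSI/Avago's SAS2Flash utility can report it, assuming you have the build for your platform installed. A hedged sketch; exact output and option behavior vary by utility version:]

```shell
# List all LSI SAS2 controllers with their firmware and BIOS versions
# (sas2flash is LSI/Avago's flash utility; not shipped with the OS).
sas2flash -listall

# Show full details, including "Firmware Version", for controller 0:
sas2flash -c 0 -list
```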

> 
> 
> ________________________
> Michael Talbott
> Systems Administrator
> La Jolla Institute
> 
>> On Jul 20, 2015, at 6:06 PM, Marion Hakanson <hakansom at ohsu.edu> wrote:
>> 
>> Michael,
>> 
>> I've not seen this;  I do have one system with 120 drives and it
>> definitely does not have this problem.  A couple with 80+ drives
>> are also free of this issue, though they are still running OpenIndiana.
>> 
>> One thing I pretty much always do here is to disable the boot option
>> in the LSI HBA's config utility (accessible during boot, after the
>> BIOS has started up).  I do this because I don't want the BIOS
>> thinking it can boot from any of the external JBOD disks, and also
>> because I've had some system BIOS crashes when they tried to enumerate
>> too many drives.  But this all happens at the BIOS level, before the
>> OS has even started up, so in theory it should not affect what
>> you are seeing.
>> 
>> Regards,
>> 
>> Marion
>> 
>> 
>> ================================================================
>> Subject: Re: [OmniOS-discuss] Slow Drive Detection and boot-archive
>> From: Michael Talbott <mtalbott at lji.org>
>> Date: Fri, 17 Jul 2015 16:15:47 -0700
>> To: omnios-discuss <omnios-discuss at lists.omniti.com>
>> 
>> Just realized my typo. I'm using this on my 90 and 180 drive systems:
>> 
>> # svccfg -s boot-archive setprop start/timeout_seconds=720
>> # svccfg -s boot-archive setprop start/timeout_seconds=1440
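[Editor's note: for anyone sizing this on a different chassis, the arithmetic behind those numbers is just drive count times the observed per-drive delay, plus headroom. A minimal sketch; the 8 s/drive figure comes from the dmesg output later in this thread and is not a fixed constant:]

```shell
#!/bin/sh
# Estimate a boot-archive timeout from the observed per-drive scan delay.
PER_DRIVE=8      # seconds per drive, as seen in dmesg
DRIVES=90        # drives behind the HBAs
MARGIN=60        # headroom for everything else during boot
TIMEOUT=$((PER_DRIVE * DRIVES + MARGIN))
echo "suggested timeout: ${TIMEOUT}s"
# To apply it (as root), then refresh so SMF picks up the change:
#   svccfg -s boot-archive setprop start/timeout_seconds=$TIMEOUT
#   svcadm refresh boot-archive
```

For 90 drives this suggests 780 s, in the same ballpark as the 720 s used above.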
>> 
>> Seems like 8 seconds to detect each drive is pretty excessive.
>> 
>> Any ideas on how to speed that up?
>> 
>> 
>> ________________________
>> Michael Talbott
>> Systems Administrator
>> La Jolla Institute
>> 
>>> On Jul 17, 2015, at 4:07 PM, Michael Talbott <mtalbott at lji.org> wrote:
>>> 
>>> I have multiple NAS servers I've moved to OmniOS and each of them have 90-180 4T disks. Everything has worked out pretty well for the most part. But I've come into an issue where when I reboot any of them, I'm getting boot-archive service timeouts happening. I found a workaround of increasing the timeout value which brings me to the following. As you can see below in a dmesg output, it's taking the kernel about 8 seconds to detect each of the drives. They're connected via a couple SAS2008 based LSI cards.
>>> 
>>> Is this normal?
>>> Is there a way to speed that up?
>>> 
>>> I've fixed my frustrating boot-archive timeout problem by adjusting the timeout value from the default of 60 seconds (I guess that'll work ok on systems with less than 8 drives?) to 8 seconds * 90 drives + a little extra time = 280 seconds (for the 90 drive systems). Which means it takes between 12-24 minutes to boot those machines up.
>>> 
>>> # svccfg -s boot-archive setprop start/timeout_seconds=280
>>> 
>>> I figure I can't be the only one. A little googling also revealed: https://www.illumos.org/issues/4614
>>> 
>>> Jul 17 15:40:15 store2 genunix: [ID 583861 kern.info] sd29 at mpt_sas3: unit-address w50000c0f0401bd43,0: w50000c0f0401bd43,0
>>> Jul 17 15:40:15 store2 genunix: [ID 936769 kern.info] sd29 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f0401bd43,0
>>> Jul 17 15:40:16 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f0401bd43,0 (sd29) online
>>> Jul 17 15:40:24 store2 genunix: [ID 583861 kern.info] sd30 at mpt_sas3: unit-address w50000c0f045679c3,0: w50000c0f045679c3,0
>>> Jul 17 15:40:24 store2 genunix: [ID 936769 kern.info] sd30 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f045679c3,0
>>> Jul 17 15:40:24 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f045679c3,0 (sd30) online
>>> Jul 17 15:40:33 store2 genunix: [ID 583861 kern.info] sd31 at mpt_sas3: unit-address w50000c0f045712b3,0: w50000c0f045712b3,0
>>> Jul 17 15:40:33 store2 genunix: [ID 936769 kern.info] sd31 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f045712b3,0
>>> Jul 17 15:40:33 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f045712b3,0 (sd31) online
>>> Jul 17 15:40:42 store2 genunix: [ID 583861 kern.info] sd32 at mpt_sas3: unit-address w50000c0f04571497,0: w50000c0f04571497,0
>>> Jul 17 15:40:42 store2 genunix: [ID 936769 kern.info] sd32 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f04571497,0
>>> Jul 17 15:40:42 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f04571497,0 (sd32) online
>>> Jul 17 15:40:50 store2 genunix: [ID 583861 kern.info] sd33 at mpt_sas3: unit-address w50000c0f042ac8eb,0: w50000c0f042ac8eb,0
>>> Jul 17 15:40:50 store2 genunix: [ID 936769 kern.info] sd33 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f042ac8eb,0
>>> Jul 17 15:40:50 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f042ac8eb,0 (sd33) online
>>> Jul 17 15:40:59 store2 genunix: [ID 583861 kern.info] sd34 at mpt_sas3: unit-address w50000c0f04571473,0: w50000c0f04571473,0
>>> Jul 17 15:40:59 store2 genunix: [ID 936769 kern.info] sd34 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f04571473,0
>>> Jul 17 15:40:59 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f04571473,0 (sd34) online
>>> Jul 17 15:41:08 store2 genunix: [ID 583861 kern.info] sd35 at mpt_sas3: unit-address w50000c0f042c636f,0: w50000c0f042c636f,0
>>> Jul 17 15:41:08 store2 genunix: [ID 936769 kern.info] sd35 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f042c636f,0
>>> Jul 17 15:41:08 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f042c636f,0 (sd35) online
>>> Jul 17 15:41:17 store2 genunix: [ID 583861 kern.info] sd36 at mpt_sas3: unit-address w50000c0f0401bf2f,0: w50000c0f0401bf2f,0
>>> Jul 17 15:41:17 store2 genunix: [ID 936769 kern.info] sd36 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f0401bf2f,0
>>> Jul 17 15:41:17 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f0401bf2f,0 (sd36) online
>>> Jul 17 15:41:25 store2 genunix: [ID 583861 kern.info] sd38 at mpt_sas3: unit-address w50000c0f0401bc1f,0: w50000c0f0401bc1f,0
>>> Jul 17 15:41:25 store2 genunix: [ID 936769 kern.info] sd38 is /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f0401bc1f,0
>>> Jul 17 15:41:26 store2 genunix: [ID 408114 kern.info] /pci at 0,0/pci8086,e06 at 2,2/pci1000,3080 at 0/iport at f/disk at w50000c0f0401bc1f,0 (sd38) online
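[Editor's note: the ~8 s per-drive figure can be read straight out of logs like the ones above by diffing the timestamps on successive "online" events. A small sketch, with abbreviated sample lines embedded so it is self-contained; pipe real dmesg output in instead:]

```shell
#!/bin/sh
# Print the gap, in seconds, between successive "(sdN) online" events.
# Sample lines (paths shortened) are taken from the dmesg output above.
awk '
/ online$/ {
    split($3, t, ":")                   # field 3 is the HH:MM:SS timestamp
    now = t[1]*3600 + t[2]*60 + t[3]
    if (prev) print $(NF-1), "after", now - prev, "s"
    prev = now
}' <<'EOF'
Jul 17 15:40:16 store2 genunix: [ID 408114 kern.info] /pci.../disk at w50000c0f0401bd43,0 (sd29) online
Jul 17 15:40:24 store2 genunix: [ID 408114 kern.info] /pci.../disk at w50000c0f045679c3,0 (sd30) online
Jul 17 15:40:33 store2 genunix: [ID 408114 kern.info] /pci.../disk at w50000c0f045712b3,0 (sd31) online
EOF
```

On the sample above this reports sd30 coming online 8 s after sd29 and sd31 another 9 s later; note it does not handle a midnight rollover between events.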
>>> 
>>> 
>>> ________________________
>>> Michael Talbott
>>> Systems Administrator
>>> La Jolla Institute
>>> 
>> 
>> _______________________________________________
>> OmniOS-discuss mailing list
>> OmniOS-discuss at lists.omniti.com
>> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>> 
>> 
> 


