From jdg117 at elvis.arl.psu.edu Sun Oct 1 18:24:20 2017 From: jdg117 at elvis.arl.psu.edu (John D Groenveld) Date: Sun, 01 Oct 2017 14:24:20 -0400 Subject: [OmniOS-discuss] Apache Serf In-Reply-To: Your message of "Fri, 29 Sep 2017 16:37:16 EDT." <201709292037.v8TKbGMx019519@elvis.arl.psu.edu> References: <201709292037.v8TKbGMx019519@elvis.arl.psu.edu> Message-ID: <201710011824.v91IOKNl029627@elvis.arl.psu.edu> In message <201709292037.v8TKbGMx019519 at elvis.arl.psu.edu>, John D Groenveld wr ites: >$ env PATH=/opt/subversion/apache24/bin:/usr/bin:/usr/sbin:/usr/ccs/bin:/usr/g >nu/bin:/usr/sfw/bin /usr/bin/python2.7 /tmp/scons.py APR=/opt/subversion/apach >e24 APU=/opt/subversion/apache24 OPENSSL=/usr PREFIX=/opt/subversion/serf CC=" >gcc" CFLAGS="-fPIC -D__EXTENSIONS__" LINKFLAGS="-G -shared -m64 -R/opt/subvers >ion/apache24/lib" check >== Testing test/testcases/chunked-empty.response == >ERROR: test case test/testcases/chunked-empty.response failed >scons: *** [check] Error 1 >scons: building terminated because of errors. > >$ pstack core >core 'core' of 11900: test/serf_response > fffffd7fff3ee939 main () + 219 > 0000000000000001 ???????? () >pstack: warning: librtld_db failed to initialize; symbols from shared librarie >s will not be available Unfortunately I can reproduce with pkg://omnios/developer/gcc44 and also a -m32 stack. I see OI has pkg://openindiana.org/library/libserf, but nothing pops as far as Illumos specific hacks. John groenveld at acm.org From jdg117 at elvis.arl.psu.edu Mon Oct 2 00:05:37 2017 From: jdg117 at elvis.arl.psu.edu (John D Groenveld) Date: Sun, 01 Oct 2017 20:05:37 -0400 Subject: [OmniOS-discuss] Apache Serf In-Reply-To: Your message of "Sun, 01 Oct 2017 14:24:20 EDT." <201710011824.v91IOKNl029627@elvis.arl.psu.edu> References: <201709292037.v8TKbGMx019519@elvis.arl.psu.edu> <201710011824.v91IOKNl029627@elvis.arl.psu.edu> Message-ID: <201710020005.v9205biO001483@elvis.arl.psu.edu> In message <201710011824.v91IOKNl029627 at elvis.arl.psu.edu>, John D Groenveld wr ites: >Unfortunately I can reproduce with pkg://omnios/developer/gcc44 >and also a -m32 stack. > >I see OI has pkg://openindiana.org/library/libserf, but >nothing pops as far as Illumos specific hacks. serf builds with sunstudio12u1. John groenveld at acm.org From icoomnios at gmail.com Mon Oct 2 07:51:06 2017 From: icoomnios at gmail.com (anthony omnios) Date: Mon, 2 Oct 2017 09:51:06 +0200 Subject: [OmniOS-discuss] write amplification zvol In-Reply-To: References: <1009260033.996.1506587596423.JavaMail.stephan.budach@stephanbudach.local> Message-ID: Hi, i have tried with a pool with ashift=9 and there is no write amplification, problem is solved. But i can't used a ashift=9 with ssd (850 evo), i have read many articles indicated problems with ashift=9 on ssd. How ca i do ? does i need to tweak specific zfs value ? Thanks, Anthony 2017-09-28 11:48 GMT+02:00 anthony omnios : > Thanks for you help Stephan. > > i have tried differents LUN with default of 512b and 4096: > > LU Name: 600144F04D4F0600000059A588910001 > Operational Status: Online > Provider Name : sbd > Alias : /dev/zvol/rdsk/filervm2/hdd-110002b > View Entry Count : 1 > Data File : /dev/zvol/rdsk/filervm2/hdd-110002b > Meta File : not set > Size : 26843545600 > Block Size : 4096 > Management URL : not set > Vendor ID : SUN > Product ID : COMSTAR > Serial Num : not set > Write Protect : Disabled > Writeback Cache : Disabled > Access State : Active > > Problem is the same. 
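For reference, the two sizes in play here are both fixed at creation time: volblocksize when the zvol is created, and the COMSTAR LU "blk" property (512 bytes by default, as noted below) when the logical unit is created. A minimal sketch of both steps; the volume name is a placeholder, not one of the LUs from this thread:

  # back the LU with an 8K volume block size (chosen at creation, not changeable later)
  zfs create -V 25G -o volblocksize=8k -o compression=lz4 filervm2/hdd-example

  # expose it with a 4096-byte logical block size instead of the 512-byte default
  stmfadm create-lu -p blk=4096 /dev/zvol/rdsk/filervm2/hdd-example

  # verify the resulting "Block Size" field
  stmfadm list-lu -v

Existing LUs keep whatever block size they were created with, so a change like this only shows up on newly created logical units.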
> > Cheers, > > Anthony > > 2017-09-28 10:33 GMT+02:00 Stephan Budach : > >> ----- Urspr?ngliche Mail ----- >> >> > Von: "anthony omnios" >> > An: "Richard Elling" >> > CC: omnios-discuss at lists.omniti.com >> > Gesendet: Donnerstag, 28. September 2017 09:56:42 >> > Betreff: Re: [OmniOS-discuss] write amplification zvol >> >> > Thanks Richard for your help. >> >> > My problem is that i have a network ISCSI traffic of 2 MB/s, each 5 >> > seconds i need to write on disks 10 MB of network traffic but on >> > pool filervm2 I am writing much more that, approximatively 60 MB >> > each 5 seconds. Each ssd of filervm2 is writting 15 MB every 5 >> > second. When i check with smartmootools every ssd is writing >> > approximatively 250 GB of data each day. >> >> > How can i reduce amont of data writting on each ssd ? i have try to >> > reduce block size of zvol but it change nothing. >> >> > Anthony >> >> > 2017-09-28 1:29 GMT+02:00 Richard Elling < >> > richard.elling at richardelling.com > : >> >> > > Comment below... >> > >> >> > > > On Sep 27, 2017, at 12:57 AM, anthony omnios < >> > > > icoomnios at gmail.com >> > > > > wrote: >> > >> > > > >> > >> > > > Hi, >> > >> > > > >> > >> > > > i have a problem, i used many ISCSI zvol (for each vm), network >> > > > traffic is 2MB/s between kvm host and filer but i write on disks >> > > > many more than that. I used a pool with separated mirror zil >> > > > (intel s3710) and 8 ssd samsung 850 evo 1To >> > >> > > > >> > >> > > > zpool status >> > >> > > > pool: filervm2 >> > >> > > > state: ONLINE >> > >> > > > scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 >> > > > 15:45:48 >> > > > 2017 >> > >> > > > config: >> > >> > > > >> > >> > > > NAME STATE READ WRITE CKSUM >> > >> > > > filervm2 ONLINE 0 0 0 >> > >> > > > mirror-0 ONLINE 0 0 0 >> > >> > > > c7t5002538D41657AAFd0 ONLINE 0 0 0 >> > >> > > > c7t5002538D41F85C0Dd0 ONLINE 0 0 0 >> > >> > > > mirror-2 ONLINE 0 0 0 >> > >> > > > c7t5002538D41CC7105d0 ONLINE 0 0 0 >> > >> > > > c7t5002538D41CC7127d0 ONLINE 0 0 0 >> > >> > > > mirror-3 ONLINE 0 0 0 >> > >> > > > c7t5002538D41CD7F7Ed0 ONLINE 0 0 0 >> > >> > > > c7t5002538D41CD83FDd0 ONLINE 0 0 0 >> > >> > > > mirror-4 ONLINE 0 0 0 >> > >> > > > c7t5002538D41CD7F7Ad0 ONLINE 0 0 0 >> > >> > > > c7t5002538D41CD7F7Dd0 ONLINE 0 0 0 >> > >> > > > logs >> > >> > > > mirror-1 ONLINE 0 0 0 >> > >> > > > c4t2d0 ONLINE 0 0 0 >> > >> > > > c4t4d0 ONLINE 0 0 0 >> > >> > > > >> > >> > > > i used correct ashift of 13 for samsung 850 evo >> > >> > > > zdb|grep ashift : >> > >> > > > >> > >> > > > ashift: 13 >> > >> > > > ashift: 13 >> > >> > > > ashift: 13 >> > >> > > > ashift: 13 >> > >> > > > ashift: 13 >> > >> > > > >> > >> > > > But i write a lot on ssd every 5 seconds (many more than the >> > > > network traffic of 2 MB/s) >> > >> > > > >> > >> > > > iostat -xn -d 1 : >> > >> > > > >> > >> > > > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device >> > >> > > > 11.0 3067.5 288.3 153457.4 6.8 0.5 2.2 0.2 5 14 filervm2 >> > >> >> > > filervm2 is seeing 3067 writes per second. This is the interface to >> > > the upper layers. >> > >> > > These writes are small. 
>> > >> >> > > > 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 rpool >> > >> > > > 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0 >> > >> > > > 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0 >> > >> > > > 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t2d0 >> > >> > > > 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t4d0 >> > >> >> > > The log devices are seeing 552 writes per second and since >> > > sync=standard that >> > >> > > means that the upper layers are requesting syncs. >> > >> >> > > > 1.0 233.3 48.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41657AAFd0 >> > >> > > > 5.0 250.3 144.2 13207.3 0.0 0.0 0.0 0.1 0 3 c7t5002538D41CC7127d0 >> > >> > > > 2.0 254.3 24.0 13207.3 0.0 0.0 0.0 0.1 0 4 c7t5002538D41CC7105d0 >> > >> > > > 3.0 235.3 72.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41F85C0Dd0 >> > >> > > > 0.0 228.3 0.0 16178.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD83FDd0 >> > >> > > > 0.0 225.3 0.0 16210.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD7F7Ed0 >> > >> > > > 0.0 282.3 0.0 19991.1 0.0 0.0 0.0 0.2 0 5 c7t5002538D41CD7F7Dd0 >> > >> > > > 0.0 280.3 0.0 19871.0 0.0 0.0 0.0 0.2 0 5 c7t5002538D41CD7F7Ad0 >> > >> >> > > The pool disks see 1989 writes per second total or 994 writes per >> > > second logically. >> > >> >> > > It seems to me that reducing 3067 requested writes to 994 logical >> > > writes is the opposite >> > >> > > of amplification. What do you expect? >> > >> > > -- richard >> > >> >> > > > >> > >> > > > I used zvol of 64k, i try with 8k and problem is the same. >> > >> > > > >> > >> > > > zfs get all filervm2/hdd-110022a : >> > >> > > > >> > >> > > > NAME PROPERTY VALUE SOURCE >> > >> > > > filervm2/hdd-110022a type volume - >> > >> > > > filervm2/hdd-110022a creation Tue May 16 10:24 2017 - >> > >> > > > filervm2/hdd-110022a used 5.26G - >> > >> > > > filervm2/hdd-110022a available 2.90T - >> > >> > > > filervm2/hdd-110022a referenced 5.24G - >> > >> > > > filervm2/hdd-110022a compressratio 3.99x - >> > >> > > > filervm2/hdd-110022a reservation none default >> > >> > > > filervm2/hdd-110022a volsize 25G local >> > >> > > > filervm2/hdd-110022a volblocksize 64K - >> > >> > > > filervm2/hdd-110022a checksum on default >> > >> > > > filervm2/hdd-110022a compression lz4 local >> > >> > > > filervm2/hdd-110022a readonly off default >> > >> > > > filervm2/hdd-110022a copies 1 default >> > >> > > > filervm2/hdd-110022a refreservation none default >> > >> > > > filervm2/hdd-110022a primarycache all default >> > >> > > > filervm2/hdd-110022a secondarycache all default >> > >> > > > filervm2/hdd-110022a usedbysnapshots 15.4M - >> > >> > > > filervm2/hdd-110022a usedbydataset 5.24G - >> > >> > > > filervm2/hdd-110022a usedbychildren 0 - >> > >> > > > filervm2/hdd-110022a usedbyrefreservation 0 - >> > >> > > > filervm2/hdd-110022a logbias latency default >> > >> > > > filervm2/hdd-110022a dedup off default >> > >> > > > filervm2/hdd-110022a mlslabel none default >> > >> > > > filervm2/hdd-110022a sync standard local >> > >> > > > filervm2/hdd-110022a refcompressratio 3.99x - >> > >> > > > filervm2/hdd-110022a written 216K - >> > >> > > > filervm2/hdd-110022a logicalused 20.9G - >> > >> > > > filervm2/hdd-110022a logicalreferenced 20.9G - >> > >> > > > filervm2/hdd-110022a snapshot_limit none default >> > >> > > > filervm2/hdd-110022a snapshot_count none default >> > >> > > > filervm2/hdd-110022a redundant_metadata all default >> > >> > > > >> > >> > > > Sorry for my bad english. >> > >> > > > >> > >> > > > What can be the problem ? 
thanks >> > >> > > > >> > >> > > > Best regards, >> > >> > > > >> > >> > > > Anthony >> > >> >> How did you setup your LUNs? Especially, what is the block size for those >> LUNs. Could it be, that you went with the default of 512b blocks, where the >> drives do have 4k or even 8k blocks? >> >> Cheers, >> Stephan >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergey57 at gmail.com Mon Oct 2 17:39:07 2017 From: sergey57 at gmail.com (sergey ivanov) Date: Mon, 2 Oct 2017 13:39:07 -0400 Subject: [OmniOS-discuss] OmniOS based redundant NFS In-Reply-To: <150975111.1847.1506753957992.JavaMail.stephan.budach@budy.stephanbudach.de> References: <634237972.756.1506544449540.JavaMail.stephan.budach@budy.stephanbudach.de> <968909667.804.1506574164745.JavaMail.stephan.budach@budy.stephanbudach.de> <150975111.1847.1506753957992.JavaMail.stephan.budach@budy.stephanbudach.de> Message-ID: Thank you, Stephan, I will sure use custom guuids! And thanks for pointing at failure mode. I have tried exporting physical disks with sbdadm, it worked: === sbdadm list-lu Found 2 LU(s) GUID DATA SIZE SOURCE -------------------------------- ------------------- ---------------- 600144f0dc4db979000059cd2e460001 146815668224 /dev/rdsk/c6t500000E117B07742d0 600144f0dc4db979000059cd2e4c0002 146815668224 /dev/rdsk/c7t500000E117A9AD12d0 === I am thinking, I can have on the target server many disks exported with different lun numbers, or each disk having a separate target for export. I think many lun numbers in the same target would be easier to manage. Also, I've tested a setup where both layers: ISCSI target and NFS head are on the same machines, and ZPOOL to be used for NFS is composed of imported ISCSI targets, even if they are local to this NFS server. -- Sergey. Regards, Sergey Ivanov On Sat, Sep 30, 2017 at 2:45 AM, Stephan Budach wrote: > Hi Sergey, > > ----- Urspr?ngliche Mail ----- >> Von: "sergey ivanov" >> An: "Stephan Budach" >> CC: "omnios-discuss" >> Gesendet: Freitag, 29. September 2017 22:30:31 >> Betreff: Re: [OmniOS-discuss] OmniOS based redundant NFS >> >> Thanks, Stephan. >> I did a simple test with creating lu over physical disks for use as >> ISSI targets, and it worked well. I am going to directly connect 2 >> servers and export their disks as separate I SCSI targets. Or maybe >> different LUNs in a target. And then on the active server start >> initiator to get these targets, and combine them into a pool of 2-way >> mirrors so that it stays degraded but working if one of the servers >> dies. > > Well, different targets mean, that you will be able to service single disks on one node, without having to degrade the whole zpool, but only the affected vdevs. On the other hand, there is more complexity since, you will have of course quite a big number of iSCSI targets to login to. This may be ok, if the number doesn't get too hight, but going with hundreds of disks, I chose to use fewer targets with more LUNs. > > One thing to keep in mind is, that stmfad allows you to create the guuid to your liking. That is that you can freely choose the last 20 bytes to be anything you want. I used that to ascii-code the node name and slot into the guid, such as that it displays on my NFS heads, when running format. This helps a lot in mapping the LUNs to drives. > >> So, manual failover for this configuration will be the following. If >> the server to be disabled is still active, stop NFS, export zpool on >> it, stop iscsiadm, release shared IP. 
On the other server: import >> zpool and start NFS, activate shared IP. > > I am using the sharenfs properties of ZFS, but you will likely have to run zpool export -f if you want to fail over the service, since the zpool is still busy. Also, you'd better set zpool failmode to panic instead of wait, such as that an issue triggers a reboot, rather than keeping you NFS head waiting. > >> I read once there are some tricks which make clients do not recognize >> NFS server is changed underneath all mounts, but I never tried it. > > The only issue I came across was, when I deliberatley failed over the NFS service forth and back within the a too short period, which causes the NFSd on the former primary node to re-use the tcp packets numbers, insisting on reusing it's old NFS connections to the clients. I solved that by resetting the NFSd each time a service starts on any NFS head. The currently connected NFS clients are not affected by that and this solved this particular issue for me. > > Cheers, > Stephan > >> -- >> Regards, >> Sergey Ivanov. >> >> Regards, >> Sergey Ivanov >> >> >> On Thu, Sep 28, 2017 at 12:49 AM, Stephan Budach >> wrote: >> > Hi Sergey, >> > >> > ----- Urspr?ngliche Mail ----- >> >> Von: "sergey ivanov" >> >> An: "Stephan Budach" >> >> CC: "omnios-discuss" >> >> Gesendet: Mittwoch, 27. September 2017 23:15:49 >> >> Betreff: Re: [OmniOS-discuss] OmniOS based redundant NFS >> >> >> >> Thanks, Stephan! >> >> >> >> Please, explain "The reason to use two x two separate servers is, >> >> that >> >> the mirrored zpool's vdevs look the same on each NFS head". >> >> >> >> I understand that, if I want to have the same zpool based on iscsi >> >> devices, I should not mix local disks with iscsi target disks. >> >> >> >> But I think I can have 2 computers, each exporting a set of local >> >> disks as iscsi targets. And to have iscsi initiators on the same >> >> computers importing these targets to build zpools. >> >> >> >> Also, looking at sbdadm, I think I can 'create lu >> >> /dev/rdsk/c0t0d3s2'. >> >> >> >> Ok, I think I would better try it and report how it goes. >> > >> > Actually, things can become quite complex, I'd like to reduce the >> > "mental" involvement to the absolute minimum, mainly because we >> > often faced a situation where something would suddenly break, >> > which had been running for a long time without problems. This is >> > when peeple start? well maybe not panicking, but having to recap >> > what the current setup was like and what they had to do to tackle >> > this. >> > >> > So, uniformity is a great deal of help on such systems - at least >> > for us. Technically, there is no issue with mixing local and >> > remote iSCST targets on the same node, which serves as an iSCSI >> > target and a NFS head. >> > >> > Also, if one of the nodes really goes down, you will be loosing >> > your failover NFS head as well, maybe not a big deal and depending >> > on your requirements okay. I do have such a setup as well, >> > although only for an archive ZPOOL, where I can tolerate this >> > reduced redundancy for the benefit of a more lightweight setup. 
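To make the two suggestions above concrete, here is a rough sketch. The pool name, backing device and GUID value are only examples (the exact GUID format rules are in stmfadm(1M)); the trailing twelve hex digits simply spell "n01s05" in ASCII, to mimic the node/slot encoding described above:

  # panic (and so fail over) instead of hanging when the pool loses its devices
  zpool set failmode=panic nfspool

  # create the LU with a hand-picked GUID rather than the auto-generated one
  stmfadm create-lu -p guid=600144F00000000000006E3031733035 /dev/rdsk/c6t500000E117B07742d0

Running format or stmfadm list-lu -v on the NFS head then shows the encoded name, which is what makes the LUN-to-drive mapping readable.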
>> > >> > Cheers, >> > Stephan >> From richard.elling at richardelling.com Mon Oct 2 18:33:53 2017 From: richard.elling at richardelling.com (Richard Elling) Date: Mon, 2 Oct 2017 11:33:53 -0700 Subject: [OmniOS-discuss] write amplification zvol In-Reply-To: References: <1009260033.996.1506587596423.JavaMail.stephan.budach@stephanbudach.local> Message-ID: <8CE0DCE4-BA66-4A85-8A84-0BC064784718@richardelling.com> > On Oct 2, 2017, at 12:51 AM, anthony omnios wrote: > > Hi, > > i have tried with a pool with ashift=9 and there is no write amplification, problem is solved. ashift=13 means that the minumum size (bytes) written will be 8k (1<<13). So when you write a single byte, there will be at least 2 writes for the data (both sides of the mirror), 4 writes for metadata (both sides of the mirror * 2 copies of metadata for redundancy). Each metadata block contains information on 128 or more data blocks, so there is not a 1:1 correlation between data and metadata writes. Reducing ashift doesn't change the number of blocks written for a single byte write. It can only reduce or increase the size in bytes of the writes. HTH -- richard > > But i can't used a ashift=9 with ssd (850 evo), i have read many articles indicated problems with ashift=9 on ssd. > > How ca i do ? does i need to tweak specific zfs value ? > > Thanks, > > Anthony > > > > 2017-09-28 11:48 GMT+02:00 anthony omnios >: > Thanks for you help Stephan. > > i have tried differents LUN with default of 512b and 4096: > > LU Name: 600144F04D4F0600000059A588910001 > Operational Status: Online > Provider Name : sbd > Alias : /dev/zvol/rdsk/filervm2/hdd-110002b > View Entry Count : 1 > Data File : /dev/zvol/rdsk/filervm2/hdd-110002b > Meta File : not set > Size : 26843545600 > Block Size : 4096 > Management URL : not set > Vendor ID : SUN > Product ID : COMSTAR > Serial Num : not set > Write Protect : Disabled > Writeback Cache : Disabled > Access State : Active > > Problem is the same. > > Cheers, > > Anthony > > 2017-09-28 10:33 GMT+02:00 Stephan Budach >: > ----- Urspr?ngliche Mail ----- > > > Von: "anthony omnios" > > > An: "Richard Elling" > > > CC: omnios-discuss at lists.omniti.com > > Gesendet: Donnerstag, 28. September 2017 09:56:42 > > Betreff: Re: [OmniOS-discuss] write amplification zvol > > > Thanks Richard for your help. > > > My problem is that i have a network ISCSI traffic of 2 MB/s, each 5 > > seconds i need to write on disks 10 MB of network traffic but on > > pool filervm2 I am writing much more that, approximatively 60 MB > > each 5 seconds. Each ssd of filervm2 is writting 15 MB every 5 > > second. When i check with smartmootools every ssd is writing > > approximatively 250 GB of data each day. > > > How can i reduce amont of data writting on each ssd ? i have try to > > reduce block size of zvol but it change nothing. > > > Anthony > > > 2017-09-28 1:29 GMT+02:00 Richard Elling < > > richard.elling at richardelling.com > : > > > > Comment below... > > > > > > > On Sep 27, 2017, at 12:57 AM, anthony omnios < > > > > icoomnios at gmail.com > > > > > wrote: > > > > > > > > > > > > Hi, > > > > > > > > > > > > i have a problem, i used many ISCSI zvol (for each vm), network > > > > traffic is 2MB/s between kvm host and filer but i write on disks > > > > many more than that. 
I used a pool with separated mirror zil > > > > (intel s3710) and 8 ssd samsung 850 evo 1To > > > > > > > > > > > > zpool status > > > > > > pool: filervm2 > > > > > > state: ONLINE > > > > > > scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 > > > > 15:45:48 > > > > 2017 > > > > > > config: > > > > > > > > > > > > NAME STATE READ WRITE CKSUM > > > > > > filervm2 ONLINE 0 0 0 > > > > > > mirror-0 ONLINE 0 0 0 > > > > > > c7t5002538D41657AAFd0 ONLINE 0 0 0 > > > > > > c7t5002538D41F85C0Dd0 ONLINE 0 0 0 > > > > > > mirror-2 ONLINE 0 0 0 > > > > > > c7t5002538D41CC7105d0 ONLINE 0 0 0 > > > > > > c7t5002538D41CC7127d0 ONLINE 0 0 0 > > > > > > mirror-3 ONLINE 0 0 0 > > > > > > c7t5002538D41CD7F7Ed0 ONLINE 0 0 0 > > > > > > c7t5002538D41CD83FDd0 ONLINE 0 0 0 > > > > > > mirror-4 ONLINE 0 0 0 > > > > > > c7t5002538D41CD7F7Ad0 ONLINE 0 0 0 > > > > > > c7t5002538D41CD7F7Dd0 ONLINE 0 0 0 > > > > > > logs > > > > > > mirror-1 ONLINE 0 0 0 > > > > > > c4t2d0 ONLINE 0 0 0 > > > > > > c4t4d0 ONLINE 0 0 0 > > > > > > > > > > > > i used correct ashift of 13 for samsung 850 evo > > > > > > zdb|grep ashift : > > > > > > > > > > > > ashift: 13 > > > > > > ashift: 13 > > > > > > ashift: 13 > > > > > > ashift: 13 > > > > > > ashift: 13 > > > > > > > > > > > > But i write a lot on ssd every 5 seconds (many more than the > > > > network traffic of 2 MB/s) > > > > > > > > > > > > iostat -xn -d 1 : > > > > > > > > > > > > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > > > > > > 11.0 3067.5 288.3 153457.4 6.8 0.5 2.2 0.2 5 14 filervm2 > > > > > > filervm2 is seeing 3067 writes per second. This is the interface to > > > the upper layers. > > > > > These writes are small. > > > > > > > 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 rpool > > > > > > 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0 > > > > > > 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0 > > > > > > 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t2d0 > > > > > > 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t4d0 > > > > > > The log devices are seeing 552 writes per second and since > > > sync=standard that > > > > > means that the upper layers are requesting syncs. > > > > > > > 1.0 233.3 48.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41657AAFd0 > > > > > > 5.0 250.3 144.2 13207.3 0.0 0.0 0.0 0.1 0 3 c7t5002538D41CC7127d0 > > > > > > 2.0 254.3 24.0 13207.3 0.0 0.0 0.0 0.1 0 4 c7t5002538D41CC7105d0 > > > > > > 3.0 235.3 72.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41F85C0Dd0 > > > > > > 0.0 228.3 0.0 16178.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD83FDd0 > > > > > > 0.0 225.3 0.0 16210.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD7F7Ed0 > > > > > > 0.0 282.3 0.0 19991.1 0.0 0.0 0.0 0.2 0 5 c7t5002538D41CD7F7Dd0 > > > > > > 0.0 280.3 0.0 19871.0 0.0 0.0 0.0 0.2 0 5 c7t5002538D41CD7F7Ad0 > > > > > > The pool disks see 1989 writes per second total or 994 writes per > > > second logically. > > > > > > It seems to me that reducing 3067 requested writes to 994 logical > > > writes is the opposite > > > > > of amplification. What do you expect? > > > > > -- richard > > > > > > > > > > > > > I used zvol of 64k, i try with 8k and problem is the same. 
> > > > > > > > > > > > zfs get all filervm2/hdd-110022a : > > > > > > > > > > > > NAME PROPERTY VALUE SOURCE > > > > > > filervm2/hdd-110022a type volume - > > > > > > filervm2/hdd-110022a creation Tue May 16 10:24 2017 - > > > > > > filervm2/hdd-110022a used 5.26G - > > > > > > filervm2/hdd-110022a available 2.90T - > > > > > > filervm2/hdd-110022a referenced 5.24G - > > > > > > filervm2/hdd-110022a compressratio 3.99x - > > > > > > filervm2/hdd-110022a reservation none default > > > > > > filervm2/hdd-110022a volsize 25G local > > > > > > filervm2/hdd-110022a volblocksize 64K - > > > > > > filervm2/hdd-110022a checksum on default > > > > > > filervm2/hdd-110022a compression lz4 local > > > > > > filervm2/hdd-110022a readonly off default > > > > > > filervm2/hdd-110022a copies 1 default > > > > > > filervm2/hdd-110022a refreservation none default > > > > > > filervm2/hdd-110022a primarycache all default > > > > > > filervm2/hdd-110022a secondarycache all default > > > > > > filervm2/hdd-110022a usedbysnapshots 15.4M - > > > > > > filervm2/hdd-110022a usedbydataset 5.24G - > > > > > > filervm2/hdd-110022a usedbychildren 0 - > > > > > > filervm2/hdd-110022a usedbyrefreservation 0 - > > > > > > filervm2/hdd-110022a logbias latency default > > > > > > filervm2/hdd-110022a dedup off default > > > > > > filervm2/hdd-110022a mlslabel none default > > > > > > filervm2/hdd-110022a sync standard local > > > > > > filervm2/hdd-110022a refcompressratio 3.99x - > > > > > > filervm2/hdd-110022a written 216K - > > > > > > filervm2/hdd-110022a logicalused 20.9G - > > > > > > filervm2/hdd-110022a logicalreferenced 20.9G - > > > > > > filervm2/hdd-110022a snapshot_limit none default > > > > > > filervm2/hdd-110022a snapshot_count none default > > > > > > filervm2/hdd-110022a redundant_metadata all default > > > > > > > > > > > > Sorry for my bad english. > > > > > > > > > > > > What can be the problem ? thanks > > > > > > > > > > > > Best regards, > > > > > > > > > > > > Anthony > > > > How did you setup your LUNs? Especially, what is the block size for those LUNs. Could it be, that you went with the default of 512b blocks, where the drives do have 4k or even 8k blocks? > > Cheers, > Stephan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danmcd at kebe.com Tue Oct 3 12:18:27 2017 From: danmcd at kebe.com (Dan McDonald) Date: Tue, 3 Oct 2017 08:18:27 -0400 Subject: [OmniOS-discuss] Fwd: Reminder & update: Oct 12 SmartOS/OmniOS/OI/illumos user meeting at Erlangen, Germany References: <06EB3DB8-8CEE-4644-AC92-73BB946F8BC6@peterkelm.com> Message-ID: Von: Peter Kelm Betreff: Reminder & update: Oct 12 SmartOS/OmniOS/OI/illumos user meeting at Erlangen, Germany Datum: 3. Oktober 2017 um 11:43:25 MESZ An: openindiana-discuss-owner at openindiana.org, smartos-discuss at lists.smartos.org, omnios-discuss at lists.omniti.com All, Some momentum has built over the past weeks around the SmartOS/OmniOS/OI/illumos meetup. Erigones, a software company from Bratislava, Slovakia will join us to talk about their Danube Cloud product. This software is built upon SmartOS/ZFS/crossbow. Certainly an interesting alternative to Joyent Triton or Project FiFo? Additionally we will talk about our experience deploying SmartOS for small size businesses, e.g. as an alternative to established VMWare setups. Don?t miss out attending this event! 
Register today directly via meetup: https://www.meetup.com/de-DE/illumos-Meetup/ or via eMail to: illumos-meetup at kelmmoyles.com Location: KelmMoyles office (at the IGZ Erlangen) Am Weichselgarten 7 91058 Erlangen Germany Date/Time: Oct 12, 2017, starting at 6:00pm Participation is free. Looking forward to seeing you at the event! Best regards, Peter > Am 15.09.2017 um 11:15 schrieb Peter Kelm >: > > All, > > We?d like to invite you to the SmartOS/OmniOS/OI/illumos user meeting at Erlangen on Oct 12, 2017. > > Location: > KelmMoyles office (at the IGZ Erlangen) > Am Weichselgarten 7 > 91058 Erlangen > Germany > > Date/Time: > Oct 12, 2017, starting at 6:00pm > > There is no firm agenda yet but please let us know what topics you?d be interested in to make this meeting as productive and useful as it can be. Your contributions are highly appreciated! > > I?d expect that some of the questions below might come up: > - How do you use SmartOS/OmniOS/OI/illumos today or how do you intend to use it ?tomorrow?? > - What are the factors that drive your interest? Where do you see strengths and weaknesses (vs. competing solutions)? > - What are the ?best practices? that you?d like to share? (E.g. how do you handle maintenance and updates for your deployment?) > > Participation is of course free but we?d like to ask you to register via eMail to: illumos-meetup at kelmmoyles.com > > FYI: The timing coincides with the IT-SA fair at Nuremberg, so you might want to consider spending the day there before heading over to us: https://www.it-sa.de > > We are very open to additional thoughts and suggestions, so please let us know. > > Looking forward to seeing you in October! > > Peter > > P.S. We haven?t setup a ?meetup" group yet. Any volunteers? > > Dipl.-Ing. Peter Kelm > KelmMoyles - Tailored Engineering Marketing > Am Weichselgarten 7 > 91058 Erlangen > T +49 (9132) 9060595 > F +49 (9132) 9060596 > E peter.kelm at kelmmoyles.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jstockett at molalla.com Tue Oct 10 21:39:59 2017 From: jstockett at molalla.com (Jeff Stockett) Date: Tue, 10 Oct 2017 21:39:59 +0000 Subject: [OmniOS-discuss] a few questions about omniosce/illumos/hardware Message-ID: <916e0d538f1b48198afaa90f4bc74c9f@molalla.com> Tyan is making a new EPYC based server chassis that looks pretty slick (24 hotswap U.2 nvme drives directly connected to the CPU - no PLX expanders required): http://tyan.com/Barebones_TN70AB8026_B8026T70AE24HR Phoronix reviewed it today - and reports that latest OI Hipster wouldn't see the NVMe storage - I'm thinking that is probably illumos issue 8239 which is reportedly fixed: https://www.phoronix.com/scan.php?page=article&item=amd-epyc-bsd&num=1 A few questions: 1. Has anyone tried this server/motherboard with latest OmniOSCE? 2. How do you tell which illumos GIT commits go into a given build of OmniOSCE if it isn't too complicated? 3. The board has an OCP mezzanine adapter slot for network - I assume this X710-DA2 for OCP from Intel might work since the X710 has been supported for about a year now? https://www.intel.com/content/www/us/en/ethernet-products/converged-network-adapters/x710-da2-for-ocp-brief.html Thanks, Jeff -------------- next part -------------- An HTML attachment was scrubbed... 
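On question 2, one low-tech way to see which illumos changes a given kernel contains, assuming a recent release where the kernel version string carries the build commit (the hash below is made up for illustration):

  $ uname -v
  omnios-r151022-f9693bef5a      # last field: the illumos-omnios commit the kernel was built from

  $ git clone https://github.com/omniosorg/illumos-omnios.git
  $ cd illumos-omnios
  $ git log f9693bef5a           # everything reachable from that commit is in the build

The OmniOS CE release notes also summarise notable illumos commits per release, which is usually quicker than walking the git history.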
URL: From danmcd at kebe.com Tue Oct 10 21:53:32 2017 From: danmcd at kebe.com (Dan McDonald) Date: Tue, 10 Oct 2017 17:53:32 -0400 Subject: [OmniOS-discuss] a few questions about omniosce/illumos/hardware In-Reply-To: <916e0d538f1b48198afaa90f4bc74c9f@molalla.com> References: <916e0d538f1b48198afaa90f4bc74c9f@molalla.com> Message-ID: <20171010215332.GA39243@everywhere.local> On Tue, Oct 10, 2017 at 09:39:59PM +0000, Jeff Stockett wrote: > Tyan is making a new EPYC based server chassis that looks pretty slick (24 hotswap U.2 nvme drives directly connected to the CPU - no PLX expanders required): > > http://tyan.com/Barebones_TN70AB8026_B8026T70AE24HR You may wish to expand the audience of this and ask on one of the illumos lists. What works there will work for OmniOS for sure. Dan From mark at openvs.co.uk Tue Oct 24 18:04:10 2017 From: mark at openvs.co.uk (Mark Adams) Date: Tue, 24 Oct 2017 19:04:10 +0100 Subject: [OmniOS-discuss] zfs+iscsi+ha Message-ID: Hi All, I'm looking at omnios to evaluate what I bet many people have tried before, an HA zfs setup for proxmox (using their zfs over iscsi) My setup is as follows: san1 and san2 running debian stretch, using pacemaker LIO to export DRBD device zfshead1 and zfshead2 running omnios, as iscsi initiator, mounting a disk from each shelf in a mirror, then exporting from ZFS volume to proxmox. heartbeat is installed in omnios as per Igors work. https://icicimov.github.io/blog/high-availability/ZFS-storage-with-OmniOS-and-iSCSI/ Now, this all works seemingly OK. The problem comes, when I fail over between the "sans" in pacemaker. If I only have 1 iscsi target mounted in omnios, then it all works ok, however if I have more than 1 ZFS immediately faults. I can see the following in the log: WARNING: iscsi session(5) is unable to offline obsolete logical unit 1 then WARNING: iscsi driver unable to online lun Which to me, means its tried to restart the iscsi connection and failed? Again, this only happens after I add more than 1 target. A single target seems to work fine. I tried to use ERL mode 2 but it seems omnios doesn't support this as initiator. Has anyone got any ideas about this? Or any pointers? Also let me know if you need more info. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gearboxes at outlook.com Sat Oct 28 19:18:11 2017 From: gearboxes at outlook.com (Machine Man) Date: Sat, 28 Oct 2017 19:18:11 +0000 Subject: [OmniOS-discuss] Release 14 repo Message-ID: Is this still online? I am not able to update my last two machines by change over to OpenSSH before upgrading to OmniOS CE r22. Are there any other hosted repos I can point to just to get the change over to openssh completed? Is there a workaround? I just updated a machines this past Sunday and it was still working. Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdg117 at elvis.arl.psu.edu Sat Oct 28 19:24:18 2017 From: jdg117 at elvis.arl.psu.edu (John D Groenveld) Date: Sat, 28 Oct 2017 15:24:18 -0400 Subject: [OmniOS-discuss] Release 14 repo In-Reply-To: Your message of "Sat, 28 Oct 2017 19:18:11 -0000." References: Message-ID: <201710281924.v9SJOI08008557@elvis.arl.psu.edu> In message , Machine Man writes: >Is this still online? I am not able to update my last two machines by chang= Yes. 
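If the catalog 404s keep coming back, a quick sketch for double-checking which origin the client is really using and forcing a clean catalog pull (this assumes the stock publisher name omnios):

  # show the configured origin and its status
  pkg publisher omnios

  # re-point the publisher at the r151014 repo and refresh the catalogs from scratch
  pkg set-publisher -O http://pkg.omniti.com/omnios/r151014/ omnios
  pkg refresh --full omnios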
John groenveld at acm.org From danmcd at kebe.com Sat Oct 28 19:31:47 2017 From: danmcd at kebe.com (Dan McDonald) Date: Sat, 28 Oct 2017 15:31:47 -0400 Subject: [OmniOS-discuss] Release 14 repo In-Reply-To: References: Message-ID: <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> > On Oct 28, 2017, at 3:18 PM, Machine Man wrote: > > Is this still online? I am not able to update my last two machines by change over to OpenSSH before upgrading to OmniOS CE r22. I see: http://pkg.omniti.com/omnios/r151014/ is giving me an appropriate response. > Are there any other hosted repos I can point to just to get the change over to openssh completed? > Is there a workaround? > I just updated a machines this past Sunday and it was still working. I'd stop-and-restart the "pkg update" you were doing. Maybe the pkg.omniti.com server moved? Dan From gearboxes at outlook.com Sat Oct 28 21:25:07 2017 From: gearboxes at outlook.com (Machine Man) Date: Sat, 28 Oct 2017 21:25:07 +0000 Subject: [OmniOS-discuss] Release 14 repo In-Reply-To: <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> References: , <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> Message-ID: Thanks, I tried it and still getting: pkg: 0/1 catalogs successfully updated: http protocol error: code: 404 reason: Not Found URL: 'http://pkg.omniti.com/omnios/r151014/omnios/catalog/1/update.20170301T19Z. C' (happened 4 times) ________________________________ From: Dan McDonald Sent: Saturday, October 28, 2017 3:31:47 PM To: Machine Man Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] Release 14 repo > On Oct 28, 2017, at 3:18 PM, Machine Man wrote: > > Is this still online? I am not able to update my last two machines by change over to OpenSSH before upgrading to OmniOS CE r22. I see: http://pkg.omniti.com/omnios/r151014/ is giving me an appropriate response. > Are there any other hosted repos I can point to just to get the change over to openssh completed? > Is there a workaround? > I just updated a machines this past Sunday and it was still working. I'd stop-and-restart the "pkg update" you were doing. Maybe the pkg.omniti.com server moved? Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gearboxes at outlook.com Sat Oct 28 21:27:12 2017 From: gearboxes at outlook.com (Machine Man) Date: Sat, 28 Oct 2017 21:27:12 +0000 Subject: [OmniOS-discuss] Release 14 repo In-Reply-To: <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> References: , <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> Message-ID: The system does have internet access and proper DNS resolution. I can do wget of http://pkg.omniti.com/omnios/r151014/omnios ________________________________ From: Dan McDonald Sent: Saturday, October 28, 2017 3:31:47 PM To: Machine Man Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] Release 14 repo > On Oct 28, 2017, at 3:18 PM, Machine Man wrote: > > Is this still online? I am not able to update my last two machines by change over to OpenSSH before upgrading to OmniOS CE r22. I see: http://pkg.omniti.com/omnios/r151014/ is giving me an appropriate response. > Are there any other hosted repos I can point to just to get the change over to openssh completed? > Is there a workaround? > I just updated a machines this past Sunday and it was still working. I'd stop-and-restart the "pkg update" you were doing. Maybe the pkg.omniti.com server moved? Dan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From danmcd at kebe.com Sat Oct 28 21:35:53 2017 From: danmcd at kebe.com (Dan McDonald) Date: Sat, 28 Oct 2017 17:35:53 -0400 Subject: [OmniOS-discuss] Release 14 repo In-Reply-To: References: <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> Message-ID: <87F58191-7A58-4914-95F1-8853EF12F73D@kebe.com> Shoot. Someone at OmniTI may need to whack the front ends or something else. Dan Sent from my iPhone (typos, autocorrect, and all) > On Oct 28, 2017, at 5:25 PM, Machine Man wrote: > > Thanks, > > > I tried it and still getting: > > > pkg: 0/1 catalogs successfully updated: > > http protocol error: code: 404 reason: Not Found > URL: 'http://pkg.omniti.com/omnios/r151014/omnios/catalog/1/update.20170301T19Z. > > C' (happened 4 times) > > From: Dan McDonald > Sent: Saturday, October 28, 2017 3:31:47 PM > To: Machine Man > Cc: omnios-discuss at lists.omniti.com; Dan McDonald > Subject: Re: [OmniOS-discuss] Release 14 repo > > > > > On Oct 28, 2017, at 3:18 PM, Machine Man wrote: > > > > Is this still online? I am not able to update my last two machines by change over to OpenSSH before upgrading to OmniOS CE r22. > > I see: > > http://pkg.omniti.com/omnios/r151014/ > > is giving me an appropriate response. > > > Are there any other hosted repos I can point to just to get the change over to openssh completed? > > Is there a workaround? > > I just updated a machines this past Sunday and it was still working. > > I'd stop-and-restart the "pkg update" you were doing. Maybe the pkg.omniti.com server moved? > > Dan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gearboxes at outlook.com Sun Oct 29 14:30:39 2017 From: gearboxes at outlook.com (Machine Man) Date: Sun, 29 Oct 2017 14:30:39 +0000 Subject: [OmniOS-discuss] Release 14 repo In-Reply-To: <87F58191-7A58-4914-95F1-8853EF12F73D@kebe.com> References: <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> , <87F58191-7A58-4914-95F1-8853EF12F73D@kebe.com> Message-ID: It's probably not likely to be fixed at this point, is my only option to reinstall? I was living in fear for this, but just couldn't get all my remote systems done in time. My last few systems are all remote 400+ miles. I should have at least change over to openssh on all of them? BTW r22 isn't working either I tried to update to r22 before going to r22ce on our rsf1 cluster nodes since we haven't tested it on CE but ended up going to CE without issues. At least that had openssh. ________________________________ From: Dan McDonald Sent: Saturday, October 28, 2017 5:35:53 PM To: Machine Man Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] Release 14 repo Shoot. Someone at OmniTI may need to whack the front ends or something else. Dan Sent from my iPhone (typos, autocorrect, and all) On Oct 28, 2017, at 5:25 PM, Machine Man > wrote: Thanks, I tried it and still getting: pkg: 0/1 catalogs successfully updated: http protocol error: code: 404 reason: Not Found URL: 'http://pkg.omniti.com/omnios/r151014/omnios/catalog/1/update.20170301T19Z. C' (happened 4 times) ________________________________ From: Dan McDonald > Sent: Saturday, October 28, 2017 3:31:47 PM To: Machine Man Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] Release 14 repo > On Oct 28, 2017, at 3:18 PM, Machine Man > wrote: > > Is this still online? I am not able to update my last two machines by change over to OpenSSH before upgrading to OmniOS CE r22. 
I see: http://pkg.omniti.com/omnios/r151014/ is giving me an appropriate response. > Are there any other hosted repos I can point to just to get the change over to openssh completed? > Is there a workaround? > I just updated a machines this past Sunday and it was still working. I'd stop-and-restart the "pkg update" you were doing. Maybe the pkg.omniti.com server moved? Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gearboxes at outlook.com Sun Oct 29 14:41:55 2017 From: gearboxes at outlook.com (Machine Man) Date: Sun, 29 Oct 2017 14:41:55 +0000 Subject: [OmniOS-discuss] Release 14 repo In-Reply-To: References: <15D9FC0C-C3E9-4F7B-96E8-6A1A1CEA6415@kebe.com> , <87F58191-7A58-4914-95F1-8853EF12F73D@kebe.com>, Message-ID: Hmmm I just tried it again and its working Not waiting around again. Thanks ________________________________ From: Machine Man Sent: Sunday, October 29, 2017 10:30:39 AM To: Dan McDonald Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: RE: [OmniOS-discuss] Release 14 repo It's probably not likely to be fixed at this point, is my only option to reinstall? I was living in fear for this, but just couldn't get all my remote systems done in time. My last few systems are all remote 400+ miles. I should have at least change over to openssh on all of them? BTW r22 isn't working either I tried to update to r22 before going to r22ce on our rsf1 cluster nodes since we haven't tested it on CE but ended up going to CE without issues. At least that had openssh. ________________________________ From: Dan McDonald Sent: Saturday, October 28, 2017 5:35:53 PM To: Machine Man Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] Release 14 repo Shoot. Someone at OmniTI may need to whack the front ends or something else. Dan Sent from my iPhone (typos, autocorrect, and all) On Oct 28, 2017, at 5:25 PM, Machine Man > wrote: Thanks, I tried it and still getting: pkg: 0/1 catalogs successfully updated: http protocol error: code: 404 reason: Not Found URL: 'http://pkg.omniti.com/omnios/r151014/omnios/catalog/1/update.20170301T19Z. C' (happened 4 times) ________________________________ From: Dan McDonald > Sent: Saturday, October 28, 2017 3:31:47 PM To: Machine Man Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] Release 14 repo > On Oct 28, 2017, at 3:18 PM, Machine Man > wrote: > > Is this still online? I am not able to update my last two machines by change over to OpenSSH before upgrading to OmniOS CE r22. I see: http://pkg.omniti.com/omnios/r151014/ is giving me an appropriate response. > Are there any other hosted repos I can point to just to get the change over to openssh completed? > Is there a workaround? > I just updated a machines this past Sunday and it was still working. I'd stop-and-restart the "pkg update" you were doing. Maybe the pkg.omniti.com server moved? Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From chip at innovates.com Mon Oct 30 15:15:31 2017 From: chip at innovates.com (Schweiss, Chip) Date: Mon, 30 Oct 2017 10:15:31 -0500 Subject: [OmniOS-discuss] Editing kernel command line with BSD Loader Message-ID: Forgive me if there is a FAQ somewhere on this, but I could not locate one. How do I edit the command line now that my OmniOS is using the BSD loader? I'd like to disable a driver at boot time such as: -B disable-mpt_sas=true -Chip -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From vab at bb-c.de Mon Oct 30 15:24:11 2017
From: vab at bb-c.de (Volker A. Brandt)
Date: Mon, 30 Oct 2017 16:24:11 +0100
Subject: [OmniOS-discuss] Editing kernel command line with BSD Loader
In-Reply-To:
References:
Message-ID: <23031.17435.930585.210343@shelob.bb-c.de>

Hi Chip!

> How do I edit the command line now that my OmniOS is using the BSD loader?
>
> I'd like to disable a driver at boot time such as:
>
> -B disable-mpt_sas=true

Create a file "/boot/conf.d/boot-args" (name does not matter) and put

boot-args="-v -B disable-mpt_sas=true"

in it. (The -v is highly recommended :-)


Regards -- Volker
--
------------------------------------------------------------------------
Volker A. Brandt                 Consulting and Support for Oracle Solaris
Brandt & Brandt Computer GmbH                     WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim, GERMANY           Email: vab at bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513               Schuhgröße: 46
Geschäftsführer: Rainer J.H. Brandt und Volker A. Brandt

"When logic and proportion have fallen sloppy dead"
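Spelled out as commands, the steps above come down to the following sketch (the modinfo check is just one way to confirm the result after the next reboot):

  # persists in the current boot environment; the file name under /boot/conf.d is arbitrary
  mkdir -p /boot/conf.d
  echo 'boot-args="-v -B disable-mpt_sas=true"' > /boot/conf.d/boot-args

  # after rebooting, the driver should not be loaded at all
  modinfo | grep mpt_sas        # no output expected if the disable took effect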