[OmniOS-discuss] Omnios: can't start as many kvm's as I thought due to memory pressure

pieter van puymbroeck pietervanpuymbroeck at hotmail.com
Thu Mar 9 06:34:11 UTC 2017


Hello Jan,

$ swap -sh                           
total: 52G allocated + 100M reserved = 53G used, 8.8G available
$

This is when 7 machines are running. 
I face the issue when starting the 8th one.
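
If swap turns out to be the limit, I guess I could grow it with an extra ZFS
volume, something along these lines (pool/volume name and the 64G size are
just an example, not what is configured here):

# zfs create -V 64G rpool/swap2        # dedicated zvol to use as swap
# swap -a /dev/zvol/dsk/rpool/swap2    # add it as a swap device
# swap -l                              # confirm it shows up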

Br
Pieter


Sent from my iPhone

> On 8 Mar 2017, at 22:29, Jan Vlach <janus at volny.cz> wrote:
> 
> Hi Pieter,
> 
> how much disk space do you have allocated for swap?
> 
> Jan
> 
>> On Wed, Mar 08, 2017 at 07:32:23PM +0000, pieter van puymbroeck wrote:
>> Hi,
>> 
>> 
>> I want to start 12 KVM machines in my global zone and I end up in a memlock issue after the 6th machine:
>> 
>> 
>> qemu_mlock: have only locked 0 of 7516192768 bytes; still trying…
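>>
>> (Not sure whether it matters for KVM running in the global zone, but the locked-memory resource controls can at least be checked, e.g.:)
>>
>> # prctl -n zone.max-locked-memory -i zone global
>> # prctl -n project.max-locked-memory -i process $$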
>> 
>> 
>> The details:
>> 
>> 
>> # echo "::memstat" | mdb -k
>> 
>> Page Summary                Pages                MB  %Tot
>> 
>> ------------     ----------------  ----------------  ----
>> 
>> Kernel                    1462114              5711    5%
>> 
>> Boot pages                    166                 0    0%
>> 
>> ZFS File Data              143529               560    0%
>> 
>> Anon                        27477               107    0%
>> 
>> Exec and libs                1244                 4    0%
>> 
>> Page cache                   4911                19    0%
>> 
>> Free (cachelist)             3597                14    0%
>> 
>> Free (freelist)          29809914            116444   95%
>> 
>> 
>> Total                    31452952            122863
>> 
>> Physical                 31452951            122863
>> 
>> #
>> 
>> 
>> My pagesize is:
>> 
>> 
>> # pagesize
>> 
>> 4096
>> 
>> #
>> 
>> 
>> 
>> so 31452952 pages * 4096 bytes/page ≈ 122863 MB (the MB column in memstat is rounded)
>>
>> thus: 31452952 / 256 ≈ 122863 MB
>> 
>> 
>> That's correct. So to convert pages to MB, divide by 256; to convert MB to pages, multiply by 256.
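>>
>> (A quick sanity check of those conversions from the shell, using the pagesize shown above:)
>>
>> # PGSZ=$(pagesize)                              # 4096 bytes per page here
>> # echo $(( 31452952 * PGSZ / 1024 / 1024 ))     # pages -> MB
>> 122863
>> # echo $(( 96 * 1024 * 1024 * 1024 / PGSZ ))    # 96 GB -> pages
>> 25165824
>> #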
>> 
>> 
>> I want 8GB per VM and I want 12 VMs. That is 96GB (98304MB)
>> 
>> -> needs 98304 * 256 = 25165824 pages.
>> 
>> 
>> 
>> 31452952 - 25165824 = 6287128 left (or 24559 MB ~ 23.9GB )
>> 
>> 
>> I reserved (in /etc/system ) 15 GB for zfs arc
>> 
>> 15GB = 15360 MB = 3932160 pages
>> 
>> 
>> So this gives 6287128 - 3932160 = 2354968 pages left (or 9199MB ~ 8.98 GB)
>> 
>> 
>> -> this should be sufficient for OmniOS to run, unless there are other things I'm not aware of, which is very well possible.
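>>
>> (For reference, the 15 GB ARC cap in /etc/system is a line along these lines; the value is 15 * 1024^3 bytes, and the exact line on my box may differ slightly:)
>>
>> * cap the ZFS ARC at 15 GB (15 * 1024 * 1024 * 1024 bytes)
>> set zfs:zfs_arc_max = 16106127360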
>> 
>> 
>> So remember
>> 
>> 
>> Page Summary                Pages                MB  %Tot
>> 
>> ------------     ----------------  ----------------  ----
>> 
>> ...
>> 
>> Free (cachelist)             3597                14    0%
>> 
>> Free (freelist)          29809914            116444   95%
>> 
>> 
>> Total                    31452952            122863
>> 
>> Physical                 31452951            122863
>> 
>> #
>> 
>> 
>> Let's boot one KVM with 8GB of RAM for the VM. We expect (8GB * 1024) * 256 = 2097152 pages to be eaten up. Counting from the Total, we should end up with 31452952 - 2097152 = 29355800 pages.
>>
>> Or, if we are eating from the "Free" part, we should end up with 29809914 - 2097152 = 27712762 pages on the freelist (or 108252MB ~ 105.71 GB).
>> 
>> 
>> Let's confirm:
>> 
>> 
>> Before boot:
>> 
>> # mdb -ke 'availrmem/D ; pages_pp_maximum/D'
>> 
>> availrmem:
>> 
>> availrmem:      29845807
>> 
>> pages_pp_maximum:
>> 
>> pages_pp_maximum:               1216998
>> 
>> #
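>>
>> (As far as I understand it, the kernel will not let locked memory push availrmem below pages_pp_maximum, so the lockable headroom at this point should be roughly:)
>>
>> # echo $(( 29845807 - 1216998 ))      # availrmem - pages_pp_maximum, in pages
>> 28628809
>> # echo $(( 28628809 / 256 / 1024 ))   # roughly the GB that could still be locked
>> 109
>> #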
>> 
>> 
>> Boot the machine.
>> 
>> 
>> # echo "::memstat" | mdb -k
>> 
>> Page Summary                Pages                MB  %Tot
>> 
>> ------------     ----------------  ----------------  ----
>> 
>> Kernel                    1472257              5751    5%
>> 
>> Boot pages                    166                 0    0%
>> 
>> ZFS File Data              149321               583    0%
>> 
>> Anon                      2139596              8357    7%
>> 
>> Exec and libs                1549                 6    0%
>> 
>> Page cache                   4955                19    0%
>> 
>> Free (cachelist)             3445                13    0%
>> 
>> Free (freelist)          27681663            108131   88% <<<----- expected 27712762 pages
>> 
>> 
>> Total                    31452952            122863
>> 
>> Physical                 31452951            122863
>> 
>> # mdb -ke 'availrmem/D ; pages_pp_maximum/D'
>> 
>> availrmem:
>> 
>> availrmem:      26621718
>> 
>> pages_pp_maximum:
>> 
>> pages_pp_maximum:               1216998
>> 
>> #
>> 
>> 
>> 
>> Look there: we expected to end up with 29355800 pages (counting from the Total), or 27712762 pages on the freelist; the freelist above shows 27681663, which is close.
>> 
>> 
>> If we check availrmem, 29845807 - 26621718 = 3224089 pages were consumed (12594.09MB ~ 12.29GB).
>>
>> Double check against the freelist: 29809914 - 27681663 = 2128251 pages consumed (8313.48MB ~ 8.11GB).
>>
>> And this is for a machine of 8GB.
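>>
>> (To see how much the qemu process itself has locked, the Locked column of pmap -x should show it; assuming the process can be found with pgrep:)
>>
>> # pmap -x $(pgrep -of qemu-system-x86_64) | tail -1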
>> 
>> 
>> Let's do a test with a vm with 4GB allocated ( 1048576 pages )
>> 
>> 
>> 
>> Before boot:
>> 
>> # echo "::memstat" | mdb -k
>> 
>> Page Summary                Pages                MB  %Tot
>> 
>> ------------     ----------------  ----------------  ----
>> 
>> Kernel                    1476857              5768    5%
>> 
>> Boot pages                    166                 0    0%
>> 
>> ZFS File Data              176243               688    1%
>> 
>> Anon                      2139416              8357    7%
>> 
>> Exec and libs                1549                 6    0%
>> 
>> Page cache                   4961                19    0%
>> 
>> Free (cachelist)             3440                13    0%
>> 
>> Free (freelist)          27650320            108009   88%
>> 
>> 
>> Total                    31452952            122863
>> 
>> Physical                 31452951            122863
>> 
>> # mdb -ke 'availrmem/D ; pages_pp_maximum/D'
>> 
>> availrmem:
>> 
>> availrmem:      26596813
>> 
>> pages_pp_maximum:
>> 
>> pages_pp_maximum:               1216998
>> 
>> #
>> 
>> 
>> So we expect
>>
>> 27650320 - 1048576 = 26601744 pages left on the freelist (103913.06 MB ~ 101.47 GB free)
>>
>> or
>>
>> 26596813 - 1048576 = 25548237 pages left in availrmem (99797.80 MB ~ 97.45 GB free)
>> 
>> 
>> Booting the vm.
>> 
>> And the aftermath:
>> 
>> 
>> # ./start_kvm_pogo.sh
>> 
>> qemu-system-x86_64: -net vnic,vlan=40,name=net0,ifname=vnic_kvm_pogo0,macaddr=2:8:20:87:2f:69: vnic dhcp disabled
>> 
>> 
>> Started VM: pogo
>> 
>> VNC available at: host IP 127.0.0.1
>> 
>> 10.0.1.171
>> 
>> ::1/128
>> 
>> ::/0 port 5900
>> 
>> QEMU Monitor, do: # telnet localhost . Note: use Control ] to exit monitor before quit!
>> 
>> #
>> 
>> 
>> Check the values
>> 
>> # echo "::memstat" | mdb -k
>> 
>> Page Summary                Pages                MB  %Tot
>> 
>> ------------     ----------------  ----------------  ----
>> 
>> Kernel                    1482301              5790    5%
>> 
>> Boot pages                    166                 0    0%
>> 
>> ZFS File Data              180958               706    1%
>> 
>> Anon                      3198541             12494   10%
>> 
>> Exec and libs                1549                 6    0%
>> 
>> Page cache                   4961                19    0%
>> 
>> Free (cachelist)             3440                13    0%
>> 
>> Free (freelist)          26581036            103832   85%
>> 
>> 
>> Total                    31452952            122863
>> 
>> Physical                 31452951            122863
>> 
>> #
>> 
>> # mdb -ke 'availrmem/D ; pages_pp_maximum/D'
>> 
>> availrmem:
>> 
>> availrmem:      24472324
>> 
>> pages_pp_maximum:
>> 
>> pages_pp_maximum:               1216998
>> 
>> #
>> 
>> 
>> Let's verify.
>>
>> Test 1 (against the freelist):
>>
>> 27650320 - 26581036 = 1069284 pages consumed (4176.89 MB ~ 4GB)
>>
>> Test 2 (against availrmem):
>>
>> 26596813 - 24472324 = 2124489 pages consumed (8298.78 MB ~ 8.10GB)
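>>
>> (To repeat this measurement without copy/pasting numbers, a small wrapper like the following should do; the start script is just the one used above:)
>>
>> #!/bin/sh
>> # Snapshot availrmem before and after starting a VM and report the difference.
>> before=$(echo 'availrmem/D' | mdb -k | awk 'NR==2 {print $2}')
>> ./start_kvm_pogo.sh
>> sleep 60   # give qemu some time to lock the guest memory
>> after=$(echo 'availrmem/D' | mdb -k | awk 'NR==2 {print $2}')
>> echo "availrmem dropped by $(( before - after )) pages ($(( (before - after) / 256 )) MB)"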
>> 
>> 
>> 
>> Three questions remain:
>>
>> - Why does this happen? If we check with memstat it looks fairly OK, but apparently that is not correct, as I can't start the 12 machines.
>>
>> - How do I avoid this? If I tell kvm "use 8GB", I want it to use 8GB, not 12GB. Or am I missing something crucial?
>>
>> - If it's a (kernel) parameter ... which one, and how do I set it?
>> 
>> 
>> Thanks and best regards,
>> 
>> Pieter
>> 
>> 
> 
>> _______________________________________________
>> OmniOS-discuss mailing list
>> OmniOS-discuss at lists.omniti.com
>> http://lists.omniti.com/mailman/listinfo/omnios-discuss
> 
> 
> -- 
> Be the change you want to see in the world.

