Overcommit Memory in SLES

This document (7002775) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 15 GA and later Service Packs
SUSE Linux Enterprise Server 12 GA and later Service Packs
SUSE Linux Enterprise Server 11 GA and later Service Packs
SUSE Linux Enterprise Server 10 GA and later Service Packs
SUSE Linux Enterprise Server 9 GA and later Service Packs

Situation

Memory overcommit under SLES is often misunderstood. The purpose of this TID is to clear up some common misunderstandings and to point to resources with more detailed information on the subject.

Note - In low-memory conditions, overcommitting memory can lead to the oom-killer killing apparently random tasks.

Resolution

The definitive source of documentation for the behavior of overcommit memory is the Linux kernel source code. In particular, /usr/src/linux/mm/mmap.c (available when the kernel-source package is installed) is a good place to start.

As the source code can be difficult to follow, there is also documentation provided with the kernel-source package that explains overcommit memory in detail. This documentation can be found in the following file:
  • /usr/src/linux/Documentation/vm/overcommit-accounting
This file details the following 3 modes available for overcommit memory in the Linux kernel:
  • 0  -  Heuristic overcommit handling.
  • 1  -  Always overcommit.
  • 2  -  Don't overcommit.
Mode 0 is the default mode for SLES servers. It allows processes to overcommit "reasonable" amounts of memory. If a process attempts to allocate an "unreasonable" amount of memory (as determined by internal heuristics), the allocation attempt is denied. In this mode, if many applications perform small overcommit allocations, it is still possible for the server to run out of memory. In that situation, the Out of Memory killer (oom-killer) kills processes until enough memory is available for the server to continue operating.

Mode 1 allows processes to commit as much memory as requested. Allocation requests never fail with an "out of memory" error in this mode, although the oom-killer can still be invoked if the committed memory is actually used and exhausted. This mode is usually appropriate only for certain scientific applications.

Mode 2 prevents memory overcommit and limits the amount of memory that processes can allocate. This mode ensures that processes will not be randomly killed by the oom-killer, and that there will always be enough memory for the kernel to operate properly. The total amount of memory available for allocation by the system is determined through the following calculation:
  • Total Commit Memory = swap size + (RAM size * overcommit_ratio / 100)
By default, overcommit_ratio is set to 50. With this setting, the total commit limit is equal to the total amount of swap space in the server plus 50% of the RAM. For example, a server with 1 GB of RAM and 1 GB of swap space has a total commit limit of 1.5 GB.
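As a quick sanity check, the expected mode 2 limit can be computed from /proc/meminfo and compared with the CommitLimit value the kernel reports. The following is only a rough sketch; the kernel may report a slightly smaller value (for example, when huge pages are reserved), and the awk arithmetic simply applies the formula above:

  # Compute swap + RAM * overcommit_ratio / 100 (all values in kB)
  ratio=$(cat /proc/sys/vm/overcommit_ratio)
  awk -v r="$ratio" '/^MemTotal:/ {m=$2} /^SwapTotal:/ {s=$2} END {printf "Expected CommitLimit: %d kB\n", s + m * r / 100}' /proc/meminfo
  # Compare with the value the kernel enforces in mode 2
  grep CommitLimit /proc/meminfo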

To determine or change which overcommit mode a server is operating in, the following proc files are used:
  • /proc/sys/vm/overcommit_memory
  • /proc/sys/vm/overcommit_ratio
Echoing the number of the desired mode into overcommit_memory will immediately change the overcommit mode being used. If mode 2 is in use, the ratio is determined using the value in the overcommit_ratio file.
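For example, the following commands (run as root) show the current settings and switch the server to mode 2. This is a sketch of the usual procedure; note that values written directly to /proc do not survive a reboot, so a persistent change also needs an entry such as vm.overcommit_memory = 2 in /etc/sysctl.conf:

  # Show the current overcommit mode and ratio
  cat /proc/sys/vm/overcommit_memory
  cat /proc/sys/vm/overcommit_ratio
  # Switch to mode 2 (no overcommit) with immediate effect
  echo 2 > /proc/sys/vm/overcommit_memory
  # Equivalent form using sysctl
  sysctl -w vm.overcommit_memory=2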

To view the current memory commitment statistics, check the following fields in /proc/meminfo:
  • CommitLimit   - the current overcommit limit (enforced only in mode 2)
  • Committed_AS  - the amount of memory currently committed on the system
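For example, both values (reported in kB) can be read directly:

  grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo

Under mode 2, an allocation request is denied once it would push Committed_AS past CommitLimit.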

Additional Information

If the oom-killer is invoked, messages such as the following can be seen in /var/log/messages:

  kernel: Out of Memory: Kill process 12063 (gdb) score 4349115 and children.
  kernel: Out of memory: Killed process 3038 (ndsd).
  kernel: kthread invoked oom-killer: gfp_mask=0xd0, order=1, oomkilladj=0
  kernel:
  kernel: Call Trace: <ffffffff801645ee>{oom_kill_process+87}
  kernel:        <ffffffff80164a6d>{out_of_memory+374} <ffffffff80166991>{__alloc_pages+613}
  kernel:        <ffffffff80183412>{cache_alloc_refill+266} <ffffffff8017bb9f>{alloc_page_interleave+56}
  kernel:        <ffffffff8016628b>{__get_free_pages+14} <ffffffff801318f7>{copy_process+230}
  kernel:        <ffffffff80130a73>{wake_up_new_task+917} <ffffffff8013313b>{do_fork+196}
  kernel:        <ffffffff8012c0f8>{activate_task+204} <ffffffff80147d5d>{keventd_create_kthread+0}
  kernel:        <ffffffff8010be41>{kernel_thread+129} <ffffffff80147d5d>{keventd_create_kthread+0}
  kernel:        <ffffffff80147f39>{kthread+0} <ffffffff8010be9e>{child_rip+0}
  kernel:        <ffffffff80147d7a>{keventd_create_kthread+29} <ffffffff80147d5d>{keventd_create_kthread+0}
  kernel:        <ffffffff8014455e>{run_workqueue+139} <ffffffff80144c6c>{worker_thread+0}
  kernel:        <ffffffff80144d60>{worker_thread+244} <ffffffff8012c654>{default_wake_function+0}
  kernel:        <ffffffff80148025>{kthread+236} <ffffffff8010bea6>{child_rip+8}
  kernel:        <ffffffff80147f39>{kthread+0} <ffffffff8010be9e>{child_rip+0}
  kernel: Mem-info:
  kernel: Node 0 DMA per-cpu:
  kernel: CPU    0: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: CPU    1: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: CPU    2: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: CPU    3: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: CPU    4: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: CPU    5: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: CPU    6: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: CPU    7: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
  kernel: Node 0 DMA32 per-cpu:
  kernel: CPU    0: Hot: hi:  186, btch:  31 usd: 183   Cold: hi:   62, btch:  15 usd:  49
  kernel: CPU    1: Hot: hi:  186, btch:  31 usd: 162   Cold: hi:   62, btch:  15 usd:  20
  kernel: CPU    2: Hot: hi:  186, btch:  31 usd:  89   Cold: hi:   62, btch:  15 usd:  19
  kernel: CPU    3: Hot: hi:  186, btch:  31 usd:  53   Cold: hi:   62, btch:  15 usd:  14
  kernel: CPU    4: Hot: hi:  186, btch:  31 usd: 156   Cold: hi:   62, btch:  15 usd:  52
  kernel: CPU    5: Hot: hi:  186, btch:  31 usd:  77   Cold: hi:   62, btch:  15 usd:  50
  kernel: CPU    6: Hot: hi:  186, btch:  31 usd: 144   Cold: hi:   62, btch:  15 usd:  33
  kernel: CPU    7: Hot: hi:  186, btch:  31 usd: 165   Cold: hi:   62, btch:  15 usd:  14
  kernel: Node 0 Normal per-cpu:
  kernel: CPU    0: Hot: hi:  186, btch:  31 usd:  72   Cold: hi:   62, btch:  15 usd:  49
  kernel: CPU    1: Hot: hi:  186, btch:  31 usd: 178   Cold: hi:   62, btch:  15 usd:  53
  kernel: CPU    2: Hot: hi:  186, btch:  31 usd: 104   Cold: hi:   62, btch:  15 usd:  53
  kernel: CPU    3: Hot: hi:  186, btch:  31 usd:  42   Cold: hi:   62, btch:  15 usd:  57
  kernel: CPU    4: Hot: hi:  186, btch:  31 usd:  76   Cold: hi:   62, btch:  15 usd:  37
  kernel: CPU    5: Hot: hi:  186, btch:  31 usd: 172   Cold: hi:   62, btch:  15 usd:  35
  kernel: CPU    6: Hot: hi:  186, btch:  31 usd: 174   Cold: hi:   62, btch:  15 usd:  14
  kernel: CPU    7: Hot: hi:  186, btch:  31 usd: 184   Cold: hi:   62, btch:  15 usd:  20
  kernel: Free pages:       42772kB (0kB HighMem)
  kernel: Active:1010929 inactive:986215 dirty:0 writeback:0 unstable:0 free:10693 slab:12365 mapped:69 pagetables:8200
  kernel: Node 0 DMA free:12192kB min:16kB low:20kB high:24kB active:0kB inactive:0kB present:11816kB pages_scanned:0 all_unreclaimable? yes
  kernel: lowmem_reserve[]: 0 3254 8052 8052
  kernel: Node 0 DMA32 free:23768kB min:4636kB low:5792kB high:6952kB active:1625960kB inactive:1623184kB present:3332668kB pages_scanned:10156350 all_unreclaimable? yes
  kernel: lowmem_reserve[]: 0 0 4797 4797
  kernel: Node 0 Normal free:6812kB min:6836kB low:8544kB high:10252kB active:2417756kB inactive:2321676kB present:4912640kB pages_scanned:12975279 all_unreclaimable? yes
  kernel: lowmem_reserve[]: 0 0 0 0
  kernel: Node 0 DMA: 8*4kB 4*8kB 2*16kB 4*32kB 5*64kB 3*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 2*4096kB = 12192kB
  kernel: Node 0 DMA32: 2*4kB 8*8kB 9*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 5*4096kB = 23768kB
  kernel: Node 0 Normal: 1*4kB 5*8kB 3*16kB 2*32kB 0*64kB 0*128kB 2*256kB 0*512kB 0*1024kB 1*2048kB 1*4096kB = 6812kB
  kernel: Swap cache: add 29614894, delete 29614935, find 12752889/16454760, race 4+409
  kernel: Free swap  = 0kB
  kernel: Total swap = 4200956kB
  kernel: Free swap:            0kB
  kernel: 2293759 pages of RAM
  kernel: 249173 reserved pages
  kernel: 67865 pages shared
  kernel: 3 pages swap cached

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7002775
  • Creation Date: 19-Mar-2009
  • Modified Date: 15-Jun-2023
    • SUSE Linux Enterprise Server
