Side effects when increasing vm.max_map_count
This document (7000830) is provided subject to the disclaimer at the end of this document.
SUSE Linux Enterprise Server 9
This problem was worked around by increasing vm.max_map_count 30-fold (sysctl -w vm.max_map_count=1966080). Although this setting did resolve the problem, the documentation in /usr/src/linux/Documentation/sysctl/vm.txt does not explain any of the side effects of increasing this parameter. This TID was created to document questions and answers associated with the possible side effects of increasing this parameter.
1. What are the side effects of increasing this parameter?
Increasing this parameter will potentially increase memory consumption by an application and thereby reduce performance of the server. However, this is entirely dependent upon the application allocating a large number of memory maps.
2. Is there an absolute maximum number that can be specified for vm.max_map_count?
Theoretically, yes: INT_MAX for the architecture. In practice, however, the server will run out of memory long before an application hits that limit.
3. How does increasing vm.max_map_count impact the kernel memory footprint?
Each mapped area needs some kernel memory. At least a vm_area_struct must be allocated, i.e. around 128 bytes per mapping (plus some small overhead added by the SLAB allocator if additional slabs are needed). When vm.max_map_count is larger, processes are allowed to make the kernel allocate more memory for this purpose.
4. Does the kernel preallocate memory according to this setting?
No. The memory is allocated only when a process actually needs the map areas.
5. Does increasing this limit have any performance impact (e.g. more CPU time to scan the memory map)?
No. Increasing the limit does not by itself change anything. Only processes which actually use a large number of memory maps are affected.
How are they affected? Since there will be more elements in the VMA red-black tree, all operations on it will take longer. Most operations slow down logarithmically, e.g. further mmap() and munmap() calls as well as page-fault handling (both major and minor). Some operations slow down linearly, e.g. copying the VMAs when a new process is forked.
In short, there is no impact on memory footprint or performance for processes which use the same number of maps. On the other hand, processes where one of the memory-mapping functions would have failed with ENOMEM on hitting the limit will now be allowed to consume the additional kernel memory, with all the implications described above.
- Document ID: 7000830
- Creation Date: 02-Jul-2008
- Modified Date: 16-Mar-2021
- SUSE Linux Enterprise Server
For questions or concerns with the SUSE Knowledgebase please contact: firstname.lastname@example.org