Best practice for providing kernel core dumps to support incidents
This document (7010056) is provided subject to the disclaimer at the end of this document.
SUSE Linux Enterprise Server 11
SUSE Linux Enterprise Server 12
SUSE Linux Enterprise Server 15
A kernel core dump is an extract of the system's memory at the time the dump is taken.
The information in a core dump is helpful for seeing the system state and environment at a certain point in time.
That also limits what a core dump can be used for.
A core dump is for investigating kernel crashes. Most of the time, it is not helpful for investigating system lockups caused by other factors, e.g. blocked I/O or resource limits.
It also normally cannot be used to investigate performance issues or application crashes on the system.
Next, it should be considered which information within the system memory is important for the investigation.
Since it is a kernel crash that is to be analyzed, the memory occupied by the kernel should be in the dump.
Under certain circumstances, it is also useful to have the memory occupied by userspace processes in the dump.
Mostly irrelevant are memory pages that are allocated but unused, pages belonging to the filesystem cache, and unallocated pages.
Usually, the useful memory pages are only a small fraction of the whole memory.
Therefore, the irrelevant information should already be filtered out while the dump is being taken.
This can be done by setting the dump level.
To choose the right number for the dump level, please read the chapter:
Dump Filtering and Compression
in the kdump manpage, available via:
man 7 kdump
The default dump level of 31 should be fine for most situations.
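The dump level is a bitmask of page types to exclude from the dump, following the makedumpfile convention. As an illustration of how the default of 31 is composed (the variable names below are purely illustrative, not official identifiers):

```shell
# Dump level bits = page types EXCLUDED from the dump (makedumpfile convention):
ZERO_PAGES=1        # pages filled with zeroes
CACHE=2             # non-private filesystem cache pages
CACHE_PRIVATE=4     # private cache pages
USER_DATA=8         # userspace process data pages
FREE_PAGES=16       # free (unallocated) pages

# The default dump level 31 excludes all five page types:
DUMPLEVEL=$((ZERO_PAGES + CACHE + CACHE_PRIVATE + USER_DATA + FREE_PAGES))
echo "$DUMPLEVEL"   # prints 31
```

If userspace memory is needed for the investigation, leaving out the user data bit gives a dump level of 23.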
If filtering cannot be done directly while the core dump is taken, it is also possible to filter the created core afterwards.
Please refer to TID:
for further details.
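As a sketch, such post-hoc filtering can be done with the makedumpfile utility; the file names below are hypothetical, -c enables compression, and -d sets the dump level described above:

```shell
# Hypothetical paths: "vmcore" is the unfiltered dump, "vmcore.filtered" the result.
VMCORE=vmcore
FILTERED=${VMCORE}.filtered
if [ -r "$VMCORE" ]; then
    # -c: compress data pages; -d 31: filter with the default dump level
    makedumpfile -c -d 31 "$VMCORE" "$FILTERED"
fi
```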
A filtered and compressed dump is usually no bigger than a few GB and can be uploaded to the Novell FTP servers.
These are for EMEA:
For uploading, an FTP client needs to be used.
The login is done with username:
When prompted for a password, just press Enter.
Afterwards, please inform your support contact about the file name and location of the core dump file.
The upload speed to the FTP servers can vary with the network load and the load of the FTP server.
A core dump of 10 GB usually takes about two hours to upload.
When uploading larger files, it is advisable to split them into smaller chunks that upload faster.
On SLES, the split utility can be used for this purpose, e.g.:
split -b 2G test123 test123
This will split the 10 GB dump test123 into chunks of 2 GB with the suffixes aa, ab, and so on, e.g.:
ls -lh test123a*
-rw-r--r-- 1 root root 2.0G Jan 26 11:31 test123aa
-rw-r--r-- 1 root root 2.0G Jan 26 11:31 test123ab
-rw-r--r-- 1 root root 2.0G Jan 26 11:31 test123ac
-rw-r--r-- 1 root root 2.0G Jan 26 11:31 test123ad
-rw-r--r-- 1 root root 2.0G Jan 26 11:31 test123ae
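On the receiving side, the chunks can simply be concatenated again in suffix order. A small self-contained round trip (file names follow the test123 example above; sizes are shrunk so the demo runs quickly):

```shell
# Create a small stand-in for the real dump
head -c 10M /dev/urandom > test123
# Split into 2 MB chunks: test123aa, test123ab, ...
split -b 2M test123 test123
# Reassemble; the shell expands test123a* in the correct (alphabetical) order
cat test123a* > test123.joined
# Verify that the reassembled file matches the original byte for byte
cmp test123 test123.joined && echo "files match"
```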
Alternatively, the dump can be provided at a local download location so that the support contact can download it from there.
Finally, if the dump is really big and there are too many problems uploading it, the dump can be shipped on physical media by postal mail. Ask the support engineer you are working with for their corporate office address.
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 7010056
- Creation Date: 25-Jan-2012
- Modified Date: 01-Sep-2020
- SUSE Linux Enterprise Server
For questions or concerns with the SUSE Knowledgebase please contact: email@example.com