Extending an existing MD RAID raid0 array
This document (000020890) is provided subject to the disclaimer at the end of this document.
Environment
SUSE Linux Enterprise Server 12
Situation
An attempt to extend an existing MD RAID raid0 array by simply adding a new device fails:

# mdadm --add /dev/md127 /dev/loop2
mdadm: add new device failed for /dev/loop2 as 3: Invalid argument

On SLES 12, if the extension has NOT been correctly completed before reboot, then after reboot the existing array will be in an inactive state (see the Additional Information section for more details):
# cat /proc/mdstat
Personalities : [raid0]
md127 : inactive sda[0] sdb[1] sdc[2](S)
      6282240 blocks super 1.2

unused devices: <none>
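Beyond /proc/mdstat, the inactive state can also be confirmed directly (a minimal check, assuming the md127 device name from the example above; the sysfs file is expected to report 'inactive'):

# mdadm --detail /dev/md127
# cat /sys/block/md127/md/array_state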
Resolution
First, let's look at the relevant parts of the mdadm(8) man page:
# man mdadm | sed -rn '/to convert a RAID0/,/^[[:blank:]]*$/{/^[[:blank:]]*$/q;p}' | fmt -w 80
From 2.6.35, the Linux Kernel is able to convert a RAID0 in to a RAID4 or
RAID5. mdadm uses this functionality and the ability to add devices to a
RAID4 to allow devices to be added to a RAID0. When requested to do this,
mdadm will convert the RAID0 to a RAID4, add the necessary disks and make
the reshape happen, and then convert the RAID4 back to RAID0.

# man mdadm | sed -rn '/If the --raid-disks option is being used to increase/,/^[[:blank:]]*$/{/^[[:blank:]]*$/q;p}' | fmt -w 80
If the --raid-disks option is being used to increase the number of devices
in an array, then --add can be used to add some extra devices to be
included in the array. In most cases this is not needed as the extra
devices can be added as spares first, and then the number of raid disks
can be changed. However, for RAID0 it is not possible to add spares. So to
increase the number of devices in a RAID0, it is necessary to set the new
number of devices, and to add the new devices, in the same command.
The correct procedure to extend a raid0 array is as follows. First, check the current state of the array:
# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md127 : active raid0 loop1[1] loop0[0]
      10475520 blocks super 1.2 32k chunks

unused devices: <none>
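Before growing, it is worth confirming that the new device carries no stale RAID metadata (a quick check reusing wipefs and mdadm --examine, which are also used later in this TID; /dev/loop2 is the device from the example below and should show no linux_raid_member signature or md superblock):

# wipefs /dev/loop2
# mdadm --examine /dev/loop2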
Now the extension (grow) itself, using the new /dev/loop2 device as an example. Grow mode, level 'raid0', the number of raid devices set to '3', and the new disk to add are all given in one command:
# mdadm -Gv /dev/md127 -l 0 -n 3 -a /dev/loop2

The reshape is now ongoing; note that internally the array temporarily shows as 'raid4':
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md127 : active raid4 loop2[3] loop1[1] loop0[0]
      10475520 blocks super 1.2 level 4, 32k chunk, algorithm 5 [4/3] [UU__]
      [>....................]  reshape =  2.1% (115168/5237760) finish=3.7min speed=23033K/sec

unused devices: <none>

After the reshape is completed, the array is at 'raid0' level again:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md127 : active raid0 loop2[3] loop1[1] loop0[0]
      15713280 blocks super 1.2 32k chunks

unused devices: <none>
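Optionally, the result can be verified with mdadm's detail output (a minimal check; the grep pattern only narrows the output and assumes the usual field names):

# mdadm --detail /dev/md127 | grep -E 'Raid Level|Array Size|Raid Devices'

Note that any filesystem on top of the array still has to be grown separately; that step is outside the scope of this example.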
Cause
The array was extended with a plain 'mdadm --add' instead of a single 'mdadm --grow' command that sets the new number of raid devices and adds the new device at the same time. A raid0 array cannot have spares, so the plain '--add' is rejected; with older mdadm package versions the attempt nevertheless writes metadata to the new disk and records it as a spare, which leaves the array inactive after the next reboot (see the Additional Information section).
Additional Information
For an inactive raid0 array on SLES 12
If an existing raid0 array was extended incorrectly (see the 'Cause' section) and the correct procedure was not carried out before a system reboot, the array ends up in an inactive state. This is because the incorrect command nevertheless placed metadata on the disk:
# cat /proc/mdstat
Personalities : [raid0]
md127 : inactive sda[0] sdb[1] sdc[2](S)
      6282240 blocks super 1.2

unused devices: <none>

# wipefs /dev/sdc
offset               type
----------------------------------------------------------------
0x1000               linux_raid_member   [raid]
                     LABEL: s124qb01:sf
                     UUID:  dcbae5a8-01aa-8599-d8bd-0cb3f2f6c436
Note how '/dev/sdc' is visible as an array member and is marked as a spare (the '(S)' suffix in /proc/mdstat). This only affects old mdadm package versions.
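If desired, the role recorded in the on-disk metadata can also be inspected directly (a minimal sketch; the grep pattern assumes the usual version-1.2 superblock field names):

# mdadm --examine /dev/sdc | grep -E 'Device Role|Array State'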
Since raid0 conceptually cannot have a spare device, the solution for a spare device that appears in a raid0 array with an older mdadm version is simple: remove it and start the array:
# cat /proc/mdstat
Personalities : [raid0]
md127 : inactive sda[0] sdb[1] sdc[2](S)
      6282240 blocks super 1.2

unused devices: <none>

# mdadm /dev/md127 --remove /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md127

# cat /proc/mdstat
Personalities : [raid0]
md127 : inactive sda[0] sdb[1]
      4188160 blocks super 1.2

# mdadm --run /dev/md127
mdadm: started array /dev/md/sf

# cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 sda[0] sdb[1]
      4188160 blocks super 1.2 256k chunks

unused devices: <none>
Do NOT forget to remove the MD superblock from the newly removed disk:
# mdadm --zero-superblock /dev/sdc
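To verify, wipefs can be run once more on the removed disk; it should no longer report a linux_raid_member signature. If the disk is meant to join the array after all, the extension can then be done with the correct grow command from the 'Resolution' section (shown here with /dev/sdc instead of /dev/loop2):

# wipefs /dev/sdc
# mdadm -Gv /dev/md127 -l 0 -n 3 -a /dev/sdc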
Tuning array sync speed
As noted in the man page quotations in the 'Resolution' section, extending a raid0 array triggers a reshape, which can have a negative impact on I/O performance. The impact can be limited by lowering the sync speed limits, but a lower limit also prolongs the reshape itself, so a sensible value is needed (determining a proper value is outside the scope of this TID):
# sysctl -w dev.raid.speed_limit_max=<value>   # because reshaping causes heavy I/O
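For example, the current limits can be read first and the ceiling lowered for the duration of the reshape (the 50000 KB/s value is only an illustration, not a recommendation; the reshape line in /proc/mdstat shows the resulting speed):

# sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# sysctl -w dev.raid.speed_limit_max=50000
# cat /proc/mdstat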
Man pages
- https://manpages.opensuse.org/Tumbleweed/mdadm/mdadm.8.en.html
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000020890
- Creation Date: 14-Dec-2022
- Modified Date: 20-Dec-2022
- SUSE Linux Enterprise Server
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com