Software RAID
In this article we are going to look at installing and configuring software RAID (Redundant Array of Inexpensive Disks) from the command line. RAID can provide a good level of performance and reliability because data can be mirrored across multiple disks, which allows your data to remain accessible even if one disk fails. In this article we will create a RAID1 (mirroring) array, which provides fault tolerance by writing the data to two disks. Table 1 lists the RAID levels supported.
RAID Level | Description |
RAID0 | Striped disk array without fault tolerance – Requires a minimum of two disks to be implemented. |
RAID1 | Mirroring & Duplexing – Requires a minimum of two disks to be implemented. |
RAID4 | Independent data disks with shared parity disk – Requires a minimum of three disks to implement. |
RAID5 | Independent data disks with distributed parity blocks – Requires a minimum of three disks to be implemented. |
RAID6 | Independent data disks with two independent distributed parity schemes – Requires a minimum of four disks to be implemented. |
RAID10 | Very high reliability combined with high performance – Requires a minimum of four disks. |
Table 1: Software RAID supported levels.
Installation
In this section of the article we will install the “mdadm” package; this can be done using the YaST utility. The YaST software management module can be started with two different commands: yast sw_single, which starts a curses-based interface, and yast2 sw_single, which starts a GUI (Graphical User Interface). In this article we will use the yast sw_single command to install the “mdadm” utilities.
Once you have started the YaST software management module, search for the keyword “mdadm”. Once you have selected the “mdadm” package you can begin the installation and exit when it has finished. The next step is to confirm that the “mdadm” package was installed successfully; this can be done by issuing the rpm command with the -q qualifier as shown in Figure 2.1.
linux-9sl8:~ # rpm -q mdadm
mdadm-2.6-0.17
Figure 2.1: Confirm the “mdadm” utilities are installed.
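If you prefer to stay on the command line for the installation as well, recent openSUSE releases also ship the zypper package manager. Assuming zypper is present on your release, the same package can be installed with:

# zypper install mdadm

followed by the same rpm -q mdadm check shown in Figure 2.1.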
Preparing the Hard Disks/Partitions
In this section of the article we will look at preparing the hard disks. In this article we have two hard disks (/dev/sda and /dev/sdb), and we will create a 2GB partition on each of them to use for setting up RAID1. The first task is to create the 2GB partition on both disks; this can be done using the fdisk utility as shown in Figure 3.1.
linux-9sl8:~ # fdisk /dev/sda

The number of cylinders for this disk is set to 3916.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (1438-3916, default 1438):
Using default value 1438
Last cylinder or +size or +sizeM or +sizeK (1438-3916, default 3916): +2048M

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
linux-9sl8:~ # partprobe /dev/sda
Figure 3.1: Partitioning the first hard disk for RAID1.
Once you have partitioned the first disk you can partition the second disk using the fdisk utility. Once both hard disks have been partitioned you can begin to create the software RAID.
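The sequence for the second disk mirrors Figure 3.1. The sketch below assumes the same 2GB primary partition is created as partition 3 on /dev/sdb; adjust the partition number and size to suit your own disk layout. Although mdadm does not strictly require it, it is also common practice to set the partition type to fd (Linux raid autodetect) using fdisk’s t command.

# fdisk /dev/sdb
Command (m for help): n              <- create a new partition
Command action: p                    <- primary partition
Partition number (1-4): 3
First cylinder (default):            <- press Enter to accept the default
Last cylinder or +size: +2048M       <- 2GB partition
Command (m for help): t              <- optionally change the partition type
Partition number (1-4): 3
Hex code (type L to list codes): fd  <- Linux raid autodetect
Command (m for help): w              <- write the partition table and exit
# partprobe /dev/sdb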
Creating the Software RAID
In this section of the article we will look at configuring software RAID1, which mirrors data across the disks. In this article we have two hard disks (/dev/sda and /dev/sdb), each with the newly created partition from the previous section.
The utility that we will be using to set up and manage software RAID is mdadm. This command allows you to create software RAID arrays and also helps you manage your RAID setup. Figure 4.1 shows the command used to create our software RAID1 array, and Table 2 explains what each qualifier is used for.
linux-9sl8:~ # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm: /dev/sda3 appears to contain an ext2fs file system
    size=2008000K  mtime=Mon Aug 18 14:25:01 2008
mdadm: /dev/sda3 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Mon Aug 18 14:00:25 2008
mdadm: /dev/sdb3 appears to contain an ext2fs file system
    size=2008000K  mtime=Mon Aug 18 14:25:01 2008
mdadm: /dev/sdb3 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Mon Aug 18 14:00:25 2008
Continue creating array? y
mdadm: array /dev/md0 started.
Figure 4.1: Creating the software RAID level 1.
Qualifier | Description |
--create /dev/md0 | This qualifier creates a new array with per-device superblocks. |
--level=1 | This qualifier sets the RAID level to use. The supported levels are listed in Table 1. |
--raid-devices=2 | This qualifier specifies how many devices will be included in the RAID array. |
/dev/sda3 /dev/sdb3 | These are the devices that will be used in the RAID array. |
Table 2: RAID Qualifiers explained.
Once you have executed the mdadm command you can check the /proc/mdstat file to watch the RAID being built, using the cat command as shown in Figure 4.2.
linux-9sl8:~ # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda3[1] sdb3[0]
      2008000 blocks [2/2] [UU]
      [=>...................]  resync =  5.5% (112576/2008000) finish=3.9min speed=8041K/sec

unused devices: <none>
Figure 4.2: Checking the status of the RAID.
As you can see in Figure 4.2, the status of the RAID is displayed. If you would like to watch the resync progress, you can use the ‘watch’ command followed by ‘cat /proc/mdstat’ as shown in Figure 4.3.
linux-9sl8:~ # watch cat /proc/mdstat
Figure 4.3: Executing the “cat /proc/mdstat” command every two seconds.
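The watch command refreshes every two seconds by default; if you would prefer a different interval you can pass one in seconds with its -n option, for example:

watch -n 5 cat /proc/mdstat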
Once the RAID1 array has been successfully built you will need to create a filesystem on the RAID device (/dev/md0). In this article we will use the ext3 filesystem, created by issuing the ‘mkfs.ext3’ command as shown in Figure 4.4.
linux-9sl8:~ # mkfs.ext3 /dev/md0
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
251392 inodes, 502000 blocks
25100 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=515899392
16 block groups
32768 blocks per group, 32768 fragments per group
15712 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Figure 4.4: Creating the ext3 filesystem on the RAID device.
Once the filesystem has been created you will need to create a mount point for the RAID device; this can be done using the ‘mkdir’ command as shown in Figure 4.5. Once you have created the RAID mount point you can mount your device using the mount command as shown in Figure 4.6.
linux-9sl8:~ # mkdir /raid1
Figure 4.5: Creating the RAID mount point.
linux-9sl8:~ # mount /dev/md0 /raid1/
linux-9sl8:~ # df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             10490104   1757356   8732748  17% /
udev                    128304        92    128212   1% /dev
/dev/md0               1976400     35760   1840240   2% /raid1
Figure 4.6: Mounting the RAID device.
As you can see in Figure 4.6, the second command that was issued (df) displays the mounted RAID device along with how much space is available.
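Note that a mount made with the mount command only lasts until the next reboot. If you would like the RAID device mounted automatically at boot, you can add an entry for it to /etc/fstab; a minimal sketch, assuming the /raid1 mount point and ext3 filesystem used in this article, would look like:

/dev/md0   /raid1   ext3   defaults   0 2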
Software RAID Details
Once you have your RAID device set up you can view the details of your setup by issuing the ‘mdadm’ command with the --detail qualifier as shown in Figure 5.1.
linux-9sl8:~ # mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Aug 20 16:58:42 2008
     Raid Level : raid1
     Array Size : 2008000 (1961.27 MiB 2056.19 MB)
  Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Aug 20 17:15:05 2008
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : b6407f5f:6a231f5e:095dd1e2:2d7315bf
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8        4        1      active sync   /dev/sdb3
Figure 5.1: Viewing the RAID details.
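The --detail qualifier can also be combined with --scan to print a one-line summary of each array, which is useful for recording the array in mdadm’s configuration file so it is assembled automatically at boot. A sketch, assuming the configuration file lives at /etc/mdadm.conf (the location varies between distributions), would be:

mdadm --detail --scan >> /etc/mdadm.conf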
Simulating a Faulty Disk
In this section of the article we will look at simulating a faulty disk. The ‘mdadm’ utility allows you to mark a disk as faulty for testing purposes, which is useful if you are curious to see what happens when a disk failure occurs. Figure 6.1 shows the command used to set the /dev/sdb3 device as faulty.
linux-9sl8:~ # mdadm --manage --set-faulty /dev/md0 /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md0
Figure 6.1: Setting a faulty device, simulating a disk failure.
Once the disk has been set to faulty you can issue the ‘mdadm’ command with the --detail qualifier to check the status of the RAID device. Figure 6.2 shows the output of the ‘mdadm --detail’ command; it is also possible to use the ‘cat’ command to view the /proc/mdstat file to check the status of the RAID device.
linux-9sl8:~ # mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Aug 20 16:58:42 2008
     Raid Level : raid1
     Array Size : 2008000 (1961.27 MiB 2056.19 MB)
  Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Aug 20 17:20:29 2008
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           UUID : b6407f5f:6a231f5e:095dd1e2:2d7315bf
         Events : 0.7

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed

       2       8        4        -      faulty spare   /dev/sdb3
Figure 6.2: Checking the details of the RAID device.
As you can see in Figure 6.2, the device shows up as faulty, simulating a failed disk. Once you have finished the simulation you will want to return the disk to its normal state, which requires two steps. The first step is to remove the faulty device as shown in Figure 6.3, and the second is to re-add the device as shown in Figure 6.4.
linux-9sl8:~ # mdadm --manage --remove /dev/md0 /dev/sdb3
mdadm: hot removed /dev/sdb3
Figure 6.3: Removing the faulty disk.
linux-9sl8:~ # mdadm --manage --add /dev/md0 /dev/sdb3
mdadm: re-added /dev/sdb3
Figure 6.4: Adding the removed disk to the RAID device.
Once you have re-added the /dev/sdb3 device you can use the ‘cat’ command to watch the RAID device being rebuilt, as shown in Figure 4.2.
Final Thoughts
In this article we covered how to set up RAID1, which provides redundancy for your data across multiple disks. You should now also feel comfortable setting up and configuring the other RAID levels that software RAID supports.
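As a starting point for experimenting with the other levels from Table 1, the same mdadm --create syntax applies; for example, a RAID5 array could be created from three partitions (the device names below are purely illustrative) with:

mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda4 /dev/sdb4 /dev/sdc4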
Comments
Skip the software RAID unless you're experimenting on your home network. For anything production, use hardware RAID. It's cheap enough and offers much better value in terms of OS-independence than software RAID.
–Frank
Software RAID is a nice alternative if you cannot afford hardware RAID, and it's always a good idea to know how to implement software RAID.
An interesting article, but it doesn't properly cover monitoring for failure. It is all very well typing "mdadm --detail" when you have one RAID on one machine, but we all know that three years down the line when a disk breaks you won't be bothering. What is needed is an alert/email to let you know the system is running in a degraded state.
–Monty