Creating Software RAID Configurations in SLES 10
In this AppNote I will explain how to create a Software RAID configuration. This AppNote is for anyone just beginning with SLES or who has little experience creating new “volumes” in SLES.
The first thing I would like to explain is when you should, and can, use a software RAID configuration. Here are a few cases where a software RAID configuration with SLES 10 would be beneficial:
- When you have an expensive server with SCSI disks and an array controller.
- When you only have a simple server without that kind of hardware, but you would still like extra safety for your data.
Before we go on, it’s assumed that you have SLES 10 running, with three disks that are not in use. On these disks we’ll create the RAID set. In this AppNote I’ll explain how to create and configure a RAID set with EVMS (Enterprise Volume Management System). You can also use the default LVM (Logical Volume Manager) to create a software RAID, but because EVMS supports more filesystems and is cluster aware – and LVM is not – I will use EVMS.
So, let’s start.
Creating the Work Disks
1. Open a terminal window as root and start the Enterprise Volume Management System Administration (EVMS) utility by entering: evmsgui
Figure 1 – Enterprise Volume Management System Administration utility
As you can see, there are two volumes on my default SLES 10 server installation. One resides on the first partition of SCSI device 1 (sda1), and one on the second partition (sda2).
To create a software RAID 5, we need at least three separate hard disks.
2. In the EVMS Administration utility, click the Available Objects tab. In my case, there are three disks on which I can create the Software RAID 5 volume.
Figure 2 – Available Objects
Before you can do anything with these disks, you first have to initialize them. Everything on the disk will be deleted, so be sure you select the correct disks! You must also add a Segment manager to the disk if you plan on using it in a clustered environment.
Let’s add a Segment Manager to the disk, which will automatically initialize it.
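Since initializing destroys everything on a disk, it pays to confirm which device names belong to the spare disks first. A minimal sketch (the device names sdb, sdc, and sdd are only examples; yours may differ):

```shell
# Show the block devices the kernel knows about, so you can confirm
# which devices (e.g. sdb, sdc, sdd) are the unused disks before
# initializing them. Guarded so it runs harmlessly anywhere.
if [ -r /proc/partitions ]; then
    parts=$(cat /proc/partitions)
else
    parts="(/proc/partitions not available on this system)"
fi
echo "$parts"
```

Devices that already carry partitions (such as sda1 and sda2 on the system disk) show up here as well, which makes the unused disks easy to spot.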
3. In the EVMS Administration utility, click the Action menu.
4. Click Add, then “Segment Manager to storage object”. The following screen appears.
Figure 3 – Adding a Segment Manager
5. Select DOS Segment Manager and click Next. The storage devices available for the Software RAID volume are listed.
6. Select a device to create a Segment Manager on, then click Next. An Option dialog appears.
Figure 4 – Selecting a device
7. Choose the disk type you would like to create (we will choose the default, Linux).
Figure 5 – Linux disk type
8. Click Add to create the Segment Manager.
9. Click OK to save and exit.
Figure 6 – Completion screen
10. Repeat steps 3 through 9 with the other two disks.
When you have done this, the Available Objects tab in the EVMS Administration utility will be gone. This means there are no more objects to create segments or volumes on.
Figure 7 – Logical volumes (no more Available Objects tab)
Creating the Software RAID 5
Now that you’ve created three disks to work on, let’s see how to create the Software RAID 5.
First, we have to create a segment on the disk. A segment is like a partition on a normal disk.
1. To create the segment, click the Segments tab.
Figure 8 – Segments tab
Notice there are three 512MB free-space segments.
2. Right-click on the first one and select Create Segment.
Figure 9 – Creating a segment
3. Click Create. The Completion dialog appears.
4. Click OK to save the settings.
Figure 10 – Completion dialog
5. Repeat steps 1-4 for the other free-space segments.
The segments are created, and their size is almost 512MB each.
Creating the RAID 5 Disk
Now it’s time to create the actual RAID 5 disk.
1. In the Actions menu, click Create and then Region.
Figure 11 – Creating a Storage Region
2. Select the RAID manager (we will choose the MD RAID4/5 Region Manager) and click Next.
You’ll see the three free-disk segments you created before.
Figure 12 – View of devices
Note that for a RAID 5 disk set, you need a minimum of 3 disks. Let’s create one now.
3. Select all three disk segments and click Next. A configuration screen appears, where you can set a few options.
4. Leave everything as it is and click Create to continue.
Figure 13 – Configuration options
5. If you see the completion screen shown below, click OK to save the settings.
Figure 14 – Completion screen
Now that you’ve completed the Software RAID 5 disk creation, you’ll see a Region tab in the EVMS utility.
Figure 15 – Storage regions
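A quick sanity check on the size you should expect: RAID 5 spends the equivalent of one member on parity, so with N equally sized members the usable capacity is (N-1) times the member size. For our three 512 MB segments:

```shell
# RAID 5 usable capacity = (number of members - 1) * member size
members=3
member_mb=512
usable_mb=$(( (members - 1) * member_mb ))
echo "Usable capacity: ${usable_mb} MB"   # 1024 MB from three 512 MB members
```

This is why the volume we create on the region in the next section will be roughly 1 GB rather than the 1.5 GB of raw disk space we handed to the RAID.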
Creating and Mounting a Volume on the RAID Disk
It’s time to create a Volume on the RAID disk where we can save some data.
1. Right-click the Region object named “md/md0” and select Create EVMS Volumes.
Figure 16 – Creating EVMS volumes
2. Enter a name for the Volume.
3. Click Create and then OK in the Completion screen.
Now that you’ve created a Volume on the RAID disk, you need to put a filesystem on it before you can mount it.
4. To see if you properly created the volume, click the Volumes tab in the EVMS Administration utility. You’ll see the /dev/evms/sddu volume you just created.
Figure 17 – Verifying the volume
5. To create a file system on the /dev/evms/sddu disk, right-click it in the Volumes tab and select Make File System. In this case we will make a ReiserFS filesystem.
Figure 18 – Creating a ReiserFS filesystem
6. Select the ReiserFS File System Interface Module and click Next.
Figure 19 – Filesystem options
7. Enter a volume Label name and click Make.
8. In the completion screen click OK, and the filesystem will be created on the Volume.
9. Click Save to create a volume object in the /dev/evms/ directory called “sddu”.
Figure 20 – Creating the volume object
On this Volume object we just created a filesystem, so now we should be able to mount the Volume.
10. In the EVMS Administration Utility, right-click the /dev/evms/sddu volume under the Volumes tab and select Mount.
Figure 21 – Mounting the filesystem
11. To specify the mount point, enter “/sddu” and click Mount. When you see the completion screen below, the volume is mounted.
Figure 22 – Completion screen
12. Open a file browser and notice the “sddu” directory in the root.
Figure 23 – sddu directory
Now that we’ve mounted the /dev/evms/sddu disk at the “/sddu” mount point, we have to make sure that this disk is also mounted when the server starts. To do this, we’ll add a line to the /etc/fstab file.
13. Open a terminal window as root and change to the /etc directory.
14. Enter “vi fstab”.
Figure 24 – vi fstab command
15. Press Enter to open the fstab file.
16. At the end of the file enter the following text:
/dev/evms/sddu /sddu auto defaults 0 0
Figure 25 – Added code to mount disk when server starts
Important: Make sure you don’t make a typo; if you do, the server will NOT start correctly!
17. Save the file and exit vi.
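The line added above uses the standard six fstab fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A small sketch that splits the entry and confirms the field count – a cheap sanity check against typos, though it obviously cannot validate the device itself:

```shell
# The six whitespace-separated fstab fields, in order:
# device, mount point, fs type, options, dump, fsck pass
entry="/dev/evms/sddu /sddu auto defaults 0 0"
set -- $entry
fields=$#
dev=$1; mp=$2; fstype=$3
echo "device=$dev mountpoint=$mp type=$fstype"
echo "field count: $fields"
```

Running `mount -a` as root after editing /etc/fstab is another useful check: it attempts to mount everything listed, so a typo surfaces immediately instead of at the next boot.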
The only thing left is to make sure that EVMS is started during server startup. If you don’t do this, the /dev/evms/sddu disk will NOT be Mounted.
1. Open Yast.
2. Click System, then run the System Services (Runlevel) tool.
3. Click Expert Mode.
Figure 26 – Using the Runlevel tool
4. Find the boot.evms Services and select it.
5. Click Set/Reset and enable the service. You will see that checkbox “B” is checked; this means that EVMS will be started when the server boots.
6. Click Finish and Yes to save the settings.
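If you prefer the command line, the same service can be enabled with the SUSE runlevel tools (assuming the init script is named boot.evms, as on a default SLES 10 install). The commands are guarded here so the sketch runs harmlessly on systems that lack the service:

```shell
# Enable the boot.evms init script so EVMS volumes are activated at boot.
if command -v chkconfig >/dev/null 2>&1 \
   && chkconfig --list boot.evms >/dev/null 2>&1; then
    chkconfig boot.evms on          # enable the service
    status=$(chkconfig --list boot.evms)  # verify: runlevel B should be "on"
else
    status="boot.evms service not available on this system"
fi
echo "$status"
```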
Testing the Volume Mounts
Now it’s time to test that the volumes will also be mounted after you restart the server. To do a good test, create a file or directory in the /sddu mount point. When you restart the server, make sure the file or directory is still there.
Note that even when EVMS has not started correctly, or the RAID disk is misconfigured, the /sddu directory will still be there after a restart! Seeing the /sddu directory therefore does not mean that everything is OK. Check whether the file or directory you created under /sddu is still there; then, and only then, do you know for sure the configuration is OK.
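The test described above can be sketched as a tiny helper. The marker file name is arbitrary, and the /sddu mount point is assumed; the demonstration at the end runs against a temporary directory so the sketch works on any system:

```shell
# check_marker DIR: succeeds only if DIR contains the marker file
# we created before the reboot.
check_marker() {
    [ -f "$1/raid-test-marker" ]
}

# Before rebooting you would run:   touch /sddu/raid-test-marker
# After rebooting you would check:  check_marker /sddu && echo "volume mounted OK"

# Demonstration against a temporary directory:
dir=$(mktemp -d)
touch "$dir/raid-test-marker"
if check_marker "$dir"; then
    result="marker found"
else
    result="marker missing"
fi
echo "$result"
rm -rf "$dir"
```

If the marker is missing after a reboot, /sddu is just an empty directory on the root filesystem and the RAID volume was never mounted.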
If possible, also check what happens when you remove one of the three disks on which the RAID was created. The /sddu mount point should remain intact when you remove only one disk.
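EVMS builds this RAID 5 region on top of the kernel’s MD driver, so a degraded array (one disk removed) can also be observed from the command line. Assuming the region appears as md0, /proc/mdstat reports the array state; the read is guarded so the sketch runs anywhere:

```shell
# /proc/mdstat lists every MD array; a degraded RAID 5 shows a failed
# member marked (F) and a member count such as [3/2] instead of [3/3].
if [ -r /proc/mdstat ]; then
    md_state=$(cat /proc/mdstat)
else
    md_state="MD driver not loaded on this system"
fi
echo "$md_state"
```

With one member missing the data on /sddu stays readable, but the array has no redundancy left, so replace the disk before a second failure occurs.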