Installing SLES on Software RAID1
Overview
Installation Details
Did it Work?
Related Links
Conclusion
Overview
The wise administrator builds redundancy into the systems he manages. Using a redundant array of independent disks (RAID) is one way to build redundancy. A redundancy strategy is not a backup strategy. I would like to be clear that this article is not a replacement for a properly implemented backup strategy. I personally believe the best RAID solution is hardware RAID. However, when budgets are constrained, software RAID is an alternative. This article focuses on the process of installing the operating system onto a software RAID1 mirror.
The scenario is based on SUSE Linux Enterprise Server 10 Service Pack 1, and includes only the Base and Web and LAMP patterns. The server should have two disks of the same size, 2GB in this case; identical sizes are more for clarity than an actual requirement. You will create two partitions of type Linux RAID on each disk, with matching sizes across the two disks. You will then create a software RAID device from each pair of Linux RAID partitions. Finally, you will format each software RAID device: one as swap, the other with the root file system. The scenario has been tested on the i386 and x86_64 architectures.
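Purely as an illustration of the target layout (the article does all of its partitioning through YaST, not from the command line), the plan for each disk could be written as an sfdisk-style script; "fd" is the Linux RAID autodetect partition type mentioned below:

```
# Illustrative only -- applied identically to /dev/sda and /dev/sdb.
# Modern sfdisk notation; the article itself uses the YaST partitioner.
,500M,fd    # 500MB, type 0xFD (Linux RAID) -> one half of the swap mirror
,,fd        # rest of the disk, type 0xFD   -> one half of the root mirror
```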
Installation Details
- Start the SLES10 SP1 install as you usually do, until you get to the Installation Settings screen, then click Change, Partitions.
- Click Create Custom Partition Setup and Next.
- Select Custom Partitioning, and Next.
- Create a 500MB Linux RAID (type 0xFD) partition for swap, and use the rest of the space for a Linux RAID (type 0xFD) partition for root.
- Create the corresponding partitions on the other disk.
- When you have finished creating all the Linux RAID partitions, your Expert Partitioner should look like Figure 5 below. Notice that each disk has a partition of equal size serving as a placeholder for swap, and another for root. Now you will assemble a mirror from each of these partition pairs.
- Select RAID and Create RAID.
- Select RAID 1 (Mirroring), and Next.
- The current RAID device should be /dev/md0. Add both 500MB Linux RAID partitions to the md0 RAID.
- Once you have added both 500MB partitions to md0, click Next.
- Format the /dev/md0 device with a swap file system, and click Finish.
- Now repeat the previous four steps for the root partition. First, select RAID and Create RAID.
- Select RAID 1 (Mirroring) and Next.
- The current RAID device should be /dev/md1. Add both remaining Linux RAID partitions to the md1 RAID, and click Next.
- Format the /dev/md1 device with a Reiser file system mounted on /, and click Finish.
- When you're done, the partitioner screen should look something like this:
- Click Finish.
- Notice that the boot loader section references the /dev/md? devices.
- Complete the installation as usual.
- Once the installation is complete, you need to finish the GRUB install. Since GRUB does not understand MD devices, the installer puts it only on the first disk. I like to make sure it is installed the same way on both disks.
- Log in as root and type “grub”. Follow the steps in Figure 16 below.
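Figure 16 is not reproduced here, but a typical grub shell session for putting the boot loader on the second disk looks like the sketch below. This is illustrative: the device names assume the two-disk layout from this article, so always confirm the partitions reported by find before running setup.

```
raid1:~ # grub

grub> find /boot/grub/stage1     # lists every partition holding stage1
 (hd0,1)
 (hd1,1)
grub> device (hd0) /dev/sdb      # temporarily map (hd0) to the second disk
grub> root (hd0,1)               # the partition containing /boot/grub
grub> setup (hd0)                # install GRUB into /dev/sdb's MBR
grub> quit
```

With the device remapping trick, GRUB writes a boot record to /dev/sdb that still refers to itself as (hd0), so the second disk remains bootable even if the first disk is removed.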
I created only two system partitions with a Linux RAID partition mirrored for each. If you have more system partitions, like /var, then you will need to have a pair of Linux RAID partitions and a /dev/md? device for each additional system partition.
Did it Work?
This section is intended to show what the system should look like using various commands after a successful install. Troubleshooting failed installs or partial installs is outside the scope of this article.
- Check the /etc/fstab file to ensure the software RAID devices are used to mount root and swap.
raid1:~ # cat /etc/fstab
/dev/md1   /                  reiserfs   acl,user_xattr     1 1
/dev/md0   swap               swap       defaults           0 0
proc       /proc              proc       defaults           0 0
sysfs      /sys               sysfs      noauto             0 0
debugfs    /sys/kernel/debug  debugfs    noauto             0 0
devpts     /dev/pts           devpts     mode=0620,gid=5    0 0
/dev/fd0   /media/floppy      auto       noauto,user,sync   0 0
raid1:~ # cat /proc/mdstat
Personalities : [raid1] [raid0] [raid5] [raid4] [linear]
md1 : active raid1 sda2[0] sdb2[1]
      1582336 blocks [2/2] [UU]
md0 : active(auto-read-only) raid1 sda1[0] sdb1[1]
      513984 blocks [2/2] [UU]
unused devices: <none>
raid1:~ # cat /proc/cmdline
root=/dev/md1 vga=0x332 resume=/dev/md0 splash=silent showopts
raid1:~ # mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Tue Mar  4 01:22:48 2008
     Raid Level : raid1
     Array Size : 1582336 (1545.51 MiB 1620.31 MB)
  Used Dev Size : 1582336 (1545.51 MiB 1620.31 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Mar  6 18:56:46 2008
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 3858d3c1:b6e37b6d:8e2de91f:260fb55b
         Events : 0.5205

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
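The key thing to look for in the listings above is [UU]: both halves of each mirror are active and in sync, while a failed member would show up as an underscore, such as [U_]. This check is easy to script; the helper below is illustrative and not part of the article's procedure.

```shell
#!/bin/sh
# Hypothetical helper: report whether the two-disk mirrors described in
# this article are healthy, based on a /proc/mdstat-format listing.
check_mdstat() {
    # $1: path to a file in /proc/mdstat format (normally /proc/mdstat)
    # Healthy two-member mirrors show [UU]; a degraded mirror shows
    # [U_] or [_U].
    if grep -q '\[UU\]' "$1" && ! grep -qE '\[(U_|_U)\]' "$1"; then
        echo "all arrays in sync"
    else
        echo "DEGRADED array detected"
        return 1
    fi
}

# On the installed system you would run:
#   check_mdstat /proc/mdstat
```

Wiring a check like this into cron (or using mdadm --monitor) means a failed mirror half is noticed before the second disk also fails.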
Related Links
- Understanding how RAIDed Disks Interact with the SLES Boot Process
- Software RAID: Beyond YAST
- Migrating SLES to Software RAID1
Conclusion
Installing SLES onto a software RAID1 mirror is rather straightforward. The only real catch is to make sure you install GRUB onto both disks once the install is complete. The mirrored array provides redundancy for the system disk: if one disk becomes damaged or fails, you can recover quickly using the other mirrored disk. However, always make sure you have a current, working backup.
Comments
Great article and great software (YaST!), but when I print the article (on real paper, I mean), pictures are too small to read.
How can I bring these instructions into my server room?
Thank you. I’m not really sure. I’ve asked the powers that be to respond to see how to do this.
We are considering either posting a PDF version of the articles, or adjusting the Printer Friendly so it automatically expands the graphics before printing.
We like the second option best, because we edit the articles in HTML after we receive them, and they then fall out of sync with the OpenOffice or Word versions.
Stay tuned for a resolution.
Guess there is no way to install SLED 10 SP2 on softraid0 while Windows XP is installed already…
Motherboard chip: nVIDIA nForce 570 Ultra
Actually, I don’t see why not. You would have Windows on one partition, and a separate partition for SLED. You would create a corresponding SLED partition on the other disk. You would need to make sure GRUB was configured to dual boot Windows or SLED, but it should work.
Softraid 0, Windows XP installed. I left about 100GB of unpartitioned space for Linux.
Booted with the SLED 10 SP2 DVD, started the install… and got this error message:
———
WARNING: This system has at least one hard disk with a raid configuration presented by the BIOS as RAID that is in fact a software RAID. The following disks were detected as part of such a RAID:
/dev/sda /dev/sdb
The Linux kernel 2.4 supported some of these systems (like Promise FastTrack and HighPoint RocketRaid), but the Linux kernel 2.6 does not support them at all.
If you install onto these disks, your RAID configuration and any data on the RAID will be lost. Refer to portal.suse.com to learn how to migrate to a linux software RAID.
———
Click OK, then this message:
———
The partitioning on disk /dev/sda is not readable by the partitioning tool parted, which is used to change the partition table.
You can use the partitions on disk /dev/sda as they are. You can format them and assign mount point to them, but you cannot add, edit, resize, or remove partitions from that disk with this tool.
———
I also tried openSUSE 10.3; it can recognise the Windows partitions on softraid 0, and I can mount them and create new partitions for 10.3, then install. But SLED can't…
In SLES 10 SP2 (SLES 10 SP1 not tested), when a hard disk fails: GRUB HARD DISK ERROR!
Author, did you test this?
As stated in the document, the article is based on SLES10 SP1. I have not tested SLES10 SP2 or SLES11. I will test it when I can. If you find the solution, please post it for all.
I have found the solution and posted it in the openSUSE forum (http://forums.opensuse.org/install-boot-login/393772-how-install-bootloader-both-disks-software-raid-1-a.html).
But I’ve used it for SLES 10 SP2:
Because my system wouldn't boot from /boot (I guess the BIOS didn't support it), I had to install GRUB in the MBR.
I’ve used the following partitioning:
Device Size Type Mount
/dev/sda 232.8 GB SEAGATE-…
/dev/sda1 1.0 GB Linux RAID
/dev/sda2 231.8 GB Linux RAID
/dev/sdb 232.8 GB SEAGATE…
/dev/sdb1 1.0 GB Linux RAID
/dev/sdb2 231.8 GB Linux RAID
/dev/md0 1.0 GB MD Raid swap
/dev/md1 231.8 GB MD Raid /
First I’ve copied the MBR from disk 1 to disk 2 with this command:
dd if=/dev/sda of=/dev/sdb bs=512 count=1
Then I configured GRUB to also look on the second disk in case of a malfunction:
grub
grub > find /boot/grub/stage1
(hd0,1)
(hd1,1)
grub > device (hd0) /dev/sdb
grub > root (hd0,1)
Filesystem type is ext2fs, partition type 0xfd
grub > setup (hd0)
Now you can boot with one disk or both.
Thank you for the information.
Hi there,
I’m preparing an OES2 sp1 install (SLES first needs to be installed) and needed exactly an article like this.
I went through step by step and it works perfectly on SLES 10 SP2.
Novells move to SLES/OES is a great move. They need to keep all their existing clientele (eg netware admins) and articles like this make the transition easy!
Many thanks.
Sam
I followed the directions above and also found that the system wouldn’t boot initially. So, I rebooted from the install DVD and chose Rescue System. Then I installed grub on both disks using the commands outlined in the article.
Rebooted and everything came up great. Thanks!
There are some RAID controllers that support IDE or SATA hard disks. Software RAID provides the advantages of RAID systems without the additional cost of hardware RAID controllers. However, it consumes some CPU time and has memory requirements that make it unsuitable for truly high-performance computers.