SUSE Conversations


Installing SLES on Software RAID1



By: jrecord

March 7, 2008 4:17 pm


Overview
Installation Details
Did it Work?
Related Links
Conclusion

Overview

The wise administrator builds redundancy into the systems he manages. Using a redundant array of independent disks (RAID) is one way to build redundancy. A redundancy strategy is not a backup strategy. I would like to be clear that this article is not a replacement for a properly implemented backup strategy. I personally believe the best RAID solution is hardware RAID. However, when budgets are constrained, software RAID is an alternative. This article focuses on the process of installing the operating system onto a software RAID1 mirror.

The scenario is based on SUSE Linux Enterprise Server 10 Service Pack 1, and includes only the Base and Web and LAMP Server patterns. The server should have two disks of the same size, 2 GB in this case; this is more for clarity than an actual requirement. You will create two partitions of type Linux RAID on each disk; the partitions need to be the same size on each disk. You will then create the software RAID devices using the Linux RAID partitions you created. Finally, you will format each software RAID device: one with swap, the other with a root file system. The scenario has been tested on the i386 and x86_64 architectures.
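For readers who prefer the command line, the layout these steps produce can be sketched outside the installer. This is a hedged sketch only — the device names, the modern sfdisk size syntax, and the mdadm invocations are assumptions; the article itself does everything in YaST:

```shell
# Hedged CLI equivalent of the YaST steps that follow (assumed disks
# /dev/sda and /dev/sdb). Destructive: wipes the partition tables!
for d in /dev/sda /dev/sdb; do
  sfdisk "$d" <<'EOF'
,500M,fd
,,fd
EOF
done

# Assemble the mirrors: md0 for swap, md1 for root.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md0
mkfs.reiserfs -f /dev/md1
```

Partition type 0xFD (Linux RAID autodetect) is what lets the kernel assemble the arrays at boot.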

Installation Details

  1. Start the SLES10 SP1 install as you usually do, until you get to the Installation Settings screen, then click Change, Partitions.
  2. Figure 1 – Installation settings

  3. Click Create Custom Partition Setup and Next.
  4. Figure 2 – Create custom partition setup

  5. Select Custom Partitioning and Next
  6. Figure 3 – Custom partitioner

  7. Create a 500MB Linux RAID (type 0xFD) partition for swap, and use the rest of the space for a Linux RAID (type 0xFD) partition for root.
  8. Figure 4 – Creating Linux RAID partition for swap

  9. Create the corresponding partitions on the other disk.
  10. When you have finished creating all the Linux RAID partitions, your Expert Partitioner should look like Figure 5 below. Notice there is a partition of equal size on each disk serving as a placeholder for swap and root. Now you will assemble the mirrors from these partition pairs.

    Select RAID and Create RAID.

  11. Figure 5 – Linux RAID partitions created

  12. Select RAID 1 (Mirroring), and Next.
  13. Figure 6 – Creating swap mirror

  14. The current RAID device should be /dev/md0. Add both 500MB Linux RAID partitions to the md0 RAID.
  15. Figure 7 – Assembling swap mirror

  16. Once you have added both 500 MB partitions to md0, click Next.
  17. Figure 8 – Assembled swap mirror

  18. Format the /dev/md0 device with a swap file system, and click Finish.
  19. Figure 9 – Formatting swap mirror

  20. Now repeat the previous four steps for the root partition. First, select RAID and Create RAID.
  21. Figure 10 – swap mirror created

  22. Select RAID 1 (Mirroring) and Next.
  23. Figure 11 – Creating root mirror device

  24. The current RAID device should be /dev/md1. Add both remaining Linux RAID partitions to the md1 RAID, and click Next.
  25. Figure 12 – Assembled root mirror device

  26. Format the /dev/md1 device with a ReiserFS file system, mounted on /, and click Finish.
  27. Figure 13 – Formatting root MD device

  28. When you're done, the Expert Partitioner screen should look something like this:
  29. Figure 14 – Completed expert partitioner screen

    I created only two system partitions with a Linux RAID partition mirrored for each. If you have more system partitions, like /var, then you will need to have a pair of Linux RAID partitions and a /dev/md? device for each additional system partition.

  30. Click Finish.
  31. Notice that the boot loader section references the /dev/md? devices.
  32. Figure 15 – Observe boot configuration

  33. Complete the installation as usual.
  34. Once the installation is complete, you need to finish the GRUB install. Since GRUB does not understand MD devices, the installer puts it on the first disk only. I like to make sure it is installed the same way on both disks.
  35. Log in as root and type “grub”. Follow the steps in Figure 16 below.
  36. Figure 16 – GRUB install steps
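The steps in Figure 16 follow the standard GRUB-legacy sequence for putting the boot loader on both disks. A sketch under stated assumptions — the device names and the (hd0,1) root partition are taken from the layout above, not from the figure; verify yours first with find /boot/grub/stage1 in the grub shell:

```shell
# Hedged sketch: install GRUB legacy on both disks (assumed devices).
# In the grub shell, (hd0,1) means the second partition of the first
# BIOS disk, i.e. /dev/sda2 (root) in this article's layout.
grub --batch <<'EOF'
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,1)
setup (hd0)
quit
EOF
```

The device command remaps (hd0) to the second disk, so the second setup writes that disk's MBR the same way as the first. That is what lets the system boot even if the first disk dies.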

Did it Work?

This section is intended to show what the system should look like using various commands after a successful install. Troubleshooting failed installs or partial installs is outside the scope of this article.

  • Check the /etc/fstab file to ensure the software RAID devices are used to mount root and swap.
  • raid1:~ # cat /etc/fstab
    /dev/md1  /                  reiserfs  acl,user_xattr    1 1
    /dev/md0  swap               swap      defaults          0 0
    proc      /proc              proc      defaults          0 0
    sysfs     /sys               sysfs     noauto            0 0
    debugfs   /sys/kernel/debug  debugfs   noauto            0 0
    devpts    /dev/pts           devpts    mode=0620,gid=5   0 0
    /dev/fd0  /media/floppy      auto      noauto,user,sync  0 0
    
  • The /proc/mdstat file shows the current status of the arrays; a resync in progress would also be reported here. It is a brief version of the mdadm --detail output.
  • raid1:~ # cat /proc/mdstat
    Personalities : [raid1] [raid0] [raid5] [raid4] [linear]
    md1 : active raid1 sda2[0] sdb2[1]
          1582336 blocks [2/2] [UU]
    
    md0 : active(auto-read-only) raid1 sda1[0] sdb1[1]
          513984 blocks [2/2] [UU]
    
    unused devices: <none>
    
  • The /proc/cmdline file shows the kernel command line used at boot time. These parameters come from /boot/grub/menu.lst, plus whatever is typed on the “Boot Options” line of the GRUB menu at boot time. The point here is to verify that the root= parameter points to the mirrored RAID array.
  • raid1:~ # cat /proc/cmdline
    root=/dev/md1 vga=0x332 resume=/dev/md0 splash=silent showopts
    
  • The mdadm command provides the most complete view of the software RAID array. The state should be “clean”, and both devices in the mirror should show “active sync”.
  • raid1:~ # mdadm --detail /dev/md1
    /dev/md1:
            Version : 00.90.03
      Creation Time : Tue Mar  4 01:22:48 2008
         Raid Level : raid1
         Array Size : 1582336 (1545.51 MiB 1620.31 MB)
      Used Dev Size : 1582336 (1545.51 MiB 1620.31 MB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 1
        Persistence : Superblock is persistent
    
        Update Time : Thu Mar  6 18:56:46 2008
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               UUID : 3858d3c1:b6e37b6d:8e2de91f:260fb55b
             Events : 0.5205
    
     Number Major Minor RaidDevice State
        0     8      2      0      active sync  /dev/sda2
        1     8     18      1      active sync  /dev/sdb2
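A degraded mirror is easy to spot programmatically: an underscore in the [UU] field of /proc/mdstat marks a failed or missing member. A small sketch — the helper name and the sample file are illustrative; on a live system point it at /proc/mdstat directly:

```shell
# Flags a degraded array: an '_' in the [UU] field means a missing member.
check_degraded() {
  if grep -E '\[[0-9]+/[0-9]+\] \[U*_[U_]*\]' "$1" >/dev/null; then
    echo DEGRADED
  else
    echo OK
  fi
}

# Demonstrated on sample text modeled on the output above.
cat > /tmp/mdstat.sample <<'EOF'
md1 : active raid1 sda2[0] sdb2[1]
      1582336 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      513984 blocks [2/1] [U_]
EOF
check_degraded /tmp/mdstat.sample   # prints DEGRADED
```

A check like this drops easily into a cron job that mails the administrator when a member drops out.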
    

Related Links

Conclusion

Installing the SLES operating system onto a software RAID1 mirror is rather straightforward. The only real catch is to make sure you install GRUB onto both disks when the install is complete. The mirrored array provides redundancy for the system disk: if one of the disks becomes damaged or fails, you can recover quickly using the other mirrored disk. However, always make sure you have a current working backup.
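If a disk does fail, replacing it follows a standard mdadm pattern. A hedged sketch, assuming /dev/sdb is the failed disk and its replacement has been partitioned identically to /dev/sda:

```shell
# Sketch of replacing a failed mirror member (assumed device names).
# Mark the failed partitions and remove them from the arrays:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

# After swapping the disk and recreating the 0xFD partitions on it:
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# Watch the rebuild progress:
cat /proc/mdstat
```

Once the resync finishes, remember to reinstall GRUB on the replacement disk, as described at the end of the Installation Details section.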


Categories: SUSE Linux Enterprise Server, Technical Solutions

Disclaimer: As with everything else at SUSE Conversations, this content is definitely not supported by SUSE (so don't even think of calling Support if you try something and it blows up).  It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.

13 Comments

  1. By:vincent-bauchart-imadies-com

    Great article and great software (YaST!), but when I print the article (on real paper, I mean), the pictures are too small to read.
    How can I bring these instructions into my server room?

  2. By:jrecord

    Thank you. I’m not really sure. I’ve asked the powers that be to respond to see how to do this.

  3. By:ssalgy

    We are considering either posting a PDF version of the articles, or adjusting the Printer Friendly so it automatically expands the graphics before printing.

    We like the second option best, because we edit the articles in HTML after we receive them, and they then fall out of sync with the OpenOffice or Word versions.

    Stay tuned for a resolution.

  4. By:lijie07

    Guess there is no way to install SLED 10 SP2 on softraid0 while Windows XP is installed already…
    Motherboard chip: nVIDIA nForce 570 Ultra

  5. By:jrecord

    Actually, I don’t see why not. You would have Windows on one partition, and a separate partition for SLED. You would create a corresponding SLED partition on the other disk. You would need to make sure GRUB was configured to dual boot Windows or SLED, but it should work.

  6. By:lijie07

    Softraid 0, Windows XP installed. Left about 100 GB of unpartitioned space for Linux.
    Booted with the SLED 10 SP2 DVD, installed… Got this error message.

    ———
    WARNING: This system has at least one hard disk with a raid configuration presented by the BIOS as RAID that is in fact a software RAID. The following disks were detected as part of such a RAID:
    /dev/sda /dev/sdb
    The Linux kernel 2.4 supported some of these systems (like Promise FastTrak and HighPoint RocketRAID), but the Linux kernel 2.6 does not support them at all.
    If you install onto these disks, your RAID configuration and any data on the RAID will be lost. Refer to portal.suse.com to learn how to migrate to a linux software RAID.

    ———
    Click OK, then this message.
    ———

    The partitioning on disk /dev/sda is not readable by the partitioning tool parted, which is used to change the partition table.
    You can use the partitions on disk /dev/sda as they are. You can format them and assign mount point to them, but you cannot add, edit, resize, or remove partitions from that disk with this tool.
    ———

    I also tried openSUSE 10.3; it can recognise Windows partitions on softraid 0, and I can mount them and create new partitions for it and then install. But SLED can’t…

  7. By:said_kr

    In SLES 10 SP2 (SLES 10 SP1 not tested), when a hard disk fails – GRUB HARD DISK ERROR!

    Author, did you test this???

  8. By:jrecord

    As stated in the document, the article is based on SLES10 SP1. I have not tested SLES10 SP2 or SLES11. I will test it when I can. If you find the solution, please post it for all.

  9. By:mnt_schred

    I have found the solution and posted it in the openSUSE forum (http://forums.opensuse.org/install-boot-login/393772-how-install-bootloader-both-disks-software-raid-1-a.html)
    But I’ve used it for SLES 10 SP2:

    Because my system wouldn’t boot from /boot (I guess the BIOS didn’t support it), I had to install GRUB in the MBR.

    I’ve used the following partitioning:

    Device     Size      Type        Mount
    /dev/sda   232.8 GB  SEAGATE-…
    /dev/sda1  1.0 GB    Linux RAID
    /dev/sda2  231.8 GB  Linux RAID
    /dev/sdb   232.8 GB  SEAGATE…
    /dev/sdb1  1.0 GB    Linux RAID
    /dev/sdb2  231.8 GB  Linux RAID
    /dev/md0   1.0 GB    MD RAID     swap
    /dev/md1   231.8 GB  MD RAID     /

    First I’ve copied the MBR from disk 1 to disk 2 with this command:

    dd if=/dev/sda of=/dev/sdb bs=512 count=1

    Then I configured GRUB to also look on the second disk in case of malfunction:

    grub
    grub > find /boot/grub/stage1
    (hd0,1)
    (hd1,1)
    grub > device (hd0) /dev/sdb
    grub > root (hd0,1)
    Filesystem type is ext2fs, partition type 0xfd
    grub > setup (hd0)

    Now you can boot with one disk or both.

  10. By:jrecord

    Thank you for the information.

  11. By:samrusso

    Hi there,
    I’m preparing an OES2 sp1 install (SLES first needs to be installed) and needed exactly an article like this.
    I went through step by step and it works perfectly on SLES 10 SP2.

    Novell’s move to SLES/OES is a great move. They need to keep all their existing clientele (e.g. NetWare admins), and articles like this make the transition easy!

    Many thanks.
    Sam

  12. By:mpjames

    I followed the directions above and also found that the system wouldn’t boot initially. So, I rebooted from the install DVD and chose Rescue System. Then I installed grub on both disks using the commands outlined in the article.

    Rebooted and everything came up great. Thanks!

  13. By:alistortelini

    There are some RAID controllers that support IDE or SATA hard disks. Soft RAID provides the advantages of RAID systems without the additional cost of hardware RAID controllers. However, this requires some CPU time and has memory requirements that make it unsuitable for real high performance computers.
