Convert a single drive system to RAID
You already have a fully functional system on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide takes the following steps to make the required changes without losing data.
- Create a single-disk RAID-1 array with our new disk
- Move all your data from the old disk to the new RAID-1 array
- Verify the data move was successful
- Wipe the old disk and add it to the new RAID-1 array
WARNING: Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there is no guarantee the process will be perfect, and the risk of accidents is real.
- 1 Assumptions
- 2 Create New RAID Array
- 3 Copy Data
- 4 Verify Success
- 5 Add Original Disk to Array
Assumptions

- I will assume for the sake of the guide that the disk currently in your system is /dev/sda and your new disk is /dev/sdb.
- We will create the following configuration:
  - 1 x RAID-1 array for the file system (using 2 x partitions, 1 on each disk)
  - 2 x swap partitions (using 1 partition on each disk)
The swap partitions will not be in a RAID array, as having swap on RAID serves no purpose.
- To minimize the risk of the data on disk changing in the middle of our changes, I suggest you drop to single-user mode before you start by using the init 1 command.
- You will need to be the root user for the entire process.
Create New RAID Array
First we need to create a single-disk RAID array using the new disk.
Partition the Disk
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).
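If you are driving fdisk interactively, the session is short. This is only a sketch; the 512 MB swap size is an assumption, adjust to taste:

[root@arch ~]# fdisk /dev/sdb
Command (m for help): n    # new primary partition 1, size +512M (the swap half)
Command (m for help): t    # change its type to 82 (Linux swap / Solaris)
Command (m for help): n    # new primary partition 2, using the rest of the disk
Command (m for help): t    # change its type to fd (Linux raid autodetect)
Command (m for help): w    # write the table and exit

When you are done, fdisk -l /dev/sdb should show something like this: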
[root@arch ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 80.0 GB, 80025280000 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          66      530113+  82  Linux swap / Solaris
/dev/sdb2              67        9729    77618047+  fd  Linux raid autodetect
Make sure your partition types are set correctly: "Linux swap / Solaris" is type 82 and "Linux raid autodetect" is type fd.
Create the RAID Device
Next, create the single-disk RAID-1 array. Note that the 'missing' keyword is specified in place of one of our devices.
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm: array /dev/md0 started.
Make sure the array has been created correctly by checking /proc/mdstat:
[root@arch ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid1 sdb2
      40064 blocks [2/1] [_U]

unused devices: <none>
The array is intact, but running in a degraded state (because it's missing half its devices!).
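If you want more detail than /proc/mdstat gives you, mdadm can also report on the array directly; the state line should read something like "clean, degraded", with one slot listed as removed:

[root@arch ~]# mdadm --detail /dev/md0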
Make File Systems
Use whichever filesystem you prefer here. I'll use ext3 for this guide.
[root@arch ~]# mkfs -t ext3 -j /dev/md0
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
10027008 inodes, 20027008 blocks
1001350 blocks (5.00%) reserved for the super user
First data block=0
612 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
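As the last two lines of the mkfs output note, the new filesystem will be checked periodically at mount time. If you would rather not have a surprise fsck during a reboot mid-migration, you can disable the mount-count and time-based checks with tune2fs (purely optional):

[root@arch ~]# tune2fs -c 0 -i 0 /dev/md0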
Set up swap space on the swap partition.
[root@arch ~]# mkswap /dev/sdb1
Setting up swapspace version 1, size = 271314 kB
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9
The new RAID-1 array is ready to start accepting data!

Copy Data

Now we need to mount the array and copy everything from the old system to the new one.
Mount the Array
[root@arch ~]# mkdir /mnt/new-raid
[root@arch ~]# mount /dev/md0 /mnt/new-raid
Copy the Data
[root@arch ~]# cd /mnt/new-raid
[root@arch mnt]# tar -C / -clspf - . | tar -xlspvf -
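If your tar does not support that exact flag combination (the meaning of -l in particular varies between tar versions), rsync is a common alternative. This sketch copies the root filesystem without crossing mount points; it is not what the transcript above used, but it achieves the same result:

[root@arch ~]# rsync -aHx / /mnt/new-raid/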
Use your preferred text editor to open /mnt/new-raid/boot/grub/menu.lst:
--- SNIP ---
default 0
color light-blue/black light-cyan/blue

## fallback
fallback 1

# (0) Arch Linux
title  Arch Linux - Original Disk
root   (hd0,0)
kernel /vmlinuz26 root=/dev/sda1

# (1) Arch Linux
title  Arch Linux - New RAID
root   (hd1,1)
#kernel /vmlinuz26 root=/dev/sda1 ro
kernel /vmlinuz26 root=/dev/md0
--- SNIP ---
Notice we added the fallback line and duplicated the Arch Linux entry with a different root directive on the kernel line.
You also need to tell fstab on the new disk where to find the new devices:
[root@arch ~]# cat /mnt/new-raid/etc/fstab
/dev/md0    /      ext3   defaults   0 1
/dev/sdb1   swap   swap   defaults   0 0
Next, bind-mount the virtual filesystems and chroot into the new array:

[root@arch ~]# mount --bind /sys /mnt/new-raid/sys
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev
[root@arch ~]# chroot /mnt/new-raid/
You are now chrooted in what will become the root of your RAID-1 system. Edit /etc/mkinitcpio.conf to include 'raid' in the HOOKS array. Place it before 'autodetect' but after 'sata', 'scsi', or 'pata' (whichever is appropriate for your hardware).
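On a SATA system the result might look like the line below; this is only a sketch, and your HOOKS array will likely contain other entries too:

HOOKS="base udev sata raid autodetect filesystems"

Then regenerate the initramfs and leave the chroot: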
[root ~]# mkinitcpio -g /boot/kernel26.img
[root ~]# exit
Install GRUB on the RAID Array
[root@arch /]# grub

grub> root (hd1,1)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done

grub> quit
Verify Success

Reboot your computer, making sure it boots from the new RAID disk (/dev/sdb) and not the original disk (/dev/sda). You may need to change the boot device priority in your BIOS to do this.
Once GRUB loads from the new disk, make sure you select the new entry you created in menu.lst earlier.
Verify you have booted from the RAID array by looking at the output of mount. You should see a line similar to the following:
[root@arch ~]# mount
/dev/md0 on / type ext3 (rw)
Also check the output of swapon -s:
[root@arch ~]# swapon -s
Filename        Type        Size     Used  Priority
/dev/sdb1       partition   4000144  16    -1
Note that it is the swap partition on sdb that is in use, and nothing from sda.
If the system boots fine and the output of the above commands is correct, congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.
Add Original Disk to Array
Partition Original Disk
Take the output of fdisk -l on your new disk, and make the partitions on your original disk look the same.
[root@arch ~]# fdisk -l /dev/sda

Disk /dev/sda: 80.0 GB, 80025280000 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          66      530113+  82  Linux swap / Solaris
/dev/sda2              67        9729    77618047+  fd  Linux raid autodetect
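If you would rather not retype the partitions by hand, sfdisk can dump the new disk's partition table and write it straight onto the original disk. Double-check the device names before running this; it overwrites the partition table on /dev/sda:

[root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda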
Add Disk to Array
[root@arch ~]# mdadm /dev/md0 -a /dev/sda2
mdadm: hot added /dev/sda2
Verify that the RAID array is being rebuilt.
[root@arch ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid1 sda2 sdb2
      80108032 blocks [2/1] [_U]
      [>....................]  recovery =  1.2% (1002176/80108032) finish=42.0min speed=31318K/sec

unused devices: <none>
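Once the rebuild completes, one loose end remains from the configuration we set out in the Assumptions: the second swap partition. Initialize and enable it, then add it to /etc/fstab next to the existing /dev/sdb1 entry (a sketch, assuming the fstab shown earlier):

[root@arch ~]# mkswap /dev/sda1
[root@arch ~]# swapon /dev/sda1

And in /etc/fstab:

/dev/sda1   swap   swap   defaults   0 0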