Convert a single drive system to RAID

This article or section is out of date.
Reason: grub-legacy is unsupported; 0.9 superblocks and MBR partitioning are also therefore non-useful.

This guide shows how to convert a functional single-drive system to a RAID 1 setup after adding a second drive, without the need to temporarily store the data on a third drive. The procedure can also be adapted, and simplified, to convert simple non-root partitions, and to other RAID levels.

Tip: You may consider using Raider (available in the AUR), which can convert a single disk into a RAID system with a two-pass command.

Scenario

This example assumes that the pre-existing disk is /dev/sda, which contains only one partition, /dev/sda1, used for the whole system. The newly-added disk is /dev/sdb.

Warning: Backup important data before proceeding.

Prepare the new disk

Partition the disk

The first step is creating the partition on the new disk, /dev/sdb1, that will be used as the mirror for the RAID array. In general, it is not necessary to recreate the exact partitioning scheme of the pre-existing drive in this step; RAID can even be configured on whole disks, with partitions or logical volumes created later.

Make sure that the partition type is set as FD. See RAID#Prepare the Devices and RAID#Create the Partition Table (GPT) for more information.
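For example, on an MBR-partitioned disk the interactive fdisk session could look roughly like this (a sketch based on an older revision of this guide; the text in parentheses describes each keystroke). On GPT, use gdisk and partition type FD00 instead:

# fdisk /dev/sdb
n       (new partition; accept the defaults to use the whole disk)
t       (change the partition type)
fd      (Linux raid autodetect)
a       (toggle the bootable flag, to avoid "missing operating system" errors on legacy BIOS systems)
w       (write the changes to disk)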

Create the RAID device

Next, create the RAID array in a degraded state, using only the new disk. Note how the missing keyword is specified for the first device: this will be added later.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

The factual accuracy of this article or section is disputed.
Reason: Why would mdadm not see /dev/sdbX? And is rebooting the only way to fix it?

Note: If the above command causes mdadm to report "no such device /dev/sdb1", then reboot and run the command again.

If you want to use Syslinux, then specify --metadata=1.0 (for the boot partition): as of Syslinux 6.03, Syslinux does not yet support mdadm's default 1.2 metadata format. See also Software RAID and LVM.
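For example, the creation command above would become (a sketch; only needed for the array that holds /boot):

# mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 missing /dev/sdb1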

Make sure the array has been created correctly by checking /proc/mdstat:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1]
      2930034432 blocks super 1.2 [2/1] [_U]
      bitmap: 22/22 pages [88KB], 65536KB chunk

unused devices: <none>

Make file system

Create the needed file system on the /dev/md0 device.
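For example, to format the array with ext4 (a sketch; use whatever file system you prefer, and the label is optional):

# mkfs.ext4 -L RAID-ONE /dev/md0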

Copy the data on the array

Warning: It is recommended to copy the data from another system, such as a live image, to minimize the risk of the data changing in the middle of the copy. Alternatively, switch to single-user mode with systemctl isolate rescue.target.

Mount the array:

# mkdir /mnt/new-raid
# mount /dev/md0 /mnt/new-raid

Now copy the data from /dev/sda1 to /mnt/new-raid, for example using rsync.
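A sketch of such a copy, taken from an older revision of this guide (the -x option restricts rsync to a single file system, so repeat the command for any separately mounted partitions such as /boot or /home):

# rsync -avxHAXS --delete --progress / /mnt/new-raid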

Boot on the new disk

Update the boot loader

Create a new entry in the boot loader to load the system from the RAID array in the new disk.

GRUB legacy

The factual accuracy of this article or section is disputed.
Reason: The following configuration has not been verified since the article was reorganized in September 2015.

Use your preferred text editor to open /mnt/new-raid/boot/grub/menu.lst.

--- SNIP ---
default   0
color light-blue/black light-cyan/blue

## fallback
fallback 1

# (0) Arch Linux
title  Arch Linux - Original Disc
root   (hd0,0)
kernel /vmlinuz-linux root=/dev/sda1

# (1) Arch Linux
title  Arch Linux - New RAID
root   (hd1,0)
#kernel /vmlinuz-linux root=/dev/sda1 ro
kernel /vmlinuz-linux root=/dev/md0 md=0,/dev/sda1,/dev/sdb1
--- SNIP ---

Notice we added the fallback line and duplicated the Arch Linux entry with a different root directive on the kernel line.

Also update the "kopt" and "groot" sections, as shown below, if they are in your /mnt/new-raid/boot/grub/menu.lst file, because it will make applying distribution kernel updates easier:

- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro
+ # kopt=root=/dev/md0 ro md=0,/dev/sda1,/dev/sdb1

## default GRUB root device
## e.g. groot=(hd0,0)
- # groot=(hd0,0)
+ # groot=(hd0,1)

See GRUB Legacy for more information.

GRUB

Please refer to the GRUB article.
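As a rough sketch for mainline GRUB on a BIOS/MBR system (an assumption, not covered by the original guide; adapt to your setup), you would install the boot loader to the new disk and regenerate its configuration from within the chroot described below:

# grub-install --target=i386-pc /dev/sdb
# grub-mkconfig -o /boot/grub/grub.cfg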

Alter fstab

You need to tell fstab on the new disk where to find the new device. It is recommended to use Persistent block device naming.

/mnt/new-raid/etc/fstab
/dev/md0    /    ext4     defaults   0 1
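For example, to switch to UUID-based naming instead of the /dev/md0 kernel name (a sketch; replace the placeholder with the UUID that blkid reports for the file system on the array):

# blkid /dev/md0

UUID=<uuid-of-md0>    /    ext4     defaults   0 1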

Rebuild the initramfs

Chroot into the RAID system

# mount --bind /sys /mnt/new-raid/sys
# mount --bind /proc /mnt/new-raid/proc
# mount --bind /dev /mnt/new-raid/dev
# chroot /mnt/new-raid/

If the chroot command gives you an error like chroot: failed to run command `/bin/zsh': No such file or directory, then use chroot /mnt/new-raid/ /bin/bash instead.

Record mdadm's config

Edit /etc/mdadm.conf and change the MAILADDR line to be your email address, if you want emailed alerts of problems with the RAID 1.

Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 at boot. If you do not do this, you can get an ALERT! /dev/md0 does not exist error when booting:

# mdadm --detail --scan >> /etc/mdadm.conf
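If you configured MAILADDR, you can send a test notification to verify that alerts are delivered (carried over from an older revision of this guide); check that you receive an email showing the contents of /proc/mdstat:

# mdadm --monitor --test --oneshot /dev/md0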

Rebuild initcpio

Follow RAID#Add mdadm hook to mkinitcpio.conf.
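In short (a sketch; hook names vary between mkinitcpio versions — older setups use mdadm, newer ones mdadm_udev): add the hook to the HOOKS line in /etc/mkinitcpio.conf and regenerate the initramfs from within the chroot, for example:

HOOKS="base udev autodetect modconf block mdadm_udev filesystems keyboard fsck"

# mkinitcpio -p linux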

Install the boot loader on the RAID array

This article or section needs expansion.
Reason: Support more boot loaders, simplify.

GRUB Legacy

Start GRUB:

# grub --no-floppy

Then we find our two boot partitions: the current one, (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive; with the single-partition layout used in this guide it would be (hd1,0) instead, so adjust the following commands accordingly). Check that you get two results here:

grub> find /boot/grub/stage1
(hd0,0)
(hd1,1)

Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). If your first disk fails and you remove it, or if you change the BIOS boot order so that you can boot from the second disk, then the second disk becomes the first disk in the system. The MBR will then be correct, and you will be able to boot from this disk.

grub> device (hd0) /dev/sdb

Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:

grub> root (hd0,1)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded. succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
 Done
grub> quit

Verify success

Reboot the computer, making sure it boots from the new RAID disk (/dev/sdb) and not the original disk (/dev/sda). You may need to change the boot device priorities in your BIOS to do this.

Once the boot loader on the new disk loads, make sure you select the new system entry you created earlier.

Verify that you have booted from the RAID array by looking at the output of mount. Also check /proc/mdstat again to confirm which disk is in the array.

# mount
 /dev/md0 on / type ext4 (rw)

The factual accuracy of this article or section is disputed.
Reason: The output of the following command has not been verified since the article was reorganized in September 2015.

# cat /proc/mdstat
 Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
 md0 : active raid1 sdb1[1]
      40064 blocks [2/1] [_U]
 
 unused devices: <none>

If the system boots fine, and the output of the above commands is correct, then you are running off the degraded RAID array, as expected.

Add original disk to array

Partition original disk

Copy the partition table from /dev/sdb (newly implemented RAID disk) to /dev/sda (second disk we are adding to the array) so that both disks have exactly the same layout:

# sfdisk -d /dev/sdb | sfdisk /dev/sda

Alternative method: this will output the /dev/sdb partition layout to a file, then it is used as input for partitioning /dev/sda.

# sfdisk -d /dev/sdb > raidinfo-partitions.sdb
# sfdisk /dev/sda < raidinfo-partitions.sdb

Verify that the partitioning is identical:

# fdisk -l

Note: If you get an error like the following when attempting to add the partition to the array:
 mdadm: /dev/sda1 not large enough to join array
you might have seen an earlier warning, while partitioning this disk, that the kernel still sees the old disk size. A reboot ought to fix this; then try adding the partition to the array again.

Add disk partition to array

# mdadm /dev/md0 -a /dev/sda1
 mdadm: hot added /dev/sda1

Verify that the RAID array is being rebuilt:

# cat /proc/mdstat
 Personalities : [raid1] 
md0 : active raid1 sda1[2] sdb1[1]
      2930034432 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.2% (5973824/2930034432) finish=332.5min speed=146528K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

unused devices: <none>
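Syncing can take a while. If the machine is not needed for other tasks, the resync speed limit can be raised (carried over from an older revision of this guide; the values shown are examples):

# cat /proc/sys/dev/raid/speed_limit_min
1000
# cat /proc/sys/dev/raid/speed_limit_max
200000
# echo 400000 > /proc/sys/dev/raid/speed_limit_min
# echo 400000 > /proc/sys/dev/raid/speed_limit_max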

See also

* Convert running system to RAID 5: http://askubuntu.com/questions/252795/convert-running-system-to-raid-5 — an example using RAID 5